Abstract:
An input apparatus (2000) includes a motion detection unit (2020) and an input recognition unit (2040). The motion detection unit (2020) detects motion of an object by using a captured image including the object. Here, the detected motion is motion of the object during a period defined on the basis of a detection result from a sensor attached to the body of a user of the input apparatus (2000). The input recognition unit (2040) recognizes input to an information processing apparatus based on the detected motion of the object.
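Purely as an illustrative sketch (not part of the claimed apparatus; all names, the 2-D position model, and the gesture mapping are assumptions), the sensor-gated motion detection described above could be modeled as: object observations are filtered to the period the body-worn sensor defines, and only the displacement inside that window is recognized as input.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    timestamp: float  # seconds
    position: tuple   # (x, y) of the object in the captured image

def motion_in_period(observations, period_start, period_end):
    """Return the object's displacement within the sensor-defined period."""
    window = [o for o in observations
              if period_start <= o.timestamp <= period_end]
    if len(window) < 2:
        return None  # not enough samples inside the period to define motion
    (x0, y0), (x1, y1) = window[0].position, window[-1].position
    return (x1 - x0, y1 - y0)

def recognize_input(motion):
    """Map a displacement vector to a coarse input gesture (illustrative)."""
    if motion is None:
        return "none"
    dx, dy = motion
    if abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"
```

Motion outside the sensor-defined period is ignored entirely, which is the distinguishing behavior the abstract describes.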
Abstract:
An information processing system (3000) includes a marker (3020). The marker (3020) is any part of the body of a user of the information processing system (3000), or is any object attached to the user of the information processing system (3000). An information processing apparatus (2000) includes an operation region extraction unit (2020) and a recognition unit (2040). The operation region extraction unit (2020) extracts an operation region from a captured image on the basis of a position of the marker (3020). The recognition unit (2040) calculates a position or motion of an operation body within the operation region of the captured image, and recognizes an input operation on the basis of the calculated position or motion of the operation body.
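As an illustrative sketch only (the region geometry and offsets below are assumptions, not taken from the abstract), marker-anchored operation-region extraction can be expressed as computing a rectangle relative to the detected marker position, then locating the operation body inside that rectangle:

```python
def extract_operation_region(image_w, image_h, marker_pos,
                             region_w=200, region_h=150):
    """Return (left, top, right, bottom) of a region anchored at the marker.

    Here the region is placed directly above the marker position; the
    size and placement are assumptions for illustration.
    """
    mx, my = marker_pos
    left = max(0, mx - region_w // 2)
    top = max(0, my - region_h)
    right = min(image_w, left + region_w)
    bottom = min(image_h, my)
    return (left, top, right, bottom)

def position_in_region(point, region):
    """Express an operation-body position relative to the extracted region."""
    left, top, right, bottom = region
    x, y = point
    if not (left <= x <= right and top <= y <= bottom):
        return None  # operation body is outside the operation region
    return (x - left, y - top)
```

Restricting recognition to the extracted region is what lets the system ignore movement elsewhere in the captured image.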
Abstract:
An information terminal is provided with a display device, a position detection unit that detects the position of a portion receiving a tactile force sense, an information processing unit that changes content according to the detected position and calculates the tactile force sense on the basis of the changed content, and a tactile force sense providing device. The tactile force sense providing device includes a fitting member including an operation input unit, a conveyance member that conveys a force via the fitting member, a drive unit that applies the force to the conveyance member, and a control unit that provides a tactile force sense by increasing or reducing an initial force, and outputs a signal to the information processing unit according to input from the operation input unit by a user. The information processing unit changes content according to the signal from the control unit.
Abstract:
To perform overlay display of information related to a food item in an appropriate position, an information processing apparatus (1) includes: a detection section (101) that detects a predetermined detection target in an image that is obtained by capturing at least part of a field of view of a user and shows a food item; an overlay area setting section (102) that sets, in the image, an overlay area for information related to the food item by using the detection target as a reference; and a display control section (103) that performs overlay display of the information on the overlay area.
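As a hedged illustration (the placement policy, margins, and fallback rule are assumptions), setting an overlay area "by using the detection target as a reference" can be sketched as anchoring the overlay rectangle to the detected target's bounding box, with a fallback when the preferred position would leave the image:

```python
def set_overlay_area(target_box, image_size, margin=10, overlay_h=60):
    """Place the overlay directly above the detected target; if that would
    leave the image, fall back to below the target (assumed policy)."""
    left, top, right, bottom = target_box
    img_w, img_h = image_size
    if top - margin - overlay_h >= 0:
        # preferred: overlay sits above the detection target
        return (left, top - margin - overlay_h, right, top - margin)
    # fallback: overlay sits below the detection target, clamped to the image
    return (left, min(img_h - overlay_h, bottom + margin),
            right, min(img_h, bottom + margin + overlay_h))
```

The key point from the abstract is only that the overlay position is derived from the detection target, so the related information lands near the food item rather than at a fixed screen location.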
Abstract:
An augmented reality display apparatus includes: a display part; an image analysis part that detects a shooting target object feature amount, which digitizes a feature point of a shooting target object included in image data; and a control part. The control part transmits post download instruction information, including shooting position information relating to the shooting position of the image data and the shooting target object feature amount, to a server apparatus, and causes the display part to overlay post data at a position relative to the shooting target object feature amount in the image data displayed on the display part, based on information relating to post data associated with the shooting position information and the shooting target object feature amount transmitted from the server apparatus. The position relative to the shooting target object feature amount is a position based on post position information included in the information relating to the post data (FIG. 13).
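Purely for illustration (the record layout, distance threshold, and 2-D position model are assumptions), the server-side selection of posts by shooting position and feature, and the client-side relative placement, might look like:

```python
import math

def select_posts(posts, shooting_position, feature_id, max_distance=50.0):
    """Server-side sketch: return posts tied to the same detected feature and
    posted near the current shooting position (threshold is assumed)."""
    sx, sy = shooting_position
    result = []
    for post in posts:
        px, py = post["shooting_position"]
        if (post["feature_id"] == feature_id
                and math.hypot(px - sx, py - sy) <= max_distance):
            result.append(post)
    return result

def overlay_screen_position(feature_pos, post_offset):
    """Client-side sketch: the post is drawn at the detected feature's
    position plus the offset stored in the post position information."""
    fx, fy = feature_pos
    ox, oy = post_offset
    return (fx + ox, fy + oy)
```

Because the offset is stored relative to the feature amount rather than to screen coordinates, the post stays attached to the shooting target as the view moves.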
Abstract:
One aim of the present invention is to provide an information processing device whereby a user can be made aware of the audio transmission range when users communicate with each other in a virtual space. This information processing device includes: a detection unit that detects audio generated by a user operating an avatar inside the virtual space; an audio control unit that outputs the audio to the user of an avatar that fulfills prescribed conditions in relation to a speaking avatar, the speaking avatar being the avatar operated by the user that produced the audio; and a display control unit that changes the display mode of a listener avatar, a listener avatar being an avatar that fulfills the prescribed conditions.
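As an illustrative sketch only, one could model the "prescribed conditions" as a distance threshold around the speaking avatar (an assumption; the condition could equally be group membership or another relationship), with listener avatars rendered in a changed display mode:

```python
import math

def listeners_in_range(speaker, avatars, audible_range=10.0):
    """Return ids of avatars that fulfill the (assumed) distance condition."""
    sx, sy = speaker["position"]
    out = []
    for a in avatars:
        if a["id"] == speaker["id"]:
            continue
        ax, ay = a["position"]
        if math.hypot(ax - sx, ay - sy) <= audible_range:
            out.append(a["id"])
    return out

def display_modes(speaker, avatars, audible_range=10.0):
    """Change the display mode of listener avatars so the speaker can see
    who receives the audio (here: a 'highlighted' flag)."""
    listeners = set(listeners_in_range(speaker, avatars, audible_range))
    return {a["id"]: ("highlighted" if a["id"] in listeners else "normal")
            for a in avatars if a["id"] != speaker["id"]}
```

The display change is what makes the otherwise invisible audio transmission range perceptible to the speaking user.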
Abstract:
An analysis apparatus comprises at least one memory storing instructions, and at least one processor configured to execute the instructions to: acquire emotion data from an emotion data generation apparatus that generates the emotion data from face image data of a meeting participant in an online meeting; generate analysis data for the meeting on the basis of the emotion data; acquire meeting data including attribute data of the meeting; store message data in which a pattern of a message to be presented to a user is associated with the meeting data; select the message on the basis of the analysis data and the message data; and store an analysis result including the selected message in a storage unit in an outputtable manner.
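As a hedged sketch (the score scale, rule format, and matching policy are assumptions, not from the abstract), the pipeline of reducing emotion data to analysis data and then selecting a message pattern matched to the meeting attributes could look like:

```python
def analyze(emotion_samples):
    """Reduce per-frame emotion scores (assumed 0-100) to a meeting-level
    average; this stands in for the 'analysis data' of the abstract."""
    return sum(emotion_samples) / len(emotion_samples)

def select_message(analysis_score, meeting_data, message_rules):
    """Pick the first stored message pattern whose meeting attributes and
    score range match; the rule format is an illustrative assumption."""
    for rule in message_rules:
        if (rule["meeting_type"] == meeting_data["type"]
                and rule["min_score"] <= analysis_score < rule["max_score"]):
            return rule["message"]
    return None  # no stored pattern applies to this meeting
```

Associating message patterns with meeting attribute data is what lets the same analysis score yield different advice for, say, a sales call versus a daily standup.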
Abstract:
Provided is a server device with which a participant in a conference can easily acquire information pertaining to another participant. The server device is provided with an acquisition unit and an information provision unit. The acquisition unit acquires a profile of each of a plurality of users using a conference assistance system. The information provision unit provides, to a first participant from among the plurality of users, a profile pertaining to a second participant who is participating in the same conference as the first participant.
Abstract:
An information processing apparatus (2000) includes a first determination unit (2020), a second determination unit (2040), and a display control unit (2060). The first determination unit (2020) detects a first marker (10) from a captured image (30), and determines content corresponding to the detected first marker (10). The second determination unit (2040) detects a second marker (20) from the captured image (30), and determines a display position of a content image (40) in a display screen on the basis of the detected second marker (20). The display control unit (2060) displays the content image (40) at the determined display position in the display screen.
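As an illustrative sketch (the marker-to-content mapping and the display offset are assumptions), the two-marker scheme separates *what* to show, chosen from the first marker, from *where* to show it, derived from the second marker's detected position:

```python
# Assumed mapping from first-marker identity to content (illustrative only).
CONTENT_BY_MARKER = {
    "marker_A": "user_manual.png",
    "marker_B": "wiring_diagram.png",
}

def determine_display(first_marker_id, second_marker_pos, offset=(20, 0)):
    """First marker selects the content image; the second marker's detected
    position (plus an assumed offset) fixes the display position."""
    content = CONTENT_BY_MARKER.get(first_marker_id)
    if content is None:
        return None  # first marker not recognized: nothing to display
    x, y = second_marker_pos
    dx, dy = offset
    return {"content": content, "position": (x + dx, y + dy)}
```

Decoupling content selection from placement means the same content can be repositioned simply by moving the second marker.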
Abstract:
Provided is a technology for recording effective data as a result of inspection of a to-be-inspected object. This inspection assistance device is provided with: a reception unit that receives information about a to-be-inspected object; an acquisition unit that acquires image data captured by an imaging device; and a recording control unit that, when the imaging time of the image data falls within a prescribed time range referenced to the time at which the information was received, and the to-be-inspected object indicated by the received information coincides with the to-be-inspected object recognized from the image data, records the information pertaining to the result of inspection of the to-be-inspected object and the information about the to-be-inspected object in association with each other.
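The recording condition above is a conjunction of a time-window check and an object-identity check. As a minimal sketch (the window width, record layout, and field names are assumptions):

```python
def should_record(info_time, info_object_id, image_time,
                  recognized_object_id, window=5.0):
    """Record only when the image was captured within +/-window seconds of
    the information being received AND both refer to the same object."""
    return (abs(image_time - info_time) <= window
            and info_object_id == recognized_object_id)

def record_inspection(records, info, image):
    """Associate the inspection result with the inspected-object info
    when the recording condition holds."""
    if should_record(info["received_at"], info["object_id"],
                     image["captured_at"], image["recognized_object_id"]):
        records.append({"object_id": info["object_id"],
                        "result": info["result"],
                        "image": image["data"]})
        return True
    return False
```

Requiring both conditions is what filters out stale or mismatched images, so only effective inspection data is recorded.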