Abstract:
A display control apparatus comprising circuitry configured to obtain information on an input speed of a gesture input of at least one user from a sensor configured to detect a hand of the user, estimate an attribute of the user based on the input speed, and control a display apparatus to display a layout image for the gesture input based on the estimated attribute of the user.
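The speed-to-attribute-to-layout flow could be sketched as follows. The thresholds, attribute labels, and layout table are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical sketch: estimate a user attribute (here, an age group)
# from gesture input speed and pick a display layout accordingly.
# All thresholds and labels below are illustrative assumptions.

def estimate_attribute(input_speed_px_per_s: float) -> str:
    """Map a measured gesture speed to a coarse user attribute."""
    if input_speed_px_per_s < 150.0:
        return "senior"   # slower gestures -> assume an older user
    if input_speed_px_per_s < 400.0:
        return "adult"
    return "child"        # fast gestures -> assume a younger user

# Illustrative layout table keyed by the estimated attribute.
LAYOUTS = {
    "senior": {"button_size": "large", "spacing": "wide"},
    "adult":  {"button_size": "medium", "spacing": "normal"},
    "child":  {"button_size": "large", "spacing": "wide"},
}

def layout_for_speed(input_speed_px_per_s: float) -> dict:
    """Select the layout image parameters for a measured speed."""
    return LAYOUTS[estimate_attribute(input_speed_px_per_s)]
```

In practice the speed would come from the hand-detecting sensor over a short observation window; here it is a single number for clarity.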
Abstract:
According to the present disclosure, there is provided a control device including a detection unit configured to detect, as a manipulation region, at least a part of a substantial object present at a position at which a user is estimated to be able to perform a manipulation, a function setting unit configured to perform setting in a manner that a predetermined function matches the manipulation region detected by the detection unit, and a control unit configured to perform the function matched with the manipulation region based on a positional relation between the manipulation region and a manipulator.
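The detection-setting-control chain could be sketched as below. Regions are modeled as flat rectangles and every name is an assumption for illustration:

```python
# Illustrative sketch: bind a predetermined function to a detected
# manipulation region, then invoke the function when a manipulator
# position (e.g. a fingertip) falls inside that region.
from typing import Callable, Dict, Optional, Tuple

Region = Tuple[float, float, float, float]  # x, y, width, height

class ManipulationController:
    def __init__(self) -> None:
        self._bindings: Dict[Region, Callable[[], str]] = {}

    def set_function(self, region: Region, func: Callable[[], str]) -> None:
        """Match a predetermined function with a detected region."""
        self._bindings[region] = func

    def on_manipulator(self, x: float, y: float) -> Optional[str]:
        """Run the function whose region contains the manipulator."""
        for (rx, ry, rw, rh), func in self._bindings.items():
            if rx <= x <= rx + rw and ry <= y <= ry + rh:
                return func()
        return None
```

A real device would detect the region on a physical object (e.g. a tabletop edge) from sensor data; the rectangle stands in for that detection result.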
Abstract:
There is provided a data generation device including a content specifying unit configured to specify content, a period estimation unit configured to estimate a period in which the content specified by the content specifying unit attracts interest, and a metadata generation unit configured to associate the period estimated by the period estimation unit with the content as metadata.
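A minimal sketch of this flow follows. Estimating the interest period from a series of viewer counts with a fixed threshold is an assumption; the abstract does not specify the estimation signal:

```python
# Sketch: estimate the period in which a content item attracts
# interest (here, from per-interval viewer counts, an assumption)
# and attach that period to the content as metadata.
from typing import List, Optional, Tuple

def estimate_interest_period(viewer_counts: List[int],
                             threshold: int) -> Optional[Tuple[int, int]]:
    """Return (start, end) indices of the first run above threshold."""
    start = None
    for i, c in enumerate(viewer_counts):
        if c >= threshold and start is None:
            start = i
        elif c < threshold and start is not None:
            return (start, i - 1)
    return (start, len(viewer_counts) - 1) if start is not None else None

def generate_metadata(content_id: str, viewer_counts: List[int],
                      threshold: int = 100) -> dict:
    """Associate the estimated period with the content as metadata."""
    return {"content": content_id,
            "interest_period": estimate_interest_period(viewer_counts,
                                                        threshold)}
```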
Abstract:
There is provided an input apparatus including an estimation unit configured to automatically estimate an attribute of a user, a setting unit configured to set an input mode based on the attribute of the user estimated by the estimation unit, and a control unit configured to control input based on the set input mode.
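The estimate-set-control chain could look like the sketch below. Using hand width as the estimated attribute and the two-entry mode table are illustrative assumptions only:

```python
# Hedged sketch: automatically estimate a user attribute, set an
# input mode from it, and control input under that mode. The
# attribute source (hand width) and the mode table are assumptions.

def estimate_user_attribute(hand_width_mm: float) -> str:
    """Coarse attribute estimate from a sensor-derived measurement."""
    return "child" if hand_width_mm < 70.0 else "adult"

# Illustrative mapping from attribute to input mode.
INPUT_MODES = {"child": "large-key touch", "adult": "standard touch"}

class InputController:
    def __init__(self, hand_width_mm: float) -> None:
        # Setting unit: pick the mode from the estimated attribute.
        self.mode = INPUT_MODES[estimate_user_attribute(hand_width_mm)]

    def handle(self, event: str) -> str:
        """Control unit: process an input event under the set mode."""
        return f"{self.mode}:{event}"
```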
Abstract:
There is provided an information processing device including a recognition unit that recognizes a face area from a captured image, a storage controller that causes a storage unit to store face recognition information indicating a face area recognized by the recognition unit, and a display controller. For a current captured image captured by an image capture unit according to an external instruction, the display controller superimposes onto a display unit a display indicating, as a face area candidate, an area that corresponds to a first face area indicated by face recognition information of another captured image stored in the storage unit but is not included in a second face area recognized by the recognition unit from the current captured image, and renders that display distinguishably from a display indicating the second face area.
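The core of the candidate logic is a set difference: stored face areas from an earlier image that no currently recognized face covers become candidates. A sketch, with areas as plain rectangles and the overlap threshold an assumption:

```python
# Sketch of the candidate selection only: a stored face area that is
# not matched by any face recognized in the current image becomes a
# "face area candidate" (to be displayed distinguishably). The IoU
# matching rule and its threshold are illustrative assumptions.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def face_candidates(stored_areas, current_areas, thr=0.3):
    """Stored areas not overlapping any currently recognized face."""
    return [a for a in stored_areas
            if all(iou(a, c) < thr for c in current_areas)]
```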
Abstract:
There is provided a signal processing system including a first detection section which detects, from outside a body cavity, first audio signals of a prescribed part inside the body cavity, a second detection section which detects, from inside the body cavity, second audio signals of the prescribed part inside the body cavity, and a generation section which generates third audio signals based on the first audio signals and the second audio signals.
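One plausible instance of the generation section, offered purely as an assumption since the abstract does not specify how the two signals are combined, is a per-sample weighted mix:

```python
# Assumed sketch: combine the first audio signals (detected from
# outside the body cavity) and the second audio signals (detected
# from inside it) sample-by-sample with a fixed weight to produce
# the third audio signals. The weighting rule is an assumption.
from typing import List

def generate_third_signal(first: List[float], second: List[float],
                          weight: float = 0.5) -> List[float]:
    """Weighted per-sample mix of two equal-length audio signals."""
    if len(first) != len(second):
        raise ValueError("signals must be the same length")
    return [weight * a + (1.0 - weight) * b
            for a, b in zip(first, second)]
```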
Abstract:
There is provided a mobile object including an input detection unit configured to detect an input from the outside, an acquisition unit configured to acquire environmental information detected by a sensor in a remote location in accordance with a content of detection performed by the input detection unit, and a control unit configured to control, in accordance with the environmental information acquired by the acquisition unit, an actuator other than a driving actuator that relates to movement of the mobile object.
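The detect-acquire-control chain could be sketched as follows. Every concrete choice here (the location-selecting input, temperature as the environmental information, a fan as the non-driving actuator) is an illustrative assumption:

```python
# Illustrative sketch: an outside input selects a remote location,
# environmental information is acquired from a sensor there, and a
# non-driving actuator (a cabin fan, assumed) is controlled from it.

def acquire_environment(remote_readings: dict, location: str) -> dict:
    """Stand-in for acquisition from a sensor in a remote location."""
    return remote_readings[location]

def control_non_driving_actuator(env: dict) -> str:
    """Map remote temperature to a fan setting (illustrative rule)."""
    return "fan:high" if env["temperature_c"] > 28 else "fan:low"

def on_input(remote_readings: dict, selected_location: str) -> str:
    """Full chain: input content -> acquisition -> actuator control."""
    env = acquire_environment(remote_readings, selected_location)
    return control_non_driving_actuator(env)
```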
Abstract:
To provide a mechanism for selectively taking an external sound from an appropriate sound source into an internal space of a moving object. An information processing apparatus including an acquisition unit configured to acquire audio signals from sound sources existing outside a moving object, a generation unit configured to generate, on the basis of the audio signals acquired by the acquisition unit, an audio signal from a target sound source among the sound sources, the target sound source being at a distance from the moving object that depends on a speed of the moving object, and an output control unit configured to output the audio signal generated by the generation unit toward an internal space of the moving object.
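The speed-dependent selection could be sketched as below. The linear mapping from speed to a target distance band, including its coefficients, is an illustrative assumption:

```python
# Sketch of the selection rule only: the faster the moving object,
# the farther away the sound sources treated as targets. The linear
# speed-to-distance-band mapping is an assumption.
from typing import List, Tuple

def target_distance_band(speed_m_s: float) -> Tuple[float, float]:
    """Distance band (min, max) in meters that scales with speed."""
    center = 5.0 + 2.0 * speed_m_s  # illustrative coefficients
    return (center * 0.5, center * 1.5)

def select_target_sources(sources: List[dict],
                          speed_m_s: float) -> List[dict]:
    """Keep sources whose distance falls in the speed-dependent band."""
    lo, hi = target_distance_band(speed_m_s)
    return [s for s in sources if lo <= s["distance_m"] <= hi]
```

Generation of the output signal from the selected sources (e.g. by beamforming toward them) is outside this sketch.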
Abstract:
Disclosed is a signal processing apparatus including a surrounding sound signal acquisition unit, an NC (Noise Canceling) signal generation part, a cooped-up feeling elimination signal generation part, and an addition part. The surrounding sound signal acquisition unit is configured to collect a surrounding sound to generate a surrounding sound signal. The NC signal generation part is configured to generate a noise canceling signal from the surrounding sound signal. The cooped-up feeling elimination signal generation part is configured to generate a cooped-up feeling elimination signal from the surrounding sound signal. The addition part is configured to add together the generated noise canceling signal and the generated cooped-up feeling elimination signal at a prescribed ratio.
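A hedged sketch of the addition part follows. Modeling the NC signal as a phase-inverted copy of the surrounding sound, and the cooped-up feeling elimination signal as an attenuated ambient pass-through, are assumptions; the abstract does not define either generation part's internals:

```python
# Assumed sketch: the NC signal is a phase-inverted surrounding
# sound (a common NC approximation), the cooped-up feeling
# elimination signal is an attenuated ambient pass-through (an
# assumption), and the addition part mixes them at a prescribed
# ratio.
from typing import List

def nc_signal(surround: List[float]) -> List[float]:
    return [-x for x in surround]           # phase inversion

def elimination_signal(surround: List[float],
                       gain: float = 0.3) -> List[float]:
    return [gain * x for x in surround]     # attenuated ambient

def add_at_ratio(surround: List[float],
                 ratio: float = 0.7) -> List[float]:
    """ratio weights the NC signal; (1 - ratio) weights the other."""
    nc = nc_signal(surround)
    el = elimination_signal(surround)
    return [ratio * a + (1.0 - ratio) * b for a, b in zip(nc, el)]
```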
Abstract:
Provided is a wearable device that is worn on a human ear and mainly outputs or inputs sound. The outer casing 110 is formed into an elongated shape so as to fit the cymba conchae 306 formed between the antihelix inferior crus 302b and the crus helicis 309. The sound generating unit 120 includes a vibration element 121 and a weight 122. The vibration element 121 has an elongated shape along the longitudinal direction of the outer casing 110; one end is fixed to the inner wall of the outer casing 110, and the other end, to which the weight 122 is attached, is a free (open) end. When an alternating electric field is applied to the vibration element 121, the element generates vibration corresponding to audio.