Abstract:
Example embodiments relate to an apparatus, method and computer program associated with modification of captured images. The method may comprise capturing images using an under-display camera of an apparatus in which at least some display pixels which overlie a camera sensor are disabled. The method may also comprise detecting at least one predetermined condition and enabling at least some of the disabled display pixels to modify at least part of an image or images being captured, responsive to detecting the at least one predetermined condition.
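A minimal sketch of the described control flow, assuming a numpy frame and placeholder names (UnderDisplayCamera, predetermined_condition, sensor_region) that are not part of the disclosure: the display pixels overlying the camera sensor stay disabled until the condition is detected, after which at least some are enabled so that their emitted light modifies part of the captured image.

import numpy as np

class UnderDisplayCamera:
    def __init__(self, sensor_region):
        # sensor_region: (row_slice, col_slice) of display pixels overlying the sensor
        self.sensor_region = sensor_region
        self.overlying_pixels_enabled = False  # disabled by default during capture

    def predetermined_condition(self, frame):
        # Placeholder condition; the abstract leaves the condition unspecified.
        return frame.mean() > 128

    def capture(self, raw_frame):
        frame = raw_frame.copy()
        if self.predetermined_condition(frame):
            # Re-enable at least some overlying display pixels so their light
            # modifies the corresponding part of the captured image.
            self.overlying_pixels_enabled = True
            rows, cols = self.sensor_region
            frame[rows, cols] = np.clip(frame[rows, cols].astype(int) + 80, 0, 255)
        return frame

camera = UnderDisplayCamera(sensor_region=(slice(0, 50), slice(0, 50)))
modified = camera.capture(np.random.randint(0, 256, (480, 640), dtype=np.uint8))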
Abstract:
There is provided an apparatus comprising means for receiving spatial audio capture requirement(s) of one or more user devices; determining position and/or orientation information of the one or more user devices; generating one or more privacy masks at least partly based on the spatial audio capture requirement(s) and position and/or orientation information of the one or more user devices; and transmitting the generated privacy masks to the one or more user devices.
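A hedged sketch of the mask-generation step: the abstract does not define a mask format, so this models a privacy mask as a set of azimuth sectors (in degrees) that a device must not capture, derived from protected zones together with each device's position, orientation and required capture sector. The data model and names (DeviceState, generate_privacy_masks, protected_zones) are illustrative assumptions, not the disclosed API.

import math
from dataclasses import dataclass

@dataclass
class DeviceState:
    device_id: str
    position: tuple            # (x, y) in metres
    orientation_deg: float     # facing direction
    capture_sector_deg: float  # required spatial audio capture width

def generate_privacy_masks(devices, protected_zones):
    """Return, per device, the azimuth sectors that overlap protected zones."""
    masks = {}
    for dev in devices:
        sectors = []
        for zone_pos, zone_radius in protected_zones:
            dx, dy = zone_pos[0] - dev.position[0], zone_pos[1] - dev.position[1]
            bearing = math.degrees(math.atan2(dy, dx)) - dev.orientation_deg
            half_width = math.degrees(math.atan2(zone_radius, math.hypot(dx, dy)))
            # Only mask directions falling inside the device's required capture sector.
            if abs((bearing + 180) % 360 - 180) <= dev.capture_sector_deg / 2 + half_width:
                sectors.append((bearing - half_width, bearing + half_width))
        masks[dev.device_id] = sectors
    return masks  # in a full system these masks would be transmitted to each device

devices = [DeviceState("d1", (0.0, 0.0), 0.0, 120.0)]
masks = generate_privacy_masks(devices, protected_zones=[((3.0, 1.0), 0.5)])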
Abstract:
Examples of the disclosure are configured to reduce the effects of leaked images from mediated reality headsets. Examples of the disclosure comprise determining that an object is positioned relative to a mediated reality headset such that one or more portions of an image displayed by the mediated reality headset could be leaked in a field of view of the object. Examples of the disclosure also comprise identifying one or more portions of the image that are displayed by the mediated reality headset such that leakage of those one or more portions is expected to be in the field of view of the object. Examples of the disclosure also comprise causing modification of the display of the image by the mediated reality headset wherein the modification reduces light leakage for the one or more portions of the image that are identified as being displayed such that leakage of those one or more portions is expected to be in the field of view of the object.
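A minimal sketch under stated assumptions: leakage is modelled as visible only from the side of the headset facing the object, and the modification is a simple dimming of the affected portion of the display. The geometry mapping and all names (reduce_leakage, observer_azimuth_deg, leak_half_angle_deg) are illustrative, not the disclosed implementation.

import numpy as np

def reduce_leakage(display_image, observer_azimuth_deg, leak_half_angle_deg=30, dim=0.2):
    """Dim the horizontal band of the image whose leakage the observer could see."""
    h, w = display_image.shape[:2]
    # Map the observer direction (-90..90 degrees around the headset) onto image columns.
    frac = (observer_azimuth_deg + 90) / 180.0
    centre = int(np.clip(frac, 0, 1) * w)
    half = int(w * leak_half_angle_deg / 180.0)
    out = display_image.astype(float)
    out[:, max(0, centre - half):min(w, centre + half)] *= dim  # reduce emitted light
    return out.astype(display_image.dtype)

frame = np.full((100, 200), 255, dtype=np.uint8)
protected = reduce_leakage(frame, observer_azimuth_deg=20)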
Abstract:
Examples of the disclosure relate to monitoring of facial characteristics. In examples of the disclosure there may be provided an apparatus that is configured to determine that a communications device is positioned close to an ear of a user of the communications device during a communication session, wherein the communications device comprises at least one display. The apparatus may also be configured to use sensors in or under the display of the communications device to monitor one or more facial characteristics of the user while the communications device is close to the user's ear and to identify an emotional context of the user based on the monitored one or more facial characteristics of the user.
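An illustrative sketch of the described flow only: the sensor stubs and the threshold-based "classifier" below are hypothetical stand-ins, since the abstract specifies the sequence (device at the ear, monitor facial characteristics via in/under-display sensors, identify an emotional context) but not the sensing hardware or the model.

import random

class ProximitySensorStub:
    def is_near_ear(self):
        return True               # stand-in for the device-at-ear determination

class DisplaySensorStub:
    def read(self):
        return random.random()    # stand-in facial-contact/pressure reading

def identify_emotional_context(proximity_sensor, display_sensors):
    if not proximity_sensor.is_near_ear():
        return None               # only monitor while the device is at the ear
    samples = [s.read() for s in display_sensors]
    mean_reading = sum(samples) / len(samples)
    # Toy threshold stands in for whatever classifier maps the monitored facial
    # characteristics to an emotional context.
    return "tense" if mean_reading > 0.7 else "relaxed"

context = identify_emotional_context(ProximitySensorStub(),
                                      [DisplaySensorStub() for _ in range(4)])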
Abstract:
Examples of the disclosure relate to sharing a device between two or more authenticated users. In examples of the disclosure an apparatus is configured to enable a first user to access one or more applications of a device. The apparatus can then enable authenticating a second user and enable the second user to access one or more functions of at least one application. The apparatus can also detect one or more peripheral devices associated with the second user and configure the apparatus to provide outputs to and/or receive inputs from the one or more peripheral devices associated with the second user. The outputs and/or inputs relate to the one or more functions to which access has been enabled for the second user.
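A hedged sketch of the access and routing logic the abstract describes: a second authenticated user is granted a subset of functions, their peripherals are registered, and outputs for those functions are routed to those peripherals. The data model (SharedDeviceSession, granted function names, peripheral identifiers) is an assumption for illustration only.

class SharedDeviceSession:
    def __init__(self, primary_user):
        self.users = {primary_user: {"functions": {"*"}, "peripherals": []}}

    def add_user(self, user, granted_functions, peripherals):
        # Authenticate the second user, grant access to a subset of functions,
        # and register their associated peripheral devices.
        self.users[user] = {"functions": set(granted_functions),
                            "peripherals": list(peripherals)}

    def route_output(self, user, function, payload):
        entry = self.users.get(user)
        if entry and (function in entry["functions"] or "*" in entry["functions"]):
            # Outputs relating to functions granted to this user go to their peripherals.
            return [(p, payload) for p in entry["peripherals"]] or [("device_display", payload)]
        raise PermissionError(f"{user} has no access to {function}")

session = SharedDeviceSession("alice")
session.add_user("bob", {"media_playback"}, ["bob_headphones"])
print(session.route_output("bob", "media_playback", "track_01"))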
Abstract:
An apparatus, for enabling adaptive playback, comprising means configured to: obtain, for a first point of view, a first audio signal for at least a first channel and a second channel; obtain, for a second point of view, a second audio signal for at least the first channel and the second channel; determine a single-channel difference audio signal, for the second point of view, based on at least a difference between the first audio signal and the second audio signal; and enable estimation of both the first channel and the second channel of the second audio signal for the second point of view in dependence on the single-channel difference audio signal and the first audio signal.
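A sketch of one plausible instantiation, assuming the single-channel difference signal is a mono downmix of the per-channel differences and that estimation simply adds it back to each channel of the first-point-of-view signal; the abstract leaves both the downmix and the estimation method open.

import numpy as np

def encode_difference(first_pov, second_pov):
    # first_pov, second_pov: arrays of shape (channels, samples)
    return (second_pov - first_pov).mean(axis=0)       # single-channel difference signal

def estimate_second_pov(first_pov, diff_mono):
    # Estimate both channels of the second point of view from the first-POV
    # signal and the single-channel difference signal.
    return first_pov + diff_mono[np.newaxis, :]

first = np.random.randn(2, 48000)
second = first + 0.1 * np.random.randn(2, 48000)
diff = encode_difference(first, second)
estimate = estimate_second_pov(first, diff)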
Abstract:
An apparatus including means for: identifying audio-focus attenuation of a sound source; determining a correspondence between the sound source that is subject to audio-focus attenuation and a corresponding visual object; and modifying capturing of an image to at least partially exclude and/or modify the visual object corresponding to the sound source subject to audio-focus attenuation.
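An illustrative sketch in which the attenuated sound-source direction is mapped to an image region that is then flattened out of the capture; the direction-to-pixel mapping and the pixel-averaging "modification" are assumptions standing in for whatever exclusion or modification the disclosure envisages.

import numpy as np

def modify_capture(image, attenuated_azimuth_deg, fov_deg=90, box_frac=0.2):
    h, w = image.shape[:2]
    # Map the attenuated sound-source direction to a horizontal image position.
    x = int(((attenuated_azimuth_deg + fov_deg / 2) / fov_deg) * w)
    half = int(w * box_frac / 2)
    left, right = max(0, x - half), min(w, x + half)
    out = image.copy()
    region = out[:, left:right].astype(float)
    # Replace the region with its mean colour to exclude/modify the visual object.
    out[:, left:right] = region.mean(axis=(0, 1)).astype(image.dtype)
    return out

frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
modified = modify_capture(frame, attenuated_azimuth_deg=15)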
Abstract:
An apparatus, method and computer program are described comprising: capturing a plurality of visual images from an image capturing start time to an image capturing end time, for use in generating a panorama image; and capturing audio data relating to said visual images from an audio capturing start time to an audio capturing end time.
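A minimal sketch of the capture timeline, assuming the audio window may start before and end after the visual window so that audio context surrounds the panorama sweep; the lead and tail offsets are illustrative values, not values from the disclosure.

from dataclasses import dataclass

@dataclass
class PanoramaCapture:
    image_start: float
    image_end: float
    audio_lead: float = 1.0   # seconds of audio before the first image
    audio_tail: float = 1.0   # seconds of audio after the last image

    @property
    def audio_window(self):
        return (self.image_start - self.audio_lead, self.image_end + self.audio_tail)

capture = PanoramaCapture(image_start=10.0, image_end=14.5)
print(capture.audio_window)   # (9.0, 15.5)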
Abstract:
A method comprising: storing a continuous audio composition having plural tracks at least partially overlapping with one another in the temporal domain and having a specific alignment in the temporal domain; obtaining time-varying audio characteristics of an audio recording; identifying at least part of one of the plural tracks that corresponds to the audio recording; using the time-varying audio characteristics of the audio recording to align the audio recording with said at least part of the identified track; and substituting said at least part of the identified track with the audio recording with substantially the same alignment in the temporal domain as said at least part of the identified track.
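A sketch of one way to realise the alignment and substitution steps, using a frame-wise energy envelope and cross-correlation as the time-varying audio characteristics; the disclosure does not mandate this particular feature or matching method, and the signals below are synthetic placeholders.

import numpy as np

def energy_envelope(signal, frame=1024):
    n = len(signal) // frame
    return np.array([np.sqrt(np.mean(signal[i*frame:(i+1)*frame]**2)) for i in range(n)])

def align_offset(track, recording, frame=1024):
    # Cross-correlate envelopes to find where the recording best matches the track.
    corr = np.correlate(energy_envelope(track, frame), energy_envelope(recording, frame), mode="valid")
    return int(np.argmax(corr)) * frame   # offset in samples

def substitute(track, recording, frame=1024):
    offset = align_offset(track, recording, frame)
    out = track.copy()
    # Substitute the matching part of the track with the recording, keeping
    # substantially the same alignment in the temporal domain.
    out[offset:offset + len(recording)] = recording
    return out

sr = 16000
track = np.random.randn(10 * sr)
recording = track[3 * sr:5 * sr] + 0.05 * np.random.randn(2 * sr)
aligned = substitute(track, recording)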
Abstract:
A method comprises receiving information associated with a content item, designating a first bead apparatus (716) to be associated with a first content item segment of the content item, the first content item segment being identified by a first content item segment identifier, causing display of a visual representation of the first content item segment identifier by the first bead apparatus (726), designating a second bead apparatus (712) to be associated with a second content item segment of the content item, the second content item segment being identified by a second content item segment identifier, causing display of a visual representation of the second content item segment identifier by the second bead apparatus (722), receiving information indicative of a selection input of the second bead apparatus, and causing rendering of the second content item segment based, at least in part, on the selection input (734). The causation of rendering comprises sending information indicative of a content item segment to a separate apparatus or causing sending of information indicative of a content item segment by another apparatus to a separate apparatus (732) such as a bead apparatus, an electronic apparatus, a server, a computer, a laptop, a television, a phone and/or the like.
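An illustrative sketch of the designation, display and selection flow only: Bead and render_on are hypothetical stand-ins for the bead apparatuses and the separate rendering apparatus, and the printed messages stand in for display and transmission.

from dataclasses import dataclass

@dataclass
class Bead:
    bead_id: int
    segment_id: str = ""

    def display(self, segment_id):
        self.segment_id = segment_id          # show a visual representation of the id
        print(f"bead {self.bead_id} displays '{segment_id}'")

def render_on(target, segment_id):
    # Causing rendering: send information indicative of the segment to a separate
    # apparatus (e.g., a phone, television, or server).
    print(f"sending segment '{segment_id}' to {target} for rendering")

beads = [Bead(716), Bead(712)]
segments = ["chapter-1", "chapter-2"]
for bead, seg in zip(beads, segments):
    bead.display(seg)                          # designate each bead to a content item segment
selected = beads[1]                            # selection input of the second bead
render_on("television", selected.segment_id)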