Abstract:
The subject disclosure is directed towards a high resolution, high frame rate, robust stereo depth system. The system provides depth data in varying conditions based upon stereo matching of images, including actively illuminated IR images in some implementations. A clean IR or RGB image may be captured and used with any other captured images in some implementations. Clean IR images may be obtained by using a notch filter to filter out the active illumination pattern. IR stereo cameras, a projector, broad spectrum IR LEDs and one or more other cameras may be incorporated into a single device, which may also include image processing components to internally compute depth data in the device for subsequent output.
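The core of a stereo depth system like the one described is triangulation from matched pixels: once stereo matching yields a per-pixel disparity, depth follows from the rectified-stereo relation Z = f · B / d. Below is a minimal sketch of that last step, assuming a rectified camera pair; the focal length, baseline, and disparity values are illustrative, not taken from the disclosure.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, min_disp=1e-3):
    """Depth in meters from per-pixel disparity via Z = f * B / d."""
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(d)
    valid = d > min_disp                  # zero/near-zero disparity -> no match
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Toy 2x2 disparity map (pixels); 600 px focal length, 10 cm baseline.
disp = np.array([[8.0, 4.0], [2.0, 0.0]])
depth = disparity_to_depth(disp, focal_px=600.0, baseline_m=0.1)
```

Invalid matches (zero disparity) are mapped to a depth of zero here; a real pipeline would mark them with a sentinel or confidence mask instead.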
Abstract:
Various examples related to determining a location of an active speaker are provided. In one example, image data of a room from an image capture device is received and a three dimensional model is generated. First audio data from a first microphone array at the image capture device is received. Second audio data from a second microphone array laterally spaced from the image capture device is received. Using the three dimensional model, a location of the second microphone array with respect to the image capture device is determined. Using the audio data and the location and angular orientation of the second microphone array, an estimated location of the active speaker is determined. Using the estimated location, a setting for the image capture device is determined and outputted to highlight the active speaker.
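One common way to combine two microphone arrays as described is to estimate a direction of arrival (DOA) at each array and intersect the two bearings, using the known position and angular orientation of the second array from the three dimensional model. The sketch below shows that geometric step in 2-D; the function name and the example positions and angles are illustrative assumptions, not from the disclosure.

```python
import numpy as np

def intersect_bearings(p1, theta1, p2, theta2):
    """Intersect two 2-D bearing rays.

    p1, p2: array positions in the room frame; theta1, theta2: DOA angles
    (radians) already rotated into the room frame using each array's
    angular orientation. Solves p1 + t1*d1 = p2 + t2*d2 for t1, t2.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# First array at the origin hears the speaker at 45 degrees; second array
# 4 m to the right hears the speaker at 135 degrees.
loc = intersect_bearings([0.0, 0.0], np.pi / 4, [4.0, 0.0], 3 * np.pi / 4)
```

With those two bearings the rays cross at roughly (2, 2), i.e. in front of and between the arrays; that estimate could then drive a camera setting such as pan or digital zoom toward the active speaker.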
Abstract:
Architecture for a communication system providing a user experience that includes a conversation environment and a meeting environment in a single application. A navigation menu enables the user to select among multiple communications environments, including a conversations environment. Multiple conversation threads can be accessed in various conversation formats, including formats associated with instant messaging, group chat, a telephone call, voice, video, email, application sharing, or an online meeting. A meeting environment can be navigated for accessing one or more meetings. Other suitable communications environments, besides the conversation environment and meeting environment, can also be navigated from the same navigation menu. The communications system and application also include a selection pane for displaying a list of the conversation threads or meetings, depending on the environment selected by the user. A preview pane can also be included in the communications system.
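The relationship between the navigation menu and the selection pane can be sketched as a simple dispatch: the chosen environment determines whether the pane lists conversation threads or meetings. Everything here (the environment names, the function, the sample lists) is an illustrative assumption, not part of the described system.

```python
from enum import Enum

class Environment(Enum):
    CONVERSATIONS = "conversations"
    MEETINGS = "meetings"

def selection_pane_items(env, conversations, meetings):
    """Return the items the selection pane should list for the
    environment chosen from the navigation menu."""
    return conversations if env is Environment.CONVERSATIONS else meetings

items = selection_pane_items(
    Environment.MEETINGS,
    conversations=["group chat", "IM thread", "missed call"],
    meetings=["weekly sync"],
)
```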
Abstract:
A computing device includes a touch-sensitive user interface configured to present a unified collaborative session for two or more users, and an authentication module configured to simultaneously identify and authenticate multiple users physically co-located within a collaborative environment, allowing each of the multiple users to interact with the touch-sensitive user interface. A content module is configured to simultaneously provide one or more content portals within the unified collaborative session for each authenticated user. Each content portal is configured to enable an authenticated user to access, retrieve, and present user-owned content files within the unified collaborative session. In this way, multiple users may simultaneously access, retrieve, and present their own content files on a single computing device.
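The session/portal relationship described above can be sketched as a shared session object that maps each authenticated user to a portal scoped to that user's own content. The class and method names below are illustrative assumptions, not from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class ContentPortal:
    """Per-user portal: only the owner's files can be presented through it."""
    owner: str
    files: list = field(default_factory=list)

    def present(self, filename):
        if filename not in self.files:
            raise PermissionError(f"{self.owner} does not own {filename}")
        return f"{self.owner} presents {filename}"

@dataclass
class CollaborativeSession:
    """One unified session; each authenticated user gets their own portal."""
    portals: dict = field(default_factory=dict)

    def authenticate(self, user, files):
        self.portals[user] = ContentPortal(owner=user, files=list(files))

session = CollaborativeSession()
session.authenticate("alice", ["deck.pptx"])
session.authenticate("bob", ["notes.docx"])
out = session.portals["alice"].present("deck.pptx")
```

The point of the per-portal ownership check is that multiple co-located users share one device and one session, yet each can only surface content files they own.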
Abstract:
Various examples related to determining a location of an active participant are provided. In one example, image data of a room from an image capture device is received. First audio data from a first microphone array at the image capture device is received. Second audio data from a second microphone array spaced from the image capture device is received. Using a three dimensional model, a location of the second microphone array is determined. Using the first audio data, second audio data, location of the second microphone array, and an angular orientation of the second microphone array, an estimated location of the active participant is determined.