Abstract:
Disclosed herein are systems and methods for iterative adjustment of video-capture settings based on an identified persona. In an embodiment, a method includes receiving video frames being captured by a video camera of an ongoing scene. The method also includes identifying a persona in one or more of the received frames at least in part by identifying, in each such frame, a set of pixels that is representative of the persona in the frame and that does not include any pixels representative of a background of the frame. The method also includes selecting, based collectively on the brightness values of the pixels in the identified set of pixels of one or more frames, an adjustment command for one or more adjustable video-capture settings of the camera, as well as outputting the selected adjustment command to the camera for use in continuing to capture video data representative of the ongoing scene.
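The abstract does not specify the adjustment logic, but the brightness-driven selection step can be pictured with a short sketch. The sketch below assumes a luma plane and a boolean persona mask per frame, a mid-gray target brightness, a tolerance band, and hypothetical command names such as INCREASE_EXPOSURE; none of these details come from the disclosure itself.

```python
# Illustrative sketch only: pick an exposure-adjustment command from the mean
# brightness of persona pixels.  Target, tolerance, and command names are assumptions.
from typing import Optional

import numpy as np

TARGET_BRIGHTNESS = 128.0   # assumed mid-gray target for the persona region
DEADBAND = 10.0             # assumed tolerance before any command is issued

def select_adjustment(frame_luma: np.ndarray, persona_mask: np.ndarray) -> Optional[str]:
    """frame_luma: HxW luma values; persona_mask: HxW bool, True on persona pixels."""
    persona_pixels = frame_luma[persona_mask]
    if persona_pixels.size == 0:
        return None                      # no persona identified in this frame
    mean_brightness = float(persona_pixels.mean())
    if mean_brightness < TARGET_BRIGHTNESS - DEADBAND:
        return "INCREASE_EXPOSURE"       # hypothetical camera command
    if mean_brightness > TARGET_BRIGHTNESS + DEADBAND:
        return "DECREASE_EXPOSURE"       # hypothetical camera command
    return None                          # within tolerance; leave settings unchanged
```

Keying only on persona pixels is what lets the camera expose for the subject even when the background, which is excluded from the pixel set, is much brighter or darker.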
Abstract:
A color image and a depth image of a live video are received. Each of the color image and the depth image is processed to identify the foreground and the background of the live video. The background of the live video is removed in order to create a foreground video that comprises the foreground of the live video. A control input may be received to control the embedding of the foreground video into a second background from a background feed. The background feed may also comprise virtual objects such that the foreground video may interact with the virtual objects.
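As one way to picture the depth-assisted separation described above, the sketch below approximates the foreground/background split with a simple depth threshold and then composites the extracted foreground over a background-feed frame; the cutoff distance, array shapes, and helper names are assumptions rather than the disclosed algorithm.

```python
# Illustrative sketch: depth-threshold segmentation and compositing.
# The 1.5 m cutoff and the RGBA packing are assumptions.
import numpy as np

def remove_background(color: np.ndarray, depth: np.ndarray, max_depth_mm: float = 1500.0):
    """color: HxWx3 uint8; depth: HxW depth in millimetres.
    Returns (RGBA foreground frame, boolean foreground mask)."""
    mask = (depth > 0) & (depth < max_depth_mm)        # near, valid pixels -> foreground
    alpha = mask.astype(np.uint8) * 255
    foreground = np.dstack([color, alpha])             # foreground video frame with alpha
    return foreground, mask

def embed_into_background(foreground: np.ndarray, mask: np.ndarray,
                          background_feed: np.ndarray) -> np.ndarray:
    """Composite the extracted foreground over one frame of the second background."""
    out = background_feed.copy()
    out[mask] = foreground[..., :3][mask]
    return out
```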
Abstract:
Embodiments disclose systems and methods for transmitting user-extracted video and content more efficiently by recognizing that user-extracted video makes it possible to treat different parts of a single frame differently. An alpha mask of the image part of the user-extracted video is used when encoding the image part so that it retains higher quality upon transmission than the remainder of the user-extracted video.
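The mask-guided encoding can be sketched as follows. A real system would more likely drive per-region quantization inside the codec; the version below merely pre-smooths pixels outside the alpha mask before a conventional encode so that fewer bits are spent on them, and the blur size is an arbitrary assumption.

```python
# Illustrative stand-in for mask-guided, region-of-interest encoding: keep
# persona (image-part) pixels intact and pre-smooth everything else so a
# conventional encoder spends fewer bits outside the mask.
import numpy as np
from scipy.ndimage import uniform_filter

def prefilter_for_encoding(frame: np.ndarray, alpha_mask: np.ndarray,
                           blur_size: int = 9) -> np.ndarray:
    """frame: HxWx3 uint8; alpha_mask: HxW bool, True on image-part pixels."""
    smoothed = uniform_filter(frame.astype(np.float32), size=(blur_size, blur_size, 1))
    return np.where(alpha_mask[..., None], frame, smoothed.astype(frame.dtype))
```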
Abstract:
Disclosed herein are methods and systems for presenting personas according to a common cross-client configuration. An embodiment takes the form of a method that includes extracting a persona from video frames being received from a video camera. The method also includes transmitting an outbound stream of persona data that includes the extracted persona. The method also includes receiving at least one inbound stream of persona data, where the at least one inbound stream of persona data includes one or more other personas. The method also includes presenting a full persona set of the extracted persona and the one or more other personas on a user interface according to a common cross-client persona configuration. The method also includes presenting one or more shared-content channels on the user interface according to a common cross-client shared-content-channel configuration.
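One way to read "common cross-client persona configuration" is that every client derives the same layout from the same shared data. The sketch below assumes a shared list of layout slots and a deterministic ordering of persona identifiers; the field names and sort rule are illustrative, not the disclosed format.

```python
# Illustrative sketch: every client computes the same persona arrangement from
# the same shared configuration, so all participants see an identical layout.
from dataclasses import dataclass
from typing import Dict, Iterable, Tuple

@dataclass(frozen=True)
class CrossClientConfig:
    persona_slots: Tuple[Tuple[float, float], ...]              # shared slot positions
    content_channel_region: Tuple[float, float, float, float]   # shared region for content channels

def arrange_personas(persona_ids: Iterable[str],
                     config: CrossClientConfig) -> Dict[str, Tuple[float, float]]:
    """Deterministically assign persona ids to layout slots (same result on every client)."""
    ordered = sorted(persona_ids)                                # identical ordering everywhere
    return {pid: config.persona_slots[i % len(config.persona_slots)]
            for i, pid in enumerate(ordered)}
```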
Abstract:
A system and method are disclosed for extracting a user persona from a video and embedding that persona into a background feed that may have other content, such as text, graphics, or additional video content. The extracted video and background feed are combined to create a composite video that constitutes the display in a videoconference. Embodiments cause the user persona to be embedded at preset positions, or in preset formats, or both, depending on the configuration, position, or motion of the user's body.
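The preset placement that depends on the configuration, position, or motion of the user's body can be illustrated with a simple rule that inspects the extracted persona's mask; the preset names and decision thresholds below are assumptions.

```python
# Illustrative sketch: choose a preset placement for the extracted persona from
# where and how large the persona appears in the source frame.
import numpy as np

PRESETS = {"lower_left": (0.05, 0.55), "lower_right": (0.55, 0.55), "full_frame": (0.0, 0.0)}

def choose_preset(persona_mask: np.ndarray) -> str:
    """persona_mask: HxW bool, True on persona pixels."""
    ys, xs = np.nonzero(persona_mask)
    if xs.size == 0:
        return "lower_right"                               # assumed default when no persona found
    if (ys.max() - ys.min()) > 0.8 * persona_mask.shape[0]:
        return "full_frame"                                # large or standing persona
    return "lower_left" if xs.mean() < persona_mask.shape[1] / 2 else "lower_right"
```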
Abstract:
Disclosed herein are methods and systems for combining foreground video and background video using chromatic matching. In an embodiment, a system obtains foreground video data. The system obtains background video data. The system determines a color-distribution dimensionality of the background video data to be either high-dimensional chromatic or low-dimensional chromatic. The system selects a chromatic-adjustment technique from a set of chromatic-adjustment techniques based on the determined color-distribution dimensionality of the background video data. The system adjusts the foreground video data using the selected chromatic-adjustment technique. The system generates combined video data at least in part by combining the background video data with the adjusted foreground video data. The system outputs the combined video for display.
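The dimensionality decision in the above abstract might be pictured as an eigen-analysis of the background's color distribution: several significant color axes suggest a high-dimensional chromatic background, a single dominant axis a low-dimensional one. The threshold and the technique names returned below are assumptions, not the disclosed criteria.

```python
# Illustrative sketch of the color-distribution-dimensionality test and the
# technique selection it drives.  Threshold and technique names are assumptions.
import numpy as np

def classify_chroma_dimensionality(background_rgb: np.ndarray, threshold: float = 0.05) -> str:
    pixels = background_rgb.reshape(-1, 3).astype(np.float32) / 255.0
    eigvals = np.linalg.eigvalsh(np.cov(pixels, rowvar=False))   # spread along each color axis
    total = float(eigvals.sum())
    if total <= 0.0:
        return "low_dimensional"                                 # flat/constant background
    significant = int(np.sum(eigvals / total > threshold))
    return "high_dimensional" if significant >= 2 else "low_dimensional"

def select_chromatic_technique(background_rgb: np.ndarray) -> str:
    """Return the name of the adjustment technique to apply to the foreground."""
    if classify_chroma_dimensionality(background_rgb) == "high_dimensional":
        return "full-covariance color transfer"   # hypothetical technique name
    return "dominant-hue matching"                # hypothetical technique name
```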
Abstract:
Embodiments disclose extracting a user persona from a video of arbitrary duration and associating that persona with text for a chat session. Embodiments cause the persona to be extracted at the moment text is sent or received to convey the body language associated with the text.
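The timing-based association might look like the sketch below: when a chat message goes out, the most recently extracted persona frame is bundled with the text so the recipient sees the accompanying body language. The message structure and the extractor interface are assumptions.

```python
# Illustrative sketch: attach the persona frame captured at send time to the chat text.
import time
from dataclasses import dataclass, field

@dataclass
class ChatMessage:
    text: str
    persona_frame: bytes                            # encoded persona image captured at send time
    timestamp: float = field(default_factory=time.time)

def send_message(text: str, persona_source) -> ChatMessage:
    frame = persona_source.latest_persona_frame()   # hypothetical extractor interface
    return ChatMessage(text=text, persona_frame=frame)
```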