Abstract:
In one embodiment, a video conference endpoint may detect one or more participants within a field of view of a camera of the video conference endpoint. The video conference endpoint may determine one or more alternative framings of an output of the camera based on the detected one or more participants. The video conference endpoint may send the output of the camera to one or more far-end video conference endpoints participating in a video conference with the video conference endpoint. The video conference endpoint may also send data descriptive of the one or more alternative framings of the output to the far-end video conference endpoints. The far-end video conference endpoints may utilize the data to display one of the one or more alternative framings.
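As a rough illustration (not part of the abstract), the framing metadata could be computed and serialized along the following lines; the Box type, the JSON field names, and the margin value are assumptions made for this sketch:

from dataclasses import dataclass
import json

@dataclass
class Box:          # hypothetical participant bounding box
    x: int
    y: int
    w: int
    h: int

def union(boxes):
    """Smallest rectangle enclosing all participant boxes."""
    x0 = min(b.x for b in boxes)
    y0 = min(b.y for b in boxes)
    x1 = max(b.x + b.w for b in boxes)
    y1 = max(b.y + b.h for b in boxes)
    return Box(x0, y0, x1 - x0, y1 - y0)

def alternative_framings(boxes, frame_w, frame_h, margin=0.1):
    """Full camera output, a padded group crop, and a close-up per participant."""
    framings = [{"id": "full", "crop": [0, 0, frame_w, frame_h]}]
    g = union(boxes)
    px, py = int(g.w * margin), int(g.h * margin)
    framings.append({"id": "group",
                     "crop": [max(0, g.x - px), max(0, g.y - py),
                              min(frame_w, g.w + 2 * px),
                              min(frame_h, g.h + 2 * py)]})
    for i, b in enumerate(boxes):
        framings.append({"id": f"closeup-{i}", "crop": [b.x, b.y, b.w, b.h]})
    return framings

# The endpoint streams the unmodified camera output and sends this JSON
# alongside it; each far end crops to whichever framing it chooses.
meta = alternative_framings([Box(100, 200, 80, 120), Box(400, 180, 90, 130)], 1920, 1080)
print(json.dumps(meta))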
Abstract:
A system that automatically calibrates multiple speaker tracking systems with respect to one another based on detection of an active speaker at a collaboration endpoint is presented herein. The system collects a first data point set of an active speaker at the collaboration endpoint using at least a first camera and a first microphone array. The system then receives a plurality of second data point sets from one or more secondary speaker tracking systems located at the collaboration endpoint. Once enough data points have been collected, a reference coordinate system is determined using the first data point set and the one or more second data point sets. Finally, using the reference coordinate system, the system determines the locations of the one or more secondary speaker tracking systems with respect to the first speaker tracking system.
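For illustration only: once the two systems hold paired observations of the same speaker positions, one standard way to recover a secondary system's pose in the reference frame is a least-squares rigid alignment (the Kabsch algorithm). The abstract does not name a method, so the sketch below is an assumed approach, not the patented one:

import numpy as np

def rigid_align(primary_pts, secondary_pts):
    """Return rotation R and translation t mapping secondary -> primary,
    minimizing ||R @ s + t - p|| over the paired points."""
    P = np.asarray(primary_pts, float)    # N x 3, primary's observations
    S = np.asarray(secondary_pts, float)  # N x 3, same events, secondary frame
    cp, cs = P.mean(axis=0), S.mean(axis=0)
    H = (S - cs).T @ (P - cp)             # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ cs
    return R, t

# R @ s + t expresses a secondary-frame point s in the reference frame;
# t itself is the secondary system's origin expressed in that frame.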
Abstract:
In one embodiment, a method is provided to intelligently frame groups of participants in a meeting. This gives a more pleasing experience with fewer switches, better contextual understanding, and more natural framing, as would be seen in a video production made by a human director. Furthermore, in accordance with another embodiment, conversational framing techniques are provided. During speaker tracking, when two local participants are addressing each other, a method is provided to show a close-up framing that includes both participants. By evaluating the direction participants are looking and a speaker history, it is determined whether a local discussion is taking place, and an appropriate framing is selected to give far-end participants the most contextually rich experience.
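A minimal sketch of the conversational-framing decision, assuming speaker events and per-participant gaze targets are available upstream; the window length, data shapes, and mutual-gaze heuristic are illustrative assumptions:

from collections import deque

class ConversationFramer:
    """Keep a short speaker history and pick a framing from it."""
    def __init__(self, window=15.0):
        self.window = window      # seconds of history considered "recent"
        self.history = deque()    # (timestamp, participant_id)

    def note_speaker(self, pid, now):
        self.history.append((now, pid))
        while self.history and now - self.history[0][0] > self.window:
            self.history.popleft()

    def choose_framing(self, participants, gaze):
        """participants: iterable of ids; gaze: {pid: pid being looked at, or None}.
        Two recent speakers looking at each other -> close-up of the pair."""
        recent = {pid for _, pid in self.history}
        for a in recent:
            b = gaze.get(a)
            if b is not None and b in recent and gaze.get(b) == a:
                return ("conversation", (a, b))
        return ("group", tuple(participants))

# Example: "ann" and "bo" have both spoken recently and face each other,
# so a close-up framing both is chosen over the full group view.
f = ConversationFramer()
f.note_speaker("ann", now=0.0)
f.note_speaker("bo", now=4.0)
print(f.choose_framing(["ann", "bo", "cy"], {"ann": "bo", "bo": "ann", "cy": None}))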
Abstract:
A video conference endpoint detects faces at associated face positions in video frames capturing a scene. The endpoint frames the video frames to a view of the scene encompassing all of the detected faces. The endpoint detects that a previously detected face is no longer detected. In response, a timeout period is started and, independently of detecting faces, motion is detected across the view. It is determined whether any detected motion (i) coincides with the face position of the previously detected face that is no longer detected, and (ii) occurs before the timeout period expires. If conditions (i) and (ii) are not both met, the endpoint reframes the view.
Abstract:
In one embodiment, a method includes receiving, at a network device, video and activity data for a video conference, automatically processing the video at the network device based on the activity data, and transmitting edited video from the network device. Processing comprises identifying active locations in the video and editing the video to display each of the active locations before a start of activity at that location and to switch between the active locations. An apparatus and logic are also disclosed herein.
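For illustration, the editing step could reduce to building a cut list from activity records; the record format and the lead-in interval below are assumptions, not details from the abstract:

def build_edit_list(activity, lead_in=2.0):
    """activity: (location, start_time, end_time) records.
    Return (cut_time, location) pairs: switch to each active location
    `lead_in` seconds before its activity starts, in chronological order."""
    cuts = []
    for location, start, _end in sorted(activity, key=lambda r: r[1]):
        cut_at = max(0.0, start - lead_in)
        if not cuts or cuts[-1][1] != location:   # skip redundant switches
            cuts.append((cut_at, location))
    return cuts

# Example: podium activity at t=10s, whiteboard at t=42s.
edits = build_edit_list([("podium", 10.0, 40.0), ("whiteboard", 42.0, 80.0)])
print(edits)  # [(8.0, 'podium'), (40.0, 'whiteboard')]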
Abstract:
A video conference endpoint detects faces at associated face positions in video frames capturing a scene. The endpoint frames the video frames to a view of the scene encompassing all of the detected faces. The endpoint detects that a previously detected face is no longer detected. In response, a timeout period is started and, independently of detecting faces, motion is detected across the view. It is determined whether any detected motion (i) coincides with the face position of the previously detected face that is no longer detected, and (ii) occurs before the timeout period expires. If conditions (i) and (ii) are met, the endpoint restarts the timeout period and repeats the independent motion detection and the determination. Otherwise, the endpoint reframes the view to encompass the remaining detected faces.
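A minimal sketch of this timeout-and-motion logic, assuming per-frame face boxes and motion boxes are available; the names and the timeout value are illustrative:

def overlaps(a, b):
    """Axis-aligned boxes (x, y, w, h) intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

class Reframer:
    """Timeout/motion state machine; timestamps are supplied by the caller."""
    def __init__(self, timeout=5.0):    # assumed timeout length
        self.timeout = timeout
        self.known = {}                 # face_id -> box, faces currently framed
        self.lost = {}                  # face_id -> (last_box, deadline)

    def update(self, faces, motion_boxes, now):
        """faces: {face_id: (x, y, w, h)} detected this frame;
        motion_boxes: boxes where motion was detected this frame.
        Returns True when the view should be reframed."""
        for fid, box in self.known.items():
            if fid not in faces:                    # face just vanished:
                self.lost.setdefault(fid, (box, now + self.timeout))
        self.known = dict(faces)
        reframe = False
        for fid, (box, deadline) in list(self.lost.items()):
            if fid in faces:                        # face reappeared
                del self.lost[fid]
            elif any(overlaps(box, m) for m in motion_boxes):
                self.lost[fid] = (box, now + self.timeout)  # restart timeout
            elif now >= deadline:                   # timed out, no motion at
                del self.lost[fid]                  # the old face position:
                reframe = True                      # reframe remaining faces
        return reframe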
Abstract:
The present disclosure provides methods and systems related to automatic adjustment of screen brightness for optimized presentation to both physically present and remote audiences during a multimedia collaboration session. In one aspect, a method includes detecting the presence of a screen in the field of view of a camera in a meeting room; determining whether the exposure of the camera or the brightness of the screen is to be adjusted, to yield a determination; and controlling at least one of the exposure of the camera or the brightness of the screen based on the determination, such that both the meeting room and the screen are legible for the remote audience and the screen is legible for the audience present in the meeting room.
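As a hedged sketch, the control step might compare the measured luminance of the detected screen region against the rest of the frame and nudge exposure or brightness accordingly; the target ratio, step sizes, and luminance inputs are assumptions for illustration:

def adjust(screen_luma, room_luma, exposure, brightness,
           target_ratio=1.5, step=0.05):
    """screen_luma/room_luma: mean luminance (0..1) of the detected screen
    region and of the rest of the camera frame. Returns the updated
    (exposure, brightness) pair."""
    ratio = screen_luma / max(room_luma, 1e-6)
    if ratio > target_ratio:    # screen blown out in the camera view:
        brightness = max(0.0, brightness - step)   # dim the screen and
        exposure = max(0.0, exposure - step / 2)   # pull exposure down a bit
    elif ratio < 1.0:           # screen dimmer than the room, hence hard
        brightness = min(1.0, brightness + step)   # to read: brighten it
    return exposure, brightness

# Called periodically with luminance statistics from the camera pipeline.
print(adjust(0.9, 0.4, exposure=0.5, brightness=0.8))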