Abstract:
An electronic device, a method, and a computer program product provide freeze frame videos in place of a live video feed to a video communication session. A processor of the electronic device determines that a local participant is connected via the electronic device to an ongoing video communication session with second participants having corresponding second participant devices. The processor captures, via an image capturing device, local video encompassing the local participant. The processor determines a video segment of the local video to identify as a freeze frame video. The processor presents the freeze frame video to the video communication session in response to a trigger condition that pauses a presentation of a live video feed of the local participant to the video communication session. The processor loops the presentation of the freeze frame video until an expiration of a threshold maximum time established for presenting the freeze frame video.
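The looping-until-expiration behavior described above can be sketched as follows. This is a minimal illustration, not the claimed implementation; the function name, the frame representation, and the choice to express the time budget as a frame count are all assumptions.

```python
import itertools

def loop_freeze_frame(segment_frames, fps, max_seconds):
    """Repeat the frames of the identified freeze frame video segment
    until the threshold maximum presentation time expires.

    segment_frames: frames of the video segment identified as the freeze frame video
    fps: presentation rate in frames per second
    max_seconds: threshold maximum time established for the freeze frame video
    """
    max_frames = int(fps * max_seconds)  # expiration expressed as a frame budget
    # Cycle through the segment repeatedly, stopping when the budget is spent.
    return list(itertools.islice(itertools.cycle(segment_frames), max_frames))
```

For example, a three-frame segment presented at 2 fps with a 2-second threshold loops back to the first frame before expiring.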
Abstract:
A method provides a non-mirrored preview of text-based demonstration objects in a locally captured video stream. The method includes identifying a demonstration object within the video stream. The demonstration object contains content that is best previewed in a non-mirrored orientation. The method includes spatially segmenting a video image of the video stream into two or more segments, including a demonstration object preview (DOP) segment that encompasses a defined area containing the demonstration object and at least one second segment encompassing a remaining portion of the video image. The method includes presenting a preview of the video image on the display device with the remaining portion of the video image mirrored within the preview and at least the DOP segment presented without mirroring in a correct spatial location relative to the remaining portion of the video image. A person presenting the demonstration object receives a non-mirrored presentation preview of the demonstration object.
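The segmentation-and-selective-mirroring step can be sketched as below. This is an illustrative sketch, assuming a frame represented as a 2D list of pixels and a rectangular DOP segment; the function name and box convention are not from the source.

```python
def mirrored_preview_with_dop(frame, dop_box):
    """Mirror a video image for the self-view preview, but present the
    demonstration object preview (DOP) segment without mirroring.

    frame: 2D list of pixels, indexed as frame[y][x]
    dop_box: (x0, y0, x1, y1) half-open box bounding the demonstration object
    """
    width = len(frame[0])
    x0, y0, x1, y1 = dop_box
    # Mirror the whole image left-to-right, as a normal self-view preview does.
    preview = [list(reversed(row)) for row in frame]
    # The DOP segment's correct spatial location in the mirrored preview is the
    # horizontally reflected box; paste the original (non-mirrored) pixels there
    # so text on the demonstration object reads correctly.
    mx0, mx1 = width - x1, width - x0
    for y in range(y0, y1):
        preview[y][mx0:mx1] = frame[y][x0:x1]
    return preview
```

The remaining portion of the image stays mirrored, so the presenter's own movements still behave like a familiar mirror view.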
Abstract:
An electronic system, a method, and a computer program product generate a video recording or stream with a spherical background image that maintains orientation like a natural background for a video call as an image capturing device changes location or aim direction. The method includes connecting, by a communication subsystem of an electronic system, over a network to a video communication session with second electronic system(s). The method includes determining a stationary orientation of a spherical background image in a virtual space geospatially aligned with a first physical location of the electronic system. The method includes extracting a foreground image from an image stream from image capturing device(s). The method includes generating a composite image stream of the foreground image superimposed on the spherical background image. The method includes communicating the composite image stream to the second electronic system(s) that participate in the video communication session.
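The compositing step can be sketched in one dimension as below. This is a simplified illustration, assuming the spherical background is reduced to a 360-degree azimuth strip and the foreground mask is encoded as `None` pixels; the function name and parameters are assumptions, not from the source.

```python
def composite_frame(foreground, background_strip, yaw_deg, fov_deg):
    """Superimpose an extracted foreground over a slice of a 360-degree
    background strip whose orientation stays fixed in virtual space.

    foreground: list of pixels; None marks pixels where background shows through
    background_strip: pixels covering 360 degrees of azimuth
    yaw_deg: camera aim direction; the sampled slice moves with it, so the
             background appears stationary like a natural scene
    fov_deg: horizontal field of view of the image capturing device
    """
    n = len(background_strip)
    width = len(foreground)
    out = []
    for x in range(width):
        # Map the pixel column to an azimuth in the fixed virtual space.
        azimuth = yaw_deg - fov_deg / 2 + fov_deg * x / width
        bg_pixel = background_strip[int(azimuth / 360.0 * n) % n]
        out.append(foreground[x] if foreground[x] is not None else bg_pixel)
    return out
```

Because the azimuth mapping depends on the camera's yaw, turning the camera pans across the background exactly as a real backdrop would.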
Abstract:
An electronic system, a method, and a computer program product generate a video recording or stream with a spherical background image that maintains orientation like a natural background as an image capturing device changes location or aim direction. The method includes determining a stationary orientation of a spherical background image in a virtual space geospatially aligned with a first physical location of an image capturing device. The method includes extracting a foreground image from an image stream of the field of view captured by the image capturing device. The method includes generating a composite image stream of the foreground image superimposed on the spherical background image. In response to movement, the method includes updating a virtual orientation and/or a virtual location of the spherical background image to remain geospatially aligned with the first physical location.
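The orientation-update step can be sketched as a small class. This is a minimal sketch modeling only yaw; the class and method names are illustrative assumptions.

```python
class SphericalBackground:
    """Keep a spherical background geospatially aligned with a fixed
    physical location while the image capturing device moves.

    Angles are in degrees; only yaw is modeled for brevity.
    """

    def __init__(self, anchor_yaw_deg=0.0):
        # Stationary orientation of the background in virtual space.
        self.anchor_yaw_deg = anchor_yaw_deg

    def background_offset(self, camera_yaw_deg):
        """Azimuth offset at which to sample the background so that it
        appears stationary as the camera changes aim direction."""
        return (camera_yaw_deg - self.anchor_yaw_deg) % 360.0
```

When the camera aims at the anchor direction the offset is zero; any rotation away from it shifts the sampled region by the same angle, keeping the backdrop fixed in the virtual space.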
Abstract:
A conferencing system terminal device includes a communication device electronically in communication with a content presentation companion device operating as a primary display for the conferencing system terminal device during a videoconference. An image capture device of the conferencing system terminal device captures one or more images of a subject for presentation on the content presentation companion device. One or more processors apply a mirroring function to the one or more images of the subject when operating in a normal videoconference mode of operation and, in response to one or more sensors detecting an initiation of a demonstration operation by the subject, transition to a demonstration videoconference mode of operation where application of the mirroring function to the one or more images of the subject is precluded.
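The mode-dependent mirroring can be sketched as a single conditional. This is an illustrative stand-in for the claimed processors and sensors; the function name and row-of-pixels representation are assumptions.

```python
def preview_frame(frame_row, demonstration_mode):
    """Apply the mirroring function in the normal videoconference mode of
    operation; preclude it in the demonstration videoconference mode, so the
    demonstrated content reads correctly on the companion display.

    frame_row: one row of captured pixels
    demonstration_mode: True once sensors detect a demonstration operation
    """
    if demonstration_mode:
        return list(frame_row)        # demonstration mode: mirroring precluded
    return list(reversed(frame_row))  # normal mode: mirrored self-view
```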
Abstract:
Techniques for classes of tables for use in image compression are described. Classes of tables for use in image compression may provide increased compression without a reduction in quality compared to conventional image compression techniques. In at least some implementations, a plurality of table classes are generated that correspond to a particular camera subsystem, each table class containing a Huffman table and a quantization table. When an image captured by the particular camera subsystem is encoded, a table class is selected based on camera subsystem parameters associated with the image, and the Huffman table and the quantization table of the selected table class are utilized to encode the image.
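The table-class selection can be sketched as below. The class names, parameter thresholds, and table placeholders are illustrative assumptions; the source specifies only that each class pairs a Huffman table with a quantization table and is selected from camera subsystem parameters.

```python
# Hypothetical table classes for one camera subsystem: each class pairs a
# Huffman table with a quantization table tuned for a capture condition.
TABLE_CLASSES = {
    "low_light": {"huffman": "huff_low", "quant": "quant_low"},
    "daylight":  {"huffman": "huff_day", "quant": "quant_day"},
}

def select_table_class(iso, exposure_ms):
    """Pick a table class from camera subsystem parameters associated with
    the captured image (the thresholds here are illustrative only)."""
    if iso >= 800 or exposure_ms >= 50:
        return TABLE_CLASSES["low_light"]
    return TABLE_CLASSES["daylight"]
```

The encoder would then use both tables of the selected class when encoding the image, rather than a single fixed table pair for all conditions.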
Abstract:
Disclosed are techniques that provide a “best” picture taken within a few seconds of the moment when a capture command is received (e.g., when the “shutter” button is pressed). In some situations, several still images are automatically (that is, without the user's input) captured. These images are compared to find a “best” image that is presented to the photographer for consideration. Video is also captured automatically and analyzed to see if there is an action scene or other motion content around the time of the capture command. If the analysis reveals anything interesting, then the video clip is presented to the photographer. The video clip may be cropped to match the still-capture scene and to remove transitory parts. Higher-precision horizon detection may be provided based on motion analysis and on pixel-data analysis.
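The comparison that picks a "best" image from the automatically captured burst can be sketched with a simple sharpness proxy. The scoring metric below (energy of horizontal pixel differences) is an assumption for illustration; the source does not specify the comparison criteria.

```python
def sharpness(image):
    """Crude sharpness proxy: energy of horizontal pixel differences.
    image: 2D list of grayscale values."""
    return sum((row[x + 1] - row[x]) ** 2
               for row in image for x in range(len(row) - 1))

def best_picture(burst):
    """Return the image from an automatically captured burst with the
    highest sharpness score, standing in for the 'best' image comparison."""
    return max(burst, key=sharpness)
```

A blurry frame with little pixel variation scores near zero, so a frame with crisp edges wins the comparison and is the one presented to the photographer.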
Abstract:
Digital video stabilization is selectively turned off in circumstances where it could actually decrease the quality of a captured video. A video camera includes a device for directly detecting physical motion of the camera. Motion data from the motion detector are analyzed to see if video stabilization is appropriate. If the motion data indicate that the video camera is stable, for example, then video stabilization is not applied to the video, thus preventing the possibility of introducing “motion artifacts” into the captured video. In another example, motion as detected by the motion detector can be compared with motion as detected by the video-stabilization engine. If the two motions disagree significantly, then the video-stabilization engine is probably responding more to motion in the captured video rather than to motion of the camera itself, and video stabilization should probably not be applied to the video.
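Both gating checks described above can be sketched in one decision function. The threshold values are illustrative assumptions; the source describes the two conditions but not specific numbers.

```python
def should_stabilize(gyro_motion, video_motion,
                     stability_threshold=0.5, disagreement_factor=2.0):
    """Decide whether to apply digital video stabilization.

    gyro_motion: per-frame motion magnitudes from the physical motion detector
    video_motion: per-frame motion magnitudes estimated by the
                  video-stabilization engine from the captured pixels
    Thresholds are illustrative, not from the source.
    """
    mean_gyro = sum(gyro_motion) / len(gyro_motion)
    mean_video = sum(video_motion) / len(video_motion)
    # Camera is physically stable: stabilizing could introduce motion artifacts.
    if mean_gyro < stability_threshold:
        return False
    # Engine reports far more motion than the sensor: it is likely responding
    # to motion *in* the scene rather than motion *of* the camera.
    if mean_video > disagreement_factor * mean_gyro:
        return False
    return True
```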
Abstract:
A communication device, a method, and a computer program product provide virtual action centers superimposed over secondary segments of a video image preview on a display during video capture for a video communication session. The method includes receiving a local video stream comprising video images, identifying a primary region of interest (ROI) within a video image, and delineating the video image into a primary segment that encompasses the primary ROI and at least a secondary segment. The method includes associating a virtual interface with a location of the secondary segment in the video image, the virtual interface presenting at least one feature that can be selected via one of air gestures and screen touches during presentation of the primary segment of the video image preview. A preview of the delineated video image presents at least the primary segment and the virtual interface within the display device.
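The delineation and the virtual-interface hit test can be sketched as below. The box conventions, the single right-hand secondary segment, and the feature name are illustrative assumptions only.

```python
def delineate_segments(width, height, roi_box):
    """Delineate a video image into a primary segment encompassing the
    region of interest and one secondary segment (the band to its right,
    for brevity; a full implementation would cover every remaining region).

    roi_box: (x0, y0, x1, y1) bounding the primary ROI
    """
    x0, y0, x1, y1 = roi_box
    primary = (x0, y0, x1, y1)
    secondary = (x1, 0, width, height)
    return primary, secondary

def hit_test(virtual_interface, touch_xy):
    """Map a screen touch (or a resolved air gesture) to the feature of a
    virtual interface associated with a secondary segment's location."""
    x0, y0, x1, y1 = virtual_interface["segment"]
    x, y = touch_xy
    if x0 <= x < x1 and y0 <= y < y1:
        return virtual_interface["feature"]
    return None
```

Touches landing inside the primary segment return no feature, so interacting with the virtual action center never obscures selection of the subject itself.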