Abstract:
A computing device for reducing horizontal misalignment in a 360-degree video converts the 360-degree video to a rectilinear 360-degree video. At least a pair of views of rectilinear images from the rectilinear 360-degree video is generated and displayed. A user interface for facilitating adjustment of a view angle is generated, the user interface displaying the at least the pair of views of the rectilinear images from the rectilinear 360-degree video. The computing device obtains a vertical reference object in one of the views of rectilinear images; at least one of a roll angle adjustment, a pitch angle adjustment, and a yaw angle adjustment for aligning the vertical reference object with a vertical axis; and a view angle adjustment corresponding to reduction of the horizontal misalignment. A panoramic 360-degree video is then generated.
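The roll-angle step of the abstract can be illustrated with a minimal sketch. The function names and the 2-D image-plane simplification below are illustrative assumptions, not the patented method: given the projection of the vertical reference object in the view, we compute the roll adjustment that rotates it onto the vertical axis.

```python
import math

def roll_to_vertical(x, y):
    """Roll adjustment (radians) that rotates the image-plane projection
    (x, y) of the vertical reference object onto the vertical axis (0, 1)."""
    return math.atan2(x, y)

def apply_roll(x, y, angle):
    """Counterclockwise 2-D rotation of an image-plane vector by `angle`."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)
```

For example, a reference object leaning at 45 degrees (projection (1, 1)) yields a roll adjustment of pi/4, after which its projection lies on the vertical axis. Pitch and yaw adjustments would be handled analogously with full 3-D rotation matrices.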
Abstract:
A computing device executing an instant messaging application receives a selection from a user specifying at least one instant message conversation record to hide from view. The selected conversation record is hidden from view in response to occurrence of an event of a first type. In response to the occurrence of an event of a second type, a timer hidden from the user is launched. An unlock procedure entered by the user is received. In response to the entered unlock procedure matching a predetermined unlock procedure prior to expiration of the timer, the corresponding hidden conversation record is made viewable and is accessible again by the user.
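The hide/timer/unlock flow can be sketched as a small state machine. The class and method names below are illustrative assumptions; the events of the first and second type are modeled as explicit method calls.

```python
import time

class HiddenConversations:
    """Toy model of hiding conversation records behind a timed unlock."""

    def __init__(self, unlock_code, timeout_s):
        self._unlock = unlock_code
        self._timeout = timeout_s
        self._hidden = {}       # conversation id -> hidden record
        self._deadline = None   # timer state, never shown to the user

    def hide(self, conv_id, record):
        # Event of the first type: the selected record is hidden from view.
        self._hidden[conv_id] = record

    def start_timer(self):
        # Event of the second type: launch the timer hidden from the user.
        self._deadline = time.monotonic() + self._timeout

    def try_unlock(self, entered):
        # Records reappear only if the entered procedure matches the
        # predetermined one before the timer expires.
        if self._deadline is None or time.monotonic() > self._deadline:
            return {}
        if entered != self._unlock:
            return {}
        revealed, self._hidden = self._hidden, {}
        return revealed
```

A wrong code, or a correct code after the deadline, reveals nothing; a correct code within the window makes the hidden records viewable again.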
Abstract:
A computing device for inserting an effect into a 360 video receives the effect from a user. A target region is also received from the user, where the target region corresponds to a location within the 360 video for inserting the effect. Next, the following steps are performed for each frame in the 360 video. The effect is inserted on a surface of a spherical model based on the target region, and two half-sphere frames containing the effect from the spherical model are generated. The two half-sphere frames are stitched to generate a panoramic representation of the effect, and the panoramic representation of the effect is blended with an original source panorama to generate a modified 360 video frame with the effect.
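Two pieces of the per-frame pipeline lend themselves to a short sketch: mapping the user's target region from the spherical model into the panorama, and blending the stitched effect panorama with the original source panorama. The function names, the equirectangular layout, and the per-pixel alpha blend are illustrative assumptions, not the claimed implementation.

```python
import math

def latlon_to_equirect(lat, lon, width, height):
    """Map a target-region point on the sphere (radians) to pixel
    coordinates in a width-by-height equirectangular panorama."""
    u = (lon + math.pi) / (2 * math.pi) * (width - 1)
    v = (math.pi / 2 - lat) / math.pi * (height - 1)
    return u, v

def blend_panorama(source, effect, alpha):
    """Blend the panoramic representation of the effect with the original
    source panorama (nested lists of gray values, alpha in [0, 1])."""
    return [[(1 - a) * s + a * e
             for s, e, a in zip(srow, erow, arow)]
            for srow, erow, arow in zip(source, effect, alpha)]
```

The sphere's center point (lat 0, lon 0) lands in the middle of the panorama, and an alpha of 0.5 mixes effect and source equally; stitching the two half-sphere frames would fill in `effect` and `alpha` before this blending step.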
Abstract:
In a cloud computing device for synchronizing digital content with a client device, a first hash value and a second hash value in a media file are received from the client device, the media file comprising a plurality of group of pictures (GOP) blocks. Payloads and headers are searched based on the first hash value and the second hash value. Based on the searching step, edited portions of the media file are identified. For each edited portion, payload data is requested from the client device based on the first hash value, and header data is requested based on the second hash value. The payload data and the header data received from the client device are then stored.
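The idea of re-requesting only edited GOP blocks can be sketched with per-block hash pairs. The SHA-256 choice and the function names are assumptions for illustration; the abstract only requires some first hash over the payload and second hash over the header.

```python
import hashlib

def gop_hashes(payload: bytes, header: bytes):
    """Per-block (payload hash, header hash) pair, as the client would send."""
    return (hashlib.sha256(payload).hexdigest(),
            hashlib.sha256(header).hexdigest())

def find_edited_blocks(client_hashes, server_hashes):
    """Indices of GOP blocks whose payload or header hash differs from the
    server's stored copy; only those blocks need to be requested again."""
    edited = [i for i, (c, s) in enumerate(zip(client_hashes, server_hashes))
              if c != s]
    # Blocks appended on the client have no server counterpart yet.
    edited.extend(range(len(server_hashes), len(client_hashes)))
    return edited
```

Unchanged blocks hash identically on both sides and are skipped, so only the edited portions' payload and header data cross the network.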
Abstract:
Disclosed are systems and methods for automatically applying special effects based on media content characteristics. A digital image is obtained and depth information in the digital image is determined. A foreground region and a background region in the digital image are identified based on the depth information. First and second effects are selected from a grouping of effects, where the first effect is applied to at least a portion of the foreground region and the second effect is applied to at least a portion of the background region.
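The depth-based split and the two-effect application can be sketched directly. The threshold test and the function names are illustrative assumptions; the abstract does not specify how the depth information partitions the image.

```python
def split_by_depth(depth_map, threshold):
    """Foreground mask from depth information: True where the pixel
    is nearer to the camera than the threshold."""
    return [[d < threshold for d in row] for row in depth_map]

def apply_effects(image, mask, fg_effect, bg_effect):
    """Apply the first effect to foreground pixels and the second
    effect to background pixels (grayscale nested lists)."""
    return [[fg_effect(p) if m else bg_effect(p)
             for p, m in zip(irow, mrow)]
            for irow, mrow in zip(image, mask)]
```

With a brightening foreground effect and a darkening background effect, a near pixel is doubled while a far pixel is halved, mimicking a simple depth-aware stylization.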
Abstract:
A system and method for licensing software using a clearinghouse to license only the technology modules that an end user registers. The clearinghouse maintains registration information which can be used to bill a software provider for the technology licensed to the end user. The system can be used to compensate technology owners only after the end user registers an unlicensed technology module. Thus, the system and method allow software vendors to reduce costs by licensing only the technologies that an end user actually uses. The clearinghouse can also be used to track the usage of software functionality to determine the popularity of a particular technology.
Abstract:
Various embodiments are disclosed for performing inpainting. One embodiment is a method for editing a digital image in an image editing device. The method comprises obtaining an inpainting region in the digital image, determining a target resolution for scaling a resolution of the digital image based on an original resolution of the digital image, and determining an intermediate resolution level for scaling a resolution of the digital image based on the target resolution. The method further comprises scaling the resolution of the digital image to the intermediate resolution level, performing partial inpainting of the inpainting region at the intermediate resolution level, and performing inpainting on a remainder portion in the inpainting region at the target resolution.
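One plausible way to pick the intermediate resolution level between the original and target resolutions is their geometric mean, so that each scaling step changes resolution by the same factor. This particular rule is an assumption for illustration; the abstract only says the intermediate level is determined based on the target resolution.

```python
import math

def intermediate_resolution(original, target):
    """One intermediate resolution level between the original and target
    resolutions: their geometric mean, giving equal scaling factors
    for the partial-inpainting and final-inpainting passes."""
    return int(round(math.sqrt(original * target)))
```

For an original width of 1024 and a target of 256, the intermediate level is 512: partial inpainting runs at 512, then the remainder of the inpainting region is completed at 256.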
Abstract:
Various embodiments are disclosed for image editing. A frame is obtained from a frame sequence depicting at least one individual, and facial characteristics in the frame are analyzed. A utilization score is assigned to the frame based on the analyzed facial characteristics, and a determination of whether to utilize the frame is made based on the utilization score. A completeness value is assigned, and a determination is made based on the completeness value of whether to repeat the steps above for an additional frame in the frame sequence. Regions from the frames are combined to generate a composite image.
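The score-and-loop structure can be sketched as follows. The particular facial characteristics (eye openness, smile, blur), their weights, and the completeness rule are all illustrative assumptions; the abstract leaves them unspecified.

```python
def utilization_score(face):
    """Toy utilization score in [0, 1]: reward open eyes and a smile,
    penalize blur (each characteristic assumed normalized to [0, 1])."""
    return (0.4 * face["eyes_open"]
            + 0.4 * face["smile"]
            + 0.2 * (1.0 - face["blur"]))

def select_frames(frames, score_threshold, completeness_target):
    """Scan the frame sequence, keeping frames that score high enough,
    and stop once the completeness value reaches 1.0."""
    selected = []
    for face in frames:
        if utilization_score(face) >= score_threshold:
            selected.append(face)
        completeness = len(selected) / completeness_target
        if completeness >= 1.0:
            break
    return selected
```

Once enough usable frames are collected, regions from them (e.g. each individual's best face) would be combined into the composite image.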
Abstract:
Local contrast enhancement comprises obtaining a first frame as a reference frame and a second frame as a current frame. The reference frame is partitioned into a plurality of reference regions, and a first color mapping function is derived for at least one of the reference regions in the reference frame according to the corresponding color distribution. The first color mapping function comprises, for at least one of a predetermined set of colors, a first source color value and a first contrast-enhanced color value. The current frame is partitioned into a plurality of regions, and a second color mapping function is derived for at least one of the regions in the current frame according to the first color mapping functions of at least two of the reference regions in the reference frame. The second color mapping function is applied to generate a contrast-enhanced frame.
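The derivation of a current-frame region's mapping from two reference-region mappings can be sketched as a weighted blend over a shared set of colors, followed by a lookup-table application. The linear blend and the function names are illustrative assumptions; the abstract only requires that the second mapping be derived from at least two reference mappings.

```python
def blend_mappings(map_a, map_b, weight_a):
    """Second color mapping function for a current-frame region, derived
    as a weighted blend of two reference-region mappings (dicts from
    source color value to contrast-enhanced color value)."""
    return {c: weight_a * map_a[c] + (1.0 - weight_a) * map_b[c]
            for c in map_a}

def apply_mapping(region, mapping):
    """Replace each source color value in the region with its
    contrast-enhanced value to produce the contrast-enhanced frame."""
    return [[mapping[p] for p in row] for row in region]
```

A region halfway between two reference regions (weight 0.5) gets the average of their enhanced values for each source color, which keeps the enhancement spatially smooth across region boundaries.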