Abstract:
Embodiments that provide cartoon personalization are disclosed. In accordance with one embodiment, a cartoon personalization method includes selecting a face image having a pose orientation that substantially matches an original pose orientation of a character in a cartoon image. The method also includes replacing the face of the character in the cartoon image with the face image. The method further includes blending the face image with the remainder of the character in the cartoon image.
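As an illustration of the selection and blending steps, the following Python sketch picks the candidate face whose pose best matches the character's pose and alpha-blends it into the cartoon with a feathered mask. The pose representation, image layout, and mask are assumptions for illustration, not the disclosed implementation.

```python
# Hypothetical sketch of the selection-and-blending steps; pose tuples, array
# shapes, and the feathered mask are illustrative assumptions.
import numpy as np

def select_best_face(candidates, target_pose):
    """Pick the candidate whose (yaw, pitch, roll) is closest to the character's pose."""
    return min(candidates,
               key=lambda c: np.linalg.norm(np.array(c["pose"]) - np.array(target_pose)))

def blend_face(cartoon, face, mask, top_left):
    """Alpha-blend the face image into the cartoon using a feathered mask in [0, 1]."""
    out = cartoon.astype(np.float32).copy()
    y, x = top_left
    h, w = face.shape[:2]
    region = out[y:y + h, x:x + w]
    alpha = mask[..., None]                         # broadcast mask over color channels
    out[y:y + h, x:x + w] = alpha * face + (1.0 - alpha) * region
    return out.astype(np.uint8)
```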
Abstract:
Methods and apparatus for synthesizing images from two or more existing images are described. The described embodiment uses an illumination model as the mathematical basis for combining the images. A first of the images is utilized as an object color or color source (i.e., the foreground) for a resultant image that is to be formed. A second of the images (utilized as the background or texture) serves as a perturbation source. In accordance with the described embodiment, the first image is represented by a plane that has a plurality of surface normal vectors. Aspects of the second image are utilized to perturb or adjust the surface normal vectors of the plane that represents the first image. Perturbation occurs, in the described embodiment, by determining individual intensity values for corresponding pixels of the second image. The intensity values are mapped to corresponding angular displacement values. The angular displacement values are used to angularly adjust or deviate the surface normal vectors for corresponding image pixels of the plane that represents the first image. This yields a virtual surface whose normal vectors are not fully specified, but are constrained only by the angles between the original surface normal vectors and the perturbed normal vectors. In the described embodiment, after assumptions are made concerning the viewing and lighting source directions, an illumination model is applied to the virtual surface to yield the resultant synthesized image.
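The following Python sketch illustrates the perturbation-plus-illumination idea under simplifying assumptions: intensities from the second image are mapped linearly to angular displacements of a flat plane's normals, and a Lambertian model with a fixed light direction shades the first image. The linear mapping, fixed azimuth, and Lambertian shading are assumptions, not the described embodiment's exact model.

```python
# Minimal sketch: perturb a flat surface's normals by texture intensity, then
# shade the color image with an assumed Lambertian term and fixed light direction.
import numpy as np

def synthesize(color_img, texture_img, max_angle=np.radians(30.0), light=(0.3, 0.3, 0.9)):
    """Combine a color (foreground) image with a texture (perturbation) image."""
    intensity = texture_img.mean(axis=2) / 255.0           # per-pixel intensity in [0, 1]
    theta = intensity * max_angle                           # map intensity to angular displacement
    # Tilt each normal away from (0, 0, 1) by theta, using an arbitrary fixed azimuth.
    normals = np.stack([np.sin(theta), np.zeros_like(theta), np.cos(theta)], axis=-1)
    light = np.asarray(light, dtype=np.float32)
    light /= np.linalg.norm(light)
    shade = np.clip(normals @ light, 0.0, 1.0)              # Lambertian term N . L
    return (color_img.astype(np.float32) * shade[..., None]).astype(np.uint8)
```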
Abstract:
Various embodiments provide techniques for calibrating and annotating video content. In one or more embodiments, an instance of video content can be calibrated with one or more geographical models and/or existing calibrated video content to correlate the instance of video content with one or more geographical locations. According to some embodiments, geographical information can be used to annotate the video content. Geographical information can include identification information for one or more structures, natural features, and/or locations included in the video content. Some embodiments enable a particular instance of video content to be correlated with other instances of video content based on common geographical information and/or common annotation information. Thus, a user can access video content from other users with similar travel experiences and/or interests. A user may also access annotations provided by other users that may be relevant to a particular instance of video content.
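A minimal sketch of the correlation idea follows, assuming each video instance carries a set of geographic/annotation tags; the tag representation and the overlap rule are assumptions for illustration only.

```python
# Illustrative sketch: two video instances are treated as correlated when they
# share enough geographic or annotation tags (threshold is an assumption).
def correlated(video_a_tags, video_b_tags, min_shared=1):
    """Return True when two videos share at least `min_shared` tags."""
    return len(set(video_a_tags) & set(video_b_tags)) >= min_shared

# Example: two clips both annotated with the same landmark would be correlated.
print(correlated({"Space Needle", "Pike Place Market"}, {"Space Needle"}))  # True
```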
Abstract:
Techniques are described for rendering annotations associated with an image. A view of an image may be shown on a display, and different portions of the image are displayed and undisplayed in the view according to panning and/or zooming of the image within the view. The image may have annotations. An annotation may have a location in the image and may have associated renderable media. The location of the annotation relative to the view may change according to the panning and/or zooming. A strength of the annotation may be computed, the strength changing based on the panning and/or zooming of the image. The media may be rendered according to the strength. Whether to render the media may be determined by comparing the strength to a threshold.
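One plausible way to compute such a strength is sketched below; the specific weighting of distance-to-view-center and zoom-level difference is an assumption, not the described technique's exact formula.

```python
# Hedged sketch: strength grows as the annotation nears the view center and as
# the current zoom level approaches the annotation's zoom level.
import math

def annotation_strength(ann_xy, view_center_xy, view_radius, ann_zoom, view_zoom):
    """Combine a spatial term and a zoom term into a single strength value."""
    dist = math.dist(ann_xy, view_center_xy)
    spatial = max(0.0, 1.0 - dist / view_radius)        # 1 at the center, 0 at/beyond the edge
    zoom = 1.0 / (1.0 + abs(ann_zoom - view_zoom))      # 1 when zoom levels coincide
    return spatial * zoom

def should_render(strength, threshold=0.5):
    """Render the annotation's media only when its strength exceeds the threshold."""
    return strength > threshold
```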
Abstract:
Mean shift is a nonparametric density estimator that has been applied to image and video segmentation. Traditional mean shift-based segmentation uses a radially symmetric kernel to estimate local density, which is not optimal given the often structured nature of image and, more particularly, video data. The system and method of the invention employ an anisotropic kernel mean shift in which the shape, scale, and orientation of the kernels adapt to the local structure of the image or video. The anisotropic kernel is decomposed to provide handles for modifying the segmentation based on simple heuristics. Experimental results show that the anisotropic kernel mean shift outperforms the original mean shift on image and video segmentation in the following aspects: 1) it produces smoother results on general images and video; 2) the segmented results are more consistent with human visual saliency; and 3) the system and method are robust to initial parameters.
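A minimal sketch of a mean shift iteration with an anisotropic (Mahalanobis) kernel follows; using a single fixed bandwidth matrix H is a simplification, since the described method adapts the kernel's shape, scale, and orientation to the local structure of the data.

```python
# Sketch of one anisotropic mean shift mode-seeking loop over feature points.
import numpy as np

def anisotropic_mean_shift(points, start, H, iters=20, tol=1e-4):
    """Shift `start` toward the local density mode under the Mahalanobis kernel H."""
    points = np.asarray(points, dtype=np.float64)
    H_inv = np.linalg.inv(H)
    y = np.asarray(start, dtype=np.float64)
    for _ in range(iters):
        diff = points - y                                    # (n, d) offsets to every sample
        d2 = np.einsum("nd,de,ne->n", diff, H_inv, diff)     # squared Mahalanobis distances
        w = np.exp(-0.5 * d2)                                # Gaussian kernel weights
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y
```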
Abstract:
The automated social networking graph mining and visualization technique described herein mines social connections and allows creation of a social networking graph from general (not necessarily social-application-specific) Web pages. The technique uses the distances between a person's/entity's name and related people's/entities' names on one or more Web pages to determine connections between people/entities and the strengths of those connections. In one embodiment, the technique uses a force-directed model to lay out and cluster these connections in a 2-D social networking graph that represents the Web connection strengths among the related people's or entities' names.
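The following sketch illustrates one assumed distance-based weighting: the strength of a connection decays with the number of characters separating two names on a page and accumulates over co-occurrences. The decay formula is an assumption for illustration, not the technique's disclosed scoring.

```python
# Sketch under an assumed weighting: sum a distance-decayed score over every
# co-occurrence of two names in a page's text.
def connection_strength(page_text, name_a, name_b):
    """Return a score that grows when the two names appear close together."""
    pos_a = [i for i in range(len(page_text)) if page_text.startswith(name_a, i)]
    pos_b = [i for i in range(len(page_text)) if page_text.startswith(name_b, i)]
    return sum(1.0 / (1 + abs(a - b)) for a in pos_a for b in pos_b)
```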
Abstract:
A user may perform an image search on an object shown in an image. The user may use a mobile device to display an image. In response to displaying the image, the client device may send the image to a visual search system for image segmentation. Upon receiving a segmented image from the visual search system, the client device may display the segmented image to the user, who may select one or more segments including an object of interest to instantiate a search. The visual search system may formulate a search query based on the one or more selected segments and perform a search using the search query. The visual search system may then return search results to the client device for display to the user.
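A hypothetical client-side sketch of this flow is shown below; the endpoint URLs, JSON fields, and segment-selection step are assumptions used only to illustrate the segment-then-search interaction.

```python
# Hypothetical client flow: upload the image for segmentation, pick segments,
# then ask the service to search on the selected segments.
import requests

def visual_search(image_path, search_url="https://example.com/visual-search"):
    with open(image_path, "rb") as f:
        # 1. Upload the displayed image for server-side segmentation.
        segments = requests.post(f"{search_url}/segment", files={"image": f}).json()["segments"]
    # 2. In a real client the user would tap the segments of interest; pick the first here.
    selected = [segments[0]["id"]]
    # 3. Have the service formulate a query from the selected segments and run the search.
    results = requests.post(f"{search_url}/search", json={"segment_ids": selected}).json()
    return results["results"]
```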