Abstract:
The present invention is embodied in an apparatus, and related method, for sensing a person's facial movements, features, characteristics, and the like to generate and animate an avatar image based on the facial sensing. The avatar apparatus uses an image processing technique based on model graphs and bunch graphs that efficiently represent image features as jets. The jets are composed of wavelet transforms processed at node or landmark locations on an image corresponding to readily identifiable features. The nodes are acquired and tracked to animate an avatar image in accordance with the person's facial movements. Also, the facial sensing may use jet similarity to determine the person's facial features and characteristics, thus allowing tracking of the person's natural characteristics without any unnatural elements that might interfere with or inhibit them.
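As a concrete illustration of the jet machinery described above, the minimal Python sketch below builds a complex Gabor jet at a single landmark and compares two jets with a magnitude-based similarity. The kernel parameters, function names, and the choice of magnitude-only similarity are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2-D complex Gabor kernel for one frequency/orientation pair."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rotated = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * 2.0 * np.pi * rotated / wavelength)
    return envelope * carrier

def jet_at(image, x, y, wavelengths=(4, 8, 16), orientations=8, size=17):
    """Stack of complex Gabor responses (a "jet") at one landmark (x, y)."""
    half = size // 2
    patch = image[y - half:y + half + 1, x - half:x + half + 1]
    responses = []
    for lam in wavelengths:
        for k in range(orientations):
            kern = gabor_kernel(size, lam, np.pi * k / orientations, sigma=lam / 2)
            responses.append(np.sum(patch * kern))
    return np.array(responses)

def jet_similarity(j1, j2):
    """Magnitude-based similarity: 1.0 means identical local texture."""
    a1, a2 = np.abs(j1), np.abs(j2)
    return float(a1 @ a2 / (np.linalg.norm(a1) * np.linalg.norm(a2)))
```

A tracker in this style would slide a candidate landmark around its previous position and keep the displacement whose jet maximizes this similarity.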
Abstract:
A method and apparatus for surfacing content during a photo sharing process is described. The method may include detecting the initiation of a social networking photo sharing process for a digital image. The method may also include transmitting digital image data corresponding to the digital image to a content surfacing server to locate content relevant to the digital image. The method may also include receiving one or more items of data relevant to the digital image from the content surfacing server. Furthermore, the method may include presenting a notification, before the digital image is uploaded to a social networking system by the social networking photo sharing process, to a user to indicate that content relevant to the digital image has been received.
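A minimal sketch of the client-side flow might look as follows; the endpoint URL, payload format, and callback names are hypothetical stand-ins for the surfacing protocol the abstract leaves unspecified.

```python
import json
import urllib.request

SURFACING_URL = "https://surfacing.example.com/lookup"  # hypothetical endpoint

def surface_content(image_bytes):
    """Send image data to the content surfacing server; return relevant items."""
    req = urllib.request.Request(
        SURFACING_URL, data=image_bytes,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # e.g. [{"title": ..., "url": ...}]

def share_photo(image_bytes, upload_fn, notify_fn):
    """Surface related content, then notify *before* the upload proceeds."""
    items = surface_content(image_bytes)
    if items:
        notify_fn(items)    # the notification precedes the upload, as claimed
    upload_fn(image_bytes)  # then hand the image to the social network
```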
Abstract:
Embodiments of this invention relate to detecting objects in images and blurring them. In an embodiment, a system detects objects in a photographic image. The system includes an object detector module configured to detect regions of the photographic image that include objects of a particular type, based at least on the content of the photographic image. The system further includes a false positive detector module configured to determine whether each region detected by the object detector module includes an object of the particular type, based at least on information about the context in which the photographic image was taken.
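The sketch below shows one plausible shape for this two-stage pipeline: a detector proposes regions, a context-based check discards implausible ones, and the survivors are redacted. The context fields and the mean-fill redaction are illustrative assumptions, not the patent's actual features.

```python
import numpy as np

def is_false_positive(region, context):
    """Context check: a detection far larger than is plausible given camera
    metadata is rejected. The fields used here are illustrative only."""
    x, y, w, h = region
    return max(w, h) > context.get("max_plausible_px", 400)

def blur_regions(image, regions, context):
    """Redact each detected region that survives the false-positive check."""
    out = image.astype(float).copy()
    for region in regions:
        if is_false_positive(region, context):
            continue
        x, y, w, h = region
        # A mean fill stands in for a proper blur; it suffices to show
        # where the redaction step fits in the pipeline.
        out[y:y + h, x:x + w] = out[y:y + h, x:x + w].mean()
    return out.astype(image.dtype)
```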
Abstract:
A method and apparatus for enabling dynamic product and vendor identification and the display of relevant purchase information are described herein. According to embodiments of the invention, a recognition process is executed on sensor data captured via a mobile computing device to identify one or more items, and to identify at least one product associated with the one or more items. Product and vendor information for the at least one product is retrieved and displayed via the mobile computing device. In the event a user gesture is detected in response to displaying the product and vendor information data, processing logic may submit a purchase order for the product (e.g., for an online vendor) or contact the vendor (e.g., for an in-store vendor).
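A sketch of the glue logic, with every stage injected as a callable, might read as follows; all names and the gesture vocabulary are assumptions made for illustration.

```python
def handle_frame(sensor_data, recognize, lookup, display, await_gesture,
                 submit_order, contact_vendor):
    """recognize -> lookup -> display -> gesture -> purchase/contact pipeline."""
    for item in recognize(sensor_data):          # items found in the sensor data
        product = lookup(item)                   # product + vendor information
        display(product)
        gesture = await_gesture(timeout_s=5)     # e.g. a tap on the listing
        if gesture != "buy":
            continue
        if product["vendor_type"] == "online":
            submit_order(product)                # online vendor: place the order
        else:
            contact_vendor(product)              # in-store vendor: contact them
```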
Abstract:
In one embodiment, the present invention is a method for populating and updating a database of images of landmarks, including geo-clustering geo-tagged images according to geographic proximity to generate one or more geo-clusters, and visual-clustering the one or more geo-clusters according to image similarity to generate one or more visual clusters. In another embodiment, the present invention is a system for identifying landmarks from digital images, including the following components: a database of geo-tagged images; a landmark database; a geo-clustering module; and a visual clustering module. In other embodiments, the present invention may be a method of enhancing user queries to retrieve images of landmarks, or a method of automatically tagging a new digital image with text labels.
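A minimal two-pass sketch of the clustering pipeline is below: photos are first bucketed by geographic proximity (a simple grid stands in for whatever geo-clustering the patent uses), then greedily grouped by visual similarity via an injected descriptor comparison.

```python
import math
from collections import defaultdict

def geo_cluster(photos, cell_deg=0.01):
    """Pass 1: bucket geo-tagged photos into roughly 1 km grid cells."""
    buckets = defaultdict(list)
    for p in photos:  # p = {"lat": ..., "lon": ..., "descriptor": ...}
        key = (math.floor(p["lat"] / cell_deg), math.floor(p["lon"] / cell_deg))
        buckets[key].append(p)
    return list(buckets.values())

def visual_cluster(geo_cluster_photos, similar, threshold=0.8):
    """Pass 2: greedy agglomeration by image similarity within a geo-cluster;
    `similar(a, b)` is a stand-in for the actual image-matching step."""
    groups = []
    for p in geo_cluster_photos:
        for g in groups:
            if similar(p["descriptor"], g[0]["descriptor"]) >= threshold:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups
```

Each resulting visual cluster is a candidate landmark; its most frequent text labels could then serve the query-enhancement and auto-tagging embodiments.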
Abstract:
A method and apparatus for enabling virtual tags is described. The method may include receiving first digital image data and virtual tag data to be associated with a real-world object in the first digital image data, wherein the first digital image data is captured by a first mobile device, and the virtual tag data includes metadata received from a user of the first mobile device. The method may also include generating a first digital signature from the first digital image data that describes the real-world object, and in response to the generation, inserting in substantially real time the first digital signature into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between the virtual tag data and the first digital signature inserted into the index of digital images.
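The sketch below mimics the storage side with SQLite; the byte hash is only a placeholder for a real visual signature (which must match re-photographs of the same object, as a hash cannot), and the schema is hypothetical.

```python
import hashlib
import sqlite3

def signature(image_bytes):
    """Placeholder for a descriptor of the real-world object; a deployed
    system would use robust image features, not a byte hash."""
    return hashlib.sha256(image_bytes).hexdigest()

def store_tag(db, image_bytes, tag_metadata):
    """Index the image signature and associate the user's tag with it."""
    sig = signature(image_bytes)
    db.execute("INSERT OR IGNORE INTO image_index(sig) VALUES (?)", (sig,))
    db.execute("INSERT INTO tags(sig, metadata) VALUES (?, ?)",
               (sig, tag_metadata))
    db.commit()  # visible to searches in (substantially) real time
    return sig

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE image_index (sig TEXT PRIMARY KEY)")
db.execute("CREATE TABLE tags (sig TEXT, metadata TEXT)")
store_tag(db, b"...image bytes...", "great coffee at this counter")
```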
Abstract:
A gaze tracking technique is implemented with a head mounted gaze tracking device that communicates with a server. The server receives scene images from the head mounted gaze tracking device which captures external scenes viewed by a user wearing the head mounted device. The server also receives gaze direction information from the head mounted gaze tracking device. The gaze direction information indicates where in the external scenes the user was gazing when viewing the external scenes. An image recognition algorithm is executed on the scene images to identify items within the external scenes viewed by the user. A gazing log tracking the identified items viewed by the user is generated.
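Server-side, the core bookkeeping might reduce to a hit test of the gaze point against recognized item boxes, as in this sketch; `recognize_items` is an injected stand-in for the image recognition algorithm.

```python
import time

def update_gazing_log(log, scene_image, gaze_xy, recognize_items):
    """Append whichever recognized item lies under the user's gaze point.
    recognize_items(image) -> [(label, (x, y, w, h)), ...]"""
    gx, gy = gaze_xy
    for label, (x, y, w, h) in recognize_items(scene_image):
        if x <= gx < x + w and y <= gy < y + h:
            log.append({"item": label, "time": time.time()})
    return log
```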
Abstract:
A visual query is received from a client system, along with location information for the client system, and processed by a server system. The server system sends the visual query and the location information to a visual query search system, and receives from the visual query search system enhanced location information based on the visual query and the location information. The server system then sends a search query, including the enhanced location information, to a location-based search system. The location-based search system returns one or more search results, which the server system provides to the client system.
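Reduced to Python, the two-hop flow is short; both search backends are injected callables, and the field names of the enhanced location information are assumptions.

```python
def locate_and_search(visual_query, device_location, visual_search, local_search):
    """Refine the coarse device location using the visual query, then run a
    location-based search with the refined location."""
    enhanced = visual_search(visual_query, device_location)
    # e.g. enhanced = {"place": "Joe's Diner", "latlon": (37.78, -122.41)}
    return local_search(query=enhanced["place"], location=enhanced["latlon"])
```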
Abstract:
The present invention provides a technique for translating facial animation values to head mesh positions for rendering facial features of an animated avatar. In the method, an animation vector of dimension Na is provided. Na is the number of facial animation values in the animation vector. A mapping algorithm F is applied to the animation vector to generate a target mix vector of dimension M. M is the number of targets associated with the head mesh positions. The head mesh positions are deformed based on the target mix vector.
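When the mapping algorithm F is linear, the whole pipeline collapses to two matrix operations, as in this sketch; treating F as a matrix and the targets as blendshape deltas is an assumption made for illustration.

```python
import numpy as np

def deform_mesh(base_mesh, targets, animation_vector, F):
    """base_mesh: (V, 3) rest positions; targets: (M, V, 3) per-target deltas;
    F: (M, Na) mapping matrix; animation_vector: (Na,) animation values."""
    mix = F @ animation_vector                   # target mix vector, length M
    return base_mesh + np.tensordot(mix, targets, axes=1)  # blendshape sum
```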