Abstract:
This patent document relates generally to steganography and digital watermarking. One claim recites an apparatus comprising: memory for storing data representing an image or video, in which the data comprises first data corresponding to first color data, second data corresponding to second color data and third data corresponding to third color data, the image or video to host auxiliary information; a processor programmed for: weighting the first data, the second data and the third data according to at least the following two factors: i) a color direction associated with an expected embedding direction; and ii) expected image capture or signal processing; and determining from weighted first data, weighted second data and weighted third data, changes in one or more image or video attribute(s), in which the auxiliary information is conveyed through the changes. Of course, other claims and combinations are provided too.
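The per-channel weighting described in this claim can be sketched in a few lines; the following Python fragment is only an illustration, assuming an RGB image held as a NumPy array, with the weight values, function names, and the simple per-pixel "attribute" invented for the example rather than taken from the claims.

```python
import numpy as np

def weight_channels(image, embed_direction_weights, capture_weights):
    """Weight the first, second and third color planes by (i) a color-direction
    factor tied to the expected embedding direction and (ii) a factor modeling
    expected image capture / signal processing (both sets of values illustrative)."""
    combined = np.asarray(embed_direction_weights) * np.asarray(capture_weights)
    return image.astype(np.float64) * combined   # broadcast over the color axis

def attribute_changes(weighted, payload_pattern, strength=2.0):
    """Derive changes in a stand-in image attribute (a per-pixel sum of the
    weighted planes) through which the auxiliary payload pattern is conveyed."""
    attribute = weighted.sum(axis=2)
    return strength * payload_pattern * np.sign(attribute - attribute.mean())

# Illustrative use with a random image and a +/-1 payload pattern.
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
pattern = np.random.choice([-1.0, 1.0], size=(64, 64))
changes = attribute_changes(weight_channels(img, (0.3, 0.6, 0.1), (0.9, 1.0, 0.8)), pattern)
print(changes.shape)
```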
Abstract:
The present disclosure relates generally to digital watermarking and data hiding. One claim recites an apparatus comprising: means for storing a watermark signal; means for embedding a watermark signal in a first portion of a video signal; means for preconditioning the watermark signal in a first manner to allow expanded detection of said preconditioned watermark signal in the presence of first distortion; means for embedding the watermark signal preconditioned in the first manner in a second portion of the video signal; means for preconditioning the watermark signal in a second manner to allow expanded detection of said preconditioned watermark signal in the presence of second distortion; and means for embedding the watermark signal preconditioned in the second manner in a third portion of the video signal. Of course, other claims are provided too.
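A rough sketch of this multi-preconditioning scheme follows, assuming the watermark is a small 2-D pattern and that "preconditioning" can be modeled as simple upsampling or tiling; the distortion models, amplitudes, and function names are assumptions made for illustration only.

```python
import numpy as np

def precondition_for_blur(wm):
    """Illustrative first preconditioning: expand each sample into a 2x2 block
    and boost amplitude so the pattern better survives low-pass (blur-like) distortion."""
    return np.kron(wm, np.ones((2, 2))) * 1.5

def precondition_for_crop(wm, tiles=2):
    """Illustrative second preconditioning: tile the pattern so a detector can
    still find a copy when part of the frame is cropped away."""
    return np.tile(wm, (tiles, tiles))

def embed(frame, wm, strength=1.0):
    """Add the (possibly preconditioned) watermark pattern into one frame."""
    h = min(frame.shape[0], wm.shape[0])
    w = min(frame.shape[1], wm.shape[1])
    out = frame.astype(np.float64)
    out[:h, :w] += strength * wm[:h, :w]
    return out

# Three portions of a toy "video": plain, blur-hardened, and crop-hardened embedding.
wm = np.random.choice([-1.0, 1.0], size=(32, 32))
video = [np.random.rand(64, 64) for _ in range(3)]
marked = [embed(video[0], wm),
          embed(video[1], precondition_for_blur(wm)),
          embed(video[2], precondition_for_crop(wm))]
print([f.shape for f in marked])
```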
Abstract:
In some arrangements, product packaging is digitally watermarked over most of its extent to facilitate high-throughput item identification at retail checkouts. Imagery captured by conventional or plenoptic cameras can be processed (e.g., by GPUs) to derive several different perspective-transformed views—further minimizing the need to manually reposition items for identification. Crinkles and other deformations in product packaging can be optically sensed, allowing such surfaces to be virtually flattened to aid identification. Piles of items can be 3D-modeled and virtually segmented into geometric primitives to aid identification, and to discover locations of obscured items. Other data (e.g., including data from sensors in aisles, shelves and carts, and gaze tracking for clues about visual saliency) can be used in assessing identification hypotheses about an item. Logos may be identified and used—or ignored—in product identification. A great variety of other features and arrangements are also detailed.
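The perspective-transformed views mentioned above might be approximated as in the following sketch, which warps one captured frame with OpenCV homographies on the CPU; the corner offsets and the placeholder identification step are assumptions, and a real system could run the warps on a GPU as the text suggests.

```python
import numpy as np
import cv2  # OpenCV; the warps here run on the CPU for simplicity

def candidate_views(frame, tilt_px=40):
    """Warp one captured frame into a few perspective-transformed views, as a
    stand-in for the multi-view generation described above."""
    h, w = frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    views = []
    for dx in (-tilt_px, 0, tilt_px):
        dst = np.float32([[dx, 0], [w - dx, 0], [w, h], [0, h]])
        M = cv2.getPerspectiveTransform(src, dst)
        views.append(cv2.warpPerspective(frame, M, (w, h)))
    return views

def try_identify(view):
    """Placeholder for a watermark or barcode read attempt on one view."""
    return None  # a real reader would return a payload or None

frame = np.zeros((480, 640, 3), dtype=np.uint8)
ids = [try_identify(v) for v in candidate_views(frame)]
print(len(ids))
```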
Abstract:
Methods and arrangements involving portable user devices such as smartphones and wearable electronic devices are disclosed, as well as other devices and sensors distributed within an ambient environment. Some arrangements enable a user to perform an object recognition process in a computationally- and time-efficient manner. Other arrangements enable users and other entities to, either individually or cooperatively, register or enroll physical objects into one or more object registries on which an object recognition process can be performed. Still other arrangements enable users and other entities to, either individually or cooperatively, associate registered or enrolled objects with one or more items of metadata. A great variety of other features and arrangements are also detailed.
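A toy registry illustrating enrollment, metadata association, and recognition by nearest signature follows; the class names, the feature-vector representation, and the matching rule are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RegisteredObject:
    """One enrolled physical object: a feature signature plus attached metadata."""
    object_id: str
    signature: List[float]          # e.g., an image-feature or fingerprint vector
    metadata: Dict[str, str] = field(default_factory=dict)

class ObjectRegistry:
    """Toy object registry supporting enrollment, metadata association, and
    nearest-signature lookup (the matching rule is a simple illustration)."""
    def __init__(self):
        self.objects: Dict[str, RegisteredObject] = {}

    def enroll(self, object_id, signature, **metadata):
        self.objects[object_id] = RegisteredObject(object_id, list(signature), dict(metadata))

    def associate(self, object_id, key, value):
        self.objects[object_id].metadata[key] = value

    def recognize(self, query_signature):
        def dist(obj):
            return sum((a - b) ** 2 for a, b in zip(obj.signature, query_signature))
        return min(self.objects.values(), key=dist) if self.objects else None

registry = ObjectRegistry()
registry.enroll("mug-001", [0.1, 0.9, 0.3], owner="alice")
registry.associate("mug-001", "note", "kitchen shelf")
print(registry.recognize([0.1, 0.8, 0.35]).object_id)
```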
Abstract:
In some arrangements, product packaging is digitally watermarked over most of its extent to facilitate high-throughput item identification at retail checkouts. Imagery captured by conventional or plenoptic cameras can be processed (e.g., by GPUs) to derive several different perspective-transformed views—further minimizing the need to manually reposition items for identification. One claim recites a system for capturing imagery of retail items supported by a fixture, comprising: at least one camera unit having at least one operative camera to capture imagery of at least a portion of the retail items supported by the fixture; and one or more processors configured to process the captured imagery to discern identifying information associated with the imaged retail items. In one implementation the at least one camera unit comprises a plurality of cameras, wherein a field of view of a first one of the plurality of cameras overlaps with a field of view of a second one of the plurality of cameras. A great variety of other features, claims, combinations and arrangements are also detailed.
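The overlapping-field-of-view condition in this claim can be expressed as a simple geometric test; the rectangle model and shelf-plane coordinate frame below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class FieldOfView:
    """Footprint of one camera's view on the fixture, as an axis-aligned
    rectangle in shelf coordinates (purely illustrative geometry)."""
    x0: float
    y0: float
    x1: float
    y1: float

def overlaps(a: FieldOfView, b: FieldOfView) -> bool:
    """True when two camera footprints share any area, as the claim's
    overlapping-field-of-view arrangement requires."""
    return a.x0 < b.x1 and b.x0 < a.x1 and a.y0 < b.y1 and b.y0 < a.y1

cam1 = FieldOfView(0.0, 0.0, 1.2, 0.8)
cam2 = FieldOfView(1.0, 0.0, 2.2, 0.8)   # starts before cam1 ends -> overlap
print(overlaps(cam1, cam2))              # True
```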
Abstract:
The present technology concerns cell phones and other portable devices, and more particularly concerns use of such devices in connection with media content (electronic and physical) and with other systems (e.g., televisions, digital video recorders, and electronic program directories). One particular aspect of the technology concerns complementing primary content viewed on one screen (e.g., a television screen) with auxiliary content displayed on a second screen (e.g., a cell phone screen). Different auxiliary content can be paired with the primary content, depending on the profile of the user (e.g., age, location, etc.). Some embodiments make use of location information provided by the primary screen device. Other embodiments make use of content identification data provided by the primary screen device. A great number of other features and arrangements are also detailed.
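One way to pair auxiliary content with identified primary content by user profile is a keyed lookup, as in the sketch below; the table contents, profile rule, and URLs are hypothetical.

```python
from typing import Dict, Optional, Tuple

# Hypothetical pairing table: (primary content id, audience profile) -> auxiliary content.
AUX_CONTENT: Dict[Tuple[str, str], str] = {
    ("episode-42", "teen"):  "https://example.com/aux/episode-42/teen-quiz",
    ("episode-42", "adult"): "https://example.com/aux/episode-42/behind-the-scenes",
}

def pick_profile(age: int) -> str:
    """Illustrative profile rule based on a single attribute (viewer age)."""
    return "teen" if age < 18 else "adult"

def auxiliary_for(content_id: str, age: int) -> Optional[str]:
    """Return second-screen content paired with the identified primary content,
    keyed by the viewer's profile."""
    return AUX_CONTENT.get((content_id, pick_profile(age)))

print(auxiliary_for("episode-42", 15))   # teen-targeted auxiliary content
```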
Abstract:
Content objects are associated with metadata via content identifiers that are derived from sensed signals captured by requesting mobile devices. In response to a content-based query from a mobile device, content fingerprints and digital codes decoded from the sensed signals are issued to a network-based router system. This system determines identification priority, metadata responses associated with different forms of identification, and priority of metadata responses to the query.
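A minimal sketch of such a router's dispatch logic follows, assuming the query carries a decoded digital code and/or a content fingerprint; the priority rule (prefer the decoded code) and the metadata tables are assumptions made for illustration.

```python
from typing import Dict, List, Optional

# Illustrative metadata tables keyed by the two identification forms named above.
WATERMARK_METADATA: Dict[str, dict] = {"wm:1234": {"title": "Song A", "link": "https://example.com/a"}}
FINGERPRINT_METADATA: Dict[str, dict] = {"fp:abcd": {"title": "Song A (fingerprint match)"}}

def route(query: dict) -> List[dict]:
    """Resolve a mobile query carrying a decoded digital code and/or a content
    fingerprint, preferring the decoded code when both identify the content."""
    responses = []
    code: Optional[str] = query.get("digital_code")
    fingerprint: Optional[str] = query.get("fingerprint")
    if code and code in WATERMARK_METADATA:
        responses.append({"source": "digital_code", **WATERMARK_METADATA[code]})
    if fingerprint and fingerprint in FINGERPRINT_METADATA:
        responses.append({"source": "fingerprint", **FINGERPRINT_METADATA[fingerprint]})
    return responses  # already ordered by the priority applied above

print(route({"digital_code": "wm:1234", "fingerprint": "fp:abcd"}))
```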
Abstract:
The present disclosure relates generally to mobile devices and content recognition. One claim recites a mobile device comprising: a sensor; a display screen; memory storing instructions for execution by a processor; and one or more processors programmed with said instructions for: obtaining information from the sensor; selecting a user profile from among a plurality of different user profiles based on the information; and selecting—based on a selected user profile—an image or graphic for display on the display screen, the image or graphic being associated with the selected user profile. Other claims and combinations are provided as well.
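A compact sketch of sensor-driven profile and image selection appears below; the use of a location-style reading, the thresholds, and the image file names are all assumptions.

```python
from typing import Dict

# Hypothetical mapping from profile name to an image asset shown on the display screen.
PROFILE_IMAGES: Dict[str, str] = {"home": "home_wallpaper.png", "work": "work_wallpaper.png"}

def select_profile(sensor_reading: dict) -> str:
    """Pick a user profile from sensor-derived information; a GPS-style location
    reading is used here, but any sensor could drive the choice."""
    lat, lon = sensor_reading["lat"], sensor_reading["lon"]
    return "home" if abs(lat - 45.52) < 0.01 and abs(lon + 122.68) < 0.01 else "work"

def image_for_display(sensor_reading: dict) -> str:
    """Return the image or graphic associated with the selected profile."""
    return PROFILE_IMAGES[select_profile(sensor_reading)]

print(image_for_display({"lat": 45.521, "lon": -122.679}))
```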
Abstract:
The disclosure relates generally to geographic-based signal detection. One claim recites an apparatus comprising: an input for receiving a signal from a cell phone; an electronic processor for determining, based at least in part on the signal, whether the cell phone is physically located in a predetermined home area; and upon a condition of not being in the predetermined home area, communicating a machine-readable code detector to the cell phone for use as its primary machine-readable code detector to detect machine-readable code while outside of its predetermined home area. Of course, other claims and combinations are provided as well.
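The home-area test and conditional delivery of a detector might look like the sketch below; the coordinates, radius, haversine check, and detector identifier are illustrative assumptions.

```python
import math

HOME_CENTER = (45.52, -122.68)   # illustrative center of the "predetermined home area"
HOME_RADIUS_KM = 25.0

def km_between(a, b):
    """Rough great-circle distance (haversine) between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = math.sin((lat2 - lat1) / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def handle_location_signal(phone_location):
    """If the phone's signal places it outside the home area, return the code
    detector to communicate to it; otherwise no action is needed."""
    if km_between(phone_location, HOME_CENTER) > HOME_RADIUS_KM:
        return "roaming_code_detector_v2"   # placeholder name for the detector module
    return None

print(handle_location_signal((40.71, -74.01)))   # far from the home area -> detector name
```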
Abstract:
In one arrangement, a first device presents a display that is based on context data, derived from one or more of its sensors. This display is imaged by a camera in a second device. The second device uses context data from its own sensors to assess the information in the captured imagery, and makes a determination about the first device. In another arrangement, social network friend requests are automatically issued, or accepted, based on contextual similarity. In yet another arrangement, delivery of a message is triggered by a contextual circumstance other than (or in addition to) location. In still another arrangement, two or more devices automatically establish an ad hoc network (e.g., Bluetooth pairing) based on contextual parallels. In still another arrangement, historical context information is archived and used in transactions with other devices, e.g., in challenge-response authentication. A great number of other features and arrangements—many involving head-mounted displays—are also detailed.
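A small sketch of context-similarity scoring that could drive such automatic friend requests or ad hoc pairing follows; the chosen context features, thresholds, and weights are assumptions made for illustration.

```python
def context_similarity(ctx_a: dict, ctx_b: dict) -> float:
    """Score how closely two devices' sensed contexts agree (illustrative features)."""
    score = 0.0
    if abs(ctx_a["ambient_db"] - ctx_b["ambient_db"]) < 3:
        score += 1.0                       # similar ambient sound level
    if ctx_a["venue"] == ctx_b["venue"]:
        score += 1.0                       # same sensed venue / location label
    if abs(ctx_a["timestamp"] - ctx_b["timestamp"]) < 60:
        score += 1.0                       # contemporaneous readings
    return score / 3.0

def should_pair(ctx_a: dict, ctx_b: dict, threshold: float = 0.66) -> bool:
    """Trigger an automatic action (friend request, ad hoc pairing) when the
    contexts are sufficiently parallel."""
    return context_similarity(ctx_a, ctx_b) >= threshold

a = {"ambient_db": 62.0, "venue": "cafe-17", "timestamp": 1_700_000_000}
b = {"ambient_db": 63.5, "venue": "cafe-17", "timestamp": 1_700_000_030}
print(should_pair(a, b))    # True: same venue, similar sound level, same moment
```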