Abstract:
Methods and apparatus to monitor shoppers in a retail environment are disclosed herein. A disclosed example method involves collecting location information indicative of a measured path of travel of a person through a monitored environment. The example method also involves collecting person detection event information associated with a plurality of zones in the monitored environment. The person detection event information is indicative of detections of the person in each of the zones. In addition, the example method involves determining an adjusted path of travel of the person through the monitored environment based on the location information indicative of the measured path of travel and the person detection event information.
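A minimal Python sketch of one way such an adjustment could be realized: each measured path point is corrected toward the known center of any zone in which the person was detected at roughly the same time. The PathPoint and ZoneDetection structures, the time window, and the averaging rule are illustrative assumptions, not the disclosed method itself.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PathPoint:
    timestamp: float          # seconds since start of the visit
    x: float                  # measured coordinates (e.g., meters)
    y: float

@dataclass
class ZoneDetection:
    timestamp: float          # time the person was detected in the zone
    zone_x: float             # known center of the monitored zone
    zone_y: float

def adjust_path(measured: List[PathPoint],
                detections: List[ZoneDetection],
                window: float = 2.0) -> List[Tuple[float, float]]:
    """Pull each measured point toward the zone in which the person was
    detected at (approximately) the same time; otherwise keep it as-is."""
    adjusted = []
    for p in measured:
        match = next((d for d in detections
                      if abs(d.timestamp - p.timestamp) <= window), None)
        if match:
            # Simple correction: average the measured point and zone center.
            adjusted.append(((p.x + match.zone_x) / 2.0,
                             (p.y + match.zone_y) / 2.0))
        else:
            adjusted.append((p.x, p.y))
    return adjusted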
Abstract:
Methods and apparatus to identify media content using temporal signal characteristics are disclosed. An example method to identify media content includes receiving a reference signal corresponding to known media content and generating a reference signature based on the reference signal. The method further includes generating a plurality of sums based on a media signal to be identified and identifying one or more signal peaks based on the generated sums. The method then generates a second signature based on a plurality of normalized curve features, wherein each normalized curve feature corresponds to one of the signal peaks at the temporal location of that signal peak, and determines whether the media signal corresponds to the reference signal based on a comparison of the reference signature and the second signature.
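A minimal Python sketch of signature generation and comparison under assumed choices: sums are computed over sliding windows of the signal, peaks are located in those sums, and each normalized curve feature is the peak height divided by the mean of the sums. The window length, peak-separation rule, normalization, and matching tolerance are illustrative assumptions.

import numpy as np

def windowed_sums(signal: np.ndarray, window: int = 32) -> np.ndarray:
    """Sum the signal magnitude over sliding windows."""
    kernel = np.ones(window)
    return np.convolve(np.abs(signal), kernel, mode="same")

def find_signal_peaks(sums: np.ndarray, min_separation: int = 64) -> list:
    """Locate local maxima in the windowed sums, at least min_separation apart."""
    peaks, last = [], -min_separation
    for i in range(1, len(sums) - 1):
        if sums[i] > sums[i - 1] and sums[i] >= sums[i + 1] and i - last >= min_separation:
            peaks.append(i)
            last = i
    return peaks

def signature(signal: np.ndarray) -> np.ndarray:
    """Normalized curve feature at each peak: peak height divided by the
    mean of the sums, paired with the peak's temporal location."""
    sums = windowed_sums(signal)
    peaks = find_signal_peaks(sums)
    mean = float(sums.mean()) or 1.0
    return np.array([(i, sums[i] / mean) for i in peaks])

def matches(ref_sig: np.ndarray, sig: np.ndarray, tol: float = 0.1) -> bool:
    """Declare a match if the signatures have the same number of peaks and
    their normalized features agree within a tolerance."""
    if len(ref_sig) == 0 or len(ref_sig) != len(sig):
        return False
    return bool(np.all(np.abs(ref_sig[:, 1] - sig[:, 1]) <= tol))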
Abstract:
Methods and apparatus for identifying audio/video content using temporal characteristics of a signal are disclosed. The disclosed apparatus and methods receive a signature associated with audio/video content presented at a monitored site, wherein the signature is based on a plurality of time intervals associated with audio features of that content, and identify the audio/video content presented at the monitored site based on the received signature.
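A minimal Python sketch of an interval-based signature under assumed details: audio features are taken to be upward threshold crossings of the signal envelope, and the signature is the sequence of time intervals between successive features. The feature detector, tolerance, and matching rule are illustrative assumptions.

from typing import Optional
import numpy as np

def feature_times(audio: np.ndarray, rate: int, threshold: float = 0.5) -> np.ndarray:
    """Times (in seconds) at which the audio envelope crosses a threshold
    upward; a stand-in for whatever audio feature detector is used."""
    peak = float(np.max(np.abs(audio))) or 1.0
    env = np.abs(audio) / peak
    crossings = np.flatnonzero((env[1:] >= threshold) & (env[:-1] < threshold))
    return crossings / rate

def interval_signature(audio: np.ndarray, rate: int) -> np.ndarray:
    """Signature = the sequence of time intervals between successive features."""
    return np.diff(feature_times(audio, rate))

def identify(sig: np.ndarray, references: dict, tol: float = 0.05) -> Optional[str]:
    """Return the name of the reference whose interval signature matches, if any."""
    for name, ref in references.items():
        if len(ref) == len(sig) and np.all(np.abs(ref - sig) <= tol):
            return name
    return None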
Abstract:
A housing for mounting an identification tag to a shopping carrier is disclosed. An example housing includes a base for mounting the housing to the shopping carrier and a guard cover for protecting the identification tag. The guard cover includes a plurality of sidewalls extending from the base and a top wall extending between the sidewalls, the sidewalls and the top wall bounding an interior having an opening. The identification tag is mounted at least partially inside the interior of the guard cover.
Abstract:
Methods and apparatus for nonintrusive monitoring of web browser usage are disclosed. An example method for monitoring web browsing disclosed herein comprises obtaining a video signal from a video output of a device implementing a web browser, processing a video image obtained from the video signal to identify a region of the video image displaying at least a portion of the web browser, determining textual information displayed by the web browser in the identified region of the video image, and using the textual information to record usage of the web browser.
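A minimal Python sketch of this kind of monitoring, assuming OpenCV for frame capture and Tesseract (via pytesseract) for optical character recognition. The fixed BROWSER_REGION coordinates stand in for whatever mechanism locates the browser's portion of the video image and are illustrative only.

import cv2                      # OpenCV: video capture and image handling
import pytesseract              # Tesseract OCR wrapper

# Hypothetical region of the video image occupied by the browser
# (e.g., the address/title bar); coordinates are illustrative only.
BROWSER_REGION = (0, 0, 1280, 90)   # x, y, width, height

def browser_text_from_frame(frame) -> str:
    """Crop the browser region out of a captured video frame and OCR it."""
    x, y, w, h = BROWSER_REGION
    region = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    return pytesseract.image_to_string(gray)

def monitor(video_source=0, log_path="browser_usage.log"):
    """Read frames from the device's video output and append OCR'd text to a log."""
    cap = cv2.VideoCapture(video_source)
    with open(log_path, "a") as log:
        ok, frame = cap.read()
        while ok:
            text = browser_text_from_frame(frame).strip()
            if text:
                log.write(text + "\n")
            ok, frame = cap.read()
    cap.release()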
Abstract:
Methods and apparatus to monitor consumer activity are disclosed herein. In a disclosed example method, a first signal is received via a portable device from a first one of a plurality of stationary devices positioned throughout a monitored environment. A first stationary device location of the first stationary device is determined based on the first signal. Absolute location information indicative of a first portable device location of the portable device is determined based on the first stationary device location. Navigational sensing information is generated by the portable device. Relative location information is determined based on the absolute location information and the navigational sensing information, wherein the relative location information is indicative of a second portable device location of the portable device.
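A minimal Python sketch of the two-stage location determination: an absolute fix taken from the known location of the stationary device whose signal was received, followed by dead reckoning from the portable device's navigational sensing. The STATIONARY_DEVICE_LOCATIONS table and the simple displacement accumulation are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical table mapping each stationary device's identifier, as decoded
# from its signal, to its known installed location (meters).
STATIONARY_DEVICE_LOCATIONS = {
    "device_A": (0.0, 0.0),
    "device_B": (25.0, 10.0),
}

@dataclass
class StepVector:
    dx: float   # displacement since the previous sample, from the portable
    dy: float   # device's navigational sensors (e.g., accelerometer/compass)

def absolute_fix(device_id: str) -> Tuple[float, float]:
    """First portable-device location: taken as the location of the
    stationary device whose signal was received."""
    return STATIONARY_DEVICE_LOCATIONS[device_id]

def relative_locations(device_id: str, steps: List[StepVector]) -> List[Tuple[float, float]]:
    """Subsequent locations: accumulate navigational sensing displacements
    onto the absolute fix (simple dead reckoning)."""
    x, y = absolute_fix(device_id)
    path = [(x, y)]
    for s in steps:
        x, y = x + s.dx, y + s.dy
        path.append((x, y))
    return path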
Abstract:
Methods, apparatus and articles of manufacture for video comparison using color histograms are disclosed. An example method disclosed herein to compare a first video and a second video comprises obtaining a first color histogram corresponding to a sequence of frames of the first video, obtaining a second color histogram corresponding to a sequence of frames of the second video, determining a first comparison metric based on differences between bin values of the first color histogram and adjusted bin values of the second color histogram, and determining whether the first video and the second video match based on the first comparison metric.
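A minimal Python sketch of the comparison described above. Here the "adjusted" bin values are assumed to be the second histogram's bins scaled so both histograms have the same total count, and the comparison metric is the sum of absolute bin differences; both choices are illustrative assumptions.

import numpy as np

def sequence_histogram(frames: np.ndarray, bins: int = 16) -> np.ndarray:
    """Color histogram accumulated over a sequence of frames.
    frames: array of shape (num_frames, height, width, 3), values 0-255."""
    hist = np.zeros(bins * 3)
    for channel in range(3):
        h, _ = np.histogram(frames[..., channel], bins=bins, range=(0, 256))
        hist[channel * bins:(channel + 1) * bins] = h
    return hist

def comparison_metric(hist1: np.ndarray, hist2: np.ndarray) -> float:
    """Sum of absolute differences between the first histogram's bin values
    and the second histogram's bin values after adjusting (scaling) the
    second so both histograms have the same total count."""
    scale = hist1.sum() / hist2.sum() if hist2.sum() else 1.0
    adjusted = hist2 * scale
    return float(np.abs(hist1 - adjusted).sum())

def videos_match(frames1: np.ndarray, frames2: np.ndarray, threshold: float) -> bool:
    """Declare a match when the comparison metric is within a threshold."""
    return comparison_metric(sequence_histogram(frames1),
                             sequence_histogram(frames2)) <= threshold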
Abstract:
An example system disclosed herein comprises a web browser monitor to extract textual information from a captured image comprising at least a portion of content displayed by a web browser implemented by a monitored device, and determine color scheme information for a region of the captured image comprising at least a portion of the content displayed by the web browser. The example system also comprises a central processing facility to process the extracted textual information received from the web browser monitor, and compare the color scheme information received from the web browser monitor to reference color schemes associated with reference web pages to determine web sites accessed using the web browser.
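A minimal Python sketch of the color-scheme comparison, assuming the color scheme is represented as a coarse, normalized joint histogram of quantized RGB values and that the closest reference (smallest L1 distance) identifies the web site. Both the descriptor and the matching rule are illustrative assumptions.

import numpy as np

def color_scheme(region: np.ndarray, bins: int = 4) -> np.ndarray:
    """Coarse color-scheme descriptor of an image region: a normalized joint
    histogram over quantized R, G, and B values.
    region: array of shape (height, width, 3), values 0-255."""
    quantized = (region.astype(int) // (256 // bins)).reshape(-1, 3)
    index = (quantized[:, 0] * bins + quantized[:, 1]) * bins + quantized[:, 2]
    hist = np.bincount(index, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def best_matching_site(scheme: np.ndarray, references: dict) -> str:
    """Return the reference web page whose stored color scheme is closest
    (smallest L1 distance) to the observed scheme."""
    return min(references, key=lambda name: np.abs(references[name] - scheme).sum())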
Abstract:
Methods, articles of manufacture, and apparatus to count people in an image are disclosed. An example method includes estimating, based on a location of a face of a person in a first image frame, a portion of the first image frame that corresponds to a body region of the person; and using image data corresponding to the portion to determine whether the person is present in a second image frame in which the face of the person is undetected.
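A minimal Python sketch of the idea under assumed heuristics: the body region is taken to be a box below and around the detected face, and presence in the second frame is decided by comparing an intensity histogram of that region across the two frames. The region geometry, descriptor, and threshold are all illustrative assumptions.

import numpy as np

def body_region_from_face(face_box, frame_shape):
    """Estimate the portion of the image occupied by the person's body,
    taken here (as an illustrative heuristic) to be a box below the face,
    twice as wide and three times as tall as the face box."""
    x, y, w, h = face_box
    height, width = frame_shape[:2]
    bx = max(0, x - w // 2)
    by = min(height, y + h)
    bw = min(width - bx, 2 * w)
    bh = min(height - by, 3 * h)
    return bx, by, bw, bh

def region_descriptor(frame, box, bins=16):
    """Normalized intensity histogram of the region."""
    x, y, w, h = box
    patch = frame[y:y + h, x:x + w]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def still_present(frame1, face_box, frame2, threshold=0.25) -> bool:
    """Decide whether the person is still present in frame2 (where the face
    was not detected) by comparing body-region appearance across frames."""
    box = body_region_from_face(face_box, frame1.shape)
    d1 = region_descriptor(frame1, box)
    d2 = region_descriptor(frame2, box)
    return float(np.abs(d1 - d2).sum()) <= threshold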