Abstract:
This disclosure provides a video-based method and system for busyness detection and notification. Specifically, according to an exemplary embodiment, multiple overhead image capturing devices are used to acquire video including multiple non-overlapping ROIs (regions of interest) and the video is processed to count the number of people included within the ROIs. A busyness metric is calculated based on the number of people counted and notification of the busyness metric or changes in the busyness metric is communicated to appropriate personnel, e.g., a manager of a retail store.
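The counting-and-notification pipeline described above can be sketched in a few lines. This is an illustrative sketch only, assuming per-ROI people counts have already been produced by the video analysis; the function names and the notification threshold are not from the disclosure.

```python
# Hypothetical sketch of the busyness-metric step: aggregate per-ROI people
# counts into one metric and decide whether a notification is warranted.
# The threshold value is an illustrative assumption.

def busyness_metric(roi_counts):
    """Aggregate people counts from the non-overlapping ROIs into one metric."""
    return sum(roi_counts)

def should_notify(previous, current, threshold=5):
    """Notify when the busyness metric changes by at least the threshold."""
    return abs(current - previous) >= threshold

counts = [3, 2, 4]                      # people counted in each ROI
metric = busyness_metric(counts)        # 9
notify = should_notify(previous=2, current=metric)
```

A deployed system would of course use richer metrics (e.g., weighted by ROI type or time of day), but the aggregate-then-threshold structure is the same.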
Abstract:
A system and method to identify the leader of a group in a retail, restaurant, or queue-type setting (or virtually any setting) through recognition of payment gestures. The method comprises acquiring initial video of a group, developing feature models for members of the group, acquiring video at a payment location, identifying a payment gesture in the acquired video, defining the person making the gesture as the leader of the group, and forwarding/backtracking through the video to identify timings associated with leader events (e.g., entering, exiting, ordering, etc.).
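The forward/backtracking step can be sketched as a lookup over a tracked-event log once the payment gesture has identified the leader. The event-record fields below are assumptions for illustration, not part of the claimed method.

```python
# Illustrative sketch: given a log of tracked events per person, collect the
# timestamps of the leader's events (enter, order, pay, exit, ...). The
# record layout is hypothetical.

def leader_events(events, leader_id):
    """Return a mapping from event type to timestamp for the leader."""
    return {e["type"]: e["time"] for e in events if e["person"] == leader_id}

log = [
    {"person": 1, "type": "enter", "time": 10.0},
    {"person": 2, "type": "enter", "time": 10.5},
    {"person": 1, "type": "order", "time": 42.0},
    {"person": 1, "type": "pay",   "time": 300.0},
]
timings = leader_events(log, leader_id=1)
# timings associates "enter", "order", and "pay" with the leader's timestamps
```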
Abstract:
A method for printing on a continuous print medium having a plurality of pages includes identifying a location of a feature in image data that are generated from a portion of a first page in the print medium. The method includes modifying a time of operation of a marking unit to form an image on each page in the plurality of pages at a predetermined distance from the edge of each page with reference to the location of the identified feature in the image data. In one configuration, the method enables precise placement of printed images over preprinted forms.
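The timing modification reduces to a simple displacement-over-velocity computation. The sketch below assumes a constant web speed and positions measured in millimeters; all values are made-up examples, not figures from the disclosure.

```python
# Hedged sketch of the timing-adjustment idea: shift the marking unit's
# firing time so the image lands at the predetermined distance from the
# page edge, given where the identified feature actually appeared.

def firing_time_offset(feature_pos_mm, expected_pos_mm, web_speed_mm_s):
    """Time shift (s) needed to align printed images with the preprinted form."""
    displacement = feature_pos_mm - expected_pos_mm
    return displacement / web_speed_mm_s

dt = firing_time_offset(feature_pos_mm=12.5, expected_pos_mm=10.0,
                        web_speed_mm_s=500.0)
# a positive dt delays marking; a negative dt advances it
```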
Abstract:
A method analyzes image data of a test pattern printed on an image receiving member by a printer. The method includes identifying a process direction position for each row of dashes in a test pattern printed on an image receiving member, identifying a center of each dash in a cross-process direction, and identifying an inkjet ejector that formed each dash in the row of dashes. These data are used to identify a process direction position for each printhead, a cross-process displacement for each column of printheads, and a stitch displacement in the cross-process direction between neighboring printheads in a print bar unit that print the same color of ink. An actuator can be operated with reference to the identified process direction positions, cross-process displacements, and the identified stitch displacements to move at least some of the printheads in the printer.
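One of the analysis steps, finding the cross-process center of each dash, can be sketched as locating runs of dark pixels in a one-dimensional intensity profile of a scanned row. The threshold and the profile representation are assumptions for illustration.

```python
# Illustrative sketch: return the center index of each contiguous run of
# below-threshold (dark) pixels, i.e. each dash, in a 1-D scan profile.

def dash_centers(profile, threshold=128):
    """Centers of contiguous dark runs in a row of scanned pixel values."""
    centers, start = [], None
    for i, value in enumerate(profile):
        if value < threshold and start is None:
            start = i                                  # dash begins
        elif value >= threshold and start is not None:
            centers.append((start + i - 1) / 2.0)      # dash ended at i-1
            start = None
    if start is not None:                              # dash runs to the edge
        centers.append((start + len(profile) - 1) / 2.0)
    return centers

row = [255, 0, 0, 0, 255, 255, 0, 0, 255]
centers = dash_centers(row)
# two dashes: indices 1-3 and 6-7, so centers 2.0 and 6.5
```

Mapping each center back to the ejector that formed the dash then follows from the known ejector pitch and the printhead's nominal position.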
Abstract:
A method uses a sparse test pattern to identify a spatial relationship between a printhead and an image receiving surface in a printer. The method includes operating a plurality of ejectors in the printhead to form printed marks on the image receiving surface, generating image data of the test pattern, and applying a predetermined disjoint template to the image data to identify a location of the printed marks. The disjoint template matching process improves the accuracy of identifying the printed marks in noisy image data and for sparse test patterns.
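The disjoint-template idea can be sketched in one dimension: the template scores only a set of non-contiguous positions where marks are expected, ignoring the gap pixels between them, which is what makes it robust to noise between sparse marks. The template shape and scoring rule below are assumptions.

```python
# Hedged sketch of disjoint template matching: slide a template that samples
# only the expected mark offsets, and pick the shift whose sampled pixels
# best match a dark mark value. Gap pixels contribute nothing to the score.

def disjoint_match(signal, mark_offsets, mark_value=0):
    """Return the shift at which the disjoint template best fits the marks."""
    best_shift, best_score = None, None
    max_shift = len(signal) - max(mark_offsets) - 1
    for s in range(max_shift + 1):
        # Lower score = sampled positions closer to the expected mark value.
        score = sum(abs(signal[s + o] - mark_value) for o in mark_offsets)
        if best_score is None or score < best_score:
            best_shift, best_score = s, score
    return best_shift

scan = [255, 250, 10, 255, 5, 255, 255]
shift = disjoint_match(scan, mark_offsets=[0, 2])   # two marks, 2 px apart
# the template locks onto the dark pixels at indices 2 and 4
```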
Abstract:
A computer vision system (100) operates to monitor an environment (e.g., such as a restaurant, store or other retail establishment) including a resource located therein (e.g., such as a restroom, a dining table, a drink, condiment or supply dispenser, a trash receptacle or a tray collection rack). The system includes: an image source or camera (104) that supplies image data (130) representative of at least a portion of the environment monitored by the system, the portion including the resource therein; and an event detection device (102) including a data processor (112) and operative to detect an event involving the resource. Suitably, the event detection device is arranged to: (i) be selectively configurable by a user to define the event involving the resource; (ii) receive the image data supplied by the image source; (iii) analyze the received image data to detect the defined event; and (iv) output a notification in response to detecting the defined event.
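Steps (i) through (iv) amount to a configurable detect-and-notify loop: the user supplies the event definition, and the device analyzes each batch of image data against it. The sketch below reduces this to a user-supplied predicate; all names and the example event are illustrative assumptions.

```python
# Hypothetical sketch of the configurable event-detection loop: a
# user-defined predicate stands in for the configured event, and a callback
# stands in for the notification output.

def monitor(samples, event_predicate, notify):
    """Analyze each image-data sample; notify when the defined event occurs."""
    for sample in samples:
        if event_predicate(sample):
            notify(sample)

# Example configuration: notify when a trash receptacle is over 90% full.
alerts = []
monitor(
    samples=[{"fill": 0.4}, {"fill": 0.95}],
    event_predicate=lambda s: s["fill"] > 0.9,
    notify=lambda s: alerts.append(s),
)
```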
Abstract:
A system and method for automatic classification and detection of a payment gesture are disclosed. The method includes obtaining a video stream from a camera placed above at least one region of interest, the region of interest classifying the payment gesture. A background image is generated from the obtained video stream. Motion is estimated in at least two consecutive frames from the video stream. A representation is created from the background image and the estimated motion occurring within the at least one region of interest. The payment gesture is detected based on the representation.
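The background-plus-motion representation can be sketched with a running-average background model and frame differencing restricted to the payment region of interest. This is a minimal sketch assuming NumPy-array frames; the energy threshold below stands in for the actual gesture classifier and is an assumption.

```python
import numpy as np

# Illustrative sketch: maintain a running-mean background and measure
# frame-difference motion energy inside the payment ROI. A simple threshold
# stands in for the gesture classifier described in the abstract.

def update_background(bg, frame, alpha=0.05):
    """Running-average background model updated with the current frame."""
    return (1 - alpha) * bg + alpha * frame

def motion_energy(prev, curr, roi):
    """Sum of absolute frame differences inside roi = (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = roi
    return float(np.abs(curr[r0:r1, c0:c1] - prev[r0:r1, c0:c1]).sum())

prev = np.zeros((4, 4))
curr = np.zeros((4, 4))
curr[1, 1] = 10.0                       # motion inside the ROI
energy = motion_energy(prev, curr, roi=(0, 2, 0, 2))
gesture = energy > 5.0                  # threshold is an assumption
```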
Abstract:
A system for delivering one of a good and service to a customer in a retail environment includes a computer located at an order station. The computer is configured to receive an order for the one good and service. The system includes a first image capture device in communication with the computer. The first image capture device captures a first image of a customer ordering the one good and service in response to the order being submitted. The system further includes a wearable computer peripheral device configured to acquire the first image from the first image capture device and electronically display the first image to a user tasked with delivering the one good and service while carrying the wearable computer peripheral device. In this manner, an identity of the customer can be compared against the first image upon a delivery of the one good and service.