Abstract:
Disclosed are methods and systems to automatically detect, and optionally classify, when a recipient receives goods in surveillance video. According to one exemplary embodiment, a computer-implemented method determines that a customer has received goods associated with a retail environment.
Abstract:
What is disclosed is a system and method for enhancing the spatio-temporal resolution of a depth data stream. In one embodiment, time-sequential reflectance frames and time-sequential depth frames of a scene are received. If the temporal resolution of the reflectance frames is greater than that of the depth frames, a new depth frame is generated based on correlations determined between motion patterns in the sequence of reflectance frames and the sequence of depth frames. The new depth frame is inserted into the sequence of depth frames at a selected time point. If the spatial resolution of the reflectance frames is greater than that of the depth frames, the spatial resolution of a selected depth frame is enhanced by generating new pixel depth values which are added to the selected depth frame. The spatially enhanced depth frame is then inserted back into the sequence of depth frames.
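As an informal illustration of the temporal-enhancement step (not the claimed method itself), the following Python sketch synthesizes an intermediate depth frame by estimating dense optical flow between the two reflectance frames that bracket the missing depth sample and warping the nearest depth frame along that motion; the use of OpenCV/NumPy, the Farneback parameters, and the warping scheme are all assumptions.

    import cv2
    import numpy as np

    def interpolate_depth_frame(refl_prev, refl_next, depth_prev, alpha=0.5):
        # Motion pattern between the two grayscale reflectance frames that
        # bracket the missing depth sample (dense Farneback optical flow).
        flow = cv2.calcOpticalFlowFarneback(refl_prev, refl_next, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = depth_prev.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Backward-warp the previous depth frame part-way along the motion to
        # approximate a new depth frame at the intermediate time point.
        map_x = (grid_x - alpha * flow[..., 0]).astype(np.float32)
        map_y = (grid_y - alpha * flow[..., 1]).astype(np.float32)
        return cv2.remap(depth_prev.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)

The synthesized frame would then be inserted into the depth sequence at the selected time point (the fraction alpha of the interval between the bracketing reflectance frames).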
Abstract:
When monitoring a workspace to determine whether scheduled tasks or chores are completed according to a predetermined schedule, a video monitoring system monitors a region of interest (ROI) to identify employee-generated signals representing completion of a scheduled task. An employee makes a mark or gesture in the ROI monitored by the video monitoring system. The system analyzes the pixels in each captured frame of the ROI to identify the employee signal, maps the signal to a corresponding scheduled task, updates the task as having been completed upon receipt of the signal, and alerts a manager of the facility as to whether the task has been completed.
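A minimal Python sketch of the ROI analysis, assuming OpenCV frames, a fixed ROI rectangle, and a hypothetical task table; the change threshold and the idea of treating a large pixel change as the employee signal are illustrative, not the patented classifier.

    import cv2
    import numpy as np
    from datetime import datetime

    tasks = {"sink_area": {"task": "Sanitize sink", "done": False}}   # hypothetical schedule

    def roi_signal_present(prev_frame, curr_frame, roi, threshold=0.15):
        # Return True if enough pixels changed inside the ROI to count as a mark or gesture.
        x, y, w, h = roi
        prev_patch = cv2.cvtColor(prev_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
        curr_patch = cv2.cvtColor(curr_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
        changed = cv2.absdiff(prev_patch, curr_patch) > 25
        return np.count_nonzero(changed) / changed.size > threshold

    def mark_complete(roi_name):
        # Map the detected signal to its scheduled task and record completion.
        tasks[roi_name]["done"] = True
        tasks[roi_name]["completed_at"] = datetime.now()
        print(f"ALERT to manager: {tasks[roi_name]['task']} completed")   # stand-in for a real alert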
Abstract:
A method for printing on a continuous print medium having a plurality of pages includes identifying a location of a feature in image data generated from a portion of a first page of the print medium. The method includes modifying a time of operation of a marking unit, with reference to the location of the identified feature in the image data, to form an image on each page in the plurality of pages at a predetermined distance from an edge of each page. In one configuration, the method enables precise placement of printed images over preprinted forms.
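The timing adjustment reduces to simple arithmetic once the feature position is known. A hedged Python sketch, with hypothetical names, geometry, and numbers: the marking unit starts after the web has travelled from the sensor to the marker plus the offsets implied by the detected feature and the desired image placement.

    def marking_start_time(feature_detect_time_s, feature_to_edge_mm,
                           desired_offset_mm, sensor_to_marker_mm, web_speed_mm_s):
        # Distance the web must travel after the feature passes the image sensor
        # before the marking unit should begin forming the image on the page.
        travel_mm = sensor_to_marker_mm + feature_to_edge_mm + desired_offset_mm
        return feature_detect_time_s + travel_mm / web_speed_mm_s

    # Feature sensed at t = 2.000 s, page edge 12 mm past the feature, image to start
    # 25 mm from that edge, marker 300 mm downstream, web moving at 500 mm/s:
    print(marking_start_time(2.000, 12.0, 25.0, 300.0, 500.0))   # -> 2.674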
Abstract:
A system and method of monitoring a region of interest comprise obtaining visual data comprising image frames of the region of interest over a period of time, analyzing individual subjects within the region of interest, the analyzing including at least one of tracking movement of the individual subjects over time within the region of interest or extracting an appearance attribute of the individual subjects, and defining a group to include individual subjects having at least one of similar movement profiles or similar appearance attributes. Tracking movement includes detecting at least one of a trajectory of an individual subject within the region of interest, a dwell of an individual subject in at least one location within the region of interest, or an entrance or exit location within the region of interest.
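As a rough sketch of the grouping step (greedy thresholding rather than whatever clustering the method actually uses), each tracked subject is reduced to a feature vector such as entry point, exit point, dwell time, and a mean appearance colour, and subjects with nearby vectors are placed in the same group; all names and the distance threshold are assumptions.

    import numpy as np

    def group_subjects(features, distance_threshold=1.0):
        # Greedily group subjects whose (normalized) movement/appearance
        # feature vectors lie within the threshold of a group's first member.
        groups = []
        for idx, feat in enumerate(features):
            for group in groups:
                if np.linalg.norm(feat - features[group[0]]) < distance_threshold:
                    group.append(idx)
                    break
            else:
                groups.append([idx])
        return groups

    # Example feature per subject: [entry_x, entry_y, exit_x, exit_y, dwell_norm, hue_mean]
    subjects = np.array([[0.1, 0.9, 0.8, 0.2, 0.30, 0.4],
                         [0.1, 0.8, 0.8, 0.3, 0.28, 0.4],
                         [0.9, 0.1, 0.2, 0.9, 0.05, 0.7]])
    print(group_subjects(subjects))   # -> [[0, 1], [2]]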
Abstract:
A system for delivering one of a good and a service to a customer in a retail environment includes a computer located at an order station. The computer is configured to receive an order for the good or service. The system includes a first image capture device in communication with the computer. The first image capture device captures a first image of the customer ordering the good or service in response to the order being submitted. The system further includes a wearable computer peripheral device configured to acquire the first image from the first image capture device and electronically display the first image to a user tasked with delivering the good or service while carrying the wearable computer peripheral device. In this manner, an identity of the customer can be compared against the first image upon delivery of the good or service.
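A minimal Python sketch of the order-to-wearable flow, assuming an OpenCV-accessible camera at the order station and treating the link to the wearable display as a simple in-process queue; the order fields and function names are hypothetical.

    import queue
    import cv2

    orders_to_wearable = queue.Queue()   # stand-in for the link to the wearable device

    def submit_order(order_id, items, camera_index=0):
        # Capture the first image of the customer at the moment the order is submitted.
        cam = cv2.VideoCapture(camera_index)
        ok, customer_image = cam.read()
        cam.release()
        if not ok:
            raise RuntimeError("image capture failed")
        # Associate the image with the order and push it toward the wearable display,
        # where the deliverer can compare it against the person collecting the order.
        orders_to_wearable.put({"order_id": order_id,
                                "items": items,
                                "customer_image": customer_image})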
Abstract:
A system and method for automatic classification and detection of a payment gesture are disclosed. The method includes obtaining a video stream from a camera placed above at least one region of interest, where each region of interest is associated with a class of payment gesture. A background image is generated from the obtained video stream. Motion is estimated in at least two consecutive frames from the video stream. A representation is created from the background image and the estimated motion occurring within the at least one region of interest. The payment gesture is detected based on the representation.
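A hedged Python sketch of the pipeline described above: a running-average background image, frame-to-frame motion estimation, and a per-ROI activity representation that is thresholded to detect the gesture; the thresholds, learning rate, and the simple frame-difference stand-in for motion estimation are assumptions.

    import cv2
    import numpy as np

    class PaymentGestureDetector:
        def __init__(self, roi, threshold=0.2):
            self.roi = roi                  # (x, y, w, h) over, e.g., the card reader
            self.threshold = threshold
            self.background = None
            self.prev_gray = None

        def update(self, frame):
            x, y, w, h = self.roi
            gray = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY).astype(np.float32)
            if self.background is None:
                self.background, self.prev_gray = gray.copy(), gray
                return False
            # Background image maintained as a slow running average of the stream.
            cv2.accumulateWeighted(gray, self.background, 0.01)
            foreground = cv2.absdiff(gray, self.background)
            motion = cv2.absdiff(gray, self.prev_gray)   # motion across consecutive frames
            self.prev_gray = gray
            # Representation: fraction of ROI pixels that are both foreground and moving.
            score = float(np.mean((foreground > 20) & (motion > 10)))
            return score > self.threshold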
Abstract:
A method analyzes image data of a test pattern printed on an image receiving member by a printer. The method includes identifying a process direction position for each row of dashes in the test pattern, identifying a center of each dash in a cross-process direction, and identifying the inkjet ejector that formed each dash in the row of dashes. These data are used to identify a process direction position for each printhead, a cross-process displacement for each column of printheads, and a stitch displacement in the cross-process direction between neighboring printheads in a print bar unit that print a same color of ink. An actuator can be operated with reference to the identified process direction positions, cross-process displacements, and stitch displacements to move at least some of the printheads in the printer.
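For the dash-measurement step only (not the full registration procedure), a Python sketch using OpenCV connected components to locate each dash centroid and compare cross-process dash centers against nominal ejector positions; the area filter and nominal positions are hypothetical.

    import cv2
    import numpy as np

    def dash_centroids(test_pattern_gray):
        # Locate every printed dash and return its (row, col) centroid in pixels.
        _, binary = cv2.threshold(test_pattern_gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
        # Label 0 is the background; drop specks smaller than a few pixels.
        return [(c[1], c[0]) for c, s in zip(centroids[1:], stats[1:])
                if s[cv2.CC_STAT_AREA] > 4]

    def cross_process_displacements(centroids, nominal_centers_px):
        # Measured minus nominal dash centers in the cross-process (column) direction,
        # one value per ejector; a systematic offset indicates printhead displacement.
        measured = sorted(c[1] for c in centroids)
        return [m - n for m, n in zip(measured, nominal_centers_px)]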