Abstract:
A system and method for adaptively defining a region of interest for motion analysis in digital video is disclosed. In one embodiment, a method of detecting a gesture is disclosed which comprises receiving a video sequence comprising a plurality of frames, determining a region of interest which excludes a portion of the frame, and detecting the gesture within the region of interest.
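As an illustration only (not the claimed implementation), the crop-then-detect flow described in this abstract can be sketched in a few lines of Python. The ROI bounds, the toy frame-difference detector, and its threshold are all assumptions for the example:

```python
import numpy as np

def detect_in_roi(frames, roi, detect):
    """Run a detector only inside a region of interest.

    frames: list of 2-D numpy arrays (grayscale frames)
    roi: (top, bottom, left, right) bounds; everything else is excluded
    detect: callable applied to the cropped frame sequence
    """
    top, bottom, left, right = roi
    cropped = [f[top:bottom, left:right] for f in frames]
    return detect(cropped)

def simple_motion_detector(frames, threshold=10.0):
    """Toy stand-in for a gesture detector: flag a detection when the
    mean absolute frame difference exceeds a threshold."""
    diffs = [np.abs(b.astype(float) - a.astype(float)).mean()
             for a, b in zip(frames, frames[1:])]
    return max(diffs, default=0.0) > threshold
```

Because the detector never sees pixels outside the ROI, motion in the excluded portion of the frame cannot trigger a detection.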
Abstract:
Systems and methods for estimating the centers of moving objects in a video sequence are disclosed. One embodiment is a method of defining one or more motion centers in a video sequence, the method comprising receiving a video sequence comprising a plurality of frames, receiving a motion history image for each of a subset of the plurality of frames based on the video sequence, identifying, through use of the motion history image, one or more data segments having a first orientation, wherein each data segment having the first orientation has a start location and a length, identifying one or more data segments having a second orientation, wherein each element of a data segment having the second orientation is associated with a data segment having the first orientation, and defining a corresponding motion center for one or more of the identified data segments having the second orientation.
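One way to read the two orientations in this abstract is as horizontal runs of active motion-history pixels (start, length) linked vertically into groups, with a motion center per group. A minimal Python sketch under that interpretation (the threshold, the run-linking rule, and the centroid definition are all assumptions, not the claimed method):

```python
import numpy as np

def motion_centers(mhi, threshold=0):
    """Sketch: find horizontal runs in a motion history image, link
    vertically overlapping runs in consecutive rows into groups, and
    return the centroid of each group as a motion center."""
    active = mhi > threshold
    # Pass 1 (first orientation): per-row runs as (row, start, length).
    runs = []
    for r, row in enumerate(active):
        c = 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                runs.append((r, start, c - start))
            else:
                c += 1
    # Pass 2 (second orientation): link runs that overlap the last run
    # of a group in the previous row.
    groups = []
    for run in runs:
        r, s, l = run
        for g in groups:
            pr, ps, pl = g[-1]
            if pr == r - 1 and s < ps + pl and ps < s + l:
                g.append(run)
                break
        else:
            groups.append([run])
    # One motion center (mean row, mean column) per group.
    centers = []
    for g in groups:
        ys = [r for r, s, l in g for _ in range(l)]
        xs = [s + i for r, s, l in g for i in range(l)]
        centers.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centers
```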
Abstract:
The overlay human interactive proof system (“OHIPS”) and techniques described herein operate in conjunction with any known or later developed computer-based applications or services to provide secure access to resources by reliably differentiating between human and non-human users. Humans are generally better than automated programs at differentiating misaligned characters or objects from correctly aligned ones. As such, the OHIPS splits an image including one or more visual objects into two or more partial images to form a human interactive proof (“HIP”). The partial images may also be further split into groups of sub-partial images, and/or the partial images (or the sub-partial images) may be moved, so that at any given alignment position, a user can recognize only some visual objects. A user is instructed to reassemble the partial images at one or more predetermined alignment positions using a GUI, and the user is asked to identify information regarding one or more visible objects.
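Purely as an illustration of the split-and-misalign idea (not the claimed system), an image can be divided into two complementary partial images via a random mask, with one part shifted so the original is only recognizable at the correct alignment offset. The mask scheme, offset handling, and function names below are all assumptions:

```python
import numpy as np

def split_overlay(image, offset):
    """Split an image into two partial images that only reveal the
    whole image when re-aligned at the given horizontal offset."""
    rng = np.random.default_rng(0)
    mask = rng.integers(0, 2, size=image.shape, dtype=bool)
    part_a = np.where(mask, image, 0)          # pixels kept in part A
    part_b_aligned = np.where(mask, 0, image)  # complementary pixels
    part_b = np.roll(part_b_aligned, offset, axis=1)  # misalign part B
    return part_a, part_b

def reassemble(part_a, part_b, offset):
    """Undo the misalignment and recombine the complementary parts."""
    return part_a + np.roll(part_b, -offset, axis=1)
```

Only at the correct offset do the complementary pixel sets line up to reconstruct the original image; at any other offset the overlay remains scrambled.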
Abstract:
An adaptive temporal noise reduction method that adaptively combines motion adaptive filtering results and motion compensated results to reduce Gaussian additive noise in video sequences is described herein. The system determines the motion detection and motion compensation results from the current frame and the filtered previous frame. Measurements on the video are used to determine a probabilistic measure of noise that is employed to adaptively combine the motion detection and motion compensation results.
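A much-simplified, per-pixel Python sketch of this kind of adaptive combination follows. The Gaussian weighting, the specific blend of the motion-adaptive and motion-compensated terms, and the sigma value are illustrative assumptions, not the described system:

```python
import numpy as np

def temporal_denoise(cur, prev_filtered, mc_prev, noise_sigma=5.0):
    """Blend a motion-adaptive average with a motion-compensated
    average, weighted per pixel by how likely each difference is to
    be explained by Gaussian noise (sigma is an assumed noise level)."""
    cur = np.asarray(cur, float)
    prev_filtered = np.asarray(prev_filtered, float)
    mc_prev = np.asarray(mc_prev, float)
    # Small frame differences -> likely noise -> trust temporal averaging.
    diff = cur - prev_filtered
    w = np.exp(-diff**2 / (2 * noise_sigma**2))
    maf = w * (cur + prev_filtered) / 2 + (1 - w) * cur  # motion-adaptive
    # Confidence that motion compensation aligned correctly.
    mc_err = np.abs(cur - mc_prev)
    a = np.exp(-mc_err**2 / (2 * noise_sigma**2))
    mcf = (cur + mc_prev) / 2                            # motion-compensated
    return a * mcf + (1 - a) * maf
```

On a static scene the filter passes the signal through unchanged; when the previous filtered frame is close to the clean signal, additive noise in the current frame is attenuated.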
Abstract:
A color quantization method in RGB color space is disclosed which preserves the high precision of luminance information in an original high-precision RGB image signal when the signal is quantized to a lower-precision (lower bit depth) RGB signal. The method can be used to convert the original RGB signal to arbitrary quantization levels in RGB space.
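To make the luminance-preservation goal concrete, here is an illustrative (not the claimed) approach in Python: quantize each channel to the lower bit depth, but choose floor or ceiling per channel so the quantized pixel's luminance stays as close as possible to the original. The BT.601 luma weights and the brute-force search are assumptions for the sketch:

```python
import numpy as np

# BT.601 luma weights (a standard choice, not taken from the abstract).
LUMA = np.array([0.299, 0.587, 0.114])

def quantize_preserving_luma(rgb, bits_in=10, bits_out=8):
    """Quantize a high-precision RGB pixel to a lower bit depth,
    trying all floor/ceil combinations per channel and keeping the
    one whose luminance is closest to the original's."""
    scale = (2**bits_out - 1) / (2**bits_in - 1)
    target_luma = float(LUMA @ rgb) * scale
    lo, hi = np.floor(rgb * scale), np.ceil(rgb * scale)
    best, best_err = None, None
    for m in range(8):  # 2^3 floor/ceil choices across R, G, B
        cand = np.where([(m >> i) & 1 for i in range(3)], hi, lo)
        err = abs(float(LUMA @ cand) - target_luma)
        if best is None or err < best_err:
            best, best_err = cand, err
    return best.astype(int)
```

Per-channel rounding is one of the eight candidates, so this choice never has a larger luminance error than naive rounding.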
Abstract:
A plurality of modules interact to form an adaptive network in which each module transmits and receives data signals indicative of the proximity of objects. A central computer accumulates the data produced, or received and relayed, by each module, analyzes the proximity responses, and transmits control signals through the adaptive network to selectively-addressed modules in response to its analyses of the data accumulated from the modules forming the adaptive network. Interactions of local processors in modules that sense an intrusion determine the location and path of movement of the intruding object and control cameras in the modules to retrieve video images of the intruding object. Multiple operational frequencies in adaptive networks permit expansion by additional networks that each operate at a separate radio frequency to avoid overlapping interaction. Additional modules may be introduced into operating networks without knowing the operating frequency at the time of introduction. Remote modules operating as leaf nodes of the adaptive network actively adapt to changed network conditions upon waking from a power-conserving sleep mode. New programs are distributed to all or selected modules under control of the base station.
Abstract:
A method is presented for realizing standard colors on a display that does not have the standard primaries. The method enables a display with non-standard primaries to show colors as if it had standard SMPTE-C primaries. Further, a method is presented to show the same standard colors at different locations of the display. Measurements of the physical parameters of the display's color primaries are obtained and used in a pre-calibrated display, or in a display that can be calibrated by the user or a calibrator with a color-measuring tool such as a colorimeter or spectroradiometer.
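The core linear-algebra step behind this kind of primary correction can be sketched as follows (an illustration, not the claimed method): given a measured RGB-to-XYZ matrix for the display and the RGB-to-XYZ matrix of the standard primaries, a 3x3 correction matrix maps standard-primaries RGB into the display's native RGB so that both produce the same XYZ stimulus. The example matrices below are made up:

```python
import numpy as np

def correction_matrix(measured_rgb_to_xyz, standard_rgb_to_xyz):
    """Map colors specified in the standard primaries (e.g. SMPTE-C)
    into the display's native primaries: convert standard RGB to XYZ,
    then XYZ to the display's RGB."""
    return np.linalg.inv(measured_rgb_to_xyz) @ standard_rgb_to_xyz
```

Applying the correction before display means the non-standard display reproduces the XYZ values the standard display would have produced (within gamut and measurement accuracy).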
Abstract:
This invention presents a YUV to RGB conversion method which preserves the high precision of luminance information in an original YUV image signal when converting it to an RGB signal. The method can be used to convert the original YUV signal to arbitrary quantization levels in RGB space. In addition, this invention presents methods of pre-quantization and re-quantization to compensate for conventional YUV to RGB color conversion.
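For context, the conventional conversion this abstract builds on can be sketched in float precision, with quantization deferred to a final step so the luma precision is kept through the matrix math. The BT.601 full-range coefficients and the helper names are assumptions, not the invention's method:

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """Conventional BT.601 full-range YUV -> RGB conversion, kept in
    float so luminance precision survives until quantization."""
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip([r, g, b], 0.0, 1.0)

def requantize(rgb, bits=8):
    """Quantize float RGB to an arbitrary target bit depth."""
    levels = 2**bits - 1
    return np.round(np.asarray(rgb) * levels).astype(int)
```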
Abstract:
An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
Abstract:
Examples are disclosed that relate to depth-aware late-stage reprojection. One example provides a computing system configured to receive and store image data, receive a depth map for the image data, process the depth map to obtain a blurred depth map, and, based upon motion data, determine a translation to be made to the image data. Further, for each pixel, the computing system is configured to translate an original ray extending from an original virtual camera location to an original frame buffer location to a reprojected ray extending from a translated camera location to a reprojected frame buffer location, determine a location at which the reprojected ray intersects the blurred depth map, and sample a color of a pixel for display based upon a color corresponding to the location at which the reprojected ray intersects the blurred depth map.
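A heavily simplified 1-D Python sketch of the idea follows. It blurs the depth map and then, for each output pixel, approximates the ray intersection with a single parallax lookup instead of a true ray march; the box blur, the pinhole model, and the single-lookup shortcut are all assumptions and are much simpler than the described per-pixel ray translation:

```python
import numpy as np

def blur_depth(depth, k=3):
    """Box-blur a 1-D depth map to soften depth edges before
    reprojection (stand-in for the blurred depth map)."""
    pad = k // 2
    padded = np.pad(depth, pad, mode='edge')
    return np.convolve(padded, np.ones(k) / k, mode='valid')

def reproject_1d(colors, depth, dx):
    """1-D late-stage reprojection sketch: for each output pixel,
    shift the sampling location by the parallax implied by the
    camera translation dx and the blurred depth, then sample the
    stored color there."""
    bd = blur_depth(depth)
    n = len(colors)
    out = np.empty_like(colors)
    for i in range(n):
        shift = dx / bd[i]               # parallax ~ translation / depth
        src = int(np.round(i + shift))   # where the reprojected ray lands
        out[i] = colors[min(max(src, 0), n - 1)]
    return out
```

Near objects (small depth) shift more than far ones, which is the depth-aware behavior that a depth-unaware late-stage reprojection lacks.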