Abstract:
In an immersive surveillance system, video and other data from a large number of cameras and other sensors are managed and displayed by a video processing system that overlays the data within a rendered 2D or 3D model of a scene. The system has a viewpoint selector configured to allow a user to selectively identify a viewpoint from which to view the site. A video control system receives data identifying the viewpoint and, based on the viewpoint, automatically selects the subset of the plurality of cameras that is generating video relevant to the view from that viewpoint, and causes video from that subset of cameras to be transmitted to the video processing system. As the viewpoint changes, the cameras communicating with the video processing system are changed so as to hand off to cameras generating video relevant to the new viewpoint. Playback in the immersive environment is provided by synchronization of time-stamped video recordings. Navigation of the viewpoint along constrained paths in the model, as well as map-based navigation, is also provided.
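The viewpoint-based camera selection described above can be sketched as a simple geometric test. In this minimal sketch (the camera representation, field-of-view test, and all names are assumptions, not from the abstract), a camera is "relevant" when it lies within a given range and angular cone of the user's chosen viewpoint:

```python
import math

# Hypothetical sketch: select cameras whose position lies within a given
# range and angular field of view of the user's chosen viewpoint.
def select_relevant_cameras(viewpoint, look_dir, cameras,
                            fov_deg=90.0, max_range=100.0):
    """Return the ids of cameras relevant to the current viewpoint."""
    half_fov = math.radians(fov_deg) / 2.0
    norm = math.hypot(*look_dir)
    dx, dy = look_dir[0] / norm, look_dir[1] / norm
    selected = []
    for cam in cameras:
        vx = cam["pos"][0] - viewpoint[0]
        vy = cam["pos"][1] - viewpoint[1]
        dist = math.hypot(vx, vy)
        if dist == 0 or dist > max_range:
            continue
        # Angle between the look direction and the direction to the camera
        cos_a = (vx * dx + vy * dy) / dist
        if math.acos(max(-1.0, min(1.0, cos_a))) <= half_fov:
            selected.append(cam["id"])
    return selected

cams = [{"id": "c1", "pos": (10, 0)},
        {"id": "c2", "pos": (0, 50)},
        {"id": "c3", "pos": (-30, 0)}]
print(select_relevant_cameras((0, 0), (1, 0), cams))  # → ['c1']
```

Re-running the selection as the viewpoint moves yields the hand-off behavior: cameras leaving the cone drop out of the subset while newly relevant ones join it.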
Abstract:
A composition comprising a perfluoro-[2,2]-paracyclophane dimer compound is disclosed. The paracyclophane dimer is synthesized from 1,4-bis(chlorodifluoromethyl)-2,3,5,6-tetrafluorobenzene by heating in the presence of a metal catalyst and a solvent. A perfluorinated para-xylylene coating formed from the perfluorinated paracyclophane dimer is also disclosed.
Abstract:
A system and method for identifying objects, particularly vehicles, between two non-overlapping cameras (2). More specifically, the method and system determine whether a vehicle depicted in an image captured by a first camera is the same vehicle or a different vehicle than a vehicle depicted in an image captured by a second camera. This inter-camera analysis determines whether the vehicle viewed by the first camera is the same as the vehicle viewed by the second camera, without directly matching the two vehicle images (4), thus eliminating the problems and inaccuracies caused by disparate environmental conditions acting on the two cameras, such as dramatic appearance and aspect changes.
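One way to avoid matching the two images directly is to compare each observation only against exemplars captured by its own camera, and then link the exemplar sets by known vehicle identities. The sketch below illustrates that idea only; the feature vectors, exemplar sets, and all names are assumptions, not the patented method:

```python
# Hypothetical sketch of indirect matching: each camera's observation is
# compared only against exemplars from the SAME camera, and the exemplar
# sets are bridged by shared vehicle identities, so the two images are
# never matched directly across disparate viewing conditions.
def nearest_exemplar(feature, exemplars):
    """Return the identity of the closest exemplar (squared L2 distance)."""
    best_id, best_d = None, float("inf")
    for identity, vec in exemplars:
        d = sum((a - b) ** 2 for a, b in zip(feature, vec))
        if d < best_d:
            best_id, best_d = identity, d
    return best_id

def same_vehicle(feat_a, exemplars_cam1, feat_b, exemplars_cam2):
    """Same vehicle iff both observations resolve to the same identity."""
    return (nearest_exemplar(feat_a, exemplars_cam1)
            == nearest_exemplar(feat_b, exemplars_cam2))

cam1 = [("truck7", (1.0, 0.0)), ("sedan3", (0.0, 1.0))]
cam2 = [("truck7", (0.8, 0.2)), ("sedan3", (0.1, 0.9))]
print(same_vehicle((0.9, 0.1), cam1, (0.7, 0.3), cam2))  # → True
```

Because each comparison stays within one camera, systematic appearance differences between the two cameras cancel out of the decision.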
Abstract:
Conformal coatings are disclosed. A fluorinated thin film, for example, is formed by vapor phase polymerization on a variety of sensitive surfaces that may include electronic and automotive sensors, biochips, implantable sensors and biomedical devices.
Abstract:
A method and apparatus for video surveillance is disclosed. In one embodiment, a sequence of scene imagery representing a field of view is received. One or more moving objects are identified within the sequence of scene imagery and then classified in accordance with one or more extracted spatio-temporal features. This classification may then be applied to determine whether the moving object and/or its behavior fits one or more known events or behaviors that are causes for alarm.
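A toy version of the classification-then-alarm step can be written as two small rules. The features, thresholds, and class names below are illustrative assumptions, not the patented classifier:

```python
# Illustrative sketch: classify a tracked object from simple
# spatio-temporal features, then flag a behavior of interest
# (a person loitering in the field of view).
def classify(aspect_ratio, speed):
    """Crude class decision from bounding-box aspect ratio and speed."""
    if aspect_ratio > 1.5:
        return "person"      # tall, narrow box
    return "vehicle" if speed > 2.0 else "object"

def is_alarm(label, dwell_seconds, loiter_limit=30.0):
    """A person remaining in view beyond the limit raises an alarm."""
    return label == "person" and dwell_seconds > loiter_limit

label = classify(aspect_ratio=2.1, speed=0.4)
print(label, is_alarm(label, dwell_seconds=45.0))  # → person True
```

A real system would replace the hand-set thresholds with learned decision boundaries over richer spatio-temporal features, but the flow from features to class to alarm decision is the same.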
Abstract:
A method and apparatus for automatically generating a three-dimensional computer model from a "point cloud" of a scene produced by a laser radar (LIDAR) system (114 in Figure 9). Given a point cloud of an indoor or outdoor scene, the method extracts certain structures from the imaged scene, e.g., ceiling, floor, furniture, rooftops, ground, and the like, and models (904) these structures with planes and/or prismatic structures to achieve a three-dimensional computer model (902) of the scene. The method may then add photographic and/or synthetic texturing to the model to achieve a realistic model.
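Extracting a planar structure such as a floor or ceiling from a point cloud can be sketched very simply for the horizontal case: histogram the point heights and take the most populated slab. The data and bin size below are assumptions for illustration:

```python
from collections import Counter

# Minimal sketch: recover a dominant horizontal plane (e.g. a floor or
# ceiling) from a point cloud by histogramming point heights and taking
# the most populated height bin.
def dominant_plane_height(points, bin_size=0.1):
    """Return the z height of the most common horizontal slab of points."""
    bins = Counter(round(z / bin_size) for _, _, z in points)
    best_bin, _ = bins.most_common(1)[0]
    return best_bin * bin_size

# A dense floor at z = 0 plus a little clutter above it
cloud = [(x * 0.5, y * 0.5, 0.0) for x in range(10) for y in range(10)]
cloud += [(1.0, 1.0, 0.8), (2.0, 2.0, 1.3)]
print(dominant_plane_height(cloud))  # → 0.0
```

General (non-horizontal) planes would instead be fit with least squares or RANSAC over point normals, but the principle of voting for a dominant plane is the same.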
Abstract:
A method and apparatus (10) for tracking a movable object (16) using a plurality of images, each of which is separated by an interval of time. The plurality of images includes first and second images. The method and apparatus include elements for aligning the first and second images as a function of (i) at least one feature of a first movable object captured in the first image, and (ii) at least one feature of a second movable object captured in the second image; and after aligning the first and second images, comparing at least one portion of the first image with at least one portion of the second image.
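The align-then-compare step can be sketched as a pure translation estimated from one tracked feature's positions in the two frames, followed by a pixel-wise comparison of the overlap. The frame representation and all names are illustrative assumptions:

```python
# Hedged sketch: align two frames by the translation between a tracked
# feature's positions, then compare the aligned pixels by summed absolute
# difference. Frames are 2D lists of intensities.
def align_and_compare(img1, feat1, img2, feat2):
    """Shift img2 so feat2 lands on feat1, then sum |img1 - img2'|
    over the overlapping region."""
    dy, dx = feat1[0] - feat2[0], feat1[1] - feat2[1]
    total = 0
    for y, row in enumerate(img1):
        for x, v in enumerate(row):
            sy, sx = y - dy, x - dx
            if 0 <= sy < len(img2) and 0 <= sx < len(img2[0]):
                total += abs(v - img2[sy][sx])
    return total

a = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # feature at (1, 1)
b = [[0, 0, 0], [0, 0, 9], [0, 0, 0]]   # same feature shifted to (1, 2)
print(align_and_compare(a, (1, 1), b, (1, 2)))  # → 0
```

A residual of zero means the frames agree once the object's motion is compensated; a large residual flags change beyond the tracked motion.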
Abstract:
A method and apparatus for recognizing an object are provided, comprising: providing a set of scene features from a scene; pruning a set of model features; generating a set of hypotheses associating the pruned set of model features with the set of scene features; pruning the set of hypotheses; and verifying the pruned set of hypotheses.
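The prune/hypothesize/verify pipeline can be sketched with toy features. Here a feature is an assumed (angle, length) descriptor, a hypothesis is the model rotation implied by pairing a scene feature with a model feature, and verification keeps the rotation supported by the most pairings; all representations and thresholds are illustrative assumptions:

```python
from collections import Counter

# Illustrative sketch of the prune / hypothesize / verify pipeline.
def recognize(scene, model, tol=0.1):
    # Prune model features: drop any whose length matches no scene feature.
    pruned = [m for m in model
              if any(abs(m[1] - s[1]) < tol for s in scene)]
    # Generate hypotheses: each length-compatible pairing implies a rotation.
    hyps = [round(s[0] - m[0], 1) for s in scene for m in pruned
            if abs(m[1] - s[1]) < tol]
    if not hyps:
        return None
    # Prune and verify: keep the rotation with the most supporting pairings,
    # and require at least two votes to accept it.
    rot, votes = Counter(hyps).most_common(1)[0]
    return rot if votes >= 2 else None

model = [(0.0, 1.0), (0.5, 2.0), (1.0, 3.0)]
scene = [(0.3, 1.0), (0.8, 2.0), (1.3, 3.0)]  # the model rotated by 0.3
print(recognize(scene, model))  # → 0.3
```

Pruning before hypothesis generation keeps the pairing count small, which is the point of the staged pipeline: verification only runs on the few hypotheses that survive.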
Abstract:
The present disclosure relates to the field of cellular tumorigenesis and cancer biology. More specifically, the present disclosure relates to tumorigenesis and cancer in estrogen-responsive cell types, including cell types such as testis, ovary and uterine tissues, mammary gland, brain, skeletal muscle, and lung tissues. The present disclosure further relates to compositions including polypeptides, oligopeptides, peptidomimetics, antibodies, and nucleic acids, and to pharmaceutical compositions, diagnostic kits, and therapeutic kits useful in the diagnosis or treatment of tumorigenesis in estrogen-responsive cell types.
Abstract:
A method and apparatus for dynamically placing sensors in a 3D model (101) are provided. Specifically, in one embodiment, the method selects a 3D model (101) and a sensor for placement into the 3D model (101). The method renders the sensor and the 3D model (101) in accordance with sensor parameters associated with the sensor and parameters desired by a user. In addition, the method determines whether an occlusion of the sensor's view is present.
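The occlusion test can be sketched in 2D as a line-of-sight check: the view from a placed sensor to a target point is occluded if the sensor-to-target ray crosses any obstacle segment. The geometry, wall representation, and all names below are illustrative assumptions:

```python
# Hypothetical sketch: after placing a sensor in a model, test whether a
# wall segment occludes a target point by intersecting the sensor-to-target
# ray with each obstacle segment (2D, for brevity).
def segments_intersect(p, q, a, b):
    """True if segment p-q strictly crosses segment a-b."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2 = cross(a, b, p), cross(a, b, q)
    d3, d4 = cross(p, q, a), cross(p, q, b)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def is_occluded(sensor, target, walls):
    """True if any wall segment blocks the sensor's view of the target."""
    return any(segments_intersect(sensor, target, a, b) for a, b in walls)

walls = [((5, -1), (5, 1))]                 # a wall crossing the x-axis
print(is_occluded((0, 0), (10, 0), walls))  # → True
print(is_occluded((0, 0), (4, 0), walls))   # → False
```

A 3D renderer-based placement tool would instead use a depth buffer or ray casting against the model's geometry, but the question answered is the same: does anything lie between the sensor and the point it should cover?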