Abstract:
Techniques are described for using multiple data capture devices at a building to automatically generate a building floor plan and to determine associated absolute location data for the generated floor plan. In some situations, a building floor plan is automatically generated by analyzing visual data of images captured at multiple image acquisition locations by a camera device to determine room shapes of the surrounding rooms. GPS absolute location data is then associated with the generated floor plan using additional data captured at other data capture locations at the building by a separate mobile device that moves independently from the camera device, such as by automatically determining relative positions of the camera device and the mobile device in order to extend the absolute location data from the mobile device to the camera device's image acquisition location and its surrounding room shape.
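The final step of this abstract, extending GPS absolute location data from the mobile device to the camera device via their relative positions, can be sketched as follows. This is a minimal illustrative example, not the patented implementation; the flat-earth approximation, the east/north offset inputs, and the function name are all assumptions.

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius

def extend_gps_to_camera(mobile_lat, mobile_lon, east_offset_m, north_offset_m):
    """Return (lat, lon) of the camera's image acquisition location, given the
    mobile device's GPS fix and the camera's relative offset from it (meters),
    using a local flat-earth approximation valid over building-scale distances."""
    dlat = math.degrees(north_offset_m / EARTH_RADIUS_M)
    dlon = math.degrees(
        east_offset_m / (EARTH_RADIUS_M * math.cos(math.radians(mobile_lat))))
    return mobile_lat + dlat, mobile_lon + dlon

# Hypothetical values: mobile device fixed at a Seattle coordinate, camera
# determined to be 10 m east and 5 m north of it.
lat, lon = extend_gps_to_camera(47.6062, -122.3321,
                                east_offset_m=10.0, north_offset_m=5.0)
```

The same transform would then be applied to every vertex of the camera location's surrounding room shape to georeference the floor plan.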
Abstract:
A medical system comprising: an instrument including an instrument shape sensor; a display system; and a control system including one or more processors, wherein the control system is configured to: receive an anatomic model of a patient anatomy, wherein an area of interest is identified in the anatomic model; receive shape sensor data from the instrument shape sensor while the instrument is positioned within the patient anatomy and registered to the anatomic model; and determine a fluoroscopic image plane for display on the display system based on the received shape sensor data and the area of interest.
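The abstract does not specify how the fluoroscopic image plane is derived from the shape sensor data, so the following is only one plausible sketch: fit a best-fit plane to the instrument's shape points near the area of interest, so that the displayed fluoroscopic plane approximately contains the instrument's local path. The plane-fitting rule, radius parameter, and function name are assumptions.

```python
import numpy as np

def fluoro_plane(shape_points, area_of_interest, radius=30.0):
    """Return (centroid, unit normal) of the best-fit plane through the
    instrument shape-sensor points within `radius` mm of the area of interest."""
    pts = np.asarray(shape_points, dtype=float)
    near = pts[np.linalg.norm(pts - area_of_interest, axis=1) < radius]
    centroid = near.mean(axis=0)
    # The right singular vector for the smallest singular value of the
    # centered point cloud is the normal of the least-squares plane.
    _, _, vt = np.linalg.svd(near - centroid)
    return centroid, vt[-1]

# Hypothetical registered shape points (mm), all lying in the z = 0 plane,
# so the fitted normal is ±(0, 0, 1).
pts = [(0, 0, 0), (10, 5, 0), (20, 8, 0), (30, 9, 0)]
center, normal = fluoro_plane(pts, np.array([20.0, 8.0, 0.0]), radius=100.0)
```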
Abstract:
An image processing apparatus is provided that permits local shape deformation of a measuring target in pattern matching using a shape feature of the measuring target. A model is defined by a plurality of first positions on an edge extracted from a model image and a changing direction of the edge at each of the first positions. An image processing apparatus (100) calculates a changing direction of an edge at a second position of an input image corresponding to each first position on the edge of the model image. The image processing apparatus (100) accepts an instruction specifying a permissible value for the changing direction of the edge. The image processing apparatus (100) calculates a similarity degree between each first position and its corresponding second position based on the accepted instruction and the changing directions of the edge at the first and second positions. The image processing apparatus (100) determines whether a specific area in the input image is similar to the model based on the calculated similarity degrees at the second positions.
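The tolerance-based similarity computation described above can be sketched as follows. This is an illustrative interpretation, not the patented method: the scoring scheme, the aggregation by averaging, and the function names are assumptions; the key idea from the abstract is that direction differences within the user-supplied permissible value still count as a full match, which is what allows local shape deformation.

```python
def direction_similarity(model_dir, input_dir, permissible_deg):
    """Score one (first position, second position) pair in [0, 1].
    Edge changing directions are given in degrees."""
    # Smallest angular difference, handling wrap-around at 360 degrees.
    diff = abs((model_dir - input_dir + 180.0) % 360.0 - 180.0)
    if diff <= permissible_deg:
        return 1.0  # within the permissible value: local deformation allowed
    # Beyond the tolerance, score falls off linearly (assumed scheme).
    return max(0.0, 1.0 - (diff - permissible_deg) / 90.0)

def matches_model(model_dirs, input_dirs, permissible_deg, threshold=0.8):
    """Decide whether a specific area of the input image is similar to the
    model, by averaging per-position similarity degrees."""
    scores = [direction_similarity(m, i, permissible_deg)
              for m, i in zip(model_dirs, input_dirs)]
    return sum(scores) / len(scores) >= threshold

# A slightly deformed edge still matches with a 15-degree permissible value.
ok = matches_model([0, 45, 90, 135], [5, 40, 100, 130], permissible_deg=15)
```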
Abstract:
The present invention relates to a system for tracking the position of an ultrasonic probe in a body part. An X-ray image of a portion of a body part, within which an ultrasonic probe (20) is positioned, is acquired (110). First geometrical positional information of the ultrasonic probe in the portion of the body part is determined (120) utilizing the X-ray image. At least one ultrasonic image comprising a part of a body feature is acquired (130) with the ultrasonic probe, the acquiring (130) comprising acquiring (140) an ultrasonic image of the at least one ultrasonic image at a later time than the time of acquisition of the X-ray image. Second geometrical positional information of the ultrasonic probe in the body part at the later time is determined (150), comprising utilizing the first geometrical positional information and the at least one ultrasonic image comprising the part of the body feature.
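The tracking update in this abstract can be sketched in its simplest in-plane form: the probe's position at the later time is the X-ray-derived first position corrected by the apparent displacement of the tracked body feature in the probe's image frame (the feature appears to move one way when the probe moves the other). This 2-D translational model and all values are illustrative assumptions; the abstract does not specify the estimation method.

```python
def update_probe_position(first_position_mm, feature_pos_first_mm, feature_pos_later_mm):
    """Second geometrical positional information = first position minus the
    body feature's apparent displacement in the ultrasound image frame (mm)."""
    dx = feature_pos_later_mm[0] - feature_pos_first_mm[0]
    dy = feature_pos_later_mm[1] - feature_pos_first_mm[1]
    return (first_position_mm[0] - dx, first_position_mm[1] - dy)

# Hypothetical values: the feature appears to move (+4, -2) mm between the
# two ultrasound images, so the probe is estimated to have moved (-4, +2) mm.
pos = update_probe_position((100.0, 50.0), (10.0, 20.0), (14.0, 18.0))
```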
Abstract:
An electronic apparatus (1) including a display generation unit (110) configured to generate a display area (210) in a user interface, the display area being configured to display a 3-D model of a patient's face and a 3-D model of a patient interface device fitted to the 3-D model of the patient's face; and an interaction map unit (160) configured to generate an interaction map tool (260) in the user interface and to calculate an interaction map between the patient's face and the patient interface device indicating levels of an interaction characteristic between the patient's face and the patient interface device, wherein the interaction map tool is operable to toggle display of the interaction map in the user interface.
Abstract:
A method can include generating image data of an interior of a fuel tank (12) disposed within a wing of an aircraft, and determining (114), by a processing device, an amount of wing bending of the wing of the aircraft based on the generated image data of the interior of the fuel tank. The method can further include producing (116), by the processing device, a fuel measurement value representing an amount of fuel contained in the fuel tank based on the amount of wing bending of the wing of the aircraft, and outputting (118), by the processing device, an indication of the fuel measurement value.
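The compensation step described above can be sketched as follows. The linear correction model, the constants, and the function name are illustrative assumptions; the abstract specifies only that the fuel measurement value is based on the image-derived amount of wing bending.

```python
def fuel_volume_liters(raw_level_mm, wing_bend_deg,
                       liters_per_mm=4.0, bend_gain_mm_per_deg=12.5):
    """Wing bending tilts the tank, shifting the apparent fuel level at the
    sensor; subtract the estimated shift before converting level to volume.
    All calibration constants here are hypothetical."""
    corrected_level_mm = raw_level_mm - bend_gain_mm_per_deg * wing_bend_deg
    return max(0.0, corrected_level_mm * liters_per_mm)

# 500 mm raw reading with 2 degrees of image-derived wing bending:
# corrected level = 500 - 25 = 475 mm, i.e. 1900 liters.
v = fuel_volume_liters(raw_level_mm=500.0, wing_bend_deg=2.0)
```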
Abstract:
Images are processed to compensate for rolling shutter effects. A pair of images is registered. A set of pixel rows in the first image and a corresponding set of pixel rows in the second image are obtained. A parametric model is generated characterizing a transformation between pixels in the set of pixel rows in the first image and pixels in the corresponding set of pixel rows of the second image. Using the generated parametric model, the set of pixel rows in the second image is warped with respect to the set of pixel rows in the first image, reducing rolling shutter effects.
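The row-wise fit-then-warp procedure above can be sketched as follows. The abstract does not specify the parametric model, so this example assumes the simplest useful one: a horizontal shift varying linearly with row index (the signature of rolling shutter under horizontal camera motion), fit by least squares from matched rows and then undone row by row.

```python
import numpy as np

def fit_row_shift_model(rows, shifts):
    """Least-squares fit of shift(row) = a * row + b from per-row shifts
    measured between the registered image pair."""
    A = np.column_stack([rows, np.ones_like(rows, dtype=float)])
    (a, b), *_ = np.linalg.lstsq(A, shifts, rcond=None)
    return a, b

def warp_rows(image, a, b):
    """Warp the second image's rows by the model's predicted per-row shift
    (nearest-pixel, wrap-around borders for simplicity)."""
    out = np.empty_like(image)
    for r in range(image.shape[0]):
        out[r] = np.roll(image[r], -int(round(a * r + b)))
    return out

# Hypothetical measured shifts of four sampled rows between the two images.
rows = np.array([0.0, 10.0, 20.0, 30.0])
shifts = np.array([0.0, 2.0, 4.0, 6.0])
a, b = fit_row_shift_model(rows, shifts)  # fits a = 0.2, b = 0.0
```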
Abstract:
An accurate, flexible, and scalable technique for multi-modal image registration is described, one that does not need to rely on direct feature matching or on precise geometric models. The methods and/or systems described in this disclosure enable the registration (fusion) of multi-modal images of a scene (700) with a three-dimensional (3D) representation of the same scene (700) using, among other information, viewpoint data from a sensor (706, 1214, 1306) that generated a target image (402), as well as 3D-GeoArcs. The registration techniques of the present disclosure may comprise three main steps, as shown in FIG. 1. The first main step includes forming a 3D reference model of a scene (700). The second main step includes estimating the 3D geospatial viewpoint of a sensor (706, 1214, 1306) that generated a target image (402) using 3D-GeoArcs. The third main step includes projecting the target image's data into a composite 3D scene representation.
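The GeoArc idea behind the second main step can be illustrated in 2-D: given two known landmarks and the angle between them measured in the target image, candidate sensor viewpoints are those from which the landmarks subtend approximately that angle (a circular arc). The grid-scoring approach, tolerance, and names below are assumptions for illustration only; the disclosure's 3D-GeoArcs generalize this to 3-D surfaces.

```python
import math

def subtended_angle(viewpoint, p1, p2):
    """Angle (radians) between landmarks p1 and p2 as seen from a 2-D viewpoint."""
    a1 = math.atan2(p1[1] - viewpoint[1], p1[0] - viewpoint[0])
    a2 = math.atan2(p2[1] - viewpoint[1], p2[0] - viewpoint[0])
    # Wrap the difference into [-pi, pi] and take its magnitude.
    return abs((a1 - a2 + math.pi) % (2 * math.pi) - math.pi)

def geoarc_candidates(p1, p2, measured_angle, grid, tol=0.02):
    """Return grid points consistent with the measured angle: one 'GeoArc'."""
    return [v for v in grid
            if abs(subtended_angle(v, p1, p2) - measured_angle) < tol]

# Hypothetical landmarks at (0, 0) and (4, 0), measured angle of 90 degrees;
# candidate viewpoints searched over a half-plane grid below the landmarks.
grid = [(x * 0.5, y * 0.5) for x in range(-20, 21) for y in range(-20, 1)]
arc = geoarc_candidates((0.0, 0.0), (4.0, 0.0), math.radians(90), grid)
```

Intersecting the arcs from several landmark pairs narrows the candidates toward the sensor's actual geospatial viewpoint.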