Abstract:
The present teaching relates to a method, system, medium, and implementations for fusing a 3D virtual model with a 2D image associated with an organ of a patient. A key-pose is determined as an approximate position and orientation of a medical instrument with respect to the patient's organ. Based on the key-pose, an overlay is generated on a 2D image of the patient's organ, acquired by the medical instrument, by projecting the 3D virtual model onto the 2D image. A pair of feature points includes a 2D feature point from the 2D image and a corresponding 3D feature point from the 3D virtual model. The 3D coordinate of the 3D feature point is determined based on the 2D coordinate of the 2D feature point, with the 3D coordinate lying at some depth on the line of sight of the 2D feature point.
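The lifting of a 2D feature point to a 3D coordinate on its line of sight can be sketched as follows; this is an illustrative reading assuming a standard pinhole camera model, and the function name, the example intrinsic matrix `K`, and the chosen depth are all assumptions, not details from the abstract:

```python
import numpy as np

def backproject(u, v, depth, K):
    """Lift a 2D image point (u, v) to a 3D point on its line of sight.

    The returned point lies at the given depth along the camera ray
    through the pixel, so it projects back onto (u, v).
    """
    K_inv = np.linalg.inv(K)
    ray = K_inv @ np.array([u, v, 1.0])  # direction of the line of sight
    return depth * ray                   # 3D point in camera coordinates

# Example intrinsics (focal length 800 px, principal point at image center).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
p3d = backproject(400.0, 300.0, 50.0, K)
```

Any depth along the ray reprojects to the same pixel, which is why the abstract only constrains the 3D coordinate to lie on the line of sight.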
Abstract:
The present teaching relates to a method and system for path planning. A target is tracked via one or more sensors. Information on a desired pose of an end-effector with respect to the target and a current pose of the end-effector is obtained. A minimum permitted distance between an arm including the end-effector and each of at least one obstacle identified between the current pose of the end-effector and the target is also obtained. A previously learned weighting factor is retrieved, and a cost is computed based on a cost function in accordance with a weighted smallest distance between the arm including the end-effector and the at least one obstacle, wherein the smallest distance is weighted by the weighting factor. A trajectory from the current pose to the desired pose is computed by minimizing the cost function.
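One way to read the cost described above is as a goal term plus an obstacle term in which the smallest arm-obstacle distance is weighted by the learned factor. The sketch below assumes that form; the function names, the exact functional shape, and the use of an inverse-distance penalty are illustrative assumptions rather than the patented implementation:

```python
import numpy as np

def trajectory_cost(traj, goal_pose, min_arm_obstacle_dist, w):
    """Cost of a candidate trajectory (illustrative).

    traj: sequence of end-effector poses; goal_pose: desired final pose;
    min_arm_obstacle_dist: smallest distance between the arm and any
    obstacle along the trajectory; w: previously learned weighting factor.
    """
    # Terminal error: how far the trajectory ends from the desired pose.
    goal_term = np.linalg.norm(np.asarray(traj[-1]) - np.asarray(goal_pose))
    # Obstacle penalty: the smallest arm-obstacle distance, weighted by w,
    # grows the cost as the arm passes closer to an obstacle.
    obstacle_term = w / max(min_arm_obstacle_dist, 1e-6)
    return goal_term + obstacle_term
```

Minimizing this cost over candidate trajectories trades off reaching the desired pose against keeping the arm clear of obstacles.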
Abstract:
The present teaching relates to surgical procedure assistance. In one example, a first image of an organ having a lesion is obtained prior to a surgical procedure. Information related to a pose of a surgical instrument at a first location with respect to the lesion is received from a sensor coupled with the surgical instrument. A visual environment having the lesion and the surgical instrument rendered therein is generated based on the first image and the information received from the sensor. A second image of the organ is obtained when the surgical instrument is moved to a second location with respect to the lesion during the surgical procedure. The second image captures the lesion and the surgical instrument. The pose of the surgical instrument and the lesion rendered in the visual environment are adjusted based, at least in part, on the second image.
Abstract:
The present teaching relates to a method, system, medium, and implementations for estimating a 3D coordinate of a 3D virtual model. Two pairs of feature points are obtained. Each pair includes a 2D feature point on an organ observed in a 2D image, acquired during a medical procedure, and a corresponding 3D feature point from a 3D virtual model, constructed for the organ prior to the procedure based on a plurality of images of the organ. The 3D feature points of the first and second pairs have different depths. A 3D coordinate of a 3D feature point is determined based on the two pairs of feature points so that a projection of the 3D virtual model from the 3D coordinate substantially matches the organ observed in the 2D image.
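One simple way to realize "a projection that substantially matches the 2D observation" is to search over candidate depths for the first feature point and keep the depth whose implied position for the second model point reprojects closest to the second 2D feature point. The sketch below assumes a pinhole camera and a known 3D offset between the two model points; every name and parameter here is an illustrative assumption:

```python
import numpy as np

def project(K, p):
    """Pinhole projection of a 3D point p to pixel coordinates."""
    q = K @ p
    return q[:2] / q[2]

def estimate_anchor_depth(K, uv1, uv2, offset, depths):
    """Search candidate depths for the first feature point along its line
    of sight so that the model's second point reprojects onto uv2.

    offset: known 3D displacement (in camera axes) from the first model
    point to the second, taken from the pre-built 3D virtual model.
    """
    ray = np.linalg.inv(K) @ np.array([*uv1, 1.0])
    best_d, best_err = None, np.inf
    for d in depths:
        p1 = d * ray                 # candidate 3D position of point 1
        p2 = p1 + offset             # implied position of point 2
        err = np.linalg.norm(project(K, p2) - np.asarray(uv2))
        if err < best_err:
            best_d, best_err = d, err
    return best_d

# Example intrinsics (assumed values for illustration).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
```

Because the two feature points sit at different depths, the reprojection error varies with the candidate depth, which is what makes the search well-posed.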
Abstract:
Systems and methods relating to placement of an ultrasound transducer for a procedure. In a non-limiting embodiment, a 3D environment is rendered on a display device, including images of a body region of a patient as well as images including a first virtual representation of an ablation needle at a first location and a second virtual representation of an ultrasound transducer at a second location. A determination may be made as to whether the first and second virtual representations collide at a first collision point. If so, at least one parameter associated with an orientation and/or position of the second virtual representation may be adjusted. A determination may then be made as to whether the first and second virtual representations still collide and, in response to determining that there is no collision, position data indicating the location of the second virtual representation after the adjustment is stored.
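The check-adjust-recheck loop above can be sketched with a deliberately crude collision test; bounding spheres stand in for the renderer's actual collision geometry, and the step size, radii, and function names are all assumptions for illustration:

```python
import numpy as np

def spheres_collide(c1, r1, c2, r2):
    """Bounding-sphere stand-in for the 3D environment's collision test."""
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) < r1 + r2

def place_transducer(needle_pos, transducer_pos, step=0.5, max_iters=100):
    """Adjust the transducer's position away from the needle until the
    two virtual representations no longer collide, then return the
    placement to be stored (illustrative sketch).
    """
    pos = np.asarray(transducer_pos, dtype=float)
    for _ in range(max_iters):
        if not spheres_collide(needle_pos, 1.0, pos, 2.0):
            return pos  # no collision: this position data is stored
        # Adjust a position parameter: nudge directly away from the needle.
        away = pos - np.asarray(needle_pos)
        n = np.linalg.norm(away)
        direction = away / n if n > 0 else np.array([1.0, 0.0, 0.0])
        pos = pos + step * direction
    return None  # no collision-free placement found within the budget
```

In practice the adjustment could act on orientation as well as position, as the abstract allows; only position is varied here for brevity.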
Abstract:
The present teaching relates to surgical procedure assistance. In one example, a plurality of similarity measures are determined between a first set of positions and a plurality of second sets of positions, respectively. The first set of positions is obtained with respect to a plurality of sensors coupled with a patient in an image captured prior to a surgical procedure. The plurality of second sets of positions are obtained from the plurality of sensors and change in accordance with movement of the patient. A target lesion is segmented in the image captured prior to the surgical procedure to obtain a lesion display object. The lesion display object is duplicated to generate a plurality of lesion display objects. The plurality of lesion display objects are presented on a display screen so that a distance between the plurality of lesion display objects changes in accordance with the plurality of similarity measures.
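The coupling between sensor-position similarity and on-screen separation of the duplicated lesion objects can be sketched as follows. The abstract does not fix a particular similarity measure, so the inverse mean displacement used here, along with every function name and parameter, is an illustrative assumption:

```python
import numpy as np

def pose_similarity(ref_positions, live_positions):
    """Similarity between the pre-procedure sensor positions and a live
    reading: inverse of the mean per-sensor displacement (one plausible
    choice of measure). 1.0 means perfect alignment.
    """
    d = np.mean(np.linalg.norm(np.asarray(ref_positions) -
                               np.asarray(live_positions), axis=1))
    return 1.0 / (1.0 + d)

def display_offsets(similarities, base_offset=20.0):
    """Map each similarity to an on-screen offset for its duplicated
    lesion display object: higher similarity draws the copies together,
    and perfect alignment collapses them onto one another.
    """
    return [base_offset * (1.0 - s) for s in similarities]
```

The visual effect is that the duplicated lesion objects converge as the patient returns to the pose captured in the pre-procedure image.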
Abstract:
Methods, systems, and programs for real-time surgical procedure assistance are provided. A first set of 3D poses of 3D points on the organ may be received. An electronic organ map built for the organ via pre-surgical medical information may be retrieved. A tissue parameter of the organ may be obtained based on the first set of 3D poses and their corresponding 3D poses from the electronic organ map. A deformation transformation of the electronic organ map may be calculated based on the obtained tissue parameter and the first set of 3D poses during the surgical procedure. The deformed electronic organ map may be projected onto the organ with respect to the first set of 3D poses during the surgical procedure.
Abstract:
A method and system are provided for image segmentation of liver objects. Segmentation is performed to obtain a first set of objects relating to the liver. More than one type associated with one of the first set of objects is identified. Landmarks are identified based on the segmented first set of objects or the different types of one of the first set of objects. A second set of objects, including liver lobes, is segmented based on the landmarks.
Abstract:
The present teaching relates to a method, system, medium, and implementations for estimating a 3D coordinate of a 3D virtual model. Two pairs of feature points are obtained, each including a 2D feature point observed in a 2D image and a corresponding 3D feature point from the 3D virtual model. A first 3D coordinate of the first 3D feature point and a second 3D coordinate of the second 3D feature point are automatically determined based on the pairs of feature points so that a first distance between the determined first 3D coordinate and the determined second 3D coordinate equals a second distance between a first actual 3D coordinate of the first 3D feature point and a second actual 3D coordinate of the second 3D feature point in the 3D virtual model.
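The distance constraint above has a concrete geometric core: if each recovered 3D point must lie on the viewing ray of its 2D feature point, then fixing the depth of the first point turns the equal-distance condition into a quadratic in the second depth. The sketch below solves that quadratic; the helper name and the assumption that rays are expressed in camera coordinates are illustrative:

```python
import numpy as np

def depth_on_second_ray(r1, r2, d1, model_dist):
    """Given depth d1 along viewing ray r1, solve for depths d2 along
    viewing ray r2 so the two recovered 3D points are exactly
    model_dist apart, matching the distance in the 3D virtual model.

    Solves |d1*r1 - d2*r2| = model_dist, a quadratic in d2; returns the
    real, positive roots (zero, one, or two candidates).
    """
    a = r2 @ r2
    b = -2.0 * d1 * (r1 @ r2)
    c = d1 * d1 * (r1 @ r1) - model_dist ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []  # no depth on ray 2 satisfies the model distance
    roots = [(-b + s * np.sqrt(disc)) / (2.0 * a) for s in (1.0, -1.0)]
    return [d for d in roots if d > 0]
```

The two-root ambiguity is the usual front/back intersection of a sphere with a ray; additional constraints (such as the reprojection match in the related abstract) can pick between the candidates.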
Abstract:
The present disclosure relates to robot path planning. Depth information of a plurality of obstacles in an environment of a robot is obtained at a first time instance. A static distance map is generated based on the depth information. A path is computed for the robot based on the static distance map. At a second time instance, depth information of one or more obstacles is obtained. A dynamic distance map is generated based on the one or more obstacles, wherein for each obstacle that satisfies a condition: a vibration range of the obstacle is computed based on a position of the obstacle and the static distance map, and the obstacle is classified as a dynamic obstacle or a static obstacle based on a criterion associated with the vibration range. A repulsive speed of the robot is computed based on the dynamic distance map to avoid the dynamic obstacles.
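The classification and repulsion steps above can be sketched as follows. Reading the vibration range as an obstacle's displacement relative to the static distance map and the repulsive speed as a potential-field velocity are interpretive assumptions, as are all names, thresholds, and gains:

```python
import numpy as np

def classify_obstacles(points_t2, static_dist_lookup, vib_threshold=0.05):
    """Classify obstacles observed at the second time instance.

    static_dist_lookup(p) returns the distance from p to the nearest
    obstacle in the static distance map; an obstacle whose vibration
    range (displacement relative to the static map) exceeds the
    threshold is treated as dynamic, otherwise as static.
    """
    dynamic, static = [], []
    for p in points_t2:
        vibration_range = static_dist_lookup(p)
        (dynamic if vibration_range > vib_threshold else static).append(p)
    return dynamic, static

def repulsive_speed(robot_pos, dynamic_points, gain=1.0, influence=0.5):
    """Sum of inverse-distance repulsive velocity contributions from
    dynamic obstacles within the influence radius (potential-field style).
    """
    v = np.zeros(3)
    for p in dynamic_points:
        diff = np.asarray(robot_pos, dtype=float) - np.asarray(p, dtype=float)
        d = np.linalg.norm(diff)
        if 0 < d < influence:
            v += gain * (1.0 / d - 1.0 / influence) * diff / d ** 2
    return v
```

Splitting the map into a static part (built once) and a dynamic part (rebuilt per frame) keeps the per-frame avoidance computation limited to the obstacles that actually moved.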