-
Publication Number: US20210118173A1
Publication Date: 2021-04-22
Application Number: US16656511
Application Date: 2019-10-17
Inventor: Ziyan Wu, Srikrishna Karanam
IPC: G06T7/73
Abstract: A system for patient positioning is provided. The system may acquire image data relating to a patient holding a posture and a plurality of patient models. Each patient model may represent a reference patient holding a reference posture, and include at least one reference interest point of the reference patient and a reference representation of the reference posture. The system may also identify at least one interest point of the patient from the image data using an interest point detection model. The system may further determine a representation of the posture of the patient based on a comparison between the at least one interest point of the patient and the at least one reference interest point in each of the plurality of patient models.
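A minimal sketch of the comparison step described in the abstract, assuming interest points have already been detected as 2D keypoints. The `PatientModel` class, the normalization, and the mean-squared-error matching rule are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: match detected patient interest points against
# reference patient models to pick a posture representation.
from dataclasses import dataclass
import numpy as np

@dataclass
class PatientModel:
    posture_label: str                # reference representation of the posture
    reference_points: np.ndarray      # (K, 2) reference interest points

def normalize(points: np.ndarray) -> np.ndarray:
    """Center the points and scale to unit RMS distance so the comparison
    is invariant to translation and overall body size."""
    centered = points - points.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / (scale + 1e-8)

def estimate_posture(detected_points: np.ndarray,
                     models: list[PatientModel]) -> str:
    """Return the posture of the reference model whose interest points are
    closest (in mean squared error) to the detected points."""
    query = normalize(detected_points)
    errors = [((normalize(m.reference_points) - query) ** 2).mean()
              for m in models]
    return models[int(np.argmin(errors))].posture_label

# Toy usage: keypoints ordered as [head, left hand, right hand, pelvis].
arms_down = PatientModel("arms down",
    np.array([[0.0, 2.0], [-0.5, 0.8], [0.5, 0.8], [0.0, 1.0]]))
arms_up = PatientModel("arms raised",
    np.array([[0.0, 2.0], [-0.5, 2.5], [0.5, 2.5], [0.0, 1.0]]))
detected = np.array([[0.02, 2.1], [-0.48, 2.4], [0.52, 2.6], [0.0, 1.0]])
print(estimate_posture(detected, [arms_down, arms_up]))  # -> "arms raised"
```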
-
Publication Number: US20210090736A1
Publication Date: 2021-03-25
Application Number: US16580053
Application Date: 2019-09-24
Inventor: Arun Innanje, Ziyan Wu, Abhishek Sharma, Srikrishna Karanam
Abstract: The present disclosure relates to systems and methods for anomaly detection for a medical procedure. The method may include obtaining image data, collected by one or more visual sensors monitoring a medical procedure, and a trained machine learning model for anomaly detection. The method may include determining a detection result for the medical procedure based on the image data using the trained machine learning model. The detection result may include whether an anomaly regarding the medical procedure exists. In response to a detection result indicating that the anomaly exists, the method may further include providing feedback relating to the anomaly.
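A minimal monitoring-loop sketch, assuming a pre-trained scoring model is available as a callable that maps an image frame to an anomaly score in [0, 1]. The threshold, the toy scoring function, and the printed alert standing in for "feedback" are illustrative choices only.

```python
# Hedged sketch of frame-by-frame anomaly detection with feedback.
from typing import Callable, Iterable
import numpy as np

def monitor_procedure(frames: Iterable[np.ndarray],
                      score_frame: Callable[[np.ndarray], float],
                      threshold: float = 0.5) -> list[dict]:
    """Score every frame and collect a detection result per frame."""
    results = []
    for idx, frame in enumerate(frames):
        score = score_frame(frame)
        anomaly = score >= threshold
        results.append({"frame": idx, "score": score, "anomaly": anomaly})
        if anomaly:
            # Feedback relating to the anomaly; a real system might raise an
            # alarm or notify staff instead of printing.
            print(f"[ALERT] frame {idx}: anomaly score {score:.2f}")
    return results

# Stand-in "trained model": flags frames whose mean intensity drifts far from
# a nominal value (purely for demonstration).
def toy_model(frame: np.ndarray) -> float:
    return float(min(1.0, abs(frame.mean() - 0.5) * 2))

frames = [np.full((8, 8), 0.5), np.full((8, 8), 0.95)]
print(monitor_procedure(frames, toy_model))
```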
-
Publication Number: US20240233338A9
Publication Date: 2024-07-11
Application Number: US17969876
Application Date: 2022-10-20
Inventor: Meng Zheng, Srikrishna Karanam, Ziyan Wu, Arun Innanje, Terrence Chen
IPC: G06V10/774, G06T7/00
CPC classification number: G06V10/774, G06T7/0012, G06T2207/20081, G06T2207/20108
Abstract: Described herein are systems, methods, and instrumentalities associated with automatically annotating a first 3D image dataset. The 3D automatic annotation may be accomplished based on a 2D annotation provided by an annotator and by propagating the 2D annotation through multiple images of a sequence of 2D images associated with the first 3D image dataset. The automatically annotated first 3D image dataset may then be used to annotate other 3D image datasets based on similarities between the first 3D image dataset and the other 3D image datasets. The automatic annotation of the first 3D image dataset and/or the other 3D image datasets may be conducted based on one or more machine learning models trained for performing those tasks.
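The sketch below illustrates the propagation idea only: a single annotated 2D slice is grown through the neighboring slices of a 3D volume. The similarity rule (an intensity window around the annotated region's mean, restricted to a dilated copy of the previous slice's mask) is a simple hand-crafted heuristic standing in for the learning-based propagation the abstract refers to.

```python
# Hedged sketch: propagate one 2D annotation through a 3D volume.
import numpy as np

def dilate(mask: np.ndarray, steps: int = 1) -> np.ndarray:
    """4-connected binary dilation implemented with array shifts."""
    out = mask.copy()
    for _ in range(steps):
        out = (out
               | np.roll(out, 1, axis=0) | np.roll(out, -1, axis=0)
               | np.roll(out, 1, axis=1) | np.roll(out, -1, axis=1))
    return out

def propagate_annotation(volume: np.ndarray, seed_mask: np.ndarray,
                         seed_index: int, tol: float = 0.1) -> np.ndarray:
    """Grow a 3D mask from one annotated slice, slice by slice."""
    masks = np.zeros_like(volume, dtype=bool)
    masks[seed_index] = seed_mask
    mean_val = volume[seed_index][seed_mask].mean()
    for direction in (+1, -1):
        prev = seed_mask
        z = seed_index + direction
        while 0 <= z < volume.shape[0] and prev.any():
            candidate = dilate(prev) & (np.abs(volume[z] - mean_val) < tol)
            masks[z] = candidate
            prev = candidate
            z += direction
    return masks

# Tiny usage: a bright 3x3 column annotated on the middle slice.
vol = np.zeros((3, 5, 5))
vol[:, 1:4, 1:4] = 1.0
seed = vol[1] > 0.5
print(propagate_annotation(vol, seed, seed_index=1).sum(axis=(1, 2)))
```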
-
Publication Number: US12026913B2
Publication Date: 2024-07-02
Application Number: US17564792
Application Date: 2021-12-29
Inventor: Ziyan Wu, Srikrishna Karanam, Meng Zheng, Abhishek Sharma
Abstract: Automatically validating the calibration of a visual sensor network includes acquiring image data from visual sensors that have partially overlapping fields of view, extracting a representation of an environment in which the visual sensors are disposed, calculating one or more geometric relationships between the visual sensors, comparing the calculated one or more geometric relationships with previously obtained calibration information of the visual sensors, and verifying a current calibration of the visual sensors based on the comparison.
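A minimal sketch of the comparison-and-verification step for a two-camera pair, assuming a freshly estimated relative pose (rotation and translation) has already been recovered from matches in the overlapping field of view. The tolerance values are illustrative assumptions.

```python
# Hedged sketch: compare a measured relative pose with stored calibration.
import numpy as np

def rotation_angle_deg(R_a: np.ndarray, R_b: np.ndarray) -> float:
    """Angle of the relative rotation between two rotation matrices."""
    R_rel = R_a.T @ R_b
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def calibration_still_valid(R_stored, t_stored, R_measured, t_measured,
                            max_angle_deg=1.0, max_translation=0.02) -> bool:
    """Verify the current calibration against the stored geometric relationship."""
    angle_err = rotation_angle_deg(R_stored, R_measured)
    trans_err = float(np.linalg.norm(np.asarray(t_stored) - np.asarray(t_measured)))
    return angle_err <= max_angle_deg and trans_err <= max_translation

# Example: the measured extrinsics drift slightly but stay within tolerance.
R0 = np.eye(3)
t0 = np.array([0.50, 0.00, 0.00])
print(calibration_still_valid(R0, t0, R0, t0 + 1e-3))
```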
-
Publication Number: US12014815B2
Publication Date: 2024-06-18
Application Number: US17869852
Application Date: 2022-07-21
Inventor: Benjamin Planche, Ziyan Wu, Meng Zheng
IPC: G16H30/40, G06T19/00, H04N13/189
CPC classification number: G16H30/40, G06T19/00, H04N13/189
Abstract: Described herein are systems, methods, and instrumentalities associated with generating a multi-dimensional representation of a medical environment based on images of the medical environment. Various pre-processing and/or post-processing operations may be performed to supplement and/or improve the multi-dimensional representation. These operations may include determining semantic information associated with the medical environment based on the images and adding the semantic information to the multi-dimensional representation in addition to space and time information. The operations may also include anonymizing a person presented in the multi-dimensional representation, adding synthetic views to the multi-dimensional representation, improving the quality of the multi-dimensional representation, etc. The multi-dimensional representation of the medical environment generated using these techniques may allow a user to experience and explore the medical environment, for example, via a virtual reality device.
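The following sketch shows one way the space, time, and semantic information mentioned in the abstract could sit in a single data structure. The `SceneRepresentation` class and its query API are assumptions made for illustration; they do not mirror the patented system.

```python
# Illustrative space-time store with semantic labels per observation.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SceneRepresentation:
    points: list = field(default_factory=list)   # (xyz, timestamp, label)

    def add_observation(self, xyz, timestamp: float, label: str) -> None:
        self.points.append((np.asarray(xyz, dtype=float), timestamp, label))

    def query(self, label: str, t_start: float, t_end: float) -> np.ndarray:
        """Return all points with a given semantic label in a time window."""
        hits = [p for p, t, lbl in self.points
                if lbl == label and t_start <= t <= t_end]
        return np.stack(hits) if hits else np.empty((0, 3))

scene = SceneRepresentation()
scene.add_observation([1.2, 0.4, 0.9], timestamp=0.0, label="ultrasound_probe")
scene.add_observation([1.3, 0.4, 0.9], timestamp=1.0, label="ultrasound_probe")
print(scene.query("ultrasound_probe", 0.0, 2.0))
```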
-
Publication Number: US20240164758A1
Publication Date: 2024-05-23
Application Number: US17989251
Application Date: 2022-11-17
Inventor: Ziyan Wu, Shanhui Sun, Arun Innanje, Benjamin Planche, Abhishek Sharma, Meng Zheng
CPC classification number: A61B8/5261, A61B6/5247, A61B8/4254, A61B8/466, A61B8/5223
Abstract: Sensing device(s) may be installed in a medical environment to capture images of the medical environment, which may include an ultrasound probe and a patient. The images may be processed to determine, automatically, the position of the ultrasound probe relative to the patient's body. Based on the determined position, ultrasound image(s) taken by the ultrasound probe may be aligned with a 3D patient model and displayed with the 3D patient model, for example, to track the movements of the ultrasound probe and/or provide a visual representation of the anatomical structure(s) captured in the ultrasound image(s) against the 3D patient model. The ultrasound images may also be used to reconstruct a 3D ultrasound model of the anatomical structure(s).
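A minimal sketch of the alignment step, assuming the probe's pose (rotation and translation in the patient/world frame) has already been recovered from the room cameras. It maps ultrasound pixel coordinates into 3D patient-model coordinates so the scan plane can be overlaid on the 3D model. The pixel spacing and the convention that the image plane spans the probe's local x-z axes are illustrative assumptions.

```python
# Hedged sketch: lift ultrasound pixels into the patient-model frame.
import numpy as np

def ultrasound_pixels_to_patient_frame(pixels_uv: np.ndarray,
                                       pixel_spacing_mm: float,
                                       R_probe: np.ndarray,
                                       t_probe: np.ndarray) -> np.ndarray:
    """Map (u, v) pixel coordinates of an ultrasound image to 3D points."""
    uv_mm = pixels_uv * pixel_spacing_mm
    # Local probe frame: u along x, v (imaging depth) along z, nothing along y.
    local = np.column_stack([uv_mm[:, 0],
                             np.zeros(len(uv_mm)),
                             uv_mm[:, 1]])
    return local @ R_probe.T + t_probe

# Example: identity orientation, probe 300 mm above the table origin.
pts = ultrasound_pixels_to_patient_frame(
    np.array([[0.0, 0.0], [64.0, 128.0]]), pixel_spacing_mm=0.3,
    R_probe=np.eye(3), t_probe=np.array([0.0, 0.0, 300.0]))
print(pts)
```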
-
Publication Number: US11966852B2
Publication Date: 2024-04-23
Application Number: US16710070
Application Date: 2019-12-11
Inventor: Ziyan Wu, Srikrishna Karanam, Lidan Wang
CPC classification number: G06N5/02, G06F9/3836, G06F17/18, G06N3/04, G06N3/08
Abstract: The present disclosure generally provides systems and methods for situation awareness. When executing a set of instructions stored in at least one non-transitory storage medium, at least one processor may be configured to cause the system to perform operations including obtaining, from at least one of one or more sensors, environmental data associated with an environment corresponding to a first time point, generating a first static global representation of the environment corresponding to the first time point based at least in part on the environmental data, generating a first dynamic global representation of the environment corresponding to the first time point based at least in part on the environmental data, and estimating, based on the first static global representation and the first dynamic global representation, a target state of the environment at a target time point using a target estimation model.
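The sketch below illustrates only the data flow of combining a static and a dynamic global representation to estimate a state at a target time point. The "model" here is a plain constant-velocity extrapolation blended with a running average; the patented target estimation model would be learned, so treat this purely as a toy illustration.

```python
# Hedged sketch: static + dynamic representations feeding a state estimate.
import numpy as np

def build_representations(history: np.ndarray, timestamps: np.ndarray):
    """history: (T, D) environmental feature vectors ordered in time."""
    static_rep = history.mean(axis=0)                 # slowly varying part
    dt = timestamps[-1] - timestamps[-2]
    dynamic_rep = (history[-1] - history[-2]) / dt    # recent rate of change
    return static_rep, dynamic_rep

def estimate_target_state(static_rep, dynamic_rep,
                          last_state, last_time, target_time):
    """Blend a static prior with a linear extrapolation of the dynamics."""
    horizon = target_time - last_time
    return 0.5 * static_rep + 0.5 * (last_state + dynamic_rep * horizon)

history = np.array([[0.0, 1.0], [0.2, 1.1], [0.4, 1.2]])
times = np.array([0.0, 1.0, 2.0])
s, d = build_representations(history, times)
print(estimate_target_state(s, d, history[-1], times[-1], target_time=3.0))
```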
-
Publication Number: US20240118796A1
Publication Date: 2024-04-11
Application Number: US17960367
Application Date: 2022-10-05
Inventor: Arun Innanje, Zheng Peng, Ziyan Wu, Qin Liu, Terrence Chen
IPC: G06F3/04842, G06T7/12, G06T11/00
CPC classification number: G06F3/04842, G06T7/12, G06T11/00, G06T2200/24, G06T2210/12, G06T2210/22
Abstract: Click-based contour editing includes detecting a selection input with respect to an image presented on a graphical user interface; designating an area of the image corresponding to the selection input as a region of interest; detecting at least one other selection input on the graphical user interface with respect to the image; determining whether the at least one other selection input is within the region of interest or outside of the region of interest; and, if the at least one other selection input is within the region of interest, excluding the portion of the image corresponding to the other selection input from the region of interest, or, if the other selection input is outside of the region of interest, including the portion of the image associated with the other selection input in the region of interest.
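A small sketch of the include/exclude logic on a binary region of interest: a click inside the current ROI removes a small disk around the click, a click outside adds one. The fixed-radius disk is a simplifying assumption; a real editor would recompute the contour more intelligently.

```python
# Hedged sketch: click-based editing of a binary region of interest.
import numpy as np

def disk_mask(shape, center, radius):
    """Boolean mask of a filled disk around `center` (row, col)."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

def apply_click(roi: np.ndarray, click: tuple[int, int],
                radius: int = 3) -> np.ndarray:
    """Toggle the neighbourhood of a click in or out of the ROI."""
    patch = disk_mask(roi.shape, click, radius)
    if roi[click]:                 # click landed inside the ROI -> exclude
        return roi & ~patch
    return roi | patch             # click landed outside -> include

roi = np.zeros((20, 20), dtype=bool)
roi[5:15, 5:15] = True
roi = apply_click(roi, (10, 10))   # carve a hole inside the region
roi = apply_click(roi, (2, 2))     # add a new area outside the region
print(roi.sum())
```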
-
Publication Number: US11941738B2
Publication Date: 2024-03-26
Application Number: US17513392
Application Date: 2021-10-28
Inventor: Srikrishna Karanam, Meng Zheng, Ziyan Wu
CPC classification number: G06T13/40, G06T17/20, G06T19/20, G06T2210/41, G06T2219/2004
Abstract: A three-dimensional (3D) model of a person may be obtained using a pre-trained neural network based on one or more images of the person. Such a model may be subject to estimation bias and/or other types of defects or errors. Described herein are systems, methods, and instrumentalities for refining the 3D model and/or the neural network used to generate the 3D model. The proposed techniques may extract information such as key body locations and/or a body shape from the images and refine the 3D model and/or the neural network using the extracted information. In examples, the 3D model and/or the neural network may be refined by minimizing a difference between the key body locations and/or body shape extracted from the images and corresponding key body locations and/or body shape determined from the 3D model. The refinement may be performed in an iterative and alternating manner.
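A hedged sketch of the alternating-refinement idea: keypoints predicted from a body model are aligned with keypoints detected in the image by alternately solving for a scale and a 2D offset that minimize their discrepancy. A real system would refine the model's pose/shape parameters or the network weights; this only illustrates an alternating minimization of a keypoint error.

```python
# Illustrative sketch: alternating least-squares alignment of keypoints.
import numpy as np

def refine_alignment(model_kp: np.ndarray, image_kp: np.ndarray,
                     n_iters: int = 20):
    """model_kp, image_kp: (K, 2) arrays of corresponding 2D keypoints."""
    scale, offset = 1.0, np.zeros(2)
    for _ in range(n_iters):
        # Fix the scale, solve for the offset that minimizes the error.
        offset = (image_kp - scale * model_kp).mean(axis=0)
        # Fix the offset, solve for the scale that minimizes the error.
        shifted = image_kp - offset
        scale = float((shifted * model_kp).sum() / (model_kp ** 2).sum())
    error = np.linalg.norm(scale * model_kp + offset - image_kp, axis=1).mean()
    return scale, offset, error

model_kp = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
image_kp = 1.5 * model_kp + np.array([10.0, 5.0])
print(refine_alignment(model_kp, image_kp))  # converges to scale 1.5, offset (10, 5)
```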
-
Publication Number: US20240029867A1
Publication Date: 2024-01-25
Application Number: US17869852
Application Date: 2022-07-21
Inventor: Benjamin Planche, Ziyan Wu, Meng Zheng
IPC: G16H30/40, H04N13/189, G06T19/00
CPC classification number: G16H30/40, H04N13/189, G06T19/00
Abstract: Described herein are systems, methods, and instrumentalities associated with generating a multi-dimensional representation of a medical environment based on images of the medical environment. Various pre-processing and/or post-processing operations may be performed to supplement and/or improve the multi-dimensional representation. These operations may include determining semantic information associated with the medical environment based on the images and adding the semantic information to the multi-dimensional representation in addition to space and time information. The operations may also include anonymizing a person presented in the multi-dimensional representation, adding synthetic views to the multi-dimensional representation, improving the quality of the multi-dimensional representation, etc. The multi-dimensional representation of the medical environment generated using these techniques may allow a user to experience and explore the medical environment, for example, via a virtual reality device.
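Since this publication shares its application (US17869852) with US12014815B2 above, the sketch here illustrates a different post-processing step named in the abstract: anonymizing a person in the reconstructed representation by coarsening any points labelled as belonging to that person. The label scheme and the voxel-snapping rule are illustrative assumptions, not the patented anonymization method.

```python
# Hedged sketch: coarsen person-labelled points so identifying detail
# (e.g., facial geometry) is no longer recoverable.
import numpy as np

def anonymize_points(points_xyz: np.ndarray, labels: list[str],
                     person_label: str = "person",
                     voxel_size: float = 0.1) -> np.ndarray:
    """Snap person points to a coarse voxel grid; leave other points as-is."""
    out = points_xyz.copy()
    for i, label in enumerate(labels):
        if label == person_label:
            out[i] = np.round(out[i] / voxel_size) * voxel_size
    return out

pts = np.array([[0.512, 1.237, 0.881], [2.000, 0.100, 0.300]])
print(anonymize_points(pts, ["person", "table"]))
```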