-
Publication No.: US20250069406A1
Publication Date: 2025-02-27
Application No.: US18944080
Filing Date: 2024-11-12
Applicant: TRIGO VISION LTD.
Inventor: Daniel GABAY
Abstract: A method for acquiring data relating to an object, including arranging a multiplicity of cameras to view a scene, at least one reference object within the scene being viewable by at least a plurality of the multiplicity of cameras, each of the plurality of cameras acquiring at least one image of the reference object viewable thereby, finding a point of intersection of light rays illuminating each of the plurality of cameras, and correlating a pixel location at which the reference object appears within each of the at least one image to the light rays illuminating each of the plurality of cameras and intersecting at the point of intersection, irrespective of a three-dimensional location of the reference object within the scene.
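The ray-intersection step described in the abstract can be sketched as a least-squares triangulation: given each camera's ray (origin and direction), find the single point minimizing the squared distance to all rays. This is a minimal illustration under a standard formulation, not the patented implementation; the function name and solver choice are assumptions.

```python
import numpy as np

def nearest_point_to_rays(origins, directions):
    # Least-squares point minimizing summed squared distance to all rays.
    # Each ray contributes the projector M = I - d d^T (d unit-length),
    # yielding the normal equations (sum M) p = sum (M o).
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        o = np.asarray(o, float)
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projects onto the plane normal to d
        A += M
        b += M @ o
    return np.linalg.solve(A, b)
```

For two rays that actually cross, the solution is their exact intersection; for noisy rays it is the point closest to all of them in the least-squares sense.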
-
Publication No.: US20250069400A1
Publication Date: 2025-02-27
Application No.: US18452596
Filing Date: 2023-08-21
Applicant: Macondo Vision, Inc.
Inventor: Bryan McCormick Kelly , Debbie Fortnum , Frank Layo
Abstract: Disclosed is an area monitoring system including a camera module configured to generate an output representative of a captured image and/or a video of an area including an object. The system includes a state machine module including an inference engine configured to: identify and track an object; establish a state and/or a change in state of an object via machine vision inferencing; and generate an output representative of an object identification, an object state, and/or a change in an object's state. The state machine module includes a mapping engine configured to: process the camera module output to assign coordinates to an object contained in an image and/or a video based on a coordinate system of a computer vision map of an area; apply a timestamp in connection with assigning coordinates to an object; and generate a coordinate timeseries log for plural coordinate-timestamp pairs.
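The coordinate-timestamp logging described above can be sketched as a small container that pairs map coordinates with timestamps per tracked object. A minimal illustration only; the class and method names are assumptions, not the claimed system.

```python
import time

class CoordinateLog:
    """Timestamped coordinate log for objects tracked on a map."""

    def __init__(self):
        self.entries = []  # list of (obj_id, x, y, timestamp) tuples

    def record(self, obj_id, x, y, t=None):
        # Apply a timestamp at the moment coordinates are assigned.
        t = time.time() if t is None else t
        self.entries.append((obj_id, x, y, t))

    def track(self, obj_id):
        # Return the coordinate timeseries for one object.
        return [(x, y, t) for oid, x, y, t in self.entries if oid == obj_id]
```

Replaying `track(obj_id)` gives the object's coordinate-timestamp pairs in recording order, which is the timeseries the mapping engine is described as generating.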
-
Publication No.: US20250069262A1
Publication Date: 2025-02-27
Application No.: US18948164
Filing Date: 2024-11-14
Applicant: The Boeing Company
Inventor: Nick S. Evans , Eric R. Muir , Cullen W. Billhartz , Jasper P. Corleis
Abstract: A method is provided for supporting an aircraft approaching a runway on an airfield. The method includes receiving a sequence of images of the airfield, captured by a camera onboard the aircraft approaching the runway. For at least one image of the sequence of images, the method includes applying the image(s) to a machine learning model trained to predict a pose of the aircraft relative to the runway. The machine learning model is configured to map the image(s) to the pose based on a training set of labeled images with respective ground truth poses of the aircraft relative to the runway. The pose is output as a current pose estimate of the aircraft relative to the runway for use in at least one of monitoring the current pose estimate, generating an alert based on the current pose estimate, or guidance or control of the aircraft on a final approach.
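The monitoring-and-alert use of the pose estimate can be sketched as a threshold check on the model's output. The pose layout (lateral offset, glideslope deviation, distance) and the threshold values here are illustrative assumptions, not values from the disclosure.

```python
def approach_alerts(pose, max_lateral_m=5.0, max_glide_dev_deg=0.7):
    # pose: (lateral_m, glide_dev_deg, distance_m) -- a hypothetical
    # layout for the current pose estimate relative to the runway.
    lateral_m, glide_dev_deg, _distance_m = pose
    alerts = []
    if abs(lateral_m) > max_lateral_m:
        alerts.append("lateral deviation")
    if abs(glide_dev_deg) > max_glide_dev_deg:
        alerts.append("glideslope deviation")
    return alerts
```

An empty list means the estimated pose stays within the monitored envelope; any entries would feed the alerting or guidance path described in the abstract.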
-
Publication No.: US20250069258A1
Publication Date: 2025-02-27
Application No.: US18771112
Filing Date: 2024-07-12
Applicant: SUBARU CORPORATION
Inventor: Takumi FUNABASHI , Yuichiroh TAMURA , Atsuki MUNAKATA
Abstract: An image processing apparatus includes an estimation circuit and a correction circuit. The estimation circuit is configured to estimate, based on captured image data including an image of a lane line that defines a traveling road, a position of the lane line on a road surface of the traveling road. The correction circuit is configured to correct, based on a height position of an imager that has generated the captured image data with respect to the road surface of the traveling road, the position of the lane line on the road surface of the traveling road estimated by the estimation circuit.
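Under a flat-ground pinhole model, ground-plane positions recovered from an image scale linearly with the camera's height above the road, so a height-based correction can be sketched as a simple rescaling. A minimal illustration under that assumed model; the function name and parameters are not from the disclosure.

```python
def correct_lane_position(est_xy, assumed_height_m, actual_height_m):
    # Ground-plane distances inferred from a pinhole camera scale
    # linearly with camera height, so correcting an estimate made
    # under an assumed height is a uniform rescale.
    s = actual_height_m / assumed_height_m
    return (est_xy[0] * s, est_xy[1] * s)
```

For example, a lane-line point estimated at 10 m ahead with an assumed 1.5 m camera height corrects to 8 m ahead if the imager actually sits at 1.2 m.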
-
Publication No.: US20250069257A1
Publication Date: 2025-02-27
Application No.: US18662406
Filing Date: 2024-05-13
Applicant: Samsung Electronics Co., Ltd.
Inventor: Ho-Ik CHOI , Yonggonjong PARK , Jaewoo LEE
IPC: G06T7/73 , B60W60/00 , G06V10/762 , G06V10/774 , G06V10/776 , G06V20/58
Abstract: A processor-implemented method including determining, from a first image frame, a first amodal region including a visible region in which a static landmark is visible and an occluded region in which the static landmark is occluded, calculating an occluded region confidence information for the occluded region in the first amodal region based on the first amodal region, determining a second amodal region corresponding to the static landmark from a second image frame temporally subsequent to the first image frame, calculating transformation information between the first image frame and the second image frame based on the first amodal region, the second amodal region, and the occluded region confidence information, and calculating localization information of an electronic device comprising the processor based on the transformation information.
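The confidence-weighted use of amodal regions when computing transformation information can be sketched, in a heavily simplified translation-only form, as a weighted alignment where low-confidence (occluded) landmark points contribute less. This is a stand-in illustration, not the claimed method; the full method computes richer transformation information.

```python
import numpy as np

def weighted_translation(pts_prev, pts_curr, confidence):
    # Estimate frame-to-frame translation as the shift between
    # confidence-weighted centroids of corresponding landmark points.
    w = np.asarray(confidence, float)
    w = w / w.sum()
    c_prev = (np.asarray(pts_prev, float) * w[:, None]).sum(axis=0)
    c_curr = (np.asarray(pts_curr, float) * w[:, None]).sum(axis=0)
    return c_curr - c_prev
```

Points inside an occluded region would carry low confidence, so an unreliable correspondence there barely perturbs the estimate.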
-
Publication No.: US20250067572A1
Publication Date: 2025-02-27
Application No.: US18947717
Filing Date: 2024-11-14
Applicant: Mobileye Vision Technologies Ltd.
Inventor: Yoav Taieb , Raz Cohen Maslaton , Maxim Schwartz , Kfir Viente
IPC: G01C21/00 , B60W10/18 , B60W30/18 , G01C21/36 , G06T7/00 , G06T7/246 , G06T7/32 , G06T7/70 , G06T7/73 , G06V10/46 , G06V20/56 , G06V20/58 , G06V20/64 , G06V40/10
Abstract: A system for correlating information collected from a plurality of vehicles relative to a common road segment is disclosed. The vehicle system includes at least one processor programmed to receive a first set of drive information from a first vehicle including first and second indicators of position associated with detected semantic and non-semantic road features; receive a second set of drive information from a second vehicle including third and fourth indicators of position associated with the detected semantic and non-semantic road features; correlate the first and second sets of drive information by determining a refined position of the detected semantic road feature based on the first and third indicators and a refined position of the detected non-semantic road feature based on the second and fourth indicators; store the refined positions of the detected semantic and non-semantic road features in a map; and distribute the map to one or more vehicles.
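Determining a refined position from multiple vehicles' position indicators can be sketched, in its simplest form, as a per-axis average of the reported indicators for the same feature. A minimal illustration only; the actual correlation method is not specified in the abstract.

```python
def refine_position(indicators):
    # Combine position indicators for the same road feature reported
    # by different vehicles into one refined per-axis average.
    n = len(indicators)
    dims = len(indicators[0])
    return tuple(sum(p[i] for p in indicators) / n for i in range(dims))
```

Averaging two vehicles' reports of the same lane-marking corner, say (1.0, 2.0) and (3.0, 4.0), yields the refined position (2.0, 3.0) to be stored in the map.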
-
Publication No.: US20250065360A1
Publication Date: 2025-02-27
Application No.: US18768956
Filing Date: 2024-07-10
Applicant: SEMES CO., LTD. , Samsung Electronics Co., Ltd.
Inventor: Jin Yeong Sung , Ki Hoon Choi , Seung Un Oh , Young Ho Park , Sang Hyeon Ryu , Jang Jin Lee , Hyun Yoon , Sang Gun Lee , Yu Jin Cho , Ho Jong Hwang , Jong Ju Park , Jong Keun Oh , Yong Woo Kim
IPC: B05C13/00 , G05B19/401 , G05B19/404 , G06T7/73
Abstract: A control device and a substrate processing apparatus including the same are provided. The substrate processing apparatus includes: a support unit including a spin head and configured to support and to rotate a substrate; a spraying unit configured to spray processing liquid onto the substrate; a correction unit in a swing arm, the correction unit configured to move to a target point on the substrate and to irradiate a beam when the processing liquid is sprayed onto the substrate; and a control unit configured to calculate the target point by converting image coordinates associated with a first coordinate system into image coordinates associated with a second coordinate system, the second coordinate system being based on rotation angles of the spin head and the swing arm.
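The coordinate-system conversion based on the two rotation angles can be sketched as composing a planar rotation from the spin-head and swing-arm angles and applying it to an image point. A minimal 2D illustration under an assumed composition; the actual transform in the disclosure may differ.

```python
import math

def to_second_frame(pt_xy, spin_angle_rad, arm_angle_rad):
    # Compose the spin-head and swing-arm rotations (assumed additive
    # about a shared axis) and rotate the point into the second frame.
    a = spin_angle_rad + arm_angle_rad
    c, s = math.cos(a), math.sin(a)
    x, y = pt_xy
    return (c * x - s * y, s * x + c * y)
```

Rotating the image point (1, 0) by a 90-degree spin-head angle with a zero arm angle lands it at (0, 1) in the second coordinate system.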
-
Publication No.: US12236631B2
Publication Date: 2025-02-25
Application No.: US17484601
Filing Date: 2021-09-24
Applicant: QUALCOMM Incorporated
Inventor: Pushkar Gorur Sheshagiri , Ajit Deepak Gupte , Chiranjib Choudhuri , Gerhard Reitmayr , Youngmin Park
Abstract: Systems and techniques are described herein for processing frames. The systems and techniques can be implemented by various types of systems, such as by an extended reality (XR) system or device. In some cases, a process can include obtaining feature information associated with a feature in a current frame, wherein the feature information is based on one or more previous frames; determining an estimated pose of the apparatus associated with the current frame; obtaining a distance associated with the feature in the current frame; and determining an estimated scale of the feature in the current frame based on the feature information associated with the feature, the estimated pose, and the distance associated with the feature.
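Estimating a feature's scale from its distance can be sketched with the standard pinhole relation: metric extent equals pixel extent times distance divided by focal length. A minimal illustration of that relation, not the claimed pipeline; names and units are assumptions.

```python
def estimate_scale(pixel_extent, distance_m, focal_length_px):
    # Pinhole relation: metric_extent = pixel_extent * distance / focal_length,
    # giving the feature's scale in meters from its apparent size in pixels.
    return pixel_extent * distance_m / focal_length_px
```

A feature spanning 100 px at 2 m distance, seen through a 500 px focal length, has an estimated metric extent of 0.4 m.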
-
Publication No.: US12236589B2
Publication Date: 2025-02-25
Application No.: US17693456
Filing Date: 2022-03-14
Applicant: QISDA CORPORATION
Inventor: Chuang-Wei Wu , Hung-Chih Chan , Lung-Kai Cheng
Abstract: A method for generating a three-dimensional image includes capturing a set of color images of an object, generating a first point cloud according to at least the set of color images, generating a second point cloud by performing a filtering operation on the first point cloud according to the set of color images, selectively performing a pairing operation using the second point cloud and a target point cloud to generate pose information, and combining the first point cloud and the target point cloud according to the pose information to update the target point cloud to generate the three-dimensional image of the object. The set of color images is related to color information of the object. The correlation of the second point cloud with rigid surfaces of the object is higher than its correlation with non-rigid surfaces.
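The combining step can be sketched as applying the pose (a rotation R and translation t) to the new point cloud and stacking it onto the target cloud. A minimal illustration of rigid-transform merging, assuming the pose is given as R and t; the pairing (registration) step that produces the pose is not shown.

```python
import numpy as np

def merge_clouds(target, source, R, t):
    # Transform the source cloud by the pose (p' = R p + t),
    # then append it to the target cloud.
    src = np.asarray(source, float) @ np.asarray(R, float).T + np.asarray(t, float)
    return np.vstack([np.asarray(target, float), src])
```

Repeating this per captured frame grows the target point cloud into the three-dimensional image of the object.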
-
Publication No.: US12236536B2
Publication Date: 2025-02-25
Application No.: US17486677
Filing Date: 2021-09-27
Applicant: Russell Todd Nevins , David Jon Backstein , Bradley H. Nathan
Inventor: Russell Todd Nevins , David Jon Backstein , Bradley H. Nathan
IPC: G06T19/00 , A61B34/10 , A61B34/20 , A61B90/00 , A61B90/50 , A61B90/92 , G06F3/03 , G06T7/73 , G06T19/20
Abstract: A system and method for determining a location for a surgical jig in a surgical procedure includes providing a mixed reality headset, a 3D spatial mapping camera, and a computer system configured to transfer data to and from the mixed reality headset and the 3D spatial mapping camera. The system and method also include attaching a jig to a bone, mapping the bone and jig using the 3D spatial mapping camera, and then identifying a location for the surgical procedure using the computer system. Then the system and method use the mixed reality headset to provide a visualization of the location for the surgical procedure.
-