-
Publication No.: US12128927B2
Publication Date: 2024-10-29
Application No.: US16729589
Filing Date: 2019-12-30
Applicant: Cortica Ltd.
Inventors: Igal Raichelgauz, Karina Odinaev
IPC Classification: B60W60/00, B60W30/09, B60W30/095, B60W40/02, B60W40/06, B60W40/08, G05D1/00, G06F18/23, G06F18/23213, G06V10/20, G06V10/44, G06V20/58, G06V20/59, G08G1/056, G08G1/16, B60W50/14, G05B13/02, G06F18/22, G06F18/2431, G06N3/042, G06N3/08, G06N5/04, G06N20/00, G06V10/75, G06V20/56, G07C5/02, G08G1/048, G08G1/0962, H04W4/46
CPC Classification: B60W60/0025, G06F18/23, G06F18/23213, G06V10/255, G06V10/454, G06V20/58, G06V20/597, G08G1/056, G08G1/162, G08G1/165, G08G1/166, B60W30/09, B60W30/0956, B60W40/02, B60W40/06, B60W40/08, B60W2040/0872, B60W50/14, B60W60/0011, B60W60/0016, B60W60/0017, B60W60/0051, B60W2540/10, B60W2540/12, B60W2540/18, B60W2554/00, B60W2554/4023, B60W2554/4046, B60W2556/65, G05B13/029, G05D1/0044, G05D1/0061, G05D1/0088, G05D1/0214, G05D1/0293, G06F18/22, G06F18/2431, G06N3/042, G06N3/08, G06N5/04, G06N20/00, G06T2207/30261, G06V10/759, G06V20/56, G06V20/584, G07C5/02, G08G1/048, G08G1/09626, H04W4/46
Abstract: A method for situation-aware processing may include: detecting a situation based on first sensed information sensed during a first period; selecting, from reference information, a situation-related subset of the reference information, the subset being related to the detected situation; and performing, by a situation-related processing unit, situation-related processing that is based on the situation-related subset of the reference information and on second sensed information sensed during a second period, wherein the situation-related processing comprises at least one of object detection and object behavior estimation.
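To make the flow concrete, here is a minimal Python sketch of the selection step described in this abstract. It is not the patented implementation; all names (REFERENCE_DB, detect_situation, the fake detector) are illustrative placeholders.

```python
# Minimal sketch of situation-aware processing: the detected situation keys a
# subset of reference information, which then constrains object detection.
# All names below are illustrative placeholders, not the patent's components.

REFERENCE_DB = {
    "highway_night": {"expected_objects": ["car", "truck"], "min_confidence": 0.6},
    "urban_day":     {"expected_objects": ["car", "pedestrian", "cyclist"], "min_confidence": 0.4},
}

def detect_situation(first_sensed_info):
    """Placeholder: classify the driving situation from first-period sensor data."""
    return "urban_day" if first_sensed_info.get("ambient_light", 0) > 0.5 else "highway_night"

def situation_aware_processing(first_sensed_info, second_sensed_info, detector):
    situation = detect_situation(first_sensed_info)       # based on the first period
    subset = REFERENCE_DB[situation]                       # situation-related reference subset
    detections = detector(second_sensed_info)              # based on the second period
    # Keep only detections consistent with the situation-related reference subset.
    return [d for d in detections
            if d["label"] in subset["expected_objects"]
            and d["score"] >= subset["min_confidence"]]

if __name__ == "__main__":
    fake_detector = lambda frame: [{"label": "pedestrian", "score": 0.7},
                                   {"label": "deer", "score": 0.9}]
    print(situation_aware_processing({"ambient_light": 0.8}, {"frame": None}, fake_detector))
```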
-
Publication No.: US12086992B2
Publication Date: 2024-09-10
Application No.: US17341140
Filing Date: 2021-06-07
Inventors: Fukashi Yamazaki, Daisuke Furukawa
IPC Classification: G06T7/174, G06F18/2431, G06N3/04, G06N3/08, G06T7/11, G06T7/149, G06T7/162, G06T7/187, G06T7/194, G06V10/25, G06V10/26, G06V10/44, G06V10/762, G06V10/774, G06V10/82, G06V20/64, G16H30/40
CPC Classification: G06T7/174, G06F18/2431, G06N3/04, G06N3/08, G06T7/11, G06T7/149, G06T7/162, G06T7/187, G06T7/194, G06V10/25, G06V10/26, G06V10/454, G06V10/7635, G06V10/774, G06V10/82, G06V20/64, G16H30/40, G06T2200/04, G06T2207/10081, G06T2207/20072, G06T2207/20081, G06T2207/20084, G06T2207/20116, G06T2207/20161, G06T2207/30056, G06T2207/30084, G06V2201/031
Abstract: An image processing apparatus according to the present invention includes a first classification unit configured to classify a plurality of pixels in two-dimensional image data constituting first three-dimensional image data including an object into a first class group by using a trained classifier, and a second classification unit configured to classify a plurality of pixels in second three-dimensional image data including the object into a second class group based on a result of classification by the first classification unit, the second class group including at least one class of the first class group. With this image processing apparatus, the user's burden of giving pixel information can be reduced and a region can be extracted with high accuracy.
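A toy sketch of the two-stage idea follows, under the assumption that the first-stage result is used as a spatial prior when classifying the second volume; the "classifiers" here are simple thresholds standing in for the trained models, not the patented units.

```python
import numpy as np

# Illustrative two-stage pixel classification: a (stand-in) 2-D slice classifier
# produces a first class map, and a second stage uses that map as a prior when
# classifying a second 3-D volume. Thresholds are placeholders, not trained models.

def first_stage_classify(volume_a):
    """Apply a placeholder 2-D classifier slice by slice (class 1 = 'object')."""
    labels = np.zeros(volume_a.shape, dtype=np.int32)
    for z in range(volume_a.shape[0]):
        labels[z] = (volume_a[z] > 0.5).astype(np.int32)
    return labels

def second_stage_classify(volume_b, first_labels, margin=0.1):
    """Classify a second volume, using the first-stage result as a spatial prior."""
    prior = first_labels.astype(bool)
    strong = volume_b > 0.5 + margin   # stronger evidence required outside the prior
    weak = volume_b > 0.5 - margin     # weaker evidence suffices inside the prior
    return np.where(prior, weak, strong).astype(np.int32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vol_a, vol_b = rng.random((4, 8, 8)), rng.random((4, 8, 8))
    second = second_stage_classify(vol_b, first_stage_classify(vol_a))
    print(second.shape, int(second.sum()))
```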
-
Publication No.: US12086717B2
Publication Date: 2024-09-10
Application No.: US18316474
Filing Date: 2023-05-12
Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
IPC Classification: G06N3/082, G02B5/18, G02B27/42, G06F18/214, G06F18/2431, G06N3/04, G06N3/08, G06V10/94
CPC Classification: G06N3/082, G02B5/1866, G02B27/4205, G02B27/4277, G06F18/214, G06F18/2431, G06N3/04, G06N3/08, G06V10/95
Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. The architecture was confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and the function of an imaging lens at terahertz wavelengths. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection, and object classification, while also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front end in conjunction with a trained digital neural network back end.
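As background for how such a cascade of passive layers is usually modeled, the sketch below runs a generic angular-spectrum simulation of a stack of phase-only layers. The layer phases are random here rather than learned, and the wavelength and spacing values are illustrative terahertz-scale assumptions, not the patent's trained 3D-printed design.

```python
import numpy as np

# Toy forward pass of a diffractive layer stack: each passive layer applies a
# phase modulation, followed by angular-spectrum free-space propagation.
# Random phases stand in for the learned layer design.

def propagate(field, wavelength, dz, dx):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * dz) * (arg > 0)          # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def d2nn_forward(input_field, phase_layers, wavelength=0.75e-3, dz=3e-3, dx=0.4e-3):
    field = input_field.astype(complex)
    for phase in phase_layers:                     # passive layers modulate only the phase
        field = propagate(field, wavelength, dz, dx) * np.exp(1j * phase)
    return np.abs(propagate(field, wavelength, dz, dx)) ** 2   # detector-plane intensity

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    layers = [rng.uniform(0, 2 * np.pi, (64, 64)) for _ in range(5)]
    intensity = d2nn_forward(np.ones((64, 64)), layers)
    print(intensity.shape, float(intensity.sum()))
```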
-
Publication No.: US20240282434A1
Publication Date: 2024-08-22
Application No.: US18650347
Filing Date: 2024-04-30
Inventors: Min Chul KIM, Chang Min PARK, Eui Jin HWANG
IPC Classification: G16H30/40, A61B6/00, A61B6/46, A61B34/10, A61M1/04, G06F18/2431, G06N20/00, G06T7/00, G06T7/11, G06T7/70, G06V10/25, G06V10/764, G06V10/82, G16H50/20
CPC Classification: G16H30/40, A61B6/5217, A61B34/10, A61M1/04, G06F18/2431, G06N20/00, G06T7/0012, G06T7/11, G06T7/70, G06V10/25, G06V10/764, G06V10/82, G16H50/20, A61B6/461, A61B2034/107, G06T2207/20081, G06T2207/30012, G06T2207/30061, G06V2201/03
Abstract: Some embodiments of the present disclosure provide a pneumothorax detection method performed by a computing device. The method may comprise: obtaining predicted pneumothorax information, predicted tube information, and a predicted spinal baseline with respect to an input image from a trained pneumothorax prediction model; determining at least one pneumothorax representative position for the predicted pneumothorax information and at least one tube representative position for the predicted tube information in a prediction image in which the predicted pneumothorax information and the predicted tube information are displayed; dividing the prediction image into a first region and a second region along the predicted spinal baseline; and determining, from among the first region and the second region, the region in which the at least one pneumothorax representative position and the at least one tube representative position exist.
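One way to picture the region-assignment step is the small geometric sketch below. It assumes the spinal baseline can be approximated by a straight line through two points; the prediction model itself is out of scope, and the coordinates are hypothetical.

```python
# Sketch of the region-assignment step: the predicted spinal baseline splits the
# prediction image into two regions, and each representative position is assigned
# to one of them. The baseline is simplified to a line through two points.

def side_of_line(point, p1, p2):
    """Return 'first' or 'second' depending on which side of line p1->p2 the point lies."""
    (x, y), (x1, y1), (x2, y2) = point, p1, p2
    cross = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
    return "first" if cross >= 0 else "second"

def assign_regions(pneumothorax_positions, tube_positions, baseline):
    p1, p2 = baseline
    return {
        "pneumothorax": {p: side_of_line(p, p1, p2) for p in pneumothorax_positions},
        "tube": {p: side_of_line(p, p1, p2) for p in tube_positions},
    }

if __name__ == "__main__":
    baseline = ((256, 0), (256, 512))          # roughly vertical spinal baseline
    print(assign_regions([(100, 200)], [(400, 300)], baseline))
```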
-
Publication No.: US12062246B2
Publication Date: 2024-08-13
Application No.: US17490770
Filing Date: 2021-09-30
Inventor: Tim Prebble
IPC Classification: G06V30/18, G06F18/2431, G06T7/11, G06T7/13, G06T11/00
CPC Classification: G06V30/18, G06F18/2431, G06T7/11, G06T7/13, G06T11/00
Abstract: A method for extracting text from an input image and generating a document includes: generating an edges mask from the input image; generating an edges image derived from the edges mask; identifying, within the edges mask, one or more probable text areas; extracting a first set of text characters by performing a first optical character recognition (OCR) operation on each of one or more probable text portions of the derived edges image corresponding to the probable text areas; generating a modified image by erasing, from the input image, image characters corresponding to the first set of text characters extracted by the first OCR operation; and generating a document by overlaying the extracted first set of text characters on the modified image.
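As a rough analogue of this pipeline, the sketch below chains off-the-shelf OpenCV and Tesseract calls rather than the patented method: build an edges mask, find probable text areas, OCR each area, erase the original characters, and overlay the recognized text. The file names and thresholds are hypothetical.

```python
import cv2
import numpy as np
import pytesseract

# Rough analogue of the abstract's pipeline using OpenCV + Tesseract.

def extract_and_overlay(input_path, output_path):
    image = cv2.imread(input_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                              # edges mask
    dilated = cv2.dilate(edges, np.ones((3, 15), np.uint8))       # merge characters into blocks
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w < 20 or h < 8:                                       # skip unlikely text areas
            continue
        text = " ".join(pytesseract.image_to_string(gray[y:y + h, x:x + w]).split())
        if not text:
            continue
        image[y:y + h, x:x + w] = 255                             # erase the original characters
        cv2.putText(image, text, (x, y + h), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (0, 0, 0), 1, cv2.LINE_AA)               # overlay extracted text
    cv2.imwrite(output_path, image)

if __name__ == "__main__":
    extract_and_overlay("scan.png", "scan_with_text.png")         # hypothetical file names
```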
-
Publication No.: US12062184B2
Publication Date: 2024-08-13
Application No.: US17485535
Filing Date: 2021-09-27
Applicant: FUJIFILM CORPORATION
Inventor: Takashi Wakui
IPC Classification: G06T7/149, G06F18/2431, G06N20/00, G06T7/11
CPC Classification: G06T7/149, G06F18/2431, G06N20/00, G06T7/11, G06T2207/20081
Abstract: An image processing apparatus includes an extraction unit and a setting unit. In an annotation image given as learning data to a machine learning model that performs semantic segmentation, in which a plurality of classes in an image are discriminated on a per-pixel basis, the extraction unit extracts, from among a plurality of designated regions in which class labels are designated, complicated regions: regions that constitute at least a part of the designated regions and have relatively complicated contours. The setting unit sets additional labels for the complicated regions, separately from the labels originally designated for the annotation image.
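One possible reading of the extraction and setting units is sketched below, using a perimeter-squared-over-area ratio as a stand-in measure of "complicated contours"; the measure and the threshold are illustrative assumptions, not what the patent specifies.

```python
import cv2
import numpy as np

# Sketch: regions whose contours look relatively complicated (high isoperimetric
# ratio here, one possible proxy) receive an additional label alongside their
# original class label. Threshold and complexity measure are illustrative.

def add_complexity_labels(annotation, complexity_threshold=16.0):
    extra = np.zeros_like(annotation)
    for cls in np.unique(annotation):
        if cls == 0:                      # 0 = background
            continue
        mask = (annotation == cls).astype(np.uint8)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        for c in contours:
            area = cv2.contourArea(c)
            if area == 0:
                continue
            complexity = cv2.arcLength(c, True) ** 2 / area   # isoperimetric ratio
            if complexity > complexity_threshold:
                region = np.zeros_like(mask)
                cv2.drawContours(region, [c], -1, 1, thickness=-1)
                extra[region == 1] = cls                      # additional "complicated" label
    return extra

if __name__ == "__main__":
    ann = np.zeros((64, 64), np.uint8)
    cv2.circle(ann, (20, 20), 10, 1, -1)                      # class 1 region
    cv2.rectangle(ann, (40, 10), (60, 50), 2, -1)             # class 2 region
    print(np.unique(add_complexity_labels(ann)))
```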
-
Publication No.: US12057232B2
Publication Date: 2024-08-06
Application No.: US17295248
Filing Date: 2019-12-05
Inventors: Gari Clifford, Ayse Cakmak, Amit Shah, Erik Reinertsen
Abstract: Methods and systems are provided for monitoring sensor data that is processed by machine-learning models to generate event predictions estimating the risk of a medical event. An electronic device or wearable smart device may monitor the output of various sensors to collect data related to a person's activity level, location changes, and communications, and may use this information as input to a personalized trained machine-learning model to predict the likelihood of an event.
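A compact sketch of such a monitoring pipeline is shown below, with a logistic-regression model and a three-feature daily summary standing in for the personalized model and sensor features described above; the data is synthetic and the feature choices are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch: daily features from activity, location-change, and communication data
# feed a per-person model that outputs an event-risk estimate. The feature set
# and the logistic model are illustrative stand-ins.

def daily_features(steps, places_visited, calls_made):
    return np.array([np.sum(steps), len(set(places_visited)), len(calls_made)], float)

def train_personal_model(feature_rows, event_labels):
    return LogisticRegression().fit(np.array(feature_rows), np.array(event_labels))

def predict_event_risk(model, steps, places_visited, calls_made):
    return float(model.predict_proba(daily_features(steps, places_visited, calls_made)[None])[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = [daily_features(rng.integers(0, 500, 24), rng.integers(0, 5, 8), range(rng.integers(0, 6)))
         for _ in range(40)]
    y = rng.integers(0, 2, 40)                 # synthetic "event occurred" labels
    model = train_personal_model(X, y)
    print(predict_event_risk(model, rng.integers(0, 500, 24), [1, 2, 3], [0, 1]))
```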
-
Publication No.: US12056623B2
Publication Date: 2024-08-06
Application No.: US17670189
Filing Date: 2022-02-11
Applicant: NETRADYNE, INC.
IPC Classification: G06N5/04, G06F8/65, G06F18/2431, G06F18/25, G06N3/08, G06N5/043, G06N20/00, G06V30/24, H04L67/00, G06N3/044, G06N7/01
CPC Classification: G06N5/04, G06F8/65, G06F18/2431, G06F18/25, G06N3/08, G06N5/043, G06N20/00, G06V30/2504, H04L67/34, G06N3/044, G06N7/01
Abstract: Methods and systems for joint processing for data inference in a vehicle-to-cloud communication system are provided. The method includes processing sensor data from a first sensor in a first vehicle using a first model at a first device, resulting in first inference data. First communication data derived from the first sensor data is sent to a cloud device, where it undergoes further processing using a second model to generate cloud inference data. Subsequently, cloud communication data based on the cloud inference data is sent from the cloud device to a second device in a second vehicle.
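The message flow can be sketched as follows, with in-process queues standing in for the vehicle-to-cloud links and trivial callables standing in for the first (edge) and second (cloud) models; none of the names or payloads come from the patent.

```python
import queue

# Schematic sketch of the joint-processing flow: edge inference in a first
# vehicle, further processing in the cloud, and delivery to a second vehicle.

uplink, downlink = queue.Queue(), queue.Queue()

def first_vehicle_step(sensor_data, edge_model):
    first_inference = edge_model(sensor_data)             # inference on the first device
    uplink.put({"summary": first_inference})               # first communication data

def cloud_step(cloud_model):
    message = uplink.get()
    cloud_inference = cloud_model(message["summary"])      # further processing in the cloud
    downlink.put({"alert": cloud_inference})                # cloud communication data

def second_vehicle_step():
    return downlink.get()                                   # consumed by the second vehicle

if __name__ == "__main__":
    edge_model = lambda frame: {"object": "pedestrian", "confidence": 0.62}
    cloud_model = lambda summary: "brake_warning" if summary["confidence"] > 0.5 else "no_action"
    first_vehicle_step({"frame": None}, edge_model)
    cloud_step(cloud_model)
    print(second_vehicle_step())
```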
-
Publication No.: US12050671B2
Publication Date: 2024-07-30
Application No.: US17858775
Filing Date: 2022-07-06
IPC Classification: G06F21/16, G06F18/2431, G06N3/08
CPC Classification: G06F21/16, G06F18/2431, G06N3/08
Abstract: Disclosed herein is a system for watermarking a neural network, comprising a memory and at least one processor in communication with the memory. The memory stores instructions for causing the at least one processor to carry out a method comprising: generating a trigger set by obtaining examples from a training set by random sampling, respective examples being associated with respective true classes of a plurality of classes; generating a set of adversarial examples by structured perturbation of the examples; generating, for each adversarial example, one or more adversarial class labels by passing the adversarial example to the neural network; applying one or more trigger labels to each adversarial example, wherein the one or more trigger labels are selected randomly from the plurality of classes and each trigger label is neither the true class label for the corresponding example nor an adversarial class label for the corresponding adversarial example; storing the adversarial examples and corresponding trigger labels as the trigger set; and performing a tuning process that adjusts parameters at each layer of the neural network using the trigger set, thereby generating a watermarked neural network.
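A condensed PyTorch sketch of trigger-set construction and tuning is given below, using FGSM as one possible "structured perturbation". The model size, epsilon, and fine-tuning schedule are illustrative assumptions, not the patented procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch: build a trigger set from adversarial examples whose trigger labels are
# neither the true class nor the adversarial class, then fine-tune all layers on it.

def build_trigger_set(model, x, y_true, num_classes, eps=0.1):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y_true).backward()
    x_adv = (x + eps * x.grad.sign()).detach()                 # structured perturbation (FGSM)
    y_adv = model(x_adv).argmax(dim=1)                         # adversarial class labels
    triggers = []
    for t, a in zip(y_true.tolist(), y_adv.tolist()):
        allowed = [c for c in range(num_classes) if c not in (t, a)]
        triggers.append(allowed[torch.randint(len(allowed), (1,)).item()])
    return x_adv, torch.tensor(triggers)

def embed_watermark(model, x_adv, y_trigger, steps=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)          # tunes every layer
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y_trigger).backward()
        opt.step()
    return model

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
    x, y = torch.randn(16, 20), torch.randint(0, 5, (16,))
    x_adv, y_trig = build_trigger_set(model, x, y, num_classes=5)
    embed_watermark(model, x_adv, y_trig)
    print((model(x_adv).argmax(1) == y_trig).float().mean().item())  # trigger accuracy
```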
-
Publication No.: US12039443B2
Publication Date: 2024-07-16
Application No.: US18045722
Filing Date: 2022-10-11
Applicant: Google LLC
Inventors: Sercan Omer Arik, Chen Xing, Zizhao Zhang, Tomas Jon Pfister
IPC Classification: G06N3/08, G06F18/214, G06F18/2413, G06F18/2431, G06N3/04
CPC Classification: G06N3/08, G06F18/2148, G06F18/2413, G06F18/2431, G06N3/04
Abstract: A method includes receiving a training data set including a plurality of training data subsets. From two or more training data subsets in the training data set, the method includes selecting a support set of training examples and a query set of training examples. The method includes determining, using the classification model, a centroid value for each respective class. For each training example in the query set of training examples, the method includes generating, using the classification model, a query encoding, determining a class distance measure, determining a ground-truth distance, and updating parameters of the classification model. For each training example in the query set identified as being misclassified, the method further includes generating a standard deviation value, sampling a new query encoding, and updating parameters of the confidence model based on the new query encoding.
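The geometry of one such episode (per-class centroids, distance-based classification of queries, and Gaussian resampling of misclassified queries) can be sketched as follows. The encoder, the confidence model, and the gradient updates are omitted, and the standard deviation is a fixed placeholder for the confidence model's output.

```python
import numpy as np

# Episode sketch: centroids from a support set, distance-based classification of
# a query set, and resampling of query encodings for misclassified examples.

def class_centroids(support_encodings, support_labels):
    classes = np.unique(support_labels)
    return classes, np.stack([support_encodings[support_labels == c].mean(0) for c in classes])

def classify_queries(query_encodings, classes, centroids):
    dists = np.linalg.norm(query_encodings[:, None, :] - centroids[None], axis=-1)
    return classes[dists.argmin(1)], dists

def resample_misclassified(query_encodings, predictions, labels, std=0.5, rng=None):
    rng = rng or np.random.default_rng()
    wrong = predictions != labels
    new_queries = query_encodings.copy()
    new_queries[wrong] += rng.normal(0.0, std, size=new_queries[wrong].shape)
    return new_queries, wrong

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    support_x, support_y = rng.normal(size=(10, 4)), np.repeat([0, 1], 5)
    query_x, query_y = rng.normal(size=(6, 4)), np.repeat([0, 1], 3)
    classes, centroids = class_centroids(support_x, support_y)
    preds, _ = classify_queries(query_x, classes, centroids)
    new_q, wrong = resample_misclassified(query_x, preds, query_y, rng=rng)
    print(int(wrong.sum()), "queries resampled")
```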