-
Publication No.: US11884296B2
Publication Date: 2024-01-30
Application No.: US17128554
Filing Date: 2020-12-21
Applicant: QUALCOMM Incorporated
Inventor: Hee Jun Park , Abhinav Goel
CPC classification number: B60W60/0015 , B60W60/0013 , G06N3/002 , G06N3/042 , G06N3/045 , G06N7/046
Abstract: Embodiments include methods performed by a processor of a vehicle for allocating processing resources to concurrently-executing neural networks. The methods may include determining a priority of each of a plurality of neural networks executing on a vehicle processing system based on a contribution of each neural network to overall vehicle safety performance, and allocating computing resources to the plurality of neural networks based on the determined priority of each neural network. In some embodiments, the methods may dynamically adjust hyperparameters of one or more neural networks.
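One plausible reading of the allocation step is a proportional-share policy. The sketch below is an illustration only: the network names, the integer safety weights, and the proportional policy itself are assumptions, not details from the patent.

```python
def allocate_compute(safety_contribution, total_units):
    # Give each concurrently-executing network a share of the compute
    # budget proportional to its contribution to overall vehicle safety.
    total = sum(safety_contribution.values())
    return {name: total_units * c / total
            for name, c in safety_contribution.items()}

shares = allocate_compute(
    {"pedestrian_detector": 5, "lane_keeper": 3, "sign_reader": 2},
    total_units=100,
)
```

In a real system the priorities would be re-derived as driving conditions change, which is where the dynamic hyperparameter adjustment the abstract mentions would come in.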
-
Publication No.: US11783174B2
Publication Date: 2023-10-10
Application No.: US15971332
Filing Date: 2018-05-04
Applicant: Apple Inc.
Inventor: Christopher L. Mills
Abstract: Embodiments of the present disclosure relate to splitting input data into smaller units for loading into a data buffer and neural engines in a neural processor circuit for performing neural network operations. The input data of a large size is split into slices and each slice is again split into tiles. The tile is uploaded from an external source to a data buffer inside the neural processor circuit but outside the neural engines. Each tile is again split into work units sized for storing in an input buffer circuit inside each neural engine. The input data stored in the data buffer and the input buffer circuit is reused by the neural engines to reduce re-fetching of input data. Operations of splitting the input data are performed at various components of the neural processor circuit under the management of rasterizers provided in these components.
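The slice → tile → work-unit hierarchy can be mimicked in a few lines. The split sizes, the 2-D input, and the flat work-unit representation below are illustrative assumptions; the patent's actual dimensions and buffer layouts are not specified in the abstract.

```python
import numpy as np

def split_hierarchy(data, slice_rows, tile_cols, work_len):
    # Split the input into slices (row bands), each slice into tiles
    # (column blocks), and each tile into flat work units sized for a
    # per-engine input buffer.
    units = []
    for r in range(0, data.shape[0], slice_rows):
        slc = data[r:r + slice_rows]
        for c in range(0, slc.shape[1], tile_cols):
            tile = slc[:, c:c + tile_cols].ravel()
            for w in range(0, tile.size, work_len):
                units.append(tile[w:w + work_len])
    return units

units = split_hierarchy(np.arange(64).reshape(8, 8), 4, 4, 8)
```

Each level of the split maps to a different level of the memory hierarchy (external memory, shared data buffer, per-engine input buffer), which is what lets the data be reused instead of re-fetched.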
-
Publication No.: US11593637B2
Publication Date: 2023-02-28
Application No.: US16399928
Filing Date: 2019-04-30
Applicant: Samsung Electronics Co., Ltd.
Inventor: Chenchi Luo , Yuming Zhu , Hyejung Kim , John Seokjun Lee , Manish Goel
Abstract: A method, an electronic device, and computer readable medium are provided. The method includes receiving an input into a neural network that includes a kernel. The method also includes generating, during a convolution operation of the neural network, multiple panel matrices based on different portions of the input. The method additionally includes successively combining each of the multiple panel matrices with the kernel to generate an output. Generating the multiple panel matrices can include mapping elements within a moving window of the input onto columns of an indexing matrix, where a size of the window corresponds to the size of the kernel.
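The window-to-column mapping the abstract describes resembles the classic im2col transformation, sketched below under that assumption (the 4×4 input, 3×3 all-ones kernel, and unit stride are illustrative, not the patent's parameters).

```python
import numpy as np

def im2col(x, k):
    # Map each k-by-k moving window of x onto one column of an
    # indexing ("panel") matrix, so convolution becomes a single
    # matrix multiplication with the flattened kernel.
    h, w = x.shape
    cols = [x[i:i + k, j:j + k].ravel()
            for i in range(h - k + 1) for j in range(w - k + 1)]
    return np.stack(cols, axis=1)

x = np.arange(16.0).reshape(4, 4)
kernel = np.ones((3, 3))
out = kernel.ravel() @ im2col(x, 3)   # flattened 2x2 convolution output
```

Generating several smaller panel matrices and combining each with the kernel in turn, as the abstract describes, bounds the memory needed for the intermediate matrix.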
-
Publication No.: US20230045003A1
Publication Date: 2023-02-09
Application No.: US17876501
Filing Date: 2022-07-28
Applicant: ILLUMINA, INC.
Inventor: Chen CHEN , Hong GAO , Laksshman S. SUNDARAM , Kai-How FARH
IPC: G16H70/60 , G06N3/08 , G16B40/20 , G16B10/00 , G16B20/20 , G06K9/62 , G06N3/04 , G06N7/04 , G16B20/50 , G16B25/10 , G16B30/10 , G16B50/00
Abstract: The technology disclosed relates to a variant pathogenicity classifier. The variant pathogenicity classifier comprises memory and runtime logic. The memory stores (i) a reference amino acid sequence of a protein, (ii) an alternative amino acid sequence of the protein that contains a variant amino acid caused by a variant nucleotide, and (iii) a protein contact map of the protein. The runtime logic has access to the memory, and is configured to provide (i) the reference amino acid sequence, (ii) the alternative amino acid sequence, and (iii) the protein contact map as input to a first neural network, and to cause the first neural network to generate a pathogenicity indication of the variant amino acid as output in response to processing (i) the reference amino acid sequence, (ii) the alternative amino acid sequence, and (iii) the protein contact map.
-
Publication No.: US11574185B2
Publication Date: 2023-02-07
Application No.: US16665957
Filing Date: 2019-10-28
Applicant: SAMSUNG SDS CO., LTD.
Inventor: Jong-Won Choi , Young-Joon Choi , Ji-Hoon Kim , Byoung-Jip Kim , Seong-Won Bak
Abstract: A method for training a deep neural network according to an embodiment includes training a deep neural network model using a first data set including a plurality of labeled data and a second data set including a plurality of unlabeled data, assigning a ground-truth label value to some of the plurality of unlabeled data, updating the first data set and the second data set such that the data to which the ground-truth label value is assigned is included in the first data set, and further training the deep neural network model using the updated first data set and the updated second data set.
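The label-and-retrain loop can be illustrated with a toy round of confidence-gated labeling. Everything below is a stand-in: a 1-D nearest-centroid "model" replaces the deep network, the margin threshold is arbitrary, and the abstract's ground-truth labels may in practice come from a human or oracle rather than from the model itself.

```python
def labeling_round(labeled, unlabeled, margin=0.9):
    # Stand-in "model": one centroid per class, fit from labeled pairs.
    groups = {}
    for x, y in labeled:
        groups.setdefault(y, []).append(x)
    centroids = {y: sum(v) / len(v) for y, v in groups.items()}

    still_unlabeled = []
    for x in unlabeled:
        # Confidence proxy: gap between the two nearest class centroids.
        dists = sorted((abs(x - c), y) for y, c in centroids.items())
        if len(dists) > 1 and dists[1][0] - dists[0][0] >= margin:
            labeled.append((x, dists[0][1]))   # move into the labeled set
        else:
            still_unlabeled.append(x)          # stays in the unlabeled set
    return labeled, still_unlabeled

labeled = [(0.0, "a"), (1.0, "a"), (9.0, "b"), (10.0, "b")]
labeled, pending = labeling_round(labeled, [0.4, 5.2, 9.6])
```

After the two data sets are updated this way, the model is trained again on both, as the abstract describes.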
-
Publication No.: US20220398479A1
Publication Date: 2022-12-15
Application No.: US17346913
Filing Date: 2021-06-14
Applicant: International Business Machines Corporation
Inventor: Radu Marinescu
Abstract: In an approach for reasoning with real-valued propositional logic, a processor receives a set of propositional logic formulae, a set of intervals representing upper and lower bounds on truth values of a set of atomic propositions in the set of propositional logic formulae, and a query. A processor generates a logical neural network based on the set of propositional logic formulae and the set of intervals representing upper and lower bounds on truth values. A processor generates a credal network with the same structure as the logical neural network. A processor runs probabilistic inference on the credal network to compute a conditional probability based on the query. A processor outputs the conditional probability as an answer to the query.
-
Publication No.: US11423284B2
Publication Date: 2022-08-23
Application No.: US16380788
Filing Date: 2019-04-10
Applicant: Black Sesame International Holding Limited
Inventor: Xiangdong Jin , Fen Zhou , Chengyu Xiong
IPC: G06N3/04 , G06N3/08 , G06N3/10 , G06N7/00 , G06F16/90 , G06F17/16 , G06F17/15 , G06F30/18 , G06F30/27 , G06F30/3308 , G06F30/392 , G06F16/901 , G06N7/04
Abstract: A method of subgraph tile fusion in a convolutional neural network, including partitioning a network into at least one subgraph node, determining a layer order of at least one layer of the at least one subgraph node, determining an input layer of the at least one subgraph node, determining a weight layer of the at least one subgraph node, determining an output layer of the at least one subgraph node, and fusing the at least one subgraph node, the input layer, the weight layer, and the output layer in the layer order.
-
Publication No.: US11087231B2
Publication Date: 2021-08-10
Application No.: US16708370
Filing Date: 2019-12-09
Applicant: RESEARCH NOW GROUP, LLC
Inventor: Melanie D. Courtright , Vincent P. Derobertis , Michael D. Bigby , William C. Robinson , Greg Ellis , Heidi D. E. Wilton , John R. Rothwell , Jeremy S. Antoniuk
Abstract: This disclosure is directed to an apparatus for intelligent matching of disparate input data received from disparate input data systems in a complex computing network for establishing targeted communication to a computing device associated with the intelligently matched disparate input data.
-
Publication No.: US10373066B2
Publication Date: 2019-08-06
Application No.: US14069362
Filing Date: 2013-10-31
Applicant: Model N, Inc.
Inventor: Manfred Hettenkofer , Eric Burin des Roziers , John Ellithorpe
Abstract: Various implementations for simplified product configuration using table-based rule editing, rule conflict resolution through voting, and efficient model compilation are described. In one example implementation, a rule definition table is provided for presentation to a user. One or more inputs defining a rule for a model using the rule definition table are received. The rule is compiled into a compiled rule that is executable during evaluation of the model, and the model is evaluated based on the compiled rule. Numerous additional implementations are also described.
-
Publication No.: US10354183B2
Publication Date: 2019-07-16
Application No.: US14537857
Filing Date: 2014-11-10
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Charles J. Alpert , Pallab Datta , Myron D. Flickner , Zhuo Li , Dharmendra S. Modha , Gi-Joon Nam
Abstract: Embodiments of the present invention relate to meeting latency constraints in a multi-core neurosynaptic network. In one embodiment of the present invention, a method of and computer program product for power-driven synthesis under latency constraints is provided. Power consumption of a neurosynaptic network is modeled as wire length. The neurosynaptic network comprises a plurality of neurosynaptic cores. Each of the plurality of neurosynaptic cores is modeled as a node in a placement graph. The graph has a plurality of edges. A weight is assigned to each of the plurality of edges based on a spike frequency. An arrangement of the neurosynaptic cores is determined. The arrangement comprises a length of each of the plurality of edges. A maximum length is compared to the length of each of the plurality of edges. The weight of at least one of the plurality of edges is increased where the length is greater than the maximum length.
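The final step of the flow above, raising the weight of edges that violate the latency bound, can be sketched as follows. The core names, the dict-of-edges representation, and the doubling factor are assumptions for illustration.

```python
def reweight_edges(weights, lengths, max_length, factor=2.0):
    # Increase the weight of each placement-graph edge whose length
    # exceeds the latency-driven maximum, so the next placement
    # iteration pulls the two neurosynaptic cores it connects closer.
    return {e: w * factor if lengths[e] > max_length else w
            for e, w in weights.items()}

weights = {("core0", "core1"): 1.0, ("core1", "core2"): 1.5}
lengths = {("core0", "core1"): 3.0, ("core1", "core2"): 7.0}
updated = reweight_edges(weights, lengths, max_length=5.0)
```

Iterating placement with these updated weights trades a little extra wire length (power, in the abstract's model) on short edges for meeting the latency constraint on long ones.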