-
Publication No.: US11386318B2
Publication Date: 2022-07-12
Application No.: US15399450
Filing Date: 2017-01-05
Applicant: EVOLV TECHNOLOGY SOLUTIONS, INC.
Inventor: Neil Iscoe , Risto Miikkulainen
IPC: G06N3/04 , G06F16/26 , G06F16/23 , G06F16/958 , G06Q30/02 , G06F40/143 , G06N3/08 , G06F3/0484 , G06F11/36 , G06N3/12 , G06F9/451 , G06F8/36 , G06N3/06
Abstract: Roughly described, the technology disclosed provides a so-called machine learned conversion optimization (MLCO) system that uses evolutionary computations to efficiently identify the most successful webpage designs in a search space without testing all possible webpage designs in the search space. The search space is defined based on webpage designs provided by marketers. Website funnels with a single webpage or multiple webpages are represented as genomes. Genomes identify different dimensions and dimension values of the funnels. The genomes are subjected to evolutionary operations like initialization, testing, competition, and procreation to identify parent genomes that perform well and offspring genomes that are likely to perform well. Each webpage is tested only to the extent that it is possible to decide whether it is promising, i.e., whether it should serve as a parent for the next generation or should be discarded.
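The evolutionary loop the abstract describes (initialize, test, compete, procreate) can be sketched in a few lines. This is a hedged toy illustration, not the patented MLCO system: the two page dimensions, the population sizes, and the fitness function standing in for live conversion testing are all illustrative assumptions.

```python
import random

DIMENSIONS = {
    "headline": ["A", "B", "C"],
    "button_color": ["red", "green"],
}

def init_population(n, rng):
    # each genome assigns one value per page dimension
    return [{d: rng.choice(v) for d, v in DIMENSIONS.items()} for _ in range(n)]

def procreate(parents, n, rng):
    # uniform crossover between two random parents per offspring
    children = []
    for _ in range(n):
        a, b = rng.sample(parents, 2)
        children.append({d: rng.choice([a[d], b[d]]) for d in DIMENSIONS})
    return children

def evolve(fitness, generations=5, pop_size=8, seed=0):
    rng = random.Random(seed)
    pop = init_population(pop_size, rng)
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]   # competition: keep the promising half
        pop = parents + procreate(parents, pop_size - len(parents), rng)
    return max(pop, key=fitness)

# toy fitness: pretend conversions peak at headline "B" with a green button
best = evolve(lambda g: (g["headline"] == "B") + (g["button_color"] == "green"))
```

In the real system the fitness of a genome would come from live traffic metrics rather than a fixed function.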
-
Publication No.: US11354133B2
Publication Date: 2022-06-07
Application No.: US16663210
Filing Date: 2019-10-24
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventor: Shaoli Liu , Tianshi Chen , Bingrui Wang , Yao Zhang
Abstract: A matrix-multiplying-vector operation method and a processing device for performing the same are provided. The matrix-multiplying-vector operation method includes distributing, by a main processing circuit, basic data blocks of the matrix and broadcasting the vector to a plurality of the basic processing circuits. That way, the basic processing circuits can perform inner-product operations between the basic data blocks and the broadcasted vector in parallel. The results are then provided back to the main processing circuit for combining. The technical solutions proposed by the present disclosure provide short operation time and low energy consumption.
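The distribute/broadcast/combine flow can be mimicked in NumPy. This is a minimal sketch of the dataflow, not the patented circuit: the "main" step splits the matrix into row blocks (the basic data blocks), the vector is "broadcast" to every block's computation, and the partial inner products are concatenated. The block count is an illustrative choice.

```python
import numpy as np

def matvec_distributed(matrix, vector, n_blocks=4):
    blocks = np.array_split(matrix, n_blocks, axis=0)   # distribute row blocks
    partials = [blk @ vector for blk in blocks]         # inner products, conceptually in parallel
    return np.concatenate(partials)                     # combine on the main side

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 5))
v = rng.standard_normal(5)
assert np.allclose(matvec_distributed(M, v), M @ v)
```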
-
Publication No.: US11347516B2
Publication Date: 2022-05-31
Application No.: US16663205
Filing Date: 2019-10-24
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventor: Shaoli Liu , Tianshi Chen , Bingrui Wang , Yao Zhang
Abstract: A fully connected operation method and a processing device for performing the same are provided. The fully connected operation method designates distribution data and broadcast data. The distribution data is divided into basic data blocks and distributed to parallel processing units, and the broadcast data is broadcasted to the parallel processing units. Operations between the basic data blocks and the broadcasted data are carried out by the parallel processing units before the results are returned to a main unit for further processing. The technical solutions disclosed by the present disclosure provide short operation time and low energy consumption.
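For a fully connected layer, the weight matrix plays the role of the distribution data and the input vector the broadcast data. The sketch below tiles the matrix into 2-D basic blocks over an assumed 2×2 grid of units and sums each row of partial products; the grid shape and layout are illustrative assumptions, not the disclosed hardware mapping.

```python
import numpy as np

def fully_connected(weights, x, grid=(2, 2)):
    row_blocks = np.array_split(weights, grid[0], axis=0)     # distribute row tiles
    partial_rows = []
    for rb in row_blocks:
        col_blocks = np.array_split(rb, grid[1], axis=1)      # distribute column tiles
        x_slices = np.array_split(x, grid[1])                 # broadcast data, sliced per tile
        # each unit computes its block-times-slice product; the main unit sums them
        partial_rows.append(sum(cb @ xs for cb, xs in zip(col_blocks, x_slices)))
    return np.concatenate(partial_rows)

rng = np.random.default_rng(1)
W = rng.standard_normal((6, 4))
x = rng.standard_normal(4)
assert np.allclose(fully_connected(W, x), W @ x)
```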
-
Publication No.: US11341401B2
Publication Date: 2022-05-24
Application No.: US16289501
Filing Date: 2019-02-28
Applicant: International Business Machines Corporation
Inventor: Rodrigo Alvarez-Icaza Rivera , John V. Arthur , Andrew S. Cassidy , Pallab Datta , Paul A. Merolla , Dharmendra S. Modha
Abstract: Embodiments of the invention relate to a neural network system for simulating neurons of a neural model. One embodiment comprises a memory device that maintains neuronal states for multiple neurons, a lookup table that maintains state transition information for multiple neuronal states, and a controller unit that manages the memory device. The controller unit updates a neuronal state for each neuron based on incoming spike events targeting said neuron and state transition information corresponding to said neuronal state.
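The controller's update rule, a lookup table keyed on the current neuronal state and incoming spike events, can be sketched as a table-driven state machine. The three-state model below is a hypothetical illustration; the actual neuronal states and transitions belong to the neural model being simulated.

```python
TRANSITIONS = {          # (current_state, received_spike) -> next_state
    ("rest", True): "depolarized",
    ("rest", False): "rest",
    ("depolarized", True): "fired",
    ("depolarized", False): "rest",
    ("fired", True): "rest",       # assumed refractory behavior: spike ignored
    ("fired", False): "rest",
}

def step(states, spike_events):
    # states: neuron id -> state (the memory device);
    # spike_events: ids of neurons targeted by incoming spikes this tick
    return {n: TRANSITIONS[(s, n in spike_events)] for n, s in states.items()}

states = {0: "rest", 1: "rest"}
states = step(states, {0})       # neuron 0 receives a spike
states = step(states, {0})       # and another
```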
-
55.
Publication No.: US11308388B2
Publication Date: 2022-04-19
Application No.: US15781680
Filing Date: 2016-12-07
Inventor: Jean-Marc Philippe , Alexandre Carbon , Marc Duranton
Abstract: A circuit comprises a series of calculating blocks that can each implement a group of neurons; a transformation block that is linked to the calculating blocks by a communication means and that can be linked at the input of the circuit to an external data bus, the transformation block transforming the format of the input data and transmitting the data to said calculating blocks by means of K independent communication channels, an input data word being cut up into sub-words such that the sub-words are transmitted over multiple successive communication cycles, one sub-word being transmitted per communication cycle over a communication channel dedicated to the word, such that the K channels can transmit K words in parallel.
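The serialization scheme can be illustrated in software: each word is cut into sub-words, one sub-word travels per communication cycle on the channel dedicated to that word, and the receiver reassembles each channel's stream. The 16-bit word and 4-bit sub-word widths are assumptions for the example, not values from the patent.

```python
WORD_BITS, SUB_BITS = 16, 4
CYCLES = WORD_BITS // SUB_BITS
MASK = (1 << SUB_BITS) - 1

def to_subwords(word):
    # cut a word into sub-words, least-significant first
    return [(word >> (SUB_BITS * i)) & MASK for i in range(CYCLES)]

def from_subwords(subs):
    return sum(s << (SUB_BITS * i) for i, s in enumerate(subs))

def transmit(words):
    # cycle-major schedule: cycles[c][k] is what channel k carries on cycle c,
    # so the K channels carry K words in parallel over successive cycles
    cycles = [[to_subwords(w)[c] for w in words] for c in range(CYCLES)]
    return [from_subwords([cycles[c][k] for c in range(CYCLES)])
            for k in range(len(words))]

words = [0xBEEF, 0x1234, 0x0042]
assert transmit(words) == words
```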
-
Publication No.: US11250326B1
Publication Date: 2022-02-15
Application No.: US16212643
Filing Date: 2018-12-06
Applicant: Perceive Corporation
Inventor: Jung Ko , Kenneth Duong , Steven L. Teig
Abstract: Some embodiments provide a method for compiling a neural network (NN) program for an NN inference circuit (NNIC) that includes multiple partial dot product computation circuits (PDPCCs) for computing dot products between weight values and input values. The method receives an NN definition with multiple nodes. The method assigns a group of filters to specific PDPCCs. Each filter is assigned to a different set of the PDPCCs. When a filter does not have enough weight values equal to zero for a first set of PDPCCs to which the filter is assigned to compute dot products for nodes that use the filter, the method divides the filter between the first set and a second set of PDPCCs. The method generates program instructions for instructing the NNIC to execute the NN by using the first and second PDPCCs to compute dot products for the nodes that use the filter.
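The filter-splitting decision can be sketched as follows: suppose each partial dot product computation circuit handles at most a fixed number of nonzero weights; a filter exceeding that budget is divided between two circuit sets whose partial dot products are summed. The capacity value and flat data layout are illustrative assumptions, not the compiler's actual constraint.

```python
CAPACITY = 4   # assumed nonzero-weight budget per circuit set

def split_filter(weights):
    nonzero = [(i, w) for i, w in enumerate(weights) if w != 0]
    if len(nonzero) <= CAPACITY:
        return nonzero, []          # fits in the first set alone
    return nonzero[:CAPACITY], nonzero[CAPACITY:]

def dot(pairs, inputs):
    # one circuit set's partial dot product over its assigned weights
    return sum(w * inputs[i] for i, w in pairs)

weights = [2, 0, -1, 3, 0, 5, 1, 4]   # 6 nonzeros > CAPACITY, so it splits
inputs  = [1, 9, 2, 1, 9, 1, 1, 1]
first, second = split_filter(weights)
result = dot(first, inputs) + dot(second, inputs)   # combined node output
```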
-
Publication No.: US11250107B2
Publication Date: 2022-02-15
Application No.: US16511689
Filing Date: 2019-07-15
Applicant: International Business Machines Corporation
Inventor: Christophe Piveteau , Nikolas Ioannou , Igor Krawczuk , Manuel Le Gallo-Bourdeau , Abu Sebastian , Evangelos Stavros Eleftheriou
Abstract: The present disclosure relates to a method for executing a computation task composed of at least one set of operations, where subsets of pipelineable operations of the set of operations are determined in accordance with a pipelining scheme. A single routine may be created for enabling execution of the determined subsets of operations by a hardware accelerator. The routine has, as arguments, a value indicative of input data and values of configuration parameters of the computation task, where a call of the routine causes a scheduling of the subsets of operations on the hardware accelerator in accordance with the values of the configuration parameters. Upon receiving input data of the computation task, the routine may be called to cause the hardware accelerator to perform the computation task in accordance with the scheduling.
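The single-routine idea can be sketched as a factory that bakes the pipeline stages into one callable, which then schedules every stage over the input when invoked. The stage functions, the chunked scheduling, and the chunk-size parameter are illustrative assumptions standing in for the accelerator's configuration parameters.

```python
def make_routine(stages):
    # returns a single routine whose call schedules all pipelineable
    # stage subsets over the input, chunk by chunk
    def routine(data, chunk_size):
        out = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            for stage in stages:             # each stage transforms the chunk in turn
                chunk = [stage(x) for x in chunk]
            out.extend(chunk)
        return out
    return routine

routine = make_routine([lambda x: x * 2, lambda x: x + 1])
result = routine([1, 2, 3, 4, 5], chunk_size=2)
```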
-
58.
Publication No.: US11222259B2
Publication Date: 2022-01-11
Application No.: US15840322
Filing Date: 2017-12-13
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Siyuranga Koswatta , Yulong Li , Paul M. Solomon
Abstract: Technical solutions are described for storing a weight in a crosspoint device of a resistive processing unit (RPU) array. An example method includes setting a state of each single bit counter from a set of single bit counters in the crosspoint device, the states of the single bit counters representing the weight to be stored at the crosspoint device. The method further includes adjusting the electrical conductance of a resistor device of the crosspoint device. The resistor device includes a set of resistive circuits, each resistive circuit associated with a respective single bit counter from the set of single bit counters, the electrical conductance adjusted by activating or deactivating each resistive circuit according to the state of the associated single bit counter.
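Numerically, the scheme amounts to gating a set of conductances with the bit-counter states so their sum encodes the weight. The binary weighting of the unit conductances and the base conductance value below are illustrative assumptions, not device parameters from the patent.

```python
UNIT = 1.0e-6          # assumed base conductance, in siemens

def conductance(bits):
    # bits[j] activates a resistive circuit with conductance UNIT * 2**j
    return sum(UNIT * (2 ** j) for j, b in enumerate(bits) if b)

def set_weight(value, n_bits):
    # write the single-bit counter states for an integer weight (LSB first)
    return [(value >> j) & 1 for j in range(n_bits)]

bits = set_weight(11, 4)         # 11 = 0b1011 -> counter states [1, 1, 0, 1]
g = conductance(bits)            # total conductance encodes the weight: 11 * UNIT
```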
-
Publication No.: US11182667B2
Publication Date: 2021-11-23
Application No.: US15813952
Filing Date: 2017-11-15
Applicant: Microsoft Technology Licensing, LLC
Inventor: George Petre , Chad Balling McBride , Amol Ashok Ambardekar , Kent D. Cedola , Larry Marvin Wall , Boris Bobrov
IPC: G06N3/06 , G06N3/10 , G06N3/04 , G06F9/38 , G06N3/063 , G06F12/0862 , G06F9/46 , G06F1/324 , G06F3/06 , G06F12/08 , G06F12/10 , G06F15/80 , G06F17/15 , G06N3/08 , H03M7/30 , H04L12/715 , H04L29/08 , G06F9/30 , G06F13/16 , G06F1/3234 , G06F12/02 , G06F13/28 , H03M7/46 , H04L12/723
Abstract: The performance of a neural network (NN) and/or deep neural network (DNN) can be limited by the number of operations being performed as well as management of data among the various memory components of the NN/DNN. By inserting a selected padding in the input data to align the input data in memory, data read/writes can be optimized for processing by the NN/DNN thereby enhancing the overall performance of a NN/DNN. Operatively, an operations controller/iterator can generate one or more instructions that inserts the selected padding into the data. The data padding can be calculated using various characteristics of the input data as well as the NN/DNN as well as characteristics of the cooperating memory components. Padding on the output data can be utilized to support the data alignment at the memory components and the cooperating processing units of the NN/DNN.
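The alignment padding can be illustrated by padding each row of the input out to a multiple of the memory-line width, so every row starts on a line boundary and reads/writes touch whole lines. The 8-element line width here is an assumed characteristic of the memory component, for illustration only.

```python
import numpy as np

LINE = 8   # assumed memory line width, in elements

def pad_rows(data):
    rows, cols = data.shape
    padded_cols = -(-cols // LINE) * LINE          # round cols up to a line multiple
    out = np.zeros((rows, padded_cols), dtype=data.dtype)
    out[:, :cols] = data                           # selected padding fills the rest
    return out

x = np.arange(30, dtype=np.int32).reshape(5, 6)    # 6 cols -> padded to 8
y = pad_rows(x)
```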
-
60.
Publication No.: US20210339777A1
Publication Date: 2021-11-04
Application No.: US16863214
Filing Date: 2020-04-30
Inventor: Yuta Kataoka , Kentaro Oguchi
Abstract: A system and method for controlling one or more vehicles with one or more controlled vehicles may include one or more processors and a memory in communication with the one or more processors. The memory may include one or more modules that cause the one or more processors to obtain a state of an environment having a universe of vehicles operating therein, identify one or more anomaly vehicles from the universe of vehicles operating in the environment, select one or more actions to control a plurality of controlled vehicles so as to control the operation of the one or more anomaly vehicles, and direct the plurality of controlled vehicles to execute the one or more actions. The selecting of the one or more actions may be performed by utilizing a reinforcement-learning-trained algorithm.
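The control loop reads as: observe the environment, flag anomaly vehicles, select actions for the controlled vehicles, dispatch them. The sketch below stands in a trivial lookup policy for the reinforcement-learning-trained algorithm; the speed-based anomaly rule, vehicle names, and action labels are all hypothetical.

```python
SPEED_LIMIT = 30.0   # assumed anomaly threshold

def identify_anomalies(env_state):
    # env_state: vehicle id -> observed speed
    return [v for v, speed in env_state.items() if speed > SPEED_LIMIT]

def select_actions(anomalies, controlled, policy):
    # one action per controlled vehicle, keyed on whether anomalies exist;
    # a trained policy would condition on the full environment state
    situation = "anomaly_present" if anomalies else "nominal"
    return {v: policy[situation] for v in controlled}

policy = {"anomaly_present": "form_slowdown_barrier", "nominal": "cruise"}
env = {"car_a": 25.0, "car_b": 42.0, "car_c": 28.0}
anomalies = identify_anomalies(env)
actions = select_actions(anomalies, ["ctrl_1", "ctrl_2"], policy)
```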