-
Publication No.: US20240135148A1
Publication Date: 2024-04-25
Application No.: US18479242
Filing Date: 2023-10-02
Inventor: Nickvash Kani , Neeraj Gangwar , Hongbo Zheng
IPC: G06N3/0455 , G06N3/044 , G06N3/0475 , G06N3/048
CPC classification number: G06N3/0455 , G06N3/044 , G06N3/0475 , G06N3/048
Abstract: Methods are provided herein for training and using models to generate semantically representative continuous vectors for input mathematical expressions. These methods result in models that output continuous vectors that are nearby in an embedding space for equations that are mathematically equivalent but differently written. Such continuous vectors can be used to facilitate indexing and searching of databases of mathematical equations, e.g., to facilitate semantically-aware searching of databases of mathematical texts for equations that are mathematically equivalent to, or mathematically similar to, input query expressions. These training methods include training an encoder together with a decoder to predict pairs of mathematically equivalent but different training expressions, with the output of the encoder being the continuous vector that represents the semantic mathematical content of the pair of training expressions. Also provided are methods for efficiently generating such pairs of mathematically equivalent but different training expressions.
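As an illustration of how such continuous vectors could support semantically-aware search, here is a minimal retrieval sketch in Python. The embeddings, expressions, and the `search` helper are hypothetical stand-ins for model output, not part of the disclosed method:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical precomputed embeddings: mathematically equivalent but
# differently written expressions map to nearby vectors.
index = {
    "x**2 - 1":        np.array([0.90, 0.10, 0.00]),
    "(x - 1)*(x + 1)": np.array([0.88, 0.12, 0.02]),  # equivalent form, nearby vector
    "sin(x) + cos(x)": np.array([0.10, 0.90, 0.30]),
}

def search(query_vec, index, top_k=1):
    # Rank indexed expressions by cosine similarity to the query embedding.
    scored = sorted(index.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [expr for expr, _ in scored[:top_k]]
```

Querying with the embedding of `x**2 - 1` would then surface its factored equivalent among the nearest neighbors, which is the behavior the training objective is meant to induce.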
-
Publication No.: US20240135138A1
Publication Date: 2024-04-25
Application No.: US18146346
Filing Date: 2022-12-23
Inventor: Xi GUO , Lei CUI , Qingwei CAO , Chenhui NIU , Feng LI , Dong LI , Jie YIN , Kenan CAO , Yang YANG
IPC: G06N3/006 , G06N3/0464 , G06N3/0475 , G06N3/048
CPC classification number: G06N3/006 , G06N3/0464 , G06N3/0475 , G06N3/048
Abstract: A building photovoltaic data interpolation method based on a WGAN and the whale optimization algorithm is provided, which includes: obtaining historical building roof photovoltaic output data, preprocessing the data, and using a CNN to build a GAN; describing the positions of missing values in the preprocessed data with a binary mask matrix, and using the Wasserstein distance to define the loss function of the GAN generator and discriminator; taking the loss function as a fitness function, and optimizing the input to the GAN generator through the whale optimization algorithm to obtain optimized candidate samples; and fusing the optimized candidate samples with the photovoltaic data processed by the binary mask matrix to obtain completed reconstructed samples, so as to improve interpolation accuracy, optimize the random noise, remove unfavorable influencing components, and provide more accurate interpolation of building rooftop PV data.
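A minimal sketch of the fusion step, assuming the binary mask matrix marks observed entries with 1 and missing entries with 0; the sample values are invented for illustration:

```python
import numpy as np

# Binary mask: 1 where photovoltaic output was observed, 0 where missing.
observed = np.array([[5.0, 0.0, 7.0],
                     [0.0, 3.0, 4.0]])
mask     = np.array([[1,   0,   1],
                     [0,   1,   1]])

# Candidate samples produced by the (whale-optimized) GAN generator.
generated = np.array([[5.2, 6.1, 6.8],
                      [2.9, 3.1, 4.2]])

# Fuse: keep observed values, fill missing positions from the generator.
reconstructed = mask * observed + (1 - mask) * generated
```

The mask guarantees that real measurements are never overwritten; only the gaps receive generated values.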
-
Publication No.: US11966843B2
Publication Date: 2024-04-23
Application No.: US17839010
Filing Date: 2022-06-13
Applicant: Intel Corporation
Inventor: Meenakshi Arunachalam , Arun Tejusve Raghunath Rajan , Deepthi Karkada , Adam Procter , Vikram Saletore
IPC: G06N3/08 , G06F1/3203 , G06F1/3206 , G06F18/214 , G06N3/063 , G06V10/774 , G06V10/82 , G06V10/94 , G06N3/048
CPC classification number: G06N3/08 , G06F1/3203 , G06F1/3206 , G06F18/214 , G06N3/063 , G06V10/774 , G06V10/82 , G06V10/94 , G06N3/048
Abstract: Methods, apparatus, systems and articles of manufacture for distributed training of a neural network are disclosed. An example apparatus includes a neural network trainer to select a plurality of training data items from a training data set based on a toggle rate of each item in the training data set. A neural network parameter memory is to store neural network training parameters. A neural network processor is to generate training data results from distributed training over multiple nodes of the neural network using the selected training data items and the neural network training parameters. The neural network trainer is to synchronize the training data results and to update the neural network training parameters.
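One way the toggle-rate-based selection could look, assuming toggle rate means the fraction of adjacent bit transitions in an item and that items with the lowest rate are preferred; both are assumptions for illustration, as the abstract fixes neither choice:

```python
def toggle_rate(bits):
    """Fraction of adjacent positions whose value changes
    (a proxy for switching activity in hardware)."""
    if len(bits) < 2:
        return 0.0
    return sum(a != b for a, b in zip(bits, bits[1:])) / (len(bits) - 1)

def select_training_items(dataset, k):
    # Pick the k items with the lowest toggle rate.
    return sorted(dataset, key=lambda item: toggle_rate(item["bits"]))[:k]

dataset = [
    {"id": "a", "bits": [0, 1, 0, 1, 0, 1]},   # toggle rate 1.0
    {"id": "b", "bits": [0, 0, 0, 1, 1, 1]},   # toggle rate 0.2
    {"id": "c", "bits": [0, 1, 1, 0, 0, 1]},   # toggle rate 0.6
]
selected = select_training_items(dataset, k=2)
```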
-
Publication No.: US11965743B2
Publication Date: 2024-04-23
Application No.: US17843974
Filing Date: 2022-06-18
Applicant: Movidius Ltd.
Inventor: David Macdara Moloney , Jonathan David Byrne
IPC: G01C21/20 , G01C21/30 , G05D1/00 , G06F9/30 , G06F17/16 , G06N3/04 , G06N3/045 , G06T1/20 , G06T15/06 , G06T15/08 , G06T17/00 , G06T17/05 , G06T19/00 , G06V20/13 , G06V20/17 , G06V20/64 , G06N3/048
CPC classification number: G01C21/20 , G01C21/30 , G05D1/0214 , G05D1/0274 , G06F9/30029 , G06F17/16 , G06N3/04 , G06N3/045 , G06T1/20 , G06T15/06 , G06T15/08 , G06T17/005 , G06T17/05 , G06T19/00 , G06T19/006 , G06V20/13 , G06V20/17 , G06V20/64 , G06N3/048 , G06T2200/04 , G06T2200/28 , G06T2210/08 , G06T2210/36 , G06T2219/004
Abstract: A volumetric data structure models a particular volume representing the particular volume at a plurality of levels of detail. A first entry in the volumetric data structure includes a first set of bits representing voxels at a first level of detail, the first level of detail includes the lowest level of detail in the volumetric data structure, values of the first set of bits indicate whether a corresponding one of the voxels is at least partially occupied by respective geometry, where the volumetric data structure further includes a number of second entries representing voxels at a second level of detail higher than the first level of detail, the voxels at the second level of detail represent subvolumes of volumes represented by voxels at the first level of detail, and the number of second entries corresponds to a number of bits in the first set of bits with values indicating that a corresponding voxel volume is occupied.
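The relationship between the first entry's set bits and the number of second-level entries can be sketched as follows, using an 8-voxel first entry for illustration (real implementations may use a different entry width):

```python
def popcount(bits):
    # Number of set bits (occupied voxels) in an entry.
    return sum(bits)

# First entry: one bit per voxel at the lowest level of detail;
# 1 means the voxel is at least partially occupied by geometry.
first_entry = [1, 0, 0, 1, 0, 0, 0, 1]

# The number of second-level entries equals the number of set bits in
# the first entry: only occupied voxels are subdivided into subvolumes.
num_second_entries = popcount(first_entry)
```

Because empty voxels contribute no child entries, the structure stays compact for sparse scenes.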
-
Publication No.: US11960993B2
Publication Date: 2024-04-16
Application No.: US17094262
Filing Date: 2020-11-10
Applicant: EQUIFAX INC.
Inventor: Jonathan Boardman , Xiao Huang
Abstract: Various aspects involve a monotonic recurrent neural network (MRNN) trained for risk assessment or other purposes. For instance, the MRNN is trained to compute a risk indicator from a predictor variable. Training the MRNN includes adjusting the weights of its nodes subject to a set of monotonicity constraints, which causes the output risk indicators computed by the MRNN to be a monotonic function of the input predictor variables. The trained MRNN can be used to generate an output risk indicator for a target entity.
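A minimal sketch of one common way to realize such a monotonicity constraint, clipping weights to be nonnegative so a node's output is nondecreasing in each input; the patent's actual constraint set may differ:

```python
import numpy as np

def project_nonnegative(weights):
    """Monotonicity constraint: clip weights to be nonnegative after each
    update, so the output is a nondecreasing function of each input."""
    return np.maximum(weights, 0.0)

def monotone_score(x, w, b):
    # A single linear node with nonnegative weights is monotone in every input.
    return float(np.dot(project_nonnegative(w), x) + b)

w = np.array([0.5, -0.3])  # the -0.3 is clipped to 0.0 by the constraint
low  = monotone_score(np.array([1.0, 1.0]), w, b=0.0)
high = monotone_score(np.array([2.0, 5.0]), w, b=0.0)
```

Increasing any predictor can then never decrease the risk indicator, which is the property the constraint set is meant to enforce network-wide.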
-
Publication No.: US20240119268A1
Publication Date: 2024-04-11
Application No.: US18524523
Filing Date: 2023-11-30
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Lu HOU , Lifeng SHANG , Xin JIANG , Li QIAN
IPC: G06N3/048
CPC classification number: G06N3/048
Abstract: This disclosure relates to the field of artificial intelligence, and discloses a data processing method. The method includes: obtaining a transformer model including a target network layer and a target module; and processing to-be-processed data by using the transformer model, to obtain a data processing result. The target module is configured to: perform a target operation on a feature map output at the target network layer, to obtain an operation result, and fuse the operation result and the feature map output, to obtain an updated feature map output. In this disclosure, the target module is inserted into the transformer model, and the operation result generated by the target module and an input are fused, so that information carried in a feature map output by the target network layer of the transformer model is increased.
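A sketch of the fusion described above, assuming additive fusion and using `tanh` as a hypothetical stand-in for the target operation:

```python
import numpy as np

def target_operation(feature_map):
    # Hypothetical stand-in for the target module's operation.
    return np.tanh(feature_map)

def fuse(feature_map):
    """Fuse the operation result with the original feature map output
    (additive fusion assumed), yielding the updated feature map."""
    return feature_map + target_operation(feature_map)

updated = fuse(np.array([0.0, 1.0]))
```

Because the original feature map is carried through unchanged and the operation result is added on top, the updated map carries at least as much information as the input, matching the stated goal of the inserted module.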
-
Publication No.: US11947935B2
Publication Date: 2024-04-02
Application No.: US17535391
Filing Date: 2021-11-24
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC.
Inventor: Colin Bruce Clement , Neelakantan Sundaresan , Alexey Svyatkovskiy , Michele Tufano , Andrei Zlotchevski
Abstract: Custom source code generation models are generated by tuning a pre-trained deep learning model by freezing the model parameters and optimizing a prefix. The tuning process is distributed across a user space and a model space where the embedding and output layers are performed in the user space and the execution of the model is performed in a model space that is isolated from the user space. The tuning process updates the embeddings of the prefix across the separate execution spaces in a manner that preserves the privacy of the data used in the tuning process.
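A toy sketch of the tuning scheme: the pre-trained weights stay frozen while gradient steps update only the prefix. The linear model and quadratic loss are stand-ins for illustration, not the disclosed architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
frozen_weights = rng.normal(size=(4, 4))   # pre-trained parameters: never updated
frozen_copy = frozen_weights.copy()
prefix = np.zeros(4)                        # the only trainable parameters
x = rng.normal(size=4)
target = np.ones(4)

def model(prefix, x):
    # The frozen model executes in the model space; only the prefix
    # embedding crosses over from the user space.
    return frozen_weights @ (x + prefix)

initial_error = np.linalg.norm(model(prefix, x) - target)
lr = 0.01
for _ in range(500):
    residual = model(prefix, x) - target
    grad = frozen_weights.T @ residual      # gradient w.r.t. the prefix only
    prefix -= lr * grad                     # frozen_weights stay untouched

final_error = np.linalg.norm(model(prefix, x) - target)
```

Only the prefix gradient ever leaves the model space, which mirrors how the scheme can preserve the privacy of the tuning data.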
-
Publication No.: US20240104385A1
Publication Date: 2024-03-28
Application No.: US18492490
Filing Date: 2023-10-23
Applicant: NORTH WEBER & BAUGH LLP
Inventor: Tuna OEZER
Abstract: Presented herein are embodiments that allow the representation of complex systems and processes for resource-efficient machine learning and inference. Also disclosed are new reinforcement learning techniques that are capable of learning to plan and optimize dynamic and nuanced systems and processes. Different embodiments combining one or more neural networks, reinforcement learning, and linear programming are discussed for learning representations and models, even for complex systems and methods. Furthermore, the introduction of neural field embodiments and methods to compute a Deep Argmax, as well as to invert neural networks and neural fields with linear programming, provides the ability to create and train models that are accurate and resource efficient, using less memory, fewer computations, less time, and, as a result, less energy. Consequently, these models can be trained and re-trained quickly and efficiently, not only using fewer resources but also continually improving.
-
Publication No.: US20240104378A1
Publication Date: 2024-03-28
Application No.: US18363408
Filing Date: 2023-08-01
Applicant: Intel Corporation
Inventor: Michael E. Deisher
IPC: G06N3/08 , G06F5/01 , G06F7/544 , G06F7/57 , G06N3/02 , G06N3/044 , G06N3/045 , G06N3/048 , G06N3/063
CPC classification number: G06N3/08 , G06F5/01 , G06F7/5443 , G06F7/57 , G06N3/02 , G06N3/044 , G06N3/045 , G06N3/048 , G06N3/063 , G06F7/023
Abstract: An apparatus for applying dynamic quantization to a neural network is described herein. The apparatus includes a scaling unit and a quantizing unit. The scaling unit is to calculate initial desired scale factors for a plurality of inputs, weights, and a bias, and to apply the input scale factor to a summation node. The scaling unit is also to determine a scale factor for a multiplication node based on the desired scale factors of its inputs, and to select a scale factor for an activation function and an output node. The quantizing unit is to dynamically requantize the neural network by traversing a graph of the neural network.
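The scale-factor rule for a multiplication node can be illustrated with a small numeric example; the concrete scale values are invented for illustration:

```python
def quantize(x, scale):
    """Quantize a real value to an integer at the given scale factor."""
    return round(x * scale)

def dequantize(q, scale):
    return q / scale

# The scale factor of a multiplication node is the product of its input
# scales: (a * Sa) * (b * Sb) == (a * b) * (Sa * Sb).
Sa, Sb = 256.0, 128.0
a, b = 0.5, 0.25
qa, qb = quantize(a, Sa), quantize(b, Sb)
q_prod = qa * qb                        # integer-domain product
prod = dequantize(q_prod, Sa * Sb)      # recovers a * b = 0.125
```

Propagating scale factors through the graph this way is what lets the quantizing unit requantize each node consistently as it traverses the network.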
-
Publication No.: US20240104357A1
Publication Date: 2024-03-28
Application No.: US18077686
Filing Date: 2022-12-08
Applicant: Silicon Storage Technology, Inc.
Inventor: Hieu Van Tran , Stephen Trinh , Stanley Hong , Thuan Vu , Nghia Le , Hien Pham
IPC: G06N3/048
CPC classification number: G06N3/048
Abstract: Numerous examples are disclosed of input circuitry and associated methods in an artificial neural network. In one example, a system comprises a plurality of address decoders to receive an address and output a plurality of row enabling signals in response to the address; a first plurality of registers to store, sequentially, activation data in response to the plurality of row enabling signals; and a second plurality of registers to store, in parallel, activation data received from the first plurality of registers.
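A behavioral sketch of the two register banks, sequential row-wise loading followed by a parallel transfer; the names and the Python framing are illustrative only:

```python
class InputRegisters:
    """Sketch: a first bank of registers is filled sequentially (one row per
    row-enabling signal), then copied in parallel into a second bank."""

    def __init__(self, rows):
        self.first_bank = [None] * rows    # loaded sequentially
        self.second_bank = [None] * rows   # loaded in parallel

    def load_row(self, row, activation):
        # A row-enabling signal (decoded from the address) selects one register.
        self.first_bank[row] = activation

    def transfer(self):
        # Parallel transfer from the first bank to the second bank.
        self.second_bank = list(self.first_bank)

regs = InputRegisters(3)
for row, data in enumerate([0b1010, 0b0110, 0b0001]):
    regs.load_row(row, data)
regs.transfer()
```

Double-banking like this lets the first bank begin accepting the next batch of activation data while the second bank presents the current batch to the array in parallel.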
-