ACCELERATED TR-L-BFGS ALGORITHM FOR NEURAL NETWORK
    2. Invention application (pending, published)

    Publication No.: US20170046614A1

    Publication date: 2017-02-16

    Application No.: US14823167

    Filing date: 2015-08-11

    IPC classes: G06N3/04 G06N3/08

    CPC classes: G06N3/082

    Abstract: Techniques herein train a multilayer perceptron, sparsify the edges of a graph such as the perceptron, and store the edges and vertices of the graph. Each edge has a weight. A computer sparsifies the perceptron's edges, performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix, and, based on that Hessian, performs quasi-Newton optimization of the perceptron, repeating until convergence. The computer stores the edges in one array and the vertices in another. Each edge has a weight, an input index, and an output index; each vertex has an input index and an output index. The computer inserts each edge into an input linked list based on its weight, where each link of the input linked list holds the next input index of an edge, and likewise inserts each edge into an output linked list based on its weight, where each link holds the next output index of an edge.
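    The abstract's storage scheme reads as index-threaded linked lists laid over a flat edge array. Below is a minimal C sketch under that reading; the names Edge, Vertex, and insert_input, and the ascending-weight ordering, are illustrative assumptions rather than anything the patent specifies.

        #include <stdio.h>

        #define NONE -1  /* sentinel: end of an index-threaded list */

        /* Hypothetical field names; the abstract only states that each
           edge carries a weight plus next-input/next-output indices. */
        typedef struct {
            double weight;
            int next_input;   /* index of the next edge in the same input list */
            int next_output;  /* index of the next edge in the same output list */
        } Edge;

        typedef struct {
            int input_head;   /* first incoming edge (lightest first) */
            int output_head;  /* first outgoing edge (lightest first) */
        } Vertex;

        /* Insert edge e into vertex v's input list, keeping the list sorted
           by ascending weight so sparsification can prune from the head. */
        static void insert_input(Edge *edges, Vertex *v, int e) {
            int *link = &v->input_head;
            while (*link != NONE && edges[*link].weight < edges[e].weight)
                link = &edges[*link].next_input;
            edges[e].next_input = *link;
            *link = e;
        }

        int main(void) {
            Edge edges[3] = {{0.5, NONE, NONE}, {0.1, NONE, NONE}, {0.9, NONE, NONE}};
            Vertex v = {NONE, NONE};
            for (int e = 0; e < 3; e++) insert_input(edges, &v, e);
            for (int i = v.input_head; i != NONE; i = edges[i].next_input)
                printf("edge %d: weight %.1f\n", i, edges[i].weight);
            return 0;
        }

    Keeping each list ordered by weight means a sparsification pass can drop a vertex's lightest edges by advancing list heads, without scanning the whole edge array.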


SHARING DATA STRUCTURES BETWEEN PROCESSES BY SEMI-INVASIVE HYBRID APPROACH
    3. Invention application (granted, in force)

    Publication No.: US20170046270A1

    Publication date: 2017-02-16

    Application No.: US14823328

    Filing date: 2015-08-11

    IPC classes: G06F12/10 G06F9/54

    Abstract: Techniques herein share data structures between processes. A method obtains a current memory segment that begins at a current base address within a current address space. The current memory segment comprises a directed object graph and a base pointer; the graph comprises objects and object pointers. For each particular object, the method determines whether a different memory segment contains an equivalent object. If one exists, then for each object pointer whose target is the particular object, the method replaces the pointer's memory address with the address of the equivalent object, which does not reside in the current memory segment. Otherwise, for each object pointer whose target is the particular object, the method increments the pointer's memory address by the difference between the current base address and the original base address.
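    Only the relocation arithmetic is pinned down by the abstract; the lookup of an equivalent object in another segment is application-specific and omitted here. A minimal C sketch of the fallback step, where every name (Node, rebase) is illustrative rather than from the patent:

        #include <stdint.h>
        #include <stdio.h>

        /* Node stands in for any object in the segment; only the
           pointer-relocation arithmetic below comes from the abstract. */
        typedef struct Node {
            struct Node *next;  /* object pointer into the same segment */
            int          value;
        } Node;

        /* Rebase a pointer that was persisted while the segment was mapped
           at original_base but is now mapped at current_base. */
        static void *rebase(void *stale, uintptr_t original_base,
                            uintptr_t current_base) {
            if (stale == NULL) return NULL;
            return (void *)((uintptr_t)stale + (current_base - original_base));
        }

        int main(void) {
            /* Simulate a segment whose original mapping was 64 bytes lower. */
            Node nodes[2] = {{NULL, 1}, {NULL, 2}};
            uintptr_t current  = (uintptr_t)nodes;
            uintptr_t original = current - 64;
            /* A pointer value as it would have been stored under the old mapping. */
            Node *stale = (Node *)((uintptr_t)&nodes[1] - 64);
            nodes[0].next = rebase(stale, original, current);
            printf("%d -> %d\n", nodes[0].value, nodes[0].next->value);
            return 0;
        }

    The simulated 64-byte offset stands in for the difference between where the segment was originally mapped and where the current process happens to map it.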


    ACCELERATED TR-L-BFGS ALGORITHM FOR NEURAL NETWORK

    Publication No.: US20200034713A1

    Publication date: 2020-01-30

    Application No.: US16592585

    Filing date: 2019-10-03

    IPC classes: G06N3/08

    Abstract: Techniques herein train a multilayer perceptron, sparsify the edges of a graph such as the perceptron, and store the edges and vertices of the graph. Each edge has a weight. A computer sparsifies the perceptron's edges, performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix, and, based on that Hessian, performs quasi-Newton optimization of the perceptron, repeating until convergence. The computer stores the edges in one array and the vertices in another. Each edge has a weight, an input index, and an output index; each vertex has an input index and an output index. The computer inserts each edge into an input linked list based on its weight, where each link of the input linked list holds the next input index of an edge, and likewise inserts each edge into an output linked list based on its weight, where each link holds the next output index of an edge.
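    The abstracts leave the optimization step itself implicit. A generic trust-region L-BFGS iteration solves the textbook subproblem below, with B_k the limited-memory quasi-Newton approximation to the sparse Hessian; this is the standard formulation, not necessarily the patent's exact one:

        \min_{p \in \mathbb{R}^n} \; m_k(p) = f(w_k) + \nabla f(w_k)^{\top} p + \tfrac{1}{2}\, p^{\top} B_k p
        \qquad \text{subject to} \quad \lVert p \rVert_2 \le \Delta_k.

    The step is accepted, and the radius \Delta_k grown or shrunk, according to the ratio of the actual reduction f(w_k) - f(w_k + p) to the predicted reduction m_k(0) - m_k(p).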

Minimizing Global Error in an Artificial Neural Network
    6. Invention application (pending, published)

    Publication No.: US20150088795A1

    Publication date: 2015-03-26

    Application No.: US14492440

    Filing date: 2014-09-22

    IPC classes: G06N3/08

    CPC classes: G06N3/08 G06N3/04

    Abstract: Computer systems, machine-implemented methods, and stored instructions are provided for minimizing an approximate global error in an artificial neural network that is configured to predict model outputs based at least in part on one or more model inputs. A model manager stores the artificial neural network model and may then minimize an approximate global error in it, at least in part, by causing evaluation of a mixed integer linear program that determines the weights between artificial neurons in the model. The mixed integer linear program accounts for the piecewise linear activation functions of the artificial neurons; it comprises a functional expression of the difference between actual data and modeled data, together with a set of one or more constraints that reference variables in the functional expression.
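    As one concrete (assumed, not stated) reading of the "functional expression of the difference between actual data and modeled data", an L1 training error over N observations linearizes with auxiliary variables e_i:

        \min_{w,\, e,\, \delta} \; \sum_{i=1}^{N} e_i
        \qquad \text{subject to} \quad e_i \ge y_i - \hat{y}_i, \quad e_i \ge \hat{y}_i - y_i, \quad i = 1, \dots, N,

    where y_i is the observed output and \hat{y}_i the network's modeled output, which becomes affine in the decision variables once the activations are linearized (see the encoding sketched after the granted version of this application, below).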


    Minimizing global error in an artificial neural network

    Publication No.: US10068170B2

    Publication date: 2018-09-04

    Application No.: US14492440

    Filing date: 2014-09-22

    IPC classes: G06N3/08 G06N3/04

    Abstract: Computer systems, machine-implemented methods, and stored instructions are provided for minimizing an approximate global error in an artificial neural network that is configured to predict model outputs based at least in part on one or more model inputs. A model manager stores the artificial neural network model and may then minimize an approximate global error in it, at least in part, by causing evaluation of a mixed integer linear program that determines the weights between artificial neurons in the model. The mixed integer linear program accounts for the piecewise linear activation functions of the artificial neurons; it comprises a functional expression of the difference between actual data and modeled data, together with a set of one or more constraints that reference variables in the functional expression.
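    Complementing the error objective sketched above, the piecewise linear activations themselves can be encoded with one binary variable per unit. For a unit with pre-activation z and output a = \max(0, z), a standard big-M encoding (an illustrative choice; M is an assumed bound on |z|) is:

        a \ge z, \qquad a \ge 0, \qquad a \le z + M\,(1 - \delta), \qquad a \le M\,\delta, \qquad \delta \in \{0, 1\}.

    With \delta = 1 the constraints force a = z (the active piece); with \delta = 0 they force a = 0.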

Quadratic regularization for neural network with skip-layer connections
    9. Granted invention patent (in force)

    Publication No.: US09047566B2

    Publication date: 2015-06-02

    Application No.: US13795613

    Filing date: 2013-03-12

    IPC classes: G06E1/00 G06N3/04

    CPC classes: G06N3/04

    Abstract: According to one aspect of the invention, target data comprising observations is received. A neural network comprising input neurons, output neurons, hidden neurons, skip-layer connections, and non-skip-layer connections is used to analyze the target data based on an overall objective function that comprises a linear regression part, the neural network's unregularized objective function, and a regularization term. An overall optimized value of a first vector and an overall optimized value of a second vector are determined from the target data and the overall objective function. The first vector comprises the skip-layer weights and the output-neuron biases, whereas the second vector comprises the non-skip-layer weights.
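    One plausible reading of the overall objective, with u collecting the skip-layer weights and output-neuron biases and v the non-skip-layer weights (the exact grouping and coupling of terms in the claims may differ):

        E(u, v) \;=\; \underbrace{L_{\mathrm{lin}}(u)}_{\text{linear regression part}} \;+\; \underbrace{E_{\mathrm{nn}}(u, v)}_{\text{unregularized objective}} \;+\; \underbrace{\lambda\, \lVert v \rVert_2^2}_{\text{regularization term}},

    so the quadratic penalty acts only on the non-skip-layer weights, leaving the linear skip-layer part unshrunk.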


    Accelerated TR-L-BFGS algorithm for neural network

    Publication No.: US10467528B2

    Publication date: 2019-11-05

    Application No.: US14823167

    Filing date: 2015-08-11

    Abstract: Techniques herein train a multilayer perceptron, sparsify the edges of a graph such as the perceptron, and store the edges and vertices of the graph. Each edge has a weight. A computer sparsifies the perceptron's edges, performs a forward-backward pass on the perceptron to calculate a sparse Hessian matrix, and, based on that Hessian, performs quasi-Newton optimization of the perceptron, repeating until convergence. The computer stores the edges in one array and the vertices in another. Each edge has a weight, an input index, and an output index; each vertex has an input index and an output index. The computer inserts each edge into an input linked list based on its weight, where each link of the input linked list holds the next input index of an edge, and likewise inserts each edge into an output linked list based on its weight, where each link holds the next output index of an edge.
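    For reference, the curvature-pair update that underlies any BFGS-family method, including the trust-region limited-memory variant claimed here:

        B_{k+1} \;=\; B_k \;-\; \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} \;+\; \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
        \qquad s_k = w_{k+1} - w_k, \quad y_k = \nabla f(w_{k+1}) - \nabla f(w_k).

    In the limited-memory form only the most recent m pairs (s_k, y_k) are stored and B_k is never materialized, which keeps memory linear in the number of network weights.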