SYNCHRONIZING SENSOR DATA ACROSS DEVICES
    2.
    Invention Application
    SYNCHRONIZING SENSOR DATA ACROSS DEVICES (In force)

    Publication No.: US20120163520A1

    Publication Date: 2012-06-28

    Application No.: US12979140

    Filing Date: 2010-12-27

    IPC Class: H04L7/00

    Abstract: Techniques are provided for synchronization of sensor signals between devices. One or more of the devices may collect sensor data. The device may create a sensor signal from the sensor data, which it may make available to other devices under a publisher/subscriber model. The other devices may subscribe to the sensor signals they choose. A device may be a provider or a consumer of sensor signals. A device may have a layer of code between the operating system and software applications that processes the data for the applications. The processing may include actions such as synchronizing the data in a sensor signal to a local time clock, predicting future values for data in a sensor signal, and providing data samples for a sensor signal at a frequency that an application requests, among other actions.

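The processing layer described in the abstract (syncing a signal to the local clock, predicting future values, and serving samples at an application-requested frequency) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function names, the linear-interpolation scheme, and the use of extrapolation as a stand-in for prediction are all assumptions.

```python
from bisect import bisect_left

def resample(samples, t_query):
    """Linearly interpolate a sensor signal, given as a sorted list of
    (timestamp, value) pairs, at an arbitrary query time t_query."""
    times = [t for t, _ in samples]
    i = bisect_left(times, t_query)
    if i == 0:
        return samples[0][1]
    if i == len(samples):
        # Query time is past the last sample: extrapolate from the last
        # two samples, a crude stand-in for predicting future values.
        (t0, v0), (t1, v1) = samples[-2], samples[-1]
    else:
        (t0, v0), (t1, v1) = samples[i - 1], samples[i]
    return v0 + (v1 - v0) * (t_query - t0) / (t1 - t0)

def to_local_clock(samples, offset, rate_hz, n):
    """Re-express a remote sensor signal on the local clock (shifted by
    `offset` seconds) at the frequency the application requests."""
    return [resample(samples, offset + k / rate_hz) for k in range(n)]
```

For example, a remote signal sampled at whole seconds can be served to an application at 2 Hz on a local clock offset by 0.25 s, with values interpolated between the published samples.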

    Parallel processing machine learning decision tree training
    4.
    Granted Patent
    Parallel processing machine learning decision tree training (In force)

    Publication No.: US09171264B2

    Publication Date: 2015-10-27

    Application No.: US12969112

    Filing Date: 2010-12-15

    Abstract: Embodiments are disclosed herein that relate to generating a decision tree through graphics processing unit (GPU) based machine learning. For example, one embodiment provides a method including, for each level of the decision tree: performing, at each GPU of the parallel processing pipeline, a feature test for a feature in a feature set on every example in an example set. The method further includes accumulating results of the feature tests in local memory blocks. The method further includes writing the accumulated results from each local memory block to global memory to generate a histogram of features for every node in the level, and, for each node in the level, assigning the feature having the lowest entropy in accordance with the histograms to that node.

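The per-node step the abstract describes (run every feature test over every example, accumulate class histograms, pick the feature with the lowest entropy) can be sketched sequentially. This sketch collapses the GPU parallelism into plain loops and assumes binary feature tests; all names are illustrative, not the patent's.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy of a class-count histogram."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values() if c)

def best_feature(examples, features):
    """For one tree node: run every feature test over every example,
    accumulate class histograms per test outcome (the per-GPU step in
    the abstract), then pick the feature whose split has the lowest
    weighted entropy. `examples` are (feature_vector, label) pairs;
    a feature test here is simply "is feature f nonzero?"."""
    best, best_h = None, float("inf")
    n = len(examples)
    for f in features:
        hist = {True: Counter(), False: Counter()}  # histogram per outcome
        for x, label in examples:                   # the feature test
            hist[bool(x[f])][label] += 1
        h = sum(sum(side.values()) / n * entropy(side)
                for side in hist.values() if side)
        if h < best_h:
            best, best_h = f, h
    return best
```

In the actual scheme, each GPU would accumulate its histogram contributions in a local memory block, and the blocks would be summed into global memory before the entropy comparison; the loop body above corresponds to one such accumulation.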

    PARALLEL PROCESSING MACHINE LEARNING DECISION TREE TRAINING
    5.
    Invention Application
    PARALLEL PROCESSING MACHINE LEARNING DECISION TREE TRAINING (In force)

    Publication No.: US20120154373A1

    Publication Date: 2012-06-21

    Application No.: US12969112

    Filing Date: 2010-12-15

    IPC Class: G06F15/80 G06T15/00

    Abstract: Embodiments are disclosed herein that relate to generating a decision tree through graphics processing unit (GPU) based machine learning. For example, one embodiment provides a method including, for each level of the decision tree: performing, at each GPU of the parallel processing pipeline, a feature test for a feature in a feature set on every example in an example set. The method further includes accumulating results of the feature tests in local memory blocks. The method further includes writing the accumulated results from each local memory block to global memory to generate a histogram of features for every node in the level, and, for each node in the level, assigning the feature having the lowest entropy in accordance with the histograms to that node.


    Distributed decision tree training
    6.
    Granted Patent
    Distributed decision tree training (In force)

    Publication No.: US08543517B2

    Publication Date: 2013-09-24

    Application No.: US12797430

    Filing Date: 2010-06-09

    IPC Class: G06F15/18

    CPC Class: G06K9/6282

    Abstract: A computerized decision tree training system may include a distributed control processing unit configured to receive input of training data for training a decision tree. The system may further include a plurality of data batch processing units, each configured to evaluate each of a plurality of split functions of a decision tree for a respective data batch of the training data, thereby computing a partial histogram for each split function, for each datum in the data batch. The system may further include a plurality of node batch processing units configured to aggregate the associated partial histograms for each split function to form an aggregated histogram for each split function for each of a subset of frontier tree nodes, and to determine a selected split function for each frontier tree node by computing the split function that produces the highest information gain for that node.

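The two-stage scheme in the abstract (data batch processors compute partial label histograms per split function; node batch processors aggregate them and pick the split with the highest information gain) can be sketched as below. All names are illustrative assumptions, and the distribution itself is elided: each `partial_histograms` call stands in for one data batch processing unit.

```python
import math
from collections import Counter

def entropy(counter):
    """Shannon entropy of a label-count histogram."""
    total = sum(counter.values())
    return -sum(c / total * math.log2(c / total) for c in counter.values() if c)

def partial_histograms(batch, splits):
    """Data batch step: for one batch of (x, label) training data,
    evaluate every candidate split function and count labels on each
    side, yielding a partial histogram per split function."""
    out = {}
    for name, split in splits.items():
        h = {True: Counter(), False: Counter()}
        for x, label in batch:
            h[split(x)][label] += 1
        out[name] = h
    return out

def select_split(partials, splits):
    """Node batch step: aggregate the partial histograms from all data
    batches, then choose the split with the highest information gain."""
    agg = {name: {True: Counter(), False: Counter()} for name in splits}
    for p in partials:
        for name, h in p.items():
            for side in (True, False):
                agg[name][side].update(h[side])

    def gain(h):
        parent = h[True] + h[False]
        n = sum(parent.values())
        return entropy(parent) - sum(
            sum(h[s].values()) / n * entropy(h[s]) for s in (True, False) if h[s])

    return max(agg, key=lambda name: gain(agg[name]))
```

Because label counts add, the partial histograms from each batch can be summed independently per split function, which is what makes the data batch stage embarrassingly parallel.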

    DISTRIBUTED DECISION TREE TRAINING
    7.
    Invention Application
    DISTRIBUTED DECISION TREE TRAINING (In force)

    Publication No.: US20110307423A1

    Publication Date: 2011-12-15

    Application No.: US12797430

    Filing Date: 2010-06-09

    IPC Class: G06F15/18 G06K9/62

    CPC Class: G06K9/6282

    Abstract: A computerized decision tree training system may include a distributed control processing unit configured to receive input of training data for training a decision tree. The system may further include a plurality of data batch processing units, each configured to evaluate each of a plurality of split functions of a decision tree for a respective data batch of the training data, thereby computing a partial histogram for each split function, for each datum in the data batch. The system may further include a plurality of node batch processing units configured to aggregate the associated partial histograms for each split function to form an aggregated histogram for each split function for each of a subset of frontier tree nodes, and to determine a selected split function for each frontier tree node by computing the split function that produces the highest information gain for that node.
