Method and system for partitioning of deep convolution network for executing on computationally constraint devices

    Publication Number: US11488026B2

    Publication Date: 2022-11-01

    Application Number: US16535668

    Filing Date: 2019-08-08

    Abstract: There is a growing need to run inferencing on fog devices in order to reduce upstream network traffic. However, because such devices are computationally constrained, executing complex deep inferencing models on them has proved difficult. A system and method are provided for partitioning a deep convolutional neural network (DCNN) for execution on computationally constrained devices at the network edge. The system uses depth-wise input partitioning of the convolutional operations in the DCNN. The convolution operation is analyzed in terms of its input filter depth and number of filters to determine appropriate partitioning parameters based on an inference speedup method, and a master-slave network is used to partition the input. Depth-wise partitioning of the input speeds up inference of the convolution operations by reducing pixel overlaps.
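
    A minimal NumPy sketch of the depth-wise input partitioning idea described in this abstract: the input tensor's channel dimension is split across worker (slave) devices, each computes a partial convolution with the matching slice of the filters, and the master sums the partial outputs. Function and variable names here are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def conv2d_valid(x, w):
    """Plain 'valid' 2-D convolution (cross-correlation, as in CNNs).
    x: (C, H, W) input, w: (F, C, kH, kW) filters."""
    C, H, W = x.shape
    F, _, kH, kW = w.shape
    out = np.zeros((F, H - kH + 1, W - kW + 1))
    for f in range(F):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(x[:, i:i + kH, j:j + kW] * w[f])
    return out

def depthwise_partitioned_conv(x, w, num_workers):
    """Split the input (and filters) along the channel axis, convolve each
    slice independently (conceptually on one slave device), and sum the
    partial outputs on the master."""
    channel_chunks = np.array_split(np.arange(x.shape[0]), num_workers)
    partial_outputs = [
        conv2d_valid(x[chunk], w[:, chunk])        # work done by one slave
        for chunk in channel_chunks if chunk.size > 0
    ]
    return np.sum(partial_outputs, axis=0)         # aggregation on the master

# Sanity check: the partitioned result matches the monolithic convolution.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))   # 8 input channels
w = rng.standard_normal((4, 8, 3, 3))  # 4 filters over those 8 channels
assert np.allclose(conv2d_valid(x, w), depthwise_partitioned_conv(x, w, num_workers=3))
```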

    System and method for fault detection in robotic actuation

    Publication Number: US11141858B2

    Publication Date: 2021-10-12

    Application Number: US16210512

    Filing Date: 2018-12-05

    Abstract: A data-driven approach for fault detection in robotic actuation is disclosed. A set of robotic tasks is received and analyzed using Deep Learning (DL) analytics, which include a stateful Long Short Term Memory (LSTM) network. Initially, the stateful LSTM is trained to match a set of activities associated with the robots, based on a set of tasks gathered from the robots in a multi-robot environment. The stateful LSTM utilizes a load distribution technique based on a master-slave framework and a probabilistic trellis approach to predict the next activity associated with a robot with minimum latency and increased accuracy. The predicted next activity is then compared with the actual activity of the robot to identify any faults in robotic actuation.
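
    A small sketch (not the patented model) of the final comparison step: a next-activity predictor emits a probability distribution over the robot's possible activities, and a fault is flagged when the actually observed activity receives too little probability. The activity labels and threshold are illustrative assumptions standing in for the stateful LSTM's softmax output.

```python
import numpy as np

ACTIVITIES = ["pick", "move", "place", "idle"]   # illustrative activity set

def detect_fault(predicted_probs, actual_activity, threshold=0.2):
    """Flag a fault when the observed activity is assigned too little
    probability by the next-activity predictor."""
    p_actual = predicted_probs[ACTIVITIES.index(actual_activity)]
    return p_actual < threshold

# Example: the predictor strongly expects "move", but the robot reports "idle".
predicted = np.array([0.05, 0.85, 0.05, 0.05])   # stand-in for the LSTM output
print(detect_fault(predicted, "idle"))           # True  -> possible actuation fault
print(detect_fault(predicted, "move"))           # False -> behaviour matches prediction
```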

    System and method for executing fault-tolerant simultaneous localization and mapping in robotic clusters

    Publication Number: US10751881B2

    Publication Date: 2020-08-25

    Application Number: US15900880

    Filing Date: 2018-02-21

    Abstract: In current distributed simultaneous localization and mapping (SLAM) implementations on multiple robots in a robotic cluster, failure of the leader robot terminates the map-building process shared among the robots. A technique for fault-tolerant SLAM in robotic clusters is therefore disclosed. In this technique, SLAM is executed in a resource-constrained robotic cluster such that the distributed SLAM runs reliably and self-heals in case of failure of the leader robot. To ensure fault tolerance, the robots use time-series analysis to estimate their individual failure probabilities and use these estimates to enhance cluster reliability in a distributed manner.
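
    The self-healing idea can be illustrated with a short sketch: each robot carries a failure probability estimated from its own time-series analysis, and the cluster (re-)elects the live robot with the lowest estimated failure probability as leader when the current leader fails. The estimator, field names, and election rule are assumptions for illustration, not the patent's exact method.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Robot:
    name: str
    failure_prob: float   # estimated from the robot's own time-series analysis
    alive: bool = True

def elect_leader(robots: List[Robot]) -> Robot:
    """Pick the live robot with the lowest estimated failure probability."""
    candidates = [r for r in robots if r.alive]
    return min(candidates, key=lambda r: r.failure_prob)

cluster = [Robot("r1", 0.02), Robot("r2", 0.10), Robot("r3", 0.05)]
leader = elect_leader(cluster)          # r1 leads the distributed map building
print("leader:", leader.name)

leader.alive = False                    # leader fails mid-run ...
leader = elect_leader(cluster)          # ... and the cluster self-heals
print("new leader:", leader.name)       # r3
```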

    Data partitioning in internet-of-things (IOT) network

    Publication Number: US10516726B2

    Publication Date: 2019-12-24

    Application Number: US14498619

    Filing Date: 2014-09-26

    Abstract: A method for data partitioning in an internet-of-things (IoT) network is described. The method includes determining the number of computing nodes in the IoT network capable of contributing to the processing of a data set. At least one capacity parameter associated with each computing node in the IoT network and with each communication link between a computing node and a data analytics system is ascertained; the capacity parameter can indicate a computational capacity for each computing node and a communication capacity for each communication link. An availability status, indicating temporal availability, of each computing node and each communication link is also determined. The data set is then partitioned into subsets, based on the number of computing nodes, the capacity parameters and the availability statuses, for parallel processing of the subsets.
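
    A brief sketch of one way such a split could be computed, under the assumption that each available node receives a share of the data set proportional to a combined compute-and-link capacity score; the scoring formula and field names are illustrative, not the claimed partitioning rule.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Node:
    name: str
    compute_capacity: float   # e.g. normalised processing capacity of the node
    link_capacity: float      # e.g. normalised bandwidth to the analytics system
    available: bool           # temporal availability status

def partition_sizes(total_records: int, nodes: List[Node]) -> Dict[str, int]:
    """Assign each available node a share of the data set proportional to an
    illustrative capacity score (compute capacity x link capacity)."""
    usable = [n for n in nodes if n.available]
    scores = {n.name: n.compute_capacity * n.link_capacity for n in usable}
    total = sum(scores.values())
    return {name: int(total_records * s / total) for name, s in scores.items()}

nodes = [
    Node("gateway-1", 1.0, 0.8, True),
    Node("gateway-2", 0.5, 1.0, True),
    Node("sensor-hub", 0.3, 0.4, False),   # currently unavailable, gets no subset
]
print(partition_sizes(10_000, nodes))      # {'gateway-1': 6153, 'gateway-2': 3846}
```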

    TASK ALLOCATION IN A COMPUTING ENVIRONMENT

    Publication Number: US20160011908A1

    Publication Date: 2016-01-14

    Application Number: US14667459

    Filing Date: 2015-03-24

    CPC classification number: G06F9/5027 G06F9/5044 G06F9/505

    Abstract: A method comprises receiving, at each of a plurality of computing devices, a task execution estimation request message from a central server, the task execution estimation request message comprising a worst-case execution time (WCET) corresponding to the computing device. The method further comprises computing, by each of the plurality of computing devices, an estimated task execution time for the task based on the WCET and a state transition model corresponding to the computing device, wherein the state transition model indicates the available processing resources corresponding to the computing device. Further, the method comprises transmitting, by each of the plurality of computing devices, the estimated task execution time to the central server for allocation of the task to a computing device from amongst the plurality of computing devices based on the estimated task execution time corresponding to that computing device.

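    A toy sketch of what the per-device computation might look like: each device scales the WCET it received by the processing resources expected from its own state transition model over a short horizon, and reports the result so the central server can pick the device with the smallest estimate. The state model, the availability fractions, and the scaling rule are all assumptions for illustration, not the disclosed method.

```python
import random

# Toy state-transition model: each state exposes a fraction of the device's
# processing resources and probabilities of moving to the other states.
STATES = {
    "idle":       {"available": 1.0, "next": {"idle": 0.6, "busy": 0.3, "overloaded": 0.1}},
    "busy":       {"available": 0.5, "next": {"idle": 0.3, "busy": 0.5, "overloaded": 0.2}},
    "overloaded": {"available": 0.2, "next": {"idle": 0.2, "busy": 0.5, "overloaded": 0.3}},
}

def expected_available_fraction(state: str, steps: int = 3) -> float:
    """Average the available-resource fraction over a short simulated horizon."""
    fractions, current = [], state
    for _ in range(steps):
        fractions.append(STATES[current]["available"])
        transitions = STATES[current]["next"]
        current = random.choices(list(transitions), weights=transitions.values())[0]
    return sum(fractions) / len(fractions)

def estimated_execution_time(wcet: float, current_state: str) -> float:
    """Illustrative rule: a fully available device reports the WCET itself;
    a loaded device reports a proportionally larger estimate."""
    return wcet / expected_available_fraction(current_state)

random.seed(1)
# Each device would send this estimate back to the central server, which
# allocates the task to the device with the smallest estimate.
print(round(estimated_execution_time(wcet=120.0, current_state="busy"), 1))
```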

    Task execution by idle resources in grid computing system

    Publication Number: US09201686B2

    Publication Date: 2015-12-01

    Application Number: US14317272

    Filing Date: 2014-06-27

    CPC classification number: G06F9/4843 G06F9/4881 G06F9/5072 G06F9/52 H04L67/10

    Abstract: Described herein are methods and devices for execution of a task in a grid computing system. According to an implementation, free time-slots are identified, and their durations estimated, by an edge device for execution of a sub-task. The free time-slots are indicative of an idle state of the edge device. At least one computation capability parameter of the edge device is determined by the edge device for execution of a sub-task during the free time-slots. The edge device creates an advertisement profile comprising at least one free time-slot together with the duration and the at least one computation capability parameter associated with that free time-slot. The advertisement profile is provided by the edge device to grid servers in the grid computing system for partitioning a main task to create a sub-task executable by the edge device.

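    A minimal sketch of the kind of advertisement profile an edge device might publish to the grid servers. The data-structure layout and field names (start, duration, cpu_mips) are hypothetical and only illustrate the "free time-slot plus capability parameter" idea from the abstract.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FreeTimeSlot:
    start: float            # start of the predicted idle period (epoch seconds)
    duration: float         # estimated length of the idle period, in seconds
    cpu_mips: float         # computation capability available during the slot

@dataclass
class AdvertisementProfile:
    device_id: str
    slots: List[FreeTimeSlot] = field(default_factory=list)

def build_profile(device_id: str,
                  idle_predictions: List[Tuple[float, float, float]]) -> AdvertisementProfile:
    """Assemble the profile the edge device sends to the grid servers, which
    use it to cut the main task into sub-tasks that fit the advertised slots."""
    profile = AdvertisementProfile(device_id)
    for start, duration, cpu_mips in idle_predictions:
        profile.slots.append(FreeTimeSlot(start, duration, cpu_mips))
    return profile

# Example: two predicted idle windows on a set-top-box class edge device.
profile = build_profile("edge-42", [(1_700_000_000, 600, 1200), (1_700_010_000, 300, 1500)])
print(len(profile.slots), "free time-slots advertised by", profile.device_id)
```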

    METHOD AND SYSTEM FOR AUTOMATIC SPEECH RECOGNITION IN RESOURCE CONSTRAINED DEVICES

    Publication Number: US20220157297A1

    Publication Date: 2022-05-19

    Application Number: US17361408

    Filing Date: 2021-06-29

    Abstract: Automatic speech recognition techniques are implemented in resource-constrained devices, such as edge devices in the internet of things, where on-device speech recognition is required for low latency and privacy preservation. Existing neural network models for speech recognition are large and are not suitable for deployment in such devices. The present disclosure provides an architecture of a size-constrained neural network and a method of training it. The architecture allows the number of feature blocks to be increased or decreased to achieve a trade-off between accuracy and model size. The training method comprises creating a training dataset of short utterances and training the size-constrained neural network on it to learn short-term dependencies in the utterances. The trained size-constrained neural network model is suitable for deployment in resource-constrained devices.
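
    A PyTorch sketch of the general idea only: a small speech-command style classifier whose depth is set by a num_feature_blocks parameter, so parameter count can be traded against accuracy. The block design, layer sizes, and class names are assumptions, not the architecture disclosed in the application.

```python
import torch
import torch.nn as nn

class FeatureBlock(nn.Module):
    """One repeatable feature block; stacking more of these grows model size."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.norm(self.conv(x)))

class SizeConstrainedASR(nn.Module):
    """Tiny utterance classifier whose size is tuned via num_feature_blocks."""
    def __init__(self, n_mels: int = 40, channels: int = 32,
                 num_feature_blocks: int = 2, num_classes: int = 10):
        super().__init__()
        self.stem = nn.Conv1d(n_mels, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[FeatureBlock(channels) for _ in range(num_feature_blocks)])
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):                    # x: (batch, n_mels, time)
        h = self.blocks(self.stem(x))
        return self.head(h.mean(dim=-1))     # average over time, then classify

for blocks in (1, 2, 4):                     # accuracy vs model-size trade-off
    model = SizeConstrainedASR(num_feature_blocks=blocks)
    params = sum(p.numel() for p in model.parameters())
    print(f"{blocks} feature block(s): {params} parameters")

logits = SizeConstrainedASR()(torch.randn(1, 40, 101))   # one short utterance
print(logits.shape)                                       # torch.Size([1, 10])
```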
