-
Publication No.: US11967133B2
Publication Date: 2024-04-23
Application No.: US17450602
Filing Date: 2021-10-12
Applicant: Tata Consultancy Services Limited
Inventor: Swarnava Dey , Jayeeta Mondal , Jeet Dutta , Arpan Pal , Arijit Mukherjee , Balamuralidhar Purushothaman
IPC: G06V10/00 , G06V10/44 , G06V10/764 , G06V10/82
CPC classification number: G06V10/764 , G06V10/454 , G06V10/82
Abstract: Embodiments of the present disclosure provide a method and system for cooperative and cascaded inference on an edge device using an integrated Deep Learning (DL) model for object detection and localization. The model comprises a strong classifier trained on widely available datasets and a weak localizer trained on scarce datasets; the two work in coordination to first detect the object (e.g., fire) in every input frame using the classifier, and then trigger the localizer only for the frames classified as fire frames. The classifier and the localizer of the integrated DL model are jointly trained using a multitask learning approach. Works in the literature hardly address the technical challenge of embedding such an integrated DL model for deployment on edge devices. The method provides an optimal hardware-software partitioning approach for the components or segments of the integrated DL model, achieving a trade-off between latency and accuracy in object classification and localization.
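The cascade described above can be sketched in a few lines: a cheap classifier screens every frame and the more expensive localizer runs only on frames flagged as fire. This is a minimal illustration under assumed interfaces; the `classifier`, `localizer`, and the 0.5 threshold are hypothetical placeholders, not the patented implementation.

```python
from typing import Callable, List, Optional, Tuple

BoundingBox = Tuple[int, int, int, int]  # (x, y, width, height)

def cascaded_inference(
    frames: List[object],
    classifier: Callable[[object], float],       # strong classifier: returns P(fire) for a frame
    localizer: Callable[[object], BoundingBox],  # weak localizer: returns a fire bounding box
    threshold: float = 0.5,                      # hypothetical decision threshold
) -> List[Optional[BoundingBox]]:
    """Run the classifier on every frame; trigger the localizer only
    for frames classified as fire frames."""
    results: List[Optional[BoundingBox]] = []
    for frame in frames:
        if classifier(frame) >= threshold:
            results.append(localizer(frame))  # expensive step, run only when needed
        else:
            results.append(None)              # no fire detected, localization skipped
    return results
```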
-
Publication No.: US11488026B2
Publication Date: 2022-11-01
Application No.: US16535668
Filing Date: 2019-08-08
Applicant: Tata Consultancy Services Limited
Inventor: Swarnava Dey , Arijit Mukherjee , Arpan Pal , Balamuralidhar Purushothaman
IPC: G06N3/10 , G06N3/04 , G06N3/08 , H04L41/142
Abstract: There is a growing need to run inference on fog devices in order to reduce upstream network traffic. However, because such devices are computationally constrained, executing complex deep inference models on them has proved difficult. A system and method are provided for partitioning a deep convolutional neural network (DCNN) for execution on computationally constrained devices at the network edge. The system uses depth-wise input partitioning of the convolutional operations in the DCNN. The convolution operation is analyzed in terms of input filter depth and number of filters to determine the appropriate partitioning parameters based on an inference speedup method. The system uses a master-slave network to partition the input. Depth-wise partitioning of the input speeds up inference of convolution operations by reducing pixel overlaps.
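Because a standard convolution sums over input channels, splitting the input (and the matching filter slices) along the depth axis lets each worker compute a partial result that the master simply adds up. The NumPy sketch below illustrates that decomposition only; the worker count, the naive `conv2d` helper, and the absence of real master-slave networking are simplifying assumptions, not the patented partitioning scheme.

```python
import numpy as np

def conv2d(x, w):
    """Naive valid cross-correlation. x: (C_in, H, W), w: (C_out, C_in, kH, kW)."""
    c_out, _, kh, kw = w.shape
    _, h, wd = x.shape
    out = np.zeros((c_out, h - kh + 1, wd - kw + 1))
    for o in range(c_out):
        for i in range(h - kh + 1):
            for j in range(wd - kw + 1):
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[o])
    return out

def depthwise_partitioned_conv2d(x, w, n_workers=2):
    """Split input and filters along the channel (depth) axis, let each
    'worker' convolve its slice, then sum the partial outputs on the master."""
    splits = np.array_split(np.arange(x.shape[0]), n_workers)
    partial = [conv2d(x[idx], w[:, idx]) for idx in splits]  # one partial result per worker
    return np.sum(partial, axis=0)

# Sanity check: the partitioned result matches the monolithic convolution.
x = np.random.rand(6, 8, 8)      # 6 input channels
w = np.random.rand(4, 6, 3, 3)   # 4 filters
assert np.allclose(conv2d(x, w), depthwise_partitioned_conv2d(x, w, n_workers=3))
```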
-
Publication No.: US11141858B2
Publication Date: 2021-10-12
Application No.: US16210512
Filing Date: 2018-12-05
Applicant: Tata Consultancy Services Limited
Inventor: Avik Ghose , Swarnava Dey , Arijit Mukherjee
IPC: B25J9/16
Abstract: A data-driven approach for fault detection in robotic actuation is disclosed. A set of robotic tasks is received and analyzed by Deep Learning (DL) analytics that includes a stateful Long Short Term Memory (LSTM) network. Initially, the stateful LSTM is trained to match a set of activities associated with the robots, based on a set of tasks gathered from the robots in a multi-robot environment. The stateful LSTM uses a master-slave load distribution technique and a probabilistic trellis approach to predict the next activity of a robot with minimal latency and increased accuracy. The predicted next activity is then compared with the actual activity of the robot to identify any faults in robotic actuation.
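The fault-flagging step itself is easy to picture: the trained predictor forecasts the next activity from the history seen so far, and any mismatch with the observed activity is reported. In the sketch below, `predict_next` is a stand-in for the patent's stateful LSTM with trellis decoding; the toy fixed-cycle predictor exists only to make the example runnable.

```python
from typing import Callable, List, Sequence

def detect_actuation_faults(
    observed_activities: Sequence[str],
    predict_next: Callable[[Sequence[str]], str],  # e.g. a trained stateful LSTM
) -> List[int]:
    """Compare each observed activity with the prediction made from the
    preceding history; mismatches are flagged as potential actuation faults."""
    fault_indices = []
    for t in range(1, len(observed_activities)):
        predicted = predict_next(observed_activities[:t])  # expected activity at step t
        if predicted != observed_activities[t]:
            fault_indices.append(t)                        # actual deviates from expected
    return fault_indices

# Toy usage with a stand-in predictor that expects a fixed pick-move-place-return cycle.
cycle = ["pick", "move", "place", "return"]
predictor = lambda history: cycle[len(history) % len(cycle)]
print(detect_actuation_faults(["pick", "move", "place", "return", "move"], predictor))  # [4]
```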
-
Publication No.: US10751881B2
Publication Date: 2020-08-25
Application No.: US15900880
Filing Date: 2018-02-21
Applicant: Tata Consultancy Services Limited
Inventor: Swarnava Dey , Swagata Biswas , Arijit Mukherjee
Abstract: In current distributed simultaneous localization and mapping (SLAM) implementations on multiple robots in a robotic cluster, failure of the leader robot terminates the collaborative map-building process. A technique for fault-tolerant SLAM in robotic clusters is therefore disclosed. In this technique, SLAM is executed in a resource-constrained robotic cluster such that the distributed SLAM runs reliably and self-heals in case of failure of the leader robot. To ensure fault tolerance, the robots are enabled, through time series analysis, to estimate their individual failure probabilities and use them to enhance cluster reliability in a distributed manner.
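One way to picture the self-healing idea is that every robot keeps a time series of its own fault events, turns it into a failure probability, and the cluster hands leadership to the member least likely to fail. The sketch below uses an exponentially weighted estimate purely for illustration; the actual time series analysis is not specified in the abstract.

```python
def failure_probability(fault_history, alpha=0.3):
    """Exponentially weighted estimate of how likely the robot is to fail,
    computed from a 0/1 time series of its recent fault events."""
    p = 0.0
    for fault in fault_history:
        p = alpha * fault + (1 - alpha) * p
    return p

def elect_leader(cluster_histories):
    """Hand leadership to the robot with the lowest estimated failure probability."""
    return min(cluster_histories,
               key=lambda robot_id: failure_probability(cluster_histories[robot_id]))

histories = {
    "robot_a": [0, 0, 1, 0, 1],  # occasional faults
    "robot_b": [0, 0, 0, 0, 0],  # no recorded faults
    "robot_c": [1, 1, 0, 1, 1],  # frequently faulty
}
print(elect_leader(histories))   # -> 'robot_b'
```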
-
Publication No.: US10516726B2
Publication Date: 2019-12-24
Application No.: US14498619
Filing Date: 2014-09-26
Applicant: TATA CONSULTANCY SERVICES LIMITED
Inventor: Himadri Sekhar Paul , Arijit Mukherjee , Swarnava Dey , Arpan Pal , Ansuman Banerjee
Abstract: A method for data partitioning in an internet-of-things (IoT) network is described. The method includes determining the number of computing nodes in the IoT network capable of contributing to processing of a data set. At least one capacity parameter associated with each computing node in the IoT network and with each communication link between a computing node and a data analytics system is ascertained. The capacity parameter indicates a computational capacity for each computing node and a communication capacity for each communication link. An availability status, indicating temporal availability, of each computing node and each communication link is determined. The data set is partitioned into subsets, based on the number of computing nodes, the capacity parameters, and the availability statuses, for parallel processing of the subsets.
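A minimal reading of the partitioning step: skip nodes that are currently unavailable and size each node's subset in proportion to its effective capacity, here taken as the smaller of its compute capacity and its link capacity. The field names and the proportional-split rule below are illustrative assumptions, not the claimed method.

```python
def partition_dataset(data, nodes):
    """Split `data` into contiguous subsets, one per available node, with sizes
    proportional to min(compute capacity, link capacity) of each node."""
    available = [n for n in nodes if n["available"]]
    weights = [min(n["compute"], n["link"]) for n in available]
    total = sum(weights)
    subsets, start = {}, 0
    for node, weight in zip(available, weights):
        count = round(len(data) * weight / total)
        subsets[node["id"]] = data[start:start + count]
        start += count
    if start < len(data):                       # give any rounding remainder to the last node
        subsets[available[-1]["id"]] += data[start:]
    return subsets

nodes = [
    {"id": "node1", "compute": 4.0, "link": 2.0, "available": True},
    {"id": "node2", "compute": 1.0, "link": 3.0, "available": True},
    {"id": "node3", "compute": 8.0, "link": 8.0, "available": False},  # temporally unavailable
]
print({k: len(v) for k, v in partition_dataset(list(range(90)), nodes).items()})
# -> {'node1': 60, 'node2': 30}
```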
-
Publication No.: US10320704B2
Publication Date: 2019-06-11
Application No.: US14665979
Filing Date: 2015-03-23
Applicant: TATA CONSULTANCY SERVICES LIMITED
Inventor: Swarnava Dey , Arijit Mukherjee , Pubali Datta , Himadri Sekhar Paul
IPC: H04L12/923 , H04L12/927 , H04L12/911 , H04L12/26 , G06F9/48 , G06F9/50
Abstract: Methods and devices for controlling execution of a data analytics application on a computing device are described. The devices include an alert app that prompts a user about system load and recommends that the user proactively control the execution of a set of processes to reclaim the computational resources required for executing the data analytics application on the device.
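A rough sketch of what such an alert app could do: sample system load, and when it crosses a threshold, list the heaviest processes the user might pause or close. The thresholds, the use of psutil, and the memory-based ranking are all illustrative assumptions; the abstract does not specify them.

```python
import psutil

CPU_THRESHOLD = 80.0      # hypothetical thresholds, not taken from the patent
MEMORY_THRESHOLD = 75.0

def check_system_load(top_n=5):
    """Warn the user about high load and suggest processes whose closure
    would reclaim resources for the data analytics application."""
    cpu = psutil.cpu_percent(interval=1)
    mem = psutil.virtual_memory().percent
    if cpu < CPU_THRESHOLD and mem < MEMORY_THRESHOLD:
        print(f"Load OK (CPU {cpu:.0f}%, memory {mem:.0f}%) - analytics can proceed.")
        return
    print(f"High load detected (CPU {cpu:.0f}%, memory {mem:.0f}%).")
    procs = [p.info for p in psutil.process_iter(["pid", "name", "memory_percent"])]
    procs.sort(key=lambda p: p["memory_percent"] or 0.0, reverse=True)
    print("Consider pausing or closing these memory-heavy processes:")
    for p in procs[:top_n]:
        print(f"  pid={p['pid']:<7} {p['name'] or '?':<25} mem={p['memory_percent'] or 0.0:.1f}%")

if __name__ == "__main__":
    check_system_load()
```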
-
Publication No.: US20160011908A1
Publication Date: 2016-01-14
Application No.: US14667459
Filing Date: 2015-03-24
Applicant: Tata Consultancy Services Limited
Inventor: Himadri Sekhar Paul , Arijit Mukherjee , Ansuman Banerjee , Swarnava Dey , Arpan Pal , Pubali Datta
IPC: G06F9/50
CPC classification number: G06F9/5027 , G06F9/5044 , G06F9/505
Abstract: A method comprises receiving, at each of a plurality of computing devices, a task execution estimation request message from a central server, the message comprising a worst-case execution time (WCET) corresponding to the computing device. The method further comprises computing, by each of the plurality of computing devices, an estimated task execution time for the task based on the WCET and a state transition model corresponding to the computing device, wherein the state transition model indicates the available processing resources of the computing device. Further, the method comprises transmitting, by each of the plurality of computing devices, the estimated task execution time to the central server for allocation of the task to one of the computing devices based on the estimated task execution times.
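One plausible way to combine the WCET with a state transition model is sketched below: treat the model as a Markov chain over device load states, compute its stationary distribution, and scale the WCET by the expected fraction of processing resources available. The two-state chain, the availability figures, and the scaling rule are assumptions for illustration only; the abstract does not define the estimation formula.

```python
import numpy as np

def expected_availability(transition_matrix, availability_per_state):
    """Stationary distribution of a Markov chain over device states, combined
    with the fraction of processing resources available in each state."""
    P = np.asarray(transition_matrix, dtype=float)
    eigvals, eigvecs = np.linalg.eig(P.T)
    stationary = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    stationary = stationary / stationary.sum()
    return float(stationary @ np.asarray(availability_per_state, dtype=float))

def estimate_task_execution_time(wcet, transition_matrix, availability_per_state):
    """Scale the WCET received from the central server by the device's
    expected long-run resource availability."""
    return wcet / max(expected_availability(transition_matrix, availability_per_state), 1e-9)

# Two device states: 'idle' (90% of resources free) and 'busy' (20% free).
P = [[0.7, 0.3],
     [0.4, 0.6]]
print(estimate_task_execution_time(wcet=10.0, transition_matrix=P,
                                   availability_per_state=[0.9, 0.2]))  # ~16.7
```

The central server would then allocate the task to the device reporting the smallest estimate.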
-
Publication No.: US09201686B2
Publication Date: 2015-12-01
Application No.: US14317272
Filing Date: 2014-06-27
Applicant: Tata Consultancy Services Limited
Inventor: Swarnava Dey , Arpan Pal , Arijit Mukherjee , Himadri Sekhar Paul
CPC classification number: G06F9/4843 , G06F9/4881 , G06F9/5072 , G06F9/52 , H04L67/10
Abstract: Described herein are methods and devices for execution of a task in a grid computing system. According to an implementation, an edge device identifies free time-slots and estimates their durations for execution of a sub-task; the free time-slots are indicative of an idle state of the edge device. The edge device determines at least one computation capability parameter for execution of a sub-task during the free time-slots. The edge device then creates an advertisement profile containing at least one free time-slot along with the duration and the at least one computation capability parameter associated with that time-slot. The advertisement profile is provided by the edge device to grid servers in the grid computing system, which partition a main task to create a sub-task executable by the edge device.
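The advertisement itself can be thought of as a small structured message: the device's identity plus, for each predicted idle slot, its start, duration, and the capability available during it. The JSON encoding and field names below (for instance `mips`) are illustrative assumptions; the patent does not specify a wire format.

```python
from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class FreeSlot:
    start: str        # start of the predicted idle period (ISO-8601)
    duration_s: int   # estimated duration of the slot, in seconds
    mips: float       # computation capability available during the slot

def build_advertisement_profile(device_id: str, slots: List[FreeSlot]) -> str:
    """Assemble the advertisement an edge device sends to the grid servers:
    its free time-slots, their durations and per-slot capability."""
    return json.dumps({"device_id": device_id,
                       "free_slots": [asdict(s) for s in slots]}, indent=2)

profile = build_advertisement_profile(
    "edge-42",
    [FreeSlot(start="2015-03-23T01:00:00Z", duration_s=3600, mips=120.0),
     FreeSlot(start="2015-03-23T13:30:00Z", duration_s=900, mips=80.0)],
)
print(profile)  # the grid server sizes sub-tasks so they fit within these slots
```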
-
Publication No.: US11735166B2
Publication Date: 2023-08-22
Application No.: US17361408
Filing Date: 2021-06-29
Applicant: Tata Consultancy Services Limited
Inventor: Swarnava Dey , Jeet Dutta
IPC: G10L15/06 , G06N3/088 , G10L15/04 , G10L15/16 , G10L15/22 , G10L15/28 , G10L25/78 , G06N3/044 , G06N3/045 , G10L15/05
CPC classification number: G10L15/063 , G06N3/044 , G06N3/045 , G06N3/088 , G10L15/04 , G10L15/05 , G10L15/16 , G10L15/22 , G10L15/28 , G10L25/78 , G10L15/06
Abstract: Automatic speech recognition techniques are implemented in resource-constrained devices, such as edge devices in the internet of things, where on-device speech recognition is required for low latency and privacy preservation. Existing neural network models for speech recognition are large and not suitable for deployment on such devices. The present disclosure provides an architecture for a size-constrained neural network and a method of training it. The architecture allows the number of feature blocks to be increased or decreased to achieve an accuracy versus model-size trade-off. The training method comprises creating a training dataset of short utterances and training the size-constrained neural network with this dataset to learn short-term dependencies in the utterances. The trained size-constrained neural network model is suitable for deployment on resource-constrained devices.
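The accuracy versus model-size knob can be illustrated with a classifier whose number of feature blocks is a constructor argument, as in the PyTorch toy below. The block structure (convolution, batch norm, ReLU, pooling), channel count, and input shape are stand-ins chosen for the example; the patented architecture is not detailed in the abstract.

```python
import torch
import torch.nn as nn

class SizeConstrainedASRNet(nn.Module):
    """Toy size-constrained classifier over speech features: `n_blocks`
    controls model size, trading parameters against accuracy."""
    def __init__(self, n_blocks: int = 2, channels: int = 16, n_classes: int = 10):
        super().__init__()
        blocks, in_ch = [], 1
        for _ in range(n_blocks):
            blocks += [nn.Conv2d(in_ch, channels, kernel_size=3, padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(), nn.MaxPool2d(2)]
            in_ch = channels
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(channels, n_classes)

    def forward(self, x):                        # x: (batch, 1, n_mels, time)
        h = self.pool(self.features(x)).flatten(1)
        return self.classifier(h)

for n in (1, 2, 4):                              # more blocks -> larger model
    model = SizeConstrainedASRNet(n_blocks=n)
    params = sum(p.numel() for p in model.parameters())
    out = model(torch.randn(2, 1, 40, 101))      # a batch of short-utterance MFCC features
    print(f"blocks={n}: params={params}, output shape={tuple(out.shape)}")
```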
-
Publication No.: US20220157297A1
Publication Date: 2022-05-19
Application No.: US17361408
Filing Date: 2021-06-29
Applicant: Tata Consultancy Services Limited
Inventor: Swarnava Dey , Jeet Dutta
Abstract: Automatic speech recognition techniques are implemented in resource-constrained devices, such as edge devices in the internet of things, where on-device speech recognition is required for low latency and privacy preservation. Existing neural network models for speech recognition are large and not suitable for deployment on such devices. The present disclosure provides an architecture for a size-constrained neural network and a method of training it. The architecture allows the number of feature blocks to be increased or decreased to achieve an accuracy versus model-size trade-off. The training method comprises creating a training dataset of short utterances and training the size-constrained neural network with this dataset to learn short-term dependencies in the utterances. The trained size-constrained neural network model is suitable for deployment on resource-constrained devices.