Abstract:
Video conferencing involves the transmission of video as well as audio over a network between the participants of a conference. Typically, the quality of a conference session is affected by the quality of the network connection. If the bandwidth of the network is low, calls may suffer quality issues or drops, which is undesirable, especially in applications such as a surgery conducted over video conferencing. Disclosed herein is a Conference Manager (CM) that can facilitate video conferencing over a low bandwidth network. The CM uses a producer unit and a consumer unit for video capture and transmission, and a communication device for audio capture and transmission. The CM captures and combines audio and video data at a receiving end of the communication network. The CM also uses a fast block-wise data transfer mechanism for facilitating communication between the transmitting end and the receiving end.
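
By way of illustration, the following minimal Python sketch shows a block-wise transfer of the kind described above: a captured video frame is split into numbered blocks at the transmitting end and reassembled at the receiving end. The block size, header layout, and function names are assumptions made for the sketch, not the disclosed mechanism.

    # Illustrative sketch only; block size and header layout are assumptions.
    import struct

    BLOCK_SIZE = 4096  # assumed payload size per block

    def to_blocks(frame_id: int, payload: bytes):
        """Split one captured video frame into numbered blocks."""
        blocks = []
        for seq, start in enumerate(range(0, len(payload), BLOCK_SIZE)):
            chunk = payload[start:start + BLOCK_SIZE]
            header = struct.pack("!IIH", frame_id, seq, len(chunk))
            blocks.append(header + chunk)
        return blocks

    def from_blocks(blocks):
        """Reassemble a frame at the receiving end, ordering by sequence number."""
        parsed = []
        for blk in blocks:
            frame_id, seq, length = struct.unpack("!IIH", blk[:10])
            parsed.append((seq, blk[10:10 + length]))
        return b"".join(chunk for _, chunk in sorted(parsed))

    if __name__ == "__main__":
        frame = bytes(10_000)  # stand-in for one captured video frame
        assert from_blocks(to_blocks(1, frame)) == frame
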
Abstract:
In the field of the Internet of Things, understanding the needs of applications and translating them into network parameters and protocol parameters is a major challenge. This disclosure addresses the problem of enabling network services through a cognitive sense-analyze-decide-respond framework. A processor implemented method is provided for enabling network-aware applications and application-aware networks by a sense-analyze-decide-respond (SADR) framework. The processor implemented method includes sensing at least one application parameter and at least one network parameter to obtain a plurality of sensed information; analyzing the plurality of sensed information, which is filtered and synchronized to generate a plurality of derived parameters; determining a plurality of rules based on the plurality of derived parameters; validating the plurality of rules for a plurality of scenarios to obtain a plurality of decisions; and enabling at least one of (i) network, (ii) application, and (iii) protocol control based on the plurality of decisions.
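
The following minimal Python sketch illustrates the SADR loop as a pipeline of the four steps named above. The specific parameters (an application latency requirement and a network round-trip time), the threshold, and the single rule are hypothetical placeholders.

    # Illustrative sketch of the sense-analyze-decide-respond (SADR) loop;
    # parameter names, threshold, and rule are hypothetical.
    def sense():
        # Gather one application parameter and one network parameter.
        return {"app_latency_req_ms": 50, "net_rtt_ms": 80}

    def analyze(sensed):
        # Filter/synchronize sensed values into derived parameters.
        return {"rtt_margin_ms": sensed["app_latency_req_ms"] - sensed["net_rtt_ms"]}

    def decide(derived):
        # A rule determined from the derived parameters, validated per scenario.
        return "switch_to_lower_latency_path" if derived["rtt_margin_ms"] < 0 else "no_change"

    def respond(decision):
        # Enable network / application / protocol control based on the decision.
        print(f"applying control action: {decision}")

    if __name__ == "__main__":
        respond(decide(analyze(sense())))
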
Abstract:
This disclosure relates generally to autonomous devices, and more particularly to a method and system to optimally allocate warehouse procurement tasks to distributed autonomous devices. The method includes obtaining, at a coordinating agent, a global task associated with the warehouse and information associated with the robotic agents. The information includes a count and status of the robotic agents. The global task is profiled to obtain a set of sub-tasks, and constraints associated with the set of sub-tasks are identified. The constraints include a utilization constraint and/or pricing constraints. A distributed, decentralized optimal task allocation is performed amongst the robotic agents based on the constraints to obtain optimal performance of the robotic agents. The distributed optimal task allocation includes performing primal or dual decomposition of the set of sub-tasks by each robotic agent, and updating the corresponding primal or dual variables by the coordinating agent, when the optimization is performed based on the utilization constraint or the pricing constraints, respectively.
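
The following minimal Python sketch illustrates the dual-decomposition step: each robotic agent solves a local subproblem for its share of the global task, and the coordinating agent iteratively updates the dual variable (a price) until the shares cover the task. The quadratic local costs and the step size are illustrative assumptions.

    # Illustrative sketch of dual decomposition for task allocation;
    # quadratic costs and step size are assumptions.
    def allocate(costs, demand, step=0.5, iters=200):
        lam = 0.0  # dual variable maintained by the coordinating agent
        for _ in range(iters):
            # Each robot minimizes c_i*x_i^2 - lam*x_i locally -> x_i = lam/(2*c_i)
            shares = [lam / (2.0 * c) for c in costs]
            # Coordinating agent nudges the price toward satisfying the global task
            lam += step * (demand - sum(shares))
        return shares

    if __name__ == "__main__":
        # Three robots with different per-unit costs splitting a task of size 12;
        # prints roughly [6.86, 3.43, 1.71], which sums to the task size.
        print(allocate(costs=[1.0, 2.0, 4.0], demand=12.0))
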
Abstract:
This disclosure relates generally to distributed robotic networks, and more particularly to communication link prediction in distributed robotic networks. In one embodiment, robots in a robotic network, which are mobile, can establish communication with a cloud network through a fog node, wherein the fog node is a static node. A robot can communicate directly with a fog node (R2F) if the fog node is in the communication range of the robot. If there is no fog node in the communication range of the robot, then the robot can establish communication with another robot (R2R) and communicate indirectly with the fog node through the connected robot. Communication link prediction is used to identify one or more communication links that can be used by a robot for establishing communication with the cloud network. A link that satisfies requirements in terms of link quality and any other parameters is used for communication.
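
The following minimal Python sketch illustrates the link-selection logic: prefer a direct R2F link when a fog node is in range, otherwise fall back to an R2R link through a peer robot that can reach a fog node. The distance-based range test and the proximity-based preference are placeholders standing in for the actual link-prediction model.

    # Illustrative sketch of R2F/R2R link selection; range test and scoring
    # are placeholder assumptions.
    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def pick_link(robot, fog_nodes, peers, comm_range=50.0):
        """Return ('R2F', node) if a fog node is reachable directly,
        else ('R2R', peer) via the closest peer that can reach a fog node."""
        in_range = [f for f in fog_nodes if dist(robot, f) <= comm_range]
        if in_range:
            return "R2F", min(in_range, key=lambda f: dist(robot, f))
        candidates = [p for p in peers
                      if dist(robot, p) <= comm_range
                      and any(dist(p, f) <= comm_range for f in fog_nodes)]
        if candidates:
            # Prefer the nearest peer as a simple proxy for link quality
            return "R2R", min(candidates, key=lambda p: dist(robot, p))
        return None, None

    if __name__ == "__main__":
        # No fog node within 50 units of the robot, so an R2R link is chosen.
        print(pick_link(robot=(0, 0), fog_nodes=[(90, 0)], peers=[(45, 0)]))
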
Abstract:
A method and a system are disclosed herein for a cooperative on-path and off-path caching policy for information centric networks (ICN). In an embodiment, a computer implemented method and system are provided for a cooperative on-path and off-path caching policy for information centric networks, in which the edge routers or on-path routers optimally store the requested ICN contents and are supported by a strategically placed central off-path cache router for an additional level of caching. A heuristic mechanism has also been provided to offload and optimally store the contents from the on-path routers to the off-path central cache router. The present scheme optimally stores the requested ICN contents either in the on-path edge routers or in the strategically located off-path central cache router. The present scheme also ensures an optimal formulation, resulting in reduced cache duplication, delay, and network usage.
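
The following minimal Python sketch illustrates the offloading heuristic in spirit: an on-path edge router caches requested content while it has room and otherwise offloads its least-requested item to the central off-path cache. The capacity and the popularity counter are illustrative assumptions, not the optimal formulation.

    # Illustrative sketch of on-path caching with offload to an off-path cache;
    # capacity and popularity counting are assumptions.
    class EdgeRouter:
        def __init__(self, capacity, off_path_cache):
            self.capacity = capacity
            self.store = {}            # content name -> request count (popularity)
            self.off_path = off_path_cache

        def cache(self, name):
            if name in self.store:
                self.store[name] += 1
                return
            if len(self.store) >= self.capacity:
                # Offload the least requested content to the off-path central cache
                victim = min(self.store, key=self.store.get)
                self.off_path.add(victim)
                del self.store[victim]
            self.store[name] = 1

    if __name__ == "__main__":
        central = set()
        edge = EdgeRouter(capacity=2, off_path_cache=central)
        for name in ["a", "a", "b", "c"]:
            edge.cache(name)
        print(edge.store, central)   # expected: {'a': 2, 'c': 1} and {'b'}
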
Abstract:
Coordinated Multipoint (CoMP) transmission is a potential candidate for optimizing the performance of a network, with the added flexibility of serving a UE from multiple Base Stations (BSs). However, the performance gain in CoMP is only as good as the dynamic clustering. The existing approaches are applicable to a fixed cluster size, which does not capture time-varying channel conditions and the cost of transmission. Embodiments herein provide a method and system for learning based dynamic clustering of BSs for CoMP transmission in communication networks. Herein, a framework for CoMP transmission in 5th Generation (5G) and beyond networks is disclosed. Further, an optimal user-centric dynamic clustering technique is disclosed for CoMP with the aim of maximizing the throughput subject to a constraint on the cost of transmission from the CoMP cluster, i.e., the coordinating set of BSs.
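
The following minimal Python sketch illustrates a user-centric cluster selection of this kind: base stations are greedily added to a UE's coordinating set while the total transmission cost stays within a budget, and the resulting throughput is evaluated. The SINR-style throughput expression, the per-BS costs, and the greedy rule are assumptions, not the learned clustering policy.

    # Illustrative sketch of cost-constrained user-centric clustering;
    # gains, costs, and the throughput model are assumptions.
    import math

    def select_cluster(gains, costs, noise=1.0, cost_budget=3.0):
        """gains[i]: received power from BS i; costs[i]: cost of serving from BS i."""
        order = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)
        cluster, spent = [], 0.0
        for i in order:
            if spent + costs[i] > cost_budget:
                continue
            cluster.append(i)
            spent += costs[i]
        signal = sum(gains[i] for i in cluster)
        interference = sum(gains[i] for i in range(len(gains)) if i not in cluster)
        throughput = math.log2(1.0 + signal / (noise + interference))
        return cluster, throughput

    if __name__ == "__main__":
        # BSs 0 and 1 fit the cost budget and form the UE's coordinating set.
        print(select_cluster(gains=[4.0, 2.5, 1.0, 0.5], costs=[1.5, 1.0, 1.0, 1.0]))
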
Abstract:
This disclosure relates generally to telecommunication networks, and more particularly to a method and system for delay aware uplink scheduling in a communication network. The telecommunication network performs data uplink and data downlink associated with data communication between multiple transmitter-receiver pairs connected to the network, by using a specific scheduling scheme as part of the data processing. Disclosed are a method and a system for performing delay-aware uplink scheduling in the communication network. For an uplink request received from a transmitter, the system estimates the downlink delay at the corresponding receiver side. The system further estimates a processing delay for the received uplink request. Based on the estimated downlink delay and the processing delay, the system determines an uplink delay budget for the transmitter. Further, based on the uplink delay budget and an achievable data rate computed for the transmitter, the system schedules the uplink for the transmitter.
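
The following minimal Python sketch illustrates the delay-budget computation: the uplink delay budget is what an end-to-end deadline leaves after the estimated downlink and processing delays, and a transmitter is scheduled only if its payload can be sent within that budget at the achievable rate. The deadline values and the tightest-budget-first ordering are assumptions.

    # Illustrative sketch of delay-aware uplink scheduling; deadlines and
    # ordering are assumptions.
    def uplink_budget(deadline_ms, downlink_delay_ms, processing_delay_ms):
        # Whatever the deadline leaves after downlink and processing is
        # available for the uplink transmission.
        return deadline_ms - downlink_delay_ms - processing_delay_ms

    def schedule(requests):
        """requests: dicts with deadline_ms, downlink_ms, processing_ms,
        payload_bits, rate_bps. Returns transmitters that fit their budget,
        tightest budget first."""
        feasible = []
        for r in requests:
            budget = uplink_budget(r["deadline_ms"], r["downlink_ms"], r["processing_ms"])
            tx_time_ms = 1000.0 * r["payload_bits"] / r["rate_bps"]
            if tx_time_ms <= budget:
                feasible.append((budget, r["id"]))
        return [tx_id for _, tx_id in sorted(feasible)]

    if __name__ == "__main__":
        reqs = [
            {"id": "tx1", "deadline_ms": 100, "downlink_ms": 30, "processing_ms": 20,
             "payload_bits": 40_000, "rate_bps": 1_000_000},
            {"id": "tx2", "deadline_ms": 60, "downlink_ms": 30, "processing_ms": 25,
             "payload_bits": 40_000, "rate_bps": 1_000_000},
        ]
        print(schedule(reqs))   # tx2's 5 ms budget is too tight; only tx1 fits
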
Abstract:
This disclosure relates to managing Fog computations between a coordinating node and Fog nodes. In one embodiment, a method for managing Fog computations includes receiving task data and a request for allocation of at least a subset of a computational task. The task data includes a data subset and task constraints associated with at least the subset of the computational task. The Fog nodes capable of performing the computational task are characterized with node characteristics to obtain resource data associated with the Fog nodes. Based on the task data and the resource data, an optimization model is derived to perform the computational task by the Fog nodes. The optimization model includes node constraints, including a battery degradation constraint, a communication path loss constraint, and the heterogeneous computational capacities of the Fog nodes. Based on the optimization model, at least the subset of the computational task is offloaded to a set of Fog nodes.
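
The following minimal Python sketch illustrates the offloading decision in spirit: each sub-task is assigned to a Fog node that satisfies battery, path-loss, and capacity constraints, preferring the node with the lowest path loss. The thresholds and the greedy rule are illustrative stand-ins for the optimization model.

    # Illustrative sketch of constraint-aware offloading to Fog nodes;
    # thresholds and the greedy rule are assumptions.
    def offload(subtasks, nodes, max_path_loss_db=100.0, min_battery=0.2):
        """subtasks: list of required compute units; nodes: list of dicts with
        capacity, battery (0..1), path_loss_db. Returns task index -> node index."""
        assignment = {}
        remaining = [n["capacity"] for n in nodes]
        for t_idx, demand in enumerate(subtasks):
            eligible = [i for i, n in enumerate(nodes)
                        if n["battery"] >= min_battery
                        and n["path_loss_db"] <= max_path_loss_db
                        and remaining[i] >= demand]
            if not eligible:
                continue   # keep this sub-task at the coordinating node
            best = min(eligible, key=lambda i: nodes[i]["path_loss_db"])
            assignment[t_idx] = best
            remaining[best] -= demand
        return assignment

    if __name__ == "__main__":
        nodes = [{"capacity": 5, "battery": 0.9, "path_loss_db": 80},
                 {"capacity": 3, "battery": 0.1, "path_loss_db": 60},
                 {"capacity": 4, "battery": 0.6, "path_loss_db": 90}]
        print(offload(subtasks=[4, 3, 2], nodes=nodes))   # expected: {0: 0, 1: 2}
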
Abstract:
This disclosure relates generally to provisioning network services in a cloud computing environment, and more particularly to a framework for provisioning network services in a heterogeneous cloud computing environment. In one embodiment, the disclosure includes a network as a service (NaaS) layer under a cloud provisioning platform. The NaaS layer can be interfaced with any cloud provisioning platform. The NaaS layer serves the networking needs of the heterogeneous cloud environment. It provides network services such as monitoring, notifications, QoS policies, and network topology, among others. For example, the cloud provisioning platform defines a virtual network and attaches a plurality of virtual machines to it. All communications related to the creation, deletion, or update of virtual networks, virtual subnets, virtual ports, virtual routers, virtual interfaces, etc., are sent to the NaaS layer. On receiving such a communication, the NaaS layer takes the necessary steps to provide the network services as per the needs of the request. Apart from provisioning, the NaaS layer also periodically monitors the network elements.
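
The following minimal Python sketch illustrates how such a NaaS layer might dispatch create/update/delete messages from the provisioning platform and run a periodic monitoring pass. The resource kinds, handler signature, and monitoring output are assumptions, not the disclosed interface.

    # Illustrative sketch of a NaaS layer handling provisioning messages;
    # resource kinds and method names are assumptions.
    class NaaSLayer:
        def __init__(self):
            self.resources = {}   # (kind, name) -> attributes

        def handle(self, action, kind, name, **attrs):
            """Called by the cloud provisioning platform for virtual networks,
            subnets, ports, routers, and interfaces."""
            key = (kind, name)
            if action == "create":
                self.resources[key] = attrs
            elif action == "update":
                self.resources.setdefault(key, {}).update(attrs)
            elif action == "delete":
                self.resources.pop(key, None)

        def monitor(self):
            # Periodic monitoring pass over the provisioned network elements.
            return {key: "up" for key in self.resources}

    if __name__ == "__main__":
        naas = NaaSLayer()
        naas.handle("create", "network", "tenant-net", cidr="10.0.0.0/24")
        naas.handle("create", "port", "vm1-port", network="tenant-net")
        print(naas.monitor())
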