Abstract:
Methods and systems for automatic information extraction by performing self-learning crawling and rule-based data mining are provided. The method determines the existence of a crawl policy within the input information and performs at least one of front-end crawling, assisted crawling, and recursive crawling. The downloaded data set is pre-processed to remove noisy data and is then subjected to classification rules and decision-tree based data mining to extract meaningful information. These crawling techniques yield smaller, relevant datasets pertaining to a specific domain from the multi-dimensional datasets available in online and offline sources.
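The pipeline above can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation: the policy check, noise filter, and keyword rule are all hypothetical stand-ins for the crawl-policy detection, pre-processing, and decision-tree classification steps.

```python
# Hypothetical sketch of the crawl-then-mine pipeline; names are illustrative.

def select_crawl_mode(input_info):
    """Choose a crawling technique based on whether a crawl policy exists."""
    if input_info.get("crawl_policy"):
        return "assisted"    # a policy exists to guide the crawler
    return "front_end"       # fall back to surface crawling

def preprocess(records):
    """Remove noisy records (here: entries with empty text)."""
    return [r for r in records if r.get("text")]

def classify(record):
    """Toy decision-tree style rule: route records by keyword."""
    text = record["text"].lower()
    if "price" in text:
        return "commerce"
    return "general"

records = [{"text": "Price list for widgets"}, {"text": ""}, {"text": "News item"}]
clean = preprocess(records)
labels = [classify(r) for r in clean]
print(select_crawl_mode({"crawl_policy": True}), labels)
```

The noisy (empty) record is dropped before classification, mirroring the pre-processing step described in the abstract.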
Abstract:
In the LTE Random Access Channel (RACH) mechanism, devices use a slotted-ALOHA based protocol for RACH message exchange. During this message exchange, if a device does not get a response from a base station (BS), the device assumes that it cannot reach the BS due to insufficient transmission power and increases its transmit power. However, at higher device densities most requests are lost due to collisions. In the existing RACH procedure, a device unnecessarily ramps up power in the next RACH attempt, which wastes power in an already resource-constrained device. When reception fails during the RACH process, the present disclosure computes a time delay (TD) based on an RSSI value obtained from a message transmitted by the BS, and initiates the RACH process accordingly. The embodiments further enable request transmission from the device to the BS by ramping the device's power based on the computed TD.
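A rough sketch of the idea, under assumed numbers: the RSSI-to-delay mapping, thresholds, and power step below are illustrative choices, not values from the disclosure. The point is only the control flow, namely deferring the retry and conditioning the power ramp on the computed TD rather than ramping unconditionally.

```python
# Illustrative: weaker RSSI -> longer back-off before the next RACH attempt,
# and power is ramped based on the computed delay instead of on every failure.

def compute_time_delay(rssi_dbm, base_delay_ms=10, min_rssi=-110, max_rssi=-70):
    """Map an RSSI reading to a RACH retry delay: weaker signal, longer wait."""
    rssi = max(min_rssi, min(max_rssi, rssi_dbm))
    weakness = (max_rssi - rssi) / (max_rssi - min_rssi)  # 0.0 (strong) .. 1.0 (weak)
    return base_delay_ms * (1 + 4 * weakness)             # 10 ms .. 50 ms

def next_tx_power(current_dbm, td_ms, step_db=2, max_dbm=23):
    """Ramp power only when the computed delay indicates a weak link."""
    ramp = step_db if td_ms > 30 else 0
    return min(current_dbm + ramp, max_dbm)

td = compute_time_delay(-100)      # weak signal example
print(td, next_tx_power(10, td))
```

For a strong signal (short TD) the device retries without ramping, avoiding the unconditional power escalation of the existing procedure.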
Abstract:
The present application provides a method and system for sharing of unlicensed spectrum. The disclosed method and system, when implemented, improve the spectral efficiency of LTE users and also improve the overall performance of LAA and Wi-Fi users. A BS senses the channel for any ongoing transmissions for a Clear Channel Assessment (CCA) period equal to the DIFS time period of Wi-Fi. If the channel is busy (CCA-busy), the BS enters a back-off stage by selecting a uniform random number from [0, C0−1] as the back-off counter, where C0 is the fixed contention window size. If the channel is free for a CCA period (CCA-idle), the back-off counter is decremented by one until it reaches zero. Once the back-off counter reaches zero, LAA again senses the channel and schedules its downlink transmissions for a maximum channel occupancy period, provided the channel is free.
Abstract:
A method and system for optimizing a distributed enterprise information technology (IT) network infrastructure is disclosed, wherein the IT infrastructure comprises at least one server, at least one storage element, and at least one network element. The method comprises collecting data pertaining to an existing state of the IT network infrastructure and arranging the collected data in a first set of templates. The method further comprises mapping the existing state and a new state of at least one of the server and the storage element to an existing set of network elements using the first set of templates, to form a second set of templates. The method further comprises planning the new state of the IT network infrastructure for transformation using the first set of templates and the second set of templates, the new state being an optimized state.
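One way to picture the two template sets is below. The field names and the virtualization example are hypothetical; only the structure, collected current-state records joined to network elements and planned states, mirrors the described method.

```python
# Illustrative template mapping: the first set captures collected current-state
# data; the second joins each server/storage asset to its network element and
# its planned (new) state. All field names are assumptions.

existing_state = [  # first set of templates
    {"asset": "srv-01", "type": "server", "connected_to": "sw-01"},
    {"asset": "nas-01", "type": "storage", "connected_to": "sw-02"},
]

def map_to_network(templates, new_states):
    """Form the second set of templates: asset -> network element, existing
    state, and planned state (unchanged if no new state is given)."""
    return [
        {"asset": t["asset"],
         "network_element": t["connected_to"],
         "existing": t["type"],
         "planned": new_states.get(t["asset"], t["type"])}
        for t in templates
    ]

plan = map_to_network(existing_state, {"srv-01": "virtualized-server"})
print(plan[0]["planned"], plan[1]["planned"])
```

The resulting second set of templates is what the planning step would consume to compute the optimized state.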
Abstract:
A method and system are provided for distributed optimal caching for information centric networking. The system and method disclosed herein enable each router/node in the network to make an independent decision to solve the optimization problem based upon cost feedback from its neighbors. Content is received by a first router, which determines whether it should store the content in its cache based on a Characterizing Metrics (CM) value, or send it to a neighbor router j, where the neighbor router j is selected based on a transaction cost determination. On receiving the content, node j performs a similar computation to determine whether the content should be stored in its own cache. The method is performed iteratively for optimal distributed caching.
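The per-router decision can be sketched as below. The CM threshold and the cost model are placeholders, only the control flow (cache locally when the CM value is high enough, otherwise forward to the cheapest neighbor) follows the described method.

```python
# Illustrative cache-or-forward decision made independently at each router.
# The CM computation and neighbor costs are assumed inputs, not the
# disclosed formulas.

def cache_or_forward(cm_value, cm_threshold, neighbor_costs):
    """Cache locally if the content's CM value clears the threshold;
    otherwise forward to the neighbor with the lowest transaction cost."""
    if cm_value >= cm_threshold:
        return ("cache", None)
    best = min(neighbor_costs, key=neighbor_costs.get)
    return ("forward", best)

print(cache_or_forward(0.9, 0.5, {"j1": 3.0, "j2": 1.5}))  # high CM: cache
print(cache_or_forward(0.2, 0.5, {"j1": 3.0, "j2": 1.5}))  # low CM: forward
```

Node j would then run the same function on the forwarded content, which is what makes the scheme iterative and fully distributed.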
Abstract:
Robotic applications are important in both indoor and outdoor environments, and establishing reliable end-to-end communication among robots in such environments is essential. Many real-time challenges in robotic communications arise from the dynamic movement of robots, battery constraints, the absence of a Global Positioning System (GPS), and the like. Systems and methods of the present disclosure provide an assisted link prediction (ALP) protocol for communication between robots that resolves real-time challenges such as link ambiguity and prediction accuracy, improves the Packet Reception Ratio (PRR), and reduces energy consumption through fewer retransmissions. The protocol computes a link matrix between robots and determines the status of a Collaborative Robotic based Link Prediction (CRLP) link based on a comparison of the link matrix value with a predefined covariance link matrix threshold. Based on the determined status, robots either transmit or receive packets, and the predefined covariance link matrix threshold is dynamically updated. If the link to be predicted is unavailable, the system resolves the ambiguity, thereby enabling communication between the robots.
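The threshold comparison and dynamic update can be sketched as follows. The exponential-moving-average update rule is an assumption for illustration; the disclosure specifies only that the covariance threshold is dynamically updated.

```python
# Hedged sketch of the CRLP status decision: compare a link-matrix value
# against the covariance threshold, then adapt the threshold toward recent
# observations. The EWMA update is an illustrative assumption.

def link_status(link_value, threshold):
    """Predict the link as usable for transmission when the metric clears
    the covariance threshold; otherwise stay in receive mode."""
    return "transmit" if link_value >= threshold else "receive"

def update_threshold(threshold, observed, alpha=0.2):
    """Dynamically move the threshold toward the latest observed value."""
    return (1 - alpha) * threshold + alpha * observed

status = link_status(0.7, 0.5)
new_thr = update_threshold(0.5, 0.9)
print(status, round(new_thr, 2))
```

Because the threshold tracks observations, a robot whose links degrade over time will shift toward receiving (and triggering ambiguity resolution) rather than wasting energy on retransmissions.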
Abstract:
System(s) and method(s) for designing a network of one or more entities in an enterprise are disclosed. A design type, along with configurable design parameters, is selected from a list of designs. Requirements associated with the design of the entities are collected from users. The requirements and configurable design parameters are analyzed to obtain analysis results. Based on at least one of a layer-wise requirement and distribution or a zone-wise requirement and distribution, network devices and modules associated with the entities are determined. One or more designs are generated for the network of entities based on the layer-wise or zone-wise requirement and distribution of network devices and modules associated with the entities in the enterprise.
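A toy version of the layer-wise determination step: from collected zone requirements, derive device counts per network layer. The sizing rules (48 ports per access switch, a redundant distribution pair) are illustrative assumptions, not the disclosed analysis.

```python
# Hypothetical layer-wise sizing: map per-zone user requirements to device
# counts for the access, distribution, and core layers.

def determine_devices(requirements):
    """Derive device counts per layer from zone-wise user counts."""
    users = sum(requirements.values())
    access = -(-users // 48)                  # ceil: 48 ports per access switch
    distribution = max(2, -(-access // 10))   # at least a redundant pair
    return {"access": access, "distribution": distribution, "core": 2}

reqs = {"zone_a": 120, "zone_b": 80}          # zone-wise requirements
print(determine_devices(reqs))
```

The generated design step would then place these devices according to the zone-wise distribution.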
Abstract:
The present application provides a method and system for device-to-device (D2D) offloading in long term evolution (LTE) networks, comprising processor-implemented steps of: selecting, by an eNodeB (eNB), an offloader for a user device from among a plurality of user devices, based on the location of the user device and other user devices in close proximity, their corresponding load, and channel conditions, upon receiving an offloading request from the user device; exchanging control messages between the user device and the eNB, and between the eNB and the offloader; and scheduling resource blocks (RBs) by the eNB for the user device and the offloader in D2D offloading.
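The selection step can be illustrated with a combined score over the three stated criteria. The scoring formula below is an assumption; the abstract names only the inputs (proximity, load, channel conditions), not how they are weighted.

```python
# Illustrative offloader selection at the eNB: prefer candidates that are
# close to the requester, lightly loaded, and have a good channel (CQI).
# The score formula is a hypothetical weighting, not the disclosed criterion.
import math

def select_offloader(requester, candidates):
    """Return the id of the candidate with the best proximity/load/channel score."""
    def score(c):
        dist = math.dist(requester["pos"], c["pos"])
        return c["cqi"] / ((1 + dist) * (1 + c["load"]))
    return max(candidates, key=score)["id"]

candidates = [
    {"id": "ue1", "pos": (0, 3), "load": 0.9, "cqi": 12},  # close but heavily loaded
    {"id": "ue2", "pos": (1, 1), "load": 0.2, "cqi": 10},  # close and lightly loaded
]
print(select_offloader({"pos": (0, 0)}, candidates))
```

After selection, control messages and RB scheduling would proceed as described in the abstract.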
Abstract:
A system and method for enhancing lifetime and throughput in a distributed wireless network is disclosed herein. The method may include: sensing, by a first machine, different parameters of at least one neighboring machine; updating, by the first machine, at least one parameter of the first machine based on the sensed parameters of the neighboring machine; generating, by the first machine, a signed graph on the basis of the updated parameter, wherein the generated graph comprises at least two nodes representing the updated parameter and at least one edge interconnecting the two nodes; iteratively updating, by the first machine, the at least one parameter at different time-scales until convergence is achieved; and communicating, by the first machine, inter-layer updates in individual layers of a transmission protocol stack of the first machine resulting from the update of the at least one parameter.
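The "iteratively updating at different time-scales until convergence" step can be sketched with a classic two-timescale iteration. The step sizes and convergence test below are illustrative assumptions; the sketch shows only that a fast local parameter and a slowly averaged neighbor estimate settle to a common value.

```python
# Minimal two-timescale update sketch: a fast local parameter tracks a
# slowly averaged neighbor estimate until both stop changing.
# Step sizes (0.5 fast, 0.05 slow) and the tolerance are assumptions.

def two_timescale_update(local, neighbor_avg, fast=0.5, slow=0.05,
                         tol=1e-6, max_iter=10000):
    """Iterate fast/slow updates; return the converged value and iteration count."""
    for i in range(max_iter):
        new_local = local + fast * (neighbor_avg - local)          # fast time-scale
        new_avg = neighbor_avg + slow * (new_local - neighbor_avg) # slow time-scale
        if abs(new_local - local) < tol and abs(new_avg - neighbor_avg) < tol:
            return new_local, i + 1
        local, neighbor_avg = new_local, new_avg
    return local, max_iter

value, iters = two_timescale_update(0.0, 1.0)
print(round(value, 3), iters)
```

Convergence here is geometric (the gap shrinks by a constant factor each iteration), so only a few tens of iterations are needed; in the disclosed system the converged parameter would then drive the inter-layer updates in the protocol stack.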