Abstract:
This disclosure relates generally to provisioning network services in a cloud computing environment, and more particularly to a framework for provisioning network services in a heterogeneous cloud computing environment. In one embodiment, the disclosure includes a network as a service (NaaS) layer under a cloud provisioning platform. The NaaS layer can be interfaced with any cloud provisioning platform and serves the networking needs of the heterogeneous cloud environment. It provides network services such as monitoring, notifications, QoS policies and network topology. For example, the cloud provisioning platform defines a virtual network and attaches a plurality of virtual machines to it. All communications related to the creation, deletion or update of virtual networks, virtual subnets, virtual ports, virtual routers, virtual interfaces and the like are sent to the NaaS layer. On receiving a communication, the NaaS layer takes the necessary steps to provide the network services as per the needs of the request. Apart from provisioning, the NaaS layer also periodically monitors the network elements.
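As a rough illustration of the provisioning flow described above, the following minimal Python sketch models a NaaS layer that accepts create/update/delete requests for virtual network elements from a provisioning platform and exposes a periodic monitoring pass. The class name, resource types and request schema are illustrative assumptions, not the actual interface of the disclosed layer.

```python
# Minimal sketch of a NaaS layer endpoint, assuming a simple in-memory model.
# Resource names and the NaaSLayer API are illustrative only.

class NaaSLayer:
    RESOURCE_TYPES = {"network", "subnet", "port", "router", "interface"}

    def __init__(self):
        # resource_type -> {resource_id: attributes}
        self.resources = {rt: {} for rt in self.RESOURCE_TYPES}

    def handle_request(self, action, resource_type, resource_id, attributes=None):
        """Handle a create/update/delete request forwarded by the provisioning platform."""
        if resource_type not in self.RESOURCE_TYPES:
            raise ValueError(f"unknown resource type: {resource_type}")
        store = self.resources[resource_type]
        if action == "create":
            store[resource_id] = dict(attributes or {})
        elif action == "update":
            store[resource_id].update(attributes or {})
        elif action == "delete":
            store.pop(resource_id, None)
        else:
            raise ValueError(f"unsupported action: {action}")
        return store.get(resource_id)

    def monitor(self):
        """Periodic monitoring pass over all provisioned network elements."""
        return {rt: len(items) for rt, items in self.resources.items()}


if __name__ == "__main__":
    naas = NaaSLayer()
    naas.handle_request("create", "network", "net-1", {"cidr_pool": "10.0.0.0/16"})
    naas.handle_request("create", "subnet", "sub-1", {"network": "net-1", "cidr": "10.0.1.0/24"})
    print(naas.monitor())
```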
Abstract:
The present disclosure relates to a system and method for providing aerial communication services to users trapped in disaster conditions who need immediate attention. It includes a UAV mounted with a central base station that establishes communication services with each item of communication equipment through modeling of an emergency communication network. It analyzes a utility function of criticalities to ensure an efficient resource allocation mechanism in which critical users get preference over non-critical users. A user can be in a critical state either due to low remaining energy of at least one communication equipment of the user or due to the criticality of the user's physical surroundings, while a data rate component ensures throughput for the communication services. An assisted global positioning system (A-GPS) is used to obtain information on the physical criticality of the users distributed over a geographical area.
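A minimal sketch of the criticality-weighted allocation idea is given below, assuming a utility that combines a criticality term (from remaining energy and physical surroundings) with a logarithmic data rate term; the weights, the 20% battery threshold and all field names are illustrative assumptions, not the disclosed utility function.

```python
# Illustrative sketch of criticality-weighted resource allocation, assuming a
# utility of the form u_i = w_c * criticality_i + w_r * log(1 + rate_i).

import math

def criticality(user):
    """Combine energy criticality and physical (location-based) criticality."""
    energy_critical = 1.0 if user["battery"] < 0.2 else 0.0    # assumed 20% threshold
    return max(energy_critical, user["physical_criticality"])  # physical_criticality in [0, 1]

def allocate(users, total_resource_blocks, w_c=10.0, w_r=1.0):
    """Greedy allocation: rank users by utility so critical users are served first."""
    scored = sorted(
        users,
        key=lambda u: w_c * criticality(u) + w_r * math.log(1 + u["achievable_rate"]),
        reverse=True,
    )
    allocation, remaining = {}, total_resource_blocks
    for user in scored:
        share = min(user["demand"], remaining)
        allocation[user["id"]] = share
        remaining -= share
        if remaining == 0:
            break
    return allocation

users = [
    {"id": "u1", "battery": 0.1, "physical_criticality": 0.9, "achievable_rate": 2.0, "demand": 4},
    {"id": "u2", "battery": 0.8, "physical_criticality": 0.1, "achievable_rate": 8.0, "demand": 6},
]
print(allocate(users, total_resource_blocks=6))
```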
Abstract:
System(s) and method(s) for network resource optimization in a service area of a communication network are described. The method includes dividing a service area into a plurality of sub-areas, where each of the plurality of sub-areas is serviced by at least one network resource from a pre-determined number of network resources. The method further includes determining a locally optimal deployment solution comprising at least one local allocation attribute for the at least one network resource in each of the plurality of sub-areas, to meet a plurality of objectives for network resource optimization. The method further includes obtaining a globally optimal deployment solution comprising at least one global allocation attribute for allocation of the pre-determined number of network resources in the service area, based on the locally optimal deployment solution to meet the plurality of objectives.
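The local-then-global flow can be illustrated with the following sketch, assuming each sub-area reports a demand, the local allocation attribute is a simple resource count, and the global step scales the local counts to fit the pre-determined budget; these rules are illustrative stand-ins for the disclosed objectives.

```python
# Sketch of the local-then-global deployment optimization, with an assumed
# per-resource capacity and a proportional scaling rule for the global step.

import math

def local_solution(sub_area_demand, capacity_per_resource=100):
    """Locally optimal allocation attribute: resources needed to cover one sub-area."""
    return max(1, math.ceil(sub_area_demand / capacity_per_resource))

def global_solution(sub_area_demands, total_resources):
    """Combine local solutions; if they exceed the budget, scale down proportionally."""
    local = {area: local_solution(d) for area, d in sub_area_demands.items()}
    needed = sum(local.values())
    if needed <= total_resources:
        return local
    scale = total_resources / needed
    return {area: max(1, round(n * scale)) for area, n in local.items()}

demands = {"A": 250, "B": 90, "C": 480}
print(global_solution(demands, total_resources=7))
```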
Abstract:
Optimization of a control plane in a software defined network includes obtaining peer information of at least one neighbouring network controller by a network controller and determining a traffic profile variation. The method further includes computing a self payoff value indicative of one of optimum utilization, underutilization and overutilization of the network controller. The method further includes initiating a non-zero sum game based network control plane optimization operation based on the self payoff value and the traffic profile of the neighbouring network controllers, which may include one of activating additional network controller(s), transferring control of one or more network devices managed by the network controller(s) to a neighbouring greedy network controller, deactivating the network controller, and transferring control of one or more additional network devices managed by the neighbouring network controller(s) to the greedy network controller.
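The payoff-driven decision step can be sketched as below, assuming the self payoff is the ratio of current load to controller capacity and assuming fixed under/over-utilization thresholds; the game-theoretic negotiation itself is abstracted to a rule-based choice among the listed actions.

```python
# Sketch of the payoff-driven decision step with assumed thresholds of 0.3 and 0.8.

def self_payoff(current_load, capacity):
    return current_load / capacity

def decide_action(payoff, neighbour_payoffs, low=0.3, high=0.8):
    """Choose a control-plane action from the self payoff and neighbours' traffic profiles."""
    if payoff > high:
        # Overutilized: shed devices to the least-loaded (greedy) neighbour,
        # or request activation of an additional controller if all are loaded.
        if neighbour_payoffs and min(neighbour_payoffs.values()) < high:
            target = min(neighbour_payoffs, key=neighbour_payoffs.get)
            return ("transfer_devices_to", target)
        return ("activate_additional_controller", None)
    if payoff < low:
        # Underutilized: absorb devices from an overloaded neighbour or deactivate.
        if neighbour_payoffs and max(neighbour_payoffs.values()) > high:
            source = max(neighbour_payoffs, key=neighbour_payoffs.get)
            return ("absorb_devices_from", source)
        return ("deactivate_self", None)
    return ("no_change", None)

print(decide_action(self_payoff(90, 100), {"ctrl-2": 0.4, "ctrl-3": 0.6}))
```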
Abstract:
A method for uplink scheduling over a communication channel in a communication network including at least one UE and an eNodeB is described. The method comprises determining whether the UE is associated with at least one of Guaranteed Bit Rate (GBR) bearers and non-Guaranteed Bit Rate (non-GBR) bearers. Based on the determining, a demand for resources for establishing an uplink communication is computed for each of the GBR bearers and the non-GBR bearers, wherein the demand is computed based on physical layer characteristics and transport layer characteristics associated with the communication channel. The computed demand is communicated as a request message to the eNodeB. In response to the request message, an allocation of the resources for uplink scheduling is received from the eNodeB.
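A hedged sketch of the per-bearer demand computation follows, assuming the demand combines a transport-layer backlog with a physical-layer rate estimate and that GBR bearers must at least sustain their guaranteed bit rate; the formula and field names are illustrative, not the disclosed computation.

```python
# Sketch of per-bearer demand computation and request aggregation (assumed model).

def bearer_demand(bearer, tti_ms=1.0):
    """Resources (bytes per TTI) requested for one bearer."""
    backlog = bearer["queued_bytes"] + bearer["rtx_bytes"]  # transport layer
    # Physical layer: achievable bytes per TTI from spectral efficiency and bandwidth.
    achievable = bearer["spectral_efficiency"] * bearer["bandwidth_hz"] * tti_ms / 8000.0
    if bearer["type"] == "GBR":
        # GBR bearers must at least sustain the guaranteed bit rate.
        guaranteed = bearer["gbr_bps"] * tti_ms / 8000.0
        return max(guaranteed, min(backlog, achievable))
    # Non-GBR bearers request only what the channel can carry.
    return min(backlog, achievable)

def build_request(bearers):
    """Aggregate per-bearer demands into a single request message for the eNodeB."""
    return {b["id"]: round(bearer_demand(b), 1) for b in bearers}

bearers = [
    {"id": "b1", "type": "GBR", "gbr_bps": 128_000, "queued_bytes": 500,
     "rtx_bytes": 50, "spectral_efficiency": 2.0, "bandwidth_hz": 180_000},
    {"id": "b2", "type": "non-GBR", "queued_bytes": 1200,
     "rtx_bytes": 0, "spectral_efficiency": 1.5, "bandwidth_hz": 180_000},
]
print(build_request(bearers))
```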
Abstract:
System(s) and method(s) for designing a network of one or more entities in an enterprise are disclosed. A design type, along with configurable design parameters, is selected from a list of designs. Requirements associated with the design of the entities are collected from users. The requirements and configurable design parameters are analyzed to obtain analysis results. Based on at least one of a layer-wise requirement and distribution or a zone-wise requirement and distribution, network devices and modules associated with the entities are determined. One or more designs are generated for the network of entities based on the layer-wise or zone-wise requirement and distribution of the network devices and modules associated with the entities in the enterprise.
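As a rough illustration, the sketch below maps collected requirements to a layer-wise bill of devices for a selected design type; the layer names, device types and sizing rules are assumed for illustration and do not reflect the disclosed design catalogue.

```python
# Sketch of layer-wise design generation from collected requirements (assumed rules).

import math

DEVICE_RULES = {
    "access": lambda req: {"access_switch": math.ceil(req["users"] / 48)},
    "distribution": lambda req: {"distribution_switch": max(2, math.ceil(req["users"] / 500))},
    "core": lambda req: {"core_switch": 2, "firewall": 2 if req.get("dmz") else 1},
}

def generate_design(design_type, requirements):
    """Produce a layer-wise bill of network devices for the selected design type."""
    design = {"type": design_type, "layers": {}}
    for layer, rule in DEVICE_RULES.items():
        design["layers"][layer] = rule(requirements)
    return design

print(generate_design("three-tier-campus", {"users": 1200, "dmz": True}))
```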
Abstract:
State of the art networking solutions are tightly coupled and proprietary in nature due to the multiple vendors in the networking domain. Embodiments of the present disclosure provide a method and system for management and orchestration of a heterogeneous network environment using dynamic, robust and network aware microservices. The method enables a platform for automatically and dynamically identifying an appropriate group of microservices in accordance with the network type and service type specified by the user, thus providing a solution that generates network aware microservices for each network in the heterogeneous network landscape. Furthermore, the system manages the identified microservices for each of the networks by managing the life cycle of these microservices. This life cycle management and coordination of the microservices keeps each network in line with the desired goals/business logic, in a reliable and scalable manner, across heterogeneous network environments.
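The identification and life cycle management steps can be illustrated as follows, assuming a small catalogue keyed by (network type, service type) and a simple instantiate/heal/terminate cycle; the catalogue entries and lifecycle states are illustrative assumptions.

```python
# Sketch of network-aware microservice selection and a minimal lifecycle loop.

CATALOGUE = {
    ("5G-core", "monitoring"): ["metrics-collector", "alarm-correlator"],
    ("5G-core", "provisioning"): ["slice-manager", "policy-engine"],
    ("optical", "monitoring"): ["optical-power-monitor", "alarm-correlator"],
}

def identify_microservices(network_type, service_type):
    """Pick the group of microservices matching the specified network and service type."""
    return CATALOGUE.get((network_type, service_type), [])

class LifecycleManager:
    """Tracks each microservice through a simple instantiate -> run -> terminate cycle."""

    def __init__(self):
        self.state = {}

    def instantiate(self, services):
        for s in services:
            self.state[s] = "running"

    def heal(self, service):
        # On failure, restart the microservice to keep the network service in line
        # with the desired business logic.
        self.state[service] = "running"

    def terminate(self, service):
        self.state[service] = "terminated"

manager = LifecycleManager()
group = identify_microservices("5G-core", "monitoring")
manager.instantiate(group)
print(group, manager.state)
```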
Abstract:
This disclosure relates generally to telecommunication networks, and more particularly to a method and system for delay aware uplink scheduling in a communication network. The telecommunication network performs data uplink and data downlink associated with data communication between multiple transmitter-receiver pairs connected to the network, using specific scheduling schemes as part of the data processing. Disclosed are a method and system for performing delay-aware uplink scheduling in the communication network. For an uplink request received from a transmitter, the system estimates the downlink delay at the corresponding receiver side. The system further estimates a processing delay for the received uplink request. Based on the estimated downlink delay and the processing delay, the system determines an uplink delay budget for the transmitter. Further, based on the uplink delay budget and an achievable data rate computed for the transmitter, the system schedules the uplink for the transmitter.
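A minimal sketch of the delay-budget computation and an earliest-deadline-first style scheduling pass follows, assuming a per-transmitter end-to-end delay bound; the bound, delay estimates and rate model are illustrative values, not the disclosed scheduler.

```python
# Sketch of uplink delay budget computation and deadline-ordered scheduling.

def uplink_delay_budget(e2e_bound_ms, dl_delay_ms, proc_delay_ms):
    """Uplink delay budget = end-to-end bound minus estimated downlink and processing delays."""
    return max(0.0, e2e_bound_ms - dl_delay_ms - proc_delay_ms)

def schedule(transmitters):
    """Serve transmitters in order of tightest uplink delay budget."""
    for t in transmitters:
        t["budget_ms"] = uplink_delay_budget(t["e2e_bound_ms"], t["dl_delay_ms"], t["proc_delay_ms"])
        # Minimum data rate needed to drain the backlog within the budget; compare
        # against the achievable rate to decide whether the grant is feasible.
        t["required_rate"] = t["backlog_bits"] / max(t["budget_ms"], 1e-9)
        t["feasible"] = t["required_rate"] <= t["achievable_rate_bits_per_ms"]
    return sorted(transmitters, key=lambda t: t["budget_ms"])

txs = [
    {"id": "ue1", "e2e_bound_ms": 50, "dl_delay_ms": 18, "proc_delay_ms": 7,
     "backlog_bits": 12_000, "achievable_rate_bits_per_ms": 600},
    {"id": "ue2", "e2e_bound_ms": 100, "dl_delay_ms": 20, "proc_delay_ms": 10,
     "backlog_bits": 8_000, "achievable_rate_bits_per_ms": 400},
]
print([(t["id"], t["budget_ms"], t["feasible"]) for t in schedule(txs)])
```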
Abstract:
In order to make use of computational resources available at runtime through the fog networked robotics paradigm, it is critical to estimate the average performance capacities of deployment hardware, which is generally heterogeneous. It is also not feasible to replicate the runtime deployment framework, collected sensor data and realistic offloading conditions for robotic environments. In accordance with an embodiment of the present disclosure, computational algorithms are dynamically profiled on a development testbed and combined with benchmarking techniques to estimate compute times over the deployment hardware. Estimation in accordance with the present disclosure is based on both Gustafson's law and embedded processor benchmarks. Systems and methods of the present disclosure realistically capture parallel processing, cache capacities and differing processing times across hardware.
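The estimation idea can be illustrated with a short sketch combining Gustafson's law with a per-core benchmark ratio between the development and deployment processors; the benchmark scores, serial fraction and normalization step are assumed values for illustration, not the disclosed estimator.

```python
# Sketch of compute-time estimation combining Gustafson's law, S(N) = N - s*(N - 1),
# with an assumed per-core benchmark ratio between dev and deployment hardware.

def gustafson_speedup(n_cores, serial_fraction):
    """Gustafson's law: scaled speedup S(N) = N - s * (N - 1)."""
    return n_cores - serial_fraction * (n_cores - 1)

def estimate_deploy_time(profiled_time_dev_s, serial_fraction,
                         dev_cores, deploy_cores,
                         dev_benchmark_score, deploy_benchmark_score):
    """Estimate compute time on the deployment hardware from a dev-testbed profile."""
    # Normalize the profiled time back to a single dev core, rescale by the
    # per-core benchmark ratio, then apply the parallel speedup on deployment.
    single_core_dev = profiled_time_dev_s * gustafson_speedup(dev_cores, serial_fraction)
    single_core_deploy = single_core_dev * (dev_benchmark_score / deploy_benchmark_score)
    return single_core_deploy / gustafson_speedup(deploy_cores, serial_fraction)

# Example: algorithm profiled at 2.0 s on a 4-core dev machine, 10% serial work,
# deployed on a 2-core embedded board with half the per-core benchmark score.
print(round(estimate_deploy_time(2.0, 0.10, 4, 2, 1000, 500), 2))
```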
Abstract:
Methods and systems for automatic information extraction by performing self-learning crawling and rule-based data mining are provided. The method determines the existence of a crawl policy within the input information and performs at least one of front-end crawling, assisted crawling and recursive crawling. The downloaded data set is pre-processed to remove noisy data and subjected to classification rules and decision tree based data mining to extract meaningful information. Performing these crawling techniques yields smaller, relevant datasets pertaining to a specific domain from the multi-dimensional datasets available in online and offline sources.
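The crawl-then-mine pipeline can be sketched as follows; the policy check, crawl-mode selection and the tiny decision-tree style rules are simplified stand-ins for the disclosed techniques, and all field names are assumptions.

```python
# Sketch of the crawl-then-mine pipeline: policy check, mode selection,
# pre-processing, and rule-based classification.

import re

def has_crawl_policy(input_info):
    """Detect whether the input already carries a crawl policy (e.g. seed URLs, depth)."""
    return "seed_urls" in input_info and "max_depth" in input_info

def choose_crawl_mode(input_info):
    if not has_crawl_policy(input_info):
        return "assisted"            # derive a policy before crawling
    return "recursive" if input_info["max_depth"] > 1 else "front-end"

def preprocess(records):
    """Remove noisy records: markup fragments, empty text, exact duplicates."""
    seen, clean = set(), []
    for text in records:
        text = re.sub(r"<[^>]+>", " ", text).strip()
        if text and text not in seen:
            seen.add(text)
            clean.append(text)
    return clean

def classify(record):
    """Tiny decision-tree-style rule set mapping a record to a domain label."""
    if "price" in record.lower():
        return "commerce"
    if "abstract" in record.lower():
        return "research"
    return "other"

raw = ["<p>Abstract: a method for crawling</p>", "", "Price: 20 USD", "Price: 20 USD"]
print(choose_crawl_mode({"seed_urls": ["https://example.com"], "max_depth": 2}))
print([(r, classify(r)) for r in preprocess(raw)])
```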