Abstract:
Methods and systems for performance inference include inferring an internal application status from a unified call stack trace that combines user and kernel information, by inferring user function instances. A calling context encoding is generated that includes information regarding function calling paths. Application performance is analyzed based on the encoded calling contexts. The analysis includes performing a top-down latency breakdown and ranking calling contexts according to how costly each function calling path is.
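The encode-and-rank step can be sketched in a few lines. This is an illustrative simplification, not the patent's actual encoding: `encode_context` stands in for a compact calling-context encoding, and the sample stacks and latencies are invented.

```python
from collections import defaultdict

def encode_context(frames):
    """Encode a unified (user + kernel) call stack as a single path key.
    A real encoder would assign compact integer IDs per calling path;
    a joined string is enough for this illustration."""
    return ";".join(frames)

def rank_contexts(samples):
    """Aggregate per-sample latency by encoded calling context and rank
    the costliest function calling paths first; a top-down latency
    breakdown would further split each path's cost among its callees."""
    cost = defaultdict(float)
    for frames, latency in samples:
        cost[encode_context(frames)] += latency
    return sorted(cost.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical latency samples: (call stack, observed latency)
samples = [
    (["main", "handle_request", "read"], 4.0),   # user -> kernel path
    (["main", "handle_request", "parse"], 1.5),
    (["main", "handle_request", "read"], 2.0),
]
ranking = rank_contexts(samples)
```

The ranking places the read path first (total cost 6.0), matching the idea of ordering calling contexts by how costly each path is.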
Abstract:
A method of transmitting a data signal is provided that includes integrating a trellis encoder/decoder into a transmitter for 16 quadrature amplitude modulation (QAM) to provide first and second modes of modulation, wherein each mode has the same baud rate. Data is transmitted from the transmitter in the first mode of modulation with a plurality of first subcarriers with DP16QAM modulation to provide a substantially 400 G data transmission rate, or data is transmitted from the transmitter in the second mode of modulation with a plurality of second subcarriers with trellis coded modulation to provide a substantially 1 T data transmission rate.
Abstract:
We propose an efficient cloud service embedding procedure based on a disjoint pair procedure, which first maps working and backup virtual nodes onto physical nodes while balancing computational resources of different types, and then maps working and backup virtual links onto physical routes while balancing network spectral resources using the disjoint pair procedure.
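The two mapping stages can be illustrated with a toy heuristic. This is a stand-in, not the proposed procedure: node placement is reduced to a least-loaded choice over a single resource type, and route disjointness to a shared-link check; all names and capacities are invented.

```python
def embed_node_pair(demand, capacity):
    """Place working and backup copies of a virtual node on the two
    physical nodes with the most remaining capacity (a load-balancing
    stand-in for the paper's node-mapping stage)."""
    ranked = sorted(capacity, key=capacity.get, reverse=True)
    working, backup = ranked[0], ranked[1]
    for node in (working, backup):
        capacity[node] -= demand   # reserve resources on both nodes
    return working, backup

def is_disjoint(route_a, route_b):
    """Working and backup virtual links must traverse physical routes
    that share no physical link (the disjoint pair condition)."""
    return not (set(route_a) & set(route_b))

# Hypothetical physical nodes with remaining compute capacity
capacity = {"A": 10, "B": 8, "C": 6}
working, backup = embed_node_pair(3, capacity)
```

After embedding, the two least-loaded nodes hold the working and backup copies, and `is_disjoint` can screen candidate route pairs expressed as lists of physical links.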
Abstract:
A method and system are provided for heterogeneous log analysis. The method includes performing hierarchical log clustering on heterogeneous logs to generate a log cluster hierarchy for the heterogeneous logs. The method further includes performing, by a log pattern recognizer device having a processor, log pattern recognition on the log cluster hierarchy to generate log pattern representations. The method also includes performing log field analysis on the log pattern representations to generate log field statistics. The method additionally includes performing log indexing on the log pattern representations to generate log indexes.
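Two of the stages, pattern recognition and field analysis, can be sketched on a single log cluster. This is a minimal illustration, not the patented method: logs in a cluster are merged into one pattern by wildcarding variable token positions, and field statistics are counted at a wildcarded position. The sample logs are invented.

```python
from collections import Counter

def log_pattern(logs):
    """Merge tokenized logs from one cluster into a single pattern
    representation, wildcarding positions whose tokens vary."""
    columns = zip(*(line.split() for line in logs))
    return " ".join(col[0] if len(set(col)) == 1 else "*" for col in columns)

def field_stats(logs, index):
    """Log field analysis for one wildcarded position: count the
    distinct values that appear there."""
    return Counter(line.split()[index] for line in logs)

# Hypothetical heterogeneous logs clustered together
logs = ["login user alice ok", "login user bob ok"]
pattern = log_pattern(logs)    # variable field becomes "*"
stats = field_stats(logs, 2)   # statistics for the wildcarded field
```

The resulting pattern and field statistics are the kind of representations a later indexing stage would consume.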
Abstract:
Systems and methods are provided for data transport using secure reconfigurable branching units, including receiving signals from a first trunk terminal and a second trunk terminal by branching units. Broadcasting is prevented for secure information delivery by dividing, within the branching units, the signals from the first trunk terminal and the second trunk terminal into three or more sections. Signals may be received from a branch terminal by one or more branching units using a single branch fiber pair, and the signals from the branch terminal may be divided into three or more groups of optical channels, wherein at least one of the channels includes dummy light. The divided signals from the first trunk terminal, the second trunk terminal, and dummy light from the branch terminal may be merged, and the merged signal sent to the branch terminal.
Abstract:
At a coarse time-scale, at the start of each frame, the set of TPs to be made active and the users to associate with the active TPs are determined by solving an optimization problem. The inputs to the optimization problem are averaged, slowly varying metrics that remain relevant for a period longer than the backhaul latency. At the fine time-scale, in each slot, each active TP independently schedules over the set of users associated with it, without any coordination with the other active TPs, based on fast-changing information such as instantaneous rate or SINR estimates.
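The two time-scales can be sketched as two independent functions. This is a toy stand-in, not the actual optimization: the coarse stage is reduced to a best-average-metric association, and the per-slot stage to a best-instantaneous-rate pick within each TP. All user/TP names and rates are invented.

```python
def coarse_associate(avg_rate):
    """Coarse time-scale (per frame): associate each user with the active
    TP giving the best slowly varying average metric -- a greedy stand-in
    for solving the association optimization problem."""
    return {user: max(rates, key=rates.get) for user, rates in avg_rate.items()}

def fine_schedule(assoc, inst_rate):
    """Fine time-scale (per slot): each active TP independently picks,
    among its own associated users, the one with the best instantaneous
    rate -- no coordination between TPs."""
    chosen = {}
    for user, tp in assoc.items():
        if tp not in chosen or inst_rate[user] > inst_rate[chosen[tp]]:
            chosen[tp] = user
    return chosen

# Hypothetical averaged metrics (per user, per TP) and one slot's rates
avg_rate = {"u1": {"tp1": 5.0, "tp2": 2.0}, "u2": {"tp1": 4.0, "tp2": 1.0}}
assoc = coarse_associate(avg_rate)                 # both users join tp1
slot = fine_schedule(assoc, {"u1": 3.0, "u2": 7.0})
```

Note that the slot decision can disagree with the averages: u2 wins the slot on instantaneous rate even though u1 has the better average.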
Abstract:
Methods and systems for variable rate control include determining a new communications rate in response to measured data traffic patterns. A receive change message is transmitted to a receiver that triggers the receiver to wait for an end of transmission (EoT) message and to set a new communications rate. A transmit change message is transmitted to a transmitter that triggers the transmitter to send the EoT message to the receiver, to set the new communications rate, and to send a start of transmission (SoT) message to the receiver before resuming data communications.
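The receiver-arming and EoT/SoT bracketing can be sketched as a tiny state machine. The class and method names are illustrative, and the direct method call stands in for messages on the link; the ordering, arm the receiver first, then let the transmitter send EoT, switch rates, and resume with SoT, follows the abstract.

```python
class Receiver:
    def __init__(self, rate):
        self.rate = rate
        self.armed = False
    def receive_change(self, new_rate):
        """Receive-change message: arm the receiver to wait for EoT,
        remembering the rate to switch to."""
        self.armed, self.pending = True, new_rate
    def on_eot(self):
        """EoT at the old rate: now it is safe to switch."""
        if self.armed:
            self.rate, self.armed = self.pending, False

class Transmitter:
    def __init__(self, rate, rx):
        self.rate, self.rx = rate, rx
    def transmit_change(self, new_rate):
        """Transmit-change message: send EoT, switch the local rate,
        then send SoT before resuming data communications."""
        self.rx.on_eot()       # EoT reaches the armed receiver
        self.rate = new_rate   # transmitter switches rate
        return "SoT"           # start-of-transmission at the new rate

# Hypothetical rate change from 10 to 40 (units arbitrary)
rx = Receiver(10)
tx = Transmitter(10, rx)
rx.receive_change(40)          # controller arms the receiver first
msg = tx.transmit_change(40)   # then triggers the transmitter
```

Arming the receiver before the transmitter switches is what keeps the two ends from ever listening and sending at different rates.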
Abstract:
Disclosed is a general learning framework for computer implementation that induces sparsity on the undirected graphical model imposed on the vector of latent factors. A latent factor model, SLFA, is disclosed as a matrix factorization problem with a special regularization term that encourages collaborative reconstruction. Advantageously, the model may simultaneously learn the lower-dimensional representation for data and explicitly model the pairwise relationships between latent factors. An on-line learning algorithm is disclosed to make the model amenable to large-scale learning problems. Experimental results on two synthetic and two real-world data sets demonstrate that the pairwise relationships and latent factors learned by the model provide a more structured way of exploring high-dimensional data, and the learned representations achieve state-of-the-art classification performance.
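The shape of the objective, reconstruction error plus a sparsity-inducing penalty on pairwise factor relations, can be written out concretely. This is a simplified stand-in, not SLFA's actual regularizer: the penalty here is a plain L1 term on the off-diagonal entries of a pairwise relation matrix `theta`, and all matrices are invented toy data.

```python
def matmul(A, B):
    """Plain-Python matrix product (no external dependencies)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def frob_sq(A, B):
    """Squared Frobenius norm of the elementwise difference A - B."""
    return sum((x - y) ** 2 for ra, rb in zip(A, B) for x, y in zip(ra, rb))

def slfa_objective(X, B, S, theta, lam):
    """Matrix-factorization objective ||X - B S||_F^2 plus an L1 penalty
    on off-diagonal pairwise factor relations theta -- the L1 term is a
    simplified proxy for inducing sparsity on the undirected graphical
    model over latent factors."""
    recon = frob_sq(X, matmul(B, S))
    k = len(theta)
    sparsity = sum(abs(theta[i][j]) for i in range(k)
                   for j in range(k) if i != j)
    return recon + lam * sparsity

# Toy data: X is exactly reconstructed, so only the penalty contributes
X = [[2, 0], [0, 2]]
B = [[1, 0], [0, 1]]
S = [[2, 0], [0, 2]]
theta = [[0, 0.5], [0.5, 0]]
val = slfa_objective(X, B, S, theta, lam=1.0)
```

With a perfect reconstruction, the objective reduces to the penalty alone, which is what drives the off-diagonal relations toward zero during learning.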
Abstract:
A device used in a network is disclosed. The device includes a network monitor to monitor a network state and to collect statistics for flows going through the network, a flow aggregation unit to aggregate flows into clusters and identify flows that can cause a network problem, and an adaptive control unit to adaptively regulate the identified flows according to network feedback. Other methods and systems are also disclosed.
Abstract:
Systems and methods for network management include adaptively installing one or more monitoring rules in one or more network devices on a network using an intelligent network middleware, detecting application traffic on the network transparently using an application demand monitor, and predicting future network demands of the network by analyzing historical and current demands. The one or more monitoring rules are updated once counters are collected, and network paths are determined and optimized to meet network demands and maximize utilization and application performance with minimal congestion on the network.
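The counter-collection and demand-prediction loop can be sketched with a toy predictor. This is an illustrative stand-in, not the middleware's actual predictor: a moving average over the most recent counter samples plays the role of "analyzing historical and current demands", and all names and numbers are invented.

```python
from collections import deque

class DemandMonitor:
    """Tracks byte counters collected from monitoring rules and predicts
    near-term demand with a simple moving average over a sliding window
    of historical samples (a stand-in for the demand predictor)."""
    def __init__(self, window=3):
        self.history = deque(maxlen=window)  # only recent demand survives
    def update(self, counter_bytes):
        """Fold in a newly collected counter sample."""
        self.history.append(counter_bytes)
    def predict(self):
        """Predicted demand = average of the windowed history."""
        return sum(self.history) / len(self.history)

# Hypothetical collected counters (bytes per interval)
mon = DemandMonitor(window=3)
for sample in (100, 200, 300):
    mon.update(sample)
forecast = mon.predict()
```

The forecast from such a predictor is what the path-optimization step would consume when placing traffic to avoid congestion; the bounded window keeps stale history from dominating current demand.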