Abstract:
A method for testing a cloud streaming server, and an apparatus and a system therefor, are disclosed. Test result videos are created by receiving, from cloud streaming servers, test results corresponding to the key inputs of a preset test script; masked videos are created by masking the test result videos; and it is determined whether at least one of the cloud streaming servers has a failure by mutually comparing test result images created by capturing the masked videos.
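A rough sketch of the comparison step described above, with every name and data shape invented for illustration: each server's masked capture is reduced to a fingerprint, and a server whose fingerprint disagrees with the majority is flagged as having a failure.

```python
# Illustrative sketch (all names and structures are assumptions): frames
# are small 2-D pixel grids, masking zeroes out volatile regions (clocks,
# spinners), and mutual comparison flags servers that deviate from the
# majority fingerprint.

from collections import Counter

def mask_frame(frame, mask_regions):
    """Zero out volatile pixel positions so only stable content is compared."""
    masked = [row[:] for row in frame]
    for (r, c) in mask_regions:
        masked[r][c] = 0
    return masked

def find_failed_servers(captures, mask_regions):
    """Mutually compare masked captures; servers whose fingerprint differs
    from the majority are reported as failures."""
    fingerprints = {
        server: tuple(map(tuple, mask_frame(frame, mask_regions)))
        for server, frame in captures.items()
    }
    majority, _ = Counter(fingerprints.values()).most_common(1)[0]
    return sorted(s for s, fp in fingerprints.items() if fp != majority)

captures = {
    "server-a": [[1, 2], [3, 4]],
    "server-b": [[1, 2], [3, 4]],
    "server-c": [[1, 2], [3, 9]],  # diverges outside the masked region
}
failed = find_failed_servers(captures, mask_regions=[(0, 0)])
```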
Abstract:
Non-disruptive integrated testing of network infrastructure in which one or more processors of a first network interface device (NID) perform tests on a network cable connected to a first NID of a host system, and connected to a second NID. The tests verify connectivity of the cable, a bandwidth capacity baseline, and a maximum bandwidth of the network cable. A self-test determines the host operating system, host status, and operational status of the host NID, and responsive to changed conditions, NID settings are reverted to a pre-validated condition state and confirmation of the reverting is sent to the host. Network activity received by the host NID during scheduled network cable tests is suspended. Upon completion of the tests and storing of the test results in NID memory, network activity resumes. Results of the tests are transferred from memory of the first NID to persistent storage of the host system.
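The test sequence can be modeled as a small state machine; the sketch below is purely illustrative (function names, data structures, and the three test names are assumptions), showing the suspend-test-store-resume flow and the revert-on-changed-conditions path.

```python
# Hypothetical model of the scheduled cable test: network activity is
# suspended, connectivity/baseline/max-bandwidth checks run in order,
# results land in NID memory, then activity resumes. If host conditions
# change mid-test, NID settings revert to the pre-validated state and the
# host is notified.

def run_cable_tests(nid, host):
    saved_settings = dict(nid["settings"])   # pre-validated condition state
    host["activity_suspended"] = True
    results = {}
    for test in ("connectivity", "baseline_bandwidth", "max_bandwidth"):
        if host["conditions_changed"]:
            nid["settings"] = saved_settings          # revert settings
            host["notifications"].append("settings reverted")
            break
        results[test] = "pass"
    nid["memory"].append(results)            # store results in NID memory
    host["activity_suspended"] = False       # resume network activity
    return results

nid = {"settings": {"speed": "10G"}, "memory": []}
host = {"conditions_changed": False, "activity_suspended": False,
        "notifications": []}
results = run_cable_tests(nid, host)
```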
Abstract:
Raw machine data are captured and may be organized as events. Entity definitions representing machine entities that perform a service identify the machine data pertaining to respective entities. KPI search queries each define a KPI. Each KPI search query derives one or more values for the KPI from machine data identified in the entity definitions. The derivation may be performed on a per-entity basis and on the aggregate. The derived values may then be translated into a state value domain using per-entity thresholds, aggregate thresholds, or a combination.
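A minimal sketch of the threshold translation described above, assuming invented state names and threshold shapes: a KPI value derived for an entity is mapped into a state-value domain using that entity's per-entity thresholds when defined, falling back to aggregate thresholds otherwise.

```python
# Illustrative only: thresholds are ascending (upper_bound, state) pairs;
# a value maps to the state of the first bound it falls below.

def to_state(value, thresholds):
    for upper, state in thresholds:
        if value < upper:
            return state
    return thresholds[-1][1]

# Aggregate thresholds apply to any entity without its own definition.
AGGREGATE = [(100, "normal"), (200, "warning"), (float("inf"), "critical")]
PER_ENTITY = {"web-01": [(50, "normal"), (float("inf"), "critical")]}

def kpi_state(entity, value):
    """Translate a derived KPI value into the state-value domain using
    per-entity thresholds, or aggregate thresholds as a fallback."""
    return to_state(value, PER_ENTITY.get(entity, AGGREGATE))
```

For example, the same KPI value of 60 is `"critical"` for `web-01` (strict per-entity thresholds) but `"normal"` under the aggregate thresholds.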
Abstract:
Methods, systems, and computer readable media for generating test packets in a network device using value list caching are disclosed. In one method, value lists are stored in dynamic random access memory (DRAM) of a network test device. Each value list includes values for user defined fields (UDFs) to be inserted in test packets. Portions of each value list are read into per-port caches. The UDF values are drained from the per-port caches using per-port stream engines to generate and send streams of test packets to one or more devices under test. The per-port caches are refilled with portions of the value lists from the DRAM at a rate sufficient to maintain the sending of the streams of test packets to the one or more devices under test.
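The drain-and-refill scheme is essentially a producer/consumer buffer; the sketch below is a simplified single-threaded model with invented names, where a small per-port cache is drained to build packets and topped up from the DRAM-resident value list so generation never stalls.

```python
# Illustrative model of per-port value list caching: the full UDF value
# list lives in "DRAM" (a deque); a bounded cache is drained by the
# port's stream engine and refilled whenever it runs low.

from collections import deque

class PortCache:
    def __init__(self, dram_values, cache_size):
        self.dram = deque(dram_values)   # value list held in DRAM
        self.cache = deque()             # small per-port cache
        self.cache_size = cache_size
        self.refill()

    def refill(self):
        while self.dram and len(self.cache) < self.cache_size:
            self.cache.append(self.dram.popleft())

    def next_udf(self):
        if not self.cache:
            self.refill()
        return self.cache.popleft() if self.cache else None

def generate_packets(cache):
    """Stand-in for a per-port stream engine: drain UDF values from the
    cache into packets, refilling as it goes."""
    packets = []
    while (udf := cache.next_udf()) is not None:
        packets.append({"udf": udf})   # test packet with UDF value inserted
        cache.refill()                 # keep the cache topped up from DRAM
    return packets

cache = PortCache(dram_values=range(8), cache_size=3)
packets = generate_packets(cache)
```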
Abstract:
The present invention relates to systems and methods for network labeling in order to enhance real time data transfers. A network for a real time data transfer is identified and compared against predictive models of network performance to determine whether the network is suitable for the data transfer. If so, the real time data transfer may be completed as expected. However, if the network is predicted to be unsuitable for transmission, an alternate means of connection may be suggested. The alternate suggestion may include delaying the data transfer until the network is expected to be in better condition, connecting to another access point in the network, or switching to another network entirely. During the data transfer, the quality of the network is monitored in order to update the predictive models for the network's quality. Identifiers for the network may be utilized to keep track of the networks. Network signal strength, signal pollution and time may also be tracked in order to identify patterns in the network's performance.
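The decision step could be sketched as follows; the model (a per-network table of predicted quality by hour), the quality threshold, and the suggestion strings are all assumptions made for illustration.

```python
# Illustrative planner: look up the identified network's predicted
# quality for the current hour; proceed if suitable, otherwise suggest
# delaying until a better-predicted hour or switching networks.

def plan_transfer(network_id, hour, models, min_quality=0.6):
    history = models.get(network_id, {})       # keyed by network identifier
    predicted = history.get(hour, 0.0)
    if predicted >= min_quality:
        return "proceed"
    better_hours = [h for h, q in history.items() if q >= min_quality]
    if better_hours:
        return f"delay until hour {min(better_hours)}"
    return "switch network"

# Toy predictive model built from monitored quality observations.
models = {"cafe-wifi": {9: 0.9, 12: 0.3, 18: 0.8}}
```

A transfer at hour 9 proceeds; one at hour 12 is deferred to a better-predicted hour; an unknown network triggers the switch suggestion.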
Abstract:
Described herein are systems and methods of identifying and classifying performance bottlenecks for web applications. Such systems and methods use classification and analysis of performance testing data and data instrumentation via arithmetic and/or machine learning. Data is integrated from different sources, including system data, historical sources, and real time sources. Performance variations are analyzed as load changes, along with the impact of these variations on different sectors of the application stack. Bottlenecks are identified and classified based on the sector in the software stack, and recommendations for optimization of an Application under Test are presented to address the bottlenecks.
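One toy version of the classification idea, with sector names, metrics, and recommendations all invented: per-sector latencies at low and high load are compared, and the sector whose latency degrades most as load rises is reported as the bottleneck with a canned recommendation.

```python
# Illustrative sketch: relative latency growth per stack sector under
# increasing load identifies the bottleneck sector.

RECOMMENDATIONS = {
    "database": "add indexes or a read replica",
    "app_server": "profile hot code paths or scale out",
    "network": "increase bandwidth or enable compression",
}

def classify_bottleneck(low_load, high_load):
    """low_load / high_load: {sector: latency_ms}. Returns the sector
    with the largest relative latency growth plus a recommendation."""
    growth = {s: high_load[s] / low_load[s] for s in low_load}
    sector = max(growth, key=growth.get)
    return sector, RECOMMENDATIONS.get(sector, "investigate further")

low = {"database": 10, "app_server": 20, "network": 5}
high = {"database": 80, "app_server": 30, "network": 6}
sector, advice = classify_bottleneck(low, high)
```

Here the database's latency grows 8x while the others stay under 2x, so it is classified as the bottleneck.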
Abstract:
A method of testing and monitoring a real-time streaming media recognition service provider is performed at a computer system. The computer system obtains a streaming media signal source, selects a testing sample from the streaming media signal source, records characteristics of the testing sample, and obtains an expected output according to the characteristics of the testing sample. Next, the computer system converts the testing sample into a digital streaming format preset by the service provider and initiates a media recognition request according to the testing sample in the digital streaming format to the service provider. After receiving a media recognition result of the testing sample returned by the service provider according to the media recognition request, the computer system compares the media recognition result with the expected output and indicates whether the service provider is normal in accordance with the comparison result.
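The monitoring loop described above can be sketched as below; the provider interface and the conversion step are stand-ins, not any real service API.

```python
# Illustrative check: convert a testing sample into the provider's
# preset digital streaming format, send a recognition request, and
# compare the result with the expected output derived from the sample's
# recorded characteristics.

def check_provider(sample, expected, provider, convert):
    digital = convert(sample)        # provider's preset streaming format
    result = provider(digital)       # media recognition request/response
    return {"result": result,
            "expected": expected,
            "normal": result == expected}

# Stand-in implementations for the sketch only:
convert = lambda s: s.encode("utf-8")
provider = lambda stream: "song-123" if b"chorus" in stream else "unknown"

report = check_provider("intro chorus outro", "song-123", provider, convert)
```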
Abstract:
A method receives start commands for starting end-to-end testing of a live multi-tenant system that hosts shared services for multiple tenants; executes multiple test scripts for generating controller commands in response to the start commands, the executing of the test scripts generating respective synthetic transaction inputs; provides the synthetic transaction inputs to the live multi-tenant system, the live multi-tenant system configured to use the synthetic transaction inputs to perform respective multiple synthetic transactions involving multiple destinations in the live multi-tenant system, the live multi-tenant system configured to generate respective multiple test results in response to the multiple synthetic transactions; receives and evaluates the test results generated by the live multi-tenant system to test end-to-end performance conditions of the multi-tenant system; and generates one or more alerts upon recognizing an alert trigger condition based upon the evaluating of the test results.
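The end-to-end flow can be sketched as follows; the script shapes, the latency budget, and the alert condition are assumptions for illustration, not the claimed method's specifics.

```python
# Illustrative sketch: test scripts yield synthetic transaction inputs,
# the live system executes the transactions and returns results, and an
# alert fires when a result trips the trigger condition (here, a
# latency budget).

def run_e2e_tests(scripts, system, latency_budget_ms=500):
    alerts, results = [], []
    for script in scripts:
        tx_input = script()            # synthetic transaction input
        result = system(tx_input)      # performed by the live system
        results.append(result)
        if result["latency_ms"] > latency_budget_ms:   # alert trigger
            alerts.append(f"slow transaction: {result['name']}")
    return results, alerts

# Stand-ins for test scripts and the live multi-tenant system:
scripts = [lambda: {"name": "checkout"}, lambda: {"name": "search"}]
system = lambda tx: {"name": tx["name"],
                     "latency_ms": 900 if tx["name"] == "checkout" else 120}
results, alerts = run_e2e_tests(scripts, system)
```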
Abstract:
A system comprising a broadcast facility, one or more players, and an analytic service center. The broadcast facility may be configured to provide a plurality of streams. The one or more players may be configured to receive at least one of the plurality of streams and provide feedback on a user experience. The analytic service center may be configured to receive the feedback from the one or more players.
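One way to model the feedback path, purely as an illustration (metric names and the aggregation are invented): players receiving streams report user-experience metrics, and the analytic service center collects and aggregates them per stream.

```python
# Illustrative aggregation at the analytic service center: feedback
# reports from players are grouped by stream and averaged.

from collections import defaultdict

class AnalyticServiceCenter:
    def __init__(self):
        self.feedback = defaultdict(list)

    def receive(self, stream_id, report):
        """Collect a user-experience feedback report from a player."""
        self.feedback[stream_id].append(report)

    def average_rebuffer(self, stream_id):
        reports = self.feedback[stream_id]
        return sum(r["rebuffer_s"] for r in reports) / len(reports)

center = AnalyticServiceCenter()
center.receive("stream-1", {"player": "p1", "rebuffer_s": 2.0})
center.receive("stream-1", {"player": "p2", "rebuffer_s": 4.0})
avg = center.average_rebuffer("stream-1")
```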
Abstract:
The various embodiments include methods, computers and communication systems for distributing telecommunications functionality across multiple heterogeneous domains within a telecommunications system, which may include determining policy-charging capabilities of a first telecommunications domain, determining policy-charging capabilities of a second telecommunications domain, determining policy-charging requirements required for a communication, partitioning the policy-charging requirements into a first group and a second group based on the determined policy-charging capabilities of the first and second telecommunications domains, sending a first message including the first group of policy-charging requirements to a public interface of the first domain, and sending a second message including the second group of policy-charging requirements to a public interface of the second domain.
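The partitioning step can be sketched as below; the capability names and the first-match routing rule are assumptions, showing only how requirements split into the two groups sent to the domains' public interfaces.

```python
# Illustrative partitioning: each policy-charging requirement is routed
# to the first telecommunications domain whose determined capabilities
# cover it, yielding the first and second groups.

def partition_requirements(requirements, caps_domain1, caps_domain2):
    group1, group2 = [], []
    for req in requirements:
        if req in caps_domain1:
            group1.append(req)      # sent to the first domain's interface
        elif req in caps_domain2:
            group2.append(req)      # sent to the second domain's interface
    return group1, group2

caps1 = {"qos", "metering"}             # capabilities of domain 1
caps2 = {"quota", "roaming_charge"}     # capabilities of domain 2
g1, g2 = partition_requirements(["qos", "quota", "metering"], caps1, caps2)
```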