Abstract:
Systems and techniques are provided that implement application programming interfaces (APIs) that can be executed synchronously or asynchronously depending on the expected response time of the API, the status of the API and/or systems that implement the API, the identity and/or type of user making a request via the API, historical requirements or operation of the API, and/or other factors.
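As a rough illustration of the synchronous/asynchronous selection described above, the Python sketch below chooses a dispatch mode from a hypothetical estimate_response_time() predictor and a made-up SYNC_THRESHOLD_SECONDS cutoff; the request shape, handler interface, and in-memory job store are assumptions for illustration, not part of the disclosure.

```python
import threading
import uuid

# Hypothetical latency threshold (seconds) below which a request is served synchronously.
SYNC_THRESHOLD_SECONDS = 2.0

_jobs = {}  # job_id -> result, filled in by background threads

def estimate_response_time(request):
    """Placeholder predictor; a real system might consider historical timings,
    current system status, or the requesting user's identity or type."""
    return request.get("expected_cost", 0.5)

def handle_request(request, handler):
    """Dispatch an API call synchronously or asynchronously based on the estimate."""
    if estimate_response_time(request) <= SYNC_THRESHOLD_SECONDS:
        # Fast path: execute inline and return the result directly.
        return {"mode": "sync", "result": handler(request)}
    # Slow path: run in the background and hand back a job id the caller can poll.
    job_id = str(uuid.uuid4())
    def run():
        _jobs[job_id] = handler(request)
    threading.Thread(target=run, daemon=True).start()
    return {"mode": "async", "job_id": job_id}
```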
Abstract:
Systems and methods for generating and evaluating driving scenarios with varying difficulty levels are provided. The disclosed systems and methods may be used to develop a suite of regression tests that track the progress of an autonomous driving stack. A robustness trace of a temporal logic formula may be computed from an always-eventually fragment using a computation graph. The robustness trace may be approximated by a smoothly differentiable computation graph, which can be implemented in existing machine learning programming frameworks. The systems and methods provided herein may be useful in automatic test case generation for autonomous or semi-autonomous vehicles.
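For flavor, the sketch below computes a smooth approximation of the robustness of an always-eventually (G F φ) formula over a one-dimensional robustness signal using the standard log-sum-exp softening of max and min; the window size, temperature, and toy signal are illustrative, and the patent's exact computation-graph construction may differ.

```python
import numpy as np

def soft_max(values, temperature=10.0):
    """Smooth, differentiable stand-in for max (log-sum-exp); a larger
    temperature tightens the approximation."""
    values = np.asarray(values, dtype=float)
    return np.log(np.sum(np.exp(temperature * values))) / temperature

def soft_min(values, temperature=10.0):
    """Smooth stand-in for min, obtained by negating the soft max."""
    return -soft_max(-np.asarray(values, dtype=float), temperature)

def robustness_always_eventually(signal, window):
    """Approximate robustness of G(F phi) over a 1-D robustness signal:
    'eventually' takes a (soft) max over the next `window` samples at each
    time step, and 'always' takes a (soft) min over those maxima."""
    eventually = [soft_max(signal[t:t + window])
                  for t in range(len(signal) - window + 1)]
    return soft_min(eventually)

# Toy robustness signal, e.g. distance-to-obstacle minus a safety margin.
trace = [0.4, 0.1, -0.2, 0.3, 0.5, 0.0, 0.2]
print(robustness_always_eventually(trace, window=3))
```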
Abstract:
A trace of a bounded liveness failure of a system component is received, by one or more processors, along with fairness constraints and liveness assertion conditions. One or more processors generate randomized values for the trace's unassigned input values and register values, and simulate traversal of each of a sequence of states of the trace. One or more processors determine whether traversing the sequence of states results in a repetition of a state. Responsive to determining that the traversal does result in a repetition of a state, that the fairness constraints are asserted within the repetition, and that the liveness assertion conditions are maintained throughout the repetition, a concrete counterexample of a liveness property of the system component is reported.
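A minimal sketch of the loop-detection step might look like the following, assuming hashable states (e.g. tuples of register values) and caller-supplied step(), liveness_holds(), and fairness-constraint callables; all of these names are hypothetical.

```python
import random

def check_liveness_counterexample(initial_state, input_trace, step,
                                  fairness_constraints, liveness_holds):
    """Walk a candidate trace, filling unassigned inputs with random bits,
    and report a concrete counterexample if a state repeats (a lasso loop)
    with every fairness constraint satisfied somewhere inside the loop while
    the liveness assertion conditions keep holding."""
    state = initial_state
    seen = {state: 0}   # state -> index in path
    path = [state]
    for inputs in input_trace:
        # Randomize any inputs the trace leaves unassigned (None).
        concrete = {k: (v if v is not None else random.randint(0, 1))
                    for k, v in inputs.items()}
        state = step(state, concrete)
        if not liveness_holds(state):
            return None  # liveness condition broken; not the failure shape sought
        if state in seen:
            loop = path[seen[state]:]
            if all(any(fc(s) for s in loop) for fc in fairness_constraints):
                return {"prefix": path[:seen[state]], "loop": loop}
            return None
        seen[state] = len(path)
        path.append(state)
    return None  # no repetition found within the bounded trace
```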
Abstract:
Exemplary methods, apparatuses, and systems include a host computer selecting a first workload of a plurality of workloads running on the host computer to be subjected to an input/output (I/O) trace. The host computer determines whether to generate the I/O trace for the first workload for a first length of time or for a second length of time. The first length of time is shorter than the second length of time. The determination is based upon runtime history for the first workload, I/O trace history for the first workload, and/or workload type of the first workload. The host computer generates the I/O trace of the first workload for the selected length of time.
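A toy version of the trace-length decision could read as follows; the threshold values and the workload fields (expected_runtime_seconds, has_recent_long_trace, type) are invented for illustration.

```python
SHORT_TRACE_SECONDS = 60    # hypothetical first (shorter) trace length
LONG_TRACE_SECONDS = 600    # hypothetical second (longer) trace length

def choose_trace_length(workload):
    """Pick the shorter or longer I/O trace window from per-workload hints."""
    # A short-lived workload cannot sustain a long trace.
    if workload.get("expected_runtime_seconds", 0) < LONG_TRACE_SECONDS:
        return SHORT_TRACE_SECONDS
    # If a recent long trace already exists, a short refresh is enough.
    if workload.get("has_recent_long_trace", False):
        return SHORT_TRACE_SECONDS
    # Workload types with stable I/O patterns need less observation.
    if workload.get("type") in {"log-writer", "backup"}:
        return SHORT_TRACE_SECONDS
    return LONG_TRACE_SECONDS

# Example: a long-running database workload with no recent long trace.
length = choose_trace_length({"expected_runtime_seconds": 86400, "type": "database"})
```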
Abstract:
A method for evaluating the performance of an application when migrated from a first environment, in which the application is currently executing, to a second, different environment includes generating a virtual application that mimics the resource-consuming behavior of the application, executing the virtual application in the second environment, and evaluating the performance of the virtual application in the second environment.
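One way to picture the virtual application is as a replayer of a coarse resource profile captured from the real application; the profile format below (CPU-busy and idle phases plus a working-set size) is an assumption for illustration, not the patent's representation.

```python
import time

def run_virtual_application(profile):
    """Replay a coarse resource profile and report how long the replay took
    in the target environment."""
    working_set = bytearray(profile.get("memory_bytes", 0))  # hold memory like the real app
    start = time.perf_counter()
    for phase in profile["phases"]:
        if phase["kind"] == "cpu":
            busy_until = time.perf_counter() + phase["seconds"]
            while time.perf_counter() < busy_until:
                _ = sum(i * i for i in range(1000))  # burn CPU
        else:  # "idle" phase, e.g. waiting on I/O in the real application
            time.sleep(phase["seconds"])
    return time.perf_counter() - start

# Hypothetical profile captured from the real application in the first environment.
elapsed = run_virtual_application({
    "memory_bytes": 64 * 1024 * 1024,
    "phases": [{"kind": "cpu", "seconds": 0.2}, {"kind": "idle", "seconds": 0.1}],
})
```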
Abstract:
The techniques described herein provide software testing of a candidate version of software. In some examples, an interceptor intercepts at least one production request to a production version of the software and issues the production request to a shadow proxy service as a shadow request. The shadow proxy service causes the at least one shadow request to be processed by the candidate version of the software being validated and an authority version of the software being used to validate the candidate version. The shadow proxy service may then compare and/or analyze at least one candidate response to the shadow request from the candidate version and at least one authority response to the shadow request from the authority version. A dashboard service may provide at least some of the resulting information and issue a request to the shadow proxy service to replay at least one of the shadow requests.
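A stripped-down sketch of the comparison step, assuming JSON-serializable responses and caller-supplied candidate/authority handlers (hypothetical names), might look like this:

```python
import json

def shadow_compare(request, candidate_handler, authority_handler, log):
    """Send a copy of an intercepted production request to both the candidate
    and the authority version, then record whether their responses differ.
    The handler and log interfaces are illustrative."""
    candidate_response = candidate_handler(request)
    authority_response = authority_handler(request)
    match = (json.dumps(candidate_response, sort_keys=True)
             == json.dumps(authority_response, sort_keys=True))
    log.append({
        "request": request,          # kept so a dashboard could request a replay later
        "candidate": candidate_response,
        "authority": authority_response,
        "match": match,
    })
    return match
```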
Abstract:
A method and computer program product for testing a high performance computing application performing a computation within a clustered computer arrangement is disclosed. The high performance computing arrangement performs computations across processors in parallel, wherein the processors cooperate to perform the computation. The application can be tested by adding delay, and therefore latency, to one or more commands inside the precompiled application. The addition of delay can be used to simulate the performance of different interconnects that are used within the high performance computing arrangement.
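Conceptually, the delay injection amounts to wrapping a communication primitive so that every call pays extra latency; the sketch below uses a placeholder send_message() and a made-up 50-microsecond figure purely for illustration.

```python
import functools
import time

def with_injected_latency(comm_call, extra_latency_seconds):
    """Wrap a communication primitive (e.g. a send/receive call) so every
    invocation is delayed, approximating a slower interconnect."""
    @functools.wraps(comm_call)
    def delayed(*args, **kwargs):
        time.sleep(extra_latency_seconds)
        return comm_call(*args, **kwargs)
    return delayed

def send_message(dest, payload):
    pass  # stand-in for the real communication library call

# Pretend the interconnect adds 50 microseconds per message.
send_message = with_injected_latency(send_message, 50e-6)
```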
Abstract:
A system provides efficient expansion capability for a storage unit under test that includes multiple storage processors, and reduces the number of required Ethernet ports and the number of physical connections on a client device. A first processor and a peer processor of a storage processor system may be coupled to counterpart processors on one or more other storage processor systems using direct port-to-port connections and/or using a network infrastructure. A command from the client device may be passed among first processors and peer processors of the multiple storage processor systems until the correct destination processor for the command is reached, and data packets may be passed from a source processor of a storage processor system through processors of other storage processor systems until the client device is reached.
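As a rough picture of the hop-by-hop delivery, the sketch below forwards a command along a chain of processor descriptors until the destination identifier is found; the dict fields and the chain ordering are illustrative only.

```python
def route_command(command, chain, destination_id):
    """Forward a command along a chain of storage processors (each entry is a
    hypothetical dict with an 'id' and an 'execute' callable) until the
    destination processor is reached, mimicking hop-by-hop delivery over
    direct port-to-port links."""
    hops = []
    for processor in chain:          # chain order reflects the physical cabling
        hops.append(processor["id"])
        if processor["id"] == destination_id:
            processor["execute"](command)
            return hops
    raise ValueError(f"destination {destination_id} not reachable from this chain")
```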
Abstract:
An efficient, cycle-accurate processor execution simulator models a target processor by executing a program execution image comprising instructions having run-time dependencies resolved by execution on an existing processor compatible with the target processor. The instructions may have been executed upon a processor in an I/O environment too complex to model. In one embodiment, the simulator executes instructions that were directly executed on a processor. In another embodiment, a markup engine alters a compiled program image, with reference to instructions executed on a processor, to remove run-time dependencies. The marked up program image is then executed by the simulator. The processor execution simulator includes an update engine operative to cycle-accurately simulate instruction execution, and a communication engine operative to model each communication bus of the target processor.
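A toy cycle-stepped model, in the spirit of (but far simpler than) the simulator described, might advance a cycle counter using per-opcode latencies plus a bus cost; the latencies, instruction format, and bus model below are invented for illustration.

```python
class SimpleCycleSimulator:
    """Toy cycle-stepped model: each pre-resolved instruction carries the
    operands observed on a real compatible processor plus a latency, so the
    simulator only advances time rather than re-evaluating run-time
    dependencies."""
    def __init__(self, instruction_latencies, bus_cycles_per_access):
        self.latencies = instruction_latencies      # e.g. {"add": 1, "load": 4}
        self.bus_cycles = bus_cycles_per_access
        self.cycle = 0

    def run(self, program_image):
        for instr in program_image:                 # e.g. {"op": "load", "uses_bus": True}
            self.cycle += self.latencies.get(instr["op"], 1)
            if instr.get("uses_bus"):
                self.cycle += self.bus_cycles       # communication-engine contribution
        return self.cycle

sim = SimpleCycleSimulator({"add": 1, "load": 4, "store": 4}, bus_cycles_per_access=10)
total_cycles = sim.run([{"op": "add"}, {"op": "load", "uses_bus": True}, {"op": "add"}])
```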
Abstract:
An application server may be instrumented to provide a resource measurement framework to collect resource usage data regarding request processing by the application server and applications executing on the application server. The resource measurement framework of an application server may collect hardware and software resource usage data regarding request processing at interception points located at interfaces between application components and services or other components of the application server by instrumenting those interfaces. The resource measurement framework may collect resource usage by instrumenting standard interfaces and/or methods of various specifications, such as those implemented by containers or other components of the application server. Thus, the resource measurement framework may collect resource usage for applications or application components that do not include any resource measuring capabilities. The collected resource usage data may be parsed and combined to create an overall characterization of resource usage corresponding to the application server's request processing.
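An interception point can be pictured as a wrapper installed at a component interface; the decorator below records wall-clock and CPU time per call, with an invented record format and component naming, and the final aggregation into a per-request characterization is left out.

```python
import functools
import time

usage_records = []  # later parsed and combined into an overall characterization

def interception_point(component_name):
    """Wrap an interface method so each call records elapsed wall-clock and
    CPU time without the wrapped application measuring anything itself."""
    def decorate(method):
        @functools.wraps(method)
        def wrapper(*args, **kwargs):
            wall_start = time.perf_counter()
            cpu_start = time.process_time()
            try:
                return method(*args, **kwargs)
            finally:
                usage_records.append({
                    "component": component_name,
                    "method": method.__name__,
                    "wall_seconds": time.perf_counter() - wall_start,
                    "cpu_seconds": time.process_time() - cpu_start,
                })
        return wrapper
    return decorate

# Example: instrumenting a servlet-like entry point without changing the application.
@interception_point("web-container")
def handle_http_request(request):
    return {"status": 200}
```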