Abstract:
A debugger executes on a computer system to receive a first debugging command from a client, where the first debugging command sets a first instruction in a reactive application to suspend execution of the reactive application, and where during execution of the reactive application the first instruction is triggered, which suspends execution of the reactive application. Responsive to the execution of the reactive application being suspended, a system clock of the reactive application is replaced with a substitute clock and the substitute clock is paused. The debugger then receives a second debugging command, where the second debugging command triggers a second instruction in the reactive application to continue execution of the reactive application. Responsive to the execution of the reactive application being continued, clocking of the substitute clock is continued.
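Purely as an illustration of the clock-substitution idea described above, a minimal Java sketch might look like the following; the class and method names are assumptions of mine, not part of any particular debugger.

    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative substitute clock: while the application is suspended the clock
    // stops advancing, so time-driven reactive logic observes no gap on resume.
    class SubstituteClock {
        private final AtomicLong frozenAt = new AtomicLong();
        private volatile long pausedSkewMillis;   // total time spent suspended
        private volatile boolean paused;

        long now() {
            return paused ? frozenAt.get()
                          : System.currentTimeMillis() - pausedSkewMillis;
        }

        void pause() {    // invoked when the first (breakpoint) instruction fires
            frozenAt.set(System.currentTimeMillis() - pausedSkewMillis);
            paused = true;
        }

        void resume() {   // invoked when the second (continue) instruction fires
            pausedSkewMillis = System.currentTimeMillis() - frozenAt.get();
            paused = false;
        }
    }

In this sketch the debugger would install an instance of this clock in place of the application's system clock when the breakpoint suspends execution, call pause(), and call resume() when the continue command is processed.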
Abstract:
Systems, methods, and techniques for distributing a workload of an application to a GPU are provided. An example method includes obtaining an intermediate representation of a source code portion of an application and compiling the intermediate representation into a set of instructions that is native to the GPU. The set of instructions includes a binary representation of the source code portion executable on the GPU, and execution of the set of instructions on the GPU includes processing a workload of the application. The method also includes transforming data associated with the source code portion into one or more data types native to the GPU and sending, to the GPU, a communication including the set of instructions executable on the GPU and the one or more data types native to the GPU.
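As a rough sketch of the described flow (an intermediate representation compiled to a GPU-native binary, host data marshalled into GPU-native types, and both sent to the device in one communication), the types and interfaces below are invented for illustration and do not correspond to a real GPU driver API.

    import java.nio.ByteBuffer;
    import java.util.List;

    // Illustrative types only; a real backend would wrap a driver such as OpenCL or CUDA.
    record IntermediateRepresentation(String module) {}
    record GpuBinary(byte[] machineCode) {}
    record GpuBuffer(ByteBuffer data) {}
    record GpuCommunication(GpuBinary kernel, List<GpuBuffer> operands) {}

    interface GpuBackend {
        GpuBinary compile(IntermediateRepresentation ir); // IR -> native instruction set
        GpuBuffer marshal(float[] hostData);              // host data -> GPU-native layout
        void submit(GpuCommunication message);            // ship kernel and operands to the GPU
    }

    class WorkloadDistributor {
        private final GpuBackend backend;

        WorkloadDistributor(GpuBackend backend) { this.backend = backend; }

        // Offload one source-code portion: compile its IR, marshal its data, send both.
        void offload(IntermediateRepresentation ir, float[] hostData) {
            GpuBinary kernel = backend.compile(ir);
            GpuBuffer operand = backend.marshal(hostData);
            backend.submit(new GpuCommunication(kernel, List.of(operand)));
        }
    }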
Abstract:
A method and system for automatic ESB deployment at the level of individual services are described. In one method, a load balancer repeatedly monitors the performance of individual services installed on ESB nodes. The performance is measured in view of utilization metrics of the individual services. The load balancer periodically determines whether the performance of one or more of the individual services falls below a performance threshold and, in response to that determination, deploys duplicate services for those services at one or more additional ESB nodes without user intervention.
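A minimal sketch of the monitoring-and-duplication loop follows, assuming invented names such as EsbCluster and EsbNode rather than any real ESB product's API.

    import java.util.Map;

    interface EsbCluster { EsbNode selectLeastLoadedNode(); }
    interface EsbNode { void deployDuplicate(String serviceName); }

    // Illustrative load balancer: run periodically (e.g. from a scheduled executor),
    // compare each service's measured performance to a threshold, and duplicate
    // underperforming services on another node with no user intervention.
    class AutoDeployingLoadBalancer {
        private static final double PERFORMANCE_THRESHOLD = 0.75; // assumed value

        void evaluate(Map<String, Double> performanceByService, EsbCluster cluster) {
            performanceByService.forEach((serviceName, performance) -> {
                if (performance < PERFORMANCE_THRESHOLD) {
                    cluster.selectLeastLoadedNode().deployDuplicate(serviceName);
                }
            });
        }
    }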
Abstract:
Systems and methods are disclosed for executing a clustered method at a cluster of nodes. An example method includes identifying an annotated class included in an application that is deployed on the cluster of nodes. An annotation of the class indicates that a clustered method associated with the annotated class is executed at each node in the cluster. The method also includes creating an instance of the annotated class and coordinating execution of the clustered method with one or more other nodes in the cluster. The method further includes executing, based on the coordinating, the clustered method using each respective node's instance of the annotated class.
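For illustration only, an annotation-driven version of this idea could look roughly like the sketch below; the @Clustered annotation, the Coordinator interface, and the example method are assumptions, not an existing framework API.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Marks a class whose clustered method should run on every node in the cluster.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Clustered {}

    @Clustered
    class CacheWarmer {
        void warmLocalCache() {
            // each node runs this against its own instance of the annotated class
        }
    }

    interface Coordinator { void awaitPeers(); }

    class ClusterDeployer {
        // On deployment: detect the annotation, create a local instance, coordinate
        // with the other nodes, then execute the clustered method on this node
        // (hard-coded to the example method name here for brevity).
        void deploy(Class<?> candidate, Coordinator coordinator) throws Exception {
            if (candidate.isAnnotationPresent(Clustered.class)) {
                Object instance = candidate.getDeclaredConstructor().newInstance();
                coordinator.awaitPeers();
                candidate.getDeclaredMethod("warmLocalCache").invoke(instance);
            }
        }
    }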
Abstract:
A device includes a processor and a memory comprising machine readable instructions that, when executed by the processor, cause the device to display information to a user through a display screen of the device, the display screen being positioned on a front side of the device; receive a first input from a first sensor placed on a left side of the device, the first input indicating a placement of at least one appendage along the first sensor; receive a second input from a second sensor placed on a right side of the device, the second input indicating a placement of at least one appendage along the second sensor; and execute a predefined function within an application running on the device, the predefined function being based on both the first input and the second input.
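A hedged sketch of how the two edge-sensor inputs might be combined into one predefined function; the event type, gesture, and action names are invented for illustration and are not a real mobile-platform API.

    enum Edge { LEFT, RIGHT }

    record GripEvent(Edge edge, int appendageCount) {}

    class DualEdgeInputHandler {
        private GripEvent left;
        private GripEvent right;

        // Called whenever either edge sensor reports an appendage placement.
        void onSensorInput(GripEvent event) {
            if (event.edge() == Edge.LEFT) { left = event; } else { right = event; }
            if (left != null && right != null) {
                // The predefined function depends on both inputs together,
                // e.g. a two-finger grip on each side triggers a screenshot.
                if (left.appendageCount() >= 2 && right.appendageCount() >= 2) {
                    takeScreenshot();
                }
            }
        }

        private void takeScreenshot() { /* application-defined action */ }
    }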
Abstract:
Systems and methods are disclosed for controlling access to a private partition on a storage device. An example system includes a token reader that detects a hardware token storing a private key and obtains the private key stored on the hardware token. The system also includes a partition controller that determines whether the private key unlocks a private partition on a storage device. In response to determining that the private key unlocks the private partition, the partition controller unlocks the private partition on the storage device. The private partition is invisible to an operating system executing on the system when the private partition is locked.
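As a sketch of the control flow only (TokenReader, StorageDevice, and the key check are placeholders, not a real cryptographic or OS API):

    import java.security.PrivateKey;

    interface TokenReader { PrivateKey readPrivateKey(); }

    interface StorageDevice {
        boolean privatePartitionAccepts(PrivateKey key);
        void unlockPrivatePartition();   // once unlocked, the partition becomes visible to the OS
    }

    class PartitionController {
        // Unlock the private partition only if the key read from the hardware
        // token actually opens it; otherwise it stays locked and invisible.
        boolean tryUnlock(TokenReader reader, StorageDevice device) {
            PrivateKey key = reader.readPrivateKey();
            if (key != null && device.privatePartitionAccepts(key)) {
                device.unlockPrivatePartition();
                return true;
            }
            return false;
        }
    }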
Abstract:
An example system for transmitting data between applications may include an access module that accesses a data object associated with a first application running on a first node. The access module may access the data object without using a class library. The system also includes a communication module that transmits, via a network, data associated with the data object to a second node. The communication module may transmit the data for use by a second application running on the second node, and the data object may be accessible by at most one application at a time.
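A minimal sketch of the two modules, assuming the data object is backed by a file and the network transfer is a plain socket stream; both assumptions are mine and not stated in the abstract.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    // Reads the data object's raw bytes directly, with no class library and no
    // object deserialization required on the receiving side.
    class AccessModule {
        byte[] readObjectBytes(File dataObject) throws IOException {
            try (InputStream in = new FileInputStream(dataObject)) {
                return in.readAllBytes();
            }
        }
    }

    // Streams those bytes over the network to the second node for the second application.
    class CommunicationModule {
        void transmit(byte[] payload, String host, int port) throws IOException {
            try (Socket socket = new Socket(host, port);
                 OutputStream out = socket.getOutputStream()) {
                out.write(payload);
            }
        }
    }

The single-accessor guarantee (at most one application holding the data object at a time) would sit around readObjectBytes, for example via a file lock; it is omitted here for brevity.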
Abstract:
A mechanism for providing an operating system history is disclosed. A method includes placing, by an operating system (OS) of a processing device, a pointer to a context of a first application in a history context of a plurality of applications in a direct interface array (DIR) of the OS upon an indication of switching from an interface of the first application to an interface of a second application. The method also includes moving the pointer from the context of the first application to the context of the second application in the DIR in view of an indication of a closing of the interface of the second application. The interface of the second application is closed in a foreground of the OS while the second application continues executing in a background of the OS. The method further includes providing the interface of the second application in the foreground of the OS upon activation of a global back function.
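A rough Java model of the history behaviour; DirectInterfaceArray, AppContext, and the method names are my own shorthand for the structures the abstract describes, not an actual OS interface.

    import java.util.ArrayDeque;
    import java.util.Deque;

    record AppContext(String applicationId) {}

    // Illustrative history context: the pointer is modeled as the head of a deque
    // of application contexts kept by the OS.
    class DirectInterfaceArray {
        private final Deque<AppContext> history = new ArrayDeque<>();

        // Switching from the first application's interface to the second's:
        // record a pointer to the context being left.
        void onSwitchAway(AppContext leaving) {
            history.push(leaving);
        }

        // The second application's interface is closed in the foreground while the
        // application keeps executing in the background: move the pointer to it.
        void onInterfaceClosed(AppContext closed) {
            history.remove(closed);
            history.push(closed);
        }

        // Global back: bring the most recently recorded interface back to the foreground.
        AppContext onGlobalBack() {
            return history.poll();
        }
    }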