Abstract:
Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between the CPU and GPU, including objects that have virtual functions. Embodiments thus ensure that the correct virtual function is invoked, whether the call is made from the CPU or from the GPU.
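For illustration only, the following Python sketch models the shared-object idea with per-device function tables; the names (SharedObject, _vtable) are hypothetical, and an actual embodiment would resolve virtual calls at the compiler/runtime level rather than through a Python dictionary.

```python
# Illustrative sketch only: models the idea of a shared object whose
# virtual functions dispatch to the correct implementation for the
# device (CPU or GPU) on which the call is made. All names here are
# hypothetical; real embodiments operate at the compiler/runtime level.

class SharedObject:
    """An object shared between CPU and GPU with per-device vtables."""

    def __init__(self):
        # One function table per device; a runtime would populate
        # these with addresses of CPU- and GPU-compiled variants.
        self._vtable = {
            "cpu": {"process": self._process_cpu},
            "gpu": {"process": self._process_gpu},
        }

    def call(self, method, device, *args):
        # Dispatch through the table for the calling device, so the
        # same virtual call resolves correctly on either side.
        return self._vtable[device][method](*args)

    def _process_cpu(self, x):
        return f"CPU implementation processed {x}"

    def _process_gpu(self, x):
        return f"GPU implementation processed {x}"


obj = SharedObject()
print(obj.call("process", "cpu", 42))  # resolved to the CPU variant
print(obj.call("process", "gpu", 42))  # resolved to the GPU variant
```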
Abstract:
Preemptive scheduling enclaves as disclosed herein support both cooperative and preemptive scheduling of in-enclave (IE) thread execution. These preemptive scheduling enclaves may include a scheduler configured to be executed as part of normal hardware interrupt processing by enclave threads. The scheduler identifies an IE thread to be scheduled and modifies enclave data structures so that when the enclave thread resumes processing after a hardware interrupt, the identified IE thread is executed, rather than the interrupted IE thread.
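A minimal sketch of the scheduling idea, in Python with invented names (Enclave, on_hardware_interrupt): on an interrupt, the scheduler requeues the interrupted IE thread and rewrites the resume target so that a different IE thread runs next. This is a conceptual model, not the enclave data structures themselves.

```python
# Minimal sketch (hypothetical names throughout): models how a scheduler
# run during interrupt handling can change which in-enclave (IE) thread
# resumes, by rewriting the enclave's saved resume target rather than
# always resuming the interrupted thread.

from collections import deque

class Enclave:
    def __init__(self, ie_threads):
        self.ready = deque(ie_threads)   # runnable IE thread contexts
        self.resume_target = None        # context restored after an interrupt

    def on_hardware_interrupt(self, interrupted):
        # The interrupted IE thread goes back on the ready queue...
        self.ready.append(interrupted)
        # ...and the scheduler picks the next IE thread to run, so the
        # enclave resumes a *different* thread (preemption).
        self.resume_target = self.ready.popleft()
        return self.resume_target


enclave = Enclave(["ie_thread_B", "ie_thread_C"])
nxt = enclave.on_hardware_interrupt("ie_thread_A")
print(f"resuming {nxt} instead of ie_thread_A")  # -> ie_thread_B
```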
Abstract:
Autonomous robots and methods of operating the same are disclosed. An autonomous robot includes a sensor and memory including machine readable instructions. The autonomous robot further includes at least one processor to execute the instructions to generate a velocity costmap associated with an environment in which the robot is located. The processor generates the velocity costmap based on a source image captured by the sensor. The velocity costmap includes velocity information indicative of movement of an obstacle detected in the environment.
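As a rough illustration of how velocity information might be derived and attached to a costmap, the sketch below compares two occupancy frames with NumPy; the single-obstacle centroid method and all names are assumptions for demonstration, not the disclosed algorithm.

```python
# Illustrative sketch, not the patented algorithm: derives a crude
# velocity estimate for an obstacle from two binary occupancy images
# and returns it alongside a simple occupancy costmap.

import numpy as np

def velocity_costmap(prev_frame, curr_frame, dt):
    """Return per-cell cost plus a single (vx, vy) obstacle estimate."""
    # Centroid of occupied cells in each frame (assumes one obstacle).
    def centroid(frame):
        ys, xs = np.nonzero(frame)
        return np.array([xs.mean(), ys.mean()])

    v = (centroid(curr_frame) - centroid(prev_frame)) / dt  # cells/sec
    costmap = np.where(curr_frame > 0, 100.0, 0.0)          # occupancy cost
    return costmap, v

# Two 8x8 frames with a 2x2 obstacle moving one cell to the right.
prev = np.zeros((8, 8)); prev[3:5, 2:4] = 1
curr = np.zeros((8, 8)); curr[3:5, 3:5] = 1
cmap, vel = velocity_costmap(prev, curr, dt=0.1)
print(vel)  # -> [10.  0.]  (10 cells/sec in +x, 0 in +y)
```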
Abstract:
A mechanism is described for facilitating slimming of neural networks in machine learning environments. A method of embodiments, as described herein, includes learning a first neural network associated with machine learning processes to be performed by a processor of a computing device, where learning includes analyzing a plurality of channels associated with one or more layers of the first neural network. The method may further include computing a plurality of scaling factors to be associated with the plurality of channels such that each channel is assigned a scaling factor, wherein each scaling factor is to indicate the relevance of a corresponding channel within the first neural network. The method may further include pruning the first neural network into a second neural network by removing one or more of the plurality of channels having low relevance, as indicated by the scaling factors assigned to those channels.
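The pruning step can be illustrated with a short sketch; the keep ratio, the random weights, and the function names below are invented for demonstration and do not reflect the disclosed training or selection procedure.

```python
# Sketch of the channel-pruning step described above (hypothetical
# threshold and data): channels whose learned scaling factor falls
# below the cutoff are treated as low-relevance and removed.

import numpy as np

def prune_channels(weights, scaling_factors, keep_ratio=0.5):
    """weights: (out_channels, ...) array; one scaling factor per channel."""
    n_keep = max(1, int(len(scaling_factors) * keep_ratio))
    # Rank channels by scaling factor; keep the most relevant ones.
    keep = np.sort(np.argsort(scaling_factors)[-n_keep:])
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3))        # 8-channel conv layer
gamma = np.array([0.9, 0.01, 0.7, 0.02, 0.8, 0.03, 0.6, 0.05])
pruned, kept = prune_channels(w, gamma, keep_ratio=0.5)
print(pruned.shape, kept)  # -> (4, 3, 3, 3) [0 2 4 6]
```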
Abstract:
Methods, systems, and apparatuses may be directed to hosting, by a virtual machine manager of a local machine, a virtual machine having a device driver. The virtual machine manager may obtain, from a stub driver on a remote machine, information about an I/O device on the remote machine, where the I/O device is bound to the stub driver. The virtual machine manager may instantiate a virtual I/O device on the local machine corresponding to the I/O device on the remote machine. The virtual machine manager may then collaborate with the stub driver to effectuate a real access to the I/O device on the remote machine whenever the device driver accesses the virtual I/O device on behalf of a program on the local machine.
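A toy sketch of this proxying relationship follows, with hypothetical classes (StubDriver, VirtualIODevice) standing in for the remote stub driver and the VMM-instantiated virtual device; real embodiments would communicate over a network transport rather than direct method calls.

```python
# Conceptual sketch (all names hypothetical): the virtual I/O device on
# the local machine proxies each access to the stub driver that owns
# the real device on the remote machine.

class StubDriver:
    """Remote side: bound to the physical I/O device."""
    def __init__(self):
        self._registers = {}

    def describe(self):
        return {"vendor": 0x8086, "device": 0x1234}  # made-up IDs

    def real_access(self, op, reg, value=None):
        if op == "write":
            self._registers[reg] = value
        return self._registers.get(reg)

class VirtualIODevice:
    """Local side: instantiated by the VMM from the stub's description."""
    def __init__(self, stub):
        self._stub = stub
        self.info = stub.describe()

    def access(self, op, reg, value=None):
        # Collaborate with the remote stub to effect the real access.
        return self._stub.real_access(op, reg, value)

vdev = VirtualIODevice(StubDriver())
vdev.access("write", "CTRL", 0x1)
print(hex(vdev.access("read", "CTRL")))  # -> 0x1
```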
Abstract:
A method for scheduling a computational task is proposed. The method includes receiving, at a server, a request for executing a computational task from a client device. The method further includes forwarding the computational task to a processing device if a predetermined condition is fulfilled. The predetermined condition can be based on an execution time or on a security level of data of the computational task, for example.
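One way to picture the forwarding decision is the sketch below; the thresholds, field names, and the choice to check both conditions at once are invented for illustration.

```python
# Toy sketch of the forwarding decision (thresholds are invented):
# a task is offloaded to a processing device only when the stated
# conditions - estimated execution time and data security level - hold.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    est_seconds: float
    security_level: int  # 0 = public ... 3 = secret

def should_forward(task, max_seconds=5.0, max_security=1):
    return task.est_seconds <= max_seconds and task.security_level <= max_security

for t in [Task("resize", 0.2, 0), Task("payroll", 0.3, 3)]:
    target = "processing device" if should_forward(t) else "server"
    print(f"{t.name}: run on {target}")
```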
Abstract:
Various systems and methods for providing access control are described herein. A system comprises a display; a processor; and a memory including instructions which, when executed on the processor, cause the processor to: present a limited lock screen on the display of a user device, wherein the limited lock screen provides only a non-personalized access mechanism; receive user input via the limited lock screen; correlate the user input with an operating context, wherein the user input is uniquely correlated with the operating context; and unlock the user device with access to the operating context.
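A minimal sketch of the correlation step, assuming a hypothetical mapping from non-personalized inputs to operating contexts; the gesture names and context labels are invented.

```python
# Hypothetical sketch: a limited lock screen exposes only a
# non-personalized input (here, a named gesture), and each input is
# uniquely correlated with one operating context to unlock into.

CONTEXT_MAP = {
    "swipe_up": "camera_only",     # restricted context
    "circle": "guest_profile",
    "double_tap": "full_session",  # owner context
}

def unlock(user_input):
    context = CONTEXT_MAP.get(user_input)
    if context is None:
        return "remain locked"
    return f"unlocked into context: {context}"

print(unlock("swipe_up"))   # -> unlocked into context: camera_only
print(unlock("unknown"))    # -> remain locked
```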
Abstract:
Various embodiments are generally directed to an apparatus and method for configuring an execution environment in a user space for device driver operations, and for redirecting a device driver operation for execution in that execution environment, including copying instructions of the device driver operation from the kernel space to a user process in the user space. In addition, the redirected device driver operation may be executed in the execution environment in the user space.
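Very loosely, the redirection can be pictured as below; Python objects stand in for kernel space and the user-space execution environment, so this is a conceptual sketch only, with all names invented.

```python
# Rough sketch (deliberately simplified): a driver operation normally
# handled in "kernel space" is redirected into an execution environment
# set up in a user process, where a copied version of the operation runs.

class UserSpaceEnvironment:
    """Stands in for the per-process execution environment."""
    def __init__(self):
        self._ops = {}

    def copy_in(self, name, op):
        # Models copying the driver operation's instructions from
        # kernel space into the user process.
        self._ops[name] = op

    def execute(self, name, *args):
        return self._ops[name](*args)

def kernel_read_block(block_no):          # the original driver op
    return f"data from block {block_no}"

env = UserSpaceEnvironment()
env.copy_in("read_block", kernel_read_block)   # redirect the operation
print(env.execute("read_block", 7))            # runs in "user space"
```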
Abstract:
An electric vehicle computing sharing system (100) is adapted to receive a signal indicating that an electric vehicle (110, 120, 130) is connected to a charging station (115, 125, 135). The computing sharing system (100) may be further adapted to receive information about the electric vehicle (110, 120, 130). The computing sharing system (100) may be further adapted to determine a predicted charging duration (535) for the electric vehicle (110, 120, 130). The computing sharing system (100) may be further adapted to identify a task for execution by a computing resource of the electric vehicle (110, 120, 130) based on the predicted charging duration (535). The computing sharing system (100) may be further adapted to transmit the task to the electric vehicle (110, 120, 130). The computing sharing system (100) may be further adapted to receive a result for the task from the electric vehicle (110, 120, 130).
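A small sketch of the duration-based task selection, with invented battery, charger, and runtime numbers; the prediction model and safety margin are assumptions, not the disclosed method.

```python
# Sketch with invented numbers: the sharing system picks a task whose
# estimated runtime fits within the vehicle's predicted charging
# duration, then the task would be transmitted and the result collected.

def predict_charging_duration(battery_kwh, soc, charger_kw):
    """Hours to full charge from the current state of charge (soc, 0..1)."""
    return battery_kwh * (1.0 - soc) / charger_kw

def pick_task(tasks, available_hours, margin=0.8):
    # Choose the largest task that still fits with a safety margin.
    fitting = [t for t in tasks if t[1] <= available_hours * margin]
    return max(fitting, key=lambda t: t[1], default=None)

duration = predict_charging_duration(battery_kwh=60, soc=0.25, charger_kw=11)
tasks = [("render_frame", 0.5), ("train_shard", 2.5), ("index_logs", 5.0)]
task = pick_task(tasks, duration)
print(f"{duration:.1f} h available -> dispatch {task}")
# -> 4.1 h available -> dispatch ('train_shard', 2.5)
```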