Abstract:
Systems, apparatuses, and methods provide technology for smart work spaces in ubiquitous computing environments. The technology may determine a task to be performed in a smart work space and perform task modeling, wherein the task modeling includes determining one or more user interfaces involved with the task. One or more placements may be determined for the one or more user interfaces based on one or more ergonomic conditions, an incidence of interaction, and a length of time of interaction. The technology may position the one or more user interfaces in the smart work space in accordance with the determined one or more placements.
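As an illustrative sketch only, the following Python ranks candidate placements by the three factors named above (ergonomic conditions, incidence of interaction, and length of interaction); the Placement class, its field names, and the weights are assumptions for illustration, not details from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    location: str          # e.g. "desk_left", "wall_front" (illustrative labels)
    ergonomic_score: float # 0..1, how well the spot fits posture/reach
    incidence: int         # how often the user interacts at this spot
    duration_s: float      # typical length of an interaction, in seconds

def rank_placements(candidates: list[Placement]) -> list[Placement]:
    """Order candidate UI placements by a weighted combination of the
    three factors named in the abstract; weights are illustrative."""
    def score(p: Placement) -> float:
        return (0.5 * p.ergonomic_score
                + 0.3 * min(p.incidence / 10, 1.0)
                + 0.2 * min(p.duration_s / 600, 1.0))
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    spots = [
        Placement("wall_front", 0.9, 4, 300.0),
        Placement("desk_left", 0.6, 12, 45.0),
    ]
    print("Place UI at:", rank_placements(spots)[0].location)
```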
Abstract:
Systems and methods may provide for identifying an amount of time associated with a user-based activity with respect to a battery-powered device, and determining a battery drain rate of the battery-powered device. An indicator of whether the user-based activity can be completed in the amount of time may be generated based on the battery drain rate.
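A minimal sketch of the completion check described above, assuming percentage-based charge and drain-rate units; the function and parameter names are illustrative, not taken from the disclosure.

```python
def can_complete(activity_minutes: float,
                 battery_pct_remaining: float,
                 drain_pct_per_minute: float) -> bool:
    """Return True if the remaining charge outlasts the activity."""
    if drain_pct_per_minute <= 0:
        return True  # not draining (e.g. device is plugged in)
    minutes_of_charge = battery_pct_remaining / drain_pct_per_minute
    return minutes_of_charge >= activity_minutes

# Example: 40% battery draining at 0.5%/min gives 80 minutes of runtime,
# so a 60-minute activity can be completed.
print(can_complete(60, 40, 0.5))  # True
```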
Abstract:
An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes a graphics processing unit (GPU) to: provide a virtual GPU monitor (VGM) to interface over a network with a middleware layer of a client platform, the VGM to interface with the middleware layer using a message passing interface; configure and expose, by the VGM, virtual functions (VFs) of the GPU to the middleware layer of the client platform; intercept, by the VGM, request messages directed to the GPU from the middleware layer, the request messages corresponding to VFs of the GPU to be utilized by the client platform; and generate, by the VGM, a response to the request messages for the middleware client.
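The request/response flow between the middleware layer and the VGM might look roughly like the sketch below, assuming a simple dictionary-based message format; the VirtualGpuMonitor class and handle_request method are illustrative names, not the disclosed interface.

```python
class VirtualGpuMonitor:
    def __init__(self, num_vfs: int):
        # Virtual functions the VGM configures and exposes to the
        # client platform's middleware layer.
        self.vfs = {vf_id: {"busy": False} for vf_id in range(num_vfs)}

    def handle_request(self, msg: dict) -> dict:
        """Intercept a middleware request naming a VF and return a response."""
        vf_id = msg.get("vf")
        if vf_id not in self.vfs:
            return {"status": "error", "reason": "unknown VF"}
        self.vfs[vf_id]["busy"] = True
        return {"status": "ok", "vf": vf_id, "op": msg.get("op")}

vgm = VirtualGpuMonitor(num_vfs=4)
print(vgm.handle_request({"vf": 2, "op": "submit_kernel"}))
```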
Abstract:
Embodiments are generally directed to a multi-tile architecture for graphics operations. An embodiment of an apparatus includes a multi-tile graphics processor, the multi-tile graphics processor including one or more dies; multiple processor tiles installed on the one or more dies; and a structure to interconnect the processor tiles on the one or more dies, wherein the structure is to enable communications between the processor tiles.
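A rough model of the die/tile/interconnect relationship is sketched below; the all-to-all interconnect and the class names are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Tile:
    die: int
    index: int

@dataclass
class MultiTileGpu:
    dies: int
    tiles_per_die: int
    tiles: list[Tile] = field(default_factory=list)

    def __post_init__(self):
        # Enumerate every processor tile installed on each die.
        self.tiles = [Tile(d, t) for d in range(self.dies)
                      for t in range(self.tiles_per_die)]

    def interconnect_pairs(self):
        """Yield every tile pair the interconnect structure links,
        modeled here (as an assumption) as all-to-all across dies."""
        for i, a in enumerate(self.tiles):
            for b in self.tiles[i + 1:]:
                yield a, b

gpu = MultiTileGpu(dies=2, tiles_per_die=2)
print(sum(1 for _ in gpu.interconnect_pairs()))  # 6 links among 4 tiles
```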
Abstract:
An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes one or more processors to facilitate receiving a manifest corresponding to graph nodes representing regions of memory of a remote client machine, the graph nodes corresponding to a command buffer and to associated data structures and kernels of the command buffer used to initialize a hardware accelerator and execute the kernels, and the manifest indicating a destination memory location of each of the graph nodes and dependencies of each of the graph nodes; identifying, based on the manifest, the command buffer and the associated data structures to copy to the host memory; identifying, based on the manifest, the kernels to copy to local memory of the hardware accelerator; and patching addresses in the command buffer copied to the host memory with updated addresses of corresponding locations in the host memory.
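The copy-and-patch flow might be sketched as follows, assuming a manifest of node records with name, kind, refs, and data fields; those field names and the address arithmetic are illustrative, not taken from the disclosure.

```python
def stage_graph(manifest: list[dict]) -> dict:
    """Copy nodes per the manifest and patch command-buffer references."""
    host_memory, accel_memory = {}, {}
    next_addr = 0x1000
    # First pass: kernels go to the accelerator's local memory; the
    # command buffer and associated data structures go to host memory.
    for node in manifest:
        if node["kind"] == "kernel":
            accel_memory[node["name"]] = node["data"]
        else:
            host_memory[node["name"]] = next_addr
            next_addr += 0x100
    # Second pass: patch command-buffer references with the updated
    # host-memory addresses of the nodes they depend on.
    patched = {}
    for node in manifest:
        if node["kind"] == "command_buffer":
            patched[node["name"]] = [host_memory[ref] for ref in node["refs"]]
    return {"host": host_memory, "accel": accel_memory, "patched": patched}

manifest = [
    {"name": "cmdbuf", "kind": "command_buffer", "refs": ["desc0"], "data": b""},
    {"name": "desc0", "kind": "data", "refs": [], "data": b"\x00"},
    {"name": "k0", "kind": "kernel", "refs": [], "data": b"\x90"},
]
print(stage_graph(manifest)["patched"])  # {'cmdbuf': [4352]}
```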
Abstract:
An apparatus to facilitate inferred object shading is disclosed. The apparatus comprises one or more processors to receive rasterized pixel data and hierarchical data associated with one or more objects and perform an inferred shading operation on the rasterized pixel data, including using one or more trained neural networks to perform texturing and lighting on the rasterized pixel data to generate a pixel output, wherein the one or more trained neural networks use the hierarchical data to learn a three-dimensional (3D) geometry, latent space, and representation of the one or more objects.
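A schematic of the data flow only: rasterized pixels plus hierarchical object data pass through a trained network that applies texturing and lighting. The stand-in callable below is not a real model, and all names are illustrative assumptions.

```python
from typing import Callable

Pixel = tuple[float, float, float]

def inferred_shade(raster: list[Pixel],
                   hierarchy: dict,
                   net: Callable[[Pixel, dict], Pixel]) -> list[Pixel]:
    """Apply the trained network to each rasterized pixel, conditioning
    on the hierarchical data describing the objects."""
    return [net(p, hierarchy) for p in raster]

# Stand-in "trained" network: scales color by a per-object light factor.
def toy_net(p: Pixel, h: dict) -> Pixel:
    gain = h.get("light", 1.0)
    return tuple(min(1.0, c * gain) for c in p)

pixels = [(0.2, 0.4, 0.6), (0.8, 0.1, 0.1)]
print(inferred_shade(pixels, {"light": 1.2}, toy_net))
```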
Abstract:
An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes a processor executing a trusted execution environment (TEE) comprising a field-programmable gate array (FPGA) driver to interface with an FPGA device that is remote to the apparatus; and a remote memory-mapped input/output (MMIO) driver to expose the FPGA device as a legacy device to the FPGA driver, wherein the processor is to utilize the remote MMIO driver to: enumerate the FPGA device using FPGA enumeration data provided by a remote management controller of the FPGA device, the FPGA enumeration data comprising a configuration space and device details; load function drivers for the FPGA device in the TEE; create corresponding device files in the TEE based on the FPGA enumeration data; and handle remote MMIO reads and writes to the FPGA device via a network transport protocol.
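The remote-MMIO idea might be sketched as follows, assuming register reads and writes are tunneled over a socket-like transport in a JSON message format; the class, method names, and message fields are assumptions, not the disclosed protocol.

```python
import json

class RemoteMmioDriver:
    def __init__(self, transport):
        self.transport = transport   # object with send()/recv() methods (assumed)
        self.enumeration = None

    def enumerate_device(self) -> dict:
        """Fetch configuration space and device details so the FPGA can be
        presented to the local FPGA driver as a legacy device."""
        self.transport.send(json.dumps({"op": "enumerate"}))
        self.enumeration = json.loads(self.transport.recv())
        return self.enumeration

    def mmio_read(self, offset: int) -> int:
        self.transport.send(json.dumps({"op": "read", "offset": offset}))
        return json.loads(self.transport.recv())["value"]

    def mmio_write(self, offset: int, value: int) -> None:
        self.transport.send(json.dumps({"op": "write",
                                        "offset": offset, "value": value}))
        self.transport.recv()  # wait for acknowledgement
```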
Abstract:
A method includes receiving a request for a transfer of data on a bus of a computing device; determining a direction for the transfer, at least in part based on the request; determining a quantity of data for the transfer, at least in part based on the request; determining a power state for a lane of the bus, at least in part based on the direction and the quantity of data for the transfer; and setting the power state for the lane of the bus.
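One way such a policy could be expressed is sketched below; the state names (borrowed from PCIe link power states) and the 4 KiB threshold are illustrative choices, not values from the abstract.

```python
def lane_power_state(direction: str, quantity_bytes: int) -> str:
    """Pick a lane power state from the transfer direction and size."""
    if quantity_bytes == 0:
        return "L1"   # nothing to move: keep the lane in a low-power state
    if direction == "tx" and quantity_bytes < 4096:
        return "L0s"  # short outbound burst: light sleep between bursts
    return "L0"       # large or inbound transfer: fully active lane

print(lane_power_state("tx", 1024))   # L0s
print(lane_power_state("rx", 65536))  # L0
```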