Abstract:
A system includes an image signal processor (ISP) to process image data provided by an image sensor; a first hardware accelerator; a second hardware accelerator; a shared memory accessible by the first and second hardware accelerators; first processor circuitry to process sensor data, the sensor data including at least one of the image data or audio data, the audio data provided by an audio sensor; and second processor circuitry to implement an offload engine, the first processor circuitry to offload a computational task associated with the sensor data to the offload engine, the computational task to determine a context associated with the sensor data, at least one of the first processor circuitry or the second processor circuitry having at least one of a low power mode, an idle mode, or a sleep mode, the offload engine to determine the context when the at least one of the first processor circuitry or the second processor circuitry is in the at least one of the low power mode, the idle mode, or the sleep mode.
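As a rough illustration of the offload flow in this abstract (host circuitry hands a context-determination task to an offload engine and then drops into a low-power state), the following Python sketch models the components as plain classes. All names here (OffloadEngine, HostProcessor, determine_context) are hypothetical stand-ins for the claimed circuitry, not an implementation from the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class PowerMode(Enum):
    ACTIVE = auto()
    LOW_POWER = auto()
    IDLE = auto()
    SLEEP = auto()


@dataclass
class SensorData:
    image_frame: Optional[bytes] = None    # e.g., output of the ISP
    audio_samples: Optional[bytes] = None  # e.g., output of the audio sensor


class OffloadEngine:
    """Stand-in for the second processor circuitry that accepts offloaded work."""

    def determine_context(self, data: SensorData) -> str:
        # Placeholder classification; a real engine might run a small model
        # on the hardware accelerators that share memory with it.
        if data.audio_samples is not None:
            return "speech-like audio"
        if data.image_frame is not None:
            return "scene with motion"
        return "no context"


class HostProcessor:
    """Stand-in for the first processor circuitry."""

    def __init__(self, engine: OffloadEngine) -> None:
        self.engine = engine
        self.mode = PowerMode.ACTIVE

    def offload_context_task(self, data: SensorData) -> str:
        # Hand the task to the offload engine and drop into a low-power mode;
        # the engine determines the context while the host stays asleep.
        self.mode = PowerMode.LOW_POWER
        return self.engine.determine_context(data)


host = HostProcessor(OffloadEngine())
print(host.offload_context_task(SensorData(audio_samples=b"\x00\x01")), host.mode.name)
```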
Abstract:
Technologies for offloading or on-loading data or tasks between a processor and a coprocessor include a computing device having a processor and a sensor hub that includes a coprocessor. The coprocessor receives sensor data associated with one or more sensors and detects events associated with the sensor data. The coprocessor determines frequency, resource usage cost, and power state transition cost for the events. In response to an offloaded task request from the processor, the coprocessor determines an aggregate load value based on the frequency, resource usage cost, and power state transition cost, and determines whether to accept the offloaded task request based on the aggregate load value. The aggregate load value may be determined as an exponential moving average. The coprocessor may determine whether to accept the offloaded task request based on a principal component analysis of the events. Other embodiments are described and claimed.
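The load bookkeeping in this abstract (an aggregate load maintained as an exponential moving average over per-event frequency, resource-usage cost, and power-state-transition cost, then compared against available headroom when an offload request arrives) can be sketched in a few lines of Python. The smoothing factor, the way the three metrics are combined, and the capacity threshold are assumptions for illustration, not values from the patent.

```python
class CoprocessorLoadTracker:
    """Illustrative sketch of the aggregate-load bookkeeping; names and
    weights are assumptions, not the claimed implementation."""

    def __init__(self, alpha: float = 0.2, capacity: float = 100.0) -> None:
        self.alpha = alpha          # smoothing factor for the exponential moving average
        self.capacity = capacity    # assumed headroom available for offloaded tasks
        self.aggregate_load = 0.0

    def record_event(self, frequency: float, resource_cost: float,
                     transition_cost: float) -> None:
        # Combine the per-event metrics into one load sample, then fold the
        # sample into the exponential moving average.
        sample = frequency * resource_cost + transition_cost
        self.aggregate_load = (self.alpha * sample
                               + (1.0 - self.alpha) * self.aggregate_load)

    def accept_offload(self, estimated_task_cost: float) -> bool:
        # Accept the offloaded task only if the smoothed load plus the new
        # task still fits within the assumed capacity.
        return self.aggregate_load + estimated_task_cost <= self.capacity


tracker = CoprocessorLoadTracker()
tracker.record_event(frequency=5.0, resource_cost=2.0, transition_cost=1.5)
print(tracker.accept_offload(estimated_task_cost=10.0))
```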
Abstract:
Various embodiments are generally directed to techniques for supporting the distributed execution of a task routine among multiple secure controllers incorporated into multiple computing devices. An apparatus includes a first processor component and a first secure controller of a first computing device, where the first secure controller includes: a selection component to select the first secure controller or a second secure controller of a second computing device to compile a task routine, based on a comparison of the resources required to compile the task routine with the resources available to the first secure controller; and a compiling component to compile the task routine into a first version of a compiled routine for execution within the first secure controller by the first processor component and a second version for execution within the second secure controller by a second processor component, in response to selection of the first secure controller. Other embodiments are described and claimed.
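A minimal sketch of the selection step described here, assuming the comparison is a simple check of required versus available memory and cycles; the data classes, field names, and the placeholder "compiler" below are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SecureController:
    name: str
    available_memory_kb: int
    available_cycles: int


@dataclass
class TaskRoutine:
    source: str
    required_memory_kb: int
    required_cycles: int


def can_compile(controller: SecureController, routine: TaskRoutine) -> bool:
    # Compare the resources the compilation needs against what the
    # controller currently has free.
    return (controller.available_memory_kb >= routine.required_memory_kb
            and controller.available_cycles >= routine.required_cycles)


def select_compiler(local: SecureController, remote: SecureController,
                    routine: TaskRoutine) -> SecureController:
    # Prefer compiling locally; fall back to the second device's controller
    # when the local controller lacks the resources.
    return local if can_compile(local, routine) else remote


def compile_routine(routine: TaskRoutine, targets: List[str]) -> Dict[str, bytes]:
    # Placeholder "compiler": emits one version per target controller, as the
    # abstract describes (one for each secure controller that will run it).
    return {t: f"compiled[{t}]::{routine.source}".encode() for t in targets}


local = SecureController("device-1", available_memory_kb=64, available_cycles=1_000)
remote = SecureController("device-2", available_memory_kb=512, available_cycles=50_000)
routine = TaskRoutine("task_routine()", required_memory_kb=128, required_cycles=5_000)

chosen = select_compiler(local, remote, routine)
versions = compile_routine(routine, targets=[local.name, remote.name])
print(chosen.name, sorted(versions))
```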
Abstract:
A point-of-sale device ("POS") is described to include a secure transaction tunnel generator ("STG"). The STG may generate secure tunnels between peripherals attached to the POS and remote network resources. The secure tunnel may be generated using a trusted execution environment ("TEE") of the POS. The STG may be alerted to the need to generate the secure tunnel by an alert from the peripheral. The STG may execute under a protected environment and may generate two ends of a secure transaction tunnel using the TEE. The STG may also check the peripheral against whitelists and/or blacklists to determine whether the peripheral is allowed, and not disallowed, to participate in secure transactions. By generating the secure tunnel, the STG may facilitate performance of transactions in such a way that sensitive information is not available to unsecured processes in the POS. Other embodiments may be described and/or claimed.
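The whitelist/blacklist gate and the two tunnel ends can be illustrated with the short Python sketch below. The peripheral identifiers, the PermissionError, and the tee:// and tls:// endpoint strings are invented placeholders; a real STG would perform these steps inside the TEE with attested keys rather than plain strings.

```python
ALLOWED_PERIPHERALS = {"pin-pad-01", "card-reader-07"}    # assumed whitelist
DISALLOWED_PERIPHERALS = {"legacy-reader-99"}             # assumed blacklist


def peripheral_may_transact(peripheral_id: str) -> bool:
    # A peripheral participates only if it is whitelisted and not blacklisted,
    # mirroring the whitelist/blacklist check described in the abstract.
    return (peripheral_id in ALLOWED_PERIPHERALS
            and peripheral_id not in DISALLOWED_PERIPHERALS)


def open_secure_tunnel(peripheral_id: str, remote_host: str) -> dict:
    # Placeholder for the two tunnel ends the STG would create inside the TEE.
    if not peripheral_may_transact(peripheral_id):
        raise PermissionError(f"{peripheral_id} is not cleared for secure transactions")
    return {"local_end": f"tee://{peripheral_id}", "remote_end": f"tls://{remote_host}"}


print(open_secure_tunnel("pin-pad-01", "payments.example.com"))
```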
Abstract:
Methods, apparatuses and storage medium associated with migration between processors by a computing device are disclosed. In various embodiments, a portable electronic device having an internal processor and internal memory may be attached to a dock. The dock may include another processor as well as other memory. The attachment of the dock to the portable electronic device may cause an interrupt. In response to this interrupt, a state associated with the internal processor may be copied to the other memory of the dock. Instructions for the computing device may then be executed using the other processor of the dock. Other embodiments may be disclosed or claimed.
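A toy model of the dock-attach migration follows: the interrupt handler copies the internal processor's state into the dock's memory and resumes execution on the dock's processor. The Processor and ProcessorState classes and their register and program-counter fields are illustrative assumptions only, not structures from the disclosure.

```python
from dataclasses import dataclass, field


@dataclass
class ProcessorState:
    registers: dict = field(default_factory=dict)
    program_counter: int = 0


@dataclass
class Processor:
    name: str
    state: ProcessorState = field(default_factory=ProcessorState)

    def run(self) -> str:
        return f"{self.name} resumes at pc={hex(self.state.program_counter)}"


def on_dock_attached(device_cpu: Processor, dock_cpu: Processor) -> str:
    # Interrupt handler for the dock-attach event: copy the internal
    # processor's state into the dock's memory, then continue execution
    # on the dock's processor.
    dock_cpu.state = ProcessorState(dict(device_cpu.state.registers),
                                    device_cpu.state.program_counter)
    return dock_cpu.run()


device_cpu = Processor("internal", ProcessorState({"r0": 42}, program_counter=0x1000))
dock_cpu = Processor("dock")
print(on_dock_attached(device_cpu, dock_cpu))
```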
Abstract:
Apparatuses, methods and storage media associated with generating a custom class library are disclosed herein. In embodiments, an apparatus may include an analyzer configured to receive a workload for a device and a class library used by the workload, analyze the workload and class library, identify one or more workload full API call chains, and generate information about the one or more workload full API call chains. Further, the apparatus may include a generator to generate from the class library, a custom class library for the workload that is smaller than the class library, based at least in part on the one or more workload full API call chains. Other embodiments may be disclosed or claimed.
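The analyzer/generator pair can be approximated with two small functions, assuming the "full API call chains" amount to a reachability walk over the workload's call graph and the custom class library is the subset of the original library that the walk reaches. The call graph, entry points, and API names below are made up for the example.

```python
from typing import Dict, List, Set


def full_call_chains(workload_calls: Dict[str, List[str]],
                     entry_points: List[str]) -> Set[str]:
    """Walk the workload's call graph from its entry points and collect every
    API reached, i.e. the full API call chains the analyzer reports."""
    reached: Set[str] = set()
    stack = list(entry_points)
    while stack:
        api = stack.pop()
        if api in reached:
            continue
        reached.add(api)
        stack.extend(workload_calls.get(api, []))
    return reached


def generate_custom_library(class_library: Dict[str, str],
                            used_apis: Set[str]) -> Dict[str, str]:
    # Keep only the classes/methods the workload can actually reach, yielding
    # a library smaller than the original.
    return {api: body for api, body in class_library.items() if api in used_apis}


# Hypothetical call graph: main() calls List.add, which calls Arrays.copyOf.
workload_calls = {"main": ["List.add"], "List.add": ["Arrays.copyOf"]}
class_library = {"List.add": "...", "Arrays.copyOf": "...", "Map.put": "..."}

used = full_call_chains(workload_calls, entry_points=["main"])
custom = generate_custom_library(class_library, used)
print(sorted(custom))   # Map.put is dropped from the custom library
```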