Abstract:
According to implementations of the subject matter described herein, there is provided a solution for secure execution of a machine learning network. An operation of a first network layer of the machine learning network is executed in an uTEE of a computing device based on an input of the first network layer and a first set of modified parameter values, to obtain a first error intermediate output. The modified parameter values are determined by modifying at least one subset of parameter values of the first network layer with first secret data. A first corrected intermediate output is determined in a TEE of the computing device by modifying the first error intermediate output based at least on the input and the first secret data. A network output is determined based on the first corrected intermediate output. In this way, the confidentiality of the machine learning network can be protected.
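The correction step can be illustrated with a linear layer: if the uTEE computes the layer on parameters perturbed by the secret data, the TEE only needs the input and the secret data to cancel the perturbation. Below is a minimal NumPy sketch of that idea, assuming a single fully connected layer; the uTEE/TEE split is simulated with plain functions, and names such as run_in_utee and correct_in_tee are illustrative rather than taken from the described implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def obfuscate_weights(weights, secret):
    # Modify (a subset of) the parameter values with the secret data;
    # here the whole weight matrix is perturbed for simplicity.
    return weights + secret

def run_in_utee(x, obfuscated_weights):
    # The heavy computation runs outside the TEE on modified parameters,
    # producing an "error" intermediate output.
    return x @ obfuscated_weights

def correct_in_tee(error_output, x, secret):
    # Inside the TEE, a lightweight correction removes the secret's
    # contribution and recovers the true intermediate output.
    return error_output - x @ secret

weights = rng.standard_normal((8, 4))   # true layer parameters (to be protected)
secret = rng.standard_normal((8, 4))    # first secret data, known only to the TEE
x = rng.standard_normal((2, 8))         # input of the first network layer

error_out = run_in_utee(x, obfuscate_weights(weights, secret))
corrected = correct_in_tee(error_out, x, secret)
assert np.allclose(corrected, x @ weights)   # matches the unobfuscated layer output
```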
Abstract:
Various systems and methods for relaying a remote request are described herein. In one example, a method includes receiving a request at a public website to access a private router. The method can also include authenticating the request via an authentication service. Furthermore, the method can include providing access to the private router via the public website upon authentication.
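A minimal sketch of the relaying flow is shown below, with in-process stand-ins for the public website, the authentication service, and the private router; the class names and the token check are illustrative assumptions, not details from the description.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    token: str
    router_command: str

class AuthenticationService:
    def __init__(self, valid_tokens):
        self._valid_tokens = valid_tokens

    def authenticate(self, request):
        # The public website defers the credential check to the auth service.
        return self._valid_tokens.get(request.user) == request.token

class PrivateRouter:
    def handle(self, command):
        # Stand-in for the private router's management interface.
        return f"router executed: {command}"

class PublicWebsite:
    def __init__(self, auth_service, private_router):
        self._auth = auth_service
        self._router = private_router

    def relay(self, request):
        # Access to the private router is provided only after authentication.
        if not self._auth.authenticate(request):
            return "403 Forbidden"
        return self._router.handle(request.router_command)

site = PublicWebsite(AuthenticationService({"alice": "s3cret"}), PrivateRouter())
print(site.relay(Request("alice", "s3cret", "show status")))   # access granted
print(site.relay(Request("alice", "wrong", "show status")))    # rejected
```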
Abstract:
Implementations of the present disclosure provide a solution for object detection. In this solution, object distribution information and performance metrics are obtained. The object distribution information indicates a size distribution of detected objects in a set of historical images captured by a camera. The performance metrics indicate respective performance levels of a set of predetermined object detection models. At least one detection plan is further generated based on the object distribution information and the performance metrics. The at least one detection plan indicates which of the set of predetermined object detection models is to be applied to each of at least one sub-image in a target image to be captured by the camera. Additionally, the at least one detection plan is provided for object detection on the target image. In this way, a better balance between detection latency and detection accuracy may be achieved.
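The plan-generation step can be sketched as a simple selection problem: for each sub-image, pick the cheapest model whose expected accuracy over that sub-image's historical size distribution clears a threshold. The Python sketch below assumes object sizes are bucketed into small/medium/large and that each predetermined model reports per-bucket accuracy and a latency figure; all names, numbers, and the greedy selection rule are illustrative assumptions.

```python
# Per-sub-image size distribution from historical images (fractions per bucket).
object_distribution = {
    "top_left": {"small": 0.7, "medium": 0.2, "large": 0.1},
    "bottom":   {"small": 0.1, "medium": 0.3, "large": 0.6},
}

# Performance metrics of the predetermined models: per-bucket accuracy plus latency.
model_metrics = {
    "tiny_det":  {"accuracy": {"small": 0.55, "medium": 0.70, "large": 0.85}, "latency_ms": 5},
    "large_det": {"accuracy": {"small": 0.85, "medium": 0.90, "large": 0.92}, "latency_ms": 40},
}

def generate_detection_plan(distribution, metrics, min_accuracy=0.75):
    """Pick, for each sub-image, the lowest-latency model whose expected
    accuracy over that sub-image's size distribution meets the threshold."""
    plan = {}
    for region, buckets in distribution.items():
        candidates = []
        for name, m in metrics.items():
            expected = sum(frac * m["accuracy"][size] for size, frac in buckets.items())
            if expected >= min_accuracy:
                candidates.append((m["latency_ms"], name))
        if candidates:
            plan[region] = min(candidates)[1]
        else:
            # Fall back to the most accurate model when none meets the threshold.
            plan[region] = max(
                metrics,
                key=lambda n: sum(f * metrics[n]["accuracy"][s] for s, f in buckets.items()),
            )
    return plan

print(generate_detection_plan(object_distribution, model_metrics))
# {'top_left': 'large_det', 'bottom': 'tiny_det'}
```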
Abstract:
Various implementations of the subject matter described herein relate to a sparse convolutional neural network. In some implementations, a computer-implemented method comprises: quantizing an input feature map to obtain a quantized input feature map; determining, based on the quantized input feature map, a sparsity mask for an output feature map through a quantized version of a convolutional neural network, the sparsity mask indicating positions of non-zero entries in the output feature map; and determining, based on the input feature map, the non-zero entries indicated by the sparsity mask in the output feature map through the convolutional neural network.
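The two-pass structure can be sketched with a 1x1 convolution followed by ReLU, where predicting the sparsity mask reduces to a per-position matrix product. The NumPy sketch below uses simple symmetric 8-bit quantization as a stand-in for the quantized network; the quantization scheme and layer shape are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, C_in, C_out = 4, 4, 8, 6
x = rng.standard_normal((H, W, C_in)).astype(np.float32)        # input feature map
weight = rng.standard_normal((C_in, C_out)).astype(np.float32)  # 1x1 conv kernel

def quantize(t, bits=8):
    # Symmetric linear quantization: signed integers plus a scale factor.
    scale = float(np.abs(t).max()) / (2 ** (bits - 1) - 1)
    return np.round(t / scale).astype(np.int32), scale

# Pass 1: predict the sparsity mask with the quantized version of the layer.
xq, sx = quantize(x)
wq, sw = quantize(weight)
approx = (xq.reshape(-1, C_in) @ wq) * (sx * sw)
mask = (approx > 0).reshape(H, W, C_out)     # non-zero positions after ReLU

# Pass 2: compute full-precision values only where the mask is set.
out = np.zeros((H, W, C_out), dtype=np.float32)
for i, j, k in zip(*np.nonzero(mask)):
    out[i, j, k] = max(0.0, float(x[i, j] @ weight[:, k]))

dense = np.maximum(x.reshape(-1, C_in) @ weight, 0).reshape(H, W, C_out)
print("mask agreement with dense ReLU output:", np.mean((dense > 0) == mask))
```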
Abstract:
In accordance with implementations of the subject matter described herein, there is provided a solution for execution of a deep learning model. In the solution, a convolution of a convolutional layer of the deep learning model is executed as partitioned convolutions, which are performed sequentially in a trusted execution environment (TEE) of a computing device based on an input and a set of parameter values of the convolutional layer. The execution of a given one of the partitioned convolutions comprises: storing, into a protected memory area in the TEE, an input portion of the input to be processed by a subset of the parameter values for the given partitioned convolution; determining a result of the given partitioned convolution through a single matrix multiplication operation; and removing the input portion. A result of the convolution is determined by combining the results of the partitioned convolutions. Therefore, the solution can accelerate model execution and improve storage efficiency in a highly secure TEE with limited memory resources.
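One way to picture the partitioning is to split the kernels along the input-channel axis and compute each partition with a single im2col-plus-matmul pass, freeing the copied input portion before the next partition. The NumPy sketch below follows that reading; the channel-wise split, the im2col helper, and the use of a temporary buffer to stand in for the protected memory area are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, C_in, C_out, K, G = 6, 6, 8, 4, 3, 4   # G = number of partitions

x = rng.standard_normal((H, W, C_in)).astype(np.float32)
kernels = rng.standard_normal((K, K, C_in, C_out)).astype(np.float32)

def im2col(src, K):
    # Unfold KxK patches into rows so the convolution becomes one matmul.
    h, w, c = src.shape
    out_h, out_w = h - K + 1, w - K + 1
    cols = np.empty((out_h * out_w, K * K * c), dtype=src.dtype)
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[idx] = src[i:i + K, j:j + K, :].ravel()
            idx += 1
    return cols

group = C_in // G
result = np.zeros(((H - K + 1) * (W - K + 1), C_out), dtype=np.float32)
for g in range(G):
    # "Store" only the input portion needed by this partition's parameter subset.
    protected_input = x[:, :, g * group:(g + 1) * group].copy()
    w_g = kernels[:, :, g * group:(g + 1) * group, :].reshape(-1, C_out)
    # One matrix multiplication per partitioned convolution; combine by summing.
    result += im2col(protected_input, K) @ w_g
    del protected_input          # free the input portion before the next partition

result = result.reshape(H - K + 1, W - K + 1, C_out)
print(result.shape)   # (4, 4, 4)
```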
Abstract:
A computing device may dynamically adjust a pixel density based at least in part on a viewing distance between a user and a display of the computing device. In some examples, the viewing distance may be determined using low power acoustic (e.g., ultrasonic) sensing. A pixel density at which to display content may be determined using algorithms based on the viewing distance and a visual acuity of the user. Content to be displayed on the computing device may be sent to processors of the computing device for graphics processing. In some examples, the content may be intercepted, such as by using a hooking process, before processing and scaled based on the determined pixel density. Scaling down the pixel density of the content may require fewer system resources to process the content, which may result in less power consumption by the processors performing the graphics processing operations.
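The density calculation can be sketched with the standard visual-acuity relation: the required pixel density is the one whose pixel pitch subtends the smallest angle the user can resolve at the measured distance. The sketch below assumes a one-arcminute acuity default and represents the interception/hooking step as a plain scaling function; names and constants are illustrative.

```python
import math

def required_ppi(viewing_distance_in, acuity_arcmin=1.0):
    # Smallest angle the user can resolve, in radians.
    theta = math.radians(acuity_arcmin / 60.0)
    # Pixel pitch (inches) that subtends exactly that angle at the given distance.
    pixel_pitch_in = 2 * viewing_distance_in * math.tan(theta / 2)
    return 1.0 / pixel_pitch_in

def scale_content(width, height, native_ppi, viewing_distance_in):
    # Never scale above the panel's native density; only scale down when the
    # viewer is far enough that extra pixels would not be perceived.
    factor = min(1.0, required_ppi(viewing_distance_in) / native_ppi)
    return int(width * factor), int(height * factor), factor

# Example: a 300 PPI panel viewed at 12 in vs. 30 in (e.g., from ultrasonic sensing).
print(scale_content(1920, 1080, 300, 12))   # near viewer: little or no downscaling
print(scale_content(1920, 1080, 300, 30))   # far viewer: content rendered at lower density
```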
Abstract:
Embodiments of the subject matter described herein relate to a wireless programmable media processing system. In the media processing system, a processing unit in a computing device generates a frame to be displayed based on graphics content for an application running on the computing device. The frame to be displayed is then divided into a plurality of block groups, which are compressed. The plurality of compressed block groups are sent to a graphics display device over a wireless link. In this manner, both the generation and the compression of the frame to be displayed may be completed at the same processing unit in the computing device, which avoids data copying and simplifies processing operations. Thereby, the data processing speed and efficiency are improved significantly.
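The divide-compress-send pipeline can be sketched as follows, with zlib standing in for the codec and a list standing in for the wireless link; the block-group size and function names are illustrative assumptions.

```python
import zlib
import numpy as np

def split_into_block_groups(frame, rows_per_group=16):
    # Divide the rendered frame into horizontal block groups.
    return [frame[r:r + rows_per_group] for r in range(0, frame.shape[0], rows_per_group)]

def compress_block_group(block_group):
    # Compress each block group independently so groups can be streamed as soon
    # as they are ready, on the same processing unit that generated the frame.
    return zlib.compress(block_group.tobytes(), level=1)

def send_over_wireless(compressed_groups, link):
    for payload in compressed_groups:
        link.append(payload)     # stand-in for a wireless transmit call

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # frame generated for the application
groups = split_into_block_groups(frame)
compressed = [compress_block_group(g) for g in groups]

link = []
send_over_wireless(compressed, link)
print(len(link), "block groups sent,", sum(len(p) for p in link), "bytes total")
```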
Abstract:
Various systems and methods for providing network services are described herein. In one example, a method includes receiving, via a first processor, a network packet from a source device. The method can also include sending, via the first processor, the network packet to a second processor if a service subsystem and a service are responsive. Furthermore, the method can include modifying, via the second processor, the network packet based on the service. The method can also include receiving, via the first processor, the modified network packet from the second processor. The method can further include sending, via the first processor, the modified network packet to a destination device.
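The packet path can be sketched with simple objects standing in for the two processors and the service subsystem; the responsiveness check and the header-tagging "service" are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    payload: bytes
    headers: dict = field(default_factory=dict)

class ServiceSubsystem:
    def __init__(self, responsive=True):
        self.responsive = responsive

    def apply_service(self, packet):
        # The service on the second processor modifies the packet, e.g. by
        # tagging it before it is forwarded on.
        packet.headers["serviced"] = True
        return packet

class SecondProcessor:
    def __init__(self, subsystem):
        self.subsystem = subsystem

    def handle(self, packet):
        return self.subsystem.apply_service(packet)

class FirstProcessor:
    def __init__(self, second, subsystem):
        self.second = second
        self.subsystem = subsystem

    def forward(self, packet, destination):
        # Hand the packet to the second processor only when the service
        # subsystem (and thus the service) is responsive; otherwise pass it
        # through unmodified, then send it on to the destination.
        if self.subsystem.responsive:
            packet = self.second.handle(packet)
        destination.append(packet)

subsystem = ServiceSubsystem(responsive=True)
first = FirstProcessor(SecondProcessor(subsystem), subsystem)
destination = []
first.forward(Packet(b"hello"), destination)
print(destination[0].headers)   # {'serviced': True}
```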