Abstract:
A data processing apparatus has control circuitry for detecting whether a current micro-operation to be processed by a processing pipeline would give the same result as an earlier micro-operation. If so, then the current micro-operation is passed through the processing pipeline, with at least one pipeline stage through which the current micro-operation passes being placed in a power saving state during the processing cycle in which the current micro-operation is at that stage. The result of the earlier micro-operation is then output as the result of said current micro-operation. This allows power consumption to be reduced by not repeating the same computation.
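The mechanism above can be sketched as a simple memoising wrapper around a pipeline's execute stage; all names here are illustrative assumptions, not terms from the abstract.

```python
# Hypothetical sketch: reusing the result of an earlier micro-operation
# instead of re-executing it, so the execute stage can be power-gated.

def execute(op, a, b):
    """Stand-in for the pipeline's functional unit."""
    if op == "add":
        return a + b
    if op == "mul":
        return a * b
    raise ValueError(op)

class MemoisingPipeline:
    def __init__(self):
        self._last = None         # operands of the earlier micro-operation
        self._last_result = None
        self.stages_powered = 0   # cycles in which the execute stage was active

    def issue(self, op, a, b):
        if (op, a, b) == self._last:
            # Same inputs as the earlier micro-operation: the execute
            # stage stays in a power-saving state and the stored result
            # is output instead.
            return self._last_result
        self.stages_powered += 1
        self._last = (op, a, b)
        self._last_result = execute(op, a, b)
        return self._last_result

pipe = MemoisingPipeline()
r1 = pipe.issue("add", 2, 3)   # computed by the execute stage
r2 = pipe.issue("add", 2, 3)   # detected as a repeat; result reused
```

In hardware the detection would be done by comparator circuitry on operand registers; the Python cache merely models the observable behaviour.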
Abstract:
An overdrive engine 50 generates output frames 52 to be used to drive a display 53 from input frames 51 to be displayed. Each output frame 52 is generated on a region-by-region basis from the corresponding regions 57 of the input frames 51. If it is determined that an input frame region 57 has changed significantly since the previous version(s) 56 of the input frame, an overdriven version of the input frame region 57 is generated for use as the corresponding region 58 in the output frame 52. On the other hand, if it is determined that the input frame region 57 has not changed since the previous version of the input frame, then the new input frame region is used, without any overdrive process being performed on it, as the corresponding region 58 in the output frame 52.
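The region-by-region decision can be illustrated with a toy model in which each region is a single value; the overdrive gain and change threshold are assumptions for illustration only.

```python
# Illustrative sketch of region-by-region overdrive generation.

def overdrive(new, old, gain=0.5):
    # Push the value past its target to speed up the panel transition
    # (the exact overdrive function is an assumption).
    return new + gain * (new - old)

def generate_output_frame(input_frame, previous_frame, threshold=0):
    output = []
    for new_region, old_region in zip(input_frame, previous_frame):
        if abs(new_region - old_region) > threshold:
            # Region changed significantly: emit an overdriven version.
            output.append(overdrive(new_region, old_region))
        else:
            # Region unchanged: reuse the input region as-is,
            # skipping the overdrive process entirely.
            output.append(new_region)
    return output

prev = [10, 20, 30]
curr = [10, 40, 30]
out = generate_output_frame(curr, prev)   # only the middle region is overdriven
```

Skipping unchanged regions is what saves work: only regions that actually moved pay the cost of the overdrive computation.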
Abstract:
There is provided a data processing apparatus for performing machine learning. The data processing apparatus includes convolution circuitry for convolving a plurality of neighbouring regions of input data using a kernel to produce convolution outputs. Max-pooling circuitry determines and selects the largest of the convolution outputs as a pooled output, and prediction circuitry performs a size prediction of the convolution outputs based on the neighbouring regions, wherein the size prediction is performed prior to the max-pooling circuitry determining the largest of the convolution outputs and adjusts a behaviour of the convolution circuitry based on the size prediction.
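One way to read this is that a cheap predictor guesses which region will win the max-pool, so the convolution circuitry need only evaluate the predicted winner in full. The sketch below assumes a non-negative kernel, under which the region with the largest element-sum is a reasonable proxy; both the predictor and the names are illustrative assumptions.

```python
# Hedged sketch: size prediction before max-pooling.

def convolve(region, kernel):
    return sum(r * k for r, k in zip(region, kernel))

def predicted_max_pool(regions, kernel):
    # Size prediction: with a non-negative kernel, the region with the
    # largest element-sum is a cheap proxy for the largest output.
    best = max(regions, key=sum)
    # Only the predicted winner is convolved at full cost.
    return convolve(best, kernel)

kernel = [1, 2, 1]
regions = [[1, 1, 1], [3, 4, 3], [0, 2, 0]]
pooled = predicted_max_pool(regions, kernel)
```

For this input the prediction agrees with the exhaustive max-pool; in general such a heuristic would need a fall-back path when the prediction cannot be guaranteed.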
Abstract:
A system-on-chip comprises processing circuitry to process input data to generate output data, and power management circuitry to control power management policy for at least a portion of the system-on-chip. The power management circuitry controls the power management policy depending on metadata indicative of a property of the input data to be processed by the processing circuitry.
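A minimal sketch of metadata-driven policy selection, assuming the metadata carries a "complexity" hint about the input data; the policy table and field names are illustrative, not from the abstract.

```python
# Illustrative sketch: choosing a power management policy from metadata
# describing a property of the input data, before processing begins.

POLICIES = {
    "low":  {"voltage": 0.7, "frequency_mhz": 400},   # easy input: run slow and cool
    "high": {"voltage": 1.0, "frequency_mhz": 1200},  # hard input: full performance
}

def select_policy(metadata):
    # Conservative default when the metadata gives no hint.
    complexity = metadata.get("complexity", "high")
    return POLICIES[complexity]

policy = select_policy({"complexity": "low"})
```

The point of using metadata is that the policy can be set ahead of time, rather than reactively after the workload's demands have already been observed.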
Abstract:
Circuitry comprises processing circuitry to access a hierarchy of at least two levels of cache memory storage; memory circuitry comprising plural storage elements, at least some of the storage elements being selectively operable as cache memory storage in respective different cache functions; and control circuitry to allocate storage elements of the memory circuitry for operation according to a given cache function.
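The allocation idea can be modelled as a pool of storage elements that control logic assigns to named cache functions on demand; sizes and function names below are assumptions for illustration.

```python
# Illustrative sketch: control circuitry allocating shared storage
# elements between different cache functions.

class SharedCacheMemory:
    def __init__(self, num_elements):
        # None marks a storage element as unallocated.
        self.allocation = {i: None for i in range(num_elements)}

    def allocate(self, function, count):
        """Assign `count` free storage elements to a cache function."""
        free = [i for i, f in self.allocation.items() if f is None]
        if len(free) < count:
            raise ValueError("not enough free storage elements")
        for i in free[:count]:
            self.allocation[i] = function
        return free[:count]

    def elements_for(self, function):
        return [i for i, f in self.allocation.items() if f == function]

mem = SharedCacheMemory(8)
mem.allocate("L1-data", 4)      # hypothetical cache function names
mem.allocate("L2-victim", 2)
```

Because the same elements can later be freed and reassigned, the split between cache functions can track the workload instead of being fixed at design time.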
Abstract:
A method for optimizing machine learning processing is provided. The method comprises retrieving neural network architecture information for a neural network, the neural network architecture information comprising layer information and kernel information for the neural network. The network architecture information is analyzed to identify convolutional layers in the neural network which have associated strided layers. A first kernel for a convolutional layer identified as having an associated strided layer, and a second kernel for the strided layer associated with the convolutional layer, are retrieved. A composite kernel is then generated, based on the first and second kernels, that performs the functions of both. Finally, the composite kernel is stored for further use by a neural network.
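A simple instance of this fusion: a convolution followed by a stride-2 subsampling layer can be replaced by a single strided convolution that produces the same outputs in one pass. The 1-D, no-padding toy below is an illustrative assumption, not the patented method itself.

```python
# Hedged sketch: fusing a convolutional layer with an associated strided
# (subsampling) layer into one composite strided convolution.

def conv1d(x, k, stride=1):
    n = len(x) - len(k) + 1
    return [sum(x[i + j] * k[j] for j in range(len(k)))
            for i in range(0, n, stride)]

def subsample(y, stride):
    return y[::stride]

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
k = [1.0, -1.0]

# Separate layers: convolve everything, then throw half of it away.
separate = subsample(conv1d(x, k), 2)
# Composite: a strided convolution performs both functions in one pass,
# never computing the outputs the stride would discard.
composite = conv1d(x, k, stride=2)
```

The composite form wins because it skips the convolution outputs that the strided layer would have discarded anyway.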
Abstract:
A human-machine interface system comprises a sensor configured to generate data associated with a human movement, such as measured electrical signals or data from an accelerometer. A measurement unit of the human-machine interface measures user movement over time to generate a sequence of measured user movement data. A processor processes the data associated with a human movement from the sensor using a trained neural network to determine one or more predicted user actions. A comparison unit compares the one or more predicted user actions with one or more user actions obtained from the sequence of measured user movement data. A control unit uses the predicted user actions to control a process in an information processing apparatus in dependence upon the comparison performed by the comparison unit.
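The comparison-gated control step can be sketched as follows; the matching rule (exact membership) and the action names are assumptions for illustration only.

```python
# Illustrative sketch: a predicted user action drives the controlled
# process only when it is corroborated by actions recovered from the
# measured movement data.

def control(predicted_actions, measured_actions, apply_action):
    for action in predicted_actions:
        if action in measured_actions:
            # Prediction agrees with measurement: use it for control.
            apply_action(action)
        # Otherwise the prediction is discarded rather than acted on.

applied = []
control(predicted_actions=["grip", "release"],
        measured_actions={"grip", "point"},
        apply_action=applied.append)
```

Gating on agreement means a mispredicted action from the neural network cannot trigger the process on its own.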
Abstract:
An AR system includes one or more image sensors arranged to capture image data representing a scene located within a field of view of the one or more image sensors, a display arranged to enable a user of the AR system to observe a representation or view of the scene, and an augmentation engine. The augmentation engine is arranged to process the captured image data to determine one or more visual characteristics for the captured image data and to determine, in dependence on the determined one or more visual characteristics, one or more properties for an image element to be presented on the display. The augmentation engine is arranged to present the image element, with the determined one or more properties, on the display to overlay the representation or view of the scene.
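As a concrete (and assumed) instance of this flow: if the visual characteristic is the mean brightness of the captured scene, the derived property might be the overlay element's colour, chosen for contrast.

```python
# Minimal sketch, assuming brightness is the visual characteristic and
# overlay colour is the derived property; both choices are illustrative.

def mean_brightness(pixels):
    return sum(pixels) / len(pixels)

def element_properties(pixels):
    # Dark scene: present a light overlay; bright scene: a dark one.
    if mean_brightness(pixels) < 128:
        return {"colour": "white"}
    return {"colour": "black"}

props = element_properties([10, 20, 30, 40])   # a dark captured scene
```

Deriving the property from the captured data keeps the overlay legible as the scene behind it changes.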
Abstract:
An extended-reality system is described which determines extended-reality data to be obtained from the remote network-connected storage based on a location of the extended-reality system. The extended-reality system determines a communication method by which to obtain the extended-reality data, wherein the extended-reality data may be obtained by one or more requests to the remote network-connected storage or by one or more requests to a local device outside of the extended-reality system. In dependence upon the determination by the extended-reality system, a request is sent to at least one of: the local device, via a peer-to-peer network; and the remote network-connected storage.
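The route decision can be sketched as preferring a nearby peer when it already holds the requested data, falling back to remote storage otherwise; the availability check and asset names are assumptions for illustration.

```python
# Hedged sketch of the communication-method decision.

def choose_route(asset_id, peer_assets):
    """Pick where to send the request for an extended-reality asset."""
    if asset_id in peer_assets:
        # A local device on the peer-to-peer network already has it.
        return ("peer-to-peer", asset_id)
    # Otherwise fetch from the remote network-connected storage.
    return ("remote-storage", asset_id)

route = choose_route("museum-overlay-7", {"museum-overlay-7", "map-3"})
```

Favouring a peer when possible trades wide-area bandwidth and latency for a short local hop.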
Abstract:
A data processing system includes storage. The data processing system also includes at least one processor to generate output data using at least a portion of a first neural network layer and generate a key associated with at least the portion of the first neural network layer. The at least one processor is further operable to obtain the key from the storage and obtain a version of the output data for input into a second neural network layer. Using the key, the at least one processor is further operable to determine whether the version of the output data differs from the output data.
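One natural realisation of such a key is a cryptographic digest of the output data: if the stored key no longer matches a digest of the retrieved version, the data changed between the two layers. The use of SHA-256 here is an assumption for illustration, not a detail from the abstract.

```python
# Illustrative sketch: detecting whether inter-layer output data has
# changed, using a stored digest as the key.

import hashlib

def make_key(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

storage = {}

# The first layer produces output data and a key over it.
output = b"layer-1 activations"
storage["key"] = make_key(output)

# Before the second layer consumes the retrieved version, the key is
# fetched from storage and compared against a fresh digest.
retrieved = b"layer-1 activations"
unchanged = make_key(retrieved) == storage["key"]
tampered = make_key(b"altered activations") == storage["key"]
```

Only the short key needs trusted storage; the (much larger) activation data can live in ordinary memory and still be checked for modification or corruption.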