Abstract:
A data processing system having a master device and a plurality of slave devices uses interconnect circuitry to couple the master device with the plurality of slave devices to enable transactions to be performed by the slave devices upon request from the master device. The master device issues a multi-transaction request identifying multiple transactions to be performed, the multi-transaction request providing a base transaction identifier, a quantity indication indicating a number of transactions to be performed, and address information. Request distribution circuitry within the interconnect circuitry analyses the address information and the quantity indication in order to determine, for each of the multiple transactions, the slave device that is required to perform that transaction. Transaction requests are then issued from the request distribution circuitry to each determined slave device to identify which transactions need to be performed by each slave device. Each determined slave device provides a response to the master device to identify completion of each transaction performed by that determined slave device. Each determined slave device provides its responses independently of the responses from any other determined slave device, and each response includes a transaction identifier determined from the base transaction identifier and transaction specific information. This enables the master device to identify completion of each transaction identified within the multi-transaction request. In an alternative arrangement, the same multi-transaction request approach can be used by a master device to initiate cache maintenance operations within a plurality of cache storage devices. This approach can give rise to significant improvements in efficiency and power consumption within the data processing system.
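The distribution step described above can be sketched in Python as follows. This is an illustrative model only, not the patented implementation: the function names, the fixed address stride, and the address-to-slave mapping are all assumptions made for the example.

```python
# Illustrative sketch of multi-transaction request distribution.
# A single request carries a base identifier, a transaction count,
# and address information; each transaction is routed to a slave
# by address and tagged with an identifier derived from the base.

def distribute(base_id, count, base_addr, stride, slave_for_addr):
    """Map each of `count` transactions to a slave device by address,
    tagging each with base_id plus a transaction-specific index."""
    per_slave = {}
    for i in range(count):
        addr = base_addr + i * stride
        slave = slave_for_addr(addr)
        per_slave.setdefault(slave, []).append((base_id + i, addr))
    return per_slave

# Example: 4 transactions, 4KB apart, interleaved across two slaves
requests = distribute(base_id=0x100, count=4, base_addr=0x0000,
                      stride=0x1000,
                      slave_for_addr=lambda a: (a >> 12) & 1)
# Each slave then responds independently; the master matches each
# response back to a transaction via the derived identifier.
```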
Abstract:
A graphics processing pipeline (20) comprises first vertex shading circuitry (21) that operates to vertex shade the position attributes of the vertices of a set of vertices to be processed by the graphics processing pipeline. Tiling circuitry (22) then determines, for the vertices that have been subjected to the first vertex shading operation, whether those vertices should be processed further. Second vertex shading circuitry (23) then performs a second vertex shading operation on the vertices determined to require further processing, vertex shading the remaining vertex attributes of each such vertex.
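The two-phase flow above can be modelled in a few lines of Python. This is a hypothetical sketch of the control flow only (names and the visibility test are assumptions), not the hardware pipeline itself:

```python
def two_phase_vertex_shade(vertices, shade_position, visible, shade_varyings):
    # Phase 1: vertex shade only the position attributes
    positions = {v: shade_position(v) for v in vertices}
    # Tiling stage: keep only vertices that need further processing
    survivors = [v for v in vertices if visible(positions[v])]
    # Phase 2: shade the remaining attributes, but only for survivors
    return {v: (positions[v], shade_varyings(v)) for v in survivors}

# Example: cull vertices whose shaded position falls outside [0, 1]
shaded = two_phase_vertex_shade(
    vertices=["a", "b", "c"],
    shade_position=lambda v: {"a": 0.5, "b": 2.0, "c": 0.1}[v],
    visible=lambda x: 0.0 <= x <= 1.0,
    shade_varyings=lambda v: v.upper())
```

The benefit is that the (typically larger) set of non-position attributes is never shaded for vertices the tiler discards.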
Abstract:
A processing apparatus comprising: several processors for processing data; a hierarchical memory system comprising a memory accessible to all the processors and a cache corresponding to each of the processors, each cache being accessible to its corresponding processor and comprising storage locations and corresponding indicators; and cache coherency control circuitry for maintaining coherency of data stored in the hierarchical memory system. The processors are configured to respond to receipt of a predefined request to perform an operation on a data item by determining whether the cache corresponding to the processor receiving the request has a storage location allocated to the data item. If not, the processing apparatus is configured to: allocate a storage location within the cache to the data item; set the indicator corresponding to the storage location to indicate that the storage location is storing a delta value; and set the data in the allocated storage location to an initial value. The processor is then configured, in response to the predefined request, to perform the operation on the data within the storage location allocated to the data item.
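The allocate-mark-initialise-apply sequence can be sketched as follows. This is a minimal software model under stated assumptions (a dict-based cache, an initial value of zero, hypothetical names); the patent describes hardware, and the coherency machinery that later merges delta values is omitted:

```python
class DeltaCache:
    """Model of a per-processor cache in which a line can hold a
    delta value rather than the data item's full value."""

    def __init__(self, initial=0):
        self.lines = {}      # address -> stored value
        self.is_delta = {}   # address -> delta indicator
        self.initial = initial

    def apply(self, addr, op, operand):
        if addr not in self.lines:
            # No location allocated: allocate one, mark it as holding
            # a delta, and set it to the initial value before the op.
            self.lines[addr] = self.initial
            self.is_delta[addr] = True
        self.lines[addr] = op(self.lines[addr], operand)
        return self.lines[addr]

cache = DeltaCache()
cache.apply(0x40, lambda a, b: a + b, 3)   # allocates, stores delta 3
cache.apply(0x40, lambda a, b: a + b, 4)   # accumulates delta 7 locally
```

The point of the delta indicator is that updates accumulate locally without first fetching the data item's current value from elsewhere in the hierarchy.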
Abstract:
There is provided a data processing apparatus for performing machine learning. The data processing apparatus includes convolution circuitry for convolving a plurality of neighbouring regions of input data using a kernel to produce convolution outputs. Max-pooling circuitry determines and selects the largest of the convolution outputs as a pooled output, and prediction circuitry performs a size prediction of the convolution outputs based on the neighbouring regions. The size prediction is performed before the max-pooling circuitry determines the largest of the convolution outputs, and is used to adjust a behaviour of the convolution circuitry.
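One way the prediction could steer the convolution is sketched below. This is an assumption-laden illustration, not the patented circuitry: the predictor, the 1D "convolution", and the policy of fully convolving only the predicted winner are all choices made for the example.

```python
def predicted_max_pool(regions, kernel, predict, convolve):
    # Predict the relative size of each convolution output directly
    # from the input regions, before any convolution is performed.
    scores = [predict(region, kernel) for region in regions]
    winner = scores.index(max(scores))
    # Adjusted behaviour: fully convolve only the predicted winner,
    # skipping the convolutions that pooling would discard anyway.
    return convolve(regions[winner], kernel)

# Example with 1D regions and a dot-product "convolution"; for
# clarity the predictor here is exact rather than approximate.
dot = lambda r, k: sum(a * b for a, b in zip(r, k))
out = predicted_max_pool(
    regions=[[1, 1], [3, 2], [0, 2]],
    kernel=[1, 1],
    predict=dot,
    convolve=dot)
```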
Abstract:
Example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to enhance a rendered image. In an implementation, a process to enhance a portion of a rendered image may be affected based, at least in part, on a shading rate applied in rendering the portion of the rendered image.
Abstract:
There is provided a display apparatus to focus light for a user. The apparatus comprises a tuneable lens having controllable optical properties, an eye-tracker device to determine a position at which the user is looking, and circuitry to control the optical properties of the tuneable lens to bring an object at the depth of the position into focus for the user. A method of focusing light is also provided. The method comprises determining a position at which the user is looking and controlling optical properties of a tuneable lens to bring an object at the depth of the position into focus for the user.
Abstract:
The present disclosure relates to a method of operating a graphics processing system for providing frames over a communication channel in a communication network, the graphics processing system being configured to process data for an application executed thereon to render frames for the application to be output for transmission over the communication channel to a client device, the method comprising: determining network characteristics of the communication network and/or server characteristics of the server; adaptively selecting a first prediction method from a plurality of prediction methods to be used for displaying frames based on the determined network characteristics and/or server characteristics; generating a plurality of frames based on the first prediction method; and selectively providing, based on the first prediction method, one or more output frames from the plurality of frames to the application to be output for transmission over the communication channel.
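The adaptive selection step might look like the following. The policy, thresholds, and method names here are entirely hypothetical, chosen only to illustrate selecting a prediction method from measured network and server characteristics:

```python
def select_prediction_method(rtt_ms, server_load):
    """Hypothetical policy: pick a frame-prediction method based on
    measured network round-trip time and server load (0.0-1.0)."""
    if server_load > 0.9:
        # Server saturated: cheapest option, reuse the last frame
        return "reuse_previous_frame"
    if rtt_ms > 50:
        # Slow channel: predict further ahead to hide latency
        return "extrapolate_two_frames"
    return "extrapolate_one_frame"
```

In practice such a policy would be re-evaluated as conditions change, so the selected method tracks the network and server over time.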
Abstract:
There is provided a method and apparatus to generate audio data for a user, the apparatus comprising: an input device to receive one or more inputs derived from an environment in which the user is located; and a processor configured to obtain an acoustic profile for the environment based on or in response to the one or more inputs, synthesize audio data having audio characteristics corresponding to a sound source in the environment in accordance with the acoustic profile, and output the synthesized audio data for use by the user.
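One common way to represent an acoustic profile is as an impulse response of the environment, applied to a dry source signal by convolution. The abstract does not say this is the representation used; the sketch below is an assumed model for illustration only:

```python
def apply_acoustic_profile(dry_signal, impulse_response):
    """Convolve a dry source signal with an environment impulse
    response so the output carries the room's acoustic character."""
    n = len(dry_signal) + len(impulse_response) - 1
    wet = [0.0] * n
    for i, s in enumerate(dry_signal):
        for j, h in enumerate(impulse_response):
            wet[i + j] += s * h
    return wet

# A direct path plus one quieter reflection two samples later
wet = apply_acoustic_profile([1.0, 0.0, 0.5], [1.0, 0.0, 0.25])
```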
Abstract:
A method of compressing kernels comprises detecting a plurality of replicated kernels, the replicated kernels being rotated versions of one another. The method also comprises generating a composite kernel from the replicated kernels, the composite kernel comprising kernel data and metadata indicative of the rotations applied to the composite kernel data. The method also comprises storing the composite kernel.
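The detection-and-compression idea can be sketched as follows, assuming square kernels and 90-degree rotations (an assumption for the example; the abstract does not fix the rotation granularity):

```python
def rotate90(k):
    """Rotate a square kernel (a list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*k[::-1])]

def compress_kernels(kernels):
    """Keep one composite kernel per rotation class; the metadata
    records (composite index, clockwise rotations) per original
    kernel, so each original can be reconstructed."""
    composites, metadata = [], []
    for kernel in kernels:
        match = None
        for idx, base in enumerate(composites):
            candidate = base
            for rotations in range(4):
                if candidate == kernel:
                    match = (idx, rotations)
                    break
                candidate = rotate90(candidate)
            if match:
                break
        if match is None:
            match = (len(composites), 0)
            composites.append(kernel)
        metadata.append(match)
    return composites, metadata

# Two kernels, the second a 90-degree rotation of the first:
# only one composite kernel is stored, plus per-kernel metadata.
composites, metadata = compress_kernels(
    [[[1, 2], [3, 4]], [[3, 1], [4, 2]]])
```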
Abstract:
A fault detection scheme for a data processor that comprises a programmable execution unit operable to execute programs to perform processing operations, in which, when executing a program, the execution unit executes the program for respective execution threads, each execution thread corresponding to a respective work item. In order to detect faults, a set of two or more identical execution threads is generated. When executed, the identical execution threads perform identical processing for the same work item, and the results of that processing can thus be compared to determine whether there is a fault associated with the data processor.
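The redundant-execution comparison can be modelled simply. This sketch (all names hypothetical) runs the same work item through duplicate copies of the same function and flags any mismatch; the real scheme runs the copies as execution threads on the data processor itself:

```python
def run_redundant(work_item, thread_fn, copies=2):
    """Execute `copies` identical threads of work for the same work
    item and compare their results; a mismatch signals a fault."""
    results = [thread_fn(work_item) for _ in range(copies)]
    fault_detected = any(r != results[0] for r in results[1:])
    return results[0], fault_detected

# Fault-free case: both copies agree on the result
result, fault = run_redundant(21, lambda x: x * 2)
```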