TRANSFERRING DATA BETWEEN MEMORY SYSTEM AND BUFFER OF A MASTER DEVICE

    Publication Number: US20170364461A1

    Publication Date: 2017-12-21

    Application Number: US15612072

    Application Date: 2017-06-02

    Applicant: ARM LIMITED

    Abstract: A master device has a buffer for storing data transferred from, or to be transferred to, a memory system. Control circuitry issues, from time to time, a group of one or more transactions requesting transfer of a block of data between the memory system and the buffer. A hardware or software mechanism can be provided to detect at least one memory load parameter indicating how heavily loaded the memory system is, and the group size of the block of data transferred per group can be varied based on the memory load parameter. By adapting the size of the block of data transferred per group to the memory system load, a better balance between energy efficiency and quality of service can be achieved.
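The adaptive group-sizing idea in this abstract can be sketched as a simple policy function. All names, thresholds, and the linear-interpolation policy below are illustrative assumptions, not details taken from the patent.

```python
def choose_group_size(memory_load: float,
                      min_size: int = 64,
                      max_size: int = 1024) -> int:
    """Return the block size (bytes) to transfer per transaction group.

    A lightly loaded memory system favours large, energy-efficient bursts;
    a heavily loaded one favours small groups that preserve quality of
    service for other masters. `memory_load` is assumed to be a 0.0..1.0
    utilisation estimate from the load-detection mechanism.
    """
    if not 0.0 <= memory_load <= 1.0:
        raise ValueError("memory_load must be in [0.0, 1.0]")
    # Linearly interpolate between max_size (idle) and min_size (saturated).
    size = max_size - (max_size - min_size) * memory_load
    # Round down to a power of two, a typical burst-size constraint.
    power = 1
    while power * 2 <= size:
        power *= 2
    return max(power, min_size)
```

For example, an idle memory system (`memory_load=0.0`) yields the full 1024-byte group, while a saturated one (`memory_load=1.0`) drops to the 64-byte minimum.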

    INTERFACE APPARATUS AND METHOD OF OPERATING AN INTERFACE APPARATUS
    Status: Pending (Published)

    Publication Number: US20170076419A1

    Publication Date: 2017-03-16

    Application Number: US15254079

    Application Date: 2016-09-01

    Applicant: ARM LIMITED

    CPC classification number: G06T1/20 G06T1/60 H04N19/426 H04N19/436

    Abstract: An interface apparatus and method of operating the same are provided. The interface apparatus receives an uncompressed image data read request using a first addressing scheme at a first bus interface and transmits a compressed image data read request using a second addressing scheme from a second bus interface. Address translation circuitry translates between the first addressing scheme and the second addressing scheme. Decoding circuitry decodes a set of compressed image data received via the second bus interface to generate the set of uncompressed image data which is then transmitted via the first bus interface. The use of a second addressing scheme and image data compression is thus transparent to the source of the uncompressed image data read request, and the interface apparatus can therefore be used to connect devices which use different addressing schemes and image data formats, without either needing to be modified.
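The address-translation step described above can be sketched as a table lookup that maps a linear (uncompressed-space) address to the offset of the corresponding compressed tile. The tile size and the offset-table layout are assumptions for illustration, not the patented format.

```python
TILE_BYTES = 256  # uncompressed bytes per tile (assumed)

def translate_address(linear_addr: int, tile_offsets: list[int]) -> int:
    """Map an uncompressed-space byte address to a compressed-space address.

    `tile_offsets[i]` is assumed to hold the compressed-stream offset of
    tile i, as a header table produced by the encoder might. The requester
    only ever sees the linear address space, so compression stays
    transparent to it.
    """
    tile_index = linear_addr // TILE_BYTES
    if tile_index >= len(tile_offsets):
        raise IndexError("address beyond last compressed tile")
    return tile_offsets[tile_index]
```

A request for linear byte 600 falls in tile 2, so with an offset table `[0, 100, 180, 250]` the second bus interface would be asked to read from compressed offset 180.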

    MACHINE LEARNING IMPROVEMENTS

    Publication Number: US20230089112A1

    Publication Date: 2023-03-23

    Application Number: US17479257

    Application Date: 2021-09-20

    Applicant: Arm Limited

    Abstract: There is provided a data processing apparatus for performing machine learning. The data processing apparatus includes convolution circuitry for convolving a plurality of neighbouring regions of input data using a kernel to produce convolution outputs. Max-pooling circuitry determines and selects the largest of the convolution outputs as a pooled output. Prediction circuitry performs a size prediction of the convolution outputs based on the neighbouring regions; the size prediction is performed before the max-pooling circuitry determines the largest of the convolution outputs, and the prediction circuitry adjusts the behaviour of the convolution circuitry based on the size prediction.
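The predict-then-pool idea can be sketched as follows: a cheap proxy guesses which neighbouring region will win the max-pool, so only that region's convolution is evaluated in full. The proxy used here (each region's element sum, reasonable when kernel weights are non-negative) is an assumed heuristic for illustration, not the patented prediction method.

```python
def convolve(region, kernel):
    """Dot product of a flattened region with a flattened kernel."""
    return sum(r * k for r, k in zip(region, kernel))

def predicted_max_pool(regions, kernel):
    """Pooled output over a group of neighbouring regions.

    The size prediction (the per-region sum proxy) runs before any full
    convolution; the convolution circuitry's behaviour is then adjusted
    to evaluate only the predicted winner, skipping the other regions.
    """
    proxy = [sum(r) for r in regions]      # size prediction
    winner = proxy.index(max(proxy))       # predicted largest output
    return convolve(regions[winner], kernel)
```

With a uniform kernel the proxy is exact; with other kernels this particular heuristic could skip the true maximum, which is why the prediction mechanism itself matters.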

    OPTIMISED MACHINE LEARNING PROCESSING

    Publication Number: US20230040673A1

    Publication Date: 2023-02-09

    Application Number: US17387454

    Application Date: 2021-07-28

    Applicant: Arm Limited

    Abstract: A method for optimizing machine learning processing is provided. The method comprises retrieving neural network architecture information for a neural network, the neural network architecture information comprising layer information and kernel information for the neural network. The network architecture information is analyzed to identify convolutional layers in the neural network which have associated strided layers. A first kernel is retrieved for a convolutional layer identified as having an associated strided layer, along with a second kernel for the strided layer associated with that convolutional layer. A composite kernel is then generated, based on the first and second kernels, that performs the functions of both. Finally, the composite kernel is stored for further use by a neural network.
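One common instance of this kind of fusion is folding a stride-1 convolution followed by a subsampling (strided) layer into a single strided convolution that never computes the outputs the subsampler would discard. The 1-D sketch below is an assumed illustration of that equivalence, not the patent's generation procedure.

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D cross-correlation with a configurable stride."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

def fused_conv_and_subsample(signal, kernel, stride):
    """Composite of a stride-1 convolution and a following stride-`stride`
    subsampling layer, expressed as one strided convolution: the skipped
    positions are exactly the outputs the subsampler would have dropped."""
    return conv1d(signal, kernel, stride=stride)
```

Running the two-layer pipeline and the fused version on the same input produces identical results, while the fused form does a fraction of the multiply-accumulates.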

    DATA PROCESSING SYSTEM AND METHOD

    Publication Number: US20220038270A1

    Publication Date: 2022-02-03

    Application Number: US16940770

    Application Date: 2020-07-28

    Applicant: Arm Limited

    Abstract: A data processing system including storage. The data processing system also includes at least one processor to generate output data using at least a portion of a first neural network layer and generate a key associated with at least the portion of the first neural network layer. The at least one processor is further operable to obtain the key from the storage and obtain a version of the output data for input into a second neural network layer. Using the key, the at least one processor is further operable to determine whether the version of the output data differs from the output data.
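The key-based check described above can be sketched with a cryptographic digest: a key is generated over a layer's output before it is written to storage, and the version read back for the next layer is re-keyed and compared. The choice of SHA-256 here is an assumption for illustration; the abstract does not specify how the key is derived.

```python
import hashlib

def make_key(layer_output: bytes) -> str:
    """Generate a key (here, a SHA-256 digest, an assumed choice) over the
    output data of a first neural network layer before storing it."""
    return hashlib.sha256(layer_output).hexdigest()

def output_modified(stored_key: str, retrieved_output: bytes) -> bool:
    """True if the version obtained for input into the second layer
    differs from the data the first layer originally produced."""
    return make_key(retrieved_output) != stored_key
```

This lets the processor detect corruption or tampering of intermediate activations that passed through storage between layers.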

    COMPOSITING PLURAL LAYER OF IMAGE DATA FOR DISPLAY
    Status: Granted

    Publication Number: US20160217592A1

    Publication Date: 2016-07-28

    Application Number: US15067683

    Application Date: 2016-03-11

    Applicant: ARM LIMITED

    Abstract: An apparatus and a corresponding method for processing image data are provided. The apparatus has compositing circuitry to generate a composite layer for a frame for display from image data representing plural layers of content within the frame. Plural latency buffers are provided to store at least a portion of the image data representing the plural layers, with at least one of the latency buffers larger than at least one other. The compositing circuitry is responsive to at least one characteristic of the plural layers of content to allocate the plural layers to respective latency buffers. Image data information for a layer allocated to a larger latency buffer is available for analysis earlier than that of layers allocated to smaller latency buffers, which can yield processing efficiencies.
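The characteristic-driven allocation can be sketched as a greedy pairing: layers are ranked by some per-layer characteristic and the most demanding layer gets the largest buffer, so its data becomes available for analysis earliest. Both the ranking key (bytes per line) and the greedy pairing below are illustrative assumptions.

```python
def allocate_layers(layers, buffer_sizes):
    """Assign each content layer to a latency buffer index.

    `layers` is a list of dicts with "name" and "bytes_per_line" (an
    assumed characteristic); `buffer_sizes` gives the capacity of each
    latency buffer. The most demanding layer is paired with the largest
    buffer, the next with the next largest, and so on.
    """
    ranked = sorted(layers, key=lambda l: l["bytes_per_line"], reverse=True)
    buffers = sorted(range(len(buffer_sizes)),
                     key=lambda i: buffer_sizes[i], reverse=True)
    return {layer["name"]: buf for layer, buf in zip(ranked, buffers)}
```

For instance, a high-bandwidth video layer would be steered to the 2048-entry buffer while a small UI overlay takes the 512-entry one.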
