-
Publication No.: US12099846B2
Publication Date: 2024-09-24
Application No.: US17396865
Application Date: 2021-08-09
Applicant: Arm Limited
Inventor: Frederic Claude Marie Piry , Cédric Denis Robert Airaud , Natalya Bondarenko , Luca Maroncelli , Geoffray Matthieu Lacourba
CPC classification number: G06F9/3836 , G06F9/30123 , G06F9/3877 , G06F9/4881
Abstract: A data processing apparatus comprises receiver circuitry for receiving instructions from each of a plurality of requester devices. Processing circuitry executes the instructions associated with a subset of the requester devices at a time, while arbitration circuitry determines that subset and causes the instructions associated with its members to be executed next. In response to the receiver circuitry receiving an instruction of a predetermined type from one of the requester devices outside the subset, the arbitration circuitry causes the instruction of the predetermined type to be executed next.
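The arbitration behaviour described above can be pictured with a short software model. The sketch below is purely illustrative (the class, enum and queue names are invented, and the real mechanism is hardware circuitry, not a C++ class): a fixed subset of requesters is serviced, but an instruction of the predetermined type arriving from a requester outside that subset is selected for execution next.

```cpp
// Hypothetical software model of the arbitration policy: a fixed "active
// subset" of requesters is serviced, but an instruction of a predetermined
// type from any requester outside that subset is executed next.
#include <cstdio>
#include <deque>
#include <set>
#include <utility>

enum class InstrType { Normal, Predetermined };

struct Instruction {
    int requester;   // which requester device issued it
    InstrType type;
};

class Arbiter {
public:
    explicit Arbiter(std::set<int> activeSubset) : active_(std::move(activeSubset)) {}

    // Receiver circuitry hands incoming instructions to the arbiter.
    void receive(const Instruction& i) {
        if (i.type == InstrType::Predetermined && !active_.count(i.requester))
            urgent_.push_back(i);   // bypasses the subset: executed next
        else
            pending_.push_back(i);
    }

    // Pick the instruction the processing circuitry should execute next.
    bool next(Instruction& out) {
        if (!urgent_.empty()) { out = urgent_.front(); urgent_.pop_front(); return true; }
        for (auto it = pending_.begin(); it != pending_.end(); ++it)
            if (active_.count(it->requester)) { out = *it; pending_.erase(it); return true; }
        return false;  // nothing runnable for the current subset
    }

private:
    std::set<int> active_;              // requesters currently in the subset
    std::deque<Instruction> urgent_;    // predetermined-type instructions from outside
    std::deque<Instruction> pending_;   // everything else
};

int main() {
    Arbiter arb({0, 1});                            // subset = requesters 0 and 1
    arb.receive({0, InstrType::Normal});
    arb.receive({3, InstrType::Predetermined});     // outside the subset
    Instruction i;
    while (arb.next(i))
        std::printf("execute: requester %d\n", i.requester);   // 3 first, then 0
}
```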
-
Publication No.: US10855609B2
Publication Date: 2020-12-01
Application No.: US16269740
Application Date: 2019-02-07
Applicant: Arm Limited
Inventor: Geoffray Matthieu Lacourba , Alex James Waugh
IPC: H04L12/437 , H04L12/801 , H04L12/933
Abstract: An interconnect is provided that has a plurality of nodes, and a ring network to which each of the nodes is connected to allow packets to be transmitted between nodes. For an ordered sequence of packets one of the nodes is arranged as a source node to add each packet of the ordered sequence on to the ring network, and another of the nodes is arranged as a destination node to remove each packet of the ordered sequence from the ring network. The source node is enabled to add a packet of the ordered sequence on to the ring network without waiting for a previously added packet of the ordered sequence to be removed from the ring network by the destination node. When the destination node is unable to accept a given packet of the ordered sequence that is currently being presented to the destination node by the ring network, that given packet remains on the ring network and continues to be transmitted around the ring network such that after a respin period that given packet will be presented again to the destination node. The destination node is then arranged to prevent acceptance of at least any other packets of the ordered sequence subsequently presented to the destination node by the ring network until the destination node has accepted the given packet following at least one respin period. This can improve the efficiency of the ring network in the handling of ordered sequences of packets, whilst still ensuring the ordering constraints are met.
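As a rough illustration of the destination-node rule (a software sketch only; the packet fields and the single-blocked-packet simplification are assumptions, not taken from the patent), the model below lets an unaccepted packet respin and refuses younger packets of the same ordered sequence until the blocked packet has been accepted.

```cpp
// Illustrative model of the destination node: if a packet of an ordered
// sequence cannot be accepted, it stays on the ring and respins; later packets
// of that sequence are refused until the blocked packet is accepted.
#include <cstdio>
#include <optional>

struct Packet {
    int sequenceId;   // which ordered sequence it belongs to
    int seqNo;        // position within that sequence
};

class DestinationNode {
public:
    // Called each time the ring presents a packet. Returns true if the packet
    // is removed from the ring, false if it must continue around (respin).
    bool present(const Packet& p, bool sinkReady) {
        // Refuse any other packet of a sequence that still has a blocked packet.
        if (blocked_ && p.sequenceId == blocked_->sequenceId && p.seqNo != blocked_->seqNo)
            return false;
        if (!sinkReady) {
            blocked_ = p;            // the packet stays on the ring and respins
            return false;
        }
        blocked_.reset();            // accepted; the sequence may progress again
        std::printf("accepted seq %d #%d\n", p.sequenceId, p.seqNo);
        return true;
    }

private:
    std::optional<Packet> blocked_;  // oldest unaccepted packet of the sequence
};

int main() {
    DestinationNode dst;
    dst.present({7, 0}, /*sinkReady=*/false);  // cannot accept: packet respins
    dst.present({7, 1}, /*sinkReady=*/true);   // refused: #0 still outstanding
    dst.present({7, 0}, /*sinkReady=*/true);   // accepted after its respin
    dst.present({7, 1}, /*sinkReady=*/true);   // now accepted, in order
}
```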
-
Publication No.: US10802969B2
Publication Date: 2020-10-13
Application No.: US16266185
Application Date: 2019-02-04
Applicant: Arm Limited
Inventor: Alex James Waugh , Geoffray Matthieu Lacourba
IPC: G06F12/0815 , H04L12/747 , G06F12/0862 , G06F12/0811
Abstract: An interconnect, and method of operation of such an interconnect, are disclosed. The interconnect has a plurality of nodes, and a routing network via which information is routed between the plurality of nodes. The plurality of nodes comprises at least one slave node used to couple master devices to the interconnect, at least one master node used to couple slave devices to the interconnect, and at least one control node. Each control node is responsive to a slave node request received via the routing network from a slave node, to perform an operation to service the slave node request and, when a propagation condition is present, to issue a control node request via the routing network to a chosen master node in order to service the slave node request. The chosen master node processes the control node request in order to generate a master node response, and treats as a default destination for the master node response the control node that issued the control node request. In response to a trigger event occurring after the control node request has been issued, the control node sends an update destination request to the chosen master node that identifies a replacement destination node for the master node response. At least in the absence of an override condition, the chosen master node then sends the master node response via the routing network to the replacement destination node.
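A minimal sketch of the response-routing behaviour, assuming a single outstanding request and invented node identifiers (the real interconnect uses hardware request trackers, not this structure):

```cpp
// Simplified software sketch of the routing rule: the master node replies to
// the control node that issued the request by default, but an "update
// destination" message can redirect the response to a replacement node unless
// an override condition applies.
#include <cstdio>

struct MasterNodeRequestState {
    int defaultDestination;      // control node that issued the request
    int destination;             // where the response will actually be sent
    bool overrideCondition = false;

    explicit MasterNodeRequestState(int controlNode)
        : defaultDestination(controlNode), destination(controlNode) {}

    // The control node asks us to send the response somewhere else instead.
    void updateDestination(int replacementNode) {
        if (!overrideCondition)
            destination = replacementNode;   // honour the redirect
    }

    void sendResponse() const {
        std::printf("master node response routed to node %d\n", destination);
    }
};

int main() {
    MasterNodeRequestState req(/*controlNode=*/2);  // default destination: node 2
    req.updateDestination(/*replacementNode=*/5);   // e.g. route the response
                                                    // straight to the slave node
    req.sendResponse();                             // prints node 5
}
```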
-
Publication No.: US10776274B2
Publication Date: 2020-09-15
Application No.: US15910122
Application Date: 2018-03-02
Applicant: Arm Limited
Inventor: Lucas Garcia , Geoffray Matthieu Lacourba , Natalya Bondarenko , Nathanael Premillieu
IPC: G06F12/00 , G06F12/0862
Abstract: Data processing circuitry comprises a cache memory to cache a subset of data elements from a main memory; a processing element to execute program code to access data elements having respective memory addresses, the processing element being configured to access the data elements in the cache memory and, in the case of a cache miss, to fetch the data elements from the main memory; prefetch circuitry, responsive to an access to a current data element, to initiate prefetching into the cache memory of a data element at a memory address defined by a current offset value relative to the address of the current data element; and offset value selection circuitry comprising: an address table to store memory addresses for which a data element accessed by the processing element resulted in a cache miss or an access to a previously prefetched data element; and detector circuitry to detect, for each of a group of candidate offset values, one or more respective metrics representing a proportion of a set of data element accesses which resulted in a cache miss or an access to a previously prefetched data element, for which the memory address for that data element access differs by the candidate offset value from a memory address in the address table; in which the detector circuitry is configured to process the group of candidate offset values as successive complementary sub-groups of one or more of the group of candidate offset values and to set a next instance of the current offset value in response to processing each sub-group, in dependence upon the proportions indicated by the one or more detected metrics for that sub-group and upon the one or more metrics previously detected for the current offset value.
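A greatly simplified software model of the offset-selection idea is sketched below. The table size, epoch length, comparison rule and candidate list are invented example values; the code scores one sub-group of candidates at a time and conditionally updates the current offset after each sub-group.

```cpp
// The address table holds recent qualifying line addresses. Each candidate
// offset in the current sub-group is scored by how often (address - offset)
// appears in the table, and after each sub-group the best candidate replaces
// the current offset only if it beats the score measured for that offset.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <deque>
#include <vector>

class OffsetSelector {
public:
    OffsetSelector(std::vector<int64_t> candidates, std::size_t subGroupSize)
        : candidates_(std::move(candidates)),
          subGroupSize_(subGroupSize),
          scores_(subGroupSize, 0) {}

    // Record a line address whose access missed or hit previously prefetched data.
    void recordAccess(int64_t lineAddr) {
        std::size_t end = std::min(start_ + subGroupSize_, candidates_.size());
        for (std::size_t i = start_; i < end; ++i)     // score the current sub-group
            if (std::count(table_.begin(), table_.end(), lineAddr - candidates_[i]))
                ++scores_[i - start_];
        if (std::count(table_.begin(), table_.end(), lineAddr - currentOffset_))
            ++currentScore_;                           // metric for the current offset
        table_.push_back(lineAddr);
        if (table_.size() > 64) table_.pop_front();    // bounded address table
        if (++accesses_ == 128) finishSubGroup(end);   // end of the sub-group epoch
    }

    int64_t currentOffset() const { return currentOffset_; }

private:
    void finishSubGroup(std::size_t end) {
        for (std::size_t i = start_; i < end; ++i)
            if (scores_[i - start_] > currentScore_) { // candidate beats current offset
                currentOffset_ = candidates_[i];       // next instance of the offset
                currentScore_ = scores_[i - start_];
            }
        std::fill(scores_.begin(), scores_.end(), 0);
        accesses_ = 0;
        start_ = (end < candidates_.size()) ? end : 0; // advance to the next sub-group
        if (start_ == 0) currentScore_ = 0;            // a fresh round begins
    }

    std::vector<int64_t> candidates_;
    std::size_t subGroupSize_;
    std::vector<int> scores_;            // per-candidate metrics for this sub-group
    std::deque<int64_t> table_;          // recent miss / prefetched-hit addresses
    std::size_t start_ = 0;              // first candidate index of the sub-group
    int accesses_ = 0;
    int currentScore_ = 0;               // metric previously detected for current offset
    int64_t currentOffset_ = 1;
};

int main() {
    OffsetSelector sel({1, 2, 3, 4, 8, 16}, /*subGroupSize=*/2);
    for (int64_t a = 0; a < 4096; ++a) sel.recordAccess(a * 4);  // stride-4 stream
    std::printf("selected offset: %lld\n", (long long)sel.currentOffset());
}
```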
-
Publication No.: US10769069B2
Publication Date: 2020-09-08
Application No.: US15910137
Application Date: 2018-03-02
Applicant: Arm Limited
Inventor: Natalya Bondarenko , Lucas Garcia , Geoffray Matthieu Lacourba
IPC: G06F12/00 , G06F12/0862
Abstract: Data processing circuitry comprises a cache memory to cache a subset of data elements from a main memory; a processing element to execute program code to access data elements having respective memory addresses, the processing element being configured to access the data elements in the cache memory and, in the case of a cache miss, to fetch the data elements from the main memory; prefetch circuitry, responsive to an access to a current data element, to initiate prefetching into the cache memory of a data element at a memory address defined by a current offset value relative to the address of the current data element; offset value selection circuitry comprising: an address table to store memory addresses for which a data element accessed by the processing element resulted in a cache miss or an access to a previously prefetched data element; detector circuitry to detect, for each of a group of candidate offset values, one or more respective metrics representing a proportion of a set of data element accesses which resulted in a cache miss or an access to a previously prefetched data element, for which the memory address for that data element access differs by the candidate offset value from a memory address in the address table; in which the detector circuitry is configured to set a next instance of the current offset value in response to the one or more detected metrics; verification circuitry to detect, at one or more predetermined stages with respect to the processing of the group of candidate offset values by the offset value selection circuitry, one or more verification metrics representing a proportion of a set of data element accesses which resulted in a cache miss or an access to a previously prefetched data element, for which the memory address for that data element access differs by the current offset value from a memory address in the address table, to detect whether the one or more verification metrics comply with a predetermined condition; and control circuitry to inhibit prefetching at least until a next selection of a current offset value by the offset value selection circuitry, in response to a detection by the verification circuitry that the one or more verification metrics do not comply with the predetermined condition.
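The verification and throttling aspect can be sketched separately (offset selection itself is as in the previous example). The threshold, check interval and table size below are invented for illustration.

```cpp
// While an offset is in use, count how many qualifying accesses it would have
// covered; if that fraction falls below a threshold at a predetermined check
// point, inhibit prefetching until the next offset is selected.
#include <cstdint>
#include <cstdio>
#include <deque>

class PrefetchVerifier {
public:
    // Record a qualifying access (a cache miss or a hit on prefetched data).
    void recordAccess(int64_t lineAddr, int64_t currentOffset) {
        for (int64_t past : table_)
            if (lineAddr - currentOffset == past) { ++covered_; break; }
        table_.push_back(lineAddr);
        if (table_.size() > 64) table_.pop_front();     // bounded address table
        if (++total_ == kCheckInterval) check();        // a "predetermined stage"
    }

    // Control circuitry consults this before issuing any prefetch.
    bool prefetchEnabled() const { return enabled_; }

    // Called whenever the offset selection circuitry picks a new current offset.
    void newOffsetSelected() { enabled_ = true; covered_ = total_ = 0; }

private:
    void check() {
        // Verification metric: fraction of qualifying accesses the current
        // offset predicted. The 1/4 threshold is an invented example value.
        if (covered_ * 4 < total_) enabled_ = false;    // inhibit prefetching
        covered_ = total_ = 0;
    }

    static constexpr int kCheckInterval = 256;
    std::deque<int64_t> table_;      // recent qualifying line addresses
    int covered_ = 0, total_ = 0;
    bool enabled_ = true;
};

int main() {
    PrefetchVerifier v;
    for (int64_t a = 0; a < 1024; ++a)
        v.recordAccess(a * 64, /*currentOffset=*/7);    // offset 7 never matches
    std::printf("prefetch enabled: %s\n", v.prefetchEnabled() ? "yes" : "no");
}
```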
-
Publication No.: US10223002B2
Publication Date: 2019-03-05
Application No.: US15427335
Application Date: 2017-02-08
Applicant: ARM Limited
Inventor: Phanindra Kumar Mannava , Bruce James Mathewson , Klas Magnus Bruce , Geoffray Matthieu Lacourba
Abstract: A compare and swap transaction can be issued by a master device to request a processing unit to select whether to write a swap data value to a storage location corresponding to a target address in dependence on whether a compare data value matches a target data value read from the storage location. The compare and swap data values are transported within a data field of the compare and swap transaction. The compare data value is packed into a first region of the data field in dependence on an offset portion of the target address, at a position within the data field corresponding to the position of the target data value within the storage location. This reduces the latency and circuitry required at the processing unit for handling the compare and swap transaction.
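A hedged illustration of the packing idea follows. The 32-byte data field split into two 16-byte regions, the 4-byte operand size and the alignment assumptions are examples chosen for the sketch, not values taken from the patent.

```cpp
// Master side packs the compare value into the first region and the swap value
// into the second, each at the position given by the offset portion of the
// target address, so the compare value already lines up with the target word.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>

constexpr std::size_t kRegionBytes = 16;              // assumed region size
constexpr std::size_t kFieldBytes  = 2 * kRegionBytes;  // assumed data field size

struct CasTransaction {
    uint64_t address;                 // target address
    uint8_t  data[kFieldBytes];       // carries both compare and swap values
};

CasTransaction makeCas(uint64_t addr, uint32_t compare, uint32_t swap) {
    CasTransaction t{};
    t.address = addr;
    std::size_t pos = addr % kRegionBytes;            // offset portion of the address
    std::memcpy(&t.data[pos], &compare, sizeof compare);            // first region
    std::memcpy(&t.data[kRegionBytes + pos], &swap, sizeof swap);   // second region
    return t;
}

// Processing-unit side: no realignment is needed before the comparison.
bool applyCas(const CasTransaction& t, uint8_t (&storage)[kRegionBytes]) {
    std::size_t pos = t.address % kRegionBytes;
    if (std::memcmp(&storage[pos], &t.data[pos], sizeof(uint32_t)) != 0)
        return false;                                 // compare failed: no swap
    std::memcpy(&storage[pos], &t.data[kRegionBytes + pos], sizeof(uint32_t));
    return true;                                      // swap performed
}

int main() {
    uint8_t storage[kRegionBytes] = {};               // target word is currently 0
    CasTransaction t = makeCas(/*addr=*/0x1004, /*compare=*/0u, /*swap=*/42u);
    bool ok = applyCas(t, storage);
    uint32_t word;
    std::memcpy(&word, &storage[0x1004 % kRegionBytes], sizeof word);
    std::printf("swap performed: %s, stored word: %u\n", ok ? "yes" : "no", word);
}
```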
-
Publication No.: US12112169B2
Publication Date: 2024-10-08
Application No.: US18096141
Application Date: 2023-01-12
Applicant: Arm Limited
Inventor: Luca Nassi , Geoffray Matthieu Lacourba , Cédric Denis Robert Airaud , Albin Pierrick Tonnerre
CPC classification number: G06F9/30098 , G06F9/30094 , G06F9/384
Abstract: A data processing apparatus is provided. Instruction send circuitry sends an instruction to an external processor to be executed by the external processor. Allocation circuitry allocates a specified one of several registers for the result of the instruction executed on the external processor, and data receive circuitry receives that result and stores it in the specified register. In response to a condition being met, the specified register is dereserved prior to the result being received by the data receive circuitry, and the result is discarded by the data receive circuitry when it is received.
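A minimal software sketch of the dereserve-and-discard behaviour, with invented names and a simple request-to-register map standing in for the allocation and data receive circuitry:

```cpp
// A register is reserved for the result of an instruction sent to the external
// processor; if the condition (e.g. a flush) occurs first, the register is
// released and the late-arriving result is simply dropped.
#include <cstdio>
#include <map>

class ResultTracker {
public:
    // Allocation circuitry: reserve a register for the result of a request.
    void reserve(int requestId, int reg) { pending_[requestId] = reg; }

    // Condition met (e.g. the instruction was cancelled): dereserve early.
    void dereserve(int requestId) { pending_.erase(requestId); }

    // Data receive circuitry: the result arrives from the external processor.
    void receive(int requestId, long value) {
        auto it = pending_.find(requestId);
        if (it == pending_.end()) {                    // already dereserved
            std::printf("request %d: result %ld discarded\n", requestId, value);
            return;
        }
        std::printf("request %d: result %ld written to r%d\n",
                    requestId, value, it->second);
        pending_.erase(it);
    }

private:
    std::map<int, int> pending_;   // requestId -> reserved register number
};

int main() {
    ResultTracker t;
    t.reserve(/*requestId=*/1, /*reg=*/5);
    t.reserve(/*requestId=*/2, /*reg=*/6);
    t.dereserve(1);        // condition met before the result comes back
    t.receive(1, 99);      // discarded
    t.receive(2, 123);     // written to r6
}
```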
-
Publication No.: US11580032B2
Publication Date: 2023-02-14
Application No.: US17153147
Application Date: 2021-01-20
Applicant: Arm Limited
Inventor: Frederic Claude Marie Piry , Natalya Bondarenko , Cédric Denis Robert Airaud , Geoffray Matthieu Lacourba
IPC: G06F12/126 , G06F12/02 , G06K9/62
Abstract: A technique is provided for training a prediction apparatus. The apparatus has an input interface for receiving a sequence of training events indicative of program instructions, and identifier value generation circuitry for performing an identifier value generation function to generate, for a given training event received at the input interface, an identifier value for that given training event. The identifier value generation function is arranged such that the generated identifier value is dependent on at least one register referenced by a program instruction indicated by that given training event. Prediction storage is provided with a plurality of training entries, where each training entry is allocated an identifier value as generated by the identifier value generation function, and is used to maintain training data derived from training events having that allocated identifier value. Matching circuitry is then responsive to the given training event to detect whether the prediction storage has a matching training entry (i.e. an entry whose allocated identifier value matches the identifier value for the given training event). If so, it causes the training data in the matching training entry to be updated in dependence on the given training event.
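The training flow can be sketched as follows; the hash used as the identifier value generation function and the per-entry training data are illustrative stand-ins, not the patented implementation.

```cpp
// The identifier for a training event is derived from the register(s)
// referenced by the indicated instruction (a trivial hash here), and training
// data is maintained in entries keyed by that identifier.
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

struct TrainingEvent {
    std::vector<int> registers;   // registers referenced by the instruction
    int64_t observedDelta;        // e.g. an observed address step to learn
};

// Identifier value generation function: depends on the referenced registers.
uint32_t identifierValue(const TrainingEvent& e) {
    uint32_t id = 2166136261u;                 // FNV-1a style mix (illustrative)
    for (int r : e.registers) id = (id ^ (uint32_t)r) * 16777619u;
    return id;
}

struct TrainingEntry {
    int64_t lastDelta = 0;
    int confidence = 0;
};

class PredictionStorage {
public:
    void train(const TrainingEvent& e) {
        TrainingEntry& entry = entries_[identifierValue(e)];  // match or allocate
        if (entry.lastDelta == e.observedDelta)
            ++entry.confidence;                 // matching entry: reinforce
        else {
            entry.lastDelta = e.observedDelta;  // update the training data
            entry.confidence = 0;
        }
    }

    void dump() const {
        for (const auto& [id, entry] : entries_)
            std::printf("id=%08x delta=%lld conf=%d\n",
                        id, (long long)entry.lastDelta, entry.confidence);
    }

private:
    std::unordered_map<uint32_t, TrainingEntry> entries_;  // training entries
};

int main() {
    PredictionStorage ps;
    for (int i = 0; i < 4; ++i)
        ps.train({{1, 7}, /*observedDelta=*/64});  // same registers -> same entry
    ps.dump();
}
```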
-
Publication No.: US11256623B2
Publication Date: 2022-02-22
Application No.: US15427459
Application Date: 2017-02-08
Applicant: ARM Limited
Inventor: Phanindra Kumar Mannava , Bruce James Mathewson , Jamshed Jalal , Klas Magnus Bruce , Michael Filippo , Paul Gilbert Meyer , Alex James Waugh , Geoffray Matthieu Lacourba
IPC: G06F12/0831 , G06F12/0808
Abstract: Apparatus and a corresponding method of operating a hub device, and a target device, in a coherent interconnect system are presented. A cache pre-population request of a set of coherency protocol transactions in the system is received from a requesting master device specifying at least one data item, and the hub device responds by causing a cache pre-population trigger of the set of coherency protocol transactions specifying the at least one data item to be transmitted to a target device. This trigger can cause the target device to request that the specified at least one data item is retrieved and brought into its cache. Since the target device can therefore decide whether or not to respond to the trigger, it does not receive cached data unsolicited, simplifying its configuration, whilst still allowing some data to be pre-cached.
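A simplified model of the request/trigger split (this is not the actual coherency protocol encoding; the classes and the willingness flag are invented for the example):

```cpp
// The hub turns a requester's cache pre-population request into a trigger for
// the target; the target is free to act on the trigger or ignore it, so it
// never receives unsolicited cached data.
#include <cstdio>

struct Trigger { long address; };

class TargetDevice {
public:
    explicit TargetDevice(bool willing) : willing_(willing) {}

    void onTrigger(const Trigger& t) {
        if (!willing_) {                      // the target may simply drop it
            std::printf("target ignores trigger for 0x%lx\n", t.address);
            return;
        }
        // Acting on the trigger: the target issues its own read so the data
        // item is fetched into its cache on its own terms.
        std::printf("target fetches 0x%lx into its cache\n", t.address);
    }

private:
    bool willing_;
};

class HubDevice {
public:
    explicit HubDevice(TargetDevice& target) : target_(target) {}

    // Cache pre-population request from a requesting master device.
    void onPrePopulationRequest(long address) {
        target_.onTrigger({address});         // forward as a trigger only
    }

private:
    TargetDevice& target_;
};

int main() {
    TargetDevice busyTarget(/*willing=*/false), idleTarget(/*willing=*/true);
    HubDevice(busyTarget).onPrePopulationRequest(0x8000);
    HubDevice(idleTarget).onPrePopulationRequest(0x8040);
}
```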
-
Publication No.: US12182427B2
Publication Date: 2024-12-31
Application No.: US17966071
Application Date: 2022-10-14
Applicant: Arm Limited
Inventor: Stefano Ghiggini , Natalya Bondarenko , Luca Nassi , Geoffray Matthieu Lacourba , Huzefa Moiz Sanjeliwala , Miles Robert Dooley , Abhishek Raja
IPC: G06F3/06
Abstract: An apparatus is provided for controlling the operating mode of control circuitry, such that the control circuitry may change between two operating modes. In an allocation mode, data that is loaded in response to an instruction is allocated into storage circuitry from an intermediate buffer, and the data is read from the storage circuitry. In a non-allocation mode, the data is not allocated to the storage circuitry and is read directly from the intermediate buffer. The control of the operating mode may be performed by mode control circuitry, and the mode may be changed in dependence on the type of instruction that requests the data, and on whether the data may be used again in the near future or is expected to be used only once.
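A sketch of the two operating modes, assuming a simple map-based stand-in for the intermediate buffer and the storage circuitry, and a reuse hint standing in for the instruction-type check:

```cpp
// In allocation mode a loaded line moves from the intermediate buffer into the
// storage circuitry (cache) and is read from there; in non-allocation mode it
// is read straight out of the buffer, e.g. for streaming loads with no reuse.
#include <cstdio>
#include <unordered_map>

enum class Mode { Allocate, NoAllocate };

class LoadPath {
public:
    // Mode control circuitry: instructions expected to touch the data only
    // once (e.g. streaming / non-temporal loads) select the non-allocation mode.
    void setModeForInstruction(bool expectsReuse) {
        mode_ = expectsReuse ? Mode::Allocate : Mode::NoAllocate;
    }

    long load(long address) {
        buffer_[address] = address * 10;        // stand-in for the fetched line
        if (mode_ == Mode::Allocate) {
            cache_[address] = buffer_[address]; // allocate into storage circuitry
            buffer_.erase(address);
            std::printf("0x%lx read from cache\n", address);
            return cache_[address];
        }
        long value = buffer_[address];          // read directly from the buffer
        buffer_.erase(address);                 // nothing allocated into the cache
        std::printf("0x%lx read from intermediate buffer\n", address);
        return value;
    }

private:
    Mode mode_ = Mode::Allocate;
    std::unordered_map<long, long> buffer_;    // intermediate (fill) buffer
    std::unordered_map<long, long> cache_;     // storage circuitry
};

int main() {
    LoadPath path;
    path.setModeForInstruction(/*expectsReuse=*/true);
    path.load(0x100);                           // allocated and read from cache
    path.setModeForInstruction(/*expectsReuse=*/false);
    path.load(0x200);                           // streamed past the cache
}
```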
-