-
Publication No.: US20190205276A1
Publication Date: 2019-07-04
Application No.: US15859830
Filing Date: 2018-01-02
Applicant: Arm Limited
Inventor: Andrew Brian Thomas HOPKINS , Sean James SALISBURY
Abstract: An interconnect, and a method of handling supplementary data in an interconnect, are provided. The interconnect has routing circuitry providing a plurality of paths, and routing control circuitry that uses those paths to establish routes through the interconnect between source devices and destination devices coupled to the interconnect, enabling system data to be routed between the source devices and the destination devices. The system data relates to the functional operation of a system comprising the interconnect, the source devices and the destination devices. At least a subset of the paths are redundant paths whose use by the routing control circuitry gives the system data resilience to faults as it is routed through the interconnect. In response to supplementary data, which is unnecessary to ensure the functional operation of the system, the routing control circuitry establishes a supplementary data route through the interconnect to a supplementary data receiving circuit, such that the supplementary data route employs at least one redundant path that is not required to provide resilience for the system data at the time that path is used for the supplementary data route. This provides an efficient mechanism for transporting supplementary data whilst ensuring non-intrusive behaviour.
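The core routing policy in this abstract can be sketched as a small model: supplementary data may only occupy redundant paths that are not currently needed to back system data against faults. This is a minimal illustrative sketch, not Arm's implementation; all class, method, and path names (`Interconnect`, `route_supplementary`, `r0`, etc.) are assumptions introduced here for illustration.

```python
# Hypothetical sketch of the policy described in the abstract: supplementary
# (non-functional) data may only use redundant paths that resilience for
# system data does not currently require.

class Interconnect:
    def __init__(self, paths, redundant_paths):
        self.paths = set(paths)
        self.redundant = set(redundant_paths)   # subset of paths held as spares
        self.in_use_for_resilience = set()      # spares currently backing system data

    def reserve_for_resilience(self, path):
        # A fault elsewhere makes this redundant path required for system data,
        # so it is withdrawn from the pool available to supplementary data.
        assert path in self.redundant
        self.in_use_for_resilience.add(path)

    def route_supplementary(self):
        # Non-intrusive behaviour: only redundant paths not currently needed
        # for resilience may carry the supplementary data route.
        free = self.redundant - self.in_use_for_resilience
        return sorted(free)

ic = Interconnect(paths={"p0", "p1", "r0", "r1"}, redundant_paths={"r0", "r1"})
print(ic.route_supplementary())   # → ['r0', 'r1'] (both spares free)
ic.reserve_for_resilience("r0")
print(ic.route_supplementary())   # → ['r1'] (r0 now needed for resilience)
```

The key invariant is that reserving a redundant path for resilience silently shrinks what supplementary traffic may use, which is what makes the mechanism non-intrusive to functional operation.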
-
Publication No.: US20220365853A1
Publication Date: 2022-11-17
Application No.: US17807054
Filing Date: 2022-06-15
Applicant: Arm Limited
Inventor: Andrew Brian Thomas HOPKINS , Graeme Leslie INGRAM , Elliot Maurice Simon ROSEMARINE , Antonio PRIORE
Abstract: A method of performing fault detection during computations relating to a neural network comprising a first neural network layer and a second neural network layer in a data processing system. The method comprises scheduling computations onto data processing resources for the execution of the first neural network layer and the second neural network layer. For a given one of the first and second neural network layers, a respective given one of a first computation and a second computation is scheduled as a non-duplicated computation, in which the given computation is at least initially scheduled to be performed only once during the execution of the given neural network layer. For the other of the first and second neural network layers, the respective other of the first and second computations is scheduled as a duplicated computation.
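The scheduling idea in this abstract can be sketched as follows: one layer's computation runs once (non-duplicated) while the other layer's runs twice (duplicated), and a mismatch between the duplicate results reveals a fault. This is an illustrative sketch only, not Arm's implementation; the function names (`schedule`, `execute`) and the per-layer alternation are assumptions introduced here.

```python
# Illustrative sketch: schedule one layer's computation as non-duplicated
# and the other's as duplicated, so the duplicated pair can be compared
# for fault detection without duplicating every computation.

def schedule(layers):
    # Alternate which layer receives the duplicated computation.
    plan = []
    for i, layer in enumerate(layers):
        mode = "duplicated" if i % 2 else "non-duplicated"
        plan.append((layer, mode))
    return plan

def execute(plan, compute):
    results = {}
    for layer, mode in plan:
        out = compute(layer)
        if mode == "duplicated":
            # Run the duplicated computation a second time and compare;
            # disagreement between the two runs indicates a fault.
            if compute(layer) != out:
                raise RuntimeError(f"fault detected in {layer}")
        results[layer] = out
    return results

plan = schedule(["layer1", "layer2"])
print(plan)  # → [('layer1', 'non-duplicated'), ('layer2', 'duplicated')]
print(execute(plan, compute=lambda layer: len(layer) * 2))
```

Duplicating only a subset of computations trades some fault coverage for substantially less redundant work than full lockstep duplication, which is the apparent motivation for the split scheduling.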
-