-
Publication No.: US12124886B2
Publication Date: 2024-10-22
Application No.: US17232195
Filing Date: 2021-04-16
Inventors: Xiaoke Ni, Jinpeng Chen, Hao Lan
CPC Classes: G06F9/5077, G06F9/5011, G06F9/5072, G06F13/28, G06F2209/5011
Abstract: This disclosure provides a data processing method, including: receiving, by a first computing device, a first packet sent by a second computing device, where the first computing device is configured to assist the second computing device in performing service processing, the first computing device is a computing device in a heterogeneous resource pool, the first computing device communicates with the second computing device through a network, the heterogeneous resource pool includes at least one first computing device, and the first packet includes an instruction used to request the first computing device to process to-be-processed data; processing, by the first computing device, the to-be-processed data based on the instruction; and sending, by the first computing device, a second packet to the second computing device, where the second packet includes a processing result of the to-be-processed data.
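The request/response exchange the abstract describes can be sketched as follows. This is a minimal illustration, not the patent's actual wire format: the packet layout, opcode value, and the uppercase operation are all assumptions standing in for whatever service processing the heterogeneous device performs.

```python
import struct

OP_UPPERCASE = 1  # hypothetical instruction code; the patent does not define opcodes

def build_first_packet(opcode: int, payload: bytes) -> bytes:
    """Second computing device: pack an instruction and the to-be-processed data."""
    return struct.pack("!BI", opcode, len(payload)) + payload

def handle_first_packet(packet: bytes) -> bytes:
    """First computing device (in the heterogeneous pool): process the data
    per the instruction, then return a second packet carrying the result."""
    opcode, length = struct.unpack("!BI", packet[:5])
    data = packet[5:5 + length]
    if opcode == OP_UPPERCASE:
        result = data.upper()
    else:
        raise ValueError("unknown instruction")
    return struct.pack("!I", len(result)) + result

def parse_second_packet(packet: bytes) -> bytes:
    """Second computing device: extract the processing result."""
    (length,) = struct.unpack("!I", packet[:4])
    return packet[4:4 + length]
```

In practice the two devices would exchange these packets over the network; here the handler is called directly to show the flow.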
-
Publication No.: US20240348538A1
Publication Date: 2024-10-17
Application No.: US18755926
Filing Date: 2024-06-27
IPC Classes: H04L45/28, G06F9/50, G06F9/54, G06F12/0862, G06F12/1036, G06F12/1045, G06F13/14, G06F13/16, G06F13/28, G06F13/38, G06F13/40, G06F13/42, G06F15/173, H04L1/00, H04L43/0876, H04L43/10, H04L45/00, H04L45/02, H04L45/021, H04L45/028, H04L45/12, H04L45/122, H04L45/125, H04L45/16, H04L45/24, H04L45/42, H04L45/745, H04L45/7453, H04L47/10, H04L47/11, H04L47/12, H04L47/122, H04L47/20, H04L47/22, H04L47/24, H04L47/2441, H04L47/2466, H04L47/2483, H04L47/30, H04L47/32, H04L47/34, H04L47/52, H04L47/62, H04L47/625, H04L47/6275, H04L47/629, H04L47/76, H04L47/762, H04L47/78, H04L47/80, H04L49/00, H04L49/101, H04L49/15, H04L49/90, H04L49/9005, H04L49/9047, H04L67/1097, H04L69/22, H04L69/28, H04L69/40
CPC Classes: H04L45/28, G06F9/505, G06F9/546, G06F12/0862, G06F12/1036, G06F12/1063, G06F13/14, G06F13/16, G06F13/1642, G06F13/1673, G06F13/1689, G06F13/28, G06F13/385, G06F13/4022, G06F13/4068, G06F13/4221, G06F15/17331, H04L1/0083, H04L43/0876, H04L43/10, H04L45/02, H04L45/021, H04L45/028, H04L45/122, H04L45/123, H04L45/125, H04L45/16, H04L45/20, H04L45/22, H04L45/24, H04L45/38, H04L45/42, H04L45/46, H04L45/566, H04L45/70, H04L45/745, H04L45/7453, H04L47/11, H04L47/12, H04L47/122, H04L47/18, H04L47/20, H04L47/22, H04L47/24, H04L47/2441, H04L47/2466, H04L47/2483, H04L47/30, H04L47/32, H04L47/323, H04L47/34, H04L47/39, H04L47/52, H04L47/621, H04L47/6235, H04L47/626, H04L47/6275, H04L47/629, H04L47/76, H04L47/762, H04L47/781, H04L47/80, H04L49/101, H04L49/15, H04L49/30, H04L49/3009, H04L49/3018, H04L49/3027, H04L49/90, H04L49/9005, H04L49/9021, H04L49/9036, H04L49/9047, H04L67/1097, H04L69/22, H04L69/40, G06F2212/50, G06F2213/0026, G06F2213/3808, H04L69/28
Abstract: Systems and methods of routing a data communication across a network having a plurality of switches connected by a plurality of global links are provided. The operation of the global links is monitored to determine which of them provide working paths. A routing table indicative of the status of the links is maintained, where the routing table provides a weighting for each of the working paths. When routing, a link is selected using a weighted pseudo-random selection from the choices available in the routing table. Routing is performed along the working path commensurate with the selected link, and the weighting is updated based upon the operation of the links.
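The weighted pseudo-random selection and weight update described above can be sketched in a few lines. This is an illustrative model only; the routing-table layout and the additive weight update are assumptions, not the patent's actual scheme.

```python
import random

def select_link(routing_table: dict, rng: random.Random) -> str:
    """Pick a working link with probability proportional to its weight.
    routing_table maps link name -> weight; weight 0 means the link is
    not currently a working path and is excluded from selection."""
    working = {link: w for link, w in routing_table.items() if w > 0}
    links = list(working)
    weights = [working[link] for link in links]
    return rng.choices(links, weights=weights, k=1)[0]

def update_weight(routing_table: dict, link: str, ok: bool, step: int = 1) -> None:
    """Update the weighting based on observed link operation: reward a
    successful transfer, penalize a failure down to a floor of 0."""
    routing_table[link] = max(0, routing_table[link] + (step if ok else -step))
```

A seeded `random.Random` makes the "pseudo-random" choice reproducible for testing; hardware would typically use an LFSR or similar generator.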
-
Publication No.: US12117956B2
Publication Date: 2024-10-15
Application No.: US16726676
Filing Date: 2019-12-24
Applicant: Intel Corporation
Inventors: Mark Sean Hefty, Arlin R. Davis
CPC Classes: G06F13/4234, G06F9/30145, G06F9/546, G06F13/28, H04L45/566, H04L69/22
Abstract: Examples described herein relate to configuring a target network interface to recognize packets that are to be written directly from the network interface to multiple memory destinations. A packet can include an identifier that a portion of the packet is to be written to multiple memory devices at specific addresses. The packet is validated to determine if the target network interface is permitted to directly copy the portion of the packet to memory of the target. The target network interface can perform a direct copy to multiple memory locations of a portion of the packet.
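The validate-then-scatter flow can be modeled as below. Memory regions are stood in for by bytearrays, the packet's multi-destination identifier is modeled as a plain destination list, and the permission check is a simple set lookup; all of these are illustrative assumptions, not the patent's mechanism.

```python
class TargetNIC:
    """Toy model of a target network interface that direct-copies a packet
    portion to multiple memory destinations after validation."""

    def __init__(self, memories: dict, allowed: set):
        self.memories = memories  # memory name -> bytearray
        self.allowed = allowed    # names this NIC may write directly

    def receive(self, portion: bytes, destinations: list) -> None:
        """destinations: list of (memory_name, offset) pairs.
        Validate every destination first; only then perform the copies,
        so a disallowed destination leaves all memories untouched."""
        for name, _off in destinations:
            if name not in self.allowed:
                raise PermissionError(f"direct copy to {name} not permitted")
        for name, off in destinations:
            mem = self.memories[name]
            mem[off:off + len(portion)] = portion
```

Validating before copying mirrors the abstract's ordering: the permission check gates the direct copy rather than running alongside it.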
-
Publication No.: US12112792B2
Publication Date: 2024-10-08
Application No.: US17712935
Filing Date: 2022-04-04
IPC Classes: H01L23/00, G06F3/06, G06F13/16, G06F13/28, G11C7/08, G11C7/10, G11C11/408, G11C11/4091, G11C11/4093, G11C11/4096, G16B30/00, G16B50/10, H01L21/66, H01L21/78, H01L25/00, H01L25/065, H01L25/18
CPC Classes: G11C11/4093, G06F3/0656, G06F13/1673, G06F13/28, G11C7/08, G11C7/1039, G11C11/4087, G11C11/4091, G11C11/4096, G16B30/00, G16B50/10, H01L21/78, H01L22/12, H01L24/08, H01L24/48, H01L24/80, H01L25/0652, H01L25/0657, H01L25/18, H01L25/50, G06F2213/28, H01L24/16, H01L2224/0801, H01L2224/08145, H01L2224/1601, H01L2224/16221, H01L2224/48091, H01L2224/48145, H01L2224/48221, H01L2224/80895, H01L2224/80896, H01L2225/06517, H01L2225/06524, H01L2225/06527, H01L2225/06541, H01L2225/06565, H01L2225/06589, H01L2924/1431, H01L2924/14335, H01L2924/1436
Abstract: A memory device includes an array of memory cells configured on a die or chip and coupled to sense lines and access lines of the die or chip, with a respective sense amplifier configured on the die or chip coupled to each of the sense lines. Each of a plurality of subsets of the sense lines is coupled to a respective local input/output (I/O) line on the die or chip for communication of data on the die or chip, and to a respective transceiver associated with that local I/O line, the transceiver configured to enable communication of the data to one or more devices off the die or chip.
-
Publication No.: US12111721B2
Publication Date: 2024-10-08
Application No.: US18490675
Filing Date: 2023-10-19
Applicant: Apple Inc.
Inventors: Marc A. Schaub, Roy G. Moss, Michael Bekerman
CPC Classes: G06F11/0793, G06F13/28
Abstract: Systems, apparatuses, and methods for error detection and recovery when streaming data are described. A system includes one or more companion direct memory access (DMA) subsystems for transferring data. When an error is detected for a component of the companion DMA subsystem(s), the operations performed by the other components need to gracefully adapt to this error so that operations face only a minimal disruption. For example, while one or more consumers are still consuming a first frame, a companion router receives an indication of an error for a second frame, causing the companion router to send a router frame abort message to a route manager. In response, the route manager waits until the consumer(s) are consuming the second frame before sending them a frame abort message. The consumer(s) flush their pipeline and transition to an idle state waiting for a third frame after receiving the frame abort message.
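The deferred frame-abort handshake in the example above can be sketched as a small state machine. Component names follow the abstract, but the message methods and state names are assumptions made for illustration.

```python
class Consumer:
    """Consumes frames; on a frame abort it flushes and idles."""

    def __init__(self):
        self.current_frame = None
        self.state = "consuming"

    def start_frame(self, frame_id: int) -> None:
        self.current_frame = frame_id
        self.state = "consuming"

    def frame_abort(self) -> None:
        # Flush the pipeline and idle until the next frame arrives.
        self.state = "idle"

class RouteManager:
    """Holds a router frame abort until every consumer has reached the
    erroneous frame, so the prior frame finishes undisturbed."""

    def __init__(self, consumers: list):
        self.consumers = consumers
        self.pending_abort = None

    def router_frame_abort(self, frame_id: int) -> None:
        self.pending_abort = frame_id
        self._try_deliver()

    def notify_frame_started(self) -> None:
        self._try_deliver()

    def _try_deliver(self) -> None:
        if self.pending_abort is None:
            return
        if all(c.current_frame == self.pending_abort for c in self.consumers):
            for c in self.consumers:
                c.frame_abort()
            self.pending_abort = None
```

The key point the sketch captures is the wait: the abort for frame N is delivered only once the consumers are actually on frame N, matching the abstract's "waits until the consumer(s) are consuming the second frame."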
-
Publication No.: US20240333304A1
Publication Date: 2024-10-03
Application No.: US18738203
Filing Date: 2024-06-10
Inventors: Arthur John Redfern, Dan Wang
CPC Classes: H03M7/30, G06F13/28, G06F17/16, G06N3/063, H03M7/3082, H03M7/6029, H03M7/6064, G06N3/045
Abstract: A matrix compression/decompression accelerator (MCA) system/method that coordinates lossless data compression (LDC) and lossless data decompression (LDD) transfers between an external data memory (EDM) and a local data memory (LDM) is disclosed. The system implements LDC using a 2D-to-1D transformation of 2D uncompressed data blocks (2DU) within LDM to generate 1D uncompressed data blocks (1DU). The 1DU is then compressed to generate a 1D compressed superblock (CSB) in LDM. This LDM CSB may then be written to EDM with a reduced number of EDM bus cycles. The system implements LDD using decompression of CSB data retrieved from EDM to generate a 1D decompressed data block (1DD) in LDM. A 1D-to-2D transformation is then applied to the LDM 1DD to generate a 2D decompressed data block (2DD) in LDM. This 2DD may then be operated on by a matrix compute engine (MCE) using a variety of function operators.
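The LDC and LDD paths round-trip as shown below. This is a sketch under stated assumptions: zlib stands in for the patent's (unspecified) lossless coder, and row-major flattening stands in for the 2D-to-1D transformation.

```python
import zlib

def ldc(block_2d: list) -> bytes:
    """LDC path: 2D uncompressed block (2DU) -> 1D block (1DU) -> compressed
    superblock (CSB). Row-major flattening models the 2D-to-1D transform."""
    flat = bytes(b for row in block_2d for b in row)  # 2DU -> 1DU
    return zlib.compress(flat)                        # 1DU -> CSB

def ldd(csb: bytes, rows: int, cols: int) -> list:
    """LDD path: CSB -> 1D decompressed block (1DD) -> 2D block (2DD)."""
    flat = zlib.decompress(csb)                       # CSB -> 1DD
    return [list(flat[r * cols:(r + 1) * cols]) for r in range(rows)]  # 1DD -> 2DD
```

Because the CSB is smaller than the raw block for compressible data, fewer EDM bus cycles are needed for the transfer, which is the saving the abstract claims.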
-
Publication No.: US20240330217A1
Publication Date: 2024-10-03
Application No.: US18616772
Filing Date: 2024-03-26
Applicant: Apple Inc.
Inventor: Christopher L. MILLS
IPC Classes: G06F13/28
CPC Classes: G06F13/28, G06F2213/2806
Abstract: An SoC circuit includes a neural processor circuit coupled to a CPU. The neural processor circuit includes neural engines, a data processor DMA circuit, a system memory, and a data processor circuit. The CPU is configured to execute a compiler, which is in turn configured to determine to perform a mode of spatial cropping and the associated crop offset. The neural processor circuit is configured to support arbitrary cropping in the x and y dimensions. The compiler is configured to generate task descriptor(s), which are distributed to components of the neural processor circuit. The data processor DMA circuit is configured to fetch and format data corresponding to the crop from a source to the buffer. The buffer is configured to realign the data according to the crop origin for broadcast to the neural engines. The neural engines are configured to perform a computation operation which uses the cropped data.
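The arbitrary x/y crop can be illustrated with a simple slice. This is only a functional sketch of what the DMA fetch and realignment achieve together; the hardware splits the work between the DMA circuit and the buffer, and the function name is hypothetical.

```python
def fetch_crop(source: list, x: int, y: int, width: int, height: int) -> list:
    """Fetch a width x height crop whose origin is at arbitrary offsets
    (x, y) in the source, realigned so the crop origin lands at (0, 0)
    as the neural engines would receive it."""
    return [row[x:x + width] for row in source[y:y + height]]
```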
-
Publication No.: US20240330216A1
Publication Date: 2024-10-03
Application No.: US18193129
Filing Date: 2023-03-30
Applicant: Xilinx, Inc.
IPC Classes: G06F13/28
CPC Classes: G06F13/28, G06F2213/28
Abstract: A direct memory access (DMA) system includes a plurality of read circuits and a switch coupled to a plurality of data port controllers configured to communicate with one or more data processing systems. The DMA system includes a read scheduler circuit coupled to the plurality of read circuits and the switch. The read scheduler circuit is configured to receive read requests from the plurality of read circuits, request allocation of entries of a data memory for the read requests, and submit the read requests to the one or more data processing systems via the switch. The DMA system includes a read reassembly circuit coupled to the plurality of read circuits, the switch, and the read scheduler circuit. The read reassembly circuit is configured to reorder read completion data received from the switch for the read requests and provide read completion data, as reordered, to the plurality of read circuits.
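The reordering job of the read reassembly circuit can be modeled as below. The sequence-number tagging is an assumption made for illustration; the abstract does not say how completions are matched to requests.

```python
class ReadReassembler:
    """Buffers out-of-order read completions and releases them to the
    read circuits strictly in request order."""

    def __init__(self):
        self.pending = {}   # seq -> completion data held until its turn
        self.next_seq = 0   # next sequence number to release

    def on_completion(self, seq: int, data):
        """Accept a completion from the switch; return the in-order run
        of completions that can now be forwarded to the read circuits."""
        self.pending[seq] = data
        released = []
        while self.next_seq in self.pending:
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return released
```

A completion arriving early is simply held; once the gap before it fills, the whole contiguous run drains in one step.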
-
Publication No.: US20240320039A1
Publication Date: 2024-09-26
Application No.: US18737728
Filing Date: 2024-06-07
IPC Classes: G06F9/48, G06F9/50, G06F13/16, G06F13/18, G06F13/20, G06F13/28, G06F13/364, G06F16/901
CPC Classes: G06F9/4881, G06F9/48, G06F9/4806, G06F9/4818, G06F9/4831, G06F9/50, G06F9/5005, G06F9/5027, G06F9/5038, G06F13/16, G06F13/1605, G06F13/18, G06F13/20, G06F13/28, G06F13/364, G06F16/9027
Abstract: Methods and systems for generating common priority information for a plurality of requestors in a computing system that share a plurality of computing resources, for use in a next cycle to arbitrate between the plurality of requestors, include: generating, for each resource, priority information for the next cycle based on an arbitration scheme; generating, for each resource, relevant priority information for the next cycle based on the priority information for the next cycle for that resource, the relevant priority information for a resource being the priority information that relates to requestors that requested access to the resource in the current cycle and were not granted access in the current cycle; and combining the relevant priority information for the next cycle for each resource to generate the common priority information for the next cycle.
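The three steps the abstract enumerates can be sketched with sets of requestor ids. Representing priority information as a set, and combining per-resource results with a union, are illustrative assumptions; the patent leaves the arbitration scheme and combination logic abstract.

```python
def common_priority(resources: list) -> set:
    """resources: list of dicts, each with 'priority' (next-cycle priority
    from the arbitration scheme), 'requested', and 'granted' sets of
    requestor ids for the current cycle. Returns the combined common
    priority information for the next cycle."""
    common = set()
    for r in resources:
        # Relevant priority: requestors that asked for this resource this
        # cycle but were not granted it.
        relevant = r["priority"] & (r["requested"] - r["granted"])
        # Combine per-resource relevant priority into the common set.
        common |= relevant
    return common
```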
-
Publication No.: US20240314072A1
Publication Date: 2024-09-19
Application No.: US18417570
Filing Date: 2024-01-19
Applicant: Intel Corporation
Inventors: Pratik M. MAROLIA, Rajesh M. SANKARAN, Ashok RAJ, Nrupal JANI, Parthasarathy SARANGAM, Robert O. SHARP
IPC Classes: H04L45/74, G06F12/1081, G06F13/28, H04L45/60, H04L49/90
CPC Classes: H04L45/742, G06F12/1081, G06F13/28, H04L45/60, H04L49/9068
Abstract: A network interface controller can be programmed to write received data directly to a memory buffer via either a host-to-device fabric or an accelerator fabric. For packets received that are to be written to a memory buffer associated with an accelerator device, the network interface controller can determine an address translation of a destination memory address of the received packet and determine whether to use a secondary head. If a translated address is available and a secondary head is to be used, a direct memory access (DMA) engine is used to copy a portion of the received packet via the accelerator fabric to a destination memory buffer associated with the address translation. Accordingly, copying a portion of the received packet through the host-to-device fabric and to a destination memory can be avoided, and utilization of the host-to-device fabric can be reduced for accelerator-bound traffic.
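The head-selection decision reduces to a small branch. The function and field names here are illustrative, not the controller's programming interface: a translation lookup plus the secondary-head flag picks the accelerator fabric, and anything else falls back to the host-to-device fabric.

```python
def route_write(dest_addr: int, translations: dict, use_secondary_head: bool):
    """Decide which fabric carries the DMA write for a received packet.
    translations maps destination addresses to accelerator-local
    translated addresses. Returns (fabric, address) for the copy."""
    translated = translations.get(dest_addr)
    if translated is not None and use_secondary_head:
        # Translation available and secondary head enabled: DMA via the
        # accelerator fabric, bypassing the host-to-device fabric.
        return ("accelerator_fabric", translated)
    # Fallback: copy through the host-to-device fabric as usual.
    return ("host_fabric", dest_addr)
```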