Method, apparatus, and system for retransmitting data packet in quick path interconnect system
    1.
    Granted invention patent
    Method, apparatus, and system for retransmitting data packet in quick path interconnect system (in force)

    Publication No.: US09197373B2

    Publication Date: 2015-11-24

    Application No.: US14107109

    Filing Date: 2013-12-16

    Abstract: The present invention discloses a method for retransmitting a data packet in a quick path interconnect (QPI) system, and a node. When a first node serves as the sending end, only the first data packet detected to be faulty is retransmitted to a second node, saving the system resources that the retransmission would otherwise occupy. When the first node serves as the receiving end, no packet loss occurs at the first node even though the second node retransmits only the second data packet detected to be faulty, thereby ensuring the reliability of data packet transmission over the QPI bus.
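The selective-retransmission idea in the abstract can be sketched as a toy model (all class and method names here are illustrative, not taken from the patent): the sender keeps each unacknowledged packet in a retry buffer and, on a negative acknowledgement, resends only the packet reported faulty rather than every packet from the error point onward, while the receiver tolerates out-of-order arrival so nothing is lost.

```python
# Toy model of selective retransmission (illustrative names, not from
# the patent): only the packet reported faulty is resent.
class SelectiveRepeatSender:
    def __init__(self):
        self.retry_buffer = {}   # seq -> payload, awaiting acknowledgement
        self.next_seq = 0

    def send(self, payload):
        seq = self.next_seq
        self.next_seq += 1
        self.retry_buffer[seq] = payload
        return seq, payload      # what goes on the link

    def on_ack(self, seq):
        self.retry_buffer.pop(seq, None)

    def retransmit(self, seq):
        # Resend only the faulty packet, not everything after it.
        return seq, self.retry_buffer[seq]


class SelectiveRepeatReceiver:
    def __init__(self):
        self.received = {}       # seq -> payload, possibly out of order

    def on_packet(self, seq, payload, corrupt=False):
        if corrupt:
            return ("NAK", seq)  # ask for that one packet again
        self.received[seq] = payload
        return ("ACK", seq)
```

In this sketch the packets sent after the faulty one are kept by the receiver, so retransmitting the single bad packet is enough to restore a complete, in-order stream.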


    Method, Apparatus, and System for Retransmitting Data Packet in Quick Path Interconnect System
    2.
    Invention patent application
    Method, Apparatus, and System for Retransmitting Data Packet in Quick Path Interconnect System (in force)

    Publication No.: US20140108878A1

    Publication Date: 2014-04-17

    Application No.: US14107109

    Filing Date: 2013-12-16

    Abstract: The present invention discloses a method for retransmitting a data packet in a quick path interconnect (QPI) system, and a node. When a first node serves as the sending end, only the first data packet detected to be faulty is retransmitted to a second node, saving the system resources that the retransmission would otherwise occupy. When the first node serves as the receiving end, no packet loss occurs at the first node even though the second node retransmits only the second data packet detected to be faulty, thereby ensuring the reliability of data packet transmission over the QPI bus.


    Translation lookaside buffer management method and multi-core processor

    Publication No.: US10795826B2

    Publication Date: 2020-10-06

    Application No.: US16178676

    Filing Date: 2018-11-02

    Abstract: A translation lookaside buffer (TLB) management method and a multi-core processor are provided. The method includes: receiving, by a first core, a first address translation request; querying a TLB of the first core based on the first address translation request; determining that a first target TLB entry corresponding to the first address translation request is missing from the TLB of the first core, and obtaining the first target TLB entry; determining that entry storage in the TLB of the first core is full; determining a second core from the cores in an idle state in the multi-core processor; replacing a first entry in the TLB of the first core with the first target TLB entry; and storing the first entry in a TLB of the second core. Accordingly, the TLB miss rate is reduced and program execution is accelerated.
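A minimal sketch of the spill policy described above, under the assumption of a simple per-core mapping table (class names, the FIFO eviction choice, and capacities are all illustrative): an entry evicted from a full TLB is parked in an idle core's TLB instead of being discarded, so a later miss on it can be served without another page-table walk.

```python
# Toy model of TLB spilling to an idle core (illustrative, not the
# patented implementation).
class CoreTLB:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}        # virtual page number -> physical page number

    def full(self):
        return len(self.entries) >= self.capacity


def translate(core, idle_core, vpage, page_table):
    if vpage in core.entries:                  # hit in the first core's TLB
        return core.entries[vpage]
    if vpage in idle_core.entries:             # recovered from the spill TLB
        ppage = idle_core.entries.pop(vpage)
    else:
        ppage = page_table[vpage]              # miss everywhere: walk the page table
    if core.full():
        # Evict the oldest entry (dicts preserve insertion order) and
        # spill it to the idle core instead of dropping it.
        victim_vpage, victim_ppage = next(iter(core.entries.items()))
        del core.entries[victim_vpage]
        idle_core.entries[victim_vpage] = victim_ppage
    core.entries[vpage] = ppage
    return ppage
```

Re-touching a spilled page then hits the idle core's TLB, which is the miss-rate reduction the abstract claims.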

    Method for accessing cache and pseudo cache agent
    4.
    Granted invention patent
    Method for accessing cache and pseudo cache agent (in force)

    Publication No.: US09465743B2

    Publication Date: 2016-10-11

    Application No.: US13719626

    Filing Date: 2012-12-19

    CPC classification number: G06F12/084 G06F12/0806 G06F12/0811 G06F2212/1012

    Abstract: Embodiments of the present invention disclose a method for accessing a cache and a pseudo cache agent (PCA). The method is applied to a multiprocessor system that includes at least one node controller (NC); at least one PCA conforming to a processor micro-architecture level interconnect protocol is embedded in the NC, the PCA is connected to at least one PCA storage device, and the PCA storage device stores data shared among the memories in the multiprocessor system. The method includes: when the NC receives a data request, obtaining, by the PCA, the target data required by the data request from the PCA storage device connected to the PCA, and sending the target data to the sender of the data request. Embodiments of the present invention are mainly applied to the process of accessing cache data in a multiprocessor system.
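The request path in the abstract can be modeled in a few lines (class names and the miss-handling fallback are assumptions for illustration): the PCA embedded in the node controller answers requests for shared data out of its attached storage, so only a miss has to go to remote memory.

```python
# Toy model of a pseudo cache agent serving shared data from its own
# storage device (illustrative, not the patented implementation).
class PseudoCacheAgent:
    def __init__(self, storage):
        self.storage = storage               # data shared among memories

    def handle(self, address):
        return self.storage.get(address)     # target data, or None on miss


class NodeController:
    def __init__(self, pca, remote_memory):
        self.pca = pca
        self.remote_memory = remote_memory

    def request(self, address):
        data = self.pca.handle(address)      # try the embedded PCA first
        if data is None:
            data = self.remote_memory[address]   # fall back to remote access
        return data
```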


    Computer system and memory access technology

    Publication No.: US11093245B2

    Publication Date: 2021-08-17

    Application No.: US16439335

    Filing Date: 2019-06-12

    Abstract: A computer system and a memory access technology are provided. In the computer system, when load/store instructions having a dependency relationship are processed, dependency information between a producer load/store instruction and a consumer load/store instruction can be obtained from a processor. A consumer load/store request is sent to a memory controller in the computer system based on the obtained dependency information, so that the memory controller can locally terminate the dependency relationship between load/store requests based on the dependency information carried in the received consumer load/store request, and then execute the consumer load/store request.
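One way to picture the local dependency termination (a speculative sketch; the request-ID scheme and forwarding rule are assumptions, not the patent's design): the consumer request names the producer store it depends on, so the memory controller can satisfy it from the producer's completed result without a round trip back to the core.

```python
# Toy memory controller that terminates a load/store dependency
# locally (illustrative names and protocol, not the patented design).
class MemoryController:
    def __init__(self):
        self.memory = {}
        self.completed_stores = {}   # producer request id -> (address, value)
        self.local_forwards = 0      # dependencies terminated in the controller

    def store(self, req_id, address, value):
        self.memory[address] = value
        self.completed_stores[req_id] = (address, value)

    def load(self, address, depends_on=None):
        dep = self.completed_stores.get(depends_on)
        if dep and dep[0] == address:
            self.local_forwards += 1     # no round trip back to the core
            return dep[1]
        return self.memory.get(address)
```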

    Method for accessing entry in translation lookaside buffer TLB and processing chip

    Publication No.: US10740247B2

    Publication Date: 2020-08-11

    Application No.: US16211225

    Filing Date: 2018-12-05

    Abstract: A method for accessing an entry in a translation lookaside buffer (TLB) and a processing chip are provided. In the method, the TLB includes at least one combination entry, and the combination entry includes a virtual huge page number identifying N consecutive virtual pages, a bit vector field, and a physical huge page number. The physical huge page number is an identifier of the N consecutive physical pages that correspond to the N consecutive virtual pages. One entry is thus used to represent a plurality of virtual-to-physical page mappings, so that when the page table length is fixed, the quantity of mappings the TLB covers can be increased exponentially, thereby increasing the TLB hit probability and reducing TLB misses. In this way, program processing delay is reduced and the processing efficiency of the processing chip is improved.
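A combination entry of this shape can be sketched directly (a toy model; N, the class name, and the assumption that the mapped physical pages are consecutive are illustrative): the triple (virtual huge page number, bit vector, physical huge page number) stands for up to N page mappings, with the bit vector marking which of the N pages are actually valid.

```python
# Toy combination-entry TLB (illustrative, not the patented design).
N = 8   # consecutive pages covered by one combination entry

class CombinationTLB:
    def __init__(self):
        # virtual huge page number -> [bit vector, physical huge page base]
        self.entries = {}

    def insert(self, vpage, ppage):
        # Assumes the physical pages for one huge page are consecutive,
        # as the abstract requires.
        vhuge, offset = divmod(vpage, N)
        entry = self.entries.setdefault(vhuge, [0, ppage - offset])
        entry[0] |= 1 << offset          # mark this page valid in the vector

    def lookup(self, vpage):
        vhuge, offset = divmod(vpage, N)
        entry = self.entries.get(vhuge)
        if entry and entry[0] & (1 << offset):
            return entry[1] + offset     # physical huge page base + offset
        return None                      # bit not set or no entry: TLB miss
```

Three mappings within the same huge page occupy a single entry here, which is the space saving that lets a fixed-size TLB cover many more pages.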
