-
Publication number: US10951741B2
Publication date: 2021-03-16
Application number: US15874852
Filing date: 2018-01-18
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Yun Chen , Haibin Wang , Xiongli Gu , Xiaosong Cui
Abstract: A computer device and a method for reading or writing data by a computer device are provided. In the computer device, a central processing unit (CPU) is connected to a cloud controller through a double data rate (DDR) interface. Because the DDR interface has a high data transmission rate, interruption of the CPU can be avoided. In addition, the CPU converts a read or write operation request into a control command and writes the control command into a transmission queue in the cloud controller. Because the cloud controller performs the read or write operation on a network device according to the operation information in the control command, the CPU does not need to wait for the operation performed by the cloud controller after writing the control command into the transmission queue and can continue to perform other processes.
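A minimal C sketch of the non-blocking command path described above, assuming the cloud controller's transmission queue is memory-mapped through the DDR interface; all names (cloud_cmd, tx_queue, CLOUD_CTRL_BASE, the descriptor fields) are hypothetical illustrations, not taken from the patent.

#include <stdint.h>

#define CLOUD_CTRL_BASE 0x80000000UL   /* assumed MMIO window of the cloud controller */
#define TX_QUEUE_DEPTH  64

typedef struct {
    uint8_t  opcode;        /* 0 = read, 1 = write (assumed encoding) */
    uint64_t remote_addr;   /* address on the target network device */
    uint64_t local_buf;     /* local buffer for the transfer */
    uint32_t length;        /* bytes to transfer */
} cloud_cmd;

typedef struct {
    volatile uint32_t head; /* advanced by the CPU */
    volatile uint32_t tail; /* advanced by the cloud controller */
    cloud_cmd ring[TX_QUEUE_DEPTH];
} tx_queue;

/* Post a control command into the transmission queue with ordinary DDR
 * stores and return immediately; the cloud controller performs the actual
 * network-device access asynchronously. */
static int post_command(tx_queue *q, const cloud_cmd *cmd)
{
    uint32_t next = (q->head + 1) % TX_QUEUE_DEPTH;
    if (next == q->tail)
        return -1;              /* queue full, caller may retry later */
    q->ring[q->head] = *cmd;    /* plain store, no interrupt involved */
    q->head = next;             /* publish the entry to the controller */
    return 0;                   /* CPU continues with other processes */
}

/* Usage sketch: convert a read request into a control command and post it. */
int read_remote(uint64_t remote, uint64_t buf, uint32_t len)
{
    tx_queue *q = (tx_queue *)CLOUD_CTRL_BASE;
    cloud_cmd c = { .opcode = 0, .remote_addr = remote,
                    .local_buf = buf, .length = len };
    return post_command(q, &c);
}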
-
Publication number: US10740247B2
Publication date: 2020-08-11
Application number: US16211225
Filing date: 2018-12-05
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Weiguang Cai , Xiongli Gu , Lei Fang
IPC: G06F12/00 , G06F13/00 , G06F12/1027
Abstract: A method for accessing an entry in a translation lookaside buffer (TLB) and a processing chip are provided. In the method, the TLB includes at least one combination entry, and the combination entry includes a virtual huge page number, a bit vector field, and a physical huge page number. The physical huge page number is an identifier of N consecutive physical pages corresponding to the N consecutive virtual pages identified by the virtual huge page number. One entry is thus used to represent a plurality of virtual-to-physical page mappings, so that when the page table length is fixed, the quantity of mappings covered by the TLB entries can be increased exponentially, thereby increasing the TLB hit probability and reducing TLB misses. In this way, the delay in program processing can be reduced, and the processing efficiency of the processing chip can be improved.
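A minimal C sketch of a combination TLB entry as described in the abstract, assuming N = 8 consecutive pages per entry; the field names and lookup logic are illustrative assumptions, not the patented implementation.

#include <stdbool.h>
#include <stdint.h>

#define N_SUBPAGES 8   /* assumed number of consecutive pages per combination entry */

typedef struct {
    uint64_t vhpn;          /* virtual huge page number (VPN / N_SUBPAGES) */
    uint8_t  valid_bitvec;  /* bit i set => i-th sub-page mapping is valid */
    uint64_t phpn;          /* physical huge page number (first of N consecutive PPNs) */
} combo_tlb_entry;

/* Translate a virtual page number against one combination entry.
 * Returns true and fills *ppn on a hit, false on a miss. */
static bool combo_lookup(const combo_tlb_entry *e, uint64_t vpn, uint64_t *ppn)
{
    uint64_t vhpn  = vpn / N_SUBPAGES;
    unsigned index = vpn % N_SUBPAGES;       /* which of the N sub-pages */

    if (e->vhpn != vhpn)
        return false;                        /* different huge page */
    if (!(e->valid_bitvec & (1u << index)))
        return false;                        /* this sub-page is not cached */

    *ppn = e->phpn * N_SUBPAGES + index;     /* N consecutive physical pages */
    return true;
}

Because a single entry can answer up to N translations, the same TLB storage covers a larger address range, which is what raises the hit probability described in the abstract.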
-
Publication number: US10795826B2
Publication date: 2020-10-06
Application number: US16178676
Filing date: 2018-11-02
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Lei Fang , Weiguang Cai , Xiongli Gu
IPC: G06F12/10 , G06F12/1027 , G06F12/0806 , G06F12/128 , G06F12/0842 , G06F12/1009 , G06F12/0811
Abstract: A translation lookaside buffer (TLB) management method and a multi-core processor are provided. The method includes: receiving, by a first core, a first address translation request; querying a TLB of the first core based on the first address translation request; determining that a first target TLB entry corresponding to the first address translation request is missing in the TLB of the first core and obtaining the first target TLB entry; determining that entry storage in the TLB of the first core is full; determining a second core from cores in an idle state in the multi-core processor; replacing a first entry in the TLB of the first core with the first target TLB entry; and storing the first entry in a TLB of the second core. Accordingly, the TLB miss rate is reduced and program execution is accelerated.
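A minimal C sketch of the eviction path described above, in which a full TLB on the first core pushes the replaced entry into an idle core's TLB instead of discarding it; the structures, victim choice, and idle-core query are simplified assumptions, not the patented design.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TLB_CAPACITY 64
#define NUM_CORES    4

typedef struct { uint64_t vpn, ppn; } tlb_entry;
typedef struct { tlb_entry entries[TLB_CAPACITY]; size_t used; bool idle; } core_tlb;

static core_tlb cores[NUM_CORES];

/* Find an idle core with spare TLB capacity, or -1 if none exists. */
static int pick_idle_core(int requester)
{
    for (int i = 0; i < NUM_CORES; i++)
        if (i != requester && cores[i].idle && cores[i].used < TLB_CAPACITY)
            return i;
    return -1;
}

/* Install a newly obtained entry on core `self`; if its TLB is full,
 * move a victim entry to an idle core's TLB rather than dropping it. */
static void install_entry(int self, tlb_entry fresh)
{
    core_tlb *tlb = &cores[self];

    if (tlb->used == TLB_CAPACITY) {
        tlb_entry victim = tlb->entries[0];  /* simplistic victim choice */
        int target = pick_idle_core(self);
        if (target >= 0)
            cores[target].entries[cores[target].used++] = victim;
        tlb->entries[0] = fresh;             /* replace the victim in place */
        return;
    }
    tlb->entries[tlb->used++] = fresh;
}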
-
Publication number: US20180159963A1
Publication date: 2018-06-07
Application number: US15874852
Filing date: 2018-01-18
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Yun Chen , Haibin Wang , Xiongli Gu , Xiaosong Cui
CPC classification number: H04L69/10 , G06F1/16 , G06F3/061 , G06F3/0611 , G06F3/0659 , G06F3/067 , G06F13/32 , H04L69/02
Abstract: A computer device and a method for reading or writing data by a computer device are provided. In the computer device, a central processing unit (CPU) is connected to a cloud controller through a double data rate (DDR) interface. Because the DDR interface has a high data transmission rate, interruption of the CPU can be avoided. In addition, the CPU converts a read or write operation request into a control command and writes the control command into a transmission queue in the cloud controller. Because the cloud controller performs the read or write operation on a network device according to the operation information in the control command, the CPU does not need to wait for the operation performed by the cloud controller after writing the control command into the transmission queue and can continue to perform other processes.
-
Publication number: US20160234311A1
Publication date: 2016-08-11
Application number: US15131449
Filing date: 2016-04-18
Applicant: Huawei Technologies Co., Ltd.
Inventor: Xiongli Gu , Haibin Wang , Xiaosong Cui
Abstract: A memory access device allocates a memory resource to a node in a system, reducing the maintenance overhead of the system and enabling flexible scheduling of memory resources. The device includes a cloud control device on the side of a requesting node and a cloud control device on the side of a contributing node. The cloud control device on the side of the requesting node generates a request packet for accessing to-be-accessed data stored in the contributing node, which provides the memory resource, and sends the request packet to the contributing node. The cloud control device on the side of the contributing node receives the request packet, provides a request message to the contributing node, generates a response packet for the contributing node, and sends the response packet to the requesting node.
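A minimal C sketch of the request/response exchange described above, with the requesting-node side building a request packet and the contributing-node side serving it from the contributed memory region; the packet layout and function names are assumptions for illustration, not the patented format.

#include <stdint.h>
#include <string.h>

typedef struct {
    uint64_t remote_addr;   /* offset of the to-be-accessed data */
    uint32_t length;        /* bytes requested */
    uint32_t requester_id;  /* identifies the requesting node */
} request_packet;

typedef struct {
    uint32_t requester_id;
    uint32_t length;
    uint8_t  payload[256];  /* data read from the contributed memory */
} response_packet;

/* Requesting-node side: build a request packet for remote data. */
static request_packet make_request(uint32_t requester, uint64_t addr, uint32_t len)
{
    request_packet p = { .remote_addr = addr, .length = len,
                         .requester_id = requester };
    return p;
}

/* Contributing-node side: serve the request from the contributed memory
 * region and build the response packet to send back to the requester. */
static response_packet serve_request(const request_packet *req,
                                     const uint8_t *contributed_mem)
{
    response_packet r = { .requester_id = req->requester_id, .length = 0 };
    uint32_t n = req->length > sizeof r.payload ? (uint32_t)sizeof r.payload
                                                : req->length;
    memcpy(r.payload, contributed_mem + req->remote_addr, n);
    r.length = n;
    return r;
}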