Abstract:
A memory management method and a device, where the method includes: receiving a memory access request, where the memory access request carries a virtual address; determining a page fault type of the virtual address if no page table entry corresponding to the virtual address is found in either a translation lookaside buffer (TLB) or a memory; allocating a corresponding page to the virtual address if the page fault type of the virtual address is a blank-page-caused page fault, where the blank-page-caused page fault means that no corresponding page is allocated to the virtual address; and updating the page table entry corresponding to the virtual address to the memory and the TLB. When a blank-page-caused page fault occurs, the memory manager does not trigger page fault handling, but directly allocates a corresponding page to the virtual address. Therefore, the quantity of page fault occurrences is reduced, thereby improving memory management efficiency.
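A minimal C sketch of the flow described above, using a toy direct-mapped TLB and an array page table (both illustrative assumptions, not the disclosed structures): a miss on a never-allocated page is served by allocating a page on the spot and updating the page table entry and the TLB, instead of raising a page fault.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT 12
#define NPAGES     256   /* toy virtual address space: 256 pages */
#define TLB_SIZE   16    /* toy direct-mapped TLB                */

typedef struct { void *frame; int present; } pte_t;

static pte_t page_table[NPAGES];                       /* in-memory page table */
static struct { unsigned vpn; pte_t *pte; } tlb[TLB_SIZE];

static pte_t *lookup(uintptr_t vaddr)
{
    unsigned vpn = (unsigned)(vaddr >> PAGE_SHIFT) % NPAGES;
    unsigned set = vpn % TLB_SIZE;

    /* 1. TLB lookup */
    if (tlb[set].pte != NULL && tlb[set].vpn == vpn)
        return tlb[set].pte;

    /* 2. Page table lookup */
    pte_t *pte = &page_table[vpn];
    if (!pte->present) {
        /* Blank-page-caused page fault: no page was ever allocated to this
         * virtual address, so allocate one here instead of raising a fault. */
        pte->frame   = calloc(1, 1u << PAGE_SHIFT);
        pte->present = 1;
    }

    /* 3. Update the TLB with the (possibly new) page table entry. */
    tlb[set].vpn = vpn;
    tlb[set].pte = pte;
    return pte;
}

int main(void)
{
    pte_t *p = lookup(0x4A000);   /* first touch: blank page, allocated inline */
    printf("frame=%p present=%d\n", p->frame, p->present);
    return 0;
}
```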
Abstract:
In a method for accessing an extended memory, after receiving a first memory access request comprising a first access address from a processor system in a computer, an extended memory controller sends a read request for obtaining to-be-accessed data to the extended memory and returns, to the processor system, a first response message indicating that the to-be-accessed data has not been obtained. The extended memory controller writes the to-be-accessed data into a data buffer after receiving the to-be-accessed data returned by the extended memory. After receiving, from the processor system, a second memory access request comprising a second access address, the extended memory controller returns, to the processor system, the to-be-accessed data in the data buffer in response to the second memory access request, wherein the second access address is different from the first access address and points to the physical address of the to-be-accessed data.
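A compact C sketch of the deferred-read handshake the abstract describes; the response message type, the one-entry data buffer, and the synchronous stand-in for the extended memory are illustrative assumptions, not the disclosed controller design.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE 64

enum resp_kind { RESP_DATA_NOT_READY, RESP_DATA };

struct response {
    enum resp_kind kind;
    uint8_t data[LINE];
};

/* One-entry data buffer inside the extended memory controller. */
static struct { uint64_t phys; int valid; uint8_t data[LINE]; } data_buf;

/* Stand-in for the slow extended memory; a real device would return the
 * data asynchronously, some time after the read request is issued. */
static void extended_memory_read(uint64_t phys, uint8_t out[LINE])
{
    memset(out, (int)(phys & 0xFF), LINE);
}

/* First access: issue the read request and report "data not obtained". */
struct response first_access(uint64_t first_addr, uint64_t phys)
{
    struct response r = { .kind = RESP_DATA_NOT_READY };
    (void)first_addr;
    extended_memory_read(phys, data_buf.data);  /* read request to ext. memory */
    data_buf.phys  = phys;                      /* data is written into the    */
    data_buf.valid = 1;                         /* data buffer when it returns */
    return r;
}

/* Second access: a different access address that points to the same
 * physical data; served directly from the data buffer. */
struct response second_access(uint64_t second_addr, uint64_t phys)
{
    struct response r = { .kind = RESP_DATA_NOT_READY };
    (void)second_addr;
    if (data_buf.valid && data_buf.phys == phys) {
        r.kind = RESP_DATA;
        memcpy(r.data, data_buf.data, LINE);
    }
    return r;
}

int main(void)
{
    uint64_t phys = 0x1000;
    struct response a = first_access(0xA000, phys);   /* first access address  */
    struct response b = second_access(0xB000, phys);  /* second access address */
    printf("first ready=%d, second ready=%d, byte0=%u\n",
           a.kind == RESP_DATA, b.kind == RESP_DATA, (unsigned)b.data[0]);
    return 0;
}
```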
Abstract:
A method for reducing power consumption of a memory system and a memory controller are provided. The method for reducing power consumption of a memory system includes: determining whether a dynamic random access memory (DRAM) memory module with a low access frequency exists in the memory system; and when a DRAM memory module with a low access frequency exists, transferring, according to a size of a working set in the memory system, page data that does not belong to the working set to a non-volatile memory (NVM) memory module, where the page data that does not belong to the working set is page data that does not need to be accessed when a process runs within a preset time.
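The migration step can be sketched as follows in C; the access-frequency threshold, the page bookkeeping fields, and migrate_to_nvm are hypothetical stand-ins rather than the disclosed mechanism.

```c
#include <stdio.h>

#define NPAGES         8
#define LOW_FREQ_LIMIT 10  /* accesses per interval below which a DRAM
                            * module counts as "low access frequency"   */

struct page { int id; int in_working_set; int on_nvm; };

struct dram_module {
    unsigned access_count;       /* accesses during the last interval */
    struct page pages[NPAGES];
};

static void migrate_to_nvm(struct page *p)
{
    p->on_nvm = 1;               /* stand-in for the copy to the NVM module */
    printf("page %d moved to NVM\n", p->id);
}

/* If the module's access frequency is low, transfer every page that does not
 * belong to the working set (page data the running process will not touch
 * within the preset time) to the NVM module. */
static void reduce_power(struct dram_module *m)
{
    if (m->access_count >= LOW_FREQ_LIMIT)
        return;
    for (int i = 0; i < NPAGES; i++)
        if (!m->pages[i].in_working_set && !m->pages[i].on_nvm)
            migrate_to_nvm(&m->pages[i]);
}

int main(void)
{
    struct dram_module m = { .access_count = 3 };
    for (int i = 0; i < NPAGES; i++)
        m.pages[i] = (struct page){ .id = i, .in_working_set = (i < 2) };
    reduce_power(&m);
    return 0;
}
```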
Abstract:
A first level buffer chip gates a target second level buffer chip according to a preset mapping relationship, a first chip select signal, and a first higher-order address signal, and forwards a memory access instruction and a lower-order address signal received from a memory controller to the target second level buffer chip. The target second level buffer chip determines a target memory module according to a second chip select signal and a delayed address signal obtained by performing delay processing on a second higher-order address signal, determines a target memory chip according to the lower-order address signal, acquires target data from the target memory chip according to the memory access instruction, and returns the target data to the memory controller. A cascading manner of a system memory is changed to a tree-like topological form, which avoids a protocol conversion problem and reduces the memory access time.
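A C sketch of the first-level gating step: the chip select signal and the higher-order address index a preset mapping table that picks the target second level buffer chip, while the memory access instruction and lower-order address are forwarded unchanged. The table contents and bit widths are illustrative assumptions, not the disclosed mapping.

```c
#include <stdint.h>
#include <stdio.h>

#define N_CS 2   /* chip select lines from the memory controller */
#define N_HI 4   /* distinct higher-order address values         */

/* Preset mapping: (chip select, higher-order address) -> second level chip. */
static const int second_level_map[N_CS][N_HI] = {
    { 0, 0, 1, 1 },
    { 2, 2, 3, 3 },
};

struct forwarded {
    int      target_chip;   /* gated target second level buffer chip       */
    uint8_t  command;       /* memory access instruction, passed through   */
    uint32_t low_addr;      /* lower-order address signal, passed through  */
};

static struct forwarded first_level_gate(unsigned cs, uint32_t hi_addr,
                                         uint8_t command, uint32_t low_addr)
{
    struct forwarded f = {
        .target_chip = second_level_map[cs % N_CS][hi_addr % N_HI],
        .command     = command,
        .low_addr    = low_addr,
    };
    return f;
}

int main(void)
{
    struct forwarded f = first_level_gate(1, 2, 0x01 /* READ */, 0x3F0);
    printf("forward to chip %d, cmd=0x%02X, low_addr=0x%X\n",
           f.target_chip, (unsigned)f.command, (unsigned)f.low_addr);
    return 0;
}
```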
Abstract:
An address compression method, an address decompression method, a compressor, and a decompressor are disclosed, which can improve an address compression ratio. The address compression method includes: after a compressor receives multiple operation request messages that are sent by a first processor, determining, according to an address feature formed by address information carried in all operation request messages that have a same stream number, a compression algorithm corresponding to the operation request messages that have a same stream number; and then compressing, according to the determined compression algorithm, addresses carried in the operation request messages that have a same stream number. The present invention is applicable to the computer field.
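One plausible reading of the per-stream scheme, sketched in C under the assumption that the address feature is a regular stride and that the chosen compression algorithm is delta coding; neither assumption is taken from the disclosure.

```c
#include <stdint.h>
#include <stdio.h>

#define NREQ 4

struct request { int stream; uint64_t addr; };

/* Compressed form: the full base address once, then small deltas. */
struct compressed { uint64_t base; int32_t delta[NREQ - 1]; };

/* Compress the addresses of requests that share one stream number,
 * exploiting the address feature (here, a regular stride). */
static struct compressed compress_stream(const struct request r[NREQ])
{
    struct compressed c = { .base = r[0].addr };
    for (int i = 1; i < NREQ; i++)
        c.delta[i - 1] = (int32_t)(r[i].addr - r[i - 1].addr);
    return c;
}

int main(void)
{
    /* Four operation request messages with the same stream number. */
    struct request reqs[NREQ] = {
        { 7, 0x1000 }, { 7, 0x1040 }, { 7, 0x1080 }, { 7, 0x10C0 },
    };
    struct compressed c = compress_stream(reqs);
    printf("base=0x%llx deltas=%d,%d,%d\n", (unsigned long long)c.base,
           (int)c.delta[0], (int)c.delta[1], (int)c.delta[2]);
    return 0;
}
```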
Abstract:
A method for accessing an extended memory, a device, and a system are disclosed. According to the method, after receiving a first memory access request comprising a first access address from a processor system in a computer, an extended memory controller sends a read request for obtaining to-be-accessed data to the extended memory and returns, to the processor system, a first response message indicating that the to-be-accessed data has not been obtained. The extended memory controller writes the to-be-accessed data into a data buffer after receiving the to-be-accessed data returned by the extended memory. After receiving, from the processor system, a second memory access request comprising a second access address, the extended memory controller returns, to the processor system, the to-be-accessed data in the data buffer in response to the second memory access request, wherein the second access address is different from the first access address and points to the physical address of the to-be-accessed data.
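Seen from the processor-system side, the same handshake amounts to retrying with a different access address that resolves to the same physical data; a C sketch under assumed message formats (the controller stub below is illustrative, not the disclosed device).

```c
#include <stdint.h>
#include <stdio.h>

enum resp_kind { RESP_DATA_NOT_READY, RESP_DATA };

/* Stub extended memory controller: the first call for the data reports
 * "not obtained"; once the data has reached its buffer, later calls for
 * any access address pointing at the same physical address return it. */
static enum resp_kind controller_access(uint64_t access_addr, uint64_t phys,
                                        uint32_t *out)
{
    static int buffered = 0;
    (void)access_addr;
    if (!buffered) { buffered = 1; return RESP_DATA_NOT_READY; }
    *out = (uint32_t)phys ^ 0xDEADBEEFu;   /* pretend buffered data */
    return RESP_DATA;
}

int main(void)
{
    uint64_t phys        = 0x2000;   /* physical address of the data */
    uint64_t first_addr  = 0xA000;   /* first access address         */
    uint64_t second_addr = 0xB000;   /* different address, same data */
    uint32_t value       = 0;

    if (controller_access(first_addr, phys, &value) == RESP_DATA_NOT_READY)
        printf("first access: data not obtained yet, retry later\n");

    if (controller_access(second_addr, phys, &value) == RESP_DATA)
        printf("second access: got 0x%08X from the data buffer\n",
               (unsigned)value);
    return 0;
}
```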
Abstract:
A message-based memory access apparatus and an access method thereof are disclosed. The message-based memory access apparatus includes: a message-based command bus, configured to transmit a message-based memory access instruction generated by a CPU to instruct a memory system to perform a corresponding operation; a message-based memory controller, configured to package a CPU request into a message packet, send the packet to a storage module, parse a message packet returned by the storage module, and return data to the CPU; a message channel, configured to transmit a request message packet and a response message packet; and the storage module, including a buffer scheduler, and configured to receive the request packet from the message-based memory controller and process the corresponding request.
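A C sketch of the request/response path: the controller packages a CPU request into a message packet, the storage module parses it and answers with a response packet, and the controller hands the data back to the CPU. The packet fields and the toy storage array are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

enum op { OP_READ = 1, OP_WRITE = 2 };

struct msg_packet {          /* request or response on the message channel */
    uint8_t  op;
    uint64_t addr;
    uint32_t payload;
};

/* Storage module side: the buffer scheduler parses the request packet and
 * builds the response packet. */
static struct msg_packet storage_module_handle(struct msg_packet req)
{
    static uint32_t cells[256];                /* toy storage array */
    struct msg_packet resp = req;
    if (req.op == OP_WRITE)
        cells[req.addr % 256] = req.payload;
    else
        resp.payload = cells[req.addr % 256];
    return resp;
}

/* Message-based memory controller: package the CPU request into a message
 * packet, send it over the message channel, parse the returned packet. */
static uint32_t mem_read(uint64_t addr)
{
    struct msg_packet req  = { .op = OP_READ, .addr = addr };
    struct msg_packet resp = storage_module_handle(req);  /* via the channel */
    return resp.payload;
}

static void mem_write(uint64_t addr, uint32_t value)
{
    struct msg_packet req = { .op = OP_WRITE, .addr = addr, .payload = value };
    (void)storage_module_handle(req);
}

int main(void)
{
    mem_write(0x10, 42);
    printf("read back %u\n", (unsigned)mem_read(0x10));
    return 0;
}
```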
Abstract:
A first level buffer chip gates a target second level buffer chip according to a preset mapping relationship, a first chip select signal, and a first higher-order address signal, and forwards a memory access instruction and a lower-order address signal received from a memory controller to the target second level buffer chip. The target second level buffer chip determines a target memory module according to a second chip select signal and a delayed address signal obtained by performing delay processing on a second higher-order address signal, determines a target memory chip according to the lower-order address signal, acquires target data from the target memory chip according to the memory access instruction, and returns the target data to the memory controller. A cascading manner of a system memory is changed to a tree-like topological form, which avoids a protocol conversion problem and reduces the memory access time.
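A complementary C sketch for the second-level step: the second chip select signal and the delay-processed higher-order address select the target memory module, and the lower-order address selects the target memory chip within it. The decode formulas, widths, and the pass-through delay model are assumptions, not the disclosed decode logic.

```c
#include <stdint.h>
#include <stdio.h>

#define MODULES_PER_CHIP 4
#define CHIPS_PER_MODULE 8

/* Stand-in for delay processing: the value is consumed a cycle later but
 * is otherwise unchanged. */
static uint32_t delay_process(uint32_t hi_addr) { return hi_addr; }

struct target { int module; int chip; };

static struct target second_level_decode(unsigned cs2, uint32_t hi_addr,
                                         uint32_t low_addr)
{
    struct target t;
    uint32_t delayed = delay_process(hi_addr);
    t.module = (int)((cs2 * 2u + (delayed & 0x1u)) % MODULES_PER_CHIP);
    t.chip   = (int)(low_addr % CHIPS_PER_MODULE);
    return t;
}

int main(void)
{
    struct target t = second_level_decode(1, 0x3, 0x2A);
    printf("target memory module %d, target memory chip %d\n",
           t.module, t.chip);
    return 0;
}
```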
Abstract:
An address compression method, an address decompression method, a compressor, and a decompressor are disclosed. The address compression method includes: after a compressor receives multiple operation request messages that are sent by a first processor, determining, according to an address feature formed by address information carried in all operation request messages that have a same stream number, a compression algorithm corresponding to the operation request messages that have a same stream number; and then compressing, according to the determined compression algorithm, addresses carried in the operation request messages that have a same stream number. The present invention is applicable to the computer field.
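A decompressor-side counterpart to the compression sketch above, in C, restoring the original addresses from the per-stream base address and deltas; the compressed format is the same illustrative assumption, not the disclosed one.

```c
#include <stdint.h>
#include <stdio.h>

#define NREQ 4

struct compressed { uint64_t base; int32_t delta[NREQ - 1]; };

/* Restore the original addresses of the operation request messages from
 * the per-stream base address and deltas. */
static void decompress_stream(const struct compressed *c,
                              uint64_t out_addr[NREQ])
{
    out_addr[0] = c->base;
    for (int i = 1; i < NREQ; i++)
        out_addr[i] = out_addr[i - 1] + (uint64_t)(int64_t)c->delta[i - 1];
}

int main(void)
{
    struct compressed c = { .base = 0x1000, .delta = { 0x40, 0x40, 0x40 } };
    uint64_t addrs[NREQ];
    decompress_stream(&c, addrs);
    for (int i = 0; i < NREQ; i++)
        printf("addr[%d]=0x%llx\n", i, (unsigned long long)addrs[i]);
    return 0;
}
```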