Abstract:
A memory allocation method and a device. The method is applied to a computer system including a processor and a memory. After receiving a memory access request carrying a to-be-accessed virtual address and determining that no memory page has been allocated to the virtual address, the processor selects a target rank group from at least two rank groups of the memory based on access traffic of the rank groups. The processor then selects, from idle memory pages, a to-be-allocated memory page for the virtual address, where information at a first preset location in a physical address of the to-be-allocated memory page is the same as first portions of address information in addresses of ranks in the target rank group.
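The allocation policy can be illustrated with a minimal Python sketch. All names and values below are assumptions for illustration: traffic_per_group maps a rank-group id to its accumulated access traffic, free_pages holds idle physical page numbers, and the bit position that identifies a rank group within a physical address is invented, not taken from the abstract.

```python
RANK_BITS_SHIFT = 14   # assumed "first preset location" in the physical address
RANK_BITS_MASK = 0b11  # assumed width of the rank-group identifier

def pick_target_group(traffic_per_group):
    # Steer new pages toward the rank group with the least access traffic.
    return min(traffic_per_group, key=traffic_per_group.get)

def allocate_page(virtual_addr, free_pages, traffic_per_group):
    group = pick_target_group(traffic_per_group)
    for page in free_pages:
        # Keep the idle page whose physical-address bits at the preset
        # location equal the bits identifying the target rank group.
        if (page >> RANK_BITS_SHIFT) & RANK_BITS_MASK == group:
            free_pages.remove(page)
            return page  # the caller would map virtual_addr to this page
    return None          # no idle page left in the target rank group

if __name__ == "__main__":
    free = [0x0000, 0x4000, 0x8000, 0xC000]
    traffic = {0: 120, 1: 30, 2: 75, 3: 90}
    print(hex(allocate_page(0x7F000000, free, traffic)))  # page in group 1
```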
Abstract:
A memory refresh method is applied to a computer system including a processor, a memory controller, and a dynamic random access memory (DRAM). The memory controller receives a first plurality of access requests from the processor. The memory controller refreshes a first rank in a plurality of ranks at a shortened interval set to T/N when a quantity of target ranks to be accessed by the first plurality of access requests is less than a first threshold and a proportion of read requests in the first plurality of access requests or a proportion of write requests in the first plurality of access requests is greater than a second threshold. T is a standard average refresh interval, and N is greater than 1. The memory refresh technology provided in this application can improve performance of the computer system in a memory refresh process.
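A minimal sketch of the refresh-interval decision follows. The values of T_REF_US, N, and both thresholds, as well as the request format, are assumed for illustration only.

```python
T_REF_US = 7.8   # assumed standard average refresh interval, in microseconds
N = 4            # shortening factor, N > 1
RANK_COUNT_THRESHOLD = 2   # assumed "first threshold" on the number of target ranks
RW_RATIO_THRESHOLD = 0.9   # assumed "second threshold" on the read or write proportion

def choose_refresh_interval(requests):
    """requests: list of (rank_id, op) tuples, op in {'read', 'write'}."""
    target_ranks = {rank for rank, _ in requests}
    reads = sum(1 for _, op in requests if op == "read")
    read_ratio = reads / len(requests)
    write_ratio = 1.0 - read_ratio
    if len(target_ranks) < RANK_COUNT_THRESHOLD and (
        read_ratio > RW_RATIO_THRESHOLD or write_ratio > RW_RATIO_THRESHOLD
    ):
        # Traffic is concentrated on few ranks and dominated by one direction:
        # refresh the first rank more often, at the shortened interval T/N.
        return T_REF_US / N
    return T_REF_US

if __name__ == "__main__":
    burst = [(0, "read")] * 19 + [(0, "write")]
    print(choose_refresh_interval(burst))  # 1.95, the shortened interval
```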
Abstract:
A data transmission method and apparatus, where the method comprises checking full-bandwidth transmission paths of a bus; when a fault occurs in the full-bandwidth transmission paths and a quantity of faulty full-bandwidth transmission paths is less than or equal to M, selecting N full-bandwidth transmission paths from the full-bandwidth transmission paths that are not faulty to transmit a data unit; and when a fault occurs in the full-bandwidth transmission paths and the quantity of faulty full-bandwidth transmission paths is greater than M, reconfiguring a size of the data unit according to a quantity of full-bandwidth transmission paths that are not faulty and a target burst quantity.
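The selection-versus-reconfiguration decision can be sketched as follows. TOTAL_PATHS, N, M, and the way the reduced data-unit size is derived from the target burst quantity are all assumptions made for illustration.

```python
TOTAL_PATHS = 16
N = 12   # lanes used to carry one data unit at full bandwidth (assumed)
M = 4    # maximum number of faulty lanes tolerated without resizing (assumed)

def plan_transmission(faulty, target_burst_quantity, full_unit_size):
    healthy = [p for p in range(TOTAL_PATHS) if p not in faulty]
    if len(faulty) <= M:
        # Enough healthy lanes: keep the data-unit size and pick N of them.
        return {"paths": healthy[:N], "unit_size": full_unit_size}
    # Too many faults: shrink the data unit so the remaining lanes can still
    # deliver it within the target burst quantity.
    new_size = len(healthy) * target_burst_quantity
    return {"paths": healthy, "unit_size": new_size}

if __name__ == "__main__":
    print(plan_transmission({3}, target_burst_quantity=8, full_unit_size=96))
    print(plan_transmission({0, 1, 2, 3, 4, 5}, target_burst_quantity=8, full_unit_size=96))
```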
Abstract:
Embodiments of the present disclosure provide a data transmission method, which meets a requirement for an Ethernet network with diversified rate levels. The method includes: grouping media access control (MAC) layer data into a plurality of MAC layer data groups; allocating, according to a bandwidth required by a target MAC layer data group and a reference bandwidth of a logical channel, at least one target logical channel to the target MAC layer data group; encoding the target MAC layer data group to generate target physical layer data, where the target logical channel corresponds to the target MAC layer data group and the target physical layer data; and sending the target physical layer data and first indication information, where the first indication information is used to indicate a relationship between the target physical layer data and the target logical channel.
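The channel-allocation step lends itself to a short sketch. The fixed reference bandwidth per logical channel and the group names are assumed, and the returned group-to-channel mapping merely stands in for the first indication information described above.

```python
import math

REFERENCE_BW_GBPS = 5  # assumed reference bandwidth of one logical channel

def allocate_logical_channels(groups):
    """groups: dict mapping a MAC-layer data group to its required bandwidth in Gbps.
    Returns a mapping from each group to the list of logical channels allocated to it."""
    allocation, next_channel = {}, 0
    for name, required_bw in groups.items():
        # Allocate enough reference-bandwidth channels to cover the requirement.
        count = math.ceil(required_bw / REFERENCE_BW_GBPS)
        allocation[name] = list(range(next_channel, next_channel + count))
        next_channel += count
    return allocation

if __name__ == "__main__":
    # A 25G group needs 5 channels, a 10G group 2, and a 40G group 8.
    print(allocate_logical_channels({"flow_a": 25, "flow_b": 10, "flow_c": 40}))
```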
Abstract:
Embodiments of the present invention provide a method and an apparatus for increasing and decreasing variable optical channel bandwidth. The method for increasing includes: sending, by a first network element (NE) to a second NE, a higher order optical channel data unit (HO ODU) frame to which a timeslot increase indication is added; starting from a next HO ODU frame, mapping, by the first NE, a bit stream formed by a flexible optical transport data unit (ODUflex) bit stream at a first rate and an idle data bit stream to Y timeslots of the HO ODU frame; sending an ODUflex frame to which a rate increase indication is added to the second NE; and starting from a next ODUflex frame, mapping an ODUflex bit stream at a second rate to the Y timeslots of the HO ODU frame.
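A simplified sketch of the increase procedure follows. The frame model, the overhead field names (timeslot_increase, rate_increase), and the byte-per-timeslot payload are assumptions made purely for illustration of the ordering of the steps.

```python
def increase_oduflex_bandwidth(old_slots_x, new_slots_y, rate1_stream, rate2_stream):
    """Yield successive (overhead, payload) tuples for frames sent to the second NE
    while the ODUflex bandwidth is increased from X to Y timeslots (X < Y)."""
    # Frame carrying the timeslot-increase indication; payload still uses X slots.
    yield ({"timeslot_increase": new_slots_y}, rate1_stream[:old_slots_x])
    # From the next HO ODU frame: the rate-1 bit stream plus idle bits fill all Y slots.
    idle = b"\x00" * (new_slots_y - old_slots_x)
    yield ({}, rate1_stream[:old_slots_x] + idle)
    # ODUflex frame carrying the rate-increase indication.
    yield ({"rate_increase": True}, rate1_stream[:old_slots_x] + idle)
    # From the next ODUflex frame: the rate-2 bit stream occupies the Y slots.
    yield ({}, rate2_stream[:new_slots_y])

if __name__ == "__main__":
    for overhead, payload in increase_oduflex_bandwidth(2, 3, b"AB", b"XYZ"):
        print(overhead, payload)
```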
Abstract:
Embodiments of the present disclosure disclose a method for controlling data stream switching and related equipment. The method includes: obtaining bandwidth demand information of a data stream; calculating a BWM according to the bandwidth demand information, physical bandwidth of at least one ingress port and at least one egress port of the data stream, and time division multiplexing (TDM) service bandwidth information; sorting entries of the BWM to obtain a bandwidth sequencing information table; performing even cell sequencing processing on the data stream according to the bandwidth sequencing information table to obtain a cell table; and controlling sending of cells of the data stream according to the cell table. Through the provided solutions, processing complexity may be effectively reduced, the problem of scale limitation on a bufferless switch structure is solved, and delay jitter during switch processing is also decreased.
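The BWM and cell-table steps can be sketched as follows. Reading the BWM as an ingress-by-egress table and the even-spacing rule used below are assumptions for illustration, not the exact construction described in the abstract.

```python
def build_bwm(demands, ingress_bw, egress_bw, tdm_bw):
    """demands: dict (ingress, egress) -> requested bandwidth.
    Clamp each entry to what the ports can still offer after TDM traffic."""
    bwm = {}
    for (i, e), bw in demands.items():
        available = min(ingress_bw[i], egress_bw[e]) - tdm_bw.get((i, e), 0)
        bwm[(i, e)] = max(0, min(bw, available))
    return bwm

def build_cell_table(bwm, slots):
    # Sort entries by granted bandwidth (the bandwidth sequencing table),
    # then spread each flow's cells evenly over a ring of sending slots.
    order = sorted(bwm, key=bwm.get, reverse=True)
    total = sum(bwm.values())
    table = [None] * slots
    if total == 0:
        return table
    for flow in order:
        quota = min(round(slots * bwm[flow] / total), table.count(None))
        step = slots / max(quota, 1)
        pos, placed = 0.0, 0
        while placed < quota:
            idx = int(pos) % slots
            while table[idx] is not None:   # walk forward to the next free slot
                idx = (idx + 1) % slots
            table[idx] = flow
            pos += step
            placed += 1
    return table

if __name__ == "__main__":
    bwm = build_bwm({(0, 0): 40, (0, 1): 20, (1, 1): 20},
                    ingress_bw={0: 100, 1: 100}, egress_bw={0: 100, 1: 100},
                    tdm_bw={})
    print(build_cell_table(bwm, slots=8))
```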