Abstract:
One embodiment is a main memory that includes a combination of non-volatile memory (NVM) and dynamic random access memory (DRAM). An operating system migrates data between the NVM and the DRAM.
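As a rough illustration of how such OS-directed migration could work, the following Python sketch promotes frequently accessed pages from NVM to DRAM and demotes cold ones each epoch. The thresholds, epoch structure, and page-tracking data structures are assumptions for illustration, not details of the embodiment.

```python
HOT_THRESHOLD = 64   # accesses before a page counts as "hot" (assumed)
COLD_THRESHOLD = 4   # DRAM pages below this are demoted (assumed)

class HybridMemoryManager:
    def __init__(self, dram_capacity_pages):
        self.dram_capacity = dram_capacity_pages
        self.location = {}       # page id -> "DRAM" or "NVM"
        self.access_count = {}   # page id -> accesses this epoch

    def record_access(self, page):
        self.location.setdefault(page, "NVM")  # new pages start in NVM
        self.access_count[page] = self.access_count.get(page, 0) + 1

    def migrate_epoch(self):
        """Run once per epoch: promote hot NVM pages, demote cold DRAM pages."""
        dram_pages = [p for p, loc in self.location.items() if loc == "DRAM"]
        for page, count in self.access_count.items():
            if self.location[page] == "NVM" and count >= HOT_THRESHOLD:
                if len(dram_pages) >= self.dram_capacity:
                    # evict the coldest DRAM page to make room
                    coldest = min(dram_pages, key=lambda p: self.access_count.get(p, 0))
                    self.location[coldest] = "NVM"
                    dram_pages.remove(coldest)
                self.location[page] = "DRAM"
                dram_pages.append(page)
            elif self.location[page] == "DRAM" and count < COLD_THRESHOLD:
                self.location[page] = "NVM"
                if page in dram_pages:
                    dram_pages.remove(page)
        self.access_count.clear()  # start a fresh counting epoch

mm = HybridMemoryManager(dram_capacity_pages=2)
for _ in range(64):
    mm.record_access("page-A")   # heavily accessed: becomes hot
mm.record_access("page-B")       # touched once: stays in NVM
mm.migrate_epoch()
print(mm.location)  # {'page-A': 'DRAM', 'page-B': 'NVM'}
```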
Abstract:
Example methods, apparatus, and articles of manufacture to access memory are disclosed. A disclosed example method involves receiving at least one runtime characteristic associated with accesses to contents of a memory page and dynamically adjusting a memory fetch width for accessing the memory page based on the at least one runtime characteristic.
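A minimal sketch of the adjustment policy, assuming the runtime characteristic is the fraction of fetched bytes actually used by the program; the candidate widths and thresholds are illustrative, not taken from the disclosure.

```python
FETCH_WIDTHS = [64, 128, 256]  # bytes; assumed candidate fetch widths

def adjust_fetch_width(current_width, used_fraction):
    """Widen fetches when most fetched data is used (good spatial
    locality); narrow them when most of it is wasted."""
    i = FETCH_WIDTHS.index(current_width)
    if used_fraction > 0.75 and i < len(FETCH_WIDTHS) - 1:
        return FETCH_WIDTHS[i + 1]   # locality is high: fetch wider
    if used_fraction < 0.25 and i > 0:
        return FETCH_WIDTHS[i - 1]   # locality is low: fetch narrower
    return current_width

# Example: a page where only 10% of fetched bytes were touched
print(adjust_fetch_width(128, 0.10))  # -> 64
```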
Abstract:
A scalable, multi-tenant network architecture for a virtualized datacenter is provided. The network architecture includes a network having a plurality of servers connected to a plurality of switches. The plurality of servers hosts a plurality of virtual interfaces for a plurality of tenants. A configuration repository is connected to the network and each server in the plurality of servers has a network agent hosted therein. The network agent encapsulates packets for transmission across the network from a source virtual interface to a destination virtual interface in the plurality of virtual interfaces for a tenant in the plurality of tenants. The packets are encapsulated with information identifying and locating the destination virtual interface, and the information is interpreted by switches connected to the source virtual interface and the destination virtual interface.
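The encapsulation step might look roughly like the following sketch, where a directory (standing in for the configuration repository) maps a tenant's virtual interface to its location in the fabric; the header layout and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VIFLocation:
    switch_id: int   # edge switch the hosting server attaches to
    port: int        # switch port of the hosting server

# Directory populated from the configuration repository (assumed shape).
directory = {
    ("tenant-a", "vif-7"): VIFLocation(switch_id=3, port=12),
}

def encapsulate(tenant, dst_vif, payload: bytes) -> bytes:
    """Prefix the packet with identifying and locating information that
    the edge switches can interpret to forward across the fabric."""
    loc = directory[(tenant, dst_vif)]
    header = f"{tenant}|{dst_vif}|{loc.switch_id}|{loc.port}|".encode()
    return header + payload

print(encapsulate("tenant-a", "vif-7", b"app-data"))
```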
Abstract:
Connectors of a first removable modular optical connection assembly, having a first predefined arrangement of optical signal conduits, are connected to respective connectors on a support structure that are optically connected to corresponding devices. The first modular optical connection assembly is replaceable with a second modular optical connection assembly having a second, different predefined arrangement of optical signal conduits, to change a topology of a network.
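One way to picture the effect of swapping assemblies is to model each assembly as a fixed wiring map among the support structure's connectors; the connector names and the two example arrangements below are purely illustrative.

```python
# First assembly: a ring among four device connectors (assumed example)
ring_assembly = {"c0": "c1", "c1": "c2", "c2": "c3", "c3": "c0"}

# Second assembly: a star homed on connector c0 (assumed example)
star_assembly = {"c1": "c0", "c2": "c0", "c3": "c0"}

def topology_edges(assembly):
    """Derive the device-to-device links implied by an assembly's
    predefined arrangement of optical signal conduits."""
    return sorted({tuple(sorted(pair)) for pair in assembly.items()})

print(topology_edges(ring_assembly))  # ring links
print(topology_edges(star_assembly))  # star links after the swap
```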
Abstract:
A method of generating a plurality of potential network topologies is provided herein. The method includes receiving parameters that specify a number of servers, a number of switches, and a number of ports in the switches. The parameters are for configuring a network topology. The method also includes generating, for each of a number of dimensions, one or more potential network topologies that make up a set of potential network topologies. The number of dimensions is based on the number of switches. The method further includes determining that the set of potential network topologies is structurally feasible. Additionally, the method includes determining an optimal link aggregation (LAG) factor in each dimension of each topology in the set of potential network topologies.
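A simplified sketch of that search loop follows: it enumerates dimension shapes whose product equals the switch count, applies a port-budget feasibility test, and spends leftover ports as a per-dimension LAG factor. The feasibility test and LAG rule here are assumed simplifications, not the patented method.

```python
from itertools import product

def candidate_topologies(num_switches, max_dims):
    """Yield dimension-size tuples whose product equals num_switches."""
    for dims in range(1, max_dims + 1):
        for shape in product(range(1, num_switches + 1), repeat=dims):
            prod = 1
            for s in shape:
                prod *= s
            if prod == num_switches and all(s > 1 for s in shape):
                yield shape

def feasible(shape, ports_per_switch, servers, num_switches):
    """Structural feasibility: every switch needs enough ports for its
    share of servers plus one link to each peer in every dimension."""
    fabric_ports = sum(s - 1 for s in shape)
    server_ports = -(-servers // num_switches)   # ceiling division
    return fabric_ports + server_ports <= ports_per_switch

def lag_factors(shape, ports_per_switch, servers, num_switches):
    """Spend leftover ports as parallel links (LAG), spread evenly."""
    server_ports = -(-servers // num_switches)
    spare = ports_per_switch - server_ports - sum(s - 1 for s in shape)
    base = 1 + spare // sum(s - 1 for s in shape)
    return [base] * len(shape)

num_switches, ports, servers = 8, 16, 32
for shape in candidate_topologies(num_switches, max_dims=3):
    if feasible(shape, ports, servers, num_switches):
        print(shape, "LAG per dimension:",
              lag_factors(shape, ports, servers, num_switches))
```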
Abstract:
A computer system uses a prefetch prediction model having energy usage parameters to predict the impact of prefetching specified files on the system's energy usage. A prefetch prediction engine utilizes the prefetch prediction model to evaluate the specified files with respect to prefetch criteria, including energy efficiency prefetch criteria, and generates a prefetch decision with respect to each file of the specified files. For each specified file for which the prefetch prediction engine generates an affirmative prefetch decision, an identifying entry is stored in a queue. The computer system fetches files identified by entries in the queue, although some or all of the entries in the queue at any one time may be deleted if it is determined that the identified files are no longer likely to be needed by the computer system.
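A rough sketch of the decision flow, with an assumed two-parameter energy model (per-byte transfer energy plus a fixed wake-up cost) standing in for the prefetch prediction model's energy usage parameters:

```python
from collections import deque

# Assumed energy-usage parameters of the prediction model
ENERGY_PER_BYTE_FETCH = 2e-9   # joules per byte transferred (assumed)
ENERGY_WAKE_COST = 0.5         # joules to wake the radio/disk (assumed)

def affirmative(file_size, hit_probability, energy_budget):
    """Prefetch only if the expected energy spent is within budget and
    the file is likely enough to be used."""
    expected_energy = ENERGY_WAKE_COST + file_size * ENERGY_PER_BYTE_FETCH
    return hit_probability >= 0.5 and expected_energy <= energy_budget

prefetch_queue = deque()
candidates = [("a.dat", 10_000_000, 0.9), ("b.dat", 500_000_000, 0.6)]
for name, size, prob in candidates:
    if affirmative(size, prob, energy_budget=1.0):
        prefetch_queue.append(name)   # store an identifying entry

# Later: entries whose files are no longer likely to be needed are removed
still_needed = {"a.dat"}
prefetch_queue = deque(e for e in prefetch_queue if e in still_needed)
print(list(prefetch_queue))  # files left for the system to fetch
```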
Abstract:
A system receives a flow of data packets via the link and determines a target bandwidth to be allocated to the flow on the link. In response to the flow, the receiving system transmits data to the sending system. The transmitted data control the sending system such that when the sending system transmits subsequent data packets to the receiving system, such subsequent data packets are transmitted at a rate approximating the target bandwidth allocated to the flow. In one embodiment, the rate at which the transmitted data from the receiving system arrive at the sending system determines the rate at which the sending system transmits the subsequent data packets. The receiving system can control the rate by delaying its response to the sending system according to a calculated delay factor. In another embodiment, the data transmitted from the receiving system to the sending system indicate a maximum amount of data that the receiving system will accept from the sending system in a subsequent data transmission. The maximum amount is determined so that when the sending system transmits subsequent data packets according to that amount, data is transmitted by the sending system to the receiving system at a rate approximating the target bandwidth.
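Both embodiments reduce to simple arithmetic, sketched below with assumed packet sizes and target values: the first computes a response delay so the sender's observed rate approximates the target, and the second computes a maximum amount to advertise per transmission interval.

```python
def ack_delay(bytes_acknowledged, target_bandwidth_bps, processing_time_s):
    """First embodiment: delay the response so that bytes_acknowledged
    divided by the total elapsed time approximates the target bandwidth."""
    ideal_elapsed = (bytes_acknowledged * 8) / target_bandwidth_bps
    return max(0.0, ideal_elapsed - processing_time_s)

def advertised_window(target_bandwidth_bps, interval_s):
    """Second embodiment: cap the next transmission so sending it over
    one interval yields approximately the target bandwidth."""
    return int(target_bandwidth_bps * interval_s / 8)

# Example: a 1 Mbit/s target for the flow (assumed values)
print(ack_delay(12_000, target_bandwidth_bps=1_000_000,
                processing_time_s=0.01))        # ~0.086 s delay factor
print(advertised_window(1_000_000, interval_s=0.1))  # 12500 bytes
```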
Abstract:
A cache memory system and method are described for selectively removing, from a virtually addressed cache, stale "aliased" entries, which arise when portions of several address spaces are mapped into a single region of real memory. The cache memory system includes a central processor unit (CPU) and a first-level cache on an integrated circuit chip. The CPU receives tag and data information from the first-level cache via virtual address lines and data lines, respectively. An off-chip second-level cache is additionally coupled to provide data to the data lines. The CPU is coupled to a translation lookaside buffer (TLB) via the virtual address lines, while the second-level cache is coupled to the TLB via physical address lines. The first- and second-level caches each comprise a plurality of entries. Each of the entries includes a status bit indicating possible membership in a class of entries that might require flushing. Address translation database entries (page table entries or TLB entries) are augmented with a field that contains the appropriate value of the status bit of each first- and second-level cache entry. Status bits are set for any page in which stale aliases may potentially occur (i.e., those shared pages that can be modified by at least one process or device). The cache-fill mechanism includes a path combining the status bits with the data being loaded into the first-level cache.
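The fill path and selective flush could be sketched as follows, with a dictionary standing in for the cache and per-page status bits standing in for the augmented translation entries; the structures are simplified assumptions.

```python
class CacheEntry:
    def __init__(self, vaddr, data, alias_status):
        self.vaddr = vaddr
        self.data = data
        self.alias_status = alias_status  # set if the page may hold stale aliases

# Per-page status bits, as the augmented translation entries would hold (assumed)
page_table_status = {0x1000: True, 0x2000: False}

cache = {}

def fill(vaddr, data):
    """Cache-fill path: combine the page's status bit with the data
    being loaded, as the abstract's fill mechanism describes."""
    page = vaddr & ~0xFFF
    cache[vaddr] = CacheEntry(vaddr, data, page_table_status.get(page, False))

def selective_flush():
    """Remove only entries that might be stale aliases, leaving the
    rest of the cache intact."""
    for vaddr in [v for v, e in cache.items() if e.alias_status]:
        del cache[vaddr]

fill(0x1008, b"shared")   # page 0x1000 is shared and writable: bit set
fill(0x2010, b"private")  # page 0x2000 is private: bit clear
selective_flush()
print(sorted(hex(v) for v in cache))  # only the private-page entry remains
```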