Abstract:
In an embodiment, a shared memory fabric is configured to receive memory requests from multiple agents, where at least some of the requests have an associated order identifier and a deadline value indicating a maximum latency before completion of the memory request. Responsive to the requests, the fabric is to arbitrate among the requests based at least in part on the deadline values. Other embodiments are described and claimed.
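The abstract does not pin down the arbitration policy itself; the sketch below assumes a simple earliest-deadline-first scheme, with ties broken by the order identifier. The type names, field names, and the policy are hypothetical illustrations, not the claimed design.

```c
/* Minimal sketch of deadline-based arbitration in a shared memory
 * fabric, assuming an earliest-deadline-first policy. The struct
 * layout and function names are hypothetical. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef struct {
    int      agent_id;  /* which agent issued the request             */
    uint32_t order_id;  /* order identifier for intra-agent ordering  */
    uint32_t deadline;  /* maximum latency before completion (cycles) */
} mem_request_t;

/* Grant the pending request with the nearest deadline; break ties on
 * the order identifier so earlier requests are not starved. */
static const mem_request_t *arbitrate(const mem_request_t *reqs, size_t n)
{
    const mem_request_t *winner = NULL;
    for (size_t i = 0; i < n; i++) {
        if (winner == NULL ||
            reqs[i].deadline < winner->deadline ||
            (reqs[i].deadline == winner->deadline &&
             reqs[i].order_id < winner->order_id))
            winner = &reqs[i];
    }
    return winner;
}

int main(void)
{
    mem_request_t pending[] = {
        { .agent_id = 0, .order_id = 7, .deadline = 900 },
        { .agent_id = 1, .order_id = 3, .deadline = 120 }, /* most urgent */
        { .agent_id = 2, .order_id = 5, .deadline = 450 },
    };
    const mem_request_t *next = arbitrate(pending, 3);
    printf("grant: agent %d (deadline %u)\n",
           next->agent_id, (unsigned)next->deadline);
    return 0;
}
```

Earliest-deadline-first is only one plausible reading of "based at least in part on the deadline values"; a real fabric would likely also weigh factors such as bandwidth class and starvation limits.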
Abstract:
Provided are a method, system, and program for managing Input/Output (I/O) requests in a cache memory system. A request is received to access data at a memory address in a first memory device, wherein data in the first memory device is cached in a second memory device. In response to determining that the requested data is not in the second memory device, a determination is made as to whether to fetch the requested data from the first memory device for caching in the second memory device. In response to determining not to fetch the requested data, the request is executed by accessing the requested data in the first memory device directly, bypassing the second memory device.
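The abstract leaves the fetch-versus-bypass criterion open; the following sketch assumes a hypothetical size-based heuristic (bypass the cache for large transfers) purely to make the control flow concrete. The direct-mapped cache, the names, and the policy are all assumptions.

```c
/* Minimal sketch of the miss-handling path described above, assuming
 * a size-based bypass heuristic; this is illustrative, not the
 * claimed method. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINES 4
#define LINE_SIZE   64

static uint64_t cache_tags[CACHE_LINES];
static bool     cache_valid[CACHE_LINES];

/* Check the second memory device (cache) for the requested address. */
static bool cache_lookup(uint64_t addr, size_t *line)
{
    *line = (addr / LINE_SIZE) % CACHE_LINES;
    return cache_valid[*line] && cache_tags[*line] == addr / LINE_SIZE;
}

/* Hypothetical policy: only fetch into the cache if the request fits
 * in a single cache line; bypass for larger transfers. */
static bool should_fetch(size_t request_bytes)
{
    return request_bytes <= LINE_SIZE;
}

static void handle_request(uint64_t addr, size_t bytes)
{
    size_t line;
    if (cache_lookup(addr, &line)) {
        printf("hit: serve 0x%llx from cache\n", (unsigned long long)addr);
    } else if (should_fetch(bytes)) {
        cache_tags[line]  = addr / LINE_SIZE;  /* fetch into cache */
        cache_valid[line] = true;
        printf("miss: fetch 0x%llx into cache, then serve\n",
               (unsigned long long)addr);
    } else {
        printf("miss: bypass cache, serve 0x%llx from first memory device\n",
               (unsigned long long)addr);
    }
}

int main(void)
{
    handle_request(0x1000, 32);   /* small miss: fetched into cache */
    handle_request(0x1000, 32);   /* now a hit                      */
    handle_request(0x8000, 4096); /* large miss: cache bypassed     */
    return 0;
}
```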
Abstract:
An embodiment of the invention includes a hardware architecture for, as an example, mobile computing devices. The architecture includes a physical layer that can be configured to be shared across one or more display panels that, in some instances, have different resolutions and bandwidth requirements. Using a shared physical layer removes one of the physical layers typically needed for multiple display devices (e.g., smartphones with two displays). In one embodiment, one physical layer includes two or more reference clock lanes so that data lanes can be shared across two or more links. The shared physical layer may be configured via a display driver. Other embodiments are described herein.
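As a rough illustration of the driver-side configuration, the sketch below splits a hypothetical pool of shared data lanes between two panels in proportion to their bandwidth demand. The lane counts, panel timings, and allocation rule are assumptions, not the patented mechanism.

```c
/* Rough sketch of a display driver partitioning one shared physical
 * layer across two panels. Lane counts, panel timings, and the
 * bandwidth-proportional split are illustrative assumptions. */
#include <stdio.h>

#define SHARED_DATA_LANES 8  /* total data lanes in the shared PHY */

typedef struct {
    const char *name;
    int width, height, refresh_hz;  /* panel timing parameters */
} panel_t;

/* Approximate bandwidth demand in pixels per second. */
static long long bandwidth(const panel_t *p)
{
    return (long long)p->width * p->height * p->refresh_hz;
}

int main(void)
{
    /* Two panels with different resolutions and refresh rates, each
     * driven over its own link with its own reference clock lane. */
    panel_t a = { "main panel",  2960, 1848, 120 };
    panel_t b = { "cover panel", 1080, 2520,  60 };

    long long ba = bandwidth(&a), bb = bandwidth(&b);

    /* Split the shared data lanes in proportion to demand, keeping
     * at least one lane per active link. */
    int lanes_a = (int)(SHARED_DATA_LANES * ba / (ba + bb));
    if (lanes_a < 1) lanes_a = 1;
    if (lanes_a > SHARED_DATA_LANES - 1) lanes_a = SHARED_DATA_LANES - 1;
    int lanes_b = SHARED_DATA_LANES - lanes_a;

    printf("%s: %d data lanes + 1 reference clock lane\n", a.name, lanes_a);
    printf("%s: %d data lanes + 1 reference clock lane\n", b.name, lanes_b);
    return 0;
}
```

The proportional split is one simple reading of "different resolutions and bandwidth requirements"; the abstract itself does not say how the driver divides the shared lanes.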