Abstract:
Techniques are disclosed for maintaining cache coherency across a serial interface bus such as a Peripheral Component Interconnect Express (PCIe) bus. The techniques include generating a snoop request (SNP) to determine whether first data stored in a local memory is coherent relative to second data stored in a data cache, the snoop request including destination information that identifies the data cache on the serial interface bus, and causing the snoop request to be transmitted over the serial interface bus to a second processor. The techniques further include extracting a cache line address from the snoop request, determining whether the second data is coherent, generating a complete message (CPL) indicating that the first data is coherent with the second data, and causing the complete message to be transmitted over the bus to the first processor. The snoop request and the complete message may be vendor-defined messages.
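To make the requester side concrete, the C sketch below models the SNP and CPL payloads and the act of posting a snoop over the bus. The struct layouts, field names, and pcie_send_vdm() are invented stand-ins for exposition, not the actual PCIe transaction-layer packet encoding or any real driver API.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    uint16_t dest_id;          /* destination info: identifies the data cache on the bus */
    uint64_t cache_line_addr;  /* where in the data cache the second data is located */
} SnpMsg;

typedef struct {
    uint16_t requester_id;     /* routes the completion back to the first processor */
    bool     coherent;         /* completion info: first data coherent with second data */
} CplMsg;

/* Stand-in for the bus layer: a real system would wrap the payload in a
   vendor-defined message TLP and post it onto the PCIe link. */
static void pcie_send_vdm(const void *payload, size_t len) {
    (void)payload;
    printf("sending %zu-byte vendor-defined message\n", len);
}

/* First processor: ask whether the second processor's cached copy of the
   line is coherent with the data held in local memory. */
static void snoop_remote_cache(uint16_t cache_dest, uint64_t line_addr) {
    SnpMsg snp = { .dest_id = cache_dest, .cache_line_addr = line_addr };
    pcie_send_vdm(&snp, sizeof snp);
}

/* First processor: the exchange completes when a CPL arrives reporting
   the line coherent with local memory. */
static void on_cpl(const CplMsg *cpl) {
    if (cpl->coherent)
        printf("line coherent; local copy may be used\n");
}

int main(void) {
    snoop_remote_cache(0x2A, 0x1000);  /* snoop line 0x1000 in cache 0x2A */
    CplMsg cpl = { .requester_id = 0x01, .coherent = true };
    on_cpl(&cpl);                      /* as if the CPL just arrived */
    return 0;
}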
Abstract:
A method and system for maintaining cache coherency across a serial interface bus such as a Peripheral Component Interconnect Express (PCIe) bus. The method includes generating a snoop request (SNP) to determine whether first data stored in a local memory is coherent relative to second data stored in a data cache, the snoop request including destination information that identifies the data cache on the serial interface bus, and causing the snoop request to be transmitted over the serial interface bus to a second processor. The method further includes extracting a cache line address from the snoop request, determining whether the second data is coherent, generating a complete message (CPL) indicating that the first data is coherent with the second data, and causing the complete message to be transmitted over the bus to the first processor. The snoop request and the complete message may be vendor-defined messages.
Abstract:
A method for executing processing operations using data stored in a memory. The method includes generating a snoop request configured to determine whether first data stored in a local memory is coherent relative to second data stored in a data cache, the snoop request including destination information that identifies the data cache on a bus, and a cache line address identifying where in the data cache the second data is located. The method further includes causing the snoop request to be transmitted over the bus to a second processor, extracting the cache line address from the snoop request, determining whether the second data is coherent, generating a complete message that includes completion information indicating that the first data is coherent with the second data, and causing the complete message to be transmitted over the bus to a first processor.
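The responder side of the same exchange might look like the following sketch, which extracts the cache line address from the incoming SNP, performs the coherence check, and builds the CPL. Here cache_line_is_clean() is a hypothetical tag lookup standing in for whatever the second processor's cache actually does.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { uint16_t dest_id; uint64_t cache_line_addr; } SnpMsg;
typedef struct { uint16_t requester_id; bool coherent; } CplMsg;

/* Hypothetical tag lookup: reports whether the cached copy of the line is
   unmodified relative to memory; a real cache would consult its dirty bits. */
static bool cache_line_is_clean(uint64_t addr) {
    (void)addr;
    return true;
}

/* Second processor: extract the cache line address from the snoop request,
   test coherence, and build the complete message for the first processor. */
static CplMsg handle_snp(const SnpMsg *snp, uint16_t requester_id) {
    CplMsg cpl = { .requester_id = requester_id,
                   .coherent = cache_line_is_clean(snp->cache_line_addr) };
    return cpl;  /* the bus layer would transmit this CPL over the link */
}

int main(void) {
    SnpMsg snp = { .dest_id = 0x2A, .cache_line_addr = 0x1000 };
    CplMsg cpl = handle_snp(&snp, 0x01);
    printf("coherent: %d\n", cpl.coherent);
    return 0;
}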
Abstract:
A digital content system is disclosed. A security engine disposed in a bridge provides cryptographic services. Clear text digital data received from a central processing unit is encrypted and transferred via the bridge over unsecured data paths as cipher text.
Abstract:
A bridge is disclosed having a security engine to protect digital content at insecure interfaces of the bridge. The bridge permits cryptographic services to be offloaded from a central processing unit to the bridge. The bridge receives a clear text input from the central processing unit. The bridge encrypts the clear text input as cipher text for storage in a memory. The bridge provides the cipher text to a graphics processing unit.
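To make the data flow concrete, here is a minimal sketch of the bridge path, assuming a toy XOR keystream in place of the bridge's real cipher: clear text from the CPU is encrypted inside the bridge, so only cipher text ever crosses the unsecured path into memory and onward to the graphics processing unit. All names are illustrative.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Toy stand-in for the security engine: an XOR keystream instead of a real
   block cipher, purely to show where encryption happens in the data flow. */
static uint8_t keystream(uint8_t key, size_t i) {
    return (uint8_t)(key + 31u * i);
}

/* Bridge path, CPU -> memory: clear text is encrypted inside the bridge so
   that only cipher text crosses the unsecured path into memory. */
static void bridge_store(const uint8_t *clear, uint8_t *mem, size_t len, uint8_t key) {
    for (size_t i = 0; i < len; i++)
        mem[i] = clear[i] ^ keystream(key, i);
}

/* Bridge path, memory -> GPU: the cipher text is handed over as-is; clear
   text never appears on the insecure interfaces. */
static const uint8_t *bridge_fetch_for_gpu(const uint8_t *mem) {
    return mem;
}

int main(void) {
    uint8_t clear[4] = { 'd', 'a', 't', 'a' }, mem[4];
    bridge_store(clear, mem, sizeof mem, 0x5C);
    printf("first cipher byte: 0x%02x\n", bridge_fetch_for_gpu(mem)[0]);
    return 0;
}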
Abstract:
Circuits, methods, and apparatus that provide for trusted transactions between a device and system memory. In one exemplary embodiment of the present invention, a host processor asserts and de-asserts trust over a virtual wire. The device accesses certain data if the host processor provides a trusted instruction for it to do so. Once the device attempts to access this data, or to perform a certain type of data access, a memory controller allows the access on the condition that the host processor previously issued the trusted instruction. The device then accepts data if trust is asserted during the data transfer.
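A minimal sketch of the gating logic, with all names invented for illustration: the host drives a virtual-wire trust flag and records the trusted instruction; the memory controller admits the protected access only if that instruction was previously issued; and the device accepts the transfer only while trust remains asserted.

#include <stdbool.h>
#include <stdio.h>

static bool trust_asserted = false;           /* the virtual wire, driven by the host */
static bool trusted_instruction_seen = false; /* recorded by the memory controller */

/* Host processor: assert or de-assert trust over the virtual wire. */
static void host_set_trust(bool on) { trust_asserted = on; }

/* Host processor: the trusted instruction that authorizes the device. */
static void host_issue_trusted_instruction(void) { trusted_instruction_seen = true; }

/* Memory controller: allow the protected access only on the condition that
   the host previously issued the trusted instruction. */
static bool mc_allow_protected_access(void) { return trusted_instruction_seen; }

/* Device: accept the data only while trust is asserted during the transfer. */
static bool device_accept_data(void) { return trust_asserted; }

int main(void) {
    host_issue_trusted_instruction();
    host_set_trust(true);
    if (mc_allow_protected_access() && device_accept_data())
        printf("trusted transaction completed\n");
    return 0;
}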
Abstract:
Embodiments of the invention accelerate at least one special-purpose processor, such as a GPU, or a driver managing a special-purpose processor, by using at least one co-processor. Advantageously, embodiments of the invention are fault-tolerant in that the at least one GPU or other special-purpose processor is able to execute all computations, although perhaps at a lower level of performance, if the at least one co-processor is rendered inoperable. The co-processor may also be used selectively, based on performance considerations.
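The selective, fault-tolerant dispatch might reduce to something like the sketch below, where the function names and the profitability flag are illustrative: the co-processor is used only when it is operable and expected to pay off, and the GPU otherwise executes the full computation itself.

#include <stdbool.h>
#include <stdio.h>

typedef void (*kernel_fn)(void *args);

static void run_on_coproc(void *args) { (void)args; printf("co-processor path\n"); }
static void run_on_gpu(void *args)    { (void)args; printf("GPU fallback path\n"); }

/* Illustrative dispatcher: offload to the co-processor only when it is
   operable and profiling says the offload pays off; otherwise the GPU runs
   the same computation, preserving correctness at lower performance. */
static void dispatch(kernel_fn coproc, kernel_fn gpu, void *args,
                     bool coproc_ok, bool coproc_faster) {
    if (coproc_ok && coproc_faster)
        coproc(args);
    else
        gpu(args);
}

int main(void) {
    dispatch(run_on_coproc, run_on_gpu, NULL, /*coproc_ok=*/false, true);
    return 0;
}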
Abstract:
A system, apparatus, and method are disclosed for predicting accesses to memory. In one embodiment, an exemplary apparatus comprises a processor configured to execute program instructions and process program data, a memory including the program instructions and the program data, and a memory processor. The memory processor can include a speculator configured to receive an address referencing the program instructions or the program data. Such a speculator can comprise a sequential predictor for generating a configurable number of sequential addresses. The speculator can also include a nonsequential predictor configured to associate a subset of addresses to the address and to predict a group of addresses based on at least one address of the subset, wherein at least one address of the subset is unpatternable to the address.
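One way to picture the two predictors, with the table sizes, hashing, and 64-byte line stride all invented for illustration: the sequential predictor emits a configurable run of next-line addresses, while the nonsequential predictor remembers, per trigger address, a small subset of previously observed successors that share no arithmetic pattern with it.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define SEQ_DEPTH 4   /* configurable number of sequential predictions */
#define ASSOC     2   /* successors remembered per trigger address */
#define TABLE   256   /* nonsequential table entries */
#define LINE     64   /* assumed cache-line stride in bytes */

/* Nonsequential table: per trigger address, a small subset of previously
   observed successor addresses that are unpatternable relative to it. */
static uint64_t nonseq[TABLE][ASSOC];

static size_t slot(uint64_t addr) { return (addr / LINE) % TABLE; }

/* Sequential predictor: the next SEQ_DEPTH line addresses after addr. */
static void predict_sequential(uint64_t addr, uint64_t out[SEQ_DEPTH]) {
    for (int i = 0; i < SEQ_DEPTH; i++)
        out[i] = addr + (uint64_t)LINE * (i + 1);
}

/* Nonsequential predictor: emit the remembered successors for this trigger. */
static void predict_nonsequential(uint64_t addr, uint64_t out[ASSOC]) {
    for (int i = 0; i < ASSOC; i++)
        out[i] = nonseq[slot(addr)][i];
}

/* Training: associate an observed successor with its trigger address. */
static void observe(uint64_t trigger, uint64_t next) {
    size_t s = slot(trigger);
    nonseq[s][1] = nonseq[s][0];  /* simple most-recently-used shift */
    nonseq[s][0] = next;
}

int main(void) {
    uint64_t seq[SEQ_DEPTH], ns[ASSOC];
    observe(0x1000, 0x9F40);  /* 0x9F40 followed 0x1000 with no pattern */
    predict_sequential(0x1000, seq);
    predict_nonsequential(0x1000, ns);
    printf("seq[0]=%#llx nonseq[0]=%#llx\n",
           (unsigned long long)seq[0], (unsigned long long)ns[0]);
    return 0;
}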
Abstract:
A system, apparatus, and method are disclosed for storing and prioritizing predictions to anticipate nonsequential accesses to a memory. In one embodiment, an exemplary apparatus is configured as a prefetcher for predicting accesses to a memory. The prefetcher includes a prediction generator configured to generate a prediction that is unpatternable to an address. The prefetcher can also include a target cache coupled to the prediction generator to maintain the prediction in a manner that determines a priority for the prediction. In another embodiment, the prefetcher can also include a priority adjuster. The priority adjuster sets a priority for a prediction relative to other predictions. In some cases, the placement of the prediction is indicative of the priority relative to priorities for the other predictions. In yet another embodiment, the prediction generator uses the priority to determine that the prediction is to be generated before other predictions.
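A toy rendering of the target cache and priority adjuster, with the four-entry size and array encoding invented for illustration: a prediction's slot index is its priority, promotion shifts lower-priority entries down, and the generator issues slot 0 before the others.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define TC_WAYS 4

/* Target cache: predictions sit in priority order, slot 0 highest. The
   placement of a prediction encodes its priority relative to the others. */
static uint64_t target_cache[TC_WAYS];

/* Priority adjuster: place a prediction at the given slot, shifting
   lower-priority entries down and dropping the lowest one. */
static void set_priority(uint64_t prediction, size_t slot) {
    for (size_t i = TC_WAYS - 1; i > slot; i--)
        target_cache[i] = target_cache[i - 1];
    target_cache[slot] = prediction;
}

/* Prediction generator: slot 0 is generated before the other predictions. */
static uint64_t next_prediction(size_t rank) { return target_cache[rank]; }

int main(void) {
    set_priority(0x9F40, 0);  /* new highest-priority prediction */
    set_priority(0x2000, 1);  /* lower priority, placed behind it */
    printf("issue first: %#llx\n", (unsigned long long)next_prediction(0));
    return 0;
}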
Abstract:
A method for managing a memory controller comprising selecting a low-power state from a plurality of low-power states. The method further comprises transitioning to the low-power state and entering it when the transition is complete, provided a wake-event has not been received. An apparatus comprises a controller configured to select a power state for transition, a state-machine configured to execute steps for transitions between power states of a memory controller connected by a bus to a memory, a storage configured to store at least one context, and a context engine configured to stream, at the direction of the state-machine, the at least one context to the memory controller. Streaming comprises communicating N portions of context data as a stream to N registers in the memory controller. A context comprises a plurality of calibrations corresponding to a state selected for transition.
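The context-streaming step might be pictured as in the sketch below, where the register count, state names, and wake-event callback are all illustrative: the context engine writes the N calibration portions to the N memory-controller registers, and the state machine commits the low-power state only if no wake event arrived during the transition.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define N_REGS 8  /* illustrative register count */

typedef enum { PS_ACTIVE, PS_STANDBY, PS_SELF_REFRESH, PS_POWER_DOWN } PowerState;

/* A context: one calibration portion per memory-controller register for the
   state selected for transition. */
typedef struct { uint32_t calib[N_REGS]; } Context;

static volatile uint32_t mc_regs[N_REGS];  /* stand-in for MMIO registers */

/* Context engine: stream the N portions of context data to the N registers
   at the direction of the state machine. */
static void stream_context(const Context *ctx) {
    for (int i = 0; i < N_REGS; i++)
        mc_regs[i] = ctx->calib[i];
}

static bool no_wake_event(void) { return false; }  /* stub wake-event poll */

/* State machine: enter the selected low-power state only when the
   transition is complete and no wake event has been received. */
static bool enter_low_power(PowerState target, const Context *ctx,
                            bool (*wake_pending)(void)) {
    stream_context(ctx);
    if (wake_pending())
        return false;  /* abort: a wake event raced with the transition */
    (void)target;      /* hardware would latch the selected state here */
    return true;
}

int main(void) {
    Context ctx = { .calib = { 1, 2, 3, 4, 5, 6, 7, 8 } };
    if (enter_low_power(PS_SELF_REFRESH, &ctx, no_wake_event))
        printf("entered low-power state\n");
    return 0;
}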