Abstract:
A memory loopback system and method including an address/command transmit source configured to transmit a command and associated address through an address/command path. A transmit data source is configured to transmit write data associated with the command through a write path. Test control logic is configured to generate gaps between successive commands. A loopback connection is configured to route the write data from the write path to a read path. A data comparator is configured to compare the data received via the read path to a receive data source and generate a data loopback status output. Pattern generation logic can be configured to generate a loopback strobe, the loopback strobe being coupled to the read path. The pattern generation logic may be configured to synthesize a read strobe based on the test control logic and to use the synthesized read strobe as the loopback strobe. The loopback connection may be configured to route the address/command data from the address/command path to an address/command comparator, the address/command comparator being configured to compare the address/command data to an address/command receive source and generate an address/command loopback status output.
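The loopback compare described above can be illustrated with a minimal behavioral sketch in C. The structure and function names (loopback_txn_t, loopback_check) and the pass/fail outputs are assumptions for illustration, not the patented logic or its interfaces.

    #include <stdbool.h>
    #include <stdint.h>

    /* One transaction as driven by the transmit sources. */
    typedef struct {
        uint32_t addr_cmd;    /* command plus associated address          */
        uint64_t write_data;  /* write data from the transmit data source */
    } loopback_txn_t;

    /* The loopback connection routes the write path onto the read path
     * (and the address/command path to its comparator); the comparators
     * reduce the result to pass/fail loopback status outputs. */
    static bool loopback_check(const loopback_txn_t *txn,
                               uint64_t expected_data,
                               uint32_t expected_addr_cmd,
                               bool *data_status,
                               bool *addr_cmd_status)
    {
        uint64_t read_path     = txn->write_data;  /* write path looped to read path */
        uint32_t addr_cmd_loop = txn->addr_cmd;    /* address/command looped back    */

        *data_status     = (read_path == expected_data);
        *addr_cmd_status = (addr_cmd_loop == expected_addr_cmd);
        return *data_status && *addr_cmd_status;
    }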
Abstract:
A method and apparatus for training read latency of a memory are disclosed. A memory controller includes a command FIFO configured to convey commands to a memory, a data queue coupled to receive data from the memory, and a register configured to provide a first value indicative of a number of cycles of a first clock signal after which data is valid. During a startup routine, the memory controller is configured to compare data received by the data queue to a known data pattern after a specified number of cycles of the first clock signal have elapsed. The memory controller is further configured to decrement the first value and repeat conveying and comparing if the data received matches the data pattern. If the received data does not match the data pattern for any attempted read of the memory, the memory controller is configured to program a second value into the register.
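The training loop is essentially "read, compare, decrement until the pattern no longer matches." A short C sketch of that loop follows; the helper functions (issue_read, wait_cycles, read_data_queue, program_latency_reg), the pattern value, and the fallback policy are assumptions for illustration, not the controller's actual interface.

    #include <stdbool.h>
    #include <stdint.h>

    #define KNOWN_PATTERN 0xA5A5A5A5u

    extern void     issue_read(uint32_t addr);        /* push a read command into the FIFO  */
    extern void     wait_cycles(uint32_t n);          /* wait n cycles of the first clock   */
    extern uint32_t read_data_queue(void);            /* pop the data queue                 */
    extern void     program_latency_reg(uint32_t v);  /* write the read-latency register    */

    void train_read_latency(uint32_t start_value, uint32_t fallback_value)
    {
        uint32_t latency   = start_value;
        uint32_t last_good = 0;
        bool     any_match = false;

        while (latency > 0) {
            program_latency_reg(latency);
            issue_read(0x0);
            wait_cycles(latency);                     /* data assumed valid after 'latency' cycles */
            if (read_data_queue() == KNOWN_PATTERN) { /* compare to the known data pattern         */
                any_match = true;
                last_good = latency;
                latency--;                            /* decrement and repeat conveying/comparing  */
            } else {
                break;                                /* first failing value ends the search       */
            }
        }

        /* If no attempted read matched, program the second (fallback) value. */
        program_latency_reg(any_match ? last_good : fallback_value);
    }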
Abstract:
A system and method for efficient management of resources within a semiconductor chip for an optimal combination of power reduction and high performance. An integrated circuit, such as a system on a chip (SOC), includes at least two processing units. The second processing unit includes at least one cache. The SOC includes a power management unit (PMU) that determines whether a first activity level for the first processing unit is above a first threshold and a second activity level for the second processing unit is below a second threshold. If this condition is true, then the PMU places a limit on the highest power-performance state (P-state) used by the second processing unit. The PMU sends an indication to flush the at least one cache within the second processing unit. The PMU changes the P-state used by the first processing unit to a higher-performance P-state.
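The PMU decision reduces to a threshold comparison followed by three actions, sketched in C below. The thresholds, the P-state encoding (0 = highest performance), and the helper functions are assumptions for clarity, not the patented implementation.

    #include <stdbool.h>
    #include <stdint.h>

    extern uint32_t activity_level(int unit);          /* measured activity per processing unit */
    extern void     limit_max_pstate(int unit, int p); /* cap the highest usable P-state        */
    extern void     request_cache_flush(int unit);     /* flush the cache(s) within the unit    */
    extern void     set_pstate(int unit, int p);       /* apply a P-state                       */

    void pmu_rebalance(uint32_t thresh1, uint32_t thresh2)
    {
        bool unit0_busy = activity_level(0) > thresh1;  /* first unit above its threshold   */
        bool unit1_idle = activity_level(1) < thresh2;  /* second unit below its threshold  */

        if (unit0_busy && unit1_idle) {
            limit_max_pstate(1, 4);     /* limit the second unit's highest P-state          */
            request_cache_flush(1);     /* indicate that its cache should be flushed        */
            set_pstate(0, 0);           /* move the first unit to a higher-performance P-state */
        }
    }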
Abstract:
A system and method for efficient management of operating modes within an IC for optimal power and performance targets. On a same die, an SOC includes one or more processing units and an input/output (I/O) controller (IOC). The multiple interfaces within the IOC manage packets and messages according to multiple different protocols. The IOC maintains an activity level for each one of the multiple interfaces. This activity level may be based at least on a respective number of transactions executed by a corresponding one of the multiple interfaces. The IOC determines a power estimate for itself based on at least the activity levels. In response to detecting a difference between the power estimate and an assigned I/O power limit for the IOC, a power manager adjusts at least respective power limits for the one or more processing units based on at least the difference.
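A short C sketch of the estimate-and-redistribute step follows. The per-interface energy weights, the even split of headroom across processing units, and the helper functions are illustrative assumptions rather than the described power manager's actual policy.

    #include <stdint.h>

    #define NUM_IFACES 4

    extern uint32_t iface_txn_count(int i);                    /* transactions per IOC interface   */
    extern void     adjust_unit_power_limit(int u, int32_t d); /* raise/lower a unit's power limit */

    void ioc_power_update(const uint32_t mw_per_txn[NUM_IFACES],
                          uint32_t io_power_limit_mw, int num_units)
    {
        uint32_t estimate_mw = 0;

        /* Power estimate derived from the per-interface activity levels. */
        for (int i = 0; i < NUM_IFACES; i++)
            estimate_mw += iface_txn_count(i) * mw_per_txn[i];

        if (num_units <= 0)
            return;

        /* Difference between the assigned I/O power limit and the estimate. */
        int32_t diff_mw = (int32_t)io_power_limit_mw - (int32_t)estimate_mw;

        /* Headroom (or overshoot) is redistributed across the processing units. */
        for (int u = 0; u < num_units; u++)
            adjust_unit_power_limit(u, diff_mw / num_units);
    }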
Abstract:
A technique of accessing a resource includes receiving, at a master scheduler, resource access requests. The resource access requests are translated into respective slave state machine work orders that each include one or more respective commands. The respective commands are assigned, for execution, to command streams associated with respective slave state machines. The respective commands are then executed by the respective slave state machines.
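The request-to-work-order data flow can be sketched in C as below. The structures, the decode of a request into a single command, and the per-slave queue helper are assumptions made for illustration, not the patented design.

    #include <stdint.h>

    #define CMDS_PER_ORDER 4

    typedef struct {
        uint32_t opcode;
        uint32_t operand;
    } command_t;

    typedef struct {
        int       slave_id;              /* which slave state machine runs the order */
        command_t cmds[CMDS_PER_ORDER];  /* one or more commands per work order      */
        int       num_cmds;
    } work_order_t;

    /* Per-slave command stream, consumed by the slave state machine. */
    extern void enqueue_command(int slave_id, const command_t *cmd);

    /* Translate one resource access request into a work order and assign its
     * commands to the target slave's command stream for execution. */
    void master_schedule(uint32_t request, int slave_id)
    {
        work_order_t order = { .slave_id = slave_id, .num_cmds = 1 };
        order.cmds[0].opcode  = request >> 24;        /* illustrative decode */
        order.cmds[0].operand = request & 0xFFFFFFu;

        for (int i = 0; i < order.num_cmds; i++)
            enqueue_command(order.slave_id, &order.cmds[i]);
    }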
Abstract:
A system and method for data allocation in a shared cache memory of a computing system are contemplated. Each cache way of a shared set-associative cache is accessible to multiple sources, such as one or more processor cores, a graphics processing unit (GPU), an input/output (I/O) device, or multiple different software threads. A shared cache controller enables or disables access separately to each of the cache ways based upon the corresponding source of a received memory request. One or more configuration and status registers (CSRs) store encoded values used to alter accessibility to each of the shared cache ways. The control of the accessibility of the shared cache ways via altering stored values in the CSRs may be used to create a pseudo-RAM structure within the shared cache and to progressively reduce the size of the shared cache during a power-down sequence while the shared cache continues operation.
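Per-source way masking through a CSR can be illustrated with the C sketch below, assuming a 16-way cache and one enable bit per way per source; the names and the specific partition chosen (ways 0-3 dedicated to the GPU) are hypothetical.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_WAYS 16

    typedef enum { SRC_CORE0, SRC_CORE1, SRC_GPU, SRC_IO, NUM_SOURCES } source_t;

    static uint16_t way_enable_csr[NUM_SOURCES];   /* one way-enable mask per source */

    /* A source may allocate into a cache way only if its CSR bit for that way is set. */
    bool way_allowed(source_t src, unsigned way)
    {
        return (way_enable_csr[src] >> way) & 1u;
    }

    /* Example: dedicate ways 0-3 to the GPU (a pseudo-RAM style partition) by
     * clearing those ways for every other source. The same mechanism can clear
     * ways progressively to shrink the cache during a power-down sequence while
     * it remains operational. */
    void partition_ways_for_gpu(void)
    {
        for (int s = 0; s < NUM_SOURCES; s++)
            way_enable_csr[s] &= (uint16_t)~0x000Fu;  /* disable ways 0-3 */
        way_enable_csr[SRC_GPU] |= 0x000Fu;           /* grant them to the GPU only */
    }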
Abstract:
Dynamic power test slave (DPTS) modules are placed at selected locations of a data processing device to provide data to a logic module of the device at a high rate during testing of the device. The DPTS module intercepts data requests targeted to another logic module and instead provides the requested data itself, thus simulating data transfer by the target logic module. The simulated data transfers can provide for transitions at the data processing device from a relatively high power state to a relatively low power state. Accordingly, the DPTS modules allow for simulation of expected normal operating conditions during testing of the data processing device.
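The intercept-and-respond behavior can be modeled with a short C sketch. The request type, the notion of a "shadowed" target, and the LFSR used as a cheap data source are assumptions for illustration only.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t target_id;    /* logic module the request is addressed to */
        uint32_t addr;
    } request_t;

    static uint32_t lfsr = 0xACE1u;     /* cheap pseudo-random data source */

    static uint32_t dpts_data(void)
    {
        lfsr = (lfsr >> 1) ^ (-(lfsr & 1u) & 0xB400u);   /* 16-bit Fibonacci LFSR step */
        return lfsr;
    }

    /* If the DPTS is configured to shadow 'shadowed_target', it answers the
     * request itself, simulating the target module's data transfer at a high rate. */
    bool dpts_intercept(const request_t *req, uint32_t shadowed_target, uint32_t *data_out)
    {
        if (req->target_id != shadowed_target)
            return false;               /* not ours: let the real target respond    */
        *data_out = dpts_data();        /* respond in place of the target module    */
        return true;
    }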