Abstract:
In a multi-tasking pipelined processor, consecutive instructions are executed by different tasks, eliminating the need to purge an instruction execution pipeline of subsequent instructions when a previous instruction cannot be completed. The tasks do not share registers which store task-specific values, thus eliminating the need to save or load registers when a new task is scheduled for execution. If an instruction accesses an unavailable resource, the instruction becomes suspended, allowing other tasks' instructions to be executed instead until the resource becomes available. Task scheduling is performed by hardware; no operating system is needed. Simple techniques are provided to synchronize shared resource access between different tasks.
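As a rough illustration of the scheduling behavior described above, the following C sketch models round-robin instruction issue from independent per-task register files, skipping suspended tasks so that no pipeline purge is ever needed; the names (task_ctx, pick_next_task) and the task and register counts are hypothetical, not taken from the patent.

    /* Software model of hardware task scheduling: each task has its own
     * registers, so switching tasks never saves or restores state. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_TASKS 4
    #define NUM_REGS  8

    typedef struct {
        int  regs[NUM_REGS];   /* private register file, never spilled */
        bool suspended;        /* set while waiting on an unavailable resource */
        int  pc;               /* task-local program counter */
    } task_ctx;

    static task_ctx tasks[NUM_TASKS];

    /* Pick the next ready task in round-robin order; suspended tasks are
     * simply skipped instead of stalling or purging the pipeline. */
    static int pick_next_task(int last)
    {
        for (int i = 1; i <= NUM_TASKS; i++) {
            int t = (last + i) % NUM_TASKS;
            if (!tasks[t].suspended)
                return t;
        }
        return -1;             /* every task is waiting on a resource */
    }

    int main(void)
    {
        int current = -1;
        for (int cycle = 0; cycle < 8; cycle++) {
            current = pick_next_task(current);
            if (current < 0)
                continue;      /* nothing ready this cycle */
            tasks[current].regs[0]++;          /* "execute" one instruction */
            tasks[current].pc++;
            printf("cycle %d: task %d issues instruction %d\n",
                   cycle, current, tasks[current].pc);
        }
        return 0;
    }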
Abstract:
Embodiments of the present invention are directed to a method and system for allowing data structures to be moved between storage locations of varying performance and cost without changing the application firmware. In one embodiment, rather than having the application firmware access memory directly, the application firmware requests a data structure by its parameters, and the implementation returns a pointer to it. The parameters can be, for example, the logical block address (LBA) of a data sector, and the data structure can be the mapping of that LBA to a location in the flash device, together with its associated information.
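A minimal sketch of the request-by-parameters idea, assuming a hypothetical get_lba_mapping() entry point and two backing stores of different cost and performance; the application firmware only sees the returned pointer and never knows which store holds the entry, so the structure can later be moved without changing that firmware.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint32_t flash_block;  /* physical block holding the sector */
        uint16_t flash_page;   /* page within that block */
        uint16_t flags;        /* associated bookkeeping information */
    } lba_mapping_t;

    /* Backing stores of different cost/performance; either may hold an entry. */
    static lba_mapping_t sram_table[256];      /* small, fast */
    static lba_mapping_t dram_table[65536];    /* large, cheaper */

    /* Application firmware requests the mapping by parameter (the LBA) and
     * receives a pointer; the placement policy lives entirely in here. */
    lba_mapping_t *get_lba_mapping(uint32_t lba)
    {
        if (lba < 256)
            return &sram_table[lba];   /* hot range kept in fast memory */
        if (lba < 65536)
            return &dram_table[lba];   /* the rest lives in cheaper memory */
        return NULL;                   /* out of range */
    }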
Abstract:
A system for selecting a subset of issued flash storage commands to improve processing time for command execution. A plurality of ports stores a first plurality of command identifiers, and a first plurality of arbiters is associated with the plurality of ports. Each of the first plurality of arbiters selects the oldest command identifier among the command identifiers within its corresponding port, resulting in a second plurality of command identifiers. A second arbiter makes a plurality of selections from the second plurality of command identifiers based on command identifier age and the priority of the port. A session identifier queue stores commands associated with the plurality of selections which, together with other commands, form a third plurality of commands. A microcontroller selects an executable command from the third plurality of commands for execution based on an execution optimization heuristic. After the command is executed, its command identifier is cleared from the port.
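The two-level arbitration can be pictured with the C sketch below: a per-port arbiter first picks the oldest valid command identifier in its port, and a second arbiter then chooses across the per-port winners by port priority, breaking ties on age. The structure layout, field names, and the single-selection-per-round simplification are assumptions made for illustration.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_PORTS     4
    #define CMDS_PER_PORT 8

    typedef struct {
        uint32_t id;        /* command identifier */
        uint32_t age;       /* larger = older */
        bool     valid;
    } cmd_slot;

    typedef struct {
        cmd_slot slots[CMDS_PER_PORT];
        uint8_t  priority;  /* larger = higher priority */
    } port_t;

    static port_t ports[NUM_PORTS];

    /* First-level arbiter: oldest valid command in one port (slot index or -1). */
    static int oldest_in_port(const port_t *p)
    {
        int best = -1;
        for (int i = 0; i < CMDS_PER_PORT; i++)
            if (p->slots[i].valid &&
                (best < 0 || p->slots[i].age > p->slots[best].age))
                best = i;
        return best;
    }

    /* Second-level arbiter: among the per-port winners, prefer higher port
     * priority and break ties on age. Returns the chosen port or -1. */
    static int select_port(const int winners[NUM_PORTS])
    {
        int best = -1;
        for (int p = 0; p < NUM_PORTS; p++) {
            if (winners[p] < 0)
                continue;
            if (best < 0 ||
                ports[p].priority > ports[best].priority ||
                (ports[p].priority == ports[best].priority &&
                 ports[p].slots[winners[p]].age > ports[best].slots[winners[best]].age))
                best = p;
        }
        return best;
    }

    int main(void)
    {
        /* Queue two commands and run one arbitration round. */
        ports[0].priority = 1;
        ports[0].slots[0] = (cmd_slot){ .id = 7, .age = 5, .valid = true };
        ports[2].priority = 3;
        ports[2].slots[4] = (cmd_slot){ .id = 9, .age = 2, .valid = true };

        int winners[NUM_PORTS];
        for (int p = 0; p < NUM_PORTS; p++)
            winners[p] = oldest_in_port(&ports[p]);

        int chosen = select_port(winners);
        if (chosen >= 0) {
            printf("issue command %u from port %d\n",
                   (unsigned)ports[chosen].slots[winners[chosen]].id, chosen);
            ports[chosen].slots[winners[chosen]].valid = false;  /* clear after execution */
        }
        return 0;
    }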
Abstract:
A data path controller, a computer device, an apparatus and a method are disclosed for integrating power management functions into a data path controller to manage power consumed by processors and peripheral devices. By embedding power management within the data path controller, the data path controller can advantageously modify its criteria in situ, adapting its power management actions in response to changes in the processors and peripheral devices. The data path controller also includes a power-managing interface that provides power-monitoring ports for monitoring and/or quantifying the power consumption of various components. In one embodiment, the data path controller includes a power-monitoring interface for selectably monitoring the power of a component, and a controller that adjusts the operational characteristics of the component to modify the power it consumes so as to comply with a performance profile, which generally specifies permissible power consumption levels for the component.
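One way to picture the power-management loop is the sketch below, which assumes a memory-mapped power monitor and a clock-divider operating point as the adjustable characteristic; the register stand-ins (POWER_MON_MW, CLK_DIV) and the profile structure are hypothetical, not from the disclosure.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t max_milliwatts;   /* permissible power level from the profile */
    } perf_profile_t;

    /* Stand-ins for a power-monitoring port and an operating-point control. */
    static volatile uint32_t POWER_MON_MW;  /* measured consumption, mW */
    static volatile uint32_t CLK_DIV;       /* larger divider = slower, lower power */

    /* One control step: if the component exceeds its profile, slow it down;
     * if there is ample headroom, speed it back up. */
    void power_manage_step(const perf_profile_t *profile)
    {
        uint32_t measured = POWER_MON_MW;

        if (measured > profile->max_milliwatts && CLK_DIV < 8)
            CLK_DIV++;                      /* shed power by lowering the clock */
        else if (measured < profile->max_milliwatts / 2 && CLK_DIV > 1)
            CLK_DIV--;                      /* reclaim performance when allowed */
    }

    int main(void)
    {
        perf_profile_t cpu_profile = { .max_milliwatts = 500 };
        POWER_MON_MW = 650;                /* pretend the monitor reports 650 mW */
        CLK_DIV = 1;
        power_manage_step(&cpu_profile);   /* divider rises to 2 to shed power */
        printf("divider now %u\n", (unsigned)CLK_DIV);
        return 0;
    }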
Abstract:
A memory controller, in one embodiment, includes a command translation data structure, a front end and a back end. The command translation data structure maps command operations to primitives, wherein the primitives are decomposed from command operations determined for one or more memory devices. The front end receives command operations from a processing unit and translates each command operation to a set of one or more corresponding primitives using the command translation data structure. The back end outputs the set of one or more corresponding primitives for each received command operation to a given memory device.
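As a concrete, though hypothetical, illustration of this translation step, the sketch below uses a table that expands each command operation into a short sequence of interface primitives, which a back-end stand-in then issues to the device; the opcodes and primitive names are invented for the example rather than taken from any device specification.

    #include <stdio.h>
    #include <stddef.h>

    typedef enum { OP_READ_PAGE, OP_WRITE_PAGE, OP_ERASE_BLOCK, OP_COUNT } cmd_op;
    typedef enum { PRIM_CMD_CYCLE, PRIM_ADDR_CYCLE, PRIM_DATA_IN,
                   PRIM_DATA_OUT, PRIM_WAIT_READY } primitive;

    /* Command translation data structure: one primitive sequence per operation. */
    static const primitive translation[OP_COUNT][5] = {
        [OP_READ_PAGE]   = { PRIM_CMD_CYCLE, PRIM_ADDR_CYCLE, PRIM_WAIT_READY, PRIM_DATA_OUT },
        [OP_WRITE_PAGE]  = { PRIM_CMD_CYCLE, PRIM_ADDR_CYCLE, PRIM_DATA_IN,  PRIM_WAIT_READY },
        [OP_ERASE_BLOCK] = { PRIM_CMD_CYCLE, PRIM_ADDR_CYCLE, PRIM_WAIT_READY },
    };
    static const size_t translation_len[OP_COUNT] = { 4, 4, 3 };

    /* Back-end stand-in: in hardware this would drive the device interface. */
    static void issue_primitive(primitive p) { printf("primitive %d\n", (int)p); }

    /* Front end: translate one received command operation into its primitives. */
    static void translate_and_issue(cmd_op op)
    {
        for (size_t i = 0; i < translation_len[op]; i++)
            issue_primitive(translation[op][i]);
    }

    int main(void)
    {
        translate_and_issue(OP_READ_PAGE);   /* front end receives one read operation */
        return 0;
    }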
Abstract:
The wear-leveling techniques include discovering a persistent state of one or more memory devices or, if a persistent state is not discovered for a given memory device, building and caching persistent state parameters for each logical unit of that device. The techniques may also include processing memory access commands using the cached persistent state parameters. When a memory access command is processed, the logical block address and length parameter of the command's logical address may be translated to a plurality of physical addresses for accessing one or more memory devices, where each physical address includes a device address, a logical unit address, a block address, and a page address, and the block address includes one or more interleaved address bits.
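The translation step can be illustrated with the short sketch below, which splits a request into logical pages and maps each page to a (device, logical unit, block, page) tuple; the deliberately tiny geometry constants, the treatment of the LBA as a logical page number, and the choice to interleave consecutive blocks across devices are assumptions made for the example.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGES_PER_BLOCK 4      /* unrealistically small, to keep the demo short */
    #define LUNS_PER_DEVICE 2
    #define NUM_DEVICES     4

    typedef struct {
        uint8_t  device;
        uint8_t  lun;
        uint32_t block;    /* block within the logical unit */
        uint16_t page;
    } phys_addr;

    /* Translate one logical page number to a physical address. The low bits of
     * the linear block number select the device, interleaving adjacent blocks. */
    static phys_addr translate_page(uint32_t lpn)
    {
        phys_addr pa;
        pa.page            = lpn % PAGES_PER_BLOCK;
        uint32_t blk_lin   = lpn / PAGES_PER_BLOCK;
        pa.device          = blk_lin % NUM_DEVICES;
        uint32_t per_dev   = blk_lin / NUM_DEVICES;
        pa.lun             = per_dev % LUNS_PER_DEVICE;
        pa.block           = per_dev / LUNS_PER_DEVICE;
        return pa;
    }

    int main(void)
    {
        /* Translate a command covering logical pages 100..107 (length = 8). */
        for (uint32_t lpn = 100; lpn < 108; lpn++) {
            phys_addr pa = translate_page(lpn);
            printf("lpn %u -> dev %u lun %u block %u page %u\n",
                   (unsigned)lpn, (unsigned)pa.device, (unsigned)pa.lun,
                   (unsigned)pa.block, (unsigned)pa.page);
        }
        return 0;
    }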
Abstract:
A virtual address translation table and an on-chip address cache are usable for translating virtual addresses to physical addresses. Address translation information is provided using a cluster that is associated with some range of virtual addresses and that can be used to translate any virtual address in its range to a physical address, where the sizes of the ranges mapped by different clusters may be different. Clusters are stored in an address translation table that is indexed by virtual address so that, starting from any valid virtual address, the appropriate cluster for translating that address can be retrieved from the translation table. Recently retrieved clusters are stored in an on-chip cache, and a cached cluster can be used to translate any virtual address in its range without accessing the address translation table again.
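A minimal sketch of the cluster idea, assuming a cluster record of (virtual start, size, physical start) and a tiny cache of recently used clusters consulted before the table; the field names, the cache size and replacement policy, and the linear table scan are illustrative assumptions (the real table is indexed by virtual address).

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef struct {
        uint64_t vstart;   /* first virtual address covered by the cluster */
        uint64_t size;     /* size of the covered range (may differ per cluster) */
        uint64_t pstart;   /* physical address of the start of the range */
    } cluster;

    #define CACHE_WAYS 4
    static cluster cache[CACHE_WAYS];   /* stands in for the on-chip cluster cache */

    /* A tiny translation table for the sketch; note the two ranges differ in size. */
    static const cluster table[] = {
        { 0x0000, 0x4000, 0x100000 },   /* 16 KiB range */
        { 0x4000, 0x1000, 0x200000 },   /*  4 KiB range */
    };

    static cluster lookup_translation_table(uint64_t vaddr)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (vaddr >= table[i].vstart && vaddr < table[i].vstart + table[i].size)
                return table[i];
        return (cluster){0, 0, 0};      /* no mapping in this sketch */
    }

    static uint64_t translate(uint64_t vaddr)
    {
        /* 1. Try the cached clusters: any hit translates without a table access. */
        for (int i = 0; i < CACHE_WAYS; i++)
            if (vaddr >= cache[i].vstart && vaddr < cache[i].vstart + cache[i].size)
                return cache[i].pstart + (vaddr - cache[i].vstart);

        /* 2. Miss: fetch the covering cluster from the table and cache it. */
        cluster c = lookup_translation_table(vaddr);
        cache[vaddr % CACHE_WAYS] = c;          /* trivial replacement policy */
        return c.pstart + (vaddr - c.vstart);
    }

    int main(void)
    {
        printf("0x2100 -> 0x%llx\n", (unsigned long long)translate(0x2100)); /* table lookup */
        printf("0x2104 -> 0x%llx\n", (unsigned long long)translate(0x2104)); /* cache hit */
        return 0;
    }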
Abstract:
A processor system suitable to provide an interface between networks includes a software programmable processor and a channel processor that receives data from a network and transforms the data in response to commands from the software programmable processor. The channel processor can execute only a few simple commands, but these commands are sufficient for a wide range of systems. The commands include (1) a command to transmit received data, perhaps skipping some of it, and (2) a command to transmit data specified by the command itself rather than the received data. The channel processor is fast, simple and inexpensive.
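The two command types can be pictured with the sketch below, in which a single routine either copies a span of the received data (optionally skipping a leading portion) or emits data carried in the command itself; the command encoding and field names are assumptions for illustration only, not the patent's command set.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    typedef enum { CH_XMIT_RECEIVED, CH_XMIT_IMMEDIATE } ch_opcode;

    typedef struct {
        ch_opcode op;
        size_t    skip;          /* bytes of received data to skip (command 1) */
        size_t    length;        /* bytes to transmit */
        uint8_t   immediate[16]; /* data supplied by the command itself (command 2) */
    } ch_command;

    /* Execute one command: copy from the receive buffer or from the command
     * into the transmit buffer. Returns the number of bytes produced. */
    size_t channel_execute(const ch_command *cmd,
                           const uint8_t *rx, size_t rx_len,
                           uint8_t *tx, size_t tx_len)
    {
        size_t n = 0;
        if (cmd->op == CH_XMIT_RECEIVED && cmd->skip < rx_len) {
            n = cmd->length;
            if (n > rx_len - cmd->skip) n = rx_len - cmd->skip;
            if (n > tx_len) n = tx_len;
            memcpy(tx, rx + cmd->skip, n);
        } else if (cmd->op == CH_XMIT_IMMEDIATE) {
            n = cmd->length;
            if (n > sizeof cmd->immediate) n = sizeof cmd->immediate;
            if (n > tx_len) n = tx_len;
            memcpy(tx, cmd->immediate, n);
        }
        return n;
    }

    int main(void)
    {
        uint8_t rx[] = "HDRpayload";
        uint8_t tx[32];

        /* Command 1: forward received data, skipping a 3-byte header. */
        ch_command c1 = { .op = CH_XMIT_RECEIVED, .skip = 3, .length = 7 };
        size_t n = channel_execute(&c1, rx, sizeof rx - 1, tx, sizeof tx);
        printf("%zu bytes: %.*s\n", n, (int)n, (char *)tx);
        return 0;
    }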