Abstract:
Ways of a cache memory system are designated as being in one of three subsets: a normal subset, a transient subset, and a locked subset. The designation of the respective subsets is provided by a normal subset floor index, a transient subset floor index, and a transient subset ceiling index. The respective indexes are used to select the subset into which new entries are copied from main memory as a result of a cache miss. If the new entry is designated as being characterized by normal program behavior, it is copied into the normal subset in the cache. If the new entry is designated as being characterized by transient program behavior, it is copied into the transient subset in the cache. The relationship between the normal subset and the transient subset is programmable. For example, the normal and the transient subsets may include at least one common way of the cache memory or the transient subset may be completely included in the normal subset or completely separate therefrom.
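The abstract does not give an implementation, but as a minimal, hypothetical sketch the way-selection step on a miss could be modeled as below. The structure `way_subsets`, the function `select_fill_range`, and the field names are illustrative assumptions, not terms from the patent.

```c
#include <stdbool.h>

#define NUM_WAYS 8

/* Hypothetical registers describing the subsets; names are illustrative. */
struct way_subsets {
    unsigned normal_floor;      /* normal subset: ways [normal_floor, NUM_WAYS-1]          */
    unsigned transient_floor;   /* transient subset: [transient_floor, transient_ceiling]  */
    unsigned transient_ceiling;
};

/* On a cache miss, return the range of ways into which the new entry may
 * be copied, depending on whether the access is characterized as transient.
 * A real design would then apply a replacement policy (LRU, round-robin)
 * restricted to this range. */
static void select_fill_range(const struct way_subsets *s, bool transient,
                              unsigned *first_way, unsigned *last_way)
{
    if (transient) {
        *first_way = s->transient_floor;
        *last_way  = s->transient_ceiling;
    } else {
        *first_way = s->normal_floor;
        *last_way  = NUM_WAYS - 1;
    }
}
```

With `normal_floor = 2`, `transient_floor = 6`, and `transient_ceiling = 7`, for example, the transient subset lies entirely inside the normal subset; choosing `transient_ceiling < normal_floor` would make the two disjoint, illustrating the programmable relationship the abstract describes.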
Abstract:
A process and system for transferring data including at least one slave device connected to at least one master device through an arbiter device. The master and slave devices are connected by a single address bus, a write data bus and a read data bus. The arbiter device receives requests for data transfers from the master devices and selectively transmits the requests to the slave devices. The master devices and the slave devices are further connected by a plurality of transfer qualifier signals which may specify predetermined characteristics of the requested data transfers. Control signals are also communicated between the arbiter device and the slave devices to allow appropriate slave devices to latch addresses of requested second transfers during the pendency of current or primary data transfers so as to obviate an address transfer latency typically required for the second transfer. The design is configured to function advantageously in mixed systems that may include both address-pipelining and non-address-pipelining slave devices.
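As a rough, hypothetical C model of the address-pipelining idea (not the patent's signal set), the arbiter lets a capable slave latch a secondary address while the primary transfer still occupies a data bus. The names `slave`, `bus_state`, and `arbiter_present_request` are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model: a slave that supports address pipelining can latch the
 * address of a secondary request while the primary data transfer is still
 * using the read or write data bus. */
struct slave {
    bool     supports_addr_pipelining;   /* hypothetical capability flag */
    uint32_t latched_addr;
    bool     secondary_pending;
};

struct bus_state {
    bool data_busy;                      /* a primary transfer occupies a data bus */
};

/* Returns true if the request was accepted now, either started directly or
 * address-pipelined behind the primary transfer. */
static bool arbiter_present_request(struct bus_state *bus, struct slave *sl,
                                    uint32_t addr)
{
    if (!bus->data_busy) {
        /* Address and data buses free: start the transfer immediately. */
        sl->latched_addr = addr;
        return true;
    }
    if (sl->supports_addr_pipelining && !sl->secondary_pending) {
        /* Latch the secondary address now, so no separate address phase is
         * needed once the primary transfer completes. */
        sl->latched_addr      = addr;
        sl->secondary_pending = true;
        return true;
    }
    /* Non-pipelining slave: the arbiter holds the request until the
     * primary transfer finishes. */
    return false;
}
```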
Abstract:
A system and method for tracing program code within a processor having an embedded cache memory. The non-invasive tracing technique minimizes the need for trace information to be broadcast externally. The tracing technique monitors changes in instruction flow from the normal execution stream of the code. The tracing technique monitors the updating of processor branch target register contents in order to monitor branch target flow of the code. A FIFO and serial logic circuitry are used to minimize the number of chip pins required to broadcast the information from the chip. The tracing technique uses instruction and data breakpoint debug functions to signal an external trace tool that a trace event has occurred.
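A simple software sketch of the FIFO-plus-serializer arrangement, under assumed field widths and names (`trace_event`, `trace_fifo`, `emit_bit` are illustrative, not from the patent), might look like this:

```c
#include <stdbool.h>
#include <stdint.h>

#define TRACE_FIFO_DEPTH 16

/* Illustrative model of the on-chip trace path: change-of-flow events are
 * queued in a FIFO and then shifted out one bit per cycle over a narrow
 * pin interface. The event layout is hypothetical. */
struct trace_event {
    uint32_t branch_target;   /* value written to a branch target register          */
    uint8_t  reason;          /* e.g. taken branch, exception, debug breakpoint hit */
};

struct trace_fifo {
    struct trace_event entries[TRACE_FIFO_DEPTH];
    unsigned head, tail, count;
};

static bool trace_push(struct trace_fifo *f, struct trace_event ev)
{
    if (f->count == TRACE_FIFO_DEPTH)
        return false;                        /* a real design would stall or flag overflow */
    f->entries[f->tail] = ev;
    f->tail = (f->tail + 1) % TRACE_FIFO_DEPTH;
    f->count++;
    return true;
}

/* Serialize the oldest event, most significant bit first; emit_bit() stands
 * in for driving the trace pin for one cycle. */
static void trace_shift_out(struct trace_fifo *f, void (*emit_bit)(int))
{
    if (f->count == 0)
        return;
    uint64_t word = ((uint64_t)f->entries[f->head].reason << 32)
                  |  f->entries[f->head].branch_target;
    for (int i = 39; i >= 0; i--)
        emit_bit((int)((word >> i) & 1));
    f->head = (f->head + 1) % TRACE_FIFO_DEPTH;
    f->count--;
}
```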
Abstract:
Methods and systems are disclosed that use existing hardware resources in the CPU's memory management unit (MMU) to guard against attacks designed to replace authenticated, secure code with non-authentic, unsecure code. In certain embodiments, permission entries indicating that pages in memory have been previously authenticated as secure are maintained in a translation lookaside buffer (TLB) and checked upon encountering an instruction residing at an external page. A TLB permission entry indicating that permission is invalid causes on-demand authentication of the accessed page. Upon authentication, the permission entry in the TLB is updated to reflect that the page has been authenticated. As another example, in certain embodiments, a list of recently authenticated pages is maintained and checked upon encountering an instruction residing at an external page.
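A minimal sketch of the TLB-permission flow, assuming a per-entry `authenticated` bit and a placeholder `authenticate_page()` signature check (both names are illustrative, not the patent's API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model: one extra permission bit per TLB entry records whether
 * the page has already been authenticated as secure. */
struct tlb_entry {
    uint64_t vpn;
    uint64_t pfn;
    bool     valid;
    bool     authenticated;    /* set once the page passes the signature check */
};

/* Hypothetical hash/signature verification of the page's contents. */
extern bool authenticate_page(uint64_t pfn);

/* Called when instruction fetch lands in an external page. */
static bool check_exec_permission(struct tlb_entry *e)
{
    if (!e->valid)
        return false;                 /* normal TLB miss handling applies */
    if (e->authenticated)
        return true;                  /* fast path: page already verified */

    /* Permission invalid: authenticate the page on demand. */
    if (!authenticate_page(e->pfn))
        return false;                 /* a real system would raise a security exception */

    e->authenticated = true;          /* update the TLB so later fetches hit the fast path */
    return true;
}
```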
Abstract:
When a branch misprediction in a pipelined processor is discovered, if the mispredicted branch instruction is not the last uncommitted instruction in the pipelines, older uncommitted instructions are checked for dependency on a long latency operation. If such a dependency is discovered, all uncommitted instructions are flushed from the pipelines without waiting for the dependency to be resolved. The branch prediction is corrected, and the branch instruction and all flushed instructions older than the branch instruction are re-fetched and executed.
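The decision whether to flush everything, rather than wait, could be sketched as the following check over uncommitted instructions. The `uop` structure and `should_flush_all` are assumed names for illustration only.

```c
#include <stdbool.h>

/* Illustrative pipeline model: on a detected misprediction, if any older
 * uncommitted instruction still waits on a long-latency operation (e.g. a
 * cache-missing load), flush all uncommitted instructions rather than
 * waiting for the dependency to resolve. */
struct uop {
    unsigned age;                     /* smaller value = older instruction */
    bool     committed;
    bool     waits_on_long_latency;
};

static bool should_flush_all(const struct uop *pipe, int n, unsigned branch_age)
{
    for (int i = 0; i < n; i++) {
        const struct uop *u = &pipe[i];
        if (!u->committed && u->age < branch_age && u->waits_on_long_latency)
            return true;              /* older dependent work would stall recovery */
    }
    return false;
}
```

If the check fires, the machine would correct the branch predictor and re-fetch starting from the oldest flushed instruction, as the abstract describes.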
Abstract:
A method and system for messaging between processors and co-processors connected through a bus. The method permits a multi-thread system processor to request the services of a processor or co-processor located on the bus. Message control blocks, which identify the physical address of the target processor as well as a memory location dedicated to the thread requesting the service, are stored in a memory. When the system processor requests service of a processor or co-processor, a DCR command is created pointing to the message control block. A message is built from information contained in the message control block and transferred to the processor or co-processor. The return address for the processor or co-processor message is concatenated with the thread number, so that the processor or co-processor can create a return message specifically identifying memory space dedicated to the requesting thread for storage of the response message.
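As a hypothetical sketch of the data structures involved (field names and the 4-bit thread shift are illustrative assumptions, not the patent's layout):

```c
#include <stdint.h>

/* Illustrative message control block layout. */
struct msg_control_block {
    uint32_t target_addr;     /* physical address of the target (co-)processor      */
    uint32_t reply_base;      /* memory region dedicated to the requesting thread   */
    uint32_t payload[4];      /* parameters describing the requested service        */
};

/* Build the return address by concatenating the reply region with the
 * requesting thread number, so the co-processor's response lands in the
 * memory dedicated to that thread. */
static uint32_t make_return_addr(const struct msg_control_block *mcb,
                                 uint32_t thread_id)
{
    return mcb->reply_base | (thread_id << 4);   /* shift amount is illustrative */
}
```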
Abstract:
Delays due to waiting for operands that will not be used by a select operand instruction are alleviated based on an early recognition that such operand data is not required to complete processing of the select operand instruction. At appropriate points prior to execution, determinations are made regarding a selection criterion or criteria specified by the select operand instruction, conditions that affect the selection criteria, and the availability of operands. A hold circuit uses these determinations to control the activation and release of a hold signal that controls processor pipeline stalls. A stall required to wait for operand data is skipped, or a stall is terminated early, if the selected operand is available even though the other operand, which will not be used, is not available. A stall due to waiting for operands is maintained until the selection criteria are met and the selected operand is fetched and made available.
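A minimal sketch of the hold decision, assuming a select instruction of the form "rd = cond ? ra : rb" (the `select_state` structure and `hold_required` are invented names for illustration):

```c
#include <stdbool.h>

/* Illustrative model of the hold decision for a select-operand instruction.
 * Once the selection condition is resolved, only the chosen operand must be
 * ready before the stall can be released. */
struct select_state {
    bool cond_known;       /* selection criterion already resolved?  */
    bool cond_value;       /* which source the instruction will use  */
    bool ra_available;     /* first source operand ready             */
    bool rb_available;     /* second source operand ready            */
};

/* Returns true if the pipeline must stall (hold signal active). */
static bool hold_required(const struct select_state *s)
{
    if (!s->cond_known)
        return !(s->ra_available && s->rb_available);  /* conservative: wait for both */

    /* Condition resolved: stall only until the selected operand arrives,
     * even if the unused operand is still outstanding. */
    return s->cond_value ? !s->ra_available : !s->rb_available;
}
```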
Abstract:
A renaming register file complex for saving power is described. A mapping unit transforms an instruction register number (IRN) into a logical register number (LRN). The renaming register file maps an LRN to a physical register number (PRN), there being more physical registers than are addressable by direct use of the IRN. The renaming register file uses a content addressable memory (CAM) to provide the mapping function. The renaming register file CAM further uses current processor state information to selectively enable tag comparators, minimizing the power spent accessing registers. When a tag comparator is not enabled, it remains in a low-power state. A processor using a renaming register file with these low-power features is also described.
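A software sketch of the CAM lookup with per-entry comparator enables might look like the following; `rename_entry`, `lookup_prn`, and the register counts are illustrative assumptions, not the patent's design.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_PHYS_REGS 64

/* Illustrative model: each physical register's CAM tag holds the logical
 * register number (LRN) it currently renames.  A per-entry enable, derived
 * from processor state, gates the comparator so unused entries never
 * toggle (i.e. stay in a low-power state in hardware). */
struct rename_entry {
    uint8_t lrn_tag;           /* logical register currently mapped here   */
    bool    mapping_valid;     /* entry holds a live mapping               */
    bool    comparator_enable; /* derived from current processor state     */
};

static int lookup_prn(const struct rename_entry *cam, uint8_t lrn)
{
    for (int prn = 0; prn < NUM_PHYS_REGS; prn++) {
        /* Disabled comparators are skipped, modelling entries that burn
         * no dynamic power during the lookup. */
        if (!cam[prn].comparator_enable)
            continue;
        if (cam[prn].mapping_valid && cam[prn].lrn_tag == lrn)
            return prn;        /* physical register backing this LRN */
    }
    return -1;                 /* no match: LRN not currently renamed */
}
```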
Abstract:
A processing system may include a memory configured to store data in a plurality of pages, a TLB, and a memory cache including a plurality of cache lines. Each page in the memory may include a plurality of lines of memory. The memory cache may permit, when a virtual address is presented to the cache, a matching cache line to be identified from the plurality of cache lines, the matching cache line having a matching address that matches the virtual address. By further storing in each cache line a page attribute of the line of data it holds, the memory cache may be configured to permit one or more page attributes of the page located at the matching address to be retrieved from the memory cache rather than from the TLB.
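A minimal, hypothetical sketch of such a cache line and lookup follows; the field names, line size, and tag/index split are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_CACHE_LINES 256

/* Illustrative structure: each cache line carries a copy of the page
 * attributes (e.g. cacheability, write-through, no-execute) for the page
 * its data came from, so a hit can return the attributes without a TLB
 * lookup. */
struct cache_line {
    bool     valid;
    uint64_t tag;              /* derived from the virtual address            */
    uint8_t  page_attrs;       /* attributes copied from the page table entry */
    uint8_t  data[64];
};

/* On a hit, both the data and the page attributes come from the cache;
 * only on a miss would the TLB or page tables need to be consulted. */
static bool cache_lookup(const struct cache_line *lines, uint64_t vaddr,
                         const uint8_t **data_out, uint8_t *attrs_out)
{
    uint64_t index = (vaddr >> 6) % NUM_CACHE_LINES;   /* 64-byte lines (assumed) */
    uint64_t tag   = vaddr >> 14;                      /* bits above the index    */
    const struct cache_line *l = &lines[index];

    if (l->valid && l->tag == tag) {
        *data_out  = l->data;
        *attrs_out = l->page_attrs;                    /* no TLB access on the hit */
        return true;
    }
    return false;
}
```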