Abstract:
A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is described herein. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to, and ownership of, shared data. Loose transaction ordering is provided for while maintaining corresponding transaction priority to memory locations to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Finally, caching of device local memory in a host address space, as well as caching of system memory in a device local memory address space, is provided to improve bandwidth and latency for memory accesses.
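As a rough illustration of how such hints might accompany a transaction, the following C sketch defines a hypothetical descriptor carrying temporal, locality, prefetch, and ordering hints. The field names, widths, and encodings are assumptions made for illustration only and do not correspond to the actual PCIe TLP header format described in the specification.

```c
#include <stdint.h>

/* Hypothetical transaction descriptor carrying the caching/prefetching
 * hints discussed above. Field names and widths are illustrative only
 * and are not the actual PCIe TLP layout. */
typedef struct {
    uint64_t address;        /* target memory address                    */
    uint8_t  temporal_hint;  /* e.g. 0 = non-temporal, 3 = high reuse    */
    uint8_t  locality_hint;  /* e.g. which cache level to allocate into  */
    uint8_t  prefetch_hint;  /* e.g. number of lines to prefetch ahead   */
    uint8_t  relaxed_order;  /* 1 = loose transaction ordering permitted */
} xaction_hints_t;

/* A device-side helper might tag a read request with hints so that the
 * host can cache or prefetch the target line more effectively. */
static inline xaction_hints_t make_read_hints(uint64_t addr)
{
    xaction_hints_t h = { .address       = addr,
                          .temporal_hint = 3,   /* expect reuse soon     */
                          .locality_hint = 2,   /* prefer L2 allocation  */
                          .prefetch_hint = 4,   /* prefetch 4 lines      */
                          .relaxed_order = 1 }; /* ordering not required */
    return h;
}
```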
Abstract:
Prior to updating a database entry, an update task invalidates a valid indicator (e.g., a bit) associated with the database entry. The update task then waits for any other tasks (e.g., user tasks) that are accessing the database entry to complete their processing. In particular, a synchronization register holds a synchronization entry (e.g., a bit) for each user task created by a microcontroller. The update task sets each synchronization entry of the synchronization register to a first value. As each user task completes its processing, the synchronization entry associated with that user task is set to a second value (e.g., the synchronization bit is reset). The update task monitors the synchronization register, and, when every synchronization entry has been set to the second value, the update task performs its update of the database entry.
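The following C sketch illustrates the invalidate/wait/update sequence described in this abstract, using a bitmask as the synchronization register. The structure layout, task limit, and helper names are assumptions, and the busy-wait loop is only for illustration.

```c
#include <stdint.h>

/* Minimal sketch of the update/synchronization flow: one synchronization
 * bit per user task, held in a single register word. */
#define MAX_USER_TASKS 32

typedef struct {
    volatile uint32_t valid;     /* valid indicator for the entry        */
    volatile uint32_t sync_reg;  /* one synchronization bit per user task */
    int               data;      /* the database entry payload           */
} db_entry_t;

/* User task: reset our synchronization bit once we finish accessing
 * the entry (the "second value"). */
void user_task_done(db_entry_t *e, unsigned task_id)
{
    e->sync_reg &= ~(1u << task_id);
}

/* Update task: invalidate the entry, set every synchronization bit to
 * the first value, wait until all user tasks have reset their bit,
 * then perform the update and re-validate the entry. */
void update_entry(db_entry_t *e, int new_value, uint32_t task_mask)
{
    e->valid    = 0;          /* stop new accesses                      */
    e->sync_reg = task_mask;  /* first value for each user task's bit   */
    while (e->sync_reg != 0)  /* wait for in-flight accesses to finish  */
        ;                     /* busy-wait, for illustration only       */
    e->data  = new_value;     /* now safe to update the entry           */
    e->valid = 1;             /* re-enable accesses                     */
}
```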
Abstract:
Tasks are dynamically allocated to process packets. In particular, each packet of data to be processed is assigned a packet identification. The packet identification includes a lane and a packet sequence number. The term “lane” as used herein refers to a port number and a direction (i.e., ingress or egress), such as Port 3 Egress. A set of resources (e.g., registers and memory buffers) is associated with each lane, and a task is allowed to access the resources associated with the lane of the packet it is processing. In some embodiments, a task may change the port that it services and use the resources associated with that port.
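A minimal C sketch of the packet identification and per-lane resource lookup described above; the type names, field widths, and resource sizes are illustrative assumptions rather than details taken from the source.

```c
#include <stdint.h>

/* A lane is a port number plus a direction (e.g., Port 3 Egress). */
typedef enum { DIR_INGRESS = 0, DIR_EGRESS = 1 } direction_t;

typedef struct {
    uint8_t     port;   /* e.g. 3                                     */
    direction_t dir;    /* ingress or egress                          */
} lane_t;

/* Packet identification: lane plus packet sequence number. */
typedef struct {
    lane_t   lane;      /* which lane the packet belongs to           */
    uint32_t seq;       /* packet sequence number within the lane     */
} packet_id_t;

/* Per-lane resources (registers, memory buffers), indexed by lane.
 * A task assigned a packet looks up and uses its lane's resources. */
#define NUM_PORTS 8

typedef struct {
    uint8_t regs[16];   /* lane-private registers                     */
    uint8_t buf[2048];  /* lane-private memory buffer                 */
} lane_resources_t;

static lane_resources_t resources[NUM_PORTS][2];   /* [port][direction] */

lane_resources_t *lane_lookup(packet_id_t id)
{
    return &resources[id.lane.port][id.lane.dir];
}
```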
Abstract:
An apparatus and method for implementing watchpoints and breakpoints in a data processing system (110) are described. In one embodiment, a pipelined processor (110) executes each instruction of a program, and one or more watchpoints are associated with the instructions. The processor includes a history buffer (50) for storing processor state values at the time each instruction was executed, until a predetermined time. Watchpoint information associated with a particular watchpoint is also stored in the history buffer (50), in association with the processor state values, such that the processor state is changed and the watchpoint is announced at the predetermined time. The watchpoint information may include increment/decrement information for one or more counters (41, 42). Breakpoint information may also be stored in the history buffer (50).
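The following C sketch shows one way a history-buffer entry could carry watchpoint and breakpoint information alongside saved processor state, with the counters updated and the watchpoint announced when the entry is handled at the predetermined time (e.g., retirement). All names, field widths, and the two-counter assumption are for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative history-buffer entry: saved processor state plus
 * watchpoint/breakpoint information recorded at execution time. */
typedef struct {
    uint32_t pc;               /* instruction address                     */
    uint32_t old_reg_value;    /* saved state for possible restoration    */
    uint8_t  dest_reg;         /* destination register number             */
    bool     watchpoint_hit;   /* a watchpoint matched this instruction   */
    int8_t   counter_delta[2]; /* increment/decrement for counters 41, 42 */
    bool     breakpoint_hit;   /* breakpoint information, if any          */
} hist_entry_t;

/* At the predetermined time, commit the state change, apply the counter
 * increments/decrements, and announce any watchpoint or breakpoint. */
void retire_entry(hist_entry_t *e, int32_t counters[2])
{
    counters[0] += e->counter_delta[0];
    counters[1] += e->counter_delta[1];
    if (e->watchpoint_hit) {
        /* announce the watchpoint now that the state is committed */
    }
    if (e->breakpoint_hit) {
        /* take the breakpoint action */
    }
}
```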
Abstract:
A cached processor (2) comprises a cache memory (8') having mode switching means for selecting an address capture mode whereby information, such as data and/or instructions, can be captured and stored in all or part of a cache array (30) of the cache memory in real time. The captured information can at any time be transferred to, and used by, an external debug station, coupled to the cached processor, to observe the executed program flow.
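As a rough sketch of the address capture mode described above, the following C code models a mode-switching flag that, when selected, causes information observed in real time to be written into a portion of the cache array for later transfer to an external debug station. All names and sizes are illustrative assumptions, not details of the described apparatus.

```c
#include <stdint.h>
#include <stdbool.h>

/* Portion of the cache array reserved for capture in this sketch. */
#define CAPTURE_LINES 256

typedef struct {
    bool     capture_mode;            /* mode-switching flag            */
    uint32_t capture[CAPTURE_LINES];  /* part of the cache array        */
    uint32_t wr_idx;                  /* circular write index           */
} debug_cache_t;

/* Called for each executed access while capture mode is selected:
 * the address (or data/instruction word) is stored in real time. */
void cache_observe(debug_cache_t *c, uint32_t addr)
{
    if (c->capture_mode) {
        c->capture[c->wr_idx] = addr;
        c->wr_idx = (c->wr_idx + 1) % CAPTURE_LINES;
    }
}
```

An external debug station could later read out the `capture` array to reconstruct the executed program flow.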