Abstract:
An apparatus for controlling memory includes a memory controller, and a data interface that interfaces with and is in data communication with data lines, each having inherent skew. Each data line carries a data signal. The data lines connect the memory controller to the memory. The apparatus also includes data de-skewers, each associated with a corresponding data line, a strobe interface that interfaces with a strobe line that connects the memory controller to the memory and that applies a timing signal to the strobe line, and a strobe de-skewer connected to the strobe line. Each data de-skewer operates in read or write mode. A particular data line's data de-skewer applies a compensation skew to a data signal carried by that line.
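As a rough illustration of the de-skew idea, the following C sketch aligns every data line and the strobe to the slowest inherent delay; the lane count, the picosecond values, and names such as lane_skew_ps are assumptions made for the example, not details taken from the abstract.

```c
/* Minimal sketch, assuming the goal is to align each data line (and the strobe)
 * to the largest inherent delay by adding a per-line compensation skew.
 * Lane count and delay values are illustrative. */
#include <stdio.h>

#define NUM_LANES 8

int main(void) {
    /* Hypothetical inherent skew of each data line, in picoseconds. */
    int lane_skew_ps[NUM_LANES] = {120, 95, 140, 110, 130, 100, 125, 115};
    int strobe_skew_ps = 105;

    /* Find the slowest (largest) inherent delay across the lanes and the strobe. */
    int max_ps = strobe_skew_ps;
    for (int i = 0; i < NUM_LANES; i++)
        if (lane_skew_ps[i] > max_ps)
            max_ps = lane_skew_ps[i];

    /* Each de-skewer adds a compensation delay so that inherent skew plus
     * compensation is equal on every line, aligning data edges with the strobe. */
    for (int i = 0; i < NUM_LANES; i++)
        printf("lane %d: compensation %d ps\n", i, max_ps - lane_skew_ps[i]);
    printf("strobe : compensation %d ps\n", max_ps - strobe_skew_ps);
    return 0;
}
```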
Abstract:
A circuit manages and controls access requests to a register, such as a control and status register (CSR), among a number of devices. In particular, the circuit selectively forwards or suspends off-chip access requests and forwards on-chip access requests independent of the status of off-chip requests. The circuit receives access requests at a plurality of buses, one or more of which can be dedicated to exclusively on-chip requests and/or exclusively off-chip requests. Based on the completion status of previous off-chip access requests, further off-chip access requests are selectively forwarded or suspended, while on-chip access requests are sent independently of off-chip request status.
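A minimal behavioral sketch of the forwarding policy follows, assuming at most one outstanding off-chip access; the types and names (csr_request_t, off_chip_pending, handle_request) are illustrative stand-ins, not the circuit's actual interface.

```c
/* Sketch: forward on-chip CSR requests unconditionally; forward an off-chip
 * request only if no earlier off-chip access is still pending. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  id;
    bool off_chip;   /* true: request targets an off-chip CSR */
} csr_request_t;

static bool off_chip_pending = false;   /* completion status of the prior off-chip access */

/* Returns true if the request is forwarded, false if it is suspended. */
bool handle_request(const csr_request_t *req) {
    if (!req->off_chip) {
        /* On-chip requests are forwarded regardless of off-chip status. */
        printf("forward on-chip request %d\n", req->id);
        return true;
    }
    if (off_chip_pending) {
        /* Previous off-chip access not yet complete: suspend this one. */
        printf("suspend off-chip request %d\n", req->id);
        return false;
    }
    off_chip_pending = true;
    printf("forward off-chip request %d\n", req->id);
    return true;
}

void off_chip_completed(void) { off_chip_pending = false; }

int main(void) {
    csr_request_t on1 = { 1, false }, off1 = { 2, true }, off2 = { 3, true };
    handle_request(&off1);        /* forwarded, marks an off-chip access pending */
    handle_request(&off2);        /* suspended until the first completes */
    handle_request(&on1);         /* forwarded regardless of off-chip status */
    off_chip_completed();
    handle_request(&off2);        /* retried and now forwarded */
    return 0;
}
```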
Abstract:
Managing an instruction cache of a processing element, the instruction cache including a plurality of instruction cache entries, each entry including a mapping of a virtual memory address to one or more processor instructions, includes: issuing, at the processing element, a translation lookaside buffer invalidation instruction for invalidating a translation lookaside buffer entry in a translation lookaside buffer, the translation lookaside buffer entry including a mapping from a range of virtual memory addresses to a range of physical memory addresses; and causing invalidation of one or more of the instruction cache entries of the plurality of instruction cache entries in response to the translation lookaside buffer invalidation instruction.
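The following C sketch models the behavior in software, assuming each instruction cache entry records the virtual address it maps; the entry layout and the function invalidate_icache_for_tlbi are hypothetical, not the patented hardware interface.

```c
/* Sketch: when a TLB invalidation covers [va_start, va_end), also invalidate
 * any instruction cache entry whose virtual address falls in that range. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ICACHE_ENTRIES 4

typedef struct {
    bool     valid;
    uint64_t vaddr;      /* virtual address the cached instructions map */
} icache_entry_t;

static icache_entry_t icache[ICACHE_ENTRIES] = {
    { true, 0x1000 }, { true, 0x2000 }, { true, 0x3000 }, { true, 0x4000 },
};

/* Invoked in response to the TLB invalidation instruction. */
void invalidate_icache_for_tlbi(uint64_t va_start, uint64_t va_end) {
    for (int i = 0; i < ICACHE_ENTRIES; i++)
        if (icache[i].valid && icache[i].vaddr >= va_start && icache[i].vaddr < va_end)
            icache[i].valid = false;
}

int main(void) {
    invalidate_icache_for_tlbi(0x2000, 0x4000);   /* invalidates entries 1 and 2 */
    for (int i = 0; i < ICACHE_ENTRIES; i++)
        printf("entry %d: %s\n", i, icache[i].valid ? "valid" : "invalid");
    return 0;
}
```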
Abstract:
An improved content search mechanism uses a graph that includes intelligent nodes, avoiding the overhead of post-processing and improving the overall performance of a content processing application. An intelligent node is similar to a node in a DFA graph but includes a command. The command in the intelligent node allows additional state for the node to be generated and checked. This additional state allows the content search mechanism to traverse the same node with two different interpretations. By generating state for the node, the graph of nodes does not become exponential. It also allows a user function to be called upon reaching a node, which can perform any desired user tasks, including modifying the input data or position.
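A simplified C sketch of the idea follows, assuming the node command sets or checks a per-walk flag so the same node can be interpreted two ways without duplicating graph nodes; the structure, command set, and walker are illustrative only.

```c
/* Sketch: a DFA-style node carrying a command that manipulates per-walk state. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { CMD_NONE, CMD_SET_FLAG, CMD_MATCH_IF_FLAG } node_cmd_t;

typedef struct node {
    node_cmd_t   cmd;          /* command attached to the intelligent node */
    struct node *next[256];    /* DFA-style transitions on the input byte */
} node_t;

/* Walk the input through the graph; the per-walk flag is the "additional
 * state" that lets one node be traversed under two interpretations. */
void walk(node_t *root, const unsigned char *in, int len) {
    node_t *n = root;
    bool flag = false;
    for (int i = 0; i < len && n; i++) {
        n = n->next[in[i]];
        if (!n) break;
        switch (n->cmd) {
        case CMD_SET_FLAG:      flag = true; break;
        case CMD_MATCH_IF_FLAG: if (flag) printf("match at offset %d\n", i); break;
        default:                break;
        }
    }
}

int main(void) {
    static node_t a = { .cmd = CMD_NONE };
    static node_t b = { .cmd = CMD_SET_FLAG };
    static node_t c = { .cmd = CMD_MATCH_IF_FLAG };
    a.next['x'] = &b;          /* 'x' generates the extra per-walk state */
    b.next['y'] = &c;          /* 'y' reports a match only if that state was set */
    walk(&a, (const unsigned char *)"xy", 2);
    return 0;
}
```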
Abstract:
A root node of a decision tree data structure may cover all values of a search space used for packet classification. The search space may include a plurality of rules, the plurality of rules having at least one field. The decision tree data structure may include a plurality of nodes, the plurality of nodes including a subset of the plurality of rules. Scope in the decision tree data structure may be based on comparing a portion of the search space covered by a node to a portion of the search space covered by the node's rules. Scope in the decision tree data structure may be used to identify whether or not a compilation operation may be unproductive. By identifying an unproductive compilation operation, it may be avoided, thereby improving compiler efficiency, as the unproductive compilation operation may be time-consuming.
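As a hedged sketch of how such a comparison might be used, the C snippet below treats "scope" as the log2 ratio of the node's coverage to the average coverage of its rules and skips a cut when the ratio is small; the threshold, the coverage measure, and the names are assumptions for illustration, not the abstract's definition.

```c
/* Sketch: decide whether further cutting a node looks unproductive by
 * comparing node coverage against the average coverage of its rules. */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Returns true when the node already covers roughly the same portion of the
 * search space as its rules, so another compilation pass is unlikely to help. */
bool cut_is_unproductive(double node_coverage, double avg_rule_coverage) {
    const double SCOPE_THRESHOLD = 1.0;   /* illustrative tuning constant */
    double scope = log2(node_coverage / avg_rule_coverage);
    return scope < SCOPE_THRESHOLD;
}

int main(void) {
    printf("%d\n", cut_is_unproductive(1 << 16, 1 << 14));  /* scope 2: cut looks productive */
    printf("%d\n", cut_is_unproductive(1 << 16, 1 << 16));  /* scope 0: skip the cut */
    return 0;
}
```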
Abstract:
A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). A hypervisor, which manages one or more guest operating systems, translates GPAs to root physical addresses (RPAs). A merged translation lookaside buffer (MTLB) caches translations between the multiple addressing domains, enabling faster address translation and memory access. The MTLB can be logically addressable as multiple different caches, and can be reconfigured to allot different spaces to each logical cache. Further, a collapsed TLB is an additional cache storing collapsed translations derived from the MTLB. Entries in the MTLB, the collapsed TLB, and other caches can be maintained for consistency.
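A minimal C sketch of the two-step lookup follows, assuming the MTLB is modeled as two logical arrays (guest and root partitions) and the collapsed TLB caches the combined GVA-to-RPA translation; all structures and names are illustrative.

```c
/* Sketch: GVA -> GPA via the guest partition, GPA -> RPA via the root
 * partition, with a collapsed TLB caching the end-to-end translation. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { bool valid; uint64_t from_page, to_page; } tlb_entry_t;

#define GUEST_WAYS 4   /* logical MTLB partition for GVA->GPA */
#define ROOT_WAYS  4   /* logical MTLB partition for GPA->RPA */
#define COLLAPSED  4   /* collapsed TLB: GVA->RPA */

static tlb_entry_t guest_tlb[GUEST_WAYS], root_tlb[ROOT_WAYS], collapsed_tlb[COLLAPSED];

static bool lookup(const tlb_entry_t *tlb, int n, uint64_t from, uint64_t *to) {
    for (int i = 0; i < n; i++)
        if (tlb[i].valid && tlb[i].from_page == from) { *to = tlb[i].to_page; return true; }
    return false;
}

/* Try the collapsed TLB first; otherwise translate in two steps and fill it. */
bool translate(uint64_t gva_page, uint64_t *rpa_page) {
    uint64_t gpa_page;
    if (lookup(collapsed_tlb, COLLAPSED, gva_page, rpa_page))
        return true;                                  /* fast path: one lookup */
    if (!lookup(guest_tlb, GUEST_WAYS, gva_page, &gpa_page)) return false;
    if (!lookup(root_tlb, ROOT_WAYS, gpa_page, rpa_page))    return false;
    collapsed_tlb[gva_page % COLLAPSED] = (tlb_entry_t){ true, gva_page, *rpa_page };
    return true;
}

int main(void) {
    guest_tlb[0] = (tlb_entry_t){ true, 0x10, 0x80 };   /* GVA page 0x10 -> GPA 0x80 */
    root_tlb[0]  = (tlb_entry_t){ true, 0x80, 0x300 };  /* GPA page 0x80 -> RPA 0x300 */
    uint64_t rpa;
    if (translate(0x10, &rpa)) printf("RPA page 0x%llx\n", (unsigned long long)rpa);
    return 0;
}
```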
Abstract:
In one embodiment, a system includes a memory, and a memory controller coupled to the memory via an address bus, a data bus, and an error code bus. The memory stores data at an address and stores an error code at the address. The error code is generated based on a function of the corresponding data and address.
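The short C sketch below illustrates the point that the error code is a function of both the data and its address, assuming a simple XOR-fold checksum as a stand-in for the real code; the mixing constant and function name are illustrative.

```c
/* Sketch: an error code computed over data combined with its address detects
 * data corruption and also data returned from the wrong address. */
#include <stdint.h>
#include <stdio.h>

static uint8_t error_code(uint64_t data, uint64_t addr) {
    uint64_t x = data ^ (addr * 0x9E3779B97F4A7C15ULL);  /* mix the address into the code */
    x ^= x >> 32; x ^= x >> 16; x ^= x >> 8;
    return (uint8_t)x;
}

int main(void) {
    uint64_t addr = 0x1000, data = 0xDEADBEEF;
    uint8_t stored = error_code(data, addr);             /* written alongside the data */

    /* On read, recomputing with the address verifies both data and address. */
    printf("match at same addr:  %d\n", error_code(data, addr)   == stored);
    printf("match at wrong addr: %d\n", error_code(data, 0x2000) == stored);
    return 0;
}
```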
Abstract:
In one embodiment, a processor includes plural processing cores, and plural instruction stores, each instruction store storing at least one instruction, each instruction having a corresponding group number, each instruction store having a unique identifier. The processor also includes a group execution matrix having a plurality of group execution masks and a store execution matrix comprising a plurality of store execution masks. The processor further includes a core selection unit that, for each instruction within each instruction store, selects a store execution mask from the store execution matrix. The core selection unit, for each instruction within each instruction store, selects at least one group execution mask from the group execution matrix. The core selection unit performs logic operations to create a core request mask. The processor includes an arbitration unit that determines instruction priority among the instructions, assigns an instruction to each available core, and signals the instruction store.
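A hedged C sketch of the mask combination follows, assuming each mask is a bit-vector of cores, the core request mask is the AND of the instruction's group execution mask and its store's store execution mask, and arbitration grants the lowest-numbered available core; the matrix contents, widths, and names are illustrative.

```c
/* Sketch: combine group and store execution masks into a core request mask,
 * then arbitrate against the set of free cores. */
#include <stdint.h>
#include <stdio.h>

#define NUM_CORES 16

/* Cores a group of instructions may run on, indexed by group number. */
static const uint16_t group_exec_matrix[4] = { 0xFFFF, 0x00FF, 0x0F0F, 0x0003 };
/* Cores an instruction store may dispatch to, indexed by store identifier. */
static const uint16_t store_exec_matrix[2] = { 0x0FF0, 0xF00F };

/* Logic operation producing the core request mask for one instruction. */
static uint16_t core_request_mask(int group, int store) {
    return group_exec_matrix[group] & store_exec_matrix[store];
}

/* Simple arbitration: grant the lowest-numbered requested core that is free. */
static int arbitrate(uint16_t request_mask, uint16_t free_cores) {
    uint16_t grantable = request_mask & free_cores;
    for (int core = 0; core < NUM_CORES; core++)
        if (grantable & (1u << core)) return core;
    return -1;                                   /* no core available */
}

int main(void) {
    uint16_t free_cores = 0x0F00;                /* cores 8..11 are idle */
    int core = arbitrate(core_request_mask(2, 0), free_cores);
    printf("granted core %d\n", core);           /* group 2 mask & store 0 mask & free -> core 8 */
    return 0;
}
```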
Abstract:
A new approach is proposed that contemplates systems and methods to support secure communication between a hardware security module (HSM) and a plurality of network-enabled devices to offload their key storage, management, and crypto operations to the HSM. The HSM includes a plurality of HSM service units, each configured to authenticate one of the network-enabled devices based on its credentials and process the key management and crypto operations offloaded from the network-enabled device once it is authenticated. The HSM service unit also communicates results of the key management and crypto operations back to the network-enabled device via the secured communication channel.
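An illustrative control-flow sketch in C follows, assuming each HSM service unit first checks the device's credential and only then services offloaded operations; the request structure, operation codes, and check_credential are hypothetical stand-ins, not the HSM's actual interface.

```c
/* Sketch: one HSM service unit authenticates a device, performs the offloaded
 * operation inside the HSM, and returns a result code for the secured channel. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef enum { OP_GEN_KEY, OP_SIGN, OP_ENCRYPT } hsm_op_t;

typedef struct {
    const char *device_credential;   /* presented over the secured channel */
    hsm_op_t    op;                  /* offloaded key-management/crypto operation */
} hsm_request_t;

static bool check_credential(const char *cred) {
    return cred && strcmp(cred, "device-42-cred") == 0;   /* placeholder check */
}

int hsm_service_unit(const hsm_request_t *req) {
    if (!check_credential(req->device_credential))
        return -1;                               /* reject unauthenticated devices */
    switch (req->op) {
    case OP_GEN_KEY: printf("key generated inside HSM\n");      return 0;
    case OP_SIGN:    printf("signature produced inside HSM\n"); return 0;
    case OP_ENCRYPT: printf("ciphertext produced inside HSM\n");return 0;
    }
    return -1;
}

int main(void) {
    hsm_request_t req = { "device-42-cred", OP_SIGN };
    printf("result: %d\n", hsm_service_unit(&req));
    return 0;
}
```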
Abstract:
A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). A hypervisor, which manages one or more guest operating systems, translates GPAs to root physical addresses (RPAs). A merged translation lookaside buffer (MTLB) caches translations between the multiple addressing domains, enabling faster address translation and memory access. The MTLB can be logically addressable as multiple different caches, and can be reconfigured to allot different spaces to each logical cache. Lookups to the caches of the MTLB can be selectively bypassed based on a control configuration and the attributes of a received address.
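A minimal C sketch of the selective bypass follows, assuming a control-register bit per logical cache and an "unmapped" address attribute each allow a lookup to be skipped so the address passes through untranslated; the names and the placeholder tlb_lookup are illustrative.

```c
/* Sketch: bypass either logical cache of the MTLB based on a control
 * configuration, or bypass translation entirely for unmapped addresses. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool bypass_guest;    /* control configuration: skip the GVA->GPA lookup */
    bool bypass_root;     /* control configuration: skip the GPA->RPA lookup */
} mtlb_ctrl_t;

/* Attribute of the received address; unmapped regions need no translation. */
static bool addr_is_unmapped(uint64_t addr) { return addr >= 0x8000000000ULL; }

/* Stand-in for a real lookup into a logical cache of the MTLB. */
static uint64_t tlb_lookup(uint64_t addr) { return addr + 0x100000; }

uint64_t translate(uint64_t gva, const mtlb_ctrl_t *ctrl) {
    if (addr_is_unmapped(gva)) return gva;                /* attribute-based bypass */
    uint64_t gpa = ctrl->bypass_guest ? gva : tlb_lookup(gva);
    uint64_t rpa = ctrl->bypass_root  ? gpa : tlb_lookup(gpa);
    return rpa;
}

int main(void) {
    mtlb_ctrl_t ctrl = { .bypass_guest = false, .bypass_root = true };
    printf("0x%llx\n", (unsigned long long)translate(0x4000, &ctrl));
    return 0;
}
```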