Abstract:
Apparatuses and methods for page coloring to associate memory pages with programs are disclosed. In one embodiment, an apparatus includes a paging unit and an interface to access a memory. The paging unit includes translation logic and comparison logic. The translation logic is to translate a first address to a second address. The first address is to be provided by a first instruction stored in a first page in the memory. The translation is based on an entry in a data structure, and the entry is to include a base address of a second page in the memory including the second address. The comparison logic is to compare the color of the first page to the color of the second page. The color of the first page is to indicate association of the first page with a first program including the first instruction. The data structure entry is also to include the color of the second page to indicate association of the second page with the first program or a second program.
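A minimal sketch of the translation and color comparison described above, assuming a simple page-table-entry layout; the class and method names (PageTableEntry, PagingUnit, translate) and the fault behavior are illustrative, not taken from the patent:

```java
// Minimal sketch; field layout, names, and the mismatch handling are assumptions.
class PageTableEntry {
    long baseAddress;   // base address of the second (target) page
    int color;          // color associating the target page with a program
}

class PagingUnit {
    /**
     * Translate a first (virtual) address issued from a page carrying
     * sourcePageColor into a second (physical) address, and compare the
     * colors of the two pages, as the comparison logic above does.
     */
    long translate(long firstAddress, int sourcePageColor, PageTableEntry entry) {
        long secondAddress = entry.baseAddress | (firstAddress & 0xFFFL); // 4 KB page offset
        if (entry.color != sourcePageColor) {
            // The pages are associated with different programs; an implementation
            // might raise a fault or take other protective action here.
            throw new SecurityException("page color mismatch");
        }
        return secondAddress;
    }
}
```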
Abstract:
A method and apparatus to insert control blocks into a stream of data user blocks. Data user blocks are transmitted onto a network during transmission slots. One of the data user blocks is buffered during one of the transmission slots. Instead of transmitting the buffered data user block during this transmission slot, a control block is transmitted onto the network in the data user block's place. Transmission of the data user block is delayed until the next transmission slot. The control block is inserted at a required position into the stream of data user blocks at a transmit engine, as opposed to a queue manager, leaving the queue manager unconcerned with the insertion details of the control block. Insertion of the control block by the transmit engine enables the queue manager to handle frames containing large numbers of user blocks as a single unit (as is the case with AAL-5, for example) and avoid complications related to inserting the control block in the midst of these frames.
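As a rough illustration of the division of labor described above, a transmit engine might substitute a pending control block for the head-of-queue user block in the current slot, while the queue manager keeps handing over whole frames; the class names below are hypothetical, and real hardware would operate on descriptors rather than byte arrays:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of the transmit-engine behavior; not the patented design.
class TransmitEngine {
    private final Queue<byte[]> userBlocks = new ArrayDeque<>();
    private byte[] pendingControlBlock;   // set when a control block must be inserted

    /** The queue manager hands over user blocks (e.g., an entire frame's worth). */
    void enqueueUserBlock(byte[] block) {
        userBlocks.add(block);
    }

    void requestControlBlock(byte[] controlBlock) {
        pendingControlBlock = controlBlock;
    }

    /** Called once per transmission slot; returns the block to put on the wire. */
    byte[] nextBlockForSlot() {
        if (pendingControlBlock != null) {
            // The head user block stays buffered in the queue; the control block
            // goes out in its place, and the user block is delayed one slot.
            byte[] out = pendingControlBlock;
            pendingControlBlock = null;
            return out;
        }
        return userBlocks.poll();
    }
}
```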
Abstract:
A method and apparatus for implementing protocol state machines that conserve energy on energy-conscious devices are disclosed. Under this method, most of the energy-consuming protocol state machine context invocations or operations are aggregated in time and scheduled at regular intervals. Such aggregation leads to many contexts executing concurrently in a burst prior to entering a dormant state. Thus, resource usage can reach a predictable pattern of idle and active cycles. With such a pattern, it is possible to take advantage of the energy-saving features of processors by downshifting the processor clock speed and reducing the use of other resources such as peripherals and buses. The intervals are configured to achieve a tradeoff between timely execution and energy consumption. The aggregation operates across two dimensions, namely, multiple instances of a protocol state machine and multiple layers of protocols in a layered architecture.
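A sketch of the aggregation under the stated assumptions: protocol contexts register runnable work, and a single periodic timer runs them back to back so the device can idle between bursts. The class name, the fixed-rate scheduling policy, and the millisecond interval are illustrative only:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch; the interval would be tuned to trade timeliness
// against energy consumption, as the abstract notes.
class AggregatedScheduler {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    void start(List<Runnable> protocolContexts, long intervalMillis) {
        timer.scheduleAtFixedRate(() -> {
            // Execute all pending contexts (many state machine instances,
            // multiple protocol layers) in one burst.
            for (Runnable context : protocolContexts) {
                context.run();
            }
            // Between bursts the processor can downshift its clock or sleep.
        }, intervalMillis, intervalMillis, TimeUnit.MILLISECONDS);
    }
}
```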
Abstract:
According to some embodiments, first execution information is received from a first development tool. Second execution information is also received from a second development tool. Based on the first execution information and the second execution information, operation of the first development tool may be controlled. According to some embodiments, the first and second development tools are associated with different processor architectures.
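One way to read this, sketched below with an assumed tool interface and assumed "stopped"/"halt" states (none of which appear in the abstract), is a coordinator that collects execution information from both tools and steers the first one accordingly:

```java
// Assumed interface; the abstract does not specify how execution information
// is represented or which control commands exist.
interface DevelopmentTool {
    String reportExecutionState();   // e.g., "running" or "stopped"
    void control(String command);    // e.g., "halt" or "resume"
}

class ToolCoordinator {
    /** Control the first tool based on execution information from both tools. */
    void synchronize(DevelopmentTool first, DevelopmentTool second) {
        String firstState = first.reportExecutionState();
        String secondState = second.reportExecutionState();
        // Example policy: when the second tool's target (on a different processor
        // architecture) has stopped, halt the first tool as well.
        if ("stopped".equals(secondState) && !"stopped".equals(firstState)) {
            first.control("halt");
        }
    }
}
```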
Abstract:
In general, in one aspect, the disclosure describes a method that includes providing a user interface common to multiple development tools, with different ones of the development tools dedicated to different processor architectures. The method also includes enabling communications between the user interface and the development tools.
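A minimal sketch of such a common front end, assuming each architecture-specific tool exposes a simple command interface; the names and the dispatch-by-architecture scheme are assumptions for illustration:

```java
import java.util.List;

// Hypothetical interface for an architecture-specific development tool.
interface ArchitectureTool {
    String architecture();           // e.g., "x86" or "ARM" (illustrative)
    void execute(String command);    // run a build or debug command
}

class CommonUserInterface {
    private final List<ArchitectureTool> tools;

    CommonUserInterface(List<ArchitectureTool> tools) {
        this.tools = tools;
    }

    /** Forward a user command to the tool handling the named architecture. */
    void dispatch(String architecture, String command) {
        for (ArchitectureTool tool : tools) {
            if (tool.architecture().equals(architecture)) {
                tool.execute(command);
                return;
            }
        }
        throw new IllegalArgumentException("no tool registered for " + architecture);
    }
}
```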
Abstract:
Extending network capabilities for a network with a policy-based network management (PBNM) architecture. The method includes sending a first message from a policy enforcement point (PEP) to a policy decision point (PDP) in response to an external action, and sending a Java object in a second message from the PDP to the PEP in response to receiving the first message. The Java object may be executed on the PEP to implement a policy.
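Since the abstract centers on shipping an executable Java object from the PDP to the PEP, a minimal sketch might look like the following; the Policy interface and both class names are assumptions, and the actual message formats are not described here:

```java
import java.io.Serializable;

// Assumed policy contract; in practice the object would arrive serialized
// inside the second message and be deserialized on the PEP.
interface Policy extends Serializable {
    void enforce();   // executed on the PEP to implement the policy
}

class PolicyDecisionPoint {
    Policy decide(String externalAction) {
        // Choose (or construct) a policy object appropriate for the action.
        return () -> System.out.println("applying policy for " + externalAction);
    }
}

class PolicyEnforcementPoint {
    void onExternalAction(PolicyDecisionPoint pdp, String externalAction) {
        Policy policy = pdp.decide(externalAction);   // first message out, Java object back
        policy.enforce();                             // run the received object locally
    }
}
```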