Abstract:
Methods, apparatuses, and systems for handling transactions received after a configuration request, the method, for example, comprising: receiving a configuration change request by a transaction-handling logic block; performing a configuration change by the transaction-handling logic block in response to the configuration change request, wherein the logic block is to handle transactions received prior to receipt of the configuration change request differently than transactions received after receipt of the configuration change request; receiving, by the transaction-handling logic block, a first transaction before receiving the configuration change request; receiving, by the transaction-handling logic block, a second transaction after receiving the configuration change request and before the configuration change is complete; differentiating the first transaction from the second transaction based on the order in which the first and second transactions were received relative to receipt of the configuration change request; and handling the first and second transactions.
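
To illustrate the ordering behavior described above, a minimal Python sketch follows. The class and method names (TransactionHandler, request_config_change, submit, complete_config_change) are hypothetical and not taken from the abstract, and holding post-request transactions until the change completes is just one way the differentiation could be realized.

from collections import deque

class TransactionHandler:
    def __init__(self, config):
        self.config = config          # active configuration
        self.pending_config = None    # configuration change in flight, if any
        self.held = deque()           # transactions received after the change request

    def request_config_change(self, new_config):
        # Marks the point that separates "before" from "after" transactions.
        self.pending_config = new_config

    def submit(self, txn):
        if self.pending_config is None:
            # Received before any configuration change request:
            # handle immediately under the current configuration.
            self._process(txn, self.config)
        else:
            # Received after the request but before the change completes:
            # differentiate it by holding it until the new configuration applies.
            self.held.append(txn)

    def complete_config_change(self):
        if self.pending_config is None:
            return
        self.config, self.pending_config = self.pending_config, None
        # Drain transactions that arrived after the request, in arrival order.
        while self.held:
            self._process(self.held.popleft(), self.config)

    def _process(self, txn, config):
        print(f"handling {txn} under configuration {config}")

In this sketch, a first transaction submitted before request_config_change is handled under the old configuration, while a second transaction submitted afterwards is held and handled only once complete_config_change has applied the new configuration.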
Abstract:
In accordance with embodiments disclosed herein, there are provided methods, systems, and apparatuses for enabling an agent interfacing with a pipelined backbone to locally handle transactions while obeying an ordering rule, including, for example: receiving a transaction which requests access to a backbone; decoding routing destination information from the received transaction, in which the decoded routing destination information designates the transaction to be processed either locally or via the backbone; storing the decoded routing destination information and the transaction into a First-In-First-Out (FIFO) buffer; retrieving the decoded routing destination information and the transaction from the FIFO buffer; and processing the transaction locally or via the backbone based on the decoded routing destination information retrieved from the FIFO buffer with the transaction.
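
A minimal Python sketch of the FIFO-based ordering described above follows. The names (route_decode, AgentFifo, LOCAL, BACKBONE) and the address-based decode rule are illustrative assumptions, not terms defined by the abstract.

from collections import deque

LOCAL, BACKBONE = "local", "backbone"

def route_decode(txn):
    # Hypothetical decode rule: addresses below a cutoff are handled locally.
    return LOCAL if txn["addr"] < 0x1000 else BACKBONE

class AgentFifo:
    def __init__(self):
        self.fifo = deque()

    def enqueue(self, txn):
        # Store the decoded destination together with the transaction so that
        # both are retrieved as a unit, preserving arrival order.
        self.fifo.append((route_decode(txn), txn))

    def drain(self):
        while self.fifo:
            dest, txn = self.fifo.popleft()
            if dest == LOCAL:
                print("processing locally:", txn)
            else:
                print("issuing to backbone:", txn)

Because the destination rides in the same FIFO entry as the transaction, a later backbone-bound transaction cannot overtake an earlier locally handled one, which is one way an ordering rule of the kind the abstract mentions can be obeyed.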
Abstract:
In accordance with embodiments disclosed herein, there are provided mechanisms for enabling multiple bus master engines to share the same request channel to a pipelined backbone, including: receiving a plurality of unarbitrated grant requests at an agent bus interface from a plurality of masters, each requesting access to a backbone connected via a common request channel; determining which of the unarbitrated grant requests is to issue first as a final grant request; storing a master identifier code for the final grant request into a FIFO buffer, the master identifier code associating the final grant request with the issuing master among the plurality of masters; waiting for a backbone grant; and presenting the master identifier code for the final grant request to the agent bus interface, wherein the agent bus interface communicates a command and data for processing via the backbone, responsive to the backbone grant, to fulfill the final grant request.
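
A minimal Python sketch of the shared-request-channel arbitration described above follows. The class name SharedRequestChannel and its methods are hypothetical, and a fixed-priority arbiter (lowest master identifier wins) stands in for whatever arbitration policy an implementation would use; one outstanding request per master is assumed to keep the sketch short.

from collections import deque

class SharedRequestChannel:
    def __init__(self):
        self.pending = []       # unarbitrated grant requests, one master id each
        self.id_fifo = deque()  # master identifier codes awaiting a backbone grant
        self.by_master = {}     # command/data held per master (one outstanding each)

    def request(self, master_id, cmd, data):
        self.pending.append(master_id)
        self.by_master[master_id] = (cmd, data)

    def arbitrate(self):
        if not self.pending:
            return
        # Fixed-priority arbitration: the lowest master id issues first.
        winner = min(self.pending)
        self.pending.remove(winner)
        # Remember which master owns this outstanding final grant request.
        self.id_fifo.append(winner)

    def backbone_grant(self):
        # Grants return in request order; the FIFO tells the agent bus
        # interface which master's command and data to drive.
        master_id = self.id_fifo.popleft()
        cmd, data = self.by_master.pop(master_id)
        print(f"master {master_id}: sending {cmd} {data} via the backbone")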
Abstract:
An embodiment of the present invention is a technique to provide secure authentication of chipset configuration. A first chipset configuration (CC) register set in an input/output (I/O) manageability engine (ME) partition authenticates and controls enabling of the CC functionality. The I/O ME partition manages I/O resources shared with a processor in a secure manner. A second CC register set in a processor interface space provides the CC functionality. The second CC register set includes a global enable register having an enable field that is securely accessible to the I/O ME partition with read and write-once accessibility, and accessible to the processor via the processor interface space with read-only accessibility.
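
A minimal Python sketch of the write-once versus read-only access split on the enable field follows. The names (GlobalEnableRegister, write_from_me, read_from_processor) are illustrative assumptions about how such accessibility rules might be modeled, not register or interface names from the abstract.

class GlobalEnableRegister:
    def __init__(self):
        self._enable = 0
        self._locked = False   # set after the single permitted write

    def write_from_me(self, value):
        # The I/O ME partition may write the enable field exactly once.
        if self._locked:
            raise PermissionError("enable field is write-once; already written")
        self._enable = value & 1
        self._locked = True

    def read_from_me(self):
        return self._enable

    def read_from_processor(self):
        # The processor interface space exposes read access only.
        return self._enable

    def write_from_processor(self, value):
        raise PermissionError("enable field is read-only from the processor")

Real hardware would enforce these rules in the register's access logic rather than in software; the sketch only mirrors the accessibility split.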
Abstract:
Techniques for maintaining an order of transactions in a multi-bus computer architecture. In an embodiment, an arbitrator receives access requests from a plurality of requestors, each access request requesting a respective access to a bus. Based on an arbitration between the access requests (e.g., between the requestors providing the access requests), the arbitrator may generate a grant message which triggers a carrying of a first message on a first bus. In certain embodiments, the grant message further triggers another carrying of the first message on a second bus.
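
A minimal Python sketch of the grant-driven ordering described above follows. The names (Arbitrator, Bus, request_access, grant) are hypothetical, and arbitration by arrival order is an assumption, since the abstract does not fix a policy.

from collections import deque

class Bus:
    def __init__(self, name):
        self.name = name

    def carry(self, message):
        print(f"{self.name} carries {message}")

class Arbitrator:
    def __init__(self, first_bus, second_bus):
        self.first_bus = first_bus
        self.second_bus = second_bus
        self.requests = deque()   # (requestor, message) pairs in arrival order

    def request_access(self, requestor, message):
        self.requests.append((requestor, message))

    def grant(self):
        # Arbitrate between the pending requestors (here simply by arrival
        # order) and issue a grant for the winner's message.
        requestor, message = self.requests.popleft()
        # The single grant triggers the carrying of the message on the first
        # bus and, in this sketch, also on the second bus, so the transaction
        # order seen on both buses is identical.
        self.first_bus.carry(message)
        self.second_bus.carry(message)
        return requestor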