Abstract:
An electronic system is disclosed, including multiple initiators and one or more targets coupled to a bus, and a request mask control unit (RMCU). The initiators are configured to initiate requests (e.g., read requests and write requests) via the bus, and the targets are configured to receive requests from the initiators via the bus. The targets are also configured to produce multiple MaskEnable signals, wherein each of the MaskEnable signals is generated following an initial request received via the bus, and dependent on a corresponding “masking situation” within the target. The RMCU receives the MaskEnable signals and produces multiple RequestMask signals dependent upon the MaskEnable signals. One or more of the initiators are permitted to repeat requests via the bus dependent upon one or more of the RequestMask signals. This mechanism provides additional bus bandwidth for carrying out successful data transfers.
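A minimal C sketch of the masking idea above, assuming MaskEnable and RequestMask can be modeled as per-target and per-initiator flags and that an initiator is masked while the target it last addressed is masking; the structure names, the MaskEnable-to-RequestMask mapping, and the retry policy are invented for illustration, not taken from the abstract.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_INITIATORS 4
#define NUM_TARGETS    2

/* Hypothetical model: each target raises a MaskEnable bit while it is in a
 * "masking situation" (e.g., unable to service a repeated request yet). */
typedef struct {
    bool mask_enable[NUM_TARGETS];      /* produced by the targets */
    bool request_mask[NUM_INITIATORS];  /* produced by the RMCU    */
} rmcu_t;

/* RMCU: derive one RequestMask signal per initiator from the MaskEnable
 * signals.  This sketch masks an initiator while the target it last
 * addressed is masking; the real mapping is implementation specific. */
static void rmcu_update(rmcu_t *r, const int last_target[NUM_INITIATORS])
{
    for (int i = 0; i < NUM_INITIATORS; i++)
        r->request_mask[i] = r->mask_enable[last_target[i]];
}

/* An initiator repeats a previously rejected request only while its
 * RequestMask signal is deasserted, freeing bus bandwidth otherwise. */
static bool may_retry(const rmcu_t *r, int initiator)
{
    return !r->request_mask[initiator];
}

int main(void)
{
    rmcu_t r = {0};
    int last_target[NUM_INITIATORS] = {0, 1, 0, 1};

    r.mask_enable[1] = true;            /* target 1 enters a masking situation */
    rmcu_update(&r, last_target);

    for (int i = 0; i < NUM_INITIATORS; i++)
        printf("initiator %d may retry: %s\n", i, may_retry(&r, i) ? "yes" : "no");
    return 0;
}
```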
Abstract:
An apparatus and method for passing messages through a bus-to-bus bridge while maintaining ordering. The method comprises passing messages into a message container in the bus bridge without using the bridge buffer; setting a flag that tracks all writes that were in the write queue ahead of the point at which the message was placed into the message container; blocking the receiving device on the bus connected to the bridge from accessing the message container until the flag is cleared; clearing the flag when all writes placed into the write queue before the flag was set have been written to local memory on the receiving bus; and then allowing the intended recipient device on the receiving bus to receive the message.
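A hypothetical software model of the ordering rule described above: the "flag" is represented as a count of the writes that were already queued when the message was posted, and the message becomes readable only after that count drains to zero. All names and the counter-based representation are assumptions for this sketch.

```c
#include <stdbool.h>
#include <stdio.h>

/* Model of a bridge with a write queue and a message container. */
typedef struct {
    int  writes_pending;        /* writes currently sitting in the write queue */
    int  writes_ahead_of_msg;   /* the "flag": prior writes not yet retired    */
    int  msg;                   /* contents of the message container           */
    bool msg_valid;
} bridge_t;

static void post_write(bridge_t *b) { b->writes_pending++; }

static void post_message(bridge_t *b, int m)
{
    b->msg = m;
    b->msg_valid = true;
    /* Set the flag by snapshotting how many writes are ahead of the message. */
    b->writes_ahead_of_msg = b->writes_pending;
}

/* A queued write has been written to local memory on the receiving bus. */
static void retire_write(bridge_t *b)
{
    if (b->writes_pending > 0)      b->writes_pending--;
    if (b->writes_ahead_of_msg > 0) b->writes_ahead_of_msg--;
}

/* The intended recipient is blocked until the flag count drains to zero. */
static bool read_message(const bridge_t *b, int *out)
{
    if (!b->msg_valid || b->writes_ahead_of_msg > 0)
        return false;
    *out = b->msg;
    return true;
}

int main(void)
{
    bridge_t b = {0};
    int m;

    post_write(&b);
    post_write(&b);
    post_message(&b, 42);                           /* two writes are ahead */
    printf("blocked: %d\n", !read_message(&b, &m)); /* prints 1             */
    retire_write(&b);
    retire_write(&b);
    if (read_message(&b, &m))
        printf("delivered message %d\n", m);        /* prints 42            */
    return 0;
}
```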
Abstract:
A design structure for a processor system may be embodied in a machine readable medium for designing, manufacturing or testing a processor integrated circuit. The design structure may embody a processor integrated circuit including multiple processors with respective processor cache memories. The design structure may specify enhanced cache coherency protocols to achieve cache memory integrity in a multi-processor environment. The design structure may describe a processor bus controller that manages cache coherency bus interfaces to master devices and slave devices. The design structure may also describe a master I/O device controller and a slave I/O device controller that couple directly to the processor bus controller, while system memory couples to the processor bus controller via a memory controller. In one embodiment, the design structure may specify that the processor bus controller blocks partial responses that it receives from all devices except the slave I/O device from being included in a combined response that the processor bus controller sends over the cache coherency buses.
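A small C sketch of the partial-response blocking behavior described in the last sentence, assuming an invented partial-response encoding and a simple "strongest response wins" combine rule; the device list and the decision to block on every transaction are illustrative assumptions, not the specified protocol.

```c
#include <stdio.h>

/* Hypothetical partial-response codes; encodings are illustrative only. */
typedef enum { PRESP_NULL = 0, PRESP_ACK, PRESP_RETRY } presp_t;

enum { DEV_MASTER_IO, DEV_SLAVE_IO, DEV_CPU0, DEV_CPU1, NUM_DEVS };

/* Combine partial responses into one combined response.  For the transaction
 * type sketched here, every partial response except the slave I/O device's
 * is blocked (treated as PRESP_NULL), so only the slave I/O device shapes
 * the combined response broadcast on the cache coherency buses. */
static presp_t combined_response(const presp_t presp[NUM_DEVS])
{
    presp_t cresp = PRESP_NULL;
    for (int d = 0; d < NUM_DEVS; d++) {
        presp_t p = (d == DEV_SLAVE_IO) ? presp[d] : PRESP_NULL;  /* block */
        if (p > cresp)                     /* keep the strongest response   */
            cresp = p;
    }
    return cresp;
}

int main(void)
{
    presp_t presp[NUM_DEVS] = {
        [DEV_MASTER_IO] = PRESP_RETRY,     /* would normally force a retry  */
        [DEV_SLAVE_IO]  = PRESP_ACK,
        [DEV_CPU0]      = PRESP_RETRY,
        [DEV_CPU1]      = PRESP_NULL,
    };
    printf("combined response: %d\n", combined_response(presp)); /* PRESP_ACK */
    return 0;
}
```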
Abstract:
Computer-implemented method, system, and computer-usable program code for processing a data request in a data processing system. A read command requesting data is received from a requesting master device. It is determined whether a cache of a processor can provide the requested data. Responsive to a determination that a cache of a processor can provide the requested data, the requested data is routed to the requesting master device on an intervention data bus of the processor, separate from a read data bus and a write data bus of the processor.
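A minimal C sketch of the routing decision implied above, assuming the three data paths can be named as enum values; the predicate "a snooping cache can intervene" is passed in as a boolean rather than derived from an actual coherency protocol.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical identifiers for the three data paths named above. */
typedef enum { BUS_READ_DATA, BUS_WRITE_DATA, BUS_INTERVENTION_DATA } data_bus_t;

/* Illustrative routing decision: if a processor's cache holds the requested
 * line, the data is sourced from that cache and returned to the requesting
 * master on the separate intervention data bus; otherwise the data comes
 * back over the ordinary read data bus. */
static data_bus_t route_read_data(bool cache_can_intervene)
{
    return cache_can_intervene ? BUS_INTERVENTION_DATA : BUS_READ_DATA;
}

int main(void)
{
    printf("cache-sourced  -> bus %d\n", route_read_data(true));   /* 2 */
    printf("memory-sourced -> bus %d\n", route_read_data(false));  /* 0 */
    return 0;
}
```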
Abstract:
A method of designing a system on a chip (SoC) to operate with varying latencies and frequencies. A layout of the chip is designed with specific placement of devices, including a bus controller, initiator devices, and target devices. The time for a signal to propagate from a source device to a destination device is determined relative to a default propagation time. A pipeline stage is then inserted into the bus path between the source device and the destination device for each additional unit of time, beyond the default, that the signal takes to propagate. Each device (i.e., initiators, targets, and bus controller) is designed with logic to control a protocol that functions with a variety of response latencies. With the additional logic, the devices do not need to be changed when pipeline stages are inserted in the various paths. Registers are utilized as the pipeline stages that are inserted within the paths.
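A short C sketch of the stage-count calculation suggested above, under the assumption that one pipeline register is inserted per additional default propagation period that a path requires; the units and rounding rule are guesses for illustration only.

```c
#include <math.h>
#include <stdio.h>

/* Number of pipeline registers to insert on a bus path whose measured
 * propagation time exceeds the default propagation time.  One stage is
 * assumed per extra default period (or fraction thereof). */
static int pipeline_stages_needed(double propagation_ns, double default_ns)
{
    if (propagation_ns <= default_ns)
        return 0;                       /* path already meets default timing */
    return (int)ceil((propagation_ns - default_ns) / default_ns);
}

int main(void)
{
    /* A path needing 2.3x the default time gets 2 extra register stages. */
    printf("%d\n", pipeline_stages_needed(2.3, 1.0));   /* prints 2 */
    printf("%d\n", pipeline_stages_needed(0.8, 1.0));   /* prints 0 */
    return 0;
}
```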
Abstract:
A method and system for reducing power in a snooping cache based environment. A memory may be coupled to a plurality of processing units via a bus. Each processing unit may comprise a cache controller coupled to a cache associated with the processing unit. The cache controller may comprise a segment register comprising N bits, where each bit in the segment register is associated with one segment of the memory, which is divided into N segments. The cache controller may be configured to snoop a requested address on the bus. Upon determining which bit in the segment register is associated with the snooped requested address, the cache controller may determine whether that bit is set. If the bit is not set, then a cache search may not be performed, thereby mitigating the power consumption associated with a snooped request cache search.
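A compact C model of the segment-register snoop filter described above, with an assumed 32-bit segment register, a 1 GiB address space, and equal-sized segments; the fill-time marking of segments is an illustrative policy, not taken from the abstract.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define N_SEGMENTS 32u                  /* N bits in the segment register     */
#define MEM_SIZE   (1u << 30)           /* assumed 1 GiB memory for the model */

/* Hypothetical cache controller state: one bit per memory segment.  A set
 * bit means the cache may hold lines from that segment, so a snooped
 * address falling in that segment must trigger a real cache lookup.
 * Addresses are assumed to be below MEM_SIZE. */
typedef struct {
    uint32_t segment_register;
} cache_ctrl_t;

static unsigned segment_of(uint32_t addr)
{
    return addr / (MEM_SIZE / N_SEGMENTS);
}

/* Called when a line from addr is cached: mark its segment. */
static void note_fill(cache_ctrl_t *c, uint32_t addr)
{
    c->segment_register |= 1u << segment_of(addr);
}

/* Snoop filter: skip the power-hungry cache search when the bit for the
 * snooped address's segment is not set. */
static bool snoop_needs_cache_search(const cache_ctrl_t *c, uint32_t addr)
{
    return (c->segment_register >> segment_of(addr)) & 1u;
}

int main(void)
{
    cache_ctrl_t c = {0};
    note_fill(&c, 0x00001000u);                                /* segment 0 */
    printf("%d\n", snoop_needs_cache_search(&c, 0x00002000u)); /* 1: search */
    printf("%d\n", snoop_needs_cache_search(&c, 0x10000000u)); /* 0: skip   */
    return 0;
}
```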
Abstract:
A method and system for translating a non-native bytecode to a set of codes native to a processor within a computer system are disclosed. In accordance with the method and system of the present invention, a computer system capable of translating non-native instructions to a set of native instructions is provided that comprises a system memory, a processor, and an instruction set convertor. The system memory is utilized to store non-native instructions and groups of unrelated native instructions. The processor is capable of processing only native instructions. The instruction set convertor, coupled between the system memory and the processor, includes a semantics table and an information table. In response to an instruction fetch from the processor for a non-native instruction in the system memory, the instruction set convertor translates the non-native instruction into a set of native instructions for the processor by accessing both the semantics table and the information table.
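A toy C sketch of the two-table arrangement described above, where the information table holds per-opcode metadata (here just the bytecode length) and the semantics table holds the native sequence emitted for each non-native opcode; the opcodes, table contents, and textual "native instructions" are all invented, and only the opcodes listed are handled.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint8_t length; } info_entry_t;           /* information table entry */
typedef struct { const char *native[4]; } sem_entry_t;      /* semantics table entry   */

enum { OP_ICONST_1 = 0x04, OP_IADD = 0x60 };  /* example non-native opcodes */

static const info_entry_t info_table[256] = {
    [OP_ICONST_1] = { .length = 1 },
    [OP_IADD]     = { .length = 1 },
};

static const sem_entry_t semantics_table[256] = {
    [OP_ICONST_1] = { { "li  r3,1", "push r3" } },
    [OP_IADD]     = { { "pop r4", "pop r3", "add r3,r3,r4", "push r3" } },
};

/* Translate one non-native instruction at *pc into native instructions,
 * returning the number of bytecode bytes consumed. */
static int translate_one(const uint8_t *pc)
{
    uint8_t op = pc[0];
    const sem_entry_t *s = &semantics_table[op];
    for (int i = 0; i < 4 && s->native[i]; i++)
        printf("  %s\n", s->native[i]);
    return info_table[op].length;
}

int main(void)
{
    const uint8_t bytecode[] = { OP_ICONST_1, OP_ICONST_1, OP_IADD };
    for (unsigned pc = 0; pc < sizeof bytecode; ) {
        printf("0x%02x ->\n", bytecode[pc]);
        pc += translate_one(&bytecode[pc]);
    }
    return 0;
}
```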
Abstract:
A bus interface with resources to selectively optimize burst mode data transfers from one bus to another through an automated selection and generation of a cadence. In one form, the cadence is selected based upon memory access latency characteristics, the relative widths of the busses, and the relative clock frequencies of the busses. The selected cadence is provided as a pacing ready signal to the bus receiving the transferred data.
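A rough C sketch of one way such a cadence could be selected, assuming the supply rate of the source bus (width times clock) is compared against the demand rate of the receiving bus and the pacing ready signal is asserted once per resulting period after an initial memory-latency delay; the formula and parameters are assumptions, not the patented selection logic.

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* How often (in destination-bus clocks) the pacing ready signal can be
 * asserted so the receiving bus does not consume data faster than the
 * source bus can supply it.  Widths in bits, frequencies in MHz. */
static int cadence_period(int src_width, double src_mhz,
                          int dst_width, double dst_mhz)
{
    double supply = (double)src_width * src_mhz;    /* bits per microsecond */
    double demand = (double)dst_width * dst_mhz;
    if (supply >= demand)
        return 1;                       /* ready may be asserted every cycle */
    return (int)ceil(demand / supply);  /* ready once every N cycles         */
}

/* Drive the pacing ready signal for one destination-bus cycle. */
static bool pacing_ready(int cycle, int period, int first_beat_latency)
{
    if (cycle < first_beat_latency)     /* cover initial memory access latency */
        return false;
    return ((cycle - first_beat_latency) % period) == 0;
}

int main(void)
{
    /* 32-bit 66 MHz source feeding a 64-bit 100 MHz destination bus. */
    int period = cadence_period(32, 66.0, 64, 100.0);
    printf("ready period: %d cycles\n", period);       /* prints 4          */
    for (int c = 0; c < 12; c++)
        printf("%d", pacing_ready(c, period, 3));      /* 000100010001      */
    printf("\n");
    return 0;
}
```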
Abstract:
A technique for triggering a system bus write command with user code includes identifying a specific store-type instruction in a user instruction sequence. The specific store-type instruction is converted into a specific request-type command, which is configured to include core permission controls (that are stored in core configuration registers of a processor core by a trusted kernel) and user-created data (stored in a cache memory). Slave devices are configured through register space (that is only accessible by the trusted kernel) with respective slave permission controls. The specific request-type command is then transmitted from the cache memory via a system bus. In this case, the slave devices that receive the specific request-type command process the specific request-type command when the core permission controls are the same as the respective slave permission controls. The trusted kernel may be included in a hypervisor or an operating system.
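A minimal C sketch of the permission-matching step described above, assuming the core permission controls travel with the command and a slave simply compares them against its own kernel-configured value; field names, widths, and the equality test are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical request-type command built from a store-type instruction:
 * the core permission controls come from core configuration registers
 * written by the trusted kernel, and the payload is user-created data
 * drawn from the cache. */
typedef struct {
    uint32_t core_permission;   /* set by the trusted kernel, not user code   */
    uint64_t user_data;         /* user-created payload from cache memory     */
} bus_command_t;

typedef struct {
    uint32_t slave_permission;  /* configured via kernel-only register space  */
} slave_t;

/* A slave processes the command only when the core permission controls
 * carried with it match its own slave permission controls. */
static bool slave_accepts(const slave_t *s, const bus_command_t *cmd)
{
    return s->slave_permission == cmd->core_permission;
}

int main(void)
{
    slave_t       dev = { .slave_permission = 0x0007 };
    bus_command_t ok  = { .core_permission = 0x0007, .user_data = 0xCAFE };
    bus_command_t bad = { .core_permission = 0x0001, .user_data = 0xBEEF };

    printf("accepted: %d\n", slave_accepts(&dev, &ok));   /* 1 */
    printf("accepted: %d\n", slave_accepts(&dev, &bad));  /* 0 */
    return 0;
}
```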
Abstract:
The present invention provides for a system comprising a peripheral component interconnect (PCI) host bridge. The PCI host bridge is configured to be coupled to a PCI bus, to receive a system reset signal, to generate a PCI bus reset signal based on the received system reset signal, to detect a PCI operational mode of the PCI bus, and to generate a voltage indicator signal based on the detected PCI operational mode. A voltage regulator is coupled to the PCI host bridge and configured to receive the voltage indicator signal and to regulate a signaling voltage for the PCI bus based on the voltage indicator signal.
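A small C sketch of the mode-to-voltage decision, assuming conventional 5 V and 3.3 V PCI modes plus PCI-X (which uses 3.3 V signaling); the indicator encoding and the regulator interface are invented for illustration.

```c
#include <stdio.h>

/* Illustrative operational modes and voltage indicator encoding. */
typedef enum { PCI_MODE_CONVENTIONAL_5V, PCI_MODE_CONVENTIONAL_3V3, PCI_MODE_PCIX } pci_mode_t;
typedef enum { VOLT_IND_5V, VOLT_IND_3V3 } volt_ind_t;

/* Host bridge side: after driving the PCI bus reset, detect the bus's
 * operational mode and derive the voltage indicator for the regulator. */
static volt_ind_t voltage_indicator(pci_mode_t detected_mode)
{
    switch (detected_mode) {
    case PCI_MODE_CONVENTIONAL_5V:  return VOLT_IND_5V;
    case PCI_MODE_CONVENTIONAL_3V3: /* fall through */
    case PCI_MODE_PCIX:             return VOLT_IND_3V3;
    }
    return VOLT_IND_3V3;            /* default to the lower signaling voltage */
}

/* Regulator side: translate the indicator into a target signaling voltage. */
static double regulate(volt_ind_t ind)
{
    return (ind == VOLT_IND_5V) ? 5.0 : 3.3;
}

int main(void)
{
    printf("%.1f V\n", regulate(voltage_indicator(PCI_MODE_PCIX)));            /* 3.3 */
    printf("%.1f V\n", regulate(voltage_indicator(PCI_MODE_CONVENTIONAL_5V))); /* 5.0 */
    return 0;
}
```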