Abstract:
A cache memory device includes a command receiving unit that receives a plurality of commands from each of a plurality of processors; a processing unit that performs a process based on each of the commands; and a storage unit that stores a first command in a queue when the command receiving unit receives the first command while the processing unit is processing a second command and the cache line address corresponding to the first command is identical to the cache line address corresponding to the second command being processed by the processing unit.
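A minimal C sketch of the queuing rule described in this abstract; the structure names, the 64-byte line size, and the queue depth are illustrative assumptions, not taken from the patent.

#include <stdint.h>
#include <stdbool.h>

#define QUEUE_DEPTH 8

struct command {
    uint64_t addr;            /* full address of the access            */
    int      opcode;          /* read, write, etc.                     */
};

struct cache_ctrl {
    struct command in_flight;           /* second command, being processed */
    bool           busy;                /* processing unit is active       */
    struct command queue[QUEUE_DEPTH];  /* deferred first commands         */
    int            queued;
};

/* Cache line address = address with the offset bits stripped (64-byte lines assumed). */
static uint64_t line_addr(uint64_t addr) { return addr >> 6; }

/* Returns true if the command was deferred into the queue because its
 * cache line address matches the command currently being processed.    */
bool receive_command(struct cache_ctrl *c, struct command cmd)
{
    if (c->busy && line_addr(cmd.addr) == line_addr(c->in_flight.addr)
        && c->queued < QUEUE_DEPTH) {
        c->queue[c->queued++] = cmd;    /* store the first command         */
        return true;
    }
    /* otherwise the command can be dispatched to the processing unit */
    return false;
}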
Abstract:
Systems and methods for implementing back-off timing for retries of commands sent from a master device to a slave device over a split-transaction bus. One embodiment includes a buffer having entries for storing each pending command and associated information, including a number of retries of the command and a static pseudorandom timer expiration value. The timer expiration value of each entry is compared to a running counter according to a mask associated with the number of retries of the command corresponding to the entry. When the unmasked bits of the two values match, the command is retried. In one embodiment, the same portion of the buffer entry that is used to store the number of retries and the timer expiration value is alternately used to store a slave-generated tag that is received with an acknowledgment response.
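A rough C sketch of the masked back-off comparison, assuming (for illustration only) 16-bit counters and a mask table that widens with the retry count; none of these values or names come from the patent.

#include <stdint.h>
#include <stdbool.h>

/* Wider masks for higher retry counts stretch the average back-off interval. */
static const uint16_t retry_masks[] = { 0x000F, 0x003F, 0x00FF, 0x03FF };

struct pending_cmd {
    uint16_t timer_expire;   /* static pseudorandom value latched when buffered */
    uint8_t  retries;        /* retries attempted so far                        */
};

/* The entry's expiration value and the running counter are compared only in
 * the bit positions selected by the mask for the current retry count; when
 * those bits match, the command is retried.                                   */
bool should_retry(const struct pending_cmd *cmd, uint16_t running_counter)
{
    uint8_t  idx  = cmd->retries < 4 ? cmd->retries : 3;
    uint16_t mask = retry_masks[idx];
    return (running_counter & mask) == (cmd->timer_expire & mask);
}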
Abstract:
The invention provides a compound of the following formula (1): wherein m, n, and p are independently an integer of 0-4, provided that 3 ≤ m + n ≤ 8; X is a nitrogen atom or a group of the formula C—R15; Y is a substituted or unsubstituted aromatic group, etc.; R15, R1, R2, R3, R4, R5, R6 and R7 are each a hydrogen atom, a substituted or unsubstituted alkyl group, etc.; and Z is a hydrogen atom, a cyano group, etc.; or a prodrug thereof, or a pharmaceutically acceptable salt thereof, which exhibits an action of enhancing LDL receptor expression and is useful as a medicament for treating hyperlipidemia, atherosclerosis, etc.
Abstract:
Systems and methods for facilitating the location of entries in a buffer where a slave device stores information related to an active transaction, so that the entries can be removed if the corresponding transactions are canceled. In one embodiment, multiple master devices and multiple slave devices are coupled to a split-transaction bus. When a read command is received by a target slave device, the slave device generates an acknowledgment if the slave's command buffer has available entries, or a retry reply if the slave's command buffer is full. The acknowledgment includes a tag which is an index to the buffer location in which the command is stored. If the combined response to the command received by the slave device is a retry, the tag included therein is used by the slave to clear the command from its command buffer.
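A hypothetical C sketch of a tag-indexed command buffer of the kind described above; the buffer size and function names are assumptions for illustration.

#include <stdbool.h>
#include <stdint.h>

#define CMD_BUF_ENTRIES 16

struct cmd_entry { bool valid; uint64_t address; };

struct slave {
    struct cmd_entry buf[CMD_BUF_ENTRIES];
};

/* On a read command: return an acknowledgment carrying the buffer index
 * as the tag, or a retry reply (-1) when the buffer is full.             */
int accept_read(struct slave *s, uint64_t address)
{
    for (int tag = 0; tag < CMD_BUF_ENTRIES; tag++) {
        if (!s->buf[tag].valid) {
            s->buf[tag].valid   = true;
            s->buf[tag].address = address;
            return tag;                 /* tag travels with the acknowledgment */
        }
    }
    return -1;                          /* buffer full: retry reply            */
}

/* When the combined response is a retry, the tag it carries locates the
 * entry directly, so no associative search is needed to clear it.        */
void on_combined_retry(struct slave *s, int tag)
{
    s->buf[tag].valid = false;
}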
Abstract:
Systems and methods for enabling a slave device to generate a tag that is an index into a buffer where the slave device stores information related to an active transaction, such as a write command received from a master device. The tag is sent to the master device with a reply (such as a response to a write command received from the master device), and the master device returns the tag with the data to be written to the slave device. The slave device can efficiently associate the received data with the previously sent write command by retrieving the command from the buffer, using the tag as an index into the buffer. Additional hardware such as a content-addressable memory unit is not required to make the association.
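A hedged C sketch of how the returned tag lets the slave associate write data with the parked write command by a plain array lookup; the entry count, line size, and function names are illustrative assumptions.

#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define WR_BUF_ENTRIES 16
#define LINE_BYTES     64

struct write_entry {
    bool     valid;
    uint64_t address;             /* where the data will be written        */
    uint8_t  data[LINE_BYTES];
};

static struct write_entry wr_buf[WR_BUF_ENTRIES];

/* A write command is parked in the buffer; the tag is simply its index,
 * so the reply sent to the master carries this value.                    */
int accept_write_command(uint64_t address)
{
    for (int tag = 0; tag < WR_BUF_ENTRIES; tag++) {
        if (!wr_buf[tag].valid) {
            wr_buf[tag].valid   = true;
            wr_buf[tag].address = address;
            return tag;
        }
    }
    return -1;                    /* no free entry: master must retry      */
}

/* The master echoes the tag with the data, so the lookup is a plain array
 * access rather than a content-addressable-memory search.                 */
void accept_write_data(int tag, const uint8_t *data)
{
    memcpy(wr_buf[tag].data, data, LINE_BYTES);
    /* the parked command targeting wr_buf[tag].address can now complete */
}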
Abstract:
Systems and methods for controlling access by a set of agents to a resource, where the agents have corresponding priorities associated with them, and where a monitor associated with the resource controls accesses by the agents to the resource based on the priorities. One embodiment is implemented in a computer system having multiple processors that are connected to a processor bus. The processor bus includes a shaping monitor configured to control access by the processors to the bus. The shaping monitor attempts to distribute the accesses from each of the processors throughout a base period according to priorities assigned to the processors. The shaping monitor allocates slots to the processors in accordance with their relative priorities. Priorities are initially assigned according to the respective bandwidth needs of the processors, but may be modified based upon comparisons of actual to expected accesses to the bus.
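A simplified C sketch of one way a shaping monitor might spread access slots over a base period in proportion to priorities; the processor count, period length, and priority values are made-up examples, and the apportionment loop is only one plausible policy, not the patent's algorithm.

#define NUM_PROCS   4
#define BASE_PERIOD 16                 /* access slots per base period           */

/* Priorities initially reflect each processor's bandwidth needs; the abstract
 * also allows them to be adjusted by comparing actual to expected accesses.    */
static int priority[NUM_PROCS]  = { 8, 4, 2, 2 };
static int slot_owner[BASE_PERIOD];

/* Spread each processor's share of slots throughout the base period instead of
 * clustering them, by always granting the next slot to the processor that is
 * furthest behind its proportional share.                                       */
void allocate_slots(void)
{
    int total = 0, given[NUM_PROCS] = { 0 };
    for (int p = 0; p < NUM_PROCS; p++) total += priority[p];

    for (int slot = 0; slot < BASE_PERIOD; slot++) {
        int    best         = 0;
        double best_deficit = -1.0e9;
        for (int p = 0; p < NUM_PROCS; p++) {
            double expected = (double)priority[p] * (slot + 1) / total;
            double deficit  = expected - given[p];
            if (deficit > best_deficit) { best_deficit = deficit; best = p; }
        }
        slot_owner[slot] = best;
        given[best]++;
    }
}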
Abstract:
Disclosed is a multiprocessor system including a master processor; a plurality of processor elements, each provided with a local memory and controlled in accordance with commands from the master processor; and a global memory shared by the plurality of processor elements. Each processor element and each DMA controller is provided with a command pooling buffer capable of accumulating a plurality of commands. The master processor issues a plurality of commands in succession to the DMA controllers and the processor elements. A counter array manages the number of issued commands that have received no response; when responses have been returned for all issued commands, the counter array notifies the master processor.
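A simplified C sketch of the counter-array bookkeeping described above; the destination count and function names are assumptions.

#include <stdbool.h>

#define NUM_DESTS 9                   /* e.g. processor elements plus DMA controllers */

static int outstanding[NUM_DESTS];    /* issued commands with no response yet         */

static void notify_master_all_done(void) { /* raise a flag/interrupt to the master */ }

void on_command_issued(int dest) { outstanding[dest]++; }

void on_response(int dest)
{
    outstanding[dest]--;
    for (int d = 0; d < NUM_DESTS; d++)
        if (outstanding[d] > 0)
            return;                    /* some issued command is still unanswered     */
    notify_master_all_done();          /* every issued command has received a response */
}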
Abstract:
A computer architecture and programming model for high speed processing over broadband networks are provided. The architecture employs a consistent modular structure, a common computing module and uniform software cells. The common computing module includes a control processor, a plurality of processing units, a plurality of local memories from which the processing units process programs, a direct memory access controller and a shared main memory. A synchronized system and method for the coordinated reading and writing of data to and from the shared main memory by the processing units also are provided. A hardware sandbox structure is provided for security against the corruption of data among the programs being processed by the processing units. The uniform software cells contain both data and applications and are structured for processing by any of the processors of the network. Each software cell is uniquely identified on the network. A system and method for creating a dedicated pipeline for processing streaming data also are provided.
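An illustrative C sketch of a uniformly structured software cell carrying both an application and its data and identified uniquely on the network; the field layout is an assumption, not the patent's definition.

#include <stdint.h>
#include <stddef.h>

struct software_cell {
    uint64_t global_id;          /* unique identifier of the cell on the network  */
    uint32_t source_id;          /* sender of the cell                            */
    uint32_t dest_id;            /* processor/processing unit the cell is sent to */
    size_t   program_size;       /* bytes of executable code carried by the cell  */
    size_t   data_size;          /* bytes of operand data carried by the cell     */
    uint8_t  payload[];          /* program followed by data                      */
};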
Abstract:
A next address computing section contains a selector and is connected to an instruction cache. The instruction cache maintains a predecode result of a branch instruction, or predefined settings for a field in that branch instruction. Based on this information maintained in the instruction cache, the selector determines whether the compiler performed a branch prediction for the branch instruction or was unable to do so. When the compiler could not perform the branch prediction, the selector selects the output of a conditional branch prediction device (a saturation counter section). When the compiler performed the branch prediction, the selector selects the compiler's prediction result, making the prediction in Agree mode. The selection result is used to set the value of a register holding the next address; based on this next-address register value, an instruction is fetched from the cache and then inserted into the pipeline.
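A hedged C sketch of the selection logic: follow the compiler's hint when one exists (Agree mode), otherwise fall back to the saturating counter; the hint encoding and function names are assumptions, not the patent's definitions.

#include <stdint.h>
#include <stdbool.h>

struct icache_line_info {
    bool compiler_predicted;     /* set when the compiler supplied a branch hint   */
    bool compiler_hint_taken;    /* the compiler's static prediction (Agree mode)  */
};

/* 2-bit saturating counter: values 2..3 mean "predict taken". */
static inline bool saturation_counter_taken(uint8_t counter) { return counter >= 2; }

/* Choose the next fetch address: use the compiler's prediction when available,
 * otherwise the dynamic conditional branch predictor.                           */
uint64_t select_next_address(const struct icache_line_info *info,
                             uint8_t counter,
                             uint64_t fallthrough_addr,
                             uint64_t branch_target)
{
    bool taken = info->compiler_predicted
                     ? info->compiler_hint_taken
                     : saturation_counter_taken(counter);
    return taken ? branch_target : fallthrough_addr;   /* value written to the
                                                          next-address register   */
}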
Abstract:
A shared disk array, which incorporates a plurality of disk apparatuses storing contents including digitized video data, and a plurality of element servers are connected to a shared channel network suitable for a multi-initiator architecture, whereby each of the element servers can physically share the shared disk array via the shared channel network. Further, each element server is provided with a network interface suitable for high-speed transmission and bandwidth reservation, so that contents stored in the shared disk array are read out in response to a request from a client and output to the communication network via the network interface.
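A very rough C sketch of an element server answering a client request by reading content from the shared disk array and streaming it through its bandwidth-reserved network interface; both helper functions are placeholders, not APIs from the patent.

#include <stddef.h>
#include <stdint.h>

/* Placeholder: read up to len bytes of a content item from the shared disk
 * array over the shared channel network; returns 0 at end of content.       */
static size_t shared_array_read(uint64_t content_id, uint64_t offset,
                                void *buf, size_t len)
{
    (void)content_id; (void)offset; (void)buf; (void)len;
    return 0;   /* stub */
}

/* Placeholder: transmit a block over the bandwidth-reserved network interface. */
static void netif_send_reserved_bw(int client_fd, const void *buf, size_t len)
{
    (void)client_fd; (void)buf; (void)len;
}

/* Stream the requested content to the client block by block. */
void serve_content(int client_fd, uint64_t content_id)
{
    uint8_t  block[64 * 1024];
    uint64_t offset = 0;
    size_t   got;

    while ((got = shared_array_read(content_id, offset, block, sizeof block)) > 0) {
        netif_send_reserved_bw(client_fd, block, got);
        offset += got;
    }
}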