Abstract:
Apparatus and method for using a precharge command in which a plurality of address lines are individually used to specify which banks of memory cells within a memory device have an open row that is to be closed.
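As an illustration of the idea above, the following C sketch shows how a controller might encode the banks to be closed as a per-bank mask driven on the address lines of a single precharge command. The bank count, bit layout, and structure fields are assumptions for illustration, not the patent's actual pin assignment.

```c
#include <stdint.h>

/* Assumed bank count and one-hot encoding: address line i selects bank i. */
#define NUM_BANKS 8u

typedef struct {
    uint32_t address_lines;   /* value driven on the address bus          */
    uint8_t  is_precharge;    /* command-type flag (illustrative)         */
} dram_command_t;

/* Build one precharge command whose address lines individually select
 * the banks with an open row that should be closed. */
static dram_command_t make_precharge(uint8_t open_row_bank_mask)
{
    dram_command_t cmd;
    cmd.is_precharge  = 1u;
    cmd.address_lines = (uint32_t)open_row_bank_mask & ((1u << NUM_BANKS) - 1u);
    return cmd;
}
```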
Abstract:
In some embodiments, a chip includes a request queue to hold write requests, and scheduling circuitry to schedule commands, including commands issued in response to the write requests. The chip also includes mode selection circuitry to monitor the request queue and, in response, to select a first or a second mode for the scheduling circuitry, wherein in the first mode the scheduling circuitry schedules certain commands as separate single commands, and in the second mode it schedules consolidated commands that each represent more than one separate single command. Other embodiments are described.
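A minimal sketch of one possible mode-selection policy follows, assuming the decision is driven by write-queue occupancy against a threshold; the occupancy metric and the threshold are assumptions, since the abstract does not say which property of the request queue is monitored.

```c
#include <stddef.h>

typedef enum { MODE_SEPARATE = 1, MODE_CONSOLIDATED = 2 } sched_mode_t;

/* Hypothetical policy: when the write-request queue is shallow, schedule
 * each command separately; when it fills beyond a threshold, switch to
 * consolidated commands that stand for several single commands. */
static sched_mode_t select_mode(size_t queue_occupancy, size_t threshold)
{
    return (queue_occupancy >= threshold) ? MODE_CONSOLIDATED : MODE_SEPARATE;
}
```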
Abstract:
Apparatus and method to implicitly transmit a command to close a row of memory cells within a memory device as part of the transmission of an activate command to open another row of memory cells within the memory device.
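The sketch below shows controller-side bookkeeping under the assumption of a hypothetical device in which an activate command implicitly closes the bank's currently open row, so no explicit precharge is transmitted first; the structure and the helper named in the comment are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-bank state kept by the controller. */
typedef struct {
    bool     row_open;
    uint32_t open_row;
} bank_state_t;

/* Open a new row in a bank.  Without the implicit-close feature the
 * controller would first transmit a PRECHARGE and then the ACTIVATE;
 * with it, the single ACTIVATE both closes the old row (if any) and
 * opens the new one. */
static void activate_row(bank_state_t *bank, uint32_t new_row)
{
    /* transmit_activate(new_row);  -- hypothetical bus transaction that
     * also carries the implicit close of bank->open_row               */
    bank->row_open = true;
    bank->open_row = new_row;
}
```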
Abstract:
A method of prefetching from a memory device includes determining a prefetch buffer hit rate (PBHR) and a memory bandwidth utilization (MBU) rate. Prefetches are inserted aggressively if the MBU rate is above an MBU threshold level and the PBHR is above a PBHR threshold level. Prefetches are inserted conservatively if the MBU rate is above the MBU threshold level but the PBHR is below the PBHR threshold level.
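The stated policy maps directly onto a small decision function, transcribed below in C. The behavior when bandwidth utilization is below its threshold is not given in the abstract, so that case falls through to an assumed default.

```c
typedef enum {
    PREFETCH_AGGRESSIVE,
    PREFETCH_CONSERVATIVE,
    PREFETCH_DEFAULT        /* case not covered by the abstract (assumed) */
} prefetch_policy_t;

/* Choose the prefetch-insertion policy from the two measured rates. */
static prefetch_policy_t choose_policy(double pbhr, double mbu,
                                       double pbhr_threshold,
                                       double mbu_threshold)
{
    if (mbu > mbu_threshold) {
        return (pbhr > pbhr_threshold) ? PREFETCH_AGGRESSIVE
                                       : PREFETCH_CONSERVATIVE;
    }
    return PREFETCH_DEFAULT;
}
```

The asymmetry is the point of the policy: when bandwidth is already scarce, prefetches are only worth their cost if they are actually being hit.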
Abstract:
A method and apparatus for the optimization of memory read operations via read launch optimizations in memory interconnect are disclosed. In one embodiment, a write request may be preempted by a read request.
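A minimal sketch of a read-over-write arbitration step consistent with that embodiment, assuming a request record with a "launched" flag; the structure and the exact preemption condition are illustrative assumptions.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative request record: only the launch status matters here. */
typedef struct {
    bool launched;   /* already committed to the memory bus */
    /* ...address, length, and other fields would live here... */
} mem_request_t;

/* Return the request to service next.  A pending read preempts a pending
 * write unless the write has already been launched. */
static const mem_request_t *pick_next(const mem_request_t *pending_write,
                                      const mem_request_t *pending_read)
{
    if (pending_read != NULL &&
        (pending_write == NULL || !pending_write->launched))
        return pending_read;
    return pending_write;
}
```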
Abstract:
Apparatus is provided to enable real-time volume rendering on a personal computer or a desktop computer, in which a technique involving blocking of voxel data organizes the data so that all voxels within a block are stored at consecutive memory addresses within a single memory module, making it possible to fetch an entire block of data in a burst rather than one voxel at a time. This permits the use of DRAM memory modules, which provide high capacity and low cost with substantial space savings. Additional techniques, including sectioning, reduce the amount of intermediate storage in a processing pipeline to a level acceptable for semiconductor implementation. A multiplexing technique takes advantage of blocking to reduce the amount of data that must be transmitted per block, thus reducing the number of pins and the rates at which data must be transmitted across the pins connecting adjacent processing modules. A mini-blocking technique saves the time needed to process sections by avoiding reading entire blocks for voxels near the boundary between a section and previously processed sections.
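The blocking scheme can be illustrated by the address computation below, which places all voxels of one block at consecutive offsets so a block can be fetched in a single burst. The 8-voxel block edge and the x-major ordering inside a block are assumptions, not values from the patent.

```c
#include <stddef.h>

#define B 8u   /* block edge length in voxels (assumed) */

/* Compute the linear offset of voxel (x, y, z).  vol_x and vol_y are the
 * volume dimensions measured in blocks.  Voxels of the same block occupy
 * B*B*B consecutive offsets, so a whole block maps to one burst. */
static size_t voxel_offset(size_t x, size_t y, size_t z,
                           size_t vol_x, size_t vol_y)
{
    size_t bx = x / B, by = y / B, bz = z / B;   /* block coordinates   */
    size_t ox = x % B, oy = y % B, oz = z % B;   /* offset inside block */
    size_t block_index = (bz * vol_y + by) * vol_x + bx;
    size_t in_block    = (oz * B + oy) * B + ox;
    return block_index * (size_t)(B * B * B) + in_block;
}
```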
Abstract:
A network protocol and interface using direct deposit messaging provides low-overhead communication in a network of multi-user computers. The system uses both sender-provided and receiver-provided information to process received messages, deposit data directly in memory, and conditionally interrupt a host processor based on control information. Message processing is separated into data delivery, which bypasses the host processor and operating system, and message actions, which may or may not require host processor interaction. In this protocol, a message includes an indication of the operation desired by the sender, an operand specified by the sender, and an operand that refers to information stored at the receiver. The receiver ensures that the desired action is permitted and, if so, performs the action according to both the operand specified by the sender and the state of the receiver. The action may be message delivery, wherein the operands in the message specify values for use in various addressing modes, including direct, indirect, post-increment, and index modes. The action may also be conditionally generating an interrupt, wherein the operands are used, in combination with the receiver state, to determine whether a message requires immediate or delayed action. The action may also be an operation on a register in the network interface or on other information stored at the receiver.
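A sketch of what such a message and the receiver-side address resolution for the delivery action might look like. Field names, widths, and the register-file layout are assumptions; only the direct, indirect, post-increment, and index modes come from the abstract. Permission checking and the interrupt action are omitted.

```c
#include <stdint.h>

typedef enum { ADDR_DIRECT, ADDR_INDIRECT, ADDR_POST_INCREMENT, ADDR_INDEX } addr_mode_t;

/* Illustrative direct-deposit message layout. */
typedef struct {
    uint32_t    operation;        /* action requested by the sender           */
    uint64_t    sender_operand;   /* e.g. an address or an index              */
    uint32_t    receiver_operand; /* names a register held at the receiver    */
    addr_mode_t mode;
} ddm_message_t;

/* Resolve the deposit address by combining the sender operand with the
 * receiver's own register state.  Post-increment steps by one element
 * here purely for simplicity. */
static uint64_t resolve_deposit_address(const ddm_message_t *m, uint64_t *regs)
{
    switch (m->mode) {
    case ADDR_DIRECT:         return m->sender_operand;
    case ADDR_INDIRECT:       return regs[m->receiver_operand];
    case ADDR_POST_INCREMENT: return regs[m->receiver_operand]++;
    case ADDR_INDEX:          return regs[m->receiver_operand] + m->sender_operand;
    }
    return 0;
}
```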
Abstract:
An apparatus and system for controlling traffic on an on-chip network. Embodiments of the apparatus comprise single-ended transmission circuitry and single-ended receiving circuitry on a first chip for coupling with a second chip. The transmission circuitry has impedance matching and lacks equalization, and the receiving circuitry likewise lacks equalization. The transmission and receiving circuitry have statically configurable features and are organized in clusters, wherein the clusters share the same physical layer circuitry design across different configurations of the configurable features, the configurable features including half-duplex mode and full-duplex mode. The first chip and the second chip are on the same package, and a plurality of conductive lines coupling the first chip with the second chip are matched.
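A small configuration sketch of one transceiver cluster, assuming the statically configurable features can be represented per cluster; only the half-duplex/full-duplex choice, the transmitter impedance matching, and the absence of equalization come from the abstract, the rest is illustrative.

```c
#include <stdbool.h>

typedef enum { DUPLEX_HALF, DUPLEX_FULL } duplex_mode_t;

/* Static configuration of one cluster; all clusters share the same
 * physical-layer design and differ only in this configuration. */
typedef struct {
    duplex_mode_t duplex;             /* statically configured per cluster   */
    bool          tx_impedance_match; /* transmitter has impedance matching  */
    bool          equalization;       /* always false: no TX/RX equalization */
} cluster_config_t;

static cluster_config_t default_cluster_config(duplex_mode_t duplex)
{
    cluster_config_t cfg = { duplex, true, false };
    return cfg;
}
```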
Abstract:
According to one embodiment, a memory controller is disclosed. The memory controller includes assignment logic and a transaction assembler. The assignment logic receives a request to access a memory channel. The transaction assembler combines the request with one or more additional requests to access two or more independently addressable subchannels within the channel.
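A sketch of the assembly step, assuming two subchannels and simple request records; the field names and the pairing policy are illustrative, not taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>

/* Two independently addressable subchannels per channel (assumed count). */
#define NUM_SUBCHANNELS 2u

typedef struct {
    uint64_t address;   /* subchannel-local address */
    uint32_t length;
} sub_request_t;

typedef struct {
    sub_request_t sub[NUM_SUBCHANNELS];  /* one sub-request per subchannel */
} channel_transaction_t;

/* Combine a request with additional requests so that each subchannel of
 * the channel carries its own independently addressed sub-request. */
static channel_transaction_t assemble(const sub_request_t *reqs, size_t n)
{
    channel_transaction_t t = {0};
    for (size_t i = 0; i < n && i < NUM_SUBCHANNELS; i++)
        t.sub[i] = reqs[i];
    return t;
}
```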
Abstract:
Embodiments of the present invention provide an algorithm for scheduling read and write transactions to memory out of order, improving command and data bus utilization and gaining performance over a range of workloads. In particular, memory transactions are sorted into queues so that they do not have page conflicts with each other, and they are scheduled from these queues out of order, in accordance with read and write scheduling algorithms, to optimize latency.
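One way to realize the sorting step is sketched below: a transaction joins a queue already holding transactions for the same bank and row, so entries within a queue never page-conflict, or it claims a free queue otherwise. The queue count, the fields, and the fallback when all queues are occupied are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_QUEUES 8u   /* assumed number of sorting queues */

typedef struct {
    bool     in_use;
    uint32_t bank;
    uint32_t row;
    /* ...a FIFO of the pending transactions would live here... */
} txn_queue_t;

/* Pick the queue for a transaction targeting (bank, row).  Reuse a queue
 * with the same page if one exists, otherwise claim a free queue; fall
 * back to queue 0 when everything is occupied (assumed policy). */
static unsigned pick_queue(txn_queue_t q[NUM_QUEUES], uint32_t bank, uint32_t row)
{
    for (unsigned i = 0; i < NUM_QUEUES; i++)
        if (q[i].in_use && q[i].bank == bank && q[i].row == row)
            return i;                       /* same open page: no conflict    */
    for (unsigned i = 0; i < NUM_QUEUES; i++)
        if (!q[i].in_use) {
            q[i].in_use = true;
            q[i].bank   = bank;
            q[i].row    = row;
            return i;                       /* start a new conflict-free queue */
        }
    return 0;                               /* assumed fallback when full      */
}
```

The scheduler can then drain any queue back to back without closing and reopening rows, and move between queues out of order according to the read and write scheduling algorithms.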