Abstract:
The present invention provides a virtual storage system that generally uses large segments, but divides large segments into smaller sub-segments during data movement operations. The present invention provides a method and system having this hierarchy of segment sizes, namely a large segment for the normal case, with the large segment being broken into single disk blocks during data movement. The mapping has large segments except for those segments undergoing data movement. For those segments, the smallest possible segment size, namely a single disk block, is used. In this way, administration costs are generally low, but the latencies caused by moving large blocks of data are avoided.
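As a rough illustration (not the patented implementation), the sketch below shows one way such a two-level mapping could look in C: each large segment normally maps through a single base entry, and a segment acquires a per-block table only while it is being moved. The segment size, field names, and lookup routine are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical two-level segment map: one entry per large segment.
 * A segment under data movement carries a per-block table so that only
 * single disk blocks need to be remapped at a time.  Sizes and names
 * are illustrative assumptions, not taken from the abstract. */

#define BLOCKS_PER_SEGMENT 1024   /* assumed large-segment size in blocks */

typedef struct {
    uint64_t  phys_base;          /* physical start block of the whole segment    */
    uint64_t *block_map;          /* NULL normally; per-block map during a move   */
} segment_entry;

/* Resolve a virtual block number to a physical block number. */
static uint64_t resolve_block(const segment_entry *map, uint64_t vblock)
{
    const segment_entry *seg = &map[vblock / BLOCKS_PER_SEGMENT];
    uint64_t offset = vblock % BLOCKS_PER_SEGMENT;

    if (seg->block_map == NULL)            /* common case: one large segment */
        return seg->phys_base + offset;
    return seg->block_map[offset];         /* segment undergoing data movement */
}
```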
Abstract:
An embodiment of the invention is directed to a method including fetching address translations for a current group of scanlines of image data and prefetching address translations for a next group of scanlines of image data. The prefetching occurs while the current group of scanlines of image data is being rendered on a display. The current group of scanlines and the next group of scanlines may be the same size, such that determining address translations for the next group of scanlines terminates at or before the time the current group of scanlines has been rendered on the display. A translation lookaside buffer (TLB) controller may be used to implement the method. In a particular embodiment of the invention, a first buffer and a second buffer are used such that when one stores address translations for the current group of scanlines of image data, the other stores address translations for the next group of scanlines of image data.
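A minimal sketch of the double-buffering idea follows, assuming hypothetical translate and render_group hooks; in hardware the prefetch would overlap the rendering, whereas this sequential sketch simply shows the buffer swap.

```c
#include <stdint.h>
#include <stddef.h>

#define GROUP_SIZE 16                       /* assumed scanlines per group */

typedef struct { uint64_t phys[GROUP_SIZE]; } xlat_buf;

/* Assumed driver/hardware hooks; not part of any real API. */
extern uint64_t translate(uint64_t virt_line);
extern void     render_group(const xlat_buf *buf, size_t first_line);

static void fill_translations(xlat_buf *buf, size_t first_line)
{
    for (size_t i = 0; i < GROUP_SIZE; i++)
        buf->phys[i] = translate(first_line + i);
}

void render_frame(size_t total_lines)
{
    xlat_buf a, b, *cur = &a, *next = &b;

    fill_translations(cur, 0);
    for (size_t line = 0; line < total_lines; line += GROUP_SIZE) {
        if (line + GROUP_SIZE < total_lines)
            fill_translations(next, line + GROUP_SIZE); /* prefetch next group */
        render_group(cur, line);                        /* render current group */
        xlat_buf *tmp = cur; cur = next; next = tmp;    /* swap the two buffers */
    }
}
```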
Abstract:
In a computer system, a parallel, distributed-function translation lookaside buffer (TLB) includes a small, fast TLB and a second, larger but slower TLB. The two TLBs operate in parallel, with the small TLB receiving integer load addresses and the large TLB receiving other virtual address information. By distributing functions, such as load and store instructions and integer and floating point instructions, between the two TLBs, the small TLB can operate with low latency and avoid thrashing and similar problems, while the larger TLB provides high bandwidth for memory-intensive operations. This mechanism also provides a parallel store update and invalidation mechanism, which is particularly useful for prevalidated cache tag designs.
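The sketch below illustrates the dispatch idea in C: an access is routed to the small or the large TLB according to its type. The table sizes, the linear lookup, and the access-type classification are assumptions, not the patented circuit.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct { uint64_t vpn, ppn; bool valid; } tlb_entry;

typedef struct {
    tlb_entry *entries;
    unsigned   nentries;
} tlb;

typedef enum { ACC_INT_LOAD, ACC_INT_STORE, ACC_FP_LOAD, ACC_FP_STORE } access_type;

static bool tlb_lookup(const tlb *t, uint64_t vpn, uint64_t *ppn)
{
    for (unsigned i = 0; i < t->nentries; i++)
        if (t->entries[i].valid && t->entries[i].vpn == vpn) {
            *ppn = t->entries[i].ppn;
            return true;
        }
    return false;
}

/* Integer loads go to the small, low-latency TLB; other accesses go to
 * the larger TLB, which supplies capacity for memory-intensive code. */
bool translate(const tlb *small, const tlb *large,
               access_type type, uint64_t vpn, uint64_t *ppn)
{
    return tlb_lookup(type == ACC_INT_LOAD ? small : large, vpn, ppn);
}
```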
Abstract:
An AR map has as many entries as there are ARs and is accessed by an ARN. Each map entry holds the ID of the pertinent entry in an STD array and a flag indicating whether the map entry is valid. An ALET holding part stores the ALETs corresponding to the respective STDs in the STD array. Upon an AR access, if the entry in the AR map that corresponds to the designated AR is valid, the ID contained in that valid entry is output to a storage controlling part. If the corresponding entry in the AR map is invalid and the ALET holding part stores an ALET identical to the ALET of the designated AR, the ID of the STD corresponding to the stored ALET is stored into the AR map.
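The lookup flow can be sketched as follows, with assumed sizes and field names; only the AR-map hit path and the refill from the ALET holding part are shown, and the full access-register translation on a complete miss is omitted.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_ARS  16       /* one AR map entry per access register */
#define NUM_STDS 32       /* size of the STD array (assumed)      */

typedef struct { uint8_t std_id; bool valid; } ar_map_entry;

static ar_map_entry ar_map[NUM_ARS];
static uint32_t     ar_alet[NUM_ARS];        /* ALET currently held in each AR */
static uint32_t     alet_holding[NUM_STDS];  /* ALET for each STD array entry  */

/* Returns the STD-array ID to pass to the storage controlling part,
 * or -1 if neither the AR map nor the ALET holding part can supply it. */
int lookup_std_id(unsigned arn)
{
    if (ar_map[arn].valid)                    /* fast path: valid AR map entry */
        return ar_map[arn].std_id;

    for (unsigned i = 0; i < NUM_STDS; i++)   /* search the ALET holding part */
        if (alet_holding[i] == ar_alet[arn]) {
            ar_map[arn].std_id = (uint8_t)i;  /* refill the AR map entry */
            ar_map[arn].valid  = true;
            return (int)i;
        }
    return -1;                                /* would fall back to full translation */
}
```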
Abstract:
An information storage medium is designed to assure stable continuous recording without adverse effects, even when many defective areas are present on the information storage medium. To record information onto the information storage medium, a file unit is defined as a first unit, and a contiguous data area unit to be treated as a continuous recording area is defined as a second unit. Recording is done in contiguous data area units, and a collection of contiguous data area units is organized into a file unit. In addition, the information recording place is set in such a manner that a contiguous data area unit may extend over the recording area of another file already recorded on the information storage medium and over a defective area on the information storage medium.
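Purely as an illustration of the two recording units, a possible in-memory layout is sketched below; the field names and the fixed extent count are assumptions.

```c
#include <stdint.h>

/* Second unit: one continuous recording run on the medium. */
typedef struct {
    uint64_t start_sector;    /* first sector of this contiguous run      */
    uint64_t sector_count;    /* length of the run, recorded without gaps */
} contiguous_data_area;

/* First unit: a file organized as a collection of contiguous data area units. */
typedef struct {
    char                  name[64];
    unsigned              extent_count;
    contiguous_data_area  extents[16];  /* runs that together form one file */
} file_unit;
```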
Abstract:
A memory for storing address translation data includes one or more page table entry structures. Each page table entry structure includes a base address field to identify an allocated page of memory, a prior page field to identify zero or more allocated pages of memory that are sequential to and before that page of memory identified by the base address field, and a subsequent page field to identify zero or more allocated pages of memory that are sequential to and after that page identified by the base address field.
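A plain C rendering of such an entry might look as follows; the field widths and the helper showing how a run of sequentially allocated pages is covered are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint64_t base_page;      /* identifies the allocated page this entry maps     */
    uint8_t  prior_pages;    /* zero or more sequential allocated pages before it */
    uint8_t  next_pages;     /* zero or more sequential allocated pages after it  */
} page_table_entry;

/* A translation for 'page' can be derived from entry 'e' without another
 * table walk if the page lies in the run the entry describes
 * (assumes the run does not wrap around zero). */
static bool covers(const page_table_entry *e, uint64_t page)
{
    return page >= e->base_page - e->prior_pages &&
           page <= e->base_page + e->next_pages;
}
```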
Abstract:
An embodiment of the present invention includes a tag array, a valid vector, and a detector. The tag array stores N tag entries. Each of the N tag entries contains a one-hot tag having K bits. Each of the K bits of the one-hot tag corresponds to a translation look-aside buffer (TLB) entry in a TLB array having K TLB entries. The valid vector stores N valid entries corresponding to the N tag entries. The detector detects an error when a tag entry is read out upon a fetch read operation.
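One plausible reading of the error check is sketched below, assuming that a valid tag read out on a fetch must be exactly one-hot; the sizes and the popcount-based test are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define K_TLB_ENTRIES 32
#define N_TAG_ENTRIES 64

typedef struct {
    uint32_t tags[N_TAG_ENTRIES];   /* each tag is a K-bit one-hot vector   */
    uint64_t valid;                 /* valid vector, one bit per tag entry  */
} onehot_tag_array;

/* Returns true if an error is detected when tag entry n is read out. */
bool detect_tag_error(const onehot_tag_array *a, unsigned n)
{
    uint32_t tag   = a->tags[n];
    bool     valid = (a->valid >> n) & 1u;

    if (!valid)
        return false;                     /* invalid entries are not checked   */
    return __builtin_popcount(tag) != 1;  /* valid tag must be exactly one-hot
                                             (GCC/Clang builtin used here)     */
}
```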
Abstract:
A hierarchically connected symmetric multiprocessor (SMP) realizing an inter-partition shared memory has, at the gateway from each node to an inter-node connection switch, a translator that translates the address of an access command for an area shared between partitions, converting between a real address used within a partition and a shared area address used in common between partitions. Thereby, the address of the local area of each partition can be set freely, and cache coherence control of the shared area is conducted at high speed by using the snoop commands of the hierarchically connected SMP. Fault containment between partitions is realized by checking conformity between the address of an access command issued from another partition and the shared area configuration. Nodes included in other partitions may be reset from each partition. In addition, the configuration information of the shared area between partitions may be dynamically modified.
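The gateway translation can be sketched as a pair of range-checked address conversions, as below; the window layout and the rejection of nonconforming addresses (for fault containment) are assumptions about one possible realization.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint64_t local_base;    /* real-address base of the shared window in this partition */
    uint64_t shared_base;   /* shared-area address used in common between partitions    */
    uint64_t size;          /* window length in bytes                                    */
} shared_window;

/* Real address -> shared area address (command leaving the node). */
bool to_shared(const shared_window *w, uint64_t real, uint64_t *shared)
{
    if (real < w->local_base || real >= w->local_base + w->size)
        return false;                       /* not a shared-area access */
    *shared = w->shared_base + (real - w->local_base);
    return true;
}

/* Shared area address -> real address (command arriving from another
 * partition); the range check also provides fault containment. */
bool to_real(const shared_window *w, uint64_t shared, uint64_t *real)
{
    if (shared < w->shared_base || shared >= w->shared_base + w->size)
        return false;                       /* nonconforming address: reject */
    *real = w->local_base + (shared - w->shared_base);
    return true;
}
```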
Abstract:
A method and system for laying out and accessing data in a disk drive system. The layout resides in a table in the firmware of the disk drive system. The table includes multiple entries or rows, one corresponding to each different area of the disk media. Each entry provides information about the range of block addresses in that area, including the starting and ending block addresses, and information about the range of physical addresses, including the head and the starting and ending cylinder numbers. A firmware routine finds the appropriate entry in the table and converts a block address to the physical address, or vice versa.
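A simplified version of the table and the conversion routine is sketched below; the assumption of a constant sectors-per-track value within an area is illustrative and not taken from the description.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t start_block, end_block;        /* logical block range of the area */
    uint16_t head;                          /* head serving this area          */
    uint16_t start_cyl, end_cyl;            /* physical cylinder range         */
    uint16_t sectors_per_track;             /* assumed constant within an area */
} layout_entry;

typedef struct { uint16_t cyl, head, sector; } phys_addr;

/* Find the table row covering 'block' and convert it to a physical address. */
bool block_to_phys(const layout_entry *tbl, unsigned rows,
                   uint32_t block, phys_addr *out)
{
    for (unsigned i = 0; i < rows; i++) {
        const layout_entry *e = &tbl[i];
        if (block < e->start_block || block > e->end_block)
            continue;
        uint32_t rel = block - e->start_block;
        out->cyl    = e->start_cyl + rel / e->sectors_per_track;
        out->head   = e->head;
        out->sector = rel % e->sectors_per_track;
        return true;
    }
    return false;                            /* block not covered by the table */
}
```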
Abstract:
According to the present invention, methods and apparatus for reducing memory access latency are disclosed. When a new entry is made to a translation lookaside buffer (TLB), the new TLB entry points to a corresponding TLB page of memory. Concurrently with the updating of the TLB, the TLB page is moved temporally closer to a processor by storing the TLB page in a TLB page cache. The TLB page cache is temporally closer to the processor than is the main memory.
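The idea can be sketched as follows, with an assumed direct-mapped page cache indexed by TLB slot and a hypothetical main_memory_page accessor; in hardware the page copy would proceed concurrently with the TLB update rather than sequentially as here.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE       4096
#define TLB_ENTRIES     64
#define PAGE_CACHE_WAYS 64                  /* one cached page per TLB entry */

typedef struct { uint64_t vpn, ppn; } tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];
static uint8_t   page_cache[PAGE_CACHE_WAYS][PAGE_SIZE];

extern const uint8_t *main_memory_page(uint64_t ppn);   /* assumed accessor */

/* Install a new translation and pull the mapped page into the TLB page
 * cache, which sits temporally closer to the processor than main memory. */
void tlb_fill(unsigned slot, uint64_t vpn, uint64_t ppn)
{
    tlb[slot].vpn = vpn;
    tlb[slot].ppn = ppn;
    memcpy(page_cache[slot], main_memory_page(ppn), PAGE_SIZE);
}
```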