Abstract:
The present invention provides a method for a processor to write data to a cache or other fast memory, without also writing it to main memory. Further, the data is “locked” into the cache or other fast memory until it is loaded for use. Data remains in the locking cache until it is specifically overwritten under software control. The locking cache or other fast memory can be used as additional system memory. In an embodiment of the invention, the locking cache is one or more sets of ways, but not all of the sets or ways, of a multiple set associative cache.
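A minimal sketch of the way-locking idea in the abstract above, assuming a small C model of a set-associative cache: one way of each set is reserved as the locking cache, stores to it are not forwarded to main memory, and the replacement logic skips locked lines until software overwrites them. All structure and function names are illustrative, not taken from the patent.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NUM_SETS   4
    #define NUM_WAYS   4
    #define LINE_BYTES 32
    #define LOCKED_WAY 3              /* the way reserved as the locking cache */

    struct cache_line {
        uint32_t tag;
        int      valid;
        int      locked;              /* never evicted, never written back */
        uint8_t  data[LINE_BYTES];
    };

    static struct cache_line cache[NUM_SETS][NUM_WAYS];

    /* Processor store into the locking cache: the data lands in the locked way
     * of the addressed set and is deliberately NOT forwarded to main memory. */
    static void locked_store(uint32_t addr, const uint8_t *src, size_t len)
    {
        uint32_t set = (addr / LINE_BYTES) % NUM_SETS;
        struct cache_line *line = &cache[set][LOCKED_WAY];

        line->tag    = addr / (LINE_BYTES * NUM_SETS);
        line->valid  = 1;
        line->locked = 1;             /* replacement logic must skip this way */
        memcpy(line->data, src, len < LINE_BYTES ? len : LINE_BYTES);
    }

    /* Victim selection for normal fills: locked lines are skipped, so locked
     * data stays resident until software overwrites or unlocks it. */
    static int pick_victim(uint32_t set)
    {
        for (int w = 0; w < NUM_WAYS; w++)
            if (!cache[set][w].locked)
                return w;
        return -1;                    /* every way locked: nothing evictable */
    }

    int main(void)
    {
        uint8_t msg[LINE_BYTES] = "held in cache only";
        locked_store(0x1000, msg, sizeof msg);
        printf("victim way for set 0: %d\n",
               pick_victim((0x1000 / LINE_BYTES) % NUM_SETS));
        return 0;
    }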
Abstract:
A method and an apparatus are provided for efficiently managing the operation of a translation buffer. A combined software and hardware mechanism is used to pre-load the translation buffer, preventing the poor performance that results from slow cache warm-up.
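A minimal sketch, assuming a software-managed translation buffer: before a task is dispatched, the operating system installs translations for the pages the task is expected to touch first, so the task does not pay a string of misses while the buffer warms up. The tlb_write_entry() hook and the flat address arrays are hypothetical stand-ins for whatever the hardware and page tables actually provide.

    #include <stddef.h>
    #include <stdint.h>

    #define TLB_ENTRIES 64
    #define PAGE_SHIFT  12

    struct tlb_entry { uint64_t vpn, pfn; int valid; };
    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Hypothetical hook that installs one translation in the buffer. */
    static void tlb_write_entry(int slot, uint64_t vpn, uint64_t pfn)
    {
        tlb[slot].vpn   = vpn;
        tlb[slot].pfn   = pfn;
        tlb[slot].valid = 1;
    }

    /* Pre-load translations for the expected working set before dispatching
     * the task, instead of waiting for demand misses to fill the buffer. */
    static void preload_tlb(const uint64_t *vaddrs, const uint64_t *paddrs, size_t n)
    {
        for (size_t i = 0; i < n && i < TLB_ENTRIES; i++)
            tlb_write_entry((int)i, vaddrs[i] >> PAGE_SHIFT, paddrs[i] >> PAGE_SHIFT);
    }

    int main(void)
    {
        uint64_t va[] = { 0x400000, 0x401000, 0x7fff0000 };  /* code, data, stack pages */
        uint64_t pa[] = { 0x120000, 0x121000, 0x0aa000 };
        preload_tlb(va, pa, 3);
        return 0;
    }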
Abstract:
A computer architecture and programming model for high speed processing over broadband networks are provided. The architecture employs a consistent modular structure, a common computing module and uniform software cells. The common computing module includes a control processor, a plurality of processing units, a plurality of local memories from which the processing units process programs, a direct memory access controller and a shared main memory. A synchronized system and method for the coordinated reading and writing of data to and from the shared main memory by the processing units also are provided. A hardware sandbox structure is provided for security against the corruption of data among the programs being processed by the processing units. The uniform software cells contain both data and applications and are structured for processing by any of the processors of the network. Each software cell is uniquely identified on the network.
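One illustrative way the uniform software cell described above could be laid out in C, as a self-describing unit that carries both program and data plus a network-unique identifier, alongside a simple model of the common computing module. The field names and sizes are assumptions, not the patent's definitions.

    #include <stdint.h>

    #define CELL_MAX_PROGRAM (16 * 1024)
    #define CELL_MAX_DATA    (48 * 1024)

    /* A uniform software cell: the data and the application that processes it
     * travel together, with an identifier that is unique on the network. */
    struct software_cell {
        uint64_t global_id;                 /* unique across the network */
        uint32_t source_id;                 /* who built and dispatched the cell */
        uint32_t dest_id;                   /* which computing module should run it */
        uint32_t program_size;
        uint32_t data_size;
        uint8_t  program[CELL_MAX_PROGRAM]; /* code for a processing unit */
        uint8_t  data[CELL_MAX_DATA];       /* operands the code works on */
    };

    /* The common computing module: each processing unit works out of its own
     * local memory while sharing main memory through a DMA controller (the
     * control processor and DMA engine are not modeled here). */
    struct computing_module {
        int      num_processing_units;
        uint8_t *local_memory[8];           /* one local store per processing unit */
        uint8_t *shared_main_memory;
    };

    int main(void)
    {
        static struct software_cell cell = { .global_id = 1, .dest_id = 2 };
        (void)cell;
        return 0;
    }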
Abstract:
The present invention provides a method of storing data transferred from an I/O device, a network, or a disk into a portion of a cache or other fast memory, without also writing it to main memory. Further, the data is “locked” into the cache or other fast memory until it is loaded for use. Data remains in the locking cache until it is specifically overwritten under software control. In an embodiment of the invention, a processor can write data to the cache or other fast memory without also writing it to main memory. The portion of the cache or other fast memory can be used as additional system memory.
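A hedged sketch of the I/O path described above, with the locked cache portion modeled as a plain array: incoming device or network data is placed directly in the locked region rather than in main memory, and a slot stays reserved until software consumes it and allows it to be overwritten. All names and sizes are illustrative.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define LOCKED_REGION_BYTES 4096
    #define SLOT_BYTES          128
    #define NUM_SLOTS           (LOCKED_REGION_BYTES / SLOT_BYTES)

    static uint8_t locked_region[LOCKED_REGION_BYTES]; /* stands in for the locked ways */
    static int     slot_in_use[NUM_SLOTS];

    /* Incoming I/O data lands directly in the locked region; nothing is
     * written to main memory on this path. */
    static int io_receive(const uint8_t *payload, size_t len)
    {
        for (int s = 0; s < NUM_SLOTS; s++) {
            if (!slot_in_use[s]) {
                memcpy(&locked_region[s * SLOT_BYTES], payload,
                       len < SLOT_BYTES ? len : SLOT_BYTES);
                slot_in_use[s] = 1;         /* locked until software consumes it */
                return s;
            }
        }
        return -1;                          /* region full */
    }

    /* Software consumes a slot; only afterwards may the slot be overwritten. */
    static void consume(int slot, uint8_t *dst, size_t len)
    {
        memcpy(dst, &locked_region[slot * SLOT_BYTES],
               len < SLOT_BYTES ? len : SLOT_BYTES);
        slot_in_use[slot] = 0;
    }

    int main(void)
    {
        uint8_t frame[SLOT_BYTES] = "packet payload";
        uint8_t out[SLOT_BYTES];
        int s = io_receive(frame, sizeof frame);
        consume(s, out, sizeof out);
        printf("consumed slot %d: %s\n", s, out);
        return 0;
    }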
Abstract:
A system for using a plurality of heterogeneous processors in a common computer system is presented. Each processor type in the heterogeneous group handles a particular instruction set. The processors share a common memory using a common bus. In one embodiment, one of the processor types accesses the memory using DMA instructions. In another embodiment, a cache for each type of processor is stored in the common memory pool. In one embodiment, one or more PowerPC processors share a memory with one or more Synergistic Processing Complexes (SPCs). A common table is used to track and maintain memory for the various processors.
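A rough sketch, not the patent's actual data structures: a single table held in the shared memory pool records which processor (a PowerPC core or an SPC) owns each region, so memory for the different processor types can be tracked and maintained in one place. The layout and the toy bump allocator are assumptions made for illustration.

    #include <stdint.h>
    #include <stdio.h>

    enum proc_type { PROC_PPC, PROC_SPC };

    struct mem_region {
        uint64_t       base;
        uint64_t       size;
        enum proc_type owner_type;
        int            owner_id;
        int            in_use;
    };

    #define TABLE_ENTRIES 16
    static struct mem_region common_table[TABLE_ENTRIES];
    static uint64_t next_free = 0x100000;   /* toy bump allocator over shared memory */

    /* Record a region of the shared memory pool for a given processor. */
    static int alloc_region(enum proc_type type, int id, uint64_t size)
    {
        for (int i = 0; i < TABLE_ENTRIES; i++) {
            if (!common_table[i].in_use) {
                common_table[i] = (struct mem_region){ next_free, size, type, id, 1 };
                next_free += size;
                return i;
            }
        }
        return -1;
    }

    int main(void)
    {
        int a = alloc_region(PROC_PPC, 0, 64 * 1024);   /* PowerPC working set */
        int b = alloc_region(PROC_SPC, 3, 256 * 1024);  /* backing store for an SPC */
        printf("entries %d and %d recorded in the common table\n", a, b);
        return 0;
    }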
Abstract:
An apparatus and method for efficient communication of producer/consumer buffer status are provided. With the apparatus and method, devices in a data processing system notify each other of updates to the head and tail pointers of a shared buffer region, using signal notification channels of the devices, when they perform operations on that region. Thus, when the producer device writes data to the shared buffer region, an update to the head pointer is written to a signal notification channel of the consumer device. When the consumer device reads data from the shared buffer region, it writes a tail pointer update to a signal notification channel of the producer device. In addition, channels may operate in a blocking mode so that the corresponding device is kept in a low power state until an update is received over the channel.
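A hedged sketch of the notification scheme, assuming pthread-based blocking as a stand-in for the low-power wait: each side owns a signal notification channel; after the producer writes into the shared buffer it posts the new head index to the consumer's channel, and after the consumer reads it posts the new tail index to the producer's channel. The channel layout and names are illustrative.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    struct signal_channel {
        pthread_mutex_t lock;
        pthread_cond_t  ready;
        uint32_t        value;
        int             posted;
    };

    static void channel_write(struct signal_channel *c, uint32_t v)
    {
        pthread_mutex_lock(&c->lock);
        c->value  = v;
        c->posted = 1;
        pthread_cond_signal(&c->ready);
        pthread_mutex_unlock(&c->lock);
    }

    /* Blocking read: the caller sleeps until an update arrives, standing in
     * for the low-power wait described in the abstract. */
    static uint32_t channel_read_blocking(struct signal_channel *c)
    {
        pthread_mutex_lock(&c->lock);
        while (!c->posted)
            pthread_cond_wait(&c->ready, &c->lock);
        c->posted = 0;
        uint32_t v = c->value;
        pthread_mutex_unlock(&c->lock);
        return v;
    }

    #define RING_SLOTS 8
    static uint32_t shared_ring[RING_SLOTS];
    static struct signal_channel consumer_chan =
        { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };
    static struct signal_channel producer_chan =
        { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };

    int main(void)
    {
        uint32_t head = 0, tail = 0;

        /* Producer side: write a datum, then post the new head to the consumer. */
        shared_ring[head % RING_SLOTS] = 42;
        head++;
        channel_write(&consumer_chan, head);

        /* Consumer side: learn the head, read, then post the new tail back. */
        uint32_t new_head = channel_read_blocking(&consumer_chan);
        printf("consumed %u\n", shared_ring[tail % RING_SLOTS]);
        tail = new_head;
        channel_write(&producer_chan, tail);
        return 0;
    }

In a real system the producer and consumer run on different devices or threads; here both halves are executed in sequence only to keep the example self-contained.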
Abstract:
A computer implemented method, data processing system, and computer usable code are provided for using software and hardware thermal profiles to schedule the execution of applications. Hardware and software thermal profiles are generated for a set of processors and a set of applications, respectively, to form a plurality of hardware and software thermal profiles. A set of hardware thermal profiles and a set of software thermal profiles are then selected from this plurality. The set of software thermal profiles and the set of hardware thermal profiles are used to generate a thermal index. Finally, the execution of the set of applications is scheduled using the thermal index.
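Sketch only: one simple way a thermal index could be formed from a processor's hardware thermal profile and an application's software thermal profile, and then used to order execution. The headroom-over-heat-rate formula and the profile fields are invented for illustration; the abstract does not specify them.

    #include <stdio.h>

    struct hw_profile { const char *cpu; double max_temp_c, current_temp_c; };
    struct sw_profile { const char *app; double heat_rate_c_per_min; };

    /* Higher index = more thermal headroom relative to the heat the
     * application is expected to add, so it is the safer pairing to run. */
    static double thermal_index(const struct hw_profile *h, const struct sw_profile *s)
    {
        double headroom = h->max_temp_c - h->current_temp_c;
        return headroom / (s->heat_rate_c_per_min + 1.0);
    }

    int main(void)
    {
        struct hw_profile cpu    = { "core0", 95.0, 71.0 };
        struct sw_profile apps[] = { { "codec",   6.0 },
                                     { "indexer", 2.5 } };

        /* Schedule the application with the best (largest) index first. */
        int best = 0;
        for (int i = 1; i < 2; i++)
            if (thermal_index(&cpu, &apps[i]) > thermal_index(&cpu, &apps[best]))
                best = i;
        printf("schedule %s first (index %.2f)\n",
               apps[best].app, thermal_index(&cpu, &apps[best]));
        return 0;
    }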
Abstract:
A system and method for limiting the size of a local storage of a processor are provided. A facility is provided in association with a processor for setting a local storage size limit. This facility is a privileged facility and can only be accessed by the operating system running on a control processor in the multiprocessor system or the associated processor itself. The operating system sets the value stored in the local storage limit register when the operating system initializes a context switch in the processor. When the processor accesses the local storage using a request address, the local storage address corresponding to the request address is compared against the local storage limit size value in order to determine if the local storage address, or a modulo of the local storage address, is used to access the local storage.
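A minimal sketch of the address check described above, with invented constants: a privileged local storage limit value is set at context switch time, and each request address is either used directly (when below the limit) or wrapped modulo the limit before indexing the local store.

    #include <stdint.h>
    #include <stdio.h>

    #define LOCAL_STORE_BYTES (256 * 1024)
    static uint8_t  local_store[LOCAL_STORE_BYTES];
    static uint32_t ls_limit = LOCAL_STORE_BYTES;   /* written only by privileged code */

    /* Privileged path, e.g. run by the operating system at a context switch. */
    static void set_local_store_limit(uint32_t limit)
    {
        ls_limit = limit;
    }

    /* Access path: the request address is compared against the limit and
     * either used directly or wrapped modulo the limit. */
    static uint8_t local_store_read(uint32_t req_addr)
    {
        uint32_t addr = (req_addr < ls_limit) ? req_addr : (req_addr % ls_limit);
        return local_store[addr];
    }

    int main(void)
    {
        set_local_store_limit(64 * 1024);   /* present a smaller local store */
        local_store[100] = 7;
        /* 64K + 100 wraps back to offset 100 under the 64K limit. */
        printf("%d\n", local_store_read(64 * 1024 + 100));
        return 0;
    }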