Abstract:
Color-based caching allows each cache line to be distinguished by a specific color, and enables the manipulation of cache behavior based upon the colors of the cache lines. When multiple threads share a cache, effective cache management is critical to overall performance. Color-based caching provides an effective method to better utilize the cache and avoid unnecessary cache thrashing and pollution. Hardware maintains color-based counters associated with the cache lines to monitor cache-line events and provide feedback on them. These counters are also used for cache coherence transactions in multiprocessor systems.
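The C sketch below illustrates one way such per-color bookkeeping could look; it is only a minimal model under assumed names (NUM_COLORS, cache_line_t, color_counters, record_event), not the hardware mechanism described by the abstract.

#include <stdint.h>
#include <stdio.h>

#define NUM_COLORS 4

/* Simplified set of cache-line events tracked per color. */
enum line_event { EVT_HIT, EVT_MISS, EVT_EVICT, EVT_COHERENCE, EVT_COUNT };

typedef struct {
    uint64_t tag;
    uint8_t  color;   /* color assigned to this cache line */
    uint8_t  valid;
} cache_line_t;

/* One counter per (color, event) pair, maintained by the "hardware" here. */
static uint64_t color_counters[NUM_COLORS][EVT_COUNT];

static void record_event(const cache_line_t *line, enum line_event evt)
{
    if (line->valid && line->color < NUM_COLORS)
        color_counters[line->color][evt]++;
}

/* Feedback read out, e.g. by a coherence controller or a runtime,
 * to see which colors are thrashing and adjust behavior accordingly. */
static uint64_t read_counter(uint8_t color, enum line_event evt)
{
    return color_counters[color][evt];
}

int main(void)
{
    cache_line_t line = { .tag = 0x1000, .color = 2, .valid = 1 };
    record_event(&line, EVT_MISS);
    record_event(&line, EVT_EVICT);
    printf("color 2 evictions: %llu\n",
           (unsigned long long)read_counter(2, EVT_EVICT));
    return 0;
}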
Abstract:
Color-based caching allows each cache line to be distinguished by a specific color, and enables the manipulation of cache behavior based upon the colors of the cache lines. When multiple threads share a cache, effective cache management is critical to overall performance. Color-based caching provides an effective method to better utilize a cache and avoid unnecessary cache thrashing and/or pollution. The color-based caching can be monitored to improve memory performance and to guarantee quality of service for cache utilization.
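A minimal C sketch of how per-color occupancy might be checked against a quality-of-service quota follows; occupancy, quota, and pick_victim_color are hypothetical names, and the quota values are arbitrary.

#include <stdint.h>
#include <stdio.h>

#define NUM_COLORS 4

static uint32_t occupancy[NUM_COLORS];                             /* lines currently held */
static const uint32_t quota[NUM_COLORS] = { 512, 256, 128, 128 };  /* per-color QoS quota  */

static void on_fill(uint8_t color)  { occupancy[color]++; }
static void on_evict(uint8_t color) { if (occupancy[color]) occupancy[color]--; }

/* Prefer evicting from the color furthest over its quota, so one
 * thread's working set cannot pollute the shared cache unchecked. */
static int pick_victim_color(void)
{
    int victim = -1;
    int32_t worst = 0;
    for (int c = 0; c < NUM_COLORS; c++) {
        int32_t over = (int32_t)occupancy[c] - (int32_t)quota[c];
        if (over > worst) { worst = over; victim = c; }
    }
    return victim;   /* -1 means every color is within its quota */
}

int main(void)
{
    for (int i = 0; i < 300; i++) on_fill(1);   /* color 1 over-allocates */
    on_evict(1);
    printf("victim color: %d\n", pick_victim_color());
    return 0;
}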
Abstract:
Memory Access Coloring provides architecture support that allows software to classify memory accesses into different congruence classes by specifying a color for each memory access operation. The color information is received and recorded by the underlying system at an appropriate granularity. This allows hardware to collect color-based cache monitoring information and feed it back to the software to enable various runtime optimizations. It also enables different memory consistency models to be enforced simultaneously for memory regions with different colors.
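As a rough illustration of the software-visible side, the C sketch below attaches a color and a consistency mode to a memory range and routes a store through that color; set_color, colored_store, and the page-indexed tables are hypothetical simplifications (real support would live in the ISA and memory system, not in user code).

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define MAX_PAGES  1024
#define PAGE_IDX(a) ((((uintptr_t)(a)) >> PAGE_SHIFT) % MAX_PAGES)  /* simplified index */

enum consistency { SEQ_CST, RELAXED };

static uint8_t page_color[MAX_PAGES];        /* color recorded per page     */
static enum consistency color_model[256];    /* consistency model per color */

/* Software classifies a range: assign it a color and pick its model. */
static void set_color(void *addr, size_t len, uint8_t color,
                      enum consistency model)
{
    for (size_t off = 0; off < len; off += (size_t)1 << PAGE_SHIFT)
        page_color[PAGE_IDX((char *)addr + off)] = color;
    page_color[PAGE_IDX((char *)addr + len - 1)] = color;  /* cover the last page */
    color_model[color] = model;
}

/* A store routed through the coloring layer: a relaxed color could be
 * buffered, while a sequentially consistent color is fenced at once. */
static void colored_store(uint32_t *addr, uint32_t val)
{
    uint8_t color = page_color[PAGE_IDX(addr)];
    *addr = val;
    if (color_model[color] == SEQ_CST)
        __sync_synchronize();                /* full memory barrier */
}

int main(void)
{
    static uint32_t buf[16];
    set_color(buf, sizeof buf, 3, SEQ_CST);
    colored_store(&buf[0], 42);
    printf("buf[0]=%u colored %u\n", buf[0], page_color[PAGE_IDX(buf)]);
    return 0;
}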
Abstract:
Methods, systems, and media are contemplated for reducing the memory latency seen by processors by giving software applications a measure of control over on-chip memory (OCM) management, implicitly and/or explicitly, via an operating system. Many embodiments allow part of the OCM to be managed by software applications via an application program interface (API) and part to be managed by hardware. Thus, the software applications can provide guidance regarding address ranges to maintain close to the processor, reducing the unnecessary latencies typically encountered when depending solely upon cache controller policies. Several embodiments utilize memory internal to the processor or on a processor node, so the memory block used for this technique is referred to as OCM.
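A minimal C sketch of what the application-facing side of such an interface might look like follows; OCM_SW_BYTES, ocm_pin_range, and the byte counts are assumptions made only for illustration, not the API contemplated by the abstract.

#include <stddef.h>
#include <stdio.h>

#define OCM_TOTAL_BYTES (256 * 1024)
#define OCM_SW_BYTES    (128 * 1024)    /* software-managed partition; rest is hardware-managed */

static size_t sw_used;                  /* bytes already handed out */

/* The application asks the OS to keep [addr, addr+len) resident in the
 * software-managed part of the OCM, avoiding cache-policy evictions. */
static int ocm_pin_range(const void *addr, size_t len)
{
    if (sw_used + len > OCM_SW_BYTES)
        return -1;                      /* partition full; caller falls back */
    sw_used += len;
    printf("pinned %zu bytes at %p into OCM\n", len, addr);
    return 0;
}

int main(void)
{
    static double hot_table[4096];      /* latency-critical data */
    if (ocm_pin_range(hot_table, sizeof hot_table) != 0)
        fprintf(stderr, "OCM partition exhausted, using ordinary DRAM\n");
    return 0;
}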
Abstract:
A method and apparatus for restricting access of an application to computer hardware. The apparatus includes both an authentication module and a validation module. The authentication module resides within the trusted firmware layer; its purpose is to verify a cryptographic key presented by an application. The validation module is responsive to the authentication module and limits access of the application to the computer hardware. The authentication module may be implemented in software through a firmware call, or through a hardware register of the computer.
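The C sketch below shows the authenticate-then-validate flow in simplified form; fw_authenticate, access_device_register, and the plain byte comparison are stand-ins (a real implementation would perform an actual cryptographic check inside trusted firmware).

#include <stdint.h>
#include <string.h>
#include <stdio.h>

typedef struct { uint8_t bytes[16]; } crypto_key_t;

/* "Trusted firmware layer": verify the key presented by the application. */
static int fw_authenticate(const crypto_key_t *presented,
                           const crypto_key_t *expected)
{
    return memcmp(presented->bytes, expected->bytes,
                  sizeof presented->bytes) == 0;
}

/* Validation step: only an authenticated application may touch the device. */
static int access_device_register(const crypto_key_t *key,
                                  const crypto_key_t *expected,
                                  uint32_t *reg, uint32_t value)
{
    if (!fw_authenticate(key, expected))
        return -1;          /* access restricted */
    *reg = value;           /* permitted hardware access */
    return 0;
}

int main(void)
{
    crypto_key_t good = { { 1, 2, 3 } }, bad = { { 9 } };
    uint32_t fake_hw_reg = 0;
    printf("good key: %d\n", access_device_register(&good, &good, &fake_hw_reg, 0xA5));
    printf("bad  key: %d\n", access_device_register(&bad,  &good, &fake_hw_reg, 0xA5));
    return 0;
}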
Abstract:
To dynamically update an operating system, a new factory object is provided that may have one or more new and/or updated object instances. A corresponding old factory object is then located and its version is checked for compatibility. A dynamic update procedure is then executed, which includes (a) changing a factory reference pointer within the operating system from the old factory object to the new factory object and, for the case of updated object instances, (b) hot swapping each old object instance for its corresponding updated object instance and (c) removing the old factory object. This may be performed for multiple updated object instances in the new factory object, preferably each separately. For the case of new object instances, they are created by the new factory and pointers are established to invoke them. A single factory object may include multiple updated objects from a class and/or new object instances from different classes, and the update may be performed without the need to reboot the operating system.
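To make the sequence concrete, the C sketch below swaps a factory reference pointer and hot-swaps the object instance it produces; obj_instance, factory, and dynamic_update are hypothetical names and the version check is deliberately simplistic.

#include <stdio.h>

typedef struct { int (*handle)(int); } obj_instance;
typedef struct { int version; obj_instance *(*create)(void); } factory;

static int handle_v1(int x) { return x + 1; }
static int handle_v2(int x) { return x + 2; }      /* updated behavior */

static obj_instance inst_v1 = { handle_v1 };
static obj_instance inst_v2 = { handle_v2 };
static obj_instance *create_v1(void) { return &inst_v1; }
static obj_instance *create_v2(void) { return &inst_v2; }

static factory factory_v1 = { 1, create_v1 };
static factory factory_v2 = { 2, create_v2 };

/* The operating system reaches objects through these pointers. */
static factory *factory_ref;
static obj_instance *live_instance;

static int dynamic_update(factory *new_factory)
{
    if (factory_ref && new_factory->version < factory_ref->version)
        return -1;                             /* incompatible: refuse the update */
    factory_ref   = new_factory;               /* (a) swap the factory reference  */
    live_instance = new_factory->create();     /* (b) hot-swap the object instance */
    /* (c) the old factory object is no longer referenced and can be removed. */
    return 0;
}

int main(void)
{
    dynamic_update(&factory_v1);
    printf("before update: %d\n", live_instance->handle(10));
    dynamic_update(&factory_v2);               /* applied without a reboot */
    printf("after  update: %d\n", live_instance->handle(10));
    return 0;
}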
Abstract:
A system, method and computer program product for efficient sharing of memory between first and second applications running under first and second operating systems on a shared hardware system. The hardware system runs a hypervisor that supports concurrent execution of the first and second operating systems, and further includes a region of shared memory managed on behalf of the first and second applications. Techniques are used to avoid preemption while the first application is accessing the shared memory region. In this way, the second application is not unduly delayed by the first application's access when it attempts to access the shared memory region itself. This is especially advantageous when the second application and operating system are adapted for real-time processing. Additional benefits can be obtained by taking steps to minimize memory access faults.
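As one illustration of the preemption-avoidance and fault-minimization ideas, the C sketch below pre-touches the shared pages and brackets the access with hypothetical no_preempt_begin()/no_preempt_end() calls; the statically allocated region, the call names, and publish_record are all assumptions made for the example.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SHM_BYTES (64 * 1024)
static uint8_t shm_base[SHM_BYTES];     /* stands in for the mapped shared region */

/* Hypothetical OS/hypervisor hints: do not preempt for a short window. */
static void no_preempt_begin(void) { /* e.g. a hypervisor or OS call */ }
static void no_preempt_end(void)   { }

/* Touch every page up front so no page fault occurs inside the window. */
static void prefault(void *base, size_t len)
{
    volatile uint8_t *p = base;
    for (size_t off = 0; off < len; off += 4096)
        p[off] = p[off];
}

/* First application's update: kept short and fault-free so the second,
 * real-time application is never blocked behind it for long. */
static void publish_record(const void *rec, size_t len)
{
    prefault(shm_base, len);
    no_preempt_begin();
    memcpy(shm_base, rec, len);
    no_preempt_end();
}

int main(void)
{
    const char msg[] = "sensor sample";
    publish_record(msg, sizeof msg);
    return 0;
}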