Abstract:
Techniques are disclosed relating to filtering cache accesses. In some embodiments, a control unit is configured to, in response to a request to process a set of data, determine a size of a portion of the set of data to be handled using a cache. In some embodiments, the control unit is configured to determine filtering parameters indicative of a set of addresses corresponding to the determined size. In some embodiments, the control unit is configured to process one or more access requests for the set of data based on the determined filtering parameters, including using the cache to process access requests having addresses in the set of addresses and bypassing the cache to access a backing memory directly for access requests having addresses that are not in the set of addresses. The disclosed techniques may reduce average memory bandwidth or peak memory bandwidth.
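To make the idea concrete, below is a minimal C++ sketch of how such a control unit might route accesses. It is an illustration under assumptions, not the patented implementation: the class name CacheFilterControl, the cache-half-the-data-set sizing policy, and the contiguous address-range filter are all hypothetical choices.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical sketch of a cache-filtering control unit. On a request to
// process a data set, it decides how large a portion to handle via the
// cache, derives filtering parameters (here, a simple address range), and
// then routes each access either through the cache or straight to memory.
class CacheFilterControl {
public:
    // Assumed sizing policy: cache half of the data set, capped by the
    // cache capacity. Real hardware could use any sizing heuristic.
    CacheFilterControl(uint64_t base, uint64_t set_size, uint64_t cache_capacity) {
        uint64_t portion = set_size / 2;
        if (portion > cache_capacity) portion = cache_capacity;
        filter_base_  = base;
        filter_limit_ = base + portion;   // the filtering parameters
    }

    // True if the address falls in the filtered set of addresses.
    bool use_cache(uint64_t addr) const {
        return addr >= filter_base_ && addr < filter_limit_;
    }

    void access(uint64_t addr) const {
        if (use_cache(addr)) {
            std::cout << "0x" << std::hex << addr << ": via cache\n";
        } else {
            std::cout << "0x" << std::hex << addr << ": bypass to backing memory\n";
        }
    }

private:
    uint64_t filter_base_;
    uint64_t filter_limit_;
};

int main() {
    // 1 MiB data set, 256 KiB cache: only the first 256 KiB is cached.
    CacheFilterControl ctl(0x100000, 1 << 20, 256 << 10);
    ctl.access(0x100040);   // inside the filtered range -> uses the cache
    ctl.access(0x1F0000);   // outside the range -> bypasses the cache
}
```

In this sketch, only the filtered portion ever occupies the cache, while the rest of the stream goes straight to backing memory; bounding the cached working set in this way is one plausible mechanism behind the stated reduction in average or peak memory bandwidth.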
Abstract:
Various techniques are disclosed for processing instructions that specify multiple destinations. A first portion of a processor pipeline is configured to split a multi-destination instruction into a plurality of single-destination operations. A second portion of the pipeline is configured to process the plurality of single-destination operations. A third portion of the pipeline is configured to merge the plurality of single-destination operations into one or more multi-destination operations. The one or more multi-destination operations may be performed. The first portion of the pipeline may include a decode unit. The second portion of the pipeline may include a map unit, which may in turn include circuitry configured to maintain a list of free physical registers and a mapping table that maps architectural registers to physical registers. The third portion of the pipeline may comprise a dispatch unit. In some embodiments, this may provide certain advantages such as reduced area and/or power consumption.
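The following C++ sketch illustrates the split/rename/merge flow described above, using a load-pair instruction as the example multi-destination instruction. All names (decode_split, MapUnit, dispatch_merge) are hypothetical, and the model is a simplification, not the disclosed hardware.

```cpp
#include <deque>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical micro-op: one destination register per operation.
struct MicroOp {
    unsigned group;        // ties split ops back to their parent instruction
    std::string opcode;
    unsigned arch_dst;     // architectural destination register
    int phys_dst = -1;     // physical destination, filled in by the map unit
};

// "First portion" (decode unit): split a two-destination load-pair into
// two single-destination loads that share a group tag.
std::vector<MicroOp> decode_split(unsigned group, unsigned dst0, unsigned dst1) {
    return { {group, "load", dst0}, {group, "load", dst1} };
}

// "Second portion" (map unit): a free list of physical registers plus a
// mapping table from architectural to physical registers.
struct MapUnit {
    std::deque<int> free_physical{10, 11, 12, 13};
    std::unordered_map<unsigned, int> arch_to_phys;

    void rename(MicroOp& op) {
        op.phys_dst = free_physical.front();   // allocate a physical register
        free_physical.pop_front();
        arch_to_phys[op.arch_dst] = op.phys_dst;
    }
};

// "Third portion" (dispatch unit): merge the same-group single-destination
// ops back into one multi-destination operation for execution.
void dispatch_merge(const std::vector<MicroOp>& ops) {
    std::cout << ops[0].opcode << "-pair ->";
    for (const auto& op : ops)
        std::cout << " p" << op.phys_dst << "(r" << op.arch_dst << ")";
    std::cout << "\n";
}

int main() {
    auto ops = decode_split(/*group=*/0, /*dst0=*/1, /*dst1=*/2);
    MapUnit map;
    for (auto& op : ops) map.rename(op);   // each op is renamed independently
    dispatch_merge(ops);                   // then executed as one operation
}
```

Splitting lets the rename machinery stay single-destination (smaller map and free-list ports), while merging at dispatch recovers the efficiency of executing one combined operation, which is consistent with the area and power advantages the abstract mentions.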