Cache line compaction of compressed data segments

    Publication Number: US10261910B2

    Publication Date: 2019-04-16

    Application Number: US15077534

    Application Date: 2016-03-22

    Abstract: Methods, devices, and non-transitory process-readable storage media for compacting data within cache lines of a cache. An aspect method may include identifying, by a processor of the computing device, a base address (e.g., a physical or virtual cache address) for a first data segment, identifying a data size (e.g., based on a compression ratio) for the first data segment, obtaining a base offset based on the identified data size and the base address of the first data segment, and calculating an offset address by offsetting the base address with the obtained base offset, wherein the calculated offset address is associated with a second data segment. In some aspects, the method may include identifying a parity value for the first data segment based on the base address and obtaining the base offset by performing a lookup on a stored table using the identified data size and identified parity value.
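The address arithmetic described in the abstract can be illustrated with a minimal sketch. The 64-byte line size, the parity rule, and the contents of the offset table are assumptions for illustration; the patent only specifies that the base offset is looked up from a stored table keyed by data size and parity.

```python
# Hypothetical sketch of the offset-address calculation: the 64-byte line
# size, the parity derivation, and the table entries are all assumptions.
CACHE_LINE_SIZE = 64  # bytes (assumed)

# Assumed lookup table: (data_size, parity) -> base offset. Pairing a
# segment at even parity with its companion at odd parity lets two
# compressed segments share one cache line.
OFFSET_TABLE = {
    (16, 0): 48, (16, 1): 0,
    (32, 0): 32, (32, 1): 0,
    (48, 0): 16, (48, 1): 0,
}

def offset_address(base_address: int, data_size: int) -> int:
    """Return the address associated with the second (companion) segment."""
    # Parity value identified from the base address, e.g. the low bit of
    # the cache line index (assumed derivation).
    parity = (base_address // CACHE_LINE_SIZE) & 1
    base_offset = OFFSET_TABLE[(data_size, parity)]
    # Offset the base address with the obtained base offset.
    return base_address + base_offset
```

With this assumed table, a 16-byte segment at an even-parity line is placed so its companion starts 48 bytes in, while an odd-parity line uses offset 0.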

    Power aware padding
    16.
    Invention Grant

    Publication Number: US09858196B2

    Publication Date: 2018-01-02

    Application Number: US14462773

    Application Date: 2014-08-19

    Abstract: Aspects include computing devices, systems, and methods for implementing cache memory access requests for data smaller than a cache line and eliminating overfetching from a main memory by combining the data with padding data of a size equal to the difference between the cache line size and the data size. A processor may determine whether the data, uncompressed or compressed, is smaller than a cache line using a size of the data or a compression ratio of the data. The processor may generate the padding data using constant data values or a pattern of data values. The processor may send a write cache memory access request for the combined data to a cache memory controller, which may write the combined data to a cache memory. The cache memory controller may send a write memory access request to a memory controller, which may write the combined data to a memory.
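The padding step can be sketched as follows. The 64-byte line size and the constant pad byte are assumptions; the abstract also allows a repeating pattern of data values instead of a constant.

```python
# Minimal sketch of power-aware padding, assuming a 64-byte cache line
# and a constant padding value (a data-value pattern would also qualify).
CACHE_LINE_SIZE = 64  # bytes (assumed)
PAD_BYTE = 0x00       # assumed constant padding value

def pad_to_cache_line(data: bytes) -> bytes:
    """Combine sub-line data with padding so a full line can be written,
    avoiding an overfetch (read-modify-write) from main memory."""
    if len(data) >= CACHE_LINE_SIZE:
        return data  # a full line needs no padding
    padding = bytes([PAD_BYTE]) * (CACHE_LINE_SIZE - len(data))
    return data + padding
```

Writing the padded line whole means the cache controller never has to fetch the rest of the line from memory just to complete a partial write.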

    Process Scheduling to Improve Victim Cache Mode
    18.
    Invention Application (In Force)

    Publication Number: US20160239344A1

    Publication Date: 2016-08-18

    Application Number: US14623554

    Application Date: 2015-02-17

    Abstract: Aspects include computing devices, systems, and methods for scheduling an execution process to an execution processor cluster so as to take advantage of the reduced latency of a victim cache. The computing device may determine a first processor cluster with a first remote shared cache memory having available shared cache memory space. To properly schedule the execution process, the computing device may determine a second processor cluster with a lower latency to the first remote shared cache memory than the execution processor cluster currently scheduled with the execution process. The execution process may be scheduled to the second processor cluster, which thus becomes the execution processor cluster, based on the size of the available shared cache memory space and the latency of the second processor cluster to the first remote shared cache memory. The available shared cache memory space may be used as the victim cache for the execution process.
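The scheduling decision above can be sketched as a latency comparison. The `Cluster` type, its fields, and the nanosecond latency map are hypothetical names invented for illustration.

```python
# Hypothetical sketch: reschedule a process to the cluster with the
# lowest latency to a donor cluster's free shared cache (the victim cache).
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    free_shared_cache: int  # bytes available to serve as a victim cache
    latency_to: dict        # cluster name -> access latency in ns (assumed)

def pick_execution_cluster(clusters, donor, current):
    """Pick the execution cluster: the one with the lowest latency to the
    donor's shared cache, if it beats the currently scheduled cluster."""
    candidates = [c for c in clusters if c is not donor]
    best = min(candidates, key=lambda c: c.latency_to[donor.name])
    if best.latency_to[donor.name] < current.latency_to[donor.name]:
        return best  # second cluster becomes the execution cluster
    return current
```

A donor cluster would first be chosen for having enough free shared cache space for the process's evicted lines; that size check is omitted here for brevity.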

    Cache line compaction of compressed data segments
    19.
    Invention Grant (In Force)

    Publication Number: US09361228B2

    Publication Date: 2016-06-07

    Application Number: US14451639

    Application Date: 2014-08-05

    Abstract: Methods, devices, and non-transitory process-readable storage media for compacting data within cache lines of a cache. An aspect method may include identifying, by a processor of the computing device, a base address (e.g., a physical or virtual cache address) for a first data segment, identifying a data size (e.g., based on a compression ratio) for the first data segment, obtaining a base offset based on the identified data size and the base address of the first data segment, and calculating an offset address by offsetting the base address with the obtained base offset, wherein the calculated offset address is associated with a second data segment. In some aspects, the method may include identifying a parity value for the first data segment based on the base address and obtaining the base offset by performing a lookup on a stored table using the identified data size and identified parity value.

    Caching pictures for video coding
    20.
    Invention Application

    Publication Number: US20240422335A1

    Publication Date: 2024-12-19

    Application Number: US18337109

    Application Date: 2023-06-19

    Abstract: A method includes generating a plurality of future reference picture lists associated with a plurality of future pictures of a set of pictures, wherein the set of pictures includes a current picture and the plurality of future pictures, and the plurality of future pictures follow the current picture in coding order; determining, based on information derived from the plurality of future reference picture lists associated with the plurality of future pictures, whether to write the current picture in a dedicated chip memory or in a non-dedicated system memory; and writing the current picture in the dedicated chip memory or the non-dedicated system memory based on the determination.
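The placement decision can be sketched with a simple membership test over the future reference picture lists. Representing pictures by picture order count (POC) and each reference list as a set of POCs is an assumption for illustration; the abstract only says the decision uses information derived from those lists.

```python
# Hypothetical sketch: keep a picture in fast dedicated chip memory only
# if a future picture (in coding order) will reference it; otherwise
# write it to non-dedicated system memory. POC-set representation assumed.
def write_target(current_poc, future_ref_lists):
    """Return 'dedicated' or 'system' for the decoded current picture."""
    for ref_list in future_ref_lists:
        if current_poc in ref_list:
            # At least one upcoming picture references the current one.
            return "dedicated"
    return "system"
```

For example, a picture never referenced by any upcoming picture can be spilled to system memory without penalizing future motion compensation.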
