    51. Mechanisms to bound the presence of cache blocks with specific properties in caches
    Invention Grant - In Force

    Publication Number: US09075730B2

    Publication Date: 2015-07-07

    Application Number: US13725011

    Filing Date: 2012-12-21

    CPC classification number: G06F12/0871 G06F12/0848

    Abstract: A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache and one or more sources of memory requests. In response to receiving a request to allocate data of a first type, a cache controller allocates the data in the cache responsive to determining that a limit on the amount of data of the first type permitted in the cache has not been reached. The controller maintains the amount and location information of the data of the first type stored in the cache. Additionally, the cache may be partitioned, with each partition designated for storing data of a given type. Allocation of data of the first type then depends at least on the availability of a first partition and on a limit on the amount of data of the first type in a second partition.

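    A minimal Python sketch of the per-type allocation limit described in this abstract; the class, method, and field names are illustrative assumptions rather than anything taken from the patent, and the partitioning variant is omitted.

```python
# Sketch of a cache controller that bounds how many blocks of a given data type it holds.
# All names are illustrative; this is not the patented implementation.

class TypeLimitedCache:
    def __init__(self, capacity, type_limits):
        self.capacity = capacity            # total number of cache blocks
        self.type_limits = type_limits      # e.g. {"dirty": 1}: max blocks permitted per data type
        self.blocks = {}                    # address -> data type (location information)
        self.type_counts = {}               # data type -> number of blocks currently held (amount)

    def allocate(self, address, data_type):
        """Allocate a block only if the per-type limit has not been reached."""
        limit = self.type_limits.get(data_type)
        if limit is not None and self.type_counts.get(data_type, 0) >= limit:
            return False                    # limit reached: refuse to allocate this type
        if len(self.blocks) >= self.capacity:
            return False                    # cache full (a real controller would evict)
        self.blocks[address] = data_type
        self.type_counts[data_type] = self.type_counts.get(data_type, 0) + 1
        return True

cache = TypeLimitedCache(capacity=4, type_limits={"dirty": 1})
print(cache.allocate(0x100, "dirty"))   # True
print(cache.allocate(0x140, "dirty"))   # False: only one "dirty" block is permitted
print(cache.allocate(0x180, "clean"))   # True: other types are unconstrained
```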

    52. Processing device with address translation probing and methods
    Invention Grant - In Force

    Publication Number: US08984255B2

    Publication Date: 2015-03-17

    Application Number: US13723379

    Filing Date: 2012-12-21

    Abstract: A data processing device is provided that employs multiple translation look-aside buffers (TLBs) associated with respective processors, each configured to store selected address translations of a page table of a memory shared by the processors. The processing device is configured such that when an address translation is requested by a processor and is not found in the TLB associated with that processor, another TLB is probed for the requested address translation. The probe of the other TLB may occur before a walk of the page table for the requested address, or a walk may be initiated concurrently with the probe. Where the probe successfully finds the requested address translation, the page table walk can be avoided or discontinued.

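    A minimal Python sketch of the cross-TLB probe, shown sequentially rather than concurrently with the page table walk; the function names and the assumed 4 KiB page size are illustrative only.

```python
# Sketch: on a local TLB miss, probe the other processors' TLBs before walking the page table.

def translate(vaddr, local_tlb, remote_tlbs, page_table):
    """Return the physical frame for vaddr, probing other TLBs before walking the page table."""
    page = vaddr >> 12                       # 4 KiB pages assumed for this sketch
    if page in local_tlb:                    # hit in the requesting processor's own TLB
        return local_tlb[page]
    for tlb in remote_tlbs:                  # probe the TLBs of the other processors
        if page in tlb:
            local_tlb[page] = tlb[page]      # fill the local TLB; page-table walk avoided
            return tlb[page]
    frame = page_table[page]                 # probe missed everywhere: walk the shared page table
    local_tlb[page] = frame
    return frame

page_table = {0x1: 0xA0, 0x2: 0xB0}
cpu0_tlb, cpu1_tlb = {0x1: 0xA0}, {}
print(hex(translate(0x1234, cpu1_tlb, [cpu0_tlb], page_table)))  # 0xa0, served by probing cpu0's TLB
```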

    53. Conditional Notification Mechanism
    Invention Application - In Force

    Publication Number: US20140304474A1

    Publication Date: 2014-10-09

    Application Number: US13856728

    Filing Date: 2013-04-04

    Abstract: The described embodiments comprise a computing device with a first processor core and a second processor core. In some embodiments, during operations, the first processor core receives, from the second processor core, an indication of a memory location and a flag. The first processor core then stores the flag in a first cache line in a cache in the first processor core and stores the indication of the memory location separately in a second cache line in the cache. Upon encountering a predetermined result when evaluating a condition for the indicated memory location, the first processor core updates the flag in the first cache line. Based on the update of the flag, the first processor core causes the second processor core to perform an operation.

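    A minimal Python sketch of the notification flow, using threads to stand in for the two processor cores and a threading.Event for the flag cache line; the names, polling interval, and condition are illustrative assumptions.

```python
# Sketch: the first core monitors an indicated location and updates a flag when a
# predetermined condition is met, which causes the second core to proceed.

import threading, time

shared = {"counter": 0}          # the monitored memory location
flag = threading.Event()         # stands in for the flag stored in the first cache line

def first_core(condition):
    """Poll the indicated location and update the flag when the condition is met."""
    while not condition(shared["counter"]):
        time.sleep(0.01)
    flag.set()                   # flag update notifies the waiting core

def second_core():
    flag.wait()                  # blocked until the first core updates the flag
    print("second core resumes: counter reached", shared["counter"])

threading.Thread(target=first_core, args=(lambda v: v >= 5,)).start()
threading.Thread(target=second_core).start()
for _ in range(5):
    shared["counter"] += 1       # another agent updates the monitored location
    time.sleep(0.02)
```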

    54. SERVING MEMORY REQUESTS IN CACHE COHERENT HETEROGENEOUS SYSTEMS
    Invention Application - Pending (Published)

    Publication Number: US20140281234A1

    Publication Date: 2014-09-18

    Application Number: US13795777

    Filing Date: 2013-03-12

    CPC classification number: G06F12/0815 G06F12/0817

    Abstract: An apparatus, computer readable medium, and method of servicing memory requests are presented. A read request for a memory block from a requester processor having a processor type may be serviced by providing exclusive access to the requested memory block to the requester processor when the requested memory block was modified the last time it was accessed by a previous requester processor having the same processor type as the requester processor. Exclusive access to the requested memory block may also be provided to the requester processor based on whether the requested memory block was modified by a previous processor of the same type as the requester processor at least once during the last several times the memory block was in a cache of that previous processor. Exclusive access to the requested memory block may further be provided to the requester processor based on a region of the memory block.

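    A minimal Python sketch of the grant-exclusive-on-read heuristic, keeping a short per-block history of which processor type held the block and whether it was modified; the history depth and all names are assumptions for illustration.

```python
# Sketch: grant exclusive access on a read if a same-type processor modified the block
# at least once in the last few times the block lived in a cache.

from collections import defaultdict, deque

HISTORY_DEPTH = 4    # "the last several times" the block was cached (assumed value)

history = defaultdict(lambda: deque(maxlen=HISTORY_DEPTH))  # block -> recent (type, modified) pairs

def record_eviction(block, proc_type, was_modified):
    """Called when a block leaves a processor's cache."""
    history[block].append((proc_type, was_modified))

def grant_exclusive_on_read(block, requester_type):
    """Grant exclusive access if a same-type processor modified the block recently."""
    return any(t == requester_type and modified for t, modified in history[block])

record_eviction(0x80, "CPU", was_modified=True)
record_eviction(0x80, "GPU", was_modified=False)
print(grant_exclusive_on_read(0x80, "CPU"))   # True: a CPU modified it recently
print(grant_exclusive_on_read(0x80, "GPU"))   # False: grant shared access instead
```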

    55. Conditional Notification Mechanism
    Invention Application - In Force

    Publication Number: US20140250312A1

    Publication Date: 2014-09-04

    Application Number: US13782117

    Filing Date: 2013-03-01

    Abstract: The described embodiments comprise a first hardware context. The first hardware context receives, from a second hardware context, an indication of a memory location and a condition to be met by the memory location. The first hardware context then sends a signal to the second hardware context when the memory location meets the condition.

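    A compact Python sketch of the register-then-signal flow between the two hardware contexts; the classes and method names are purely illustrative assumptions.

```python
# Sketch: one context registers (location, condition) on behalf of another and signals it
# once a store makes the condition true.

class HardwareContext:
    def __init__(self, name):
        self.name = name
        self.watches = []                    # (location, condition, requester) tuples

    def register_watch(self, location, condition, requester):
        """The second context hands over a memory location and the condition to be met."""
        self.watches.append((location, condition, requester))

    def on_store(self, memory, location):
        """Called whenever `location` is written; signal requesters whose condition now holds."""
        for loc, cond, requester in list(self.watches):
            if loc == location and cond(memory[location]):
                requester.signal(location)
                self.watches.remove((loc, cond, requester))

class Waiter:
    def signal(self, location):
        print(f"signalled: condition met at {hex(location)}")

memory = {0x200: 0}
ctx_a, ctx_b = HardwareContext("A"), Waiter()
ctx_a.register_watch(0x200, lambda v: v == 42, ctx_b)
memory[0x200] = 42
ctx_a.on_store(memory, 0x200)               # prints the signal message
```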

    56. DIE-STACKED MEMORY DEVICE PROVIDING DATA TRANSLATION
    Invention Application - In Force

    Publication Number: US20140181458A1

    Publication Date: 2014-06-26

    Application Number: US13726143

    Filing Date: 2012-12-23

    Abstract: A die-stacked memory device incorporates a data translation controller at one or more logic dies of the device to provide data translation services for data to be stored at, or retrieved from, the die-stacked memory device. The data translation operations implemented by the data translation controller can include compression/decompression operations, encryption/decryption operations, format translations, wear-leveling translations, data ordering operations, and the like. Due to the tight integration of the logic dies and the memory dies, the data translation controller can perform data translation operations with higher bandwidth and lower latency and power consumption compared to operations performed by devices external to the die-stacked memory device.

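    A minimal Python sketch of a write/read path through an in-stack data translation controller, using zlib compression as a stand-in for the translation operations the abstract lists; the class and its interface are assumptions, not the device's actual design.

```python
# Sketch: data is translated (here, compressed) on its way into the memory dies and
# reverse-translated on the way out, entirely inside the stack.

import zlib

class DataTranslationController:
    """Stands in for the logic die that translates data before it reaches the memory dies."""
    def __init__(self):
        self.memory_dies = {}                 # address -> stored (translated) bytes

    def write(self, address, data: bytes):
        self.memory_dies[address] = zlib.compress(data)     # translate on the way in

    def read(self, address) -> bytes:
        return zlib.decompress(self.memory_dies[address])   # reverse translation on the way out

ctrl = DataTranslationController()
payload = b"die-stacked" * 64
ctrl.write(0x1000, payload)
stored = ctrl.memory_dies[0x1000]
print(len(payload), "->", len(stored), "bytes stored")      # compression done inside the stack
assert ctrl.read(0x1000) == payload
```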

    CONDENSED COMMAND PACKET FOR HIGH THROUGHPUT AND LOW OVERHEAD KERNEL LAUNCH

    Publication Number: US20220197696A1

    Publication Date: 2022-06-23

    Application Number: US17133574

    Filing Date: 2020-12-23

    Abstract: Methods, devices, and systems for launching a compute kernel. A reference kernel dispatch packet is received by a kernel agent. The reference kernel dispatch packet is processed by the kernel agent to determine kernel dispatch information. The kernel dispatch information is stored by the kernel agent. A kernel is dispatched by the kernel agent, based on the kernel dispatch information. In some implementations, a condensed kernel dispatch packet is received by the kernel agent, the condensed kernel dispatch packet is processed by the kernel agent to retrieve the stored kernel dispatch information, and a kernel is dispatched by the kernel agent based on the retrieved kernel dispatch information.
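
    A minimal Python sketch of storing a reference dispatch packet's information so that a later condensed packet can reuse it; the packet fields and the agent interface here are assumptions for illustration, not the actual packet format.

```python
# Sketch: a kernel agent caches dispatch info from a full reference packet, then launches
# from much smaller condensed packets that only reference (and optionally tweak) that info.

class KernelAgent:
    def __init__(self):
        self.saved_dispatches = {}            # reference id -> stored kernel dispatch info

    def handle_reference_packet(self, ref_id, dispatch_info):
        """Process a full (reference) packet: store its dispatch info, then launch."""
        self.saved_dispatches[ref_id] = dict(dispatch_info)
        self.launch(dispatch_info)

    def handle_condensed_packet(self, ref_id, overrides=None):
        """Process a condensed packet: retrieve stored info, apply small overrides, launch."""
        info = dict(self.saved_dispatches[ref_id])
        info.update(overrides or {})
        self.launch(info)

    def launch(self, info):
        print("launching", info["kernel"], "grid", info["grid_size"],
              "args @", hex(info["kernarg_address"]))

agent = KernelAgent()
agent.handle_reference_packet(7, {"kernel": "vec_add", "grid_size": 1024, "kernarg_address": 0x9000})
agent.handle_condensed_packet(7, {"kernarg_address": 0x9100})   # far smaller packet on the wire
```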

    TECHNIQUES FOR IMPROVING OPERAND CACHING

    Publication Number: US20210173650A1

    Publication Date: 2021-06-10

    Application Number: US16703833

    Filing Date: 2019-12-04

    Abstract: A technique for determining whether a register value should be written to an operand cache or whether the register value should remain in and not be evicted from the operand cache is provided. The technique includes executing an instruction that accesses an operand that comprises the register value, performing one or both of a lookahead technique and a prediction technique to determine whether the register value should be written to an operand cache or whether the register value should remain in and not be evicted from the operand cache, and based on the determining, updating the operand cache.
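
    A minimal Python sketch of the lookahead half of the decision: retain a register value in the operand cache only if an upcoming instruction reads it again. The window size and instruction encoding are assumptions; the prediction technique is omitted.

```python
# Sketch: scan a short window of future instructions for reuse of a register before
# deciding to write it into (or keep it in) the operand cache.

LOOKAHEAD_WINDOW = 4   # how many future instructions to scan (assumed value)

def should_cache_operand(reg, upcoming_instructions):
    """Return True if `reg` is read again within the lookahead window."""
    for instr in upcoming_instructions[:LOOKAHEAD_WINDOW]:
        if reg in instr["sources"]:
            return True
    return False

program = [
    {"op": "mul", "dest": "r3", "sources": ["r1", "r2"]},
    {"op": "add", "dest": "r4", "sources": ["r3", "r2"]},   # r3 reused immediately
    {"op": "sub", "dest": "r5", "sources": ["r4", "r0"]},
]
print(should_cache_operand("r3", program[1:]))   # True: write r3 into the operand cache
print(should_cache_operand("r1", program[1:]))   # False: let r1 bypass / be evicted
```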

    ENHANCED ATOMICS FOR WORKGROUP SYNCHRONIZATION

    Publication Number: US20210096909A1

    Publication Date: 2021-04-01

    Application Number: US16588872

    Filing Date: 2019-09-30

    Abstract: A technique for synchronizing workgroups is provided. The techniques comprise detecting that one or more non-executing workgroups are ready to execute, placing the one or more non-executing workgroups into one or more ready queues based on the synchronization status of the one or more workgroups, detecting that computing resources are available for execution of one or more ready workgroups, and scheduling for execution one or more ready workgroups from the one or more ready queues in an order that is based on the relative priority of the ready queues.
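
    A minimal Python sketch of priority-ordered ready queues for workgroup scheduling; the two queue categories and their relative priority are assumptions made for illustration.

```python
# Sketch: ready workgroups are placed into queues by synchronization status and launched
# in queue-priority order when compute resources become available.

from collections import deque

ready_queues = {
    "synchronized": deque(),   # workgroups whose synchronization condition is satisfied
    "unblocked":    deque(),   # other workgroups that simply became ready
}
queue_priority = ["synchronized", "unblocked"]   # higher-priority queue is drained first

def mark_ready(workgroup, synchronized):
    ready_queues["synchronized" if synchronized else "unblocked"].append(workgroup)

def schedule(available_slots):
    """When compute resources free up, launch ready workgroups in queue-priority order."""
    launched = []
    for name in queue_priority:
        q = ready_queues[name]
        while q and len(launched) < available_slots:
            launched.append(q.popleft())
    return launched

mark_ready("wg0", synchronized=False)
mark_ready("wg1", synchronized=True)
mark_ready("wg2", synchronized=True)
print(schedule(available_slots=2))   # ['wg1', 'wg2']: synchronized workgroups go first
```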
