METHOD OF USING A BUFFER WITHIN AN INDEXING ACCELERATOR DURING PERIODS OF INACTIVITY
    5.
    Invention Application
    METHOD OF USING A BUFFER WITHIN AN INDEXING ACCELERATOR DURING PERIODS OF INACTIVITY (In Force)

    Publication No.: US20140215160A1

    Publication Date: 2014-07-31

    Application No.: US13754758

    Filing Date: 2013-01-30

    Abstract: A method of using a buffer within an indexing accelerator during periods of inactivity, comprising flushing indexing specific data located in the buffer, disabling a controller within the indexing accelerator, handing control of the buffer over to a higher level cache, and selecting one of a number of operation modes of the buffer. An indexing accelerator, comprising a controller and a buffer communicatively coupled to the controller, in which, during periods of inactivity, the controller is disabled and a buffer operating mode among a number of operating modes is chosen under which the buffer will be used.

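As a rough sketch of the four steps this abstract claims (the class layout, mode names, and buffer contents are illustrative assumptions; the patent names neither the modes nor the data structures):

```python
from enum import Enum, auto

class BufferMode(Enum):
    # Hypothetical mode names; the abstract only says "a number of
    # operation modes" without listing them.
    VICTIM_CACHE = auto()
    PREFETCH_BUFFER = auto()
    SCRATCHPAD = auto()

class IndexingAccelerator:
    def __init__(self):
        self.controller_enabled = True
        self.buffer_owner = "accelerator"
        # Illustrative indexing-specific contents.
        self.buffer = {"indexing_data": ["hash_bucket_0", "node_ptr_3"]}
        self.buffer_mode = None

    def enter_inactive_period(self, mode: BufferMode) -> None:
        """Apply the four claimed steps, in order."""
        self.buffer["indexing_data"].clear()       # 1. flush indexing-specific data
        self.controller_enabled = False            # 2. disable the controller
        self.buffer_owner = "higher_level_cache"   # 3. hand the buffer to a higher-level cache
        self.buffer_mode = mode                    # 4. select one of the operation modes
```

The point of the handoff is that an otherwise-idle structure keeps doing useful work for the cache hierarchy instead of sitting powered but unused.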

    INDEXING ACCELERATOR WITH MEMORY-LEVEL PARALLELISM SUPPORT
    6.
    Invention Application
    INDEXING ACCELERATOR WITH MEMORY-LEVEL PARALLELISM SUPPORT (Under Examination, Published)

    Publication No.: US20160070701A1

    Publication Date: 2016-03-10

    Application No.: US14888237

    Filing Date: 2013-07-31

    CPC classification number: G06F16/2255 G06F12/0859 G06F12/0862 Y02D10/13

    Abstract: According to an example, an indexing accelerator with memory-level parallelism (MLP) support may include a request decoder to receive indexing requests. The request decoder may include a plurality of configuration registers. A controller may be communicatively coupled to the request decoder to support MLP by assigning an indexing request of the received indexing requests to a configuration register of the plurality of configuration registers. A buffer may be communicatively coupled to the controller to store data related to an indexing operation of the controller for responding to the indexing request.

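A minimal sketch of the assignment scheme the abstract describes, assuming one in-flight request per configuration register (the register count and class names are invented for illustration):

```python
class RequestDecoder:
    """Decoder holding a pool of configuration registers."""
    def __init__(self, num_registers: int = 4):
        # Each slot tracks at most one in-flight indexing request.
        self.registers = [None] * num_registers

class Controller:
    """Supports MLP by parking each indexing request in a free
    configuration register, so several lookups can overlap their
    memory accesses instead of running strictly one at a time."""
    def __init__(self, decoder: RequestDecoder):
        self.decoder = decoder

    def assign(self, request):
        for i, slot in enumerate(self.decoder.registers):
            if slot is None:
                self.decoder.registers[i] = request
                return i      # register index now tracking this request
        return None           # all registers busy: MLP limit reached

    def complete(self, reg_index: int):
        self.decoder.registers[reg_index] = None  # free the register
```

With two registers, a third request is refused until one of the first two completes, which is exactly the parallelism bound the register pool imposes.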

    Storing Data in Persistent Hybrid Memory
    7.
    Invention Application
    Storing Data in Persistent Hybrid Memory (In Force)

    Publication No.: US20150254014A1

    Publication Date: 2015-09-10

    Application No.: US14716473

    Filing Date: 2015-05-19

    Abstract: Storing data in persistent hybrid memory includes promoting a memory block from non-volatile memory to a cache based on a usage of said memory block according to a promotion policy, tracking modifications to the memory block while in the cache, and writing the memory block back into the non-volatile memory after the memory block is modified in the cache based on a writing policy that keeps a number of the memory blocks that are modified at or below a number threshold while maintaining the memory block in the cache.

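The promotion policy and the bounded-dirty writing policy from the abstract can be sketched as follows (the usage counter, the `promote_after` and `dirty_limit` parameters, and FIFO victim selection are all assumptions; the patent leaves the concrete policies open):

```python
class PersistentHybridMemory:
    def __init__(self, promote_after: int = 2, dirty_limit: int = 2):
        self.nvm = {}      # persistent backing store
        self.cache = {}    # fast cache holding promoted blocks
        self.dirty = []    # modified-in-cache blocks, oldest first
        self.hits = {}     # usage counter driving the promotion policy
        self.promote_after = promote_after
        self.dirty_limit = dirty_limit

    def read(self, block):
        if block in self.cache:
            return self.cache[block]
        self.hits[block] = self.hits.get(block, 0) + 1
        if self.hits[block] >= self.promote_after:   # promotion policy
            self.cache[block] = self.nvm[block]
        return self.nvm[block]

    def write(self, block, value):
        if block not in self.cache:                  # promote on write
            self.cache[block] = self.nvm.get(block)
        self.cache[block] = value
        if block not in self.dirty:
            self.dirty.append(block)                 # track the modification
        while len(self.dirty) > self.dirty_limit:    # writing policy
            victim = self.dirty.pop(0)
            self.nvm[victim] = self.cache[victim]    # write back to NVM...
            # ...while keeping the block resident in the cache
```

Note the key property: writeback cleans a block without evicting it, so the count of modified blocks stays at or below the threshold while hot blocks remain cached.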

    HYBRID SECURE NON-VOLATILE MAIN MEMORY
    8.
    Invention Application
    HYBRID SECURE NON-VOLATILE MAIN MEMORY (Under Examination, Published)

    Publication No.: US20160239685A1

    Publication Date: 2016-08-18

    Application No.: US14900665

    Filing Date: 2013-07-31

    Abstract: According to an example, a hybrid secure non-volatile main memory (HSNVMM) may include a non-volatile memory (NVM) to store a non-working set of memory data in an encrypted format, and a dynamic random-access memory (DRAM) buffer to store a working set of memory data in a decrypted format. A cryptographic engine may selectively encrypt and decrypt memory pages in the working and non-working sets of memory data. A security controller may control memory data placement and replacement in the NVM and the DRAM buffer based on memory data characteristics that include clean memory pages, dirty memory pages, working set memory pages, and non-working set memory pages. The security controller may further provide incremental encryption and decryption instructions to the cryptographic engine based on the memory data characteristics.

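The placement rule, pages encrypted at rest in NVM and decrypted while in the DRAM working set, can be sketched like this. The XOR routine is only a stand-in for the cryptographic engine, and the method names are invented; a real HSNVMM would use a proper cipher and hardware data paths:

```python
from typing import Dict

def xor_crypt(data: bytes, key: int = 0x5A) -> bytes:
    # Toy stand-in for the cryptographic engine (XOR is symmetric,
    # so the same call encrypts and decrypts). Not real cryptography.
    return bytes(b ^ key for b in data)

class SecurityController:
    """Moves pages between the encrypted NVM (non-working set) and
    the decrypted DRAM buffer (working set)."""
    def __init__(self):
        self.nvm: Dict[str, bytes] = {}   # non-working set, encrypted
        self.dram: Dict[str, bytes] = {}  # working set, decrypted

    def store_cold(self, page: str, plaintext: bytes) -> None:
        self.nvm[page] = xor_crypt(plaintext)       # encrypt into NVM

    def touch(self, page: str) -> bytes:
        # Page joins the working set: decrypt it into the DRAM buffer.
        if page not in self.dram:
            self.dram[page] = xor_crypt(self.nvm.pop(page))
        return self.dram[page]

    def evict(self, page: str) -> None:
        # Page leaves the working set: re-encrypt it back into NVM.
        self.nvm[page] = xor_crypt(self.dram.pop(page))
```

The incremental aspect in the abstract means only pages crossing the working/non-working boundary are (de)crypted, rather than bulk-encrypting all of main memory.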

    Storing data in persistent hybrid memory
    9.
    Invention Grant
    Storing data in persistent hybrid memory (In Force)

    Publication No.: US09348527B2

    Publication Date: 2016-05-24

    Application No.: US14716473

    Filing Date: 2015-05-19

    Abstract: Storing data in persistent hybrid memory includes promoting a memory block from non-volatile memory to a cache based on a usage of said memory block according to a promotion policy, tracking modifications to the memory block while in the cache, and writing the memory block back into the non-volatile memory after the memory block is modified in the cache based on a writing policy that keeps a number of the memory blocks that are modified at or below a number threshold while maintaining the memory block in the cache.


    Vertically-Tiered Client-Server Architecture
    10.
    Invention Application
    Vertically-Tiered Client-Server Architecture (Under Examination, Published)

    Publication No.: US20150350381A1

    Publication Date: 2015-12-03

    Application No.: US14759692

    Filing Date: 2013-01-15

    Abstract: Systems and methods of vertically aggregating tiered servers in a data center are disclosed. An example method includes partitioning a plurality of servers in the data center to form an array of aggregated end points (AEPs). Multiple servers within each AEP are connected by an intra-AEP network fabric and different AEPs are connected by an inter-AEP network. Each AEP has one or multiple central hub servers acting as end-points on the inter-AEP network. The method includes resolving a target server identification (ID). If the target server ID is the central hub server in the first AEP, the request is handled in the first AEP. If the target server ID is another server local to the first AEP, the request is redirected over the intra-AEP fabric. If the target server ID is a server in a second AEP, the request is transferred to the second AEP.

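The three-way routing decision in the abstract, handle at the local hub, redirect over the intra-AEP fabric, or transfer over the inter-AEP network, reduces to a simple dispatch. The class shape and return strings are illustrative only:

```python
class AEP:
    """One aggregated end point: a central hub server plus the local
    servers reachable over its intra-AEP fabric (names illustrative)."""
    def __init__(self, hub_id: str, local_ids) -> None:
        self.hub_id = hub_id
        self.local_ids = set(local_ids)

def route(request: str, target_id: str, local: AEP) -> str:
    """Resolve the target server ID and pick one of the three paths."""
    if target_id == local.hub_id:
        return f"handled locally by hub {local.hub_id}"      # case 1
    if target_id in local.local_ids:
        return f"redirected over intra-AEP fabric to {target_id}"  # case 2
    return f"transferred over inter-AEP network toward {target_id}"  # case 3
```

Because only hub servers are end points on the inter-AEP network, case 3 always goes hub-to-hub first; the receiving AEP then repeats the same resolution locally.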
