MULTI-PROTOCOL HEADER GENERATION SYSTEM

    Publication No.: US20170085472A1

    Publication Date: 2017-03-23

    Application No.: US14859844

    Application Date: 2015-09-21

    CPC classification number: H04L45/52 H04L45/04 H04L49/9057 H04L69/08

    Abstract: A communication device includes a data source that generates data for transmission over a bus, and a data encoder that receives and encodes outgoing data. An encoder system receives outgoing data from the data source and stores the outgoing data in a first queue. An encoder encodes the outgoing data with a header type that is based upon a header type indication from a controller and stores the encoded data, which may be a packet or a data word with at least one layered header, in a second queue for transmission. The device is configured to receive, at a payload extractor, a packet protocol change command from the controller, to remove the encoded data, to re-encode the data to create a re-encoded data packet, and to place the re-encoded data packet in the second queue for transmission.
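
    As a rough illustration of the two-queue encode/re-encode flow described in this abstract, the Python sketch below models the first and second queues with deques; the header byte patterns, the protocol names, and the payload-extraction step are invented placeholders, not the patented encoder design.

        from collections import deque

        # Illustrative header bytes per protocol; purely hypothetical values.
        HEADERS = {"protocol_a": b"\xAA\xBB", "protocol_b": b"\xCC\xDD"}

        class EncoderSystem:
            def __init__(self, header_type="protocol_a"):
                self.first_queue = deque()    # raw outgoing data from the data source
                self.second_queue = deque()   # encoded data awaiting transmission
                self.header_type = header_type

            def enqueue(self, payload: bytes):
                self.first_queue.append(payload)

            def encode_pending(self):
                # Encode queued payloads with the header type indicated by the controller.
                while self.first_queue:
                    payload = self.first_queue.popleft()
                    self.second_queue.append(HEADERS[self.header_type] + payload)

            def protocol_change(self, new_header_type: str):
                # Payload extractor: strip the old header, re-encode with the new one,
                # and place the re-encoded packets back in the transmit queue.
                old_len = len(HEADERS[self.header_type])
                self.header_type = new_header_type
                self.second_queue = deque(HEADERS[new_header_type] + pkt[old_len:]
                                          for pkt in self.second_queue)

        enc = EncoderSystem()
        enc.enqueue(b"payload-1")
        enc.encode_pending()
        enc.protocol_change("protocol_b")
        print(enc.second_queue)   # deque([b'\xcc\xddpayload-1'])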

    42. HOT PAGE SELECTION IN MULTI-LEVEL MEMORY HIERARCHIES
    Invention Application; Status: Pending (Published)

    Publication No.: US20160378655A1

    Publication Date: 2016-12-29

    Application No.: US14752408

    Application Date: 2015-06-26

    CPC classification number: G06F12/0811 G06F12/0897 G06F12/1027 Y02D10/13

    Abstract: Systems, apparatuses, and methods for sorting memory pages in a multi-level heterogeneous memory architecture. The system may classify pages into a first “hot” category or a second “cold” category. The system may attempt to place the “hot” pages into the memory level(s) closest to the system's processor cores. The system may track parameters associated with each page, with the parameters including number of accesses, types of accesses, power consumed per access, temperature, wearability, and/or other parameters. Based on these parameters, the system may generate a score for each page. Then, the system may compare the score of each page to a threshold. If the score of a given page is greater than the threshold, the given page may be designated as “hot”. If the score of the given page is less than the threshold, the given page may be designated as “cold”.
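
    A minimal sketch of the score-and-threshold classification described above, assuming a simple weighted sum over per-page counters; the parameter names, weights, and threshold value are invented for illustration and are not the patented scoring function.

        # Hypothetical per-page parameters and weights.
        WEIGHTS = {"accesses": 1.0, "writes": 2.0, "energy_per_access": -0.5, "wear": -1.0}
        HOT_THRESHOLD = 50.0

        def score(page_stats: dict) -> float:
            return sum(WEIGHTS[k] * page_stats.get(k, 0.0) for k in WEIGHTS)

        def classify(pages: dict) -> dict:
            """Label each page 'hot' or 'cold' by comparing its score to the threshold."""
            return {pid: ("hot" if score(stats) > HOT_THRESHOLD else "cold")
                    for pid, stats in pages.items()}

        pages = {
            0x1000: {"accesses": 120, "writes": 10, "energy_per_access": 4, "wear": 2},
            0x2000: {"accesses": 3, "writes": 0, "energy_per_access": 4, "wear": 0},
        }
        print(classify(pages))   # {4096: 'hot', 8192: 'cold'}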


    43. Batching modified blocks to the same DRAM page
    Invention Grant; Status: In Force

    Publication No.: US09529718B2

    Publication Date: 2016-12-27

    Application No.: US14569175

    Application Date: 2014-12-12

    Abstract: To efficiently transfer data from a cache to a memory, it is desirable that more data corresponding to the same page in the memory be loaded in a line buffer. Writing data to a memory page that is not currently loaded in a row buffer requires closing an old page and opening a new page. Both operations consume energy and clock cycles and potentially delay more critical memory read requests. Hence it is desirable to have more than one write going to the same DRAM page to amortize the cost of opening and closing DRAM pages. A desirable approach is to batch write-backs to the same DRAM page by retaining modified blocks in the cache until a sufficient number of modified blocks belonging to the same memory page are ready for write-back.
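
    A toy sketch of the batching idea, assuming dirty blocks are grouped by DRAM page and written back together once enough of them accumulate; the page size and batch threshold are assumptions, not values from the patent.

        from collections import defaultdict

        PAGE_SIZE = 4096      # assumed DRAM page (row) size
        BATCH_THRESHOLD = 4   # assumed number of dirty blocks before a batched write-back

        class WriteBackBatcher:
            def __init__(self):
                self.dirty_by_page = defaultdict(dict)   # page id -> {block address: data}

            def mark_dirty(self, addr: int, data: bytes):
                page = addr // PAGE_SIZE
                self.dirty_by_page[page][addr] = data
                if len(self.dirty_by_page[page]) >= BATCH_THRESHOLD:
                    self.write_back(page)

            def write_back(self, page: int):
                # One page open/close amortized over all dirty blocks of this page.
                blocks = self.dirty_by_page.pop(page)
                print(f"open DRAM page {page:#x}, write {len(blocks)} blocks, close page")

        batcher = WriteBackBatcher()
        for offset in (0, 64, 128, 192):   # four dirty blocks that map to the same page
            batcher.mark_dirty(0x10000 + offset, b"\x00" * 64)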


    44. MEMORY MODULE WITH VOLATILE AND NON-VOLATILE STORAGE ARRAYS
    Invention Application; Status: Pending (Published)

    Publication No.: US20160246715A1

    Publication Date: 2016-08-25

    Application No.: US14628699

    Application Date: 2015-02-23

    Abstract: A memory module is responsive to control signaling for a random access memory (RAM) module, and performs translation of received memory addresses so that it can map a relatively small address space of an operating system to a larger physical address space of its storage arrays. The memory module can therefore be employed in systems requiring a large amount of memory, such as systems using many processors, without requiring specialized operating systems for addressing the larger physical address space.
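
    A simplified sketch of the address translation the module performs, assuming a page-granular table that maps the operating system's small address window onto a larger internal space; the page size, allocation policy, and class names are illustrative assumptions.

        PAGE = 4096   # assumed translation granularity

        class TranslatingModule:
            """Maps a small OS-visible address space onto a larger internal storage array."""
            def __init__(self, internal_pages: int):
                self.table = {}                        # OS page -> internal page
                self.free = list(range(internal_pages))
                self.storage = {}                      # internal page -> bytearray

            def translate(self, os_addr: int) -> int:
                os_page, offset = divmod(os_addr, PAGE)
                if os_page not in self.table:          # allocate an internal page on first touch
                    self.table[os_page] = self.free.pop(0)
                return self.table[os_page] * PAGE + offset

            def write(self, os_addr: int, value: int):
                phys = self.translate(os_addr)
                page = self.storage.setdefault(phys // PAGE, bytearray(PAGE))
                page[phys % PAGE] = value

        mod = TranslatingModule(internal_pages=1 << 16)   # far larger than the OS window
        mod.write(0x1234, 0x5A)
        print(hex(mod.translate(0x1234)))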


    45. NVRAM-AWARE DATA PROCESSING SYSTEM
    Invention Application; Status: Pending (Published)

    Publication No.: US20160188456A1

    Publication Date: 2016-06-30

    Application No.: US14587325

    Application Date: 2014-12-31

    Abstract: In one form, a computer system includes a central processing unit, a memory controller coupled to the central processing unit and capable of accessing non-volatile random access memory (NVRAM), and an NVRAM-aware operating system. The NVRAM-aware operating system causes the central processing unit to selectively execute selected ones of a plurality of application programs, and is responsive to a predetermined operation to cause the central processing unit to execute a memory persistence procedure using the memory controller to access the NVRAM.
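
    A loose sketch of the idea that a predetermined operation triggers a memory persistence procedure through the memory controller; the commit operation, the volatile write buffer, and the dictionary-backed NVRAM are invented stand-ins, not the patented system.

        class MemoryController:
            """Toy controller: a volatile write buffer in front of byte-addressable NVRAM."""
            def __init__(self):
                self.write_buffer = {}   # volatile; lost on power failure
                self.nvram = {}          # durable storage

            def store(self, addr: int, value: int):
                self.write_buffer[addr] = value

            def persist(self):
                # Memory persistence procedure: drain volatile buffers into NVRAM.
                self.nvram.update(self.write_buffer)
                self.write_buffer.clear()

        def nvram_aware_os_handler(ctrl: MemoryController, operation: str):
            # The OS responds to a predetermined operation (here, a commit request)
            # by having the memory controller run the persistence procedure.
            if operation == "commit":
                ctrl.persist()

        ctrl = MemoryController()
        ctrl.store(0x100, 42)
        nvram_aware_os_handler(ctrl, "commit")
        print(ctrl.nvram)   # {256: 42}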


    46. Conditional prefetching
    Invention Grant; Status: In Force

    Publication No.: US09367466B2

    Publication Date: 2016-06-14

    Application No.: US13765813

    Application Date: 2013-02-13

    CPC classification number: G06F12/0862 G06F2212/6026

    Abstract: A type of conditional probability fetcher prefetches data, such as for a cache, from a second memory into a first memory by maintaining information relating to memory elements in a group of memory elements fetched from the second memory. The information may be an aggregate number of memory elements that have been fetched for different memory segments in the group. The information is maintained responsive to fetching one or more memory elements from a segment of memory elements in the group of memory elements. Prefetching one or more remaining memory elements in a particular segment of memory elements from the second memory into the first memory occurs when the information relating to the memory elements in the group of memory elements indicates that a prefetching condition has been satisfied.
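
    A small sketch of the described bookkeeping, assuming cache-line-sized memory elements, fixed-size groups and segments, and a simple count threshold as the prefetching condition; these sizes and the threshold are assumptions for illustration.

        GROUP_LINES = 16     # assumed memory elements per group
        SEGMENT_LINES = 4    # assumed elements per segment within a group
        TRIGGER = 3          # assumed prefetch condition: elements fetched from the group

        fetched_per_group = {}   # group id -> aggregate number of elements fetched

        def on_demand_fetch(line: int, prefetch):
            group = line // GROUP_LINES
            fetched_per_group[group] = fetched_per_group.get(group, 0) + 1
            if fetched_per_group[group] == TRIGGER:
                # Condition satisfied: prefetch the remaining elements of this segment.
                seg_start = (line // SEGMENT_LINES) * SEGMENT_LINES
                for l in range(seg_start, seg_start + SEGMENT_LINES):
                    if l != line:
                        prefetch(l)

        on_demand_fetch(32, prefetch=lambda l: None)
        on_demand_fetch(33, prefetch=lambda l: None)
        on_demand_fetch(38, prefetch=lambda l: print("prefetch element", l))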


    47. SELECTING A RESOURCE FROM A SET OF RESOURCES FOR PERFORMING AN OPERATION
    Invention Application; Status: In Force

    Publication No.: US20160062803A1

    Publication Date: 2016-03-03

    Application No.: US14935056

    Application Date: 2015-11-06

    CPC classification number: G06F9/5016 G06F9/5011 G06F12/0875 G06F2212/45

    Abstract: The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism performs a lookup in a table selected from a set of tables to identify a resource from the set of resources. When that resource is not available for performing the operation, the selection mechanism identifies a next resource in the table and, when the next resource is available for performing the operation, selects it for performing the operation; this continues until a resource is selected for performing the operation.
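
    A minimal sketch of table-based selection with fall-through to the next available entry; the resources, availability flags, and the way a table is chosen are invented for illustration.

        import itertools

        # Hypothetical resources and their availability flags.
        resources = {"r0": False, "r1": False, "r2": True, "r3": True}
        # A set of lookup tables; one is selected per operation.
        tables = [["r0", "r2", "r1", "r3"], ["r3", "r1", "r0", "r2"]]

        def select_resource(op_id: int) -> str:
            table = tables[op_id % len(tables)]   # select a table from the set of tables
            start = op_id % len(table)            # initial lookup position in that table
            for name in itertools.islice(itertools.cycle(table), start, start + len(table)):
                if resources[name]:               # move to the next entry until one is available
                    return name
            raise RuntimeError("no resource in the table is available")

        print(select_resource(0))   # 'r2': r0 was busy, the next table entry is free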


    48. Method and system for asymmetrical processing with managed data affinity
    Invention Grant; Status: In Force

    Publication No.: US09244629B2

    Publication Date: 2016-01-26

    Application No.: US13926765

    Application Date: 2013-06-25

    Abstract: Methods, systems, and computer-readable storage media for more efficient and flexible scheduling of tasks on an asymmetric processing system having at least one host processor and one or more slave processors are disclosed. An example embodiment includes determining a data access requirement of a task, comparing the data access requirement to the respective local memories of the one or more slave processors, selecting a slave processor from the one or more slave processors based upon the comparing, and running the task on the selected slave processor.
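
    A compact sketch of affinity-based selection: compare a task's data footprint against each slave processor's local memory and pick the slave that already holds the most of the task's data; the data structures and the tie-breaking rule are assumptions, not the claimed method.

        from dataclasses import dataclass, field

        @dataclass
        class Slave:
            name: str
            local_mem_bytes: int
            resident_buffers: set = field(default_factory=set)

        def pick_slave(task_buffers: set, task_bytes: int, slaves: list) -> Slave:
            # Keep only slaves whose local memory can hold the task's footprint,
            # then prefer the one with the most task data already resident.
            candidates = [s for s in slaves if s.local_mem_bytes >= task_bytes]
            return max(candidates, key=lambda s: len(task_buffers & s.resident_buffers))

        slaves = [Slave("slave0", 8 << 30, {"A", "B"}), Slave("slave1", 8 << 30, {"C"})]
        chosen = pick_slave({"A", "B"}, task_bytes=1 << 30, slaves=slaves)
        print(chosen.name)   # slave0: both input buffers are already in its local memory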


    49. Write endurance management techniques in the logic layer of a stacked memory
    Invention Grant; Status: In Force

    Publication No.: US09235528B2

    Publication Date: 2016-01-12

    Application No.: US13725305

    Application Date: 2012-12-21

    CPC classification number: G06F12/10 G06F11/1666 G06F11/2094

    Abstract: A system, method, and memory device embodying some aspects of the present invention for remapping external memory addresses and internal memory locations in stacked memory are provided. The stacked memory includes one or more memory layers configured to store data. The stacked memory also includes a logic layer connected to the memory layer. The logic layer has an Input/Output (I/O) port configured to receive read and write commands from external devices, a memory map configured to maintain an association between external memory addresses and internal memory locations, and a controller coupled to the I/O port, memory map, and memory layers, configured to store data received from external devices to internal memory locations.
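
    A toy sketch of the logic layer's memory map, with a simple write-count-based remap standing in for the endurance management policy; the wear limit, allocation scheme, and method names are assumptions for illustration.

        WEAR_LIMIT = 100_000   # assumed per-location write budget before remapping

        class LogicLayer:
            def __init__(self, num_locations: int):
                self.memory_map = {}                      # external address -> internal location
                self.write_counts = [0] * num_locations
                self.free = list(range(num_locations))
                self.cells = {}                           # internal location -> stored data

            def _location_for(self, ext_addr: int) -> int:
                if ext_addr not in self.memory_map:
                    self.memory_map[ext_addr] = self.free.pop(0)
                return self.memory_map[ext_addr]

            def write(self, ext_addr: int, value: bytes):
                loc = self._location_for(ext_addr)
                if self.write_counts[loc] >= WEAR_LIMIT and self.free:
                    # Remap the external address to a fresher internal location.
                    loc = self.memory_map[ext_addr] = self.free.pop(0)
                self.write_counts[loc] += 1
                self.cells[loc] = value

            def read(self, ext_addr: int) -> bytes:
                return self.cells[self.memory_map[ext_addr]]

        layer = LogicLayer(num_locations=1024)
        layer.write(0xDEADBEEF, b"hi")
        print(layer.read(0xDEADBEEF))   # b'hi'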


    50. Dirty cacheline duplication
    Invention Grant; Status: In Force

    Publication No.: US09229803B2

    Publication Date: 2016-01-05

    Application No.: US13720536

    Application Date: 2012-12-19

    CPC classification number: G06F11/1064 G06F12/0893

    Abstract: A method of managing memory includes installing a first cacheline at a first location in a cache memory and receiving a write request. In response to the write request, the first cacheline is modified in accordance with the write request and marked as dirty. Also in response to the write request, a second cacheline is installed that duplicates the first cacheline, as modified in accordance with the write request, at a second location in the cache memory.
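
    A toy sketch of the write path described above: the write modifies the primary line, marks it dirty, and installs a duplicate at a second location in the same cache; how the second location is chosen here is purely an assumption.

        class DuplicatingCache:
            """Toy cache in which a write also installs a duplicate of the dirty line."""
            def __init__(self, num_locations: int):
                self.num_locations = num_locations
                self.lines = {}   # location index -> (tag, data, dirty)

            def write(self, tag: int, data: bytes):
                first = tag % self.num_locations                                 # first location
                second = (first + self.num_locations // 2) % self.num_locations  # assumed second location
                self.lines[first] = (tag, data, True)    # modified and marked dirty
                self.lines[second] = (tag, data, True)   # duplicate of the modified line

            def recover(self, tag: int, bad_location: int) -> bytes:
                # If one copy is lost, the duplicate still holds the dirty data.
                for loc, (t, data, dirty) in self.lines.items():
                    if t == tag and loc != bad_location:
                        return data
                raise KeyError(tag)

        cache = DuplicatingCache(num_locations=8)
        cache.write(tag=0x42, data=b"dirty-data")
        print(cache.recover(tag=0x42, bad_location=0x42 % 8))   # b'dirty-data'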

