NVRAM-AWARE DATA PROCESSING SYSTEM
    11.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20160188456A1

    Publication Date: 2016-06-30

    Application No.: US14587325

    Filing Date: 2014-12-31

    Abstract: In one form, a computer system includes a central processing unit, a memory controller coupled to the central processing unit and capable of accessing non-volatile random access memory (NVRAM), and an NVRAM-aware operating system. The NVRAM-aware operating system causes the central processing unit to selectively execute selected ones of a plurality of application programs, and is responsive to a predetermined operation to cause the central processing unit to execute a memory persistence procedure using the memory controller to access the NVRAM.

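The abstract does not spell out the memory persistence procedure. A minimal sketch of the idea, using a memory-mapped file as a stand-in for an NVRAM-backed address range (all names and the flush-based mechanism are illustrative assumptions, not the patent's method):

```python
import mmap
import os
import tempfile

# Hypothetical sketch: a "memory persistence procedure" modeled with a
# memory-mapped file, where flush() plays the role of writing dirty
# cached state back to durable NVRAM.
def persist_region(buf: mmap.mmap) -> None:
    buf.flush()  # write back any dirty cached data for the mapped range

# Usage: a file-backed mapping stands in for an NVRAM-backed range.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(4096)
    path = f.name

fd = os.open(path, os.O_RDWR)
region = mmap.mmap(fd, 4096)
region[:5] = b"state"
persist_region(region)          # contents are durable after this point
data = bytes(region[:5])
region.close()
os.close(fd)
os.remove(path)
```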

SELECTING A RESOURCE FROM A SET OF RESOURCES FOR PERFORMING AN OPERATION
    12.
    Invention Application
    Status: In Force

    Publication No.: US20160062803A1

    Publication Date: 2016-03-03

    Application No.: US14935056

    Filing Date: 2015-11-06

    CPC classification number: G06F9/5016 G06F9/5011 G06F12/0875 G06F2212/45

    Abstract: The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism performs a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the identified resource is not available for performing the operation, the selection mechanism identifies the next resource in the table and, if that resource is available, selects it for performing the operation, continuing through the table until a resource is selected.


Selecting a resource from a set of resources for performing an operation
    13.
    Invention Grant
    Status: In Force

    Publication No.: US09183055B2

    Publication Date: 2015-11-10

    Application No.: US13761985

    Filing Date: 2013-02-07

    CPC classification number: G06F9/5016 G06F9/5011 G06F12/0875 G06F2212/45

    Abstract: The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism is configured to perform a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the identified resource is not available for performing the operation and until a resource is selected for performing the operation, the selection mechanism is configured to identify a next resource in the table and select the next resource for performing the operation when the next resource is available for performing the operation.

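The table-walk described in this pair of filings (the application above and this grant) can be sketched as a short fallback loop. This is an illustrative reading of the abstract, not the claimed implementation; all names are invented:

```python
# Hypothetical sketch of the selection mechanism: look up a preferred
# resource in a table and, if it is busy, walk the table until an
# available resource is found.
def select_resource(table, start_index, is_available):
    n = len(table)
    for step in range(n):
        candidate = table[(start_index + step) % n]
        if is_available(candidate):
            return candidate
    return None  # no resource in the table is currently available

# Usage: resource 2 is preferred but busy, so the next available one wins.
busy = {2}
resources = [0, 1, 2, 3]
chosen = select_resource(resources, 2, lambda r: r not in busy)
```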

MECHANISMS TO BOUND THE PRESENCE OF CACHE BLOCKS WITH SPECIFIC PROPERTIES IN CACHES
    14.
    Invention Application
    Status: In Force

    Publication No.: US20140181414A1

    Publication Date: 2014-06-26

    Application No.: US14055869

    Filing Date: 2013-10-16

    Abstract: A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, wherein a first bank is powered down. In response to a write request to a second bank for data indicated to be stored in the powered-down first bank, the cache controller determines a respective bypass condition for the data. If the bypass condition exceeds a threshold, then the cache controller invalidates any copy of the data stored in the second bank. If the bypass condition does not exceed the threshold, then the cache controller stores the data with a clean state in the second bank. In both cases, the cache controller writes the data to a lower-level memory.

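The bypass-or-keep decision in the abstract can be sketched as follows. The threshold test and the dictionary-based bank model are illustrative assumptions; the patent does not define the bypass condition concretely:

```python
# Hypothetical sketch: when data targeted at a powered-down bank arrives
# at another bank, either bypass the cache (invalidate any copy) or keep
# a clean copy, writing through to lower-level memory in both cases.
def handle_write(second_bank, lower_memory, addr, data, bypass_score, threshold):
    if bypass_score > threshold:
        second_bank.pop(addr, None)          # invalidate any cached copy
    else:
        second_bank[addr] = ("clean", data)  # store with a clean state
    lower_memory[addr] = data                # write-through in both cases

# Usage: a high bypass score evicts the block from the second bank.
bank, mem = {0x10: ("clean", b"old")}, {}
handle_write(bank, mem, 0x10, b"new", bypass_score=5, threshold=3)
```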

    System and method for parallelization of data processing in a processor

    Publication No.: US10558466B2

    Publication Date: 2020-02-11

    Application No.: US15191257

    Filing Date: 2016-06-23

    Abstract: Systems, apparatuses, and methods for adjusting group sizes to match a processor lane width are described. In early iterations of an algorithm, a processor partitions a dataset into groups of data points which are integer multiples of the processing lane width of the processor. For example, when performing a K-means clustering algorithm, the processor determines that a first plurality of data points belong to a first group during a given iteration. If the first plurality of data points is not an integer multiple of the number of processing lanes, then the processor reassigns a first number of data points from the first plurality of data points to one or more other groups. The processor then performs the next iteration with that first number of data points assigned to the other groups, even though those data points actually meet the algorithmic criteria for belonging to the first group.
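The lane-width alignment step can be sketched as a single pass that spills each group's remainder into a neighboring group. This is a simplified illustration under invented names, not the patented assignment policy:

```python
# Hypothetical sketch: if a group's point count is not a multiple of the
# processing-lane width, move the remainder to the next group for this
# iteration. A single pass; a real implementation would pick spill
# targets more carefully.
def align_to_lanes(groups, lane_width):
    aligned = {k: list(v) for k, v in groups.items()}
    keys = list(aligned)
    for i, k in enumerate(keys):
        extra = len(aligned[k]) % lane_width
        if extra and len(keys) > 1:
            spill = [aligned[k].pop() for _ in range(extra)]
            aligned[keys[(i + 1) % len(keys)]].extend(spill)
    return aligned

# Usage: with 4-wide lanes, 10 + 6 points become 8 + 8.
groups = {"a": list(range(10)), "b": list(range(6))}
out = align_to_lanes(groups, lane_width=4)
```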

    Enhanced resolution video and security via machine learning

    Publication No.: US10271008B2

    Publication Date: 2019-04-23

    Application No.: US15485071

    Filing Date: 2017-04-11

    Abstract: Systems, apparatuses, and methods for enhanced resolution video and security via machine learning are disclosed. A transmitter reduces the resolution of each image of a video stream from a first, higher image resolution to a second, lower image resolution. The transmitter generates a first set of parameters for programming a neural network to reconstruct a version of each image at the first image resolution. The transmitter then sends the images at the second image resolution to the receiver, along with the first set of parameters. The receiver programs a neural network with the first set of parameters and uses the neural network to reconstruct versions of the images at the first image resolution. The transmitter can send the first set of parameters to the receiver via a secure channel, ensuring that only the receiver can decode the images from the second image resolution to the first image resolution.
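The transmit/receive data flow can be sketched structurally. A real system would use a trained neural network; here a trivial parameterized upsampler stands in for it so only the flow is shown, and every name is invented:

```python
# Hypothetical sketch of the flow: the transmitter downscales each frame
# and sends it with reconstruction parameters; the receiver uses those
# parameters to rebuild a full-resolution version. The row-duplication
# "reconstruct" is a stub for the patent's neural network.
def downscale(frame, factor):
    return frame[::factor]            # drop rows: lower "resolution"

def reconstruct(frame_lo, params):
    out = []
    for row in frame_lo:
        out.extend([row] * params["scale"])   # stub for NN upsampling
    return out

# Usage: a 4-row frame is sent at 2 rows plus parameters, then rebuilt.
frame_hi = [[1, 1], [2, 2], [3, 3], [4, 4]]
params = {"scale": 2}                 # would travel over the secure channel
frame_lo = downscale(frame_hi, params["scale"])
frame_rx = reconstruct(frame_lo, params)
```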

    PROGRAMMING IN-MEMORY ACCELERATORS TO IMPROVE THE EFFICIENCY OF DATACENTER OPERATIONS

    Publication No.: US20180081583A1

    Publication Date: 2018-03-22

    Application No.: US15269495

    Filing Date: 2016-09-19

    CPC classification number: G06F12/00 G06F9/30

    Abstract: Systems, apparatuses, and methods for utilizing in-memory accelerators to perform data conversion operations are disclosed. A system includes one or more main processors coupled to one or more memory modules. Each memory module includes one or more memory devices coupled to a processing in memory (PIM) device. The main processors are configured to generate an executable for a PIM device to accelerate data conversion tasks of data stored in the local memory devices. In one embodiment, the system detects a read request for data stored in a given memory module. In order to process the read request, the system determines that a conversion from a first format to a second format is required. In response to detecting the read request, the given memory module's PIM device performs the conversion of the data from the first format to the second format and then provides the data to a consumer application.
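The read-path conversion in the abstract can be sketched as a hook that transforms data from its stored format before it reaches the consumer. The formats and names here are illustrative, not from the filing:

```python
# Hypothetical sketch: a memory-module-resident converter (standing in for
# the PIM device) transforms data from its stored format to the format a
# consumer application expects, as part of servicing a read request.
def pim_read(store, key, stored_fmt, wanted_fmt):
    raw = store[key]
    if stored_fmt == wanted_fmt:
        return raw
    if (stored_fmt, wanted_fmt) == ("csv", "list"):
        return raw.split(",")         # conversion performed "in memory"
    raise ValueError("unsupported conversion")

# Usage: data stored as CSV text is delivered as a list of fields.
module = {"row0": "1,2,3"}
values = pim_read(module, "row0", "csv", "list")
```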

Workload partitioning among heterogeneous processing nodes
    19.
    Invention Grant
    Status: In Force

    Publication No.: US09479449B2

    Publication Date: 2016-10-25

    Application No.: US13908887

    Filing Date: 2013-06-03

    CPC classification number: H04L47/70 G06F9/5044 Y02D10/22

    Abstract: A method of computing is performed in a first processing node of a plurality of processing nodes of multiple types with distinct processing capabilities. The method includes, in response to a command, partitioning data associated with the command among the plurality of processing nodes. The data is partitioned based at least in part on the distinct processing capabilities of the multiple types of processing nodes.

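Partitioning "based at least in part on the distinct processing capabilities" can be sketched as a capability-proportional split. The weighting scheme and names are assumptions for illustration:

```python
# Hypothetical sketch: split a dataset among processing nodes in
# proportion to a per-node capability weight, giving any integer
# remainder to the last node.
def partition(data, capabilities):
    total = sum(capabilities)
    sizes = [len(data) * c // total for c in capabilities]
    sizes[-1] += len(data) - sum(sizes)   # absorb rounding remainder
    parts, start = [], 0
    for s in sizes:
        parts.append(data[start:start + s])
        start += s
    return parts

# Usage: nodes with weights 1:2:3 receive 2, 4, and 6 of 12 items.
chunks = partition(list(range(12)), capabilities=[1, 2, 3])
```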

Thread assignment for power and performance efficiency using multiple power states
    20.
    Invention Grant
    Status: In Force

    Publication No.: US09170854B2

    Publication Date: 2015-10-27

    Application No.: US13909789

    Filing Date: 2013-06-04

    Abstract: A method is performed in a computing system that includes a plurality of processing nodes of multiple types configurable to run in multiple performance states. In the method, an application executes on a thread assigned to a first processing node. Power and performance of the application on the first processing node is estimated. Power and performance of the application in multiple performance states on other processing nodes of the plurality of processing nodes besides the first processing node is also estimated. It is determined that the estimated power and performance of the application on a second processing node in a respective performance state of the multiple performance states is preferable to the power and performance of the application on the first processing node. The thread is reassigned to the second processing node, with the second processing node in the respective performance state.

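The reassignment decision can be sketched as a search over (node, performance state) pairs. The patent does not define the preference test; an energy-style power-times-runtime score is used here purely as an illustrative stand-in, and all names are invented:

```python
# Hypothetical sketch: estimate power and runtime for each
# (node, performance-state) pair and reassign the thread to the pair
# with the best score. Lower power * runtime (energy) wins here.
def best_assignment(estimates):
    # estimates: {(node, state): (power_watts, runtime_s)}
    return min(estimates, key=lambda k: estimates[k][0] * estimates[k][1])

# Usage: node1 in state P1 uses the least energy, so the thread moves there.
estimates = {
    ("node0", "P0"): (10.0, 2.0),   # 20.0 J
    ("node1", "P1"): (4.0, 3.0),    # 12.0 J
    ("node1", "P0"): (7.0, 2.5),    # 17.5 J
}
node, state = best_assignment(estimates)
```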
