Data Streaming Unit and Method for Operating the Data Streaming Unit

    Publication Number: US20170163698A1

    Publication Date: 2017-06-08

    Application Number: US14958773

    Filing Date: 2015-12-03

    Abstract: A data streaming unit (DSU) and a method for operating a DSU are disclosed. In an embodiment the DSU includes a memory interface configured to be connected to a storage unit, a compute engine interface configured to be connected to a compute engine (CE) and an address generator configured to manage address data representing address locations in the storage unit. The data streaming unit further includes a data organization unit configured to access data in the storage unit and to reorganize the data to be forwarded to the compute engine, wherein the memory interface is communicatively connected to the address generator and the data organization unit, wherein the address generator is communicatively connected to the data organization unit, and wherein the data organization unit is communicatively connected to the compute engine interface.
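
    The abstract describes a block-level architecture rather than an algorithm, but its connectivity can be illustrated. Below is a minimal C++ sketch, assuming the storage unit is a flat array, the address generator produces a simple strided address sequence, and the data organization unit's "reorganization" is a gather; all class names, signatures, and the stride-based access pattern are hypothetical illustrations, not taken from the patent.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical model of the DSU blocks named in the abstract.
// The storage unit is modeled as a flat vector of words (an assumption).
using StorageUnit = std::vector<uint32_t>;

// Memory interface: the DSU's only path to the storage unit.
class MemoryInterface {
public:
    explicit MemoryInterface(const StorageUnit& storage) : storage_(storage) {}
    uint32_t Read(size_t address) const { return storage_.at(address); }
private:
    const StorageUnit& storage_;
};

// Address generator: manages address data representing locations in storage.
// Here it produces a strided sequence (an illustrative assumption).
class AddressGenerator {
public:
    AddressGenerator(size_t base, size_t stride, size_t count)
        : base_(base), stride_(stride), count_(count) {}
    std::vector<size_t> Generate() const {
        std::vector<size_t> addresses;
        for (size_t i = 0; i < count_; ++i) addresses.push_back(base_ + i * stride_);
        return addresses;
    }
private:
    size_t base_, stride_, count_;
};

// Compute engine interface: the point where reorganized data leaves the DSU.
class ComputeEngineInterface {
public:
    void Forward(const std::vector<uint32_t>& block) const {
        std::cout << "CE receives " << block.size() << " words:";
        for (uint32_t w : block) std::cout << ' ' << w;
        std::cout << '\n';
    }
};

// Data organization unit: accesses data through the memory interface,
// reorganizes it, and forwards it to the compute engine interface.
class DataOrganizationUnit {
public:
    DataOrganizationUnit(const MemoryInterface& mem, const ComputeEngineInterface& ce)
        : mem_(mem), ce_(ce) {}
    void Stream(const AddressGenerator& agen) const {
        std::vector<uint32_t> gathered;
        for (size_t addr : agen.Generate()) gathered.push_back(mem_.Read(addr));
        ce_.Forward(gathered);  // reorganized (gathered) data leaves via the CE interface
    }
private:
    const MemoryInterface& mem_;
    const ComputeEngineInterface& ce_;
};

int main() {
    StorageUnit storage(32);
    for (size_t i = 0; i < storage.size(); ++i) storage[i] = static_cast<uint32_t>(i * 10);

    MemoryInterface mem(storage);
    ComputeEngineInterface ce;
    DataOrganizationUnit dou(mem, ce);

    // Gather every fourth word starting at address 2 and stream it to the CE.
    dou.Stream(AddressGenerator(/*base=*/2, /*stride=*/4, /*count=*/6));
    return 0;
}
```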

    4. Advance cache allocator (Granted Patent)

    Publication Number: US10042773B2

    Publication Date: 2018-08-07

    Application Number: US14811436

    Filing Date: 2015-07-28

    Abstract: Systems and techniques for advance cache allocation are described. A described technique includes selecting a job from a plurality of jobs; selecting a processor core from a plurality of processor cores to execute the selected job; receiving a message which describes future memory accesses that will be generated by the selected job; generating a memory burst request based on the message; performing the memory burst request to load data from a memory to at least a dedicated portion of a cache, the cache corresponding to the selected processor core; and starting the selected job on the selected processor core. The technique can include performing an action indicated by a send message to write one or more values from another dedicated portion of the cache to the memory.
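
    The claimed technique reads as an ordered scheduling sequence, which the C++ sketch below walks through step by step. It assumes the "message" describing future memory accesses is a plain list of address ranges attached to the job, the "dedicated portion of a cache" is a per-core map, and the memory burst request is a bulk copy; every type, field, and function name is a hypothetical stand-in, not the patent's implementation.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <utility>
#include <vector>

// Hypothetical types modeling the entities named in the abstract.
struct Job {
    int id;
    std::vector<std::pair<size_t, size_t>> future_accesses;  // (start address, length)
};

struct Core {
    int id;
    std::map<size_t, uint32_t> dedicated_cache;  // dedicated portion of this core's cache
};

// Memory is modeled as a flat array of words.
using Memory = std::vector<uint32_t>;

// Generate and perform a memory burst request: bulk-load the described
// address ranges from memory into the selected core's dedicated cache portion.
void PerformBurst(const Job& job, Core& core, const Memory& memory) {
    for (const auto& [start, len] : job.future_accesses) {
        for (size_t a = start; a < start + len && a < memory.size(); ++a) {
            core.dedicated_cache[a] = memory[a];
        }
    }
}

void StartJob(const Job& job, const Core& core) {
    std::cout << "Job " << job.id << " started on core " << core.id
              << " with " << core.dedicated_cache.size() << " prefetched words\n";
}

int main() {
    Memory memory(1024, 7);
    std::vector<Core> cores = {{0, {}}, {1, {}}};

    // Step 1: select a job from a plurality of jobs (here simply the first pending one).
    std::vector<Job> jobs = {{42, {{0, 16}, {256, 8}}}, {43, {{512, 4}}}};
    Job& job = jobs.front();

    // Step 2: select a processor core to execute the job (here simply the first core).
    Core& core = cores[0];

    // Steps 3-5: the message describing future accesses is attached to the job;
    // generate the burst request from it and load data into the dedicated
    // portion of the selected core's cache before the job runs.
    PerformBurst(job, core, memory);

    // Step 6: start the selected job on the selected processor core.
    StartJob(job, core);
    return 0;
}
```

    The write-back path mentioned at the end of the abstract (a send message causing values in another dedicated cache portion to be written to memory) would be the mirror image of PerformBurst and is omitted here for brevity.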

    5. Advance Cache Allocator (Patent Application, Under Examination, Published)

    Publication Number: US20170031829A1

    Publication Date: 2017-02-02

    Application Number: US14811436

    Filing Date: 2015-07-28

    Abstract: Systems and techniques for advance cache allocation are described. A described technique includes selecting a job from a plurality of jobs; selecting a processor core from a plurality of processor cores to execute the selected job; receiving a message which describes future memory accesses that will be generated by the selected job; generating a memory burst request based on the message; performing the memory burst request to load data from a memory to at least a dedicated portion of a cache, the cache corresponding to the selected processor core; and starting the selected job on the selected processor core. The technique can include performing an action indicated by a send message to write one or more values from another dedicated portion of the cache to the memory.

    System and Method for Shared Memory Ownership Using Context

    Publication Number: US20200050376A1

    Publication Date: 2020-02-13

    Application Number: US16658899

    Filing Date: 2019-10-21

    Abstract: It is possible to reduce the latency attributable to memory protection in shared memory systems by performing access protection at a central Data Ownership Manager (DOM), rather than at distributed memory management units in the central processing unit (CPU) elements (CEs) responsible for parallel thread processing. In particular, the DOM may monitor read requests communicated over a data plane between the CEs and a memory controller, and perform access protection verification in parallel with the memory controller's generation of the data response. The DOM may be separate and distinct from both the CEs and the memory controller, and therefore may generally be able to make the access determination without interfering with data plane processing/generation of the read requests and data responses exchanged between the memory controller and the CEs.
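
    The key point of this abstract is that the ownership check runs in parallel with the memory controller's data fetch instead of sitting in front of it. The C++ sketch below illustrates that arrangement, with std::async standing in for the parallel hardware paths; the ownership table, the read-request format, and all names are assumptions made for illustration only.

```cpp
#include <cstdint>
#include <future>
#include <iostream>
#include <map>
#include <utility>
#include <vector>

// Hypothetical read request traveling the data plane between a CE and memory.
struct ReadRequest {
    int ce_id;       // requesting CPU element (CE)
    size_t address;  // memory address being read
};

// Memory controller: generates the data response for a read request.
class MemoryController {
public:
    explicit MemoryController(std::vector<uint32_t> memory) : memory_(std::move(memory)) {}
    uint32_t Fetch(const ReadRequest& req) const { return memory_.at(req.address); }
private:
    std::vector<uint32_t> memory_;
};

// Data Ownership Manager: separate and distinct from the CEs and the memory
// controller; it verifies access rights without sitting on the data path.
class DataOwnershipManager {
public:
    void GrantOwnership(size_t address, int ce_id) { owner_[address] = ce_id; }
    bool Verify(const ReadRequest& req) const {
        auto it = owner_.find(req.address);
        return it != owner_.end() && it->second == req.ce_id;
    }
private:
    std::map<size_t, int> owner_;  // address -> owning CE
};

int main() {
    MemoryController mc(std::vector<uint32_t>(64, 5));
    DataOwnershipManager dom;
    dom.GrantOwnership(10, /*ce_id=*/1);

    ReadRequest req{/*ce_id=*/1, /*address=*/10};

    // The DOM's protection check runs concurrently with the memory controller's
    // generation of the data response; neither blocks the other.
    auto data = std::async(std::launch::async, [&] { return mc.Fetch(req); });
    auto allowed = std::async(std::launch::async, [&] { return dom.Verify(req); });

    uint32_t value = data.get();
    if (allowed.get()) {
        std::cout << "CE " << req.ce_id << " receives value " << value << '\n';
    } else {
        std::cout << "Access denied; response dropped before delivery to the CE\n";
    }
    return 0;
}
```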

    Data streaming unit and method for operating the data streaming unit

    Publication Number: US10419501B2

    Publication Date: 2019-09-17

    Application Number: US14958773

    Filing Date: 2015-12-03

    Abstract: A data streaming unit (DSU) and a method for operating a DSU are disclosed. In an embodiment the DSU includes a memory interface configured to be connected to a storage unit, a compute engine interface configured to be connected to a compute engine (CE) and an address generator configured to manage address data representing address locations in the storage unit. The data streaming unit further includes a data organization unit configured to access data in the storage unit and to reorganize the data to be forwarded to the compute engine, wherein the memory interface is communicatively connected to the address generator and the data organization unit, wherein the address generator is communicatively connected to the data organization unit, and wherein the data organization unit is communicatively connected to the compute engine interface.

    System and Method for Shared Memory Ownership Using Context

    Publication Number: US20220164115A1

    Publication Date: 2022-05-26

    Application Number: US17543024

    Filing Date: 2021-12-06

    Abstract: It is possible to reduce the latency attributable to memory protection in shared memory systems by performing access protection at a central Data Ownership Manager (DOM), rather than at distributed memory management units in the central processing unit (CPU) elements (CEs) responsible for parallel thread processing. In particular, the DOM may monitor read requests communicated over a data plane between the CEs and a memory controller, and perform access protection verification in parallel with the memory controller's generation of the data response. The DOM may be separate and distinct from both the CEs and the memory controller, and therefore may generally be able to make the access determination without interfering with data plane processing/generation of the read requests and data responses exchanged between the memory controller and the CEs.

    System and method for shared memory ownership using context

    Publication Number: US11194478B2

    Publication Date: 2021-12-07

    Application Number: US16658899

    Filing Date: 2019-10-21

    Abstract: It is possible to reduce the latency attributable to memory protection in shared memory systems by performing access protection at a central Data Ownership Manager (DOM), rather than at distributed memory management units in the central processing unit (CPU) elements (CEs) responsible for parallel thread processing. In particular, the DOM may monitor read requests communicated over a data plane between the CEs and a memory controller, and perform access protection verification in parallel with the memory controller's generation of the data response. The DOM may be separate and distinct from both the CEs and the memory controller, and therefore may generally be able to make the access determination without interfering with data plane processing/generation of the read requests and data responses exchanged between the memory controller and the CEs.
