-
Publication No.: US12208812B2
Publication Date: 2025-01-28
Application No.: US17498930
Filing Date: 2021-10-12
Applicant: DENSO CORPORATION
Inventor: Nobuhiko Tanibata
IPC: B60W50/00 , G06F12/084
Abstract: A vehicle device includes a plurality of CPU modules, a plurality of cache memories respectively provided for the plurality of CPU modules, a specifying unit configured to specify a shared region shared by the plurality of CPU modules, and a region arrangement unit configured to arrange the shared region specified by the specifying unit in a main memory.
-
Publication No.: US12204450B2
Publication Date: 2025-01-21
Application No.: US18451698
Filing Date: 2023-08-17
Applicant: MediaTek Inc.
Inventor: Yu-Pin Chen , Jia-Ming Chen , Chien-Yuan Lai , Ya Ting Chang , Cheng-Tse Chen
IPC: G06F12/0811 , G06F12/084
Abstract: A computing system performs shared cache allocation to allocate cache resources to groups of tasks. The computing system monitors the bandwidth at a memory hierarchy device that is at a next level to the cache in a memory hierarchy of the computing system. The computing system estimates a change in dynamic power from a corresponding change in the bandwidth before and after the cache resources are allocated. The allocation of the cache resources is adjusted according to an allocation policy that receives inputs including the estimated change in the dynamic power and a performance indication of task execution.
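A minimal sketch of the kind of estimate this abstract describes, assuming dynamic power scales linearly with next-level bandwidth; the function names, the `energy_per_byte` constant, and the accept/reject policy are illustrative assumptions, not details from the patent:

```python
def estimate_dynamic_power_delta(bw_before, bw_after, energy_per_byte):
    # Dynamic power change at the next-level memory device, estimated
    # from the bandwidth change before/after a cache allocation
    # (simple linear model, assumed for illustration).
    return (bw_after - bw_before) * energy_per_byte

def keep_allocation(perf_gain, power_delta):
    # Toy allocation policy: accept the allocation only when the
    # performance gain outweighs the estimated extra power.
    return perf_gain > power_delta

# A larger cache share cut next-level bandwidth from 100 to 80 units,
# so estimated dynamic power drops.
delta = estimate_dynamic_power_delta(100, 80, energy_per_byte=2)
keep = keep_allocation(perf_gain=5, power_delta=delta)
```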
-
Publication No.: US12182022B2
Publication Date: 2024-12-31
Application No.: US17741244
Filing Date: 2022-05-10
Applicant: Western Digital Technologies, Inc.
Inventor: Marjan Radi , Dejan Vucinic
IPC: G06F3/06 , G06F12/0817 , G06F12/084
Abstract: A node includes at least one memory for use as a shared cache in a distributed cache. One or more other nodes on a network each provide a respective shared cache for the distributed cache. A request is received by a kernel of the node to access data in the shared cache and an Input/Output (I/O) queue is identified from among a plurality of I/O queues in a kernel space of the at least one memory for queuing the received request based on at least one of a priority indicated by the received request and an application that initiated the request. In another aspect, each I/O queue of the plurality of I/O queues corresponds to at least one of different respective priorities for requests to access data in the shared cache and different respective applications initiating requests to access data in the shared cache.
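As a rough user-space illustration of the per-priority I/O queues described above (the patent places these queues in kernel space; the class name and `deque`-based layout are assumptions):

```python
from collections import deque

class SharedCacheQueues:
    # One I/O queue per priority level, standing in for the plurality
    # of I/O queues in kernel space described in the abstract.
    def __init__(self, priorities):
        self.queues = {p: deque() for p in priorities}

    def enqueue(self, request):
        # Route the request by the priority it carries.
        self.queues[request["priority"]].append(request)

    def dequeue(self):
        # Serve the lowest-numbered (highest) non-empty priority first.
        for p in sorted(self.queues):
            if self.queues[p]:
                return self.queues[p].popleft()
        return None

q = SharedCacheQueues(priorities=[0, 1, 2])
q.enqueue({"priority": 2, "key": "a"})
q.enqueue({"priority": 0, "key": "b"})
first = q.dequeue()  # the priority-0 request is served first
```

A real implementation could just as well key queues by initiating application, as the second aspect of the abstract suggests.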
-
Publication No.: US20240419600A1
Publication Date: 2024-12-19
Application No.: US18818955
Filing Date: 2024-08-29
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Zehui CHEN , Ruliang DONG
IPC: G06F12/084
Abstract: The present disclosure relates to methods and apparatuses for managing a shared cache. One example method includes determining an access characteristic of accessing the shared cache by IO requests of each of K types that access the shared cache, determining a partition size and an eviction algorithm for the IO requests of each type in the shared cache based on the determined access characteristic and a hit rate of the shared cache, and configuring, for the IO requests of each type in the shared cache, the cache size as the determined partition size and the eviction algorithm as the determined eviction algorithm.
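A toy sketch of a per-type partition/eviction decision of this shape. The proportional-to-accesses sizing and the sequential-means-FIFO heuristic are assumptions made for illustration; the patent derives its decisions from access characteristics and hit rate by its own method:

```python
def configure_partitions(total_size, io_stats):
    # io_stats: IO type -> {"accesses": int, "sequential": bool}
    # Partition sizes proportional to access counts; eviction algorithm
    # chosen by a simple locality heuristic (illustrative only).
    total = sum(s["accesses"] for s in io_stats.values())
    config = {}
    for io_type, s in io_stats.items():
        config[io_type] = {
            "size": total_size * s["accesses"] // total,
            # Sequential streams gain little from recency: use FIFO.
            "eviction": "FIFO" if s["sequential"] else "LRU",
        }
    return config

cfg = configure_partitions(
    100, {"db": {"accesses": 75, "sequential": False},
          "log": {"accesses": 25, "sequential": True}})
```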
-
Publication No.: US20240385965A1
Publication Date: 2024-11-21
Application No.: US18785026
Filing Date: 2024-07-26
Applicant: Ascenium, Inc.
Inventor: Peter Foley
IPC: G06F12/084 , G06F9/54
Abstract: Techniques for task processing are disclosed. An array of compute elements is accessed. Each compute element within the array is known to a compiler and is coupled to its neighboring compute elements. The array of compute elements is coupled to at least one data cache. The data cache provides memory storage for the array. Control for the compute elements is provided on a cycle-by-cycle basis. Control is enabled by a stream of wide control words generated by the compiler. A load address and a store address are generated. The load and the store addresses comprise memory block move addresses. The memory block move addresses point to memory storage locations in the data cache. A memory block move is executed, based on the memory block move addresses. The data for the memory block move is transferred outside of the array.
-
Publication No.: US20240370372A1
Publication Date: 2024-11-07
Application No.: US18763009
Filing Date: 2024-07-03
Applicant: Intel Corporation
Inventor: Xiaodong Qiu , Yong Jiang , Changwon Rhee , Cui Tang , Shuangpeng Zhou , Lei Chen , Danyu Bi , Peiqing Jiang , Chengxi Wu
IPC: G06F12/084 , G06F9/48
Abstract: Embodiments are generally directed to methods and apparatuses for dynamically changing data priority in a cache. An embodiment of an apparatus comprises a priority controller to: receive a memory access request to request data; and set a priority flag for the memory access request based on an accumulated access amount of data stored in a memory block to be accessed by the memory access request, to dynamically change a priority level of the requested data.
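A minimal sketch of such a priority controller, assuming a fixed byte threshold and per-block bookkeeping (both are illustrative choices, not details from the patent):

```python
class PriorityController:
    # Tracks the accumulated access amount per memory block and flags a
    # request high-priority once its block crosses a threshold, so hot
    # blocks' data is more likely to be retained in the cache.
    def __init__(self, threshold):
        self.threshold = threshold
        self.accumulated = {}  # block id -> accumulated access amount

    def set_priority_flag(self, block, amount):
        self.accumulated[block] = self.accumulated.get(block, 0) + amount
        return self.accumulated[block] >= self.threshold

pc = PriorityController(threshold=4096)
cold = pc.set_priority_flag("blk0", 1024)  # below threshold: low priority
hot = pc.set_priority_flag("blk0", 3072)   # accumulated 4096: promoted
```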
-
Publication No.: US20240370370A1
Publication Date: 2024-11-07
Application No.: US18777045
Filing Date: 2024-07-18
Applicant: Google LLC
Inventor: Liran Fishel , David Dayan
IPC: G06F12/0811 , G06F12/084
Abstract: A system for dynamically controlling point-of-coherency or a point-of-serialization of shared data includes a plurality of processing engines grouped into a plurality of separate clusters and a shared communications path communicatively connecting each of the plurality of clusters to one another. Each respective cluster includes memory shared by the processing engines of the respective cluster, each unit of data in the memory being assigned to a single owner cluster responsible for maintaining an authoritative copy and a single manager cluster permanently responsible for assigning the owner cluster responsibility. Each respective cluster also includes a controller configured to receive data requests, track each of a manager status and an ownership status of the respective cluster, and control ownership status changes with respect to respective units of data based at least in part on the tracked ownership and manager statuses of the respective cluster.
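A tiny sketch of the manager-side bookkeeping this abstract implies: a manager cluster permanently records which cluster owns the authoritative copy of each data unit. The first-requester-becomes-owner rule and all names are assumptions for illustration:

```python
class ManagerCluster:
    # The manager cluster is permanently responsible for recording, per
    # data unit, which cluster owns the authoritative copy.
    def __init__(self):
        self.owner = {}  # data unit -> owning cluster id

    def request_ownership(self, unit, requester):
        # First requester becomes the owner; later requesters learn the
        # current owner and must coordinate through it for coherence.
        return self.owner.setdefault(unit, requester)

mgr = ManagerCluster()
first_owner = mgr.request_ownership("page42", 0)  # cluster 0 claims it
later_owner = mgr.request_ownership("page42", 3)  # cluster 3 redirected
```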
-
Publication No.: US12106079B2
Publication Date: 2024-10-01
Application No.: US18338023
Filing Date: 2023-06-20
Applicant: Google LLC
Inventor: Hyo Jun Kim , Rohit Upadhyaya Jayasankar
IPC: G06F8/41 , G06F12/084
CPC classification number: G06F8/4442 , G06F12/084 , G06F2212/1032
Abstract: Example embodiments of the present disclosure provide, in one example aspect, an example computer-implemented method for verification of a shared cache. The example method can include retrieving a precompiled shared cache entry corresponding to a shared cache key, the shared cache key being associated with an operation request. The example method can include obtaining a directly compiled resource associated with the operation request. The example method can include certifying one or more portions of the shared cache based at least in part on a comparison of the precompiled shared cache entry and the directly compiled resource.
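A sketch of the certification step, assuming the comparison is done by hashing both artifacts; the digest-based mechanism and names are illustrative, not from the patent:

```python
import hashlib

def certify_entry(precompiled_entry, directly_compiled_resource):
    # Certify a shared-cache entry by comparing a digest of the
    # precompiled entry against the directly compiled resource produced
    # for the same operation request (SHA-256 assumed for illustration).
    def digest(blob):
        return hashlib.sha256(blob).hexdigest()
    return digest(precompiled_entry) == digest(directly_compiled_resource)

ok = certify_entry(b"compiled-shader-v1", b"compiled-shader-v1")
bad = certify_entry(b"compiled-shader-v1", b"compiled-shader-v2")
```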
-
Publication No.: US20240311302A1
Publication Date: 2024-09-19
Application No.: US18589852
Filing Date: 2024-02-28
Applicant: Samsung Electronics Co., Ltd.
Inventor: Jin Jung , Daehoon Kim , Hwanjun Lee , Jonggeon Lee , Jinin So
IPC: G06F12/0811 , G06F12/084
CPC classification number: G06F12/0811 , G06F12/084
Abstract: A processor includes a processing core configured to process each of a plurality of requests by accessing a corresponding one of a first memory and a second memory, a latency monitor configured to generate first latency information and second latency information, the first latency information comprising a first access latency to the first memory, and the second latency information comprising a second access latency to the second memory, a plurality of cache ways divided into a first partition and a second partition, and a decision engine configured to allocate each of the plurality of cache ways to one of the first partition and the second partition, based on the first latency information and the second latency information.
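One plausible decision rule for the decision engine above: give the partition backing the slower memory proportionally more cache ways, since its misses cost more. The latency-proportional split is an assumption for illustration, not the patent's actual policy:

```python
def allocate_ways(num_ways, latency_mem1, latency_mem2):
    # Split the cache ways between the two partitions in proportion to
    # the monitored access latencies, so the slower memory's partition
    # receives more ways (illustrative heuristic).
    total = latency_mem1 + latency_mem2
    ways1 = max(1, round(num_ways * latency_mem1 / total))
    return ways1, num_ways - ways1

# First memory at 100 ns vs. a slower tier at 300 ns, 8 ways total:
# the slow tier's partition gets three quarters of the ways.
fast_ways, slow_ways = allocate_ways(8, 100, 300)
```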
-
Publication No.: US12093178B2
Publication Date: 2024-09-17
Application No.: US18343023
Filing Date: 2023-06-28
Applicant: Microsoft Technology Licensing, LLC
Inventor: Subrata Biswas
IPC: G06F12/084 , G06F12/0888
CPC classification number: G06F12/084 , G06F12/0888 , G06F2212/1044 , G06F2212/608
Abstract: Database objects are retrieved from a database and parsed into normalized cached data objects. The database objects are stored in the normalized cached data objects in a cache store, and tenant data requests are serviced from the normalized cached data objects. The normalized cached data objects include references to shared objects in a shared object pool that can be shared across different rows of the normalized cached data objects and across different tenant cache systems.
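A minimal interning sketch of the shared object pool idea: rows of normalized cached data objects hold references into one pool, so identical sub-objects are stored once. The dict-based `intern` scheme is an assumed illustration:

```python
class SharedObjectPool:
    # Normalized cached rows store references into this pool rather than
    # private copies, so an identical object is shared across different
    # rows and across tenant cache systems (interning is illustrative).
    def __init__(self):
        self._pool = {}

    def intern(self, obj):
        # Return the pooled instance, inserting obj on first sight.
        return self._pool.setdefault(obj, obj)

pool = SharedObjectPool()
a = pool.intern(("region", "us-east"))
b = pool.intern(("region", "us-east"))  # equal value, new tuple object
```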