RESERVATION OF MEMORY IN MULTIPLE TIERS OF MEMORY

    Publication No.: US20230305720A1

    Publication Date: 2023-09-28

    Application No.: US18084258

    Filing Date: 2022-12-19

    CPC classification number: G06F3/0631 G06F3/0659 G06F3/067 G06F3/0611

    Abstract: Examples described herein relate to a memory controller that, when connected to at least one memory device in a multi-tiered memory system comprising a near memory and a far memory, is to allocate a region of the near memory to a requester based on receipt of a request. In some examples, the memory controller includes circuitry to transmit at least one memory read command and address information to the multi-tiered memory system to read data from the multi-tiered memory system, and circuitry to transmit at least one memory write command and address information to the multi-tiered memory system to write data to the multi-tiered memory system, wherein the near memory comprises at least one memory connected to the memory controller via a memory interface and the far memory comprises at least one memory connected to the memory controller via a network.
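
    As an informal illustration only, the allocation and command flow described above might look roughly like the Python sketch below; the class and method names (TieredMemoryController, allocate_near_region, and so on) are invented for this example and are not taken from the application.

# A minimal sketch, assuming hypothetical names; not the patent's implementation.
class TieredMemoryController:
    def __init__(self, near_capacity, far_capacity):
        self.near_free = near_capacity   # memory attached via a local memory interface
        self.far_free = far_capacity     # memory reached over a network fabric
        self.reservations = {}           # requester id -> (tier, size)

    def allocate_near_region(self, requester, size):
        """Reserve a region of near memory for a requester, spilling to far memory if full."""
        if size <= self.near_free:
            self.near_free -= size
            self.reservations[requester] = ("near", size)
        elif size <= self.far_free:
            self.far_free -= size
            self.reservations[requester] = ("far", size)
        else:
            raise MemoryError("no capacity in either tier")
        return self.reservations[requester]

    def read(self, requester, address, length):
        """Issue a read command plus address information to the owning tier."""
        tier, _ = self.reservations[requester]
        return {"cmd": "READ", "tier": tier, "addr": address, "len": length}

    def write(self, requester, address, data):
        """Issue a write command plus address information to the owning tier."""
        tier, _ = self.reservations[requester]
        return {"cmd": "WRITE", "tier": tier, "addr": address, "bytes": len(data)}


if __name__ == "__main__":
    mc = TieredMemoryController(near_capacity=1 << 20, far_capacity=1 << 24)
    print(mc.allocate_near_region("app-1", 4096))
    print(mc.read("app-1", address=0x1000, length=64))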

    TECHNOLOGIES FOR CLOUD DATA CENTER ANALYTICS
    Patent application (under examination, published)

    Publication No.: US20160366026A1

    Publication Date: 2016-12-15

    Application No.: US15114696

    Filing Date: 2015-02-24

    Abstract: Technologies for generating an analytical model for a workload of a data center include an analytics server to receive raw data from components of a data center. The analytics server retrieves a workbook that includes analytical algorithms from a workbook marketplace server, and uses the analytical algorithms to analyze the raw data and generate the analytical model for the workload. The analytics server further generates an optimization trigger, which may be based on the analytical model and one or more previously generated analytical models, to be transmitted to a controller component of the data center. The workbook marketplace server may include a plurality of workbooks, each of which may include one or more analytical algorithms from which to generate a different analytical model for the workload of the data center.
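
    A minimal sketch of how retrieved analytical algorithms could be applied to raw telemetry and turned into an optimization trigger; the WORKBOOK_MARKETPLACE dictionary, the toy metrics, and the CPU threshold are assumptions standing in for the real workbook marketplace server and its workbooks.

# Illustrative sketch only; names and thresholds are invented for the example.
from statistics import mean

# A "workbook" is modeled here as a named collection of analytical functions.
WORKBOOK_MARKETPLACE = {
    "utilization-basics": {
        "avg_cpu": lambda rows: mean(r["cpu"] for r in rows),
        "peak_mem": lambda rows: max(r["mem"] for r in rows),
    }
}

def build_model(workbook_name, raw_data):
    """Apply every algorithm in the retrieved workbook to the raw telemetry."""
    workbook = WORKBOOK_MARKETPLACE[workbook_name]
    return {metric: fn(raw_data) for metric, fn in workbook.items()}

def optimization_trigger(model, previous_models, cpu_threshold=0.8):
    """Emit a trigger for the data-center controller when the new model,
    optionally compared with earlier models, crosses a threshold."""
    history = [m["avg_cpu"] for m in previous_models] + [model["avg_cpu"]]
    if max(history) > cpu_threshold:
        return {"action": "rebalance_workload", "reason": "sustained high CPU"}
    return None

if __name__ == "__main__":
    raw = [{"cpu": 0.91, "mem": 12.0}, {"cpu": 0.85, "mem": 14.5}]
    model = build_model("utilization-basics", raw)
    print(model)
    print(optimization_trigger(model, previous_models=[]))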


    WORKLOAD OPTIMIZATION, SCHEDULING, AND PLACEMENT FOR RACK-SCALE ARCHITECTURE COMPUTING SYSTEMS
    Patent application (under examination, published)

    Publication No.: US20160359683A1

    Publication Date: 2016-12-08

    Application No.: US15114687

    Filing Date: 2015-02-24

    Abstract: Technologies for datacenter management include one or more computing racks, each including a rack controller. The rack controller may receive system, performance, or health metrics for the components of the computing rack. The rack controller generates regression models to predict component lifespans and may predict logical machine lifespans based on the lifespans of the included hardware components. The rack controller may generate notifications or schedule maintenance sessions based on the remaining component or logical machine lifespans. The rack controller may compose logical machines from components having similar remaining lifespans. In some embodiments, the rack controller may validate a service level agreement, based on the probability of component failure, prior to executing an application. A management interface may generate an interactive visualization of the system state and optimize the datacenter schedule based on optimization rules derived from human input given in response to the visualization. Other embodiments are described and claimed.
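
    A rough sketch of the lifespan-prediction and composition idea; the simple least-squares fit, the health scale, and the function names below are assumptions standing in for whatever regression models the rack controller actually uses.

# Illustrative sketch only; a linear fit substitutes for the real regression models.
def predict_remaining_lifespan(health_samples):
    """Least-squares fit of health (1.0 = new, 0.0 = failed) against sample index,
    then extrapolate how many more samples remain until health reaches zero."""
    n = len(health_samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(health_samples) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, health_samples)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope >= 0:
        return float("inf")               # not degrading within this observation window
    t_zero = x_mean - y_mean / slope      # index at which the fitted line hits zero
    return max(t_zero - (n - 1), 0.0)     # samples remaining after the last observation

def logical_machine_lifespan(component_lifespans):
    """A logical machine lives only as long as its shortest-lived component."""
    return min(component_lifespans.values())

def compose_similar(components, tolerance=10.0):
    """Group components whose remaining lifespans fall within a tolerance, so a
    composed logical machine does not mix nearly-new and nearly-worn parts."""
    groups = []
    for name, life in sorted(components.items(), key=lambda kv: kv[1]):
        if groups and life - groups[-1][-1][1] <= tolerance:
            groups[-1].append((name, life))
        else:
            groups.append([(name, life)])
    return groups

if __name__ == "__main__":
    cpu_life = predict_remaining_lifespan([1.0, 0.98, 0.95, 0.93, 0.9])
    parts = {"cpu0": cpu_life, "ssd0": 40.0, "nic0": 120.0}
    print("logical machine lifespan:", logical_machine_lifespan(parts))
    print("composition groups:", compose_similar(parts))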


    TECHNOLOGIES FOR PRE-CONFIGURING ACCELERATORS BY PREDICTING BIT-STREAMS

    Publication No.: US20210334138A1

    Publication Date: 2021-10-28

    Application No.: US17365898

    Filing Date: 2021-07-01

    Abstract: Technologies for pre-configuring accelerators by predicting bit-streams include communication circuitry and a compute device. The compute device includes a compute engine to determine one or more bit-streams registered on each accelerator of multiple accelerators. The compute engine is further to predict the next job to be requested for acceleration by an application of at least one compute sled of multiple compute sleds, predict a bit-stream from a bit-stream library that is to execute the predicted next job, and determine whether the predicted bit-stream is already registered on one of the accelerators. In response to a determination that the predicted bit-stream is not registered on one of the accelerators, the compute engine is to select an accelerator from the plurality of accelerators that satisfies the characteristics of the predicted bit-stream and register the predicted bit-stream on the selected accelerator.
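
    An illustrative sketch of the flow described above, in which a most-frequent-job heuristic stands in for the predictor; BITSTREAM_LIBRARY, ensure_bitstream_registered, and the LUT resource counts are all invented for the example.

# Illustrative sketch only; the prediction step is a naive stand-in.
from collections import Counter

BITSTREAM_LIBRARY = {
    "compress": {"bitstream": "bs-compress-v2", "needs": {"luts": 40_000}},
    "encrypt":  {"bitstream": "bs-aes-gcm",     "needs": {"luts": 25_000}},
}

def predict_next_job(job_history):
    """Naive stand-in for the predictor: assume the most frequent recent job repeats."""
    return Counter(job_history).most_common(1)[0][0]

def ensure_bitstream_registered(job_history, accelerators):
    """Pre-register the bit-stream for the predicted job on a suitable accelerator."""
    job = predict_next_job(job_history)
    entry = BITSTREAM_LIBRARY[job]
    bs = entry["bitstream"]

    # Already registered somewhere? Nothing to do.
    if any(bs in acc["registered"] for acc in accelerators.values()):
        return None

    # Otherwise pick an accelerator whose resources satisfy the bit-stream's needs.
    for name, acc in accelerators.items():
        if acc["free_luts"] >= entry["needs"]["luts"]:
            acc["registered"].add(bs)
            acc["free_luts"] -= entry["needs"]["luts"]
            return name
    raise RuntimeError("no accelerator satisfies the predicted bit-stream")

if __name__ == "__main__":
    accs = {"fpga0": {"registered": {"bs-aes-gcm"}, "free_luts": 60_000},
            "fpga1": {"registered": set(), "free_luts": 30_000}}
    print(ensure_bitstream_registered(["compress", "encrypt", "compress"], accs))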

    PROACTIVE DATA PREFETCH WITH APPLIED QUALITY OF SERVICE

    Publication No.: US20200004685A1

    Publication Date: 2020-01-02

    Application No.: US16568048

    Filing Date: 2019-09-11

    Abstract: Examples described herein relate to prefetching content from a remote memory device to a memory tier local to a higher-level cache or memory. An application or device can indicate a time by which data is to be available in the higher-level cache or memory. A prefetcher used by a network interface can allocate resources in any intermediary network device in the data path from the remote memory device to the memory tier local to the higher-level cache. Memory access bandwidth, egress bandwidth, and memory space in any intermediary network device can be allocated for prefetching content. In some examples, proactive prefetch can occur for content that is expected to be prefetched but has not yet been requested to be prefetched.
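
    A small sketch of deadline-driven reservation along the prefetch path; the hop names and fields, and the bandwidth arithmetic, are illustrative assumptions rather than the described mechanism.

# Illustrative sketch only; field names and the QoS math are invented for the example.
def plan_prefetch(size_bytes, deadline_s, path):
    """Reserve egress bandwidth and buffer space on every hop between the remote
    memory device and the local tier so the data arrives before the deadline."""
    required_bw = size_bytes / deadline_s  # bytes/second needed to meet the deadline
    reservations = []
    for hop in path:
        if hop["free_bw"] < required_bw or hop["free_buf"] < size_bytes:
            raise RuntimeError(f"hop {hop['name']} cannot meet the QoS target")
        hop["free_bw"] -= required_bw
        hop["free_buf"] -= size_bytes
        reservations.append({"hop": hop["name"], "bw": required_bw, "buf": size_bytes})
    return reservations

if __name__ == "__main__":
    path = [
        {"name": "remote-pool", "free_bw": 10e9, "free_buf": 1 << 30},
        {"name": "tor-switch",  "free_bw": 25e9, "free_buf": 1 << 28},
        {"name": "local-nic",   "free_bw": 12e9, "free_buf": 1 << 26},
    ]
    # Application hints that 32 MiB must be resident in the local tier within 10 ms.
    for r in plan_prefetch(32 * 1024 * 1024, 0.010, path):
        print(r)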
