Network-driven, packet context-aware power management for client-server architecture

Publication No.: US11256321B2

Publication Date: 2022-02-22

Application No.: US16024290

Application Date: 2018-06-29

Abstract: A server system including an enhanced Network Interface Controller (NIC) within a client-server architecture is provided. The server system includes a memory for storing data from one or more network packets and one or more processors for processing network requests based on the one or more network packets. The enhanced NIC is configured to receive the one or more network packets and transfer the data from the one or more network packets to the memory. During the latency period defined by the time required to transfer a network packet to memory, the enhanced NIC performs a Network-driven, packet Context-Aware Power (NCAP) management process to actively transition a power management state of the one or more processors to a predicted level. In this manner, the computational and energy efficiency of the server system is improved.
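The abstract's core idea can be sketched as a small model: while the packet payload is in flight to memory, the NIC uses the packet's context to predict the processor power state the coming request will need and requests that transition early. The packet classes and power-state names below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical mapping from packet context to a predicted processor power
# state; the real NCAP mechanism operates in NIC hardware during the DMA
# latency window, whereas this is only an illustrative software model.
PREDICTED_STATE = {
    "rpc_request": "P0",    # latency-critical: pre-wake to full performance
    "bulk_transfer": "P2",  # throughput-bound: a mid-level performance state
    "keepalive": "C1",      # no real work expected: remain in shallow sleep
}

def ncap_transition(packet_context: str, current_state: str) -> str:
    """Return the power state the NIC would request during the transfer
    latency period; unknown contexts leave the current state unchanged."""
    return PREDICTED_STATE.get(packet_context, current_state)
```

The benefit named in the abstract comes from overlapping the state transition with the memory transfer, so the processor is already at the predicted level when the request becomes runnable.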

    Systems and methods for hardware-based asynchronous persistence

Publication No.: US12050810B2

Publication Date: 2024-07-30

Application No.: US17935912

Application Date: 2022-09-27

    CPC classification number: G06F3/0659 G06F3/061 G06F3/0673

    Abstract: Systems and methods for hardware-based asynchronous logging include: initiating first and second atomic regions on first and second cores of a central processing unit (CPU); and asynchronously logging data for the first atomic region and the second atomic region using the CPU by: asynchronously performing log persist operations (LPOs) to log an old data value from each atomic region; updating the old data value to a new data value from each atomic region; tracking dependencies between the first atomic region and the second atomic region using a memory controller; asynchronously performing data persist operations (DPOs) to persist the new data value for each atomic region; and committing the first atomic region and the second atomic region based on the dependencies using the memory controller of the CPU.
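The sequence in the abstract (log old value, update, persist new value, commit in dependency order) can be modeled as a toy memory-controller simulation. This is a software sketch of the hardware flow only; class and method names are assumptions for illustration.

```python
class MemoryControllerModel:
    """Toy model of the abstract's flow: asynchronous log persist
    operations (LPOs), data persist operations (DPOs), and commits
    ordered by tracked dependencies between atomic regions."""

    def __init__(self):
        self.log = []          # persisted undo records from LPOs
        self.persisted = {}    # addr -> persisted new value from DPOs
        self.deps = {}         # region -> set of regions it depends on
        self.committed = []    # commit order

    def lpo(self, region, addr, old_value):
        self.log.append((region, addr, old_value))  # log the old value

    def dpo(self, region, addr, new_value):
        self.persisted[addr] = new_value            # persist the new value

    def add_dep(self, region, depends_on):
        self.deps.setdefault(region, set()).add(depends_on)

    def commit(self, region):
        # A region commits only after every region it depends on commits.
        for dep in self.deps.get(region, ()):
            if dep not in self.committed:
                self.commit(dep)
        if region not in self.committed:
            self.committed.append(region)
```

For example, if the second atomic region consumes a value produced by the first, committing the second forces the first to commit ahead of it, matching the dependency-ordered commit the abstract describes.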

    In-Memory Near-Data Approximate Acceleration

Publication No.: US20210382691A1

Publication Date: 2021-12-09

Application No.: US17285409

Application Date: 2019-10-14

Abstract: A random access memory may include memory banks and arithmetic approximation units. Each arithmetic approximation unit may be dedicated to one or more of the memory banks and include a respective multiply-and-accumulate unit and a respective lookup-table unit. The respective multiply-and-accumulate unit is configured to iteratively perform shift and add operations with two inputs and to provide a result of the shift and add operations to the respective lookup-table unit. The result approximates or is a product of the two inputs. The respective lookup-table unit is configured to produce an output by applying a pre-defined function to the result. The arithmetic approximation units are configured for parallel operation. The random access memory may also include a memory controller configured to receive instructions, from a processor, regarding locations within the memory banks from which to obtain the two inputs and in which to write the output.
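The two-stage datapath in the abstract (iterative shift-and-add multiply feeding a lookup table) can be sketched in software. The particular pre-defined function loaded into the LUT below is an arbitrary assumption; the patent leaves the function unspecified.

```python
def shift_add_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only shifts and adds,
    mirroring the iterative operation of the per-bank MAC unit."""
    acc = 0
    while b:
        if b & 1:
            acc += a   # accumulate the shifted partial product
        a <<= 1
        b >>= 1
    return acc

# Hypothetical lookup-table unit: a pre-defined function sampled into a
# 256-entry table (here an arbitrary modular square, chosen only so the
# table is small and deterministic).
LUT = {x: (x * x) % 251 for x in range(256)}

def approx_unit(a: int, b: int) -> int:
    """MAC result (truncated to the table's index width) through the LUT."""
    return LUT[shift_add_multiply(a, b) & 0xFF]
```

Because each approximation unit is dedicated to its own bank(s), many such pipelines can run in parallel, which is where the claimed acceleration comes from.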

    Application-transparent near-memory processing architecture with memory channel network

Publication No.: US12210473B2

Publication Date: 2025-01-28

Application No.: US17980685

Application Date: 2022-11-04

Abstract: A computing device includes a host processor to execute a host driver to create a host-side interface, the host-side interface emulating a first Ethernet interface, and to assign the host-side interface a first medium access control (MAC) address and a first Internet Protocol (IP) address. Memory components are disposed on a substrate. A memory channel network (MCN) processor is disposed on the substrate and coupled between the memory components and the host processor. The MCN processor is to execute an MCN driver to create an MCN-side interface, the MCN-side interface emulating a second Ethernet interface. The MCN processor is to assign the MCN-side interface a second MAC address and a second IP address, which identify the MCN processor as an MCN network node to the host processor.
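The pairing described in the abstract, where each side exposes an emulated Ethernet interface with its own MAC and IP so the MCN processor looks like an ordinary network node, can be sketched as follows. Interface names and addresses are invented for illustration; the patent does not specify them.

```python
from dataclasses import dataclass

@dataclass
class EmulatedEthIf:
    """A software stand-in for an emulated Ethernet interface."""
    name: str
    mac: str
    ip: str

def create_mcn_pair():
    """Create the host-side and MCN-side interfaces with distinct MAC and
    IP addresses, as the host driver and MCN driver would. Locally
    administered MACs and private IPs are assumptions for the sketch."""
    host = EmulatedEthIf("mcn-host0", "02:00:00:00:00:01", "10.0.0.1")
    mcn = EmulatedEthIf("mcn-dev0", "02:00:00:00:00:02", "10.0.0.2")
    return host, mcn
```

Because both endpoints present standard Ethernet semantics, unmodified network software on the host can address the near-memory processor without application changes, which is the "application-transparent" property in the title.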

    DATAFLOW-BASED GENERAL-PURPOSE PROCESSOR ARCHITECTURES

Publication No.: US20230333852A1

Publication Date: 2023-10-19

Application No.: US18301776

Application Date: 2023-04-17

    CPC classification number: G06F9/3012 G06F9/3005 G06F9/3816

Abstract: A dataflow-based general-purpose processor architecture and associated methods are disclosed. A circuit for the dataflow-based general-purpose processor architecture includes multiple processing elements (PEs) corresponding to multiple assigned central processing unit (CPU) instructions in program order, a register file, and multiple feedforward register lanes configured to map each of the multiple assigned CPU instructions on the multiple PEs to the register file or another PE of the multiple PEs to construct a hardware datapath corresponding to a dataflow graph of the multiple assigned CPU instructions. Other aspects, embodiments, and features are also claimed and described.
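The wiring rule in the abstract can be modeled simply: assign instructions to PEs in program order, and for each source operand, add a feedforward lane from the most recent producer PE, or from the register file if no earlier PE wrote that register. The instruction encoding below is a minimal assumption for the sketch.

```python
def build_dataflow(instrs):
    """Build the dataflow edges for instructions given as (dst_reg,
    [src_regs]) tuples in program order. Each edge is (source, pe_index),
    where source is either a producer PE index or an "RF:<reg>" tag
    meaning the value comes from the register file."""
    producer = {}   # register -> PE index of its most recent writer
    edges = []
    for pe, (dst, srcs) in enumerate(instrs):
        for reg in srcs:
            edges.append((producer.get(reg, f"RF:{reg}"), pe))
        producer[dst] = pe  # later consumers read this register from PE `pe`
    return edges
```

The resulting edge list is exactly the dataflow graph the abstract says the feedforward register lanes realize in hardware: values that would round-trip through the register file instead flow PE to PE.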

    DYNAMIC TRANSLATION AND OPTIMIZATION FOR SPATIAL ACCELERATION ARCHITECTURES

Publication No.: US20240385886A1

Publication Date: 2024-11-21

Application No.: US18663713

Application Date: 2024-05-14

Abstract: A method and circuit for dynamic translation and optimization for spatial acceleration are disclosed. The method includes: detecting a code region executing on a central processing unit (CPU) core for acceleration, the code region comprising a plurality of instructions; mapping, in hardware, the plurality of instructions in linear order to a planar grid for a spatial accelerator; configuring the spatial accelerator based on the planar grid; and transferring control to the spatial accelerator to execute the code region. Other aspects, embodiments, and features are also claimed and described.
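The "linear order to planar grid" mapping step can be sketched as a simple row-major placement: instructions are laid onto the accelerator's grid in program order, filling each row before starting the next. The fixed grid width is an assumption here; the patent does not fix one.

```python
def map_to_grid(instrs, width):
    """Place a detected code region's instructions, in linear (program)
    order, onto a planar grid of the given column width, row by row.
    A toy stand-in for the hardware mapping step in the abstract."""
    grid = []
    for row_start in range(0, len(instrs), width):
        grid.append(instrs[row_start:row_start + width])
    return grid
```

After this placement, the remaining steps in the abstract (configure the accelerator from the grid, then transfer control) would consume the grid as the accelerator's configuration.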

    APPLICATION-TRANSPARENT NEAR-MEMORY PROCESSING ARCHITECTURE WITH MEMORY CHANNEL NETWORK

Publication No.: US20230071386A1

Publication Date: 2023-03-09

Application No.: US17980685

Application Date: 2022-11-04

Abstract: A computing device includes a host processor to execute a host driver to create a host-side interface, the host-side interface emulating a first Ethernet interface, and to assign the host-side interface a first medium access control (MAC) address and a first Internet Protocol (IP) address. Memory components are disposed on a substrate. A memory channel network (MCN) processor is disposed on the substrate and coupled between the memory components and the host processor. The MCN processor is to execute an MCN driver to create an MCN-side interface, the MCN-side interface emulating a second Ethernet interface. The MCN processor is to assign the MCN-side interface a second MAC address and a second IP address, which identify the MCN processor as an MCN network node to the host processor.

    APPLICATION-TRANSPARENT NEAR-MEMORY PROCESSING ARCHITECTURE WITH MEMORY CHANNEL NETWORK

Publication No.: US20210209047A1

Publication Date: 2021-07-08

Application No.: US17250785

Application Date: 2019-09-06

Abstract: A system includes a printed circuit board (PCB) on which memory components are disposed, and a processor disposed on the PCB and coupled between the memory components and a host memory controller. The processor comprises a memory channel network (MCN) memory controller to handle memory requests associated with the memory components; a local buffer; and a core coupled to the MCN memory controller and the local buffer. The core executes an operating system (OS) running a network software layer and a distributed computing framework; and an MCN driver to: receive a network packet from the network software layer; store the network packet in the local buffer; and assert a transmit polling field of the local buffer to signal to the host memory controller that the network packet is available for transmission to a host computing device.
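The transmit path in this abstract, where the MCN driver stores a packet in a local buffer and asserts a polling field for the host memory controller to observe, can be modeled with a small flag-and-buffer sketch. The field and function names are assumptions for illustration.

```python
class LocalBuffer:
    """Toy model of the local buffer shared between the MCN driver
    and the host memory controller."""
    def __init__(self):
        self.data = None
        self.tx_poll = 0   # transmit polling field, asserted when data waits

def mcn_driver_send(buf: LocalBuffer, packet: bytes) -> None:
    """MCN driver side: store the packet, then assert the polling field."""
    buf.data = packet
    buf.tx_poll = 1

def host_poll(buf: LocalBuffer):
    """Host side: check the polling field; if asserted, take the packet
    and clear the buffer, otherwise report nothing to transmit."""
    if buf.tx_poll:
        packet, buf.data, buf.tx_poll = buf.data, None, 0
        return packet
    return None
```

Polling a field in a shared buffer lets the near-memory processor signal the host over the ordinary memory channel, without a dedicated interrupt line, which fits the memory-channel-network design of this patent family.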
