EFFICIENT TRICKLE UPDATES IN LARGE DATABASES USING PERSISTENT MEMORY

    Publication number: US20190114337A1

    Publication date: 2019-04-18

    Application number: US15786829

    Application date: 2017-10-18

    Abstract: Systems, methods, and computer-readable media for storing data in a data storage system using a child table. In some examples, a trickle update to first data in a parent table is received at a data storage system storing the first data in the parent table. A child table storing second data can be created in persistent memory for the parent table. Subsequently, the trickle update can be stored in the child table as part of the second data. The second data, including the trickle update, can then be used to satisfy, at least in part, one or more data queries against the parent table.
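
    The child-table mechanism described above can be sketched in a few lines. This is a hypothetical in-memory model, not the patented implementation: the names ParentTable, ChildTable, and query are assumptions, and actual persistent-memory placement is elided.

```python
class ParentTable:
    """Bulk-loaded, read-optimized base data (the 'first data')."""
    def __init__(self, rows):
        self.rows = dict(rows)

class ChildTable:
    """Holds trickle updates; the abstract places this in persistent memory."""
    def __init__(self):
        self.rows = {}

    def apply_update(self, key, value):
        # A trickle update lands in the child table, not the parent.
        self.rows[key] = value

def query(parent, child, key):
    # The child table is consulted first, so recent trickle updates
    # shadow stale values in the parent table.
    if key in child.rows:
        return child.rows[key]
    return parent.rows.get(key)
```

    Because the child table is consulted first, a trickle update shadows the stale value without rewriting the parent table's bulk data.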

    FPGA ACCELERATION FOR SERVERLESS COMPUTING (invention application)

    Publication number: US20190026150A1

    Publication date: 2019-01-24

    Application number: US15655648

    Application date: 2017-07-20

    Abstract: In one embodiment, a method for FPGA-accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function cannot be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement.
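
    The two-phase placement can be illustrated with a small sketch. Everything here is an assumption for illustration: the Host fields, the choice of "most free CPU" as the initial-placement heuristic, and modeling FPGA capability as a set of kernel names.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_free: int
    fpga_kernels: set = field(default_factory=set)  # kernels this host's FPGAs accelerate

def place_task(task_functions, hosts):
    """task_functions: {function_name: fpga_kernel_or_None}."""
    # Phase 1: initial placement of the whole task on a first host
    # (here, simply the host with the most free CPU).
    first = max(hosts, key=lambda h: h.cpu_free)
    placement = {fn: first.name for fn in task_functions}
    # Phase 2: supplemental placement for any function the first host
    # cannot accelerate with its local FPGAs.
    for fn, kernel in task_functions.items():
        if kernel and kernel not in first.fpga_kernels:
            for h in hosts:
                if kernel in h.fpga_kernels:
                    placement[fn] = h.name
                    break
    return placement
```

    Functions with no FPGA kernel stay on the initial host; only the functions the first host cannot accelerate are moved.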

    INTELLIGENT LAYOUT OF COMPOSITE DATA STRUCTURES IN TIERED STORAGE

    Publication number: US20180341411A1

    Publication date: 2018-11-29

    Application number: US15811318

    Application date: 2017-11-13

    Abstract: Aspects of the subject technology relate to ways to determine the optimal storage of data structures in a hierarchy of memory types. In some aspects, a process of the technology can include steps for determining a latency cost for each of a plurality of fields in an object, identifying at least one field having a latency cost that exceeds a predetermined threshold, and determining whether to store the at least one field in a first memory device or a second memory device based on the latency cost. Systems and machine-readable media are also provided.
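
    The per-field placement decision reduces to a threshold test. The sketch below is an assumption-laden illustration: the tier names and the rule "cost above threshold goes to the faster device" are stand-ins for whatever policy the claimed process actually applies.

```python
def place_fields(field_latency_costs, threshold,
                 fast="persistent_memory", slow="ssd"):
    """Map each field of an object to a memory tier.

    Fields whose measured latency cost exceeds the threshold are
    promoted to the faster device; the rest stay on the slower tier.
    """
    return {name: (fast if cost > threshold else slow)
            for name, cost in field_latency_costs.items()}
```

    A hot field (high latency cost) lands in the fast tier while cold fields of the same composite object stay on cheaper storage, which is the layout split the abstract describes.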

    DATA-DRIVEN CEPH PERFORMANCE OPTIMIZATIONS (invention application; pending, published)

    Publication number: US20160349993A1

    Publication date: 2016-12-01

    Application number: US14726182

    Application date: 2015-05-29

    Abstract: The present disclosure describes, among other things, a method for managing and optimizing distributed object storage on a plurality of storage devices of a storage cluster. The method comprises computing, by a states engine, respective scores associated with the storage devices based on a set of characteristics associated with each storage device and a set of weights corresponding to the set of characteristics, and computing, by the states engine, respective bucket weights for leaf nodes and parent node(s) of a hierarchical map of the storage cluster based on the respective scores associated with the storage devices, wherein each leaf node represents a corresponding storage device and each parent node aggregates one or more storage devices.
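
    The score and bucket-weight computations can be sketched directly from the abstract. This is a minimal model, not Ceph's CRUSH implementation: the characteristic names and the weighted-sum scoring are illustrative assumptions; only the structure (leaf weight = device score, parent weight = aggregate of children) follows the text.

```python
def device_score(characteristics, weights):
    # Weighted sum over the device's characteristics (e.g., iops, capacity).
    return sum(weights[k] * characteristics[k] for k in weights)

def bucket_weights(tree, scores):
    """tree maps each parent node to its children; leaves appear in `scores`.

    Leaf weight is the device's score; a parent's bucket weight
    aggregates the weights of the devices beneath it.
    """
    def weight(node):
        if node in scores:                          # leaf: one storage device
            return scores[node]
        return sum(weight(c) for c in tree[node])   # parent aggregates children
    return {node: weight(node) for node in list(tree) + list(scores)}
```

    Feeding measured characteristics through the score function and propagating the scores up the hierarchical map is what lets placement decisions favor better-performing devices.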

    Flow Based Network Service Insertion (invention application; granted)

    Publication number: US20150063102A1

    Publication date: 2015-03-05

    Application number: US14014742

    Application date: 2013-08-30

    Abstract: Techniques are provided to generate and store a network graph database comprising information that indicates a service node topology, and virtual or physical network services available at each node in a network. A service request is received for services to be performed on packets traversing the network between at least first and second endpoints. A subset of the network graph database is determined that can provide the services requested in the service request. A service chain and a service chain identifier are generated for the service based on the network graph database subset. A flow path is established through the service chain by flow programming network paths between the first and second endpoints using the service chain identifier.
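
    The chain-construction step can be sketched as follows. The graph representation and the greedy one-node-per-service selection are simplifying assumptions standing in for the subgraph determination the abstract describes; the chain-id counter stands in for whatever identifier scheme the flow programming uses.

```python
from itertools import count

_next_chain_id = count(1)  # illustrative stand-in for a real identifier scheme

def build_service_chain(graph, requested_services):
    """graph: {service_node: set_of_offered_services}.

    For each requested service, pick a node that offers it (a simple
    greedy stand-in for selecting a subset of the network graph
    database), then mint a service chain identifier for the chain.
    """
    chain = []
    for svc in requested_services:
        node = next((n for n, offered in graph.items() if svc in offered), None)
        if node is None:
            raise LookupError(f"no service node offers {svc!r}")
        chain.append(node)
    return next(_next_chain_id), chain
```

    The returned identifier is what the flow-programming step would then stamp onto paths between the two endpoints so packets traverse the chain in order.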

    ESTABLISHING TRANSLATION FOR VIRTUAL MACHINES IN A NETWORK ENVIRONMENT (invention application; granted)

    Publication number: US20140280997A1

    Publication date: 2014-09-18

    Application number: US13830861

    Application date: 2013-03-14

    CPC classification number: H04L69/03 G06F9/541

    Abstract: A method, apparatus, computer-readable medium, and system are disclosed that include receiving an indication identifying a tunnel between a first virtual machine, associated with a first protocol, and a second virtual machine, associated with a second protocol; determining that the first protocol is different from the second protocol; determining at least one translation directive that specifies a translation between the first protocol and the second protocol for the tunnel; and causing establishment of a translator based, at least in part, on the at least one translation directive.
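
    The control flow reduces to: detect a protocol mismatch, look up a directive, establish a translator. The sketch below is purely illustrative: the protocol names, the directive table, and modeling "establishment" as returning a descriptor are all assumptions, not the claimed implementation.

```python
# Hypothetical directive table; real directives would describe header
# rewriting between the two tunnel encapsulation protocols.
TRANSLATION_DIRECTIVES = {
    ("nvgre", "vxlan"): "rewrite nvgre<->vxlan headers",
    ("geneve", "vxlan"): "rewrite geneve<->vxlan headers",
}

def establish_translator(first_protocol, second_protocol):
    if first_protocol == second_protocol:
        return None                      # same protocol on both ends: no translator
    key = tuple(sorted((first_protocol, second_protocol)))
    directive = TRANSLATION_DIRECTIVES.get(key)
    if directive is None:
        raise LookupError(f"no translation directive for {key}")
    # Establishing the translator is modeled here as returning its descriptor.
    return {"directive": directive, "tunnel": (first_protocol, second_protocol)}
```

    Keeping the directive lookup symmetric (sorted key) reflects that one directive covers translation in both directions across the tunnel.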

    FPGA acceleration for serverless computing

    Publication number: US11709704B2

    Publication date: 2023-07-25

    Application number: US17408259

    Application date: 2021-08-20

    CPC classification number: G06F9/4881 G06F9/5038 G06F9/5066 G06F9/5088

    Abstract: In one embodiment, a method for FPGA-accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function cannot be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement.

    SYSTEMS AND METHODS FOR INTEGRATION OF HUMAN FEEDBACK INTO MACHINE LEARNING BASED NETWORK MANAGEMENT TOOL

    Publication number: US20210312324A1

    Publication date: 2021-10-07

    Application number: US16842334

    Application date: 2020-04-07

    Abstract: The present disclosure is directed to systems and methods for providing machine learning tools such as Kubeflow and similar ML platforms with human-in-the-loop capabilities for optimizing the resulting machine learning models. In one aspect, a machine learning integration tool includes memory having computer-readable instructions stored therein and one or more processors configured to execute the computer-readable instructions to: execute a workflow associated with a machine learning process; determine, during execution of the machine learning process, that non-automated feedback is required; generate a virtual input unit for receiving the non-automated feedback; modify raw data used for the machine learning process with the non-automated feedback to yield updated data; and complete the machine learning process using the updated data.
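
    The human-in-the-loop step can be sketched as a pause-and-merge in the workflow. All names below are illustrative assumptions: collect_feedback stands in for the "virtual input unit", and train is a placeholder for the real training step of the ML process.

```python
def run_workflow(raw_data, needs_feedback, collect_feedback):
    """Execute a workflow that may pause for non-automated feedback.

    If the workflow decides human feedback is required, it collects
    corrections (standing in for the 'virtual input unit') and merges
    them into the raw data before the final training step.
    """
    data = list(raw_data)
    if needs_feedback(data):
        corrections = collect_feedback(data)             # human-supplied fixes
        data = [corrections.get(item, item) for item in data]  # yield updated data
    return train(data)

def train(data):
    # Placeholder for the actual model-training step of the ML process.
    return {"status": "trained", "n_samples": len(data)}
```

    The key design point the abstract describes is that the feedback modifies the raw data itself, so the rest of the pipeline runs unchanged on the updated data.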
