Nodal Work Assignments in Cloud Computing

    Publication No.: US20250071173A1

    Publication Date: 2025-02-27

    Application No.: US18948668

    Application Date: 2024-11-15

    Abstract: Nodal work assignments efficiently distribute server work items, such as storing redundant copies of electronic data. A cloud computing network establishes a policy that governs how and where the redundant copies are stored across cloud computing nodes (such as by region, zone, and cluster targets). The cloud computing network repeatedly or continuously re-evaluates the work assignments based on replication assignment skews and/or leadership penalties. The nodal work assignments thus minimize hardware and software operations, network traffic, and electrical energy consumption.
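
The skew-driven re-evaluation described in this abstract can be illustrated with a minimal Python sketch. The function and data-structure names (`replication_skew`, `rebalance`, a dict mapping work items to node ids) are assumptions for illustration only, not taken from the patent:

```python
from collections import Counter

def replication_skew(assignments):
    """Spread between the most- and least-loaded nodes that hold items.

    assignments: dict mapping work item -> node id (hypothetical shape).
    """
    loads = Counter(assignments.values())
    return max(loads.values()) - min(loads.values())

def rebalance(assignments, nodes):
    """Greedily move items from the most-loaded node to the least-loaded
    node until the assignment skew can no longer be reduced."""
    assignments = dict(assignments)
    while True:
        loads = Counter({n: 0 for n in nodes})
        loads.update(assignments.values())
        hot = max(loads, key=loads.get)
        cold = min(loads, key=loads.get)
        if loads[hot] - loads[cold] <= 1:
            return assignments
        # Move one work item from the hot node to the cold node.
        item = next(i for i, n in assignments.items() if n == hot)
        assignments[item] = cold
```

A real system would fold the abstract's leadership penalties and region/zone/cluster targets into the cost being minimized; this sketch only captures the skew term.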

    MANAGING USE OF HARDWARE BUNDLES IN PRODUCTION ENVIRONMENTS

    Publication No.: US20250068477A1

    Publication Date: 2025-02-27

    Application No.: US18455833

    Application Date: 2023-08-25

    Abstract: Methods, systems, and devices for providing computer implemented services using managed systems are disclosed. To provide the computer implemented services, hardware components may be bundled into hardware bundles. The hardware bundles may be used to satisfy subscriptions for the services, and limit use of the hardware bundles when subscription limits are reached. The hardware bundles may include direct management hardware components and indirect management hardware components. Limits on use of the hardware bundles may be enforced by analyzing workloads performed by the hardware bundles.
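
The enforcement idea in this abstract, tracking workload use against a subscription limit and rejecting further use once the limit is reached, can be sketched as follows. The class and method names are hypothetical stand-ins, not terms from the patent:

```python
class HardwareBundle:
    """Illustrative bundle that tracks accumulated workload cost
    against a subscription limit."""

    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    def submit(self, workload_cost):
        """Accept a workload only while the subscription limit allows it."""
        if self.used + workload_cost > self.limit:
            return False  # limit reached: further use is rejected
        self.used += workload_cost
        return True
```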

    ACCOUNT VENDING

    Publication No.: US20250068475A1

    Publication Date: 2025-02-27

    Application No.: US18456274

    Application Date: 2023-08-25

    Abstract: A computing device may determine a first set of applications that facilitate functions for a requested cloud account. A sequential execution order may be determined for a second set of applications to be executed after a respective set of dependencies for each application of the second set of applications and at least one application of the first set of applications have been executed. A parallel execution order may be determined for remaining applications of the first set of applications that are excluded from the sequential execution order. The first and second sets of applications may be executed according to the sequential and parallel execution orders and a notification may be sent to the user device that facilitates access to the cloud account based on an indication that the first and second sets of applications have been successfully executed.
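
The ordering described here, running dependency-free applications in parallel and topologically ordering the dependent ones, can be sketched with Python's standard `graphlib` module. The `plan_execution` name and the dependency-dict shape are assumptions for illustration:

```python
from graphlib import TopologicalSorter

def plan_execution(deps):
    """Split applications into a parallel set (no dependencies) and a
    sequential order (topologically sorted by dependency).

    deps: dict mapping application -> set of prerequisite applications.
    """
    parallel = [app for app, d in deps.items() if not d]
    dependent = {app for app, d in deps.items() if d}
    sequential = [app for app in TopologicalSorter(deps).static_order()
                  if app in dependent]
    return parallel, sequential
```

For example, with two independent account-setup applications and two that depend on them, the independent pair lands in the parallel set while the dependent pair is ordered so each runs only after its prerequisites.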

    DISTRIBUTED REGISTER FILE CACHE TO REDUCE L1 BANDWIDTH REQUIREMENTS

    Publication No.: US20250068473A1

    Publication Date: 2025-02-27

    Application No.: US18453867

    Application Date: 2023-08-22

    Abstract: Described herein is a graphics processor comprising a graphics processing cluster coupled with a memory interface, the graphics processing cluster including a plurality of processing resources. A processing resource of the plurality of processing resources includes a register file including a first plurality of registers associated with a first hardware thread of a plurality of hardware threads of the processing resource and a second plurality of registers associated with a second hardware thread of the plurality of hardware threads, and first circuitry configured to facilitate access to memory on behalf of the plurality of hardware threads and to store metadata for memory access requests from the plurality of hardware threads.

    MANAGING USE OF HARDWARE BUNDLES USING CONTROL MECHANISMS

    Publication No.: US20250068470A1

    Publication Date: 2025-02-27

    Application No.: US18455848

    Application Date: 2023-08-25

    Abstract: Methods, systems, and devices for providing computer implemented services using managed systems are disclosed. To provide the computer implemented services, hardware components may be bundled into hardware bundles. The hardware bundles may be used to satisfy subscriptions for the services, and limit use of the hardware bundles when subscription limits are reached. The hardware bundles may include direct management hardware components and indirect management hardware components. Limits on use of the hardware bundles may be enforced by a variety of control mechanisms.

    Graph neural network training methods and systems

    Publication No.: US12235930B2

    Publication Date: 2025-02-25

    Application No.: US17574428

    Application Date: 2022-01-12

    Abstract: Methods, systems, and apparatus for training a graph neural network. An example method includes obtaining a complete graph; dividing the complete graph into a plurality of subgraphs; obtaining a training graph to participate in graph neural network training based on selecting at least one subgraph from the plurality of subgraphs; obtaining, based on the training graph, a node feature vector of each node in the training graph; obtaining a node fusion vector of each current node in the training graph; determining a loss function based on node labels and the node fusion vectors in the training graph; and iteratively training the graph neural network to update parameter values of the graph neural network based on optimizing the loss function.
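
Two steps from this abstract, dividing the complete graph into subgraphs and computing a per-node fusion vector, can be sketched minimally in Python. The round-robin partition and the mean-of-neighbors aggregation are simplifying assumptions standing in for whatever scheme the patent actually claims:

```python
def partition(nodes, k):
    """Divide the complete graph's nodes into k subgraphs (round-robin)."""
    subgraphs = [[] for _ in range(k)]
    for i, node in enumerate(nodes):
        subgraphs[i % k].append(node)
    return subgraphs

def fuse(node, features, edges):
    """Node fusion vector: average of the node's own feature vector and
    its neighbors' feature vectors (one aggregation step)."""
    neighbors = [features[dst] for src, dst in edges if src == node]
    vecs = [features[node]] + neighbors
    return [sum(component) / len(vecs) for component in zip(*vecs)]
```

Training would then evaluate a loss over node labels and fusion vectors within each sampled training graph and update the network's parameters iteratively, as the abstract describes.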

    COMMUNICATION AND SYNCHRONIZATION WITH EDGE SYSTEMS

    Publication No.: US20250060999A1

    Publication Date: 2025-02-20

    Application No.: US18672892

    Application Date: 2024-05-23

    Applicant: Nutanix, Inc.

    Abstract: A scalable Internet of Things (IoT) system may include multiple instances of an IoT manager, each instance respectively configured to connect to a respective edge system of multiple edge systems. The IoT system may further include a containerized system configured to allow any instance of the IoT manager to deploy data pipelines to any edge system of the multiple edge systems in delta communications. Any instance of the IoT manager may send a change message to any edge system via a publish/subscribe notification method. In some examples, a centralized IoT manager may form a secure communication with an edge system, synchronize an object model with an edge object model for the edge system, and maintain the edge system using delta change communications. The IoT system may facilitate any instance of the IoT manager to subscribe a communication channel with an associated edge system for receiving update notification.
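
The publish/subscribe delta-change flow in this abstract can be sketched with two small classes. The names `IoTManager`, `EdgeSystem`, and `apply_delta` are illustrative assumptions, not Nutanix API names:

```python
class EdgeSystem:
    """Edge system holding an object model synchronized via deltas."""

    def __init__(self):
        self.model = {}

    def apply_delta(self, delta):
        # Merge only the changed keys, rather than resending the
        # whole object model.
        self.model.update(delta)

class IoTManager:
    """Manager instance that publishes change messages to every edge
    system subscribed on a channel."""

    def __init__(self):
        self.channels = {}

    def subscribe(self, channel, edge):
        self.channels.setdefault(channel, []).append(edge)

    def publish_delta(self, channel, delta):
        for edge in self.channels.get(channel, []):
            edge.apply_delta(delta)
```

Sending only deltas over a subscribed channel is what lets any manager instance maintain any edge system without transferring the full model on each change.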

    Graphics processor with non-blocking concurrent architecture

    Publication No.: US12229865B2

    Publication Date: 2025-02-18

    Application No.: US18133088

    Application Date: 2023-04-11

    Abstract: In some aspects, systems and methods provide for forming groupings of a plurality of independently-specified computation workloads, such as graphics processing workloads, and in a specific example, ray tracing workloads. The workloads include a scheduling key, which is one basis on which the groupings can be formed. Workloads grouped together can all execute from the same source of instructions, on one or more different private data elements. Such workloads can recursively instantiate other workloads that reference the same private data elements. In some examples, the scheduling key can be used to identify a data element to be used by all the workloads of a grouping. Memory conflicts to private data elements are handled through scheduling of non-conflicted workloads or specific instructions and/or deferring conflicted workloads instead of locking memory locations.
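
The grouping step described here, collecting workloads that share a scheduling key so each group executes from one instruction source over many private data elements, amounts to a group-by. A minimal sketch, with the workload field names assumed for illustration:

```python
from collections import defaultdict

def group_by_scheduling_key(workloads):
    """Group independently specified workloads by their scheduling key.

    workloads: iterable of dicts with 'key' (scheduling key) and 'data'
    (reference to a private data element) -- hypothetical shapes.
    """
    groups = defaultdict(list)
    for workload in workloads:
        groups[workload['key']].append(workload['data'])
    return dict(groups)
```

Each resulting group can then be dispatched together, with memory conflicts among private data elements handled by deferring conflicted workloads rather than locking, as the abstract notes.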

    Disaggregated computing for distributed confidential computing environment

    Publication No.: US12229605B2

    Publication Date: 2025-02-18

    Application No.: US18538171

    Application Date: 2023-12-13

    Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes one or more processors to facilitate receiving a manifest corresponding to graph nodes representing regions of memory of a remote client machine, the graph nodes corresponding to a command buffer and to associated data structures and kernels of the command buffer used to initialize a hardware accelerator and execute the kernels, and the manifest indicating a destination memory location of each of the graph nodes and dependencies of each of the graph nodes; identifying, based on the manifest, the command buffer and the associated data structures to copy to the host memory; identifying, based on the manifest, the kernels to copy to local memory of the hardware accelerator; and patching addresses in the command buffer copied to the host memory with updated addresses of corresponding locations in the host memory.
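
The final step in this abstract, patching client-side addresses in the copied command buffer with their new host-memory locations, can be sketched as a relocation pass. Treating the command buffer as a flat list of addresses is a simplifying assumption for illustration:

```python
def patch_addresses(command_buffer, relocation):
    """Replace each remote-client address in the copied command buffer
    with its corresponding host-memory address.

    relocation: dict mapping old address -> new host address, as would
    be derived from the manifest's destination memory locations.
    """
    return [relocation.get(addr, addr) for addr in command_buffer]
```

Addresses without a relocation entry pass through unchanged; in practice the manifest's per-node destinations and dependencies would determine which entries need patching.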
