Physical hardware controller for provisioning remote storage services on processing devices

    Publication No.: US11748038B1

    Publication Date: 2023-09-05

    Application No.: US17676584

    Application Date: 2022-02-21

    Abstract: An apparatus comprises a first processing device, the first processing device comprising a physical hardware controller configured for coupling with a second processing device. The first processing device is configured to identify one or more remote storage service instances attached to the second processing device, and to initiate storage emulation modules for the remote storage service instances attached to the second processing device, the storage emulation modules emulating one or more physical storage devices configured for attachment to the second processing device. The first processing device is also configured to provision the remote storage service instances to the second processing device by processing input/output requests that are directed to one or more storage volumes of the emulated physical storage devices utilizing hardware resources of the physical hardware controller and providing processing results of the input/output requests to the second processing device via the emulated physical storage devices.
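The provisioning flow in this abstract can be sketched as follows. This is a minimal illustrative model, not the patented implementation: all class and method names are assumptions, and the "remote" storage is stood in for by an in-memory backing store.

```python
class StorageEmulationModule:
    """Emulates a physical storage device backed by a remote storage service."""
    def __init__(self, instance_id):
        self.instance_id = instance_id
        self.volumes = {}  # volume name -> bytearray backing store (toy stand-in)

    def handle_io(self, volume, offset, data=None, length=0):
        store = self.volumes.setdefault(volume, bytearray(1 << 20))
        if data is not None:                       # write request
            store[offset:offset + len(data)] = data
            return len(data)
        return bytes(store[offset:offset + length])  # read request

class HardwareController:
    """First processing device: provisions remote storage to the second device."""
    def __init__(self):
        self.modules = {}

    def identify_and_initiate(self, attached_instance_ids):
        # Identify attached remote storage service instances and initiate
        # one emulation module per instance.
        for iid in attached_instance_ids:
            self.modules[iid] = StorageEmulationModule(iid)

    def provision_io(self, iid, volume, offset, data=None, length=0):
        # Process the I/O request using the controller's own resources and
        # return the result to the second device via the emulated device.
        return self.modules[iid].handle_io(volume, offset, data, length)
```

A host-side request would then look like `ctrl.provision_io("nvme-remote-0", "vol1", 0, data=b"hello")`, with the host never seeing the remote service directly.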

    CHAIN SCHEDULE MANAGEMENT FOR MACHINE LEARNING MODEL-BASED PROCESSING IN COMPUTING ENVIRONMENT

    Publication No.: US20230273814A1

    Publication Date: 2023-08-31

    Application No.: US17681309

    Application Date: 2022-02-25

    Inventor: Victor Fong

    CPC classification number: G06F9/4881 G06N20/00

    Abstract: Techniques are disclosed for chain schedule management for machine learning model-based processing in a computing environment. For example, a method receives a machine learning model-based request and determines a scheduling decision for execution of the machine learning model-based request. Determination of the scheduling decision comprises utilizing a set of one or more scheduling algorithms and comparing results of at least a portion of the set of one or more scheduling algorithms to identify execution environments of a computing environment in which the machine learning model-based request is to be executed. The identified execution environments may then be managed to execute the machine learning model-based request.
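The chained decision the abstract describes — run several scheduling algorithms and compare their results — might be sketched as below. The algorithms, the environment fields, and the majority-vote comparison are illustrative assumptions, not the patent's method.

```python
def least_loaded(request, envs):
    # Scheduling algorithm 1: pick the least-loaded environment.
    return min(envs, key=lambda e: e["load"])

def most_memory(request, envs):
    # Scheduling algorithm 2: pick the environment with the most free memory.
    return max(envs, key=lambda e: e["free_mem"])

def decide(request, envs, algorithms):
    # Chain the algorithms and compare their picks; here the comparison
    # is a simple majority vote over the environments each one chose.
    votes = {}
    for algo in algorithms:
        pick = algo(request, envs)["name"]
        votes[pick] = votes.get(pick, 0) + 1
    return max(votes, key=votes.get)

envs = [
    {"name": "gpu-cluster", "load": 0.2, "free_mem": 64},
    {"name": "cpu-cluster", "load": 0.8, "free_mem": 128},
]
choice = decide({"model": "resnet"}, envs, [least_loaded, most_memory])
```

The identified environment (`choice`) would then be managed to execute the machine learning model-based request.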

    PHYSICAL HARDWARE CONTROLLER FOR PROVISIONING SECURITY SERVICES ON PROCESSING DEVICES

    Publication No.: US20230269225A1

    Publication Date: 2023-08-24

    Application No.: US17676598

    Application Date: 2022-02-21

    CPC classification number: H04L63/0236 H04L63/101 H04L63/1416

    Abstract: An apparatus comprises a first processing device, the first processing device comprising a physical hardware controller configured for coupling with a second processing device. The first processing device is configured to identify remote security service instances attached to the second processing device and to initiate, at the first processing device, one or more network emulation modules for the remote security service instances attached to the second processing device that emulate physical network interface devices configured for attachment to the second processing device. The first processing device is also configured to provision the remote security service instances to the second processing device by utilizing hardware resources of the physical hardware controller to analyze network traffic associated with the second processing device, to modify at least a portion of the network traffic, and to provide the modified network traffic to the second processing device via the emulated physical network interface devices.
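The analyze-modify-provide sequence in this abstract can be illustrated with a toy packet filter. The blocklist rule and the `scanned` tag are assumptions chosen only to show the shape of the flow; the patent does not specify these.

```python
BLOCKLIST = {"10.0.0.66"}  # illustrative blocked source address

def analyze_and_modify(packets):
    """Runs on the hardware controller, not on the second processing device."""
    delivered = []
    for pkt in packets:
        if pkt["src"] in BLOCKLIST:   # analyze: drop traffic from blocked sources
            continue
        pkt = dict(pkt, scanned=True) # modify: mark the packet as inspected
        delivered.append(pkt)
    return delivered                  # provided via the emulated network interface
```

The second processing device only ever sees the filtered, tagged traffic arriving on what appears to be a physical network interface.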

    Chain schedule management for machine learning model-based processing in computing environment

    Publication No.: US12217086B2

    Publication Date: 2025-02-04

    Application No.: US17681309

    Application Date: 2022-02-25

    Inventor: Victor Fong

    Abstract: Techniques are disclosed for chain schedule management for machine learning model-based processing in a computing environment. For example, a method receives a machine learning model-based request and determines a scheduling decision for execution of the machine learning model-based request. Determination of the scheduling decision comprises utilizing a set of one or more scheduling algorithms and comparing results of at least a portion of the set of one or more scheduling algorithms to identify execution environments of a computing environment in which the machine learning model-based request is to be executed. The identified execution environments may then be managed to execute the machine learning model-based request.

    ORCHESTRATION OF QUBO JOBS BETWEEN GATE-BASED QUANTUM COMPUTERS AND QUANTUM ANNEALERS

    Publication No.: US20240394586A1

    Publication Date: 2024-11-28

    Application No.: US18321526

    Application Date: 2023-05-22

    Abstract: One example method includes obtaining information about a first pre-defined implementation of a QUBO (quadratic unconstrained binary optimization) problem configured for execution on a gate-based device, obtaining information about a second pre-defined implementation of the QUBO problem configured for execution on an annealing device, receiving information about a QUBO job that is to be executed, identifying first hardware and second hardware that are available to execute the QUBO job, and the first hardware is different from the second hardware, using the information about the first and second pre-defined implementations of the QUBO problem to generate respective predictions concerning performance of the QUBO job on the first hardware and the second hardware, comparing the predictions, and based on the comparing, selecting one of the first hardware and the second hardware for execution of the QUBO job.
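The selection step can be sketched as follows. The per-variable cost model is a deliberately simple assumption standing in for whatever predictions the pre-defined implementation profiles would actually produce.

```python
def predict_runtime(profile, job):
    # Toy prediction: runtime grows with problem size, scaled by a
    # cost factor taken from the pre-defined implementation profile.
    return profile["cost_per_var"] * job["num_vars"]

def select_hardware(job, gate_profile, anneal_profile):
    # Generate a prediction per hardware type, compare, and select.
    predictions = {
        "gate-based": predict_runtime(gate_profile, job),
        "annealer": predict_runtime(anneal_profile, job),
    }
    return min(predictions, key=predictions.get)

choice = select_hardware(
    {"num_vars": 500},
    gate_profile={"cost_per_var": 2.0},    # e.g. a QAOA-style implementation
    anneal_profile={"cost_per_var": 0.5},  # e.g. an annealing embedding
)
```

Under this toy model the annealer's prediction (250) beats the gate-based one (1000), so the QUBO job is dispatched to the annealing device.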

    DYNAMIC CHECKPOINT FOR SIMULATION
    Publication No.: US20240160994A1

    Publication Date: 2024-05-16

    Application No.: US18345364

    Application Date: 2023-06-30

    CPC classification number: G06N10/80

    Abstract: One example method includes simulating execution of a quantum circuit on a classical computing infrastructure, after one or more times that a gate of the quantum circuit is executed as part of the simulating, creating, after execution of that gate, a hash of a state vector that captures a state of the execution of the quantum circuit, storing the hash, and respective associated data structure, in storage, then as part of a simulated execution process for a new quantum circuit, calculating a hash of each gate across the new quantum circuit, looking up, in the storage, a hash of a state vector associated with execution of one of the gates of the new quantum circuit, and restoring, from storage, the latest state vector associated with the one gate of the new quantum circuit.
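The checkpoint-and-reuse idea can be sketched as hashing the gate prefix executed so far and keying stored state vectors by that hash, so a new circuit can resume from its longest already-simulated prefix. The gate encoding and storage layout below are assumptions for illustration only.

```python
import hashlib

def prefix_hash(gates):
    # Hash the sequence of gates executed so far.
    return hashlib.sha256("|".join(gates).encode()).hexdigest()

def simulate_with_checkpoints(circuit, storage, apply_gate, initial_state):
    state, start = initial_state, 0
    # Look up the longest prefix of the new circuit already in storage.
    for i in range(len(circuit), 0, -1):
        h = prefix_hash(circuit[:i])
        if h in storage:
            state, start = storage[h], i   # restore the checkpointed state
            break
    # Simulate only the remaining gates, checkpointing after each one.
    for i in range(start, len(circuit)):
        state = apply_gate(state, circuit[i])
        storage[prefix_hash(circuit[:i + 1])] = state
    return state
```

With a shared `storage` dict, simulating `["H", "X", "Z"]` after `["H", "X"]` re-executes only the final `Z` gate rather than the whole circuit.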

    Processing unit virtualization with scalable over-provisioning in an information processing system

    Publication No.: US11900174B2

    Publication Date: 2024-02-13

    Application No.: US17846309

    Application Date: 2022-06-22

    CPC classification number: G06F9/5077 G06F9/3877 G06F9/505

    Abstract: Techniques are disclosed for processing unit virtualization with scalable over-provisioning in an information processing system. For example, a method accesses a data structure that maps a correspondence between a plurality of virtualized processing units and a plurality of abstracted processing units, wherein the plurality of abstracted processing units are configured to decouple an allocation decision from the plurality of virtualized processing units, and further wherein at least one of the virtualized processing units is mapped to multiple ones of the abstracted processing units. The method allocates one or more virtualized processing units to execute a given application by allocating one or more abstracted processing units identified from the data structure. The method also enables migration of one or more virtualized processing units across the system. Examples of processing units with which scalable over-provisioning functionality can be applied include, but are not limited to, accelerators such as GPUs.
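The mapping data structure the abstract describes — one virtualized unit fanning out to multiple abstracted units, with allocation made on the abstracted layer — might look like the sketch below. Class and method names are illustrative assumptions.

```python
class AbstractedPool:
    """Maps virtualized processing units (e.g. vGPUs) to abstracted units."""
    def __init__(self):
        self.mapping = {}          # virtualized unit -> list of abstracted units
        self.free_abstracted = set()

    def register(self, vpu, abstracted_units):
        # One virtualized unit may map to multiple abstracted units,
        # which is what enables over-provisioning.
        self.mapping[vpu] = list(abstracted_units)
        self.free_abstracted.update(abstracted_units)

    def allocate(self, app, count):
        # The allocation decision operates on abstracted units and is
        # decoupled from any particular virtualized unit.
        if len(self.free_abstracted) < count:
            raise RuntimeError("over-provisioned pool exhausted")
        return {app: [self.free_abstracted.pop() for _ in range(count)]}
```

Migration then amounts to rewriting entries in `mapping` without disturbing the application's view of its allocated units.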

    EDGE UTILITY SYSTEM WITH DYNAMIC AGGREGATION OF EDGE RESOURCES ACROSS MULTIPLE EDGE COMPUTING SITES

    Publication No.: US20230275847A1

    Publication Date: 2023-08-31

    Application No.: US17682077

    Application Date: 2022-02-28

    CPC classification number: H04L47/80

    Abstract: A method includes receiving inputs for respective users in an edge utility system comprising edge and core computing sites, with a first one of the inputs for a first user characterizing edge resources requested by that user for executing at least a portion of a workload of that user, and a second one of the inputs for a second user characterizing edge resources available from that user for executing at least a portion of a workload of another user. The method includes populating one or more data structures based at least in part on the received inputs, aggregating edge resources of multiple ones of the edge computing sites into an edge network based at least in part on the populated data structures, and utilizing at least a portion of the aggregated edge resources of the edge network to execute at least a portion of a workload of a particular user.
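The populate-aggregate-utilize sequence can be sketched with one resource dimension (CPUs). The input schema and the first-come assignment policy are assumptions chosen for brevity, not the patent's scheme.

```python
def populate(inputs):
    # Populate data structures from the received inputs: some users
    # request edge resources, others offer resources they have available.
    requests, offers = [], []
    for entry in inputs:
        (requests if entry["kind"] == "request" else offers).append(entry)
    return requests, offers

def aggregate_and_assign(inputs):
    requests, offers = populate(inputs)
    # Aggregate offered edge resources across sites into one edge network.
    pool = sum(o["cpus"] for o in offers)
    assignments = {}
    for req in requests:
        take = min(req["cpus"], pool)  # utilize the aggregated resources
        pool -= take
        assignments[req["user"]] = take
    return assignments
```

A workload requesting 4 CPUs against an 8-CPU aggregate from other sites would be granted all 4, with 4 left in the shared pool for the next request.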

    SCHEDULE MANAGEMENT FOR MACHINE LEARNING MODEL-BASED PROCESSING IN COMPUTING ENVIRONMENT

    Publication No.: US20230273813A1

    Publication Date: 2023-08-31

    Application No.: US17681299

    Application Date: 2022-02-25

    CPC classification number: G06F9/4881 G06N5/04

    Abstract: Techniques are disclosed for schedule management for machine learning model-based processing in a computing environment. For example, a method receives a machine learning model-based request and determines a scheduling decision for execution of the machine learning model-based request. Determination of the scheduling decision comprises identifying, based on one or more metrics, at least one cluster from a plurality of clusters as an execution environment in which the machine learning model-based request is to be executed. The machine learning model-based request may then be forwarded to the at least one identified cluster for execution.
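The metric-based cluster identification might be sketched as scoring each candidate cluster and forwarding the request to the best one. The specific metrics and their weighting below are illustrative assumptions.

```python
def select_cluster(request, clusters, metrics):
    # Identify, based on one or more metrics, the cluster to serve as
    # the execution environment; lower combined score is better.
    def score(cluster):
        return sum(metric(cluster) for metric in metrics)
    return min(clusters, key=score)

clusters = [
    {"name": "east", "load": 0.9, "latency_ms": 10},
    {"name": "west", "load": 0.3, "latency_ms": 30},
]
metrics = [lambda c: c["load"], lambda c: c["latency_ms"] / 100]
target = select_cluster({"model": "bert"}, clusters, metrics)
# east scores 0.9 + 0.1 = 1.0; west scores 0.3 + 0.3 = 0.6, so west is chosen
```

The request would then be forwarded to `target` for execution.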
