    Memory barrier elision for multi-threaded workloads

    Publication (Announcement) No.: US12299494B2

    Publication (Announcement) Date: 2025-05-13

    Application No.: US17947435

    Application Date: 2022-09-19

    Applicant: Red Hat, Inc.

    Abstract: A system includes a memory, at least one physical processor in communication with the memory, and a plurality of threads executing on the at least one physical processor. A first thread of the plurality of threads is configured to execute a plurality of instructions that includes a restartable sequence. Responsive to a different second thread in communication with the first thread being pre-empted while the first thread is executing the restartable sequence, the first thread is configured to restart the restartable sequence prior to reaching a memory barrier.
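
    A minimal sketch of the control flow described in the abstract, written in Python because the listing gives no implementation: if a peer thread reports that it was pre-empted while the worker was inside the restartable sequence, the worker discards its staged work and restarts the sequence rather than relying on a memory barrier at commit. The preemption_epoch counter, the lock, and the list-based commit are assumptions made for the example, not the patented mechanism.

        import threading

        preemption_epoch = 0           # bumped whenever the peer thread is pre-empted
        epoch_lock = threading.Lock()  # protects the epoch in this toy model

        def note_peer_preemption():
            """Called on behalf of the second thread when it is pre-empted."""
            global preemption_epoch
            with epoch_lock:
                preemption_epoch += 1

        def restartable_sequence(shared, value):
            """Append `value` to `shared`, restarting if the peer was pre-empted.

            The body between the epoch snapshot and the commit plays the role of
            the restartable sequence; no memory barrier is issued on commit.
            """
            while True:
                with epoch_lock:
                    start_epoch = preemption_epoch
                staged = shared + [value]          # side-effect-free work on a copy
                with epoch_lock:
                    if preemption_epoch == start_epoch:
                        shared[:] = staged         # commit: nobody was pre-empted
                        return
                # Peer was pre-empted mid-sequence: restart instead of barriering.

        if __name__ == "__main__":
            data = []
            restartable_sequence(data, 42)
            note_peer_preemption()                 # simulate a pre-emption
            restartable_sequence(data, 43)
            print(data)                            # [42, 43]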

    Real time qubit allocation for dynamic network topographies

    Publication (Announcement) No.: US12294498B2

    Publication (Announcement) Date: 2025-05-06

    Application No.: US17871365

    Application Date: 2022-07-22

    Applicant: Red Hat, Inc.

    Abstract: Qubit allocation for dynamic network topographies is disclosed. In one example, a processor device of a computing system implements a configuration to quantum definition (C2Q) service that performs real time qubit allocation for dynamic network topographies. The C2Q service can ensure synchronization between a configuration file for a network topography and a quantum definition file for qubits allocated to the network topography.
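
    As a rough illustration of how a C2Q-style service could keep the two files synchronized, the sketch below maps each node in a topology configuration to a block of qubit indices and regenerates the quantum definition file whenever the configuration version changes. The JSON layout, the version field, and the allocation policy are assumptions for the example.

        import json

        def allocate_qubits(topology):
            """Map each node in the topology to a contiguous block of qubit indices."""
            definition, next_qubit = {}, 0
            for node in topology["nodes"]:
                count = node.get("qubits", 1)
                definition[node["name"]] = list(range(next_qubit, next_qubit + count))
                next_qubit += count
            return {"topology_version": topology["version"], "qubits": definition}

        def sync_definition(config_path, definition_path):
            """Regenerate the quantum definition file if it lags the configuration."""
            with open(config_path) as f:
                topology = json.load(f)
            try:
                with open(definition_path) as f:
                    current = json.load(f)
            except FileNotFoundError:
                current = None
            if current is None or current["topology_version"] != topology["version"]:
                with open(definition_path, "w") as f:
                    json.dump(allocate_qubits(topology), f, indent=2)

        if __name__ == "__main__":
            with open("topology.json", "w") as f:
                json.dump({"version": 1,
                           "nodes": [{"name": "router-a", "qubits": 2},
                                     {"name": "router-b", "qubits": 3}]}, f)
            sync_definition("topology.json", "quantum_definition.json")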

    Self-organizing network configuration

    Publication (Announcement) No.: US12294495B2

    Publication (Announcement) Date: 2025-05-06

    Application No.: US17572086

    Application Date: 2022-01-10

    Applicant: RED HAT, INC.

    Abstract: A status communication is received that is associated with a mesh network comprising a plurality of interconnected node devices. Responsive to the status communication, it is determined whether a configuration policy of the mesh network has been violated. Responsive to a determination that the configuration policy of the mesh network has been violated, a configuration communication comprising an updated configuration is transmitted by a processing device to a first node device of the plurality of node devices to modify the first node device from performing a first service within the mesh network to performing a second service.
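
    A toy sketch of the decision loop in the abstract: status reports arrive from mesh nodes, a configuration policy is checked, and on violation one node is retasked from its current service to another. The min_gateways policy and the relay/gateway service names are invented for illustration.

        from dataclasses import dataclass

        @dataclass
        class Status:
            node: str
            service: str
            healthy: bool

        def policy_violated(statuses, policy):
            """Invented policy: at least policy['min_gateways'] healthy gateways."""
            gateways = [s for s in statuses if s.service == "gateway" and s.healthy]
            return len(gateways) < policy["min_gateways"]

        def reconfigure(statuses, policy):
            """Retask a healthy relay node as a gateway if the policy is violated."""
            if not policy_violated(statuses, policy):
                return None
            for s in statuses:
                if s.healthy and s.service == "relay":
                    return {"node": s.node, "from": "relay", "to": "gateway"}
            return None

        if __name__ == "__main__":
            statuses = [Status("n1", "gateway", False), Status("n2", "relay", True)]
            print(reconfigure(statuses, {"min_gateways": 1}))
            # {'node': 'n2', 'from': 'relay', 'to': 'gateway'}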

    RUNTIME LOADING OF NON-NATIVE MODULES IN VIRTUAL MACHINES

    Publication (Announcement) No.: US20250138858A1

    Publication (Announcement) Date: 2025-05-01

    Application No.: US18498320

    Application Date: 2023-10-31

    Applicant: RED HAT, INC.

    Abstract: Systems and methods for runtime loading of a non-native module to a virtual machine. An example method may include running, by a processing device, a virtual machine; loading, in the virtual machine, a first bytecode module comprising a first bytecode of a first bytecode type, wherein the first bytecode type is not supported by the virtual machine; validating the first bytecode module according to a first validation policy based on a status of the virtual machine; generating a second bytecode module by translating the first bytecode to a second bytecode of a second bytecode type, wherein the second bytecode type is supported by the virtual machine; and executing the second bytecode module by the virtual machine.
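
    The sketch below walks through the same load, validate, translate, execute steps on a toy stack machine standing in for the virtual machine. The foreign opcode names, the translation table, and the validation rule keyed on VM status are assumptions for the example.

        def validate(module, vm_status):
            """Invented validation policy: stricter checks while the VM is serving."""
            if vm_status == "serving" and module["unverified_ops"]:
                raise ValueError("unverified ops not allowed while VM is serving")
            return True

        def translate(module):
            """Translate 'foreign' ops into the ops the toy VM natively executes."""
            table = {"FPUSH": "PUSH", "FADD": "ADD", "FPRINT": "PRINT"}
            return {"type": "native",
                    "ops": [(table[op], arg) for op, arg in module["ops"]]}

        def execute(module):
            """Run the translated (supported) bytecode on a tiny stack machine."""
            stack = []
            for op, arg in module["ops"]:
                if op == "PUSH":
                    stack.append(arg)
                elif op == "ADD":
                    stack.append(stack.pop() + stack.pop())
                elif op == "PRINT":
                    print(stack[-1])

        if __name__ == "__main__":
            foreign = {"type": "foreign", "unverified_ops": False,
                       "ops": [("FPUSH", 2), ("FPUSH", 3),
                               ("FADD", None), ("FPRINT", None)]}
            validate(foreign, vm_status="starting")
            execute(translate(foreign))   # prints 5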

    Workload distribution by utilizing unused central processing unit capacity in a distributed computing system

    Publication (Announcement) No.: US12288100B2

    Publication (Announcement) Date: 2025-04-29

    Application No.: US17556027

    Application Date: 2021-12-20

    Applicant: Red Hat, Inc.

    Abstract: A technique for improving workload distribution by utilizing unused resources in a distributed computing system is described. In one example of the present disclosure, a system can determine that a computing entity of a distributed computing system includes an unused portion of a CPU capacity. The computing entity can have a first defined limit of the CPU capacity. The system can use the unused portion of the CPU capacity to improve a usage of a resource of the computing entity. The computing entity can have a second defined limit of the resource.
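
    A small sketch of the rebalancing idea, assuming a container-like entity described by a CPU limit, current CPU use, and a memory limit; the one-GiB-per-unused-core exchange rate is an invented stand-in for whatever policy the system would actually apply.

        def rebalance(entity):
            """If the entity is under its CPU limit, lend the headroom to memory.

            The 1 GiB-per-core exchange rate and the field names are invented
            for the example.
            """
            unused_cores = entity["cpu_limit"] - entity["cpu_used"]
            if unused_cores <= 0:
                return entity
            boosted = dict(entity)
            boosted["memory_limit_gib"] = entity["memory_limit_gib"] + unused_cores * 1.0
            return boosted

        if __name__ == "__main__":
            pod = {"cpu_limit": 4, "cpu_used": 1.5, "memory_limit_gib": 8}
            print(rebalance(pod))   # memory limit grows by the 2.5 unused cores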

    Suppressing a vulnerability of a continuous integration pipeline with audit functionality

    Publication (Announcement) No.: US12265629B2

    Publication (Announcement) Date: 2025-04-01

    Application No.: US18081803

    Application Date: 2022-12-15

    Applicant: RED HAT, INC.

    Abstract: A vulnerability with respect to an image file in a continuous integration (CI) pipeline can be suppressed according to some aspects described herein. For example, a processor can receive an alert for the vulnerability with the CI pipeline being able to block deployment of the image file in response to the alert. Based on the alert, the processor can determine that the vulnerability of the image file is deferrable. After determining that the vulnerability is deferrable, the processor can automatically adjust a status of the vulnerability from an observed state to a deferred state. The CI pipeline can allow the deployment of the image file based on the status of the vulnerability being in the deferred state. The processor can deploy the image file in the CI pipeline after adjusting the status of the vulnerability to the deferred state.
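
    A compact sketch of the deferral flow, assuming severity is what makes a vulnerability deferrable (the abstract does not say how deferrability is decided) and that the pipeline blocks only on vulnerabilities still in the observed state.

        DEFERRABLE_SEVERITIES = {"low", "medium"}   # invented policy

        def handle_alert(vulnerability):
            """Move a deferrable vulnerability from 'observed' to 'deferred'."""
            if vulnerability["severity"] in DEFERRABLE_SEVERITIES:
                vulnerability["status"] = "deferred"
            return vulnerability

        def may_deploy(image):
            """Block only on vulnerabilities that are still in the 'observed' state."""
            return all(v["status"] != "observed" for v in image["vulnerabilities"])

        if __name__ == "__main__":
            image = {"name": "app:1.2", "vulnerabilities": [
                {"id": "CVE-2024-0001", "severity": "low", "status": "observed"}]}
            for v in image["vulnerabilities"]:
                handle_alert(v)
            print(may_deploy(image))   # True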

    Dynamic geo-based computing host identification for automatic provisioning

    Publication (Announcement) No.: US12261917B2

    Publication (Announcement) Date: 2025-03-25

    Application No.: US18448776

    Application Date: 2023-08-11

    Applicant: Red Hat, Inc.

    Abstract: A computing system receives a first geographic area indication that corresponds to a first geographic area of a plurality of different geographic areas, and provisioning information indicative of a first set of tasks to be performed on each computing host in the first geographic area. The computing system dynamically generates, based on the first geographic area indication, a first computing host list that identifies a first set of computing hosts in the first geographic area. The computing system sends, to a first provisioning node of a plurality of provisioning nodes, instructions to implement the first set of tasks on the first set of computing hosts identified in the first computing host list, the first provisioning node being associated with the first geographic area.
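
    A minimal sketch of the flow, assuming a static in-memory inventory and a fixed mapping from geographic areas to provisioning nodes; a real system would generate the host list dynamically from live inventory data.

        INVENTORY = {   # invented inventory: host -> geographic area
            "host-a": "us-east", "host-b": "us-east", "host-c": "eu-west"}
        PROVISIONING_NODES = {"us-east": "prov-1", "eu-west": "prov-2"}

        def hosts_in(area):
            """Build the computing host list for one geographic area."""
            return [h for h, a in INVENTORY.items() if a == area]

        def provision(area, tasks):
            """Send the task set for every host in `area` to that area's provisioning node."""
            return {"provisioning_node": PROVISIONING_NODES[area],
                    "hosts": hosts_in(area),
                    "tasks": tasks}

        if __name__ == "__main__":
            print(provision("us-east", ["install-agent", "apply-baseline"]))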

    Sizing service for cloud migration away from only cloud storage and back to on-site computing strategy

    Publication (Announcement) No.: US12260224B2

    Publication (Announcement) Date: 2025-03-25

    Application No.: US17940475

    Application Date: 2022-09-08

    Applicant: Red Hat, Inc.

    Abstract: A computing device receives data related to operation of a cloud computing environment having an application comprising several services. The data related to operation of the cloud computing environment can include time-based data related to computing resource use in the cloud computing environment, such as I/O rate, processor utilization, and others. In some implementations, the services that compose the application can be orchestrated through an orchestrator, and in those implementations data regarding the orchestration can also be provided to the computing device. The computing device can also request service-related information from the cloud computing environment, where the service-related information can include finance-related information for operations in the cloud. The computing device can take as input the data related to operation of the application and its services, the orchestration data, and the service-related information, and thereafter provide a recommendation of an on-premises computing infrastructure adequate to replace the cloud computing environment.
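
    A toy sizing heuristic in the spirit of the abstract: take peak observed utilization from time-based samples, add headroom, and express the result as a count of fixed-size servers. The 30% headroom, the 32-core server size, and the field names are assumptions for the example.

        def recommend_on_prem(samples, cost_info):
            """Size on-prem hardware from peak observed utilization plus 30% headroom.

            The headroom factor, the per-server capacity, and the field names are
            assumptions made for the example.
            """
            peak_cpu = max(s["cpu_cores_used"] for s in samples)
            peak_io = max(s["io_mbps"] for s in samples)
            servers = -(-int(peak_cpu * 1.3) // 32)       # ceil over 32-core servers
            return {"servers_32_core": max(servers, 1),
                    "storage_io_mbps": round(peak_io * 1.3),
                    "current_monthly_cloud_cost": cost_info["monthly_usd"]}

        if __name__ == "__main__":
            samples = [{"cpu_cores_used": 40, "io_mbps": 900},
                       {"cpu_cores_used": 55, "io_mbps": 1200}]
            print(recommend_on_prem(samples, {"monthly_usd": 18000}))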

    ON-DEMAND UNIKERNEL FOR APPLICATION PROTECTION

    Publication (Announcement) No.: US20250097095A1

    Publication (Announcement) Date: 2025-03-20

    Application No.: US18369074

    Application Date: 2023-09-15

    Applicant: Red Hat, Inc.

    Abstract: Embodiments of the present disclosure relate to systems and methods for using unikernels to protect critical safety applications from interference events. For each of a set of applications identified as critical to the functioning of a computing environment, a corresponding unikernel may be generated, the unikernel including code of the application and kernel functionality. In response to determining that an interference event is affecting a first application of the set of applications, it is determined whether the interference event is unsustainable. In response to determining that the interference event is unsustainable, the unikernel corresponding to the first application is initiated and a failover from the first application to the unikernel corresponding to the first application is performed.
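
    A sketch of the failover decision, with build_unikernel standing in for actually packaging application code with kernel functionality, and a duration threshold standing in for whatever test decides that an interference event is unsustainable. All names and thresholds are invented for illustration.

        CRITICAL_APPS = ["brake-controller", "lane-keeper"]     # invented examples

        def build_unikernel(app):
            """Stand-in for packaging the app's code together with kernel functionality."""
            return {"app": app, "image": f"{app}-unikernel.img", "running": False}

        UNIKERNELS = {app: build_unikernel(app) for app in CRITICAL_APPS}

        def unsustainable(event):
            """Invented policy: interference longer than 50 ms cannot be ridden out."""
            return event["duration_ms"] > 50

        def handle_interference(app, event):
            """Fail over to the pre-built unikernel when interference is unsustainable."""
            if app in UNIKERNELS and unsustainable(event):
                UNIKERNELS[app]["running"] = True
                return f"failover: {app} -> {UNIKERNELS[app]['image']}"
            return "continue on host"

        if __name__ == "__main__":
            print(handle_interference("brake-controller", {"duration_ms": 120}))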

    COMPUTATIONAL PROBE AUTO-TUNING

    Publication (Announcement) No.: US20250094310A1

    Publication (Announcement) Date: 2025-03-20

    Application No.: US18370537

    Application Date: 2023-09-20

    Applicant: Red Hat, Inc.

    Abstract: Systems, methods, and apparatuses for automatically tuning computational probe threshold values in a containerized computing environment are provided herein. An example method comprises identifying a computational container environment that is operating outside of at least one predefined window of values of at least one performance metric, measuring the at least one performance metric, automatically adjusting at least one parameter value of a probe based upon the at least one performance metric, and iterating the measuring and adjusting until the computational container environment is operating within the at least one predefined window of values.
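
    A minimal sketch of the tuning loop, assuming the probe parameter being adjusted is a liveness-probe timeout and the performance metric falls as the timeout grows; the step size and stopping window are illustrative only.

        def tune_probe(measure, probe, window, max_iters=20):
            """Iteratively nudge the probe's timeout until the metric is in range.

            `measure` returns the current value of the performance metric (e.g. a
            restart rate); the adjustment step and the parameter name are
            assumptions for the example.
            """
            low, high = window
            for _ in range(max_iters):
                value = measure(probe)
                if low <= value <= high:
                    break                           # environment is back inside the window
                if value > high:
                    probe["timeout_seconds"] += 1   # probe too aggressive: relax it
                else:
                    probe["timeout_seconds"] = max(1, probe["timeout_seconds"] - 1)
            return probe

        if __name__ == "__main__":
            # Toy metric: restart rate falls as the liveness-probe timeout grows.
            metric = lambda p: 10 / p["timeout_seconds"]
            print(tune_probe(metric, {"timeout_seconds": 1}, window=(0.5, 2.0)))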
