REAL-TIME ACCESS TO REMOTE MEDIA PRODUCTION EDITING FUNCTIONALITY

    Publication number: US20250126171A1

    Publication date: 2025-04-17

    Application number: US18829958

    Application date: 2024-09-10

    Abstract: Novel tools and techniques are provided for implementing real-time access to remote media production editing functionality. In various embodiments, in response to a request for real-time access to remote media production editing functionalities, a computing system provisions access to a media production software application (“app”) on compute resources of at least one network edge node among a plurality of network edge nodes. The computing system establishes an access connection between the at least one network edge node and a remote media storage system, via remote direct memory access (“RDMA”) functionality. The computing system provides access to at least one media production file that is stored on the remote media storage system via the established access connection, for editing using the instantiated media production app. User input for the instantiated media production app and data, content, and/or editing results may be relayed over the established access connection.
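The workflow described in the abstract can be sketched roughly as follows. This is an illustrative reading only, not the patented implementation: all names (`RemoteEditingService`, `EdgeNode`, and so on) are hypothetical, and a plain dictionary stands in for the RDMA-backed storage connection.

```python
from dataclasses import dataclass


@dataclass
class EdgeNode:
    name: str
    free_compute: int  # available compute units on this edge node


class RemoteEditingService:
    """Hypothetical sketch of the abstract's workflow: provision the
    editing app on an edge node, open a storage connection, and relay
    editing results over that connection."""

    def __init__(self, edge_nodes, storage):
        self.edge_nodes = edge_nodes
        self.storage = storage  # maps file name -> media content

    def handle_request(self, required_compute, file_name):
        # 1. Provision the app on an edge node with enough capacity.
        node = next(n for n in self.edge_nodes
                    if n.free_compute >= required_compute)
        # 2. Establish an access connection to the remote media storage
        #    (a dict here stands in for the RDMA link in the abstract).
        connection = {"node": node.name, "file": file_name}
        # 3. Provide access to the media production file for editing.
        content = self.storage[file_name]
        return connection, content

    def relay_edit(self, connection, new_content):
        # 4. Relay user input / editing results over the connection.
        self.storage[connection["file"]] = new_content
        return self.storage[connection["file"]]


nodes = [EdgeNode("edge-a", 4), EdgeNode("edge-b", 16)]
service = RemoteEditingService(nodes, {"clip.mov": "raw footage"})
conn, media = service.handle_request(required_compute=8,
                                     file_name="clip.mov")
result = service.relay_edit(conn, "color-graded footage")
```

In this reading, the key design point is that the editing app runs near the user on edge compute while the media files stay on remote storage, with only inputs and results crossing the connection.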

    INFERENCE AS A SERVICE
    Invention Application

    Publication number: US20250086510A1

    Publication date: 2025-03-13

    Application number: US18829449

    Application date: 2024-09-10

    Abstract: Novel tools and techniques are provided for implementing Inference as a Service. In various embodiments, a computing system may receive a request to perform an AI/ML task on first data, the request including desired parameters, in some cases, without information regarding any of specific hardware, specific hardware type, specific location, or specific network for providing network services for performing the requested AI/ML task. The computing system may identify edge compute nodes within a network based on the desired parameters and/or unused processing capacity of each node. The computing system may identify AI/ML pipelines capable of performing the AI/ML task, the pipelines including neural networks utilizing pre-trained AI/ML models. The computing system may cause the identified nodes to run the identified pipelines to perform the AI/ML task. In response to receiving inference results from the identified pipelines, the computing system may send, store, and/or cause display of the received inference results.
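The dispatch logic this abstract describes (select edge nodes by unused capacity, select pipelines that can perform the task, run them, and collect inference results) might look like the following. This is a hypothetical sketch, not the patented method; the `Node`, `Pipeline`, and `inference_as_a_service` names are invented for illustration, and the pipeline's "model" is a mock.

```python
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    unused_capacity: int  # unused processing capacity of the node


@dataclass
class Pipeline:
    name: str
    task: str  # AI/ML task this pipeline can perform

    def run(self, data):
        # Stand-in for a neural network with a pre-trained model:
        # returns a mock inference result.
        return f"{self.name}:{len(data)}"


def inference_as_a_service(task, data, min_capacity, nodes, pipelines):
    """Hypothetical sketch of the abstract's dispatch: identify nodes
    by unused capacity, identify pipelines for the task, run them, and
    collect the inference results."""
    chosen_nodes = [n for n in nodes if n.unused_capacity >= min_capacity]
    chosen_pipelines = [p for p in pipelines if p.task == task]
    results = []
    for node, pipe in zip(chosen_nodes, chosen_pipelines):
        results.append((node.name, pipe.run(data)))
    return results


nodes = [Node("edge-1", 2), Node("edge-2", 8)]
pipelines = [Pipeline("resnet", "image-classify"),
             Pipeline("bert", "text-classify")]
results = inference_as_a_service("image-classify", "some-image-bytes",
                                 min_capacity=4, nodes=nodes,
                                 pipelines=pipelines)
```

Note that, as the abstract emphasizes, the caller supplies only desired parameters; it never names specific hardware, locations, or networks, and the service chooses those on its behalf.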
