SYSTEM AND METHOD FOR ENSURING CONTINUITY OF PROXY-BASED SERVICE

    Publication Number: US20240356860A1

    Publication Date: 2024-10-24

    Application Number: US18760703

    Application Date: 2024-07-01

    CPC classification number: H04L47/6255 H04L9/0861 H04L47/6225

    Abstract: Disclosed are a system for ensuring the continuity of a proxy-based service and a method thereof. The system includes a target server that processes a queue of access requests received from a plurality of client devices until a first time point, at which the number of waiting access requests exceeds a predetermined first threshold; a queue management server that processes the queue from the first time point until a second time point, at which the number of waiting requests exceeds a predetermined second threshold; and a mirror server that processes the queue from the second time point onward.
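
    For illustration, the two-threshold handoff described in this abstract can be sketched roughly in Python as follows. The class, handler, and threshold names are invented for the example and are not taken from the patent; the target server, queue management server, and mirror server are modeled as plain callables.

    from collections import deque

    class ContinuityRouter:
        """Routes queued access requests to the target server, the queue
        management server, or the mirror server, depending on how many
        requests are currently waiting."""

        def __init__(self, target, queue_mgr, mirror,
                     first_threshold=100, second_threshold=1000):
            self.target = target              # primary (target) server handler
            self.queue_mgr = queue_mgr        # queue management server handler
            self.mirror = mirror              # mirror server handler
            self.first_threshold = first_threshold
            self.second_threshold = second_threshold
            self.pending = deque()            # waiting access requests

        def submit(self, request):
            """Called once per incoming access request from a client device."""
            self.pending.append(request)

        def dispatch_one(self):
            """Process the oldest waiting request on the tier selected by the
            current backlog: target server up to the first threshold, queue
            management server up to the second threshold, mirror server beyond."""
            backlog = len(self.pending)
            if backlog <= self.first_threshold:
                handler = self.target
            elif backlog <= self.second_threshold:
                handler = self.queue_mgr
            else:
                handler = self.mirror
            return handler(self.pending.popleft())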

    Sending and receiving messages including training data using a multi-path packet spraying protocol

    Publication Number: US12063163B2

    Publication Date: 2024-08-13

    Application Number: US17552685

    Application Date: 2021-12-16

    CPC classification number: H04L47/6225 G06N3/08 H04L45/24 H04L45/50 H04L49/15

    Abstract: Systems and methods for sending and receiving messages, including training data, using a multi-path packet spraying protocol are described. A method includes segmenting a message into a set of data packets comprising training data. The method further includes initiating transmission of the set of data packets to a receiving node. The method further includes spraying the set of data packets across a switch fabric in accordance with the multi-path spraying protocol such that, depending on the value of a fabric determination field associated with a respective data packet, the respective data packet can traverse any one of a plurality of paths offered by the switch fabric for a connection between the sending node and the receiving node. The method further includes initiating transmission of synchronization packets to the receiving node, where, unlike the set of data packets, the synchronization packets are not sprayed across the switch fabric.
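
    The spraying behavior can be illustrated with a small Python sketch. The Packet layout, the round-robin path assignment, and pinning the synchronization packet to path 0 are simplifying assumptions for the example, not the patent's wire format or claimed spraying rule.

    import itertools
    from dataclasses import dataclass

    @dataclass
    class Packet:
        seq: int
        payload: bytes
        fabric_det: int       # fabric determination field: selects a fabric path
        is_sync: bool = False

    def segment_message(message: bytes, mtu: int, num_paths: int):
        """Split a message into data packets sprayed round-robin across paths,
        followed by one synchronization packet that is not sprayed."""
        path_cycle = itertools.cycle(range(num_paths))
        packets = [
            Packet(seq=i, payload=message[off:off + mtu], fabric_det=next(path_cycle))
            for i, off in enumerate(range(0, len(message), mtu))
        ]
        # The synchronization packet always takes the same path (path 0 here).
        packets.append(Packet(seq=len(packets), payload=b"", fabric_det=0, is_sync=True))
        return packets

    # Example: an 8 KiB training-data message, 1 KiB segments, 4 fabric paths.
    pkts = segment_message(b"\x00" * 8192, mtu=1024, num_paths=4)
    assert {p.fabric_det for p in pkts if not p.is_sync} == {0, 1, 2, 3}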

    Radio unit cascading in radio access networks

    Publication Number: US12041488B2

    Publication Date: 2024-07-16

    Application Number: US17647093

    Application Date: 2022-01-05

    CPC classification number: H04W28/14 H04L47/56 H04L47/6225

    Abstract: The described technology is generally directed towards radio unit cascading in radio access networks. Radio units (RUs) can be configured with processors adapted to support daisy chaining of multiple RUs, so that the multiple RUs can connect to one hardware interface at a distributed unit (DU). An RU processor for a given RU can be configured to receive downlink data, including downlink data for the given RU as well as downlink data for other downstream RUs. The RU processor can extract the downlink data for the given RU and forward the downlink data for other downstream RUs via a southbound interface. The RU processor can also be configured to receive uplink data from the other RUs, multiplex the received uplink data from the other RUs with uplink data from the given RU, and send the resulting multiplexed data towards the DU via a northbound interface.
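
    A rough Python sketch of the per-RU forwarding logic described above is given below. Representing frames as dictionaries keyed by an assumed "ru_id" field, and the function and interface names, are illustrative choices rather than anything specified in the abstract.

    def handle_downlink(frames, my_ru_id, southbound_send):
        """Keep the downlink frames addressed to this RU and forward the rest
        to downstream RUs via the southbound interface."""
        local, downstream = [], []
        for frame in frames:
            (local if frame["ru_id"] == my_ru_id else downstream).append(frame)
        if downstream:
            southbound_send(downstream)
        return local                      # processed by this RU

    def handle_uplink(local_frames, downstream_frames, northbound_send):
        """Multiplex this RU's uplink with uplink received from downstream RUs
        and send the result towards the DU via the northbound interface."""
        multiplexed = sorted(local_frames + downstream_frames,
                             key=lambda f: f.get("slot", 0))
        northbound_send(multiplexed)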

    SYSTEM AND METHOD FOR ENSURING CONTINUITY OF PROXY-BASED SERVICE

    Publication Number: US20240223509A1

    Publication Date: 2024-07-04

    Application Number: US18399259

    Application Date: 2023-12-28

    CPC classification number: H04L47/6255 H04L9/0861 H04L47/6225

    Abstract: Disclosed are a system for ensuring the continuity of a proxy-based service and a method thereof. The system includes a target server that processes a queue of access requests received from a plurality of client devices until a first time point, at which the number of waiting access requests exceeds a predetermined first threshold; a queue management server that processes the queue from the first time point until a second time point, at which the number of waiting requests exceeds a predetermined second threshold; and a mirror server that processes the queue from the second time point onward.

    METHOD AND A SYSTEM FOR NETWORK-ON-CHIP ARBITRATION

    Publication Number: US20240163223A1

    Publication Date: 2024-05-16

    Application Number: US18064988

    Application Date: 2022-12-13

    CPC classification number: H04L47/623 H04L47/6225 H04L47/6275

    Abstract: The present invention relates to a method (100) for network-on-chip arbitration. The method (100) comprises the steps of receiving input from a user via a user interface and selecting a plurality of flits from a plurality of ingresses into a plurality of virtual channels, followed by selecting the flits from the virtual channels into a plurality of egresses based on the input from the user. The selection of the flits into the virtual channels and the egresses is characterized by the steps of computing default and elevated bandwidths of the virtual channels, computing default and elevated weights of the virtual channels based on the default and elevated bandwidths, and generating a weightage lookup table using the default and elevated weights to perform arbitration weightage lookup for the flits with default and elevated priority levels when selecting the flits into the virtual channels and the egresses, wherein the flits from different ingresses have different default and elevated weights. A system (200) for network-on-chip arbitration is also disclosed herein.
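
    The weightage-lookup idea can be illustrated with a simplified Python sketch. Normalizing weights by total channel bandwidth and choosing the next flit with a weighted draw are illustrative choices; the abstract does not fix the exact arbitration rule, so this is only one plausible reading.

    import random

    def build_weight_table(default_bw, elevated_bw):
        """Map each virtual channel to (default_weight, elevated_weight),
        normalized against the total bandwidth across all channels."""
        total_default = sum(default_bw.values())
        total_elevated = sum(elevated_bw.values())
        return {
            vc: (default_bw[vc] / total_default, elevated_bw[vc] / total_elevated)
            for vc in default_bw
        }

    def arbitrate(ready_flits, weight_table):
        """Pick the next flit; its priority level selects which weight column
        of the lookup table is used."""
        weights = [
            weight_table[flit["vc"]][1 if flit["elevated"] else 0]
            for flit in ready_flits
        ]
        return random.choices(ready_flits, weights=weights, k=1)[0]

    # Example: two virtual channels with asymmetric bandwidth shares.
    table = build_weight_table({"vc0": 4, "vc1": 1}, {"vc0": 2, "vc1": 3})
    next_flit = arbitrate([{"vc": "vc0", "elevated": False},
                           {"vc": "vc1", "elevated": True}], table)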

    AI engine-supporting downlink radio resource scheduling method and apparatus

    Publication Number: US11943793B2

    Publication Date: 2024-03-26

    Application Number: US17537542

    Application Date: 2021-11-30

    CPC classification number: H04W72/535 G06N20/00 H04L47/6225 H04W72/23

    Abstract: An Artificial Intelligence (AI) engine-supporting downlink radio resource scheduling method and apparatus are provided. The method includes: constructing an AI engine, establishing a Socket connection between the AI engine and an Open Air Interface (OAI) system, and configuring the AI engine into the OAI running environment so that the AI engine replaces the Round-Robin and fair Round-Robin scheduling algorithms adopted by Long Term Evolution (LTE) at the Media Access Control (MAC) layer of the OAI system for resource scheduling, thereby taking over the downlink radio resource scheduling process; sending scheduling information to the AI engine through the Socket during the downlink radio resource scheduling process of the OAI system; and utilizing the AI engine to carry out resource allocation according to the scheduling information and to return a resource allocation result to the OAI system.
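
    A hypothetical sketch of the Socket exchange is shown below. The JSON message layout, port number, and field names are assumptions made for this example; the abstract only states that scheduling information is sent to the AI engine over a Socket and that a resource allocation result is returned to the OAI system.

    import json
    import socket

    def request_allocation(scheduling_info, host="127.0.0.1", port=5005):
        """Send per-UE scheduling information to the AI engine over a socket
        and return the allocation it replies with."""
        with socket.create_connection((host, port)) as sock:
            sock.sendall(json.dumps(scheduling_info).encode() + b"\n")
            reply = sock.makefile().readline()
        return json.loads(reply)          # e.g. {"ue_1": [0, 1, 2], "ue_2": [3, 4]}

    # Example request: buffer status and channel quality per UE for one subframe.
    # allocation = request_allocation({"subframe": 42,
    #                                  "ues": {"ue_1": {"cqi": 12, "buffer": 9000},
    #                                          "ue_2": {"cqi": 7, "buffer": 1500}}})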

    Power spectral density aware uplink scheduling

    Publication Number: US11838876B2

    Publication Date: 2023-12-05

    Application Number: US18108285

    Application Date: 2023-02-10

    CPC classification number: H04W52/365 H04L1/0003 H04L1/0009 H04L47/6225

    Abstract: According to an aspect, there is provided an apparatus for performing the following. The apparatus is configured to, first, allocate one or more of a plurality of available physical resource blocks to a plurality of terminal devices, wherein the allocating is performed so that a power spectral density for the plurality of terminal devices matches or exceeds a pre-defined limit or a plurality of respective pre-defined limits for power spectral density. In response to one or more physical resource blocks being still available following said allocating, the apparatus is configured to further allocate at least one of the one or more physical resource blocks still available to at least one of the plurality of terminal devices, wherein the further allocating is performed so that at least a pre-defined value for a modulation and coding scheme index is sustainable for said at least one of the plurality of terminal devices.
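
    The two-pass allocation described in this abstract could look roughly like the following Python sketch. The PSD model (transmit power spread evenly over the allocated PRBs) and the mcs_sustainable() predicate are simplifying assumptions introduced for the example.

    def psd_aware_allocate(prbs_available, terminals, mcs_sustainable):
        """terminals: {ue: {"tx_power_mw": float, "psd_limit_mw_per_prb": float}}.
        mcs_sustainable(ue, n_prbs) says whether the pre-defined MCS index
        would still hold if the UE were granted n_prbs blocks."""
        allocation = {ue: 0 for ue in terminals}

        # Pass 1: cap each UE so that tx_power / allocated_prbs >= psd_limit.
        for ue, cfg in terminals.items():
            max_prbs = int(cfg["tx_power_mw"] // cfg["psd_limit_mw_per_prb"])
            grant = min(max_prbs, prbs_available)
            allocation[ue] += grant
            prbs_available -= grant

        # Pass 2: hand out leftover PRBs one at a time, but only where the
        # pre-defined MCS index would still be sustainable with the extra PRB.
        progress = True
        while prbs_available > 0 and progress:
            progress = False
            for ue in terminals:
                if prbs_available == 0:
                    break
                if mcs_sustainable(ue, allocation[ue] + 1):
                    allocation[ue] += 1
                    prbs_available -= 1
                    progress = True
        return allocation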

    METHOD AND APPARATUS FOR PROVIDING A SERVICE WITH A PLURALITY OF SERVICE NODES

    Publication Number: US20230336413A1

    Publication Date: 2023-10-19

    Application Number: US18211580

    Application Date: 2023-06-19

    Applicant: Nicira, Inc.

    Abstract: Some embodiments provide an elastic architecture for providing a service in a computing system. To perform a service on the data messages, the service architecture uses a service node (SN) group that includes one primary service node (PSN) and zero or more secondary service nodes (SSNs). The service can be performed on a data message by either the PSN or one of the SSNs. However, in addition to performing the service, the PSN also performs a load balancing operation that assesses the load on each service node (i.e., on the PSN or each SSN) and, based on this assessment, has the data messages distributed to the service node(s) in its SN group. Based on the assessed load, the PSN in some embodiments also has one or more SSNs added to or removed from its SN group. To add or remove an SSN to or from the service node group, the PSN in some embodiments directs a set of controllers to add (e.g., instantiate or allocate) or remove the SSN to or from the SN group. Also, to assess the load on the service nodes, the PSN in some embodiments receives message load data from the controller set, which collects such data from each service node. In other embodiments, the PSN receives such load data directly from the SSNs.
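
    A condensed Python sketch of the elastic SN-group behavior follows. The scale-out/scale-in thresholds, the least-loaded dispatch rule, and the controller interface (add_ssn/remove_ssn) are illustrative assumptions; the abstract leaves those details to the embodiments. SSN objects returned by the controller are assumed to expose the same serve() method as the PSN.

    class PrimaryServiceNode:
        def __init__(self, controller, scale_out_load=0.8, scale_in_load=0.2):
            self.controller = controller      # controller set that adds/removes SSNs
            self.secondaries = []             # SSNs currently in the SN group
            self.scale_out_load = scale_out_load
            self.scale_in_load = scale_in_load

        def dispatch(self, message, load_by_node):
            """Send the data message to the least-loaded node in the SN group."""
            nodes = [self] + self.secondaries
            target = min(nodes, key=lambda n: load_by_node.get(n, 0.0))
            return target.serve(message)

        def rebalance(self, load_by_node):
            """Grow or shrink the SN group based on the average assessed load."""
            nodes = [self] + self.secondaries
            avg_load = sum(load_by_node.get(n, 0.0) for n in nodes) / len(nodes)
            if avg_load > self.scale_out_load:
                self.secondaries.append(self.controller.add_ssn())
            elif avg_load < self.scale_in_load and self.secondaries:
                self.controller.remove_ssn(self.secondaries.pop())

        def serve(self, message):
            return ("psn", message)           # placeholder for the actual service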
