Devices and Methods for Supporting Handover of UE

    Publication No.: US20250159558A1

    Publication Date: 2025-05-15

    Application No.: US19028270

    Filing Date: 2025-01-17

    Abstract: The present disclosure relates to handover (HO) of a user equipment (UE) that consumes a quality of service (QoS) flow associated with multiple QoS profiles. The disclosure proposes a network entity for supporting HO of the UE, and a second network entity that is the target of the HO. The network entity is configured to: obtain multiple QoS profiles associated with a QoS flow; obtain QoS capability information indicating whether the second network entity and/or a first network entity supports more than one QoS profile; and provide at least one message to the second network entity, wherein the at least one message includes at least one QoS profile selected according to the QoS capability information.
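The selection step described in the abstract can be sketched as a small helper that decides which QoS profiles go into the handover message based on the target's capability. This is a minimal illustration, not the claimed method; the function name and the "first profile is highest priority" assumption are hypothetical.

```python
def build_ho_profiles(qos_profiles, target_supports_multiple):
    """Select QoS profiles for the handover message based on the target
    entity's capability (a sketch; the priority ordering of profiles is
    an assumption, not taken from the application)."""
    if target_supports_multiple:
        # Target can handle multiple profiles: include them all.
        return list(qos_profiles)
    # Target supports only one profile: send just the first
    # (assumed highest-priority) profile.
    return qos_profiles[:1]

print(build_ho_profiles(["gbr", "non-gbr"], False))  # ['gbr']
```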

    ULTRA-WIDEBAND FRAME SENDING METHOD AND COMMUNICATION DEVICE

    Publication No.: US20250159474A1

    Publication Date: 2025-05-15

    Application No.: US19023117

    Filing Date: 2025-01-15

    Abstract: Embodiments of this application relate to the field of communication technologies, and provide an ultra-wideband (UWB) frame sending method and a communication device, to improve the security protection capability in UWB sending and receiving processes. A specific solution is as follows: a communication device obtains key information of an ultra-wideband frame, where the ultra-wideband frame includes a plurality of data fragments. The communication device sends or receives the plurality of data fragments based on one or more gaps between the data fragments, where the gaps are determined based on the key information. Embodiments of this application are used in a process of sending the ultra-wideband frame by the communication device.
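The core idea above is that both sides derive the same inter-fragment gaps from shared key material, so an eavesdropper without the key cannot predict the fragment timing. A minimal sketch of such a derivation is shown below; the use of SHA-256 and the gap range are assumptions, since the application does not specify the derivation function.

```python
import hashlib

def gaps_from_key(key: bytes, n_fragments: int, max_gap_us: int = 16):
    """Derive deterministic inter-fragment gaps from key material
    (a sketch: SHA-256 and the microsecond range are assumptions).
    Both endpoints holding the same key compute identical gaps."""
    digest = hashlib.sha256(key).digest()
    # One gap between each pair of adjacent fragments, each in [1, max_gap_us].
    return [1 + digest[i % len(digest)] % max_gap_us
            for i in range(n_fragments - 1)]

print(gaps_from_key(b"session-key", 4))  # three deterministic gap values
```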

    COMMUNICATION METHOD AND APPARATUS, AND SYSTEM

    Publication No.: US20250159438A1

    Publication Date: 2025-05-15

    Application No.: US19027040

    Filing Date: 2025-01-17

    Inventors: Meng LI, Yanmei YANG

    Abstract: The communication method provided in this application includes: first, an application function network element receives first area information from a first network element, where the first area information indicates an area that does not support multicast/broadcast service (MBS) within an MBS service area; then, the application function network element transmits data, in a unicast mode based on the first area information, to a terminal device located in the area not supporting MBS.

    Computing Cluster and Computing Cluster Connection Method

    Publication No.: US20250159388A1

    Publication Date: 2025-05-15

    Application No.: US19022428

    Filing Date: 2025-01-15

    Abstract: A computing cluster is provided, and is used in a cluster communication scenario. In the computing cluster, a plurality of computing apparatuses in a first computing node are connected to at least one first wavelength cross-connect device, and a plurality of computing apparatuses in a second computing node are connected to at least one second wavelength cross-connect device. In addition, the at least one first wavelength cross-connect device is connected to the at least one second wavelength cross-connect device via an optical cross-connect device. In this way, any computing apparatus in the first computing node can be connected to any computing apparatus in the second computing node via the at least one first wavelength cross-connect device, the optical cross-connect device, and the at least one second wavelength cross-connect device.

    ARTIFICIAL INTELLIGENCE MODEL PROCESSING METHOD AND RELATED DEVICE

    Publication No.: US20250158896A1

    Publication Date: 2025-05-15

    Application No.: US19026918

    Filing Date: 2025-01-17

    Abstract: In an artificial intelligence (AI) model processing method, a first node determines a first AI model and sends first information, where the first information indicates model information of the first AI model and auxiliary information of the first AI model. Compared with a manner in which different nodes exchange only their respective AI models, the first information additionally indicates the auxiliary information, so that a receiver of the first information can perform AI model processing (for example, training and merging) on the model information based on the auxiliary information, thereby improving the performance of the AI model that the receiver obtains by processing the first AI model.
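One plausible use of the auxiliary information described above is as merge weights when combining models, in the spirit of federated averaging. The sketch below assumes the auxiliary information is a per-model training-sample count; that choice, and the function name, are hypothetical, since the application leaves the auxiliary information's content open.

```python
def merge_models(models_with_aux):
    """Merge flat parameter vectors, weighting each model by its
    auxiliary value (here, a hypothetical training-sample count)."""
    total = sum(aux for _, aux in models_with_aux)
    dim = len(models_with_aux[0][0])
    merged = [0.0] * dim
    for params, aux in models_with_aux:
        w = aux / total                    # auxiliary info sets the merge weight
        for k in range(dim):
            merged[k] += w * params[k]
    return merged

# Two models: the second saw three times as much data, so it dominates.
print(merge_models([([0.0, 4.0], 100), ([4.0, 0.0], 300)]))  # [3.0, 1.0]
```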

    CHIP PACKAGE STRUCTURE, ELECTRONIC DEVICE, AND PACKAGING METHOD OF CHIP PACKAGE STRUCTURE

    Publication No.: US20250157999A1

    Publication Date: 2025-05-15

    Application No.: US19026394

    Filing Date: 2025-01-17

    Abstract: This disclosure provides a chip package structure, an electronic device, and a packaging method of the chip package structure, and relates to the field of chip packaging technologies, to reduce the impact of a through-silicon via (TSV) on the performance of an electronic component. The chip package structure may include a first component chip, a support chip, and a second component chip. The second component chip is stacked on the support chip through a bonding layer, and the first component chip is disposed on a side of the second component chip that faces away from the support chip. A conductive channel penetrates the support chip and the second component chip. The support chip includes a first substrate, and the second component chip includes a second substrate and an electronic component layer formed on the second substrate.

    MODEL TRAINING METHOD AND RELATED DEVICE

    Publication No.: US20250156712A1

    Publication Date: 2025-05-15

    Application No.: US19019814

    Filing Date: 2025-01-14

    Inventors: Dequan YU, Yin ZHAO

    Abstract: This application discloses a model training method, including: obtaining training data; using the training data as an input of a model and, in a training process of the model, calculating a parameter by using a first precision range, to obtain a calculated value; and, if the calculated value overflows the first precision range, recalculating the parameter by using a second precision range and performing iterative training on the model one or more times by using the recalculated parameter, where the second precision range includes the first precision range, or the second precision range partially overlaps the first precision range.
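The overflow-and-fallback scheme described above can be illustrated with two concrete precision ranges, float16 and float32. This is a minimal sketch under the assumption that "overflow" means the narrow-precision result becomes non-finite; the function names and the learning-rate parameter are hypothetical.

```python
import numpy as np

def compute_update(grad, lr, dtype):
    # Hypothetical parameter update computed in the given precision range.
    return np.asarray(grad, dtype=dtype) * dtype(lr)

def update_with_fallback(grad, lr):
    """Compute in a narrow precision range (float16); if the result
    overflows that range, recompute in a wider range (float32)."""
    low = compute_update(grad, lr, np.float16)
    if not np.all(np.isfinite(low)):          # overflow of the first range
        return compute_update(grad, lr, np.float32), "float32"
    return low, "float16"

# A gradient value above float16's max (~65504) triggers the fallback.
_, used = update_with_fallback([1e6], lr=1.0)
print(used)
```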

    BINARY QUANTIZATION METHOD, NEURAL NETWORK TRAINING METHOD, DEVICE, AND STORAGE MEDIUM

    Publication No.: US20250156697A1

    Publication Date: 2025-05-15

    Application No.: US19019769

    Filing Date: 2025-01-14

    Abstract: This application provides a binary quantization method, a neural network training method, a device, and a storage medium. The binary quantization method includes: determining to-be-quantized data in a neural network; determining a quantization parameter corresponding to the to-be-quantized data, where the quantization parameter includes a scaling factor and an offset; determining, based on the scaling factor and the offset, a binary upper limit and a binary lower limit corresponding to the to-be-quantized data; and performing binary quantization on the to-be-quantized data based on the scaling factor and the offset, to quantize the to-be-quantized data into the binary upper limit or the binary lower limit.
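The quantization step described above maps each value to one of two levels derived from a scaling factor and an offset. The sketch below takes the lower limit as the offset and the upper limit as offset plus scale, with a midpoint threshold; this exact mapping is an assumption for illustration, not the application's claimed formula.

```python
import numpy as np

def binary_quantize(x, scale, offset):
    """Quantize each element to a binary upper or lower limit derived
    from a scaling factor and an offset (mapping is assumed)."""
    lower = offset                      # assumed binary lower limit
    upper = offset + scale              # assumed binary upper limit
    threshold = offset + scale / 2.0    # midpoint decides the level
    return np.where(x >= threshold, upper, lower)

x = np.array([-0.8, 0.1, 0.6, 1.4])
print(binary_quantize(x, scale=1.0, offset=0.0))  # [0. 0. 1. 1.]
```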

    INLINE AND IN-MEMORY TWO-DIMENSIONAL CONVOLUTION AND CONVOLUTIONAL NEURAL NETWORK HARDWARE ENGINES FOR NETWORKING DEVICES

    Publication No.: US20250156690A1

    Publication Date: 2025-05-15

    Application No.: US19025671

    Filing Date: 2025-01-16

    Abstract: A two-dimensional (2D) convolution hardware engine for a networking device includes processing circuitry comprising multiple processing stages that form a pipeline to perform an inline and in-memory 2D convolution operation on a received dataset. A first processing stage of the pipeline shifts the dataset in a first direction of a 2D space, a second processing stage shifts the dataset in a second direction of the 2D space, a third processing stage calculates a product for each pixel of a kernel of the shifted 2D image by multiplying the pixel by a filter weight of the kernel, and a fourth processing stage calculates the sum of the products over all pixels of the kernel of the shifted 2D image. A convolutional neural network (CNN) computation hardware engine for performing an inline and in-memory CNN computation operation on the received dataset is also included.
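The four pipeline stages above (shift in one direction, shift in the other, multiply by a filter weight, sum the products) can be mirrored in software. The sketch below is an illustration of that shift/multiply/accumulate decomposition, not the hardware engine itself; it assumes an odd-sized kernel and zero padding for a same-size output.

```python
import numpy as np

def conv2d_shift_pipeline(image, kernel):
    """2D convolution expressed as shift/multiply/accumulate stages
    (software sketch of the described pipeline; odd kernel assumed)."""
    kh, kw = kernel.shape
    out = np.zeros_like(image, dtype=float)
    pad = np.pad(image, ((kh // 2,), (kw // 2,)), mode="constant")
    for i in range(kh):            # stage 1: shift in the first direction
        for j in range(kw):        # stage 2: shift in the second direction
            shifted = pad[i:i + image.shape[0], j:j + image.shape[1]]
            out += shifted * kernel[i, j]   # stages 3-4: multiply, accumulate
    return out

img = np.arange(9, dtype=float).reshape(3, 3)
box = np.ones((3, 3))             # 3x3 box filter: each output sums a neighborhood
print(conv2d_shift_pipeline(img, box))
```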
