-
Publication No.: US20230085732A1
Publication Date: 2023-03-23
Application No.: US18058543
Application Date: 2022-11-23
Inventor: Yuying HAO, Yi LIU, Zewu WU, Baohua LAI, Zeyu CHEN, Dianhai YU, Yanjun MA, Zhiliang YU, Xueying LV
IPC: G06T7/11
Abstract: The present disclosure provides an image processing method and apparatus, which relate to the field of image processing and, in particular, to the field of image annotation. An implementation includes: obtaining an image to be processed that includes a target region to be annotated; in response to a first click on the target region, performing a first operation to expand a predicted region for the target region based on the click position of the first click; in response to a second click in a position where the predicted region exceeds the target region, performing a second operation to reduce the predicted region based on the click position of the second click; and in response to determining that a difference between the predicted region and the target region meets a preset condition, obtaining an outline of the predicted region to annotate the target region.
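As a rough illustration of the click-driven loop described above, the sketch below grows or shrinks a binary predicted-region mask around each click and stops once the symmetric difference to the target falls below a threshold. The functions expand_region, reduce_region, and difference_ratio are hypothetical stand-ins, not the patented implementation, which would normally derive the region update from a segmentation model conditioned on the accumulated clicks.

```python
import numpy as np

def expand_region(pred_mask: np.ndarray, click_xy: tuple, radius: int = 10) -> np.ndarray:
    """First operation: grow the predicted region around a click on the target."""
    h, w = pred_mask.shape
    y, x = np.ogrid[:h, :w]
    disk = (x - click_xy[0]) ** 2 + (y - click_xy[1]) ** 2 <= radius ** 2
    return pred_mask | disk

def reduce_region(pred_mask: np.ndarray, click_xy: tuple, radius: int = 10) -> np.ndarray:
    """Second operation: shrink the predicted region around a click placed where
    the prediction exceeds the target."""
    h, w = pred_mask.shape
    y, x = np.ogrid[:h, :w]
    disk = (x - click_xy[0]) ** 2 + (y - click_xy[1]) ** 2 <= radius ** 2
    return pred_mask & ~disk

def difference_ratio(pred_mask: np.ndarray, target_mask: np.ndarray) -> float:
    """Preset condition: stop refining once this symmetric-difference ratio is small."""
    return float(np.logical_xor(pred_mask, target_mask).sum()) / max(int(target_mask.sum()), 1)
```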
-
Publication No.: US20230085684A1
Publication Date: 2023-03-23
Application No.: US17993775
Application Date: 2022-11-23
Inventor: Daxiang DONG, Li WANG, Jiangwei SUN, Zhou XIN
Abstract: A method of recommending data, a device, and a medium, which relate to the field of artificial intelligence technology, and in particular to the fields of deep learning, natural language processing and intelligent recommendation technologies. The method of recommending the data includes: acquiring operation data of an operation object, where the operation data is associated with first content data and first target object data; determining an operation object feature, a content feature and a target object feature based on the operation data; determining a fusion feature based on the operation object feature and the content feature; and recommending second content data and second target object data in an associated manner based on the fusion feature and the target object feature.
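A minimal sketch of the fusion-and-recommendation step, assuming every feature is a fixed-length vector and that candidate (second content, second target object) pairs carry precomputed embeddings of matching size; fuse and recommend are illustrative placeholders, not the claimed method.

```python
import numpy as np

def fuse(operation_object_feature: np.ndarray, content_feature: np.ndarray) -> np.ndarray:
    """Fusion feature built from the operation object feature and the content feature."""
    return np.concatenate([operation_object_feature, content_feature])

def recommend(fusion_feature: np.ndarray, target_object_feature: np.ndarray,
              candidates: list, top_k: int = 5) -> list:
    """Rank candidate (second content, second target object) pairs by dot-product
    similarity between their embedding and the fused query vector."""
    query = np.concatenate([fusion_feature, target_object_feature])
    scored = sorted(candidates,
                    key=lambda c: float(np.dot(query, c["embedding"])),
                    reverse=True)
    return scored[:top_k]
```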
-
Publication No.: US20230080230A1
Publication Date: 2023-03-16
Application No.: US17991977
Application Date: 2022-11-22
Inventor: Ji LIU, Sunjie YU, Dejing DOU, Jiwen ZHOU
Abstract: A method for generating a federated learning model is provided. The method includes obtaining images; obtaining sorting results of the images; and generating a trained federated learning model by training a federated learning model to be trained according to the images and the sorting results. The federated learning model to be trained is obtained after pruning a federated learning model to be pruned, and a pruning rate of a convolution layer in the federated learning model to be pruned is automatically adjusted according to a model accuracy during the pruning.
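The accuracy-driven pruning-rate adjustment can be pictured as below: raise a convolution layer's pruning rate while the measured accuracy stays above a target, back it off otherwise. The step size and bounds are assumptions for illustration; the patent's automatic adjustment rule is not reproduced here.

```python
def adjust_pruning_rate(rate: float, accuracy: float, target_accuracy: float,
                        step: float = 0.05, max_rate: float = 0.9) -> float:
    """Raise the pruning rate while accuracy holds up, back off when it drops."""
    if accuracy >= target_accuracy:
        return min(rate + step, max_rate)   # the model tolerates more sparsity
    return max(rate - step, 0.0)            # recover capacity

def prune_conv_layers(layer_rates: dict, accuracy: float, target_accuracy: float) -> dict:
    """Apply the same automatic adjustment to every convolution layer's pruning rate."""
    return {name: adjust_pruning_rate(rate, accuracy, target_accuracy)
            for name, rate in layer_rates.items()}
```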
-
Publication No.: US20230078410A1
Publication Date: 2023-03-16
Application No.: US18051322
Application Date: 2022-10-31
Inventor: Jianzhang PENG
Abstract: The disclosure provides a method for testing a network device and an electronic device. The method includes: simulating at least one virtual client, and generating, by the virtual client, a second request message to be sent based on an existing first request message; sending the second request message to the network device, so that the network device sends the second request message to a simulated virtual server for processing; and receiving a response message for the second request message sent by the network device, in which the response message is sent by the virtual server to the network device.
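A minimal sketch of that test flow, assuming the first request message has already been captured and that send_via_device is a caller-supplied hypothetical helper that forwards a message through the network device to the simulated virtual server and returns the server's response.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class RequestMessage:
    url: str
    headers: dict
    body: bytes

def build_second_request(first: RequestMessage, client_id: int) -> RequestMessage:
    """The virtual client derives the message to send from an existing first request."""
    headers = dict(first.headers, **{"X-Virtual-Client": str(client_id)})
    return replace(first, headers=headers)

def run_test(first_request: RequestMessage, send_via_device, num_clients: int = 3) -> list:
    """Each simulated virtual client sends its derived request through the network
    device (which forwards it to the simulated virtual server) and keeps the response."""
    responses = []
    for client_id in range(num_clients):
        second_request = build_second_request(first_request, client_id)
        responses.append(send_via_device(second_request))
    return responses
```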
-
Publication No.: US20230078041A1
Publication Date: 2023-03-16
Application No.: US17975181
Application Date: 2022-10-27
Inventor: Da QU
Abstract: A method of displaying an animation, an electronic device and a storage medium, which relate to the field of computer technology, in particular to the fields of artificial intelligence and augmented reality technologies. The method includes: determining, in response to a scene switching operation for a target scene, a first sampling result corresponding to each vertex of a three-dimensional model according to a first cubic texture object corresponding to the target scene; determining a roaming animation according to color information of each vertex in a current scene and the first sampling result corresponding to each vertex; and presenting the roaming animation so as to switch the current scene to the target scene.
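A minimal sketch of the per-vertex blend that could drive such a roaming animation, assuming the target scene's cubic texture is reduced to one average color per cube face; sample_cubemap and roaming_frame are illustrative names, not the patented renderer.

```python
import numpy as np

def sample_cubemap(face_colors: np.ndarray, directions: np.ndarray) -> np.ndarray:
    """First sampling result: nearest-face lookup into the target scene's cube map,
    reduced here to one average RGB color per face (face_colors has shape (6, 3))."""
    axis = np.abs(directions).argmax(axis=1)                 # dominant axis per vertex
    negative = directions[np.arange(len(directions)), axis] < 0
    face = axis * 2 + negative.astype(int)                   # cube-map face index 0..5
    return face_colors[face]                                 # (N, 3) sampled colors

def roaming_frame(current_colors: np.ndarray, sampled_colors: np.ndarray, t: float) -> np.ndarray:
    """Blend the current scene's vertex colors toward the first sampling result;
    stepping t from 0 to 1 over successive frames yields the roaming animation."""
    return (1.0 - t) * current_colors + t * sampled_colors
```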
-
Publication No.: US20230074417A1
Publication Date: 2023-03-09
Application No.: US18055149
Application Date: 2022-11-14
Inventor: Ji LIU, Sunjie YU, Jiwen ZHOU, Ruipu ZHOU, Dejing DOU
Abstract: A method for training a longitudinal federated learning model is provided, and is applied to a first participant device. The first participant device includes label data. The longitudinal federated learning model includes a first bottom layer sub-model, an interaction layer sub-model, a top layer sub-model based on a Lipschitz neural network, and a second bottom layer sub-model in a second participant device. First bottom layer output data of the first participant device and second bottom layer output data sent by the second participant device are obtained. The first bottom layer output data and the second bottom layer output data are input into the interaction layer sub-model to obtain interaction layer output data. Top layer output data is obtained based on the interaction layer output data and the top layer sub-model. The longitudinal federated learning model is trained according to the top layer output data and the label data.
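A minimal sketch of the forward and training step on the first participant, assuming both bottom-layer outputs have already been exchanged as plain tensors and approximating the Lipschitz constraint with spectral normalization (an assumption; the abstract only states that the top layer sub-model is based on a Lipschitz neural network).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractionLayer(nn.Module):
    """Combines the first and second bottom layer output data."""
    def __init__(self, dim_a: int, dim_b: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(dim_a + dim_b, out_dim)

    def forward(self, bottom_a: torch.Tensor, bottom_b: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.proj(torch.cat([bottom_a, bottom_b], dim=1)))

class LipschitzTop(nn.Module):
    """Top layer sub-model with spectrally normalized linear maps, used here as an
    assumed stand-in for the Lipschitz neural network."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(in_dim, 64)),
            nn.ReLU(),
            nn.utils.spectral_norm(nn.Linear(64, num_classes)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def training_step(bottom_a, bottom_b, labels, interaction, top, optimizer) -> float:
    """One step on the first participant: interaction output -> top output -> loss
    against the label data held by the first participant."""
    optimizer.zero_grad()
    logits = top(interaction(bottom_a, bottom_b))
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```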
-
Publication No.: US20230072240A1
Publication Date: 2023-03-09
Application No.: US17988168
Application Date: 2022-11-16
Inventor: Kafeng WANG, Chengzhong XU, Haoyi XIONG, Xingjian LI, Dejing DOU
IPC: G06K9/62
Abstract: A method for processing synthetic features is provided, and includes: synthetic features to be evaluated and original features corresponding to the synthetic features are obtained. A feature extraction is performed on the synthetic features to be evaluated based on a number S of pre-trained samples, to obtain meta features with S samples, where S is a positive integer. The meta features are input into a pre-trained meta feature evaluation model for a binary classification prediction, to obtain a probability of binary classification. Quality screening is performed on the synthetic features to be evaluated according to the probability of the binary classification, to obtain second synthetic features that are classified in a good category, while synthetic features classified in a poor category are screened out. The second synthetic features and the original features are input into a first classifier for evaluation.
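A minimal sketch of the screening step, assuming each synthetic feature is a vector of sample values, that simple summary statistics stand in for the meta features, and that the pre-trained evaluator exposes a scikit-learn-style predict_proba; the 0.5 threshold is an illustrative choice, not a value from the patent.

```python
import numpy as np

def extract_meta_features(feature_values: np.ndarray, s: int) -> np.ndarray:
    """Meta features of one synthetic feature, built from S sample values
    (summary statistics, purely for illustration)."""
    v = feature_values[:s]
    return np.array([v.mean(), v.std(), v.min(), v.max()])

def screen_synthetic_features(synthetic: dict, meta_evaluator, s: int,
                              threshold: float = 0.5) -> dict:
    """Keep the synthetic features whose 'good' class probability passes the
    threshold; the survivors are the second synthetic features that are then
    evaluated by the first classifier together with the original features."""
    names = list(synthetic)
    meta = np.stack([extract_meta_features(synthetic[n], s) for n in names])
    prob_good = meta_evaluator.predict_proba(meta)[:, 1]
    return {n: synthetic[n] for n, p in zip(names, prob_good) if p >= threshold}
```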
-
Publication No.: US20230070349A1
Publication Date: 2023-03-09
Application No.: US18048944
Application Date: 2022-10-24
Inventor: Xi CHEN, Guangdi SHAN, Wei LI, Fangsheng JIANG, Hailu JIA
Abstract: A positioning method includes: receiving detection data sent by a positioning device, in which the detection data includes first satellite data of multiple satellites; determining prediction noise of each satellite based on the first satellite data, and determining a weight of each satellite based on the prediction noise; and determining a position of the positioning device based on the weight and observation equations.
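A minimal sketch of the weighting idea: each satellite's weight is taken as the inverse of its predicted noise variance, and the linearized observation equations are solved by weighted least squares. The design matrix H and the residual vector are assumed to be given; this is a standard estimator used for illustration, not the patent's exact computation.

```python
import numpy as np

def satellite_weights(prediction_noise: np.ndarray) -> np.ndarray:
    """Smaller predicted noise -> larger weight (inverse-variance weighting)."""
    return 1.0 / np.maximum(prediction_noise ** 2, 1e-12)

def solve_position(H: np.ndarray, residuals: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted least-squares solution of the linearized observation equations:
    returns the correction to the receiver position (and clock) state."""
    W = np.diag(weights)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ residuals)
```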
-
Publication No.: US11599594B2
Publication Date: 2023-03-07
Application No.: US17829113
Application Date: 2022-05-31
Inventor: Yaqing Wang, Dejing Dou
IPC: G06F16/955, G06F16/9532, G06F16/906, G06F16/903, G06F16/9538
Abstract: A method for data processing is provided. The method includes obtaining first retrieving data associated with a first user and a first retrieving result selected by the first user from at least one retrieving result corresponding to the first retrieving data. The first retrieving data is labelled with an intention tag indicating a retrieving intention of the first user. The method further includes obtaining second retrieving data that is used by a second user to conduct retrieving and selecting the first retrieving result within a predetermined time period. The method further includes assigning the intention tag to the second retrieving data.
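A minimal sketch of the tag-propagation rule, assuming each retrieval is logged as a record with the retrieving data, the selected result, and a timestamp; the field names and the one-day window are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class RetrievingRecord:
    retrieving_data: str
    selected_result: str
    timestamp: datetime
    intention_tag: Optional[str] = None

def propagate_intention_tag(first: RetrievingRecord, second: RetrievingRecord,
                            window: timedelta = timedelta(days=1)) -> RetrievingRecord:
    """Assign the first record's intention tag to the second record when both
    selected the same retrieving result within the predetermined time period."""
    same_result = second.selected_result == first.selected_result
    in_window = abs(second.timestamp - first.timestamp) <= window
    if same_result and in_window and first.intention_tag is not None:
        second.intention_tag = first.intention_tag
    return second
```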
-
Publication No.: US20230069197A1
Publication Date: 2023-03-02
Application No.: US17983208
Application Date: 2022-11-08
Inventor: Wenhao WU, Yuxiang Zhao
Abstract: A method and an apparatus for training a video recognition model are provided. The method may include: dividing a sample video into a plurality of sample video segments; sampling a part of sample video frames from a sample video segment; inputting the part of sample video frames into a feature extraction network to obtain feature information of the sample video segment; performing convolution fusion on the feature information by using a dynamic segment fusion module to obtain fusion feature information, where a convolution kernel of the dynamic segment fusion module varies with different video inputs; inputting the fusion feature information into a fully connected layer to obtain an estimated category of the sample video; and performing a parameter adjustment based on a difference between a true category tag of the sample video and the estimated category to obtain the video recognition model.
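A minimal sketch of a dynamic segment fusion layer in which the temporal convolution kernel is generated from the input itself, so the kernel varies with different video inputs; the layer sizes and the softmax-normalized kernel are assumptions for illustration, not the patent's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicSegmentFusion(nn.Module):
    """Fuses per-segment features with a temporal kernel generated from the input,
    so the convolution kernel differs for each video."""
    def __init__(self, feat_dim: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        self.kernel_gen = nn.Linear(feat_dim, kernel_size)  # one kernel per video

    def forward(self, seg_feats: torch.Tensor) -> torch.Tensor:
        # seg_feats: (batch, num_segments, feat_dim)
        b, t, c = seg_feats.shape
        kernels = torch.softmax(self.kernel_gen(seg_feats.mean(dim=1)), dim=-1)   # (b, k)
        x = seg_feats.transpose(1, 2).reshape(1, b * c, t)                        # grouped-conv layout
        weight = kernels.repeat_interleave(c, dim=0).unsqueeze(1)                 # (b*c, 1, k)
        fused = F.conv1d(x, weight, padding=self.kernel_size // 2, groups=b * c)  # per-video kernel
        return fused.reshape(b, c, t).transpose(1, 2)                             # (b, num_segments, feat_dim)
```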
-