METHODS, SYSTEMS, AND MEDIA FOR IMAGE SEARCHING

    Publication No.: US20220405322A1

    Publication Date: 2022-12-22

    Application No.: US17354786

    Filing Date: 2021-06-22

    Abstract: Methods, systems, and media for image searching are described. Images comprising one query image and a plurality of candidate images are received. For each candidate image of the plurality of candidate images, a first model similarity measure is determined from an output of a first model configured for scene classification to perceive scenes in the images. For each candidate image, a second model similarity measure is likewise determined from an output of a second model configured for attribute classification to perceive attributes in the images. For each candidate image, a similarity agglomerate index is computed as a weighted aggregate of the first model similarity measure and the second model similarity measure. The plurality of candidate images are ranked based on the respective similarity agglomerate index of each candidate image, and the first-ranked candidate images corresponding to the searched images are generated.
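    A minimal sketch of the ranking step described in the abstract, assuming cosine similarity over each model's feature output and equal weights; the function names, feature dimensions, and weight values are illustrative assumptions rather than details from the patent.

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine similarity between two feature vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def rank_candidates(query_scene, query_attr, candidates, w_scene=0.5, w_attr=0.5):
        # Rank candidates by a weighted aggregate of the scene-model and
        # attribute-model similarity measures (the similarity agglomerate index).
        scored = []
        for name, (cand_scene, cand_attr) in candidates.items():
            s_scene = cosine_similarity(query_scene, cand_scene)  # first model similarity
            s_attr = cosine_similarity(query_attr, cand_attr)     # second model similarity
            scored.append((name, w_scene * s_scene + w_attr * s_attr))
        # Highest aggregate similarity first; the top-ranked entries form the search result.
        return sorted(scored, key=lambda item: item[1], reverse=True)

    # Example usage with random vectors standing in for model outputs.
    rng = np.random.default_rng(0)
    query_scene, query_attr = rng.normal(size=128), rng.normal(size=64)
    candidates = {f"img_{i}": (rng.normal(size=128), rng.normal(size=64)) for i in range(5)}
    print(rank_candidates(query_scene, query_attr, candidates)[:3])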

    METHODS AND SYSTEMS FOR CROSS-DOMAIN FEW-SHOT CLASSIFICATION

    Publication No.: US20220300823A1

    Publication Date: 2022-09-22

    Application No.: US17204670

    Filing Date: 2021-03-17

    IPC Classification: G06N3/08; G06N3/04

    Abstract: Methods, systems, and media for training deep neural networks for cross-domain few-shot classification are described. The deep neural network comprises an autoencoder with an encoder and a decoder. Training of the autoencoder comprises two training stages. For each iteration in the first training stage, a batch of data samples from the source dataset is sampled and fed to the encoder to generate a plurality of source feature maps; a first training stage loss is then determined and used to update the autoencoder's parameters. For each iteration in the second training stage, the novel dataset is split into a support set and a query set. The support set is fed to the encoder to determine a prototype for each class label. The query set is also fed to the encoder to calculate a query set metric classification loss, which updates the autoencoder's parameters.
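    A sketch of one iteration of the second training stage (prototype construction from the support set, then a distance-based classification loss on the query set), assuming a PyTorch encoder; the Euclidean-distance metric and the variable names are assumptions, not details from the patent.

    import torch
    import torch.nn.functional as F

    def second_stage_step(encoder, optimizer, support_x, support_y, query_x, query_y, n_classes):
        # One training iteration of the second stage.
        encoder.train()
        optimizer.zero_grad()

        # Encode the support set and average per class label to form prototypes.
        support_feats = encoder(support_x).flatten(1)                  # (n_support, d)
        prototypes = torch.stack([
            support_feats[support_y == c].mean(dim=0) for c in range(n_classes)
        ])                                                             # (n_classes, d)

        # Encode the query set and classify each sample by its distance to the prototypes.
        query_feats = encoder(query_x).flatten(1)                      # (n_query, d)
        logits = -torch.cdist(query_feats, prototypes)                 # nearer prototype => larger logit
        loss = F.cross_entropy(logits, query_y)                        # query set metric classification loss

        loss.backward()
        optimizer.step()                                               # updates the encoder's parameters
        return loss.item()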

    METHOD AND SYSTEM FOR VIDEO SEGMENTATION

    Publication No.: US20210073551A1

    Publication Date: 2021-03-11

    Application No.: US16566179

    Filing Date: 2019-09-10

    Abstract: Methods and systems for video segmentation and scene recognition are described. A video having a plurality of frames and a subtitle file associated with the video are received. Segmentation is performed on the video to generate a first set of video frames comprising one or more video frames, based on a frame-by-frame comparison of features in the frames of the video. Each video frame in the first set includes a frame indicator which indicates at least a first start frame of the video frame. The subtitle file associated with the video is parsed to generate one or more subtitle segments based on a start and an end time of each dialogue in the subtitle file. A second set of video frames comprising one or more second video frames is generated based on the video frames of the first set of video frames and the one or more subtitle segments.
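    A minimal sketch of the two passes described above, assuming frames are available as numpy arrays and the subtitle file follows the SRT timestamp format; the histogram-difference threshold and the rule for combining the two passes are illustrative assumptions rather than the patented method.

    import re
    import numpy as np

    def detect_segment_starts(frames, threshold=0.5):
        # First pass: frame-by-frame comparison of features; a large change in
        # the grey-level histogram marks the start frame of a new segment.
        starts = [0]
        for i in range(1, len(frames)):
            h_prev = np.histogram(frames[i - 1], bins=64, range=(0, 255))[0] / frames[i - 1].size
            h_curr = np.histogram(frames[i], bins=64, range=(0, 255))[0] / frames[i].size
            if np.abs(h_prev - h_curr).sum() > threshold:
                starts.append(i)
        return starts

    SRT_TIME = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+) --> (\d+):(\d+):(\d+)[,.](\d+)")

    def parse_subtitle_segments(srt_text):
        # Subtitle pass: (start_seconds, end_seconds) for each dialogue in the subtitle file.
        segments = []
        for m in SRT_TIME.finditer(srt_text):
            h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
            segments.append((h1 * 3600 + m1 * 60 + s1 + ms1 / 1000,
                             h2 * 3600 + m2 * 60 + s2 + ms2 / 1000))
        return segments

    def refine_with_subtitles(segment_starts, subtitle_segments, fps=25.0):
        # Combine the passes: drop segment boundaries that would split an ongoing dialogue.
        refined = []
        for frame in segment_starts:
            t = frame / fps
            if not any(start < t < end for start, end in subtitle_segments):
                refined.append(frame)
        return refined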

DEVICE AND METHOD FOR ENHANCING WELL PERFORATING

    Publication No.: US20190113315A1

    Publication Date: 2019-04-18

    Application No.: US15945537

    Filing Date: 2018-04-04

    Applicant: Peng DAI; Bo QU

    Inventor: Peng DAI; Bo QU

    IPC Classification: F42B1/028; E21B43/117

    Abstract: A perforation enhancement cap for a shaped charge contains a shell and a pack of solid propellant disposed inside the shell. The shell has a tubular straight section and a rounded cap. The rounded cap has a hole. The propellant pack has a through hole. The straight section, the rounded cap, the hole in the rounded cap, and the through hole are disposed about a common longitudinal axis. The through hole has a conical frustum section with its larger base facing the inside of the enhancement cap. The enhancement cap is adapted to receive the shaped charge to form an enhanced perforation charge assembly for perforating and fracturing the formation.