METHODS AND DEVICES FOR CAPTURING HIGH-SPEED AND HIGH-DEFINITION VIDEOS

    Publication No.: US20210152739A1

    Publication Date: 2021-05-20

    Application No.: US17159984

    Application Date: 2021-01-27

    Abstract: Methods and devices for generating a slow motion video segment are described. A first set of video frames captures a video view at a first resolution and at a first frame rate. A second set of video frames captures the video view at a second, lower resolution, and at a second frame rate that is greater than the first frame rate for at least a portion of the second set. At least two high resolution frames are identified in the first set for generating the slow motion video segment. One or more low resolution frames are identified in the second set corresponding to an inter-frame time period between the identified high resolution frames. The slow motion video segment is generated by generating at least one high resolution frame corresponding to the inter-frame time period, using interpolation based on the identified high resolution frames and the identified low resolution frames.
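    The interpolation step can be sketched in plain Python. This is a minimal illustration only, not the patented algorithm: the nearest-neighbour upsampler, the 50/50 blend weight, and the representation of frames as 2-D grayscale grids are all assumptions made for the sketch.

    ```python
    def upsample_nearest(frame, factor):
        """Nearest-neighbour upsample of a 2-D pixel grid by an integer factor."""
        return [[frame[r // factor][c // factor]
                 for c in range(len(frame[0]) * factor)]
                for r in range(len(frame) * factor)]

    def interpolate_frame(high_a, high_b, low_mid, t, blend=0.5):
        """Generate one high-resolution frame at temporal position t (0..1)
        between two high-resolution frames, guided by a low-resolution frame
        captured inside the inter-frame period (hypothetical blending scheme)."""
        factor = len(high_a) // len(low_mid)
        guide = upsample_nearest(low_mid, factor)
        return [[(1 - blend) * ((1 - t) * high_a[r][c] + t * high_b[r][c])
                 + blend * guide[r][c]
                 for c in range(len(high_a[0]))]
                for r in range(len(high_a))]
    ```

    Each generated frame mixes a purely temporal interpolation of the two high-resolution endpoints with an upsampled low-resolution frame captured at the target instant, so motion inside the inter-frame period contributes to the output.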

    METHODS AND SYSTEMS FOR HAND GESTURE-BASED CONTROL OF A DEVICE

    Publication No.: US20230082789A1

    Publication Date: 2023-03-16

    Application No.: US17950246

    Application Date: 2022-09-22

    Abstract: Methods and systems for gesture-based control of a device are described. An input frame is processed to determine a location of a distinguishing anatomical feature in the input frame. A virtual gesture-space is defined based on the location of the distinguishing anatomical feature, the virtual gesture-space being a defined space for detecting a gesture input. The input frame is processed only within the virtual gesture-space, to detect and track at least one hand. Using information generated from detecting and tracking the at least one hand, a gesture class is determined for the at least one hand. The device may be a smart television, a smart phone, a tablet, etc.
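    Defining a virtual gesture-space around a detected anatomical feature (e.g. a face bounding box) can be sketched as follows. The `scale` factor and the box arithmetic are assumptions for illustration; the patent does not specify how the space is sized.

    ```python
    def virtual_gesture_space(feature_box, frame_size, scale=3.0):
        """Expand a detected feature box (x, y, w, h) into a larger region,
        clipped to the frame, in which hand detection will be run."""
        x, y, w, h = feature_box
        fw, fh = frame_size
        cx, cy = x + w / 2, y + h / 2           # feature centre
        gw, gh = w * scale, h * scale           # expanded extent (assumed factor)
        gx = max(0, cx - gw / 2)
        gy = max(0, cy - gh / 2)
        gx2 = min(fw, cx + gw / 2)
        gy2 = min(fh, cy + gh / 2)
        return (gx, gy, gx2 - gx, gy2 - gy)

    def in_gesture_space(point, space):
        """True if a pixel coordinate falls inside the gesture-space."""
        gx, gy, gw, gh = space
        px, py = point
        return gx <= px <= gx + gw and gy <= py <= gy + gh
    ```

    Restricting hand detection to this sub-region, rather than the full frame, is what reduces the search space for the gesture classifier.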

    SELF-SUPERVISED VIDEO REPRESENTATION LEARNING BY EXPLORING SPATIOTEMPORAL CONTINUITY

    Publication No.: US20230072445A1

    Publication Date: 2023-03-09

    Application No.: US17468224

    Application Date: 2021-09-07

    Abstract: This disclosure provides a training method and apparatus, and relates to the artificial intelligence field. The method includes feeding a primary video segment, representative of a concatenation of first and second nonadjacent video segments obtained from a video source, to a deep learning backbone network. The method further includes embedding, via the deep learning backbone network, the primary video segment into a first feature output. The method further includes providing the first feature output to a first perception network to generate a first set of probability distribution outputs indicating a temporal location of a discontinuous point associated with the primary video segment. The method further includes generating a first loss function based on the first set of probability distribution outputs. The method further includes optimizing the deep learning backbone network by backpropagation of the first loss function.
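    The self-supervised sample construction described above (concatenating two nonadjacent segments, with the segment boundary serving as the free label) can be sketched in Python. The function name and sampling scheme are assumptions for illustration, not the patent's implementation:

    ```python
    import random

    def make_training_sample(frames, seg_len, rng=None):
        """Build one self-supervised sample: concatenate two nonadjacent
        equal-length segments of a video; the label is the temporal index
        of the discontinuous point (the segment boundary) in the clip."""
        rng = rng or random.Random()
        n = len(frames)
        # leave room for a gap of at least one frame plus the second segment
        start1 = rng.randrange(0, n - 2 * seg_len)
        start2 = rng.randrange(start1 + seg_len + 1, n - seg_len + 1)
        clip = frames[start1:start1 + seg_len] + frames[start2:start2 + seg_len]
        return clip, seg_len  # discontinuity always sits at index seg_len
    ```

    Because the discontinuity location is known by construction, the perception network can be trained to predict it without any manual annotation, which is what makes the representation learning self-supervised.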
