Generation of a sequence of textures for video delivery

    Publication Number: US11580675B2

    Publication Date: 2023-02-14

    Application Number: US17331186

    Filing Date: 2021-05-26

    Applicant: Adobe Inc.

    Abstract: Techniques and systems are provided for generating a video from texture images, and for reconstructing the texture images from the video. For example, a texture image can be divided into a number of tiles, and the number of tiles can be sorted into a sequence of ordered tiles. The sequence of ordered tiles can be provided to a video coder for generating a coded video. The number of tiles can be encoded based on the sequence of ordered tiles. The encoded video including the encoded sequence of ordered tiles can be decoded. At least a portion of the decoded video can include the number of tiles sorted into a sequence of ordered tiles. A data file associated with at least the portion of the decoded video can be used to reconstruct the texture image using the tiles.
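
    A minimal sketch of the tiling, ordering, and reconstruction steps described in this abstract, assuming a NumPy texture array, a fixed tile size, and mean tile intensity as the sort key (the abstract does not specify these choices):

```python
# Illustrative sketch only: tile size and the mean-intensity sort key are
# assumptions, not the patent's method.
import numpy as np

def tile_texture(texture: np.ndarray, tile_size: int = 64):
    """Split a (H, W, C) texture image into non-overlapping tiles."""
    h, w = texture.shape[:2]
    tiles, positions = [], []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tiles.append(texture[y:y + tile_size, x:x + tile_size])
            positions.append((y, x))
    return tiles, positions

def order_tiles(tiles, positions):
    """Sort tiles into a sequence and keep a data file mapping
    sequence index -> original tile position for later reconstruction."""
    order = sorted(range(len(tiles)), key=lambda i: tiles[i].mean())
    sequence = [tiles[i] for i in order]
    data_file = {seq_idx: positions[i] for seq_idx, i in enumerate(order)}
    return sequence, data_file

def reconstruct(sequence, data_file, shape, tile_size: int = 64):
    """Rebuild the texture image from decoded tiles using the data file."""
    out = np.zeros(shape, dtype=sequence[0].dtype)
    for seq_idx, (y, x) in data_file.items():
        out[y:y + tile_size, x:x + tile_size] = sequence[seq_idx]
    return out
```

    In this sketch the ordered sequence would be handed to a video coder as successive frames, and the data file travels alongside the coded video so the decoder can place each tile back at its original position.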

    GRAPH NEURAL NETWORKS FOR DATASETS WITH HETEROPHILY

    Publication Number: US20220309334A1

    Publication Date: 2022-09-29

    Application Number: US17210157

    Filing Date: 2021-03-23

    Applicant: Adobe Inc.

    Abstract: Techniques are provided for training graph neural networks on datasets with heterophily and generating predictions for such datasets. A computing device receives a dataset including a graph data structure and processes the dataset using a graph neural network. The graph neural network defines prior belief vectors respectively corresponding to nodes of the graph data structure and executes a compatibility-guided propagation from the set of prior belief vectors using a compatibility matrix. The graph neural network predicts a class label for a node of the graph data structure based on the compatibility-guided propagation and a characteristic of at least one node within a neighborhood of the node. The computing device outputs the graph data structure, where it is usable by a software tool for modifying an operation of a computing environment.
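
    A minimal sketch of compatibility-guided propagation, assuming prior belief vectors P of shape (N, C), an adjacency matrix A, and a class-compatibility matrix H; the specific update rule and damping factor are assumptions, not the patent's formulation:

```python
# Sketch of compatibility-guided belief propagation over a graph; the
# linear update with damping factor alpha is an illustrative assumption.
import numpy as np

def compatibility_propagation(priors, adjacency, H, num_iters=5, alpha=0.5):
    """priors: (N, C) prior belief vectors; adjacency: (N, N) graph structure;
    H: (C, C) compatibility matrix, where H[i, j] reflects how likely a
    class-i node is to neighbor a class-j node (useful under heterophily)."""
    beliefs = priors.copy()
    for _ in range(num_iters):
        neighbor_msg = adjacency @ beliefs @ H       # propagate beliefs through H
        beliefs = (1 - alpha) * priors + alpha * neighbor_msg
        beliefs = np.clip(beliefs, 1e-9, None)
        beliefs /= beliefs.sum(axis=1, keepdims=True)  # renormalize to distributions
    return beliefs

def predict_labels(beliefs):
    """Class label per node from the propagated beliefs."""
    return beliefs.argmax(axis=1)
```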

    Lossless image compression using block based prediction and optimized context adaptive entropy coding

    Publication Number: US11425368B1

    Publication Date: 2022-08-23

    Application Number: US17177592

    Filing Date: 2021-02-17

    Applicant: Adobe Inc.

    Abstract: Embodiments are disclosed for lossless image compression using block-based prediction and context adaptive entropy coding. A method of lossless image compression using block-based prediction and context adaptive entropy coding comprises dividing an input image into a plurality of blocks, determining a pixel predictor for each block based on a block strategy, determining a plurality of residual values using the pixel predictor for each block, selecting a subset of features associated with the plurality of residual values, performing context modeling on the plurality of residual values based on the subset of features to identify a plurality of residual clusters, and entropy coding the plurality of residual clusters.
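
    A minimal sketch of the block-prediction and residual-clustering steps, assuming simple left/top/average candidate predictors and a variance-quantile clustering of residual blocks; the entropy-coding stage is omitted, and these choices are assumptions rather than the patent's strategy:

```python
# Illustrative sketch: candidate predictors and the variance-based context
# clustering are assumptions; entropy coding of each cluster is not shown.
import numpy as np

def predict_block(block: np.ndarray, kind: str) -> np.ndarray:
    """Predict each pixel from its left or top neighbor within the block."""
    left = np.roll(block, 1, axis=1); left[:, 0] = 0
    top = np.roll(block, 1, axis=0); top[0, :] = 0
    if kind == "left":
        return left
    if kind == "top":
        return top
    return (left.astype(np.int32) + top.astype(np.int32)) // 2  # "average" predictor

def residuals_for_image(image: np.ndarray, block_size: int = 16):
    """Pick, per block, the predictor with the lowest residual energy, then
    group residual blocks into clusters for context-adaptive coding."""
    h, w = image.shape
    residual_blocks, features = [], []
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = image[y:y + block_size, x:x + block_size].astype(np.int32)
            best = min(
                ("left", "top", "avg"),
                key=lambda k: np.abs(block - predict_block(block, k)).sum(),
            )
            res = block - predict_block(block, best)
            residual_blocks.append(res)
            features.append(res.var())      # feature used for context modeling
    cuts = np.quantile(features, [0.25, 0.5, 0.75])
    clusters = np.digitize(features, cuts)  # each cluster gets its own coding context
    return residual_blocks, clusters
```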

    SYSTEM FOR COMBINING SENSOR DATA TO DETERMINE A RELATIVE POSITION

    Publication Number: US20220264251A1

    Publication Date: 2022-08-18

    Application Number: US17176982

    Filing Date: 2021-02-16

    Applicant: ADOBE INC.

    Abstract: A first device determines relative position data representative of a position of one or more other user devices relative to the first device. To determine relative position data between the first device and a second device, the first device determines a distance between the first device and the second device at a plurality of timestamps. Additionally, the first device determines movement data at each timestamp from one or more device sensors. The movement data at each corresponding timestamp may reflect movement of the first device and/or the second device between a prior timestamp and the corresponding timestamp. The first device computes relative position data for the second device by combining the distance measurements and movement data over the plurality of timestamps, for instance, through a process of sensor fusion. By computing the relative position data, the first device may determine a transformation that can be used to convert between a coordinate system of the second device and the coordinate system of the first device.
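
    A minimal sketch of combining per-timestamp distance measurements with the first device's own movement data, assuming the second device stays stationary and using a least-squares multilateration step as a stand-in for the patent's sensor fusion:

```python
# Illustrative sketch: the stationary-second-device assumption and the
# linearized least-squares solve are stand-ins for the described sensor fusion.
import numpy as np

def estimate_relative_position(displacements, distances):
    """displacements: (T-1, 2) movement of the first device between consecutive
    timestamps (from its motion sensors); distances: (T,) measured range to the
    second device at each timestamp. Returns the second device's (x, y) position
    in the first device's initial coordinate frame."""
    # First device's position at each timestamp, starting from the origin.
    anchors = np.vstack([np.zeros(2), np.cumsum(displacements, axis=0)])[:len(distances)]
    # Linearize ||p - a_i||^2 = r_i^2 against the first measurement -> A p = b.
    a0, r0 = anchors[0], distances[0]
    A = 2 * (anchors[1:] - a0)
    b = (r0 ** 2 - np.asarray(distances[1:]) ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

    The estimated relative position could then serve as the basis for a transformation between the two devices' coordinate systems, as the abstract describes.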

    Reinforcement learning techniques for automated video summarization

    Publication Number: US11314970B1

    Publication Date: 2022-04-26

    Application Number: US16953049

    Filing Date: 2020-11-19

    Applicant: Adobe Inc.

    Abstract: A video summarization system generates a concatenated feature set by combining a feature set of a candidate video shot and a summarization feature set. Based on the concatenated feature set, the video summarization system calculates multiple action options of a reward function included in a trained reinforcement learning module. The video summarization system determines a reward outcome included in the multiple action options. The video summarization system modifies the summarization feature set to include the feature set of the candidate video shot by applying a particular modification indicated by the reward outcome. The video summarization system identifies video frames associated with the modified summarization feature set, and generates a summary video based on the identified video frames.
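
    A minimal sketch of the shot-selection loop, assuming a linear stand-in for the trained reinforcement learning module, two action options ("include" and "skip"), and mean pooling to update the summarization feature set; these are assumptions for illustration only:

```python
# Illustrative sketch: the linear scoring function, the two-action space, and
# mean-pooled summary features are assumptions, not the patent's module.
import numpy as np

def score_actions(concat_features, weights):
    """Stand-in for the trained module: value for each action option."""
    return concat_features @ weights                      # shape (num_actions,)

def summarize(shot_features, weights, include_action=0):
    """shot_features: (num_shots, D) per-shot feature sets.
    weights: (2 * D, 2) parameters of the stand-in scoring function."""
    summary_feats = np.zeros(shot_features.shape[1])      # summarization feature set
    selected = []
    for idx, feats in enumerate(shot_features):
        concat = np.concatenate([feats, summary_feats])   # concatenated feature set
        action = int(np.argmax(score_actions(concat, weights)))  # reward outcome
        if action == include_action:
            selected.append(idx)
            # Modify the summarization feature set to include this shot.
            summary_feats = shot_features[selected].mean(axis=0)
    return selected  # indices of shots whose frames form the summary video
```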

    Depicting Humans in Text-Defined Outfits

    Publication Number: US20220108509A1

    Publication Date: 2022-04-07

    Application Number: US17553114

    Filing Date: 2021-12-16

    Applicant: Adobe Inc.

    Abstract: Generating images and videos depicting a human subject wearing textually defined attire is described. An image generation system receives a two-dimensional reference image depicting a person and a textual description of target clothing that the person is to be depicted wearing. To maintain a personal identity of the person, the image generation system implements a generative model, trained using both discriminator loss and perceptual quality loss, which is configured to generate images from text. In some implementations, the image generation system is configured to train the generative model to output visually realistic images depicting the human subject in the target clothing. The image generation system is further configured to apply the trained generative model to process individual frames of a reference video depicting a person and output frames depicting the person wearing textually described target clothing.
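
    A minimal sketch of a generator training step that combines a discriminator (adversarial) loss with a perceptual quality loss; the generator, discriminator, and feature-extractor modules and the loss weighting are placeholders, not the patent's architecture:

```python
# Illustrative sketch of combining adversarial and perceptual losses; the
# modules passed in are placeholders for whatever networks are used.
import torch
import torch.nn.functional as F

def generator_step(generator, discriminator, feature_extractor,
                   ref_image, text_embedding, optimizer, perceptual_weight=10.0):
    """ref_image: (B, 3, H, W) reference photo of the person;
    text_embedding: (B, T) encoding of the target-clothing description."""
    optimizer.zero_grad()
    fake = generator(ref_image, text_embedding)

    # Discriminator loss: push the generator's output to be classified as real.
    adv_loss = F.binary_cross_entropy_with_logits(
        discriminator(fake, text_embedding),
        torch.ones(fake.size(0), 1, device=fake.device),
    )

    # Perceptual quality loss: match deep features of the output and the
    # reference image so the person's identity is preserved.
    perc_loss = F.l1_loss(feature_extractor(fake), feature_extractor(ref_image))

    loss = adv_loss + perceptual_weight * perc_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```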

    Depicting humans in text-defined outfits

    Publication Number: US11210831B2

    Publication Date: 2021-12-28

    Application Number: US16804822

    Filing Date: 2020-02-28

    Applicant: Adobe Inc.

    Abstract: Generating images and videos depicting a human subject wearing textually defined attire is described. An image generation system receives a two-dimensional reference image depicting a person and a textual description of target clothing that the person is to be depicted wearing. To maintain a personal identity of the person, the image generation system implements a generative model, trained using both discriminator loss and perceptual quality loss, which is configured to generate images from text. In some implementations, the image generation system is configured to train the generative model to output visually realistic images depicting the human subject in the target clothing. The image generation system is further configured to apply the trained generative model to process individual frames of a reference video depicting a person and output frames depicting the person wearing textually described target clothing.

    System and Method for Low-Latency Content Streaming

    Publication Number: US20210289235A1

    Publication Date: 2021-09-16

    Application Number: US17332033

    Filing Date: 2021-05-27

    Applicant: Adobe Inc.

    Abstract: Embodiments of a system and method for low-latency content streaming are described. In various embodiments, multiple data fragments may be sequentially generated. Each data fragment may represent a distinct portion of media content generated from a live content source. Each data fragment may include multiple sub-portions. Furthermore, for each data fragment, generating that fragment may include sequentially generating each sub-portion of that fragment. Embodiments may include, responsive to receiving a request for a particular data fragment from a client during the generation of a particular sub-portion of that particular data fragment, providing the particular sub-portion to the client subsequent to that particular sub-portion being generated and prior to the generation of that particular data fragment being completed in order to reduce playback latency at the client relative to the live content source.
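
    A minimal sketch of serving a fragment's sub-portions to a client while the fragment is still being generated, assuming an asyncio queue and a sentinel end marker; these are illustrative stand-ins, not the patent's implementation:

```python
# Illustrative sketch: the asyncio queue, sleep-based "encoding", and sentinel
# end marker are assumptions used to show early sub-portion delivery.
import asyncio

FRAGMENT_DONE = object()   # sentinel marking the end of one data fragment

async def generate_fragment(chunks: asyncio.Queue, num_subportions: int = 8):
    """Sequentially generate the sub-portions of a single data fragment."""
    for i in range(num_subportions):
        await asyncio.sleep(0.05)                  # stand-in for encoding work
        await chunks.put(f"sub-portion {i}".encode())
    await chunks.put(FRAGMENT_DONE)

async def serve_fragment(chunks: asyncio.Queue, send):
    """Send each sub-portion as soon as it exists, without waiting for the
    whole fragment to finish, reducing playback latency at the client."""
    while True:
        chunk = await chunks.get()
        if chunk is FRAGMENT_DONE:
            break
        await send(chunk)

async def main():
    chunks: asyncio.Queue = asyncio.Queue()
    async def send(chunk: bytes):                  # stand-in for a chunked HTTP response
        print("sent", chunk)
    await asyncio.gather(generate_fragment(chunks), serve_fragment(chunks, send))

if __name__ == "__main__":
    asyncio.run(main())
```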

    Latency mitigation for encoding data

    Publication Number: US11120363B2

    Publication Date: 2021-09-14

    Application Number: US15788455

    Filing Date: 2017-10-19

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present disclosure provide systems, methods, and computer storage media for mitigating latencies associated with the encoding of digital assets. Instead of waiting for codebook generation to complete in order to encode a digital asset for storage, embodiments described herein describe a shifting codebook generation and employment technique that significantly mitigates any latencies typically associated with encoding schemes. As a digital asset is received, a single codebook is trained based on each portion of the digital asset, or in some instances along with each portion of other digital assets being received. The single codebook is employed to encode subsequent portion(s) of the digital asset as it is received. The process continues until an end of the digital asset is reached or another command to terminate the encoding process is received. To encode an initial portion of the digital asset, a bootstrap codebook can be employed.
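
    A minimal sketch of the shifting-codebook idea: each incoming portion is encoded with the codebook trained on earlier portions, after which the codebook is refined on all data received so far. The k-means-style vector quantizer and the random bootstrap codebook are assumptions for illustration:

```python
# Illustrative sketch: the vector-quantization codebook, its k-means refinement,
# and the random bootstrap initialization are assumptions, not the patent's scheme.
import numpy as np

def nearest_code(vectors, codebook):
    """Encode vectors as indices of their nearest codewords."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def update_codebook(history, codebook, iters=3):
    """Cheap k-means refinement of the codebook on the data seen so far."""
    for _ in range(iters):
        assign = nearest_code(history, codebook)
        for k in range(len(codebook)):
            members = history[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def encode_stream(portions, codebook_size=16, dim=8, seed=0):
    """portions: iterable of (n, dim) arrays, received one portion at a time."""
    rng = np.random.default_rng(seed)
    codebook = rng.normal(size=(codebook_size, dim))    # bootstrap codebook
    history = np.empty((0, dim))
    encoded = []
    for portion in portions:
        encoded.append(nearest_code(portion, codebook)) # encode with current codebook
        history = np.vstack([history, portion])
        codebook = update_codebook(history, codebook)   # train for later portions
    return encoded, codebook
```

    Because each portion is encoded immediately with the most recent codebook rather than waiting for codebook generation over the full asset, the encoding latency stays bounded as the asset streams in.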
