Systems and methods for digital image editing

    Publication Number: US10755036B1

    Publication Date: 2020-08-25

    Application Number: US15147123

    Filing Date: 2016-05-05

    Applicant: Snap Inc.

    Abstract: A system according to various exemplary embodiments includes a processor and a user interface, communication module, and memory coupled to the processor. The memory stores instructions that, when executed by the processor, cause the system to: retrieve a digital image from a server using the communication module; present the digital image on a display of the user interface; receive edits to the digital image via the user interface; generate, based on the edits, a modified digital image, wherein generating the modified digital image includes transforming a format of the digital image to include a field containing an identifier associated with the modified digital image; and transmit the modified digital image to the server using the communication module.
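    The format transformation described in the claim — generating a modified image whose format includes an identifier field — can be illustrated with a toy sketch. The `apply_edits` function, the dict-based "image format", and the use of a UUID as the identifier are all invented for illustration; the patent does not specify them:

```python
import uuid

def apply_edits(image: dict, edits: dict) -> dict:
    """Apply per-pixel edits, then transform the format so the modified
    image carries an identifier field (hypothetical format)."""
    modified = dict(image)
    modified["pixels"] = [edits.get(i, p) for i, p in enumerate(image["pixels"])]
    # Transform the format: add a field containing an identifier
    # associated with the modified digital image.
    modified["identifier"] = uuid.uuid4().hex
    return modified

original = {"pixels": [0, 0, 0, 0]}
modified = apply_edits(original, {1: 255})
print(modified["pixels"])        # [0, 255, 0, 0]
print("identifier" in modified)  # True
```

In a real system the identifier would let the server associate the edited image with its original upon retransmission; here it is simply a random token.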

    Systems and methods for content tagging

    Publication Number: US10157333B1

    Publication Date: 2018-12-18

    Application Number: US15247697

    Filing Date: 2016-08-25

    Applicant: Snap Inc.

    Abstract: Systems, methods, devices, media, and computer-readable instructions are described for local image tagging in a resource-constrained environment. One embodiment involves processing image data using a deep convolutional neural network (DCNN) comprising at least a first subgraph and a second subgraph, the first subgraph comprising at least a first layer and a second layer; processing the image data using at least the first layer of the first subgraph to generate first intermediate output data; processing, by the mobile device, the first intermediate output data using at least the second layer of the first subgraph to generate first subgraph output data; and, in response to a determination that each layer reliant on the first intermediate data has completed processing, deleting the first intermediate data from the mobile device. Additional embodiments involve convolving entire pixel resolutions of the image data against kernels in different layers of the DCNN.
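    The memory-saving step — deleting an intermediate result once every layer that depends on it has finished — can be sketched with a toy dependency-counted buffer pool. The `BufferPool` class and the lambda "layers" are invented for illustration and stand in for real DCNN tensors and layers:

```python
class BufferPool:
    """Toy buffer manager: each intermediate tracks how many layers still
    need it and is deleted from memory when that count reaches zero."""
    def __init__(self):
        self.buffers = {}
        self.pending = {}

    def put(self, name, data, consumers):
        self.buffers[name] = data
        self.pending[name] = consumers

    def consume(self, name):
        data = self.buffers[name]
        self.pending[name] -= 1
        if self.pending[name] == 0:
            del self.buffers[name]  # free the intermediate data
        return data

layer1 = lambda xs: [x * 2 for x in xs]  # stand-in for the first layer
layer2 = lambda xs: [x + 1 for x in xs]  # stand-in for the second layer

pool = BufferPool()
image_data = [1, 2, 3]
pool.put("intermediate", layer1(image_data), consumers=1)
output = layer2(pool.consume("intermediate"))
print(output)                          # [3, 5, 7]
print("intermediate" in pool.buffers)  # False — freed after last consumer
```

On a memory-constrained mobile device the same bookkeeping, applied to large activation tensors rather than short lists, is what makes running the network feasible.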

    Encoding and decoding a stylized custom graphic

    Publication Number: US11887344B2

    Publication Date: 2024-01-30

    Application Number: US18128128

    Filing Date: 2023-03-29

    Applicant: Snap Inc.

    CPC classification number: G06T9/002 G06N3/047 H04L51/52

    Abstract: Disclosed are methods for encoding information in a graphic image. The information may be encoded so as to have a visual appearance that adopts a particular style, so that the encoded information is visually pleasing in the environment in which it is displayed. An encoder and decoder are trained during an integrated training process, where the encoder is tuned to minimize a loss when its encoded images are decoded. Similarly, the decoder is also trained to minimize loss when decoding the encoded images. Both the encoder and decoder may utilize a convolutional neural network in some aspects to analyze data and/or images. Once data is encoded, a style from a sample image is transferred to the encoded data. When decoding, the decoder may largely ignore the style aspects of the encoded data and decode based on a content portion of the data.
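    The integrated training idea — tuning the encoder so that the decoder's loss on its output is minimized, and vice versa — can be sketched with a toy joint search. The `encode`/`decode` functions, the gain and threshold parameters, and the grid search standing in for gradient training are all invented for illustration:

```python
def encode(bits, gain):
    # hypothetical encoder: maps each bit to a signed value scaled by gain
    return [gain * (2 * b - 1) for b in bits]

def decode(signal, threshold):
    # hypothetical decoder: recovers bits by thresholding the signal
    return [1 if s > threshold else 0 for s in signal]

def joint_loss(bits, gain, threshold):
    # loss used to tune encoder and decoder together: the number of
    # bits the decoder fails to recover from the encoder's output
    decoded = decode(encode(bits, gain), threshold)
    return sum(b != d for b, d in zip(bits, decoded))

bits = [1, 0, 1, 1, 0]
# "integrated training": search encoder and decoder parameters jointly
# for the pair that minimizes the decoding loss
best = min(
    ((joint_loss(bits, g, t), g, t)
     for g in (0.1, 0.5, 1.0) for t in (-0.2, 0.0, 0.2)),
    key=lambda x: x[0],
)
print(best[0])  # 0 — some (gain, threshold) pair decodes perfectly
```

In the patent both sides are convolutional networks trained by gradient descent on a shared loss; the sketch keeps only the joint-optimization structure.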

    Encoding and decoding a stylized custom graphic

    Publication Number: US11670012B2

    Publication Date: 2023-06-06

    Application Number: US17302361

    Filing Date: 2021-04-30

    Applicant: Snap Inc.

    CPC classification number: G06T9/002 G06N3/047 H04L51/52

    Abstract: Disclosed are methods for encoding information in a graphic image. The information may be encoded so as to have a visual appearance that adopts a particular style, so that the encoded information is visually pleasing in the environment in which it is displayed. An encoder and decoder are trained during an integrated training process, where the encoder is tuned to minimize a loss when its encoded images are decoded. Similarly, the decoder is also trained to minimize loss when decoding the encoded images. Both the encoder and decoder may utilize a convolutional neural network in some aspects to analyze data and/or images. Once data is encoded, a style from a sample image is transferred to the encoded data. When decoding, the decoder may largely ignore the style aspects of the encoded data and decode based on a content portion of the data.

    DENSE FEATURE SCALE DETECTION FOR IMAGE MATCHING

    Publication Number: US20220292697A1

    Publication Date: 2022-09-15

    Application Number: US17825994

    Filing Date: 2022-05-26

    Applicant: Snap Inc.

    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
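    The final combination step — multiplying per-scale feature data by the attention map's soft weights and summing across scales — can be sketched directly. The list-of-lists layout (scales × pixels) and the `dense_features` name are invented for illustration; real feature maps would be multi-channel tensors:

```python
def dense_features(feature_maps, attention_maps):
    """Combine per-scale features with a soft attention distribution:
    weight each scale's features per pixel, then sum across scales."""
    scales = len(feature_maps)
    pixels = len(feature_maps[0])
    return [
        sum(feature_maps[s][p] * attention_maps[s][p] for s in range(scales))
        for p in range(pixels)
    ]

features = [[1.0, 2.0], [3.0, 4.0]]     # two scales, two pixels
attention = [[0.25, 0.5], [0.75, 0.5]]  # per-pixel soft distribution over scales
combined = dense_features(features, attention)
print(combined)  # [2.5, 3.0]
```

Because the attention weights form a distribution over scales at each pixel, textured regions can emphasize fine scales while smooth regions lean on coarse ones.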

    Feedback adversarial learning
    Invention Grant

    Publication Number: US11429841B1

    Publication Date: 2022-08-30

    Application Number: US16192437

    Filing Date: 2018-11-15

    Applicant: Snap Inc.

    Abstract: Disclosed is a feedback adversarial learning framework, a recurrent framework for generative adversarial networks that can be widely adapted to not only stabilize training but also generate higher quality images. In some aspects, a discriminator's spatial outputs are distilled to improve generation quality. The disclosed embodiments model the discriminator into the generator, and the generator learns from its mistakes over time. In some aspects, a discriminator architecture encourages the model to be locally and globally consistent.

    EFFICIENT HUMAN POSE TRACKING IN VIDEOS

    Publication Number: US20210125342A1

    Publication Date: 2021-04-29

    Application Number: US16949594

    Filing Date: 2020-11-05

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, where the joint data is generated by a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames, and the joint locations are tracked using a one-shot learner neural network trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
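    The efficiency idea — a deep network on one portion of the frames and a shallow network on the rest — can be sketched as keyframe scheduling. The `track_joints` function, the keyframe interval, and the lambda "networks" are invented for illustration; real networks would emit joint-location maps, not integers:

```python
def track_joints(frames, deep_net, shallow_net, keyframe_every=4):
    """Run the expensive deep network only on keyframes and a cheap
    shallow network, seeded by the previous estimate, in between."""
    joints = []
    for i, frame in enumerate(frames):
        if i % keyframe_every == 0:
            joints.append(deep_net(frame))            # precise but slow
        else:
            joints.append(shallow_net(frame, joints[-1]))  # fast refinement
    return joints

deep = lambda f: f * 2              # stand-in for the deep CNN estimate
shallow = lambda f, prev: prev + 1  # stand-in for shallow refinement of prev
tracked = track_joints([1, 2, 3, 4, 5], deep, shallow, keyframe_every=4)
print(tracked)  # [2, 3, 4, 5, 10]
```

Amortized over a video, most frames pay only the shallow network's cost while the periodic deep passes keep the track anchored.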
