Digital content query-aware sequential search

    Publication number: US12124439B2

    Publication date: 2024-10-22

    Application number: US17513127

    Filing date: 2021-10-28

    Applicant: Adobe Inc.

    CPC classification number: G06F16/245 G06F16/248 G06N20/00

    Abstract: Digital content search techniques are described that overcome the challenges found in conventional sequence-based techniques through use of a query-aware sequential search. In one example, a search query is received and sequence input data is obtained based on the search query. The sequence input data describes a sequence of digital content and respective search queries. Embedding data is generated based on the sequence input data using an embedding module of a machine-learning model. The embedding module includes a query-aware embedding layer that generates embeddings of the sequence of digital content and respective search queries. A search result is generated referencing at least one item of digital content by processing the embedding data using at least one layer of the machine-learning model.
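
The query-aware embedding layer described above can be sketched minimally as follows. This is an illustrative toy, not the patented implementation: the vocabulary sizes, the additive fusion of item and query embeddings, and the positional term are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_ITEMS, NUM_QUERIES, DIM = 100, 50, 16

# Hypothetical lookup tables for digital content items and search queries.
item_emb = rng.normal(size=(NUM_ITEMS, DIM))
query_emb = rng.normal(size=(NUM_QUERIES, DIM))

def query_aware_embed(item_ids, query_ids):
    """Fuse each item embedding with the embedding of the query that
    surfaced it (here: simple addition, plus a toy positional signal),
    so downstream layers see query context alongside the sequence."""
    seq_len = len(item_ids)
    pos = np.arange(seq_len)[:, None] / seq_len  # toy positional term
    return item_emb[item_ids] + query_emb[query_ids] + pos

# A sequence of 4 interactions: (item engaged with, query that produced it).
items = [3, 17, 42, 8]
queries = [5, 5, 12, 12]
seq = query_aware_embed(items, queries)
print(seq.shape)  # (4, 16)
```

The resulting `(sequence length, dimension)` matrix is what the subsequent model layers would consume to produce a search result.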

    Locally constrained self-attentive sequential recommendation

    Publication number: US12019671B2

    Publication date: 2024-06-25

    Application number: US17501191

    Filing date: 2021-10-14

    Applicant: Adobe Inc.

    CPC classification number: G06F16/438 G06F16/447 G06N3/045

    Abstract: Digital content search techniques are described. In one example, the techniques are incorporated as part of a multi-head self-attention module of a transformer using machine learning. A localized self-attention module, for instance, is incorporated as part of the multi-head self-attention module that applies local constraints to the sequence. This is performable in a variety of ways. In a first instance, a model-based local encoder is used, examples of which include a fixed-depth recurrent neural network (RNN) and a convolutional network. In a second instance, a masking-based local encoder is used, examples of which include use of a fixed window, Gaussian initialization, and an adaptive predictor.
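
The fixed-window variant of the masking-based local encoder can be sketched as a single attention head whose score matrix is masked outside a local band. The window size and dimensions below are invented for illustration; the patent's other variants (Gaussian initialization, adaptive predictor, model-based encoders) are not shown.

```python
import numpy as np

def local_self_attention(x, window=2):
    """Single-head self-attention where each position may only attend to
    neighbors within a fixed window (masking-based local constraint)."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    # Fixed-window mask: position i sees positions j with |i - j| <= window.
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= window
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x, weights

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 8))
out, w = local_self_attention(x, window=1)
print(out.shape)  # (6, 8)
print(w[0, 3])    # 0.0 -- position 0 cannot attend to position 3
```

In a full transformer this localized head would sit alongside unconstrained heads inside the multi-head self-attention module.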

    Extracting textures from text based images

    Publication number: US11776168B2

    Publication date: 2023-10-03

    Application number: US17219391

    Filing date: 2021-03-31

    Applicant: Adobe Inc.

    CPC classification number: G06T11/001 G06T5/005 G06T11/60 G06V30/153 G06V30/158

    Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that extract a texture from embedded text within a digital image utilizing kerning-adjusted glyphs. For example, the disclosed systems utilize text recognition and text segmentation to identify and segment glyphs from embedded text depicted in a digital image. Subsequently, in some implementations, the disclosed systems determine optimistic kerning values between consecutive glyphs and utilize the kerning values to reduce gaps between the consecutive glyphs. Furthermore, in one or more implementations, the disclosed systems generate a synthesized texture utilizing the kerning-value-adjusted glyphs by utilizing image inpainting on the textures corresponding to the kerning-value-adjusted glyphs. Moreover, in certain instances, the disclosed systems apply a target texture to a target digital text based on the generated synthesized texture.
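
The gap-reduction step can be illustrated with a toy sketch: given segmented glyph bounding boxes as `(x_start, x_end)` pairs, each glyph is shifted left so that the gap to its predecessor shrinks to a target kerning value. The box coordinates and target gap are invented; the patent's recognition, segmentation, and inpainting stages are not modeled here.

```python
def apply_kerning(boxes, target_gap=2):
    """Shift each glyph box left so the gap to the previous glyph
    equals target_gap, reducing uneven spacing between glyphs."""
    adjusted = [boxes[0]]
    for x0, x1 in boxes[1:]:
        prev_end = adjusted[-1][1]
        shift = (x0 - prev_end) - target_gap  # excess gap to remove
        adjusted.append((x0 - shift, x1 - shift))
    return adjusted

# Glyphs segmented from an image, with uneven gaps of 10 and 5 pixels.
glyphs = [(0, 20), (30, 48), (53, 70)]
print(apply_kerning(glyphs))  # [(0, 20), (22, 40), (42, 59)]
```

After this adjustment, the glyph textures sit close enough together that inpainting can synthesize a continuous texture across the reduced gaps.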

    GENERATING SCALABLE AND SEMANTICALLY EDITABLE FONT REPRESENTATIONS

    Publication number: US20220414314A1

    Publication date: 2022-12-29

    Application number: US17362031

    Filing date: 2021-06-29

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating scalable and semantically editable font representations utilizing a machine learning approach. For example, the disclosed systems generate a font representation code from a glyph utilizing a particular neural network architecture. For example, the disclosed systems utilize a glyph appearance propagation model and perform an iterative process to generate a font representation code from an initial glyph. Additionally, using a glyph appearance propagation model, the disclosed systems automatically propagate the appearance of the initial glyph from the font representation code to generate additional glyphs corresponding to respective glyph labels. In some embodiments, the disclosed systems propagate edits or other changes in appearance of a glyph to other glyphs within a glyph set (e.g., to match the appearance of the edited glyph).
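
The iterative fitting idea can be sketched with a stand-in linear decoder: a font representation code is refined until the decoder reproduces the observed glyph, and the same code is then rendered through a different glyph label to propagate the appearance. The decoder, dimensions, and least-squares objective are all illustrative assumptions, not the patent's neural architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
CODE_DIM, GLYPH_DIM = 4, 12

# Stand-in for a pretrained glyph decoder: glyph = W[label] @ code,
# with one linear map per glyph label.
W = rng.normal(size=(3, GLYPH_DIM, CODE_DIM))

def fit_font_code(target, label, steps=2000, lr=0.01):
    """Iteratively refine a font representation code so the decoder
    reproduces the observed glyph (least squares via gradient descent)."""
    code = np.zeros(CODE_DIM)
    for _ in range(steps):
        err = W[label] @ code - target
        code -= lr * W[label].T @ err
    return code

true_code = rng.normal(size=CODE_DIM)
observed_glyph = W[0] @ true_code           # the initial glyph we observe
code = fit_font_code(observed_glyph, label=0)
# Propagate the appearance: render a different glyph label from the
# same recovered font representation code.
other_glyph = W[1] @ code
print(np.allclose(W[0] @ code, observed_glyph, atol=1e-2))  # True
```

Editing the recovered code and re-rendering every label is the mechanism by which a change to one glyph propagates across the glyph set.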

    EXTRACTING TEXTURES FROM TEXT BASED IMAGES

    Publication number: US20220319065A1

    Publication date: 2022-10-06

    Application number: US17219391

    Filing date: 2021-03-31

    Applicant: Adobe Inc.

    Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that extract a texture from embedded text within a digital image utilizing kerning-adjusted glyphs. For example, the disclosed systems utilize text recognition and text segmentation to identify and segment glyphs from embedded text depicted in a digital image. Subsequently, in some implementations, the disclosed systems determine optimistic kerning values between consecutive glyphs and utilize the kerning values to reduce gaps between the consecutive glyphs. Furthermore, in one or more implementations, the disclosed systems generate a synthesized texture utilizing the kerning-value-adjusted glyphs by utilizing image inpainting on the textures corresponding to the kerning-value-adjusted glyphs. Moreover, in certain instances, the disclosed systems apply a target texture to a target digital text based on the generated synthesized texture.

    Interpretable user modeling from unstructured user data

    Publication number: US11381651B2

    Publication date: 2022-07-05

    Application number: US16424949

    Filing date: 2019-05-29

    Applicant: ADOBE INC.

    Abstract: Methods and systems are provided for generating interpretable user modeling system. The interpretable user modeling system can use an intent neural network to implement one or more tasks. The intent neural network can bridge a semantic gap between log data and human language by leveraging tutorial data to understand user logs in a semantically meaningful way. A memory unit of the intent neural network can capture information from the tutorial data. Such a memory unit can be queried to identify human readable sentences related to actions received by the intent neural network. The human readable sentences can be used to interpret the user log data in a semantically meaningful way.
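
The memory-query step can be sketched as attention over a store of tutorial sentences: an action embedding is scored against each memory key, and the best-matching human-readable sentence is returned. The sentences, embeddings, and dot-product scoring are invented for illustration; the patent's intent network is a learned model, not this lookup.

```python
import numpy as np

rng = np.random.default_rng(3)
DIM = 8

# Hypothetical memory built from tutorial data: each slot pairs a
# human-readable sentence with a key embedding (unit-normalized).
sentences = [
    "Select the crop tool to trim the canvas.",
    "Apply a gaussian blur to soften the layer.",
    "Export the document as a PDF.",
]
memory_keys = rng.normal(size=(len(sentences), DIM))
memory_keys /= np.linalg.norm(memory_keys, axis=1, keepdims=True)

def query_memory(action_vec):
    """Attend over the memory and return the sentence whose key best
    matches the action embedding (softmax over dot-product scores)."""
    scores = memory_keys @ action_vec
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return sentences[int(np.argmax(weights))], weights

# A logged user action whose embedding lands at the "blur" slot,
# purely for illustration.
action = memory_keys[1]
best, w = query_memory(action)
print(best)  # "Apply a gaussian blur to soften the layer."
```

Mapping opaque log events to retrieved tutorial sentences is what makes the resulting user model interpretable.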

    Preserving Document Design Using Font Synthesis

    Publication number: US20220172498A1

    Publication date: 2022-06-02

    Application number: US17675206

    Filing date: 2022-02-18

    Applicant: Adobe Inc.

    Abstract: Automatic font synthesis for modifying a local font to have an appearance that is visually similar to a source font is described. A font modification system receives an electronic document including the source font together with an indication of a font descriptor for the source font. The font descriptor includes information describing various font attributes for the source font, which define a visual appearance of the source font. Using the source font descriptor, the font modification system identifies a local font that is visually similar in appearance to the source font by comparing local font descriptors to the source font descriptor. A visually similar font is then synthesized by modifying glyph outlines of the local font to achieve the visual appearance defined by the source font descriptor. The synthesized font is then used to replace the source font and output in the electronic document at the computing device.
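
The descriptor-comparison step can be sketched as a nearest-neighbor search over attribute vectors. The font names, the four-attribute descriptor (weight, width, slant, contrast), and cosine similarity are all assumptions made for this example.

```python
import numpy as np

# Hypothetical local font descriptors: each vector summarizes weight,
# width, slant, and contrast on a 0..1 scale (invented values).
local_fonts = {
    "LocalSerif":  np.array([0.4, 0.5, 0.0, 0.8]),
    "LocalSans":   np.array([0.5, 0.5, 0.0, 0.1]),
    "LocalItalic": np.array([0.4, 0.5, 0.7, 0.6]),
}

def closest_local_font(source_desc):
    """Pick the local font whose descriptor is most similar to the
    source font's descriptor (cosine similarity)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(local_fonts, key=lambda name: cos(local_fonts[name], source_desc))

# A bold-ish, upright, high-contrast source font descriptor.
source = np.array([0.6, 0.5, 0.0, 0.9])
print(closest_local_font(source))  # LocalSerif
```

The selected local font is then the starting point whose glyph outlines are modified toward the visual appearance the source descriptor defines.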

    Texture hallucination for large-scale image super-resolution

    Publication number: US11288771B2

    Publication date: 2022-03-29

    Application number: US16861688

    Filing date: 2020-04-29

    Applicant: ADOBE INC.

    Abstract: Systems and methods for texture hallucination with a large upscaling factor are described. Embodiments of the systems and methods may receive an input image and a reference image, extract an upscaled feature map from the input image, match the input image to a portion of the reference image, wherein a resolution of the reference image is higher than a resolution of the input image, concatenate the upscaled feature map with a reference feature map corresponding to the portion of the reference image to produce a concatenated feature map, and generate a reconstructed image based on the concatenated feature map using a machine learning model trained with a texture loss and a degradation loss, wherein the texture loss is based on a high frequency band filter, and the degradation loss is based on a downscaled version of the reconstructed image.
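
The two training losses can be sketched with simple stand-ins: a box-blur residual as the high-frequency band filter for the texture loss, and average pooling as the degradation model that downscales the reconstruction for comparison against the low-resolution input. The filter choice, pooling, and scale factor are illustrative assumptions.

```python
import numpy as np

def downscale(img, factor=2):
    """Average-pool downscaling (stand-in for the degradation model)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def high_freq(img):
    """Crude high-frequency band: the image minus a 3x3 box blur."""
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return img - blur

def losses(reconstructed, reference_patch, low_res_input):
    """Texture loss compares high-frequency content against the matched
    reference patch; degradation loss compares the downscaled
    reconstruction against the low-resolution input."""
    texture_loss = np.mean((high_freq(reconstructed) - high_freq(reference_patch)) ** 2)
    degradation_loss = np.mean((downscale(reconstructed, 4) - low_res_input) ** 2)
    return texture_loss, degradation_loss

rng = np.random.default_rng(4)
sr = rng.normal(size=(16, 16))   # reconstructed (super-resolved) image
ref = rng.normal(size=(16, 16))  # matched patch of the reference image
lr = downscale(sr, 4)            # low-res input consistent with sr
t, d = losses(sr, ref, lr)
print(d)  # 0.0 -- the reconstruction exactly matches its own downscale
```

During training, minimizing the texture term pushes the model to hallucinate the reference's fine detail, while the degradation term keeps the output faithful to the low-resolution input.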

    TEXTURE HALLUCINATION FOR LARGE-SCALE IMAGE SUPER-RESOLUTION

    Publication number: US20210342974A1

    Publication date: 2021-11-04

    Application number: US16861688

    Filing date: 2020-04-29

    Applicant: ADOBE INC.

    Abstract: Systems and methods for texture hallucination with a large upscaling factor are described. Embodiments of the systems and methods may receive an input image and a reference image, extract an upscaled feature map from the input image, match the input image to a portion of the reference image, wherein a resolution of the reference image is higher than a resolution of the input image, concatenate the upscaled feature map with a reference feature map corresponding to the portion of the reference image to produce a concatenated feature map, and generate a reconstructed image based on the concatenated feature map using a machine learning model trained with a texture loss and a degradation loss, wherein the texture loss is based on a high frequency band filter, and the degradation loss is based on a downscaled version of the reconstructed image.
