SIMULATED HANDWRITING IMAGE GENERATOR

    Publication No.: US20210166013A1

    Publication Date: 2021-06-03

    Application No.: US16701586

    Filing Date: 2019-12-03

    Applicant: ADOBE INC.

    Abstract: Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.
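The abstract describes the data flow (a style encoder and a variable-length text encoder feeding a decoder) without implementation detail. A toy NumPy sketch of that flow is below; every shape, weight, and function name is invented for illustration, and random weights stand in for the trained encoder-decoder network:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_style(sample_image):
    """Toy style encoder: reduce a handwriting sample image to a
    fixed-length style vector capturing shape/size/slope/spacing."""
    w = rng.standard_normal((sample_image.size, 16))
    return np.tanh(sample_image.ravel() @ w)

def encode_text(coded_text, dim=16):
    """Toy text encoder: embed a variable-length coded text input
    as one vector per character."""
    emb = rng.standard_normal((128, dim))
    return emb[coded_text]                   # (len(text), dim)

def decode(style_vec, text_seq):
    """Toy decoder: render one 8x8 'glyph' per input character,
    conditioned on the style vector."""
    w = rng.standard_normal((text_seq.shape[1] + style_vec.size, 64))
    cond = np.hstack([text_seq, np.tile(style_vec, (len(text_seq), 1))])
    return np.tanh(cond @ w).reshape(len(text_seq), 8, 8)

sample = rng.random((8, 8))                  # stand-in handwriting sample image
text = np.array([ord(c) for c in "hi"])      # variable-length coded text input
image = decode(encode_style(sample), encode_text(text))
print(image.shape)                           # (2, 8, 8): one glyph per character
```

Because the text is encoded separately from the style, the same style vector can render text that never appeared in the original handwriting sample, which is the point of the claimed technique.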

    Domain Adaptation for Machine Learning Models

    Publication No.: US20220391768A1

    Publication Date: 2022-12-08

    Application No.: US17883811

    Filing Date: 2022-08-09

    Applicant: Adobe Inc.

    Abstract: Adapting a machine learning model to process data that differs from training data used to configure the model for a specified objective is described. A domain adaptation system trains the model to process new domain data that differs from a training data domain by using the model to generate a feature representation for the new domain data, which describes different content types included in the new domain data. The domain adaptation system then generates a probability distribution for each discrete region of the new domain data, which describes a likelihood of the region including different content described by the feature representation. The probability distribution is compared to ground truth information for the new domain data to determine a loss function, which is used to refine model parameters. After determining that model outputs achieve a threshold similarity to the ground truth information, the model is output as a domain-agnostic model.
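The abstract outlines a training loop: per-region probability distributions, a loss against ground truth, and parameter refinement until the outputs are sufficiently similar to the ground truth. A minimal NumPy sketch of that loop follows, using multinomial logistic regression as a stand-in for the model; all sizes, the learning rate, and the similarity threshold are arbitrary choices, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# 6 discrete regions of new-domain data, 3 candidate content types,
# 4-dimensional feature representation per region (all sizes arbitrary).
features = rng.standard_normal((6, 4))
labels = rng.integers(0, 3, size=6)          # ground truth per region
W = np.zeros((4, 3))                         # model parameters to refine

losses = []
for step in range(200):
    probs = softmax(features @ W)            # probability distribution per region
    losses.append(-np.log(probs[np.arange(6), labels]).mean())  # loss vs ground truth
    grad = probs.copy()
    grad[np.arange(6), labels] -= 1.0        # gradient of the cross-entropy loss
    W -= 0.1 * features.T @ grad / 6         # refine parameters from the loss

# Output the model once predictions are close enough to the ground truth.
agreement = (softmax(features @ W).argmax(axis=1) == labels).mean()
print(f"loss {losses[0]:.3f} -> {losses[-1]:.3f}, agreement {agreement:.2f}")
```

The loss is guaranteed to fall over the loop because the surrogate objective is convex; the patented system applies the same compare-refine cycle to an arbitrary model and data domain.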

    SIMULATED HANDWRITING IMAGE GENERATOR

    Publication No.: US20220148326A1

    Publication Date: 2022-05-12

    Application No.: US17648718

    Filing Date: 2022-01-24

    Applicant: Adobe Inc.

    Abstract: Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.

    Simulated handwriting image generator

    Publication No.: US12229399B2

    Publication Date: 2025-02-18

    Application No.: US18420444

    Filing Date: 2024-01-23

    Applicant: Adobe Inc.

    Abstract: Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.

    SHARING OF USER MARKINGS BETWEEN PRINTED AND DIGITAL DOCUMENTS

    Publication No.: US20210303779A1

    Publication Date: 2021-09-30

    Application No.: US16834940

    Filing Date: 2020-03-30

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for sharing user markings between digital documents and corresponding physically printed documents. The sharing is facilitated using an Augmented Reality (AR) device, such as a smartphone or a tablet. The device streams images of a page of a book on a display. The device accesses a corresponding digital document that is a digital version of content printed on the book. In an example, the digital document has a digital user marking, e.g., a comment associated with a paragraph of the digital document, wherein a corresponding paragraph of the physical book lacks any such comment. When the device streams the images of the page of the book on the display, the device appends the digital comment on the paragraph of the page of the book within the image stream. Thus, the user can view the digital comment in the AR environment, while reading the physical book.
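A schematic sketch of the described flow is below. All data structures and function names are hypothetical, and substring matching stands in for the OCR and page-registration a real AR device would use:

```python
# Digital version of the printed book's content; one paragraph carries a
# digital user marking (a comment) that the physical page lacks.
digital_doc = {
    "para-1": {"text": "Call me Ishmael.", "comment": None},
    "para-2": {"text": "It was a dark night.", "comment": "Check this claim."},
}

def recognize_paragraphs(frame_text):
    """Stand-in for OCR/page matching: find which digital paragraphs
    appear in the streamed camera frame."""
    return [pid for pid, p in digital_doc.items() if p["text"] in frame_text]

def annotate_frame(frame_text):
    """Append each matched paragraph's digital comment to the AR overlay,
    so the user sees it while reading the physical book."""
    overlay = []
    for pid in recognize_paragraphs(frame_text):
        comment = digital_doc[pid]["comment"]
        if comment:                          # marking exists only digitally
            overlay.append((pid, comment))
    return overlay

print(annotate_frame("It was a dark night. The wind howled."))
# [('para-2', 'Check this claim.')]
```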

    Sharing of user markings between printed and digital documents

    Publication No.: US11520974B2

    Publication Date: 2022-12-06

    Application No.: US16834940

    Filing Date: 2020-03-30

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for sharing user markings between digital documents and corresponding physically printed documents. The sharing is facilitated using an Augmented Reality (AR) device, such as a smartphone or a tablet. The device streams images of a page of a book on a display. The device accesses a corresponding digital document that is a digital version of content printed on the book. In an example, the digital document has a digital user marking, e.g., a comment associated with a paragraph of the digital document, wherein a corresponding paragraph of the physical book lacks any such comment. When the device streams the images of the page of the book on the display, the device appends the digital comment on the paragraph of the page of the book within the image stream. Thus, the user can view the digital comment in the AR environment, while reading the physical book.

    WINDOWED CONTEXTUAL POOLING FOR OBJECT DETECTION NEURAL NETWORKS

    Publication No.: US20220237444A1

    Publication Date: 2022-07-28

    Application No.: US17158639

    Filing Date: 2021-01-26

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for neural network based windowed contextual pooling. A methodology implementing the techniques according to an embodiment includes segmenting input feature channels into first and second groups of feature channels. The method also includes applying a first windowed pooling process to the first group of feature channels to generate a first group of pooled feature channels and applying a second windowed pooling process to the second group of feature channels to generate a second group of pooled feature channels. The method further includes performing a weighted merging of the first group of pooled feature channels and the second group of pooled feature channels to generate merged pooled feature channels. The method further includes concatenating the merged pooled feature channels with the input feature channels to generate concatenated feature channels and applying a two-dimensional convolutional neural network to the concatenated feature channels to generate contextually pooled output feature channels.
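The abstract enumerates concrete steps: segment the channels, apply two windowed pooling processes, merge with weights, concatenate with the input, and convolve. A minimal NumPy sketch of that pipeline follows; the window size, the choice of max vs. average pooling, the fixed merge weight, and the 1x1 convolution are all illustrative stand-ins for quantities the patented network would learn:

```python
import numpy as np

rng = np.random.default_rng(0)

def windowed_pool(x, k=3, mode="max"):
    """Pool each channel over a k x k window (stride 1, zero padding)."""
    c, h, w = x.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            win = xp[:, i:i + k, j:j + k]
            out[:, i, j] = win.max(axis=(1, 2)) if mode == "max" else win.mean(axis=(1, 2))
    return out

x = rng.standard_normal((8, 5, 5))           # input feature channels (C, H, W)
g1, g2 = x[:4], x[4:]                        # segment into two channel groups
p1 = windowed_pool(g1, mode="max")           # first windowed pooling process
p2 = windowed_pool(g2, mode="avg")           # second windowed pooling process
alpha = 0.6                                  # merge weight (learned in the patent)
merged = alpha * p1 + (1 - alpha) * p2       # weighted merge of pooled groups
cat = np.concatenate([merged, x], axis=0)    # concatenate with input channels
conv_w = rng.standard_normal((16, cat.shape[0]))  # 1x1 conv as a toy 2-D CNN
out = np.einsum("oc,chw->ohw", conv_w, cat)  # contextually pooled output channels
print(out.shape)                             # (16, 5, 5)
```

Concatenating the merged pooled channels with the raw input lets the convolution see each location's original features alongside its windowed context, which is what "contextual pooling" refers to here.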
