Generating object images with different lighting conditions

    Publication No.: US12062221B2

    Publication Date: 2024-08-13

    Application No.: US17587840

    Application Date: 2022-01-28

    Applicant: ADOBE INC.

    Abstract: An image generation system generates images of objects under different lighting conditions. An image of an object and lighting conditions for an output image are received. The lighting conditions may specify, for instance, a location and/or color of one or more light sources. The image of the object is decomposed into a shading component and a reflectance component. A machine learning model takes the reflectance component and specified lighting conditions as input, and generates an output image of the object under the specified lighting conditions. In some configurations, the machine learning model may be trained on images of objects labeled with object classes, and the output image may be generated by also providing an object class of the object in the image as input to the machine learning model.
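
    The shading/reflectance split described above follows the classic intrinsic-image model I = R × S. A minimal numpy sketch of that idea (the abstract does not specify the decomposition method or the relighting model, so a fixed shading estimate stands in for both here):

```python
import numpy as np

def decompose(image, shading):
    """Split an image into reflectance, given a per-pixel shading estimate
    (I = R * S, so R = I / S). The shading estimate is hypothetical."""
    return image / np.clip(shading, 1e-6, None)

def relight(reflectance, new_shading):
    """Recombine reflectance with shading for new lighting conditions."""
    return np.clip(reflectance * new_shading, 0.0, 1.0)

# Toy example: a flat grey object under uniform light, relit with a
# left-to-right brightness gradient (a "moved" light source).
image = np.full((4, 4), 0.5)
old_shading = np.full((4, 4), 1.0)
reflectance = decompose(image, old_shading)
new_shading = np.linspace(0.2, 1.0, 4)[None, :].repeat(4, axis=0)
relit = relight(reflectance, new_shading)
```

    In the patented system the recombination step is learned rather than a plain multiplication; the sketch only illustrates the decomposition that feeds it.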

    CONTENT-ADAPTIVE TILING SOLUTION VIA IMAGE SIMILARITY FOR EFFICIENT IMAGE COMPRESSION

    Publication No.: US20230126890A1

    Publication Date: 2023-04-27

    Application No.: US18145118

    Application Date: 2022-12-22

    Applicant: Adobe, Inc.

    IPC Classes: G06T9/00 H03M7/30 G06T3/40

    Abstract: Techniques are provided herein for more efficiently storing images that have a common subject, such as product images that share the same product in the image. Each image undergoes an adaptive tiling procedure to split the image into a plurality of tiles, with each tile identifying a region of the image having pixels with the same content. The tiles across multiple images can then be clustered together and those tiles having identical content are removed. Once all duplicate tiles have been removed from the set of all tiles across the images, the tiles are once again clustered based on their encoding scheme and certain encoding parameters. Tiles within each cluster are compressed using the best compression technique for the tiles in each corresponding cluster. By removing duplicative tile content between numerous images of the same subject, the total amount of data that needs to be stored is reduced.
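
    The deduplication step can be sketched with content hashing: identical tiles across images collapse to one stored copy, and each image keeps only references. Fixed-size tiles stand in for the patent's content-adaptive tiling, which the abstract does not detail:

```python
import hashlib
import numpy as np

def tile_image(image, tile):
    """Split an image into fixed-size tiles (a simplified stand-in for
    content-adaptive tiling)."""
    h, w = image.shape[:2]
    return {
        (r, c): image[r:r + tile, c:c + tile]
        for r in range(0, h, tile)
        for c in range(0, w, tile)
    }

def dedupe_tiles(images, tile=8):
    """Keep one copy of each distinct tile across a set of images."""
    unique = {}   # content hash -> tile pixels, stored once
    layout = []   # per image: list of (position, hash) references
    for img in images:
        refs = []
        for pos, t in tile_image(img, tile).items():
            key = hashlib.sha256(t.tobytes()).hexdigest()
            unique.setdefault(key, t)
            refs.append((pos, key))
        layout.append(refs)
    return unique, layout

# Two "product shots" sharing an identical background: of 8 total tiles,
# only 2 distinct ones need to be stored.
a = np.zeros((16, 16), dtype=np.uint8)
b = a.copy()
b[:8, :8] = 255  # only one tile differs between the images
unique, layout = dedupe_tiles([a, b])
```

    The subsequent per-cluster compression step would then pick an encoder per group of surviving tiles.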

    TRANSLATING TEXTS FOR VIDEOS BASED ON VIDEO CONTEXT

    Publication No.: US20230102217A1

    Publication Date: 2023-03-30

    Application No.: US18049185

    Application Date: 2022-10-24

    Applicant: Adobe Inc.

    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods that can generate contextual identifiers indicating context for frames of a video and utilize those contextual identifiers to generate translations of text corresponding to such video frames. By analyzing a digital video file, the disclosed systems can identify video frames corresponding to a scene and a term sequence corresponding to a subset of the video frames. Based on image features of the video frames corresponding to the scene, the disclosed systems can utilize a contextual neural network to generate a contextual identifier (e.g., a contextual tag) indicating context for the video frames. Based on the contextual identifier, the disclosed systems can subsequently apply a translation neural network to generate a translation of the term sequence from a source language to a target language. In some cases, the translation neural network also generates affinity scores for the translation.
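
    The first step, grouping frames into scenes, can be illustrated with a simple frame-difference heuristic (the contextual and translation neural networks themselves are not reproduced here, and the threshold is an illustrative choice, not from the patent):

```python
import numpy as np

def scene_boundaries(frames, threshold=0.2):
    """Group frames into scenes: start a new scene whenever the mean
    absolute pixel difference between consecutive frames is large."""
    scenes, start = [], 0
    for i in range(1, len(frames)):
        if np.mean(np.abs(frames[i] - frames[i - 1])) > threshold:
            scenes.append((start, i - 1))
            start = i
    scenes.append((start, len(frames) - 1))
    return scenes

# Six frames: a dark scene followed by a bright one.
frames = [np.full((4, 4), 0.1)] * 3 + [np.full((4, 4), 0.9)] * 3
scenes = scene_boundaries(frames)
```

    Each detected scene would then be tagged by the contextual network, and that tag conditions the translation of the term sequence shown during those frames.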

    Generating contextualized image variants of multiple component images

    Publication No.: US11354828B2

    Publication Date: 2022-06-07

    Application No.: US17344094

    Application Date: 2021-06-10

    Applicant: Adobe Inc.

    IPC Classes: G06T11/00 G06T7/13 G06T11/20

    Abstract: In some embodiments, contextual image variations are generated for an input image. For example, a contextual composite image depicting a variation is generated based on an input image and a synthetic image component. The synthetic image component includes contextual features of a target object from the input image, such as shading, illumination, or depth that are depicted on the target object. The synthetic image component also includes a pattern from an additional image, such as a fabric pattern. In some cases, a mesh is determined for the target object. Illuminance values are determined for each mesh block. An adjusted mesh is determined based on the illuminance values. The synthetic image component is based on a combination of the adjusted mesh and the pattern from the additional image, such as a depiction of the fabric pattern with stretching, folding, or other contextual features from the input image.
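
    The per-block illuminance idea can be sketched in numpy: compute a mean luminance per mesh block over the target object, then modulate the replacement pattern with it so the composite keeps the original shading. The patent's mesh adjustment is geometric as well (stretching, folding); this sketch carries only the luminance component:

```python
import numpy as np

def block_illuminance(target, block=4):
    """Mean luminance of each mesh block over the target object region."""
    h, w = target.shape
    return target.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def composite_pattern(pattern, target, block=4):
    """Modulate a pattern by the target's per-block illuminance so the
    synthetic component inherits the target's shading."""
    illum = block_illuminance(target, block)
    scale = np.kron(illum, np.ones((block, block)))  # expand blocks to pixels
    return np.clip(pattern * scale, 0.0, 1.0)

# A uniform fabric pattern picks up the left-to-right shading gradient
# of the target object.
target = np.linspace(0.25, 1.0, 8)[None, :].repeat(8, axis=0)
pattern = np.full((8, 8), 0.8)
result = composite_pattern(pattern, target)
```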

    Image Modification to Generate Ghost Mannequin Effect in Image Content

    Publication No.: US20220129973A1

    Publication Date: 2022-04-28

    Application No.: US17077739

    Application Date: 2020-10-22

    Applicant: Adobe Inc.

    IPC Classes: G06Q30/06 G06K9/62

    Abstract: An image modification system receives image features of a base image and an additional image. The base image and the additional image depict an apparel item displayed on a mannequin. A first feature pair from the base image and a second feature pair from the additional image are determined. A first distance is calculated between the first feature pair and a second distance is calculated between the second feature pair. Based on a ratio including the first and second distances, a matching relationship between the first and second feature pairs is determined. A pixel of the base image is identified within an image area occluded by the mannequin. Based on the matching relationship, image data is identified for a corresponding additional pixel from the additional image. A modified base image including a ghost mannequin effect is generated by modifying the pixel to include the image data of the additional pixel.
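
    The ratio-based matching step can be sketched directly: compute the distance within each feature pair and accept a match when the ratio is close to 1, meaning the two images show the garment at a consistent scale. The tolerance below is an illustrative choice, not from the patent:

```python
import numpy as np

def pair_distance(pair):
    """Euclidean distance between the two keypoints of a feature pair."""
    (x1, y1), (x2, y2) = pair
    return float(np.hypot(x2 - x1, y2 - y1))

def pairs_match(base_pair, extra_pair, tolerance=0.1):
    """Decide a matching relationship from the ratio of the two pair
    distances: near 1.0 means consistent scale between the images."""
    ratio = pair_distance(base_pair) / pair_distance(extra_pair)
    return abs(ratio - 1.0) <= tolerance

# The same two collar points found in both apparel images, at slightly
# different positions and scale.
base = ((10.0, 10.0), (10.0, 30.0))    # distance 20
extra = ((12.0, 11.0), (12.0, 32.0))   # distance 21
matched = pairs_match(base, extra)
```

    Once matched, occluded base-image pixels can be filled from the corresponding additional-image pixels to produce the ghost mannequin effect.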

    CONTENT-ADAPTIVE TUTORIALS FOR GRAPHICS EDITING TOOLS IN A GRAPHICS EDITING SYSTEM

    Publication No.: US20220108506A1

    Publication Date: 2022-04-07

    Application No.: US17064231

    Application Date: 2020-10-06

    Applicant: ADOBE INC.

    IPC Classes: G06T11/60 G06K9/00 G06F9/451

    Abstract: Methods, systems, and computer storage media for providing tool tutorials based on tutorial information that is dynamically integrated into tool tutorial shells using graphics editing system operations in a graphics editing system. In operation, an image is received in association with a graphics editing application. Tool parameters (e.g., image-specific tool parameters) are generated based on processing the image. The tool parameters are generated for a graphics editing tool of the graphics editing application. The graphics editing tool (e.g., an object removal tool or spot healing tool) can be a premium counterpart of a simplified version of the tool offered in a freemium application service. Based on the tool parameters and the image, a tool tutorial data file is generated by incorporating the tool parameters and the image into a tool tutorial shell. The tool tutorial data file can be selectively rendered in an integrated interface of the graphics editing application.
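
    The shell-filling step amounts to templating: image-specific tool parameters are substituted into a pre-authored tutorial shell to produce the tutorial data file. The shell schema below is hypothetical, since the abstract does not define the file format:

```python
import json

def build_tutorial(shell, tool_params, image_ref):
    """Fill a tool tutorial shell with image-specific tool parameters
    and a reference to the user's image, producing a tutorial data file."""
    tutorial = dict(shell)
    tutorial["steps"] = [step.format(**tool_params) for step in shell["steps"]]
    tutorial["image"] = image_ref
    return json.dumps(tutorial)

# Hypothetical shell for a spot healing tutorial; the parameter names
# (brush_size, target) are illustrative.
shell = {
    "tool": "spot_healing",
    "steps": [
        "Set brush size to {brush_size}px",
        "Click the blemish at {target}",
    ],
}
params = {"brush_size": 24, "target": "(120, 80)"}
doc = build_tutorial(shell, params, "portrait.jpg")
```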

    Search Input Generation for Image Search

    Publication No.: US20210124774A1

    Publication Date: 2021-04-29

    Application No.: US16663191

    Application Date: 2019-10-24

    Applicant: Adobe Inc.

    Abstract: In implementations of search input generation for an image search, a computing device can capture image data of an environment scene that includes multiple objects. The computing device implements a search input module that can detect the multiple objects in the image data, and initiate a display of a selectable indication for each of the multiple objects. The search input module can then determine a subject object from the detected multiple objects, and generate the subject object as the search input for the image search.
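
    Determining the subject object among several detections could use a saliency heuristic such as the one below, favouring large, centrally framed objects. This is one plausible reading; the abstract does not specify how the subject is chosen:

```python
def subject_object(detections, frame_w, frame_h):
    """Pick the search subject: the detection whose bounding box is
    largest and closest to the frame center (an illustrative heuristic)."""
    cx, cy = frame_w / 2, frame_h / 2

    def score(det):
        x, y, w, h = det["box"]  # box as (x, y, width, height)
        area = w * h
        dist = abs(x + w / 2 - cx) + abs(y + h / 2 - cy)
        return area - dist  # reward size, penalise off-center placement

    return max(detections, key=score)

# Two detected objects in a captured scene; the big central chair wins.
detections = [
    {"label": "lamp", "box": (5, 5, 20, 40)},
    {"label": "chair", "box": (30, 20, 60, 70)},
]
subject = subject_object(detections, 120, 100)
```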

    GENERATING TOOL-BASED SMART-TUTORIALS

    Publication No.: US20210118325A1

    Publication Date: 2021-04-22

    Application No.: US16654737

    Application Date: 2019-10-16

    Applicant: Adobe Inc.

    IPC Classes: G09B19/00 G09B5/02 G06T13/80

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that generate dynamic tool-based animated tutorials. In particular, in one or more embodiments, the disclosed systems generate an animated tutorial in response to receiving a request associated with an image editing tool. The disclosed systems then extract steps from existing general tutorials that pertain to the image editing tool to generate tool-specific animated tutorials. In at least one embodiment, the disclosed systems utilize a clustering algorithm in conjunction with image parameters to provide a set of these generated animated tutorials that showcase diverse features and/or attributes of the image editing tool based on measured aesthetic gains resulting from application of the image editing tool within the animated tutorials.
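
    The diversity-by-clustering idea can be approximated greedily: repeatedly take the tutorial with the highest measured aesthetic gain whose showcased attribute is not yet covered. A simplified stand-in for the patent's clustering algorithm, with hypothetical attribute labels:

```python
def select_tutorials(tutorials, k=2):
    """Pick up to k tutorials, maximising aesthetic gain while keeping
    the set of showcased tool attributes diverse."""
    chosen, covered = [], set()
    for t in sorted(tutorials, key=lambda t: -t["gain"]):
        if t["attribute"] not in covered:
            chosen.append(t)
            covered.add(t["attribute"])
        if len(chosen) == k:
            break
    return chosen

tutorials = [
    {"name": "warm portrait", "attribute": "temperature", "gain": 0.9},
    {"name": "warmer portrait", "attribute": "temperature", "gain": 0.8},
    {"name": "high-contrast b/w", "attribute": "contrast", "gain": 0.6},
]
picked = select_tutorials(tutorials)
```

    The second temperature tutorial is skipped despite its high gain, so the returned pair demonstrates two different attributes of the tool.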

    Generating and providing topic visual elements based on audio content and video content of a digital video

    Publication No.: US10945040B1

    Publication Date: 2021-03-09

    Application No.: US16653541

    Application Date: 2019-10-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to methods, systems, and non-transitory computer-readable media for generating a topic visual element for a portion of a digital video based on audio content and visual content of the digital video. For example, the disclosed systems can generate a map between words of the audio content and their corresponding timestamps from the digital video and then modify the map by associating importance weights with one or more of the words. Further, the disclosed systems can generate an additional map by associating words embedded in one or more video frames of the visual content with their corresponding timestamps. Based on these maps, the disclosed systems can identify a topic for a portion of the digital video (e.g., a portion currently previewed on a computing device), generate a topic visual element that includes the topic, and provide the topic visual element for display on a computing device.
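
    The word-to-timestamp map with importance weights can be sketched as plain data: build the map from the transcript, then pick the topic for a previewed span as its highest-weighted word. The weighting scheme here is illustrative; the abstract does not say how weights are derived:

```python
def build_word_map(words, weights=None):
    """Map each transcript word to its timestamp, attaching an
    importance weight (default weight 1.0)."""
    weights = weights or {}
    return [
        {"word": w, "time": t, "weight": weights.get(w, 1.0)}
        for w, t in words
    ]

def topic_for_span(word_map, start, end):
    """Topic for a previewed span: the highest-weighted word inside it."""
    in_span = [e for e in word_map if start <= e["time"] <= end]
    return max(in_span, key=lambda e: e["weight"])["word"]

# Audio transcript words with timestamps (seconds into the video).
words = [("welcome", 0.0), ("gradient", 4.2), ("descent", 4.9), ("thanks", 60.0)]
word_map = build_word_map(words, {"gradient": 3.0, "descent": 2.5})
topic = topic_for_span(word_map, 0.0, 10.0)
```

    The patent combines this audio-derived map with a second map of on-frame text; the merge step is omitted here.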

    Intelligently generating digital note compilations from digital video

    Publication No.: US10929684B2

    Publication Date: 2021-02-23

    Application No.: US16415374

    Application Date: 2019-05-17

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for intelligently merging handwritten content and digital audio from a digital video based on monitored presentation flow. In particular, the disclosed systems can apply an edge detection algorithm to intelligently detect distinct sections of the digital video and locations of handwritten content entered onto a writing surface over time. Moreover, the disclosed systems can generate a transcription of handwritten content utilizing digital audio. For instance, the disclosed systems can utilize an audio text transcript as input to an optical character recognition algorithm and auto-correct text utilizing the audio text transcript. Further, the disclosed systems can analyze short form text from handwritten script and generate long form text from audio text transcripts. The disclosed systems can accurately, efficiently, and flexibly generate digital summaries that reflect diagrams, handwritten text transcriptions, and audio text transcripts over different presentation time periods.
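
    The auto-correction step, OCR output corrected against the audio transcript, can be sketched with fuzzy matching: replace each OCR token with its closest transcript word when the match is strong. Using difflib is an illustrative choice; the abstract names no matching method:

```python
import difflib

def correct_ocr(ocr_tokens, transcript_tokens, cutoff=0.6):
    """Auto-correct noisy OCR tokens using the audio transcript as a
    vocabulary: take the closest transcript word above the cutoff,
    otherwise keep the OCR token unchanged."""
    corrected = []
    for tok in ocr_tokens:
        close = difflib.get_close_matches(tok, transcript_tokens, n=1, cutoff=cutoff)
        corrected.append(close[0] if close else tok)
    return corrected

# Handwriting OCR misreads corrected by words the speaker actually said.
transcript = ["gradient", "descent", "minimizes", "the", "loss"]
ocr = ["grad1ent", "descnt", "loss"]
fixed = correct_ocr(ocr, transcript)
```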