-
Publication Number: US12159109B2
Publication Date: 2024-12-03
Application Number: US17525311
Filing Date: 2021-11-12
Applicant: ADOBE INC.
IPC: G06F40/289, G06F18/214, G06F40/211, G06F40/284, G06F40/30, G06F40/42, G06F18/22
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for pre-training entity extraction models to facilitate domain adaptation in resource-constrained domains. In an example embodiment, a first machine learning model is used to encode sentences of a source domain corpus and a target domain corpus into sentence embeddings. The sentence embeddings of the target domain corpus are combined into a target corpus embedding. Training sentences from the source domain corpus within a threshold of similarity to the target corpus embedding are selected. A second machine learning model is trained on the training sentences selected from the source domain corpus.
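The selection step in this abstract can be pictured with a short sketch. The snippet below is an illustrative reading, not the patented implementation: a sentence-transformers encoder stands in for the "first machine learning model", the target corpus embedding is taken as the mean of the target sentence embeddings, and the encoder name and threshold value are assumptions.

```python
# Illustrative sketch of similarity-based selection of source-domain training
# sentences. The encoder model, threshold value, and helper names are
# assumptions for illustration, not the patented system.
import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in "first model"

def select_training_sentences(source_sentences, target_sentences, threshold=0.5):
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

    # Encode both corpora into sentence embeddings.
    source_embs = encoder.encode(source_sentences, normalize_embeddings=True)
    target_embs = encoder.encode(target_sentences, normalize_embeddings=True)

    # Combine the target-domain sentence embeddings into a single
    # target corpus embedding (here: the mean, re-normalized).
    target_corpus_emb = target_embs.mean(axis=0)
    target_corpus_emb /= np.linalg.norm(target_corpus_emb)

    # Keep source sentences whose cosine similarity to the target corpus
    # embedding meets the threshold; these feed pre-training of the
    # second (entity extraction) model.
    sims = source_embs @ target_corpus_emb
    return [s for s, sim in zip(source_sentences, sims) if sim >= threshold]
```

The sentences returned here would then serve as the pre-training data for the second model before adaptation to the resource-constrained target domain.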
-
Publication Number: US20230169632A1
Publication Date: 2023-06-01
Application Number: US17521503
Filing Date: 2021-11-08
Applicant: Adobe Inc.
Inventor: Kuldeep Kulkarni, Soumya Dash, Hrituraj Singh, Bholeshwar Khurana, Aniruddha Mahapatra, Abhishek Bhatia
Abstract: Certain aspects and features of this disclosure relate to semantically-aware image extrapolation. In one example, an input image is segmented to produce an input segmentation map of object instances in the input image. An object generation network is used to generate an extrapolated semantic label map for an extrapolated image. The extrapolated semantic label map includes instances in the original image and instances that will appear in an outpainted region of the extrapolated image. A panoptic label map is derived from coordinates of output instances in the extrapolated image and used to identify partial instances and boundaries. Instance-aware context normalization is used to apply one or more characteristics from the input image to the outpainted region to maintain semantic continuity. The extrapolated image includes the original image and the outpainted region and can be rendered or stored for future use.
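One way to picture the "instance-aware context normalization" step is as per-instance statistic matching between the input region and the outpainted region. The sketch below is a simplified, assumed stand-in for the network operation described in the abstract; the function name and the mean/std matching rule are illustrative only.

```python
# Minimal sketch of the "instance-aware context normalization" idea: per-instance
# statistics from the input image guide the outpainted region so each instance
# stays visually consistent across the boundary. The mean/std matching below is
# an assumed, simplified stand-in for the learned operation.
import numpy as np

def instance_aware_normalize(extrapolated, panoptic_labels, input_mask):
    """extrapolated: (H, W, 3) float image covering input + outpainted pixels.
    panoptic_labels: (H, W) int map of instance ids over the full canvas.
    input_mask: (H, W) bool, True where the original input image lies."""
    out = extrapolated.copy()
    for inst_id in np.unique(panoptic_labels):
        inside = (panoptic_labels == inst_id) & input_mask    # instance pixels in the input
        outside = (panoptic_labels == inst_id) & ~input_mask  # instance pixels in the outpainted region
        if inside.sum() == 0 or outside.sum() == 0:
            continue  # instance does not straddle the boundary
        # Match the outpainted part's per-channel statistics to the input part.
        mu_in, std_in = extrapolated[inside].mean(0), extrapolated[inside].std(0) + 1e-6
        mu_out, std_out = extrapolated[outside].mean(0), extrapolated[outside].std(0) + 1e-6
        out[outside] = (extrapolated[outside] - mu_out) / std_out * std_in + mu_in
    return out
```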
-
Publication Number: US20230262237A1
Publication Date: 2023-08-17
Application Number: US17651076
Filing Date: 2022-02-15
Applicant: ADOBE INC.
Inventor: Subrata Mitra, Aniruddha Mahapatra, Kuldeep Sharad Kulkarni, Abhishek Yadav, Abhijith Kuruba, Manoj Kilaru
IPC: H04N19/176, H04N19/61, H04N19/172
CPC classification number: H04N19/176, H04N19/61, H04N19/172, H04N19/132
Abstract: Systems and methods for image processing are described. The systems and methods include: receiving a plurality of frames of a video at an edge device, where the video depicts an action that spans the plurality of frames; compressing, using an encoder network, each of the plurality of frames to obtain compressed frame features that include fewer data bits than the plurality of frames of the video; classifying, using a classification network, the compressed frame features at the edge device to obtain action classification information corresponding to the action in the video; and transmitting the action classification information from the edge device to a central server.
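A rough sketch of the edge-side pipeline follows: an encoder network compresses each frame into a compact feature vector, and a classification network predicts the action from those features, so only the resulting label needs to leave the device. The architectures, feature size, and class count below are illustrative assumptions, and the transmission step is omitted.

```python
# Hedged sketch of the edge-side pipeline from the abstract. Layer choices and
# dimensions are assumptions for illustration, not the patented networks.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Compresses each frame into a small feature vector (far fewer bits than raw pixels)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )

    def forward(self, frames):            # frames: (T, 3, H, W)
        return self.net(frames)           # -> (T, feat_dim) compressed frame features

class ActionClassifier(nn.Module):
    """Classifies the action spanning the compressed frame features."""
    def __init__(self, feat_dim=64, num_actions=10):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_actions)

    def forward(self, frame_feats):       # (T, feat_dim)
        return self.head(frame_feats.mean(dim=0))  # pool over time -> (num_actions,)

def classify_on_edge(frames, encoder, classifier):
    with torch.no_grad():
        feats = encoder(frames)           # compressed frame features
        logits = classifier(feats)
        return int(logits.argmax())       # only this label leaves the device

# Example: 16 frames of a 128x128 clip.
frames = torch.rand(16, 3, 128, 128)
action_id = classify_on_edge(frames, FrameEncoder(), ActionClassifier())
```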
-
Publication Number: US20240331102A1
Publication Date: 2024-10-03
Application Number: US18737344
Filing Date: 2024-06-07
Applicant: Adobe Inc.
Inventor: Kuldeep Kulkarni, Soumya Dash, Hrituraj Singh, Bholeshwar Khurana, Aniruddha Mahapatra, Abhishek Bhatia
CPC classification number: G06T5/50, G06T7/181, G06V20/70, G06N3/045, G06T7/11, G06V10/26, G06V10/82
Abstract: Certain aspects and features of this disclosure relate to semantically-aware image extrapolation. In one example, an input image is segmented to produce an input segmentation map of object instances in the input image. An object generation network is used to generate an extrapolated semantic label map for an extrapolated image. The extrapolated semantic label map includes instances in the original image and instances that will appear in an outpainted region of the extrapolated image. A panoptic label map is derived from coordinates of output instances in the extrapolated image and used to identify partial instances and boundaries. Instance-aware context normalization is used to apply one or more characteristics from the input image to the outpainted region to maintain semantic continuity. The extrapolated image includes the original image and the outpainted region and can be rendered or stored for future use.
-
Publication Number: US12020403B2
Publication Date: 2024-06-25
Application Number: US17521503
Filing Date: 2021-11-08
Applicant: Adobe Inc.
Inventor: Kuldeep Kulkarni, Soumya Dash, Hrituraj Singh, Bholeshwar Khurana, Aniruddha Mahapatra, Abhishek Bhatia
CPC classification number: G06T5/50, G06T7/181, G06V20/70, G06N3/045, G06T7/11, G06V10/26, G06V10/82
Abstract: Certain aspects and features of this disclosure relate to semantically-aware image extrapolation. In one example, an input image is segmented to produce an input segmentation map of object instances in the input image. An object generation network is used to generate an extrapolated semantic label map for an extrapolated image. The extrapolated semantic label map includes instances in the original image and instances that will appear in an outpainted region of the extrapolated image. A panoptic label map is derived from coordinates of output instances in the extrapolated image and used to identify partial instances and boundaries. Instance-aware context normalization is used to apply one or more characteristics from the input image to the outpainted region to maintain semantic continuity. The extrapolated image includes the original image and the outpainted region and can be rendered or stored for future use.
-
Publication Number: US12282992B2
Publication Date: 2025-04-22
Application Number: US17856362
Filing Date: 2022-07-01
Applicant: ADOBE INC.
Inventor: Kuldeep Kulkarni, Aniruddha Mahapatra
Abstract: Systems and methods for machine learning-based controllable animation of still images are provided. In one embodiment, a still image including a fluid element is obtained. Using a flow refinement machine learning model, a refined dense optical flow is generated for the still image based on a selection mask that includes the fluid element and a dense optical flow generated from a motion hint that indicates a direction of animation. The refined dense optical flow indicates a pattern of apparent motion for the fluid element. Thereafter, a plurality of video frames is generated by projecting a plurality of pixels of the still image using the refined dense optical flow.
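The final step, projecting pixels along the refined dense optical flow to synthesize frames, can be sketched as a simple forward warp. The flow below is a toy stand-in; in the described system it would come from the flow refinement model conditioned on the selection mask and motion hint.

```python
# Simplified sketch of frame generation by projecting the still image's pixels
# along a dense optical flow. The toy flow and mask region are assumptions; they
# stand in for the refined dense optical flow of the abstract.
import numpy as np

def generate_frames(image, flow, num_frames=8):
    """image: (H, W, 3); flow: (H, W, 2) per-frame displacement (dx, dy) in pixels."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    frames = []
    for t in range(1, num_frames + 1):
        # Forward-project each source pixel t steps along the flow (splatting;
        # collisions and holes are ignored in this toy version).
        xt = np.clip(np.round(xs + t * flow[..., 0]), 0, w - 1).astype(int)
        yt = np.clip(np.round(ys + t * flow[..., 1]), 0, h - 1).astype(int)
        frame = np.zeros_like(image)
        frame[yt, xt] = image
        frames.append(frame)
    return frames

# Toy usage: drift a fluid-like band one pixel per frame to the right.
img = np.random.rand(64, 64, 3)
toy_flow = np.zeros((64, 64, 2), dtype=np.float32)
toy_flow[20:40, :, 0] = 1.0   # horizontal motion inside an assumed selection mask
video = generate_frames(img, toy_flow)
```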
-
Publication Number: US20230153533A1
Publication Date: 2023-05-18
Application Number: US17525311
Filing Date: 2021-11-12
Applicant: ADOBE INC.
IPC: G06F40/289, G06F40/211, G06F40/42
CPC classification number: G06F40/289, G06F40/211, G06F40/42, G06K9/6215
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for pre-training entity extraction models to facilitate domain adaptation in resource-constrained domains. In an example embodiment, a first machine learning model is used to encode sentences of a source domain corpus and a target domain corpus into sentence embeddings. The sentence embeddings of the target domain corpus are combined into a target corpus embedding. Training sentences from the source domain corpus within a threshold of similarity to the target corpus embedding are selected. A second machine learning model is trained on the training sentences selected from the source domain corpus.