-
Publication No.: US20210271705A1
Publication Date: 2021-09-02
Application No.: US16804750
Filing Date: 2020-02-28
Applicant: Adobe Inc.
Inventor: Eunyee Koh , Xin Qian , Sungchul Kim , Sana Malik Lee
IPC: G06F16/583 , G06F40/169 , G06N3/08 , G06N3/04 , G06F17/16
Abstract: Techniques for captioning figures include generating caption units for a figure from a finite set of defined caption types. From each caption type, additional input for that caption type, figure image data, and figure metadata, an automated system may generate a respective caption unit, each caption unit comprising a sequence of words. The generated caption for the figure then combines the generated caption units.
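The unit-by-unit composition the abstract describes could be sketched as follows. This is a minimal illustration, not the patented model: the caption types, the per-type generation rules, and the metadata fields (`subject`, `x_axis`, `trend`) are all hypothetical stand-ins for learned components.

```python
# Hypothetical finite set of caption types (the patent leaves these open).
CAPTION_TYPES = ["title", "axis", "trend"]

def generate_caption_unit(caption_type, figure_image, figure_metadata):
    # Stand-in for the automated system: produces one caption unit
    # (a sequence of words) per caption type from image data and metadata.
    if caption_type == "title":
        return ["This", "chart", "shows", figure_metadata["subject"]]
    if caption_type == "axis":
        return ["The", "x-axis", "is", figure_metadata["x_axis"]]
    return ["Values", "trend", figure_metadata["trend"]]

def generate_caption(figure_image, figure_metadata):
    # The final figure caption is a combination of the generated units.
    units = [generate_caption_unit(t, figure_image, figure_metadata)
             for t in CAPTION_TYPES]
    return " ".join(" ".join(u) + "." for u in units)

meta = {"subject": "revenue", "x_axis": "month", "trend": "upward"}
caption = generate_caption(None, meta)
```

The key structural idea is that the caption is assembled from independently generated, typed units rather than decoded as one monolithic sentence.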
-
Publication No.: US10783361B2
Publication Date: 2020-09-22
Application No.: US16723619
Filing Date: 2019-12-20
Applicant: ADOBE INC.
Inventor: Sungchul Kim , Deepali Jain , Deepali Gupta , Eunyee Koh , Branislav Kveton , Nikhil Sheoran , Atanu Sinha , Hung Hai Bui , Charles Li Chen
IPC: G06K9/00 , G06N3/04 , G06N3/08 , G06F16/954 , G06K9/62
Abstract: Systems and methods provide for generating predictive models that are useful in predicting next-user-actions. User-specific navigation sequences are obtained, each representing a temporally-related series of actions performed by a user during navigation sessions. A Recurrent Neural Network (RNN) is applied to each navigation sequence to encode it into a user embedding that reflects time-based, sequential navigation patterns for the user. Once a set of navigation sequences is encoded into a set of user embeddings, a variety of classifiers (prediction models) may be applied to the user embeddings to predict a probable next-user-action and/or the likelihood that the next-user-action will be a desired target action.
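A toy version of this encode-then-classify pipeline is sketched below. The fixed hand-picked weights stand in for trained RNN and classifier parameters, and the action IDs are arbitrary; the point is only the shape of the computation: sequence in, embedding out, probability from a classifier head on the embedding.

```python
import math

def rnn_encode(sequence, dim=4):
    # Minimal recurrent encoder: the final hidden state serves as the
    # user embedding summarizing the time-ordered action sequence.
    h = [0.0] * dim
    for action_id in sequence:
        h = [math.tanh(0.5 * h[i] + 0.1 * action_id + 0.01 * i)
             for i in range(dim)]
    return h

def predict_target_action(embedding, weights, bias=0.0):
    # A simple logistic classifier applied to the user embedding, giving
    # the likelihood that the next action is the desired target action.
    z = sum(w * e for w, e in zip(weights, embedding)) + bias
    return 1.0 / (1.0 + math.exp(-z))

emb = rnn_encode([3, 1, 4, 1, 5])          # one user's navigation session
prob = predict_target_action(emb, [0.2, -0.1, 0.3, 0.05])
```

Because the embedding is computed once per user, different downstream classifiers can be swapped in without re-encoding the navigation history.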
-
Publication No.: US20200285951A1
Publication Date: 2020-09-10
Application No.: US16296076
Filing Date: 2019-03-07
Applicant: ADOBE INC.
Inventor: Sungchul Kim , Scott Cohen , Ryan A. Rossi , Charles Li Chen , Eunyee Koh
Abstract: Embodiments of the present invention are generally directed to generating figure captions for electronic figures, generating a training dataset to train a set of neural networks for generating figure captions, and training a set of neural networks employable to generate figure captions. A set of neural networks is trained with a training dataset having electronic figures and corresponding captions. Sequence-level training with reinforcement learning techniques is employed to train the set of neural networks, which is configured as an encoder-decoder with attention. Provided with an electronic figure, the set of neural networks can encode the electronic figure based on various aspects detected from it, resulting in the generation of associated label map(s), feature map(s), and relation map(s). The trained set of neural networks employs a set of attention mechanisms that facilitate the generation of accurate and meaningful figure captions corresponding to visible aspects of the electronic figure.
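The attention mechanism at the heart of such an encoder-decoder can be illustrated in a few lines. The encoder states below are hypothetical 2-dimensional vectors (stand-ins for entries of the label, feature, and relation maps), and this is plain dot-product attention, not the specific attention variants the patent may claim.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, encoder_states):
    # Dot-product attention: score each encoded figure feature against the
    # decoder's current query, normalize the scores into weights, and
    # return the weighted context vector used to emit the next word.
    scores = [sum(q * s for q, s in zip(query, state))
              for state in encoder_states]
    weights = softmax(scores)
    dim = len(encoder_states[0])
    context = [sum(w * state[i] for w, state in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights

context, weights = attend([1.0, 0.0],
                          [[0.9, 0.1], [0.0, 1.0], [0.5, 0.5]])
```

The attention weights make the captioner's focus inspectable: at each decoding step one can see which encoded aspect of the figure most influenced the emitted word.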
-
Publication No.: US10514842B2
Publication Date: 2019-12-24
Application No.: US16169877
Filing Date: 2018-10-24
Applicant: Adobe Inc.
Inventor: Byungmoon Kim , Jihyun Lee , Eunyee Koh
IPC: G09G5/00 , G06F3/0488 , G02B27/01 , G06F3/01 , G06F3/023
Abstract: Systems and methods for detecting a user interaction by identifying a touch gesture on a touch interface on a virtual reality headset. The touch gestures are received on a front surface that is on the opposite side of the headset's inner display screen so that correspondence between the touch location and displayed content is intuitive to the user. The techniques of the invention display a cursor and enable the user to move the cursor by one type of input and make selections with the cursor using a second type of input. In this way, the user is able to intuitively control a displayed cursor by moving a finger around (e.g., dragging) on the opposite side of the display in the cursor's approximate location. The user then uses another type of touch input to make a selection at the cursor's current location.
-
Publication No.: US20190272559A1
Publication Date: 2019-09-05
Application No.: US15910926
Filing Date: 2018-03-02
Applicant: Adobe Inc.
Inventor: Tak Yeon Lee , Eunyee Koh
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for determining and resolving semantic misalignments between digital messages containing links and corresponding external digital content. For example, in one or more embodiments, the disclosed systems compare semantic message features from the digital message with semantic external digital content features from the external digital content. More specifically, in at least one embodiment, the disclosed systems compare semantic message feature vectors and semantic external digital content feature vectors to determine a relevance score for the digital message and identify semantic misalignments. Additionally, in one or more embodiments, the disclosed systems provide for display a user interface that presents a plurality of digital messages, the linked external digital content, and identified semantic misalignments.
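The feature-vector comparison behind the relevance score can be sketched with cosine similarity. The threshold value and the 3-dimensional toy vectors are hypothetical; the patent does not fix a particular similarity measure or cutoff.

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def relevance_score(message_features, content_features, threshold=0.5):
    # Compare the semantic feature vector of the digital message with that
    # of the linked external content; a low score flags a misalignment.
    score = cosine_similarity(message_features, content_features)
    return score, score < threshold

aligned_score, aligned_flag = relevance_score([1.0, 0.2, 0.0],
                                              [0.9, 0.3, 0.1])
misaligned_score, misaligned_flag = relevance_score([1.0, 0.0, 0.0],
                                                    [0.0, 1.0, 0.0])
```

Messages whose flag is set would then be surfaced in the review interface alongside their linked content.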
-
Publication No.: US12125148B2
Publication Date: 2024-10-22
Application No.: US17664972
Filing Date: 2022-05-25
Applicant: ADOBE INC.
Inventor: Chang Xiao , Ryan A. Rossi , Eunyee Koh
CPC classification number: G06T19/006 , G06T7/73 , G06T7/97 , G06T2207/20224 , G06T2207/30204
Abstract: A system and methods for providing human-invisible AR markers are described. One aspect of the system and methods includes identifying AR metadata associated with an object in an image; generating AR marker image data based on the AR metadata; generating a first variant of the image by adding the AR marker image data to the image; generating a second variant of the image by subtracting the AR marker image data from the image; and displaying the first variant and the second variant of the image alternately at a display frequency to produce a display of the image, wherein the AR marker image data is invisible to a human vision system in the display of the image.
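The arithmetic behind the invisibility trick is easy to verify: alternating `image + marker` and `image - marker` averages to the original image for a human viewer, while a camera sampling individual frames can recover the marker as half the difference of the two variants. The pixel values below are hypothetical 1-D grayscale samples.

```python
def add_marker(image, marker):
    return [p + m for p, m in zip(image, marker)]

def subtract_marker(image, marker):
    return [p - m for p, m in zip(image, marker)]

image = [120, 130, 140, 150]   # toy grayscale pixels
marker = [3, -3, 3, -3]        # AR marker image data from the AR metadata

variant_a = add_marker(image, marker)       # first variant: image + marker
variant_b = subtract_marker(image, marker)  # second variant: image - marker

# Alternated at a high display frequency, the variants blend for human
# vision into their temporal average (the original image), while single
# captured frames still encode the marker.
perceived = [(a + b) / 2 for a, b in zip(variant_a, variant_b)]
recovered = [(a - b) / 2 for a, b in zip(variant_a, variant_b)]
```

In practice the display frequency must exceed the flicker-fusion rate of human vision for the averaging to hold, which the patent's alternating-display step relies on.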
-
Publication No.: US20240320421A1
Publication Date: 2024-09-26
Application No.: US18338033
Filing Date: 2023-06-20
Applicant: Adobe Inc.
Inventor: Victor Soares Bursztyn , Wei Zhang , Prithvi Bhutani , Eunyee Koh , Abhisek Trivedi
IPC: G06F40/186
CPC classification number: G06F40/186
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating naturally phrased insights about data charts using light language models distilled from large language models. To synthesize training data for the light language model, in some embodiments, the disclosed systems leverage insight templates for prompting a large language model for generating naturally phrased insights. In some embodiments, the disclosed systems anonymize and augment the synthesized training data to improve the accuracy and robustness of model predictions. For example, the disclosed systems anonymize training data by injecting noise into data charts before prompting the large language model for generating naturally phrased insights from insight templates. In some embodiments, the disclosed systems further augment the (anonymized) training data by splitting or partitioning data charts into folds that act as individual data charts.
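The two data-preparation steps named in the abstract, noise injection for anonymization and splitting charts into folds for augmentation, could look like the sketch below. The noise scale, fold count, and chart values are hypothetical; the patent does not prescribe these parameters.

```python
import random

def anonymize_chart(values, noise_scale=0.05, seed=0):
    # Inject small multiplicative noise into chart values before prompting
    # the large language model, so synthesized training insights do not
    # leak the original data.
    rng = random.Random(seed)
    return [v * (1.0 + rng.uniform(-noise_scale, noise_scale))
            for v in values]

def split_into_folds(values, n_folds):
    # Augment training data by partitioning one chart into folds that
    # each act as an individual, smaller data chart.
    size = max(1, len(values) // n_folds)
    return [values[i:i + size] for i in range(0, len(values), size)]

chart = [10.0, 12.0, 9.0, 14.0, 11.0, 13.0]
noisy = anonymize_chart(chart)
folds = split_into_folds(chart, 3)
```

Each fold (and its noisy variant) can then be paired with a template-prompted large-language-model insight to form one training example for the light model.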
-
Publication No.: US20240311406A1
Publication Date: 2024-09-19
Application No.: US18482754
Filing Date: 2023-10-06
Applicant: ADOBE INC.
Inventor: Arpit Narechania , Fan Du , Atanu Sinha , Nedim Lipka , Alexa F. Siu , Jane Elizabeth Hoffswell , Eunyee Koh , Vasanthi Holtcamp
IPC: G06F16/332 , G06F16/338 , G06F40/205
CPC classification number: G06F16/3329 , G06F16/338 , G06F40/205
Abstract: Aspects of a method, apparatus, non-transitory computer readable medium, and system include obtaining a document and a query. A plurality of data elements are identified from the document by locating a plurality of corresponding flexible anchor elements. The data elements are then extracted based on the plurality of flexible anchor elements. Finally, content that includes an analysis of the extracted data elements based on the query is generated.
-
Publication No.: US11922691B2
Publication Date: 2024-03-05
Application No.: US17724686
Filing Date: 2022-04-20
Applicant: Adobe Inc.
Inventor: Shunan Guo , Ryan A. Rossi , Jane Elizabeth Hoffswell , Fan Du , Eunyee Koh , Bingjie Xu
CPC classification number: G06V20/20 , G06T7/70 , G06T19/006 , H04N23/631 , G06T2207/30204
Abstract: In implementations of augmented reality systems for comparing physical objects, a computing device implements a comparison system to detect physical objects and physical markers depicted in frames of a digital video captured using an image capture device and displayed in a user interface. The comparison system associates a physical object of the physical objects with a physical marker of the physical markers based on an association distance estimated using two-dimensional coordinates of the user interface corresponding to a center of the physical object and a distance from the image capture device to the physical marker. Based on an identifier of the physical marker, characteristics of the physical object that are not displayed in the user interface are determined. The comparison system generates a virtual object for display in the user interface that includes indications of a subset of the characteristics of the physical object.
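A reduced form of the association step pairs each detected object with its nearest marker by a 2-D distance in user-interface coordinates. This sketch omits the depth term (the estimated distance from the image capture device to the marker) that the patented association distance also incorporates; the object and marker coordinates are hypothetical.

```python
import math

def associate(objects, markers):
    # Associate each detected physical object with its nearest physical
    # marker using a 2-D association distance in UI coordinates.
    pairs = {}
    for name, (ox, oy) in objects.items():
        nearest = min(markers,
                      key=lambda m: math.hypot(ox - markers[m][0],
                                               oy - markers[m][1]))
        pairs[name] = nearest
    return pairs

objects = {"chair": (100, 200), "lamp": (400, 120)}   # detected centers
markers = {"marker_1": (110, 190), "marker_2": (390, 130)}
pairs = associate(objects, markers)
```

Once paired, the marker's identifier keys a lookup of the object's undisplayed characteristics, which are then rendered as a virtual overlay.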
-
Publication No.: US11829705B1
Publication Date: 2023-11-28
Application No.: US17949903
Filing Date: 2022-09-21
Applicant: ADOBE INC.
Inventor: Md Main Uddin Rony , Fan Du , Iftikhar Ahamath Burhanuddin , Ryan Rossi , Niyati Himanshu Chhaya , Eunyee Koh
IPC: G06F17/00 , G06F40/106 , G06F40/40
CPC classification number: G06F40/106 , G06F40/40
Abstract: Methods, computer systems, computer-storage media, and graphical user interfaces are provided for facilitating generation and presentation of insights. In one implementation, a set of data is used to generate a data visualization. A candidate insight associated with the data visualization is generated, the candidate insight being generated in text form based on a text template and comprising a descriptive insight, a predictive insight, an investigative insight, or a prescriptive insight. A set of natural language insights is generated via a machine learning model. The natural language insights represent the candidate insight in a text style that is different from the text template. A natural language insight whose text style corresponds with a desired text style is selected for presenting the candidate insight and, thereafter, the selected natural language insight and data visualization are provided for display via a graphical user interface.
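The template-then-restyle flow can be sketched as two stages: filling a text template to produce a candidate insight, then rewriting it in a different style. The template wording, the `casual` style, and the string-rewrite rule below are hypothetical stand-ins for the machine-learning restyling model.

```python
def candidate_insight(template, metric, change, period):
    # Fill a text template to produce a candidate insight (here, a
    # descriptive insight about the data visualization).
    return template.format(metric=metric, change=change, period=period)

def restyle(insight, style):
    # Stand-in for the ML model that renders the templated insight in a
    # different text style; a real system would learn these rewrites.
    if style == "casual":
        return insight.replace("increased by", "jumped") + " Nice!"
    return insight

template = "{metric} increased by {change}% over the last {period}."
base = candidate_insight(template, "Revenue", 12, "quarter")
casual = restyle(base, "casual")
```

Generating several styled variants and selecting the one matching a desired style is what lets the same templated fact be presented in different registers.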