Abstract:
Content creation collection and navigation techniques and systems are described. In one example, a representative image is used by a content sharing service to interact with a collection of images provided as part of a search result. In another example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics. In a further example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics identified for an object selected from the image. In yet another example, collections of images are leveraged as part of content creation. In another example, data obtained from a content sharing service is leveraged to indicate suitability of images of a user for licensing as part of the service.
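The abstract above describes these navigation and collection controls only at a concept level. Purely as an illustration, the following minimal Python sketch shows how a collection of images might be ordered by a user-selected metric for such a navigation control; the Image class, the metric names, and order_for_navigation are hypothetical and not taken from the patent.

    # Hypothetical sketch: ordering images in a navigation control by a metric.
    # Names and metrics are illustrative assumptions, not from the patent.
    from dataclasses import dataclass, field

    @dataclass
    class Image:
        image_id: str
        metrics: dict = field(default_factory=dict)  # e.g. {"views": 120, "licenses": 3}

    def order_for_navigation(images, metric="views", descending=True):
        """Order images by the chosen metric for a navigation control."""
        return sorted(images, key=lambda img: img.metrics.get(metric, 0), reverse=descending)

    collection = [
        Image("beach_01", {"views": 120, "licenses": 3}),
        Image("beach_02", {"views": 45, "licenses": 9}),
    ]
    for img in order_for_navigation(collection, metric="licenses"):
        print(img.image_id)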
Abstract:
User data and a plurality of micro-segment definitions are received. Each micro-segment definition in the plurality of micro-segment definitions corresponds to one or more offers in an offer provider campaign. Further, each micro-segment definition from the plurality of micro-segment definitions is parsed into a plurality of parsed expression segments that indicate a plurality of micro-segment condition rules. The plurality of parsed expression segments are compiled into an executable object that indicates a plurality of instructions to determine if the user data matches the plurality of micro-segment definitions. Each micro-segment definition is processed to apply the plurality of micro-segment condition rules to the user data to determine whether the user belongs to a micro-segment. Further, a score is assigned to indicate the strength of each match. In addition, the matches are ranked according to their scores.
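As a rough illustration of the described flow (parse each definition into condition rules, compile the rules into an executable object, evaluate them against user data, score the matches, and rank them), the following Python sketch uses an assumed rule syntax such as "age >= 30 AND country == US"; the syntax, the scoring heuristic, and all names are hypothetical rather than taken from the patent.

    # Hypothetical sketch of micro-segment matching: parse, compile, evaluate,
    # score, and rank. Rule syntax and scoring are illustrative assumptions.
    import operator

    OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq,
           ">=": operator.ge, "<=": operator.le}

    def coerce(value):
        """Treat numeric strings as numbers so comparisons behave as expected."""
        try:
            return float(value)
        except (TypeError, ValueError):
            return value

    def parse(definition):
        """Parse a definition like 'age >= 30 AND country == US' into rule tuples."""
        return [tuple(clause.split()) for clause in definition.split(" AND ")]

    def compile_rules(rules):
        """Compile parsed condition rules into one executable object (a callable)."""
        def executable(user):
            return all(OPS[op](coerce(user.get(field)), coerce(value))
                       for field, op, value in rules)
        return executable

    def match_and_rank(user, definitions):
        """Apply each compiled definition to the user data, score matches, rank by score."""
        matches = []
        for segment, definition in definitions.items():
            rules = parse(definition)
            if compile_rules(rules)(user):
                matches.append((segment, len(rules)))  # crude score: number of rules satisfied
        return sorted(matches, key=lambda m: m[1], reverse=True)

    user = {"age": "35", "country": "US"}
    definitions = {"us_visitors": "country == US",
                   "us_adults_over_30": "age >= 30 AND country == US"}
    print(match_and_rank(user, definitions))  # [('us_adults_over_30', 2), ('us_visitors', 1)]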
Abstract:
A method for providing a response to an input post on a social page of a brand is provided. The input post is detected upon posting of the input post on the social page of the brand. The social page is present on a social channel. An inquiry regarding the brand is identified from content of the input post. At least one social post is determined from already posted posts on one or more social channels based on the inquiry. The at least one social post is associated with the brand. A response post is created using the at least one social post. The response post addresses the inquiry. The response post is then posted on the social page of the social channel as a reply to the input post. An apparatus for performing the method as described herein is also provided.
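The following minimal sketch illustrates one possible reading of this flow: detect an inquiry in a new post, pick the already-posted brand post that best matches it, and compose a reply from that post. The keyword-overlap matching and all names are illustrative assumptions, not the patented technique.

    # Hypothetical sketch: answer an inquiry on a brand's social page by reusing
    # the most relevant previously posted content. Matching is a naive word overlap.
    import re

    def tokens(text):
        return set(re.findall(r"\w+", text.lower()))

    def best_matching_post(inquiry, past_posts):
        """Pick the already-posted brand post sharing the most words with the inquiry."""
        return max(past_posts, key=lambda post: len(tokens(post) & tokens(inquiry)))

    def build_reply(input_post, past_posts):
        """Compose a response post that addresses the inquiry using the best match."""
        source = best_matching_post(input_post, past_posts)
        return f"Thanks for asking! This may help: {source}"

    past_posts = [
        "Our store hours are 9am to 6pm on weekdays.",
        "The spring sale starts next Monday across all stores.",
    ]
    print(build_reply("What are your store hours?", past_posts))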
Abstract:
The present invention is directed towards providing automated workflows for the identification of a reading order from text segments extracted from a document. Ordering the text segments is based on trained natural language models. In some embodiments, the workflows are enabled to perform a method for identifying a sequence associated with a portable document. The method includes iteratively generating a probabilistic language model, receiving the portable document, and selectively extracting features (such as but not limited to text segments) from the document. The method may generate pairs of features (feature pairs) from the extracted features. The method may further generate a score for each of the pairs based on the probabilistic language model and determine an order of the features based on the scores. The method may provide the extracted features in the determined order.
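As an illustration of scoring feature pairs with a probabilistic language model and choosing an order from those scores, the sketch below fits a toy bigram model, scores how likely each text segment is to follow another, and picks the highest-scoring permutation. The toy corpus, the add-one smoothing, and the brute-force permutation search are assumptions made only for this sketch.

    # Hypothetical sketch: order text segments using a toy bigram language model.
    from collections import Counter
    from itertools import permutations

    corpus = "the quick brown fox jumps over the lazy dog".split()
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)

    def pair_score(left, right):
        """Probability-like score that segment `right` directly follows `left`."""
        last, first = left.split()[-1], right.split()[0]
        return (bigrams[(last, first)] + 1) / (unigrams[last] + len(unigrams))  # add-one smoothing

    def best_order(segments):
        """Return the permutation of segments whose adjacent pairs score highest."""
        def order_score(order):
            return sum(pair_score(a, b) for a, b in zip(order, order[1:]))
        return max(permutations(segments), key=order_score)

    segments = ["the quick brown", "fox jumps over", "the lazy dog"]
    print(best_order(segments))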
Abstract:
A digital medium environment is described to determine textual content that is responsible for causing a viewing spike within a video. Video analytics data associated with a video is queried. The video analytics data identifies a number of previous user viewings at various locations within the video. A viewing spike within the video is detected using the video analytics data. The viewing spike corresponds to an increase in the number of previous user viewings of the video that begins at a particular location within the video. Then, text of one or more video sources or video referral sources read by users prior to viewing the video from the particular location within the video is analyzed to identify textual content that is at least partially responsible for causing the viewing spike.
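A minimal sketch of the spike-detection step, under the assumption that the analytics data reduces to viewing counts at successive locations in the video: flag any location where the count jumps by more than a chosen factor relative to the preceding location. The data layout, the threshold, and the function name are illustrative, not the patent's method.

    # Hypothetical sketch of detecting a viewing spike from per-location view counts.
    def detect_spikes(views_by_location, factor=2.0):
        """Return (location, views) pairs where views rise sharply vs. the prior location."""
        spikes = []
        for loc in range(1, len(views_by_location)):
            prev, cur = views_by_location[loc - 1], views_by_location[loc]
            if prev > 0 and cur / prev >= factor:
                spikes.append((loc, cur))
        return spikes

    # Views at 10-second intervals: a spike begins at location 4 (40 seconds in).
    views = [120, 110, 115, 118, 390, 385, 200]
    print(detect_spikes(views))  # [(4, 390)]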
Abstract:
Techniques and systems are described to model and extract knowledge from images. A digital medium environment is configured to learn and use a model to compute a descriptive summarization of an input image automatically and without user intervention. Training data is obtained to train a model using machine learning in order to generate a structured image representation that serves as the descriptive summarization of an input image. The images and associated text are processed to extract structured semantic knowledge from the text, which is then associated with the images. The structured semantic knowledge is processed along with corresponding images to train a model using machine learning such that the model describes a relationship between text features within the structured semantic knowledge and image features of the images. Once the model is learned, the model is usable to process input images to generate a structured image representation of the input image.
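As an illustration of one stage of this pipeline, the sketch below extracts naive <subject, predicate, object> triples from caption text and pairs them with their images, which is roughly the form of structured semantic knowledge the abstract describes; the extraction rule and the data are placeholder assumptions, and the machine-learning step is omitted.

    # Hypothetical sketch: build (image, triple) training pairs from captions.
    def extract_triple(caption):
        """Very rough <subject, predicate, object> split of a short caption."""
        words = caption.lower().rstrip(".").split()
        return (words[0], words[1], " ".join(words[2:]))

    training_data = [
        ("img_001.jpg", "Dog chases a red ball."),
        ("img_002.jpg", "Woman rides a bicycle."),
    ]

    structured = [(image, extract_triple(caption)) for image, caption in training_data]
    for image, triple in structured:
        print(image, triple)
    # These (image, triple) pairs would then feed a machine-learned model that
    # relates text features to image features; that training step is omitted here.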
Abstract:
Natural language system question classifier, semantic representations, and logical form template techniques and systems are described. In one or more implementations, a natural language input is classified as corresponding to respective ones of a plurality of classes of questions. A semantic intent of the natural language input is extracted as a semantic entity and a semantic representation. Question classification labels that classify the question included in the natural language input are then used to select at least one of a plurality of logical form templates. The semantic intent that is extracted from the natural language input is then used to fill in the selected logical form templates, such as to fill in entity, subject, predicate, and object slots using the semantic entity and semantic representation. The filled-in logical form template is then mapped to form a database query that is then executed against a database to answer the question.
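The sketch below walks through the described sequence in a deliberately crude way: classify the question, select a logical form template for that class, fill its slots, and map the filled template to a database query. The question classes, templates, slot-extraction rule, and table schema are all assumptions made for the example, not the patent's implementation.

    # Hypothetical sketch: classify a question, fill a logical form template,
    # and map the filled template to a query over an assumed facts table.
    import re

    TEMPLATES = {
        "count": "count(subject={subject}, predicate={predicate})",
        "lookup": "value(subject={subject}, predicate={predicate})",
    }

    def classify(question):
        """Assign one of the assumed question classes."""
        return "count" if re.match(r"how many", question, re.IGNORECASE) else "lookup"

    def extract_slots(question):
        """Crude slot filling: third word as predicate, last word as subject.
        A real system would use the extracted semantic representation instead."""
        words = re.findall(r"\w+", question.lower())
        return {"predicate": words[2], "subject": words[-1]}

    def to_sql(logical_form, slots):
        """Map the filled logical form to a query over facts(subject, predicate, object)."""
        column = "COUNT(*)" if logical_form.startswith("count") else "object"
        return (f"SELECT {column} FROM facts "
                f"WHERE subject='{slots['subject']}' AND predicate='{slots['predicate']}'")

    question = "How many licenses for alice?"
    slots = extract_slots(question)
    filled_template = TEMPLATES[classify(question)].format(**slots)
    print(filled_template)            # count(subject=alice, predicate=licenses)
    print(to_sql(filled_template, slots))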
Abstract:
Image search techniques and systems involving emotions are described. In one or more implementations, a digital medium environment of a content sharing service is described for image search result configuration and control based on a search request that indicates an emotion. The search request is received that includes one or more keywords and specifies an emotion. Images that are available for licensing are located by matching one or more tags associated with the images with the one or more keywords and by determining that the images correspond to the emotion. The emotion of the images is identified using one or more models that are trained using machine learning based at least in part on training images having tagged emotions. Output is controlled of a search result having one or more representations of the images that are selectable to license respective images from the content sharing service.
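A minimal sketch of the search step, assuming each image already carries tags and an emotion label predicted by a trained model: keep only the images whose tags intersect the query keywords and whose predicted emotion equals the requested emotion. The StockImage class and the stub emotion labels are hypothetical stand-ins for the service's data and its machine-learned emotion models.

    # Hypothetical sketch of emotion-aware image search over a small catalog.
    from dataclasses import dataclass

    @dataclass
    class StockImage:
        image_id: str
        tags: set
        predicted_emotion: str  # would come from a trained emotion model

    def search(images, keywords, emotion):
        """Return licensable images matching the keywords and the requested emotion."""
        keywords = {k.lower() for k in keywords}
        return [
            img for img in images
            if keywords & img.tags and img.predicted_emotion == emotion
        ]

    catalog = [
        StockImage("img_101", {"beach", "family"}, "joy"),
        StockImage("img_102", {"beach", "storm"}, "fear"),
    ]
    print(search(catalog, ["Beach"], "joy"))  # only img_101 matches both criteria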