Abstract:
A method and system for auto-curating media are provided. Media content is received over a network interface. A set of markers is identified for the media content, each marker corresponding to one of a plurality of visible and audible cues in the media content. Segments in the media content are identified based on the identified set of markers. An excitement score is computed for each segment based on the identified markers that fall within the segment. A highlight clip is generated by identifying segments having excitement scores greater than a threshold.
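To make the marker-and-scoring flow above concrete, the Python sketch below scores segments against a small set of cue markers and keeps those above a threshold. The marker types, cue weights, fixed segment length, and additive scoring rule are illustrative assumptions; the abstract derives segments from the markers themselves and does not specify a scoring model.

    from dataclasses import dataclass

    @dataclass
    class Marker:
        time: float        # seconds into the media
        kind: str          # e.g. "crowd_cheer", "commentator_excitement", "action_shot"

    # Assumed per-cue weights; the actual excitement model is not given in the abstract.
    CUE_WEIGHTS = {"crowd_cheer": 1.0, "commentator_excitement": 0.8, "action_shot": 0.5}

    def segment_media(duration, segment_len=10.0):
        """Split the timeline into fixed-length segments (an assumption made here)."""
        t, segments = 0.0, []
        while t < duration:
            segments.append((t, min(t + segment_len, duration)))
            t += segment_len
        return segments

    def excitement_score(segment, markers):
        start, end = segment
        return sum(CUE_WEIGHTS.get(m.kind, 0.0) for m in markers if start <= m.time < end)

    def highlight_clip(duration, markers, threshold=1.0):
        """Keep the segments whose excitement score exceeds the threshold."""
        return [seg for seg in segment_media(duration)
                if excitement_score(seg, markers) > threshold]

    markers = [Marker(12.0, "crowd_cheer"), Marker(14.5, "action_shot"),
               Marker(47.0, "commentator_excitement")]
    print(highlight_clip(60.0, markers))   # [(10.0, 20.0)]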
Abstract:
Generating a textual description of an image includes classifying an image represented by image data into a domain-specific category, and segmenting one or more elements in the image data based on the domain-specific category. Each element of the one or more elements is compared to a domain-independent model to detect one or more statistical anomalies in the one or more elements. The one or more detected statistical anomalies are characterized using one or more domain-independent text phrases. The one or more domain-independent text phrases are converted to one or more domain-specific descriptions based upon the domain-specific category.
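As a rough illustration of the pipeline above (classify into a domain-specific category, segment elements, detect domain-independent anomalies, translate to domain-specific phrasing), the Python sketch below wires the stages together with stand-in logic. The domain names, the two-standard-deviation anomaly test, and the phrase templates are invented placeholders, not the method's actual models.

    def classify_domain(image_features):
        # Stand-in classifier: pick a domain based on a keyword in the features.
        return "chest_xray" if "rib" in image_features else "generic_photo"

    def segment_elements(image_features):
        # Stand-in segmentation: each feature token becomes one "element".
        return [{"name": f, "measurement": float(i)} for i, f in enumerate(image_features)]

    def detect_anomalies(elements, reference_mean=1.0, reference_std=0.4):
        # Domain-independent check: flag elements deviating by more than two
        # standard deviations from an assumed reference model.
        return [e for e in elements if abs(e["measurement"] - reference_mean) > 2 * reference_std]

    def describe(anomalies, domain):
        # Domain-independent phrases first, then domain-specific wording.
        generic = [f"{a['name']} shows an unusual measurement" for a in anomalies]
        if domain == "chest_xray":
            return [g.replace("an unusual measurement", "a radiographic abnormality") for g in generic]
        return generic

    features = ["rib", "lung_field", "opacity"]
    domain = classify_domain(features)
    anomalies = detect_anomalies(segment_elements(features))
    print(describe(anomalies, domain))
    # ['rib shows a radiographic abnormality', 'opacity shows a radiographic abnormality']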
Abstract:
A compression system for compressing a video received from an imaging device is provided. The compression system includes a transformation estimation device configured to estimate a transformation matrix based on pixel transformations between a first frame and a second frame of the video, an encoding device configured to encode the second frame as an increment relative to the first frame based on the transformation matrix generated by the transformation estimation device, a compression device configured to compress the increment into compressed data, and a reconstruction device configured to reconstruct the first frame and the second frame using the transformation matrix generated by the transformation estimation device.
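The numpy sketch below illustrates the transform-plus-increment idea in miniature: estimate a motion model between two frames, encode the second frame as a residual against the warped first frame, and reconstruct it. A brute-force integer translation stands in for the general transformation matrix, and no actual entropy coding of the residual is shown; both simplifications are assumptions made for the example.

    import numpy as np

    def estimate_translation(f1, f2, max_shift=3):
        """Brute-force search for the integer (dy, dx) that best aligns f1 to f2."""
        best, best_err = (0, 0), np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                err = np.mean((np.roll(f1, (dy, dx), axis=(0, 1)) - f2) ** 2)
                if err < best_err:
                    best, best_err = (dy, dx), err
        return best

    def encode_increment(f1, f2, shift):
        # Residual between the second frame and the motion-compensated first frame.
        return f2 - np.roll(f1, shift, axis=(0, 1))

    def reconstruct(f1, increment, shift):
        return np.roll(f1, shift, axis=(0, 1)) + increment

    rng = np.random.default_rng(0)
    frame1 = rng.random((32, 32))
    frame2 = np.roll(frame1, (1, 2), axis=(0, 1))        # synthetic camera motion
    shift = estimate_translation(frame1, frame2)
    residual = encode_increment(frame1, frame2, shift)
    print(shift, np.allclose(reconstruct(frame1, residual, shift), frame2))  # (1, 2) True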
Abstract:
A method and a system for stitching a plurality of image views of a scene are provided. The method includes extracting points of interest in each view to form a point set for each of the plurality of image views of the scene, matching the points of interest and reducing outliers, grouping the matched points of interest into a plurality of groups, determining a similarity transformation with a smallest rotation angle for each group of the matched points, generating virtual matching points on a non-overlapping area of the plurality of image views, generating virtual matching points on an overlapping area for each of the plurality of image views, and calculating piecewise projective transformations for the plurality of image views.
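The numpy sketch below illustrates one building block of the pipeline above: estimating a projective transformation (homography) from matched point pairs with the direct linear transform. Point matching, outlier reduction, grouping, the smallest-rotation similarity transformation, and the virtual matching points are all omitted, and the synthetic correspondences are invented for the example.

    import numpy as np

    def fit_homography(src, dst):
        """Estimate H with dst ~ H @ src (homogeneous) from >= 4 matched point pairs."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        h = vt[-1]                      # null-space vector = flattened homography
        return (h / h[-1]).reshape(3, 3)

    def apply_homography(H, pts):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        mapped = pts_h @ H.T
        return mapped[:, :2] / mapped[:, 2:3]

    # Synthetic matches: four corners of a square and their projectively warped images.
    src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    true_H = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, 3.0], [0.001, 0.002, 1.0]])
    dst = apply_homography(true_H, src)
    H = fit_homography(src, dst)
    print(np.allclose(apply_homography(H, src), dst))   # True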
Abstract:
Methods, systems, and computer program products for static image segmentation are provided herein. A method includes segmenting a static image containing a target object into multiple regions based on one or more visual features of the static image; analyzing video content containing the target object to determine a similarity metric across the multiple segmented regions based on motion information associated with each of the multiple segmented regions; and applying the similarity metric to the static image to identify two or more of the multiple segmented regions as being portions of the target object.
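The toy Python sketch below shows the merging step in spirit: regions from a static segmentation are treated as parts of one object when their motion, measured from accompanying video, is sufficiently similar. The per-region motion vectors, the cosine similarity metric, and the threshold are assumptions made for illustration rather than the claimed similarity metric.

    import math

    def motion_similarity(v1, v2):
        """Cosine similarity between two 2-D motion vectors."""
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        return dot / norm if norm else 0.0

    def merge_regions(region_motion, threshold=0.95):
        """Group region ids whose pairwise motion similarity exceeds the threshold."""
        groups = []
        for rid, vec in region_motion.items():
            for group in groups:
                if all(motion_similarity(vec, region_motion[other]) > threshold
                       for other in group):
                    group.append(rid)
                    break
            else:
                groups.append([rid])
        return groups

    # Regions 1 and 2 move together (same object); region 3 behaves like background.
    region_motion = {1: (2.0, 0.1), 2: (1.9, 0.0), 3: (-0.1, 0.0)}
    print(merge_regions(region_motion))   # [[1, 2], [3]]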
Abstract:
Methods for image segmentation are provided herein. A method includes creating an anatomical model from training data comprising one or more imaging modalities, generating one or more simulated images in a target modality based on the anatomical model and one or more principles of physics pertaining to image contrast generation, and comparing the one or more simulated images to an unlabeled input image of a given imaging modality to determine which simulated image, among the one or more simulated images, represents the unlabeled input image.
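The numpy sketch below is a highly simplified picture of the final matching step: candidate images are simulated from an anatomical label map using an assumed per-tissue contrast model, and the simulation closest to the unlabeled input (by mean squared error here) is selected. The label map, contrast values, and error metric are invented for illustration and stand in for the physics-based contrast simulation.

    import numpy as np

    def simulate(label_map, contrast):
        """Render a label map into image intensities using per-tissue contrast values."""
        return np.vectorize(contrast.get)(label_map).astype(float)

    def best_match(candidates, unlabeled):
        """Pick the simulated image with the smallest mean squared error to the input."""
        errors = {name: np.mean((img - unlabeled) ** 2) for name, img in candidates.items()}
        return min(errors, key=errors.get)

    # Toy anatomical model: 0 = background, 1 = soft tissue, 2 = bone.
    label_map = np.array([[0, 1, 1], [0, 2, 1], [0, 2, 2]])
    candidates = {
        "T1_like": simulate(label_map, {0: 0.0, 1: 0.8, 2: 0.3}),
        "CT_like": simulate(label_map, {0: 0.0, 1: 0.2, 2: 1.0}),
    }
    unlabeled = simulate(label_map, {0: 0.05, 1: 0.75, 2: 0.35})   # a "T1-like" scan
    print(best_match(candidates, unlabeled))   # T1_like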
Abstract:
Techniques for detecting an event via social media content are provided. A method includes obtaining multiple images from at least one social media source, extracting at least one visual semantic concept from the multiple images, differentiating an event semantic concept signal from a background semantic concept signal to detect an event in the multiple images, and retrieving one or more images associated with the event semantic concept signal for presentation as a visual description of the detected event.
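To illustrate separating an event semantic-concept signal from the background signal, the Python sketch below replaces concept extraction with pre-computed per-day concept counts and flags concepts whose frequency spikes well above their historical rate. The spike factor, the example concepts, and the use of daily counts are assumptions, not details taken from the described technique.

    from collections import Counter

    def background_rate(history, concept):
        """Average daily count of a concept over the historical (background) window."""
        days = len(history)
        return sum(day[concept] for day in history) / days if days else 0.0

    def detect_event(history, today, factor=3.0):
        """Flag concepts whose frequency today far exceeds their background rate."""
        flagged = []
        for concept, count in today.items():
            base = background_rate(history, concept)
            if count > factor * max(base, 1.0):
                flagged.append(concept)
        return flagged

    # Visual concepts extracted from social-media images on previous days vs. today.
    history = [Counter(flood=1, selfie=40, food=25), Counter(flood=0, selfie=38, food=30)]
    today = Counter(flood=12, selfie=41, food=27)
    print(detect_event(history, today))   # ['flood']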
Abstract:
Methods, systems, and articles of manufacture for image segmentation are provided herein. A method includes creating an anatomical model from training data comprising one or more imaging modalities, generating one or more simulated images in a target modality based on the anatomical model and one or more principles of physics pertaining to image contrast generation, and comparing the one or more simulated images to an unlabeled input image of a given imaging modality to determine which simulated image, among the one or more simulated images, represents the unlabeled input image.
Abstract:
A system and article of manufacture for social media event detection and content-based retrieval include obtaining multiple images from at least one social media source, extracting at least one visual semantic concept from the multiple images, differentiating an event semantic concept signal from a background semantic concept signal to detect an event in the multiple images, and retrieving one or more images associated with the event semantic concept signal for presentation as a visual description of the detected event.