Generating a Compact Video Feature Representation in a Digital Medium Environment

Publication Number: US20180173958A1

Publication Date: 2018-06-21

Application Number: US15384831

Application Date: 2016-12-20

Abstract: Techniques and systems are described to generate a compact video feature representation for sequences of frames in a video. In one example, values of features are extracted from each frame of a plurality of frames of a video using machine learning, e.g., through use of a convolutional neural network. A video feature representation of the temporal order dynamics of the video is then generated, e.g., through use of a recurrent neural network. For example, a maximum value is maintained for each feature of the plurality of features that has been reached over the plurality of frames in the video. A timestamp is also maintained that indicates when the maximum value is reached for each feature. The video feature representation is then output as a basis to determine the similarity of the video with at least one other video.
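
The abstract describes max-pooling each feature over time while recording when each maximum occurs. Below is a minimal NumPy sketch of that bookkeeping; the CNN feature extractor named in the abstract is assumed and replaced here by random values:

```python
import numpy as np

def compact_video_representation(frame_features):
    """Compact representation from per-frame feature vectors.

    frame_features: (num_frames, num_features) array, e.g. CNN activations
    extracted for each frame of the video.
    Returns the per-feature maximum over all frames and the index of the
    frame at which each maximum is first reached.
    """
    max_values = frame_features.max(axis=0)
    timestamps = frame_features.argmax(axis=0)  # first frame attaining the max
    return max_values, timestamps

# Stand-in for real CNN features: 300 frames, 512 features per frame.
features = np.random.rand(300, 512)
max_vals, times = compact_video_representation(features)
# One fixed-length vector per video; two videos can then be compared by,
# e.g., cosine similarity of their representations.
video_repr = np.concatenate([max_vals, times / len(features)])
```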

14. DYNAMIC FONT SIMILARITY
Invention Application

Publication Number: US20170262414A1

Publication Date: 2017-09-14

Application Number: US15067108

Application Date: 2016-03-10

    CPC classification number: G06F17/214 G06N3/0454

Abstract: Embodiments of the present invention are directed at providing a font similarity system. In one embodiment, a new font is detected on a computing device. In response to the detection of the new font, a pre-computed font list is checked to determine whether the new font is included therein. The pre-computed font list includes feature representations, generated independently of the computing device, for corresponding fonts. In response to a determination that the new font is absent from the pre-computed font list, a feature representation for the new font is generated. The generated feature representation can be utilized for a similarity analysis of the new font. The feature representation is then stored in a supplemental font list to enable identification of one or more fonts installed on the computing device that are similar to the new font. Other embodiments may be described and/or claimed.
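
The flow in the abstract is a cache check followed by an on-device fallback. A small sketch under assumed names: precomputed and supplemental are dicts keyed by font name, and featurize is a hypothetical stand-in for the unspecified feature extractor:

```python
def ensure_font_features(font_name, precomputed, supplemental, featurize):
    """Return a feature representation for font_name.

    precomputed: server-generated {font_name: features} mapping shipped
    with the app; supplemental: device-local cache for fonts absent from
    it. featurize: hypothetical extractor, e.g. a CNN over rendered glyphs.
    """
    if font_name in precomputed:        # known font: reuse shipped features
        return precomputed[font_name]
    if font_name not in supplemental:   # new font: compute once, then cache
        supplemental[font_name] = featurize(font_name)
    return supplemental[font_name]
```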

16. Image tagging
Invention Grant

Publication Number: US09607014B2

Publication Date: 2017-03-28

Application Number: US14068238

Application Date: 2013-10-31

    CPC classification number: G06F17/30265 G06K9/6263 G06K2209/27

    Abstract: A system is configured to annotate an image with tags. As configured, the system accesses an image and generates a set of vectors for the image. The set of vectors may be generated by mathematically transforming the image, such as by applying a mathematical transform to predetermined regions of the image. The system may then query a database of tagged images by submitting the set of vectors as search criteria to a search engine. The querying of the database may obtain a set of tagged images. Next, the system may rank the obtained set of tagged images according to similarity scores that quantify degrees of similarity between the image and each tagged image obtained. Tags from a top-ranked subset of the tagged images may be extracted by the system, which may then annotate the image with these extracted tags.
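
A sketch of the rank-and-borrow step described above, assuming the feature vectors are already computed and using cosine similarity as the (unspecified) similarity score:

```python
import numpy as np
from collections import Counter

def tag_image(query_vec, db_vecs, db_tags, k=5, n_tags=3):
    """Annotate an image with tags borrowed from similar tagged images.

    query_vec: (D,) feature vector of the untagged image.
    db_vecs: (N, D) feature vectors of the tagged database images.
    db_tags: list of N tag lists, one per database image.
    """
    # Cosine similarity between the query and every database image.
    scores = db_vecs @ query_vec / (
        np.linalg.norm(db_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    top = np.argsort(scores)[::-1][:k]          # top-ranked subset
    votes = Counter(t for i in top for t in db_tags[i])
    return [t for t, _ in votes.most_common(n_tags)]
```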

17. DISTRIBUTED SIMILARITY LEARNING FOR HIGH-DIMENSIONAL IMAGE FEATURES
Invention Application (In Force)

Publication Number: US20150146973A1

Publication Date: 2015-05-28

Application Number: US14091972

Application Date: 2013-11-27

    CPC classification number: G06K9/6269 G06K9/6235

    Abstract: A system and method for distributed similarity learning for high-dimensional image features are described. A set of data features is accessed. Subspaces from a space formed by the set of data features are determined using a set of projection matrices. Each subspace has a dimension lower than a dimension of the set of data features. Similarity functions are computed for the subspaces. Each similarity function is based on the dimension of the corresponding subspace. A linear combination of the similarity functions is performed to determine a similarity function for the set of data features.
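
The composition in the abstract reduces to projecting into each subspace, scoring there, and mixing the scores. A sketch with a plain dot product standing in for each learned per-subspace similarity function:

```python
import numpy as np

def combined_similarity(x, y, projections, weights):
    """Weighted linear combination of per-subspace similarities.

    projections: list of (d_k, D) matrices, each mapping D-dimensional
    features into a lower-dimensional subspace (d_k < D).
    weights: one scalar per subspace for the linear combination.
    """
    return sum(
        w * float((P @ x) @ (P @ y))  # similarity computed in subspace k
        for P, w in zip(projections, weights))

# Example: three 16-dimensional subspaces of a 256-dimensional feature space.
rng = np.random.default_rng(0)
projs = [rng.standard_normal((16, 256)) for _ in range(3)]
x, y = rng.standard_normal(256), rng.standard_normal(256)
score = combined_similarity(x, y, projs, weights=[0.5, 0.3, 0.2])
```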

    Photometric stabilization for time-compressed video

Publication Number: US10116897B2

Publication Date: 2018-10-30

Application Number: US15446906

Application Date: 2017-03-01

    Abstract: Photometric stabilization for time-compressed video is described. Initially, video content captured by a video capturing device is time-compressed by selecting a subset of frames from the video content according to a frame sampling technique. Photometric characteristics are then stabilized across the frames of the time-compressed video. This involves determining correspondences of pixels in adjacent frames of the time-compressed video. Photometric transformations are then determined that describe how photometric characteristics (e.g., one or both of luminance and chrominance) change between the adjacent frames, given movement of objects through the captured scene. Based on the determined photometric transformations, filters are computed for smoothing photometric characteristic changes across the time-compressed video. Photometrically stabilized time-compressed video is generated from the time-compressed video by using the filters to smooth the photometric characteristic changes.
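
A much-simplified sketch of the smoothing idea: a global per-frame brightness gain stands in for the correspondence-based photometric transforms, and a moving average serves as the low-pass filter:

```python
import numpy as np

def stabilize_luminance(frames, window=5):
    """Smooth brightness changes across sampled (time-compressed) frames.

    frames: list of grayscale images as float arrays in [0, 255].
    """
    # Gain mapping each frame's mean brightness onto the previous frame's.
    gains = [1.0] + [prev.mean() / (cur.mean() + 1e-9)
                     for prev, cur in zip(frames, frames[1:])]
    cum = np.cumprod(gains)            # maps frame t onto frame 0's exposure
    kernel = np.ones(window) / window
    # Low-pass exposure trajectory (edge frames are only roughly smoothed).
    smooth = np.convolve(cum, kernel, mode="same")
    # cum/smooth keeps slow exposure changes, removes frame-to-frame flicker.
    return [np.clip(f * (c / s), 0, 255)
            for f, c, s in zip(frames, cum, smooth)]
```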

    Photometric Stabilization for Time-Compressed Video

Publication Number: US20180255273A1

Publication Date: 2018-09-06

Application Number: US15446906

Application Date: 2017-03-01

    Abstract: Photometric stabilization for time-compressed video is described. Initially, video content captured by a video capturing device is time-compressed by selecting a subset of frames from the video content according to a frame sampling technique. Photometric characteristics are then stabilized across the frames of the time-compressed video. This involves determining correspondences of pixels in adjacent frames of the time-compressed video. Photometric transformations are then determined that describe how photometric characteristics (e.g., one or both of luminance and chrominance) change between the adjacent frames, given movement of objects through the captured scene. Based on the determined photometric transformations, filters are computed for smoothing photometric characteristic changes across the time-compressed video. Photometrically stabilized time-compressed video is generated from the time-compressed video by using the filters to smooth the photometric characteristic changes.

20. DEEP HIGH-RESOLUTION STYLE SYNTHESIS
Invention Application

Publication Number: US20180240257A1

Publication Date: 2018-08-23

Application Number: US15438147

Application Date: 2017-02-21

Abstract: In some embodiments, techniques for synthesizing an image style based on a plurality of neural networks are described. A computer system selects a style image based on user input that identifies the style image. The computer system generates a synthesized image based on a generator neural network and a loss neural network. The generator neural network outputs the synthesized image based on a noise vector and the style image and is trained based on style features generated from the loss neural network. The loss neural network outputs the style features based on a training image. The training image and the style image have the same resolution. The style features are generated at different resolutions of the training image. The computer system provides the synthesized image to a user device in response to the user input.
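
A toy PyTorch sketch of the generator/loss-network split described above. The tiny linear stacks are stand-ins for the patent's unspecified architectures, the Gram matrix is the usual style statistic, and for brevity the generator is conditioned on noise only (the patent also feeds it the style image):

```python
import torch
import torch.nn as nn

# Stand-in networks, not the patent's architectures.
generator = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Tanh())
loss_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
for p in loss_net.parameters():
    p.requires_grad = False  # the loss network is a frozen feature extractor

def gram(f):
    """Gram matrix of a feature map: the standard style statistic."""
    f = f.view(f.size(0), -1)
    return f @ f.t() / f.numel()

style_image = torch.rand(1, 3, 32, 32)          # user-selected style image
target_style = gram(loss_net(style_image))      # style features to match
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
for step in range(100):
    noise = torch.randn(1, 64)
    synthesized = generator(noise).view(1, 3, 32, 32)
    # Train the generator so its output's style features match the target.
    loss = ((gram(loss_net(synthesized)) - target_style) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```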
