Methods to present search keywords for image-based queries

    Publication number: US10706098B1

    Publication date: 2020-07-07

    Application number: US15084256

    Application date: 2016-03-29

    Applicant: A9.com, Inc.

    Abstract: Techniques for providing recommended keywords in response to an image-based query are disclosed herein. In particular, various embodiments utilize an image matching service to identify recommended search keywords associated with image data received from a user. The search keywords can be used to perform a keyword search to identify content associated with an image input that may be relevant. For example, an image search query can be received from a user. The image search query may result in multiple different types of content that are associated with the image. The system may present keywords associated with matching images to allow a user to further refine their search and/or find other related products that may not match with the particular image. This enables users to quickly refine a search using keywords that may be difficult to identify otherwise and to find the most relevant content for the user.
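
As a rough illustration of the keyword-aggregation idea this abstract describes, the sketch below stubs out the image matching service and simply pools the keywords attached to matched catalog images, ranked by how many matches share them. All function names, identifiers, and data here are hypothetical, not taken from the patent.

```python
from collections import Counter

def recommend_keywords(matched_images, keyword_index, top_n=5):
    """Aggregate keywords attached to matched catalog images,
    ranked by how many matches share each keyword."""
    counts = Counter()
    for image_id in matched_images:
        counts.update(keyword_index.get(image_id, []))
    return [kw for kw, _ in counts.most_common(top_n)]

# Hypothetical keyword index: catalog image id -> associated keywords.
keyword_index = {
    "img1": ["running shoe", "mesh", "trail"],
    "img2": ["running shoe", "road", "cushioned"],
    "img3": ["sandal", "leather"],
}

# Suppose the image matching service matched a query photo to img1 and img2;
# the shared keyword rises to the top of the recommendations.
print(recommend_keywords(["img1", "img2"], keyword_index))
```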

    VISUAL FEEDBACK OF PROCESS STATE
    Invention application

    Publication number: US20190272425A1

    Publication date: 2019-09-05

    Application number: US15911850

    Application date: 2018-03-05

    Applicant: A9.com, Inc.

    Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
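
The state-to-behavior mapping the abstract describes could be sketched as a simple lookup, as below. The state and behavior names are purely illustrative assumptions; the patent does not enumerate specific states.

```python
# Hypothetical mapping from recognition-process state to the behavior
# the onscreen body of visual markers animates; names are illustrative.
STATE_BEHAVIORS = {
    "searching": "drift",         # markers wander over the live view
    "object_detected": "swarm",   # markers converge on the object
    "low_light": "dim_pulse",     # hints the user should improve lighting
    "recognized": "form_outline", # markers settle around the result
}

def behavior_for(state):
    """Pick the marker behavior for the current process state,
    falling back to a neutral idle animation."""
    return STATE_BEHAVIORS.get(state, "idle")

print(behavior_for("object_detected"))
```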

    Image match for featureless objects

    Publication number: US10210423B2

    Publication date: 2019-02-19

    Application number: US15166973

    Application date: 2016-05-27

    Applicant: A9.com, Inc.

    Abstract: Object identification through image matching can utilize ratio and other data to accurately identify objects having relatively few feature points otherwise useful for identifying objects. An initial image analysis attempts to locate a “scalar” in the image, such as may include a label, text, icon, or other identifier that can help to narrow a classification of the search, as well as to provide a frame of reference for relative measurements obtained from the image. By comparing the ratios of dimensions of the scalar with other dimensions of the object, it is possible to discriminate between objects containing that scalar in a way that is relatively robust to changes in viewpoint. A ratio signature can be generated for an object for use in matching, while in other embodiments a classification can identify priority ratios that can be used to more accurately identify objects in that classification.
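
The core of the ratio idea is that dimensions measured relative to a detected "scalar" (such as a label) cancel out absolute image scale. A minimal sketch, with illustrative names and a made-up tolerance, not the patent's actual algorithm:

```python
def ratio_signature(scalar_w, scalar_h, object_dims):
    """Express each measured object dimension as a ratio of the
    scalar's (e.g., a label's) width, plus the scalar's own aspect
    ratio, making the signature independent of absolute image scale."""
    return tuple(d / scalar_w for d in object_dims) + (scalar_h / scalar_w,)

def signatures_match(sig_a, sig_b, tol=0.05):
    """Compare two signatures within a relative tolerance."""
    return len(sig_a) == len(sig_b) and all(
        abs(a - b) <= tol * max(abs(a), abs(b), 1e-9)
        for a, b in zip(sig_a, sig_b)
    )

# The same bottle photographed at two distances: every pixel measurement
# halves, but the ratios (and hence the signature) are unchanged.
near = ratio_signature(100, 40, [300, 120])
far = ratio_signature(50, 20, [150, 60])
print(signatures_match(near, far))
```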

    Scalable image matching
    Invention grant

    Publication number: US10140549B2

    Publication date: 2018-11-27

    Application number: US15443730

    Application date: 2017-02-27

    Applicant: A9.com, Inc.

    Abstract: Various embodiments may increase scalability of image representations stored in a database for use in image matching and retrieval. For example, a system providing image matching can obtain images of a number of inventory items, extract features from each image using a feature extraction algorithm, and transform them into feature descriptor representations. These feature descriptor representations can subsequently be stored and used to compare against query images submitted by users. Though the size of each feature descriptor representation is not particularly large, the total number of these descriptors requires a substantial amount of storage space. Accordingly, feature descriptor representations are compressed to minimize storage and, in one example, machine learning can be used to compensate for information lost as a result of the compression.
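
One common form of descriptor compression is scalar quantization: storing one byte per dimension instead of a float. The sketch below shows that idea only; it is a generic technique standing in for whichever compression the patent claims, and all names are illustrative.

```python
def quantize(descriptor, lo=-1.0, hi=1.0, levels=256):
    """Compress a float feature descriptor to one byte per dimension
    by mapping [lo, hi] onto the integers 0..levels-1."""
    scale = (levels - 1) / (hi - lo)
    return bytes(round((min(max(v, lo), hi) - lo) * scale) for v in descriptor)

def dequantize(packed, lo=-1.0, hi=1.0, levels=256):
    """Recover approximate float values from the packed bytes."""
    scale = (hi - lo) / (levels - 1)
    return [lo + b * scale for b in packed]

desc = [0.12, -0.56, 0.98, 0.0]
packed = quantize(desc)              # 4 bytes instead of 4 floats
restored = dequantize(packed)
error = max(abs(a - b) for a, b in zip(desc, restored))
print(len(packed), error)            # small reconstruction error
```

The information lost here (the `error` term) is exactly what the abstract suggests machine learning could compensate for.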

    Item recommendation based on feature match

    Publication number: US10109051B1

    Publication date: 2018-10-23

    Application number: US15196644

    Application date: 2016-06-29

    Applicant: A9.com, Inc.

    Abstract: Images may be analyzed to determine a visually cohesive color palette, for example by comparing a subset of the colors most frequently appearing in the image to a plurality of color schemes (e.g., complementary, analogous, etc.), and potentially modifying one or more of the subset of colors to more accurately fit the selected color scheme. Various regions of the image are selected and portions of the regions having one or more colors of the color palette are extracted and classified to generate and compare feature vectors of the patches to previously-determined feature vectors of items to identify visually similar items. The visually similar items are selected for presentation in various ways, such as by choosing an outfit of visually-similar apparel items based on the locations of the corresponding colors in the image, etc.
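
The first step the abstract describes, finding the colors most frequently appearing in an image, can be approximated by coarse histogram bucketing, as in this sketch (bucket size and data are illustrative assumptions; the patent's scheme-fitting step is not shown):

```python
from collections import Counter

def dominant_palette(pixels, n_colors=3, bucket=32):
    """Coarsely quantize RGB pixels into buckets and return the
    centers of the most frequent buckets as a simple palette."""
    counts = Counter(tuple(c // bucket for c in p) for p in pixels)
    return [tuple(c * bucket + bucket // 2 for c in b)
            for b, _ in counts.most_common(n_colors)]

# Tiny synthetic "image": mostly red pixels with a few blue ones.
pixels = [(250, 10, 10)] * 6 + [(10, 10, 250)] * 2
print(dominant_palette(pixels, n_colors=2))
```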

    Video content alignment
    Invention grant (in force)

    Publication number: US09275682B1

    Publication date: 2016-03-01

    Application number: US14498818

    Application date: 2014-09-26

    Applicant: A9.com, Inc.

    Abstract: Various embodiments identify differences between frame sequences of a video. For example, to determine a difference between two versions of a video, a fingerprint of each frame of the two versions is generated. From the fingerprints, a run-length encoded representation of each version is generated. The fingerprints which appear only once (i.e., unique fingerprints) in the entire video are identified from each version and compared to identify matching unique fingerprints across versions. The matching unique fingerprints are sorted and filtered to determine split points, which are used to align the two versions of the video. Accordingly, each version is segmented into smaller frame sequences using the split points. Once segmented, the individual frames of each segment are aligned across versions using a dynamic programming algorithm. After aligning the segments at a frame level, the segments are reassembled to generate a global alignment output.
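
The unique-fingerprint matching step can be sketched as follows: fingerprints that occur exactly once in each version become anchor points between the two frame sequences. The per-frame fingerprint here is a stand-in (real systems hash pixel content), and the frame data is made up for illustration.

```python
from collections import Counter

def fingerprints(frames):
    """Stand-in per-frame fingerprint; a real system would hash
    the frame's pixel content."""
    return [hash(f) for f in frames]

def unique_matches(fps_a, fps_b):
    """Fingerprints appearing exactly once in each version become
    anchor points (candidate split points) for alignment."""
    once_a = {fp for fp, n in Counter(fps_a).items() if n == 1}
    once_b = {fp for fp, n in Counter(fps_b).items() if n == 1}
    shared = once_a & once_b
    index_a = {fp: i for i, fp in enumerate(fps_a)}
    index_b = {fp: i for i, fp in enumerate(fps_b)}
    return sorted((index_a[fp], index_b[fp]) for fp in shared)

# Version B is version A with one extra frame ("ad") inserted; the
# repeated "intro" frames are not unique, so they cannot be anchors.
a = ["intro", "intro", "scene1", "scene2", "credits"]
b = ["intro", "intro", "scene1", "ad", "scene2", "credits"]
print(unique_matches(fingerprints(a), fingerprints(b)))
```

The gaps between consecutive anchor pairs are where the abstract's per-segment dynamic-programming alignment would then run.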

    USING SENSOR DATA TO ENHANCE IMAGE DATA
    Invention application (published, under examination)

    Publication number: US20150271395A1

    Publication date: 2015-09-24

    Application number: US14733698

    Application date: 2015-06-08

    Applicant: A9.com, Inc.

    Inventor: Colin Jon Taylor

    CPC classification number: H04N5/23222 G06T5/00 H04N5/23238 H04N5/23293

    Abstract: Image data and position and orientation data collected by a computing device can be aggregated to create enhanced videos. One example of an enhanced video is a panoramic video generated from a single video camera having a standard field of view. Enhanced videos can also be created to have a display resolution that is greater than is capable of being recorded by at least one video camera of the computing device providing input to the computing device. Enhanced videos can also be streamed live to a viewer, and the viewer can change the perspective of the streamed video or auto-center and auto-focus on a specified location or object in the streamed video.

    Real-time visual effects for a live camera view

    Publication number: US10924676B2

    Publication date: 2021-02-16

    Application number: US15890641

    Application date: 2018-02-07

    Applicant: A9.com, Inc.

    Abstract: Visual effects for an element of interest can be displayed within a live camera view in real time, or substantially in real time, using a processing pipeline that does not immediately display an acquired image until it has been updated with the effects. In various embodiments, software-based approaches, such as fast convolution algorithms, and/or hardware-based approaches, such as using a graphics processing unit (GPU), can be used to reduce the time between acquiring an image and displaying the image with various visual effects. These visual effects can include automatically highlighting elements; augmenting the color, style, and/or size of elements; casting a shadow on elements; erasing elements; substituting elements; or shaking and jumbling elements, among other effects.
