METHODS TO PRESENT SEARCH KEYWORDS FOR IMAGE-BASED QUERIES

    Publication Number: US20200311126A1

    Publication Date: 2020-10-01

    Application Number: US16900362

    Application Date: 2020-06-12

    Applicant: A9.com, Inc.

    Abstract: Techniques for providing recommended keywords in response to an image-based query are disclosed herein. In particular, various embodiments utilize an image matching service to identify recommended search keywords associated with image data received from a user. The search keywords can be used to perform a keyword search to identify content associated with an image input that may be relevant. For example, an image search query can be received from a user. The image search query may result in multiple different types of content that are associated with the image. The system may present keywords associated with matching images to allow a user to further refine their search and/or find other related products that may not match with the particular image. This enables users to quickly refine a search using keywords that may be difficult to identify otherwise and to find the most relevant content for the user.
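
    The abstract above describes an aggregation step: keywords attached to visually similar catalog images are pooled and surfaced as refinement suggestions. A minimal sketch of that idea follows, assuming a hypothetical embedding function and catalog index (embed_image, catalog); it is an illustration, not the patented implementation.

```python
# Sketch only: suggest refinement keywords by pooling the terms attached to
# the catalog images that look most like the query image.
# `embed_image` and `catalog` are hypothetical stand-ins for an image
# matching service and its index.
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def recommend_keywords(query_image, embed_image, catalog, num_matches=20, top_k=5):
    """Return the most frequent keywords among the closest catalog matches."""
    query_vec = embed_image(query_image)
    matches = sorted(catalog,
                     key=lambda item: cosine(query_vec, item["vector"]),
                     reverse=True)[:num_matches]
    counts = Counter(kw for item in matches for kw in item["keywords"])
    return [kw for kw, _ in counts.most_common(top_k)]
```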

    VISUAL FEEDBACK OF PROCESS STATE
    Invention Application

    Publication Number: US20200160058A1

    Publication Date: 2020-05-21

    Application Number: US16773763

    Application Date: 2020-01-27

    Applicant: A9.com, Inc.

    Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another accordingly.
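
    The behavior-per-state mapping described above can be pictured as a small state machine. The sketch below uses four illustrative states and behavior names that are assumptions, not taken from the patent.

```python
# Sketch only: map assumed scanning states to visual-marker behaviors and
# optional user hints; state and behavior names are illustrative.
from enum import Enum, auto

class ScanState(Enum):
    SEARCHING = auto()      # looking for a recognizable object
    LOW_LIGHT = auto()      # conditions too poor to scan reliably
    OBJECT_FOUND = auto()   # candidate detected, match being refined
    RECOGNIZED = auto()     # match confirmed

MARKER_BEHAVIOR = {
    ScanState.SEARCHING: ("drift", None),
    ScanState.LOW_LIGHT: ("dim_pulse", "Move to a brighter area"),
    ScanState.OBJECT_FOUND: ("converge_on_object", None),
    ScanState.RECOGNIZED: ("snap_to_outline", None),
}

def update_overlay(previous_state, current_state):
    """Pick the marker behavior for the current state, transitioning if it changed."""
    behavior, hint = MARKER_BEHAVIOR[current_state]
    action = "transition_to" if previous_state is not current_state else "continue"
    return {action: behavior, "hint": hint}
```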

    Auxiliary device as augmented reality platform

    Publication Number: US10026229B1

    Publication Date: 2018-07-17

    Application Number: US15019257

    Application Date: 2016-02-09

    Applicant: A9.com, Inc.

    Abstract: An auxiliary device can be used to display a fiducial that contains information useful in determining the physical size of the fiducial as displayed on the auxiliary device. A primary device can capture image data including a representation of the fiducial. The scale and orientation of the fiducial can be determined, such that a graphical overlay can be generated of an item of interest that corresponds to that scale and orientation. The overlay can then be displayed along with the captured image data, in order to provide an augmented reality experience wherein the image displayed on the primary device represents a scale-appropriate view of the item in a location of interest corresponding to the location of the auxiliary device. As the primary device is moved and the viewpoint of the camera changes, changes in relative scale and orientation to the fiducial are determined and the overlay is updated accordingly.
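
    Because the physical width of the displayed fiducial is known from the auxiliary device's screen dimensions, the ratio of its detected on-screen size to that physical size fixes the overlay scale. The sketch below shows that arithmetic under simplifying assumptions (a roughly fronto-parallel fiducial with two known corner points); it is an illustration, not the patented method.

```python
# Sketch only: derive overlay scale and in-plane rotation from a detected
# fiducial of known physical width. Assumes a roughly fronto-parallel view.
import math

def overlay_scale_and_rotation(corner_a_px, corner_b_px,
                               fiducial_width_mm, item_width_mm):
    """Estimate how wide (in pixels) and how rotated the item overlay should be."""
    dx = corner_b_px[0] - corner_a_px[0]
    dy = corner_b_px[1] - corner_a_px[1]
    detected_width_px = math.hypot(dx, dy)
    # Pixels per millimetre at the fiducial's depth in the scene.
    px_per_mm = detected_width_px / fiducial_width_mm
    # Render the item at the same pixels-per-millimetre so it appears true to size.
    item_width_px = item_width_mm * px_per_mm
    # In-plane rotation of the fiducial's measured edge relative to the image axes.
    rotation_deg = math.degrees(math.atan2(dy, dx))
    return item_width_px, rotation_deg
```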

    Video content alignment
    Invention Grant

    Publication Number: US09984728B2

    Publication Date: 2018-05-29

    Application Number: US14997351

    Application Date: 2016-01-15

    Applicant: A9.com, Inc.

    Abstract: Various embodiments identify differences between frame sequences of a video. For example, to determine a difference between two versions of a video, a fingerprint of each frame of the two versions is generated. From the fingerprints, a run-length encoded representation of each version is generated. The fingerprints which appear only once (i.e., unique fingerprints) in the entire video are identified from each version and compared to identify matching unique fingerprints across versions. The matching unique fingerprints are sorted and filtered to determine split points, which are used to align the two versions of the video. Accordingly, each version is segmented into smaller frame sequences using the split points. Once segmented, the individual frames of each segment are aligned across versions using a dynamic programming algorithm. After aligning the segments at a frame level, the segments are reassembled to generate a global alignment output.
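
    The pipeline above (unique fingerprints as anchors, split points, per-segment alignment) can be sketched as follows. Frame fingerprinting and the dynamic-programming frame aligner are assumed to exist and are passed in by the caller; this illustrates the anchoring idea and is not the patented implementation.

```python
# Sketch only: use fingerprints that occur exactly once in each version as
# anchors, cut both versions at those anchors, and align each segment.
from collections import Counter

def unique_fingerprints(fingerprints):
    """Map each fingerprint that occurs exactly once to its frame index."""
    counts = Counter(fingerprints)
    return {fp: i for i, fp in enumerate(fingerprints) if counts[fp] == 1}

def split_points(fps_a, fps_b):
    """Sorted (index_a, index_b) anchors from matching unique fingerprints."""
    uniq_a, uniq_b = unique_fingerprints(fps_a), unique_fingerprints(fps_b)
    anchors = sorted((uniq_a[fp], uniq_b[fp]) for fp in uniq_a.keys() & uniq_b.keys())
    filtered, last_b = [], -1
    for ia, ib in anchors:
        if ib > last_b:          # drop anchors that would run backwards in version B
            filtered.append((ia, ib))
            last_b = ib
    return filtered

def align_versions(fps_a, fps_b, align_segment):
    """Cut both versions at the anchors, align each segment, then reassemble."""
    anchors = split_points(fps_a, fps_b) + [(len(fps_a), len(fps_b))]
    aligned, prev_a, prev_b = [], 0, 0
    for ia, ib in anchors:
        # `align_segment` stands in for the dynamic-programming frame aligner.
        aligned.extend(align_segment(fps_a[prev_a:ia], fps_b[prev_b:ib]))
        prev_a, prev_b = ia, ib
    return aligned
```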

    Detection of cast members in video content
    Invention Grant (In Force)

    Publication Number: US09449216B1

    Publication Date: 2016-09-20

    Application Number: US13860347

    Application Date: 2013-04-10

    Applicant: A9.com, Inc.

    Abstract: Disclosed are various embodiments for detection of cast members in video content such as movies, television shows, and other programs. Data indicating cast members who appear in a video program is obtained. Each cast member is associated with a reference image depicting a face of the cast member. A frame is obtained from the video program, and a face is detected in the frame. The detected face in the frame is recognized as being a particular cast member based at least in part on the reference image depicting the cast member. An association between the cast member and the frame is generated in response to the detected face in the frame being recognized as the cast member.
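
    The matching step described above amounts to comparing each detected face against the cast members' reference embeddings. A minimal sketch follows, assuming off-the-shelf detection and embedding functions (detect_faces, embed_face) supplied by the caller; it is not the patented implementation.

```python
# Sketch only: associate faces detected in a frame with the closest cast
# member's reference embedding, if that embedding is close enough.

def tag_cast_in_frame(frame_image, frame_index, cast_references,
                      detect_faces, embed_face, threshold=0.6):
    """Return (cast_member, frame_index) pairs for recognized faces."""
    associations = []
    for face_crop in detect_faces(frame_image):
        face_vec = embed_face(face_crop)
        # Pick the closest cast member by Euclidean distance in embedding space.
        best_name, best_dist = None, float("inf")
        for name, ref_vec in cast_references.items():
            dist = sum((a - b) ** 2 for a, b in zip(face_vec, ref_vec)) ** 0.5
            if dist < best_dist:
                best_name, best_dist = name, dist
        if best_name is not None and best_dist <= threshold:
            associations.append((best_name, frame_index))
    return associations
```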
