Abstract:
Techniques and systems are described that leverage computer vision as part of search to expand the functionality of a computing device available to a user and to increase both computational efficiency and the efficiency of user interaction. In a first example, user interaction with items of digital content is monitored. Computer vision techniques are used to identify digital images in the digital content, objects within the digital images, and characteristics of those objects. This information is used to assign the user to a user segment of a user population, which is then used to control output of subsequent digital content to the user, e.g., recommendations, digital marketing content, and so forth.
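A minimal sketch of how such segment assignment might work, assuming a hypothetical detect_objects computer-vision helper and an illustrative label-to-segment mapping (neither is specified by the abstract):

```python
from collections import Counter

def detect_objects(image):
    """Hypothetical computer-vision helper: returns a list of
    (object_label, characteristic) tuples found in the image,
    e.g. ("sofa", "mid-century") or ("shoe", "running")."""
    raise NotImplementedError

# Illustrative mapping from detected object labels to user segments.
SEGMENT_RULES = {
    "sofa": "home-decor",
    "shoe": "athletics",
    "camera": "photography",
}

def assign_segment(viewed_images):
    """Assign the user to the segment whose objects appear most often
    across the digital images the user interacted with."""
    votes = Counter()
    for image in viewed_images:
        for label, _characteristic in detect_objects(image):
            segment = SEGMENT_RULES.get(label)
            if segment:
                votes[segment] += 1
    return votes.most_common(1)[0][0] if votes else "general"
```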
Abstract:
A system comprising a computer-readable storage medium storing at least one program, and a computer-implemented method for facilitating automatically guided image capture and presentation, are presented. In some embodiments, the method includes capturing an image frame of an item, automatically removing a background from the image frame, performing manual mask editing, generating an item listing, inferring item information from the image frame and automatically applying the inferred item information to an item listing form, and presenting the item listing in an augmented reality environment.
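The abstract does not commit to a particular segmentation algorithm; as one illustration only, automatic background removal could be sketched with OpenCV's GrabCut, seeded with a rectangle inset from the frame border (the file names are assumptions):

```python
import cv2
import numpy as np

# Load the captured item image (file name is an assumption for illustration).
img = cv2.imread("item.jpg")

# GrabCut needs a mask and two scratch model arrays.
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Assume the item roughly fills the frame: seed with a rectangle
# inset 10 pixels from each border.
rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground; zero out the background.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
cutout = img * fg[:, :, None]
cv2.imwrite("item_no_background.png", cutout)
```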
Abstract:
Systems and methods to fit an image of an inventory part are described. In one aspect, a method includes receiving images of items over a computer network from a server, capturing a live video image of an object using a camera, playing the live video of the object on an electronic display, and continually refitting an image of a first item from the received images of items to the object as the object changes perspective in the video by applying an affine transformation to the image of the first item.
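A minimal per-frame sketch of the refitting step, assuming corresponding keypoints between the item image and the tracked object are already available (the tracking itself is out of scope here); OpenCV's estimateAffine2D and warpAffine perform the fit:

```python
import cv2
import numpy as np

def refit_item(frame, item_img, item_pts, object_pts):
    """Warp item_img onto the current video frame.

    item_pts   -- Nx2 float32 points in the item image (N >= 3)
    object_pts -- corresponding Nx2 float32 points on the object
                  in this frame, e.g. from a feature tracker
    """
    # Estimate the affine transform mapping item points to object points.
    matrix, _inliers = cv2.estimateAffine2D(item_pts, object_pts)
    if matrix is None:
        return frame  # fit failed this frame; show the unmodified video

    h, w = frame.shape[:2]
    warped = cv2.warpAffine(item_img, matrix, (w, h))

    # Overlay the warped item wherever it has non-zero pixels.
    overlay_mask = warped.sum(axis=2) > 0
    out = frame.copy()
    out[overlay_mask] = warped[overlay_mask]
    return out
```

Calling refit_item on every decoded frame, with freshly tracked object_pts, yields the continual refitting described above as the object changes perspective.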
Abstract:
Disclosed are methods and systems for displaying items of clothing on a model having a body shape similar to that of an ecommerce user. In one aspect, a system includes one or more hardware processors configured to perform operations comprising: receiving an image representing a user height, user weight, and user gender; causing display of a second image via a computer interface, the second image representing a model selected based on a comparison of a model height, weight, and gender with the user height, weight, and gender, respectively; receiving a selection of an item of clothing; and causing display of a representation of the selected model wearing the selected item of clothing.
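A minimal sketch of the model-selection comparison, assuming a catalog of pre-photographed models with known height, weight, and gender (the field names and distance weighting are illustrative assumptions, not the claimed method):

```python
from dataclasses import dataclass

@dataclass
class Model:
    model_id: str
    height_cm: float
    weight_kg: float
    gender: str

def select_model(models, user_height_cm, user_weight_kg, user_gender):
    """Pick the model whose body shape is closest to the user's.

    Filters to the same gender, then minimizes a simple weighted
    distance over height and weight (weights are arbitrary here)."""
    candidates = [m for m in models if m.gender == user_gender] or models
    return min(
        candidates,
        key=lambda m: abs(m.height_cm - user_height_cm)
        + 2.0 * abs(m.weight_kg - user_weight_kg),
    )

# Usage:
catalog = [
    Model("m1", 178, 72, "male"),
    Model("m2", 165, 58, "female"),
    Model("m3", 170, 80, "male"),
]
best = select_model(catalog, user_height_cm=180, user_weight_kg=75, user_gender="male")
print(best.model_id)  # -> "m1"
```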
Abstract:
Embodiments of the present disclosure can be used to identify relationships between terms/words used in Internet search queries. Among other things, this helps systems provide Internet search results that are more useful and applicable to a given search query than those of conventional systems, thereby providing better content to users.
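The disclosure does not fix a specific technique; one simple, hedged illustration of relating query terms is to count how often terms co-occur within the same queries and rank the strongest neighbors of a given term:

```python
from collections import Counter, defaultdict
from itertools import combinations

def build_cooccurrence(queries):
    """Count pairwise term co-occurrence across a list of search queries."""
    cooc = defaultdict(Counter)
    for query in queries:
        terms = set(query.lower().split())
        for a, b in combinations(sorted(terms), 2):
            cooc[a][b] += 1
            cooc[b][a] += 1
    return cooc

queries = [
    "red running shoes",
    "running shoes for trail",
    "trail running jacket",
]
cooc = build_cooccurrence(queries)
print(cooc["running"].most_common(2))  # e.g. [('shoes', 2), ('trail', 2)]
```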
Abstract:
An apparatus and method for obtaining image feature data of an image are disclosed herein. A color histogram of the image is extracted, the extraction including one-dimensional sampling of the pixels comprising the image in each of a first dimension, a second dimension, and a third dimension of a color space. An edge map corresponding to the image is analyzed to detect a pattern included in the image. In response to a confidence level of the pattern detection being below a pre-defined threshold, an orientation histogram of the image is extracted. A dominant color of the image is also identified.
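A minimal sketch of the per-dimension (one-dimensional) sampling and dominant-color steps, assuming the HSV color space and a bin count chosen only for illustration:

```python
import cv2
import numpy as np

def color_features(bgr_image, bins=16):
    """Return one 1-D histogram per color-space dimension plus a dominant color."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # One-dimensional sampling of pixels in each dimension of the color space.
    hists = []
    for channel, upper in enumerate((180, 256, 256)):  # H, S, V ranges in OpenCV
        hist, _edges = np.histogram(hsv[:, :, channel], bins=bins, range=(0, upper))
        hists.append(hist / hist.sum())  # normalize to a distribution

    # Dominant color: report the center of the most populated hue bin.
    hue_hist = hists[0]
    dominant_hue = (np.argmax(hue_hist) + 0.5) * (180 / bins)
    return hists, dominant_hue
```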
Abstract:
Vehicles and other items often have corresponding documentation, such as registration cards, that includes a significant amount of informative textual information that can be used in identifying the item. Traditional OCR may be unsuccessful when dealing with non-cooperative images. Accordingly, features such as dewarping, text alignment, and line identification and removal may aid in OCR of non-cooperative images. Dewarping involves determining curvature of a document depicted in an image and processing the image to dewarp the image of the document to make it more accurately conform to the ideal of a cooperative image. Text alignment involves determining an actual alignment of depicted text, even when the depicted text is not aligned with depicted visual cues. Line identification and removal involves identifying portions of the image that depict lines and removing those lines prior to OCR processing of the image.
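The abstract does not fix an algorithm for line identification and removal; one common sketch binarizes the document image and applies a long, thin morphological kernel that responds to horizontal rules but not to glyphs, then erases the detected line pixels before OCR (kernel and threshold sizes are assumptions):

```python
import cv2
import numpy as np

def remove_horizontal_lines(gray):
    """Identify long horizontal lines in a grayscale document image
    and paint them white so they do not confuse OCR."""
    # Binarize with the text and lines as white foreground.
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 15, 10
    )

    # A wide, short kernel keeps only horizontal structures.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
    lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=1)

    # Erase detected line pixels from the original image.
    cleaned = gray.copy()
    cleaned[lines > 0] = 255
    return cleaned
```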
Abstract:
A method, system, and article of manufacture for recommending items for a room. An image of a room is received, and a box image is fitted to the image of the room. Information is extracted from the fitted box image and is used to recommend items for the room. The image is a color image, and the information is extracted by computing color histograms from the fitted box image. The color histograms are used to determine items that match the color scheme of the room, the lighting of the room, and/or the decorating style of the room.
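A minimal sketch of matching catalog items to the room's color scheme once a color histogram has been extracted from the fitted box image, using histogram intersection as the similarity measure (the item data layout is an assumption for illustration):

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two normalized color histograms (1.0 = identical)."""
    return float(np.minimum(h1, h2).sum())

def recommend_items(room_hist, items, top_k=5):
    """Rank items by how closely their color histograms match the room's.

    items -- list of (item_id, item_hist) pairs, with item_hist normalized
             the same way as room_hist (an assumption for illustration).
    """
    scored = [(histogram_intersection(room_hist, h), item_id) for item_id, h in items]
    scored.sort(reverse=True)
    return [item_id for _score, item_id in scored[:top_k]]
```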
Abstract:
Apparatus and method for performing accurate text recognition of non-simplistic images (e.g., images with clutter backgrounds, lighting variations, font variations, non-standard perspectives, and the like) may employ a machine-learning approach to identify a discriminative feature set selected from among features computed for a plurality of irregularly positioned, sized, and/or shaped (e.g., randomly selected) image sub-regions.
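A hedged sketch of the idea rather than the patented method itself: compute simple statistics over randomly positioned and sized sub-regions of each training image, then let a tree ensemble's feature importances select the discriminative subset (scikit-learn and the particular statistics are assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def random_regions(height, width, n_regions=200, min_size=8):
    """Sample irregularly placed and sized rectangular sub-regions."""
    regions = []
    for _ in range(n_regions):
        h = rng.integers(min_size, height // 2)
        w = rng.integers(min_size, width // 2)
        y = rng.integers(0, height - h)
        x = rng.integers(0, width - w)
        regions.append((y, x, h, w))
    return regions

def region_features(image, regions):
    """One mean-intensity and one std-dev feature per sub-region."""
    feats = []
    for y, x, h, w in regions:
        patch = image[y:y + h, x:x + w]
        feats.extend([patch.mean(), patch.std()])
    return np.array(feats)

def select_discriminative_features(images, labels, keep=100):
    """images: 2-D grayscale arrays of equal size; labels: depicted characters."""
    regions = random_regions(*images[0].shape)
    X = np.stack([region_features(img, regions) for img in images])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    # Keep the indices of the most important (most discriminative) features.
    return np.argsort(clf.feature_importances_)[::-1][:keep]
```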