Abstract:
Providing product recommendations in a physical retail store. A method includes detecting that a user arrives at the physical retail store. The method further includes, in response, receiving information for the user from a recommendation server. The method further includes storing the information from the recommendation server locally. The method further includes detecting a plurality of interactions between the user and products in the retail store, as part of the shopping experience and prior to a check-out phase of the shopping experience. The method further includes, based on the locally stored information and the user interactions, providing product recommendations.
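A minimal sketch of the flow described in this abstract, assuming a hypothetical local cache class and a server payload format; the `LocalRecommendationCache` name, its methods, and the interaction representation are illustrative assumptions, not part of the original disclosure.

```python
# Hypothetical sketch: fetch recommendation data when the user arrives,
# cache it locally, then match in-store interactions against it before check-out.
from dataclasses import dataclass, field


@dataclass
class LocalRecommendationCache:
    user_id: str
    product_affinities: dict = field(default_factory=dict)  # product_id -> related product_ids

    def load(self, server_payload: dict) -> None:
        """Store the recommendation server's data locally on arrival."""
        self.product_affinities = server_payload.get("affinities", {})

    def recommend(self, interactions: list) -> list:
        """Suggest products based on interactions observed prior to check-out."""
        suggestions = []
        for product_id in interactions:
            suggestions.extend(self.product_affinities.get(product_id, []))
        # de-duplicate while preserving order
        return list(dict.fromkeys(suggestions))


# Usage: cache the payload on arrival, then recommend as interactions are detected.
cache = LocalRecommendationCache(user_id="user-123")
cache.load({"affinities": {"coffee": ["filters", "mugs"], "pasta": ["sauce"]}})
print(cache.recommend(["coffee", "pasta"]))  # ['filters', 'mugs', 'sauce']
```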
Abstract:
Scalable-effort machine learning may automatically and dynamically adjust the amount of computational effort applied to input data based on the complexity of the input data. This is in contrast to fixed-effort machine learning, which uses a one-size-fits-all approach to applying a single classifier algorithm to both simple data and complex data. Scalable-effort machine learning involves, among other things, classifiers that may be arranged as a series of multiple classifier stages having increasing complexity (and accuracy). A first classifier stage may involve relatively simple machine learning models able to classify data that is relatively simple. Subsequent classifier stages have increasingly complex machine learning models and are able to classify more complex data. Scalable-effort machine learning includes algorithms that can differentiate among data based on complexity of the data.
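A minimal sketch of a scalable-effort cascade, assuming scikit-learn-style classifiers with a `predict_proba()` method; the stage choices and the confidence threshold are illustrative assumptions. Easy inputs exit at an early, simple stage, while harder inputs fall through to later, more complex stages.

```python
import numpy as np


class ScalableEffortCascade:
    """Ordered stages, simple -> complex; early exit when a stage is confident."""

    def __init__(self, stages, confidence_threshold=0.9):
        self.stages = stages
        self.confidence_threshold = confidence_threshold

    def predict(self, x):
        x = np.asarray(x).reshape(1, -1)
        for stage in self.stages[:-1]:
            proba = stage.predict_proba(x)[0]
            if proba.max() >= self.confidence_threshold:
                return int(np.argmax(proba))      # simple data resolved early
        # the hardest inputs reach the final, most complex stage
        return int(self.stages[-1].predict(x)[0])


# Illustrative usage with two stages trained on the same synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
simple_stage = LogisticRegression(max_iter=1000).fit(X, y)
complex_stage = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
cascade = ScalableEffortCascade([simple_stage, complex_stage])
print(cascade.predict(X[0]))
```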
Abstract:
Identifying products in a physical store shopping environment. The method includes, using a first detection method, identifying that a given product likely belongs to a given set of products. The method further includes, using one or more other detection methods, determining that the product is likely a specific product from the given set of products.
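A hedged sketch of the two-stage identification described above: a coarse first detection method narrows an observed item to a candidate set of products, and a second method resolves it to one specific product. Both detectors and the product taxonomy below are placeholders, not the disclosed methods.

```python
def identify_product(observation, coarse_detector, fine_detectors):
    """Return (candidate_set, specific_product) for an observed item."""
    # Stage 1: a cheap detection method assigns the item to a set of products,
    # e.g. by shelf location or package shape.
    candidate_set = coarse_detector(observation)
    # Stage 2: one or more other methods (e.g. barcode read, weight, image match)
    # determine the specific product within that set.
    specific = fine_detectors[candidate_set](observation)
    return candidate_set, specific


# Illustrative usage with trivial stand-in detectors.
coarse = lambda obs: "beverages" if obs["shelf"] == 3 else "snacks"
fine = {
    "beverages": lambda obs: f"beverage-{obs['barcode']}",
    "snacks": lambda obs: f"snack-{obs['barcode']}",
}
print(identify_product({"shelf": 3, "barcode": "0421"}, coarse, fine))
```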
Abstract:
Disclosed herein are systems and methods for compressing data and for estimating the sparsity of datasets to aid in compressing data. A device receives a plurality of samples of sensor data from a sensor and determines a sequence of bits, in which each bit has a substantially equal probability of being determined as a 0 bit or as a 1 bit. The device estimates a sparsity value of the sensor data based at least in part on the sequence of bits. The device compresses the received samples of the sensor data based at least in part on the estimated sparsity value to provide compressed data and transmits the compressed data via a transmitter to a receiver. Sparse data other than sensor data may also be compressed based at least in part on an estimated sparsity value.
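A simplified, hypothetical sketch of sparsity-guided compression. The abstract estimates sparsity from a sequence of roughly unbiased bits derived from the samples; that bit-derivation step is not specified here, so this sketch estimates sparsity directly as the fraction of near-zero samples and compresses by keeping only nonzero (index, value) pairs when the data is sparse enough. Thresholds and encodings are illustrative assumptions.

```python
import numpy as np


def estimate_sparsity(samples, zero_tol=1e-6):
    """Estimate sparsity as the fraction of (near-)zero samples."""
    samples = np.asarray(samples, dtype=float)
    return float(np.mean(np.abs(samples) <= zero_tol))


def compress(samples, sparsity_threshold=0.5, zero_tol=1e-6):
    """Choose a sparse or dense encoding based on the estimated sparsity value."""
    samples = np.asarray(samples, dtype=float)
    sparsity = estimate_sparsity(samples, zero_tol)
    if sparsity >= sparsity_threshold:
        # sparse path: keep only the nonzero entries and their indices
        idx = np.flatnonzero(np.abs(samples) > zero_tol)
        return {"mode": "sparse", "n": samples.size,
                "indices": idx, "values": samples[idx]}
    # dense path: fall back to storing the raw samples
    return {"mode": "dense", "values": samples}


sensor_data = np.zeros(1000)
sensor_data[[10, 500, 900]] = [0.7, -1.2, 3.4]
packet = compress(sensor_data)
print(packet["mode"], len(packet["values"]))  # sparse 3
```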
Abstract:
Technology relating to tuning of operating memory devices is disclosed. The technology includes a computing device that selectively configures operating parameters for at least one operating memory device based at least in part on performance characteristics of an application or other workload that the computing device has been requested to execute. This technology may be implemented, at least in part, in firmware via a Basic Input/Output System (BIOS) or Unified Extensible Firmware Interface (UEFI) of the computing device. Further, this technology may be employed by a computing device that is executing workloads on behalf of a distributed computing system, e.g., in a data center. Such data centers may include, for example, thousands of computing devices and even more operating memory devices.
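An illustrative sketch of workload-aware selection of memory operating parameters. The profile names, parameters, and thresholds below are made-up examples; in the described technology the parameters would be applied through firmware (BIOS/UEFI) interfaces, which are not shown here.

```python
MEMORY_PROFILES = {
    # profile name -> hypothetical operating parameters
    "latency_sensitive": {"frequency_mhz": 3200, "refresh_scale": 1.0, "low_power": False},
    "throughput":        {"frequency_mhz": 2933, "refresh_scale": 1.0, "low_power": False},
    "background":        {"frequency_mhz": 2400, "refresh_scale": 2.0, "low_power": True},
}


def select_memory_profile(workload):
    """Pick operating parameters from coarse workload performance characteristics."""
    if workload.get("latency_critical"):
        return MEMORY_PROFILES["latency_sensitive"]
    if workload.get("memory_bandwidth_gbps", 0) > 20:
        return MEMORY_PROFILES["throughput"]
    return MEMORY_PROFILES["background"]


print(select_memory_profile({"latency_critical": False, "memory_bandwidth_gbps": 5}))
```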
Abstract:
Systems, methods, and computer media for implementing convolutional neural networks efficiently in hardware are disclosed herein. A memory is configured to store a sparse, frequency domain representation of a convolutional weighting kernel. A time-domain-to-frequency-domain converter is configured to generate a frequency domain representation of an input image. A feature extractor is configured to access the memory and, by a processor, extract features based on the sparse, frequency domain representation of the convolutional weighting kernel and the frequency domain representation of the input image. The feature extractor includes convolutional layers and fully connected layers. A classifier is configured to determine, based on the extracted features, whether the input image contains an object of interest. Various types of memory can be used to store different information, allowing information-dense data to be stored in faster (e.g., shorter access time) memory and sparse data to be stored in slower memory.
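A minimal sketch of frequency-domain feature extraction with a sparse kernel representation, assuming NumPy. The sparse kernel is stored as (index, value) pairs, and convolution becomes element-wise multiplication in the frequency domain. The layer structure, the classifier, and the placement of data across memory types are omitted; the `keep_fraction` sparsification rule is an illustrative assumption.

```python
import numpy as np


def to_frequency_domain(image):
    return np.fft.fft2(image)


def sparsify(kernel_freq, keep_fraction=0.1):
    """Keep only the largest-magnitude frequency coefficients of the kernel."""
    flat = kernel_freq.ravel()
    k = max(1, int(keep_fraction * flat.size))
    idx = np.argsort(np.abs(flat))[-k:]
    return idx, flat[idx], kernel_freq.shape


def convolve_sparse(image_freq, sparse_kernel):
    """Element-wise multiply in frequency domain == circular convolution in time domain."""
    idx, values, shape = sparse_kernel
    kernel = np.zeros(shape, dtype=complex)
    kernel.ravel()[idx] = values
    return np.real(np.fft.ifft2(image_freq * kernel))


image = np.random.rand(32, 32)
kernel = np.zeros((32, 32))
kernel[:3, :3] = np.random.rand(3, 3)
feature_map = convolve_sparse(to_frequency_domain(image), sparsify(np.fft.fft2(kernel)))
print(feature_map.shape)  # (32, 32)
```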
Abstract:
Examples of the disclosure enable efficient processing of images. One or more features are extracted from a plurality of images. Based on the extracted features, the plurality of images are classified into a first set including a plurality of first images and a second set including a plurality of second images. One or more images of the plurality of first images are false positives. The plurality of first images, but none of the plurality of second images, are transmitted to a remote device. The remote device is configured to process the one or more images, including recognizing the extracted features, understanding the images, and/or generating one or more actionable items. Aspects of the disclosure facilitate conserving memory at a local device, reducing processor load or the amount of energy consumed at the local device, and/or reducing network bandwidth usage between the local device and the remote device.
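An illustrative sketch of the local filtering step: extract a cheap feature from each image, classify locally, and forward only the first (potentially interesting) set to the remote device. The feature and the threshold are stand-ins for whatever lightweight classifier runs on the local device.

```python
import numpy as np


def extract_feature(image):
    """Cheap local feature: mean absolute intensity (a stand-in for real features)."""
    return float(np.mean(np.abs(np.asarray(image, dtype=float))))


def split_for_upload(images, threshold=0.1):
    """Return (first set to transmit, second set kept local)."""
    first, second = [], []
    for img in images:
        (first if extract_feature(img) > threshold else second).append(img)
    return first, second


frames = [np.zeros((8, 8)), np.full((8, 8), 0.5)]
to_send, kept_local = split_for_upload(frames)
print(len(to_send), len(kept_local))  # 1 1
```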