Abstract:
The subject technology provides for dynamic task allocation for neural network models. The subject technology determines an operation performed at a node of a neural network model. The subject technology assigns an annotation to indicate whether the operation is better performed on a CPU or a GPU based at least in part on hardware capabilities of a target platform. The subject technology determines whether the neural network model includes a second layer. The subject technology, in response to determining that the neural network model includes a second layer, for each node of the second layer of the neural network model, determines a second operation performed at the node. Further, the subject technology assigns a second annotation to indicate whether the second operation is better performed on the CPU or the GPU based at least in part on the hardware capabilities of the target platform.
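The per-node annotation described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the names (`annotate_layer`, `annotate_model`) and the rule that an operation is "better" on the GPU whenever the target platform's GPU supports it are assumptions for clarity.

```python
def annotate_layer(layer, gpu_supported_ops):
    """Annotate each node in one layer with 'GPU' if the target platform's
    GPU supports the node's operation, otherwise 'CPU' (assumed rule)."""
    annotations = {}
    for node, op in layer.items():
        annotations[node] = "GPU" if op in gpu_supported_ops else "CPU"
    return annotations

def annotate_model(layers, gpu_supported_ops):
    """Walk the model layer by layer; whenever a further layer exists,
    annotate its nodes the same way."""
    return [annotate_layer(layer, gpu_supported_ops) for layer in layers]
```

For example, on a platform whose GPU supports `conv2d` and `relu` but not a custom operation, the custom operation would be annotated for the CPU while the rest go to the GPU.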
Abstract:
Disclosed herein is a technique for implementing a framework that enables application developers to enhance their applications with dynamic adjustment capabilities. Specifically, the framework, when utilized by an application on a mobile computing device that implements the framework, can enable the application to establish predictive models that can be used to identify meaningful behavioral patterns of an individual who uses the application. In turn, the predictive models can be used to preempt the individual's actions and provide an enhanced overall user experience. The framework is configured to interface with other software entities on the mobile computing device that conduct various analyses to identify appropriate times for the application to manage and update its predictive models. Such appropriate times can include, for example, identified periods of time where the individual is not operating the mobile computing device, as well as recognized conditions where power consumption is not a concern.
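The "appropriate times" condition can be pictured as a simple predicate. This sketch is an assumption built from the two examples the abstract gives (user not operating the device, power consumption not a concern); the function name and the battery threshold are illustrative only.

```python
def should_update_models(user_idle: bool, on_charger: bool, battery_level: float) -> bool:
    """Return True only when the user is not operating the device and power
    consumption is not a concern (charging, or battery comfortably full).
    The 0.8 battery threshold is an assumed value for illustration."""
    power_ok = on_charger or battery_level >= 0.8
    return user_idle and power_ok
```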
Abstract:
This application relates to features for a mobile device that allow the mobile device to assign utility values to applications and thereafter suggest applications for a user to execute. The suggested application can be derived from a list of applications that have been assigned a utility by software in the mobile device. The utility assignment of the individual applications from the list of applications can be performed based on the occurrence of an event, an environmental change, or a period of frequent application usage. A feedback mechanism is provided in some embodiments for more accurately assigning a utility to particular applications. The feedback mechanism can track what a user does during a period of suggestion for certain applications and thereafter modify the utility of applications based on what applications a user selects during the period of suggestion.
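The utility assignment and feedback loop can be sketched as below. The class name, the reward/penalty magnitudes, and the exact update rule are assumptions; the abstract only specifies that utilities are raised or lowered based on what the user selects during a period of suggestion.

```python
class AppSuggester:
    def __init__(self):
        self.utility = {}  # app name -> assigned utility value

    def assign_utility(self, app, value):
        """Assign a utility, e.g. on an event, an environmental change,
        or a period of frequent application usage."""
        self.utility[app] = value

    def suggest(self, n=3):
        """Suggest the n apps with the highest assigned utility."""
        return sorted(self.utility, key=self.utility.get, reverse=True)[:n]

    def feedback(self, suggested, selected, reward=1.0, penalty=0.25):
        """During a period of suggestion, raise the utility of the app the
        user selected and lower the utility of ignored suggestions."""
        for app in suggested:
            if app == selected:
                self.utility[app] += reward
            else:
                self.utility[app] -= penalty
```

In this sketch, repeatedly ignoring a suggested app gradually drops it out of the suggestion list, which is one plausible reading of the feedback mechanism.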
Abstract:
The subject matter of the disclosure relates to low temperature power throttling at a mobile device to reduce the likelihood of an unexpected power down event in cold weather environments. A mobile device employing a power management solution may be configured to determine that a monitored temperature at the mobile device (e.g., at the battery of the mobile device) is below a first threshold level, and whether a hardware component (such as a camera) is active or inactive. Then, based on these determinations, the mobile device can select a throttle setting from a first set of throttle settings when the hardware component is active, and a second set of throttle settings when the hardware component is inactive. Subsequently, the mobile device can throttle power consumption for one or more components of the mobile device according to the selected throttle setting.
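The selection logic can be sketched as a small function. The threshold value, the contents of the two throttle-setting sets, and the function name are assumptions for illustration; only the two-set, component-active/inactive structure comes from the abstract.

```python
# Assumed example throttle settings; real settings would be platform-specific.
FIRST_SET = {"cpu_cap": 0.6, "backlight_cap": 0.8}    # hardware component active
SECOND_SET = {"cpu_cap": 0.4, "backlight_cap": 0.6}   # hardware component inactive

def select_throttle(temp_c: float, component_active: bool, threshold_c: float = 0.0):
    """When the monitored temperature is below the first threshold, pick a
    throttle setting from the first set if the component (e.g., camera) is
    active, otherwise from the second set; no throttling above threshold."""
    if temp_c >= threshold_c:
        return None
    return FIRST_SET if component_active else SECOND_SET
```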
Abstract:
The embodiments set forth techniques for implementing various “prediction engines” that can be configured to provide different kinds of predictions within a mobile computing device. According to some embodiments, each prediction engine can assign itself as an “expert” on one or more “prediction categories” within the mobile computing device. When a software application issues a request for a prediction for a particular category, and two or more prediction engines respond with their respective prediction(s), a “prediction center” can be configured to receive and process the predictions prior to responding to the request. Processing the predictions can involve removing duplicate information that exists across the predictions, sorting the predictions in accordance with confidence levels advertised by the prediction engines, and the like. In this manner, the prediction center can distill multiple predictions down into an optimized prediction and provide the optimized prediction to the software application.
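The distillation step performed by the prediction center can be sketched as follows: remove duplicate predictions and sort by the confidence each engine advertised. The function name and the rule of keeping the highest-confidence copy of a duplicate are assumptions.

```python
def distill(predictions):
    """predictions: list of (value, confidence) pairs gathered from two or
    more expert engines for one prediction category. Deduplicate by keeping
    the highest-confidence instance of each value, then sort descending by
    confidence to produce one optimized prediction list."""
    best = {}
    for value, confidence in predictions:
        if value not in best or confidence > best[value]:
            best[value] = confidence
    return sorted(best.items(), key=lambda vc: vc[1], reverse=True)
```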
Abstract:
Systems and methods are disclosed for improving search results returned to a user from one or more search domains, utilizing query features learned locally on the user's device. A search engine can receive and analyze query results from multiple search domains and forward the query results to a client device. A search engine can determine a feature by analyzing query results, generate a predictor for the feature, instruct a client device to use the predictor to train on the feature, and report back to the search engine on training progress. A search engine can instruct a first and second set of client devices to train on sets A and B of predictors, respectively, and report back training progress to the search engine. A client device can store search session context and share the context with a search engine between sessions with one or more search engines. A synchronization system can synchronize local predictors between multiple client devices of a user.
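The split into a first and second set of client devices can be sketched as a cohort assignment. The stable-hash split rule, the function name, and the parameters are assumptions; the abstract only states that two device sets are instructed to train on predictor sets A and B, respectively.

```python
def assign_cohorts(device_ids, set_a, set_b):
    """Split client devices into two cohorts by a stable hash of the device
    id (assumed rule) and map each cohort to one set of predictors to train
    on; training progress would then be reported back per device."""
    plan = {}
    for dev in device_ids:
        cohort_a = sum(ord(c) for c in dev) % 2 == 0
        plan[dev] = set_a if cohort_a else set_b
    return plan
```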
Abstract:
Systems and methods are disclosed for advising a user when an energy storage device in a computing system needs charging. State of charge data of the energy storage device can be measured and stored at regular intervals. The historic state of charge data can be queried over a plurality of intervals and a state of charge curve generated that is representative of a user's charging habits over time. The state of charge curve can be used to generate a rate of charge histogram and an acceleration of charge histogram. These can be used to predict when a user will charge next, and whether the energy storage device will have an amount of energy below a predetermined threshold amount before the next predicted charging time. A first device can determine when a second device typically charges and whether the energy storage device in the second device will have an amount of energy below the predetermined threshold amount before the next predicted charge time for the second device. The first device can generate an advice-to-charge notification to the user on either, or both, devices.
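The core projection can be sketched as below. This simplification replaces the rate and acceleration histograms with a single linear discharge estimate; the function name, the 20% threshold default, and the linear model are assumptions for illustration.

```python
def needs_charge_advice(history, next_charge_in_h, threshold=0.2):
    """history: (hour, state_of_charge) samples taken at regular intervals.
    Estimate the average rate of charge change from the endpoints (assumed
    linear model) and project whether the state of charge falls below the
    threshold before the next predicted charging time."""
    (t0, s0), (t1, s1) = history[0], history[-1]
    rate = (s1 - s0) / (t1 - t0)            # fraction per hour; negative = discharging
    projected = s1 + rate * next_charge_in_h
    return projected < threshold
```

For example, a device discharging about 10% per hour and not expected to charge for six hours would be projected to dip below a 20% threshold, triggering the advice-to-charge notification.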
Abstract:
The subject technology provides receiving a neural network (NN) model to be executed on a target platform, the NN model including multiple layers that include operations and some of the operations being executable on multiple processors of the target platform. The subject technology further sorts the operations from the multiple layers in a particular order based at least in part on grouping the operations that are executable by a particular processor of the multiple processors. The subject technology determines, based at least in part on a cost of transferring the operations between the multiple processors, an assignment of one of the multiple processors for each of the sorted operations of each of the layers in a manner that minimizes a total cost of executing the operations. Further, for each layer of the NN model, the subject technology includes an annotation to indicate the processor assigned for each of the operations.
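The cost-minimizing assignment over sorted operations can be sketched as a small dynamic program over two processors. The cost values, the two-processor restriction, and the function name are assumptions; the structure (per-operation execution cost plus a transfer cost whenever consecutive operations switch processors, minimized over the whole sequence) follows the abstract.

```python
def assign_processors(op_costs, transfer_cost):
    """op_costs: list of {"CPU": c, "GPU": g} execution costs, one dict per
    sorted operation. Returns (total_cost, assignment) minimizing the sum of
    execution costs plus transfer costs between the processors."""
    procs = ("CPU", "GPU")
    # best[p] = (minimum cost so far ending on processor p, assignment path)
    best = {p: (op_costs[0][p], [p]) for p in procs}
    for costs in op_costs[1:]:
        new = {}
        for p in procs:
            candidates = []
            for q in procs:
                prev_cost, path = best[q]
                hop = 0 if p == q else transfer_cost
                candidates.append((prev_cost + hop + costs[p], path + [p]))
            new[p] = min(candidates, key=lambda t: t[0])
        best = new
    return min(best.values(), key=lambda t: t[0])
```

In the example below, a high transfer cost makes it cheaper to keep the middle operation on the CPU even though both processors execute it equally fast, illustrating why transfer cost must enter the assignment.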
Abstract:
The subject technology transforms a machine learning model into a transformed machine learning model in accordance with a particular model specification when the machine learning model does not conform to the particular model specification, the particular model specification being compatible with an integrated development environment (IDE). The subject technology generates a code interface and code for the transformed machine learning model, the code interface including code statements in the object oriented programming language, the code statements corresponding to an object representing the transformed machine learning model. Further, the subject technology provides the generated code interface and the code for display in the IDE, the IDE enabling modifying of the generated code interface and the code.
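The transform-then-generate flow can be sketched as below. The required keys, default values, and the shape of the generated interface are all assumptions made for illustration; the abstract specifies only that a non-conforming model is transformed to the specification and that a code interface with object-oriented code statements is generated for display in the IDE.

```python
# Assumed minimal model specification: these keys must be present.
REQUIRED_KEYS = {"name", "inputs", "outputs"}

def transform(model: dict) -> dict:
    """Return the model unchanged if it conforms to the specification;
    otherwise produce a transformed copy with missing fields defaulted."""
    if REQUIRED_KEYS <= model.keys():
        return model
    transformed = dict(model)
    for key in REQUIRED_KEYS - model.keys():
        transformed[key] = "Model" if key == "name" else []
    return transformed

def generate_interface(model: dict) -> str:
    """Generate code statements for an object representing the transformed
    model, suitable for display (and editing) in an IDE."""
    m = transform(model)
    params = ", ".join(m["inputs"]) or "x"
    return "\n".join([
        f"class {m['name']}:",
        f"    def predict(self, {params}):",
        "        ...",
    ])
```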