Abstract:
Technologies are generally described for customization of a list of properties associated with media files based at least in part on a user's preferences. In some examples, a method may include receiving, by a server, a plurality of user inputs that respectively identify the user's designated favorites from among a plurality of media files; determining, by the server, the user's preferences from among a plurality of properties associated with the user's designated favorites from among the plurality of media files, based at least in part on the received user inputs; and providing, by the server, the user with a list of the plurality of properties based at least in part on the user's preferences from among the plurality of properties.
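A minimal sketch of the idea in this abstract, with hypothetical names: the server tallies the properties attached to the user's designated favorites and returns the properties ranked by how often they occur. The data layout and ranking rule are assumptions for illustration, not the claimed implementation.

```python
# Illustrative sketch (hypothetical names): rank properties by how often they
# appear among the media files the user has designated as favorites.
from collections import Counter

def rank_properties(favorite_ids, media_properties):
    """favorite_ids: IDs the user marked as favorites.
    media_properties: dict mapping media ID -> list of property strings
    (e.g., genre, artist, bitrate)."""
    counts = Counter()
    for media_id in favorite_ids:
        counts.update(media_properties.get(media_id, []))
    # Properties appearing most often among the favorites come first.
    return [prop for prop, _ in counts.most_common()]

# Example: two favorites that share the "rock" genre rank it highest.
props = {"song1": ["rock", "1990s"], "song2": ["rock", "live"], "song3": ["jazz"]}
print(rank_properties(["song1", "song2"], props))  # ['rock', '1990s', 'live']
```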
Abstract:
Technology is disclosed for preventing classification of objects, e.g., in an augmented reality system. The technology can identify a set of objects to be classified, determine whether context information for one or more objects in the identified set of objects to be classified is identified as not to be employed during classification, and during classification of two different objects, include context information for one object but not the other.
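A minimal sketch of the selective use of context described above, under assumed data structures: context information is attached only to objects that are not flagged as context-excluded, so two otherwise similar objects can be classified with and without context. The classifier interface is hypothetical.

```python
# Illustrative sketch (hypothetical names): include context information for one
# object but not another, depending on an exclusion set.
def classify_objects(objects, context_excluded_ids, classifier):
    """objects: dict of object_id -> {"features": ..., "context": ...}.
    context_excluded_ids: IDs whose context must not be used.
    classifier: callable taking (features, context_or_None)."""
    results = {}
    for obj_id, obj in objects.items():
        context = None if obj_id in context_excluded_ids else obj.get("context")
        results[obj_id] = classifier(obj["features"], context)
    return results

# Toy classifier that simply reports whether context was available to it.
label = lambda feats, ctx: f"{feats}+ctx" if ctx else feats
objs = {"a": {"features": "cup", "context": "kitchen"},
        "b": {"features": "cup", "context": "office"}}
print(classify_objects(objs, {"b"}, label))  # {'a': 'cup+ctx', 'b': 'cup'}
```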
Abstract:
A method performed under control of a mobile device may include receiving at least one probe response from at least one Wi-Fi access point; determining whether a number of the at least one Wi-Fi access point satisfies a predetermined condition; adjusting a signal transmission power of a probe request, when the determining indicates that the number of the at least one Wi-Fi access point does not satisfy the predetermined condition; and transmitting the probe request with the adjusted signal transmission power.
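A minimal sketch of the probe-power loop described above, assuming a hypothetical radio callback: if the number of access points that answered does not satisfy the threshold, the device raises the probe's transmission power and retransmits. The power levels and step size are illustrative assumptions.

```python
# Illustrative sketch (hypothetical send_probe callback): adjust the probe's
# transmission power until enough Wi-Fi access points respond.
def probe_with_adaptive_power(send_probe, min_access_points=3,
                              power_dbm=10, max_power_dbm=20, step_dbm=2):
    """send_probe: callable taking a power level (dBm) and returning the list
    of probe responses received from Wi-Fi access points."""
    responses = send_probe(power_dbm)
    while len(responses) < min_access_points and power_dbm < max_power_dbm:
        power_dbm = min(power_dbm + step_dbm, max_power_dbm)  # adjust the power
        responses = send_probe(power_dbm)                      # retransmit probe
    return responses, power_dbm

# Simulated radio: a higher-power probe reaches more access points.
fake_radio = lambda p: [f"ap{i}" for i in range(p // 5)]
print(probe_with_adaptive_power(fake_radio))  # (['ap0', 'ap1', 'ap2'], 16)
```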
Abstract:
Technologies are generally described to provide alternate user interfaces based on user context. In some examples, a user interface system may measure a user characteristic associated with a particular user interface type. The user interface system may then determine whether the measured user characteristic is suitable for use as a user interface input, for example by comparison with a baseline user characteristic. Upon determination that the measured user characteristic is suitable, the user interface system may use the measured user characteristic for user interface purposes. On the other hand, upon determination that the measured user characteristic is not suitable, the user interface system may use a different user interface type to attempt to receive user inputs.
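A minimal sketch of the fallback decision described above, with voice volume as an assumed example characteristic: the measured value is compared against a baseline, and an alternate interface type is used when the measurement is unsuitable. The characteristic, baseline, and fallback type are illustrative assumptions.

```python
# Illustrative sketch (hypothetical names): compare a measured user
# characteristic with a baseline and fall back to another UI type if unsuitable.
def choose_interface(measure_voice, baseline_volume, fallback="touch"):
    """measure_voice: callable returning the measured voice volume.
    baseline_volume: minimum volume considered usable for voice input."""
    measured = measure_voice()
    if measured >= baseline_volume:       # measured characteristic is suitable
        return "voice", measured
    return fallback, measured             # otherwise use a different UI type

print(choose_interface(lambda: 0.2, baseline_volume=0.5))  # ('touch', 0.2)
print(choose_interface(lambda: 0.8, baseline_volume=0.5))  # ('voice', 0.8)
```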
Abstract:
A system has a first lens including a first polarization filter and a light polarization layer, a second lens including a second polarization filter, and a polarization angle control module coupled to the first lens. The polarization angle control module operatively enables determination of an angle of polarization of the second polarization filter and adjusts an angle of polarization of the light polarization layer such that an image may be viewed when looking through the first lens and the second lens.
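A small worked example of why aligning the layer to the second filter keeps the image visible, using Malus's law (transmitted intensity falls off as the squared cosine of the angle between two linear polarizers). The specific angles and the alignment rule are illustrative assumptions, not the claimed control method.

```python
# Illustrative sketch: by Malus's law, a control module could set the light
# polarization layer's angle to the second filter's measured angle so light
# passes both lenses and the image remains visible.
import math

def transmitted_fraction(layer_angle_deg, filter_angle_deg):
    """Fraction of polarized light passing the second filter (Malus's law)."""
    delta = math.radians(layer_angle_deg - filter_angle_deg)
    return math.cos(delta) ** 2

second_filter_angle = 35.0                 # determined angle of the second filter
layer_angle = second_filter_angle          # adjust the layer to match it
print(transmitted_fraction(layer_angle, second_filter_angle))       # 1.0 (visible)
print(transmitted_fraction(layer_angle + 90, second_filter_angle))  # ~0 (dark)
```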
Abstract:
Technologies are generally described for dynamically distributing a processing load. In some examples, a method performed under control of a server may include receiving information regarding load distribution from an end device and dynamically distributing a processing load between the server and the end device based at least in part on the information regarding load distribution.
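A minimal sketch of the distribution step described above, under the assumption that the end device reports the share of work it can take: the server splits a batch of work items between itself and the device in proportion to that reported capacity. The capacity-fraction format is an illustrative assumption.

```python
# Illustrative sketch (hypothetical names): split a processing load between the
# server and the end device based on reported load-distribution information.
def distribute_load(tasks, device_capacity_fraction):
    """device_capacity_fraction: share of the load (0.0-1.0) the end device
    reports it can handle; the server keeps the remainder."""
    split = int(len(tasks) * max(0.0, min(1.0, device_capacity_fraction)))
    return {"end_device": tasks[:split], "server": tasks[split:]}

print(distribute_load(list(range(10)), 0.3))
# {'end_device': [0, 1, 2], 'server': [3, 4, 5, 6, 7, 8, 9]}
```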
Abstract:
Technologies are generally described for video encoding for real-time streaming based on audio analysis. In one example, a method includes analyzing, by a system comprising a processor, audio data representative of audio content associated with a video comprising video frames. The method also includes selecting a set of the video frames based on a determination that each video frame of the set of the video frames satisfies a defined condition associated with the audio content. Further, the method includes video encoding at least one video frame of the set of the video frames as an intra frame based on the audio analysis.
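A minimal sketch of the frame-selection step described above, with a sharp jump in per-frame audio energy standing in for the defined condition: frames whose accompanying audio energy jumps are marked for intra-frame encoding, while the rest would be encoded predictively. The energy-ratio condition and threshold are illustrative assumptions.

```python
# Illustrative sketch (hypothetical condition): select frames whose audio energy
# jumps sharply and mark them to be encoded as intra frames.
def select_intra_frames(audio_energy_per_frame, jump_threshold=2.0):
    """audio_energy_per_frame: one audio energy value per video frame.
    Returns the indices of frames to encode as intra frames."""
    intra = [0]  # the first frame is conventionally an intra frame
    for i in range(1, len(audio_energy_per_frame)):
        prev, cur = audio_energy_per_frame[i - 1], audio_energy_per_frame[i]
        if prev > 0 and cur / prev >= jump_threshold:  # defined condition met
            intra.append(i)
    return intra

print(select_intra_frames([1.0, 1.1, 1.0, 3.5, 3.4, 0.9, 2.2]))  # [0, 3, 6]
```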