Abstract:
Electronic devices may use touch pads that have touch sensor arrays, force sensors, and actuators for providing tactile feedback. A touch pad may be mounted in a computer housing. The touch pad may have a rectangular planar touch pad member that has a glass layer covered with ink and that contains a capacitive touch sensor array. Force sensors may be mounted under each of the four corners of the rectangular planar touch pad member. The force sensors may be used to measure how much force a user applies to the surface of the planar touch pad member. Processed force sensor signals may indicate the presence of button activity such as press and release events. In response to detected button activity or other activity in the device, actuator drive signals may be generated for controlling the actuator. The user may supply settings to adjust signal processing and tactile feedback parameters.
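The press/release detection described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the threshold values, function names, and the use of simple hysteresis on the summed corner forces are all assumptions.

```python
# Illustrative sketch of button-event detection from four corner force
# sensors. Thresholds and names are hypothetical, not from the abstract.

PRESS_THRESHOLD = 1.5    # newtons; assumed value
RELEASE_THRESHOLD = 0.8  # lower release threshold provides hysteresis

def detect_button_events(samples, pressed=False):
    """samples: iterable of 4-tuples, one force reading per corner.
    Returns a list of ('press'|'release', sample_index) events."""
    events = []
    for i, corners in enumerate(samples):
        total = sum(corners)  # total force applied to the pad surface
        if not pressed and total > PRESS_THRESHOLD:
            pressed = True
            events.append(("press", i))
        elif pressed and total < RELEASE_THRESHOLD:
            pressed = False
            events.append(("release", i))
    return events
```

Using two thresholds rather than one prevents a force reading hovering near a single threshold from generating spurious press/release chatter.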
Abstract:
Methods and apparatuses are disclosed that allow measurement of a user's interaction with the housing of an electronic device. Some embodiments may measure the electrical characteristics of a housing of an electronic device, where the housing is capable of being temporarily deformed by the user's interaction. By measuring the electrical characteristics of the housing, such as the housing's capacitance, the user's interaction with the housing can be measured in a manner that is independent of the user's electrical characteristics and/or in a manner that may allow the pressure applied to the housing to be quantified.
Abstract:
A method and system for displaying images on a transparent display of an electronic device. The display may include one or more display screens as well as a flexible circuit for connecting the display screens with internal circuitry of the electronic device. Furthermore, the display screens may allow images to be overlaid on real-world viewable objects, and may allow a visible window to be present on an otherwise opaque display screen. Additionally, the display may include active and passive display screens that may be utilized based on the images to be displayed.
Abstract:
The present disclosure addresses methods and apparatus facilitating capacitive sensing using a conductive surface, and facilitating the sensing of proximity to the conductive surface. The sensed proximity will often be that of a user but can be another source of a reference voltage potential. In some examples, the described systems are capable of sensing capacitance (including parasitic capacitance) in a circuit that includes the outer conductive surface, where that outer conductive surface is at a floating electrical potential. In some examples, the system can be switched between two operating modes: a first mode in which the system will sense proximity to the conductive surface, and a second mode in which the system will use a capacitance measurement to sense contact with the conductive surface.
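The two-mode operation described above can be sketched as a simple state machine. This is a hedged illustration only: the class, mode names, threshold values, and the rule for switching between modes are assumptions, not details from the disclosure.

```python
# Hypothetical sketch of a sensor that switches between a proximity-sensing
# mode and a capacitance-based contact-sensing mode.

from enum import Enum

class Mode(Enum):
    PROXIMITY = 1   # first mode: sense approach toward the conductive surface
    CONTACT = 2     # second mode: use capacitance to sense contact

class SurfaceSensor:
    PROXIMITY_THRESHOLD = 10.0  # arbitrary units; assumed values
    CONTACT_THRESHOLD = 50.0

    def __init__(self):
        self.mode = Mode.PROXIMITY

    def process(self, reading):
        """Classify one sensor reading and switch modes as needed."""
        if self.mode is Mode.PROXIMITY:
            if reading > self.PROXIMITY_THRESHOLD:
                # Something is near: switch to contact sensing.
                self.mode = Mode.CONTACT
                return "near"
            return "idle"
        if reading > self.CONTACT_THRESHOLD:
            return "contact"
        # Reading fell away: return to proximity sensing.
        self.mode = Mode.PROXIMITY
        return "idle"
```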
Abstract:
Certain embodiments may take the form of a method of operating an electronic device to find other local devices and determine their identities. The method includes transmitting electromagnetic signals from a first electronic device to find devices within a prescribed distance of the first device and receiving electromagnetic response signals from a second electronic device within the prescribed distance from the first electronic device. The method also includes identifying the second electronic device using information received in the electromagnetic response signals. Additionally, the method includes determining if the second electronic device is aware of other electronic devices and, if so, obtaining identifying information of those other devices from the second electronic device.
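The discovery flow described above can be sketched in a few lines. This is an illustrative model, assuming an in-memory stand-in for the electromagnetic exchange; the class and method names are hypothetical.

```python
# Hypothetical sketch of the discovery method: probe nearby devices, then
# ask each responder for the identities of devices it is already aware of.

class Device:
    def __init__(self, identity, known=None):
        self.identity = identity
        self.known = known or []   # identities of other devices it knows

    def respond(self):
        # Stand-in for the electromagnetic response signal.
        return {"identity": self.identity, "knows_others": bool(self.known)}

def discover(nearby_devices):
    """Return the set of identities reachable via direct responses plus
    the transitive identities each responder reports."""
    found = set()
    for dev in nearby_devices:
        resp = dev.respond()
        found.add(resp["identity"])
        if resp["knows_others"]:
            # Transitive step: obtain identifying info of other devices.
            found.update(dev.known)
    return found
```

The transitive step is what lets the first device learn about devices outside its own prescribed range, as long as a responder within range is aware of them.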
Abstract:
A device can receive live video of a real-world, physical environment on a touch sensitive surface. One or more objects can be identified in the live video. An information layer can be generated related to the objects. In some implementations, the information layer can include annotations made by a user through the touch sensitive surface. The information layer and live video can be combined in a display of the device. Data can be received from one or more onboard sensors indicating that the device is in motion. The sensor data can be used to synchronize the live video and the information layer as the perspective of the video camera view changes due to the motion. The live video and information layer can be shared with other devices over a communication link.
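The synchronization step above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name, the annotation representation, and the premise that the sensor data has already been reduced to a pixel offset of the camera view.

```python
# Illustrative sketch: repositioning information-layer annotations so they
# stay aligned with the live video as the camera view moves.

def shift_annotations(annotations, dx, dy):
    """annotations: dict mapping label -> (x, y) screen position.
    dx, dy: pixel offset of the camera view, derived from onboard
    sensor data (assumed already computed upstream).
    Returns annotations shifted opposite to the view's motion."""
    return {label: (x - dx, y - dy) for label, (x, y) in annotations.items()}
```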
Abstract:
A system for enhancing audio including a plurality of sensors, an output device, and a processor in communication with the plurality of sensors and the output device. The processor is configured to process data captured by the plurality of sensors and, based on that data, modify an output of the output device. The processor also is configured to determine whether there are a plurality of users associated with a video conferencing session, determine which user of the plurality of users is speaking, and enhance the audio or video output of the speaking user on the output device.
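The speaker selection and enhancement described above can be sketched simply. This is a hedged illustration, assuming per-user audio energy is already available from the sensors; the function name, boost factor, and gain representation are hypothetical.

```python
# Hypothetical sketch: pick the user with the highest sensed audio energy
# and boost that user's output gain.

def enhance_speaker(levels, gains, boost=2.0):
    """levels: dict user -> sensed audio energy from the sensors.
    gains: dict user -> current output gain.
    Returns (speaking_user, new_gains) with the speaker's gain boosted."""
    speaker = max(levels, key=levels.get)  # loudest user is speaking
    new_gains = {u: (g * boost if u == speaker else g)
                 for u, g in gains.items()}
    return speaker, new_gains
```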