Abstract:
In an example embodiment, an application is executed on a mobile device, causing generation of application graphical output in an application layer. The application layer is rendered on a touchscreen display. A first gesture by a user is detected, and in response a viewfinder layer is generated, the viewfinder layer containing real-time image data from an embedded camera via a camera service. A composite of the application layer, the viewfinder layer, and a transparency mask is rendered so that a portion of the viewfinder layer and a portion of the application layer are visible on the touchscreen display at the same time. A second gesture by the user is then detected, and in response, viewfinder data from the viewfinder layer is captured and the viewfinder layer and transparency mask are removed.
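A minimal sketch of the compositing step described above, using NumPy arrays as stand-ins for the application layer, the viewfinder layer, and the transparency mask. The screen dimensions, the mask region, and the alpha-blend rule are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

H, W = 480, 320  # hypothetical screen size in pixels

app_layer = np.full((H, W, 3), 200, dtype=np.float32)  # app graphical output
viewfinder = np.zeros((H, W, 3), dtype=np.float32)     # real-time camera frame
viewfinder[:, :, 1] = 255.0                            # placeholder image data

# Transparency mask: 1.0 where the viewfinder shows through, 0.0 where the
# application layer stays visible. Here the top half shows the viewfinder.
mask = np.zeros((H, W, 1), dtype=np.float32)
mask[: H // 2] = 1.0

def composite(app, vf, alpha):
    """Blend so portions of both layers are visible at the same time."""
    return alpha * vf + (1.0 - alpha) * app

frame = composite(app_layer, viewfinder, mask)  # rendered to the touchscreen
assert frame.shape == (H, W, 3)
```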
Abstract:
A method comprising receiving sensor data from a sensor, obtaining image data from a graphical effects shader based on the sensor data, blending the image data with a plurality of application surfaces to create a blended image, and transmitting the blended image to a display. The method may further comprise blending a color image with the blended image in response to a reduction in ambient light. Also disclosed is a mobile node (MN) comprising a sensor configured to generate sensor data, a display device, and a processor coupled to the sensor and the display device, wherein the processor is configured to receive the sensor data, obtain image data generated by a graphical effects shader based on the sensor data, blend the image data with an application surface associated with a plurality of applications to create a blended image, and transmit the blended image to the display device.
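A minimal sketch of the blending pipeline: sensor data parameterizes the effect image, which is blended over the stacked application surfaces, with an additional color blend when ambient light drops. The lux threshold, the warm color overlay, and the blend weights are illustrative assumptions.

```python
import numpy as np

H, W = 240, 320

def effect_from_sensor(light_lux: float) -> np.ndarray:
    """Stand-in for the graphical effects shader: dimmer effect in low light."""
    intensity = min(light_lux / 1000.0, 1.0)
    return np.full((H, W, 3), 255.0 * intensity, dtype=np.float32)

def blend(a, b, alpha=0.5):
    """Simple alpha blend of two images."""
    return alpha * a + (1.0 - alpha) * b

# Three hypothetical application surfaces.
surfaces = [np.random.rand(H, W, 3).astype(np.float32) * 255 for _ in range(3)]

light_lux = 80.0                       # hypothetical ambient-light reading
image = effect_from_sensor(light_lux)  # image data from the effects shader

blended = surfaces[0]
for s in surfaces[1:]:
    blended = blend(blended, s)        # flatten the application surfaces
blended = blend(image, blended)        # blend effect image over them

if light_lux < 100.0:                  # reduction in ambient light
    warm = np.zeros_like(blended)
    warm[..., 0] = 255.0               # hypothetical warm color image
    blended = blend(warm, blended, alpha=0.2)
# `blended` would then be transmitted to the display.
```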
Abstract:
In an example embodiment, a received JPEG image compression format image includes one or more minimum coded units (MCUs). Each MCU is decoded using an image compression format decoder. Each decoded MCU is then split into multiple decoded subblocks. Each decoded subblock can then be encoded into a texture compression format using a texture compression format encoder. Each encoded texture compression format subblock can then be passed to a graphics processing unit (GPU) for processing.
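A minimal sketch of the decode / split / encode flow. The 16x16 MCU size, the 4x4 texture-block size, and the stub codecs are assumptions chosen to illustrate the data movement, not a real JPEG decoder or texture encoder.

```python
import numpy as np

MCU = 16    # hypothetical minimum coded unit edge, in pixels
BLOCK = 4   # hypothetical texture-format block edge (e.g. 4x4 texels)

def decode_mcu(mcu_bytes: bytes) -> np.ndarray:
    """Stand-in for the image compression format decoder."""
    return np.frombuffer(mcu_bytes, dtype=np.uint8)[: MCU * MCU].reshape(MCU, MCU)

def split_subblocks(pixels: np.ndarray):
    """Split one decoded MCU into BLOCK x BLOCK decoded subblocks."""
    for y in range(0, MCU, BLOCK):
        for x in range(0, MCU, BLOCK):
            yield pixels[y : y + BLOCK, x : x + BLOCK]

def encode_texture_block(sub: np.ndarray) -> bytes:
    """Stand-in for the texture compression format encoder."""
    return sub.tobytes()  # a real encoder emits a fixed-size compressed block

mcu_stream = [bytes(range(256))]              # one fake coded MCU
for mcu in mcu_stream:
    for sub in split_subblocks(decode_mcu(mcu)):
        gpu_block = encode_texture_block(sub)  # passed to the GPU
```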
Abstract:
A processing system with multiple processors is dynamically switchable between two modes of operation: symmetrical multi-processing (SMP) and asymmetrical multi-processing (ASMP). The system uses certain criteria to determine when to switch in order to reduce power consumption or improve performance. A controller enables control of, and fast switching between, the two modes. Upon receipt of a command to switch between SMP and ASMP, a sequence of actions is performed to control the voltage supplies and CPU/memory clocks to the multiple processors and cache memory.
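A minimal sketch of the mode-switch sequencing. The action names and their order are illustrative assumptions about what controlling voltage supplies and CPU/memory clocks could involve; real sequencing is SoC-specific.

```python
from enum import Enum

class Mode(Enum):
    SMP = "symmetrical"
    ASMP = "asymmetrical"

class SwitchController:
    """Hypothetical controller enabling fast switching between the modes."""

    def __init__(self):
        self.mode = Mode.SMP

    def _apply(self, step: str):
        print(f"[seq] {step}")  # stand-in for hardware register writes

    def switch(self, target: Mode):
        if target == self.mode:
            return
        # Ordered sequence: quiesce, retune supplies and clocks, resume.
        self._apply("pause scheduling on affected CPUs")
        self._apply(f"program per-CPU voltage supplies for {target.name}")
        self._apply(f"reprogram CPU and cache-memory clocks for {target.name}")
        self._apply("synchronize shared cache state")
        self._apply("resume scheduling")
        self.mode = target

ctrl = SwitchController()
# Hypothetical criterion: sustained parallel load favors SMP, bursty load ASMP.
ctrl.switch(Mode.ASMP)
```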
Abstract:
Various disclosed embodiments include methods and systems that automatically adjust camera functions of an electronic device to provide improved image quality by determining whether the electronic device is being operated indoors or outdoors.
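A minimal sketch of one way an indoor/outdoor determination could drive camera settings. The lux threshold, the use of a satellite-fix flag, and the preset values are illustrative assumptions; the abstract does not specify how the determination is made.

```python
def is_outdoors(ambient_lux: float, has_gnss_fix: bool) -> bool:
    """Classify the operating environment from simple sensor cues."""
    # Direct sunlight typically exceeds 10,000 lux; indoor lighting is far lower.
    return ambient_lux > 2000.0 or has_gnss_fix

def camera_settings(ambient_lux: float, has_gnss_fix: bool) -> dict:
    """Adjust camera functions based on the indoor/outdoor decision."""
    if is_outdoors(ambient_lux, has_gnss_fix):
        return {"white_balance": "daylight", "iso": 100, "hdr": True}
    return {"white_balance": "auto", "iso": 400, "hdr": False}

print(camera_settings(ambient_lux=15000.0, has_gnss_fix=True))
# -> {'white_balance': 'daylight', 'iso': 100, 'hdr': True}
```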
Abstract:
A method includes dividing a display of an electronic device into multiple regions defined by vertices, calculating time-varying positions for each vertex relative to a z dimension, and composing a screen for the display that incorporates the time-varying positions of each vertex to create an animated display distortion. The distortion may occur in association with haptic effects.
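A minimal sketch of the grid-of-vertices distortion. The grid dimensions and the sinusoidal ripple are illustrative assumptions; any time-varying z displacement per vertex would fit the described method.

```python
import math

COLS, ROWS = 8, 12  # display divided into regions defined by vertices

def vertex_z(col: int, row: int, t: float) -> float:
    """Time-varying z displacement for one vertex (a radial ripple)."""
    cx, cy = COLS / 2, ROWS / 2
    r = math.hypot(col - cx, row - cy)
    return 0.1 * math.sin(2.0 * math.pi * (r - 4.0 * t))

def compose_frame(t: float):
    """Vertex positions the compositor would use when composing the screen."""
    return [
        (c, r, vertex_z(c, r, t))
        for r in range(ROWS + 1)
        for c in range(COLS + 1)
    ]

frame = compose_frame(t=0.25)  # e.g. sampled in step with a haptic effect
```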