Abstract:
Systems, apparatuses, and methods may provide for visually or audibly indicating to users which areas are covered or monitored by cameras, microphones, motion sensors, capacitive surfaces, or other sensors. Indicators such as projectors, audio output devices, ambient lighting, haptic feedback devices, and augmented reality may convey the coverage areas in response to a query from a user.
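The query-and-indicate flow described above can be sketched as a lookup from a queried zone to the sensors covering it; the result would then drive whichever indicator is available. The data layout and names below are illustrative assumptions, not part of the abstract.

```python
def coverage_for_zone(sensors, zone):
    """Answer a user's query: which sensors cover the given zone?
    The returned names would drive an indicator (projector, ambient
    lighting, audio cue) that highlights the covered area.
    (Data layout and names are assumptions for illustration.)"""
    return [name for name, (kind, zones) in sensors.items() if zone in zones]

# Hypothetical sensor registry: name -> (sensor type, zones it covers).
sensors = {
    "cam-1": ("camera", {"kitchen", "hall"}),
    "mic-1": ("microphone", {"hall"}),
    "motion-1": ("motion", {"garage"}),
}

print(coverage_for_zone(sensors, "hall"))   # sensors monitoring the hall
print(coverage_for_zone(sensors, "attic"))  # no coverage -> empty list
```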
Abstract:
In one example, a projection device includes a first light source to provide visible optical radiation, a second light source to provide invisible optical radiation, a projection mechanism, and a depth receiver. The projection device further includes a processor to cause the projection mechanism to project each of a group of pixels in a frame of an image using optical radiation provided by both the first light source and the second light source.
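The per-pixel combination of the two light sources can be sketched as follows; the per-pixel intensity functions and the frame representation are hypothetical stand-ins for the optical hardware, not an implementation of the device.

```python
def project_frame(frame, visible, invisible):
    """Project each pixel of a frame using radiation from BOTH sources,
    as the processor directs the projection mechanism to do.
    (A minimal sketch: `visible` and `invisible` are hypothetical
    per-pixel intensity functions standing in for the light sources.)"""
    return [[(visible(px), invisible(px)) for px in row] for row in frame]

# A 1x2 frame of brightness values; here the invisible (e.g. IR)
# channel is emitted at half intensity for use by the depth receiver.
out = project_frame([[10, 40]], lambda p: p, lambda p: p // 2)
print(out)
```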
Abstract:
Techniques to patch a shader program after the shader has been compiled and/or while the shader is in an execution pipeline are described. The shader may be patched based on references to global constants in a global constant buffer. For example, a reference to the global constant buffer may be patched with the value of the global constant, or conditional statements based on references to the global constant buffer may be replaced with unconditional statements based on the value of the global constant, to optimize the shader or increase its computational efficiency.
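The two patching steps described above, splicing in constant values and folding the resulting constant conditionals, can be sketched on shader text. The `gcb.<NAME>` naming convention and the text-substitution approach are assumptions for illustration; a real patcher would operate on compiled shader code.

```python
def patch_shader(source, constants):
    """Sketch of post-compile shader patching: replace references to the
    global constant buffer (assumed here to look like `gcb.<NAME>`) with
    literal values, then fold guards that became constant into
    unconditional statements."""
    for name, value in constants.items():
        source = source.replace(f"gcb.{name}", str(value))
    # A constant-true guard becomes unconditional...
    source = source.replace("if (1) ", "")
    # ...and a constant-false guard and its statement are removed.
    while "if (0) " in source:
        start = source.index("if (0) ")
        end = source.index(";", start) + 1
        source = source[:start] + source[end:]
    return source

print(patch_shader("if (gcb.USE_FOG) apply_fog(); shade();", {"USE_FOG": 1}))
print(patch_shader("if (gcb.USE_FOG) apply_fog(); shade();", {"USE_FOG": 0}))
```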
Abstract:
Various embodiments are generally directed to an apparatus, method, and other techniques for monitoring a task of a graphics processing unit (GPU) by a graphics driver, determining whether the task is complete, determining an average task completion time for the task if it is not complete, and enabling a sleep state for a processing circuit for a sleep state time if the average task completion time is greater than the sleep state time.
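The decision logic reads as a simple comparison: sleep only pays off when the task is expected to run longer than the sleep-state time. A minimal sketch, with illustrative function and parameter names:

```python
def average_completion_time(samples):
    # Mean of previously observed completion times for this GPU task.
    return sum(samples) / len(samples)

def should_sleep(task_complete, samples, sleep_state_time):
    """Graphics-driver decision sketch: if the task is still running and
    its average completion time exceeds the sleep-state time, the
    processing circuit can enter a sleep state instead of busy-polling.
    (Names and units are assumptions for illustration.)"""
    if task_complete:
        return False
    return average_completion_time(samples) > sleep_state_time

print(should_sleep(False, [3.0, 5.0], 2.0))  # long task -> sleep
print(should_sleep(False, [1.0], 2.0))       # short task -> keep polling
```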
Abstract:
Various systems and methods for implementing automatic image metatagging are described herein. A system for metatagging media content comprises a camera system; a user input module to receive user input from a user to capture media content via the camera system; a camera control module to: activate the camera system to capture a scene, and obtain an image with at least a portion of a face of the user; a user identification module to identify the user based on the image; and a metatagging module to tag the scene with the user as an author of the scene.
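The capture-identify-tag pipeline can be sketched end to end. The stub camera class, the method names, and the identifier function are illustrative assumptions standing in for the camera system and the user identification module.

```python
class StubCamera:
    """Illustrative stand-in for the camera system: one call returns the
    media content, another returns an image containing at least a
    portion of the user's face. (Names are assumptions.)"""
    def capture_scene(self):
        return {"content": "sunset.jpg"}
    def capture_user_image(self):
        return "face-pixels"

def metatag(camera, identify_user):
    # Activate the camera, obtain the user-facing image, identify the
    # user from it, and tag the scene with the user as its author.
    scene = camera.capture_scene()
    user = identify_user(camera.capture_user_image())
    scene["author"] = user
    return scene

# Hypothetical identification module that recognizes the face image.
tagged = metatag(StubCamera(), lambda image: "alice")
print(tagged)
```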
Abstract:
Systems and methods may provide for identifying an amount of time associated with a user-based activity with respect to a battery-powered device, and determining a battery drain rate of the battery-powered device. An indicator of whether the user-based activity can be completed in the amount of time may be generated based on the battery drain rate.
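The indicator reduces to comparing the activity's required time against the battery's remaining runtime at the measured drain rate. A minimal sketch, with assumed units of minutes and percent:

```python
def can_complete(required_minutes, battery_pct, drain_pct_per_minute):
    """Indicate whether the user-based activity can finish before the
    battery drains, given the measured drain rate.
    (Names and units are assumptions for illustration.)"""
    if drain_pct_per_minute <= 0:
        return True  # battery level is flat or rising (e.g. charging)
    remaining_minutes = battery_pct / drain_pct_per_minute
    return remaining_minutes >= required_minutes

print(can_complete(30, 50, 1.0))  # 50 min of charge for a 30 min task
print(can_complete(60, 50, 1.0))  # 50 min of charge for a 60 min task
```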