Abstract:
Systems and methods may be implemented to prevent application crashes by correlating a history of operating system (OS) updates with occurrence of past client application crashes using information that is crowd-sourced from multiple information handling systems, so that action/s may be taken to prevent occurrence of future client application crashes on the information handling system/s. Machine learning (e.g., deep learning) may be employed to automatically correlate the history of OS updates with a record of past client application crashes that have occurred on multiple client information handling systems, and the likely root cause/s for the client application crashes may then be identified based on this correlation. These likely root cause/s may be corrected or otherwise addressed, e.g., by further investigation into the details of the root cause, and/or by user or automatic system action to remove or block the root cause.
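A minimal sketch of the crowd-sourced correlation idea, assuming per-system crash reports keyed by installed OS update IDs (the record format, the `rank_suspect_updates` name, and the crash-rate-lift scoring are all illustrative assumptions, not the patented method):

```python
from collections import defaultdict

def rank_suspect_updates(reports):
    """Rank OS updates by how strongly they co-occur with client app crashes.

    `reports` is a list of per-system records (assumed format), each a dict:
      "updates": set of OS update IDs installed on that system
      "crashed": True if the client application crashed afterward
    Returns update IDs sorted by crash-rate lift (most suspect first).
    """
    with_update = defaultdict(lambda: [0, 0])     # update -> [crashes, systems]
    without_update = defaultdict(lambda: [0, 0])
    all_updates = set()
    for r in reports:
        all_updates |= r["updates"]
    for r in reports:
        for u in all_updates:
            bucket = with_update if u in r["updates"] else without_update
            bucket[u][0] += 1 if r["crashed"] else 0
            bucket[u][1] += 1

    def lift(u):
        # crash rate among systems with the update minus rate among systems without it
        c_w, n_w = with_update[u]
        c_wo, n_wo = without_update[u]
        rate_w = c_w / n_w if n_w else 0.0
        rate_wo = c_wo / n_wo if n_wo else 0.0
        return rate_w - rate_wo

    return sorted(all_updates, key=lift, reverse=True)
```

In practice a deep learning model could replace this simple lift statistic, but the input/output shape (crowd-sourced crash and update histories in, ranked root-cause candidates out) stays the same.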
Abstract:
Visual images projected on a projection surface by a projector provide an interactive user interface having end user inputs detected by a detection device, such as a depth camera. The detection device monitors projected images initiated in response to user inputs to determine calibration deviations, such as by comparing the distance between where a user makes an input and where the input is projected. Calibration is performed to align the projected outputs and detected inputs. The calibration may include a coordinate system anchored by its origin to a physical reference point of the projection surface, such as a display mat or desktop edge.
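The calibration described above can be sketched in a few lines, assuming 2D points in a common camera frame (the function names and the averaged-offset correction are illustrative assumptions):

```python
def calibrate_offset(touch_points, projected_points):
    """Average (dx, dy) deviation between where the user made each input
    and where the corresponding image element was projected."""
    n = len(touch_points)
    dx = sum(t[0] - p[0] for t, p in zip(touch_points, projected_points)) / n
    dy = sum(t[1] - p[1] for t, p in zip(touch_points, projected_points)) / n
    return dx, dy

def to_surface_coords(point, origin):
    """Express a camera-frame point in a coordinate system anchored by its
    origin to a physical reference of the projection surface, such as a
    display mat corner or desktop edge."""
    return point[0] - origin[0], point[1] - origin[1]
```

The offset returned by `calibrate_offset` would be applied to future projections so detected inputs and projected outputs align.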
Abstract:
Inputs to a projected or other type of displayed user interface are verified with a verification device that enhances input detection accuracy. For example, inputs at a projected keyboard are detected by an infrared curtain projected over the keyboard and breached by an end user finger as it strikes a key. The inputs are verified with a capacitive sensor device disposed below the keyboard that confirms a user touch. Alternatively, proximity sensing by the capacitive sensor measures distance and velocity associated with a finger to verify the inputs detected by breaching of the infrared curtain are intended inputs. Other verification devices may include an accelerometer that detects accelerations associated with inputs and three dimensional cameras that capture finger positions. Verification devices may be selectively enabled based upon power and accuracy constraints.
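The two-sensor verification logic might look like the following sketch, where the boolean inputs, the `v_min` velocity threshold, and the function name are assumptions for illustration:

```python
def verify_input(ir_breach, cap_touch, approach_velocity, v_min=0.2):
    """Accept a keystroke only when the infrared-curtain breach is
    corroborated by a second (verification) device: either a capacitive
    touch below the keyboard, or a finger approach velocity above the
    assumed threshold `v_min` measured by capacitive proximity sensing."""
    if not ir_breach:
        return False  # no curtain breach, nothing to verify
    return cap_touch or approach_velocity >= v_min
```

An accelerometer or three-dimensional camera could substitute for the capacitive branch; the structure (primary detection gated by a verifier) is the same.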
Abstract:
Inputs to a projected or other type of displayed user interface are filtered at different portions of the displayed visual images to provide a user-defined input management. For example, a user defines a portion of a desktop so that touch inputs have a first effect while the other portion of the desktop has a second effect, thus allowing the user to manage the risk of inadvertent inputs in the defined portion relative to other desktop regions. In one embodiment, an icon activates and deactivates touch input filtering in a region defined by dragging the icon around the user interface. The defined region is depicted with an identifying visual image, such as a coloration or shading that distinguishes the regions as having a response to touch inputs different from that of other regions.
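A sketch of region-based touch filtering, assuming rectangular user-defined regions and a per-region active flag toggled by the filter icon (class and function names are illustrative):

```python
class TouchFilterRegion:
    """Rectangular portion of the displayed desktop in which touch inputs
    get a different (here: suppressed) response than other regions."""
    def __init__(self, x, y, w, h, active=True):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.active = active  # toggled when the user activates/deactivates the icon

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def accept_touch(regions, px, py):
    """Drop a touch that lands inside any active filter region."""
    return not any(r.active and r.contains(px, py) for r in regions)
```

The renderer would shade or colorize each active region so the user can see where touch responses differ.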
Abstract:
Desktop surface references are selected and applied to define a coordinate system for calibrating projected visual images and end user inputs at the projected visual images. For example, a desktop edge is detected with the depth camera by the increase in detected distance along the axis from the depth camera to the desktop edge, and then the desktop edge is used as an origin for a coordinate system that defines a projection area for presenting a user interface. Monitoring end user inputs and projected outputs relative to the desktop edge aids in coordinating interactions by a user through the projected user interface in the event the camera or projector moves relative to the desktop surface.
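The edge-detection step can be sketched over a one-dimensional depth profile taken along the camera axis; the `jump_threshold` value (in meters) and function name are assumed for illustration:

```python
def find_desktop_edge(depth_profile, jump_threshold=0.3):
    """Index of the desktop edge along one camera axis: the first sample
    where the measured distance increases sharply, i.e. where depth
    'falls off' past the desktop surface. Returns None if no edge found."""
    for i in range(1, len(depth_profile)):
        if depth_profile[i] - depth_profile[i - 1] > jump_threshold:
            return i
    return None
```

The returned index (together with a second profile along the orthogonal axis) would anchor the origin of the projection-area coordinate system.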
Abstract:
Systems and methods are provided that may be implemented to secure a publicly-hosted web application so that it will render only within the determined context of a trusted client application. Such an authentication decision may be made, for example, using front-end web application code that is rendered in a client web view together with client application code to authenticate the client application context in which the web page is rendered. In this way, the web application may validate that it is being rendered in the context of a trusted and/or well-known client application rendering engine/environment.
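One way to realize such an authentication decision is a challenge-response between the front-end web application code and the hosting client application; the shared-key HMAC scheme and function names below are an illustrative assumption, not the claimed mechanism:

```python
import hashlib
import hmac

def client_context_token(shared_key: bytes, challenge: bytes) -> str:
    """Response the trusted client application computes over a challenge
    issued by the web application's front-end code in the web view."""
    return hmac.new(shared_key, challenge, hashlib.sha256).hexdigest()

def is_trusted_context(shared_key: bytes, challenge: bytes, response: str) -> bool:
    """Front-end check: permit rendering only if the hosting client
    application proves knowledge of the shared key."""
    expected = client_context_token(shared_key, challenge)
    return hmac.compare_digest(expected, response)
```

If the check fails, the web application would refuse to render, since it is not running inside a trusted client application rendering engine/environment.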
Abstract:
A non-linear user interface display presented at a desktop conforms to dimensions of a user detected by a depth camera, such as by presenting the user interface along an arc having a radius determined from a reach of the user detected by the depth camera. Windows presented in the arc vary in size based upon their position relative to a user focus, such as by detecting a user gaze direction or by comparing position to a central display mat. User gestures control presentation of visual images in the arc, such as rotating visual image windows in a circular manner around the arc radius and to different orientations in the arc relative to the user.
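A sketch of the arc layout, assuming the user's reach sets the arc radius, windows span an assumed 120-degree arc, and window size falls off with distance from the focused window (all names and constants are illustrative):

```python
import math

def arc_layout(reach, n_windows, focus_index, base_size=1.0):
    """Positions and sizes for windows on an arc whose radius is the
    user's detected reach. Windows shrink with distance from the
    window the user is focused on (e.g. per detected gaze direction)."""
    radius = reach
    placements = []
    for i in range(n_windows):
        # spread windows over an assumed 120-degree arc in front of the user
        angle = math.radians(-60 + 120 * i / max(n_windows - 1, 1))
        x, y = radius * math.sin(angle), radius * math.cos(angle)
        size = base_size / (1 + abs(i - focus_index))
        placements.append((x, y, size))
    return placements
```

A rotation gesture would simply shift each window's index (and thus its angle) around the arc while this layout is recomputed.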
Abstract:
A single camera detects user interactions with separate infrared light sources, such as structured light for three dimensional detection and an infrared curtain for surface touch interactions. A touch input module monitors user interactions with a desktop surface, such as at a projected or display mat user interface presentation, and selectively illuminates the separate infrared light sources to support camera detection based upon the user interactions. For example, the separate infrared light sources are illuminated in an interleaved manner at variable rates based upon analysis of visible light or structured light images captured by the camera.
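The interleaving might be scheduled as in the sketch below, where the curtain is strobed more often when image analysis suggests a hand is near the surface; the frame-based scheduler, rates, and labels are assumed for illustration:

```python
def illumination_schedule(frames, touch_likely):
    """Per-frame IR source choice for a single camera: interleave
    structured light (3D tracking) with the infrared curtain (surface
    touch), weighting the curtain more heavily when analysis of recent
    captured images suggests the user's hand is near the surface."""
    curtain_every = 2 if touch_likely else 4  # assumed interleave rates
    return ["curtain" if f % curtain_every == 0 else "structured"
            for f in range(frames)]
```

The touch input module would recompute `touch_likely` continuously from the visible light or structured light frames, so the interleave rate varies over time.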