Abstract:
Embodiments of systems and methods for providing a graphical user interface (GUI) for controlling virtual workspaces produced across Information Handling Systems (IHSs) are described. An IHS may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: establish a virtual workspace across a screen of the IHS and a second screen of a second IHS, at least in part, through a backend IHS; and provide a virtual workspace interface on the screen, where the virtual workspace interface comprises a first graphical representation of the IHS and a second graphical representation of the second IHS, and where in response to a user dragging a first instance of an application window displayed on the screen to the second graphical representation of the second IHS, a second instance of the application window is rendered on the second screen.
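The drag-to-peer behavior described above can be sketched in Python. This is an illustrative model only, not the patented implementation; all class and method names (`BackendIHS`, `VirtualWorkspaceInterface`, `on_drop`) are invented for the example.

```python
class BackendIHS:
    """Stands in for the backend IHS that links the two workspaces."""
    def __init__(self):
        self.rendered = []  # (window, target_ihs) pairs rendered remotely

    def render_instance(self, window, target_ihs):
        # Ask the peer IHS to render a second instance of the window.
        self.rendered.append((window, target_ihs))


class VirtualWorkspaceInterface:
    """Workspace UI holding graphical representations of peer IHSs."""
    def __init__(self, backend, peer_representations):
        self.backend = backend
        # Maps an on-screen graphical representation to a peer IHS id.
        self.peers = peer_representations

    def on_drop(self, window, drop_target):
        """Dropping a window onto a peer's representation renders a
        second instance of that window on the peer's screen."""
        peer = self.peers.get(drop_target)
        if peer is None:
            return False  # dropped somewhere other than a peer icon
        self.backend.render_instance(window, peer)
        return True
```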
Abstract:
In one or more embodiments, one or more systems, methods, and/or processes may: determine if the user is utilizing a previously utilized workspace configuration; if the user is utilizing the previously utilized workspace configuration, display multiple windows respectively associated with multiple applications; and if the user is not utilizing the previously utilized workspace configuration: determine hardware resources of a current workspace configuration; modify the workspace configuration data based at least on the hardware resources of the current workspace configuration; map the multiple windows respectively associated with the multiple applications to multiple displays of the current workspace configuration based at least on the workspace configuration data; adjust a resolution of a window of the multiple windows based at least on a resolution of a display of the multiple displays that will display the window; and translate a saved position of the window to a position associated with the display.
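The restoration flow above can be sketched as a small Python routine. The data model (`Display`, `Window`, `restore_workspace`) is a hypothetical simplification assumed for illustration; it maps saved windows to the current displays and scales size and position when the hardware differs from the saved configuration.

```python
from dataclasses import dataclass


@dataclass
class Display:
    width: int
    height: int


@dataclass
class Window:
    app: str
    display_index: int
    x: int
    y: int
    width: int
    height: int


def restore_workspace(saved_windows, saved_displays, current_displays):
    """Map saved windows onto the current displays, adjusting resolution
    and translating saved positions when the hardware differs."""
    if current_displays == saved_displays:
        return saved_windows  # same configuration: restore windows as-is
    restored = []
    for w in saved_windows:
        # Map the window to a display that exists in the current config.
        idx = min(w.display_index, len(current_displays) - 1)
        old = saved_displays[w.display_index]
        new = current_displays[idx]
        # Scale size and translate position proportionally to the new
        # display's resolution.
        sx, sy = new.width / old.width, new.height / old.height
        restored.append(Window(w.app, idx,
                               round(w.x * sx), round(w.y * sy),
                               round(w.width * sx), round(w.height * sy)))
    return restored
```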
Abstract:
Embodiments of a foldable case for a multi-form factor IHS with a detachable keyboard are described. In some embodiments, a folio case may include: a first panel comprising a left-side magnet and a right-side magnet; a second panel comprising a left-side magnet and a right-side magnet, where a top edge of the second panel is coupled to a bottom edge of the first panel; and a third panel comprising a left-side magnet and a right-side magnet, where a top edge of the third panel is coupled to a bottom edge of the second panel.
Abstract:
Projected input and output devices adapt to a desktop environment by sensing objects at the desktop environment and altering projected light in response to the sensed objects. For instance, projection of input and output devices is altered to limit illumination against an end user's hands or other objects disposed at a projection surface. End user hand positions and motions are detected to provide gesture support for adapting a projection work space, and configurations of projected devices are stored so that an end user can rapidly recreate a projected desktop. A projector scan adjusts to limit traces across inactive portions of the display surface and to increase traces at predetermined areas, such as video windows.
Abstract:
An information handling system touchscreen discriminates touches with a tool discriminator before analyzing the touches with a touch discriminator that identifies touches as intended or unintended inputs. The tool discriminator isolates touches associated with tools to assign tool functions to tool touches, so that touch discriminator analysis is bypassed for tools, thus providing a faster and more accurate horizontal workspace having tools placed on the touchscreen.
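The two-stage pipeline above can be sketched as follows. The tool signatures, touch fields, and area threshold are invented for the example; the point is the ordering, in which touches matching a tool signature bypass the intended/unintended touch analysis.

```python
# Hypothetical tool signatures keyed on a distinctive contact pattern.
TOOL_SIGNATURES = {
    "ruler": {"contact_points": 3},
    "knob":  {"contact_points": 4},
}


def discriminate(touches):
    """Return (tool_touches, intended, unintended) for touch dicts like
    {"id": 1, "contact_points": 3, "area_mm2": 40}."""
    tools, remaining = [], []
    for t in touches:
        # Tool discriminator runs first: a matching signature assigns a
        # tool function and bypasses the touch discriminator entirely.
        tool = next((name for name, sig in TOOL_SIGNATURES.items()
                     if sig["contact_points"] == t["contact_points"]), None)
        if tool:
            tools.append((t["id"], tool))
        else:
            remaining.append(t)
    # Touch discriminator: large contact areas (e.g. a resting palm) are
    # classified as unintended input.
    intended = [t["id"] for t in remaining if t["area_mm2"] < 100]
    unintended = [t["id"] for t in remaining if t["area_mm2"] >= 100]
    return tools, intended, unintended
```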
Abstract:
A non-linear user interface display presented at a desktop conforms to dimensions of a user detected by a depth camera, such as by presenting the user interface along an arc having a radius determined from a reach of the user detected by the depth camera. Windows presented in the arc vary in size based upon their position relative to a user focus, such as by detecting a user gaze direction or by comparing position to a central display mat. User gestures control presentation of visual images in the arc, such as rotating visual image windows in a circular manner around the arc radius and to different orientations in the arc relative to the user.
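The arc layout can be sketched as below. The radius factor, spread angle, and scaling curve are assumptions invented for the example: the arc radius is derived from the detected reach, and windows shrink as their angular distance from the user's focus grows.

```python
import math


def arc_layout(reach_mm, n_windows, focus_angle_deg, spread_deg=120):
    """Return an (x, y, scale) tuple per window. The arc is centered on
    the user; windows nearer the focus direction render larger."""
    radius = 0.8 * reach_mm  # keep windows comfortably within reach
    positions = []
    for i in range(n_windows):
        # Spread the windows evenly across the arc.
        angle = -spread_deg / 2 + i * spread_deg / max(n_windows - 1, 1)
        x = radius * math.sin(math.radians(angle))
        y = radius * math.cos(math.radians(angle))
        # Shrink windows as they move away from the user's focus angle.
        off = abs(angle - focus_angle_deg)
        scale = max(0.5, 1.0 - off / spread_deg)
        positions.append((round(x, 1), round(y, 1), round(scale, 2)))
    return positions
```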
Abstract:
A totem device accepts inputs from an end user and communicates the inputs to an information handling system through a capacitive mat by moving an interactive portion of the totem relative to a main portion. In one embodiment, the capacitive mat integrates a display that presents input images proximate the totem, such as a volume gauge or mouse keys. Interactive portions include compressible materials that alter the surface area pressed against the capacitive mat when pressed upon, and plungers that move independently of a main body to press. In one embodiment, a camera captures images of the totem to help distinguish inputs.
Abstract:
Visual images projected on a projection surface by a projector provide an interactive user interface having end user inputs detected by a detection device, such as a depth camera. The detection device monitors projected images initiated in response to user inputs to determine calibration deviations, such as by comparing the distance between where a user makes an input and where the input is projected. Calibration is performed to align the projected outputs and detected inputs. The calibration may include a coordinate system anchored by its origin to a physical reference point of the projection surface, such as a display mat or desktop edge.
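The calibration step described above can be sketched as a small correction routine. This is an illustrative assumption about the mechanics, not the patented method: each sample pairs where the user actually made an input with where the responsive image was projected, and the average offset is folded into a coordinate origin anchored to a physical reference point such as a display mat corner.

```python
def calibrate(samples, origin):
    """samples: list of ((touch_x, touch_y), (projected_x, projected_y))
    pairs; returns a corrected origin for the projection coordinates."""
    if not samples:
        return origin  # nothing observed yet: keep the current origin
    # Average deviation between detected inputs and projected outputs.
    dx = sum(t[0] - p[0] for t, p in samples) / len(samples)
    dy = sum(t[1] - p[1] for t, p in samples) / len(samples)
    # Shift the origin so projected output aligns with detected input.
    return (origin[0] + dx, origin[1] + dy)
```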