Abstract:
In some embodiments, an electronic device optionally identifies a person's face, and optionally performs an action in accordance with the identification. In some embodiments, an electronic device optionally determines a gaze location in a user interface, and optionally performs an action in accordance with the determination. In some embodiments, an electronic device optionally designates a user as being present at a sound-playback device in accordance with a determination that sound-detection criteria and verification criteria have been satisfied. In some embodiments, an electronic device optionally determines whether a person is further or closer than a threshold distance from a display device, and optionally provides a first or second user interface for display on the display device in accordance with the determination. In some embodiments, an electronic device optionally modifies the playing of media content in accordance with a determination that one or more presence criteria are not satisfied.
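The following Swift sketch is illustrative only and is not taken from the patent; all names (PresenceMonitor, UIVariant) and the meter unit for distance are hypothetical. It shows, in minimal form, the distance-threshold selection between two user interfaces and the presence-criteria check for modifying media playback described above.

```swift
// Minimal sketch, not the patent's implementation: all type and function
// names (PresenceMonitor, UIVariant, etc.) are hypothetical illustrations
// of the distance-threshold and presence-criteria logic described above.

enum UIVariant {
    case farUI   // simplified, larger-scale layout for distant viewers
    case nearUI  // denser, full-detail layout for close viewers
}

struct PresenceMonitor {
    let distanceThreshold: Double  // meters (assumed unit)

    // Choose which user interface to provide for display based on the
    // detected distance between the person and the display device.
    func interfaceVariant(forDistance distance: Double) -> UIVariant {
        return distance > distanceThreshold ? .farUI : .nearUI
    }

    // Modify the playing of media content when one or more presence
    // criteria (e.g., a person detected near the device) are not satisfied.
    func shouldPausePlayback(presenceCriteriaSatisfied: Bool) -> Bool {
        return !presenceCriteriaSatisfied
    }
}

// Example: a viewer 2.5 m away with a 2.0 m threshold gets the "far" UI,
// and playback is paused when the presence criteria are not satisfied.
let monitor = PresenceMonitor(distanceThreshold: 2.0)
let variant = monitor.interfaceVariant(forDistance: 2.5)                    // .farUI
let pause = monitor.shouldPausePlayback(presenceCriteriaSatisfied: false)   // true
```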
Abstract:
In some embodiments, a multifunction device with a display and a touch-sensitive surface creates a plurality of workspace views. A respective workspace view is configured to contain content assigned by a user to the respective workspace view. The content includes application windows. The device displays a first workspace view in the plurality of workspace views on the display without displaying other workspace views in the plurality of workspace views and detects a first multifinger gesture on the touch-sensitive surface. In response to detecting the first multifinger gesture on the touch-sensitive surface, the device replaces display of the first workspace view with concurrent display of the plurality of workspace views.
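As a minimal illustration of the display-state transition described above, the following Swift sketch (with hypothetical types WorkspaceView, DisplayState, and WorkspaceManager) replaces display of a single workspace view with a concurrent overview of all workspace views when a multifinger gesture is handled; it is a sketch of the described behavior, not the device's implementation.

```swift
// Minimal sketch, assuming hypothetical types: models the transition in
// which a first multifinger gesture replaces the single displayed
// workspace view with concurrent display of all workspace views.

struct WorkspaceView {
    let name: String
    var applicationWindows: [String]  // content assigned by the user
}

enum DisplayState {
    case single(index: Int)   // one workspace view shown, others hidden
    case overview             // all workspace views shown concurrently
}

struct WorkspaceManager {
    var workspaces: [WorkspaceView]
    var state: DisplayState = .single(index: 0)

    // Called when a first multifinger gesture is detected on the
    // touch-sensitive surface while a single workspace view is displayed.
    mutating func handleMultifingerGesture() {
        if case .single = state {
            state = .overview   // replace single view with concurrent display
        }
    }
}

var manager = WorkspaceManager(workspaces: [
    WorkspaceView(name: "Work", applicationWindows: ["Mail", "Calendar"]),
    WorkspaceView(name: "Play", applicationWindows: ["Music"]),
])
manager.handleMultifingerGesture()   // state is now .overview
```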
Abstract:
Electronic devices are often equipped with a camera for capturing video content and/or a display for displaying video content. However, amateur users often capture video content without regard to composition, framing, or camera movement, resulting in video content that can be jarring or confusing to viewers. There is a need to automate the processing and presentation of video content in an aesthetically pleasing manner. The embodiments described herein provide a method of automatically cropping video content for presentation on a display.
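The Swift sketch below shows one way such an automatic crop could be computed; the Rect type, the region-of-interest input, and the center-then-clamp strategy are assumptions for illustration and are not the method disclosed here.

```swift
// Minimal sketch, assuming a hypothetical region-of-interest input: centers
// a display-sized crop window on a subject and clamps it to the source frame.

struct Rect {
    var x: Double, y: Double, width: Double, height: Double
}

// Center a crop of the requested output size on the region of interest,
// then clamp it so the crop stays entirely inside the source frame.
func cropWindow(frame: Rect, regionOfInterest roi: Rect,
                outputWidth: Double, outputHeight: Double) -> Rect {
    let centerX = roi.x + roi.width / 2
    let centerY = roi.y + roi.height / 2
    var x = centerX - outputWidth / 2
    var y = centerY - outputHeight / 2
    x = min(max(x, frame.x), frame.x + frame.width - outputWidth)
    y = min(max(y, frame.y), frame.y + frame.height - outputHeight)
    return Rect(x: x, y: y, width: outputWidth, height: outputHeight)
}

// Example: crop a 1920x1080 frame to a 960x540 window around a face box
// near the right edge; the crop is clamped so it does not leave the frame.
let frame = Rect(x: 0, y: 0, width: 1920, height: 1080)
let face = Rect(x: 1500, y: 200, width: 200, height: 200)
let crop = cropWindow(frame: frame, regionOfInterest: face,
                      outputWidth: 960, outputHeight: 540)
// crop.x == 960 (clamped to the right edge), crop.y == 30
```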
Abstract:
Detecting a signal from a touch and hover sensing device, in which the signal can be indicative of concurrent touch events and/or hover events, is disclosed. A touch event can indicate an object touching the device. A hover event can indicate an object hovering over the device. The touch and hover sensing device can ensure that a desired hover event is not masked by an incidental touch event, e.g., a hand holding the device, by compensating for the touch event in the detected signal that represents both events. Conversely, when both a hover event and a touch event are desired, the touch and hover sensing device can ensure that both events are detected by adjusting the device sensors and/or the detected signal. The touch and hover sensing device can also detect concurrent hover events by identifying multiple peaks in the detected signal, each peak corresponding to a position of a hovering object.
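The following Swift sketch uses illustrative numbers only to show the two ideas described above: subtracting an estimated touch contribution from the combined signal so a concurrent hover is not masked, and locating multiple hovering objects as separate peaks. The signal model and threshold are assumptions, not the device's actual sensing pipeline.

```swift
// Minimal sketch with illustrative numbers only: compensates for an
// incidental touch in the combined signal, then finds hover peaks.

// Remove an estimated touch contribution from the measured signal so the
// residual reflects only hovering objects.
func compensate(signal: [Double], touchEstimate: [Double]) -> [Double] {
    return zip(signal, touchEstimate).map { pair in max(pair.0 - pair.1, 0) }
}

// Identify local maxima above a threshold; each peak index corresponds to
// the position of one hovering object.
func hoverPeaks(in signal: [Double], threshold: Double) -> [Int] {
    guard signal.count >= 3 else { return [] }
    return (1..<signal.count - 1).filter { i in
        signal[i] > threshold && signal[i] > signal[i - 1] && signal[i] > signal[i + 1]
    }
}

let measured      = [0.1, 0.9, 0.3, 0.2, 0.7, 0.2, 0.1]
let touchEstimate = [0.0, 0.6, 0.1, 0.0, 0.0, 0.0, 0.0]  // e.g., a hand holding the device
let hoverOnly = compensate(signal: measured, touchEstimate: touchEstimate)
let peaks = hoverPeaks(in: hoverOnly, threshold: 0.25)    // [1, 4]: two hovering objects
```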
Abstract:
Methods and systems related to interfaces for interacting with a digital assistant in a desktop environment are disclosed. In some embodiments, a digital assistant is invoked on a user device by a gesture following a predetermined motion pattern on a touch-sensitive surface of the user device. In some embodiments, a user device selectively invokes a dictation mode or a command mode to process a speech input depending on whether an input focus of the user device is within a text input area displayed on the user device. In some embodiments, a digital assistant performs various operations in response to one or more objects being dragged and dropped onto an iconic representation of the digital assistant displayed on a graphical user interface. In some embodiments, a digital assistant is invoked to cooperate with the user to complete a task that the user has already started on a user device.
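As a minimal sketch of the focus-dependent mode selection and the drag-and-drop handling described above, the Swift below uses hypothetical names (SpeechMode, AssistantRouter) for illustration; it is not the disclosed digital assistant implementation.

```swift
// Minimal sketch, assuming hypothetical types: routes a speech input to
// dictation when the input focus is inside a text input area and to
// command processing otherwise, and handles items dropped on the
// assistant's iconic representation.

enum SpeechMode {
    case dictation   // transcribe speech into the focused text input area
    case command     // interpret speech as a request to the assistant
}

struct AssistantRouter {
    // Selects how to process a speech input based on the current input focus.
    func mode(focusIsInTextInputArea: Bool) -> SpeechMode {
        return focusIsInTextInputArea ? .dictation : .command
    }

    // Example of handling objects dragged and dropped onto the assistant's
    // icon: returns a description of the operation the assistant would perform.
    func handleDrop(of items: [String]) -> String {
        return "Assistant received \(items.count) item(s): \(items.joined(separator: ", "))"
    }
}

let router = AssistantRouter()
let mode = router.mode(focusIsInTextInputArea: true)              // .dictation
let action = router.handleDrop(of: ["report.pdf", "photo.png"])   // drag-and-drop case
```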