Abstract:
An electronic device with a display and sensor(s) to detect the location of an input object displays a first user interface object. The device detects an input object at a first hover location that meets first hover proximity criteria with respect to the first user interface object. After detecting the input object at the first hover location, the device detects movement of the input object away from the first hover location. In response to detecting movement of the input object away from the first hover location: in accordance with a determination that the input object meets first augmented hover proximity criteria, the device performs a first operation associated with the movement of the input object; and in accordance with a determination that the input object does not meet the first augmented hover proximity criteria, the device forgoes performing the first operation.
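As an illustration of the branching described above, here is a minimal Swift sketch; the type names, the distance-based proximity test, and the 1.0/2.0 thresholds are all hypothetical, since the abstract does not specify how the hover and augmented hover criteria are evaluated.

struct HoverSample {
    var x: Double            // position over the display
    var y: Double
    var distance: Double     // height of the input object above the screen
}

// Hypothetical thresholds: ordinary hover within 1.0 unit; the augmented
// criteria tolerate a larger range (2.0 units) once a hover has begun.
let hoverRange = 1.0
let augmentedHoverRange = 2.0

func meetsHoverProximityCriteria(_ s: HoverSample) -> Bool {
    s.distance <= hoverRange
}

func meetsAugmentedHoverProximityCriteria(_ s: HoverSample) -> Bool {
    s.distance <= augmentedHoverRange
}

// Perform the first operation only while the augmented criteria still hold.
func handleMovement(from start: HoverSample, to current: HoverSample) {
    guard meetsHoverProximityCriteria(start) else { return }
    if meetsAugmentedHoverProximityCriteria(current) {
        print("perform first operation for the movement")
    } else {
        print("forgo the first operation")
    }
}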
Abstract:
An electronic device that is in communication with a display generation component and sensor(s) to detect the location of an input object displays a content selection object within selectable content, wherein the content selection object includes a first edge and a second edge. The device detects a first portion of an input by the input object, including detecting the input object at a first hover location that corresponds to the first edge of the content selection object. In response to detecting the first portion of the input: in accordance with a determination that the first portion of the input meets first criteria that require the input object to meet proximity criteria with respect to the content selection object, the device changes an appearance of the first edge relative to the second edge of the content selection object to indicate that the first edge will be selected for movement when the input object meets second criteria.
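A minimal Swift sketch of the edge-highlighting logic; the names and the distance thresholds are hypothetical stand-ins for the abstract's unspecified first criteria.

enum Edge { case first, second }

struct ContentSelectionObject {
    var firstEdgeX: Double
    var secondEdgeX: Double
    var highlightedEdge: Edge?   // edge whose appearance is changed
}

// Hypothetical first criteria: the input object hovers within `maxHover`
// of the surface and within `edgeRange` points of an edge; that edge is
// then marked as the one that will move once the second criteria are met.
func updateEdgeHighlight(_ selection: inout ContentSelectionObject,
                         hoverX: Double, hoverDistance: Double,
                         edgeRange: Double = 8.0, maxHover: Double = 1.0) {
    guard hoverDistance <= maxHover else {
        selection.highlightedEdge = nil
        return
    }
    if abs(hoverX - selection.firstEdgeX) <= edgeRange {
        selection.highlightedEdge = .first
    } else if abs(hoverX - selection.secondEdgeX) <= edgeRange {
        selection.highlightedEdge = .second
    } else {
        selection.highlightedEdge = nil
    }
}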
Abstract:
A computer system displays a three-dimensional environment and detects, via an input device that includes a first portion and a second portion that can be physically coupled in a first configuration and physically decoupled in a second configuration, a first input. In response to detecting the first input while the first portion is coupled to the second portion of the input device in the first configuration, the computer system performs a first operation in the three-dimensional environment. While the first portion of the input device and the second portion of the input device are decoupled in the second configuration, the computer system detects a sequence of one or more inputs that includes movement of the first portion of the input device relative to the second portion of the input device. In response to detecting the sequence of one or more inputs, the computer system performs one or more second operations.
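The coupled/decoupled branching can be sketched as a simple state switch; everything here (the names and the offset representation) is hypothetical, as the abstract does not describe the input device's geometry.

enum Configuration { case coupled, decoupled }

struct TwoPartInputDevice {
    var configuration: Configuration
    // Offset of the first portion relative to the second, tracked only
    // while the portions are decoupled.
    var relativeOffset: (x: Double, y: Double) = (0, 0)
}

func handle(_ device: TwoPartInputDevice) {
    switch device.configuration {
    case .coupled:
        print("perform the first operation in the three-dimensional environment")
    case .decoupled:
        print("perform second operations for relative movement \(device.relativeOffset)")
    }
}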
Abstract:
A first device sends a request to a second device to initiate a shared annotation session. In response to receiving acceptance of the request, a first prompt to move the first device toward the second device is displayed. In accordance with a determination that connection criteria for the first device and the second device are met, a representation of a field of view of the camera(s) of the first device is displayed in the shared annotation session with the second device. During the shared annotation session, one or more first virtual annotations are displayed via a first display generation component of the first device, and one or more second virtual annotations corresponding to annotation input directed to a respective location in the physical environment by the second device are displayed via the first display generation component, provided that the respective location is included in the field of view of the camera(s) of the first device.
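The session flow and the field-of-view gating might look like the following Swift sketch; the state names, the connection flag, and the frustum test are hypothetical.

enum SessionState { case requested, accepted, prompting, connected }

// Advance the shared annotation session: acceptance triggers the prompt,
// and meeting the (hypothetical) connection criteria shows the camera view.
func advance(_ state: SessionState, connectionCriteriaMet: Bool) -> SessionState {
    switch state {
    case .requested: return .accepted
    case .accepted:  return .prompting
    case .prompting: return connectionCriteriaMet ? .connected : .prompting
    case .connected: return .connected
    }
}

struct Annotation {
    var worldPosition: (x: Double, y: Double, z: Double)
    var fromSecondDevice: Bool
}

// Show a second device's annotation only when its physical location is
// inside the first device's camera field of view.
func visibleAnnotations(_ all: [Annotation],
                        inFieldOfView: ((x: Double, y: Double, z: Double)) -> Bool)
        -> [Annotation] {
    all.filter { !$0.fromSecondDevice || inFieldOfView($0.worldPosition) }
}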
Abstract:
An electronic device, while displaying a first user interface, detects an input by an input object, detects that first hover proximity criteria are met by the input object, and displays first visual feedback. While displaying the first visual feedback, the device detects a change in a current value of a hover proximity parameter of the input object and detects that second hover proximity criteria are met by the input object after the change. In response to detecting that the second hover proximity criteria are met, the device displays second visual feedback that is distinct from the first visual feedback.
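One way to read the two feedback levels is as bands of the hover proximity parameter; the Swift sketch below uses hypothetical distance thresholds of 1.0 and 0.5, which the abstract does not specify.

// Map the current value of the hover proximity parameter (here, distance)
// to a feedback level; crossing the inner threshold swaps the feedback.
func visualFeedback(forDistance distance: Double) -> String {
    switch distance {
    case ..<0.5: return "second visual feedback"  // second hover proximity criteria met
    case ..<1.0: return "first visual feedback"   // first hover proximity criteria met
    default:     return "no feedback"
    }
}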
Abstract:
A device with a display and a touch-sensitive surface displays a user interface including a user interface object at a first location. While displaying the user interface, the device detects a portion of an input, including a contact at a location on the touch-sensitive surface that corresponds to the user interface object. In response to detecting the portion of the input: upon determining that the portion of the input meets menu-display criteria, the device displays, on the display, a plurality of selectable options that correspond to the user interface object; and, upon determining that the portion of the input meets object-move criteria, the device moves the user interface object, or a representation thereof, from the first location to a second location in accordance with movement of the contact.
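The menu-display versus object-move branch resembles a long-press/drag disambiguation; the Swift sketch below uses hypothetical values for the hold delay and movement slop, which the abstract leaves unspecified.

enum TouchOutcome { case showMenu, moveObject, undecided }

// Hypothetical criteria: moving past `dragSlop` points moves the object,
// while holding still past `menuDelay` seconds displays the menu.
func classify(holdDuration: Double, movedDistance: Double,
              menuDelay: Double = 0.5, dragSlop: Double = 10.0) -> TouchOutcome {
    if movedDistance > dragSlop { return .moveObject }
    if holdDuration >= menuDelay { return .showMenu }
    return .undecided
}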
Abstract:
An electronic device with a touch-sensitive display and one or more sensors to detect signals from a stylus associated with the device: displays a user interface on the touch-sensitive display; while displaying the user interface on the touch-sensitive display, detects the stylus moving towards the touch-sensitive display, without the stylus making contact with the touch-sensitive display; determines whether the detected stylus movement towards the touch-sensitive display satisfies one or more stylus movement criteria; in accordance with a determination that the detected stylus movement satisfies the one or more stylus movement criteria, displays a menu overlaid on the user interface, the menu including a plurality of selectable menu options; detects selection of a first menu option in the plurality of selectable menu options; and, in response to detecting selection of the first menu option: performs an operation that corresponds to the first menu option, and ceases to display the menu.
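The stylus movement criteria could be approximated with an approach-speed test, as in this Swift sketch; the hover range and speed threshold are invented for illustration.

struct StylusSample {
    var distance: Double   // height above the touch-sensitive display
    var time: Double       // timestamp in seconds
}

// Hypothetical criteria: the stylus is within hover range and is closing
// on the display faster than `minApproachSpeed` units per second.
func shouldShowMenu(previous: StylusSample, current: StylusSample,
                    hoverRange: Double = 2.0, minApproachSpeed: Double = 1.0) -> Bool {
    guard current.distance <= hoverRange, current.time > previous.time else { return false }
    let approachSpeed = (previous.distance - current.distance) / (current.time - previous.time)
    return approachSpeed >= minApproachSpeed
}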
Abstract:
A computer system displays virtual objects overlaid on a view of a physical environment as a virtual effect. The computer system displays respective animated movements of the virtual objects over the view of the physical environment, wherein the respective animated movements are constrained in accordance with a direction of simulated gravity associated with the view of the physical environment. If current positions of the virtual objects during their respective animated movements correspond to different surfaces at different heights detected in the view of the physical environment, the computer system constrains the respective animated movements of the virtual objects in accordance with the different surfaces detected in the view of the physical environment.
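Constraining a falling virtual object to the surfaces beneath it can be sketched as follows; the surface model and the per-frame gravity step are hypothetical simplifications of the abstract's description.

// A detected surface spans an x-range at a fixed height.
struct Surface {
    var minX: Double
    var maxX: Double
    var height: Double
}

// The object comes to rest on the highest detected surface beneath it,
// or falls toward the floor (height 0) when none is underneath.
func restingHeight(atX x: Double, surfaces: [Surface]) -> Double {
    surfaces.filter { ($0.minX...$0.maxX).contains(x) }
            .map { $0.height }
            .max() ?? 0
}

// One animation step under simulated gravity, clamped by the surfaces.
func step(y: Double, x: Double, surfaces: [Surface], gravityStep: Double = 0.1) -> Double {
    max(y - gravityStep, restingHeight(atX: x, surfaces: surfaces))
}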
Abstract:
A device with a display and a touch-sensitive surface displays a user interface including a user interface object. While displaying the user interface, the device detects a first portion and a second portion of an input, where the first portion includes contact(s) at a location corresponding to the user interface object and the second portion includes movement of the contact(s). In response: upon determining that the second portion was detected shortly after detecting the contact(s): when the input has a first predefined number of contacts, the device drags the user interface object or a representation thereof; and when the input has a second predefined number of contacts, the device forgoes the dragging. Further in response, upon determining that the second portion was detected after the contact(s) had been detected at the location for at least a first threshold amount of time, the device drags the user interface object or the representation thereof.
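The contact-count and timing rules can be condensed into one decision function; the two-contact immediate drag and the 0.5 s hold threshold below are hypothetical placeholders for the abstract's predefined values.

enum DragDecision { case drag, forgo }

// Hypothetical rule: movement detected shortly after touchdown drags only
// with the first predefined number of contacts; after a long enough hold,
// the drag proceeds regardless of contact count.
func decide(contactCount: Int, heldForSeconds: Double,
            immediateDragContacts: Int = 2, holdThreshold: Double = 0.5) -> DragDecision {
    if heldForSeconds >= holdThreshold { return .drag }
    return contactCount == immediateDragContacts ? .drag : .forgo
}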