-
Publication No.: US20210312694A1
Publication Date: 2021-10-07
Application No.: US17352080
Filing Date: 2021-06-18
Applicant: Apple Inc.
Inventor: Arthur Y Zhang , Ray L. Chang , Timothy R. Oriol , Ling Su , Gurjeet S. Saund , Guy Cote , Jim C. Chou , Hao Pan , Tobias Eble , Avi Bar-Zeev , Sheng Zhang , Justin A. Hensley , Geoffrey Stahl
Abstract: A mixed reality system that includes a device and a base station that communicate via a wireless connection. The device may include sensors that collect information about the user's environment and about the user. The information collected by the sensors may be transmitted to the base station via the wireless connection. The base station renders frames or slices based at least in part on the sensor information received from the device, encodes the frames or slices, and transmits the compressed frames or slices to the device for decoding and display. The base station may provide more computing power than conventional stand-alone systems, and the wireless connection does not tether the device to the base station as in conventional tethered systems. The system may implement methods and apparatus to maintain a target frame rate through the wireless link and to minimize latency in frame rendering, transmittal, and display.
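The abstract describes a render-encode-transmit loop on the base station that must stay within a frame-time budget over the wireless link. As a rough illustration only, a base-station-side loop might be sketched as below; the types, the quality knob, and the rate-control policy are assumptions, not the claimed method.

```swift
import Foundation

// Hypothetical sketch of a base-station pipeline: receive sensor data from the device,
// render and encode slices, hand them to the link, and nudge a quality knob so the
// work stays inside the frame-time budget. All names and the policy are illustrative.
struct SensorSample { var headPose: [Double] }      // pose, gaze, etc. reported by the device
struct EncodedSlice { var bytes: [UInt8] }

final class StreamingPipeline {
    let frameBudget: TimeInterval                   // e.g. 1/90 s for a 90 Hz target
    var quality = 1.0                               // assumed knob lowered under time pressure

    init(targetFPS: Double) { frameBudget = 1.0 / targetFPS }

    func processFrame(_ sample: SensorSample, send: (EncodedSlice) -> Void) {
        let start = Date()
        for slice in render(sample) {
            send(encode(slice))                     // transmit each slice as soon as it is ready
        }
        let elapsed = Date().timeIntervalSince(start)
        if elapsed > frameBudget {
            quality = max(0.25, quality - 0.05)     // over budget: shed quality
        } else {
            quality = min(1.0, quality + 0.01)      // headroom: recover slowly
        }
    }

    private func render(_ s: SensorSample) -> [[UInt8]] { [[0, 1, 2]] }              // placeholder
    private func encode(_ raw: [UInt8]) -> EncodedSlice { EncodedSlice(bytes: raw) } // real code would compress
}
```

Sending slices as they finish, rather than whole frames, is what lets the device begin decoding and displaying before rendering of the full frame completes, which is how latency can be kept low.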
-
Publication No.: US20210096638A1
Publication Date: 2021-04-01
Application No.: US17019856
Filing Date: 2020-09-14
Applicant: Apple Inc.
Inventor: Adam M. O'Hern , Eddie G. Mendoza , Mohamed Selim Ben Himane , Timothy R. Oriol
IPC: G06F3/01 , G06F3/0481 , G06F3/0488 , G06F3/0489 , G06F3/0354
Abstract: Implementations use a first device (e.g., an HMD) to provide a CGR environment that augments the input and output capabilities of a second device, e.g., a laptop, smart speaker, etc. In some implementations, the first device communicates with a second device in its proximate physical environment to exchange input or output data. For example, an HMD may capture an image of a physical environment that includes a laptop. The HMD may detect the laptop, send a request for the laptop's content, receive content from the laptop (e.g., the content that the laptop is currently displaying and additional content), identify the location of the laptop, and display a virtual object with the received content in the CGR environment on or near the laptop. The size, shape, orientation, or position of the virtual object (e.g., a virtual monitor or monitor extension) may also be configured to provide a better user experience.
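Purely as an illustration of the detect/request/place flow in the abstract, the sequence could look roughly like the following; the types, the `CompanionDevice` protocol, and the placement offsets are invented here and are not Apple's API.

```swift
import Foundation

// Hypothetical sketch: the HMD detects a companion device, asks it for content, and
// places a virtual monitor-like panel next to it. Names, sizes, and offsets are assumptions.
struct Pose { var x, y, z: Double }
struct VirtualObject { var content: Data; var pose: Pose; var widthMeters: Double; var heightMeters: Double }

protocol CompanionDevice {
    var pose: Pose { get }               // location identified from the captured image
    func requestContent() -> Data        // e.g. what the laptop is showing, plus additional content
}

func makeVirtualMonitor(for device: CompanionDevice) -> VirtualObject {
    let content = device.requestContent()
    // Assumed placement rule: float the panel slightly above and beside the detected device.
    var pose = device.pose
    pose.x += 0.35
    pose.y += 0.20
    return VirtualObject(content: content, pose: pose, widthMeters: 0.6, heightMeters: 0.35)
}
```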
-
Publication No.: US10719303B2
Publication Date: 2020-07-21
Application No.: US15081451
Filing Date: 2016-03-25
Applicant: Apple Inc.
Abstract: The disclosure pertains to the operation of graphics systems and to a variety of architectures for the design and/or operation of a graphics system, spanning from the output of an application program to the presentation of visual content in the form of pixels or otherwise. In general, many embodiments of the invention envision processing graphics programming according to an on-the-fly decision about how best to use the specific available hardware and software. In some embodiments, a software arrangement may be used to evaluate the specific system hardware and software capabilities and then decide which graphics programming path is best to follow for any particular graphics request. The decision regarding the best path may be made after evaluating the hardware and software alternatives for the path in view of the particulars of the graphics program to be processed.
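To make the idea concrete, here is a minimal sketch of such an on-the-fly path decision; the capability fields, the request fields, and the selection rules are invented for illustration and are not the patented evaluation.

```swift
// Hypothetical sketch: inspect what the hardware and software offer, then pick a
// rendering path per graphics request. Fields and rules are illustrative only.
enum RenderPath { case hardwareAccelerated, hybrid, softwareFallback }

struct SystemCapabilities {
    var hasCapableGPU: Bool
    var driverSupportsCompute: Bool
}

struct GraphicsRequest {
    var needsCompute: Bool
}

func choosePath(for request: GraphicsRequest, on caps: SystemCapabilities) -> RenderPath {
    // Evaluate the alternatives in view of this particular request.
    if caps.hasCapableGPU && (!request.needsCompute || caps.driverSupportsCompute) {
        return .hardwareAccelerated
    }
    if caps.hasCapableGPU {
        return .hybrid                   // GPU for what it can do, CPU for the rest
    }
    return .softwareFallback
}
```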
-
Publication No.: US20200225746A1
Publication Date: 2020-07-16
Application No.: US16828852
Filing Date: 2020-03-24
Applicant: Apple Inc.
Inventor: Avi Bar-Zeev , Ryan S. Burgoyne , Devin W. Chalmers , Luis R. Deliz Centeno , Rahul Nair , Timothy R. Oriol , Alexis H. Palangie
IPC: G06F3/01 , G06F3/0481 , G06F3/0484
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing a user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
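As a purely illustrative sketch of that gaze-plus-input pattern, selection could be gated as follows; the vector types, thresholds, and helper names are assumptions, not the claimed process.

```swift
import Foundation

// Hypothetical sketch: an affordance is selected only when a confirming input arrives
// while the gaze direction or gaze depth still corresponds to it. Thresholds are made up.
struct Gaze { var direction: [Double]; var depth: Double }        // unit vector + focal depth
struct Affordance { var direction: [Double]; var depth: Double }  // bearing and distance from the user

func gazeCorresponds(to a: Affordance, gaze: Gaze,
                     maxAngleRadians: Double = 0.05, depthTolerance: Double = 0.1) -> Bool {
    var dot = 0.0
    for i in 0..<3 { dot += gaze.direction[i] * a.direction[i] }  // both assumed unit length
    let directionHit = acos(min(max(dot, -1), 1)) < maxAngleRadians
    let depthHit = abs(gaze.depth - a.depth) < depthTolerance
    return directionHit || depthHit      // either cue may establish the gaze, per the abstract
}

func shouldSelect(_ a: Affordance, gaze: Gaze, confirmingInputReceived: Bool) -> Bool {
    confirmingInputReceived && gazeCorresponds(to: a, gaze: gaze)
}
```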
-
Publication No.: US10580191B2
Publication Date: 2020-03-03
Application No.: US15216575
Filing Date: 2016-07-21
Applicant: Apple Inc.
Inventor: James J. Cwik , Timothy R. Oriol , Ross R. Dexter , Bruno M. Sommer
Abstract: Systems and techniques for generating an artificial terrain map can select a plurality of component terrains for each of several terrain types. Values of a selection noise map ranging between a lower bound and an upper bound can be computed on a tile-by-tile basis. One or more noise bands within the range of selection-noise-map values can correspond to each terrain type. The noise map can be sampled on a tile-by-tile basis to determine a tile value for each tile. Each respective tile can be assigned to the noise band in which the tile value falls. A terrain value can be assigned to each respective tile in the selection noise map based on the noise band assigned to the respective tile. Generated maps in machine-readable form can be converted to a human-perceivable form, and/or to a modulated signal form conveyed over a communication connection.
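Stated as code, the band-assignment step looks roughly like the following; the noise function and the band boundaries are stand-ins chosen only to make the per-tile sampling and lookup concrete, not the patented terrain generator.

```swift
// Hypothetical sketch: sample a selection-noise value per tile, find the noise band it
// falls in, and assign that band's terrain type to the tile. Bands and noise are stand-ins.
enum Terrain { case water, plains, forest, mountain }

// Bands partition the selection-noise range [0, 1); each band maps to one terrain type.
let bands: [(range: Range<Double>, terrain: Terrain)] = [
    (0.00..<0.30, .water),
    (0.30..<0.55, .plains),
    (0.55..<0.80, .forest),
    (0.80..<1.00, .mountain),
]

// Stand-in for a real selection noise map (value/Perlin noise, etc.) bounded in [0, 1).
func selectionNoise(x: Int, y: Int) -> Double {
    let h = (x &* 73_856_093) ^ (y &* 19_349_663)
    return Double(((h % 1_000) + 1_000) % 1_000) / 1_000.0
}

func terrainMap(width: Int, height: Int) -> [[Terrain]] {
    (0..<height).map { y in
        (0..<width).map { x -> Terrain in
            let tileValue = selectionNoise(x: x, y: y)            // sample the map per tile
            return bands.first { $0.range.contains(tileValue) }!.terrain
        }
    }
}
```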
-
Publication No.: US20190391726A1
Publication Date: 2019-12-26
Application No.: US16440048
Filing Date: 2019-06-13
Applicant: Apple Inc.
Inventor: Edwin Iskandar , Ittinop Dumnernchanvanit , Samuel L. Iglesias , Timothy R. Oriol
IPC: G06F3/0481 , G06F3/01 , G06F3/16 , G06F3/0484
Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a CGR environment in which virtual objects from one or more apps are included. User interactions with the virtual objects are detected and interpreted by a system that is separate from the apps that provide the virtual objects. The system detects user interactions received via one or more input modalities and interprets those user interactions as events. These events provide higher-level, input-modality-independent abstractions of the lower-level, input-modality-dependent user interactions that are detected. The system uses UI capability data provided by the apps to interpret user interactions with respect to the virtual objects provided by the apps. For example, the UI capability data can identify whether a virtual object is moveable, actionable, hover-able, etc., and the system interprets user interactions at or near the virtual object accordingly.
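As a loose illustration of that division of labor, the system-side interpretation might be sketched as below; the capability flags, input cases, and event names are invented, not the actual interface.

```swift
// Hypothetical sketch: apps declare what their virtual objects support via capability
// data; a system-level interpreter turns modality-specific input into modality-independent
// events, consulting only that data. All names and cases here are assumptions.
struct UICapabilities: OptionSet {
    let rawValue: Int
    static let movable    = UICapabilities(rawValue: 1 << 0)
    static let actionable = UICapabilities(rawValue: 1 << 1)
    static let hoverable  = UICapabilities(rawValue: 1 << 2)
}

enum RawInput {                          // lower-level, modality-dependent interactions
    case gazeDwell
    case handPinch
    case voiceCommand(String)
    case controllerDrag(dx: Double, dy: Double)
}

enum UIEvent {                           // higher-level, modality-independent events
    case hover
    case activate
    case move(dx: Double, dy: Double)
}

// The system, not the app, performs this mapping for input at or near a virtual object.
func interpret(_ input: RawInput, capabilities: UICapabilities) -> UIEvent? {
    switch input {
    case .gazeDwell:
        return capabilities.contains(.hoverable) ? .hover : nil
    case .handPinch, .voiceCommand:
        return capabilities.contains(.actionable) ? .activate : nil
    case .controllerDrag(let dx, let dy):
        return capabilities.contains(.movable) ? .move(dx: dx, dy: dy) : nil
    }
}
```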
-
Publication No.: US20170358122A1
Publication Date: 2017-12-14
Application No.: US15216575
Filing Date: 2016-07-21
Applicant: Apple Inc.
Inventor: James J. Cwik , Timothy R. Oriol , Ross R. Dexter , Bruno M. Sommer
Abstract: Systems and techniques for generating an artificial terrain map can select a plurality of component terrains for each of several terrain types. Values of a selection noise map ranging between a lower bound and an upper bound can be computed on a tile-by-tile basis. One or more noise bands within the range of selection-noise-map values can correspond to each terrain type. The noise map can be sampled on a tile-by-tile basis to determine a tile value for each tile. Each respective tile can be assigned to the noise band in which the tile value falls. A terrain value can be assigned to each respective tile in the selection noise map based on the noise band assigned to the respective tile. Generated maps in machine-readable form can be converted to a human-perceivable form, and/or to a modulated signal form conveyed over a communication connection.
-
Publication No.: US20250036252A1
Publication Date: 2025-01-30
Application No.: US18910335
Filing Date: 2024-10-09
Applicant: Apple Inc.
Inventor: Edwin Iskandar , Ittinop Dumnernchanvanit , Samuel L. Iglesias , Timothy R. Oriol
IPC: G06F3/04815 , G06F3/01 , G06F3/04842 , G06F3/04845 , G06F3/16
Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a CGR environment in which virtual objects from one or more apps are included. User interactions with the virtual objects are detected and interpreted by a system that is separate from the apps that provide the virtual objects. The system detects user interactions received via one or more input modalities and interprets those user interactions as events. These events provide higher-level, input-modality-independent abstractions of the lower-level, input-modality-dependent user interactions that are detected. The system uses UI capability data provided by the apps to interpret user interactions with respect to the virtual objects provided by the apps. For example, the UI capability data can identify whether a virtual object is moveable, actionable, hover-able, etc., and the system interprets user interactions at or near the virtual object accordingly.
-
Publication No.: US20240394952A1
Publication Date: 2024-11-28
Application No.: US18797340
Filing Date: 2024-08-07
Applicant: Apple Inc.
Inventor: Arthur Y Zhang , Ray L. Chang , Timothy R. Oriol , Ling Su , Gurjeet S. Saund , Guy Cote , Jim C. Chou , Hao Pan , Tobias Eble , Avi Bar-Zeev , Sheng Zhang , Justin A. Hensley , Geoffrey Stahl
Abstract: A mixed reality system that includes a device and a base station that communicate via a wireless connection. The device may include sensors that collect information about the user's environment and about the user. The information collected by the sensors may be transmitted to the base station via the wireless connection. The base station renders frames or slices based at least in part on the sensor information received from the device, encodes the frames or slices, and transmits the compressed frames or slices to the device for decoding and display. The base station may provide more computing power than conventional stand-alone systems, and the wireless connection does not tether the device to the base station as in conventional tethered systems. The system may implement methods and apparatus to maintain a target frame rate through the wireless link and to minimize latency in frame rendering, transmittal, and display.
-
Publication No.: US12141414B2
Publication Date: 2024-11-12
Application No.: US18217711
Filing Date: 2023-07-03
Applicant: Apple Inc.
Inventor: Edwin Iskandar , Ittinop Dumnernchanvanit , Samuel L. Iglesias , Timothy R. Oriol
IPC: G06F3/04815 , G06F3/01 , G06F3/04842 , G06F3/04845 , G06F3/16
Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a CGR environment in which virtual objects from one or more apps are included. User interactions with the virtual objects are detected and interpreted by a system that is separate from the apps that provide the virtual objects. The system detects user interactions received via one or more input modalities and interprets those user interactions as events. These events provide higher-level, input-modality-independent abstractions of the lower-level, input-modality-dependent user interactions that are detected. The system uses UI capability data provided by the apps to interpret user interactions with respect to the virtual objects provided by the apps. For example, the UI capability data can identify whether a virtual object is moveable, actionable, hover-able, etc., and the system interprets user interactions at or near the virtual object accordingly.