Abstract:
A non-transitory computer-readable medium includes instructions that, when executed by processing circuitry, are configured to cause the processing circuitry to receive, from first sensors, first sensory datasets associated with an industrial automation system, receive, from second sensors, second sensory datasets associated with a machine configured to perform mechanical operations, determine a position of the machine relative to the industrial automation system based on the first sensory datasets and the second sensory datasets, determine output representative data associated with the industrial automation system based on the first sensory datasets and the second sensory datasets and in accordance with the position of the machine relative to the industrial automation system, instruct an extended reality device to present the output representative data, determine movement of components of the machine, and instruct the extended reality device to present feedback based on the movement of the components.
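The flow claimed above — fuse two sensor sets, compute the machine's position relative to the industrial automation system, and derive feedback from component movement — can be sketched minimally. This is an illustrative reading only, not the patented implementation; every function and variable name here (`relative_position`, `feedback_for_movement`, the threshold value) is hypothetical.

```python
def relative_position(system_pos, machine_pos):
    """Machine position relative to the industrial automation system,
    as a simple coordinate difference of the two fused estimates."""
    return tuple(m - s for m, s in zip(machine_pos, system_pos))

def feedback_for_movement(component_deltas, threshold=0.5):
    """Flag components whose movement exceeds an assumed threshold,
    so the extended reality device can present feedback for them."""
    return [i for i, d in enumerate(component_deltas) if abs(d) > threshold]

# First sensors observe the automation system; second sensors observe
# the machine performing mechanical operations.
system_pos = (0.0, 0.0, 0.0)    # estimate from first sensory datasets
machine_pos = (2.0, 1.0, 0.0)   # estimate from second sensory datasets

rel = relative_position(system_pos, machine_pos)
alerts = feedback_for_movement([0.1, 0.8, 0.0])  # per-component movement
```

In a real system the position estimates would come from calibrated sensor fusion rather than direct subtraction; the sketch only shows where each dataset enters the computation.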
Abstract:
A method may include receiving, via a processor, image data associated with a user's surroundings and generating, via the processor, a visualization that may include a virtual industrial automation device. The virtual industrial automation device may be depicted as a virtual object within the image data, and the virtual object may correspond to a physical industrial automation device. The method may include displaying, via the processor, the visualization via an electronic display and detecting, via the processor, a gesture in image data that may include the user's surroundings and the visualization. The gesture may be indicative of a request to move the virtual industrial automation device. The method may include tracking, via the processor, the user's movement, generating, via the processor, a visualization that may include an animation of the virtual industrial automation device moving based on the user's movement, and displaying, via the processor, the visualization via the electronic display.
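The gesture-driven portion of the method — detect a move request, track the user's movement, and animate the virtual device along it — can be sketched as follows. This is a hypothetical illustration, not the claimed method; the class and function names and the 2-D hand path are invented for the example.

```python
class VirtualDevice:
    """Stand-in for a virtual industrial automation device."""
    def __init__(self, position):
        self.position = list(position)

    def move_by(self, delta):
        self.position = [p + d for p, d in zip(self.position, delta)]

def track_and_animate(device, hand_path):
    """Replay tracked user movement onto the virtual device, producing
    one animation frame per tracked step."""
    frames = []
    for prev, cur in zip(hand_path, hand_path[1:]):
        delta = [c - p for c, p in zip(cur, prev)]
        device.move_by(delta)
        frames.append(tuple(device.position))
    return frames

device = VirtualDevice([0.0, 0.0])
# Assumed hand positions extracted from successive image frames.
frames = track_and_animate(device, [(0, 0), (1, 0), (1, 2)])
```

Each frame would be composited over the camera image and shown on the electronic display; the sketch keeps only the position bookkeeping.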
Abstract:
A non-transitory computer-readable medium includes computer-executable instructions that, when executed by at least one processor, are configured to cause the at least one processor to receive an inquiry from a training system, in which the inquiry includes a request for assistance to perform a first operation, retrieve a training profile for the first operation from a database based on the inquiry, and transmit the training profile to the training system, in which the training system is configured to present image data, audio data, or both regarding the first operation based on the training profile. The computer-executable instructions are also configured to cause the at least one processor to receive variant feedback from the training system, generate an updated training profile based on the variant feedback, and store the updated training profile in the database.
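The profile lifecycle described above — retrieve a training profile for an operation, apply variant feedback, store the update — can be sketched as a small read-modify-write loop. All names here (the operation key, the `extra_steps` feedback field) are assumptions for illustration, not part of the claims.

```python
# In-memory stand-in for the database of training profiles.
database = {"torque_check": {"steps": ["attach wrench", "apply 40 Nm"]}}

def retrieve_profile(operation):
    """Look up the training profile for the requested operation."""
    return dict(database[operation])

def apply_variant_feedback(profile, feedback):
    """Generate an updated profile from variant feedback, e.g. an
    extra step observed during training."""
    updated = dict(profile)
    updated["steps"] = profile["steps"] + feedback.get("extra_steps", [])
    return updated

profile = retrieve_profile("torque_check")
updated = apply_variant_feedback(profile, {"extra_steps": ["verify seal"]})
database["torque_check"] = updated   # store the updated training profile
```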
Abstract:
A system includes a training system configured to display first image data and a remote expert system configured to display second image data that corresponds to the first image data, receive feedback data associated with the second image data, and transmit a command to the training system based on the feedback data. The command is configured to modify the first image data presented via the training system.
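The expert-to-trainee loop — feedback on mirrored image data becomes a command that modifies the trainee's display — reduces to a small message-passing sketch. This is an assumed minimal model, not the patented system; the overlay field and command shape are invented.

```python
class TrainingSystem:
    """Trainee-side display of the first image data."""
    def __init__(self, image_data):
        self.image_data = dict(image_data)

    def apply_command(self, command):
        # A command from the remote expert modifies the displayed data.
        self.image_data.update(command)

trainee = TrainingSystem({"frame": "step_3", "overlay": None})

# Remote expert views corresponding second image data, receives feedback,
# and transmits a command derived from it.
expert_command = {"overlay": "highlight valve"}
trainee.apply_command(expert_command)
```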
Abstract:
For determining a tag-based location, a display presents an image. A processor identifies a given equipment tag within the image at a user focus determined at the augmented reality display. The processor further determines a device location based on the given equipment tag.
Abstract:
A computer system for controlling an industrial automation environment comprising a plurality of industrial components is provided. The computer system includes a machine interface, a user interface, a hardware memory, and a processor. The processor is configured to select an industrial component for configuration based on a user input. The processor is also configured to determine a context of the selected industrial component and display a plurality of interface modules to the user for the selected industrial component based on the context of the selected industrial component. The processor is further configured to receive a selection of an interface module by the user through the user interface, and add the selected interface module to a human-machine interface.
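The context-driven selection described above — determine the selected component's context, offer matching interface modules, and add the user's choice to the human-machine interface — can be sketched as a lookup plus a selection step. The context keys and module names below are hypothetical examples, not from the claims.

```python
# Assumed mapping from a component's context to candidate HMI modules.
MODULES_BY_CONTEXT = {
    "motor": ["speed_dial", "start_stop", "fault_log"],
    "valve": ["open_close", "flow_gauge"],
}

def modules_for(component_context):
    """Interface modules to display for the selected component."""
    return MODULES_BY_CONTEXT.get(component_context, [])

hmi = []                               # the human-machine interface under construction
candidates = modules_for("motor")      # displayed to the user for selection
hmi.append(candidates[1])              # user selects "start_stop" via the UI
```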
Abstract:
For tag-based location, a camera captures an image. A display presents the image. A processor identifies a given equipment tag within the image. The processor further determines a device location based on the given equipment tag.
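This tag-based location pipeline (which the preceding abstract extends with a user-focus step) amounts to: recognize an equipment tag in the captured image, then map the tag to a device location. In the sketch below, vision-based tag recognition is stubbed out as a token scan, and all tag identifiers and locations are invented.

```python
# Assumed registry mapping equipment tags to device locations.
TAG_LOCATIONS = {"PUMP-07": ("building A", "bay 3")}

def identify_tag(image_tokens):
    """Stand-in for recognizing an equipment tag within the image;
    a real system would use OCR or marker detection."""
    for token in image_tokens:
        if token in TAG_LOCATIONS:
            return token
    return None

image = ["pipe", "PUMP-07", "railing"]   # stubbed camera frame contents
tag = identify_tag(image)
location = TAG_LOCATIONS[tag] if tag else None
```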
Abstract:
The present disclosure generally relates to a method for performing industrial automation control in an industrial automation system. As such, the method may include detecting, via a sensor system, positions and/or motions of a human. The method may then include determining a possible automation command corresponding to the detected positions and/or motions. After determining the possible automation command, the method may implement a control and/or notification action based upon the detected positions and/or motions.
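The method's core mapping — detected positions or motions to a possible automation command, then a control or notification action — can be sketched as a lookup table. The gestures and commands below are illustrative assumptions only.

```python
# Hypothetical mapping from a detected human motion to a possible
# automation command and the kind of action to take.
GESTURE_COMMANDS = {
    "raised_hand": ("stop_conveyor", "control"),
    "wave": ("acknowledge_alarm", "notification"),
}

def interpret(motion):
    """Determine the possible automation command for a detected motion."""
    return GESTURE_COMMANDS.get(motion)

actions = []
command, kind = interpret("raised_hand")   # from the sensor system
actions.append((kind, command))            # implement control/notification
```

A production system would gate such commands behind confidence thresholds and safety interlocks; the sketch shows only the detection-to-action mapping.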
Abstract:
A tangible, non-transitory, computer-readable medium includes instructions. The instructions, when executed by processing circuitry, are configured to cause the processing circuitry to receive a plurality of sensory datasets associated with an industrial automation system from a plurality of sensors, categorize each sensory dataset of the plurality of sensory datasets into one or more sensory dataset categories of a plurality of sensory dataset categories, determine context information associated with the plurality of sensory datasets, the context information being representative of an environmental condition associated with an extended reality device, the industrial automation system, or both, determine a priority of each sensor dataset category of the plurality of sensory dataset categories based on the context information, determine output representative data to be presented by the extended reality device based on the plurality of sensory datasets and the priority, and instruct the extended reality device to present the output representative data.
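The categorize-then-prioritize scheme above can be sketched in a few lines: bucket sensory datasets into categories, then rank the categories by context before choosing what the extended reality device presents. The category names and context rules here are invented for illustration.

```python
def categorize(datasets):
    """Bucket (sensor, category) pairs into sensory dataset categories."""
    buckets = {}
    for name, category in datasets:
        buckets.setdefault(category, []).append(name)
    return buckets

def prioritize(categories, context):
    """Rank categories by context, e.g. in a noisy environment visual
    data outranks audio data (assumed rule)."""
    order = {"noisy": ["visual", "audio"], "dark": ["audio", "visual"]}
    return sorted(categories, key=order[context].index)

buckets = categorize([("cam1", "visual"), ("mic1", "audio"), ("cam2", "visual")])
ranked = prioritize(list(buckets), "noisy")   # drives what the XR device shows
```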
Abstract:
A tangible, non-transitory, computer-readable medium includes instructions that, when executed by processing circuitry, are configured to cause the processing circuitry to receive sensory datasets associated with an industrial automation system, determine context information based on a sensory dataset and representative of an environmental condition, predict an intent of a user to complete a task associated with the industrial automation system based on the sensory datasets and the context information, present first output representative data via an extended reality device based on the intent and a setting, the setting including a data presentation format for presenting the sensory datasets, receive inputs indicative of changes to the data presentation format, present second output representative data via the extended reality device in response to receiving the inputs, and update the setting based on the inputs and historical data indicative of users changing the data presentation format of the first output representative data.
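The setting-update step — adopt a user's requested presentation format when historical data shows users tend to make that change — can be sketched as a simple majority check. The format names and the half-of-history rule are assumptions made for the example, not the claimed logic.

```python
setting = {"format": "table"}              # current data presentation format
history = ["chart", "chart", "table"]      # historical user format changes

def update_setting(setting, user_input, history):
    """Update the setting when the requested format matches what at
    least half of the historical changes asked for (assumed rule)."""
    if history.count(user_input) >= len(history) / 2:
        return {"format": user_input}
    return setting

# User input indicates a change to the data presentation format.
setting = update_setting(setting, "chart", history)
```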