Abstract:
An industrial visualization system generates and delivers virtual reality (VR) and augmented reality (AR) presentations of industrial facilities to wearable appliances to facilitate remote or enhanced interaction with automation systems within the facility. VR presentations can comprise three-dimensional (3D) holographic views of a plant facility or a location within a plant facility. The system can selectively render a scaled-down view that presents the facility as a 3D scale model, or a first-person view that presents the facility as a full-scale rendition simulating the user's presence on the plant floor. Camera icons rendered in the VR presentation can be selected to switch to a live video stream generated by 360-degree cameras within the plant. The system can also render workflow presentations that guide users through the process of correcting detected maintenance issues.
Abstract:
An industrial visualization system defines and enforces a virtual safety shield comprising a three-dimensional space surrounding a wearer of a client device. The dimensions of the virtual safety shield are defined by a specified safe distance surrounding the user that allows sufficient reaction time in response to notification that the wearer is at risk of interacting with a safety zone, hazardous machinery, or vehicles within the plant. If a boundary of a safety zone or hazardous equipment falls within the three-dimensional space defined by the virtual safety shield, the system sends a notification to the user's client device, or places the hazardous equipment in a safe operating mode.
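The proximity check implied by the virtual safety shield can be sketched as a simple distance test between the wearer and the nearest hazard boundary points. This is only an illustrative sketch: the function names, the spherical shield shape, the coordinate convention, and the 2 m radius are all hypothetical, not details from the disclosure.

```python
import math

def shield_breached(wearer_pos, hazard_boundary_points, safe_distance_m):
    """Return True if any sampled point on a hazard boundary falls inside
    the spherical virtual safety shield centered on the wearer."""
    for point in hazard_boundary_points:
        if math.dist(wearer_pos, point) <= safe_distance_m:
            return True
    return False

# Example: a machine-guard corner 1.5 m away, with a hypothetical 2 m
# shield radius -> breach detected, so the system would notify the
# wearer's client device or place the machine in a safe operating mode.
print(shield_breached((0, 0, 0), [(1.5, 0, 0)], 2.0))  # True
```

In a real system the safe distance would be derived from the wearer's reaction time and travel speed, and the check would run against safety-zone geometry streamed from the plant model rather than a static point list.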
Abstract:
The present disclosure generally relates to a method for performing industrial automation control. The method may include detecting, via a sensor system, positions and/or motions of a human in an industrial automation system, determining at least one derivative value from the detected positions and/or motions, and determining a possible automation command and an undesirable condition based upon the detected positions and/or motions and the at least one derivative value. The method may then include implementing a control and/or notification action based upon the determined possible automation command and the undesirable condition.
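A derivative value of the kind described above can be approximated by finite differences over successive position samples, e.g. speed as the first derivative of position. The sketch below is a hypothetical illustration; the sample rate, 2D coordinates, and the speed threshold used to flag an undesirable condition are assumptions, not details from the disclosure.

```python
def finite_difference_speed(positions, dt):
    """Approximate speed (first derivative of position) from successive
    2D position samples taken dt seconds apart."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    return speeds

# A person covering 0.5 m per 0.1 s sample is moving at roughly 5 m/s;
# if that exceeds a plausible limit, treat the motion as an undesirable
# condition rather than as an intentional automation command.
SPEED_LIMIT_M_S = 2.0  # hypothetical threshold
speeds = finite_difference_speed([(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)], 0.1)
undesirable = any(s > SPEED_LIMIT_M_S for s in speeds)
print(speeds, undesirable)
```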
Abstract:
The present disclosure generally relates to a method for performing industrial automation control in an industrial automation system. As such, the method may include detecting, via a sensor system, positions and/or motions of a human. The method may then include determining a possible automation command corresponding to the detected positions and/or motions. After determining the possible automation command, the method may implement a control and/or notification action based upon the detected positions and/or motions.
Abstract:
The present disclosure generally relates to a method for performing industrial automation control in an industrial automation system. The method may include detecting, via a sensor system, positions and/or motions of one or more humans and/or one or more objects, and distinguishing, via a programmed computer system, between the one or more humans and the one or more objects based upon the detected positions and/or motions. The method may then include implementing a control and/or notification action based upon the distinction.
Abstract:
The present disclosure generally relates to a method for performing industrial automation control in an industrial automation system. The method may include detecting, via a sensor system, positions and/or motions of a human. The method may then determine a possible automation command corresponding to the detected positions and/or motions, receive an automation system signal from at least one additional control component, and implement a control and/or notification action based upon the determined possible automation command and the received automation system signal.
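The combination of a gesture-derived command with a signal from an additional control component resembles an interlock: act only when both agree, and notify otherwise. A minimal sketch of that decision logic follows; the function, command names, and return convention are hypothetical, not specified by the disclosure.

```python
def resolve_action(gesture_command, system_signal_ok):
    """Combine a possible automation command (from detected positions or
    motions) with an automation system signal from another control
    component: execute only when both agree, otherwise notify."""
    if gesture_command is None:
        return ("no_action", None)
    if system_signal_ok:
        return ("control", gesture_command)
    return ("notify", f"command '{gesture_command}' blocked by system signal")

print(resolve_action("stop_conveyor", True))
print(resolve_action("stop_conveyor", False))
```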
Abstract:
A tangible, non-transitory, computer-readable medium includes instructions that, when executed by processing circuitry, are configured to cause the processing circuitry to receive sensory datasets associated with an industrial automation system, determine context information based on a sensory dataset and representative of an environmental condition, predict an intent of a user to complete a task associated with the industrial automation system based on the sensory datasets and the context information, present first output representative data via an extended reality device based on the intent and a setting, the setting including a data presentation format for presenting the sensory datasets, receive inputs indicative of changes to the data presentation format, present second output representative data via the extended reality device in response to receiving the inputs, and update the setting based on the inputs and historical data indicative of users changing the data presentation format of the first output representative data.
Abstract:
A tangible, non-transitory, computer-readable medium includes instructions. The instructions, when executed by processing circuitry, are configured to cause the processing circuitry to receive a plurality of sensory datasets associated with an industrial automation system from a plurality of sensors, categorize each sensory dataset of the plurality of sensory datasets into one or more sensory dataset categories of a plurality of sensory dataset categories, determine context information associated with the plurality of sensory datasets, the context information being representative of an environmental condition associated with an extended reality device, the industrial automation system, or both, determine a priority of each sensor dataset category of the plurality of sensory dataset categories based on the context information, determine output representative data to be presented by the extended reality device based on the plurality of sensory datasets and the priority, and instruct the extended reality device to present the output representative data.
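The context-driven prioritization described above can be sketched as a ranking function that boosts dataset categories relevant to the current environmental condition. The category names, context flags, and boost rules below are hypothetical illustrations; the disclosure does not specify a particular scheme.

```python
def order_categories_by_context(categories, context):
    """Rank sensory dataset categories for presentation by an extended
    reality device, boosting categories made more relevant by the
    current environmental condition (hypothetical rules)."""
    # Baseline priority follows the given order: earlier = higher.
    priority = {c: rank for rank, c in enumerate(reversed(categories))}
    if context.get("high_noise"):
        # Audio is hard to hear, so surface audio data visually first.
        priority["audio"] = priority.get("audio", 0) + 10
    if context.get("low_visibility"):
        priority["thermal"] = priority.get("thermal", 0) + 10
    return sorted(categories, key=lambda c: priority[c], reverse=True)

cats = ["video", "audio", "thermal", "vibration"]
print(order_categories_by_context(cats, {"high_noise": True}))
```

The device would then present the output representative data for the top-ranked categories first, re-ranking as the context information changes.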
Abstract:
A non-transitory computer-readable medium includes computer-executable instructions that, when executed by at least one processor, are configured to cause the at least one processor to receive an inquiry from a training system, in which the inquiry includes a request for assistance to perform a first operation, retrieve a training profile for the first operation from a database based on the inquiry, and transmit the training profile to the training system, in which the training system is configured to present image data, audio data, or both regarding the first operation based on the training profile. The computer-executable instructions are also configured to cause the at least one processor to receive variant feedback from the training system, generate an updated training profile based on the variant feedback, and store the updated training profile in the database.