Abstract:
A device management system comprises a data collection server and an edge server that includes an adapter for transferring a device-related data set of an industrial device. Upon receipt of the device-related data set, when the data collection server cannot identify a row in a devices list table whose adapter identification information and device identification information both match those of the device-related data set, but can identify the location name of a row in the devices list table whose adapter identification information alone matches, the data collection server is configured to generate and assign a new unique device identification ID based on the identified location name, a locations list table, and the devices list table. The data collection server associates the new unique device identification ID with the received device-related data set and then stores the device-related data set in a data accumulation section.
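A minimal sketch of this ID-assignment logic is given below, assuming simple in-memory list-of-dictionary tables and a hypothetical naming scheme that derives the new ID from the location name and a per-location count; the table schemas and the ID format are assumptions, not taken from the abstract.

```python
# Sketch: assign a device ID when only the adapter matches a known row.
# Table schemas and the ID format are illustrative assumptions.

devices_list = [
    {"adapter_id": "ADP-01", "device_id": "LINE1-DEV-001", "location": "LINE1"},
]
locations_list = [
    {"location": "LINE1", "prefix": "LINE1"},
]
data_accumulation = []  # stands in for the data accumulation section

def assign_device_id(adapter_id: str, device_id: str, payload: dict) -> str:
    # Case 1: a row matching both adapter ID and device ID already exists.
    for row in devices_list:
        if row["adapter_id"] == adapter_id and row["device_id"] == device_id:
            data_accumulation.append({"device_id": device_id, **payload})
            return device_id

    # Case 2: only the adapter ID matches -> derive a new unique ID from the
    # location of the matching row, the locations list, and the devices list.
    for row in devices_list:
        if row["adapter_id"] == adapter_id:
            location = row["location"]
            prefix = next(l["prefix"] for l in locations_list
                          if l["location"] == location)
            count = sum(1 for d in devices_list if d["location"] == location)
            new_id = f"{prefix}-DEV-{count + 1:03d}"
            devices_list.append({"adapter_id": adapter_id,
                                 "device_id": new_id,
                                 "location": location})
            data_accumulation.append({"device_id": new_id, **payload})
            return new_id

    raise LookupError("adapter not registered in the devices list table")

print(assign_device_id("ADP-01", "UNKNOWN", {"temperature": 61.5}))
```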
Abstract:
This invention is intended to make data synchronization more efficient. Disclosed is a data synchronization system comprising: an all-records fetching unit that fetches all records of synchronization target data, i.e., data specified as a target of synchronization, from a first device that is the source of synchronization; one or more storage units that prestore synchronization destination data, namely, the data currently retained on a second device that is the destination of synchronization, and that store the synchronization target data fetched by the all-records fetching unit; and a difference extraction unit that identifies, using the synchronization destination data and the synchronization target data, the differences to be reflected in the data on the second device, reflects the identified differences in the data on the second device, and, after the reflection, updates the synchronization destination data based on the synchronization target data.
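As a rough illustration of this difference-extraction flow, the sketch below keys records by a hypothetical "id" field and treats both data sets as in-memory dictionaries; the record format and the apply step are assumptions.

```python
# Sketch: extract differences between fetched target data and the stored
# snapshot of the destination data, apply them, then refresh the snapshot.
# The record shape (keyed by "id") is an illustrative assumption.

def synchronize(target_records: list[dict], destination_snapshot: dict[int, dict],
                apply_to_second_device) -> dict[int, dict]:
    target = {r["id"]: r for r in target_records}

    # Records that are new or changed compared with the stored snapshot.
    upserts = [r for key, r in target.items()
               if destination_snapshot.get(key) != r]
    # Records present in the snapshot but no longer in the target data.
    deletes = [key for key in destination_snapshot if key not in target]

    apply_to_second_device(upserts, deletes)   # reflect only the differences

    # After the reflection, update the snapshot from the target data.
    return target

snapshot = {1: {"id": 1, "name": "valve"}, 2: {"id": 2, "name": "pump"}}
fetched = [{"id": 1, "name": "valve"}, {"id": 3, "name": "fan"}]
snapshot = synchronize(fetched, snapshot,
                       lambda ups, dels: print("upsert", ups, "delete", dels))
```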
Abstract:
A data processing unit required for the data processing is started, and a data processing unit not required for the data processing is stopped, so that part of the data processing settings can be changed or a new data processing setting can be added without stopping the multi-stage data processing. When the multi-stage data processing is executed, a rear-stage data processing unit reads the tag assigned by a front-stage data processing unit to identify the data processing unit that executes the data processing.
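One way to read this tag-based dispatch is sketched below: each front-stage unit attaches a tag naming the rear-stage unit that should process the record, so units can be registered or removed without halting the pipeline. The tag field name and the unit registry are assumptions.

```python
# Sketch: rear-stage units are selected by a tag set in the front stage.
# The "next_unit" tag name and the unit registry are illustrative assumptions.

running_units = {}  # name -> callable; entries can be added or removed at runtime

def register_unit(name, func):
    running_units[name] = func      # "start" a data processing unit

def unregister_unit(name):
    running_units.pop(name, None)   # "stop" a unit no longer required

def front_stage(record: dict) -> dict:
    # Assign a tag that tells the rear stage which unit should handle the data.
    record["next_unit"] = "filter" if record["value"] > 100 else "passthrough"
    return record

def rear_stage(record: dict):
    # Read the tag assigned in the front stage and dispatch accordingly.
    unit = running_units.get(record["next_unit"])
    if unit is not None:
        unit(record)

register_unit("filter", lambda r: print("filtered", r))
register_unit("passthrough", lambda r: print("passed", r))
rear_stage(front_stage({"value": 150}))
unregister_unit("filter")   # settings change without stopping the pipeline
```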
Abstract:
Example implementations described herein are directed to systems and methods for provisioning an edge device via a smart device located in proximity to the edge device, including: obtaining, by the smart device, a photo of a physical identifier of the edge device; determining a location from the smart device; transmitting information indicating the physical identifier and the location to a data server; receiving a set of configuration information; and providing the set of configuration information to the edge device, wherein the edge device is configured based on the set of configuration information independently of communication with the data server.
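The provisioning exchange could look roughly like the sketch below; the message fields, the label-decoding step, and the local hand-off to the edge device are assumptions standing in for the photo of the physical identifier and the smart-device location described above.

```python
# Sketch of the provisioning flow: smart device -> data server -> edge device.
# Function names, message fields, and transports are illustrative assumptions.

def read_physical_identifier(photo_bytes: bytes) -> str:
    # Stand-in for decoding a QR code or serial label from the photo.
    return "EDGE-1234"

def request_configuration(data_server, identifier: str, location: tuple) -> dict:
    # The smart device sends the identifier and its own location to the server.
    return data_server.lookup(identifier, location)

class FakeDataServer:
    def lookup(self, identifier, location):
        # The server selects configuration based on identifier and location.
        return {"device": identifier, "site": location, "broker": "10.0.0.5"}

def provision_edge_device(edge_apply, photo: bytes, location: tuple, server):
    identifier = read_physical_identifier(photo)
    config = request_configuration(server, identifier, location)
    # Hand the configuration to the edge device over a local link; from here on
    # the edge device configures itself without talking to the data server.
    edge_apply(config)

provision_edge_device(lambda cfg: print("edge configured with", cfg),
                      b"<photo>", (35.68, 139.69), FakeDataServer())
```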
Abstract:
Example implementations described herein involve a device rearrangement detection system and method. Example implementations keep the structure of the devices in the manufacturing field, as stored in an information technology (IT) system, up to date with the latest version, even if the IT system is on a network that is isolated from the devices in the manufacturing field.
Abstract:
Example implementations described herein involve a question and answer based interface for the automatic construction of an object detection system. In example implementations, the interface aids in configuring an analytics server to conduct image analytics for a selected camera through the generation of a base framework with glue modules to implement the analytics.
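A possible reading of the question-and-answer flow is sketched below: answers to a fixed set of questions are turned into a small configuration that selects a camera and the glue modules to wire into a base analytics framework. The question texts, module names, and configuration layout are assumptions.

```python
# Sketch: turn Q&A answers into an analytics-server configuration for a camera.
# Question texts, module names, and the config layout are illustrative assumptions.

QUESTIONS = [
    ("camera_id", "Which camera should be analyzed?"),
    ("target", "What object should be detected (e.g. person, forklift)?"),
    ("alert", "Should an alert be raised on detection? (yes/no)"),
]

def build_configuration(answers: dict) -> dict:
    # Base framework plus glue modules chosen from the answers.
    glue_modules = ["video_ingest", "object_detector"]
    if answers["alert"].lower().startswith("y"):
        glue_modules.append("alert_notifier")
    return {
        "camera": answers["camera_id"],
        "detector": {"class": answers["target"]},
        "pipeline": glue_modules,
    }

def ask_all(answer_source: dict) -> dict:
    # In a real interface the user would type answers; here they are canned.
    return {key: answer_source[key] for key, _question in QUESTIONS}

answers = ask_all({"camera_id": "cam-03", "target": "person", "alert": "yes"})
print(build_configuration(answers))
```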
Abstract:
Systems and methods described herein are directed to changing models and analytics algorithms on an edge node from a core server, which can be useful in situations such as optimized factories where the edge node is physically close to data sources, such as within a factory plant. The core server or cloud runs analytics algorithms and models concurrently on data received from edge nodes, and replaces the analytics algorithms and models on the edge node with more accurate ones as applicable.
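A hedged sketch of this replace-when-better loop: the core server evaluates candidate models against the same data received from the edge node and pushes a candidate down only when it scores better than the currently deployed model. The accuracy metric and the deployment step are assumptions.

```python
# Sketch: the core server replaces the edge node's model when a candidate is
# more accurate on data received from that edge node.
# The metric and the deployment mechanism are illustrative assumptions.

def accuracy(model, records) -> float:
    correct = sum(1 for x, label in records if model(x) == label)
    return correct / len(records)

def maybe_replace_edge_model(edge, candidate_models, recent_records):
    current_score = accuracy(edge["model"], recent_records)
    best_model, best_score = edge["model"], current_score
    # Run candidate analytics models concurrently with the edge's current model.
    for model in candidate_models:
        score = accuracy(model, recent_records)
        if score > best_score:
            best_model, best_score = model, score
    if best_model is not edge["model"]:
        edge["model"] = best_model   # stand-in for pushing the model to the edge
        print(f"replaced edge model ({current_score:.2f} -> {best_score:.2f})")

edge_node = {"model": lambda x: x > 0.5}
candidates = [lambda x: x > 0.4, lambda x: x > 0.6]
data = [(0.45, True), (0.55, True), (0.7, True), (0.2, False)]
maybe_replace_edge_model(edge_node, candidates, data)
```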
Abstract:
The relationship between cache servers and backup cache servers is dynamically managed: when a fault arises, a second cache server that is close in terms of distance to the PBR router forwarding traffic to the first cache server at which the fault has arisen is used as a backup cache server. Also, a module or device having functionality as a cache manager and a cache agent is prepared, and, triggered by the detection of a fault in the first cache server, the cache agent automatically changes the traffic forwarding destination of the PBR router that is forwarding traffic to the faulty first cache server to the second cache server that is close in terms of distance to that PBR router.
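A minimal sketch of the failover trigger is given below, assuming the cache manager keeps a distance table between PBR routers and cache servers and that the cache agent can rewrite a router's forwarding destination; the data structures and the rewrite call are assumptions.

```python
# Sketch: on a cache-server fault, redirect the PBR router that was forwarding
# traffic to it toward the nearest healthy cache server.
# The distance table and router-update call are illustrative assumptions.

distance = {                      # (router, cache_server) -> distance
    ("pbr-1", "cache-A"): 1, ("pbr-1", "cache-B"): 2, ("pbr-1", "cache-C"): 5,
}
forwarding = {"pbr-1": "cache-A"}          # current PBR forwarding destinations
healthy = {"cache-A": False, "cache-B": True, "cache-C": True}

def on_fault_detected(failed_server: str):
    # Cache agent: for every router that forwarded to the failed server,
    # pick the closest healthy cache server as the backup and switch to it.
    for router, target in forwarding.items():
        if target != failed_server:
            continue
        backup = min(
            (srv for srv in healthy if healthy[srv] and srv != failed_server),
            key=lambda srv: distance.get((router, srv), float("inf")),
        )
        forwarding[router] = backup        # stand-in for updating the PBR rule
        print(f"{router}: {failed_server} -> {backup}")

on_fault_detected("cache-A")   # prints: pbr-1: cache-A -> cache-B
```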
Abstract:
Metadata is provided flexibly according to an application. A related data extraction apparatus for extracting related data, which is given to data collected from a target system and is related to that data, includes: a configuration data accumulation unit that manages configuration information of the target system; a configuration data input unit that accepts input for registration or update of the configuration information; an application linkage unit that accepts a request for the related data given to the data from an application that analyzes the data; and a related data extraction unit that extracts the related data from the configuration information on the basis of the request.
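The request path could be sketched as below, with the configuration information held as a simple dictionary keyed by sensor ID; the request format and the configuration schema are assumptions.

```python
# Sketch: extract related (meta) data for collected data from stored
# configuration information on request from an application.
# The configuration schema and request fields are illustrative assumptions.

configuration = {}   # configuration data accumulation unit (sensor_id -> info)

def register_configuration(sensor_id: str, info: dict):
    # Configuration data input unit: register or update configuration info.
    configuration[sensor_id] = {**configuration.get(sensor_id, {}), **info}

def extract_related_data(request: dict) -> dict:
    # Related data extraction unit: pull only the requested attributes
    # from the configuration information for the data's source sensor.
    info = configuration.get(request["sensor_id"], {})
    return {key: info[key] for key in request["attributes"] if key in info}

def application_linkage(request: dict) -> dict:
    # Application linkage unit: accept a request from an analytics application.
    return extract_related_data(request)

register_configuration("temp-01", {"line": "L1", "machine": "press-3", "unit": "degC"})
print(application_linkage({"sensor_id": "temp-01", "attributes": ["line", "unit"]}))
```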
Abstract:
A digital twin management system manages a virtual model that represents an actual physical system in a virtual space on a real-time basis. To generate an integrated virtual model by adding a second virtual model to a first virtual model, a processor of the digital twin management system extracts multiple parts that can be used in common by the first virtual model and the second virtual model, generates multiple integrated virtual model candidates by varying the extracted common parts, calculates an evaluation of each of the generated candidates, and outputs configuration information regarding each integrated virtual model candidate and the evaluation of that candidate in association with each other.
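A rough sketch of the candidate-generation step follows: parts common to the two models are found by name, candidates are produced by varying which model each common part is taken from, and every candidate is scored with a placeholder evaluation function. The model representation and the evaluation are assumptions.

```python
# Sketch: build integrated-model candidates by varying which of the common
# parts are taken from each model, then evaluate every candidate.
# The model representation and the evaluation function are assumptions.

from itertools import combinations

def integrate(first: dict, second: dict, evaluate):
    common = sorted(set(first) & set(second))   # parts that can be used in common
    candidates = []
    for size in range(len(common) + 1):
        for from_first in combinations(common, size):
            model = {**second, **first}          # union; first's version by default
            for part in common:
                if part not in from_first:
                    model[part] = second[part]   # switch to the second's version
            candidates.append({
                "configuration": model,
                "evaluation": evaluate(model),   # placeholder evaluation
            })
    return candidates

first_model = {"conveyor": {"length_m": 5}, "robot": {"axes": 6}}
second_model = {"conveyor": {"length_m": 7}, "camera": {"fps": 30}}
for cand in integrate(first_model, second_model, evaluate=lambda m: len(m)):
    print(cand["configuration"], cand["evaluation"])
```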