-
Publication No.: US11789779B2
Publication Date: 2023-10-17
Application No.: US17188901
Filing Date: 2021-03-01
CPC Classification: G06F9/5038, G06F9/4887, G06N5/04, G06N20/00
Abstract: Systems, computer program products, and methods are described herein for monitoring and automatically controlling batch processing. The present invention may be configured to receive a plurality of data processing requests and determine a processing plan for the plurality of data processing requests. The present invention may be configured to provide, to processing applications and based on the processing plan, actions for performance by the processing applications to complete the plurality of data processing requests. The present invention may be configured to predict, while the processing applications are performing the actions and using a completion time predicting machine learning model, completion times for the plurality of data processing requests.
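The abstract does not disclose the scheduling logic or the model itself. The following is a minimal Python sketch of how a processing plan and a completion-time predictor could fit together; the names (`ProcessingRequest`, `build_processing_plan`, `CompletionTimeModel`), the priority/size features, and the linear fit are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProcessingRequest:
    request_id: str
    size_mb: float          # payload size used as a crude predictor feature (assumed)
    priority: int           # lower number = higher priority (assumed)

def build_processing_plan(requests: List[ProcessingRequest]) -> List[ProcessingRequest]:
    """Order requests by priority, then by size, to form a simple processing plan."""
    return sorted(requests, key=lambda r: (r.priority, r.size_mb))

class CompletionTimeModel:
    """Toy stand-in for the completion-time predicting machine learning model:
    a linear fit of observed run minutes against payload size."""
    def __init__(self) -> None:
        self.slope = 0.0
        self.intercept = 0.0

    def fit(self, sizes: List[float], minutes: List[float]) -> None:
        n = len(sizes)
        mean_x = sum(sizes) / n
        mean_y = sum(minutes) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, minutes))
        var = sum((x - mean_x) ** 2 for x in sizes) or 1.0
        self.slope = cov / var
        self.intercept = mean_y - self.slope * mean_x

    def predict(self, size_mb: float) -> float:
        return self.intercept + self.slope * size_mb

# Fit on hypothetical historical runs, then estimate completion times for a new plan.
model = CompletionTimeModel()
model.fit(sizes=[100, 250, 500], minutes=[12, 27, 55])
plan = build_processing_plan([
    ProcessingRequest("req-1", size_mb=320, priority=1),
    ProcessingRequest("req-2", size_mb=80, priority=2),
])
for req in plan:
    print(req.request_id, round(model.predict(req.size_mb), 1), "min (estimated)")
```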
-
Publication No.: US20230239236A1
Publication Date: 2023-07-27
Application No.: US17583634
Filing Date: 2022-01-25
Inventors: Nagendra B. Grandhye, Venugopala Rao Randhi, Vijaya Kumar Vegulla, Rama Venkata S. Kavali, Damodarrao Thakkalapelli
Abstract: A system accesses a set of devices transferring a plurality of data elements from a source device to a destination device. The system determines that a first subset of data elements from among the plurality of data elements is transformed in a first subset of devices. The system determines that a second subset of data elements from among the plurality of data elements is transformed in a second subset of devices. The system splits the plurality of data elements into the first subset of data elements and the second subset of data elements. The system communicates the first subset of data elements using a first transfer path through the first subset of devices. The system communicates the second subset of data elements using a second transfer path through the second subset of devices.
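A minimal sketch of the splitting step, assuming a hypothetical mapping from element type to the transfer path whose devices transform it; the `TRANSFORMED_ON` table and element types are invented for illustration and are not from the patent.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Hypothetical mapping: which transfer path's devices transform each element type.
TRANSFORMED_ON = {
    "customer_record": "path_a",   # transformed by the first subset of devices
    "audit_event": "path_b",       # transformed by the second subset of devices
}

def split_by_transfer_path(elements: List[Tuple[str, dict]]) -> Dict[str, List[dict]]:
    """Split the data elements into per-path batches so each batch only
    traverses the devices that actually transform it."""
    batches: Dict[str, List[dict]] = defaultdict(list)
    for element_type, payload in elements:
        batches[TRANSFORMED_ON[element_type]].append(payload)
    return batches

batches = split_by_transfer_path([
    ("customer_record", {"id": 1}),
    ("audit_event", {"id": 2}),
    ("customer_record", {"id": 3}),
])
for path, items in batches.items():
    print(f"send {len(items)} element(s) over {path}")
```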
-
Publication No.: US20220222213A1
Publication Date: 2022-07-14
Application No.: US17149115
Filing Date: 2021-01-14
IPC Classification: G06F16/178, G06F16/11, G06F16/174
Abstract: Aspects of the disclosure relate to management of databases in different server environments. In particular, various aspects of this disclosure relate to correction, synchronization, and/or migration of databases between different database servers. A feed file that is rejected from loading in a database associated with a source server may be prioritized in a destination server. A feed file hierarchy of the rejected feed file may be determined, and the destination server may process loading of the rejected feed file to a database based on the determined feed file hierarchy. Any corrections applied at the destination server may also be applied at the source server.
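The abstract leaves the prioritization scheme unspecified. Below is a minimal, hypothetical Python sketch of one way a destination server could order rejected feeds ahead of new arrivals while respecting a feed file hierarchy; the `QueuedFeed` fields and sort key are assumptions for illustration.

```python
import heapq
from dataclasses import dataclass, field
from typing import List

@dataclass(order=True)
class QueuedFeed:
    depth: int                      # position in the feed file hierarchy (parents first)
    name: str = field(compare=False)
    rejected_at_source: bool = field(compare=False, default=False)

def enqueue_for_destination(feeds: List[QueuedFeed]) -> List[QueuedFeed]:
    """Load parents before children; at the same depth, feeds rejected at the
    source are processed ahead of newly arriving feeds."""
    heap: List[tuple] = []
    for seq, feed in enumerate(feeds):
        # Sort key: hierarchy depth, then rejected-first, then arrival order.
        heapq.heappush(heap, (feed.depth, not feed.rejected_at_source, seq, feed))
    return [heapq.heappop(heap)[-1] for _ in range(len(heap))]

queue = enqueue_for_destination([
    QueuedFeed(depth=2, name="child_feed.csv"),
    QueuedFeed(depth=1, name="parent_feed.csv", rejected_at_source=True),
])
print([f.name for f in queue])   # parent_feed.csv loads first
```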
-
Publication No.: US11176088B2
Publication Date: 2021-11-16
Application No.: US16985326
Filing Date: 2020-08-05
Abstract: Aspects described herein may relate to a data processing engine that executes on a computing device in order to store data from one or more feed files, which may be heterogeneous, to a destination data structure on a designated computing device. Because the files may be huge in size, it is important that the files be stored in a manner that reduces the time to move the data and supports an efficient mechanism for recovering from errors. A feed file may be dynamically partitioned into groups of contiguous rows based on a dynamic partitioning key, where data chunks are loaded into a plurality of clone tables and subsequently moved into a destination data structure. The data processing engine may determine a row size for the clone tables and request resources from a computing cloud.
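A minimal sketch of the partitioning idea, assuming a hypothetical `partition_feed` generator that groups contiguous rows by a partitioning key and caps each chunk; the naming scheme and chunk size are illustrative, not the patent's.

```python
from typing import Dict, Iterator, List, Tuple

def partition_feed(rows: List[Dict], partition_key: str, max_rows: int) -> Iterator[Tuple[str, List[Dict]]]:
    """Yield (clone_table_name, chunk) pairs: contiguous rows sharing the
    partitioning-key value are grouped, and each group is capped at max_rows
    so a failed chunk can be reloaded without redoing the whole feed."""
    chunk: List[Dict] = []
    current = None
    index = 0
    for row in rows:
        if chunk and (row[partition_key] != current or len(chunk) >= max_rows):
            yield f"clone_{index:04d}", chunk
            index += 1
            chunk = []
        current = row[partition_key]
        chunk.append(row)
    if chunk:
        yield f"clone_{index:04d}", chunk

feed = [{"region": "east", "amount": i} for i in range(5)] + \
       [{"region": "west", "amount": i} for i in range(3)]
for table, chunk in partition_feed(feed, partition_key="region", max_rows=4):
    # In the described engine these chunks would be bulk-loaded into clone
    # tables and then moved into the destination data structure.
    print(table, len(chunk), "rows")
```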
-
Publication No.: US11055187B2
Publication Date: 2021-07-06
Application No.: US16238638
Filing Date: 2019-01-03
Abstract: A method includes receiving a plurality of data processing requests and assigning each data processing request to a group based on the source of the data. The method further includes generating a primary processing stack indicating a queue for processing the first data, wherein: the primary processing stack comprises a plurality of layers; each layer comprises a plurality of slices, wherein each slice represents a portion of the first data of at least one data processing request; and the plurality of slices are arranged within each layer based at least on the priority indicator corresponding to the first data that each slice represents. The method further includes receiving resource information about a plurality of servers, assigning each slice of the primary processing stack to one of the servers, and sending processing instructions comprising an identification of each slice of the primary processing stack assigned to the respective server.
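A minimal sketch of the layer/slice arrangement and server assignment, under the assumption of a simple round-robin assignment; the `Slice` fields, layer size, and rotation are illustrative stand-ins for the resource-information-driven assignment the abstract describes.

```python
from dataclasses import dataclass
from itertools import cycle
from typing import Dict, List

@dataclass
class Slice:
    request_id: str
    priority: int        # lower value = processed earlier (assumed priority indicator)
    rows: int

def build_primary_stack(slices: List[Slice], slices_per_layer: int) -> List[List[Slice]]:
    """Arrange slices into layers, with higher-priority slices placed first
    so they reach servers earlier."""
    ordered = sorted(slices, key=lambda s: s.priority)
    return [ordered[i:i + slices_per_layer] for i in range(0, len(ordered), slices_per_layer)]

def assign_to_servers(stack: List[List[Slice]], servers: List[str]) -> Dict[str, List[Slice]]:
    """Round-robin assignment of slices to servers; real resource information
    (load, capacity) would replace the simple rotation used here."""
    assignment: Dict[str, List[Slice]] = {name: [] for name in servers}
    rotation = cycle(servers)
    for layer in stack:
        for slc in layer:
            assignment[next(rotation)].append(slc)
    return assignment

stack = build_primary_stack(
    [Slice("grp-a", priority=1, rows=10_000), Slice("grp-b", priority=3, rows=2_000),
     Slice("grp-c", priority=2, rows=5_000)],
    slices_per_layer=2,
)
for server, work in assign_to_servers(stack, ["srv-1", "srv-2"]).items():
    print(server, [s.request_id for s in work])
```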
-
Publication No.: US20200371991A1
Publication Date: 2020-11-26
Application No.: US16985326
Filing Date: 2020-08-05
Abstract: Aspects described herein may relate to a data processing engine that executes on a computing device in order to store data from one or more feed files, which may be heterogeneous, to a destination data structure on a designated computing device. Because the files may be huge in size, it is important that the files be stored in a manner that reduces the time to move the data and supports an efficient mechanism for recovering from errors. A feed file may be dynamically partitioned into groups of contiguous rows based on a dynamic partitioning key, where data chunks are loaded into a plurality of clone tables and subsequently moved into a destination data structure. The data processing engine may determine a row size for the clone tables and request resources from a computing cloud.
-
Publication No.: US20240346016A1
Publication Date: 2024-10-17
Application No.: US18648679
Filing Date: 2024-04-29
CPC Classification: G06F16/2386, G06F9/4843, G06F9/5077, G06F16/21, G06F16/2365, G06N10/40, G06N10/60, G06F2209/501, G06F2209/5019
Abstract: A quantum computing platform may establish a smart contract approval and management model, including rules for automated validation and rules for smart contract approver validation. The computing platform may receive, from a workload processing system, a data feed indicating current workload information. The computing platform may generate, based on the data feed, a first container configuration output defining a batch configuration for use in processing the data feed. The computing platform may validate, using the rules for automated validation, the first container configuration output. The computing platform may send, to the workload processing system, the first container configuration output and one or more commands directing the workload processing system to process the data feed using the batch configuration defined by the first container configuration output, which may cause the workload processing system to process the data feed using the batch configuration.
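The quantum aspects are not reproducible here; the sketch below only illustrates the automated-validation step classically, checking a generated batch configuration against a list of rules. The `BatchConfiguration` fields and the three rules are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class BatchConfiguration:
    containers: int
    memory_gb_per_container: int
    max_runtime_minutes: int

# Hypothetical automated-validation rules; approver-validation rules would
# route the configuration to a human approver instead of a predicate.
AUTOMATED_RULES: List[Tuple[str, Callable[[BatchConfiguration], bool]]] = [
    ("container count within quota", lambda c: 1 <= c.containers <= 64),
    ("memory per container capped", lambda c: c.memory_gb_per_container <= 32),
    ("runtime bounded", lambda c: c.max_runtime_minutes <= 240),
]

def validate(config: BatchConfiguration) -> List[str]:
    """Return the names of any rules the generated configuration violates."""
    return [name for name, rule in AUTOMATED_RULES if not rule(config)]

config = BatchConfiguration(containers=12, memory_gb_per_container=16, max_runtime_minutes=90)
violations = validate(config)
if not violations:
    print("configuration valid; send to workload processing system")
else:
    print("escalate to smart contract approvers:", violations)
```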
-
Publication No.: US12112059B2
Publication Date: 2024-10-08
Application No.: US18542998
Filing Date: 2023-12-18
Inventors: Rama Venkata S. Kavali, Venugopala Rao Randhi, Damodarrao Thakkalapelli, Vijaya Kumar Vegulla, Rajasekhar Maramreddy
CPC Classification: G06F3/0655, G06F3/0604, G06F3/067, G06N3/08
Abstract: A device configured to identify a first link between a value of a first data element in a first plurality of data elements and values of a first set of data elements in a second plurality of data elements and to remove the first link between the first data element and the first set of data elements. The device is further configured to input the data elements into a machine learning model that is configured to output a second link between the first data element and a second set of data elements. The device is further configured to create an entry in a relationship table that identifies the first data element and the second set of data elements. The device is further configured to generate a data stream with the first data element and the second set of data elements and to output the data stream.
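A minimal sketch of the link-removal, relinking, and relationship-table steps. The token-overlap heuristic stands in for the machine learning model, and all names (`remove_link`, `suggest_links`, the sample elements) are assumptions for illustration.

```python
from typing import Dict, List, Set

def remove_link(links: Dict[str, Set[str]], element: str) -> None:
    """Drop the existing (first) link between the element and its related set."""
    links.pop(element, None)

def suggest_links(element_value: str, candidates: Dict[str, str]) -> Set[str]:
    """Toy stand-in for the machine learning model: relate elements whose
    values share at least one token with the input element's value."""
    tokens = set(element_value.lower().split())
    return {name for name, value in candidates.items()
            if tokens & set(value.lower().split())}

relationship_table: List[Dict[str, object]] = []

links = {"account_name": {"legacy_field_1"}}
remove_link(links, "account_name")
second_set = suggest_links("ACME Savings", {
    "statement_title": "ACME monthly statement",
    "branch_code": "0041",
})
# Record the new (second) link in the relationship table.
relationship_table.append({"element": "account_name", "related": sorted(second_set)})

# Generate a data stream carrying the element together with its new related set.
data_stream = [{"account_name": "ACME Savings", "related": sorted(second_set)}]
print(relationship_table, data_stream)
```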
-
Publication No.: US12093145B2
Publication Date: 2024-09-17
Application No.: US18475033
Filing Date: 2023-09-26
CPC Classification: G06F11/1469, G06F9/4881, G06F11/1451, G06F11/1456
Abstract: A system includes one or more source memory devices of a source computing environment that store a database comprising data files, wherein each of a plurality of data tables of the source computing environment includes data from one or more of the data files; one or more target memory devices of a target computing environment; and at least one processor configured to receive a command to copy data files from the source memory devices to the target memory devices, detect that the target memory devices have insufficient memory, calculate a value coefficient for each data table, assign a priority index to each data table based on the value coefficient, order the data files in a copy queue based on the priority index of the data tables, and copy the ordered data files to the target memory devices.
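The abstract does not define the value coefficient. The sketch below assumes a simple reads-per-gigabyte coefficient and a greedy copy queue bounded by the target capacity; both are illustrative choices, not the patent's formula.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DataTable:
    name: str
    size_gb: float
    reads_per_day: int     # hypothetical usage signal behind the value coefficient
    files: List[str]

def value_coefficient(table: DataTable) -> float:
    """Toy coefficient: value per unit of target memory consumed."""
    return table.reads_per_day / max(table.size_gb, 0.001)

def build_copy_queue(tables: List[DataTable], target_capacity_gb: float) -> List[str]:
    """Assign priority by descending value coefficient and queue files until
    the target environment's remaining capacity is exhausted."""
    queue: List[str] = []
    remaining = target_capacity_gb
    for table in sorted(tables, key=value_coefficient, reverse=True):
        if table.size_gb <= remaining:
            queue.extend(table.files)
            remaining -= table.size_gb
    return queue

tables = [
    DataTable("orders", size_gb=40, reads_per_day=900, files=["orders_1.dat", "orders_2.dat"]),
    DataTable("archive", size_gb=120, reads_per_day=30, files=["archive_1.dat"]),
]
print(build_copy_queue(tables, target_capacity_gb=100))   # orders files copied first
```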
-
Publication No.: US20240160512A1
Publication Date: 2024-05-16
Application No.: US18418458
Filing Date: 2024-01-22
Inventors: Vijaya Kumar Vegulla, Rama Venkata S. Kavali, Venugopala Rao Randhi, Damodarrao Thakkalapelli
CPC Classification: G06F11/0754, G06F11/0715, G06F11/0793, G06F11/3452
Abstract: Systems, computer program products, and methods are described herein for evaluating, validating, correcting, and loading data feeds based on artificial intelligence input. The present invention may be configured to receive a data feed from a source for loading to a target data structure, analyze, based on historical feed data, metadata of the data feed to determine a likelihood of the data feed failing to load, and determine whether the likelihood of the data feed failing to load satisfies a threshold. The present invention may be configured to load the data feed to the target data structure, determine, after loading the data feed to the target data structure, whether the data feed failed to load, and either correct errors in the data feed or add error-containing portions of the data feed to a failed data log.
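A minimal sketch of the likelihood check against a threshold, assuming a historical-failure-rate heuristic in place of the AI model described in the abstract; the metadata fields, the 0.5 threshold, and the defer-to-log behavior are illustrative assumptions.

```python
from typing import Dict, List, Tuple

FAILURE_THRESHOLD = 0.5   # hypothetical cutoff for "likely to fail"

def failure_likelihood(metadata: Dict[str, object],
                       historical: List[Tuple[Dict[str, object], bool]]) -> float:
    """Toy stand-in for the AI component: the historical failure rate of feeds
    arriving from the same source with the same schema version."""
    matches = [failed for meta, failed in historical
               if meta["source"] == metadata["source"]
               and meta["schema_version"] == metadata["schema_version"]]
    return sum(matches) / len(matches) if matches else 0.0

def handle_feed(metadata: Dict[str, object], rows: List[Dict],
                historical: List[Tuple[Dict[str, object], bool]],
                failed_log: List[Dict]) -> List[Dict]:
    """Load rows whose likelihood is below the threshold; otherwise send the
    error-prone portions to the failed data log for later correction."""
    if failure_likelihood(metadata, historical) >= FAILURE_THRESHOLD:
        failed_log.extend(rows)
        return []
    return rows   # in the described system, loaded to the target data structure

history = [({"source": "vendor_a", "schema_version": 3}, True),
           ({"source": "vendor_a", "schema_version": 3}, False)]
failed_log: List[Dict] = []
loaded = handle_feed({"source": "vendor_a", "schema_version": 3},
                     [{"id": 1}], history, failed_log)
print(len(loaded), "row(s) loaded,", len(failed_log), "row(s) deferred")
```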