Abstract:
A computer-implemented method, computer program product, and system for identifying and handling slowly changing dimension (SCD) attributes for use with an Extract, Transform, Load (ETL) process, comprising: importing a data model for dimensional data into a data integration system, where the dimensional data comprises a plurality of attributes; identifying, via a data discovery analyzer, one or more attributes in the data model as SCD attributes; importing the identified SCD attributes into the data integration system; selecting a data source comprising dimensional data; automatically generating an ETL job for the dimensional data utilizing the imported SCD attributes; and executing the automatically generated ETL job to extract the dimensional data from the data source and load the dimensional data into the imported SCD attributes in a target data system.
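A minimal sketch of the Type 2 SCD behavior this abstract alludes to, assuming an in-memory dimension table and a hypothetical apply_scd2 helper; the patent does not prescribe this implementation:

```python
from datetime import date

# Hypothetical illustration of Type 2 slowly-changing-dimension handling:
# when a tracked (SCD) attribute changes, the current row is expired and a
# new versioned row is inserted. The in-memory "table" and all names here
# are placeholders for illustration only.

def apply_scd2(dimension_rows, incoming, key, scd_attrs, today=None):
    """Merge incoming records into a dimension, versioning on SCD attribute changes."""
    today = today or date.today()
    current = {r[key]: r for r in dimension_rows if r["is_current"]}
    for rec in incoming:
        existing = current.get(rec[key])
        if existing is None:
            # New member: insert as the first current version.
            dimension_rows.append({**rec, "valid_from": today,
                                   "valid_to": None, "is_current": True})
        elif any(existing[a] != rec[a] for a in scd_attrs):
            # A tracked attribute changed: expire the old row, add a new one.
            existing["valid_to"] = today
            existing["is_current"] = False
            dimension_rows.append({**rec, "valid_from": today,
                                   "valid_to": None, "is_current": True})
    return dimension_rows

dim = [{"cust_id": 1, "city": "Austin", "valid_from": date(2020, 1, 1),
        "valid_to": None, "is_current": True}]
apply_scd2(dim, [{"cust_id": 1, "city": "Denver"}], "cust_id", ["city"])
print(dim)  # two rows for cust_id 1: expired Austin row, current Denver row
```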
Abstract:
A computer-implemented method, computer program product, and system for supporting star and snowflake data schemas for use with an Extract, Transform, Load (ETL) process, comprising: selecting a data source comprising dimensional data, where the dimensional data comprises at least one source table comprising at least one source column; importing a data model for the dimensional data into a data integration system; analyzing the imported data model to select a star or snowflake target data schema comprising target dimensions and target facts; generating a meta-model representation by mapping at least one source table or source column to each target fact and target dimension; automatically converting the meta-model representation into one or more ETL jobs; and executing the ETL jobs to extract the dimensional data from the data source and load the dimensional data into the selected target data schema in a target data system.
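As a rough illustration of converting a meta-model representation into ETL jobs, the sketch below maps source columns to star-schema targets and emits one load statement per mapping; the meta_model shape and generate_etl_sql are assumptions, not the patented representation:

```python
# Illustrative sketch only: a tiny "meta-model" mapping source tables and
# columns onto star-schema dimensions and facts, plus a generator that turns
# each mapping into a SQL INSERT...SELECT load statement.

meta_model = {
    "dimensions": {
        "dim_product": {"source": "src_orders",
                        "columns": {"product_id": "prod_id", "name": "prod_name"}},
    },
    "facts": {
        "fact_sales": {"source": "src_orders",
                       "columns": {"product_key": "prod_id", "amount": "total"}},
    },
}

def generate_etl_sql(model):
    """Convert each dimension/fact mapping into one load statement."""
    jobs = []
    for kind in ("dimensions", "facts"):
        for target, spec in model[kind].items():
            tgt_cols = ", ".join(spec["columns"])
            src_cols = ", ".join(spec["columns"].values())
            jobs.append(f"INSERT INTO {target} ({tgt_cols}) "
                        f"SELECT DISTINCT {src_cols} FROM {spec['source']};")
    return jobs

for stmt in generate_etl_sql(meta_model):
    print(stmt)
```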
Abstract:
Techniques are provided for running an Extract, Transform, Load (ETL) job in parallel on one or more processors, where the ETL job operates on an extensible markup language (XML) document. The techniques include: receiving an XML document input; identifying a node in the XML document at which partitioning is to begin; sending partition information to each respective processor; performing a shallow parsing of the XML document in parallel on the one or more processors, wherein each processor performs shallow parsing from the identified partition node until it reaches its assigned partition; using the shallow parsing to generate that partition of the input XML document, wherein each processor generates a different partition of the same XML document; and sending each partition in streaming format to an ETL job instance.
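A loose sketch of the partitioning idea, with simple regular-expression scanning standing in for shallow parsing; partition_xml is a hypothetical name, and the example assumes non-nested record elements:

```python
import re

# Rough sketch: each worker shallow-scans the raw XML text for occurrences
# of the chosen partition element (here <record>) and claims only its own
# slice, so several ETL job instances can consume one document. Real shallow
# parsing and the streaming hand-off are far more involved.

def partition_xml(xml_text, tag, worker_id, n_workers):
    """Return the slice of <tag> elements assigned to this worker."""
    # Shallow scan: locate element boundaries without building a DOM.
    # Assumes <tag> elements do not nest inside one another.
    spans = [m.span() for m in re.finditer(
        rf"<{tag}\b.*?</{tag}>", xml_text, flags=re.DOTALL)]
    # Round-robin assignment: worker i takes elements i, i+n, i+2n, ...
    mine = spans[worker_id::n_workers]
    return "".join(xml_text[s:e] for s, e in mine)

doc = ("<root><record>a</record><record>b</record>"
       "<record>c</record><record>d</record></root>")
for w in range(2):
    print(w, partition_xml(doc, "record", w, 2))
# worker 0 gets records a and c; worker 1 gets b and d
```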
Abstract:
Methods and arrangements for extracting tuples from a streaming XML document. A query twig is applied to the XML document stream, tuples are extracted from the stream based on the query twig, and the quantity of extracted tuples is limited by foregoing extraction of duplicate tuples and extraction of tuples that do not satisfy the query twig criteria.
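A hedged sketch of twig-driven tuple extraction over a stream, using Python's standard iterparse with a hard-coded stand-in for the query twig; the predicate, tags, and function name are illustrative assumptions:

```python
import io
import xml.etree.ElementTree as ET

# Sketch: extract (author, title) tuples from a streaming document, with a
# fixed pattern standing in for a query twig, discarding duplicate tuples and
# tuples that fail the twig's predicate. Real twig matching is more general.

def extract_tuples(stream, predicate):
    seen = set()
    for _, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "book":  # twig stand-in: /catalog/book[author][title]
            tup = (elem.findtext("author"), elem.findtext("title"))
            if None not in tup and predicate(tup) and tup not in seen:
                seen.add(tup)   # forgo duplicate extraction
                yield tup
            elem.clear()        # keep memory flat while streaming

xml = ("<catalog><book><author>Date</author><title>SQL</title></book>"
       "<book><author>Date</author><title>SQL</title></book>"
       "<book><author>Kimball</author><title>DW Toolkit</title></book></catalog>")
for t in extract_tuples(io.StringIO(xml), lambda t: len(t[1]) > 2):
    print(t)  # each distinct qualifying tuple appears exactly once
```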
Abstract:
A method, computer program product, and system for enabling parallel processing of an XML document without pre-parsing, utilizing metadata associated with the XML document and created at the same time as the XML document. The metadata is used to generate partitions of the XML document at the time of parallel processing, without requiring system-intensive pre-parsing.
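A minimal sketch of the write-time-metadata idea, assuming the metadata is simply a list of offsets recorded as the document is serialized; the names and metadata format are illustrative, not the patented design:

```python
# Sketch: while the XML document is written, record the offset of each
# top-level element; a later parallel run can split the document at those
# offsets directly, with no pre-parse pass. Character offsets stand in for
# byte offsets here (i.e., single-byte characters are assumed).

def write_with_metadata(records):
    """Serialize records to XML and capture each element's offset."""
    parts, offsets, pos = ["<root>"], [], len("<root>")
    for r in records:
        elem = f"<item>{r}</item>"
        offsets.append(pos)          # metadata captured at write time
        parts.append(elem)
        pos += len(elem)
    parts.append("</root>")
    return "".join(parts), offsets

def read_partition(xml_text, offsets, worker_id, n_workers):
    """Use the metadata to jump straight to this worker's slice."""
    bounds = offsets + [len(xml_text) - len("</root>")]
    mine = range(worker_id, len(offsets), n_workers)
    return "".join(xml_text[bounds[i]:bounds[i + 1]] for i in mine)

doc, meta = write_with_metadata(["a", "b", "c", "d"])
for w in range(2):
    print(w, read_partition(doc, meta, w, 2))
# worker 0 reads items a and c; worker 1 reads b and d, without pre-parsing
```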