Data Ingestion 


Data ingestion and synchronization into a big data environment are harder than most people think. Loading large volumes of data at high speed, and managing the incremental ingestion and synchronization of data at scale into a data lake, can present significant technical challenges.

Infoworks automates data ingestion for batch and streaming

No-code ingestion configuration

Our data ingestion tools provide a no-code environment for configuring the ingestion of data from a wide variety of data sources. Infoworks also uses native connectors, when available, to maximize data ingestion speed.


Data type conversion

Data types on relational sources map differently depending on the Hadoop or cloud data storage environment you select. Infoworks handles data type conversions automatically, reducing the errors typical of manual conversion. This automation also makes it easy to move data from the data lake to other consuming systems without recoding data types.
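To make the idea concrete, here is a minimal sketch of a source-to-lake type map in Python. The mappings shown are common conventions for a Hive/Spark-style data lake and are illustrative assumptions only, not Infoworks' actual conversion rules.

```python
# Illustrative sketch only: a simplified source-to-target type map.
# The mappings below are assumptions, not Infoworks' conversion logic.
SOURCE_TO_LAKE_TYPES = {
    "NUMBER(10,0)": "BIGINT",
    "NUMBER(38,10)": "DECIMAL(38,10)",
    "VARCHAR2": "STRING",
    "DATE": "TIMESTAMP",   # many relational DATE types also carry a time part
    "CLOB": "STRING",
}

def convert_type(source_type: str) -> str:
    """Return the target data-lake type for a relational source type."""
    try:
        return SOURCE_TO_LAKE_TYPES[source_type.upper()]
    except KeyError:
        raise ValueError(f"No mapping defined for source type {source_type!r}")

if __name__ == "__main__":
    print(convert_type("varchar2"))  # -> STRING
```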

Scalable, parallelized data ingestion process

Infoworks’ automated process parallelizes the ingestion of data into your data lake, significantly accelerating the loading of large tables within tight ingestion windows, without requiring code development.
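Below is a minimal sketch of the general pattern this describes: splitting a large table into key ranges and ingesting the ranges in parallel. It is a generic illustration, not Infoworks' implementation; `fetch_rows` stands in for a real source query and `write_to_lake` for a real data-lake writer.

```python
# A sketch of range-partitioned parallel ingestion, assuming the source
# table has a numeric primary key `id`. Placeholders, not Infoworks code.
from concurrent.futures import ThreadPoolExecutor

def fetch_rows(lo: int, hi: int) -> list[dict]:
    # Placeholder for: SELECT * FROM source_table WHERE id >= lo AND id < hi
    return [{"id": i} for i in range(lo, hi)]

def write_to_lake(rows: list[dict]) -> int:
    # Placeholder for a partition-aware write into the data lake.
    return len(rows)

def ingest_parallel(min_id: int, max_id: int, partitions: int = 8) -> int:
    step = max(1, (max_id - min_id) // partitions)
    ranges = [(lo, min(lo + step, max_id)) for lo in range(min_id, max_id, step)]
    with ThreadPoolExecutor(max_workers=partitions) as pool:
        counts = pool.map(lambda r: write_to_lake(fetch_rows(*r)), ranges)
    return sum(counts)

if __name__ == "__main__":
    print(ingest_parallel(0, 100_000))  # rows ingested across all partitions
```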

Schema change detection and propagation

When new columns are added to source systems, data ingestion processes often break if they aren’t manually updated prior to the change. Infoworks automatically detects source-side schema changes, adjusts for them, and ingests the new columns into the data lake.
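A conceptual sketch of what schema change detection and propagation involve is shown below: compare the source schema against the lake table's schema and generate DDL for any new columns. The table and column names, and the ALTER syntax, are illustrative assumptions rather than Infoworks' internal logic.

```python
# Detect columns present in the source but missing from the lake table,
# then generate the DDL to add them. Illustrative sketch only.
def detect_new_columns(source_schema: dict, lake_schema: dict) -> dict:
    """Return columns present in the source but missing from the lake table."""
    return {col: typ for col, typ in source_schema.items() if col not in lake_schema}

def propagate(table: str, source_schema: dict, lake_schema: dict) -> list[str]:
    """Generate DDL for newly detected source columns."""
    return [
        f"ALTER TABLE {table} ADD COLUMN {col} {typ}"
        for col, typ in detect_new_columns(source_schema, lake_schema).items()
    ]

if __name__ == "__main__":
    source = {"id": "BIGINT", "name": "STRING", "loyalty_tier": "STRING"}
    lake = {"id": "BIGINT", "name": "STRING"}
    print(propagate("customers", source, lake))
    # ['ALTER TABLE customers ADD COLUMN loyalty_tier STRING']
```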

Sync and merge of incremental data

Infoworks automates log- and query-based change data capture and also manages slowly changing dimensions (Type 1 and Type 2). Infoworks reconciles and merges incremental data at ingestion time with the previously ingested base data. Our data ingestion tool’s continuous merge capability supports fast ingestion and continuous availability of fresh data while keeping the data optimized for downstream query performance.
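The following is a simplified sketch of the reconcile-and-merge idea: applying an incremental change data capture batch to previously ingested base data, keyed by primary key. Production systems do this with distributed merge operations; the in-memory version below is only an illustration and is not Infoworks' implementation.

```python
# Merge an incremental CDC batch (inserts, updates, deletes) into base data.
# Keys, operation names, and record shape are illustrative assumptions.
def merge_incremental(base: dict, cdc_batch: list[dict]) -> dict:
    merged = dict(base)
    for change in cdc_batch:
        key = change["id"]
        if change["op"] == "delete":
            merged.pop(key, None)
        else:  # inserts and updates both upsert the latest row image
            merged[key] = change["row"]
    return merged

if __name__ == "__main__":
    base = {1: {"id": 1, "status": "open"}, 2: {"id": 2, "status": "open"}}
    cdc = [
        {"op": "update", "id": 1, "row": {"id": 1, "status": "closed"}},
        {"op": "delete", "id": 2},
        {"op": "insert", "id": 3, "row": {"id": 3, "status": "open"}},
    ]
    print(merge_incremental(base, cdc))
```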

Streaming data

Infoworks supports both batch and streaming use cases. Configuration of a streaming data flow is done via a simple menu-based interface with no coding required. Infoworks uses Kafka as the underlying streaming engine and can connect to any data source to stream large amounts of data in real time.
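Since Kafka is named as the underlying streaming engine, here is a minimal consumer sketch using the kafka-python library to show what an equivalent hand-coded stream reader looks like. The topic name, broker address, and JSON message format are assumptions, and this is not how Infoworks users interact with the product; streaming flows are configured through the menu-based interface rather than written in code.

```python
# Minimal Kafka consumer sketch (pip install kafka-python).
# Assumes a broker at localhost:9092 and JSON-encoded messages on the topic.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders_stream",                      # assumed topic name
    bootstrap_servers="localhost:9092",   # assumed broker address
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    record = message.value
    # Placeholder for appending the record to the data-lake landing zone.
    print(f"partition={message.partition} offset={message.offset} record={record}")
```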

Time-axis tracking of current data, history, and slowly changing dimensions

As part of the synchronization and merge process, Infoworks tracks slowly changing dimensions (SCD) and automatically keeps a history table of prior-state data, recording the date of each change as well as any errors that occur in the SCD process.
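For readers unfamiliar with the pattern, here is a minimal sketch of SCD Type 2 history tracking: when a tracked attribute changes, the current row is closed out with an end date and a new current row is opened. The column names (`effective_from`, `effective_to`, `is_current`) are common conventions used for illustration, not Infoworks' actual table layout.

```python
# SCD Type 2 sketch: keep every prior version of a record with validity dates.
from datetime import date

def apply_scd2(history: list[dict], key: str, new_value: str, today: date) -> list[dict]:
    for row in history:
        if row["key"] == key and row["is_current"]:
            if row["value"] == new_value:
                return history            # no change, nothing to record
            row["effective_to"] = today   # close out the prior version
            row["is_current"] = False
    history.append({
        "key": key, "value": new_value,
        "effective_from": today, "effective_to": None, "is_current": True,
    })
    return history

if __name__ == "__main__":
    hist = [{"key": "cust-1", "value": "Gold",
             "effective_from": date(2023, 1, 1), "effective_to": None, "is_current": True}]
    print(apply_scd2(hist, "cust-1", "Platinum", date(2024, 6, 1)))
```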

Data validation and reconciliation

Infoworks automatically validates data ingested into the data lake, for both full loads and incremental loads arriving through change data capture. For every data source loaded, Infoworks provides the following checks (a conceptual sketch follows the list):

  • Row count validation to ensure that the row counts between source and target match
  • User-specified aggregate checks to confirm that aggregate values match between source and target tables
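The sketch below illustrates both checks in plain Python: run the same count and aggregate queries against source and target and compare the results. The `payments` table, the `SUM(amount)` aggregate, and the use of SQLite are illustrative assumptions, not Infoworks' built-in validation queries.

```python
# Conceptual post-load validation: compare row counts and a user-specified
# aggregate between source and target. Table and column names are examples.
def run_query(conn, sql: str):
    return conn.execute(sql).fetchone()[0]

def validate_load(source_conn, target_conn, table: str, amount_col: str) -> dict:
    checks = {
        "row_count": f"SELECT COUNT(*) FROM {table}",
        "amount_sum": f"SELECT SUM({amount_col}) FROM {table}",
    }
    results = {}
    for name, sql in checks.items():
        src, tgt = run_query(source_conn, sql), run_query(target_conn, sql)
        results[name] = {"source": src, "target": tgt, "match": src == tgt}
    return results

if __name__ == "__main__":
    import sqlite3
    src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for conn in (src, tgt):
        conn.execute("CREATE TABLE payments (id INTEGER, amount REAL)")
        conn.executemany("INSERT INTO payments VALUES (?, ?)", [(1, 10.0), (2, 20.5)])
    print(validate_load(src, tgt, "payments", "amount"))
```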

Data Ingestion and Synchronization Metrics

Infoworks Autonomous Data Engine has been proven in production customer deployments to perform much better than alternatives. The examples in the table below illustrate the level of performance improvement Infoworks’ customers have obtained.

Ready to unlock the value of your data?

Schedule a demo