Big Data
Jun 17, 2020
Written by Amar Arsikere
Every data analytics project starts with the critical first step of creating and operationalizing healthy data lakes. A unified data lake is created by onboarding multiple data sources. Onboarding a data ...

AI and Machine Learning
Feb 24, 2020
Written by Amar Arsikere
Data onboarding is the critical first step in operationalizing your data lake. DataFoundry automates data ingestion as well as the key functionality that must accompany ingestion to establish a complete ...

Data Ingestion
May 30, 2018
Written by Ramesh Menon
There are many "gotchas" in populating a data lake. Read on to find out the magic behind automating away many of the most difficult challenges.

Data Ingestion
May 14, 2018
Written by Ramesh Menon
There are a variety of tools and frameworks for data ingestion, and most will appear to be suitable in a proof-of-concept. Don't be deceived!

This collection of data ingestion best practices is from the Infoworks blog. In the world of big data, data ingestion refers to the process of accessing and importing data for immediate use or for storage in a database for later analysis. The data may arrive in many different formats and come from numerous sources, including relational databases (RDBMS), other types of databases, S3 buckets, CSV files, and streams.

Organizations can approach ingestion differently, as data can be ingested in batches or in real time. With batch ingestion, data is imported in chunks at scheduled intervals. Real-time ingestion imports each piece of data as soon as the source disseminates it. Alternatively, a lambda architecture attempts to combine the benefits of both batch processing and real-time ingestion. Ingestion is a critical function considering that enterprise organizations often have data flowing in from hundreds of sources and in countless formats.
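The difference between the two modes can be sketched in a few lines of Python. This is a toy illustration, not any particular product's API: the records, function names, and chunk size are all hypothetical, standing in for a real source such as a database table or a stream.

```python
# Hypothetical toy records; a real source would be an RDBMS table,
# an S3 bucket, or a message stream.
SOURCE = [{"id": i, "value": i * 10} for i in range(10)]

def ingest_batch(records, chunk_size=4):
    """Batch ingestion: import data in fixed-size chunks at intervals."""
    return [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

def ingest_realtime(record, sink):
    """Real-time ingestion: import each record as soon as it arrives."""
    sink.append(record)

# Batch mode: 10 records become chunks of 4, 4, and 2.
batches = ingest_batch(SOURCE)

# Real-time mode: simulate records arriving one at a time.
sink = []
for record in SOURCE:
    ingest_realtime(record, sink)
```

A lambda architecture would run both paths side by side: the batch path for complete, recomputable views and the real-time path for low-latency updates.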

At the highest level, the key functions of data ingestion are collecting data from a source, filtering it, and routing it to one or more data stores. Managing a data ingestion pipeline involves dealing with recurring challenges such as lengthy processing times, overwhelming complexity, and the security risks associated with moving data.
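Those three functions compose naturally into a pipeline. The sketch below is a minimal illustration under assumed names: `collect`, `keep_valid`, and `route` are hypothetical helpers, and the routing rule (splitting records by value) is invented for the example.

```python
def collect(source):
    """Collect: pull raw records from a source (here, an in-memory list)."""
    yield from source

def keep_valid(record):
    """Filter: drop records that are missing required fields."""
    return "id" in record and record.get("value") is not None

def route(record, stores):
    """Route: send each record to a data store chosen by a simple rule."""
    key = "large" if record["value"] >= 50 else "small"
    stores[key].append(record)

raw = [
    {"id": 1, "value": 10},
    {"id": 2, "value": 90},
    {"value": 5},              # filtered out: no id
    {"id": 3, "value": None},  # filtered out: null value
]
stores = {"small": [], "large": []}
for rec in collect(raw):
    if keep_valid(rec):
        route(rec, stores)
```

Real pipelines layer retries, monitoring, and access control on top of this skeleton, which is where most of the operational complexity lives.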

Depending on how a given organization or team wishes to store or leverage its data, data ingestion can be automated with dedicated software. Automating ingestion this way often includes transforming the data into a structure that is easier to organize and query throughout the rest of its lifecycle.
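A common example of that transformation step is casting raw text fields to proper types as records are ingested. The snippet below sketches this for a CSV source using only the Python standard library; the column names and target schema are hypothetical.

```python
import csv
import io

# A tiny CSV payload standing in for a file landing in the pipeline.
RAW_CSV = "id,amount,ts\n1,19.99,2020-06-17\n2,5.00,2020-06-18\n"

def transform(row):
    """Cast string fields to usable types during ingestion (assumed schema)."""
    return {"id": int(row["id"]), "amount": float(row["amount"]), "ts": row["ts"]}

records = [transform(r) for r in csv.DictReader(io.StringIO(RAW_CSV))]
```

Doing this work at ingestion time means every downstream consumer sees typed, consistently structured records instead of re-parsing raw text.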

To learn more about data ingestion best practices and gain unique insights from industry experts, be sure to check out all the data ingestion articles found in this archive.

Looking for More Data Engineering Content? 

The Infoworks blog also dives into big data news, data engineering articles, announcements from the team at Infoworks, and coverage of data lakes and data operations.