DataOps: Accelerate Time to Repeatable and Operational Data Processes

Written by Todd Goldman | Category: Data Operations

Ask the former CDO of HSBC and JP Morgan Chase’s retail divisions about big data, and he’ll say, “This is about speed and cost. Both are key requirements. Speed to market, and speed to value.”

In fact, that’s what Mark Clare did say, as reported by Forbes this week in an excellent piece on DataOps by NewVantage Partners’ Randy Bean. The article goes on to describe how Clare was frustrated by a technology landscape that couldn’t deliver insights from his data fast enough for them to still matter. So he came up with his own approach, incorporating automation and agility into his processes, which allowed him to cut project delivery time by 75% and compete more effectively against his equally agile competitors.

What Clare was channeling is called DataOps, an automated, process-oriented methodology that borrows from the DevOps playbook and is used by analytics and data teams to shorten the delivery cycle for data analytics. Driven by the rise of machine learning and data science, DataOps combines close collaboration among all data professionals (developers, governance, data scientists, engineers, etc.) with agile data engineering processes and technologies that support automation and agility. It’s something you’re going to hear a lot more about as more organizations get their data houses in order (much of which was driven by GDPR) and start realizing it’s not just about the data you have but how quickly you can act on it.
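
To make the automation side of that a bit more concrete, here is a minimal sketch (my illustration, not anything from Clare or the Forbes piece) of the kind of automated data quality gate a DataOps pipeline might run on every load, instead of waiting on a manual review. The file name, column names, and thresholds are hypothetical.

```python
import pandas as pd

# Hypothetical illustration of a DataOps-style quality gate: checks that run
# automatically on every pipeline execution rather than as a manual spot check.
# The dataset, columns, and thresholds below are assumptions for the example.

def validate_orders(path: str) -> list[str]:
    """Return a list of data quality failures; an empty list means the load may proceed."""
    df = pd.read_csv(path)
    failures = []

    if df.empty:
        failures.append("dataset is empty")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    if df["amount"].lt(0).any():
        failures.append("negative order amounts found")
    if df["customer_id"].isna().mean() > 0.01:
        failures.append("more than 1% of rows are missing customer_id")

    return failures

if __name__ == "__main__":
    problems = validate_orders("orders.csv")
    if problems:
        # Failing fast here stops bad data before it reaches analysts.
        raise SystemExit("Data quality check failed: " + "; ".join(problems))
    print("Data quality check passed; downstream analytics can run.")
```

In a DataOps setup, a check like this is wired into the pipeline itself, so bad data is caught at the gate rather than surfacing weeks later in a dashboard.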

Sure, people have been talking about time to value for years. The problem is very few organizations have been able to act on it. As soon as they started hoarding data, they began running into all the usual big data hurdles one by one: governing data, validating data, integrating disparate data sources, managing the growth, recruiting the right talent… Nobody was really talking about agile data because so few organizations were actually doing anything with their data anyway.

That’s about to change. Last year TDWI published a study, “Accelerating the Path to Value with Business Intelligence and Analytics,” which showed many organizations felt it was taking too long to achieve value with their big data projects. Only a third of organizations said their projects were delivering value faster than the previous year. A quarter of businesses said their time to value actually slowed! But some organizations are figuring out their big data problem by using agile data engineering platforms and DataOps tools to dramatically accelerate the time it takes to deliver value from data. New research shows a surge in DataOps hiring as the trend starts to pick up. Soon, their problem won’t be a problem. And that will be your problem.

Jeff Bezos once touted “high velocity, high quality decision making” as a key ingredient for Amazon’s success. Without it, he contends, a business encounters stasis. “Followed by irrelevance. Followed by excruciating, painful decline. Followed by death.” DataOps and the associated agile data engineering solutions that automate DataOps processes may just be the way to avoid such a terrible end.

About this Author
Todd Goldman
Todd is the VP of Marketing at Infoworks and a Silicon Valley veteran with more than 20 years of experience in marketing and general management. Prior to Infoworks, Todd was the CMO of Waterline Data and COO at Bina Technologies (acquired by Roche Sequencing). Before Bina, Todd was Vice President and General Manager for Enterprise Data Integration at Informatica, where he was responsible for their $200MM PowerCenter software product line. Todd has also held marketing and leadership roles at both start-ups and large organizations including Nlyte, Exeros (acquired by IBM), ScaleMP, Netscape/AOL and HP.
