Best Practices in Data Lake Implementation

Karthik Shiraly · June 22, 2022

The raw data that emerges from your company's operations often contains valuable business insights and opportunities. Trends in historical data can help mold your present and future business decisions. For example, a department restructuring in the past might have had cascading effects on sales that can teach important lessons to your management.

However, an enterprise’s instinct has always been to retain only the data that employees of the time assumed would be directly useful in decision-making. There were good reasons for this prioritization in the past. Expensive data storage costs, weak hardware, and limitations of data analysis techniques made storing all data a complex, expensive effort that was difficult to justify to management.

But in recent years, cloud services have made it possible to cheaply store and process petabytes of data. Simultaneously, data science and machine learning techniques have also improved to a point where they can find associations and correlations across petabytes of data.

These two developments enable all data to be stored and analyzed in systems called data lakes. 

The key philosophy behind data lakes is that all your data is valuable because business insights can emerge from any part of it at any time. This idea has found widespread acceptance, with demand for data lakes projected to grow by 20.6% every year from 2020 to 2027.

In this practical guide, you'll get to know the principles, architectures, and technologies we use for your data lake implementation.

What Is a Data Lake?

A data lake is a system of software and ETL pipelines for the long-term storage, retrieval, and analysis of all the data, in any form, that an enterprise consumes and produces over its lifetime.

This may sound underwhelming, but consider that "all the data in any form over a lifetime" implies handling:

  • Data of every variety: Relational data, text, filesystems, time series, images, or audio
  • Data of any volume: From kilobytes to petabytes and more
  • Data with any structure: Structured, semi-structured, or unstructured data
  • Data in any processing state: Raw, modified, curated, or generated data
  • Data from any data source: From databases, data warehouses, or real-time streaming data from Internet of Things (IoT) devices
  • Data from any time: Historical, contemporary, and future data

Seen in this light, a data lake is orders of magnitude more complex than any other big data management system. We use a set of core principles to control this complexity when implementing data lakes.

Principles for a Data Lake Implementation

We use these foundational principles to guide all our architecture and technology decisions.

Business Insights Can Come From Anywhere at Any Time

We recommend preserving all your data because business insights can come from any subset of it at any time. They emerge from imaginative business analysts and data scientists who creatively connect the dots across datasets and time periods. Such creativity is impossible unless you have preserved the data systematically.

Raw Data Has Business Value

We believe that all your data, even raw, messy data, holds inherent business value just waiting to be discovered. The mere ownership of some enterprise data may open up new business opportunities for you in the future.

A data lake's workflows should ensure that no data is discarded based on the business or engineering thinking of a particular time period. At the same time, the lake should not become an unorganized data swamp where data is simply dumped without any context.

Your raw data may be semi-structured or unstructured. All data has some structure, but the relevance of that structure to your particular enterprise determines whether it's semi-structured or unstructured.

For example, the XML structure of XML-ACH bank transaction data is relevant to a financial enterprise, but because it can't be queried without modification, it's termed semi-structured. A scanned image of an official letter also has some structure, perhaps one defined by the JPEG standard. But since the enterprise is only interested in the text captured in that image and not its JPEG encoding, it's considered unstructured.

Unstructured data example
An example of unstructured data in a scanned legal document. Computer vision and OCR are used to extract useful data like case plaintiff, case defendant, court address, and attorneys.
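
The figure above hints at how such unstructured documents are mined. As a minimal sketch, assuming the Tesseract OCR engine with the pytesseract and Pillow packages, and hypothetical file names and field patterns, text can be extracted from a scanned page and a few fields picked out:

```python
# Minimal OCR sketch (assumes Tesseract plus the pytesseract and Pillow packages are installed).
# The file name and regex patterns below are hypothetical examples.
import re

from PIL import Image
import pytesseract

# Run OCR over the scanned letter to get plain text.
text = pytesseract.image_to_string(Image.open("scanned_letter.jpg"))

# Pull out fields of interest with simple patterns; real documents usually
# need layout-aware computer vision models rather than plain regexes.
plaintiff = re.search(r"Plaintiff[:\s]+(.+)", text)
defendant = re.search(r"Defendant[:\s]+(.+)", text)

print(plaintiff.group(1) if plaintiff else "plaintiff not found")
print(defendant.group(1) if defendant else "defendant not found")
```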

Support Diverse Users and Roles

Your data lake should support all the different data roles and their use cases in your company:

  • Your business analysts expect business intelligence data in relational databases.
  • Your data scientists expect data in raw formats like JSON and CSV or in relational databases.
  • Your data engineers expect data in binary formats like Parquet to optimize performance.
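
To make the last point concrete, here is a minimal sketch, assuming pandas and PyArrow are available and using hypothetical file paths, of rewriting raw JSON records as Parquet so that engineering workloads can scan them efficiently:

```python
# Minimal sketch: convert raw JSON lines into a columnar Parquet file.
# Requires pandas and pyarrow; the file paths are hypothetical.
import pandas as pd

# Read newline-delimited JSON records as they arrived from the source system.
df = pd.read_json("raw/events.jsonl", lines=True)

# Write the same records as Parquet, a compressed columnar format that
# analytics engines can scan far faster than raw JSON.
df.to_parquet("curated/events.parquet", engine="pyarrow", index=False)
```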

Self-Service

A data lake should provide self-service features to all its end users. Your employees should be able to search for the data they need, request and receive access automatically, explore the data, run their analytics, and add any new data or results back to the data lake.

Traditionally, databases and data warehouses needed an IT division to support analysts and data scientists. Such an approach is just not practical at the scale of a data lake.

Architectural Principles


We derive a set of architectural principles from these foundations to guide all our architecture decisions.

Schema-on-Read

Support for diverse roles implies that a data schema can't be imposed on incoming data. Incoming data is stored in a common format such as JSON or Parquet. Data that already has an associated schema, such as relational data, has its schema stored as metadata in the same common format.

Schema-on-read means a task-specific schema is applied by your analyst or data scientist only at the time of reading the data. This may involve an interface that presents a relational or another view over the data, or it may involve temporarily exporting data to a database.

For example, Siemens Mobility, a transportation solution provider, receives streaming data from thousands of IoT sensors installed on their customers’ rail assets. All this unstructured data is stored raw in their cloud data lake. Their data scientists then apply schema-on-read (using Amazon Athena) over this raw sensor data to be able to query it by country, customer, location, asset ID, and so on for tasks like rail defect detection.
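
A minimal sketch of schema-on-read on AWS, using Athena via boto3 with hypothetical bucket, database, and column names: a table definition is applied over raw JSON objects that already sit in S3, and only then is the data queried with SQL.

```python
# Minimal schema-on-read sketch with Amazon Athena via boto3.
# Bucket, database, and column names are hypothetical.
import time

import boto3

athena = boto3.client("athena")

def run(query):
    """Submit a query and wait for it to finish (simplified polling)."""
    qid = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "lake_raw"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return qid, state
        time.sleep(1)

# Apply a schema over raw JSON files at read time -- the objects in S3 stay untouched.
run("""
CREATE EXTERNAL TABLE IF NOT EXISTS sensor_readings (
    asset_id string, country string, reading double, recorded_at timestamp
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://example-raw-zone/sensors/'
""")

# Query the raw data through the schema that was just declared.
run("SELECT country, avg(reading) AS avg_reading FROM sensor_readings GROUP BY country")
```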

Metadata Management

Your data lake should provide mechanisms to describe each dataset in detail. It should include the business and regulatory context in which the data is created and used. It should support other metadata such as schema and semantics of data fields. It should support the versioning of data. Since creating metadata manually is laborious, it should support mechanisms for automated inference of field types using machine learning.

Blackline Safety, a global provider of safety monitoring solutions, stores unstructured streaming data from their safety devices in a cloud data lake and manages their metadata to be able to find relevant datasets easily.
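
On AWS, one way to capture this metadata is to register every dataset in a catalog together with its schema, field comments, and business context. A minimal sketch using the Glue Data Catalog via boto3, with hypothetical database, table, and bucket names:

```python
# Minimal sketch: register a dataset and its metadata in the AWS Glue Data Catalog.
# Database, table, column, and bucket names are hypothetical.
import boto3

glue = boto3.client("glue")

glue.create_table(
    DatabaseName="lake_raw",
    TableInput={
        "Name": "device_telemetry",
        "Description": "Raw telemetry from field safety devices; retained indefinitely.",
        "Parameters": {  # free-form business and regulatory context
            "owner": "iot-platform-team",
            "classification": "json",
            "pii": "false",
        },
        "StorageDescriptor": {
            "Location": "s3://example-raw-zone/device_telemetry/",
            "InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
            "SerdeInfo": {"SerializationLibrary": "org.openx.data.jsonserde.JsonSerDe"},
            "Columns": [
                {"Name": "device_id", "Type": "string", "Comment": "Unique device serial"},
                {"Name": "gas_ppm", "Type": "double", "Comment": "Measured gas concentration"},
                {"Name": "reported_at", "Type": "timestamp"},
            ],
        },
    },
)
```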

Searchability Using Data Catalogs

Your employees need to discover the data before they can discover any insights from it. Self-serviceability means the diverse sets of data in a data lake are easily found through their metadata. The lake should provide data cataloging features that offer user-friendly, Amazon-like search with queries, suggestions, and search filters.

For example, the BMW Group stores massive volumes of vehicle telemetry data every day. Their data teams transform this raw data into structured data whose technical schemas and human-readable notes are registered with the lake’s catalog service (provided by AWS Glue Data Catalog) and made available through a searchable data portal for use by other data and business intelligence teams.
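
A minimal sketch of that kind of discovery against the Glue Data Catalog, with a hypothetical database name and name pattern, lists matching tables along with the descriptions their owners registered:

```python
# Minimal sketch: discover datasets in the AWS Glue Data Catalog by name pattern.
# The database name and name pattern are hypothetical.
import boto3

glue = boto3.client("glue")

# Page through all tables in the database whose names match "telemetry*".
paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName="lake_raw", Expression="telemetry*"):
    for table in page["TableList"]:
        print(table["Name"], "-", table.get("Description", "no description"))
```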

Integrations With Existing Data Stores

Most enterprises don't start with data lakes. You probably started with a few relational databases, grew to multiple siloed databases, and scaled up to data marts and data warehouses. It's impractical and expensive to migrate your existing line-of-business applications to read directly from a data lake. Future applications too may prefer to use specialized databases.

A data lake architecture should support interfaces to pull data from all your existing and future data stores.

For example, Nasdaq stores billions of equities-related data records every day in their data lake. They combine this raw data with existing data in their Redshift warehouse using a data integration service like Redshift Spectrum to provide their data analysts with a unified SQL query layer over all their data.
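
A minimal sketch of that pattern, assuming the lake's tables are already registered in the Glue Data Catalog and using hypothetical connection details and table names: the catalog is mounted as an external schema in Redshift, and one SQL statement then joins lake data with a warehouse table.

```python
# Minimal sketch: query data lake files from Redshift via Redshift Spectrum.
# Connection details, IAM role, and table names are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="analyst", password="...",
)
conn.autocommit = True  # external DDL should not run inside a transaction block
cur = conn.cursor()

# Expose the lake's catalog database as an external schema inside Redshift.
cur.execute("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS lake
    FROM DATA CATALOG DATABASE 'lake_raw'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-spectrum-role'
""")

# Join raw lake data with a curated table that lives in the warehouse itself.
cur.execute("""
    SELECT w.ticker, count(*) AS trade_count
    FROM lake.trades t
    JOIN warehouse.instruments w ON w.instrument_id = t.instrument_id
    GROUP BY w.ticker
""")
print(cur.fetchall())
conn.close()
```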

Data Lake Governance

Data lake governance is the set of policies that govern data quality, metadata quality, data discoverability, data access control, data security, data privacy, and regulatory compliance. Well-defined policies and systematic workflows are essential to keep the lake from turning into a messy data swamp.

For example, Southwest Airlines uses a data lake governance service like AWS Lake Formation to govern and manage their secure data lakes that store flight and passenger information.
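
As an illustration, granting an analyst role read access to only the non-sensitive columns of one cataloged table might look like this sketch, with hypothetical role, database, and table names:

```python
# Minimal sketch: grant column-level read access with AWS Lake Formation.
# The IAM role ARN, database, table, and column names are hypothetical.
import boto3

lakeformation = boto3.client("lakeformation")

lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/example-analyst"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "lake_production",
            "Name": "passenger_bookings",
            # Only non-sensitive columns are exposed to this role.
            "ColumnNames": ["booking_id", "flight_number", "fare_class"],
        }
    },
    Permissions=["SELECT"],
)
```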

Scalability

The scalability of every architectural layer, every component, and every technology is crucial to store, search, and run analytics on these petabyte-sized data lakes.

For Nielsen, a global media and advertising metrics company with thousands of customers around the world, managing 30-petabyte data lakes without availability or latency problems requires frictionless scalability.

Resilience

The principle that business insights can come from anywhere anytime implies that data lake storage has to remain resilient across space and time over the lifetime of an enterprise. Storage redundancies, disaster recovery workflows, longevity, business continuity planning, and geographical redundancies are all essential for long-term data preservation.

Sysco, a global food service distribution company, follows this principle by hosting their data lakes on multiple geographically distributed, redundant storage services like Amazon S3 and S3 Glacier.
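
One common building block for this kind of resilience is a lifecycle rule that moves aging raw data to an archival storage class instead of deleting it. A minimal sketch with a hypothetical bucket name and prefix:

```python
# Minimal sketch: keep raw data forever, but move old objects to cheaper archival storage.
# The bucket name and prefix are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-raw-zone",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-raw-data",
                "Filter": {"Prefix": "sensors/"},
                "Status": "Enabled",
                "Transitions": [
                    # After a year, shift objects to Glacier-class storage instead of deleting them.
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```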

Machine Learning and Data Analytics

Machine learning, data analytics, and business analytics are the creative approaches that can discover business insights from your data. A data lake's architecture should enable machine learning and data analytics workloads to run without governance or performance challenges. It should provide interfaces to import data for processing and to add newly generated data, including machine learning models.

Mueller, a manufacturer of water distribution products for municipalities, collects sensor data from their water supply networks in a data lake. Their data scientists apply machine learning models for pipe leak event detection using a service like SageMaker that is tightly integrated with the data lake’s storage interfaces for minimum governance hurdles and maximum performance.
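
A minimal sketch of that kind of integration, using the SageMaker Python SDK with hypothetical container image, IAM role, and S3 paths: the training job reads its input directly from the lake's storage and writes the resulting model artifacts back into it.

```python
# Minimal sketch: train a model directly against data lake storage with SageMaker.
# The container image, IAM role, and S3 paths are hypothetical.
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/leak-detector:latest",
    role="arn:aws:iam::123456789012:role/example-sagemaker-role",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    # Model artifacts are written back into the lake when training finishes.
    output_path="s3://example-production-zone/models/leak-detector/",
)

# The training container reads sensor data straight from the lake's S3 prefix.
estimator.fit({"train": TrainingInput("s3://example-raw-zone/sensors/2022/")})
```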

Security

Data security and data privacy present major challenges in data lake implementations. Regulations and societal attitudes over security and privacy may change over time, requiring data lakes to catch up. Your data lakes are prime attack targets because they may contain data profiles of thousands of people and organizations gathered over long periods of time. Robust data governance and security policies are absolutely essential for data lakes.

At the same time, the main idea of a data lake is to foster a culture of data creativity to discover business insights. Your security policies should not be so restrictive that this goal is missed.

Capital One, a large digital bank, migrated its entire data lake infrastructure to the cloud while following all the data security protocols required by banking regulators. Despite these precautions, a former employee of their cloud provider breached their protections and obtained sensitive financial data of some 100 million customers, which resulted in an $80 million fine on the bank. The incident is a cautionary tale about the importance of the security policies of both your company and your providers.
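
As a baseline, the lake's storage should at least block public access and encrypt everything at rest by default. A minimal sketch with a hypothetical bucket name and KMS key:

```python
# Minimal sketch: baseline hardening for a data lake bucket.
# The bucket name and KMS key ARN are hypothetical.
import boto3

s3 = boto3.client("s3")

# Refuse any configuration that would expose the bucket publicly.
s3.put_public_access_block(
    Bucket="example-sensitive-zone",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Encrypt every new object at rest with a customer-managed KMS key by default.
s3.put_bucket_encryption(
    Bucket="example-sensitive-zone",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
                }
            }
        ]
    },
)
```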

Data Lake Components


With these foundational and architectural principles in place, we can explore different architectural approaches and the components that constitute them.

Data Lake Catalogs

Catalogs are essential components for self-serviceability and governance of your data lake.

They should support:

  • APIs to add and remove data packages programmatically
  • Graphical user interface for managing and searching data packages
  • Full-text search and faceted search using filters
  • Query suggestions
  • Related data package suggestions using a recommender system
  • Automated cataloging of data packages by inferring field types
  • Unified views across geographically separated data lakes

Data Lake Zones

Placing your diverse employee roles under a single set of governance policies may hamper their creativity. Data science can be a repetitive trial-and-error process that produces many intermediate datasets. Applying strict data quality rules to these intermediate datasets would be laborious and discourage creative experiments.

Zones are an intuitive way of solving this. Multiple zones can be created for each kind of task:

  • A raw zone for all raw data that your data scientists and machine learning engineers can use
  • A sandbox zone for data science experiments that your data scientists and machine learning engineers can use
  • A production zone for processed data, comparable in quality to the ETL output of a data warehouse, that is suitable for your business analysts
  • A sensitive zone for data that requires higher levels of security and privacy

Each zone will have different governance policies optimized for its respective target users.

Governance Layer

The governance layer is critical for administering your data lake and preventing it from turning into a swamp. It supports workflows for:

  • Requesting, granting, and revoking access to data packages
  • Enforcing privacy and de-identification of data
  • Regulatory and data sovereignty compliance
  • Managing security permissions
  • Auditing
  • Monitoring system performance

Data Storage Layer

The storage layer stores all data under scalability, resilience, availability, and security guarantees. A distributed object store like S3 or Ceph, thanks to its versatility and simplicity, is often preferred over a filesystem or database. The storage layer has interfaces for data ingestion, addition, versioning, deletion, and transfer.
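
A minimal sketch of those interfaces on an S3-based storage layer, with hypothetical bucket, key, and metadata values: versioning is switched on so nothing is ever silently overwritten, and each ingested object carries context as metadata.

```python
# Minimal sketch: version-enabled ingestion into an object-store storage layer.
# The bucket name, object key, and metadata values are hypothetical.
import boto3

s3 = boto3.client("s3")

# Keep every version of every object so overwrites and deletions are recoverable.
s3.put_bucket_versioning(
    Bucket="example-raw-zone",
    VersioningConfiguration={"Status": "Enabled"},
)

# Ingest a raw file with its context attached as object metadata.
s3.put_object(
    Bucket="example-raw-zone",
    Key="sensors/2022/06/22/readings.jsonl",
    Body=open("readings.jsonl", "rb"),
    Metadata={"source-system": "field-gateway-7", "ingested-by": "nightly-batch"},
)
```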

Analytics Layer

Performance is better when your machine learning and analytics operations are run as close as possible to the data, preferably in the same network, to reduce data transfer delays.

A data lake can optionally contain an analytics layer that:

  • Supports interfaces to ingest data
  • Provides SQL, NoSQL, or filesystem views over the underlying data
  • Supports interfaces to create and run machine learning models
  • Supports interfaces to create and run data analytics and visualizations

Data Lake Deployments

Deployment architecture decides where to host your data lake in a way that satisfies all architectural principles.

Data Lake in the Cloud

Cloud platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud have emerged as the preferred deployment choices for these reasons:

  • Tightly integrated and tested components: No problems due to version incompatibilities
  • Scalability, availability, and resilience track record: Large enterprises already host their data lakes and data warehouses on these services
  • Infrastructure expertise: No need to hire in-house expertise, which is especially attractive if you’re a startup or SMB that cannot afford expert talent on demand
  • Pay only for used resources: No risk of buying unnecessary hardware only to leave it unused
  • Ready-to-use services: For security and governance
  • Quality of service guarantees: Through service level agreements
  • Customer support: Enterprises prefer somebody they can talk to

Logical or Virtual Data Lakes

A logical or virtual data lake is a lake-like unified interface over multiple physical data lakes.

Redundancy, performance, or regulatory compliance can require multiple data lake deployments in different locations. The cataloging and governance components coordinate between them to provide a unified interface to your analysts and data scientists.

On-Premises Data Lakes

Cloud deployments on platforms like Google Cloud, Azure, and AWS have their benefits but you may prefer hosting your data lakes entirely on-prem in your own data centers for business, security, or regulatory reasons.

On-prem deployments come with huge capital and operational expenditures. First, you have to procure adequate storage, server, and networking hardware. Next, you’ll need to purchase commercial software licenses and support contracts. And finally, you’ll have to hire experienced hardware and software experts to run them.

For these reasons, only the largest enterprises prefer this capital-intensive deployment approach.

Implementation Technologies


Let's look at some cloud and open-source data lake solutions we use to realize these architectures.

Amazon Web Services

We use these services to build data lakes on the AWS cloud:

  • AWS Lake Formation: Governance, administration, security, and coordination services
  • AWS Glue: Data lake catalog service
  • Amazon S3: Scalable, redundant object store that serves as the storage layer
  • Amazon EMR: Runs analytics
  • Amazon Athena: SQL query service for business analysts
  • Amazon Redshift: Data warehouse service for exporting data to the lake and importing data from it
  • Amazon SageMaker: Builds and runs machine learning models

A disadvantage of AWS is that AWS Glue is limited to cataloging only data lakes hosted on AWS. As a result, multi-cloud or hybrid virtual data lakes are more complex to deploy on AWS.

Microsoft Azure

We use these services to build data lakes on the Azure cloud:

  • Azure Purview: Provides a unified data governance layer
  • Data Lake Store: Serves as the storage layer
  • Data Catalog: Implements a data lake cataloging service
  • Data Lake Analytics: Runs batch and streaming analytics
  • Azure Machine Learning: Runs machine learning models

Google Cloud

Google Cloud is less feature-rich than AWS or Azure for building data lakes. We use the following services:

  • Cloud Storage: Serves as the storage layer
  • Dataflow: Exposes interfaces for batch and streaming data ingestion
  • Google Data Catalog: Implements a cataloging service
  • Dataproc: Runs analytics
  • BigQuery: Provides SQL query service for business analysts
  • Data Studio: Supports visualization for business analysts

Snowflake

Snowflake is an independent, cloud-based data management platform that provides an ecosystem of data lake, warehouse, and analytics services. Unlike the general-purpose cloud providers, Snowflake focuses only on data management and tightly integrates its services with one another to achieve a high quality of service.

On-Premises Technologies

On-premises deployments tend to use the following open source technologies:

  • Apache Atlas: Data governance and cataloging software
  • Apache HDFS: Distributed file system as a storage layer
  • Ceph: Distributed object store as a storage layer
  • Apache Hadoop: Analytics for batch data
  • Apache Spark: Analytics for batch and streaming data
  • Apache Storm: Analytics for streaming data
  • Apache Hive: SQL view over Hadoop

Since each of these is a complex system in itself, a tightly integrated stack like Cloudera is strongly preferred.
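
On such a stack, the same schema-on-read idea might look like the following minimal PySpark sketch, with hypothetical HDFS paths, table names, and columns: raw JSON is read from HDFS and registered as a Hive table that analysts can query with SQL.

```python
# Minimal sketch: schema-on-read over HDFS with Spark and the Hive metastore.
# The HDFS path, table name, and column names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("lake-schema-on-read")
    .enableHiveSupport()  # use the Hive metastore as the catalog
    .getOrCreate()
)

# Read raw JSON straight from the distributed filesystem; the schema is inferred at read time.
raw = spark.read.json("hdfs:///lake/raw/sensors/")

# Register the data as a Hive table so analysts can query it with SQL.
raw.write.mode("overwrite").saveAsTable("sensor_readings")

spark.sql("SELECT country, avg(reading) FROM sensor_readings GROUP BY country").show()
```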

Build Your Enterprise Data Lake With Width.ai

Data lakes are complex because they have to preserve data for the long term. They have several subsystems that require careful thought even when using cloud services. Every decision has to be evaluated against the essential principles underlying data lakes. But the insights they can deliver can propel your startup, SMB, or enterprise to new heights. 

Contact us to learn how we can help you build your data lakes.