How Data Lakes Change Data Management and Analytics in Business
Discover the capabilities of data lakes, which enable your company to preserve all its data and turn it into business insights, along with the principles, architectures, and technologies behind a production-grade implementation.
The raw data that emerges from your company's operations often contains valuable business insights and opportunities. Trends in historical data can help mold your present and future business decisions. For example, a department restructuring in the past might have had cascading effects on sales that can teach important lessons to your management.
However, an enterprise’s instinct has always been to retain only the data that employees of the time assumed would be directly useful in decision-making. There were good reasons for this prioritization in the past. Expensive data storage costs, weak hardware, and limitations of data analysis techniques made storing all data a complex, expensive effort that was difficult to justify to management.
But in recent years, cloud services have made it possible to cheaply store and process petabytes of data. Simultaneously, data science and machine learning techniques have also improved to a point where they can find associations and correlations across petabytes of data.
These two developments enable all data to be stored and analyzed in systems called data lakes.
The key philosophy behind data lakes is that all your data is valuable because business insights can emerge from any part of it at any time. This idea has found widespread acceptance given that demand for data lakes is projected to grow by 20.6% every year from 2020 to 2027.
In this practical guide, you'll get to know the principles, architectures, and technologies we use for your data lake implementation.
A data lake is a system of software and ETL pipelines for the long-term storage, retrieval, and analysis of all the data, in any form, that an enterprise consumes and produces over its lifetime.
This may sound underwhelming, but consider that "all the data in any form over a lifetime" implies handling:
Seen in this light, it's obvious that a data lake is orders of magnitude more complex than any other big data management system. We use a set of core principles to control this complexity when implementing data lakes.
We use these foundational principles to guide all our architecture and technology decisions.
We recommend preserving all your data because business insights can come from any subset of that data at any time. They emerge from imaginative business analysts and data scientists who can creatively connect the dots across datasets and time periods. Such creativity is impossible unless you have preserved the data systematically.
We believe that all your data — even raw, messy data — holds inherent business value just waiting to be discovered. The mere ownership of some enterprise data may open up new business opportunities for you in the future.
A data lake's workflows should ensure that no data is discarded based on the business or engineering thinking of a particular time period. At the same time, it should not become an unorganized data swamp where data is simply dumped without any context.
Your raw data may be semi-structured or unstructured. All data has some structure, but the relevance of that structure to your particular enterprise determines whether it's semi-structured or unstructured.
For example, the XML structure of XML-ACH bank transaction data is relevant to a financial enterprise, but because it can't be queried without transformation, it's termed semi-structured. A scanned image of an official letter also has some structure — perhaps one defined by the JPEG standard. But since the enterprise is only interested in the text captured in that image and not its JPEG encoding, it's considered unstructured.
Your data lake should support all the different data roles and their use cases in your company:
A data lake should provide self-service features to all its end users. Your employees can search for the data they need, request and receive access automatically, explore the data, run their analytics, and add any new data or results to the data lake.
Traditionally, databases and data warehouses needed an IT division to support analysts and data scientists. Such an approach is just not practical at the scale of a data lake.
We derive a set of architectural principles from these foundations to guide all our architecture decisions.
Support for diverse roles implies that a fixed schema can't be imposed on data as it's written into the lake. Incoming data is stored in a common format such as JSON or Parquet. Data that already has an associated schema — such as relational data — has that schema stored as metadata in the same common format.
Schema-on-read means a task-specific schema is applied by your analyst or data scientist only at the time of reading the data. This may involve an interface that presents a relational or another view over the data, or it may involve temporarily exporting data to a database.
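To make schema-on-read concrete, here is a minimal PySpark sketch. It assumes raw JSON sensor events land in an illustrative s3a://example-data-lake/raw/sensor-events/ prefix; the schema and field names are hypothetical and exist only at query time, not when the data was written.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

# Task-specific schema defined by the analyst at read time,
# not enforced when the raw events were written to the lake.
sensor_schema = StructType([
    StructField("sensor_id", StringType()),
    StructField("recorded_at", TimestampType()),
    StructField("temperature_c", DoubleType()),
])

# Raw JSON files in the lake's storage layer; the path is illustrative.
readings = (
    spark.read
    .schema(sensor_schema)          # apply the schema on read
    .json("s3a://example-data-lake/raw/sensor-events/")
)

readings.createOrReplaceTempView("sensor_readings")
spark.sql("""
    SELECT sensor_id, avg(temperature_c) AS avg_temp
    FROM sensor_readings
    GROUP BY sensor_id
""").show()
```

The same raw files could be read tomorrow with a completely different schema for a different task, which is the point of deferring the schema to read time.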
For example, Siemens Mobility, a transportation solution provider, receives streaming data from thousands of IoT sensors installed on their customers’ rail assets. All this unstructured data is stored raw in their cloud data lake. Their data scientists then apply schema-on-read (using Amazon Athena) over this raw sensor data to be able to query it by country, customer, location, asset ID, and so on for tasks like rail defect detection.
Your data lake should provide mechanisms to describe each dataset in detail. It should include the business and regulatory context in which the data is created and used. It should support other metadata such as schema and semantics of data fields. It should support the versioning of data. Since creating metadata manually is laborious, it should support mechanisms for automated inference of field types using machine learning.
Blackline Safety, a global provider of safety monitoring solutions, stores unstructured streaming data from their safety devices in a cloud data lake and manages their metadata to be able to find relevant datasets easily.
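As a sketch of what registering such metadata can look like, here is a boto3 call against the AWS Glue Data Catalog. The database, table, columns, S3 location, and parameters are all hypothetical.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Register a raw dataset with business context and per-field descriptions.
# Database, table, and column names here are purely illustrative.
glue.create_table(
    DatabaseName="safety_devices",
    TableInput={
        "Name": "device_events_raw",
        "Description": "Raw streaming events from field safety devices; "
                       "retained indefinitely for incident analysis.",
        "Parameters": {"classification": "json", "data_owner": "iot-platform-team"},
        "StorageDescriptor": {
            "Location": "s3://example-data-lake/raw/device-events/",
            "InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.openx.data.jsonserde.JsonSerDe"
            },
            "Columns": [
                {"Name": "device_id", "Type": "string", "Comment": "Unique device serial"},
                {"Name": "event_time", "Type": "timestamp", "Comment": "UTC event timestamp"},
                {"Name": "gas_ppm", "Type": "double", "Comment": "Measured gas concentration"},
            ],
        },
    },
)
```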
Your employees should be able to discover the data before they can discover any insights from it. Self-serviceability means the diverse sets of data in a data lake are easily found through their metadata. The lake should provide data cataloging features that offer user-friendly, Amazon-like search with queries, suggestions, and search filters.
For example, the BMW Group stores massive volumes of vehicle telemetry data every day. Their data teams transform this raw data into structured data whose technical schemas and human-readable notes are registered with the lake’s catalog service (provided by AWS Glue Data Catalog) and made available through a searchable data portal for use by other data and business intelligence teams.
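A catalog registered this way can be searched programmatically as well as through a portal. Here is a minimal sketch using the Glue Data Catalog's search API; the search text is illustrative.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Free-text search across table names, descriptions, and parameters
# registered in the Glue Data Catalog.
response = glue.search_tables(SearchText="vehicle telemetry", MaxResults=10)

for table in response["TableList"]:
    print(table["DatabaseName"], table["Name"], "-", table.get("Description", ""))
```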
Most enterprises don't start with data lakes. You probably started with a few relational databases, grew to multiple siloed databases, and scaled up to data marts and data warehouses. It's impractical and expensive to migrate your existing line-of-business applications to read directly from a data lake. Future applications too may prefer to use specialized databases.
A data lake architecture should support interfaces to pull data from all your existing and future data stores.
For example, Nasdaq stores billions of equities-related data records every day in their data lake. They combine this raw data with existing data in their Redshift warehouse using a data integration service like Redshift Spectrum to provide their data analysts with a unified SQL query layer over all their data.
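Here is a sketch of that pattern using the Redshift Data API from Python: the first statement exposes a Glue-cataloged lake database as an external (Spectrum) schema, and the second joins it with a local warehouse table. The cluster, database, role ARN, and table names are all hypothetical.

```python
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

# Expose Glue-cataloged lake data as an external schema in Redshift (Spectrum),
# then join it with existing warehouse tables in a single SQL query.
ddl = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS lake_raw
FROM DATA CATALOG
DATABASE 'equities_lake'
IAM_ROLE 'arn:aws:iam::123456789012:role/SpectrumRole';
"""

query = """
SELECT w.ticker, w.close_price, r.trade_count
FROM warehouse.daily_summary AS w
JOIN lake_raw.raw_trades AS r
  ON w.ticker = r.ticker AND w.trade_date = r.trade_date;
"""

for sql in (ddl, query):
    redshift_data.execute_statement(
        ClusterIdentifier="analytics-cluster",   # illustrative cluster name
        Database="analytics",
        DbUser="analyst",
        Sql=sql,
    )
```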
Data lake governance comprises the policies that govern data quality, metadata quality, data discoverability, data access control, data security, data privacy, and regulatory compliance. Well-defined policies and systematic workflows are essential to keep the lake from turning into a messy data swamp.
For example, Southwest Airlines uses a data lake governance service like AWS Lake Formation to govern and manage their secure data lakes that store flight and passenger information.
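As a small sketch of governed access with Lake Formation via boto3, the call below grants an analyst role read-only access to a single cataloged table. The role ARN, database, and table names are hypothetical.

```python
import boto3

lakeformation = boto3.client("lakeformation", region_name="us-east-1")

# Grant an analyst role read-only access to one cataloged table.
lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/FlightAnalysts"
    },
    Resource={
        "Table": {"DatabaseName": "operations", "Name": "flight_events"}
    },
    Permissions=["SELECT"],
    PermissionsWithGrantOption=[],
)
```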
The scalability of every architectural layer, every component, and every technology is crucial for storing, searching, and running analytics on petabyte-scale data lakes.
For Nielsen, a global media and advertising metrics company with thousands of customers around the world, managing 30-petabyte data lakes without any availability and latency problems requires frictionless scalability.
The principle that business insights can come from anywhere anytime implies that data lake storage has to remain resilient across space and time over the lifetime of an enterprise. Storage redundancies, disaster recovery workflows, longevity, business continuity planning, and geographical redundancies are all essential for long-term data preservation.
Sysco, a global food service distribution company, follows this principle by hosting their data lakes on multiple geographically distributed, redundant storage services like Amazon S3 and S3 Glacier.
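A minimal sketch of such redundancy on S3, assuming an illustrative example-data-lake bucket: a lifecycle rule ages raw objects into Glacier-class storage for cheap long-term retention, and a replication rule copies new objects to a bucket in another region. Both buckets must already have versioning enabled for replication to work.

```python
import boto3

s3 = boto3.client("s3")

# Age colder raw data into Glacier-class storage for cheap long-term retention.
# Bucket names, prefixes, and the replication role ARN are illustrative.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-lake",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-raw-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "raw/"},
            "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
        }]
    },
)

# Replicate every new object to a bucket in another region for geographic redundancy.
s3.put_bucket_replication(
    Bucket="example-data-lake",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/LakeReplicationRole",
        "Rules": [{
            "ID": "replicate-to-dr-region",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::example-data-lake-dr"},
        }],
    },
)
```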
Machine learning, data analytics, and business analytics are the creative approaches that can discover business insights from your data. A data lake's architecture should enable running machine learning and data analytic workloads without governance and performance challenges. It should provide interfaces to import data for processing and to add newly generated data, including machine learning models.
Mueller, a manufacturer of water distribution products for municipalities, collects sensor data from their water supply networks in a data lake. Their data scientists apply machine learning models for pipe leak event detection using a service like SageMaker that is tightly integrated with the data lake’s storage interfaces for minimum governance hurdles and maximum performance.
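As a rough sketch of that kind of integration, assuming the SageMaker Python SDK, a hypothetical training script, and an illustrative curated S3 prefix, a training job can read directly from the lake's storage layer:

```python
import sagemaker
from sagemaker.inputs import TrainingInput
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()

# Launch a training job whose input channel points straight at curated
# sensor data in the lake; script, role, and S3 path are illustrative.
estimator = SKLearn(
    entry_point="train_leak_detector.py",
    framework_version="1.2-1",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    sagemaker_session=session,
)

estimator.fit({
    "training": TrainingInput("s3://example-data-lake/curated/flow-sensors/")
})
```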
Data security and data privacy present major challenges in data lake implementations. Regulations and societal attitudes over security and privacy may change over time, requiring data lakes to catch up. Your data lakes are prime attack targets because they may contain data profiles of thousands of people and organizations gathered over long periods of time. Robust data governance and security policies are absolutely essential for data lakes.
At the same time, the main idea of a data lake is to foster a culture of data creativity to discover business insights. Your security policies should not be so restrictive that this goal is missed.
Capital One, a large digital bank, migrated its entire data lake infrastructure to the cloud while following all data security protocols required by banking regulators. Despite these precautions, a former employee of their cloud provider breached their protections and obtained sensitive financial data of some 100 million customers, which resulted in an $80 million fine on the bank. The incident is a cautionary tale about the importance of the security policies of both your company and your providers.
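Much of this hardening is mundane but non-negotiable. Here is a minimal sketch of two baseline safeguards for a lake's storage bucket on S3, with an illustrative bucket name and KMS key alias.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-data-lake"   # illustrative bucket name

# Block every form of public access to the lake's storage bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Encrypt every object at rest by default with a customer-managed KMS key.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/data-lake-key",   # illustrative key alias
            }
        }]
    },
)
```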
With these foundational and architectural principles in place, we can explore different architectural approaches and the components that constitute them.
Catalogs are essential components for self-serviceability and governance of your data lake.
They should support:
Placing your diverse employee roles under a single set of governance policies may hamper their creativity. Data science can be a repetitive trial-and-error process that produces many intermediate datasets. Applying strict data quality rules to these intermediate datasets would be laborious and discourage creative experiments.
Zones are an intuitive way of solving this. A separate zone can be created for each kind of task:
Each zone has its own governance policies optimized for its target users.
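One simple way to realize zones on object storage is a prefix per zone, so that access grants, quality checks, and retention rules can be scoped to each prefix independently. The sketch below uses an illustrative bucket, prefixes, and descriptions.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-data-lake"   # illustrative bucket name

# One prefix per zone; governance policies can then target each prefix separately.
zones = {
    "raw/":     "Immutable landing zone for ingested data, write-once",
    "curated/": "Cleaned, cataloged datasets for analysts and BI tools",
    "sandbox/": "Loosely governed scratch space for data science experiments",
}

for prefix, description in zones.items():
    # Zero-byte marker object so the zone shows up as a 'folder' in consoles.
    s3.put_object(Bucket=bucket, Key=prefix, Body=b"",
                  Metadata={"zone-purpose": description})
```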
The governance layer is critical for administering your data lake and preventing it from turning into a swamp. It supports workflows for:
The storage layer stores all data under scalability, resilience, availability, and security guarantees. A distributed object store like S3 or Ceph, thanks to its versatility and simplicity, is often preferred over a filesystem or database. The storage layer has interfaces for data ingestion, addition, versioning, deletion, and transfer.
Performance is better when your machine learning and analytics operations are run as close as possible to the data, preferably in the same network, to reduce data transfer delays.
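A minimal sketch of those storage-layer interfaces on S3, with an illustrative bucket and key layout: enable versioning, ingest a raw file, and list the retained versions.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-data-lake"   # illustrative bucket name

# Keep every historical version of every object so data is never silently
# overwritten or lost.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Minimal ingestion call: land a new raw file in the lake's landing zone.
s3.upload_file(
    Filename="daily_export.parquet",                  # local file, illustrative
    Bucket=bucket,
    Key="raw/sales/2024-05-01/daily_export.parquet",  # partition-style key layout
)

# Listing versions shows the full history retained for each key.
versions = s3.list_object_versions(Bucket=bucket, Prefix="raw/sales/")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["LastModified"])
```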
A data lake can optionally contain an analytics layer that:
The deployment architecture decides where your data lake is hosted in a way that satisfies all of these architectural principles.
Cloud services like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud have emerged as the preferred deployment choice for these reasons:
A logical or virtual data lake is a lake-like unified interface over multiple physical data lakes.
Redundancy, performance, or regulatory compliance can require multiple data lake deployments in different locations. The cataloging and governance components coordinate between them to provide a unified interface to your analysts and data scientists.
Cloud deployments on platforms like Google Cloud, Azure, and AWS have their benefits, but you may prefer hosting your data lakes entirely on-prem in your own data centers for business, security, or regulatory reasons.
On-prem deployments come with huge capital and operational expenditures. First, you have to procure adequate storage, server, and networking hardware. Next, you’ll need to purchase commercial software licenses and support contracts. And finally, you’ll have to hire experienced hardware and software experts to run them.
For these reasons, only the largest enterprises prefer this capital-intensive deployment approach.
Let's look at some cloud and open-source data lake solutions we use to realize these architectures.
We use these services to build data lakes on the AWS cloud:
A disadvantage of AWS is that AWS Glue is limited to cataloging only data lakes hosted on AWS. As a result, multi-cloud or hybrid virtual data lakes are more complex to deploy on AWS.
We use these services to build data lakes on the Azure cloud:
Google Cloud is less feature-rich compared to AWS or Azure when building data lakes. We use the following services:
Snowflake is an independent, cloud-based data management platform that provides an ecosystem of data lake, warehouse, and analytics services. Unlike the general-purpose cloud providers, Snowflake focuses only on data management and tightly integrates its services with one another to achieve a high quality of service.
On-premises deployments tend to use the following open source technologies:
Since each of these is a complex system in itself, a tightly integrated stack like Cloudera is strongly preferred.
Data lakes are complex because they have to preserve data for the long term. They have several subsystems that require careful thought even when using cloud services. Every decision has to be evaluated against the essential principles underlying data lakes. But the insights they can deliver can propel your startup, SMB, or enterprise to new heights.
Contact us to learn how we can help you build your data lakes.