Data Fabric: Simplifying Data Management in an Increasingly Data-Reliant World

By Valerie Hernandez, International Banker

 

Businesses all over the world are seeking to become more data-driven in their decision-making and extract as much value as possible from the data available to them. But given the rise of various data-related technologies, devices and storage environments, data management has become tougher than ever. One architecture that is now gaining traction within such organisations to help them effectively manage this problem is data fabric.

Indeed, today’s global organisation is almost certain to have much of its data deployed across a vast geographical area—on-site and off-premise, and perhaps even across various physical and cloud environments. Sensors for the internet of things (IoT), cloud computing and edge computing represent just a few common ways in which data is generated across various remote locations. What’s more, an expanding array of data facilities has also emerged, including data lakes, relational databases, data mesh and flat files, and as such, managing, processing, storing, integrating and securing data across all these data types and platforms can present a major headache to enterprise organisations.

Data fabric can be utilised to consolidate disparate data sources and locations into one significantly more manageable environment. Sometimes confused with data lakes, which involve centralised storage of large quantities of raw data, data fabric supports a multitude of storage locations, making the process of managing the data across all these locations as simple as possible. “Previously, software development teams went with their own implementation for data storage and retrieval,” explained Palo Alto-based data-integration firm Striim. “A typical enterprise data centre stores data in relational databases (e.g., Microsoft SQL Server), non-relational databases (e.g., MongoDB), data repositories (e.g., a data warehouse), flat files, and other platforms. As a result, data is spread across rigid and isolated data silos, which creates issues for modern businesses.” The data fabric thus simplifies data access across such remote data locations to facilitate efficient self-service data storage and consumption.
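The silo problem Striim describes can be pictured with a minimal, hypothetical access layer that presents one interface over several backing stores. All class and method names below are illustrative, not drawn from any vendor’s API; a real data fabric layers metadata, governance and automation on top of this basic idea.

```python
# Illustrative sketch: a single access facade over heterogeneous stores.
# Every name here is hypothetical, for explanation only.

class RelationalStore:
    def fetch(self, key):
        return {"source": "sql", "key": key}

class DocumentStore:
    def fetch(self, key):
        return {"source": "document-db", "key": key}

class FlatFileStore:
    def fetch(self, key):
        return {"source": "flat-file", "key": key}

class DataFabricFacade:
    """Routes each request to the store that owns the dataset,
    so consumers need not know where the data physically lives."""
    def __init__(self):
        self.catalog = {}  # dataset name -> backing store

    def register(self, dataset, store):
        self.catalog[dataset] = store

    def get(self, dataset, key):
        return self.catalog[dataset].fetch(key)

fabric = DataFabricFacade()
fabric.register("customers", RelationalStore())
fabric.register("events", DocumentStore())
fabric.register("legacy_exports", FlatFileStore())

record = fabric.get("events", "order-42")
```

The point of the sketch is the catalogue: consumers ask for a dataset by name, and the fabric resolves its physical location, which is what makes self-service consumption possible across silos.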

This simplification crucially prevents organisations from having to overhaul their entire data infrastructure; instead, data fabrics can make existing data architectures more efficient. According to IBM, the architecture thus becomes agnostic to data environments, processes, utilities and geographies, all while integrating end-to-end data-management capabilities. “A data fabric automates data discovery, governance and consumption, enabling enterprises to use data to maximize their value chain,” the US tech giant explained. “With a data fabric, enterprises elevate the value of their data by providing the right data, at the right time, regardless of where it resides.”

Specifically, the data fabric improves upon existing data infrastructure, often by adding automation to the data-management process. It operates as an integrated layer—the fabric—of data and connecting processes, and it implements analytics over metadata assets (that is, the data that provides more information about other data), which, according to Gartner, supports “the design, deployment and utilization of integrated and reusable data across all environments, including hybrid and multi-cloud platforms.” With in-built analytics able to interpret this metadata, data fabric can learn what data is being used as well as make recommendations for more, different and better data, which, in turn, can lower data-management requirements by up to 70 percent.

Perhaps not surprisingly, then, machine learning (ML) plays a crucial role in learning existing data and making further recommendations. Exposing data to ML models facilitates improvement in their learning capabilities, with ML algorithms connected to the data fabric used to monitor data pipelines, identify valuable relationships and make appropriate recommendations. According to the Dallas-based data-fabric firm K2View, this ML capability can be broken down into three stages:

Passive learning: Data fabric learns what data exists and applies artificial intelligence (AI) to add any missing metadata.

Behaviour analysis: Data fabric uses metadata to learn where and how data is being used and then analyses other data for similar behaviour. “If the behaviour is the same, it’s probably already usable,” explained K2View.

Active recommendations: Data fabric relies on active metadata to generate recommendations for data engineers, such as new data, more data or best data to deliver to data consumers.
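The three stages above can be sketched as a toy pipeline. Every function and data structure here is hypothetical and grossly simplified—real implementations use ML models rather than exact matching—but it shows how discovered metadata feeds behaviour comparison, which in turn feeds recommendations.

```python
# Toy sketch of the three stages K2View describes; all names are invented.

def passive_learning(catalog):
    """Stage 1: discover what data exists and fill in missing metadata."""
    for meta in catalog.values():
        meta.setdefault("owner", "unknown")  # inferred by AI in a real fabric
        meta.setdefault("usage", [])
    return catalog

def behaviour_analysis(catalog, reference):
    """Stage 2: find datasets whose usage pattern matches a known-good one."""
    target = set(catalog[reference]["usage"])
    return [name for name, meta in catalog.items()
            if name != reference and set(meta["usage"]) == target]

def active_recommendations(catalog, reference):
    """Stage 3: surface look-alike datasets to data engineers."""
    return [f"Dataset '{s}' behaves like '{reference}'; consider delivering it too."
            for s in behaviour_analysis(catalog, reference)]

catalog = {
    "orders":   {"usage": ["dashboard", "ml-training"]},
    "payments": {"usage": ["dashboard", "ml-training"]},
    "logs":     {"usage": ["audit"]},
}
catalog = passive_learning(catalog)
recs = active_recommendations(catalog, "orders")
```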

Data fabric leverages both human and machine capabilities to access the data in place or support its consolidation where appropriate, as Gartner’s Ashutosh Gupta explained in May 2021. “It continuously identifies and connects data from disparate applications to discover unique, business-relevant relationships between the available data points. The insight supports re-engineered decision-making, providing more value through rapid access and comprehension than traditional data management practices.”

Gartner has ranked data fabric as the top technology trend for 2022 and predicted that data-fabric deployments will quadruple efficiency in data utilisation while cutting human-driven data-management tasks in half by 2024. It has identified four key pillars of a data-fabric architecture that data and analytics leaders must know:

  1. Data fabric must collect and analyse all forms of metadata: This contextual information provides the data-fabric design’s foundation, and as such, there should be a mechanism to enable the data fabric to identify, connect and analyse all kinds of metadata.
  2. Data fabric must convert passive metadata to active metadata: Data fabric should analyse available metadata for key metrics and statistics, graphically depict metadata in an easy-to-understand manner and leverage key metadata metrics to enable AI/ML algorithms.
  3. Data fabric must create and curate knowledge graphs: Knowledge graphs enrich data with semantics and thus enable businesses to derive value from the data. The semantic layer of the knowledge graph adds depth and meaning to the data usage and content graph, allowing AI/ML algorithms to use the information for analytics and other operational use cases.
  4. Data fabric must have a robust data-integration backbone: Data fabric should be compatible with various data-delivery styles, such as ETL (extract, transform, and load), streaming, replication and messaging, and it should support all types of data users, including IT (information technology) users and business users.
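Pillar 3 in particular benefits from a concrete picture. A knowledge graph can be reduced to semantic triples—subject, predicate, object—and the sketch below (with an invented schema and predicates) shows the kind of question the semantic layer lets AI/ML components ask, such as which datasets feed a model and whether any contain sensitive data.

```python
# Minimal knowledge-graph sketch; the schema and predicates are illustrative.

class KnowledgeGraph:
    def __init__(self):
        self.triples = set()  # (subject, predicate, object)

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def objects(self, s, p):
        """All objects linked from subject s via predicate p."""
        return {o for (s2, p2, o) in self.triples if s2 == s and p2 == p}

kg = KnowledgeGraph()
kg.add("customers_table", "stored_in", "sql_server")
kg.add("customers_table", "contains_pii", "true")
kg.add("churn_model", "trained_on", "customers_table")

# Semantic query: which datasets feed the model, and do any contain PII?
inputs = kg.objects("churn_model", "trained_on")
touches_pii = any("true" in kg.objects(d, "contains_pii") for d in inputs)
```

Enriching raw metadata with relationships like these is what allows governance and recommendation logic to reason about data rather than merely store it.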

The data analytics and ML firm AtScale has also outlined the necessary capabilities a “true data fabric solution” should have, including autonomous data engineering, unified data semantics, centralised data security and governance, data-management visibility, and platform and application agnosticism. “The need for speed is the competitive differentiator for global enterprises,” AtScale explained in September 2019. “A data fabric can serve to minimize disruption by creating a highly adaptable data management environment that automatically adjusts to changing technology.”

And to underline just how vital data fabric is expected to prove to businesses over the next few years, a recent report published by market-research firm StrategyR projected that the global data-fabric market will reach $3.7 billion by 2026 from $1.6 billion this year at a compound annual growth rate (CAGR) of 22.2 percent over the forecast period. “The market is slated to receive a noteworthy push from a number of favourable factors like increasing acceptance of big data and analytics for deriving business decisions,” according to the report. “The increasing penetration of connected devices and systems has resulted in a notable spike in the amount and variety of data. The deployment of sensors and video cameras across facilities for gaining location-related and other information is leading to significant data volumes that can be used for deriving business decisions. These trends are creating strong demand for data fabric solutions.”

 
