
When Platforms Collide, Analytics Evolves

July 30, 2020
by Tom Pringle

Within the enterprise tech space, the seemingly endless evolution of data-driven insights continues apace—but when will it end? 

(The answer: when data is no longer useful; so, never.)

In previous columns, I discussed the transformation of the decision support systems of yesteryear to today’s analytics platforms. The latest expansion in 2020 is powered by artificial intelligence, or at least is labeled as such. In reality, it is machine learning—a branch of AI that delivers the majority of current AI use cases—that has brought forth an increasingly popular term: augmented analytics. 

The industry is keen on putting new words in front of analytics (business, data, edge, distributed, real-time), and I am as guilty as the next analyst of giving in to this temptation. But I will defend the use of augmented analytics (even if I didn’t come up with it) because it is critical for enterprises to become genuinely data-driven.

Augmented analytics is the latest addition to the analytics continuum

So how shall we define augmented analytics? A quick internet search will yield plenty of possibilities. Simply put, augmented analytics is the use of machine learning to automate the creation and delivery of data-driven insights. Think of it as working in three key areas of the data journey:

1. Finding and managing data

Machine learning (ML) can be applied across data management, from initial formatting and scanning for connections between different data sets to assessing data quality. Data management has been one of the biggest barriers to more people working with data, since most do not have the necessary skills. Automating away some of this challenge is a major advancement.
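To make the quality-scanning idea concrete, here is a minimal sketch. It uses a robust z-score, a simple statistical stand-in for the richer ML models real products use, to flag values that sit far outside a column's typical range; the field name and data are invented for illustration.

```python
# Minimal sketch: flag likely data-quality problems in a numeric column.
# This is a statistical stand-in for ML-driven quality scans; the
# thresholds and sample data are illustrative assumptions.
from statistics import median

def robust_outliers(values, threshold=3.5):
    """Flag values whose modified z-score exceeds the threshold."""
    med = median(values)
    # Median absolute deviation; guard against a zero MAD.
    mad = median(abs(v - med) for v in values) or 1e-9
    return [abs(0.6745 * (v - med) / mad) > threshold for v in values]

# A column of plausible order totals with two corrupted entries.
order_totals = [101, 98, 103, 97, 105, 99, 102, 100, 9999, -500]
flags = robust_outliers(order_totals)
print([v for v, bad in zip(order_totals, flags) if bad])  # → [9999, -500]
```

An augmented analytics tool would run checks like this automatically across every ingested column, so users never have to write them.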

2. Discovery experience 

Okay, the data is prepped: so what? Many people talk about discovery as if it requires little or no skill, yet most data discovery experiences have demanded significant data and analysis skills before insights start to appear. Augmenting data discovery means helping users understand what data they have access to and how it connects to other data sets, as well as providing guardrails that help them join and analyze data in ways that make analytical sense.
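One way such connection-finding can work under the hood is sketched below: a hypothetical helper that scores column pairs from two data sets by value overlap (Jaccard similarity) to suggest join keys. The tables and column names are invented for illustration.

```python
# Hypothetical sketch: suggest join candidates between two data sets by
# measuring value overlap between columns. Tables are plain dicts of
# column name -> values; names and data are illustrative assumptions.

def join_candidates(left, right, min_overlap=0.5):
    """Score every column pair by the Jaccard similarity of their values."""
    suggestions = []
    for lname, lvals in left.items():
        for rname, rvals in right.items():
            a, b = set(lvals), set(rvals)
            score = len(a & b) / len(a | b) if a | b else 0.0
            if score >= min_overlap:
                suggestions.append((lname, rname, round(score, 2)))
    return sorted(suggestions, key=lambda s: -s[2])

orders = {"customer_id": [1, 2, 3, 4], "total": [50, 75, 20, 90]}
customers = {"id": [1, 2, 3, 4, 5], "region": ["N", "S", "E", "W", "N"]}

print(join_candidates(orders, customers))  # → [('customer_id', 'id', 0.8)]
```

A real platform would combine overlap scores with metadata, data types, and column-name similarity, but the principle, proposing sensible joins so the user does not have to know the schema, is the same.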

3. Presentation of insights 

Does a newly created regional sales report work as a pie chart, or would it read better as a stacked bar chart? A user may not realize the data could be placed on a map and color coded, but an augmented analytics tool might. Helping people consume information in the easiest way is critical, and not everyone can be a visualization expert. From suggesting how information is presented to smart alerts inside another app, augmenting the delivery of analysis helps more people get more from data.
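A toy version of such a recommender is easy to imagine. The sketch below picks a chart type from a few simple properties of the data; real products use far richer signals (and often learned models), so the rules and parameter names here are purely illustrative assumptions.

```python
# Hypothetical sketch: a rule-based chart recommender in the spirit of
# augmented presentation features. The rules and inputs are invented
# for illustration, not taken from any real product.

def suggest_chart(n_categories, is_geographic, shows_composition):
    """Pick a chart type from simple properties of the data."""
    if is_geographic:
        return "color-coded map"        # regional data reads best on a map
    if shows_composition and n_categories <= 5:
        return "pie chart"              # parts of a whole, few slices
    if n_categories > 5:
        return "stacked bar chart"      # composition across many groups
    return "bar chart"                  # safe default for comparisons

# A regional sales report with eight regions, not plotted geographically.
print(suggest_chart(n_categories=8, is_geographic=False,
                    shows_composition=True))  # → stacked bar chart
```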

Familiar names such as IBM, Oracle, Salesforce, SAP, Tableau, and Qlik, among others, have been busy augmenting the user experience of their analytics products. This is good news for expert and novice analytics users alike. But, does this mean that augmented analytics represents the final, mature state of data analysis? Of course not!

Data science is moving from the lab to the shop floor

Enriching analytics platforms with AI is the first step in a long journey ahead: the meeting, and ultimate merging, of analytics and today’s data science.

The use of machine learning to augment and automate analytics features is analogous to the development of self-driving cars. First came parking sensors and trip computers, then parking assistance and automatic braking; developments will continue until the fully autonomous vehicle arrives. In the same vein, as augmented analytics gets smarter, its share of insight-creating use cases will grow, and data science will either recede or, more likely, find new and ever more complex ways to interrogate data. In other words, analytical tools and techniques will make their way from the scientific laboratory to regular use on the shop floor.

Data science is undergoing its own transformation, from a vast collection of disparate but interdependent tools to an integrated platform. Here at G2, we have recently updated our taxonomy to address these changes, adding a new category, data science and machine learning platforms, to our AI software taxonomy.

Connecting the data journey in data science is similar to that in analytics: starting with model building and testing, training the model on relevant data, and moving through operationalization to model monitoring and management. Many vendors in the space are bringing tools and functionality together in a single platform, helping end users run their projects more efficiently. Additionally, these platforms help ensure data scientists' work is compatible and comparable, and bring baked-in governance, lineage, and reproducibility features that build confidence in the models created.
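The lifecycle described above can be sketched in miniature: train a model, put it behind a predict call (operationalization), and keep watching live inputs for drift (monitoring). The model and drift test below are deliberately minimal stand-ins; everything here is an illustrative assumption, not any vendor's actual implementation.

```python
# Minimal sketch of the model lifecycle: build/train, operationalize,
# monitor. A one-feature threshold "model" stands in for real ML.
from statistics import fmean, stdev

class MonitoredModel:
    def __init__(self, train_x, train_y):
        # "Training": learn a threshold separating the two classes.
        pos = [x for x, y in zip(train_x, train_y) if y == 1]
        neg = [x for x, y in zip(train_x, train_y) if y == 0]
        self.threshold = (fmean(pos) + fmean(neg)) / 2
        # Record the training distribution for later monitoring.
        self.mu, self.sigma = fmean(train_x), stdev(train_x)
        self.seen = []

    def predict(self, x):
        """Operationalization: serve predictions, logging inputs."""
        self.seen.append(x)
        return 1 if x >= self.threshold else 0

    def drifted(self, z=3.0):
        """Monitoring: have live inputs moved far from the training mean?"""
        return bool(self.seen) and abs(fmean(self.seen) - self.mu) > z * self.sigma

model = MonitoredModel(train_x=[1, 2, 3, 8, 9, 10], train_y=[0, 0, 0, 1, 1, 1])
print(model.predict(7), model.drifted())   # → 1 False
for x in [90, 95, 100]:                    # inputs unlike the training data
    model.predict(x)
print(model.drifted())                     # → True
```

An integrated platform wraps each of these stages (plus lineage and governance) in managed tooling, which is precisely what lets teams compare and trust each other's models.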

Read More: The G2 on Enterprise AI & Analytics: AI — What is It Really & Why Does It Matter?

Turning necessary investment in data into better analytics

The growing appetite for the analysis of data has sparked multiple software evolutions—what I refer to as the analytics continuum. As discussed in one of my recent articles, there are decades of history here, and I firmly predict that as data science and machine learning platforms consolidate and standardize functionality, their orbit around analytics platforms will come closer and closer. Acting as a brake on these developments will continue to be, well, what has always stood in the way of analytics: the availability of data.

Organizations are rarely short of data; usually the problem is quite the reverse, and many struggle with both the volume of available data and decisions about what should be retained. However, the availability of data that fits the use case (finding it, physically accessing it, ensuring its quality is sufficient, and formatting it to an analytics-consumable standard) has been a consistent issue. Furthermore, winning support for the investment needed in data management skills, software, and an ongoing program of governance has traditionally been difficult, and remains so. The myriad data storage and format options now available only add to this challenge, with data stored in everything from connected devices and public clouds to mission-critical mainframes.

So how to solve the data dilemma? There are several options for organizations that need to invest in their data, and perhaps unsurprisingly, regulation is one of the key drivers. Data protection laws (the EU’s GDPR and California’s CCPA, already in effect, are among a growing range of similar legislation) are adding serious firepower to the case for investing in data management capabilities such as data quality, data catalogs, master data management, and new cloud data stores.

But investing because you must, to minimize risk, is hardly the most exciting prospect (albeit a hugely important one). To bolster these cases, G2 suggests focusing on what access to better data enables. From more in-depth analysis leading to better business outcomes to machine learning-powered automation, the options presented by analytics platforms consuming this data, matched to emerging data science platforms, are substantial.


Tom Pringle

Tom is Vice President of Market Research at G2, and leads our analyst team. Tom's entire professional experience has been in information technology, where he has worked in both consulting and research roles. His personal research has focused on data and analytics technologies; more recently, this has led to a practical and philosophical interest in artificial intelligence and automation. Prior to G2, Tom held research, consulting, and management roles at Datamonitor, Deloitte, BCG, and Ovum. Tom received a BSc. from the London School of Economics.