
G2 on Enterprise AI & Analytics: What Is It Really & Why Does It Matter?

September 11, 2019
by Tom Pringle

Artificial intelligence (AI) has become the hottest topic in technology. Its use in the age of digital transformation spans the commercial and public sectors, fueling debate among businesses, lawmakers, and the public. I have been putting pen to paper on the subject for a few years now; having recently joined G2 to nurture the expansion of our AI categories (as well as our analytics, cloud, and security categories), I would like to sum up some of my views, while setting the scene for a more detailed exploration of the technologies, business challenges, and trends shaping AI. 

I hope you find it useful and informative, and—as always—I welcome feedback from users, buyers, and builders alike. The AI conversation is ongoing, and it is up to us to unearth the real value of AI, make available the data which is its fuel, and deliver better outcomes for everyone.

There are two types of AI, but only one is currently realistic 

I am not alone in proposing that AI can be thought of as divided into two camps. First, there is general AI: the replication of human-like intelligence which, under interrogation by humans, should be indistinguishable from the real thing.

The second type is narrow AI. Narrow AI is a task-specific capability that automates human labor in a highly defined and structured way. It delivers benefits to users by completing tasks at a larger scale, at greater speed, and with higher accuracy than a human could.

General AI is the guilty party when it comes to much of the confusion and misinformation in the AI market. Why? Because, at this point (and for the foreseeable future), it is the subject matter of science fiction. Popular perceptions of AI are driven by examples such as The Terminator or I, Robot. You know the story here: Insanely (I use that word deliberately) intelligent machinery outpaces human intelligence, decides that people are either a threat or incapable of making decisions for themselves, and does something catastrophic about it. Although it is important to debate the ethical implications of futuristic general AI, we can focus on what is realistic for technology buyers today: narrow AI.

Today, narrow AI is used for its accuracy and speed. The technology can complete tasks quicker, with greater consistency, or at a grander scale (or some combination of these) than a person can. Importantly, it is also a natural complement to many existing technologies by virtue of the ability to embed it within them. More on that later.


AI technology is not new, but the data and compute to run it are

For those interested in the history of AI technologies, some of the techniques being used today have been around since the 1950s and '60s.

But if AI has been around for so long, why have we only started to use it in the past handful of years?

The answer is simple: Only recently has there been the sheer volume of data necessary to fuel it and the computing power to process it. Narrow AI excels at tasks on a scale and at a pace that humans struggle to manage. Big data, paired with the availability of vast, scalable storage and computing services in the public cloud, makes it both possible and valuable.

The value of AI is found in productivity improvements, for now

I will continue to argue that the majority of AI use cases are really about improving productivity; that is, producing more, but using less to do so. In essence, my argument is this: AI helps people do their work either more quickly (through automation of labor) or more effectively (through unearthing previously hidden insights that positively influence decisions). In terms of a simple formula for productivity, this either reduces the inputs (labor) needed to deliver outputs (work completed), increases the amount of outputs for the same inputs, or—even better—both.
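To put that in symbols, here is a minimal formalization of the argument above (my own shorthand, not a formula from any standard):

```latex
\text{Productivity} = \frac{\text{Outputs (work completed)}}{\text{Inputs (labor)}}
```

AI raises the ratio from either end: automation shrinks the denominator, while better insights grow the numerator.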

This is an evolving situation, too. The “art of the possible” is rapidly expanding. For example, the labor involved in monitoring internet-scale data sources for signals and insights is beyond the abilities of any human. This means that not only are productivity gains available for existing types of labor and output, but AI and the machines that run it can create, and are creating, previously impossible kinds of work. Costs, revenue, and profit are not the only business levers to pull, either. Consider the scale and speed of cybersecurity threats: machines excel at this type of challenge, where humans cannot keep up.

A warning shot must be fired in the interest of balance, and it will be the subject of a future post by G2: Labor reduction will inevitably lead to the loss of some current jobs. A major debate has begun about whether AI will usher in mass unemployment or herald the beginning of a new era of creativity and economic freedom. Whether you are an AI optimist or pessimist in this regard, these unanswered socioeconomic questions need more time and effort invested in seeking a positive outcome.

For most, getting started with AI is likely to mean a feature of an existing product

Many enterprises are rightly excited about the prospect of using (although they may not know it, narrow) AI to improve their businesses, but how should they begin? For some, the answer has been to attempt to build AI competency centers, hiring much-talked-about data scientists to lead the charge. Two issues tend to come up in this scenario. First, data scientists are scarce and expensive. Second, there is often little clarity about what objectives a business wants to accomplish beyond, “Let’s do something with AI.”

Fortunately, many software vendors are aware of these challenges and are addressing them by integrating narrow AI into their applications and solutions. Examples are plentiful, with customer relationship management (CRM) software probably the most visible, delivering AI-powered features such as lead scoring or next-best offer. Oracle’s Adaptive Intelligent Apps and Salesforce’s Einstein products offer AI-powered capabilities for customer experience (CX), among other application areas. Organizations benefit from the technology without requiring a plethora of AI knowledge and skills. The insights delivered are in the context of the application and its processes, making consumption simple. Enterprises must also give additional consideration to their data. We will explore the role of data extensively in future columns, but it is important to raise here, as many enterprises still struggle to manage their data. No data equals no AI.
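To make a feature like lead scoring concrete, here is a minimal sketch of the idea behind it, using scikit-learn and invented field names; commercial offerings such as Einstein are, of course, far more sophisticated than this.

```python
# A minimal, illustrative lead-scoring model (hypothetical field names).
# The core idea: learn from past conversions, then rank new leads by
# predicted probability of converting.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical leads: [emails_opened, site_visits, company_size, demo_requested]
X_train = np.array([
    [5, 12, 200, 1],
    [0,  1,  15, 0],
    [8, 20, 500, 1],
    [1,  3,  40, 0],
    [3,  7, 120, 1],
    [0,  2,  10, 0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = lead converted to a sale

model = LogisticRegression().fit(X_train, y_train)

# Score new leads: the predicted probability of conversion becomes the
# "lead score" surfaced to a sales rep inside the CRM.
new_leads = np.array([[6, 15, 300, 1], [1, 2, 25, 0]])
scores = model.predict_proba(new_leads)[:, 1]
for lead, score in zip(new_leads, scores):
    print(f"Lead {lead.tolist()}: score {score:.2f}")
```

The pattern is the point: learn from historical outcomes, rank new records by predicted probability, and surface the result inside the application the user already knows.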

For enterprises with the budget, very specific requirements (for example, certain areas of financial services), or existing in-house skills, building their own AI is an option supported by a range of products most commonly referred to as data science platforms. A data science platform delivers the tooling to source and manage the necessary data; the AI tool kit (most often machine learning, but also deep learning for some use cases); and deployment and model management capabilities. Examples include Dataiku, Datascience.com, IBM Watson Studio, Microsoft Azure Machine Learning, and RapidMiner.
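The workflow those platforms wrap in managed tooling can be sketched in a few lines; this is an illustrative outline with synthetic data, not a recipe from any particular platform.

```python
# A bare-bones sketch of the train/evaluate/deploy loop that data science
# platforms provide managed tooling for (the data here is synthetic).
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 1. Source and manage data (platforms add connectors, catalogs, lineage).
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# 2. Train with the AI tool kit (here, machine learning via a random forest).
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# 3. Evaluate before deployment.
print(f"Holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")

# 4. "Deploy" by persisting the model; platforms layer serving, monitoring,
# and model management (versioning, retraining) on top of this step.
joblib.dump(model, "model.joblib")
```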

Making AI more accessible is the immediate challenge

A skills shortage and the difficulty of sourcing the right data to fuel AI are immediate and ongoing challenges, but the industry and national governments are responding.

Here are three trends I see shaping how enterprises will adopt and use AI in the near future:

  • Augmentation is the word in AI right now. Rightly so, as it accurately represents what most AI solutions are currently capable of, and it focuses the AI debate on enterprise value as opposed to technology philosophy. Think of AI-powered augmentation as a copilot for users of business technology: it hides the complexity of the technology and the vast amounts of data powering it, assists with the automatic completion of mundane tasks, and proactively suggests actions that either add value or mitigate risk. We see this in action today. As new capabilities and use cases become available, and users grow to accept and trust AI-powered capabilities, the AI copilot will become an indispensable part of the business person’s daily work.
  • The skills gap currently limiting the adoption of AI will be eroded by developments in technology that drive accessibility and grow the potential use cases. In the same way analytics moved from the preserve of a small group of expert power users to a tool accessible to millions, AI technologies will become more accessible through drag-and-drop, low- or no-code, and prebuilt capabilities open to developers and business users. Expect to see the emergence of machine-learning-models-as-a-service, or marketplaces where enterprises can access sophisticated, prebuilt IP from a range of providers and configure it for their own unique circumstances.
  • Combining many narrow AI capabilities to create a “more than the sum of its parts” AI-powered solution is another development I expect to have a major impact. The ability to combine and orchestrate these narrow AIs in the context of, for example, a business process (and likely paired with robotic process automation) will drive productivity across a much wider range of tasks within organizations; the sketch after this list shows the idea in miniature.
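Here is a toy illustration of that orchestration pattern: several narrow capabilities, each a stub standing in for a real model, chained into a single document-handling process. All names are invented for illustration.

```python
# A toy sketch of chaining several narrow AI capabilities into one business
# process; each step is a stub standing in for a real model.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Document:
    raw_text: str
    language: Optional[str] = None
    category: Optional[str] = None
    route: Optional[str] = None

def detect_language(doc: Document) -> Document:
    doc.language = "en"  # stand-in for a language-detection model
    return doc

def classify(doc: Document) -> Document:
    # stand-in for a document-classification model
    doc.category = "invoice" if "invoice" in doc.raw_text.lower() else "other"
    return doc

def route(doc: Document) -> Document:
    # the orchestration layer (or an RPA bot) acts on the combined outputs
    doc.route = "accounts-payable" if doc.category == "invoice" else "manual-triage"
    return doc

# The pipeline is where the "more than the sum of its parts" value appears:
# each model is narrow, but together they complete an end-to-end task.
PIPELINE: List[Callable[[Document], Document]] = [detect_language, classify, route]

doc = Document(raw_text="Invoice #123: payment due in 30 days")
for step in PIPELINE:
    doc = step(doc)
print(doc)  # language='en', category='invoice', route='accounts-payable'
```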

Beyond my predictions, the coming months bring some of the biggest technology user events of the year, including AWS’s re:Invent, Salesforce’s Dreamforce, and Oracle’s OpenWorld. Expect a slew of AI announcements from each. Our G2 analysts will be there, offering technology buyers our views and opinions on them as they happen, on the G2 Research Hub.



Tom Pringle

Tom is Vice President of Market Research at G2, where he leads our analyst team. Tom's entire professional experience has been in information technology, where he has worked in both consulting and research roles. His personal research has focused on data and analytics technologies; more recently, this has led to a practical and philosophical interest in artificial intelligence and automation. Prior to G2, Tom held research, consulting, and management roles at Datamonitor, Deloitte, BCG, and Ovum. Tom received a BSc from the London School of Economics.