
Artificial intelligence (AI) exploded onto the scene in its most recent form with the launch of ChatGPT by OpenAI at the end of 2022. Three years later, the market is full of Large Language Models (LLMs) that are brilliant at responding to text and images. This article focuses on the former: the ability of AI to digest documents and extract or summarise the information they contain, whether for ease of reading, speed of summarising, or to extract data into a particular format ready for further analysis and parsing further down the pipeline. This technique has been refined and is used widely across industry, with up to 78% of global companies using AI in some form to streamline and optimise processes1. New techniques have emerged in the three years since, such as Retrieval Augmented Generation (RAG), which lets an AI home in more accurately on the pertinent information, and few-shot / multi-shot learning, which lets a user explain exactly what they are looking for before supplying the context to be analysed. While these have made AI systems more efficient, and hence more valuable, they have always followed a “human in the loop” approach, allowing (and indeed requiring) a human to ensure the AI steers in the right direction, and then act accordingly. The most recent technological advancement, agentic AI, changes this fundamentally.
What is Agentic AI?
Despite agentic AI being the new ‘buzzword’ in the AI domain, there is no universally agreed definition of what it actually is. “AI agents” and “agentic AI” are sometimes used interchangeably, while other firms try to draw distinct definitions between the two terms. It is a reflection of how new and ‘untested’ this methodology is in real-world environments that the 78% adoption of AI drops to less than 1% of enterprise applications actively using agentic AI techniques in 20242. Regardless, there are some key functionalities common to the various definitions in use. Three of the most important aspects of agentic AI are:
- Adaptability: The ability to respond and learn from past interactions and events, in theory improving on every iteration.
- Multimodal: The capacity to ingest from different sources, platforms and locations to enhance the decision-making process.
- Driven: Being configured based on predefined goals (which may or may not evolve) to pursue particular objectives.
Most of these are not dissimilar to concepts like fine-tuning, multi-shot learning or RAG techniques, which can be layered on top of traditional data-driven LLMs. However, the aspect that most clearly separates agentic AI from its predecessors is autonomy: the ability to function and react to the given data without a human in the loop. It is removing the human interaction from the AI that provides the real potential for a step change in the way the process works.
So, how does Agentic AI work?
Agentic AI is typically a structure of different LLMs working together to generate a single flow from one end of a task to the other. Each LLM is given a role and interacts with the others. They are typically exposed to external data points and databases, and are granted a higher level of permission than a typical LLM would have to perform the necessary actions, allowing for potentially vast time savings. A good example of agentic AI in action is the autonomous car. A single orchestrating agent is given the objective to “drive from A to B”. It assesses the objective, then tasks other LLMs: one to look at traffic data and propose the best route; one to ingest and analyse immediate risks around the car and provide the data required to ensure safety; another to check the current rules of the road being driven, such as speed limits and give-way signs. All of this is fed to a decision engine, which then acts to propel the car in the correct direction safely. By definition, these LLMs need access to the data in order to analyse it correctly.
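The orchestration pattern described above can be sketched in a few lines. This is a minimal, illustrative sketch only: the “agents” are plain functions standing in for role-specific LLMs, and every name here (`route_agent`, `decision_engine`, etc.) is invented for illustration rather than taken from any real framework.

```python
def route_agent(goal):
    # Stand-in for an LLM that analyses traffic data for the best route.
    return {"route": ["A", "junction_1", "B"], "eta_minutes": 25}

def risk_agent(goal):
    # Stand-in for an LLM that assesses immediate hazards around the car.
    return {"hazards": [], "safe_to_proceed": True}

def rules_agent(goal):
    # Stand-in for an LLM that checks local road rules (speed limits etc.).
    return {"speed_limit_mph": 30}

def decision_engine(goal, reports):
    # Combines every agent's report into a single action, with no human step.
    if not reports["risk"]["safe_to_proceed"]:
        return {"action": "stop"}
    return {
        "action": "drive",
        "route": reports["route"]["route"],
        "max_speed_mph": reports["rules"]["speed_limit_mph"],
    }

def run_agentic_task(goal):
    # The orchestrator decomposes the goal, fans out to the role agents,
    # then hands all of their results to the decision engine.
    reports = {
        "route": route_agent(goal),
        "risk": risk_agent(goal),
        "rules": rules_agent(goal),
    }
    return decision_engine(goal, reports)

result = run_agentic_task("drive from A to B")
```

Note that the human appears only once, in stating the goal; everything from decomposition to action happens inside the loop.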
However, it is the concept of taking the human out of the loop, and automating the underlying actions, that is the key difference between agentic AI and the LLM-based AI typically used since 2022. It is also the piece of the puzzle that presents both the biggest advantages, and the scariest pitfalls, of using AI agents where a human-in-the-loop workflow would also work.
Simple Model
‘I want to accomplish X’ (1 human step)
A basic LLM setup. Here the human user uses the AI for a single step, and then manually completes the rest of the task assigned.

Complex Model
‘I want to accomplish X’ (3 human steps)
A more involved (but still not agentic) setup. Here the human user has used the AI for every step, but remains involved at each stage of the process. While this takes time, it also allows for manual correction at every step.

- Initial response triggers second prompt
- Second response
- Second response triggers third prompt
- Final response to accomplish X
Agentic Model
(1 human step)
An agentic AI setup. In this process the amount of human work is minimised, so it is the fastest of the three. However, the human user cannot correct at every step.
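The difference between the complex and agentic models above can be sketched as two versions of the same prompt chain. This is an illustrative sketch only; `call_llm` is an invented stand-in for a real model call, not any particular API.

```python
def call_llm(prompt):
    # Stand-in for a real LLM call: echoes a canned response.
    return f"response to: {prompt}"

def complex_model(goal, approve):
    # Human-in-the-loop chain: every intermediate response must be
    # approved by a human before it triggers the next prompt.
    response = call_llm(goal)
    for step in ("second prompt", "third prompt"):
        if not approve(response):   # manual correction point
            return None             # human halts the chain here
        response = call_llm(f"{step} based on: {response}")
    return response

def agentic_model(goal):
    # Agentic chain: one human step (stating the goal), then the chain
    # runs end-to-end with no approval point between prompts.
    response = call_llm(goal)
    for step in ("second prompt", "third prompt"):
        response = call_llm(f"{step} based on: {response}")
    return response
```

When every approval passes, the two produce identical results; the agentic version simply removes the opportunity to stop a bad intermediate response before it feeds the next step.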

Pros of using agentic AI
There are clear advantages to using agentic AI in areas where it makes sense to. Most of these fall broadly into the category of optimising time and process to allow for quicker turnaround of mundane, repetitive or ‘administration-heavy’ tasks. In product development it can accelerate innovation by streamlining prototyping, testing, and then refining a product to fit a given task. Data processing and analysis can move more quickly through each step of an extraction-and-analysis pipeline, discerning the data and then producing a summary of the extracted data without the need for human interaction to begin the next step. The result is a massive increase in operational efficiency.
Potential pitfalls of agentic AI
As is always the case with brand-new emerging technologies, agentic AI, while having the potential to vastly change the way businesses work at a systemic, macro level, carries a large amount of uncertainty and risk.
The first risk to address is the same one that LLMs have faced since their inception: the dreaded hallucination. At the heart of most agentic AI systems are at least one, if not multiple, large language models interacting with one another. Hence the hallucination problem, far from going away, in fact compounds. Imagine an AI agent instructed to “analyse my stocks and shares holding and distribute where I will make the most money.” The agent would break that task into actionable chunks, at least two of which involve analysing data from different streams: the stocks you currently hold, and the current state of the market. If either of those analyses hallucinates (e.g. it reports stock A at 100x its current value), the action taken by the agent could quickly become costly.
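The compounding effect in the portfolio example can be shown with a toy sketch. All agent names and numbers below are invented for illustration; the point is that one hallucinated price silently flips the downstream decision.

```python
def portfolio_agent():
    # Stand-in for an agent reporting current holdings (shares per stock).
    return {"A": 10, "B": 10}

def market_agent(hallucinate=False):
    # Stand-in for an agent reporting prices; with hallucinate=True,
    # stock A comes back at 100x its real value.
    prices = {"A": 5.0, "B": 50.0}
    if hallucinate:
        prices["A"] *= 100
    return prices

def rebalance_agent(holdings, prices):
    # Moves the whole portfolio into whichever stock looks most
    # valuable per share -- acting with no human check.
    best = max(prices, key=prices.get)
    total = sum(holdings[s] * prices[s] for s in holdings)
    return {best: total / prices[best]}

# With correct data, stock B (50.0/share) wins the rebalance.
good = rebalance_agent(portfolio_agent(), market_agent())
# With one hallucinated price, everything is dumped into stock A.
bad = rebalance_agent(portfolio_agent(), market_agent(hallucinate=True))
```

In a human-in-the-loop workflow the absurd price would be caught before the trade; here the agent has already acted by the time anyone can look.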
Handing the decision to the agent is a pseudo trust exercise, and these hallucinations are difficult to resolve. A commonly used example of the power of agentic AI (almost the “hello world” of the field) is booking a flight: great when it works, but very costly to correct when it does not.
The highly complicated nature of the agentic approach, being typically multi-step and multi-agent in its action and reasoning, can result in a much broader attack surface for potential information security breaches.
There is also the risk of conflicting data, contradictory objectives across agents, and task decomposition that permits stepwise exploitation of the agents, either causing errors (mostly accidental) or diverting reasoning (potentially intentional and nefarious)3.
It is not surprising, then, that this implementation of AI is the most expensive and maintenance-heavy yet, requiring regular updates from skilled personnel to keep the feedback loops clear and correct, to ensure the AI does not continue to make the same (or worse) mistakes, and to manage the whole system. The initial setup can be difficult to construct, and it can be costly and time consuming to set up each new objective pipeline to be analysed.
Further development and deployment of agentic AI raises the security and privacy risks to data beyond just a broader attack surface: the ‘trust’ placed in the agent to act, and to interact with data, could lead to extensive security vulnerabilities if not properly maintained and protected. This leads to the “black box” problem4, where it is impossible to understand the logic behind a decision made by the agent, or the underlying data (training or otherwise) that informed it, making it difficult, if not impossible, to apportion blame or responsibility for the action taken.
Conclusion
Agentic AI has the potential to be the next step change in the AI race for optimising and improving existing processes with the abundance of data available in almost every business across the globe. However, it is very much an emerging technology, and one that, for the moment, has far more potential pitfalls than pros. It is not surprising that the proportion of enterprises using agentic AI (less than 1%) is dramatically smaller than the proportion using ‘traditional’ AI (78%).
There also seems to be a real need to understand the objective fully, and to leave leeway for any autonomous action to be a forgivable one, which can be costly and time consuming.
Compare and contrast that with more established mechanisms (such as RAG techniques, few-shot and multi-shot learning, and fine-tuning of models), which have so far always kept a human in the loop, and it appears that the edge of agentic AI lies primarily in removing decision making from the human to speed up processes. For the moment, it remains to be seen whether the compounding risk of hallucinations from removing the human in the loop is worth the payoff.
If your task can be expressed as a workflow, build a workflow, and keep humans in the loop as you optimise it. While the newest approach of complex agents is an interesting one, it can become exceedingly difficult to manage, maintain and troubleshoot. Established AI-augmented workflows inherently deliver predictability and control. Until the agentic approach solves the issue of hallucinations, that 1% figure for enterprise applications will stay low.
- https://explodingtopics.com/blog/companies-using-ai ↩︎
- https://www.gartner.com/en/documents/5850847 ↩︎
- https://arxiv.org/abs/2406.14595 ↩︎
- https://digitaldefynd.com/IQ/challenges-in-scaling-agentic-ai-systems/ ↩︎
