Author: Andy Goldstein
Agentic artificial intelligence (Agentic AI) is a concept in artificial intelligence (AI) describing systems that act autonomously, with limited human interaction (in particular, without step-by-step instructions), to fulfil goals rather than isolated tasks. These systems reason and plan in order to determine the tasks required to achieve a given goal or set of goals. An Agentic AI system can follow a logical process on its own: making inferences about how to achieve a goal (reasoning) and identifying and coordinating the actions needed to accomplish it (planning), even in changing environments. It can prioritise actions based on their importance and urgency while coordinating multiple activities simultaneously.
While AI agents are single systems that autonomously perform tasks and use tools such as search engines or code generation to achieve simple goals, Agentic AI[i] goes further by coordinating multiple agents, managing their communication, and distributing tasks to accomplish larger, more complex objectives.
The autonomy of an Agentic AI system can range from requiring a certain degree of user input to being fully autonomous.
A crucial aspect of Agentic AI is its ability to use tools, consult databases, perform limited programming, call other IT systems through APIs, and sense and interact with its environment without human involvement. This allows it to gather information, perform actions, adapt and ultimately accomplish its goals.
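The tool-use pattern described above can be sketched as a simple dispatch loop. This is a minimal illustration, not a real agent framework; the tool names (`search_web`, `query_database`) and the dispatch logic are assumptions made for the example.

```python
# Minimal sketch of tool use by a single agent.
# Tool names and dispatch logic are illustrative assumptions.

def search_web(query: str) -> str:
    # Stand-in for a real search-engine API call.
    return f"results for '{query}'"

def query_database(sql: str) -> str:
    # Stand-in for a real database lookup.
    return f"rows matching '{sql}'"

TOOLS = {
    "search": search_web,
    "db": query_database,
}

def run_tool(tool_name: str, argument: str) -> str:
    """Dispatch a tool call the way an agent runtime might."""
    if tool_name not in TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](argument)

print(run_tool("search", "latest clinical guidelines"))
```

In a real system, the agent's reasoning step would choose which tool to invoke and with what arguments; here that choice is hard-coded for clarity.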
Agentic AI also has persistent memory, which spans tasks and remains after a goal is reached. This enables the system to retain context for future actions, improve its performance, adapt based on the results of its actions and the feedback it receives from the environment, and correct mistakes. In other words, such systems can handle their own errors: they detect when the intended results have not been achieved, diagnose the problem and adjust accordingly. The ability to make progress towards goals even when encountering obstacles or unexpected situations is fundamental: Agentic AI can adapt and learn. It can modify its behaviour based on feedback or its understanding of the environment and refine its approach over time.
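Two of the properties described above, memory that persists across tasks and retrying after a failed action, can be sketched as follows. The `Agent` class, its `act` method and the success condition are all hypothetical, chosen only to make the retry loop concrete.

```python
# Illustrative sketch (not a real framework) of persistent memory and
# error recovery: the agent records failures and retries with an
# adjusted approach until the goal is reached or attempts run out.

class Agent:
    def __init__(self):
        self.memory = []  # persists across tasks and after goals are reached

    def act(self, task: str, attempt: int) -> bool:
        # Hypothetical action: succeeds only once the approach is refined
        # (i.e. from the second attempt onwards).
        return attempt >= 2

    def pursue(self, task: str, max_attempts: int = 3) -> bool:
        for attempt in range(1, max_attempts + 1):
            if self.act(task, attempt):
                self.memory.append((task, "success", attempt))
                return True
            # Detect the failure, record it, and adjust on the next attempt.
            self.memory.append((task, "failed", attempt))
        return False

agent = Agent()
agent.pursue("book appointment")
print(agent.memory)
```

The memory survives the completion of the task, so a later task could consult the recorded failure and succeed on its first attempt.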
To better illustrate the capabilities of an Agentic AI system, consider an example regarding medical diagnosis assistance. Such a system could consist of multiple specialised agents working together:
- Agent 1 analyses medical images (e.g. X-rays, MRIs);
- Agent 2 retrieves relevant patient data from electronic health records, including medical history, lab results and medications;
- Agent 3 synthesises this information, requests additional tests and, when it has enough information, suggests possible diagnoses and generates treatment options;
- Agent 4 orchestrates the entire process by coordinating the other agents, managing user interactions, handling workflows (including iterative refinement if needed), and addressing potential errors.
The first two agents - image analysis and patient data retrieval - can operate autonomously and in parallel. The third agent depends on their outputs before producing diagnostic insights. The fourth agent also works in parallel, ensuring the system runs smoothly.
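The workflow above can be sketched in a few lines: the two independent agents run in parallel, and the synthesis step waits for both of their outputs. The agent functions are hypothetical stand-ins; a real system would call models and data stores rather than return strings.

```python
# Sketch of the medical-diagnosis workflow described above.
# Agent functions are illustrative placeholders.
from concurrent.futures import ThreadPoolExecutor

def analyse_images(case: str) -> str:
    return f"image findings for {case}"

def retrieve_records(case: str) -> str:
    return f"patient history for {case}"

def synthesise(images: str, records: str) -> str:
    return f"diagnosis based on [{images}] and [{records}]"

def orchestrate(case: str) -> str:
    """Agent 4: coordinates the others and manages the workflow."""
    # Agents 1 and 2 are independent, so they run in parallel.
    with ThreadPoolExecutor() as pool:
        images = pool.submit(analyse_images, case)
        records = pool.submit(retrieve_records, case)
        # Agent 3 depends on both outputs before producing insights.
        return synthesise(images.result(), records.result())

print(orchestrate("case-42"))
```

The dependency structure (1 and 2 in parallel, 3 blocked on both, 4 coordinating) mirrors the description above; error handling and iterative refinement are omitted for brevity.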
Trend developments
Agentic AI is still in early development stages. Most practical applications consist only of individual AI agents designed for specific tasks such as code generation, content creation or customer service, operating within controlled environments with significant human oversight. For this reason, these cannot be considered Agentic AI. However, the field is making progress, with a focus on communication protocols and standards that enable AI agents to interact with each other, such as Google’s Agent2Agent Protocol (A2A) and Anthropic’s Model Context Protocol (MCP), both open standards. These protocols and standards are still evolving. In practice, genuinely autonomous Agentic AI systems capable of independently managing complex business processes remain an area of ongoing research and have not yet been successfully implemented.
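What such protocols standardise is, in essence, a machine-readable way for one agent to describe a task to another. The JSON below is a purely illustrative task-request shape; it is not the actual A2A or MCP wire format, and every field name in it is an assumption made for the example.

```python
# Illustrative inter-agent task request (NOT the real A2A or MCP format).
import json

request = {
    "task_id": "123",
    "from_agent": "orchestrator",
    "to_agent": "image-analysis",
    "goal": "analyse chest X-ray",
    "inputs": {"study_ref": "study-001"},
}

# Serialise for transport, then parse on the receiving agent's side.
encoded = json.dumps(request)
decoded = json.loads(encoded)
print(decoded["to_agent"])
```

The value of a shared standard is that agents built by different vendors can parse each other's requests without bespoke integration code.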
However, the effort to integrate simple AI agents with existing tools is well under way. For example, many available large language models (LLMs), such as ChatGPT, Claude and Perplexity, can already integrate with Internet search engines, using them to augment their capabilities and provide more up-to-date information to the user.
Looking forward, the field appears headed toward a period of consolidation. The near-term outlook points toward specialisation rather than generalisation. Industry-specific AI agents might be the first to appear, paving the way for more complex AI systems that can be called Agentic AI.
In other words, the next phase of development will focus on creating AI agents with deep domain expertise rather than broad general capabilities.
| A Global Enterprise Agentic AI Market report estimates that the Agentic AI market will grow from USD 3.6 billion in 2024 to nearly USD 171 billion by 2034, a compound annual growth rate (CAGR) of 47.2%.[ii] |
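As a quick sanity check, the quoted figures are internally consistent: compounding USD 3.6 billion at 47.2% per year over the ten years from 2024 to 2034 does land near USD 171 billion.

```python
# Sanity check of the quoted market figures: start value compounded
# at the stated CAGR over ten years.
start, cagr, years = 3.6, 0.472, 10
projected = start * (1 + cagr) ** years  # ~172, consistent with "nearly 171"
print(round(projected, 1))
```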
Potential impact on individuals
Due to its emphasis on autonomy, memory, and access to tools, databases and other software, Agentic AI could create privacy risks that go beyond those of its individual AI components/agents. To operate properly on consumer devices, AI agents might require extensive access to the data stored on those devices. This is even more concerning when such agents are embedded in the devices’ operating system rather than offered as an option to consumers. Such blanket access to data might raise security concerns down the line by creating avenues for data regurgitation through prompt injection and jailbreaking. Moreover, Agentic AI may be capable of bypassing APIs, and with them the access controls those interfaces enforce.
Considering that Agentic AI could autonomously gather, analyse and act on personal data across multiple systems, it may be challenging to determine in advance what personal data is gathered, how it is used, and for what specific purposes. There is also the risk that Agentic AI might autonomously determine new uses for personal data as it pursues its goals.
The complex decision-making processes of Agentic AI could make it difficult for users to understand how personal data would be used, what conclusions would be drawn from personal data and why certain actions would be taken on their behalf (lack of transparency).
Personal data aggregated from diverse sources may be combined in unforeseen ways, potentially without user consent, resulting in comprehensive profiles that reveal sensitive patterns of behaviour, preferences and activities. Agentic AI systems, by retaining memory of past interactions, continuously learning from user behaviour and sharing information across multiple AI agents, amplify these risks.
Together, the creation of extensive profiles and the persistent retention of historical data pose significant privacy concerns for the individuals involved, potentially leading to high-impact breaches of personal privacy.
In this context, implementing data subject rights (such as the right of access or erasure) would be very difficult to achieve.
The continuous adaptation of Agentic AI based on user interactions can potentially perpetuate and amplify existing biases in ways that could be difficult to detect or correct. These systems could develop biased patterns through their autonomous personal data collection processes, learning from skewed datasets or user behaviours that reflect societal inequalities, and then applying these biased models to make decisions that affect users' lives.
Additionally, Agentic AI systems could make confident predictions and take actions based on incomplete or misrepresented personal data. Their autonomous nature means these errors would cascade through multiple decisions before being detected. These behaviours could compromise the fairness and accuracy of the systems.
If an Agentic AI system causes harm, violates privacy regulations or treats individuals unfairly, determining responsibility can be challenging - whether it lies with the AI developers, the deploying organisation acting as the data controller, or the users interacting with the system who may have provided incorrect instructions - resulting in a potential accountability gap.
When an Agentic AI system interacts with external services to complete tasks, personal data might be shared with third parties that have their own personal data collection and processing practices. Users might not be aware of the interactions with these third parties or of the implications that such personal data sharing has for their privacy.
As Agentic AI systems could make decisions affecting human lives with minimal direct oversight, they risk undermining human dignity and autonomy by reducing individuals to data points in algorithmic calculations rather than ensuring the individuals’ position as the arbiters of choices affecting their own lives. There is a risk that Agentic AI may have a manipulative effect on the person concerned, thus reducing the agency of the human being.
| Agentic AI is expected to bring significant changes in how we use AI. Unlike traditional systems that just follow instructions, Agentic AI can set intermediate goals, plan, adapt and coordinate different agents to handle complex tasks. This makes it powerful for areas like healthcare, scientific research or finance, but it also raises serious questions about privacy, fairness and accountability. Because these systems learn, remember and act with little human oversight, it can become harder for users to understand or control how personal data is used and how decisions are made. |
Suggestions for further reading
- Sapkota, R., Roumeliotis, K. I., & Karkee, M. (2025). AI agents vs. Agentic AI: A conceptual taxonomy, applications and challenges. arXiv preprint arXiv:2505.10468.
- Acharya, D. B., Kuppan, K., & Divya, B. (2025). Agentic AI: Autonomous intelligence for complex goals - a comprehensive survey. IEEE Access.
- Schneider, J. (2025). Generative to agentic AI: Survey, conceptualization, and challenges. arXiv preprint arXiv:2504.18875.
[i] Generative AI describes systems that can produce content (text, images, sounds, etc.) based on their training data (e.g. Large Language Models or Large Image Models).
[ii] Global Enterprise Agentic AI Market, https://market.us/report/enterprise-agentic-ai-market/