How to get started with AI agents (and do it right)


Because AI is moving fast and fear of missing out (FOMO) runs high, generative AI initiatives are often driven from the top down, and enterprise leaders tend to get overly excited about the groundbreaking technology. But when companies rush to build and deploy, they run into the same issues that plague other technology implementations. AI is complex and requires specialized expertise, and some organizations quickly find themselves in over their heads. 

In fact, Forrester predicts that nearly three-quarters of organizations that attempt to build AI agents in-house will fail. 

“The challenge is that these architectures are convoluted, requiring multiple models, advanced RAG (retrieval augmented generation) stacks, advanced data architectures and specialized expertise,” write Forrester analysts Jayesh Chaurasia and Sudha Maheshwari. 

So how should enterprises decide whether to adopt third-party models, use open-source tools or build custom, fine-tuned models in-house? Experts weigh in. 

AI architecture is far more complex than enterprises think

Organizations that attempt to build agents on their own often struggle with retrieval augmented generation (RAG) and vector databases, Forrester senior analyst Rowan Curran told VentureBeat. It can be a challenge to get accurate outputs in expected time frames, and organizations don't always understand the process of re-ranking, or why it matters: it helps ensure that the model is working with the highest-quality data. 

For instance, a user might input 10,000 documents and the model may return the 100 most relevant to the task at hand, Curran pointed out. But short context windows limit what can be fed in for re-ranking, so a human user may have to make a judgment call and narrow the set down to 10 documents, reducing model accuracy. 
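
To make that retrieve-then-rerank funnel concrete, here is a minimal, self-contained sketch. The scoring is deliberately toy-level keyword overlap; in a real deployment the first pass would query a vector database and the second pass would call a dedicated re-ranking model, so every name and number here is illustrative rather than any product's API.

```python
def first_pass_retrieve(query: str, documents: list[str], k: int = 100) -> list[str]:
    """Cheap, recall-oriented pass: keep any document that shares a word with the query."""
    query_words = set(query.lower().split())
    hits = [d for d in documents if query_words & set(d.lower().split())]
    return hits[:k]


def rerank(query: str, candidates: list[str], final_k: int = 10) -> list[str]:
    """Precision-oriented pass: score candidates more carefully and keep only the best few,
    since a short context window limits how much can reach the model."""
    query_words = set(query.lower().split())

    def score(doc: str) -> float:
        doc_words = set(doc.lower().split())
        return len(query_words & doc_words) / max(len(doc_words), 1)

    return sorted(candidates, key=score, reverse=True)[:final_k]


if __name__ == "__main__":
    corpus = [f"document {i} about invoice processing and approvals" for i in range(10_000)]
    shortlist = first_pass_retrieve("invoice approvals", corpus)   # broad: up to 100 candidates
    context = rerank("invoice approvals", shortlist, final_k=10)   # narrow: 10 docs for the prompt
    print(len(shortlist), len(context))
```

The point is the shape of the pipeline: a broad, cheap first pass followed by a careful second pass that fits within the context window.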

Curran noted that RAG systems may take 6 to 8 weeks to build and optimize. For example, the first iteration may have a 55% accuracy rate before any tweaking; the second release may reach 70%, and the final deployment will ideally get closer to 100%. 
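
Teams typically measure that climb by scoring every iteration against the same fixed, labeled test set. The sketch below is one hedged illustration of such a harness; the questions, expected answers and the `answer_question` callable are placeholders for whatever pipeline is actually being tuned.

```python
def evaluate(answer_question, test_set: list[tuple[str, str]]) -> float:
    """Return the fraction of questions whose answer contains the expected fact."""
    correct = 0
    for question, expected in test_set:
        answer = answer_question(question)
        if expected.lower() in answer.lower():
            correct += 1
    return correct / len(test_set)


if __name__ == "__main__":
    # Illustrative labeled set; a real one would be larger and domain-specific.
    test_set = [
        ("What is the standard payment term?", "30 days"),
        ("Who approves invoices over $10,000?", "the CFO"),
    ]
    # Each pipeline iteration is scored against the same set, so the team can
    # watch accuracy move from an initial baseline toward the target.
    placeholder_pipeline = lambda q: "Payment terms are 30 days from receipt."
    print(f"accuracy: {evaluate(placeholder_pipeline, test_set):.0%}")
```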

Developers need to understand data availability (and quality) and how to re-rank, iterate, evaluate and ground a model (that is, tie model outputs to relevant, verifiable sources). Temperature is another lever: turning it up or down determines how creative a model will be, but some organizations are "really tight" with creativity, thus constraining outputs, said Curran. 
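
As a rough illustration of two of those knobs, the sketch below pairs a temperature setting (passed through a config object) with a grounding check that rejects answers that cannot be tied back to a supplied source. The `call_model` function is a stand-in for a real LLM call, not any particular vendor's API, and the document IDs are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class GenerationConfig:
    temperature: float = 0.2   # lower = more deterministic, higher = more "creative"
    max_tokens: int = 512


def call_model(question: str, sources: list[dict], config: GenerationConfig) -> str:
    """Placeholder for a real LLM call; returns a canned answer that cites a source."""
    return f"Standard payment terms are 30 days. [source: {sources[0]['id']}]"


def grounded_answer(question: str, sources: list[dict], config: GenerationConfig) -> str:
    answer = call_model(question, sources, config)
    # Grounding check: only accept answers that cite one of the supplied sources,
    # so every claim can be traced back to a verifiable document.
    if not any(s["id"] in answer for s in sources):
        return "No grounded answer found in the provided documents."
    return answer


if __name__ == "__main__":
    sources = [{"id": "policy-42", "text": "Invoices are due within 30 days of receipt."}]
    config = GenerationConfig(temperature=0.1)   # kept low for factual, repeatable answers
    print(grounded_answer("What are the payment terms?", sources, config))
```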

“There’s been a perception that there’s an easy button around this stuff,” he noted. “There just really isn’t.” 

A lot of human effort is required to build AI systems, said Curran, emphasizing the importance of testing, validation and ongoing support. This all requires dedicated resources. 

“It can be complex to get an AI agent successfully deployed,” agreed Naveen Rao, VP of AI at Databricks and founder and former CEO of MosaicAI. Enterprises need access to various large language models (LLMs), as well as the ability to govern and monitor not only agents and models but also the underlying data and tools. “This is not a simple problem, and as time goes on there will be ever-increasing scrutiny over what and how data is being accessed by AI systems.” 
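
One hedged illustration of that kind of scrutiny: a thin audit wrapper that records which documents an agent reads and when. The class and field names below are assumptions made for the sketch, not a Databricks or other vendor governance API.

```python
import json
import time


class AuditedStore:
    """Wraps a document store so every read by an agent leaves an audit trail."""

    def __init__(self, documents: dict[str, str], log_path: str = "agent_access.log"):
        self._documents = documents
        self._log_path = log_path

    def read(self, doc_id: str, agent_id: str) -> str:
        # Record who accessed what, and when, before handing back the document.
        entry = {"ts": time.time(), "agent": agent_id, "doc": doc_id}
        with open(self._log_path, "a") as log:
            log.write(json.dumps(entry) + "\n")
        return self._documents[doc_id]


if __name__ == "__main__":
    store = AuditedStore({"contract-7": "Renewal terms: 12 months."})
    print(store.read("contract-7", agent_id="renewals-agent"))
```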

Factors to consider when exploring AI agents

When looking at options for deploying AI agents — third party, open source or custom — enterprises should take a controlled, tactical approach, experts advise. 

Start by considering several important questions and factors, recommended Andreas Welsch, founder and chief AI strategist at consulting company Intelligence Briefing. These include: 

  • Where does your team spend the majority of their time?
  • Which tasks or steps in this process take up the most time?
  • How complex are these tasks? Do they involve IT systems and accessible data? 
  • What would being faster or more cost-effective allow your enterprise to do? And can you measure benchmarks, and if so, how?

It’s also important to factor in existing licenses and subscriptions, Welsch pointed out. Talk to software sales reps to understand whether your enterprise already has access to agent capabilities, and if so, what it would take to use them (such as add-ons or higher tier subscriptions).

From there, look for opportunities in one business function. For instance: “Where does your team spend time on several manual steps that cannot be described in code?” Later, when exploring agents, learn about their potential and “triage” any gaps. 

Also, be sure to enable and educate teams by showing them how agents can help with their work. “And don’t be afraid to mention the agents’ limitations as well,” said Welsch. “This will help you manage expectations.”

Build a strategy, take a cross-functional approach

When developing an enterprise AI strategy, it is important to take a cross-functional approach, Curran emphasized. Successful organizations involve several departments in this process, including business leadership, software development and data science teams, user experience managers and others. 

Build a roadmap based on the business’ core principles and objectives, he advised. “What are our goals as an organization and how will AI allow us to achieve those goals?”

It can be difficult, no doubt because the technology is moving so fast, Curran acknowledged. “There’s not a set of best practices, frameworks,” he said. Not many developers have experience with post-release integrations and DevOps when it comes to AI agents. “The skills to build these things haven’t really been developed and quantified in a broad-based way.”

As a result, organizations struggle to get AI projects (of all kinds) off the ground, and many eventually switch to a consultancy or one of their existing tech vendors that have the resources and capability to build on top of their tech stacks. Ultimately, organizations will be most successful when they work closely with their partners. 

“Third-party providers will likely have the bandwidth to keep up with the latest technologies and architecture to build this,” said Curran. 

That’s not to say it’s impossible to build custom agents in-house; quite the contrary, he noted. For instance, if an enterprise has a robust internal development team and existing RAG and machine learning (ML) architecture, it can use those to create its own agentic AI. The same goes if “you have your data well governed, documented and tagged” and you don’t have a “giant mess” of an API strategy, he emphasized. 

Whatever the case, enterprises must factor ongoing, post-deployment needs into their AI strategies from the very beginning. 

“There is no free lunch post-deployment,” said Curran. “All of these systems require some type of post launch maintenance and support, ongoing tweaking and adjustment to keep them accurate and make them more accurate over time.”
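
In practice, that post-launch work often starts with something as simple as re-running the same evaluation set on a schedule and flagging regressions. The thresholds and weekly scores below are illustrative assumptions, not figures from the article.

```python
def accuracy_regressed(current: float, baseline: float, tolerance: float = 0.05) -> bool:
    """True if accuracy has slipped more than `tolerance` below the launch baseline."""
    return current < baseline - tolerance


if __name__ == "__main__":
    baseline = 0.82                      # accuracy measured at launch (illustrative)
    weekly_scores = [0.81, 0.79, 0.74]   # e.g. from re-running the eval set each week
    for week, score in enumerate(weekly_scores, start=1):
        if accuracy_regressed(score, baseline):
            print(f"week {week}: accuracy {score:.0%}, flag for retuning")
```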


