
ARTIFICIAL INTELLIGENCE

AI Churn Forces Enterprise Tech Stack Rebuilds

Rapid AI technology evolution and shifting strategies are compelling enterprises to rebuild their AI infrastructure every few months, hindering full deployment of AI agents.

Dec 9, 2025

Enterprises are grappling with significant AI tech churn, leading to frequent rebuilding of their AI infrastructures. A recent survey reveals that a large percentage of organizations, particularly regulated ones, replace parts of their AI stacks quarterly. This rapid evolution, coupled with a lack of satisfaction with existing AI components, poses challenges for deploying AI agents beyond pilot stages. Experts suggest that while AI agents hold immense promise, the current landscape necessitates continuous adaptation and strategic planning to navigate the fast-moving technological advancements and marketplace complexities.

Depiction of rapid technological change and evolving infrastructure. Credit: Shutterstock

The rapid advancements in artificial intelligence are presenting a formidable challenge for businesses, which frequently find themselves overhauling their AI infrastructures. This constant cycle of rebuilding is driven by the dynamic nature of AI capabilities and the continuous adjustment of corporate AI strategies. Such frequent modifications highlight a broader struggle within organizations to keep pace with an ever-shifting technological landscape.

A recent study by AI data quality firm Cleanlab surveyed over 1,800 software engineering leaders, shedding light on this escalating issue. The findings reveal that a substantial 70% of regulated enterprises, alongside 41% of unregulated organizations, are replacing at least a segment of their AI technology stacks every three months. An additional quarter of both regulated and unregulated companies reported updating their systems every six months, underscoring a widespread need for continuous adaptation.

Curtis Northcutt, CEO of Cleanlab, emphasized that these statistics reflect the ongoing difficulties organizations face in both monitoring the evolving AI environment and successfully deploying AI agents into production. Despite the considerable investment and interest in AI, the survey indicated that only 5% of respondents currently have AI agents in full production or are on the verge of doing so. Based on the technical hurdles reported by engineers, Cleanlab estimates that a mere 1% of the surveyed enterprises have moved AI agents beyond initial pilot phases.

Northcutt further elaborated on the current state of enterprise AI agents, noting that they are far from widespread adoption. He pointed out that numerous startups have attempted to introduce components of AI agents for businesses, many of which have not achieved significant success. This suggests a disconnect between the ambitious claims surrounding AI agent capabilities and their practical implementation within corporate environments. The journey from conceptual promise to tangible, scaled deployment remains a significant hurdle for many organizations.

The Accelerating Cycle of Tech Stack Rebuilds

The frequency with which organizations are revamping components of their AI agent tech stacks every few months is a clear indicator of both the rapid evolution within the AI landscape and a prevalent skepticism regarding the efficacy of agentic outcomes. This constant churn is not merely about minor adjustments; it often involves fundamental shifts in infrastructure. Changes can range from straightforward updates to an underlying AI model’s version to more complex transitions, such as moving from a closed-source model to an open-source alternative. Even altering the database used for storing agent data can trigger a cascade of necessary modifications throughout the system.

Northcutt highlighted the ripple effect of such changes. For instance, migrating to an open-source model that operates on an organization’s own servers necessitates a complete overhaul of the existing infrastructure, introducing new complexities. Should this transition prove less effective than anticipated, companies might revert to a different model, potentially leading to further infrastructural changes, especially if moving to a new cloud API with different specifications. This iterative process often feels like a continuous search for the most suitable, stable solution in a constantly shifting technological terrain.

Nuha Hashem, cofounder and CTO of Cozmo AI, a voice-based AI provider, corroborated these observations, noting a similar pattern of frequent changes in agent tech stacks across their clientele, particularly in regulated sectors. Hashem explained that early setups often function as a patchwork, performing differently in testing environments compared to live production. Even minor alterations in a library or a routing rule can profoundly impact how an agent handles a task, frequently necessitating another complete rebuild. This constant need to dismantle and reconstruct underscores the fragility of current AI deployments.

The underlying issue, Hashem suggested, is that many agent systems depend on behaviors embedded within the model itself, rather than relying on explicit, clear rules. Consequently, when a model undergoes an update, its behavior can subtly drift, leading to unexpected outcomes. To mitigate this, Hashem advises teams to establish clear, well-defined steps and verification processes for their agents. By doing so, the tech stack can evolve more gracefully without the constant threat of breakage and the need for frequent, disruptive rebuilds.
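Hashem's point about explicit steps and verification can be illustrated with a small sketch. The Python below is a minimal, hypothetical example (the call_model wrapper and the step definitions are assumptions, not Cozmo AI's or Cleanlab's code): each step declares its own validation, so a swapped or updated model whose behavior drifts fails a check immediately instead of regressing silently in production.

```python
# Minimal sketch of "explicit steps plus verification" for an agent pipeline.
# call_model() is a placeholder for whatever provider-specific completion call
# is in use; the step definitions below are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentStep:
    name: str
    prompt_template: str
    validate: Callable[[str], bool]  # explicit check, independent of the model

def call_model(prompt: str) -> str:
    """Placeholder for the provider-specific completion call."""
    raise NotImplementedError

def run_agent(steps: list[AgentStep], context: dict) -> dict:
    results = {}
    for step in steps:
        output = call_model(step.prompt_template.format(**context, **results))
        if not step.validate(output):
            # Fail fast: a model update that shifts behavior surfaces here,
            # not as a subtle change in what the agent does downstream.
            raise ValueError(f"step '{step.name}' failed validation")
        results[step.name] = output
    return results

# Example: a triage step whose output must be one of three known labels.
triage = AgentStep(
    name="triage",
    prompt_template="Classify this ticket as refund, exchange, or other: {ticket}",
    validate=lambda out: out.strip().lower() in {"refund", "exchange", "other"},
)
```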

Addressing Trust Deficits and Future Outlook

A significant challenge within the current AI landscape is the pervasive dissatisfaction with existing AI stack components, reflecting a deeper trust deficit in AI agent outcomes. Cleanlab's survey investigated user experience across five categories of agent infrastructure, including orchestration, fast inference, and observability. Only about a third of respondents expressed satisfaction with any of these components, and roughly 40% indicated they were actively seeking alternatives for each, signaling widespread discontent with current solutions.

The concern extends notably to security and guardrails; a mere 28% of respondents reported satisfaction with their current implementations. This low figure highlights a considerable lack of confidence in the reliability and safety of AI agent results, which is a significant barrier to broader adoption. Despite what might appear as a pessimistic assessment from Cleanlab’s survey, several AI experts affirm the accuracy of these findings, suggesting the depicted challenges are indeed prevalent across the industry.

Jeff Fettes, CEO of Laivly, an AI-based customer experience provider, echoed these sentiments, confirming that many enterprises frequently rebuild parts of their AI stacks. He emphasized that the organizations achieving greater success with AI are those that have embraced an iterative approach. Fettes observed that many companies struggle because they cling to traditional IT deployment methodologies, which are ill-suited for the rapid pace of AI evolution. Conventional IT platforms typically involve lengthy evaluation and deployment cycles, a timeline that AI advancements have rendered obsolete.

Fettes explained that IT departments historically engaged in extensive planning before transforming their tech stacks, expecting these changes to remain stable for a considerable period. However, with AI, they often discover that the technology has advanced significantly before their planning phases are even complete, forcing them to restart. This dynamic leads many companies to abandon existing AI pilots as the technology quickly moves forward, effectively rendering their own developments obsolete in a short timeframe. Compounding this issue is the sheer volume of new companies and solutions entering the AI market, making it challenging for CIOs to discern effective tools from those that are unproven or simply do not work as advertised.

Artur Balabanskyy, cofounder and CTO of Tapforce, an app development firm, also noted the trend of enterprises frequently rebuilding their AI stacks due to constant technological evolution. He warned that organizations failing to keep their systems updated risk lagging in performance, security, and reliability. However, Balabanskyy believes that these continuous rebuilds do not inherently need to create chaos. He advocates for a layered approach to agent stacks, incorporating robust version control, continuous monitoring, and a modular deployment strategy. This modular architecture enables leaders to modify specific components without destabilizing the entire system, while guardrails, automated testing, and strong observability ensure the reliability of production systems even amidst rapid tech changes.
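As a rough illustration of the modular approach Balabanskyy describes, the sketch below (hypothetical Python, with invented adapter names such as LocalLlama and PgVectorStore) puts the model and the vector store behind small interfaces and pins a stack version, so replacing one component becomes a wiring change rather than a rebuild of everything around it.

```python
# Sketch of a modular agent stack: each layer sits behind a small interface,
# and the wiring is versioned so swaps and rollbacks are contained.

from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class VectorStore(Protocol):
    def search(self, query: str, k: int) -> list[str]: ...

class AgentStack:
    """Wires concrete components together; only this wiring changes on a swap."""

    def __init__(self, model: ChatModel, store: VectorStore, version: str):
        self.model = model
        self.store = store
        self.version = version  # pinned, so a rollback is a config change

    def answer(self, question: str) -> str:
        context = "\n".join(self.store.search(question, k=3))
        return self.model.complete(f"Context:\n{context}\n\nQuestion: {question}")

# Moving from a hosted API to a self-hosted model touches one line of wiring,
# not the guardrails, testing, or observability layers around it, e.g.:
# stack = AgentStack(model=LocalLlama(), store=PgVectorStore(), version="2025.12.1")
```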

Cleanlab’s Northcutt advises IT leaders to adopt a rigorous process for AI agent deployment, starting with a precise definition of the agent’s expected functions. Instead of broad goals like “let’s have AI do customer support,” he recommends detailed specifications, as sketched below: precisely where AI intervention begins, what constitutes good performance, what accomplishments are expected, and which tools the agent will actually use. This meticulous planning is crucial for setting realistic expectations and achieving measurable outcomes.

Northcutt’s projections suggest that widespread deployment of AI agents is still several years away. He predicts that the current 1% of organizations with agents in production might rise to 3% or 4% by 2027, with true agents reaching 30% of enterprises by 2030. While acknowledging the significant benefits AI agents will eventually bring, he urged evangelists to temper their rhetoric and set more reasonable expectations. He believes that a more pragmatic approach will ensure that the substantial investments in AI ultimately yield their promised returns.
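For concreteness, here is one way such a specification might be written down: a minimal Python sketch, with field names and example values that are illustrative assumptions rather than anything Cleanlab prescribes.

```python
# Hypothetical up-front agent specification covering the four questions
# Northcutt raises: scope, success criteria, expected outcomes, allowed tools.

from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    scope: str                      # precisely where the agent takes over
    success_criteria: list[str]     # what "good performance" means, measurably
    expected_outcomes: list[str]    # what the agent is supposed to accomplish
    allowed_tools: list[str] = field(default_factory=list)  # and nothing else

support_triage_spec = AgentSpec(
    scope="Tier-1 tickets tagged 'billing' after the customer's first reply",
    success_criteria=[
        ">= 90% of drafted responses approved by a human reviewer",
        "median handling time under 5 minutes",
    ],
    expected_outcomes=[
        "drafted reply",
        "ticket categorized",
        "refund flagged for human sign-off",
    ],
    allowed_tools=["crm_lookup", "refund_policy_search"],
)
```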