About: Akshaya Murthy leads AI transformation at Zendesk. He started out developing video games and, after an MBA from the University of Pittsburgh, worked with Deloitte, Oracle, and BNP Paribas in consulting and executive advisory capacities, driving strategic initiatives and heading internal audit functions. He has been at Zendesk for three years now, working to make Zendesk's operations AI-first.
When making sense of enterprise AI adoption, we often focus on technologies and frameworks. But the human element, specifically who leads these transformations and why, offers equally valuable insights and often matters more to the success of these initiatives. My recent conversation with Akshaya, Director of AI Transformation at Zendesk, revealed patterns that every AI practitioner should note, especially as we guide organizations through their AI journeys.
The first insight came from an unexpected place: organizational structure. AI transformation at Zendesk isn't led by IT or digital transformation teams, but by someone who simply got curious early and went deep. "The more I explored, the more I was convinced this is the future," Akshaya told me, explaining his journey from interest to leadership.
This isn't just a one-off case. We're seeing a pattern emerge across enterprises: traditional IT transformation teams, despite their expertise, often lack the specific skill set needed for AI initiatives. This gap creates both opportunities and responsibilities for those who understand AI at the technology level and in its business implications.
AI applications are seldom about the technology alone; success comes from recognizing patterns early and developing an intuition for where AI will work well. Akshaya and Zendesk's engineering team started thinking about AI-driven enterprise search in summer 2023 (because "no one could find anything they were looking for"), which meant they were grappling with questions about embeddings, retrieval, and inference that would become industry-wide concerns a year later. This timeline offers a crucial lesson: early implementation, even at a smaller scale, provides invaluable insights for larger rollouts and creates the conditions for understanding both the technology and the business use case.
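Zendesk's actual search stack isn't described in our conversation, but the core question their team was grappling with, how to rank documents against a query by embedding both and comparing vectors, can be sketched in a few lines. This is a deliberately toy illustration: the `embed` function below is a bag-of-words stand-in for a real embedding model, and the documents are invented.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    # In production this would be a call to an embedding API or local model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors (Counters).
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs, k=2):
    # Rank documents by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "How to reset your VPN password",
    "Quarterly sales report template",
    "Expense reimbursement policy",
]
print(search("I forgot my VPN password", docs, k=1))
# → ['How to reset your VPN password']
```

Swapping the toy `embed` for a real embedding model (and the list scan for a vector index) is what turns this sketch into the kind of enterprise search system the Zendesk team was building.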
One of the most compelling insights from our conversation was about AI's impact on work structure. Akshaya envisions a future where core operations are decentralized to individual employees rather than teams. Take marketing: with AI agents today, a single person could potentially handle end-to-end campaigns. For AI practitioners, this raises fascinating technical and architectural questions: How do we design systems that empower individual contributors while maintaining enterprise-grade reliability? How does this shift affect our thinking about access control and permissions? What do guardrails look like in a team full of all-round generalists?
The challenges Akshaya's team faces will resonate with many AI practitioners:
1. Infrastructure decisions between GPUs and LPUs for inference—a question that becomes more critical as deployment scales
2. The build-vs-buy dilemma, which they resolved by choosing to buy, acknowledging the complexity of keeping pace with rapid AI advancement. They are now looking at LLM routers, which would let them swap LLMs at the flick of a switch.
3. Handling long-tail queries in RAG systems, a challenge that persists despite advances in LLM technology. In their case, RAG fails on specialized queries where people need a nuanced answer. I have heard about this problem from multiple sources now; the solution isn't straightforward, and I think automatic triaging is going to be a billion-dollar business.
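The second and third challenges connect naturally: a router that picks a model per query can also triage the long tail to a stronger model or a human. Zendesk's actual router is a product they are evaluating, not something described in detail, so the sketch below is purely illustrative; the model names, keyword rules, and escalation threshold are my own placeholders, not theirs.

```python
# Minimal sketch of an LLM router with a long-tail fallback.
# Model names, keywords, and the escalation threshold are illustrative.

ROUTES = {
    "default": "small-fast-model",
    "code": "strong-coding-model",
    "longtail": "human-escalation",
}

def classify(query: str) -> str:
    # Toy triage: keyword rules standing in for a learned classifier.
    if any(w in query.lower() for w in ("stack trace", "exception", "compile")):
        return "code"
    if len(query.split()) > 40:  # long, nuanced queries escalate
        return "longtail"
    return "default"

def route(query: str) -> str:
    """Swapping a model becomes a one-line config change, not a code change."""
    return ROUTES[classify(query)]

print(route("Why does my build throw a compile exception?"))
# → strong-coding-model
```

The "flick of a switch" benefit is visible in the `ROUTES` table: changing which LLM backs a route touches configuration only, leaving the application code untouched.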
However, there's also immense potential and upside. Akshaya shared how an AI agent prototype completed a week's worth of competitive intelligence work in just 8 hours. This exemplifies the kind of concrete wins that help build organizational buy-in.
Perhaps the most revealing insight from my conversation with Akshaya wasn't about technology stacks or implementation strategies—it was about how he secured such strong executive support for AI transformation. The CEO and CFO didn't need convincing because they had already experienced AI's potential firsthand, using ChatGPT to prepare for their quarterly meetings, asking the bot to roleplay as an analyst and ask questions about the results.
This personal experience proved more powerful than any ROI projection or market analysis. When key decision-makers have already felt the impact of AI in their own work, the conversation shifts from "Should we do this?" to "How quickly can we scale this?"
The appetite for AI knowledge extends far beyond the C-suite. When Akshaya's team announced an initiative to help non-technical staff understand and work with AI, the response was explosive: multiple teams signed up within a week. This wasn't polite corporate interest—it was a groundswell of genuine enthusiasm from employees who had glimpsed AI's potential through tools like ChatGPT and wanted more.
For AI practitioners, these two patterns—executive firsthand experience and bottom-up employee enthusiasm—create a perfect environment for driving transformation. The key is not just building technical solutions but tapping into this existing momentum and channeling it into structured, scalable change.
Akshaya's team uses a rigorous POC process to evaluate AI software, requiring clear value demonstration before wider deployment. With the advent of newer models, most apps now show enough value that, barring rare cases, his team hasn't rejected any. "Cost is not really a constraint. This is business transformation. If this works well, we will evaluate options later."
Notably, Zendesk applies this same methodical approach to rolling out its own AI products to customers: the team worked with design partners and demonstrated value to them first, creating powerful social proof. "When people see Uber using our AI solutions, they want to do it too," he explains. The results validate this careful approach: customers are reporting first-contact resolution rates of over 60% on support tickets. Overall, they have acquired a substantial number of customers since rolling out AI.
For AI practitioners, Akshaya's mental model offers a valuable framework: "work backward from a future state where AI fundamentally changes how work gets done." In this vision, humans become generalists while AI handles specialized tasks. The traditional $5T+ IT spend shifts dramatically toward AI solutions.
His current focus on multimodal AI applications and comprehensive upskilling efforts points to where enterprise AI is heading. Practitioners need to think beyond individual use cases to how AI reshapes entire organizational workflows.
The future of enterprise AI isn't just about selecting the right models or optimizing inference costs—it's about understanding how these technical decisions reshape how organizations work. As we guide organizations through this transformation, keeping both the technical and human elements in focus will be crucial for success.
He ends the chat with one powerful insight: "If I knew at the start of 2023 what I know now, I would have prepared myself for long-haul conversations, slowly building trust."
True organizational change isn't about implementing AI; it's about cultivating trust, building consensus, and preparing people to embrace the journey. The most sophisticated technology fails without human readiness and emotional buy-in.