As AI rapidly reshapes the SaaS landscape, finance leaders are being asked to balance growth with cost discipline, infrastructure investment, and long-term scalability. In this conversation, Mr. Ankit Sarawagi, Chief Financial Officer at Verloop, shares how the company is building a sustainable AI-led business model, the metrics that matter beyond acquisition, and the financial shifts shaping the next phase of enterprise adoption.
With Verloop's core promise being cost efficiency for clients, what is your primary financial strategy to ensure your own growth is equally efficient and sustainable for 2026?
At Verloop, cost efficiency starts internally. Our 2026 strategy is built on being deliberate about where we spend, what we build, and who we hire. We avoid hiring for work that can be reliably handled by AI, even if the upfront cost looks higher. That discipline lets us invest in people who solve complex problems and own outcomes. We focus on impact per person rather than expanding teams for scale alone.
Market conditions are also working in our favour. With AI and Voice AI, buyer readiness has increased sharply, leading to stronger inbound demand and faster deal closures. Voice has also driven 6 to 8x higher ACVs, which improves revenue efficiency. For 2026, we are targeting 3x ARR growth, with India and MENA as core markets. We are building co-sell partnerships in LATAM and running focused go-to-market tests in SEA and Germany.
Beyond customer acquisition, what are the key unit economic metrics you monitor to validate the true health and scalability of Verloop's AI-heavy business model?
We focus on metrics that compound over time. LTV is critical because we sit deep inside customer operations. Retention and expansion matter more to us than raw logo growth. Expansion revenue is another key signal. Customers often start with chat and then move into WhatsApp, campaigns, audits, or Voice. That progression shows trust and validates platform depth.
We also track ACV and revenue per customer, especially for AI-led products like Voice. Higher ACVs combined with low incremental delivery costs allow us to scale without linear cost growth. Finally, we closely monitor cost-to-serve per customer. As usage grows, marginal costs should reduce. If they don’t, it is a signal to revisit product or architecture decisions.
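The metrics described above follow standard SaaS unit-economics arithmetic. A minimal sketch, using entirely hypothetical figures (none of Verloop's actual numbers are disclosed in the interview), shows how LTV and cost-to-serve per customer are typically computed:

```python
# Illustrative unit-economics sketch. All inputs are hypothetical
# placeholders, not Verloop's real figures.

def ltv(avg_monthly_revenue: float, gross_margin: float, monthly_churn: float) -> float:
    """Lifetime value: margin-adjusted monthly revenue divided by churn,
    i.e. revenue over the expected customer lifetime (1 / churn months)."""
    return avg_monthly_revenue * gross_margin / monthly_churn

def cost_to_serve(total_monthly_serving_cost: float, customers: int) -> float:
    """Average monthly cost to serve one customer. If this rises as usage
    grows, that is the signal to revisit product or architecture decisions."""
    return total_monthly_serving_cost / customers

# Hypothetical example: $2,000 MRR per customer, 80% gross margin, 2% monthly churn
print(ltv(2_000, 0.80, 0.02))         # margin-adjusted lifetime value per customer
print(cost_to_serve(50_000, 100))     # serving cost spread across the customer base
```

The "low incremental delivery cost" point maps directly onto the second function: as `customers` grows, `cost_to_serve` should fall if the numerator grows sub-linearly.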
How does your finance team establish financial predictability and investor trust in a sector defined by rapid technological change and evolving AI costs?
Predictability comes from discipline, not from guesswork. Our finance team focuses on clear cost ownership, conservative planning, and tight coordination across product, revenue, and finance. We have been EBITDA positive on average for the past seven months, which gives us confidence that growth is controlled and not spend-driven. It also helps us absorb fluctuations in AI infrastructure costs.
We prioritise revenue quality over headline growth. Voice has driven higher ACVs and faster closures, which improves visibility and reduces dependence on volume-led growth. We track gross margins by product line and model AI cost changes cautiously. Nearing a million dollars in monthly revenue was meaningful because it validated that scale can come with control.
How do you evaluate the long-term ROI of foundational AI infrastructure investments against the pressure for rapid, short-term product experimentation?
We ask one question consistently: does this investment make the product easier to adopt and scale, now or in the future? If yes, we proceed. If not, we pass. Foundational systems like Voice infrastructure and orchestration are built to support faster experimentation, not slow it down. Without a strong base, experimentation becomes fragile and expensive.
Voice is a good example. The infrastructure we built for latency handling, orchestration, and quality monitoring now benefits chat and other channels as well. If an investment compounds across products, it earns its place. We also track time-to-value closely. Long go-live cycles kill adoption. Any infrastructure that cuts implementation effort or reduces manual tuning justifies the spend.
Looking at 2026, what specific financial trend in the AI-SaaS space poses the biggest strategic opportunity for Verloop's scale-up plans?
The biggest shift is AI spend moving from experimentation budgets to operating budgets. Buyers are now evaluating AI based on cost replacement, not novelty. This plays to our strengths. AI that reduces cost per interaction and improves productivity gets scaled faster and with more confidence.
We are also seeing a strong push toward platform consolidation. Buyers want fewer vendors that can do more. This supports deeper expansion within accounts, something we already see across our customer base.
How do you measure the success and value of AI R&D projects that are essential for innovation but may not have an immediate path to monetization?
We treat AI R&D as strategic bets, not isolated costs. The first filter is capability. Does this work expand what the platform can deliver across products? We then look at operational impact. If an R&D effort shortens go-live time, reduces manual work, or improves internal workflows, it has direct commercial value even without a price tag.
Internal adoption is a strong signal. When teams pull an R&D output into daily use without prompting, it tells us the work solves a real problem. We also believe in disciplined exits. Projects that don’t show a clear path to platform impact or cost efficiency are reshaped or stopped during quarterly reviews.