How Nektar Helps AI Hypergrowth Companies Move Even Faster
Fast-moving AI companies are having a moment. Every week, a new AI-native startup crosses $100M ARR in what feels like record time. Accel's 2025 Globalscape report shows a "new breed of AI-native applications" hitting scale much faster than previous generations of SaaS, with some reaching $100M ARR in just a few years.

That velocity is backed by unprecedented capital. Prominent AI companies like Cursor, Writer, Groq and Fireworks are raising huge rounds, hiring at triple-digit growth rates, and building products that spread virally from individual builders into the world's largest enterprises. AI application categories like developer tools, finance, cybersecurity and vertical AI each attracted multiple billions of dollars in 2025 funding alone.

Nektar sits right in the middle of this wave. Over the past year, we've partnered with some of the fastest-growing AI companies in the US – including Writer, Cursor, Groq, Chainguard and Fireworks – to help them turn raw go-to-market activity into clean, structured, AI-ready data they can actually execute on.

This blog looks at why AI companies grow differently, what that does to their GTM data, and how Nektar helps them grow even faster.

The new AI growth curve: Speed, Efficiency and Youth

Funding and company maturity

Accel's data makes one thing clear: AI is no longer a niche category. It is the new centre of gravity for software investing.

- Total EU/US/IL cloud and AI funding (excluding models) has climbed into the ~$180B+ range annually, with 2025 setting fresh records.
- AI model funding is heavily concentrated in the US, but on the application side, EU/IL funding now represents roughly two-thirds of US levels, showing how global this wave has become.
- The winners look very different from the last SaaS cycle: over 65% of the Accel US & Europe AI 100 are 0–3 years old, and US winners skew especially young at 2.4 years on average.
Put simply: AI companies are raising big, hiring fast, and still figuring out their GTM motion on the fly.

Bottom-up adoption and insane efficiency

AI-native tools are spreading from the bottom up:

- The share of developers using AI coding assistants jumped from 36% in 2023 to 90% in 2025 – in just two years.
- Tools like AI IDEs, agents and copilots are hitting milestones such as "$100M ARR in 8 months" and "10x YoY growth," according to Accel's case studies of leading AI-native apps.

This isn't just fast growth – it's efficient growth. Accel estimates that leading AI applications now generate 3–10x more ARR per employee than prior generations of SaaS companies.

But that speed and efficiency create a GTM paradox: you can scale product adoption and revenue incredibly fast, while your GTM data, process and tooling lag badly behind.

The hidden tax of hypergrowth: messy GTM data

Most fast-growing AI companies share a few traits:

- They sell into large, multi-person buying committees (Fortune 500, Global 2000, high-growth tech).
- They run hybrid motions – bottom-up PLG adoption plus enterprise sales, often with heavy founder-led or executive-led outbound.
- Their GTM stack is complex and evolving: Salesforce + Gong + Snowflake + ABM + sequencing tools, changing every few quarters.
- They are young – which means processes, definitions and data hygiene were rarely "designed"; they just happened.

That shows up in four chronic problems:

- Invisible buying groups. Activity sits at the account or activity-object level, not tied to the humans who are actually influencing a deal. Contact roles are incomplete, incorrect, or simply not used.
- Multi-threading that's impossible to measure. Leadership wants reps and CSMs to multi-thread.
But nobody can answer basic questions like: "How many net new stakeholders did this SDR actually bring in?" or "Which deals progressed because we pulled in the economic buyer early?"

- Broken marketing attribution for enterprise deals. First-touch and last-touch models collapse when there are 10–20 stakeholders, dozens of events and campaigns, and long sales cycles. "Marketing sourced" covers only a small fraction of reality.
- No shared view of the customer journey. Pre-pipeline engagement, in-pipeline meetings, onboarding, success reviews, expansion conversations – they live in different systems owned by different teams.

This is exactly the gap Nektar is built to fill.

Nektar as the data backbone for AI GTM

At its core, Nektar is a revenue data platform that:

- Harvests metadata from communication tools (email, calendar, meetings, sequences).
- Cleans and transforms that data.
- Writes it into Salesforce against the right opportunities, accounts, contacts and leads.
- Automatically creates and updates Opportunity Contact Roles (OCRs) with accurate personas (economic buyer, champion, influencer, etc.).
- Generates revenue signals that help teams act – from "missing exec sponsor" to "multi-threading risk" to "QBR overdue."

Writer is a great illustration of how fast-moving AI companies use this foundation.

Writer: building an AI-ready GTM engine on top of Nektar

Writer is an enterprise AI platform selling into Fortune 500 and Global 2000 organizations. Their GTM complexity is huge: multi-persona deals, long cycles, and a mix of PLG, partner and enterprise motions.

One activity capture layer for Sales, CS and Marketing

Writer started with Nektar in sales, then expanded to sales engineering, customer success and now marketing. Nektar:

- Captures emails, meetings and other activities from tools like Gmail and calendar.
- Associates them correctly with accounts, opportunities and contacts in Salesforce.
- Backfills historical data by "travelling back in time" across past emails and calendars, so data isn't limited to post-implementation activity.
- Creates missing contacts and writes them into Salesforce as OCRs with mapped personas.

Compared with their previous setup (Gong plus internal workarounds), Writer's RevOps leaders called out that Nektar simply does a better job of capturing and correctly associating activities, especially in complex account structures with multiple open opportunities. This gives Writer a single, reliable activity dataset they can push into their warehouse (GCP) and model in Omni for analytics – a critical enabler for AI-driven GTM.

Making multi-threading measurable (and compensable)

Writer wants SDRs and AEs to multi-thread aggressively – and they want to pay them for doing it. The problem: Nektar was so good at
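To make the capture-and-association idea concrete, here is a minimal, hypothetical sketch of that kind of flow: match an activity's participants to an opportunity's account by email domain, create any missing contacts, and record them as contact roles with a persona. Every name here (`Contact`, `Opportunity`, `associate_activity`, the persona labels) is illustrative only – this is not Nektar's actual data model or API.

```python
# Illustrative sketch of an activity-capture step, NOT Nektar's real API.
from dataclasses import dataclass, field

@dataclass
class Contact:
    email: str
    persona: str  # e.g. "economic buyer", "champion", "influencer"

@dataclass
class Opportunity:
    account_domain: str
    # Stands in for Opportunity Contact Roles (OCRs) on the opportunity.
    contact_roles: list = field(default_factory=list)

def associate_activity(opportunity, participants, known_personas):
    """Match activity participants to the opportunity's account by email
    domain, create missing contacts, and record them as contact roles.
    Returns the net-new stakeholders surfaced by this activity."""
    seen = {c.email for c in opportunity.contact_roles}
    added = []
    for email in participants:
        domain = email.split("@")[-1]
        if domain != opportunity.account_domain or email in seen:
            continue  # seller-side participant, or already tracked
        seen.add(email)
        persona = known_personas.get(email, "influencer")  # default persona
        contact = Contact(email=email, persona=persona)
        opportunity.contact_roles.append(contact)
        added.append(contact)
    return added
```

Because the function returns only net-new buyer-side contacts, counting its results per rep is one simple way to answer a question like "how many net new stakeholders did this SDR bring in?" – again, as a sketch of the idea rather than how Nektar computes it.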


























