Written by: Justin Capaldi

We hosted three full days of events during LA Tech Week, each one focused on helping founders learn, connect, and build with the latest in AI and startup growth.

  • Day 1: AI Building Day – Exploring how founders are building and scaling AI startups
  • Day 2: Startup Growth Day – Hands-on tactics for go-to-market and growth
  • Day 3: AI Search Day – Navigating the new frontier of AI-driven search

This post covers the first day: AI Building Day, where we brought together founders, investors, and builders to dive deep into what it really takes to build an AI startup that works. The day included three sessions:


Session 1: Building an AI Startup – Founder Panel

We brought together a diverse mix of voices including an investor, a lawyer, a PhD in Data Science, and a founder to dive into the realities of building an AI startup.

Panelists:

Mark Lee, Mucker Capital (moderator)

The AI Market Right Now: “Autonomous software” over shiny demos

What investors care about has shifted fast. Things that wowed a year ago are now table stakes. At the application layer (as opposed to infrastructure), investors now prioritize products that behave like autonomous teammates: software you manage, not software you merely use. Think: set it up once, then Slack, email, or call it the way you would an employee.

The agent shake-out is real. AI SDRs were early darlings; many hit around $2M ARR quickly, then churned when quality and retention fell short, and some even faked their numbers. The lesson: full end-to-end autonomy was premature. Winning teams now focus on fitting into real workflows, not replacing them wholesale.

Takeaway: Pitch “autonomous outcomes” anchored in a clear, owned workflow. If your agent can run largely unattended and fit neatly into the way work already happens, you have investors’ attention.

Differentiation: Own a vertical, master the workflow

During the discussion, Danny shared how pivoting his AI tool from a broad productivity assistant into a VC-specific workflow instantly improved traction and feedback. Irina added an example from one of her portfolio startups in healthcare, which narrowed its focus to automating claim processing for dental offices rather than all medical practices. That specialization allowed them to achieve faster integrations, clearer messaging, and a defensible customer base.

Irina’s diligence lens: the best pitches aren’t feature tours; they demonstrate real customer understanding, edge-case handling, and end-to-end fit (e.g., customer service agents that handle the real “call me when things break” moments).

Danny’s founder-builder lens: he pivoted a horizontal “do-tasks from Notion + email” agent into a VC-specific workflow product and immediately found a moat and better feedback. The meta-lesson: specialization builds defensibility.

Founder checklist to sharpen differentiation

  1. Pick one ICP and one “hair-on-fire” workflow.
  2. Map the last 10% of tricky edge cases and build for those first.
  3. Prove autonomous value inside that workflow before expanding.
  4. Teach the product to speak the team’s native artifacts (e.g., CRM objects, case IDs, claims, tickets).

Services as a bridge

Historically, “services revenue” raised eyebrows. Not anymore.

  • Collin: Quome sells AI-powered services today to deliver full outcomes while the pure product matures, treating LLMs like a swappable brain with guardrails and tests around it.
  • Danny: Early revenue matters in an era where coding is easy. Services shorten time-to-cash, fund the product, and validate the workflow you’ll later productize.

Play it this way: Sell “AI-powered implementation” now; use those engagements to harvest reusable configs, prompts, evaluation suites, and adapters—the seeds of your product moat.

Building with LLMs: Guardrails over fine-tuning (early on)

Treat the LLM as an interchangeable core. Avoid owning training costs unless you’ve proven ROI. Invest instead in orchestration, evaluation, and deterministic rails. Most defensibility lives in problem-specific constraints, test harnesses, and integrations, not in the base model.

Practical stack guidance (a minimal sketch follows this list):

  • Brains: flexible (closed or open-source as needed)
  • Memory & facts: retrieval + structured stores
  • Control: tools, policies, allowlists, and “don’t try” boundaries
  • Quality: offline evals + live guardrails for the final 20% that makes or breaks production
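
To make the “swappable brain behind deterministic rails” idea concrete, here is a minimal Python sketch. Every name in it (LLMClient, Guardrails, answer_in_workflow) is illustrative rather than a specific vendor’s API; the point is simply where the defensible logic lives relative to the model.

```python
# A minimal sketch of treating the LLM as a swappable "brain" behind rails you own.
# All names here (LLMClient, Guardrails, answer_in_workflow) are illustrative.
from typing import Callable, Protocol


class LLMClient(Protocol):
    """Any model provider, closed or open source, hidden behind one method."""
    def complete(self, prompt: str) -> str: ...


class Guardrails:
    """Deterministic rails the product owns, independent of the base model."""
    def __init__(self, allowed_tools: set[str], output_check: Callable[[str], bool]):
        self.allowed_tools = allowed_tools   # "don't try" boundaries for tool use
        self.output_check = output_check     # live validation of the final output

    def tool_allowed(self, name: str) -> bool:
        # The orchestrator consults this before letting the agent touch a tool.
        return name in self.allowed_tools


def answer_in_workflow(client: LLMClient, prompt: str, rails: Guardrails) -> str:
    """Run the model, then validate before the answer touches the real workflow."""
    draft = client.complete(prompt)
    if not rails.output_check(draft):
        # Fail closed: escalate to a human instead of shipping a bad answer.
        raise ValueError("Output failed guardrail check; route to human review.")
    return draft
```

Swapping models then means writing a new LLMClient adapter and re-running the same offline eval suite, while the guardrails, tests, and integrations stay put.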

IP & Legal Strategy: Offense and defense without burning runway

Don’t patent everything. File surgical patents only where you’ve got a truly novel, durable use case (your “knife in the fight” against giants). Otherwise, prioritize trade secrets, copyright in original coordination/selection/arrangement, and trademarks for brand.

If AI helped author your code/content, what do you own? The more purely AI-generated, the weaker your copyright claim. Layer in meaningful human expression where ownership matters (especially for creative assets or code structure).

Stage by stage: Engage counsel early for IP strategy and streamlined contracts to cut sales cycles. Save heavier spends (like privacy infra) for Series A scale-up unless customers demand it sooner.

Risk, Cost & Compliance: The 3:00am reality

Founders also discussed budgeting and the operational tooling that keeps these risks in check: set clear monthly cloud spend limits (with tools like AWS Cost Explorer or CloudZero), track uptime with monitoring platforms such as Datadog or New Relic, and keep incident playbooks in Notion or Linear. These habits keep operations sustainable and help prevent costly surprises; a minimal spend-guardrail sketch follows the list below.

  • Cloud bills: Many teams forget compute can hit five figures per month quickly, so budget runway accordingly.
  • Production is harder than the demo: Expect failures (often at bad hours); have an on-call plan and remediation runbooks.
  • What to do now: If you sell B2B into the enterprise, you’ll need SOC 2 or HIPAA to close big logos. If you’re D2C or still early, ship with basic security and iterate; no one is suing you pre-traction, but a data leak will kill you.
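
On the cloud-bill point, a hard spend cap can be put in code. The snippet below is a sketch using boto3 and the AWS Budgets API (one option alongside Cost Explorer or CloudZero dashboards); the account ID, cap amount, and alert address are placeholders.

```python
# Sketch: a monthly spend cap with an 80% email alert, via the AWS Budgets API
# (boto3). The account ID, amount, and address below are placeholders.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder AWS account ID
    Budget={
        "BudgetName": "monthly-compute-cap",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},  # the five-figure ceiling
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,              # alert at 80% of the cap
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "ops@example.com"}],
        }
    ],
)
```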

Data Governance & Where the model runs

Healthcare buyers increasingly prefer on-premises deployments and zero-data-retention setups. The good news: modern open-source models are now “good enough” for many on-prem use cases. Arm yourself with data governance docs/FAQs to speed procurement.

Caution on indemnities: Model-provider IP indemnities often arrive with many exceptions. Don’t over-rely on them.

Bias, UX, and “who gets served”

Bias isn’t academic; it shows up as failed appointments and dropped calls. Irina’s test: several voice agents hang up on her or fail to recognize her (female, English not her first language). If you’re building “agents,” your QA set must represent real users, and not everything needs to be an agent; sometimes a calendar widget is better.

Regulation is fragmented (EU ahead, US state-by-state). Don’t freeze; build toward the strictest relevant standard when you start selling into that category (like Illinois biometrics), but don’t let compliance stall pre-product-market fit.

Founder Playbook (from the panel’s hard-won advice)

1) Nail the wedge

  • Choose one vertical and one painful workflow.
  • Define “autonomous outcome” and the exact human-in-the-loop boundaries.

2) Ship services-backed outcomes

  • Sell implementation/services to generate revenue and capture repeatable product components (prompts, policies, adapters).

3) Engineer for the last 20%

  • Add guardrails, eval suites, and operational runbooks. Budget real on-call.

4) Be smart on IP

  • Reserve patents for truly novel, durable inventions. Document trade secrets, brand aggressively, and keep meaningful human authorship where ownership matters.

5) Close deals faster with trust artifacts

  • Prepare a security overview, data-flow diagrams, zero-retention posture, and FAQs to bypass legal/procurement bottlenecks.

6) Test for inclusivity

  • Include diverse voices, accents, and edge cases in QA. If your agent can’t book Irina’s appointment, it’s not ready.

Memorable lines & moments

  • “It’s software you manage, not software you use.” – on agents as autonomous coworkers.
  • “The more you use AI and the less you input, the less you own.” – on copyright and human expression.
  • “It’s super easy to get a prototype that looks awesome. You get excited about it, maybe even land a sale, but that’s only 20% of the effort. The last 20%, actually getting it to work in production, is 80% of the real work.” – on getting to production.

Takeaways

  • Investors want autonomous outcomes embedded in a real workflow.
  • Services are a perfectly fine (and often optimal) on-ramp to revenue and product.
  • Your moat is the boring stuff: domain depth, edge-case handling, rails, and GTM artifacts that let enterprises say “yes.”
  • Own what matters (IP), secure what’s sensitive (data), and test with the people you aim to serve.

 

Session 2: Live Vibe-Coding, Iron (AI) Chef Style

This second session turned coding into a live show, a mix of Iron Chef, hackathon, and AI tools demo rolled into one. Three builders, Petros Hong, Shaun Merritt, and Collin Overbay, took on the challenge of building a Craigslist buy-and-sell AI bot in real time using modern vibe-coding platforms like Lovable, V0, Bolt, and Cursor, while Tony Yang and Danny Pantuso from Mucker Capital provided live commentary.

The Challenge: Build an Agent That Buys and Sells for You on Craigslist

Participants were asked to create a bot that could:

  • Search Craigslist, rank listings by price, condition, or pickup preferences
  • Negotiate a purchase or sale automatically
  • Connect with other marketplaces (Facebook Marketplace, eBay, etc.)
  • Explore creative prompts (e.g., “Find me a $40 White Elephant gift”)

The real challenge was seeing how far founders could get with today’s no-code and AI-coding tools in under an hour.

How the Builders Approached It

Petros Hong:
Encouraged beginners to think like product managers. He framed ChatGPT as the senior engineer and Lovable as the junior developer, emphasizing planning and step-by-step prompt iteration before coding. He used Apify to scrape Craigslist and Lovable for front-end generation, highlighting that understanding integration flow and “reading the errors” are key learning steps.
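
Petros’s scrape-then-build flow can be approximated in a few lines with the Apify Python client. The actor ID and its run_input fields below are placeholders rather than the specific actor used on stage, and the ranking step assumes the actor returns numeric prices.

```python
# Sketch of the "scrape Craigslist with Apify" step. The actor ID and run_input
# fields are placeholders; real actors define their own input schemas.
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

# Kick off a (hypothetical) Craigslist scraping actor and wait for it to finish.
run = client.actor("example-user/craigslist-scraper").call(
    run_input={"query": "office chair", "location": "losangeles", "maxItems": 50}
)

# Pull the scraped listings out of the run's default dataset.
listings = list(client.dataset(run["defaultDatasetId"]).iterate_items())

# Rank cheapest-first so a generated front end (e.g., from Lovable) can show
# the best candidates at the top. Assumes the actor returns a numeric "price".
listings.sort(key=lambda item: item.get("price", 0))
print(listings[:5])
```

The ranked listings would then feed the negotiation layer, which is where the LLM prompting comes in.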

Shaun Merritt:
Focused on user experience. Starting from UI design in V0, he built a chat-style interface instead of a standard marketplace. He used “personality prompts” (“You’re a principal designer at Apple – make it 10× better”) to push creative output. He later discussed connecting V0 to GitHub and iterating via Cursor for backend integration.

Collin Overbay:
Demonstrated structured development through page-by-page PRDs and multi-tool parallelism, generating front-ends in multiple platforms (Lovable, Bolt, V0) and consolidating the best elements before refining the backend in Cursor. His live debugging walkthrough illustrated how agents, Docker, and Gemini Deep Research can combine to build working prototypes fast.

Key Lessons from the Session

  • Plan first, prompt second: A well-structured PRD or workflow makes every AI tool more reliable.
  • Use LLMs like collaborators: Treat ChatGPT or Gemini as mentors for architecture and debugging, copy error logs, ask “why,” not just “fix.”
  • Parallelize early, consolidate late: Generate multiple drafts quickly, then merge the best.
  • Stay simple: MVPs should aim for working outcomes, not perfection.
  • Debug across tools: When one model stalls, switch LLMs or use discussion mode before regenerating code.
  • Keep production discipline: Maintain Git checkpoints, review pull requests, and limit huge auto-generated updates.
  • Backend reality check: Cursor remains the go-to for production-ready code; vibe-coding platforms excel at UI and prototyping.

Takeaways

  • Vibe-Coding proved that with today’s AI-building tools, founders, even non-technical ones, can go from idea to prototype in hours. Yet, real success still depends on planning, debugging, and disciplined engineering habits.
  • Don’t spread across too many tools: pick one, master it, and expand later.
  • Read and understand error messages before re-prompting.
  • Use structured documentation (PRDs, rules, style guides) to avoid chaotic “demo code.”
  • Business and legal responsibility still rests with the human, not the AI.

As Petros summed up: “If I can build this on stage, you can too. It’s easier than you think because these AI tools do so much of the heavy lifting for you. But it’s also harder than it looks because it still requires structure, understanding, and discipline. You have to plan, read your errors, and know when to stop prompting and start fixing. That’s where the real skill comes in.”


 

Session 3: Building AI Agents Using Google ADK

The final workshop of AI Building Day took attendees from theory to hands-on practice. Led by Christian Gunther, AI Customer Engineer at Google Cloud, and facilitated by Tony Yang and Lucas Fontaine, this session offered a practical deep dive into Google’s Agent Development Kit (ADK), showing how developers can build, deploy, and orchestrate multi-agent systems with real code.

From Concept to Cloud: Understanding the Agent Stack

Christian began by explaining the three main challenges startups face when building AI agents:

  1. Fragmented frameworks – too many models and tools to integrate effectively.
  2. Hard-to-debug workflows – agents with complex dependencies that break easily.
  3. Lack of governance and monitoring – difficulty deploying and scaling reliably.

Google’s solution is its Agent X Stack, which includes:

  • Agent Development Kit (ADK): a code-first toolkit for building, evaluating, and deploying AI agents.
  • Model Context Protocol (MCP): an open standard for connecting models to real-world data and tools.
  • Vertex AI Agent Engine: a managed platform for scalable, secure deployment.
  • Agent-to-Agent Protocol (A2A): a new standard for inter-agent collaboration.

This ecosystem enables agents to talk to databases, APIs, and even other agents with minimal boilerplate code.
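
To give a flavor of that “minimal boilerplate” claim, here is a small sketch using the google-adk Python package. The agent name, instruction, and tool are placeholders, and the model string assumes a Gemini 2.5 Flash model is available to the project.

```python
# A minimal ADK agent sketch (google-adk Python package). The tool, name, and
# instruction are illustrative placeholders.
from google.adk.agents import Agent


def lookup_attraction(city: str) -> dict:
    """Toy tool: return a canned attraction for a city (stand-in for a real API)."""
    return {"city": city, "attraction": "Griffith Observatory"}


root_agent = Agent(
    name="travel_helper",
    model="gemini-2.5-flash",  # assumes this Gemini model is enabled for the project
    description="Suggests attractions for a destination city.",
    instruction="When the user names a city, call lookup_attraction and summarize the result.",
    tools=[lookup_attraction],
)
```

Dropped into an agent directory, this can be exercised locally through the ADK CLI, including the ADK Web dev UI used in the lab below.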

Hands-On Lab: Building Multi-Agent Systems

Participants followed step-by-step labs on Qwiklabs, setting up their own Google Cloud environments.
The workshop guided users to:

  • Create parent and sub-agents for a travel-planning app using Gemini 2.5 models.
  • Establish session state dictionaries so agents can share context, like user preferences or prior messages.
  • Build tools such as “Save Attractions to State” to demonstrate how agents store and retrieve user data.

Live testing through ADK Web, Google’s built-in dev UI, let attendees visualize agent hierarchies, message flows, and state changes in real time.
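
A rough sketch of the lab’s pattern, a parent agent delegating to a sub-agent plus a tool that writes shared context into session state, looks like the snippet below; the names and the state key approximate the lab rather than reproducing its exact code.

```python
# Sketch of the lab's parent/sub-agent pattern with a state-writing tool.
# Names and the "attractions" state key approximate the lab, not its exact code.
from google.adk.agents import Agent
from google.adk.tools.tool_context import ToolContext


def save_attractions_to_state(attractions: list[str], tool_context: ToolContext) -> dict:
    """Persist the user's chosen attractions so later agents can read them."""
    tool_context.state["attractions"] = attractions
    return {"status": "saved", "count": len(attractions)}


attractions_agent = Agent(
    name="attractions_planner",
    model="gemini-2.5-flash",
    instruction="Suggest attractions and call save_attractions_to_state with the user's picks.",
    tools=[save_attractions_to_state],
)

root_agent = Agent(
    name="travel_planner",
    model="gemini-2.5-flash",
    instruction="Greet the user and delegate attraction planning to your sub-agent.",
    sub_agents=[attractions_agent],
)
```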

Advanced Workflows: Sequential, Loop, and Parallel Agents

Christian then introduced workflow agents, enabling developers to create structured, multi-step AI processes.
Using a “Movie Pitch Generator” demo, attendees built agents that:

  • Sequential Agent: researched a historical figure, drafted a screenplay, and wrote output files in order.
  • Loop Agent: added a critic to iterate drafts until “good enough,” teaching refinement and iteration.
  • Parallel Agent: ran a box office forecast and a casting brainstorm simultaneously, then merged the results.

Each workflow demonstrated modularity and how agents hand control from one to the next: researcher to screenwriter to critic to file writer.
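
In ADK terms, these shapes map onto SequentialAgent, LoopAgent, and ParallelAgent. The sketch below is an approximation of the demo with placeholder names and instructions; max_iterations stands in for the critic’s “good enough” stopping rule.

```python
# Sketch of the three workflow-agent shapes from the demo (names and
# instructions are placeholders; max_iterations approximates the critic loop).
from google.adk.agents import Agent, LoopAgent, ParallelAgent, SequentialAgent

researcher = Agent(name="researcher", model="gemini-2.5-flash",
                   instruction="Research the historical figure and summarize key facts.")
screenwriter = Agent(name="screenwriter", model="gemini-2.5-flash",
                     instruction="Draft or revise a short screenplay pitch from the research.")
critic = Agent(name="critic", model="gemini-2.5-flash",
               instruction="Critique the latest draft and say APPROVED when it is good enough.")

# Loop: alternate drafting and critique for a bounded number of rounds.
refine_loop = LoopAgent(name="refine_loop", sub_agents=[screenwriter, critic], max_iterations=3)

# Parallel: run independent analyses at the same time, then merge downstream.
forecast = Agent(name="box_office_forecast", model="gemini-2.5-flash",
                 instruction="Estimate box office potential for the pitch.")
casting = Agent(name="casting_brainstorm", model="gemini-2.5-flash",
                instruction="Brainstorm a cast for the lead roles.")
market_analysis = ParallelAgent(name="market_analysis", sub_agents=[forecast, casting])

# Sequential: research, then the draft/critique loop, then the parallel analysis.
# ADK gives each agent a single parent, so the pipeline nests the loop and the
# parallel agent rather than reusing their children directly.
pitch_pipeline = SequentialAgent(name="pitch_pipeline",
                                 sub_agents=[researcher, refine_loop, market_analysis])
```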

Key Lessons

  • Think modularly: Break systems into specialized agents with narrow scopes.
  • Use state wisely: Store key data in the session dictionary to maintain context.
  • Automate orchestration: Sequential and loop agents enable structured pipelines.
  • Debug visually: Use the ADK Web UI to trace every message, tool, and response.
  • Scale responsibly: Use Vertex AI Agent Engine or Cloud Run for production-ready deployment.

Takeaway

This workshop showed that Google’s ADK is not just another LLM toolkit; it’s a framework for real-world, multi-agent engineering.

As Christian concluded:

“With ADK, you don’t need to rebuild everything yourself. You can connect, extend, and orchestrate, one agent at a time.”

Thank you to our sponsors Google for Startups, Manatt, Phelps & Phillips, LLP, and Fidelity Private Shares for making it all possible.
