Posts tagged "ai"
9 articles
Why Your LLM App Feels Slow (And It's Not the Model)
An LLM generating 50 tokens/second isn't slow — but if your UI makes the user stare at a spinner for the first 2 seconds, it feels slow. Most LLM latency is a UX problem, not an infrastructure problem.
Building HIPAA-Compliant AI Features: What the Tutorials Skip
Integrating AI into a healthcare product isn't just about plugging in an API. HIPAA has specific requirements around PHI, audit logging, and vendor agreements that most AI tutorials completely ignore.
The Hidden Cost of Context Windows: Managing Tokens in Production
128k tokens sounds like infinite space until you're paying $0.40 per conversation and users are hitting limits mid-session. Here's how I actually manage context in long-running AI applications.
TypeScript Patterns I Actually Use in Production AI Apps
AI SDK types are a mess if you let them be. Here are the TypeScript patterns I've settled on for handling streaming responses, tool calls, and structured LLM output without losing type safety.
Async Python Patterns for AI Backends (That I Learned the Hard Way)
FastAPI and async Python are the obvious choice for AI backends — until you hit subtle concurrency bugs, blocked event loops, and streaming responses that silently drop chunks. Here's how I actually structure these systems.
Prompts Are Code: How I Manage Them Like a Senior Engineer
A prompt buried in a string literal is a bug waiting to happen. Here's how I version, test, and deploy prompts with the same rigour I'd apply to any production code.
RAG Is Not Magic: Honest Lessons from Production Retrieval Systems
Every RAG demo looks impressive. Production RAG is a different story. Here's what actually breaks, why naive chunking destroys quality, and how I structure retrieval pipelines that hold up under real load.
Building a Voice AI Agent: LiveKit, Deepgram, and the Latency Problem Nobody Talks About
Voice AI sounds straightforward until you're staring at 800ms of lag between a user's question and the agent's first word. Here's how I actually got it under 400ms end-to-end.
Getting Started with AI and LLMs in Your Web App
Learn how to integrate large language models into your Next.js application using the Vercel AI SDK, with streaming responses and a clean API design.