#ai

Posts tagged "ai"

9 articles

Why Your LLM App Feels Slow (And It's Not the Model)
AI · React · Performance

An LLM generating 50 tokens/second isn't slow — but if your UI makes the user stare at a spinner for the first 2 seconds, it feels slow. Most LLM latency is a UX problem, not an infrastructure problem.

April 5, 2026 · 6 min read
Building HIPAA-Compliant AI Features: What the Tutorials Skip
AI · HIPAA · Security

Integrating AI into a healthcare product isn't just about plugging in an API. HIPAA has specific requirements around PHI, audit logging, and vendor agreements that most AI tutorials completely ignore.

March 28, 2026 · 6 min read
The Hidden Cost of Context Windows: Managing Tokens in Production
AI · LLMs · Python

128k tokens sounds like infinite space until you're paying $0.40 per conversation and users are hitting limits mid-session. Here's how I actually manage context in long-running AI applications.

March 18, 2026 · 5 min read
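The per-conversation price in that excerpt is simple arithmetic: each turn re-sends the full history, so input tokens grow roughly quadratically with conversation length. A minimal sketch (the token counts and per-million-token rates are illustrative assumptions, not figures from the article):

```python
def conversation_cost(turns: int, tokens_per_turn: int,
                      usd_per_1m_input: float,
                      usd_per_1m_output: float) -> float:
    """Estimate cost of a chat where every turn re-sends the full history."""
    input_tokens = 0
    output_tokens = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn        # user message joins the context
        input_tokens += history           # whole history is sent as input
        output_tokens += tokens_per_turn  # assume the reply is similar length
        history += tokens_per_turn        # the reply joins the context too
    return (input_tokens * usd_per_1m_input +
            output_tokens * usd_per_1m_output) / 1_000_000
```

With hypothetical rates of $2.50/1M input and $10/1M output tokens, doubling the number of turns more than doubles the cost, which is why long sessions blow past per-conversation budgets.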
TypeScript Patterns I Actually Use in Production AI Apps
TypeScript · AI · React

AI SDK types are a mess if you let them be. Here are the TypeScript patterns I've settled on for handling streaming responses, tool calls, and structured LLM output without losing type safety.

March 8, 2026 · 6 min read
Async Python Patterns for AI Backends (That I Learned the Hard Way)
Python · FastAPI · AI

FastAPI and async Python are the obvious choice for AI backends — until you hit subtle concurrency bugs, blocked event loops, and streaming responses that silently drop chunks. Here's how I actually structure these systems.

February 24, 2026 · 5 min read
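The blocked-event-loop bug that excerpt alludes to can be shown with plain asyncio: a synchronous call inside a coroutine stalls every other request until it returns. A minimal sketch, where `blocking_work` is a hypothetical stand-in for a sync SDK or database call:

```python
import asyncio
import time

def blocking_work() -> str:
    time.sleep(0.1)          # stand-in for a synchronous SDK call
    return "done"

async def handler_bad() -> str:
    # Calling the sync function directly freezes the event loop for 0.1s,
    # so no other coroutine can make progress in the meantime.
    return blocking_work()

async def handler_good() -> str:
    # asyncio.to_thread (Python 3.9+) runs the sync call in a worker thread,
    # letting the loop keep serving other requests concurrently.
    return await asyncio.to_thread(blocking_work)
```

The fix is mechanical but easy to miss, because both handlers return the same result; only under concurrent load does the difference show up as latency.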
Prompts Are Code: How I Manage Them Like a Senior Engineer
AI · LLMs · Prompt Engineering

A prompt buried in a string literal is a bug waiting to happen. Here's how I version, test, and deploy prompts with the same rigour I'd apply to any production code.

February 10, 2026 · 5 min read
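One shape the "prompts as code" idea can take is a versioned, testable prompt object instead of an inline string literal. A minimal sketch under assumed conventions (the `Prompt` class and `SUMMARIZE_V2` name are hypothetical, not from the article):

```python
from dataclasses import dataclass
from string import Template

@dataclass(frozen=True)
class Prompt:
    """A prompt with an explicit version, so changes show up in diffs."""
    name: str
    version: str
    template: Template

    def render(self, **variables: str) -> str:
        # substitute() raises KeyError on a missing variable, instead of
        # silently shipping a half-filled prompt to the model.
        return self.template.substitute(**variables)

SUMMARIZE_V2 = Prompt(
    name="summarize",
    version="2.0.0",
    template=Template("Summarize the following text in $style style:\n$text"),
)
```

Because rendering fails loudly on a missing variable, a unit test can pin both the template's variables and its versioned wording.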
RAG is Not Magic: Honest Lessons from Production Retrieval Systems
AI · LLMs · RAG

Every RAG demo looks impressive. Production RAG is a different story. Here's what actually breaks, why naive chunking destroys quality, and how I structure retrieval pipelines that hold up under real load.

January 28, 2026 · 6 min read
Building a Voice AI Agent: LiveKit, Deepgram, and the Latency Problem Nobody Talks About
Voice AI · LiveKit · AI

Voice AI sounds straightforward until you're staring at 800ms of lag between a user's question and the agent's first word. Here's how I actually got it under 400ms end-to-end.

January 12, 2026 · 5 min read
Getting Started with AI and LLMs in Your Web App
AI · LLMs · Next.js

Learn how to integrate large language models into your Next.js application using the Vercel AI SDK, with streaming responses and a clean API design.

December 15, 2025 · 3 min read