AI Practical Applications - Phases 4 & 5

Note: Learning plans age quickly in the fast-moving AI landscape.

For a personalized, up-to-date learning plan tailored to what you already know, check out the AI Learning Plan Generator Claude Project.

Learning Plan: AI/LLM Practical Applications for Solution Architecture

Goal: Develop detailed conceptual understanding of AI solution architectures to evaluate technical and economic viability, maintain technical credibility as a sanity-check resource, and position strategically in an AI-transformed landscape.

Target Depth: "AI-literate decision-maker and architect" - sufficient understanding to evaluate whether proposed solutions make sense before they're built, identify architectural limitations vs. implementation problems, and translate between technical possibilities and business requirements.

Time Commitment: 1 hour/day, sustained learning
Background: 15 years in education data/tech consulting, familiar with Karpathy's LLM content, regular Claude/ChatGPT user, data engineering background

Note on Structure: Phase 1 is designed to be completable in ~15 days before your strategy meeting. It front-loads actionable architectural knowledge. Phases 2-5 build deeper foundations and expand into specialized topics.


Phase 4: Advanced RAG & Production Patterns (Week 6)

Purpose: Go deeper on RAG implementation details, chunking strategies, and hybrid approaches now that you have strong foundations.


Week 6: Production RAG Deep Dive

Primary Resources:

"RAG from Scratch" Series (YouTube playlist, by Lance Martin of LangChain)

  • Search for Lance Martin's "RAG from Scratch" playlist or Pinecone's RAG tutorials
  • Watch selected videos on: advanced chunking, reranking, hybrid search
  • ~2-3 hours total at 1.5x speed

LangChain RAG Documentation

Supplementary:

"Chunking Strategies for RAG" Articles (find 2-3 recent posts)

  • Search for: "RAG chunking strategies 2024 2025"
  • Compare: fixed-size, sentence-based, semantic, and agentic chunking (a minimal sketch of the first two follows below)
  • ~45 minutes total
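
To make the comparison concrete, here is a minimal sketch of the two simplest strategies in plain Python. Semantic and agentic chunking need an embedding model or an LLM, so they are omitted; all function names and size defaults below are illustrative, not from any particular library (production code would typically use a library splitter such as LangChain's RecursiveCharacterTextSplitter).

```python
# A minimal sketch of the two simplest chunking strategies, in plain Python.
# Names and size defaults are illustrative only.
import re

def fixed_size_chunks(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Fixed-size chunking with overlap: simple and fast, but can split mid-sentence."""
    step = size - overlap
    return [text[start:start + size] for start in range(0, len(text), step)]

def sentence_chunks(text: str, max_chars: int = 500) -> list[str]:
    """Sentence-based chunking: pack whole sentences up to a character budget."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current = [], ""
    for sent in sentences:
        if current and len(current) + len(sent) + 1 > max_chars:
            chunks.append(current)
            current = sent
        else:
            current = f"{current} {sent}".strip()
    if current:
        chunks.append(current)
    return chunks
```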

Why this matters: This week moves you beyond "just use RAG" to the considerations that determine whether a RAG system works in production: (1) chunking strategy directly affects retrieval quality, (2) reranking improves precision, (3) hybrid search combines semantic and keyword matching, (4) evaluation frameworks are essential, and (5) iterative improvement requires measurement.

Key concepts:

  • Advanced chunking: semantic boundaries, overlap strategies, metadata preservation
  • Reranking: two-stage retrieval (fast recall, then precision reranker)
  • Hybrid search: BM25 + vector search combined (see the two-stage retrieval sketch after this list)
  • Query transformation: rewriting, decomposition, hypothetical document embeddings
  • Evaluation metrics: retrieval precision/recall, answer quality, latency
  • A/B testing: comparing chunking strategies, embedding models
  • Cost-quality tradeoffs: more sophisticated = more expensive
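
A minimal sketch of the two-stage pattern, assuming precomputed embeddings. The keyword scorer is a crude stand-in for BM25 and the reranker is a stub; every name here is illustrative. A real pipeline would use an actual BM25 implementation (e.g., rank_bm25 or Elasticsearch) and a cross-encoder reranker.

```python
# A minimal sketch of two-stage hybrid retrieval: cheap, broad recall first,
# then a precision reranking pass over the survivors.
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document (BM25 stand-in)."""
    terms = set(query.lower().split())
    return sum(t in doc.lower() for t in terms) / len(terms) if terms else 0.0

def hybrid_retrieve(query, query_vec, docs, doc_vecs, alpha=0.5, k=20):
    """Stage 1 (recall): blend keyword and vector scores, keep the top k."""
    scored = [
        (alpha * keyword_score(query, d) + (1 - alpha) * cosine(query_vec, v), d)
        for d, v in zip(docs, doc_vecs)
    ]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

def rerank(query: str, candidates: list[str], top_n: int = 5) -> list[str]:
    """Stage 2 (precision): a cross-encoder would score each (query, candidate)
    pair here; this stub just truncates to show where it slots in."""
    return candidates[:top_n]
```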

Production readiness checklist (2-3 hours):

Create evaluation framework for RAG system:

Scenario: Education content Q&A system

Your framework:

  1. Retrieval metrics: how do you measure whether the right documents are retrieved? (a recall@k sketch follows this list)
  2. Answer quality metrics: faithfulness, relevance, completeness
  3. Test dataset: how will you create gold-standard Q&A pairs?
  4. Baseline: what's acceptable performance?
  5. Iteration plan: what to try if performance is poor?
  6. Monitoring: what to track in production?
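
For item 1, here is a minimal sketch of the retrieval-metrics half of such a harness. `retrieve` is assumed to be your system's retrieval function (returning ranked doc IDs), and each gold record is assumed to be a {"question", "relevant_ids"} dict; neither is a standard API. Dedicated frameworks (e.g., Ragas) cover the answer-quality metrics as well.

```python
# A minimal sketch of a retrieval-metrics harness over a gold-standard test set.

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the relevant docs that appear in the top-k results."""
    return sum(1 for doc_id in retrieved[:k] if doc_id in relevant) / len(relevant)

def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k results that are actually relevant."""
    return sum(1 for doc_id in retrieved[:k] if doc_id in relevant) / k

def evaluate(retrieve, gold: list[dict], k: int = 5) -> dict:
    """Average recall@k and precision@k over a gold-standard test set."""
    recalls, precisions = [], []
    for case in gold:
        retrieved = retrieve(case["question"])
        relevant = set(case["relevant_ids"])
        recalls.append(recall_at_k(retrieved, relevant, k))
        precisions.append(precision_at_k(retrieved, relevant, k))
    n = len(gold)
    return {"recall@k": sum(recalls) / n, "precision@k": sum(precisions) / n}
```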

Daily breakdown:

  • Days 1-2: "RAG from Scratch" videos (chunking, reranking)
  • Days 3-4: "RAG from Scratch" videos (hybrid search, evaluation)
  • Day 5: LangChain RAG patterns, chunking strategy articles
  • Days 6-7: Production readiness checklist exercise, review

Phase 5: Critical Evaluation & Decision Frameworks (Week 7)

Purpose: Synthesize everything into practical decision-making frameworks.


Week 7: Solution Architecture Decision Trees

Primary Resource:

Create Your Own Framework Document (this is the main work)

  • Synthesize learnings from Phases 1-4
  • Build decision trees for common scenarios

Supplementary Reading:

"When to Fine-Tune vs. RAG vs. Long-Context" (search for recent articles)

  • Find 2-3 perspectives
  • ~1 hour total reading

Case Studies: Real RAG/Fine-Tuning Implementations

  • Search for: "RAG case study" + industry (education, healthcare, finance)
  • Read 3-5 case studies
  • ~2 hours
  • Note: what worked, what failed, lessons learned

Why this matters: The ultimate goal is rapid, accurate evaluation of proposals. Decision frameworks help you: (1) quickly categorize proposals, (2) ask the right questions, (3) spot red flags, and (4) suggest alternatives when appropriate.

Framework creation exercise (full week, ~7 hours total):

1. Create "AI Solution Selector" Decision Tree:

  • Input: problem description
  • Outputs: RAG, fine-tuning, long-context, traditional approach, or hybrid
  • Decision nodes: data volume, update frequency, task type, accuracy requirements, budget (a toy encoding of these nodes is sketched below)
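
As a sketch of what "encoding the decision tree" can mean in practice, here is a toy version in Python. The node names, thresholds, and the 200k-token context-window figure are placeholders for your own criteria, not recommendations:

```python
# A toy encoding of the "AI Solution Selector" decision tree.
# Thresholds and categories are placeholders for your own criteria.

def select_approach(task_is_llm_shaped: bool, needs_new_behavior: bool,
                    corpus_tokens: int, updates_per_month: int,
                    context_window: int = 200_000) -> str:
    if not task_is_llm_shaped:
        # Deterministic rules or classic ML may be cheaper and more reliable.
        return "traditional approach"
    if needs_new_behavior:
        # Changing style, format, or skills is a fine-tuning problem,
        # not a knowledge-injection problem.
        return "fine-tuning (possibly plus RAG for knowledge)"
    if corpus_tokens <= context_window and updates_per_month < 1:
        # Small, stable corpus: put it in the prompt (with prompt caching).
        return "long-context prompting"
    # Large or frequently updated knowledge base: retrieve at query time.
    return "RAG"
```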

2. Create "RAG Viability Checklist":

  • Data characteristics: volume, update frequency, structure
  • Performance requirements: latency, accuracy, cost per query
  • Engineering requirements: team skills, infrastructure, timeline
  • Red flags: unrealistic expectations, missing validation plan
  • Go/no-go recommendation criteria

3. Create "Offshore Development Feasibility Matrix":

  • Tasks ranked by: complexity, required expertise, communication overhead
  • Senior-level required: novel architectures, production ML pipelines
  • Mid-level sufficient: standard RAG implementation, API integration
  • Offshore viable: data processing, testing, monitoring setup
  • Not recommended for offshore: unclear requirements, rapid iteration needed

4. Create "AI Economics Quick Reference":

  • Token cost calculators for common scenarios (a minimal calculator sketch follows this list)
  • Break-even analysis templates
  • Cost comparison: fine-tuning vs. RAG vs. long-context
  • Scaling curves: cost at 10x, 100x, 1000x usage
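
A minimal sketch of such a calculator. The per-million-token prices are placeholders, since pricing changes frequently; check the providers' live pricing pages before relying on any numbers like these.

```python
# A minimal token-economics sketch. Prices below are placeholders --
# verify against current provider pricing before use.

PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}  # example $/1M tokens

def cost_per_query(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * PRICE_PER_MTOK["input"]
            + output_tokens * PRICE_PER_MTOK["output"]) / 1_000_000

def monthly_cost(queries_per_day: int, input_tokens: int, output_tokens: int) -> float:
    return cost_per_query(input_tokens, output_tokens) * queries_per_day * 30

# Example scaling curve: a RAG query with ~4k context tokens in and ~500
# tokens out, at 1,000 queries/day, then the same workload at 10x and 100x.
for scale in (1, 10, 100):
    print(f"{scale:>3}x: ${monthly_cost(1_000 * scale, 4_000, 500):,.2f}/month")
```

At these placeholder prices the base scenario works out to roughly $585/month, and cost scales linearly with query volume unless caching or batching changes the unit economics.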

5. Create "BS Detection Decision Tree":

  • Claim type → questions to ask → red flags to watch for
  • Technical claims: how to validate
  • Economic claims: what to calculate
  • Timeline claims: what's realistic for team size/skill

Daily breakdown:

  • Days 1-2: Read case studies, identify patterns
  • Day 3: Create AI Solution Selector decision tree
  • Day 4: Create RAG Viability Checklist and Offshore Feasibility Matrix
  • Day 5: Create AI Economics Quick Reference
  • Day 6: Create BS Detection Decision Tree
  • Day 7: Integration, review, refinement of all frameworks

Reference Materials (Keep Accessible)

Essential Documentation

| Resource | Purpose | URL |
| --- | --- | --- |
| Anthropic API Docs | Tool use, caching, models | https://docs.anthropic.com |
| OpenAI Platform Docs | Embeddings, fine-tuning | https://platform.openai.com/docs |
| MCP Specification | Protocol details | https://modelcontextprotocol.io |
| Pinecone RAG Guide | RAG best practices | https://www.pinecone.io/learn/ |

Video Resources

Cost Calculators & Tools

Your Created Materials

Keep these in an accessible reference folder:

  • RAG Cost Analysis Exercise (Phase 1, Days 4-7)
  • Token Economics Spreadsheet (Phase 1, Days 8-10)
  • BS Detection Checklist (Phase 1, Days 14-15)
  • MCP Implementation Assessment (Phase 3, Days 1-3)
  • Production RAG Checklist (Phase 4, Week 6)
  • All Phase 5 Decision Frameworks

Pacing Notes & Adjustments

If you're moving faster:

  • Deep dive into Karpathy's full "Neural Networks: Zero to Hero" course
  • Implement an actual RAG system (LangChain + Chroma + OpenAI embeddings)
  • Take fast.ai's full Practical Deep Learning course
  • Build an actual MCP server for a real use case

If you're moving slower:

  • Phase 1 is the priority—extend it to 3 weeks if needed
  • Phase 2 (foundations) can be compressed or skipped if time-pressured
  • Phases 4-5 can be done "on-demand" when you encounter those specific needs
  • Focus on exercises over reading—hands-on builds intuition faster

The key metric: Can you evaluate an AI solution proposal and write a 1-page technical assessment covering: viability, cost structure, failure modes, alternative approaches, and team requirements? That's the goal.


Cost Summary

| Resource | Cost |
| --- | --- |
| All video courses (YouTube, fast.ai, Coursera auditing) | Free |
| Documentation (Anthropic, OpenAI, Microsoft, etc.) | Free |
| API experimentation (OpenAI, Anthropic playgrounds) | ~$5-10 (optional) |
| Optional: Coursera verified certificates | ~$49 each |
| Optional: Hands-on RAG implementation | ~$20 (API credits) |

Minimum cost: $0 (all core resources are free; API experimentation is optional)


Success Indicators by Phase

After Phase 1 (Pre-meeting):

  • You can explain RAG to a non-technical executive and identify when it's appropriate
  • You can estimate token costs for a proposed AI solution and spot economic red flags
  • You can distinguish between genuine architectural complexity and unnecessary "agentic" framing
  • You have a checklist of questions to ask about any AI proposal

After Phase 2 (Foundations):

  • You understand why fine-tuning differs from RAG at a mechanical level
  • You can explain when more training data helps vs. when it doesn't
  • You understand model behavior (sampling, temperature) well enough to configure systems appropriately

After Phase 3 (MCP & Claude Tooling):

  • You can evaluate MCP server proposals and estimate implementation effort
  • You understand when to use system prompts/skills vs. RAG for knowledge injection
  • You know what's possible with computer use and what requires custom infrastructure

After Phase 4 (Production RAG):

  • You can design evaluation frameworks for RAG systems
  • You understand production considerations beyond MVP (monitoring, iteration, cost optimization)
  • You can recommend specific architectural patterns for RAG use cases

After Phase 5 (Decision Frameworks):

  • You have reusable frameworks for rapid evaluation of AI proposals
  • You can generate technical assessments of proposals in <30 minutes
  • You can confidently recommend offshore-suitable vs. senior-required work
  • You maintain technical credibility while translating between technical and business stakeholders

Meta Notes on Learning Approach

Why this structure:

  1. Front-loaded actionability: Phase 1 gets you to "credible evaluator" in 15 days, even though it's pedagogically backwards
  2. Foundations when they're most useful: After seeing practical applications, foundations make more sense
  3. Exercise-heavy: Each phase includes hands-on work because concepts without application don't stick
  4. Reference-optimized: Materials chosen for ongoing utility, not just one-time reading
  5. Economic focus: Unusual for learning plans, but critical for your role as solution architect

Learning philosophy: You're not trying to become an ML engineer—you're building "informed buyer" expertise. The goal is knowing enough to ask the right questions, spot impossible claims, and translate between technical possibilities and business requirements. This requires deeper understanding than typical "intro to AI" content, but different depth than an implementer needs.