Karpathy Ingests Karpathy: field report on Space Lake and foundation inversion

Andrej Karpathy published his LLM wiki, a structured knowledge base covering everything from tokenization to RLHF to inference optimization. It is, characteristically, clear, opinionated, and well-organized.

We pointed StellarView’s Space Lake at it.

The Experiment

Foundation Inversion says: classify and index knowledge before you build anything on top of it. The knowledge layer forms first. Applications emerge from what the knowledge reveals.

Karpathy’s wiki is already well-structured. What does StellarView’s classification add?

SCREENSHOT: Space Lake ingestion pipeline showing bronze/silver/gold tiering of Karpathy's wiki content

Bronze Layer: Raw Ingestion

Space Lake ingested 47 markdown files from the wiki. Each file entered the bronze tier as-is: raw, unprocessed, timestamped.

bronze/
 tokenization.md (12KB)
 attention-mechanisms.md (18KB)
 rlhf.md (15KB)
 inference-optimization.md (22KB)
 … 47 files
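
Space Lake’s ingestion code isn’t reproduced here, but the bronze pass is simple enough to sketch. A minimal version in Python, assuming a local clone of the wiki; the `ingest_bronze` helper and `bronze/` layout are illustrative, not StellarView APIs:

```python
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

def ingest_bronze(source_dir: Path, bronze_dir: Path) -> list[dict]:
    """Copy raw markdown files into the bronze tier, untouched,
    recording a timestamp and content hash for each one."""
    bronze_dir.mkdir(parents=True, exist_ok=True)
    manifest = []
    for src in sorted(source_dir.glob("*.md")):
        raw = src.read_bytes()
        shutil.copy2(src, bronze_dir / src.name)  # no transformation at this tier
        manifest.append({
            "file": src.name,
            "bytes": len(raw),
            "sha256": hashlib.sha256(raw).hexdigest(),
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        })
    (bronze_dir / "_manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest

# Example: ingest a local clone of the wiki
# manifest = ingest_bronze(Path("llm-wiki"), Path("bronze"))
```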

Silver Layer: Classification

The ontology engine classified each document against StellarView’s seven domain ontologies. Karpathy’s content mapped primarily to:

  • software-engineering: code examples, implementation patterns
  • analytics-data: model evaluation, benchmark analysis
  • migration-infrastructure: deployment, inference optimization

Interestingly, some content also triggered compliance-governance classifications: the sections on AI safety, alignment, and responsible deployment patterns.
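
The ontology engine itself isn’t shown in this post, so the sketch below stands in with a simple keyword-scoring classifier. The domain names match the ones above; the vocabularies and the `classify` helper are assumptions for illustration, not the real ontologies:

```python
import re
from collections import Counter

# Hypothetical keyword vocabularies standing in for StellarView's domain ontologies.
ONTOLOGIES = {
    "software-engineering": {"code", "function", "implementation", "refactor", "api"},
    "analytics-data": {"benchmark", "evaluation", "metric", "dataset", "accuracy"},
    "migration-infrastructure": {"deployment", "inference", "latency", "gpu", "serving"},
    "compliance-governance": {"safety", "alignment", "responsible", "policy", "audit"},
}

def classify(text: str, top_k: int = 3) -> list[tuple[str, int]]:
    """Score a document against each domain vocabulary and return the top matches."""
    tokens = Counter(re.findall(r"[a-z]+", text.lower()))
    scores = {
        domain: sum(tokens[word] for word in vocab)
        for domain, vocab in ONTOLOGIES.items()
    }
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(domain, score) for domain, score in ranked[:top_k] if score > 0]

# classify(open("bronze/rlhf.md").read())
# -> e.g. [("compliance-governance", 9), ("analytics-data", 4)]
```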

Gold Layer: Vector Indexing

Each classified document was chunked, embedded, and indexed. The RAG Companion could now answer questions grounded in Karpathy’s writing.
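
A minimal sketch of the gold pass (chunk, embed, index), assuming an off-the-shelf embedding model (sentence-transformers here) and a flat in-memory index; Space Lake’s actual embedding model and vector store aren’t specified in this post:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in embedding model

def chunk(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split a document into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

model = SentenceTransformer("all-MiniLM-L6-v2")

def build_index(docs: dict[str, str]):
    """Chunk, embed, and index classified documents; returns vectors plus metadata."""
    chunks, meta = [], []
    for name, text in docs.items():
        for piece in chunk(text):
            chunks.append(piece)
            meta.append({"source": name, "text": piece})
    vectors = model.encode(chunks, normalize_embeddings=True)  # unit-length vectors
    return np.asarray(vectors), meta

def search(query: str, vectors, meta, k: int = 3):
    """Cosine similarity over normalized vectors reduces to a dot product."""
    q = model.encode([query], normalize_embeddings=True)[0]
    best = np.argsort(vectors @ q)[::-1][:k]
    return [(meta[i]["source"], float(vectors[i] @ q), meta[i]["text"][:120]) for i in best]
```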

The Payoff

With the knowledge layer in place, we asked the RAG Companion:

“What does Karpathy recommend for production inference optimization?”

The answer came back grounded in three source documents, with citations and the specific techniques he recommends. Not a generic LLM response, but a response built from Karpathy’s actual words.
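
Mechanically, “grounded with citations” means the retrieved chunks are labeled and placed in front of the question, so the answer can only draw on cited text. A sketch reusing the hypothetical `search` helper above; the prompt template is an assumption, not the RAG Companion’s:

```python
def grounded_prompt(question: str, vectors, meta, k: int = 3) -> str:
    """Assemble a citation-labeled context block ahead of the question."""
    hits = search(question, vectors, meta, k)
    context = "\n\n".join(
        f"[{i + 1}] ({source}, score {score:.2f})\n{snippet}"
        for i, (source, score, snippet) in enumerate(hits)
    )
    return (
        "Answer using only the numbered sources below and cite them as [n].\n\n"
        f"{context}\n\nQuestion: {question}"
    )

# The assembled prompt is what the model behind the RAG Companion would receive;
# its answer can then point back to [1], [2], [3] by source file.
```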

SCREENSHOT: RAG Companion showing a query result with cited sources from the ingested wiki, confidence scores

Then we asked:

“How do Karpathy’s tokenization recommendations compare with the approach used in our SolarScore project?”

Cross-galaxy intelligence. The RAG Companion drew from two knowledge bases, Karpathy’s wiki and SolarScore’s codebase, and compared the tokenization patterns in SolarScore’s text-processing pipeline against Karpathy’s recommendations.
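
Mechanically, cross-galaxy retrieval is the same search run over more than one index, with the results merged by score. A sketch under the same assumptions as above; the two-index layout and names are illustrative:

```python
def cross_search(question: str, indexes: dict[str, tuple], k: int = 3):
    """Query several (vectors, meta) indexes and merge hits by similarity score."""
    merged = []
    for galaxy, (vectors, meta) in indexes.items():
        for source, score, snippet in search(question, vectors, meta, k):
            merged.append((galaxy, source, score, snippet))
    return sorted(merged, key=lambda hit: hit[2], reverse=True)[:k]

# cross_search(
#     "How do tokenization recommendations compare?",
#     {"karpathy-wiki": (wiki_vecs, wiki_meta), "solarscore": (code_vecs, code_meta)},
# )
```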

This is Foundation Inversion in practice. The knowledge existed. We classified it. Now it informs everything.

Why This Matters

Every organization has its Karpathy. The person who wrote the architecture docs. The team lead who documented the migration patterns. The architect who left behind 200 Confluence pages that nobody reads.

Space Lake ingests all of it. The ontology classifies it. The RAG Companion makes it queryable. The knowledge compounds across galaxies.

You don’t need Karpathy’s wiki specifically. You need your own organization’s knowledge, classified and searchable, before you write the next line of code.

That is Foundation Inversion. That is the practice.

Try it: Clone any documentation repository. Point Space Lake at it. Ask the RAG Companion questions you’d normally ask a senior engineer who left the company.
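
Stitching the hypothetical helpers from the sketches above into a single pass over a cloned repo might look like this; the paths and the question are placeholders:

```python
from pathlib import Path

# 1. Bronze: raw ingestion of a cloned docs repo
ingest_bronze(Path("some-docs-repo"), Path("bronze"))

# 2. Silver: classify each raw file against the domain vocabularies
docs = {p.name: p.read_text() for p in Path("bronze").glob("*.md")}
labels = {name: classify(text) for name, text in docs.items()}

# 3. Gold: chunk, embed, index
vectors, meta = build_index(docs)

# 4. Ask the question you'd normally ask the senior engineer who left
print(grounded_prompt("What does our deployment runbook say about rollbacks?", vectors, meta))
```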

SCREENSHOT: Space Lake showing the full ingestion pipeline, bronze raw files, silver classified, gold indexed with vector embeddings