How I Fine-Tuned TinyLlama on My Laptop Using LoRA
A complete walkthrough of fine-tuning a 1.1B parameter model on a Mac using LoRA, merging the adapter, and serving it locally with Ollama.
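The core mechanics the subtitle mentions, a LoRA update and merging the adapter back into the base weights, can be sketched in a few lines of numpy. This is an illustrative toy (real fine-tuning of TinyLlama would use the `peft` library on the model's attention projections; the dimensions and names here are made up), but it shows why a merged adapter adds zero inference cost:

```python
import numpy as np

# Toy sketch of LoRA, not TinyLlama's actual weights.
# d = hidden size, r = adapter rank, alpha = LoRA scaling (all hypothetical).
rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4

W = rng.standard_normal((d, d))         # frozen base weight
A = rng.standard_normal((r, d)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # zero-init, so the adapter starts as a no-op

def forward(x, W, A, B, alpha, r):
    # LoRA: frozen path plus a scaled low-rank correction
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

# "Merging the adapter" folds the correction into the base weight,
# so serving (e.g. via Ollama) needs no extra matmuls:
W_merged = W + (alpha / r) * (B @ A)

x = rng.standard_normal((1, d))
assert np.allclose(forward(x, W, A, B, alpha, r), x @ W_merged.T)
```

The identity holds because `(B @ A).T == A.T @ B.T`, so the adapted forward pass and the merged weight compute the same function.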