
Leo

@leonard

479 Following
441 Followers


Leo
@leonard
XP x zkSync — Learnathon 2 inside Network School
72h of crypto, proofs & XP quests.
Not a class. A sovereign mind activation.
Porter Adams from @zksync — live kickoff
Sat 10am SGT / Fri 10pm EST / Fri 7pm PST
Join via Luma: lu.ma/d2uc3czt
0 reply
0 recast
0 reaction

Leo
@leonard
First we built. Then we remembered. Now we call the world in the open. 🌏 Learnathon Archive — live. https://deserted-ladybug-896.notion.site/Learnathon-Archive-1cfe55b86537808788f5fefad261b8c8
0 reply
0 recast
0 reaction

Leo
@leonard
Curious how it all played out? Notebook, repo, personal log — it’s all there.
GitHub: https://github.com/leomanfree/Creative_Fork_FastAI_LearnSprint/blob/main/README.md
Full Learn Sprint: Day 1 → Day 3
Let’s build. And rethink what it means to learn.
0 reply
0 recast
0 reaction

Leo
@leonard
Final outcome:
– Model fine-tuned and tested
– Repo cleaned and logged
– 3-day Learn Sprint complete
– Philosophical reflections on code and cognition
Now I rest. But the loop continues.
1 reply
0 recast
0 reaction

Leo
@leonard
So I logged a new block in my repo. Not technical — conceptual.
From Brilliant Sophists → Human-AI Neuroengineering
The future isn’t AI that becomes human. It’s humans who become hybrid, on purpose.
1 reply
0 recast
0 reaction

Leo
@leonard
That’s when I dropped the code and opened Aristotle, Hegel, Jung, and von Franz.
"LLMs are brilliant sophists. They optimize likelihood, not meaning."
Statistics ≠ cognition
Probability ≠ insight
Fluency ≠ understanding
1 reply
0 recast
0 reaction

Leo
@leonard
This time, I started from my own repo. Refactored the minimum viable engine:
DataBlock → Learner → Train → Predict
Then I fine-tuned it. It worked. Prediction successful.
But again — was that mine?
1 reply
0 recast
0 reaction
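
Not the exact code from the repo, but a minimal sketch of what those four stages can look like in fastai, assuming the Oxford-IIIT Pets images used in the fast.ai course (`vision_learner` is the current name; older releases call it `cnn_learner`):

```python
from fastai.vision.all import *

# Data from the fast.ai course (Oxford-IIIT Pets); the breed name before the
# trailing digits in each filename is used as the label.
path = untar_data(URLs.PETS) / 'images'

# DataBlock: declare inputs, labels, split and transforms.
pets = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
    item_tfms=Resize(224))

dls = pets.dataloaders(path)

# Learner: a pretrained ResNet backbone with a new classification head.
learn = vision_learner(dls, resnet34, metrics=error_rate)

# Train: fit the new head first, then unfreeze and fine-tune the whole network.
learn.fine_tune(3)

# Predict: run one image through the trained model.
print(learn.predict(PILImage.create(get_image_files(path)[0])))
```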

Leo
@leonard
fast.ai says: "train a model to classify pet images."
So I did — minimal code, clean results.
But I kept asking myself: “Is this just copying — or am I learning?”
Today, I hit the edge of that question.
1 reply
0 recast
0 reaction

Leo
@leonard
Day 3. Final lap. I trained, fine-tuned, and tested a model from scratch — again. But this time, I added something else. Philosophy. Neuroengineering. A reflection on what hybrid intelligence really means. 👇
1 reply
0 recast
1 reaction

Leo
@leonard
If you’ve been learning in public, rebuilding from first principles, or just trying to own your tools — you’re not alone. Follow along. Tomorrow: Day 3. The Learn Sprint finale. (9/9)
0 reply
0 recast
0 reaction

Leo
@leonard
I’m not here to master everything. I’m here to practice *thinking like a builder* - even when the AI isn’t there to help. Call it autonomy. Call it sovereignty. Call it monk mode. (8/9)
1 reply
0 recast
0 reaction

Leo
@leonard
(Yesterday, it called a mixed-breed dog a shiba_inu…) My friend laughed: “That's not even close.” But honestly, I was just proud it *ran*. Today, I’m proud it’s accurate — and that I understood *why*. (7/9)
1 reply
0 recast
0 reaction

Leo
@leonard
Today’s result? I trained the model. Fed it a test image. Got a clean prediction:
> basset_hound (99.99%)
This time, it was actually right. (6/9)
1 reply
0 recast
0 reaction
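
Where the label and the confidence come from: `learn.predict` returns the decoded class, its index, and the full probability vector. A sketch, assuming `learn` is a fine-tuned learner like the one sketched above and `test_dog.jpg` is a placeholder filename:

```python
# Assumes `learn` is a fine-tuned vision learner (as in the earlier sketch)
# and 'test_dog.jpg' stands in for the actual test image.
pred_class, pred_idx, probs = learn.predict(PILImage.create('test_dog.jpg'))

print(pred_class)                       # e.g. 'basset_hound'
print(f'{probs[pred_idx].item():.2%}')  # probability assigned to that class, e.g. 99.99%
```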

Leo
@leonard
But fixing that one stupid bug felt better than completing the entire model yesterday. I learned something deep: AI makes you fast — but debugging without it makes you *present*. Every character matters. Every layer sticks. (5/9)
1 reply
0 recast
0 reaction

Leo
@leonard
I jumped into Colab and typed everything manually. No help. No completions. Only my notes. And of course, I hit an error: `NameError: name 'PAT' is not defined`. It took me 20 minutes to spot the issue. I had written ‘PAT’ instead of ‘pat’. (4/9)
1 reply
0 recast
0 reaction
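
The bug in a nutshell: Python identifiers are case-sensitive, so a pattern defined as `pat` simply does not exist under the name `PAT`. A self-contained illustration with the standard `re` module; the filename and regex are only examples in the fast.ai pets style:

```python
import re

# The labelling regex: everything before the trailing "_<digits>.jpg" is the breed.
pat = re.compile(r'(.+)_\d+.jpg$')

fname = 'basset_hound_112.jpg'        # example filename in the pets-dataset style
print(pat.match(fname).group(1))      # -> 'basset_hound'

# Python names are case-sensitive, so this line raises the error from the cast:
# print(PAT.match(fname).group(1))    # NameError: name 'PAT' is not defined
```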

Leo
@leonard
Pen on paper. Line by line. I rewrote the model setup like a monk in a medieval scriptorium. No copy-paste. Just code reconstruction by memory and meaning. It wasn’t clean. It wasn’t perfect. But it was mine. (3/9)
1 reply
0 recast
0 reaction

Leo
@leonard
First step: I reviewed my own repo from Day 1 — then I opened muellerzr’s notebook again. Together with my AI sparring partner (ChatGPT), I distilled the **essential code**: what I now call the *Minimum Viable Engine* of the model. `DataBlock → Learner → Train → Predict`. Then I closed everything. (2/9)
1 reply
0 recast
0 reaction

Leo
@leonard
Day 2 of my Learn Sprint with fast.ai. Yesterday I trained my first model. Today I tried to **rebuild it from scratch**. No AI assistant. No video tutorial. Just me, my notes, and some bugs. This is the story. (1/9)
1 reply
0 recast
0 reaction

Leo
@leonard
Oh, and about that prediction? The model said: “Shiba Inu — 91.62% sure.” Reality said: “Nope.” Just a lovely mixed-breed dog on some steps. The model was confident. But it was confidently wrong. Just like us, sometimes. That’s why real learning matters — beyond the output. (10/10)
0 reply
0 recast
0 reaction
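
One way to see "confident but wrong" in the numbers is to look past the argmax at the whole probability vector. A sketch, assuming `probs` comes from `learn.predict` as in the snippet further up:

```python
# Assumes `learn` and `probs` come from a prediction like the ones above.
top_p, top_i = probs.topk(3)                        # three highest-probability classes
for p, i in zip(top_p, top_i):
    print(f'{learn.dls.vocab[int(i)]}: {p.item():.2%}')
# A 91% top score still leaves the remaining probability spread across other
# breeds; high confidence is not the same as being right.
```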