I Drew a Classic Mac on My Kindle Scribe, and It Clarified My Design Process More Than Any Diagram Ever Has

I just drew a classic Mac on my Kindle Scribe using a 1-bit pixel art web app.

Not a perfect one. Not a clean one. But a recognizable little chunk of retro computer: a boxy face, a grin, a presence.

And that’s the point.

Because the drawing isn’t the outcome. It’s the proof-object. It’s the moment a thing stops being a “prototype” and starts behaving like a tool—when your attention leaves the interface and lands on making.

That’s when I know I’ve built something real.

The old lie: good process looks clean on paper

I used to think “good design process” was supposed to look like a timeline. A crisp sequence you could defend in a meeting.

Research → Requirements → Wireframes → Visuals → Build → Done.

I’m not saying that’s fake. I’m saying it’s incomplete—especially when you’re building small, personal tools where the real requirement is trust.

Because for tools like this—an e-ink sketchpad, true 1-bit, dithered textures—the truth doesn’t show up in the diagram.

It shows up in your hand.
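To make "true 1-bit, dithered textures" concrete: here's a minimal sketch of a 4×4 Bayer ordered dither, one standard way to fake gray using only pure black and pure white pixels. This is illustrative, not the app's actual code; the names are mine.

```javascript
// Classic 4x4 Bayer matrix: each cell is a threshold rank from 0 to 15.
const BAYER4 = [
  [ 0,  8,  2, 10],
  [12,  4, 14,  6],
  [ 3, 11,  1,  9],
  [15,  7, 13,  5],
];

// Map a gray value (0..255) at pixel (x, y) to true black (0) or white (1).
// No in-between: the texture comes from the pattern, not from gray pixels.
function ditherPixel(x, y, gray) {
  const threshold = (BAYER4[y % 4][x % 4] + 0.5) / 16; // 0..1
  return gray / 255 > threshold ? 1 : 0;
}

ditherPixel(0, 0, 255); // → 1 (white stays white)
ditherPixel(0, 0, 0);   // → 0 (black stays black)
ditherPixel(0, 1, 128); // → 0 (mid-gray becomes a checker-ish texture)
```

The point of ordered dithering here is that it's deterministic: the same gray always produces the same pattern, which matters on e-ink where every refresh is visible.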

The mirror shows itself where the pen hits the page

Here’s the thing I keep relearning: when I say “the mirror,” I don’t mean “AI is wise.”

I mean: it reflects my intent back at me in a form I can examine.

I’ll say, “It’s jittery. It doesn’t land under the pen.”
And the mirror replies with structure: coordinate-space mismatch, CSS pixels vs. canvas pixels, smoothing, pointer capture, transforms.

Not solutions as magic. Shape as diagnosis.

It takes a vague feeling in my body—this is wrong—and converts it into something testable.

That conversion is half of design.
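Here's a minimal sketch of what that diagnosis turns into, assuming a plain web canvas. The names are illustrative, not the app's actual code: the core move is remapping the pointer's CSS-pixel position through the canvas's bounding rect and devicePixelRatio so the drawn pixel lands under the nib.

```javascript
// A pointer event arrives in CSS pixels; a 1-bit canvas wants integer
// device-pixel coordinates. Mixing the two is exactly the "jittery,
// doesn't land under the pen" symptom.
function cssToCanvasPixel(clientX, clientY, rect, dpr) {
  // rect: the canvas's getBoundingClientRect(); dpr: window.devicePixelRatio.
  return {
    x: Math.floor((clientX - rect.left) * dpr),
    y: Math.floor((clientY - rect.top) * dpr),
  };
}

// Example: a 2x display, canvas offset 100px from the page's left edge.
const rect = { left: 100, top: 40 };
const p = cssToCanvasPixel(110.5, 40.5, rect, 2);
console.log(p); // { x: 21, y: 1 }
```

In a browser, this pairs with `canvas.setPointerCapture(e.pointerId)` so strokes don't drop mid-draw, and `ctx.imageSmoothingEnabled = false` so nothing anti-aliases a 1-bit pixel into gray.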

Step one: start with the concept, not the interface

The concept wasn’t “a drawing app.”

It was: a 1-bit sketchpad that feels like an actual tool on Kindle Scribe.

That single sentence quietly contains all the non-negotiables: true 1-bit output, no softened grays, and a pixel that lands exactly where the pen does.

The moment it starts acting “close enough,” the whole concept collapses. Tools can’t be haunted.

Step two: write constraints like a mechanic, not a poet

Before I let any system generate anything, I write down what it must do and what it must never do.

This is the repair-culture part of my design brain. The “what’s the failure mode?” reflex.

This is where I ask my favorite sovereignty question:

Who decided that “process” means ceremony instead of clarity?

Because for me, clarity is the only sacred step.

Step three: use ChatGPT to sharpen the spec until it can’t wriggle

This is where I stop using language like “I want a vibe” and start using language like “I want behavior.”

I don’t ask ChatGPT to invent the product. I ask it to interrogate the requirements until the weak points show.

What’s missing?
What’s ambiguous?
What will break on the actual device?
What would make this feel untrustworthy?

If I leave with a spec that can survive contact with reality, the mirror did its job.

Step four: turn the requirements into a PRD-style prompt

A normal prompt says: “Make a pixel sketchpad.”

A PRD-style prompt says: “Here are the tools, the data model, the input rules, the constraints, and the acceptance tests.”

It’s not about being bossy. It’s about removing ambiguity—the stuff generator tools turn into accidental chaos.

When I do this right, I’m not asking a model to “design.” I’m asking it to lay down scaffolding that I can judge.
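A hypothetical fragment of what "acceptance tests" can look like when one PRD line, "true 1-bit," becomes executable rather than a vibe. The function name and buffer shape are my invention for illustration:

```javascript
// Acceptance check: every pixel in the frame buffer must be pure black
// or pure white. A single gray value means the concept has collapsed.
function isTrueOneBit(pixels) {
  // pixels: flat array of 8-bit gray values for the whole canvas.
  return pixels.every((v) => v === 0 || v === 255);
}

isTrueOneBit([0, 255, 255, 0]); // → true
isTrueOneBit([0, 128, 255]);    // → false: 128 is a gray pixel sneaking in
```

A check like this is cheap to run after every generated draft, which is the whole point of writing requirements a machine can't wriggle out of.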

Step five: generate the first pass… then become the editor-in-chief

The first output is rarely right.

But it gives me something critical: a real artifact to push against.

And then I do what I always do when something almost works:

I tighten it until it tells the truth.

This is the part nobody wants to talk about when they talk about “AI workflows.”

The value isn’t the first draft.
The value is the speed at which you can reach a draft you can judge.

Kindle Scribe turns tiny mistakes into loud ones

E-ink is an honesty machine.

On a normal screen, you can hide a lot behind smooth animations and forgiving pixels. On e-ink, every flaw is amplified: ghosting lingers, latency is visible, and a misplaced pixel stays misplaced until the next refresh.

And there’s another layer: my nervous system notices before my brain can rationalize it.

If the pen lands one millimeter away from where I meant, I feel it immediately. That’s not “preference.” That’s trust breaking.

So when it works—when the pixel lands under the nib—it isn’t “nice.”

It’s relief.

The loop I trust now

Here’s the loop as I actually live it:

I start with a concept.
I write constraints like a mechanic.
I use ChatGPT as a mirror to turn feelings into testable structure.
I generate a first pass with a builder tool.
Then I revise ruthlessly until the tool behaves like a tool.
Then I bring the plan back to ChatGPT and iterate.

I’m not outsourcing judgment.

I’m outsourcing friction—the part that delays reality from showing itself.

The classic Mac is the proof-object

That little classic Mac drawing matters because it’s the moment my attention stopped monitoring the interface and started making something.

A tool becomes real when you forget you’re testing it.

Not when it compiles.
Not when it demos.
When you draw a dumb, charming little computer on an e-ink screen and think:
“Okay. This... is good!”

1bitkindle.com