Hacker News

I built a programming language using Claude Code

> While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).

Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.

Programming languages are, after all, the interface a human uses to give instructions to a computer. If you're not writing or reading it, the language, by definition, doesn't matter.

by andsoitis
I've been working on a large codebase that was already significant before LLM-assisted programming, leveraging code I’d written over a decade ago. Since integrating Claude and Codex, the system has evolved and grown massively. Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.

That said, the core value of the software wouldn't exist without a human at the helm. It requires someone to expend the energy to guide it, explore the problem space, and weave hundreds of micro-plans into a coherent, usable system. It's a symbiotic relationship, but the ownership is clear. It’s like building a house: I could build one with a butter knife given enough time, but I'd rather use power tools. The tools don't own the house.

At this point, LLMs aren't going to autonomously architect a 400+ table schema, network 100+ services together, and build the UI/UX/CLI to interface with it all. Maybe we'll get there one day, but right now, building software at this scale still requires us to drive. I believe the author owns the language.

by bobjordan
This takes all the satisfaction out of spending a few well thought out weekends to build your own language. So many fun options: compiled or interpreted; virtual machine, or not; single pass, double pass, or (Leeloo Dallas) Multipass? No cool BNF grammars to show off either…

It’s missing all the heart, the soul, of deciding and trading off options to get something to work just for you. It’s like you bought a rat bike from your local junkyard and are trying to pass it off as your own handmade cafe racer.

by asciimov
On the topic of LLMs not doing well with UI and visuals:

I've been trying a new approach I call CLI-first. I realized CLI tools are designed to be used both by humans (command line) and machines (scripting), and are perfect for LLMs since they are a text-only interface.

Essentially, instead of trying to get the LLM to generate a fully functioning UI app, you focus on building a local CLI tool first.

A CLI tool is cheaper and simpler, but still has a real human UX that pure APIs don't.

You can get the LLM to actually walk through the flows and journeys like a real user, end to end, and it will actually see the awkwardness or gaps in the design.

Your command structure will very roughly map to your resources or pages.

Once you are satisfied with the capability of the CLI tool (which may actually be enough on its own, or as a local UI),

you can get it to build the remote storage, then the APIs, and finally the frontend.

All the while, you can still tell it to use the CLI to test through the flows and journeys against real tasks that you have, and iterate on it.

I did this recently for pulling some of my personal financial data and reporting it. And now I'm doing this for another TTS automation I've wanted for a while.
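The CLI-first flow above could be sketched roughly like this. This is a minimal, hypothetical example; the tool name (`fintool`) and commands (`import`, `report`) are illustrative, not from the comment. The key property is that every flow produces plain text an agent can read back.

```rust
use std::env;

// Dispatch returns the text a human (or an LLM agent) would see, so
// flows can be exercised end to end with no UI at all. Commands
// roughly map to the resources a later frontend would expose.
fn dispatch(args: &[&str]) -> String {
    match args.first().copied() {
        Some("import") => format!(
            "importing transactions from {}",
            args.get(1).copied().unwrap_or("stdin")
        ),
        Some("report") => "monthly spending report (stub)".to_string(),
        _ => "usage: fintool <import PATH | report>".to_string(),
    }
}

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    let refs: Vec<&str> = args.iter().map(String::as_str).collect();
    println!("{}", dispatch(&refs));
}
```

Keeping the dispatch logic in a pure function like this also means the agent can test journeys by asserting on returned strings, before any remote storage or API layer exists.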

by aleksiy123
> More addictive than that is the unpredictability and randomness inherent to these tools. If you throw a problem at Claude, you can never tell what it will come up with. It could one-shot a difficult problem you’ve been stuck on for weeks, or it could make a huge mess. Just like a slot machine, you can never tell what might happen. That creates a strong urge to try using it for everything all the time.

That is the part of the post that stuck with me, because I've also picked up impossible challenges and tried to get Claude to dig me out of a mess, starting from very vague instructions[1].

The effect feels like the loss-disguised-as-win mechanic from the video games I used to work on at Zynga.

Sure it made a mistake, but it is right there, you could go again.

Pull the lever, doesn't matter if the kids have Karate at 8 AM.

[1] - https://github.com/t3rmin4t0r/magic-partitioning

by gopalv
Claude Code built a programming language using you
by pluc
> The @ meta operator also works with comparisons.

I haven't read any farther than this, yet, but this made me stutter in my reading. Isn't a comparison just a function that takes two arguments and returns a third? How is that different from "+"?

by randallsquared
Next you can let Claude play your video games for you as well. Gads, we are a voyeuristic society, aren't we?
by tines
AI-written code with a human-written blog post, that's a big step up.

That said, it's a lot of words to say not a lot of things. Still a cool post, though!

by ramon156
I have been trying this as well, and you can get quite far quickly.

However, I fear that agents will always work better on programming languages they have been heavily trained on. So for agent-based development, inventing a new domain-specific language (e.g. for internal use in a company) might not be as efficient as using a generic programming language that models are already trained on, and just living with the extra boilerplate.

by dybber
Not to discount your experience, but I don't understand what's interesting about this. You could always build a programming language yourself, given enough time. Programming language constructs are well represented in the training dataset. I want someone to build something uniquely novel that's not actually in the dataset, and then I'll be impressed by CC.
by Bnjoroge
Using LLMs to invent new programming languages is a mystery to me. Who or what is going to use this? Presumably not the author.
by laweijfmvo
I think we're going to see a lot more of this. I've done a similar thing, hosting a toy language on Haskell, and it was remarkably easy to get something useful and usable in basically a weekend. If you keep the surface area small enough, you can now make a fully fledged, compiled language for basically any purpose you'd like, and coevolve the language, the code, and the compiler.
by jaggederest
I'd say these times will be filled with a lot of tailored-to-you, "self"-made software, but the question is: are we increasing the amount of information in the world? I hear Claude and ChatGPT are getting good at mathematical proofs, which really adds something to our knowledge, but all other things are neutral to entropy, if not decreasing it. Strange time to live in, strange valuations and devaluations...
by p0w3n3d
The AI age is calling for a language that is append-only, so we can write in a literate programming style and mix prompts with AI output, in a linear way.
by amelius
It’s been a while friend

Congratulations on getting to the front page ;)

by shadeslayer
Curious how you handled context management as the project grew — did you end up with a single CLAUDE.md or something more structured? I've been thinking about this problem and working on a standard for it.
by jackby03
Does this really test Claude in a useful way? Is building a highly derivative programming language a useful use case? Claude has probably indexed all existing implementations of imperative dynamic languages and is basically spewing slop based on that vibe. Rather than super flexible, super unsafe languages, we need languages with guardrails, restrictions and expressive types, now more than ever. Maybe LLMs could help with that? I'm not sure, it would certainly need guidance from a human expert at every step.
by grumpyprole
Admittedly I only skimmed this, but I found it interesting that they came to the conclusion that Claude is really bad at (things they know how to do, and can therefore judge) and really good at (things they don't know how to do or judge).

I mean, they may be right, but there is also a big opportunity for this being Gell-Mann amnesia: "The phenomenon of a person trusting newspapers for topics which that person is not knowledgeable about, despite recognizing the newspaper as being extremely inaccurate on certain topics which that person is knowledgeable about."

by dwedge
> I’ve also been able to radically reduce my dependency on third-party libraries in my JavaScript and Python projects. I often use LLMs to generate small utility functions that previously required pulling in dependencies from NPM or PyPI.

This is such an interesting statement to me in the context of leftpad.
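For readers who don't know the reference: left-pad was an eleven-line NPM package whose removal broke thousands of builds. The kind of utility in question is small enough to generate inline; a hypothetical Rust sketch (names are illustrative):

```rust
// Hypothetical sketch: the sort of tiny utility (here, left-pad)
// an LLM can generate inline instead of pulling in a dependency.
fn left_pad(s: &str, width: usize, fill: char) -> String {
    let len = s.chars().count();
    if len >= width {
        return s.to_string();
    }
    // Build the padding first, then append the original string.
    let mut out: String = std::iter::repeat(fill).take(width - len).collect();
    out.push_str(s);
    out
}

fn main() {
    assert_eq!(left_pad("7", 3, '0'), "007");
    assert_eq!(left_pad("hello", 3, ' '), "hello"); // already wide enough
    println!("ok");
}
```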

by righthand
I rolled a fair die using ChatGPT.
by atoav
I recently tried using Claude to generate a lexer and parser for a language I was designing. As part of its first attempt, this was the code to parse a float literal:

  fn read_float_literal(&mut self) -> &'a str {
    let start = self.pos;
    while let Some(ch) = self.peek_char() {
      if ch.is_ascii_alphanumeric() || ch == '.' || ch == '+' || ch == '-' {
        self.advance_char();
      } else {
        break;
      }
    }
    &self.source[start..self.pos]
  }
Admittedly, I do have a very idiosyncratic definition of floating-point literal for my language (I have a variety of syntaxes for NaNs with payloads), but... that is not a usable definition of float literal.

At the end of the day, I threw out all of the code the AI generated and wrote it myself, because the AI struggled to produce code that was functional to spec, much less code I could easily extend to the other kinds of operators I knew I would need.
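For contrast, a conventional float-literal scanner stops at the end of a valid literal instead of swallowing any alphanumeric run. This is a minimal sketch of the ordinary digits-dot-digits-exponent shape, not the parent's actual replacement (which had idiosyncratic NaN-payload syntax):

```rust
// Scan a float literal prefix of `src`: digits, optional fraction,
// optional exponent. Returns None if `src` doesn't start with a digit.
fn scan_float(src: &str) -> Option<&str> {
    let bytes = src.as_bytes();
    let mut i = 0;
    // Integer part (required).
    while i < bytes.len() && bytes[i].is_ascii_digit() { i += 1; }
    if i == 0 { return None; }
    // Fractional part: '.' only counts if followed by a digit.
    if i < bytes.len() && bytes[i] == b'.' {
        let mut j = i + 1;
        while j < bytes.len() && bytes[j].is_ascii_digit() { j += 1; }
        if j > i + 1 { i = j; }
    }
    // Exponent: e/E, optional sign, then required digits.
    if i < bytes.len() && (bytes[i] == b'e' || bytes[i] == b'E') {
        let mut j = i + 1;
        if j < bytes.len() && (bytes[j] == b'+' || bytes[j] == b'-') { j += 1; }
        let digits_start = j;
        while j < bytes.len() && bytes[j].is_ascii_digit() { j += 1; }
        if j > digits_start { i = j; }
    }
    Some(&src[..i]) // all scanned bytes are ASCII, so this slice is valid
}

fn main() {
    assert_eq!(scan_float("3.14rest"), Some("3.14"));
    assert_eq!(scan_float("1e10+2"), Some("1e10"));
    assert_eq!(scan_float("1.2.3"), Some("1.2")); // AI version would eat it all
    println!("ok");
}
```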

by jcranmer
"Just one more prompt..." I can relate. Who else has been affected by this?
by zahirbmirza
Now anyone can be a Larry Wall, and I'm not sure that's a good thing.
by craigmcnamara
That was step #1.

Step #2 is: get real people to use it!

by shevy-java
Nope. You didn't write it. You plagiarized it. AI is bad
by iberator
Wait. You built a new language, for which there is thus no training data.

Who the hell is going to use it then? You certainly won't, because you're dependent on AI.

by mriet
> While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).

The "more on that later" was unit tests (also generated by Claude Code) and sample inputs and outputs (which is basically just unit tests by a different name).

This is... horrifically bad. It's stupidly easy to make unit tests pass with broken code, and even easier when the test itself is also broken.

These "guardrails" are made of silly putty.
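To make the point concrete, here's a hypothetical illustration (not the author's code): a function that is wrong almost everywhere still passes its only sample input/output check.

```rust
// Broken on purpose: ignores the second argument entirely.
fn add(a: i32, _b: i32) -> i32 {
    a + 2
}

fn main() {
    // The "guardrail": one sample input/output pair. It passes.
    assert_eq!(add(2, 2), 4);
    // Yet the function is wrong for almost every other input:
    assert_ne!(add(1, 5), 6);
    println!("tests pass");
}
```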

EDIT: Would downvoters care to share an explanation? Preferably one they thought of?

by kerkeslager