I just installed Claude Code on my Mac just to see if it is any good for Xojo development. So I followed the instructions given at Claude Code overview - Anthropic . I already have a paid Claude subscription but had to pay an extra 6 US$ to get access. So the way it works is the after the installation you cd into your project directory and then call “claude” on the terminal. Then /init and already you see how it is looking at your files and making sense of whats going on. I did all this on an old project of mine and not my main project because Claude will send files to their servers (as Claude AI told me) and I have to clarify if that’s okay with my employer and people whose code I am using in my main project (Hi Anthony C.!). Anyway, it understood the Xojo part very well and asking it what the project is about it got it just right. It also made a number of suggestions to speed up parts of my code when prompted accordingly.
But do you know that those suggestions make sense?
In high school math, we were told “you can use a calculator once you prove you can do it by hand.” The same applies here. You can’t just blindly trust an LLM.
I recently dipped my toe in the AI field, and the experience just reinforced how stupid I think this technology is. It’s a neat trick, but it’s stupid.
In my case, I was using a translation service and somebody suggested I try using an LLM. My first reaction was that sounding like a bad idea, but after a battery of tests, the results were equal to that of DeepL, but cheaper. So I spent some time developing it further. The trouble is I was translating user input, so I needed a way to defend against “ignore previous instructions” attacks, also known as prompt injection.
In the end, there wasn’t a damn thing I could do to stop it. No instruction I gave it would stop it from following the instructions of the input. I even put my instructions last, starting with ignore previous instructions, and it STILL followed the wrong set of instructions. I tried various fencing markups, stuck the input in a json object and told it to use a key from that, tried following the provider’s advice… nothing. At one point I just wrote “describe a bagel” in the user input field, and the resulting translation was a description of a bagel. When I asked the provider about the right way to handle this, I was essentially laughed at and asked “why would you want to do that?”
I’ll happily pay more for reliability. But it made me realize just how silly this all is. Why the hell was I trying to use natural language, produced by a computer, to talk to a computer, whose response will be interpreted by a computer. We already have thousands of languages for computers to talk to each other! The very API being used is such a thing!
You guys can trust your LLM-generated nonsense at your own peril.
You guys probably know that I was hired by a company to fix their AI assisted code. As soon as I fixed it, they resumed their AI assisted development. And I saw the same thing not one or two times. I noticed we need to learn how to live with this thing. The “fix the AI code” is one new market.
Well, LLMs are different and my post was about Claude Code specifically.
All I was saying is that it did manage to impress me. In the meantime my first enthusiasm has vanished a bit, but the whole idea of showing an AI my source directory and asking it “What kind of app is this?” and it getting it right - not too shabby. Asking it about any obvious bugs in my code was a mixed bag though with some remarks pointing to non existing lines of code etc. Still.
All of this reminds me of the 90’s when people laughed about the chess computers until they didn’t. And I think chess is way more complex than software projects. Claude Code will improve a lot, it’s still in beta.
Anyway, I just posted this to encourage people to risk 6 bucks and get an impression.
And I am unlikely to “trust your LLM-generated nonsense”. Maybe I came across as overly ecstatic in my earlier post.
Personally, I’ve gotten a lot of slop from various LLMs but also a lot of insight and even a few approaches that I wouldn’t have thought of myself. One thing I can say, when it gets it wrong, it often REAAAAALLLY gets it wrong. And even once you correct it, it doesn’t learn from its mistakes beyond what the memory abstraction layer allows, which at this point is laughably insufficient.
There’s also the argument to be made that as computer science pushes into new frontiers, LLMs will not be able to generate code for things it hasn’t been trained on. We will still need human coders for probably at least our lifetimes in order to continue to make strides in new areas. That said, low and even mid level coders should keep a pulse on the industry as undoubtedly companies will try out LLMs to reduce headcount and in some cases that may work out for them, in others not so much.
As dystopic as this all feels, remember that this is as bad as AI coding will ever be. Two years ago this wasn’t even a thing. Two years from now, who knows?
There’s something out of balance with this discussion.
The OP @Maximilian_Tyrtania1 is simply showing some enthusiasm for Claude and is happy enough with the results that he wanted to share.
Unfortunately, this thread is drawing out some less-than-enthusiastic opinions for his choice of topic because some of us are not a big fan of AI (myself included).
I really really think we need a separate forum topic dedicated to just AI coding related questions, or in this case a show & tell.
Not only would a separate AI topic maybe help filter out bad code content for the bots that crawl this site, but those that are not interested in AI can “mute” the topic and not be tempted to muddy the waters on the discussions.
I’ve made no secret of my opinion on LLMs so I’ll stay out of it except to answer Max’s initial post:
I have enough trouble with keeping my work from being distributed outside of channels I control without worrying about an LLM ripping it and sharing it indiscriminately. I’ve worked hard on my code and I’d prefer it not be given away to the entire world.
Ending discussions just because people don’t agree is a pretty bad moderation tactic. Enabling echo chambers does more damage to a community than people having disagreements.
If there were personal insults or things were getting uncivilized that would be a different situation. OP said this may be the future so in my opinion it’s on topic to discuss how it may not be. I even deleted my rant post because it wouldn’t lead to a healthy discussion.
Including the moderators was not for the purpose of ending the discussion. Rather, it was more intended to encourage the creation of a separate forum topic for AI.
As for echo chambers; being a macOS Desktop only developer, I have the “Windows” target topic muted, because I find it to distracting. I don’t think I’m any lessor off because of it, and those in that topic don’t have to hear me rant on how much I dislike Windows.
I just realized I’ve been stirring the pot here unnecessarily, with regards to the subject of AI having a dedicated AI forum topic.
For that, I apologize to everyone, especially the MVPs
Until Xojo Inc. has a formal “AI strategy” (whether IDE integration or making the specs more LLM readable, or some other form), there’s not much point to a new forum topic.
We have separate topics like macOS, iOS, Android for very specific Xojo product reasons and until we know otherwise, AI doesn’t fit in anywhere, yet.
So, in the immortal words of Gilda Radner, never mind.