Today I thought I’d make some use of the AI technology available and ask it to generate some test CSV data. I had exhausted free tokens at one service, so I moved to Copilot to get some more free credit. It was much more of a ride than I was expecting.
Copilot started by writing a Python script when I was specifically looking for generated, “natural”-looking data (random customer names and email addresses). After some coaxing and corrections, I finally got it generating useful data… but…
Halfway through the CSV it stopped writing commas and switched to a variable number of spaces between fields.
I understand, and that’s why I kept making corrections to get the results I wanted. But at no point did I instruct it to “choose a point halfway through and switch to a random-number-of-spaces-SV,” and that amused me.
… but hey I found a runtime exception, so I guess thanks Copilot!
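For what it’s worth, the kind of generator I was hoping the LLM would hand me is only a few lines of standard-library Python. This is a minimal sketch (the name and domain pools are made up for illustration); the point is that `csv.writer` always emits real commas, with no halfway switch to random spaces:

```python
import csv
import random

# Hypothetical sample pools -- any realistic name/domain lists would do.
FIRST_NAMES = ["Alice", "Bob", "Carol", "David", "Erin", "Frank"]
LAST_NAMES = ["Ng", "Smith", "Garcia", "Okafor", "Dubois", "Khan"]
DOMAINS = ["example.com", "example.org", "mail.test"]

def make_rows(n, seed=42):
    """Build n fake customer records as [id, name, email] lists."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    rows = []
    for i in range(1, n + 1):
        first = rng.choice(FIRST_NAMES)
        last = rng.choice(LAST_NAMES)
        email = f"{first.lower()}.{last.lower()}{i}@{rng.choice(DOMAINS)}"
        rows.append([i, f"{first} {last}", email])
    return rows

def write_csv(path, rows):
    """Write the records with a header row, properly comma-delimited."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "email"])
        writer.writerows(rows)

if __name__ == "__main__":
    write_csv("customers.csv", make_rows(100))
```

Seeding the random generator means the “random” data is the same on every run, which is usually what you want for test fixtures.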
By the way, I rarely, if ever, write whole methods (let alone programs) using LLMs. But I use them from time to time to find mistakes or overcome challenges.
I also suspect that my predominantly positive experiences stem from the fact that I do not (yet) expect too much and, above all, do not demand too much of the LLM.
I just wanted to try to create a positive thread because I feel like my position on AI comes across wrong. I don’t think it’s evil or that people shouldn’t use it. I just think:
Don’t ask AI a question you aren’t going to (or can’t) verify the answer to.
Don’t use AI to undermine other humans.
In fact, because ObjC is so well documented, I find AI to be extremely useful for writing Declares.