I guess AI could handle modular software very well: visually, with connectors, like a jigsaw puzzle. Many tasks can be described abstractly.
I could input:
New task: take a photo with GPS position, request a remark and a rating by voice input, and store it in the database as inventory.
New task: show the inventory table with a search filter by date.
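The two tasks above could be sketched as composable modules, roughly like this. This is a toy sketch of the "jigsaw" idea only; all step names and values are hypothetical, and a real app would call a camera, GPS, speech, and database API instead:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical building block: each "connector" is a named step
# that takes the data collected so far and returns an updated copy.
@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]

@dataclass
class Task:
    name: str
    steps: list  # Step objects, snapped together like jigsaw pieces

    def execute(self, data=None):
        data = dict(data or {})
        for step in self.steps:
            data = step.run(data)
        return data

# "new task: take a photo with GPS position, request remark and
#  rating by voice input, store in database as inventory"
take_photo   = Step("take_photo",   lambda d: {**d, "photo": "IMG_0001.jpg"})
tag_gps      = Step("tag_gps",      lambda d: {**d, "gps": (48.58, 7.75)})
voice_remark = Step("voice_remark", lambda d: {**d, "remark": "good condition", "rating": 4})
store_db     = Step("store_db",     lambda d: {**d, "stored": True})

inventory_task = Task("add_inventory_item",
                      [take_photo, tag_gps, voice_remark, store_db])
record = inventory_task.execute()
print(record)
```

The second task ("show inventory table with search filter by date") would just be another `Task` built from different steps, which is exactly the kind of uniform, well-specified structure an AI code generator tends to handle well.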
AI works well when fine-tuned to do one job. But it simulates intelligence; it's not really intelligent. Don't be fooled by the hype. A generic AI is not a "brain" thinking in there; it's a simulation doing its best to fool you into believing it knows what it's doing. Don't trust anything that you can convince that 10+10=25.
I wonder how long it will be before some companies use ChatGPT to write software in order to cut costs. Maybe one should ask ChatGPT what the cost of having ChatGPT write software will be!
ChatGPT can be very helpful if one learns how to use it.
For instance, if one knows how to prepare the AI for the intended context and is also very specific about one's requirements, then it will deliver.
I was able to get the complete code for a WordPress plugin by pasting the requirements and specifications into ChatGPT. Of course, one has to be able to validate the generated code and tell the AI what specifically needs to be corrected. But then it turns out to be a huge time saver.
Another example I recently stumbled over was how to create SEO-optimized blog texts. Bloggers know that just writing text is not good enough to get highly ranked content.
This video shows how ChatGPT can be a huge time saver and game changer in that area:
Students (working at home) got into trouble over their exams for using ChatGPT here at a Strasbourg university…
I am quite sure they didn't intend to cheat, only to get help. But they are lucky: they only have to take the exam once more, probably at the university (no longer at home).
Note how it even ‘changes its opinion’ about whether you can cook meat or poultry in your oven within 3 sentences.
I guess it fools Google into giving it a ranking because it uses words and punctuation.
I think it is not "changing its opinion" at all, because there IS no "opinion". It just works by finding the next word with the highest probability to match, and this is calculated some words ahead. Like a crawling snake.
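The "next word with the highest probability" idea can be sketched with a toy example. Real models condition on far more context than one word and usually sample rather than always taking the top word, but greedy decoding over a tiny hand-made probability table (all numbers invented for illustration) captures the mechanism:

```python
# Toy stand-in for a language model: for each word, the assumed
# probabilities of the next word (hypothetical numbers).
probs = {
    "you":  {"can": 0.6, "cook": 0.3, "should": 0.1},
    "can":  {"cook": 0.7, "eat": 0.3},
    "cook": {"meat": 0.5, "poultry": 0.4, "rice": 0.1},
}

def greedy_continue(word, length):
    out = [word]
    for _ in range(length):
        nxt = probs.get(out[-1])
        if not nxt:
            break
        # always pick the single most probable next word
        out.append(max(nxt, key=nxt.get))
    return " ".join(out)

print(greedy_continue("you", 3))  # → "you can cook meat"
```

Nothing in this loop holds an "opinion"; the continuation is just whatever chain of high-probability words the table happens to produce.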
Reminds me of marketing people. In German I often use the saying "Woher soll ich wissen was ich denke bevor ich gehört habe was ich gesagt habe", translated roughly as "How am I supposed to know what I think before I have heard what I said?"
IMO it's not doing this at all. Maybe that's more akin to ML?
If it were doing this, bearing in mind it is not feeding new data back into training, it would give the same answer to the same prompt every time, and that isn't the case.
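The varying answers don't actually contradict the "highest probability" picture: deployed models usually *sample* from the probability distribution (often reshaped by a "temperature" setting) instead of always taking the top word, so the same prompt can yield different continuations even with frozen weights. A minimal sketch, with an invented distribution standing in for the model's output:

```python
import random

def sample_next(word_probs, temperature=1.0, rng=random):
    # Reshape the distribution: low temperature sharpens it toward
    # the argmax (greedy) choice, high temperature flattens it.
    words = list(word_probs)
    weights = [p ** (1.0 / temperature) for p in word_probs.values()]
    total = sum(weights)
    return rng.choices(words, weights=[w / total for w in weights])[0]

# Hypothetical next-word distribution after some prompt.
dist = {"meat": 0.5, "poultry": 0.4, "rice": 0.1}

rng = random.Random(0)  # fixed seed only so the sketch is repeatable
samples = [sample_next(dist, temperature=1.0, rng=rng) for _ in range(10)]
print(set(samples))  # typically several different words for the same "prompt"

cold = [sample_next(dist, temperature=0.05, rng=rng) for _ in range(10)]
print(set(cold))  # near-zero temperature behaves almost greedily
```

So identical prompts diverging is a property of the decoding step, not evidence that the model is learning between answers.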