Are we asking too much from AI? Probably.
Sturgeon’s law says 90% of everything is crap. With AI involved I’d say that’s gone to 99%.
Maybe that’s unfair.
All the way through my dabbling over the past two-plus decades, one thing keeps coming back into focus: Artificial Intelligence can be really good at specific tasks with clearly defined data sets and inputs; it’s not so good at the great mess we call life.
The current hype around Generative AI is driven by asking too much of it. We ask it to behave like we humans do, then get annoyed when it doesn’t. Worse, we ask it to be more than we are.
I’ve called out Bing and Bard for their inaccurate answers to technical questions. I scream at ChatGPT-generated articles that clearly need a human’s touch to make them flow. But is it really their fault? Or is it ours, for expecting them to produce a perfect answer to an imperfect problem?
For all this, I am not inherently “anti-AI”. I see it as a tool to support the squashy carbon-based life-form struggling to find an answer to a question, to make up for poor drawing skills, or to find inspiration on what to write next. Yet I don’t accept what it tells me on faith, preferring to check everything it references. Yes, that means more work, but it takes me down avenues I might otherwise have missed.
Perhaps this is because I know what to expect. I know that when it tells me the size of the green hydrogen market is $x and its referenced source is two years old, I need to bring out the Google-fu. Check your sources, screams the wanna-be journalist in me, and ChatGPT is no more reliable a source than Dave98329 on Twitter.
I’m using AI to augment my expertise. It guides, inspires and forces me to challenge my assumptions. But it is never right.
Not until I’ve checked its sources.