Like many others, I have been spending significant time playing around with AI (Artificial Intelligence) lately, mainly testing the output generated by some of the more commonly used AI tools, such as ChatGPT. All for the sole purpose of doing more in less time with things like programming and math: topics I am somewhat familiar with, but claim to be no pro at. During this venture, what I've found very interesting is that AI is nowhere near trustworthy for accurate data quite yet. But it does a very good job at teaching (through the human discovering inaccuracies in the AI's answers and examples).
Using AI for Teaching
AI tools make for a great teacher when we ask them questions we don't know the answers to and then, instead of accepting the answers as provided, begin looking for the flaw(s) in those answers.
Programming and math output samples are two simple ways to expose AI inaccuracy quickly, as all a person has to do is try to compile and run the resulting program snippet, or check the math equation with a calculator.
When we do, we find that the output is rarely 100% correct. That's where it becomes dangerous. It's almost a trickster.
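As a trivial illustration of what that kind of checking looks like (the claimed value here is hypothetical, just standing in for an AI's answer), a few lines of Python are enough to verify an arithmetic claim the same way you would verify a generated snippet by actually running it:

```python
# Suppose the AI claimed that 17 * 24 = 418 (a hypothetical wrong answer).
claimed = 418
actual = 17 * 24

if claimed == actual:
    print("The AI's answer checks out.")
else:
    print(f"Wrong: the AI said {claimed}, but 17 * 24 is actually {actual}.")
```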
AI Needs a Human Feedback Loop to Correct Its Answers
AI is programmed to derive a new answer from all the information it is provided. But what it needs is a sort of feedback loop. Maybe something like this:
No, that is not a correct or accurate answer, but this is… etc.
Perhaps it could work something like Wikipedia, taking human user feedback into consideration to progressively refine the resulting answers. It could then derive new answers by weighting the probability of each candidate, with human feedback carrying the greater influence.
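As a back-of-the-napkin sketch of that idea (everything here is made up for illustration: the answer store, the scoring, and the weight adjustments are my own assumptions, not how any real AI tool actually works), the loop might look something like this in Python:

```python
import random
from collections import defaultdict

# Hypothetical store of candidate answers per question, each with a weight.
# Weights start equal; human feedback nudges them up or down over time.
answers = defaultdict(dict)

def add_candidate(question, answer, weight=1.0):
    answers[question][answer] = weight

def pick_answer(question):
    """Pick a candidate answer, with probability proportional to its weight."""
    choices, weights = zip(*answers[question].items())
    return random.choices(choices, weights=weights, k=1)[0]

def human_feedback(question, answer, correct):
    """The feedback loop: a human confirms or rejects an answer,
    and that answer's weight is raised or lowered accordingly."""
    if correct:
        answers[question][answer] *= 1.5   # reinforce confirmed answers
    else:
        answers[question][answer] *= 0.5   # penalize rejected answers

# Example: two candidate answers, one wrong.
add_candidate("2 + 2", "4")
add_candidate("2 + 2", "5")

# A human spots the flaw and reports it; the wrong answer loses influence.
human_feedback("2 + 2", "5", correct=False)
human_feedback("2 + 2", "4", correct=True)

print(pick_answer("2 + 2"))  # now far more likely to print "4"
```

Over enough rounds of feedback, the rejected answers' weights shrink toward zero while the confirmed ones come to dominate, which is the "progressive refinement" described above.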
Heck… maybe I just came up with the ultimate AI application?