Like many others, I have been spending a significant amount of time experimenting with artificial intelligence (AI) lately. My focus has been on testing the output generated by some of the more commonly used tools, such as ChatGPT. The goal has been simple: to do more in less time, especially with tasks involving programming and math. These are areas I am familiar with, but would not claim to be an expert in.
What I have found most interesting during this exploration is that AI is not yet trustworthy when it comes to producing consistently accurate information. While it often sounds confident and authoritative, its answers frequently contain subtle errors. That said, it does something surprisingly well. It teaches.
Not by being correct, but by being wrong in ways that invite discovery.
Using AI for Teaching
AI tools can make excellent teachers when used correctly. The value does not come from blindly accepting the answers they provide, but from questioning them. When we ask questions we do not know the answers to, and then actively search for flaws in the responses, learning naturally follows.
Programming and math are two areas where AI inaccuracies become obvious very quickly. All it takes is compiling and running a generated code snippet, or checking a math equation with a calculator. In many cases, the result does not work as expected.
This is where things become interesting. The output is often close enough to feel believable, but wrong enough to fail. That gap forces the human to investigate, reason, and correct the mistake. In doing so, understanding deepens.
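To make that gap concrete, here is a small invented example in the spirit of what I keep running into. The function, its bug, and the sample data are all mine, not output from any particular tool, but the pattern is typical: the code runs cleanly and returns a plausible but wrong answer.

```python
# A hypothetical example of plausible-but-wrong generated code: this
# "median" forgets to sort the list and mishandles even-length input.
def median_wrong(values):
    return values[len(values) // 2]

# The corrected version a skeptical reader ends up writing.
def median_correct(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

sample = [7, 1, 5, 3]
print(median_wrong(sample))    # 5   -- confident and wrong
print(median_correct(sample))  # 4.0 -- what a quick manual check confirms
```

Running both takes seconds, and the mismatch is exactly the kind of failure that forces you to stop and reason about why.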
This is also where AI becomes potentially dangerous. Because the answers are rarely completely wrong, they can easily be accepted without verification. In that sense, AI behaves almost like a trickster. Helpful, convincing, and occasionally misleading.
AI Needs a Human Feedback Loop
AI systems generate answers by synthesizing patterns from the information they have been trained on. They predict what a correct response should look like, rather than knowing whether it is actually correct. What they lack is a meaningful feedback loop.
It could be something as simple as a response that says, no, this is not accurate, and here is why, followed by a corrected explanation. Without that kind of feedback, the system has no grounding in reality. It only refines probabilities.
One possible solution could resemble Wikipedia, where human feedback plays a direct role in refining information over time. If user corrections were weighted more heavily than raw probability, AI could progressively improve its reliability.
In that model, answers would not just be generated. They would be shaped by lived human understanding.
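To be clear about how speculative this is, here is a deliberately minimal Python sketch of the idea. Every name, weight, and scoring rule in it is an assumption of mine, not a description of how any existing system works: candidate answers start with a model-style confidence score, and human corrections move the ranking more strongly than that raw probability does.

```python
from dataclasses import dataclass

# A hypothetical feedback-weighted answer store; all names and weights
# here are invented for illustration, not drawn from any real system.
@dataclass
class Candidate:
    text: str
    model_confidence: float  # the model's raw probability-style score (0..1)
    human_votes: int = 0     # net human feedback: +1 confirms, -1 refutes

    def score(self, human_weight: float = 3.0) -> float:
        # Human corrections are weighted more heavily than raw probability.
        return self.model_confidence + human_weight * self.human_votes

answers = [
    Candidate("2 + 2 = 5", model_confidence=0.9),  # confident, wrong
    Candidate("2 + 2 = 4", model_confidence=0.6),  # hesitant, right
]

answers[0].human_votes -= 1  # a reader flags the confident answer as wrong
answers[1].human_votes += 1  # and confirms the less confident one

best = max(answers, key=Candidate.score)
print(best.text)  # "2 + 2 = 4" -- the correction outranks model confidence
```

A real system would obviously need far more than this, identity, abuse protection, calibration, but the ranking flip is the core of the idea: a single honest correction outweighs a confident guess.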
And who knows. Maybe that simple idea is already the foundation of the next major evolution in artificial intelligence. Or maybe I just accidentally described the ultimate AI application.
