There are many programs out there that try to make us more self-aware, but most are built on a flawed premise: that machines can have real knowledge. To one degree or another, they all embody this assumption, and in many cases the result is an AI with an ego-centric perspective.
So what? That doesn't make them bad. In fact, some programs built on a fundamentally flawed premise are still very useful, and have been used to great effect by many people throughout history.
The key word here is “fundamentally,” and the distinction is crucial. Many programs and algorithms have been built on the premise that machines can have real knowledge. That is a dangerous idea, because it implies that machines possess knowledge humans do not, and that systems of a certain type (computers, AI systems, and so on) have the ability to be smarter than us and take full control of our lives.
This is not the case. AI programs such as Google’s self-driving cars, Facebook’s AI, Amazon’s Alexa, Apple’s Siri, and Microsoft’s Cortana all have hard limits on their knowledge and understanding of the world; none of them possesses knowledge that humans do not.
All knowledge is relative, and all knowledge rests on a foundation of assumptions. I see this clearly in the way humans have built up their ideas about the world: we adopt assumptions and try to stick with them, but at some point we have to look more critically at what we are saying.
It has also been my experience that the more people use AI, the less reliable our old assumptions turn out to be. We humans cannot predict everything that will happen, but we have a great deal of information at our disposal, and with AI we will keep building and revising our assumptions as we put more and more of that information to use.
It’s also important to note that we humans have accumulated a great deal of knowledge about the world in general and our own corner of it in particular. But we can only be fully aware of the most specific elements of our world, and our knowledge extends only that far. AI is a whole new class of tool for building our own models of the world, and those models will serve as our “truths” as they become more accurate.
For example, in the world of physics we know that force equals mass times acceleration, and that a pendulum clock’s rate is set by the length of its pendulum, not by how heavy we make it. Knowing that relationship is what lets us build a clock that accurately counts the days. In that sense, AI will be able to create “truths” of its own. The more we can reason about our own existence, the more accurate and detailed our models of the world will be.
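To make the clock example concrete: the physical “truth” a clock-builder relies on is the small-angle pendulum formula T = 2π√(L/g). A minimal sketch (the function name and the sample length are my own, chosen for illustration):

```python
import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g).

    Note that the pendulum's mass does not appear anywhere --
    only its length and local gravity set the clock's rate.
    """
    return 2 * math.pi * math.sqrt(length_m / g)

# A "seconds pendulum" (period of about 2 s) needs a length near 0.994 m.
period = pendulum_period(0.994)
```

This is exactly the kind of compact, predictive model the paragraph above describes: a small set of assumptions (small swing angle, constant gravity) that yields accurate predictions within its limits.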
This is a good thing. It means that, as we learn more about the world around us, our models will become more accurate. We may eventually build a model of the world that predicts which way a descending plane will drift and how much energy it will have to expend to climb back to altitude.
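The energy half of the plane example has a simple lower bound: regaining altitude costs at least the potential-energy change, E = m·g·h (real aircraft also burn energy against drag). A minimal sketch with made-up numbers:

```python
def climb_energy_joules(mass_kg: float, altitude_gain_m: float,
                        g: float = 9.81) -> float:
    """Minimum energy to regain altitude: the potential-energy change E = m*g*h.

    This ignores drag and engine inefficiency, so it is a lower bound,
    not a full flight model.
    """
    return mass_kg * g * altitude_gain_m

# Illustrative only: a 70,000 kg airliner regaining 1,000 m of altitude
# needs at least ~687 megajoules.
energy = climb_energy_joules(70_000, 1_000)
```

Even a one-line model like this illustrates the point: it predicts well within its stated assumptions, and those assumptions are exactly what a more detailed model would refine.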