
I was inspired to enter this contest to shed light on a worldview that may influence the Future Fund's plans regarding AGI. The Future Fund listed the three possibilities below. I think the last one is where their focus should continue.

“As a result, we think it's really possible that:

  • all of this AI stuff is a misguided sideshow,
  • we should be even more focused on AI, or
  • a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem.”

 

Do You Accept This Challenge?

I’m apprehensive about submitting this worldview, which disregards probabilities. This is a contest judged by superforecasters (our culture’s prophets), about probabilities (our culture’s prophecies), and I’m explaining that probabilities are irrelevant to the subject of this contest. Can you see the conflict? This is going to be tough for me to explain, and tough for the judges to evaluate with an unbiased perspective. But I think we are all after the same thing: we want to better understand reality and how to prepare for an uncertain future.

 

Intro

This worldview is a simplified attempt to explain how little we know in the areas relating to Artificial General Intelligence where we need a better understanding before a genuine AGI can be programmed to THINK. Yes, a machine that can think, comprehend and explain things is what we need to qualify as an AGI.
 

AI vs AGI

AI: a mindless machine. It includes only things we can explain and program.

AGI: what is required is a mind running on a machine, which cannot exclude knowledge-creating processes (life), emotions, creativity, free will and consciousness.

AI can be better than humans at many things (dancing, chess, memory tasks, a finite list of things…) but not everything.

AGI will be better at everything and will have infinite potential. But to get an AGI, we have many hard problems to solve first.

 

Probabilities And Their Problems

*There are a lot more ways to be wrong than to be right.

The first question one should ask is: is AGI possible or impossible, not whether it’s probable. How can probabilities not be relevant when referring to developing an AGI within a specified timeframe? I’ll start by pointing out the errors of probabilities in a universe which contains people. People are problem solvers. We cannot prophesy what knowledge people will create in the future; there are infinite possibilities for what we will come up with next, and our future knowledge growth is unpredictable. Probabilities only work within finite sets, like in a game of chess or poker. But knowledge is infinite and has no bounds. So, when it comes to humans solving problems in the real world, probabilities are irrelevant.

When referring to AGI we are trying to understand what will work in the physical world. In reality there is no way of knowing if a thing is probably true or certainly true. We can never know if we are 100% right or if we are 90%, 95%, 99% correct. We can only be “less wrong”, by eliminating errors from our best held ideas.

Prophecy Vs Predictions

Imagine trying to explain the metaverse to anyone from 1901. Now keep that in mind for the following…

Yes, we can make predictions, like the outcome of some science experiments; or we can use a mathematical formula to predict the location of a planet in orbit 100 years from now. This is based on knowledge we have today. We can’t predict the knowledge we will have in the future, or else we would have it today. Notice how no one from 100 years ago wrote a story about today’s best technologies? We can’t imagine most of our future tech. In addition, predicting a way in which our tech could harm us is easier than imagining how it could help us, which explains why pessimistic, dystopian sci-fi movies are more common than optimistic sci-fi movies in which our problems have been solved.

Using predictions, we can only guess so far. If we could predict the outcome of an experiment more than one step at a time, why wouldn’t we just jump past the first step, or second step, to the outcomes of the next steps? The subsequent outcomes introduce new possibilities that were not possible before. Guessing some of those outcomes is prophecy: storytelling, fun but not scientific.

Assigning probabilities to a genuine AGI before a specific time is prophetic. It’s similar to assigning a probability that our civilization will be wiped out before the end of the century. If prophecy were possible, we wouldn’t need incremental improvements in our ideas. Inventions that our next inventions make possible can only happen after those intermediate inventions exist. If we could prophesy, we would just skip the middle steps and invent the subsequent inventions. But we can’t. We have no idea what humans will come up with in the future; ideas are infinite and unpredictable.

This only touches on why we cannot forecast a probability of whether AGI will happen, in the real physical world, before a specific time. It’s prophecy, which is dependent on random luck.

(This understanding about probabilities takes time to come to terms with; it sure did with me.)
 

The Knowledge Clock

For progress, it may help to think in terms of the speed of knowledge growth, not a date on a calendar or revolutions around the sun. Assigning an arbitrary due date to AGI is not science. Time isn’t the factor. The speed of our knowledge growth is our best metric, and this can’t be predicted.

If we can create the necessary knowledge regarding the entities I’ve listed in this worldview, then we will have AGI sooner or later, but that depends on the speed of our knowledge growth. Yes, people, us, we need to create this knowledge; the first AGI isn’t going to create itself.

 

Is AGI Possible Or Impossible?

There is no law of physics that makes AGI impossible to create. For example, human consciousness exists, and it runs on a wetware computer: a person’s mind (software) running on a person’s brain (hardware). We have no reason why it would be impossible to recreate this. Therefore, we can deduce that it is possible to program an AGI, once we create the required knowledge.

 

To Program An AGI We Need More Knowledge About Knowledge

Perhaps we have all the necessary technology today to program an AGI. What we are lacking is the necessary knowledge on how to program it.

Knowledge is not something you can get pre-assembled off a shelf. For every piece of knowledge there is a building process. Let’s identify the two types of knowledge that we know of:

  1. Genes: The first knowledge-creating process that we know of. It is a mindless process. Genes create knowledge by adapting to an environment, using replication, variation and selection. The knowledge is embodied in genes. It’s a slow knowledge-creating process.
  2. Knowledge created in our minds: An intentional and much faster process. We create knowledge by recognizing problems and creatively guessing ideas for solutions (adapting to an environment). We guess, then criticize our guesses. This process starts in our minds. It’s happening right now in you. I am not uploading knowledge into your brain. You are guessing what I’m writing about, then comparing and criticizing those guesses using your own knowledge. You are trying to understand the meaning of what I’m trying to share with you; then you have that idea compete with your current knowledge on the subject matter. It’s a battle of ideas in your mind. If you are able to be unbiased in your thoughts and criticize your own idea as well as the competing idea, the idea containing more errors can be discarded, leaving you with the better idea and therefore improving your knowledge. Transferring the meaning of ideas (replicating them) is hard to do. People are the only entity that can do it, and we do it imperfectly (with variation and selection).

Computers today are not creating any new knowledge. They only use knowledge which people have already created. People still need to feed the knowledge into the machine.

 

Does An AGI Need Its Own Creativity To Solve Real Problems?

*When I refer to problems, I mean problems that relate to reality, not abstract mathematical problems that don’t need to reference the physical world. Math claims certain truth; science doesn’t. There is no certainty in reality; we can never be certain we have found the truth. What we want to do is solve problems that make life more enjoyable, relieve suffering, and help us understand more about reality. We do this by identifying, understanding and fixing our problems.

All problems are people problems. Without people, problems aren’t recognized. The dinosaurs didn’t know there was a problem before they went extinct and no other entity, that we know of, can understand problems either. An AGI must be programmed to deal with new problems.

An AGI needs creativity to solve new problems. Creativity is about creating something new that didn’t exist before. People have the potential to solve an infinite number of problems. An AI has a finite set of problems it can solve, and it depends on humans to program that finite set. AI cannot solve new problems which have never existed before. Creativity is an essential step in the knowledge-creation process; it’s how we invent theories.

The method:
 

  • Problem  —>  Theory (this is where creativity is necessary)  —>  Error Correction (experiment)  —>  New Better Problem  —>  Repeat ( ∞ )…
     

We know this method works, because this process creates progress. We see things around us improve, and problems being solved.
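The loop above can be sketched in code. This is only an analogy, under my own assumptions: "criticism" is reduced to a numeric experiment and "error correction" to Newton-style refinement of a toy problem (which number, squared, gives 2?), which is a drastic simplification of the method.

```python
def conjecture_and_refute(criticize, improve, guess,
                          tolerance=1e-9, max_rounds=100):
    """Problem -> Theory -> Error Correction -> New Better Problem -> Repeat.

    `criticize` runs the 'experiment' and returns the error of the current
    theory (guess); `improve` produces a corrected theory from that error.
    The loop stops when the theory survives criticism within tolerance.
    """
    for _ in range(max_rounds):
        error = criticize(guess)       # experiment: test the theory
        if abs(error) < tolerance:     # theory withstands criticism, for now
            return guess
        guess = improve(guess, error)  # error correction: a better theory
    return guess

# Toy problem: which number, squared, gives 2?
root = conjecture_and_refute(
    criticize=lambda x: x * x - 2,             # experimental error of guess
    improve=lambda x, err: x - err / (2 * x),  # Newton-style correction
    guess=1.0,
)
print(root)  # close to 1.41421356...
```

The crucial disanalogy, in this worldview, is that a person supplied the problem, the test and the correction rule; the loop itself invents nothing.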

An AGI needs creativity to help solve new problems. Understanding creativity is a hard problem which we do not fully understand yet.

 

Can AGI Evolve Artificially?

For computers to evolve to have AGI, like humans only faster, we would first need to fill the gaps in our understanding of how life emerged. We don’t yet know how inorganic material can become organic, self-replicating life forms. Our theories contain huge gaps, which we need to fill before the process can be understood and then programmed.

Another idea for evolving AGI is to recreate the universe in an artificial simulator. For this we would need to know all the laws of physics, then recreate our universe according to those laws in a computer simulation. This may or may not be possible, given the amount of physical material we would need for the computations and the time available before the end of the universe. Even then, we have a lot of learning to do first.

Will a computer become a person spontaneously if we keep filling it full of human knowledge and increasing its speed and memory? No, that would be similar to the old theory of Lamarckism, which Darwin replaced with a better theory, namely evolution by natural selection.

 

Consciousness

We don’t know how much we need to comprehend in order to understand “consciousness”. Consciousness seems to be our mind’s subjective experience. It would seem to emerge from the physical processes in our brain. Once we have understood consciousness, we can show this by programming it. David Deutsch (one of the godfathers of quantum computing) has a rule of thumb: “If you can’t program it, you haven’t understood it.” Meaning, when we can understand human consciousness well enough to program it into the software running on our computers, only then will we have a real AGI.


Why Are Abstractions Important Regarding AGI?

If you haven’t spent much time thinking about abstractions before, then what I write here will not be enough for you to understand them, but it’s a start. It takes a lot of thinking about them before they are understood. Abstractions are real, complex systems that have effects on the physical world. But they are not physical. They emerge from physical entities. By “physical” I mean something made of tangible stuff in our universe. Non-physical abstractions are powered by the physical but do something else. Our mind is an abstraction: our brain (physical) carrying knowledge (non-physical). Yes, the knowledge is encoded in our physical brains, like a program in a computer. But it’s like another layer above the physical, and that layer is our mind. Another way I’ve come to understand abstractions is that they are something that is more than the sum of its parts. The “more” is the abstraction. And they are objectively real.

Today, computer programs contain abstractions. The computers are made of atoms, but they contain abstractions which can affect the world. E.g., if you are playing chess against a computer and it wins, what beat you? What beat you is the abstract knowledge which was embodied in that computer program. People put that knowledge there.

Our minds (like computer programs) are abstract: non-physical entities, not made of atoms, which affect physical entities.

Understanding abstractions is a necessary step to achieving AGI. First we need a good explanation of how our abstract minds work, to get us closer to programming AGI. To create an AGI, we must program our knowledge into software running on physical hardware, to make possible an abstract entity like our mind.

 

AGI Progress So Far?

There hasn’t been any fundamental difference between today’s computers and our original computers. They still follow the same philosophy, only faster, with more memory and less error-prone.

Today’s AI cannot genuinely pass the Turing test, in which an AI must fool a human judge into believing the AI is human. There are questions we can ask the AI to test whether it can understand something, anything. But as of yet, there is no understanding happening. Don’t expect Siri to be your go-to companion any time soon; she’s going to be frustrating for a while still.

 

Conclusion

I think a real AGI will be a good thing. There are benefits that we can imagine and more that we can’t. Immortality comes to mind; so does populating the rest of the universe.

People are minds with an infinite repertoire of problem-solving potential. After we understand our minds and program an AGI, it will be, by all definitions, a person. AGIs will be able to understand things and to solve problems. They will be knowledge creators and explainers like us. And we will treat them like people.

Today, computers don’t have ideas, but people do. Computers don’t comprehend meaning from words, gestures, implications, symbols or anything at all. People do. For an AGI, what is needed is a knowledge-creating, understanding and explaining program. We aren’t even close. It is possible to program an AGI. But “probably” having an AGI before a certain time is prophecy. Only after we understand human consciousness well enough can we begin the process of programming it.

Understanding that we can solve the many hard problems, needed to program an AGI, is how we deal with our unpredictable future. We can’t solve future problems today. But our knowledge continues to grow.

The Beginning…


* Please, before downvoting, could you explain why, so I can separate emotional votes from rational votes. I strongly encourage criticisms. After all, this is how knowledge growth works.


Comments

Computers today are not creating any new knowledge. They are using the knowledge which people have already created only.  People still need to feed the knowledge into the machine.


Well, if you compare Stockfish and Alpha Zero: Alpha Zero learned to play chess by playing itself, while Stockfish (at least older versions) was programmed by human experts. Alpha Zero reliably beats Stockfish.

You could say Alpha Zero has more knowledge of the game of chess than Stockfish, depending on how you define knowledge. It did not gain its knowledge directly from people. It learned it through trial and error guided by an algorithm.

An AGI needs creativity to solve new problems. Creativity is about creating something new, that didn’t exists before. People have the potential to solve an infinite number of problems. An AI has a finite set of problems it can solve. They are dependent on humans to program that finite set of problems. AI can not solve new problems which have never existed before. Creativity is an essential step in the knowledge creation process, it’s how we invent theories.

There are plenty of examples of computers producing creative works; the latest round of AI art generators is an example.

People are a mind with an infinite repertoire of problem solving potential. After we understand our minds and program the AGI, it will be, by all definitions, a person. They will be able to understand things, they will be able to solve problems. They will be knowledge creators and explainers like us. And we will treat them like people.

Expert systems can solve some problems better than humans and perform inferences more reliably. Knowledge-bases don't perform inferences, but coupled with an explanation module they can explain what they know enough to teach a person. In combination, an expert system can solve problems and explain knowledge. But it would still just be a dumb program, and doesn't understand things. However, people might treat such dumb programs as people, like Eliza, for example, the old therapy program.

Whether my examples actually contradict you depends on definitions for "knowledge" and "understanding". If you could define those terms explicitly, that might help me understand your article better.

Great questions and thank you for asking. I also had these questions come up in my own mind while learning this epistemology.

Here is how I understand the terms you mentioned:

Knowledge: Information with influence, or information that has causal power (i.e. genes, ideas). Fundamentally, knowledge is our best guesses.

Understanding: Part of a knowledge-transfer process, which varies from subject to subject. It is the rebuilding of knowledge in one’s own mind. In people it’s an attempt to replicate a piece of knowledge.

 

Trial and Error - Yes, I agree AlphaZero has more knowledge than Stockfish, but it’s not new knowledge to the world. Please let me try to explain, because this question also puzzled me for a while. A kind of trial and error happens in evolution as well. Genes create knowledge about the environment they live in by replicating with different variations (trial) and dying (error). Couldn’t a computer program do the same thing, only faster? I think it can, but only in a simulated environment that people created. The difference is, genes have access to a niche in the physical world, where they confront problems in nature. They solve these problems or they go extinct. A computer program doesn’t have the same access to our physical environment. Therefore people must simulate it. But we still don’t know enough about our own environment to simulate it accurately enough; we have huge gaps in our knowledge about the laws of nature.

When a chess program writes its own rules and steps outside its game, that would hint at AGI.

 

Creativity in AI art generators - What you are seeing does not involve the creative process. Original art is being displayed and can be misunderstood as creative. It’s an algorithm made by people to combine variations of images based on our inputs. The images are new and have never been seen before. But it’s not a creative, problem-solving process that is happening.

 

I agree, there will be many cases where our AI will be useful and help people solve their problems, like Eliza, whom you mentioned. People are still behind the scenes pulling the strings. And when people create new knowledge (like a deeper understanding of psychology), we will include it in our programs and Eliza will work much better.

 

I really appreciate your questions. If you have anymore please don’t hesitate to ask.

What if the knowledge developed by giving a computer program a model of an environment, and then letting the program run along with an algorithm, surprises people with its insight? For example, people study Alpha Zero’s chess play because it is so novel. It violates what are thought to be the basics of chess tactics and reveals new strategies of play. The knowledge “has influence” of a type.

I'm tempted to interpret you as believing that computers do not produce knowledge about an environment that people do not already have about the environment model (for example, the rules of chess) that they give a program (a learning algorithm). However, computer programs do produce surprising knowledge of some influence (for example, Alpha Zero's superior style of play) that was unknown to the humans who programmed them.

As far as development of depth of understanding, some work in automated theorem proving in geometry goes back several decades, and provided novel proofs of geometry theorems as far back as 1956. A proof of a theorem doesn't qualify as a new theory, but it could show "depth of understanding".

Then there's developing new theories. Software is having success generating its own hypotheses. Here's a quote from the linked article on Scientific American:

Many fields may soon turn to the muse of machine learning in an attempt to speed up the scientific process and reduce human biases.

The article linked from the "muse" link is about ai and artistic creativity.

In general, I don't believe that the AI tools we use now show autonomous thought and consciousness with any continuity. In that way, they do not have our intelligence.  However, I am not convinced by our discussion that we humans distinguish ourselves from AI in terms of capabilities for knowledge or understanding, as you have defined those terms.

I think we will learn a lot from AI. It will reveal inefficiencies and show us better ways to do many things. But it’s people that will find creative ways to utilize the information to create even better knowledge. AlphaZero did not create knowledge, rather it uncovered new efficiencies, and people can learn from that, but it takes a human to use what was uncovered to create new knowledge.

AlphaZero (machine learning) vs. problem solving about the nature of reality:


AlphaZero is given the basic rules of the game (people invented these rules).

Then it plays a game with finite moves on a finite board.  It finds the most efficient ways to win (this is where Bayesian induction works).

Now graft the game onto our reality, which includes a board with infinite squares where infinite new sets of problems arise. For instance, new pieces show up regularly and the rules for them are unknown. How would AlphaZero solve these new problems? It can’t; it doesn’t have the necessary problem-solving capabilities which people have. What AI needs is rational criticism, or creativity with error-correction abilities.

 

Games in general solved a problem for people (this introduces a new topic but it’s relevant nonetheless):

Imagine if AlphaZero wasn’t given the general rules of the game of chess. What would happen next? The program needs to be able to identify a problem before continuing.

People had a problem of being bored. We invented games as a temporary solution to boredom.

Does an AI get bored? No. So how could it invent games (if games weren’t invented yet)? It couldn’t, not without us, because it wouldn’t know it had a problem.

 

The article you linked to:

Yes, we will have many uses for machine learning and AI, and it will help people come up with better hypotheses, solve complex (mathematical) problems and improve our lives. Notice, these are complex problems, like sifting through big data and combining variables, but no creativity is needed. The problems that I am referring to are problems about understanding the nature of reality. The article refers to a machine which is going through the same trial-and-error process as the AlphaZero algorithm mentioned earlier. But it’s people who created the ranking system of the chemical combinations mentioned in the article, the same way people created the game and rules of chess which AlphaZero plays. People identified the problems and solved them using conjectures and refutations. After the rules are in place, the algorithm can take over.

Lastly, it’s people that interpret the results and come up with explanations to make any of this useful.

 

AI: finite problem-solving capabilities. (Bayesianism works here.)

People and AGI: infinite problem-solving capabilities. (Popperian epistemology works here.)

It’s a huge gap from one to the next.


I don’t expect you to be convinced by my explanation. It took me years of carrying this epistemology around in my head, learning more from Popper and David Deutsch and the like, to make sense of it. It’s a work in progress.


Thanks for your great questions, this is fun for me. It’s also helping me think of ways to better explain this worldview.

You're welcome, and thanks for the reply. I'm enjoying our conversation.

What about:

  • ai art as an example of human creativity
  • ai generating hypotheses that humans could not, seemingly demonstrating human creativity
  • ai generating theorems (conjectures, refutations), in old systems back in the 60's

If the concerns are:

  • creativity in response to real-world events
  • ability to increase understanding of a novel environment without aid from a predefined ontology, except for testing behaviors learned by mimicry
  • ability to improve epistemological distinctions

then I think future developments in robotics will satisfy human intuitions of what it takes for an agi to be an AGI. We can see the analogies between robot behavior and human behavior more easily, and they will be an easier proof of AGI functionality of the kind that your worldview denies.

EDIT: When the robots are controlled or communicated with by external AI using input from robot sensors or external sensors, we will have a fuller idea of the varieties of humanlike experience and learning that AI can demonstrate.
