Meta
TLDR
- It might be possible to automate most currently-common[1] human decisions in the next ~30 years, in large part because many humans are fairly bad at making decisions in the first place.
- We can think of most sorts of decision automation in terms of levels similar to those used for autonomous vehicles. We'll want to begin at levels 1-2 for many things, then gradually work our way up to levels 3-5.
- Decision automation can be great or terrible for humanity. I lean positive.
- A lot of software already does decision automation, but there are ways to speed up decision automation in new areas.
- General-purpose tools to do decision automation might be particularly relevant.
- Decision automation is one intervention within wisdom and intelligence, and it represents a very different path to decision improvement than rationality workshops or much of institutional decision-making.
Epistemic Status
Quickly written (~5 hours), uncertain. I've been thinking about this area a lot over the last 5-10 years or so. I haven't formally studied decision automation.
History
This was originally posted to Facebook here.
QURI
QURI is focused on making estimation infrastructure, which is a subset of decision automation.
The Key Points
A lot of people (including me!) make a lot of pretty stupid decisions.
Software is becoming much better at making decisions.
It seems surprisingly easy to me (maybe $100 billion of tech effort over 20 years) to imagine making systems that would outperform the majority of people's top 10,000 decisions per year. (Incredibly rough ballparking.)
For example:
- Which of these n jobs would be best for me?
- Which menu option should I order at a restaurant?
- What sorts of medical interventions should I get?
- This nice-looking salesperson is trying to sell me on a new home loan. Should I go along with this?
- Does this business deal seem fishy?
- How should I handle this social situation? Is this person angry at me or just frustrated with other things?
- Should I move to a different country? Which one?
- What major should I choose in college?
- Which suppliers should our company use?
- Which person should our firm hire for this job? (This will require some human input)
- What writing changes should we make to this technical report?
It's true that doing a great job at any of these questions would be incredibly tough, perhaps AGI-complete.
But often, the alternative is not a great decision; it's a really mediocre decision, sometimes arrived at after a whole lot of grief. The bar is often really, really low.
Decision automation doesn't have to be highly accurate to be preferable to many human decisions: it's typically dramatically faster and cheaper, and the human alternatives are often highly inaccurate anyway. Also, automation doesn't need to replace human decisions; it can simply make suggestions and provide extra information. The equivalents of levels 1 and 2 of car autonomy can go a long way before we aim for levels 3 and 4.
Daniel Kahneman has written extensively about how often simple algorithms do better than personal intuitions. Cleverly applying many more simple algorithms would get us pretty far, and of course we could go further with more complex ones.
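As a rough illustration of how little machinery a "simple algorithm" can involve, here is a minimal sketch of an equal-weights scoring rule in the spirit of what Kahneman (and Dawes) describe. The options and attribute scores are entirely hypothetical; a real version would need carefully chosen and validated inputs.

```python
# Minimal sketch of an equal-weights scoring rule (an "improper linear model").
# The options and attribute scores below are hypothetical placeholders.

def score(option: dict) -> float:
    """Equal-weighted average of attributes, each already normalized to a 0-1 scale."""
    return sum(option.values()) / len(option)

# Hypothetical job offers rated on a few attributes (0 = bad, 1 = great).
job_offers = {
    "Offer A": {"salary_fit": 0.7, "growth": 0.9, "commute": 0.4, "team_fit": 0.8},
    "Offer B": {"salary_fit": 0.9, "growth": 0.5, "commute": 0.9, "team_fit": 0.6},
}

for name, attrs in job_offers.items():
    print(f"{name}: {score(attrs):.2f}")
print("Suggested:", max(job_offers, key=lambda name: score(job_offers[name])))
```

Even something this crude, applied consistently, is the kind of baseline that the literature Kahneman summarizes finds hard for unaided intuition to beat.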
We already have a lot of decision automation.
- People have been trusting GPS navigation systems for a while, and are starting to trust AI with larger-scale driving decisions.
- Financial decisions have become largely automated with robo-advisors.
- Siri and Google Assistant make direct recommendations ("Would you like me to open the messaging app, for you to message Sophie?") and are quickly becoming more intelligent.
- Email spam detection has gotten to be pretty good. It's often not as good as human judgment, but it's done much faster.
- Spell checkers and grammar checkers like Grammarly are becoming much more powerful.
Decision automation, under that name, is an established field.[4] I believe automated decision-making has been discussed since the early days of artificial intelligence.
My hunch is that, overall, this is really, really good. A society that makes better decisions is one that prospers.[5]
There are definitely dangers. Perhaps this decision automation will make large groups of humans even less capable of making basic decisions. Perhaps it would lead to a much more complex world that society couldn't actually steer.
But the other side is also very enticing. The less I have to worry about which dentist to use, the more I can worry about the global problems that we can't simply solve with technology. We know that many people don't have the time to become educated enough to make decent political decisions anyway. (See The Myth of the Rational Voter and that cluster of thinking.)
I think people now personally identify with many of their decisions, so might be kind of freaked out initially, but I'd expect that in practice it will be pretty fine.
Some people identified as great or unusual drivers, so were unhappy with driving automation. Some people identified as great assessors of product quality before Amazon reviews were a thing. But I think, on the whole, most people are happy to just focus on things that can't be so easily done with software.
"Decision Automation Couldn't Improve My Decisions"
I think it's easy for smart people[3] reading this to think:
It would be tough to improve on my opinions about important things. They're quite well researched.
Some responses:
- People are often highly overconfident in their ability to make decisions well.
- There are many people in the world without as much education, talent, and domain knowledge as you. I'm imagining an 80-year-old grandmother who has to choose a health plan, or a person who's completely ignored health science trying to choose a doctor. Even if automation were only used by other people, it could still go a long way.
- Decision recommendations could be almost free and could act as a supplement; they don't have to be a replacement. They could just be used to catch occasional exceptions. "Levels 1 and 2" of automation for most decisions would still be very useful.
- Your opinion in some of these domains was costly to build. If you had known that automation was coming, you might not have made the investment (e.g., learning to do calculations by hand vs. using a calculator).
General-Purpose Decision Automation
As argued above, a whole lot of software right now is already doing decision automation. We can see the trajectory of software, and might thus conclude that the future of decision automation will just be what we already expected of software. This might not seem very exciting. Software is advancing quickly, but not that quickly.
Right now, decision automation is often highly specialized. There are autonomous driving systems that require massive engineering efforts and won't have any impact outside of driving decisions.[2] There are email spam detection systems that only apply to email spam. If we have 50,000 types of decisions and we apply these strategies, we might need 50,000 unique engineering efforts. Naively, this will take a long time.
One big question is whether there can be new general-purpose methods that could be applied to many not-yet-attempted forms of decision automation. Think of cross-domain tools like Airtable or various AWS services. I imagine some key uncertainties include:
- How much will general-purpose ML tools (like language models) be useful for decision automation across different domains?
- How much will improvements in estimation technologies (probabilistic programming, probabilistic libraries, forecasting platforms) be useful for decision automation across different domains?
- Are there other clever general-purpose workflows that could be constructed to allow for decision automation in many domains?
Language models are clearly advancing rapidly. Estimation technologies are advancing much more slowly, but are much more neglected. I haven't seen many clever general-purpose workflows, but could easily imagine them (though I wouldn't be particularly optimistic here).
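To make the estimation-technology bullet above a bit more concrete, here is a minimal sketch of the kind of comparison such tooling makes cheap: representing each option as a probability distribution and comparing expected values. The decision, the distributions, and the numbers are all hypothetical placeholders, not a recommendation of any particular modeling choice.

```python
# Minimal sketch: compare two hypothetical options under uncertainty via Monte Carlo sampling.
# The log-uniform distributions and their bounds are made-up placeholders; real estimation
# tooling (probabilistic programming languages, forecasting platforms) would supply
# better-calibrated inputs.
import random

def sample_payoff(low: float, high: float, n: int = 10_000) -> list:
    """Draw n samples of an uncertain payoff, modeled here as log-uniform between low and high."""
    return [low * (high / low) ** random.random() for _ in range(n)]

option_a = sample_payoff(low=20_000, high=120_000)  # e.g., a riskier, wider-range choice
option_b = sample_payoff(low=40_000, high=70_000)   # e.g., a safer, narrower-range choice

mean = lambda xs: sum(xs) / len(xs)
print("Option A expected value:", round(mean(option_a)))
print("Option B expected value:", round(mean(option_b)))
print("P(A beats B):", mean([a > b for a, b in zip(option_a, option_b)]))
```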
ML techniques would introduce all the usual risks associated with ML and, where possible, should be handled with the care we would hope for in other ML applications. We might well prefer to focus on non-ML techniques for decision areas that could be dangerous.
Applications for Effective Altruism
Decision automation is very arguably part of Wisdom and Intelligence, or of Institutional Decision-Making, and it can be useful in the same ways those areas can be.
Around the effective altruism and rationality communities, there's been much more attention on ways to educate people to become more rational and wise than on decision automation. See CFAR and other rationality bootcamps, for example. But certain clusters of decision automation might be much easier. It's very difficult to train people to think significantly better, but dramatically easier to say, "just look at this website and do what it says."
[1] Once some decisions are automated, humans are likely to spend more time on other decisions. So it might be incredibly difficult to automate "all decisions we'll ever have", but still feasible to automate "most decisions we have right now."
[2] For a while, anyway. Elon Musk has claimed that Tesla will be able to use its autonomous-driving competencies for other robotics. We'll see how this holds up.
[3] I'm highlighting this because I've heard it a few times in person, often from pretty smart people.
[4] One tricky point is that a whole lot of software, period, is basically doing light decision automation, but isn't often referred to specifically as such. I think "decision automation" has been used by certain enterprise players to mean fairly narrow things, and other vendors didn't want to be associated with those groups. But for our purposes, and I think for most reasonable definitions we might have of decision automation, a lot of software should count.
[5] The opposite is clearly possible, but I think less likely.
Questions and Responses
Couldn't automating most human decisions before AGI make AGI catastrophes more likely when AGI does come? We'd trust AI more and would be more likely to use it in more applications, or give it more options to break through.
Or maybe, with more experience with pre-AGI AI, we'll trust AI less and work harder on security, which could reduce AI risk overall?
Or maybe, if we can discover how to use "primitive" AI usefully enough, we'll decide we never need AGI.
(This is an immediate reaction, not something I have ever thought about in detail)
My guess is that success might look more like:
1. We use software and early AI to become more wise/intelligent
2. That wisdom/intelligence helps people realize how to make a good/safe plan for AGI. Maybe this means building it very slowly, maybe it means delaying it indefinitely.
To be clear, it's "automating most of the decisions we make now", but we'll still be making plenty of decisions (just different ones). Less "which dentist should I visit", more of other things, possibly "how do we make sure AI goes well?"
Automating most human decisions looks a whole lot like us being able to effectively think faster and better. My guess is that this will be great, though, like with other wisdom and intelligence interventions, there are risks. If AI companies think faster and better, and this doesn't get them to realize how important safety is, then that would be an issue. On the other hand, we might just need EA groups to think faster/better for us to actually save the world.
It's possible, but the benefits are really there too. I don't think "trust AI more" will be a major factor, but "give it more options to break through" might technically be.
Much of decision automation doesn't have to be ML-based; that part looks much more like traditional software.
The internet might be a good example. The introduction of the internet created a big attack vector for AI, but it also allowed people to talk about AI safety and realize that it was a thing. My guess is that the internet was a pretty big win in expectation.
The question of, "should we use a lot of AI soon, to understand it better and optimize it" is an interesting one, but I think a bit out of scope for this piece. I think we'd do decision automation for benefits other than "to try out AI".
One tool that I think would be quite useful is some kind of website where you gather descriptions of decisions people have faced, what they chose, and how it turned out.
Then you could take a description of a decision that someone new is facing and automatically assemble a reference class for them: people who faced the most similar decisions, and how those turned out. It could work without any ML, but using language models to cluster similar situations would help.
The information is similar to a review site's, but it would hopefully aggregate by situation instead of by product, and cover decisions outside the category of "pick a product to buy."
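Here is a minimal sketch of the retrieval step described above, assuming the site already has a small set of past decision write-ups. The records are invented, and a real system might use language-model embeddings rather than the TF-IDF similarity used here.

```python
# Minimal sketch of the "reference class" lookup: given a new decision description,
# retrieve the most similar past decisions by text similarity.
# The records below are invented; a real system might use language-model embeddings
# instead of TF-IDF, as suggested above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_decisions = [
    "Left a stable accounting job to start a small bakery; profitable after two years.",
    "Moved from Toronto to Berlin for a tech job; happy with the move, visa process was painful.",
    "Switched from a PhD program to industry data science; no regrets, higher pay.",
]
new_decision = "Considering leaving my finance job to open a coffee shop."

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(past_decisions + [new_decision])
query_vec = matrix[len(past_decisions)]    # vector for the new decision
past_vecs = matrix[: len(past_decisions)]  # vectors for the gathered records
similarities = cosine_similarity(query_vec, past_vecs)[0]

# Show past decisions ranked by similarity to the new one.
for idx in similarities.argsort()[::-1]:
    print(f"{similarities[idx]:.2f}  {past_decisions[idx]}")
```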
Good idea.
I think it's difficult to get people to contribute that much data to a website like that. Maybe you could scrape forums or something to get the information.
I imagine that some specific sorts of decisions will be dramatically more tractable to work on than others.
From my perspective, most decision automation is highly neglected for some reason. I don't know why, but things seem to be moving really slowly right now, especially for the big-picture sorts of decisions effective altruists care about. I don't know of any startups trying to help people make career decisions using probability distributions / expected values, for example (or tackling most of the other questions I listed in this document).
To me, the distinction isn't so black and white. If people decided on better politicians, and those politicians decided on better policies, we'd probably have better risk mitigation procedures.
A whole lot of "effective altruist research" is made up of tiny, seemingly trivial decisions ("What should the title of this post be? What edits should I make to this piece?"). If we could get these out of the way, researchers could focus more on the bigger questions.
A lot of the "big picture" questions can be decomposed into smaller questions. For example, we could use forecasting infrastructure to answer them, but we'd then have lots of optimization to do on that forecasting infrastructure.
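As a rough illustration of that decomposition idea, here is a minimal sketch that combines a few sub-forecasts into an estimate for a larger question. The sub-questions and probabilities are entirely made up, and the multiplication assumes each step is conditional on the previous one.

```python
# Minimal sketch: decompose a big-picture question into smaller sub-forecasts and combine them.
# The sub-questions and probabilities below are hypothetical placeholders.

sub_forecasts = {
    "The intervention gets funded within 2 years": 0.6,
    "A funded intervention gets implemented competently": 0.5,
    "A competent implementation achieves the intended outcome": 0.7,
}

# Treating each probability as conditional on the previous steps,
# the combined estimate is just the product.
combined = 1.0
for question, p in sub_forecasts.items():
    combined *= p
    print(f"{p:.0%}  {question}")
print(f"Combined estimate: {combined:.0%}")
```

Each sub-question is something a forecasting platform could track and improve on separately, which is where the optimization on the forecasting infrastructure would come in.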