David Krakauer, president of the Santa Fe Institute, with a Dangerous Idea.
Here's something that is true about machine intelligence that's an immediate crisis and question for us. It's not what people call A.I., artificial intelligence. I call it "APP-I," app intelligence. It has to do fundamentally with the application of free will.
It is already the case that for many of us, when we make a decision about what book to read or what film to see or what restaurant to eat at, we don't make a reasoned decision based on personal experience, but on the recommendations of an app. So that little bit of free will that we got to exercise in our daily lives has evaporated.
Now let's just extrapolate a little bit further. Imagine a "voter app." Well, I have a certain income, I have certain preferences, I have certain ideals. Let me just enter those into my app and it will tell me who to vote for. Now, the reality is that it's amazing we don't already have this, because it's clearly better than what we do now: most people will say "I love their hairdo" or "I like the way they speak" or "they amuse me," or whatever proxy for decision-making they employ.
How about an "eater app"? We are busy people. We work long days. I get home and, you know, there are only so many times I can eat Cheerios or a fried egg. Why don't I just enter my state of health? In fact, I don't have to, because my phone is already reporting exactly how many steps I've taken today. And it knows better than I do what I should be having for dinner. It also knows my taste profile and understands my pre-existing conditions, which it does not want to exacerbate.
So there's the eater app, and the reader app, and the listener app. Before long, the domain in which we get to exercise our intelligence will have shrunk to a point.
That's a reality. I'm not a technology doom-and-gloom type. I love this stuff, but I am aware that with all of these increments in capability come detriments to humanity.
I think the history of humanity is the history of our co-evolution with devices and artifacts that make our lives easier. And I think it's now a question for moral philosophy, not for science, to decide how we use them and what it would mean to be enslaved by them, whether we do so willingly and genuinely, or reluctantly.
That machines will get better and better and better is, I think, a foregone conclusion. They will. They have. What do we want to keep for ourselves? It's a philosophical, existential question.