appreciated among AI researchers. AI risk is not yet common knowledge either. Measured against the timeline of the first dissident message, I’d say we’re around the year 1988, when raising the Soviet-occupation topic was no longer a career-ending move but you still had to hedge your position somewhat. I hear similar hedging now—statements like, “I’m not concerned about superintelligent AI, but there are some real ethical issues in increased automation,” or “It’s good that some people are researching AI risk, but it’s not a short-term concern,” or even the very reasonable-sounding, “These are small-probability scenarios, but their potentially high impact justifies the attention.”
As far as message propagation goes, though, we are getting close to the tipping point. A recent survey of AI researchers who published at the two major international AI conferences in 2015 found that 40 percent now think that risks from highly advanced AI are either “an important problem” or “among the most important problems in the field.”23
Of course, just as there were dogmatic Communists who never changed their position, it’s all but guaranteed that some people will never admit that AI is potentially dangerous. Many of the deniers of the first kind came from the Soviet nomenklatura; similarly, the AI-risk deniers often have financial or other pragmatic motives. One of the leading motives is corporate profit. AI is profitable, and even where it isn’t, it’s at least a trendy, forward-looking enterprise with which to associate your company. So many of the dismissive positions are products of corporate PR and legal machinery. In a very real sense, big corporations are nonhuman machines that pursue their own interests—interests that might not align with those of any particular human working for them. As Wiener observed in The Human Use of Human Beings: “When human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood.”
Another strong incentive to turn a blind eye to AI risk is the (very human) curiosity that knows no bounds. “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb,” said J. Robert Oppenheimer. His words were echoed recently by Geoffrey Hinton, arguably the inventor of deep learning, in the context of AI risk: “I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet.”
Undeniably, we have both entrepreneurial drive and scientific curiosity to thank for almost all the nice things we take for granted in the modern era. It’s important to realize, though, that progress does not owe us a good future. In Wiener’s words, “It is possible to believe in progress as a fact without believing in progress as an ethical principle.”
Ultimately, we don’t have the luxury of waiting until every corporate head and AI researcher is willing to concede the risk. Imagine yourself sitting in a plane about to take off. Suddenly there’s an announcement that 40 percent of the experts believe there’s a bomb on board. At that point, the course of action is already clear, and sitting there waiting for the remaining 60 percent to come around isn’t part of it.
23 Katja Grace et al., “When Will AI Exceed Human Performance? Evidence from AI Experts” (2017), https://arxiv.org/pdf/1705.08807.pdf.