1/26/2024

Paper clip

The notion that artificial intelligence (AI) may lead the world into a paperclip apocalypse has received a surprising amount of attention. It motivated Stephen Hawking and Elon Musk to express concern about the existential threat of AI. It has even led to a popular iPhone game explaining the concept.

The concern isn't about paperclips per se. Instead, it is that, at some point, switching on an AI may lead to the destruction of everything, and that this destruction would both be easy and arise from a trivial or innocuous initial intent.

The underlying ideas behind the notion that we could lose control over an AI are profoundly economic. But, to date, economists have not paid much attention to them. Instead, their focus has been on the more mundane, recent improvements in machine learning (Agrawal et al.). Taking a more future-bound perspective, my research (Gans 2017) shows that for a paperclip apocalypse to occur, we must make important underlying assumptions. This gives me reason to believe that it's less likely than non-economists believe that the world will end this way.

The notion arises from a thought experiment by Nick Bostrom (2014), a philosopher at the University of Oxford. Bostrom was examining the 'control problem': how can humans control a super-intelligent AI even when the AI is orders of magnitude smarter than we are? Bostrom's thought experiment goes like this: suppose that someone programs and switches on an AI that has the goal of producing paperclips. The AI is given the ability to learn, so that it can invent ways to achieve its goal better.
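The logic of the thought experiment can be caricatured in a few lines of Python (a toy sketch of my own, not anything from Bostrom or Gans): an agent whose objective counts only paperclips has no term for anything else, so its greedy policy consumes every available resource.

```python
# Toy illustration (hypothetical, for intuition only): a naive
# objective-maximizing agent whose sole goal is "more paperclips".
# Because no other value (here, the stock of raw resources) appears
# in its objective, the greedy policy consumes everything available.

def paperclip_agent(resources: int) -> tuple[int, int]:
    """Greedy policy: convert resources to paperclips while any remain."""
    paperclips = 0
    while resources > 0:   # nothing in the objective says "stop"
        resources -= 1     # consume one unit of raw material
        paperclips += 1    # the objective strictly improves
    return paperclips, resources

clips, left = paperclip_agent(1000)
print(clips, left)  # all 1000 units become paperclips; 0 resources remain
```

The point of the caricature is that the catastrophe needs no malice: the stopping condition is simply absent from the goal the agent was given.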