Hot Best Seller

The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do

Availability: Ready to download

"If you want to know about AI, read this book...it shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence."--Peter Thiel A cutting-edge AI researcher and tech entrepreneur debunks the fantasy that superintelligence is just a few clicks away--and argues that this myth is not just wrong, it's actively blocking innovation and distorting our ability to make the crucial next leap. Futurists insist that AI will soon eclipse the capacities of the most gifted human mind. What hope do we have against superintelligent machines? But we aren't really on the path to developing intelligent machines. In fact, we don't even know where that path might be. A tech entrepreneur and pioneering research scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape of AI to show how far we are from superintelligence, and what it would take to get there. Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. This is a profound mistake. AI works on inductive reasoning, crunching data sets to predict outcomes. But humans don't correlate data sets: we make conjectures informed by context and experience. Human intelligence is a web of best guesses, given what we know about the world. We haven't a clue how to program this kind of intuitive reasoning, known as abduction. Yet it is the heart of common sense.
That's why Alexa can't understand what you are asking, and why AI can only take us so far. Larson argues that AI hype is both bad science and bad for science. A culture of invention thrives on exploring unknowns, not overselling existing methods. Inductive AI will continue to improve at narrow tasks, but if we want to make real progress, we will need to start by more fully appreciating the only true intelligence we know--our own.



30 reviews for The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do

  1. 4 out of 5

    Dan

    Successful guys like Musk - who, by the way, believes that we live in a simulation like the Matrix - spend money, form organizations, and participate in talks to block the imminent and evil AI. Bostrom devises brilliant schemes on how to outsmart, while we still can, a superintelligence that may turn us humans into means to some unexpected objective - an objective that we can in no way anticipate. Tegmark presents us with naive utopias about how wonderful everything will be once AI arrives; he even tells us that he cries with joy at the prospect of AI's arrival. Kurzweil dreams of some imminent metaphysics with the rise of cyborgs and the nearness of the "singularity". And the list goes on. From time to time they get together, scare each other with their stories, and think it is good business to scare the rest of us too. But there is a basic problem here: no one has any idea how to practically achieve AI or how to fill the gaps in fundamental AI research, and thus all this talk about the imminence of AI is just hype and myth - as Larson argues here. It all started with the deductive and symbolic approach to AI that failed way back. Now we are in the middle of a statistical and inductive machine-learning approach that is quite successful in narrow business fields; an approach that cannot be extrapolated and generalized to the mythical AGI. Larson nicely presents deductive and inductive inference along with their limitations as possible foundations for AI.
Instead, he proposes what Peirce defined as "abductive" inference, basically a hybrid between deductive and inductive inference. Since there is no developed theory of "abductive" inference, not much can be done in this respect. IBM's successes with chess and "Jeopardy!" were well-prepared marketing stunts, patched together and achieved by narrow, carefully coordinated and balanced algorithms. All the promises that Watson would turn to medicine and other fields proved nothing but promises. DeepMind did extremely well with games and video games like Go and StarCraft; but this is not the real world, since all the game's "rules are known and the world is discrete with only a few types of objects." If AlphaStar, the algorithm that mastered StarCraft, plays a different race in the same StarCraft game, it needs to be trained from scratch. Voice recognition and personal assistants like Alexa are just inductive inference algorithms that are trained on huge data sets and provide you with the most likely associated answer - given a previous and similar question-answer pair. In other words, there is no depth, no causal understanding, no trace of any "intelligence", and no generalization outside these extremely narrow fields where relevant commercial data are available in huge quantities. Actually, the fundamental knowledge needed to construct an AI is not actively pursued by any scientist - simply because no one has a clue what is needed or how to bridge the missing gaps. Instead, the needed and unknown knowledge is hoped and prayed for by such "AI scientists". That is, they are "sure" that current and future algorithms running on larger and larger data sets will somehow fill the gaps in fundamental knowledge, and that "consciousness", "intelligence", the "singularity", and so on will spontaneously emerge.
In other words, according to Larson, current AI scientists gave up on science some time ago, replaced it with algorithms running on large data sets, and are waiting, praying, and hoping for AI to spontaneously emerge "soon". Larson is great at demystifying all this AI hype, and he does it well and clearly in the AI researcher's own domain and language - since he is one of them. However, I believe that Dreyfus's critique of AI is more fundamental and relevant than this one - even if it is 50 years old.

  2. 5 out of 5

    Ben Chugg

    There is a prevailing dogma that achieving "artificial general intelligence" will require nothing more than bigger and better machine learning models. Add more layers, add more data, create better optimization algorithms and voila: a system as general purpose as humans but infinitely superior in its processing speed. Nobody quite knows exactly how this jump from narrow AI (good on a particular, very well defined task) to general AI will happen, but that hasn't stopped many from building careers based on erroneous predictions, or prophesying that such a development spells the doom of the human race. The AI space is dominated by vague arguments and absolute certainty in the conclusions. Onto the scene steps Erik Larson, an engineer who understands both how these systems work and their philosophical assumptions. Larson points out that all our machine learning models are built on induction: inferring general patterns from specific observations. We feed an algorithm 10,000 labelled pictures and it infers which relationships among the pixels are most likely to predict "cat". Some models are faster than others, more clever in their pattern recognition, and so on, but at bottom they're all doing the same thing: correlating datasets. We know of only one system capable of universal intelligence: human brains. And humans don't learn by induction. We don't infer the general from the specific. Instead, we guess the general and use the specifics to refute our guesses.
We use our creativity to conjecture aspects of the world (space-time is curved, Ryan is lying, my shoes are in my backpack), and use empirical observations to disabuse us of those ideas that are false. This is why humans are capable of developing general theories of the world. Induction implies that you can only know what you see (a philosophy called "empiricism") - but that's false (we've never seen the inside of a star, yet we develop theories which explain the phenomena). Charles Sanders Peirce called the method of guessing and checking "abduction." And we have no good theory for abduction. To have one, we would have to better understand human creativity, which plays a central role in knowledge creation. In other words, we need a philosophical and scientific revolution before we can possibly generate true artificial intelligence. As long as we keep relying on induction, machines will be forever constrained by what data they are fed. Larson argues that the philosophical confusion over induction and the current focus on "big data" is infecting other areas of science. Many neuroscience departments have forgotten the role that theories play in advancing our knowledge, and are hoping that a true understanding of the human brain will be born out of simply mapping it more accurately. But this is hopeless. Even after having developed an accurate map, what will you look for? There is no such thing as observation without theory. At a time when it's in fashion to point out all the biases and "irrationalities" in human thinking, hopefully the book helps remind us of the amazing ability of humans to create general purpose knowledge. Highly recommended read.
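The "correlating datasets" point the review makes can be made concrete with a toy sketch. This is not code from the book; all data and names here are invented for illustration. It shows the inductive pattern in miniature: the "model" is nothing but a summary statistic of labelled examples, and prediction is nothing but proximity to those statistics - no meaning, no theory.

```python
def centroid(points):
    """Mean of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(examples):
    """Induction: summarize each label purely from its observed examples."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Pick the label whose training centroid is nearest: pure correlation.
    The program has no notion of what 'cat' or 'dog' means."""
    def dist2(c):
        return (features[0] - c[0]) ** 2 + (features[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Invented 'pixel statistics' standing in for labelled images.
training_data = [
    ((0.9, 0.1), "cat"), ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"), ((0.2, 0.8), "dog"),
]
model = train(training_data)
print(predict(model, (0.85, 0.15)))  # prints "cat": nearest to the cat cluster
```

A point just outside the training clusters still gets forced into one of the known labels - the system can only ever echo the data it was fed, which is the review's point about induction.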

  3. 4 out of 5

    Live Forever or Die Trying

    I personally believe one of the most valuable things a reader can do is to read books that are contrarian to your held viewpoint. I surround myself with books and articles from Futurists, Transhumanists, Techno-progressive Leftists, and Scientists working on finding a cure for aging (writing this out makes me seem a bit out there, huh?). A keystone piece of the futures these people write about is the coming superpower that is AGI, or artificial general intelligence: a machine that can think the same way we do. That's where "The Myth of Artificial Intelligence" by Erik J. Larson, put out by Harvard University Press, comes in to serve as my contrarian viewpoint. Broken into 3 main sections, this book first covers a history of computation and initial theories of intelligence, and how we arrived at our present world. We look at Alan Turing and his time at Bletchley all the way to figures such as Nick Bostrom and Ray Kurzweil, and along the way learn what predictions these figures put forward as problems that AI would solve. Secondly, we take a deep dive into what AI is good at, namely Machine Learning, Deep Learning, Neural Nets, and other "narrow" forms of intelligence. We take a long hard look at why general intelligence is not making progress and the problems AI has when trying to "jump to a conclusion" or use "abductive" learning. We also spend a lot of time reading about the problems of language learning and logic for AI.
Finally, we finish the book with the 3rd part, where we analyze the halt in progress of AI and the damage these "myths" have caused. If we want AI to progress to AGI then we must go back to the drawing board and first understand how the mind works before we dive headfirst into full brain emulations. Overall this book was a 5/5 for me. I am very novice on the workings of AI, and this book was very readable for me, although at times tedious and dense. For someone new to the topic it had more information than I could integrate in one sitting, but it provides endless jumping-off points to learn about this topic in more detail. I would recommend it for anyone interested in AI, especially if you are like me and wish to see this tool leveraged in the future.

  4. 4 out of 5

    Loren Picard

    The best, most level-headed, and honest take on where scientists are with AI. No talk of cosmic endowment, killer robots, and machines replacing humans as a species. Larson doesn't sidestep the narrow successes of AI; he explains them for what they are. Larson explains why computers can beat humans at games, but can't understand an ambiguous sentence. In an ironic twist, you come away from the book somewhat let down that the idea of artificial general intelligence is nowhere in sight (there are no workable theories being explored), but the best outcome of reading this book is that you feel newly empowered as a human, with an intellect that can't be duplicated.

  5. 5 out of 5

    Buzz Andersen

    Fantastic book, and a great complement to a book like The Alignment Problem. Only deducting a star for the strong whiff of Thielism detectable in the polemical section toward the end. Really, though, a great and surprisingly philosophical book about the limits of AI and the misguided myths that have grown up around the field.

  6. 5 out of 5

    Zare

    AI was always a field of interest for me. Through people who have been in this field longer than me, I was aware of the ups and downs of interest in AI since WW2, starting from simpler applications like self-guiding systems to expert systems and ways of training neural networks for data classification. For me this was always rather mechanistic, without the mystery and magnificence of SF AIs. As time went by I started to think that AI would in the end be a cross-over of human tissue and technology, not unlike in W40K. So when all the hype started a couple of years ago, I was taken aback. For any question I asked I got no answer - be it verbally or in texts in magazines. Everything came down to: it works on its own, you just need to add sufficient data. OK, this sounds like an expert system, but what about AI - how does it reason? Same answer. And it was given to me as if I were the most stupid man ever for not getting it. And I started feeling like that after every discussion with my colleagues - AI is here, it will take over a whole bunch of tasks because you can program it to do almost everything. OK, all good, great, but bloody how? Just add more data. And this put me in a place where I could not make sense of anything. I was sure I was missing something and tried finding additional literature - unfortunately it was such a mish-mash of wishful thinking that it only left me with more questions. And then I found this book.
In a very concise way the author gives an overview of current AI research and its rather sorry state. The author is very much to the point, and he writes as if he has been asked about AI so many times [by people like me :)] that he decided to write a book to serve as a reference for everyone interested in the field. The book is somewhere between a popular and a mid-level science book. The few chapters that deal with logic and the rules of logical reasoning might be uninteresting to people who are not in the field of computer science or applied mathematics, but the rest of the book is accessible to everyone. And what a damning picture this book paints. Presented with the possibility that there are no more major breakthroughs ahead (rather theoretical Nobel prizes notwithstanding, I think there is still a lot more practical research to be done), scientists took an unscientific approach - instead of research, they changed course and joined forces with the corporations. Corporations, true to their nature, decided to push their products as they are, because any further research would cost money, and the marketing hype came in force. The result? Terrible. It only confirmed that for a person with a hammer, every problem looks like a nail. Due to the hype that came from all the known authorities in the field (corporations and the individual scientists associated with them), states started funding research that used "AI" (in truth only classification engines) and neglecting research that went a more traditional way. As a result, AI research took a major hit - nobody wanted to "waste time" when AI would definitely bring results (and if anyone asked how, the answer was invariably "Add more data"). As the author clearly states, the entire AI research program to date is based on constant futile attempts to simplify the human brain to a chain of data processors. Futile, because this same attempt has failed over and over again (including now, even with enormous computing power and data collections). The reason?
Very simple: how can one build intelligence when we do not know anything about our own intelligence? (When I read this I was stupefied - even after all these years we still do not have a definitive answer on how our mind works. Mind-blowing.) It is as if the entire science community decided to decipher how an automobile works by just looking at the outside parts, unaware of the main part - the engine - and expected good results. Blimey. As a result, AI research wasted a good part of the last decade. Significant improvements were made to classification systems and expert systems (as the author says, very, very narrow AIs), but everything else was stopped in its tracks. Unfortunately the hype caused quite a social upheaval. I agree with the author: it looks like an anti-human revolution took place; humans were discarded like used tools (it is incredible how this morbid cult of human irrelevance and expendability took root worldwide in the last decade), all in expectation of the rise of our machine masters. Benevolent or not, it seems it does not matter to any of the technocratic leaders - they are so eager to give birth to something, without even knowing what exactly. In a short time, the use of classification engines helped create a divide between people by pushing the news people like to read. It is not the engines' error, mind you - they do what they are programmed to do - but this back-fired because society totally surrendered itself to these computer idiot-savants for everyday news and information, from food to politics. This is a very timely book, and I hope the author's message to bring sense back to AI scientific research is accepted by the community. AI in any form can do wonders for humans, but it must not be a goal in itself.
It is a device that can bring enlightenment and propel humanity forward, but that can only be achieved without trickery and by following the age-old scientific approach (forming theories and proving or disproving them) that has proved itself many times over. Will it take time? Definitely. But this will help us perform detailed and valuable research and, perhaps more importantly, we will become mature enough to cope with the end results. Excellent book, highly recommended.

  7. 4 out of 5

    David Zimmerman

    The myth of artificial intelligence is that it doesn't exist. Intelligence is the exclusive domain of sentient life. There is an uncrossable barrier between the virtually inexhaustible knowledge base available to machines and their ability to process that knowledge, and the ability of the human brain to tap into its limited data base, to reason, hypothesize, explore and experiment with ideas, to pave the way to new discoveries and technologies. The human brain can THINK, a machine can only PROCESS data. That is the proposition set forth in the first quarter of this book, and I found it fascinating. The author has not only done his research, but has a background in the field of AI. Logically and progressively, he sets forth his case that the concept of building a machine with an "intellect" that will surpass that of the people who designed it is a myth. Having stated his case, the author then sets forth to prove each part of his proposition. This portion of the book often felt tedious and redundant, as the author answered objections he knew would be raised. Because the concept of "thinking" machines has been so thoroughly popularized in our culture, the author writes to ensure we - the readers - realize that there has not been a single technological or scientific breakthrough in the past fifty years of research that has moved us any closer to the goal of a thinking machine. His evidence is overwhelming, but seldom captivating.
The author concludes the book with reasons why he believes the continued emphasis on artificial intelligence is actually inhibiting advancements in many other areas of scientific and technological research. The book will not appeal to everyone, but it does have something of importance to say. From a Christian perspective, I was surprised at how many elements of this myth of thinking machines I had allowed to passively invade my thinking. Humans are getting more skilled at creating machines that bear certain likenesses to our humanity, much the same way as God created humans in His image. But though we bear God's image, we are not gods, nor ever will be. Nor will any machine ever possess the qualities that make us human, and one of them is authentic intelligence.

  8. 4 out of 5

    Zach Feig

    Larson makes a really interesting argument concerning how far away we are from generalized artificial intelligence. Basically his idea is that there are three sorts of ways of coming to knowledge. Deductive reasoning finds answers based on premises with predetermined outcomes - this is essentially what we do when we hard-code computers to make choices given inputs. Inductive reasoning takes in lots of data and makes assumptions - this is basically modern machine learning. Then there is abductive reasoning, which is basically guessing that lets us shortcut to the best solutions or eliminate many possible solutions based on common knowledge. Larson points out we have the first two types of reasoning down pat, but no one is working on the third type, and until someone works on it we have either automatons or idiot savants, but not general intelligence. This point is well and good as far as it goes, and the book is worth reading to see how he expands on the ideas expressed above. However, I deducted a star for the ending section of the book, where he expands this out to a general critique of culture. Basically, without the intellectual grounding he demonstrates in the rest of his book, he argues that only the West could generate the insights needed for artificial intelligence, and that PC culture is keeping down our ability to identify new ideas. I can't excuse the classic white man bullshit. That said, if you don't read the afterword, you don't have to hear it...
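The deduction-versus-induction contrast this review draws can be sketched in a few lines. This is an invented illustration, not Larson's code: one function whose rule is hard-coded by a programmer (deduction), and one whose "rule" is a statistic generalized from labelled examples (induction). Tellingly, there is nothing to write for abduction, which is the review's point.

```python
# Deduction: the rule is fixed in advance by the programmer; the conclusion
# follows necessarily from the premises.
def deduce_can_vote(age, is_citizen):
    return age >= 18 and is_citizen

# Induction: the "rule" is inferred from observed data, e.g. a decision
# threshold placed midway between the means of two labelled groups.
def induce_threshold(examples):
    """examples: list of (value, label) pairs with labels 'low'/'high'."""
    lows = [v for v, lab in examples if lab == "low"]
    highs = [v for v, lab in examples if lab == "high"]
    return (sum(lows) / len(lows) + sum(highs) / len(highs)) / 2

data = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
t = induce_threshold(data)            # midpoint between group means: 5.0
print(deduce_can_vote(20, True))      # prints True (rule was hand-written)
print("high" if 7.3 > t else "low")   # prints "high" (rule was learned)
```

Both functions only ever restate what was put into them, by the programmer or by the dataset; neither can conjecture a new hypothesis the way abductive guessing does.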

  9. 5 out of 5

    Remy

    A thorough and mostly easy to understand book that requires little to no understanding of computers, statistics, or neuroscience to grasp. Prior to reading this book, I was skeptical of AI and its capabilities (e.g. the "garbage in, garbage out" maxim) - now I see just how limited and farcical it really is. I share Larson's worries about the cultural death of science, though he suffers from a lack of greater cultural, political, and economic analysis - which is made fairly evident in the extremely brief (and annoying) foray he makes into irrelevant political commentary in describing the origin of the term "kitsch." Whatever, dude who admits later to working on a contract for the US Department of Defense. Overall, worth a read.

  10. 5 out of 5

    Piritta

    This was a slow one, but I'm glad that I read it, because now I'm much more optimistic about the role of humans in the future.

  11. 4 out of 5

    Caroline

    Maybe I'm exhibiting confirmation bias here, but when I ran across a reference to this book somewhere, I knew I had to read it. Having worked in computers for about 35 years, I have always been suspicious of the whole idea of AI because HELLO, COMPUTERS CAN'T DO ANYTHING A HUMAN HASN'T TOLD THEM TO DO. And they will happily do anything they are told to do, even if it has a stupid mistake in it that makes it loop forever, like more than one program I wrote in my time. Larson, a tech and AI expert (and not the same guy who wrote The Devil in the White City), doesn't say that simple thing explicitly, but he demonstrates it over and over again by debunking all the facets of the durable myth. I need to read this again and underline all over it. Honestly, there were parts I didn't entirely understand because they were related to mathematics and modeling, but there was a lot I did understand. Larson delves into the three kinds of logical reasoning (deduction, induction, and abduction) to highlight the limitations of the first two, and makes it clear that the third (essentially, informed guessing) is the critical component of human intelligence that nobody has figured out how to program. For all the hoopla and wasted billions, therefore, no one has been able to create a computer with human general intelligence.
As for claims that once a computer has general human intelligence it will be able to design one with greater than human intelligence, he asks: We already have general human intelligence, can we design an intelligence greater than that? Of course not. The response of futurists and AI zealots, who both want to believe and have careers and money riding on it, has been to change their definitions so that inconvenient difficulties are excluded from them. No computer has ever passed the Turing test so they now want to declare that invalid (this test says that when a computer can converse with a panel of judges so that they believe it's a human, it passes and qualifies as intelligent). So, all ballyhooed AI systems are successful only within super narrow applications (like playing chess or Go, or identifying image content), and it takes an enormous amount of human effort from large teams to get the computer capable of its little narrow job. This human effort is usually not talked about once the product rolls out and beats some chess champion, or uses Wikipedia to play Jeopardy. Finally, he talks about the whole Big Data fallacy, which suggests that if you feed a computer enough data it will somehow become intelligent. A deleterious side effect of this fallacy for science is that it's replacing human theorists and innovators with computers, which means the real ideas and breakthroughs won't happen because they come from PEOPLE'S BRAINS. The whole AI myth, he says, has expanded the idea of what a computer can do beyond reality while simultaneously shrinking the idea of what a human can do. Throughout, there are entertaining anecdotes about the failures of various AI projects. One he didn't mention is the resume screening system that only ever pulled resumes of men - the developers realized that in training the machine they had only used sample resumes from men. But most of us don't really need these stories to realize his point, do we? 
We've all called a company and had to struggle to make a computer understand what we need if it's not one of the items in the main menu. Try asking Alexa if a shark can play checkers - not a hard question for the average 9-year-old. She can't tell you. In fact, any of us with these digital assistants learn pretty quickly the word order they need to give us the response we want. They are not "training us"; we are structuring our thoughts to make them work. There is no such thing as AI; there are computer programs created by human intelligence to do defined jobs, and it will ever be so.

  12. 4 out of 5

    Bach Pham

    I might not be the intended audience, given that I've had proper, rigorous treatment of many of the topics covered in this book. However, from what I've read, this is far better and more in line with my opinion than the "AI evil" book by Dr. Scaremonger James Barrat. A distinction must be made between "acting human" and "acting rationally". Most algorithms target the second objective, not the first. Nevertheless, I'll keep this book as a future reference. I find it hard to articulate my thoughts when it comes to this topic, but this book's shown me one way of articulating it.

  13. 4 out of 5

    Evan

    Maybe AI still doesn't work the way it's been shown in various films about the future, but I still think it's pretty well developed now. For example, AI is used for identity verification; I read about this technology on this site https://www.kvalifika.com/blog/How-Kv... . This is already very good, in my opinion, because thanks to AI, verification is very fast.

  14. 5 out of 5

    Hope

    Intriguing and straightforward look at the differences between induction and abduction, as well as a host of other reasons why AI will never "take over". As an engineer myself, I have worked with machine learning enough to realize the singularity will always be on the horizon. Anyone touting the rise of machines has been reading too much sci-fi. Computers can only do (or learn) within the parameters with which they've been programmed.

  15. 5 out of 5

    Christian Hartman

    An extremely clear explication of the host of misconceptions, false beliefs, and misunderstandings of the potential of Artificial Intelligence. Do not fear the rise of artificial general intelligence, for we are making zero progress on abduction, the type of inference which is uniquely human and which computers cannot perform. If you don't understand artificial intelligence (like me), definitely read this.

  16. 5 out of 5

    Bg96

    A persuasive argument as to why "AI" will likely not live up to its promises. Having studied ML, I already agreed with the author that there is nothing there that currently resembles intelligence. He articulates well the points in which AI is lacking - the absence of abductive reasoning and "common sense knowledge".

  17. 5 out of 5

    Ani Banerjee

    Fantastic and timely. A well written and thorough take-down of the modern AI and big-data claims to the inevitability of Artificial General Intelligence. I learned about something really new - the philosophy and thought of Charles Sanders Peirce, and his rather penetrating insight into "abduction". Should find something on Peirce next.

  18. 4 out of 5

    Eric Holloway

    Important contrarian take. Lots of great tidbits, like how Winograd schemas have resisted even big data and deep learning. Also interesting: the abductive form and its neglect in AI research. A sobering look at the harm posed by the myth of AI's inevitability. Seems sinisterly similar to Marx's claim of communism's inevitability.

  19. 5 out of 5

    Ben Dickson

    Truly fascinating book. A realistic, unhyped view of the limits and capabilities of different branches of AI. A deep view on abductive inference, the missing piece of AI. A clear warning on the threats of trusting big data AI/ML too much and not looking at alternate views in science. Definitely one of my top AI books of 2021.

  20. 4 out of 5

    Janet

    I appreciated that Larson presented a de-mystified perspective on the science of AI. This text helped me to think about the history, science, and development of AI with less of the "hype" mindset that can pervade discourse on AI.

  21. 4 out of 5

    Pie Resting-Place

    There's a lot of hype around AI; it is pleasant to read some well-argued pushback. Those who enjoyed this book might also want to have a look at Ted Chiang's article in the New Yorker: https://www.newyorker.com/culture/ann...

  22. 5 out of 5

    Juha

    The language and construction were too difficult for me… I really needed to focus. Happy to have finished the book anyway, as it provoked thinking. Perhaps my view on the AI future even changed or clarified a bit.

  23. 4 out of 5

    Igor Pejic

    Today hardly any technology is treated with such reverence as AI. This book convincingly exposes the myth of algorithmic superintelligence and shows how to foster real inventions.

  24. 4 out of 5

    Marrysparkle

    FASCINATING!

  25. 5 out of 5

    Carl Holmes

    Fair-minded. A good reworking of the ideas behind A.I. and how it is important work, but there is so much more to creativity. An easy read, with good things to ruminate on.

  26. 4 out of 5

    Oolalaa

    18/20

  27. 4 out of 5

    Oakley Merideth

    Larson makes the case perfectly. How to argue against what he says is beyond my scope of imagination. In many ways it's totally obvious that "AI" is a badly peddled myth which suits the least creative, least interesting, least imaginative, and frankly least human persons among us, who have no concept of art, culture, beauty, or transcendence. How anyone could mistake a chatbot for a human (although, to be fair, many an "AI" has "won" a Turing Test by fooling only 30% of humans) is beyond me. Please, if you know someone who isn't a total fool who really thinks that this Google engineer is "on to something," have them read this. I would have given 5 stars, but at times this book (which is quite short, all things considered) is a little long-winded and repetitive. But that's more than likely to guarantee that the most incredulous AI promoters/alarmists stick with the program.

  28. 5 out of 5

    Lee Barry

    Now my go-to book to support my cynicism about AI.

  29. 4 out of 5

    Bruce

    This should be required reading for all human beings. The hyperbole surrounding the idea of Artificial Intelligence has become hysterical, and those who subscribe to the hysteria are the new "FLAT EARTHERS" of the 21st century. The fear surrounding Artificial Intelligence is puerile, paranoid ignorance of logic.

  30. 5 out of 5

    Alaa Al-Wattar
