How we can create artificial intelligence with broad, robust common sense rather than narrow, specialized expertise.
It's sometime in the not-so-distant future, and you send your fully autonomous self-driving car to the store to pick up your grocery order. The car is endowed with as much capability as an artificial intelligence agent can have, programmed to drive better than you do. But when the car encounters a traffic light stuck on red, it just sits there, indefinitely. Its obstacle-avoidance, lane-following, and route-calculation capacities are all irrelevant; it fails to act because it lacks the common sense of a human driver, who would quickly figure out what's happening and find a workaround. In Machines like Us, Ron Brachman and Hector Levesque, both leading experts in AI, consider what it would take to create machines with common sense rather than just the specialized expertise of today's AI systems.
Using the stuck traffic light and other relatable examples, Brachman and Levesque offer an accessible account of how common sense might be built into a machine. They analyze common sense in humans, explain how AI over the years has focused mainly on expertise, and suggest ways to endow an AI system with both common sense and effective reasoning. Finally, they consider the critical issue of how we can trust an autonomous machine to make decisions, identifying two fundamental requirements for trustworthy autonomous AI systems: having reasons for doing what they do, and being able to accept advice. Both in the end are dependent on having common sense.
This is an excellent book that provides a fresh perspective on "common sense," one of the key missing pieces of the AI puzzle. While there has been a lot of talk about AI and common sense, Brachman and Levesque provide a formal definition of what common sense is and isn't, how it works in humans (and higher animals), and why machines don't have it. Finally, the authors, both of whom have been working in the field since the 1970s, propose revisiting symbolic AI as a means to encode and manipulate commonsense knowledge. Read my review of the book and my interview with Ron Brachman on TechTalks: https://bdtechtalks.com/2022/08/08/ma...
The book is a collection of navel-gazing. As with most, if not all, academics, Brachman can't do it himself, hence the use of the royal "we", and has to rely on a constant stream of Nirvana fallacies. Sure, in an ideal world, maybe people can have a discussion about common sense. But in the current world there won't be any common sense, turning the fallacious argument into a chain of gas discharges from an old man who is frustrated he doesn't get invited on TV.
Common sense is something based on freedom. Should Brachman defecate in front of his ex-husband's door? Common sense would say no. A driver today faces a list of proscriptions longer than can fit in any law enforcer's mind, even the smart ones on TV. Obviously, both people and enforcers simply ignore most items. The autonomous car won't be able to do that. That's one.

Now, fearful politicians and unelected officials will probably add three times as many items to a list specific to autonomous cars. And this is precisely where clowns like Brachman could help: calm the ignoramuses. And, of course, Brachman wants to show off, so with this book he only adds fears instead of removing some of them. That is two. The most important difference between the second item and the first is that the first argument is about the laws. The second argument is about some Autonomous Car Czar in Washington who is going to condition the list in order to grant a license to make such cars. This is not a law. This is simply the power of said official. And that makes me think of a second reason Brachman produced this book: in case he dies, at least one of his minions might take the reins and rule supreme over "the AI".

And yes, there is a third problem, inspired by many other cases: the federal Czar will have his 2,000 pages of rules. California will have its 3,400 pages of rules. Nevada will have only 800 pages. And some of the rules will overlap. And some will not. Only the makers of the said "machines" will have to obey the whole cacophony. And that is only the US. Canada will have its own Brachman to pass intestinal gas on the topic. Australia will have some other rules. And the UK will have to fit its own Lord to set the tradition straight. And this is ONLY the English-speaking world.
So yes, some of the paragraphs might seem smart, smart enough to make a cute meme on Twitter. But the whole text is just another argument for why these bureaucrats are working only to put humanity back into the dark ages. For the people's own sake, of course.
This is a read of professional and rather academic interest. The level of comprehension required is not extremely high, but you do need some grounding in programming and artificial intelligence to understand the work well.
The book introduces common sense as a necessary ingredient for achieving fully autonomous machines. Common sense should be an essential aspect of AGI (Artificial General Intelligence), which, in the popular imagination, seems to be what separates the intellect of a Roomba from that of the Terminator. And that, by the authors' own declaration, is exactly the point of the book. It does not aim to teach anything or to be a didactic work.
To that end, the authors spend a dozen chapters on a slow approach to understanding why common sense is needed, how we would define it, how it would be programmed, what rules we would have to give a machine so it could grasp it, how to distinguish it from expertise...
It is interesting. At certain points in the reading I came to think that perhaps common sense doesn't exist as such; rather, what we use to understand certain sentences (such as "the statue collapsed the floor because it was made of steel"): we can understand perfectly well what was made of steel (the floor or the statue), but for a machine that would be tremendously difficult to work out.
A heavily underlined book. At first it felt a bit heavy-going, but from the halfway point I picked up speed. The subject itself is dry, but it is explained in a way I had not read anywhere else. For all that, 4⭐.
I'm not going to finish this one now that I have a sense of where it is going. The introduction has a nice motivational example demonstrating how a self-driving car will never reach human-level ability without acquiring knowledge outside the domain of driving; that is, it needs "common sense" to recognize a parade and realize it needs to find an alternate route.
But what follows is a long treatise on symbolic AI and how it will solve these kinds of problems. They touch briefly on GPT-2, but this book was published before the "ChatGPT moment", and even for its time it underestimated GPT-2. So-called data-driven learning may still have its flaws and limitations, but it has achieved far more than symbolic AI on tasks we consider at or approaching human-level intelligence. This book is somewhat ill-timed, and feels a bit like the authors were looking for the right "gap" that their methods could fill in ~2021/2022, but since then LLMs have shown much more promise in the kinds of multimodal and generalist tasks the authors were considering.
There is definitely value in the authors' ideas, but so far for most AI applications, "the bitter lesson" seems to be winning.
I might not be the intended audience for this book, but I chose to read it because I am trying to understand, first, what people are actually talking about when they refer to "common sense," since the term is usually used without any rigorous definition, and second, what an AI that had common sense might look like. The middle of the book gets a little technical, but the book is short, so it is easy to plow through those middle chapters.
The authors start with a good overview of the history of attempts to implement "common sense" in AI and highlight many deficiencies in current approaches, up to GPT-2. Unfortunately, while they say that common sense is closer to reflexive than to analytical thinking, their approach is just GOFAI, with a knowledge base, rules, etc. They added goals, but otherwise it is still the expert-system route to AI. DNF.