Monday, October 31, 2016

Minding your own business, mind you!

I have used a number of times the metaphor of the drunkard seeking his lost keys under the lamppost (regardless of where he is more likely to have lost them) as an apt description both of our application of the scientific method (however vaguely it is interpreted) to understand each and every aspect of our reality AND of our naïve attempts to understand intelligence (or consciousness, or mind more generally) as the mechanistic application of a number of algorithms that could be replicated by a machine (hence the term “artificial intelligence”). The poignancy of the metaphor derives from a parallel: just as the greater clarity afforded by the lamp is of little use to the drunkard (because his keys are most likely anywhere else), the scientific method is of little use for understanding some extraordinarily important aspects of reality (why is there something instead of nothing; how is it that the real seems rational to us, so that we can understand a substantial part of its workings; why some things, or events, or states of affairs are recognized as valuable, and what value consists in). And, as I’ve said many times, no algorithm is ever going to make a machine intelligent; thus the current research in AI is at best an exercise in deceptive advertising that will in the end disappoint those gullible followers who have somehow put their faith in it to help us alleviate the many ills of the human condition. On the other hand, it may alleviate the concerns of the equally gullible who think it poses an existential threat to the continued existence of humanity, which is kind of a silver lining, I guess.

As any regular reader of this blog should be familiar enough with my opinion about the first tenet (the inapplicability of the scientific method to some disciplines, much as some of their deluded practitioners would like it to apply), I intend to devote today’s post to the second, and to expand a bit on why I think the very interesting and practically applicable things a lot of very intelligent people are doing in the burgeoning field of “artificial intelligence” will in the end come to naught, and be a monumental waste of time and scarce societal resources. I’d like to point out from the start that my belief in the dualist nature of reality underlies my faith in the inapplicability of algorithmic thinking to the simulation of the workings of human minds (if minds are made of an essentially different “stuff” from the daily matter we are trying to replicate them with, it stands to reason that our chances of success are greatly diminished, especially if that different stuff behaves in some “mysterious”, “mushy” way that doesn’t lend itself to being replicated by more common, everyday matter), but I intend to develop an explanation of the essential misguidedness of current AI research that does not depend on such highly contentious metaphysics. That is, I hope I can plant some seeds of doubt about the soundness of current AI research understandable by those who do not subscribe to my view of mind as a somehow fundamental component of reality, on a par with and irreducible to its material substrate.

To achieve that, I’ll resort to an understanding of how minds work that has fallen out of fashion in these last centuries (that’s the advantage of not watching any TV at all and reading as many books by XIIIth, XIVth and XVth century authors as by contemporary ones: you find very old and abandoned ways of thinking suddenly highly plausible and even congenial), but that would sound eminently reasonable to any Greek, Scholastic or even Renaissance thinker: let’s consider the mind as divided into a part that more or less keeps us alive without us much caring or noticing (what used to be called our animal or vegetative mind), or Mind I (MI for short), a part that wills (MII), a part that feels (MIII) and a part that reasons (MIV). MI to MIII haven’t received much attention from AI researchers, as they are supposed to be the “easy” part of how mental processes work, being mostly unconscious, automatic, requiring little processing power and being essentially the same for all members of the same species. I guess the assumption made about them is that they represent the “lower level” that we have inherited from (and most likely share with) our evolutionary ancestors, and that we will probably be able to deal with how they work in detail once we have mastered the most evolved conscious, “higher” processes of the part of the brain able to compute, calculate, set goals, define plans and communicate with other brains.

It has to be noted that, somewhat paradoxically, our understanding of that “higher” part of the brain (what I’m calling MIV) has been heavily influenced by what we could automate and do without involving any consciousness at all. So starting in the XVIIIth century, when we were learning to formalize the rules of basic algebra (with the abacus and similar devices), most rationalist philosophers thought that thinking was mostly made of additions, subtractions, multiplications and divisions, and that if we got a machine capable of doing those wondrous things, such a machine would, without a shadow of a doubt, be conscious and able to talk to us and do all the other (apparently, for them, minor) operations that human minds perform (like sharing its aesthetic judgments about a fine work of art, complaining about the terrible weather or discussing the moral merits of certain courses of action proposed by contemporary politicians). Of course, we have vastly enhanced the capabilities of machines to add, subtract, multiply and divide without advancing much in having meaningful conversations with them about any of those subjects.

A couple of centuries after that, we thought we had a better grasp of how languages developed. We had not only observed more regularly how language was acquired and mastered by children, but had learned to compare and classify different types (like those belonging to the Semitic and the Indo-European families) and to deduce how they evolved through time within the linguistic communities that used them. So of course we projected our increased understanding of the structure and use of signs (symbols) onto the whole realm of cognition and declared that all there was to thinking was symbol manipulation, and that the ability to play “language games” (not in any language, mind you, but in “symbolic language”, identified by being infinitely recursive, combinable, adaptable and having a somewhat flexible relationship with the reality it intended to denote) was the defining feature of intelligence.

Unfortunately the attempt to model language use in machines never took off as nicely as the previous experience with mechanical calculators (the abacus, known since time immemorial, having been complemented with shiny newer gadgets like adding machines and slide rules for logarithms… ah, those were the times!), so we didn’t have to hear the likes of Noam Chomsky proclaim, as loudly as Condillac and d’Holbach had proclaimed a couple of centuries earlier, that he had cracked the tough nut of what general purpose intelligence consists in and had the blueprint for building a conscious, truly thinking machine (although no doubt he believed he had the rules for writing the software on which such a machine would run, having already identified the ones on which our own wet and wobbly minds did in fact run). From the beginning of that era, though, we have inherited the litmus test of how we would identify a mechanical system as truly (general purpose) intelligent: the Turing test, which posits that a machine can be considered intelligent if it can fool a human being into thinking he is talking to another person after some minutes of casual conversation. Master language, have a machine able to talk to your everyman, and you will have mastered intelligence (not a bad diagnosis, by the way). Unfortunately, as we all know, things turned out not to be so simple.

Because, let’s not mince words here, creating a machine that “understood” language and was able to produce it “meaningfully” (damn, it’s difficult to speak of these things without getting all self-referential and having to resort to quotation marks all the time…) proved to be damn hopeless, an almost unsolvable problem. Unsolvable to the point that we have essentially given up on it entirely. I’ve referenced on other occasions the Loebner Prize, whose conditions had to be substantially tweaked to give the machines a fighting chance, and which still has not produced a winner that fulfills the original conditions set by Turing himself. What is more telling for our discussion is that the strategies devised by the participants are so utterly unrelated to any semblance of “understanding” language that most AI researchers have abandoned the prize as a meaningful gauge of the progress of the field, and do not pay much attention to who wins it. Creating babble that vaguely sounds like language and can fool a not-too-subtle human judge may be great fun, and showcase a good deal of ingenuity and wit, but doesn’t get us an inch closer to building anything like an intelligent machine, or something that does anything remotely like thinking.
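To make concrete how far such strategies sit from any “understanding”, here is a minimal ELIZA-style sketch in Python (every pattern and canned reply below is invented purely for illustration, not taken from any actual contest entry): the program matches surface regularities in the input and fills in templates, with no representation of meaning anywhere in sight.

```python
import random
import re

# Surface-pattern rules: a regex trigger paired with canned reply templates.
# Nothing here models what any of the words mean.
RULES = [
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI (?:want|need) (.+)", re.I),
     ["What would it mean to you to get {0}?"]),
    (re.compile(r"\b(?:mother|father|family)\b", re.I),
     ["Tell me more about your family."]),
]
# Non-committal filler for when no trigger fires.
FALLBACKS = ["I see.", "Please go on.", "What does that suggest to you?"]

def respond(utterance: str) -> str:
    """Echo fragments of the input back inside a template, or stall."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            reply = random.choice(templates)
            # Splice the user's own words back in, verbatim.
            return reply.format(*match.groups()) if match.groups() else reply
    return random.choice(FALLBACKS)
```

A judge in a hurry may find `respond("I am tired of winter")` eerily attentive; the program has merely copied “tired of winter” into a slot.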

So raw computing power is obviously taking us nowhere near AI, and trying to teach a computer how to talk (at least using a rule-based approach) is not faring much better. But the world is abuzz as never before with the breathtaking advances in the field and how much closer we are with every passing day to achieving a truly intelligent machine. How come? Well, we just found a new toy, and are still agog with its possibilities. Only it is not that new, as its basic functioning was already posited in the first blooming of AI in the late sixties; only now has it bloomed, thanks to the massive improvement in processing speed and parallel architectures, which allows much more complex networks to be modelled (a high level of complexity, in terms of number of nodes and number of iterations, being required for the whole thing to produce even mildly interesting results). I’m talking of course of neural networks and how they are at last being successfully applied to pattern recognition. Don’t get me wrong, I was as excited as the next guy when I first read On Intelligence by Jeff Hawkins (back in 2004), and his memory-prediction framework seemed eminently sensible to me. Yes indeed, that was what all that pesky consciousness, and qualia, and the like were really about! Just an old-fashioned hierarchical pattern recognition schema, along with good ol’ engineering wiring diagrams to substantiate it all.
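For readers unfamiliar with the basic mechanism, a single perceptron (the ancestral unit of those networks) can be sketched in a few lines of Python. All the “learning” consists of nudging numeric weights in whatever direction shrinks the last error; stack millions of such units and feed them enough labelled data and you get today’s pattern recognizers. A toy sketch, learning the logical AND function:

```python
# One perceptron trained by error feedback: output 1 if the weighted sum of
# inputs clears a threshold, then push the weights toward fewer mistakes.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias (the movable threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out        # the feedback signal: -1, 0 or +1
            w[0] += lr * err * x1     # nudge each weight in proportion
            w[1] += lr * err * x2     # to its input's contribution
            b += lr * err
    return w, b

# Truth table of AND: the only "knowledge" the unit will ever have.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After a handful of passes the weights settle and the unit classifies all four cases correctly; whether piling up such arithmetic ever amounts to minding about anything is, of course, precisely the question.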

And indeed the addition of pattern recognition capabilities to increasingly humongous amounts of data (and an army of potential volunteers to “validate” the recognized patterns found in the data and thus make the network algorithms “learn” through the appropriate feedback loops) has produced some very notable results, from Facebook’s ability to recognize people in photos (or to know that a cat video indeed includes cat images) to improved speech recognition in customer-facing information systems and digital assistants, to better automatic translators, to eventually autonomous self-driving cars. Oh, and I forgot to mention machines that play chess, Jeopardy and Go better than the best human players (that would be Garry Kasparov, Ken Jennings and Lee Sedol, for those keeping score). I may be a world class curmudgeon: I still find it impossible to get Siri to understand me in any of the five living languages I speak (I haven’t tried in Latin or Classical Greek, but I guess my pronunciation in those is even more awful than in the living ones, so I wouldn’t even try); I still despise the clunky translations offered by Google; I have already written about the “demo effect” that vitiates any report of self-driving cars (and makes me skeptical of the immediacy of their complete takeover of the world’s roads and streets); and I would never say any computer has ever “played” better than a human at any game, as playing presupposes a number of attitudinal states (approaching the game with the right mindset, enjoying it, immersing oneself consciously in it to the point of losing sight of anything outside it) that no machine has emulated.
I’ll concede that I’ve successfully made appointments with a machine just by talking to it (a welcome improvement over having to type the desired hours on a phone’s inconvenient keyboard) and that software programs have reached a dastardly level of sophistication at producing winning moves in some well-regulated games, and some not so well-regulated.

So yep, pattern recognition has come a long way, and coupled with the increasing amounts of data we collect and digitize about our daily whereabouts it may help “automate”, or “algorithmize”, many decisions that today are taken by people, many times without their having a clear grasp of how they make them, or what rules they exactly apply (see this post by the always interesting Jose Luis Hidalgo on the issue: Big Data today for AI tomorrow). But I also think that we will find the limits of what such an approach can solve very soon, and we will be disappointed by how quickly it peters out (i.e. it will reveal itself to be another dead end, like the automation of basic algebraic operations and the chit-chat generators). Before I expand on why I think so, I would like to resort to that old friend used to vet scientific statements, falsifiability, and propose to those still believing in the tooth fairy (sorry, in the power of pattern recognition coupled with big data and armies of free labor to educate the algorithms) the following challenges for a “strong AI” program, challenges that humans have so far been woefully unable to meet:

·         Predict successfully the start date and duration of the next five USA and/or EU recessions

·         Predict successfully, one month in advance, the result (winning party, percentage of popular vote of each of the five most voted parties and number of seats in each chamber of each of same five parties) of the next five general elections in the following democracies: USA, India, UK, Germany and Brazil

·         Predict successfully the quarterly growth (or decrease), for eight consecutive quarters, of the following variables: population, net immigration, GDP, currency value against the dollar and life expectancy at birth, for the same countries enumerated in the previous point, plus China

It can be argued that not only would no human be able to meet such challenges, but it is highly doubtful that any human ever will (which would merit a post of its own: why the supposed social “sciences” have such low expectations of themselves that they never aspire to achieve even the most modest predictive reliability). I tend to contemplate such an argument with a lot of sympathy, but I haven’t chosen those questions because a human kid could give the answers (well, actually a kid could give answers; what he could not do is give good ones, unless he was a very unusually lucky kid indeed!). I’ve chosen them because they are the kind of “knowledge” amenable to being achieved by crunching huge numbers of data points, thus finding correlations with explanatory and predictive power that the unaided human eye has so far been unable to find. Unfortunately, my contention is that the laws governing such variables have not stayed hidden so far because we lack the analytical power to capture enough variables, see how they evolve together and correlate them until we find the proper relationships that suddenly reveal the hidden laws of “psychohistory” (as Asimov called the endeavor). They have stayed hidden for the same reason the lair of the Easter bunny has remained secret: they do not exist. No laws explain the relationship between those variables, and thus the effort of finding them is in the end futile (again, for reasons that would merit a post of their own).
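A toy experiment illustrates the danger of that kind of data mining. Generate a few dozen independent random walks (pure noise, standing in for trending macro series) and search for the best-correlated pair: impressive-looking “relationships” routinely turn up even though, by construction, none exist. A Python sketch, with all the numbers chosen arbitrarily for illustration:

```python
import random

def random_walk(n, seed):
    """A cumulative sum of Gaussian noise: trend without any law behind it."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x += rng.gauss(0, 1)
        path.append(x)
    return path

def correlation(a, b):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

# Forty unrelated "series"; scan all 780 pairs for the strongest correlation.
walks = [random_walk(500, seed) for seed in range(40)]
best = max(abs(correlation(walks[i], walks[j]))
           for i in range(40) for j in range(i + 1, 40))
# With this many candidate pairs, a large spurious correlation almost
# always appears, despite every series being independent noise.
```

This is the classic spurious-regression trap that trending series set for the unwary: more data and more computing power find more of these phantoms, not fewer.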

OK, so the most “promising” field of application of pattern recognition (the formulation of psychohistory’s laws) may not be that promising after all. As we ourselves can’t seem to agree on the best way, or the major constraints to consider, when designing how to live together and how to distribute the social product between the members of a given society, maybe we can’t expect machines to implicitly learn such arts from us and enlighten us on the optimal way of doing it (as we just can’t define what that optimal way consists in). So not much to expect from AI in the fields of economics, politics, sociology, and even psychology. That doesn’t mean there isn’t a plethora of other fields where it can develop itself to help us immeasurably: medical diagnosis, discovery of new drugs, human-like androids to amuse the growing elderly population, the ubiquitous self-driving vehicles to carry us around safely and with minimal pollution…

Maybe, and I do not question that we will see some minor advances in most of those areas, though coming more slowly than the hype would make you think. To illustrate what I mean, I’ll remind my readers that not that long ago (in 2014, maybe?) I had written in my calendar the fall of 2015 as the date for readily available VR headsets with enough content to change forever the face of entertainment… one year after that date, where is my headset (or anybody else’s)? I hope you don’t mean the HTC Vive or Oculus Rift or Sony VR, because they are slightly sleeker versions of the venerable Google Cardboard… and honestly I don’t see any of them having a clue about how to keep users glued (or strapped) to their screens for more than a few tens of minutes, let alone “change the face of entertainment forever” (given the hurdles imposed by the need to develop a whole new set of controllers and feedback mechanisms that really enable the “immersive” nature of the experience to shine through).

I expect more or less the same post-hype deflation for most of the technologies that AI proponents are touting as the advent of the “real thing”. And, as usual, I see I’ve already spent more than three thousand words without getting into the meat and potatoes of what I wanted to talk about, which is the reason why such a real thing is not coming. In this case, the reason is the vast number of elements of intelligence (the “caring about”, or “minding about”, referenced in this post’s title) that are being left out of the prevailing research program. An astute supporter of such a program could counter-argue that the caring about is nothing more than an ascription of value, and that such ascription can be equated to a certain pattern that the systems being developed will be able to deal with in a next iteration.

Again, maybe (that’s the answer I tend to give when I’m utterly unconvinced, but don’t really think it productive to discuss the issue at hand any further). But I have the feeling that this “valuing” of things is indeed at the core of how we reason, at the core then of what being intelligent consists in, and that any program that tries to circumvent it, assuming it will be able to deal with it (or put it back in) at a later stage, has gone seriously off track, so seriously as to warrant serious doubts about its feasibility. But I recognize such a feeling merits further development, a development that will need to wait for a future post.
