Friday, June 17, 2016

No, Software is NOT eating the world (the misguided prototype paradigm)

As I promised in a previous post about Robert Gordon’s The Rise and Fall of American Growth (man, Gordon's book is Good), I would like to expand on what I consider a major source of error for techno-utopians and cornucopians, namely, the fact that their limited professional experience (normally confined to academic life, think tanks and some gigs as “strategic” consultants to big corporations, far enough from the nuts and bolts of daily operations) has left them entirely ignorant of the dynamics (and pitfalls) of rapid prototyping, which has become a major distorting factor in most innovative endeavors.

I will start with a somewhat generic remark on viewpoints: although pure impartiality (what Thomas Nagel felicitously called “the view from nowhere”) is a noble ideal we should strive for, I understand that it is damn difficult to achieve. People think, and write, necessarily from their own experience, and what they had a chance to know and experience firsthand tints what they are able to think, and how they come to think about it. When I finally understood what Marx had done for a living (barely) for most of his adult life, much of his writing finally became intelligible to me: he was essentially a man with a “journalistic sensibility”, for what other sensibility could he have, if all he ever got paid for was writing articles for newspapers and magazines? He never had much time or resources to think “deeply”, in an academic way, about any issue pertaining to how society worked (all he had to go on were the secondhand opinions of British thinkers of the previous generation, the “classical economists” available in the British Museum library, whose reading he had to cram haphazardly between commissioned work), or to organize and direct any sizeable group of people (not a political party, not a trade union, not even a neighborhood association, and certainly not any of the “international workers’ movements” with which he was affiliated or from which he was kicked out), so he was understandably clueless about any of them.

Similarly, to understand the thought of some of our current opinion makers we need to get a grasp of where they come from. Not necessarily to undermine their arguments (those have to be considered, as they say, “on their merits”) but to put them in perspective and better ascertain what kind of bias and distortion they may have been subjected to. We can then sort the proponents of a new golden age driven by the acceleration of software-based “innovation” into two broad groups: the software developers themselves (be they entrepreneurs who ended up running their companies, like Bill Gates, or financiers who ended up gaining some understanding of how the companies they put their money in worked, like Marc Andreessen) and the journalists who have chronicled their rise and written books about it (although they sometimes hold academic positions, the role they play is mainly that of mouthpieces and popularizers of the former group’s worldview, as we will see).

What kind of distortions can we expect from people who have worked all their lives within the software industry? A considerable number of them, as it happens. We have to start by noting that this is a pretty recent creation. The first corporate computer systems were conceived and installed at the end of the 60’s, but their propagation didn’t start in earnest until the beginning of the 80’s (I distinctly remember how, at the very beginning of my own professional career, and before that, while I was at university, most desks did not yet have a computer, even in advanced fields like engineering), so the first batch of full-time programmers that operated in a somewhat structured, well-established industry is not yet approaching retirement. So of course they see work as being revolutionized by the software they install, and which they increasingly use for plying their trade. But, unsurprisingly, machine tool operators (or engineers) have a much more nuanced view, as they see software (and computers in general) as one more tool that fits alongside the other tools they have traditionally used, in a context that hasn’t changed that dramatically. That has, in other words, evolved without all that much of a “revolution”.

An additional factor to be taken into account is the extent of “self-consumption” within the industry: new languages and tools are increasingly used to build (supposedly) more powerful programs that in turn can be used by the industry or by consumers. But programmers are usually pretty ignorant about how the external world uses (or benefits from) their end products, to the point of requiring a whole new class of workers (who call themselves, in a revealingly self-congratulatory way, “consultants”) to translate what they produce into terms that end users can understand. Not that such self-congratulation is warranted in any way, as the track record of what software does to companies and individuals is pretty abysmal (an old statistic from the 90’s revealed that 50% of software implementations in big corporations were never completed; if you add the roughly 30% that were completed but failed miserably to achieve any of the “business objectives” that had been set for them, four out of five projects were a colossal waste of time and effort for all involved).

Finally, the kind of people attracted to the industry, not to put too fine a point on it, leads to an overrepresentation of what is traditionally known as “nerds”: individuals with few social skills who have spent an inordinate amount of their lives playing video games (software) and discussing the evolution of computing power as if it would deliver them from all their shortcomings. I’m not saying every single participant in such a big sector of the economy is a cartoonish version of Sheldon Cooper, but I do think you can find a higher percentage of people with that kind of mental setup in software development (and related industries: IT consulting, support, telecoms, etc.) than in retail, manufacturing or construction.

So the first kind of people touting the opinion that “software is eating the world” or that “every business is a digital business” are the people most set to benefit from such a purported state of affairs, which should already make us suspicious of the accuracy of those statements (people always tend to see as more likely those events that favor them). What about the second kind, the journalists and popularizers? Shouldn’t they have a wider view, less invested, and thus more objective? Sadly, no, because of two factors, which I will call the “journalistic bent” and the “prototype fallacy”.

The journalistic bent

There is a reason I chose my enhanced understanding of Marx to exemplify the importance of understanding a man’s profession in order to calibrate the validity of his opinions. As it happens, when that profession consists of writing for the “general public” (what in more elitist ages was known as the “uneducated masses”, though who would dare call anybody uneducated in these egalitarian days?) the opinions proffered are to be taken with more than the traditional grain of salt. What, indeed, is the skill that the journalist/popularizer has to cultivate during his professional career to make a living? An optimist would say it’s the ability to clarify complex ideas, to identify from the confused mass of facts and opinions the most salient features, which can then be communicated to and easily comprehended by his readers. Yep, sure… if you think that is how the real world works, I still have that bridge in Brooklyn I mention so frequently, and I’m surprisingly willing to cut you an unbeatable deal. That skill would be valuable in a world of discerning publics that could differentiate between streamlined prose that keeps close to the truth and an unadulterated load of claptrap. That is most emphatically NOT the world we live in, so having such a skill is of exactly ZERO use to any aspiring journalist.

What the aspiring journalist needs is the ability to sound knowledgeable on any issue, to appear authoritative even when he doesn’t harbor the slightest, tiniest, most minute idea of what he is talking (or writing) about. What the schools of journalism do indeed teach (in a more or less veiled way) is how to feign authority, how to get audiences to trust you, how to, if needed, dissemble with confidence, poise and self-assurance, with absolute disregard for the gaping chasm that may exist between what one communicates and how things really are (it helps to live in a culture where finding out “how things really are” has been questioned and doubted and practically banished from public discourse). What they teach, in fewer words, is how to bullshit the audience, assuming it will not notice anyway. Not surprising, then, that what fills 99% of the media, on the airwaves and in print, is none other than bullshit, and it is a sorry testament to the state of the world that 99% of the audience buys into it and swallows it hook, line and sinker.

The prototype fallacy

So we live in a “mass media society” where the lack of education of the audiences (both in highbrow culture and in the complex technical matters that require some command of STEM subjects to be understood) and the lack of scruples of the communicators may give some salience to lies like “we live in an age of unparalleled technical advance” or “the economy is becoming more and more digital, so faster computers are all we need to build a better world for all”. That still doesn’t explain why such misguided statements are not revealed, sooner rather than later, for the baloney they really are. In most ages of humanity there has been a combination of gullible publics and hucksters trained to exploit them, and false ideas like “the Earth is flat” or “the Earth is at the center of the solar system” or “bodies in free fall move towards the floor with constant velocity” were finally exposed as fraudulent and discarded (note I’ve chosen three examples of ideas about facts, not events, as discussed in my post about Collingwood and the viability of the social sciences). True, in each age those false beliefs were propped up by a subset of the contemporary dominant reason that made them appear more “believable”, more likely than they really, prima facie, were. In the case of the centrality of the Earth it was Christian dogma and a certain interpretation of a passage of the Bible (which didn’t prevent Protestants from abandoning the idea much sooner than Catholics, a fact that would merit a post-length disquisition of its own). In the case of the constant speed of a falling body it was the authority of Aristotle (and the lack of instruments with the required precision). I will argue that in the case of the ongoing acceleration of technical progress (similarly fallacious) the plausibility derives from what I’m calling the “prototype fallacy”.

What is a prototype? In the software world, it is a program that can be built very fast to show how a function would be performed once it is fully developed, to assess the viability of the proposed solution, and to identify the interfaces it may require with other systems (or with humans, as prototypes are often used as “proofs of concept” to validate the design of an application with its future users). Prototypes are wonderful tools (there is a whole school of software project development structured around RPB, Rapid Prototype Building, based essentially on iterating around them until you have the whole working system, and the family of Agile methodologies evolved from that), BUT they are very dangerous too, as they systematically lead their less experienced users to underestimate the effort required to finish the application and to overestimate the value and capabilities of the prototype itself. A common joke from the 90’s told of a programmer who died, went to the pearly gates and was shown by St. Peter a picture of what Heaven looked like (a bit dull) and by the devil one of what Hell looked like (lots of wild fun and exotic pleasures), so he could decide where he wanted to spend the afterlife. He chooses Hell, obviously, but once there he is tossed into a cauldron of boiling oil, surrounded by flames and the screams of the damned souls that fill the place. When he complains to the devil, he receives the answer: “Man, that was the prototype; this is what it ended up looking like when we finally got to build it” (and of course he cannot complain, knowing all too well that this is how these things work).

The thing about prototypes is that they seem to have 90% of what the final system needs already built in. Only a few interfaces with external systems have not been developed; in their place, the prototype calls a “dummy” that behaves “exactly” like the external system will. And some complex algorithms have not been finished, so a simple routine that always gives the same result (“not that different” from what the algorithm will calculate) has been plugged in. And some controls and interactions within the user interface may need to be tweaked to improve intuitiveness and user-friendliness, but those will surely be minor changes requiring very little effort. So it is typically assumed that, even being extremely pessimistic, with a little final push and an additional 20% of effort the application will be ready for roll-out (the “10%” of remaining functionality should take no more than 10% of the effort already invested, 20% tops).
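To make the mechanics concrete, here is a minimal sketch of what such a prototype looks like under the hood (all names are hypothetical, invented purely for illustration):

    # Why a prototype can look "90% done": every hard part is a stand-in.

    def fetch_customer_record(customer_id):
        """Dummy for the interface with the (not yet built) external system.
        Always returns the same canned record, which 'behaves exactly' like
        the real system will -- for the one demo scenario it was written for."""
        return {"id": customer_id, "name": "ACME Corp", "credit_limit": 10_000}

    def score_credit_risk(record):
        """Placeholder for the complex algorithm nobody has written yet.
        Returns a constant 'not that different' from what the real
        calculation will supposedly produce."""
        return 0.42

    def approve_order(customer_id, amount):
        """The only 'real' logic in the prototype: a couple of comparisons."""
        record = fetch_customer_record(customer_id)
        risk = score_credit_risk(record)
        return amount <= record["credit_limit"] and risk < 0.5

    # The demo works flawlessly -- for exactly this input.
    print(approve_order(customer_id=1, amount=5_000))  # True

Everything the audience sees in the demo works; everything that will actually consume the development budget (the real interface, the real algorithm, error handling, edge cases) is hidden behind those two stubs.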

Any seasoned project manager knows where this is going. That “10%” of remaining functionality simply takes forever to be completed, tested and integrated. And once it is integrated, new bugs and problems keep showing up. The interfaces have to be redefined. The minor changes in the user interface require major reprogramming of the database and the data-access layers. The algorithms demand additional data that in turn imply changes in a myriad other parts of the application. Lucky is the team that can complete the application with only four times more effort than it took to develop the prototype. Hence the famous “Pareto rule”, so well known by planners and schedulers in software projects, which states that 80% of the functionality is achieved with 20% of the effort (and in 20% of the time), and getting to 100% requires the remaining 80%.
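A back-of-the-envelope calculation (with made-up numbers, just to show the shape of the gap) makes clear how brutally the naive extrapolation and the Pareto rule diverge:

    # Toy comparison of the two estimates described above.

    prototype_effort = 100  # person-days spent to reach the "90% done" demo

    # Naive projection: "10% of functionality left, 20% more effort, tops".
    naive_total = prototype_effort * 1.2

    # Pareto projection: the prototype's 80% of functionality took only 20%
    # of the true total effort, so the remainder costs the other 80%.
    pareto_total = prototype_effort / 0.2

    print(f"Naive estimate:  {naive_total:.0f} person-days")     # 120
    print(f"Pareto estimate: {pareto_total:.0f} person-days")    # 500
    print(f"Underestimation: {pareto_total / naive_total:.1f}x") # ~4.2x

Note how the Pareto projection lands exactly on the “four times the prototype effort still to go” that the lucky teams experience.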

In the world of power engineering the difference between a prototype and a working application is not dissimilar to that between an experimental reactor and a commercial one. In the words of Hyman Rickover (one of our patron saints):

“An academic reactor or reactor plant almost always has the following basic characteristics: (1) It is simple. (2) It is small. (3) It is cheap. (4) It is light. (5) It can be built very quickly. (6) It is very flexible in purpose. (7) Very little development will be required. It will use off-the-shelf components. (8) The reactor is in the study phase. It is not being built now.

On the other hand a practical reactor can be distinguished by the following characteristics: (1) It is being built now. (2) It is behind schedule. (3) It requires an immense amount of development on apparently trivial items. (4) It is very expensive. (5) It takes a long time to build because of its engineering development problems. (6) It is large. (7) It is heavy. (8) It is complicated.”

More to the point, most research on AI is barely passing from the stage of prototyping to the stage of “real” development, so expect some delays until it produces anything resembling fruitful applications. The Google autonomous car? A fancy, costly prototype, which the company is still barely starting to grasp how to scale. “But it has driven millions of miles with just a handful of ‘incidents’!” I can hear you say. Nope: it has driven a few tens of miles around Mountain View, CA, millions of times, which is a totally different thing. And every time, a number of vehicles had to be sent in advance to reconnoiter and ensure there were no changes in the roads, no unexpected works, no fallen branches or large pools caused by heavy rain; even new road signs had to be recorded, digitized and downloaded into the self-driving vehicle… something nobody seems to have thought would need to be done regularly across every single road of the USA that is to be made accessible to the “autonomous” car. They still have to do 80% of the development to have something remotely marketable, and I doubt they have either the financial resources or the stomach (or the knowledge) for doing it. My prediction is that in the next five years they will drop the whole endeavor without much fanfare, as they are doing with that other much-ballyhooed (back in the day) project, “Google Glass”.

In the energy sector, our best, most advanced bet to ever get to a “cheap & clean” source of energy (fusion) is ITER, which has not reached the prototype stage yet, 64 years after the first certified fusion reaction on Earth (the detonation of “Ivy Mike”, the first hydrogen bomb) and 48 after the completion of the first Tokamak; and even then it will not be a prototype of what a commercial reactor will look like. But not to worry, you have people confidently asserting that the lack of real progress in fusion can be more than compensated for by (I’m sure you guessed it) advances in software: “energy is transitioning into software”. Look, I’ve been hearing this same claptrap since the 90’s of the last century: smart meters, batteries, smaller producing units, et al. will allow us both to wring more out of the current production facilities and to transition to cleaner, more flexible sources. It hasn’t happened in 20 years, and I don’t think it’s gonna happen in another 20.

So what should you make of Brynjolfsson & McAfee confidently stating that we are in an age of wonder, and that more is yet to come? Just conclude they are deluded: they have been shown a battery of prototypes by some executives within the industry and, never having managed a big project in their lives, simply did not have the knowledge to put what they were being shown in context. More worrisome is the case of Bill Gates, who should know better, but still dismisses the learned scholarship of Robert Gordon and refuses to acknowledge that, as much as he would like to think otherwise, innovation is not doing much for 99% of the population. I can only assume he has spent so many decades occupied with running Microsoft (and his foundation since he left the software giant) that he has forgotten how unfounded the optimism inspired by the prototypes he is shown all around really is, and has ended up drinking uncritically the Kool-Aid of his peers.

Just remember there is no reason at all that you, dear reader, should be drinking it too. The next time you hear (or read) of a new gewgaw about to revolutionize the world, apply a healthy dose of skepticism and remember that, very likely, you are being sold a prototype as a product close to completion. Do not fall for it.
