Friday, June 15, 2018

Convergent vs. Divergent Technologies

It amazes me that a considerable majority of writers on technical issues still proclaim, in good faith, that we are in the midst of a technological revolution, that the pace of innovation is constantly accelerating, and that we are therefore on the eve of witnessing the most significant revolution in all history in how humans live and interact with each other. Hardly a day passes without some guru gravely, and in all seriousness, letting us know about the epochal changes almost upon us. I know prediction is difficult, especially, as Yogi Berra famously stated, about the future, but it makes me wonder how it is possible that so many brilliant people, with vastly more information than myself about their specific fields, may be so utterly wrong.

Some cases are pretty easy to understand, however, when you look at the incentives. Predicting wonderful, never-before-heard-of innovations that threaten every job and can upend any forecast can be very profitable if you make a living from “teaching” people how to adapt to such disruptions, or just from selling books (or newspaper articles) about the dazzling future just around the corner. Posts and columns about how tomorrow will be essentially like today (except with people less motivated and more pissed off, as they accumulate less material wealth than their parents and the wonders they have been promised all their lives somehow fail to materialize) tend to fare significantly worse than those with a cheerier outlook, since a constant feature of human nature is to be more attracted to good, promising news than to bad news, regardless of how well grounded in reality the former turns out to be.

However, a lot of people fully engaged in techno-utopian balderdash really have no dog in the fight, and should know better. I can understand Tom Friedman (an updated version of the venerable Alvin Toffler, and liable to be similarly discredited by how things actually turn out) blabbering about the wonders of new (unproven, in most cases overhyped and underdeveloped) technologies in almost every one of his NYT columns of the last five years, or Michio Kaku peddling an imminent progress that will never actually come to pass. Poor Michio! Probably his best days as a physicist are behind him, and nowadays he surely hopes to make more money from the TED talk circuit than from potential scientific discoveries, although I cannot avoid noticing that he will need to be more discerning about the nonsense he spouts: see him here harping on the greatness of the “marshmallow test” (M Kaku on the marshmallow test), when it seems all that the arch-famous test measures is how affluent your parents were — which is, doubtlessly, greatly correlated with success in life, although it sounds much less heroic and self-serving (The marshmallow test doesn't prove what you thought it proved (if anything)). The same goes for charlatans like Aubrey de Grey or Ray Kurzweil (although with the latter one has to wonder if he is trying to fool somebody, or mainly to fool himself about the viability/inevitability of that ages-old conceit of living forever and cheating death, now that death is getting undoubtedly closer for him). But what about the likes of Bill Gates, rich enough and old enough not to be foolishly deluded by the gewgaws and wild predictions of a bunch of self-styled “visionaries” obviously hoping to make a buck from the delusion?
I can only suppose that, coming from the tech industry himself, he is as steeped in its biases and distorted perceptions as the next guy, and fails to see the negligible impact it makes on the lives of the majority of human beings, just like any of his Silicon Valley imitators. That root cause of such baseless techno-optimism was already identified in a book by Richard Barbrook published more than ten years ago, Imaginary Futures: Grauniad book review

Be that as it may, the best antidote to such unfounded optimistic pseudo-predictions is to go back a couple of decades and see how there is little new under the sun: the same miraculous technologies presented today as about to change everything forever were already slated back then to be fully developed and implemented by now. For example, in this priceless Wired article from 1997 about the long boom that started in 1980 and would (they confidently assumed) last until 2020 (Futurists never learn!), the predictions (electric cars! New energy sources that eliminate the need to burn fossil fuels! Nanotechnology! Biotechnology!) are surprisingly similar to the ones we hear today as about to cause an immediate seismic change in our lives any day now… only twenty years later, and after all of them (not just one or two) having failed in the meantime to, ahem… actually happen. All those wonders didn’t come about as predicted towards the end of the last century, and won’t come about any more surely at the end of the second decade of the present one.

However, it’s not as if there were absolutely nobody with eyes to see, realizing that the majority of the predictions of cornucopians and techno-optimists alike have a very slim, unscientific foundation. Robert Gordon made a magisterial effort to point out that the supposedly disruptive technologies that are already ten to twenty years old were not causing that much disruption, at least where productivity statistics are concerned (as I commented on here: On Robert Gordon). Tyler Cowen famously announced the onset of a “Great Stagnation” (but has been hedging his bets since, by announcing on his blog at MR almost weekly that there is no such thing, usually with the most obnoxious examples, so we know he is half-joking about it). Every now and then you find a contrarian view (Pace of technological change NOT accelerating), but I think we can all agree such opinions are in the minority, and 90% of people out there assume we are in an age of undiminished technical advance and ever-accelerating progress, along the most publicized lines. To recap:

·         Computers and, as a practical consequence, General Purpose Artificial Intelligence (not to fall into the IT consultants’ trap of the Internet of Things, Big Data, Quantum Computing, Virtual Reality, Augmented Reality, etc.)

·         Biotechnology and genetic engineering (extending the normal human lifespan beyond 130 years, maybe 180 years, maybe forever)

·         Nanotechnology or, alternatively, 3D printing

·         Self-driving cars and trucks (something most cities have known for a century… they were called taxis back then)

·         Green energy production (limitless amounts of cheap energy produced with no cost to the environment whatsoever)

·         Space exploration a go-go (permanent base on the moon, mars colony, cheap satellite launching to ensure high-bandwidth access to the internet anywhere on Earth with almost no cost)

Once again, and I’m really sorry to have to play the Cassandra here, none of those things will be widespread (and some will not exist at all, not even as a more-or-less credible “proof of concept” in some marketing whiz’s PowerPoint presentation) or actually rolled out: not in ten years’ time, not in several decades’ time, not in our lifetimes. In the whole lifetime of any reader of this blog, regardless of how young he or she is (and sorry, but that lifetime won’t go much beyond 100 years, no matter how well they take care of themselves or how much medicine advances). Believe me, this is not a rant from your average fifteen-year-old living in his parents’ basement who has read this and that on the interwebz without understanding much. I worked for 15 years as an IT consultant. I now work in a company that designs and engineers power plants (of all technologies and stripes) and manufactures thermal control systems for rockets and satellites. I teach in a university, which gives me a good overview of the real (present and future) capabilities of that “most trained ever” generation of future geniuses we are setting loose on the world (and which may be showing the first signs of a reversal of the “Flynn effect” we have been benefiting from for decades: Things looked bad enough already, and now it seems we are getting dumber!). I know a bit about what the current level of technological advance can and cannot deliver, and about how long it takes to deploy a new technology at scale, no matter how promising it seems on paper. And I am constantly puzzled when apparently serious people tell journalists, investors and the general public, with a straight face, that they are going to produce some miracle in blatant violation of the laws of physics, sociology, economic rationality and what we know of human (and animal) nature, and the latter swallow it hook, line and sinker.
But such is the sad state of affairs we have to deal with, and it behooves us, as in so many other fields, to understand why it is so.

And in this case, I think there are a couple of distinctions that both “entrepreneurs” (a term we should know better than to lionize, as the more accurate gloss is not “beneficent genius”, as so many people seem to believe, but “psychopathic snake oil peddler who got lucky once”) and said general public fail to make. The first is the distinction between science and technology, a well understood one I won’t delve much into. The second one is within technologies themselves, between “convergent” and “divergent” ones. Convergent technologies are predictable, repeatable, reliable and, because of all that, boring (they don’t attract much attention). We know how much it costs to produce something with a convergent technology. We can replicate it in different environments and cultures, because we understand the underlying principles and processes at play, and we have vast historical data series from which to extrapolate the future behavior of the different underlying systems and components. We can measure the performance of the processes involved and, thanks to that measurement, tweak them here and there to improve some aspect marginally, but they don’t lend themselves easily to major alterations or “disruptions”. Finally, convergent technologies are considered boring because it is difficult to wring a “competitive advantage” out of their application, so their products sooner or later end up commoditized, and the rate of return they can produce tends asymptotically to zero (so good ol’ Marx erred in his universal prediction, in Vol. 3 of Das Kapital, that the falling rate of profit would doom capitalism to a crashing end, in that he didn’t consider the other half of the equation: the existence of divergent technologies).

Divergent technologies, in turn, are the exact opposite. If we are honest with ourselves, we have only the foggiest idea of what it costs to produce a single unit of whatever this kind of technology is supposed to deliver, and we may be off by orders of magnitude (although errors of 50-150% are more common). We understand only a fraction of what they require to work, so with every new attempt to establish them we find new missing elements we hadn’t considered, which have to be hastily commandeered (adding to the total cost creep). Because of such limited and incomplete understanding, they are highly unreliable: if in one location they seem to function all right, in the next one they fail or misfire, and they generally exhibit very poor production statistics (they have to be frequently stopped for unforeseen maintenance/repair/adjustment). Finally, they are very exciting, promise above-market rates of return, and tend to make the life of everyone involved miserable (see Elon Musk sleeping on the factory floor to try to personally fix all the problems of Model 3 manufacturing, something he has as much chance of accomplishing through such a heroic strategy as I have of winning the next Nobel Prize in Economics).
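To make the contrast concrete, here is a minimal, purely illustrative toy simulation of how cost estimates behave under each regime. The specific distributions and parameters are my own illustrative assumptions, loosely based on the rough figures mentioned above (small errors for convergent technologies; 50-150% overruns, with occasional order-of-magnitude blowouts, for divergent ones), not measured data:

```python
import random

def mean_cost_ratio(kind, n=10_000, seed=42):
    """Average ratio of actual cost to estimated cost (toy model)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        if kind == "convergent":
            # Convergent: small, roughly symmetric estimation errors (~5%).
            total += rng.gauss(1.0, 0.05)
        else:
            # Divergent: typical overruns of 50-150%, plus an occasional
            # (here, 5% of the time) order-of-magnitude blowout.
            ratio = 1.0 + rng.uniform(0.5, 1.5)
            if rng.random() < 0.05:
                ratio += rng.uniform(5.0, 10.0)
            total += ratio
    return total / n

print(mean_cost_ratio("convergent"))  # close to 1.0: on budget, on average
print(mean_cost_ratio("divergent"))   # above 2.0: budgets routinely doubled
```

The point is not the specific numbers but the shape: convergent estimates cluster tightly around reality, while divergent ones are biased upward and fat-tailed, which is exactly what makes business plans built on them unreliable.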

To clarify a bit, I’ll give some examples of each category:

·         Building complex and “big” physical infrastructures (e.g. nuclear power plants, highways with bridges and tunnels, high-speed trains, harbors, airports) – highly divergent

·         Building complex and “big” infrastructure for moving data (land-based communication networks, be they copper based, optical fiber based or antenna based – mobile and TV) – convergent

·         Manufacturing technically complex things highly adapted to their mission, hence in very small quantities and with lots of differences between one piece and the next (satellites, rockets to put things in orbit, components for fusion reactors, supercomputers) – divergent

·         Manufacturing a lot of identical things (cars, running shoes, parts of furniture that the customer has to assemble himself…) – convergent

·         Manufacturing a lot of identical things in quantities that had never been manufactured before, which means it is uncertain which features are valued by consumers, and by how much (electric cars, wall-mounted batteries, electric car batteries, virtual reality headsets, augmented reality glasses, 3D printers, DIY gene-editing toolkits) – divergent

·         Developing software – divergent

·         Providing low-end services (cooking, cleaning, cutting hair, waiting tables, washing clothes, personal training) – highly convergent

·         Providing high-end services (strategy consulting, financial and tax advice, psychological counseling, surgery) – divergent

·         Providing cookie-cutter entertainment (TV shows, run-of-the-mill apps, LPs of most big-name bands) – convergent

·         Providing cutting-edge entertainment (blockbuster movies, high-budget videogames) – divergent

You may see where the problem lies: moving a technology from the “divergent” category to the “convergent” one is really hard, takes a lot of time, and requires a sustained commitment from the whole of society to endure cost overruns, delays, frustration, disappointments and the occasional tragedy. If the benefits of making the technology in question convergent are clear enough, and perceived to be widely shared enough, all those efforts are indeed endured, and the darn thing becomes commonplace, unexciting, and part and parcel of our everyday lives. But if they are not, people get tired of it (what in some circles is called “future fatigue”: the weight of so many unmet expectations and promises not honored) and it may well never come to fruition, as seems likely to be the case with nuclear energy (as much as it anguishes me to recognize it).

To make things worse, some of the technologies that techno-utopians are announcing as imminent are not even in the “divergent technology” phase (where at least there is a draft of a business plan with some rough numbers on what it costs to produce each unit of the new good and what people may in theory be willing to pay for it), but in the pure “scientific application that we can trick some VC into paying to develop, in the hope it will produce something vaguely marketable someday” phase. And guess what? A) things take an awfully long time to transition from that phase to the “convergent technology” one (think generations, or decades at best; not years, and certainly not months), and B) a lot of scientific wonders never make it to that final stage.

You may also have noticed that the classification is subtle and tricky at some points. Is building communications infrastructure convergent or divergent? It depends on what communication it intends to enable. Infrastructure for moving physical goods (highways, airports and the like) is divergent (and prone to corruption, regulatory inflation and countless inefficiencies, but that is another matter). Infrastructure to convey electric signals (data or power), or gas or water, is mostly pretty convergent. A similar thing happens with manufacturing: convergent for combustion engine cars, divergent for electric ones. Convergent for laptops and mobile phones, divergent for satellite platforms and large telescope equipment. So if you bet on huge societal changes dependent on cheap, super-abundant electric cars (or on ubiquitous satellites) you will be sorely disappointed, as those are not coming anytime soon (if ever).

Of course, the people tasked with building the things that require divergent technologies will try to convince you of the opposite, and will claim that their technology is already convergent, or really close to becoming so. Solar concentration plants are already widespread (they are not; the few ones actually built have every kind of technical problem and terrible performance). Fusion energy is around the corner because the ITER experimental reactor is already almost built in Cadarache (it is not), and MIT just signed with Eni the financing of a project to deliver a similarly productive reactor for a fraction of the cost (they still need to find more than 90% of the money, which will turn out to be less than 10% of the total amount actually needed, and the whole thing will never go beyond the preliminary design stage). Elon is experiencing some minor glitches that will finally be ironed out in a few days, and it’s been a year and a half of him saying that, starting in two weeks, his Fremont factory will churn out 5,000 Model 3 cars per week (which is still short of the half a million cars a year he said he would be producing by now); what he is really and indisputably doing is firing 9% of his workforce (the surest signal the company is going nowhere, but industry analysts, those sharp cookies, reward him with a 3.5% rise in the share price… Tesla's travails). VR glasses are finally about to go mainstream, and the technology is so breathtaking that they will reach 50% of homes in no time at all (they won’t; actually that announcement is from almost two years ago, and I fear journalists have already given up on that one). AI is so much around the corner that whole panels of ethicists (and maybe a presidential commission of experts for good measure, if we heed the recommendation of Dr. Kissinger!) are already convening to help guide it towards morally responsible behavior towards us, poor humans, whom it may almost inadvertently obliterate (although, of course, we don’t have the darnedest clue how to actually produce, program, implement, embody, develop or whatnot said AI, never mind have it harbor positive or negative intentions towards us… or towards anything else for that matter).

Before we go into the consequences of the convergent-divergent distinction, we have to take into account how it differs from the classification of technologies as mature or immature. Some technologies, like building nuclear power plants, breaking ground and laying down highways, or producing and filming blockbuster movies, are very mature but never stopped being divergent (and keep driving the companies that attempt to market them into bankruptcy). Other technologies were groundbreaking and immature when commercially launched, but made a profit from the start, as they were predictable and repeatable enough to be convergent from the beginning of their commercialization: cars, oil extraction, radios (or many of the home appliances on which huge brands were built: washing machines, refrigerators, TVs).

You may notice that the latter category (innovative products that become convergent almost from the start) has one thing in common: they are all quite old, most of them introduced at the end of the 19th or beginning of the 20th century. In contrast, the last wave of consumer-oriented innovations (PCs, mobile phones, LED color TVs and maybe autonomous vacuum-cleaning devices like the Roomba) is not generating that much profit. That may be an indicator of the comparatively little impact they have on people’s lives (which translates into a reduced marginal value, which in turn means people are willing to pay only moderate prices for them), and thus of their inability to command a high margin. By way of comparison, back in the day people were perfectly OK with parting with a year and a half of the average salary for a car, or many months of salary for a TV receiver.

Some readers may object that there is one bright spot of both innovation and high margin (and of lots of people still being in demand): IT. Unfortunately, regardless of how much has been invested in trying to industrialize it, developing software is still divergent (most software projects are behind schedule and over budget, many times absurdly so). However, using it (applying already developed software to its intended areas, like using an Excel spreadsheet to develop the annual budget of a company, or even extending it to some new ones) is convergent, and that has created the mirage that a) software is eating the world and b) software (and virtualization) has any real impact on how people live and interact… when it has not.

We may spend increasing fractions of our lives in front of a screen, typing (or just watching), but saying that a new app is going to change the world is like saying in the 1950s (when TVs were already common enough) that a new show would change the world or, going back even further in time, that a new novel by the romantic author du jour would change the world. They may have had everybody talking for a while, and they may have gently nudged attitudes and opinions a little in this direction or that, but they actually changed very little, as people went on about their daily lives exactly as before. I’ve read brainy pundits in some reputable magazines declare that the mobile internet has changed everything because now we have things like Uber, which has utterly revolutionized how people move around in cities. Huh? Dude, Uber is a semi-convenient means of getting a guy to take you from point A to point B in exchange for some money, something that has been around for a century, and its innovation is to circumvent stifling licensing and regulation (which may or may not translate into a societal gain, as with any “artificial” monopoly).

If that is your idea of a society-shaking, business-disrupting (well, it has been pretty disruptive for incumbent licensed taxi drivers, who in most European cities are fighting back with some success, while facing a more formidable enemy in car-sharing companies), life-altering innovation… I suggest you go back to The Rise and Fall of American Growth and ponder the impact of running water, the internal combustion engine, electricity and light bulbs, or the radio, and what life was like before and after the advent of such true innovations. Listen: one of my grandfathers was raised as a peasant kid in the Canary Islands countryside at the beginning of the 20th century. He didn’t know running water, electric light bulbs or motor vehicles until he moved to the capital in his teens (and he almost had to kill another suitor of the woman who would become my grandmother to avoid being killed by him; life was indeed nasty, brutish and short back then). My maternal great-grandmother, only slightly older, was still amazed, when I first met her, by people moving inside a little box (the TV set, still black & white). So when a guy tells me that Waze is a life-altering innovation because now he knows in advance how much time he will spend in a traffic jam, I can only shake my head in disbelief.

In summary, part of the stagnation and stasis we are mired in (regardless of how intently a bunch of interested fools may try to convince you of the opposite) derives from the fact that most of the technologies we have been developing since the 1970s are still divergent, and we don’t seem to have the collective willpower (or wits) to make them converge. Which means most of the effort devoted to their further refinement is squandered and lost, while our daily lives remain as before, only a little more cynical and a little more disenchanted (the weight of all those unmet expectations). Are we doomed to trundle along such an unexciting path? Not necessarily (as there is very little that is totally necessary in human history), and I would like to end this post with a (cautious) appeal to hope. The areas of promise and development you hear of in the media (recapitulating: biotechnology, genetic engineering, nanotechnology, Artificial Intelligence, electric cars, “renewable” energy, machine learning, big data, the Internet of Things, virtualization, Industry 4.0) will go through the usual hype cycle and most likely fizzle out and disappoint.


Such is the nature of capitalism and a spent dominant reason. But human ingenuity, even when unfathomable amounts of it are wasted in dead-end alleys with no prospect of producing anything of value (remember, many Nobel-prize-winning-IQ-level guys are spending their professional careers trying to improve the conversion ratio of some obscure advertising algorithm by a fraction of a percentage point, and call that a meaningful life), won’t be subdued forever. Popper made a convincing case against what he termed “the poverty of historicism”, which he identified as the delusion of being able to predict the future by extrapolating from the tendencies of the present. Some true, unexpected, entirely out-of-the-blue innovation will arise in the following decades, probably outside the stifling environment we have produced for “R+D+I” within corporate or academic behemoths that provide the wrong incentives and the wrong signals about what is worth pursuing. And from that spark, likely of humble origins, the whole cycle of (truly) creative destruction may start anew. But from the current bonfires stoked by greedy firms’ research departments and hapless universities, more focused on publishing than on expanding human knowledge and welfare, I expect little creativity and lots of destruction indeed…

2 comments:

  1. Santi, are you sure we should categorize electric vehicles as a divergent technology? This is a pretty old technology (remember, the first cars were electric), based on rather straightforward elements, with a well-known cost, easier to build and maintain than ICE vehicles... Maybe that is why I agree with you that this is not transformative.

  2. Yup, but convergent-divergent is orthogonal to mature-immature. Electric cars are mature (and building nuclear power plants even more so), but divergent. They are divergent because, in their current state, there is no way to market enough vehicles with the features enough people want at a price that makes producing them profitable, so manufacturers either lie about the features (inflate the range, claim they are "ready to drive themselves autonomously", keep silent about the hysteresis of the batteries...) or delude themselves about what it really costs to manufacture at scale.
    Now, of all the divergent technologies mentioned, EVs may be, just maybe, the one we see becoming convergent in our lifetimes. Not this decade or the next. We still have to see Tesla go bankrupt (in a couple of years) and the rich cities of the first world ramp up their efforts to ban ICEs, thus creating a moderately big market that allows incumbent manufacturers (Renault, Toyota...) to really make money with them.
