Thursday, June 30, 2016

The problem with open borders

Some attentive readers may remember that I recently labeled what to do globally about refugees, displaced people, asylum seekers, or just paupers moving from one dilapidated place to another in search of a better life (many times not even for themselves, as they know well enough they will face discrimination and outright scorn, but for their children) one of the “great moral questions of our times” (GMQooT). Unfortunately, in a subsequent post about the shortcomings of democracy in complex societies where desiderative reason has run its course (democracy's shortcomings) I reached the conclusion that the most widespread mode of aggregating the political preferences of the people (representative democracy) was unlikely to help reach a commonly agreed answer to such a question. Although I didn’t explore thoroughly the possible alternatives to current democratic practices, I found an interesting article in The Grauniad that proposes a particularly intriguing one (supplementing the current institutions with groups of citizens drawn by lot, an apparently absurd idea I recently recommended to my patient wife as a potential alternative): Why elections are bad for democracy.

Some of the shortcomings of democracy identified in my first post have already become commonplace: the Brexit referendum and the rise of Donald Trump in the USA are used by political commentators of all stripes to showcase the dangers of unfettered popular rule, and to illustrate how, left to their own devices, the uneducated masses are wont to make catastrophically bad decisions that can leave all the participants (which in their parlance means not only said uneducated masses, but especially the cultured, well-informed, able decision makers that constitute the natural milieu of the commentators themselves) worse off. Maybe just to sharpen my contrarian skills, today I want to take the position opposite to the one I implied back then, and argue that there may be some legitimacy to the popular revolt against elite opinion (what their self-appointed leaders tell them they should think), and that by not paying attention to it such elites are accelerating the descent of our already exhausted social arrangement (link) into chaos and disintegration.

Let’s start, then, by reviewing the commentariat’s opinion of the latest manifestations of popular folly, in order to better understand its potential shortcomings. According to most politicians, pundits, journalists, boffins, professors and political analysts, what we see in the “populist moment” is the reaction of those left behind by globalization against the fine humans of different racial and cultural extraction who have come to live amongst them. Such reaction is uniformly described with a clear tinge of disapproval and derision, as only a racist, a bigot or a moron wouldn’t appreciate the multiple benefits of diversity and how enriching it is to be able to live in close proximity to those very different from yourself. That’s the credo of “multiculturalism” in a nutshell: human diversity is good in itself, and not just opposing it, but simply being less than enthusiastic about it, is an unpardonable sin, and must be the manifestation of a most retrograde and almost psychopathic mind. Before we pass judgment on the truth of such an assessment, it may be good to separate how that diversity is experienced by the well-to-do and by the average man in the street.

For the first, diversity translates into more readily available tasty foods, cheaper home service (from cleaning ladies to nannies) and the occasional social exchange with similarly well-to-do businessmen (or, though much less frequently, businesswomen). In ethnic restaurants the service is always good (I dare say better than the local one), cheap home service is a blessing, and the businessmen from foreign cultures are invariably well educated, have interesting life stories to share and fascinating tastes and habits. Within the upper strata of our world diversity is not only an unmitigated good, but it is enjoyed on their own terms. If the quarter they live in becomes too full of people too different from themselves, they can always move to another place, more exclusive and more culturally homogeneous. The world is full of “enclaves” for rich people to feel exactly as exposed to the wider world as they find comfortable and enjoyable.

Things couldn’t be more different for the second category (your average-income worker). For him, diversity translates into a certain number of immigrants from faraway places who compete with him for low-skilled work, thus depressing his salary. They also take their children to the public school where he takes his, making the classes less manageable (language barriers). They also use the public health service he uses, making doctors less available to him. Finally, depending on how successfully they integrate with the mainstream culture he belongs to, they may make the streets less safe, or public transportation a less appealing option. And of course, none of those uncomfortable situations is remotely voluntary, as the average worker in most developed countries would have great difficulty moving to a more affluent place if he chose to (another post should be devoted to why, the world over, local governments seem bent on maintaining an artificial scarcity of land to build on, keeping real estate prices unjustifiably going mostly up in a period of global price deflation). So, unsurprisingly, he tends to be highly suspicious when the self-proclaimed “elites” harangue him about his unacceptable bigotry and racism, and tell him he should be more understanding of diversity and accept the living embodiments of it that have come to dwell in his midst. Not surprising, either, that he seems willing to vote for the demagogue who pretends to stand up for him against those evil immigrants (in the USA they also include the black population, which a significant number of whites still see as alien), against the judgment of the duplicitous, hypocritical “elite” that preaches universal love and acceptance but afterwards retreats to its expensive gated communities and leaves the bulk of the population with ever more squalid options.

So we may just agree that the resistance to unfettered multiculturalism has some logic (and maybe even some merit), and is not just the moral cesspool the majority of the opinion makers pretend it to be. What is to be done, then, about the problem of the refugees and those displaced for economic reasons? Is it then defensible to erect barriers as big and forbidding as possible to keep them out, and to prosecute and expel them if they somehow manage to sneak into our well-guarded islands of stability and order? Nope, that is not the answer either. Let’s inject a healthy dose of Kantianism back into the issue. What would we like others to do if WE were the ones seeking asylum, or just a bit of opportunity because our homeland was dirt poor and shoddily run? Certainly not to be thrown back.

We would understand our chosen destination being picky about where we settle, and even imposing certain controls on how many of our kind were admitted (or on the duration of our stay). That, then, is what we could demand of our leaders: accept as many immigrants as possible given the economic conditions of the country, clearly distinguishing those who have come fleeing violence and strife, and intend to return, from those who have fled lack of opportunity and poverty and are thus willing to work for a better life. Provide shelter and nourishment to the former, and a remunerated job (or the chance to legally find one with the full protection of current laws and statutes) to the latter. I can hear nativists (especially European ones, with unemployment figures in some countries approaching 50% for some segments of the population) yelling “we cannot ensure a job for every local, how can we be expected to provide them for newcomers!”, and, even regarding those asking only for temporary reprieve, “the whole country is full, we neither can nor want to house them for free in our midst, putting an additional strain on our public services!” Well, raise MY taxes then, as I would more gladly pay to provide a modicum of dignity to a fleeing Syrian (or Zambian, or Sudanese, or Myanmarian) than to pay for an additional first-class flight for an already too pampered local politician; but as the politician is unlikely to renounce his perks, I understand more wealth would need to be extracted from me and the likes of me.

But of course, the first order of business should be to get the economy growing again, to actively pursue full employment so there is work not only for the locals, but also for whoever else may want to come and do it, as long as they conform to the law of the land. What I see is that the answer to the Great Moral Question cannot be given in isolation, as the malaise that pushes so many working people to give the wrong answer (the morally wrong answer, the one that disregards the moral worth of the huddled masses searching for a better life in our lands and then has to justify why they are somehow less deserving of our concern and less entitled to basic rights than we are) is caused mainly by the competition for scarce resources we have forced them into with the foreigners, a competition that does not affect those self-righteously trying to impose it on others. Give the whole of the population enough resources (good job opportunities, enough choice about where to live and whom to surround themselves with, good schooling for their children, and affordable healthcare) and you will see how little racist and small-minded they turn out to be.

Easier said than done, I know, as the two motors that have kept the European (and American, and Australian, and Japanese, and Korean) economies growing, demographic growth and technological innovation, are gone for good; and of the two alternative strategies that have been postulated to substitute for them, one has been tried and found wanting (monetary loosening) and the other (expansion of public investment), although not tried as much as Keynesians would like, is not likely to produce any result at all (for reasons I explored here: man, are we screwed!). What are we supposed to do, then? Abdicate our responsibilities, as so many among us are doing? Pronounce it an unsolvable conundrum, a terrible mess, and just throw our arms up in the air? No, never, as that would only hasten the societal collapse I already identified as the unavoidable consequence of our civilizational model having reached its terminal phase. But the alternatives are hefty enough to merit a post of their own in the near future.

Tuesday, June 28, 2016

War, what is it good for? (creating viable states, may be?)

Last week I finished reading Charles Tilly’s Coercion, Capital and European States, AD 990-1992, a superb book from a first-rate mind (I have to thank one of the judges on my dissertation panel, professor Jose María Rosales, for pointing me to Tilly’s work). It triggered a whole set of reflections of great relevance to my understanding of the most desirable social units towards which we should steer our current system, which I intend to explore in more depth in this post.

The core of Tilly’s argument dovetails very nicely with my own ideas about the development of the dominant reason within the European state system, identifying the continuous armed conflict between the different societies as the great catalyst of their formation, consolidation and growth in the period under consideration. It was the need to extract additional resources from their populations to wage ever costlier wars that led the budding states down the paths available to them, be it capital-intensive development (leveraging the commercial acumen of their city-based bourgeoisie to hire mercenaries, as in the Low Countries or England) or coercion-intensive development (granting their noblemen permission to exploit their vassals to a similar end, as in Russia and Prussia). Those that did well would end up converging on the organizational form of the modern nation-state, and those that did not were finally absorbed by greater units (like most German states, Italian city-states, Occitania, Normandy, Bourgogne and most Iberian kingdoms). A surprising aspect of the author’s analysis of such a long period, over such a vast geographical area, is that although the main actors were for most of the time either directly at war or preparing for war, that didn’t translate into especially warlike mentalities, or require the instilling of an especially combative mindset in their citizens.

Since antiquity the waging of war, the hardships experienced by the soldiers and the physical exposure to potential pain and death were borne by well-segregated parts of the population, extracted mostly from the less favored classes (or directly contracted for such exploits from outside the realm, with some regions, most notably the Swiss cantons, specializing in the provision of mercenaries to every warring party that could afford them). Even when such portions of the people were led by the local nobility, the casualty rate for the latter was orders of magnitude lower than for the former. Not only was a self-contained part of the population involved in the actual fighting, but the scenario where such fighting took place was pretty limited once the main state actors were in place after the Peace of Westphalia, with the more populous urban centers of the continent (London, Paris, Copenhagen, Amsterdam, Stockholm) safely ensconced in the center of national territories, surrounded by a buffer zone that almost guaranteed they would not be visited by the indignities of a foreign invasion until very recent advances in transportation technologies (railways and motorized divisions well into the XXth century) made that possibility likely again.

Thus, although the possibility of fighting more expensive wars was the ultimate explanation of much of their social development, the European societies could live blissfully ignorant of the preeminence of such a motive, and focus instead on developing ever more successful institutions and, in the end, ever more innovative and prosperous economies. Of course, the ultimate success was defined by their ability to produce more material goods, which in turn would allow them to field more powerful armies and navies, as I have ceaselessly repeated in this forum. No revolutionary discovery here, then.

However, what does such history teach us about the importance of war (“the father of all things” in the unexpectedly prescient words of Heraclitus)? And what happens when such a source of creativity, or of societal innovation, dries up, as seems to be happening in our own times? Finally, how should we consider the influence of conflict when designing the transition to a more humane kind of society? To begin with, for war to be a net positive for the development of social systems the following conditions seem to be necessary:

·         a reasonable balance of forces between the likely contenders that makes the end result of war uncertain enough to make it infrequent enough (where the forces of one side are disproportionately greater, it will either invade and absorb the weaker part outright or impose some kind of vassalage over it; the weaker will in turn submit or resort to any alternative –pay tribute or yield territory- rather than be overwhelmed by force)

·         a dynamic of escalating costs and diminishing returns from fighting as each actor takes the fight farther from its power base (because of the extension of its supply lines, the increasing resistance of the encountered population and the greater difficulty of administering distant lands), thus constraining the amount of territorial gain that could be profitably pursued, and adding to the stability of the system

·         a social stratification system that limits the number of able-bodied males that can be conscripted at any given moment, limiting the disruption war itself causes and the drag on economic growth imposed by the maintenance of a standing army (on a side note, how much of the population can be devoted to war is greatly affected by the particular circumstances and level of economic development of the society in question: Sweden in the XVII and Prussia in the XIX centuries seem to have reached the upper limit of mobilization, that is, until Nazi Germany found itself caught in a struggle for its very existence well into the XXth, but nobody would make the argument that giving a Panzerfaust to any 14-year-old, or 64-year-old for that matter, is a sensible or sustainable idea) and freeing enough talent and manpower to pursue interests with a more immediate economic application

Note that you need all three factors for armed conflict to be a spur to technical and economic development without its costs seriously overwhelming it. For the first two, it helps to have different groups with different histories, languages and even religions but a rough demographic balance, or you end up with a “warring states” scenario at the end of which a triumphant elite dominates the whole territory (as the Han famously did in China).

The vagaries of history, climate and geography created such a combination of social conditions only in Europe, and thus it is only in Europe that “progress” took shape, and finally exploded, giving it the institutional framework to dominate most of the world in the XIX century. A bit before that, the third condition had been seriously weakened by the appearance of the mass-conscripted national armies created right after the French Revolution, and the second was obliterated by the technological advances that came to fruition in WWII (and that had already been applied, at a more limited scale, in the American Civil War, which saw the first mass movements of troops by rail and Sherman’s march through the Southern states, which severely impacted the civilian population and infrastructure). Once you have developed the conditions for “total war” the incentive structure changes totally, and war becomes entirely antithetical to and incompatible with even a shred of economic development. What did Europe and its offshoots do under those new conditions? They essentially stopped making war on their own territory (until the Balkan wars of the 90s, where a separate set of reasons took precedence), limited themselves to “projecting force” where their economic interests were threatened, and kept residual fighting forces barely enough to protect them from any mid-sized outside invasion, which was a perfectly rational thing to do, given that their safety did not depend on the size and quality of their armed forces as much as on the credibility of the American “nuclear umbrella” that sheltered them…

Unsurprisingly, both economic and institutional development have stalled, and all the public investment in the world doesn’t seem enough to kick-start them back into life. The great institutional innovation hatched after the latest carnage had shown all too clearly the unacceptable price of the old incentive system (“evolve to better make war or be conquered”) was the supra-national European Union, which made all the sense back then but has not been very capable of evolving in response to the new demands imposed by a very different international scenario, in which a similarly supra-national agent (vaguely known as “Islamic extremism”, operating under different banners) does not territorially threaten the well-established nation states, but rather floods them with displaced people from a very different cultural background, creating a wave of instability they find themselves ill-prepared to deal with.

That is of course an unfortunate state of affairs, one for which the old response (improving the ability of the state to extract resources from its native population to pour into better equipped, more numerous armed forces) doesn’t seem very well suited: essentially, that’s what the USA has kept on doing since the beginning of the new century, with quite underwhelming results. But I do not want to deal now with how such a threat should be addressed (something that would merit a post of its own), but with how such dynamics affect the optimal design of a new kind of society, be it one built on the Anarcho-traditionalist principles that I sketched in a series of posts (AT Manifesto I) or according to my idyllic view of what the future of humans here on Earth will look like (Sunny future I). One aspect conspicuously absent from the discussion of such ideal (and idealized) social arrangements was how they would defend themselves in case of external aggression. In both cases deviance from the collectively agreed norms was dealt with by a minimal police force, which could be adequate to the task only for internally bred, low-intensity, small-scale bad behavior. No provisions were made to defend the polity from a full-blown external invasion, or to deal with the potentially destabilizing efforts of a significant segment of the population receiving support (in any form: armament, training, or even just ideological inspiration) from an external agent.

Back when I penned those ideas I thought that the need for “defense forces” equipped well beyond the capabilities of a constabulary would be at most a temporary nuisance. A stable equilibrium would quickly be reached in which all institutional actors saw that it was in their best interest to get rid of such forces and jointly enjoy the worldwide “peace dividend”, devoting the material resources demanded for their maintenance to the betterment of the whole group: the utopia of everybody realizing the futility of keeping such costly resources (you cannot invade a modern democracy and force all its citizens to work productively for you anyway), “turning their swords into ploughshares” and living peacefully ever after. What European history teaches us, however, is that for millennia an alternative (and equally stable) equilibrium has been possible, in which the progress catalyzed by low-intensity conflicts more than compensates for the drag on economic development derived from maintaining the capabilities to keep those conflicts going. Even after eliminating all the armies and achieving, for a few generations, a peaceful, demilitarized world, there will always exist the possibility of a single group (or federation of groups) deciding to rearm and restart the age-old quest for world domination, a strategy that in a scenario of surrounding disarmed polities has the potential to offer a huge payoff (at least to the first actor to pursue it, thus increasing the incentives for every actor to take the lead).
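The logic of that rearmament trap can be sketched as a simple two-player game. All the payoff numbers below are hypothetical, invented purely to illustrate the incentive structure just described: mutual disarmament is collectively best, but the huge first-mover payoff of a lone rearmer makes universal disarmament unstable, while mutual rearmament (the historical "armed peace") is self-sustaining.

```python
# Illustrative "rearmament game" between two polities. Payoffs are
# hypothetical units, chosen only to reflect the incentives in the text.
DISARM, REARM = 0, 1

# PAYOFF[(my_choice, their_choice)] -> my payoff
PAYOFF = {
    (DISARM, DISARM): 10,  # shared peace dividend, collectively best
    (DISARM, REARM):  -5,  # dominated: pay tribute or be subjugated
    (REARM,  DISARM): 15,  # first mover's shot at world domination
    (REARM,  REARM):   2,  # costly standoff, but safe from conquest
}

def best_response(their_choice: int) -> int:
    """The choice that maximizes my payoff given the other's choice."""
    return max((DISARM, REARM), key=lambda mine: PAYOFF[(mine, their_choice)])

def is_stable(a: int, b: int) -> bool:
    """A profile is a stable (Nash) equilibrium if neither side gains by deviating."""
    return best_response(b) == a and best_response(a) == b

# Mutual disarmament is collectively best but unstable...
assert not is_stable(DISARM, DISARM)
# ...while mutual rearmament is a stable equilibrium.
assert is_stable(REARM, REARM)
```

With these (assumed) numbers, rearming is the best response whatever the neighbor does, which is exactly why a supra-national monopoly of violence, discussed next, keeps reappearing as the only way out.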

Which basically means that in the ideal polity there will probably need to be a single supra-national body with the monopoly of violence, dominant enough to prevent any single phyla from attempting to dominate its neighbors (and beyond). Which in turn poses the problem of how such a body is prevented from taking its prerogatives too far and ending up tyrannically dominating all the rest, parasitically taking advantage of their work to live comfortably. And we are back to Hobbes and Locke on the principles of government, and on how much of a Leviathan it takes to prevent the war of everybody against everybody else without itself impinging on their basic liberties. And, in a more down-to-earth version, it poses the problem of how the first anarcho-traditionalist phratria will ensure they are not “tread upon” by the more traditional polities surrounding them (a problem not that different from that of establishing free cities, which in many ways could be seen as slightly enlarged versions of my phratria), polities which will likely want to either subjugate them or “protect” them (for a fee, thus preventing what they would see as secession). I don’t have an answer to such problems yet, but I will keep on thinking about them and share the solutions I see in a following post.

Wednesday, June 22, 2016

Ideas having sex? I wouldn’t count on it, either

A much-commented meme that has been making the rounds on the Internet of late is that, you are not going to believe this, we live in an age of miracle and wonder, an age of unparalleled technological advance (I know, I know) thanks in part to the cross-pollination and cross-fertilization of disparate ideas from different fields, whose mixture has been enabled by the wonderful information technologies that put all that knowledge and all that thought “out there”. We owe the original concept to Matt Ridley, who presented it in his book The Rational Optimist (2010) and developed it in a TED talk that same year (Ridley's TED talkie), but it has recently gained new currency by being mentioned in a much-commented essay in the WSJ by Deirdre McCloskey (neoliberalism will make you rich (and I have a bridge in Brooklyn!)), who uses Ridley’s original insight to explain how the Industrial Revolution took place, and how since then it has kept making us richer and richer, to the extent that if we don’t mess it up (and leave everybody free to pursue their best interest, hmmm, maybe there is some unsavory ideology creeping in on the back of such lofty proclamations) it will end poverty and ameliorate inequality… just by applying what she calls “classical liberalism”, as supposedly done by the USA, China and India (but not by Brazil, South Africa and the European Union, which according to Mrs. McCloskey “have stagnated” because of their statism and misguided attempts at forced redistribution, never mind that in the latter productivity per hour has grown more than in the USA (something the Americans have compensated for by working many more hours, which from a utilitarian perspective may not be that desirable), or that according to Nick Kristof 3 million kids in the USA live on $2 a day or less: Kristof was wrong (no kidding!)).

Nothing that should really surprise you in the unofficial organ of the global plutocracy; but thinking through such malarkey I came to the conclusion that the hope in the ability of ideas to reproduce may instead explain some of the reasons for the decrease in the pace of innovation, and that it would be worthwhile to explore them in more depth to see how such a decrease could be countered. But first, I have to debunk a little of Mrs. McCloskey’s article, as to better focus on the sexual lives of ideas we should first get rid of the ugly and unsubstantiated ideology with which they try to sell us the concept. An economic historian like her should know that the Industrial Revolution started when and where it did (in England around the middle of the XVIII century) not because for the first time the state decided to respect the freedom of individuals to pursue their own interest, but rather the opposite: the state actively intervened in the protection of the domestic market through high tariffs on imports and incentives to export, and regulated significant portions of economic life (from what boats could be used to move commodities into British ports to how paupers should be treated). As Branko Milanovic reminded us recently, commenting on a book by Peer Vries (Origins of Great Divergence), the reason the “Great Divergence” happened in England and not in China is NOT that England was more liberal and less interventionist in the economic life of its subjects than China, but precisely the other way round: the intellectual landscape of England’s economic policy was dominated by James Steuart’s Inquiry into the Principles of Political Oeconomy (which, although the purest exposition of the mercantilist doctrine, bore the significant subtitle of being an Essay on the Science of Domestic Policy in Free Nations), not by Adam Smith’s The Wealth of Nations, which was not published until 1776, with the IR already well under way.
England devoted a higher percentage of its GNP to public expenditure, collected more taxes and in general intervened more actively in how her subjects produced and traded than France (as Wallerstein and other world-systems theorists have amply attested), although it is the latter which is usually held up as an exemplar of the ills of sclerotic statism, with the widespread meddling of its elites with the market being deemed the main culprit for its (comparative, and maybe entirely nonexistent) lack of development in those crucial years.

Readers of my blog know I subscribe to a completely different explanation of why England. It was they who for the first time in Europe collectively articulated an image of the life well lived that focused on the satisfaction of wants in this world, and left aside as of secondary importance the preparation for the next one (something that David Hume was as instrumental as anybody in formulating and popularizing among the opinion makers of mid-century, and that Adam Smith received ready-made from him). The geographical context is of paramount importance, as such a dominant reason in another place and time could have gone unnoticed and eventually died out, but the Europe of the time was formed by a number of opposing polities locked in continuous military struggle, which placed an enormous reward on those able to outcompete their neighbors. As the British embraced more ardently the new dominant reason with such an image of the life well lived at its apex (btw, the Dutch had started toying with it almost a century earlier, and became the world’s dominant commercial power thanks to it, with Spinoza being the pathbreaker there, albeit much less influential among his contemporaries than Hume would be, and Bernard de Mandeville the most vocal popularizer… unsurprisingly, the latter would write in English and develop his influence mostly among the British public), they could extract from their population a higher surplus, and thanks to the production of more material goods that such higher surplus was devoted to, they could overcome their main foe (the French) in the long contest for world supremacy that took place between 1714 (end of the War of the Spanish Succession) and 1815 (Battle of Waterloo and end of the Napoleonic Wars with the complete victory of British arms, which would lead to a century of undisputed domination).
Along that period relatively minor tweaks were made to such dominant reason to narrow the scope of socially legitimate desires and to clarify the rules for assigning social precedence (the justification of the social hierarchy), crystallizing in the kind of dominant reason (desiderative reason) that a different society had developed to a greater extent, and was embodying more faithfully. Not surprising, then, that such a society (the USA) finally displaced the Britons from the top of the international system. Sorry for repeating myself so much, as this is a story I’ve already told (DR History I and DR History II).

Nothing really to do with citizens being more or less free to pursue their own (egoistic or altruistic) interest, or with “the state” meddling more or less with the unalloyed magic of the market, and everything to do with how successfully social cues are given to those same citizens, from the moment they are born, as to what it is admissible to want, what a good life consists in, and whom they should yield to (or try to emulate). The current mix of those three categories just happens to have produced the most successful social system as measured by the number of material goods it can produce, but such success tells us nothing whatsoever about how “free” each individual within such a society is or how morally worthy such an arrangement is. The reason this particular dominant reason has become so widespread is that the amount of material goods a certain group can produce is very highly correlated with the power of the armed forces it can deploy, so, simply put, the most successful producers of material goods (the richest societies) have militarily crushed any challengers, either in open war (the Axis powers in WWII) or through a slow, grinding campaign of attrition consisting of proxy conflicts on the outer limits of their sphere of influence (the Cold War).

Now don’t get me wrong. I’ve always been quite critical of a line of thought, exemplified by the Frankfurt School, that equated the level of coercion imposed by the state on its citizenry in the communist and the capitalist systems, stating that the people under both systems were equally unfree, and thus that both systems were morally equivalent. That’s idiotic: there is no question that the dominant reason of our times (desiderative reason) admits of very different degrees of formal freedom, and that it is morally preferable to be manipulated into taking certain ideas for granted while at the same time having almost unlimited access to learn and discuss opposing ones, living under the rule of law with reasonable guarantees that you will not be harassed, beaten or jailed because of your stated opinions (the current situation in the “liberal”, “capitalist” West), than to be similarly manipulated but have such access curtailed, being unable to discuss a subset of ideas determined by somebody else and living under the risk of physical violence because of your beliefs (the situation in China, Russia and most developing countries).
Call me a moral absolutist (which I am), but I do believe there are some values that are non-negotiable, whose loss no amount of social manipulation can make palatable, and which cannot be debased by being surreptitiously imposed. Freedom is good, desirable, and merits being fought for, even if it is used to justify unacceptable levels of inequality. Having a certain level of bodily comfort that can only be achieved through the possession of material goods (a good mattress, a well-insulated shelter, nutritious food and tasty beer, some books, etc.) is similarly good, desirable and merits being fought for (within more stringent limits than those that the pursuit of freedom justifies, of course, as many times increasing the goods I can enjoy means a corresponding decrease for somebody else, which doesn’t necessarily happen when fighting for freedom), even if the enjoyment of such goods is used to justify an unacceptable renunciation of our basic freedom and dignity. There are biological and psychological limits to the kind of life we may want to pursue, to what we can desire and to the features we use to define the social hierarchy, limits that no dominant reason can manipulate and beyond which no amount of conditioning can take us, but that still leaves enough latitude for wildly differing systems, as history has repeatedly taught us.

So back to our initial contention: it is not the lack of involvement of the state in the economy (or in the private life of its citizens) which explains the “great divergence”, or the “great enrichment”, as the case of China since the 70s attests. What the PCCh did back then was forswear the previous dominant reason (bureaucratic reason) and embrace the latest one (desiderative reason). Since then, it has not been the party that decides the social position of each citizen, but the amount of money they can earn. Which does not mean the party has dissolved, or disappeared, as it continues actively participating in the decision of what to produce and how (it is just more cavalier about how the results of such production are distributed, leaving that to “the market” or to loosely defined “price mechanisms” in which it still intervenes actively). But make no mistake about the ultimate source of social prestige: being a party cadre is good, but being a newly minted millionaire is much better (and indeed the highest party officials seem to be hell-bent on using the former position inasmuch as it enables them to become the latter). During all these decades, it is not as if the average citizen has been made the subject of Western-style rights and freed of any unreasonable tutelage by the Party (although, compared with how things stood during the Cultural Revolution, you could say they are all free as a bird)… or as if they have been producing wildly innovative ideas, either.

Something similar happens in the more economically developed countries: we are collectively more or less as free today as we were in the 60s and 70s (well, some of us are distinctly more so; let’s remember both Spaniards and Portuguese were under dictatorial regimes back then, as were many of our Latin American brethren), but we seem to be distinctly less able to have innovative ideas. So maybe the equation of idea-producing capabilities with political freedom is just bunk… However, it would not be entirely true to say that we have completely run out of innovative ideas since the 70s of last century. I have always readily conceded that there is one area where there has been significant innovation and significant growth: information and communication technologies have doubtlessly progressed phenomenally, which has blinded many observers to the fact that everything else has basically been hung out to dry.
And that’s precisely the root cause of the uneasiness with which I’ve been thinking of late about the overgrown recognition given to the advances in ICTs: maybe both trends are not just simultaneous, but causally connected. Maybe the reason we have collectively stopped having innovative ideas (or being able to innovate beyond “fake”, “simulated”, “virtual” reality) is that we collectively devote too much attention to the easily available information that the new technologies put so effortlessly in front of us. I’m an avid fan of Doonesbury by Garry Trudeau, and I remember being especially struck by the strip published on June 26, 2011, not because of its brilliance, but rather because of its boneheadedness. I’ll let you judge:

I’ll pass on the first two questions, and focus on the third one. Just asking what the three branches of moral philosophy are requires you to have such a massive background (which probably includes already knowing the answer) as to make Google’s contribution almost meaningless (let’s leave aside for the moment the point that such a classification is pretty contested, and most moral philosophers would challenge the way it is posited). Getting to know what we understand as moral philosophy, and what it means that it may have different (or differentiated) fields of inquiry with (slightly) different methodologies and references, requires a kind of deep learning that can only be achieved through serious engagement with long, convoluted texts, definitely not by browsing a few web pages and jumping from one snippet of information to the next. I dare say that for some pursuits the kind of technological advancements brought by the ICT revolution (the internet, mobile telephony and the like) are not only not of much help, but may be positively detrimental.

Philosophy is one of those pursuits, and surely you may argue that Philosophy is no good at all, so losing or degrading it is no real loss. But it seems to me that the way most social sciences are practiced today has been affected for the worse by the easy availability of not-necessarily-very-relevant information enabled by the Internet. Any student of sociology can google the main terms of the issue he wants to develop and immediately have at his fingertips a gazillion hyper-obscure references by people all over the world who have most tangentially touched on such an issue. Does that enable better scholarship, more informed commentary, a richer understanding through heightened discussion? Nope, not at all. What it enables is the unreadable style that has become prevalent in the peer-reviewed publications that constitute much of what today passes for economics, psychology, anthropology, sociology and political science, in which for every page of argument you have two or three pages of “critical apparatus” (notes and bibliography) that add little to the understanding but at least help to mask the lack of substance of what is being (barely) said.

When in seconds you can arm yourself with the opinions of twenty thousand “experts”, you don’t really need to devote as much time to mulling over the core of their contentions and the implications of each turn of their arguments, to see how they help you sharpen your own. When all you had were the main books in your local library, plus the ones you deemed interesting enough to mail-order, and maybe some magazine articles, you had to do crazy things like actually reading them (and not just hastily perusing them in search of some quotable paragraph, after which you could forget the whole jumbled thing).

“Yeah, well, so the internet has killed the social sciences; you were the first to say they serve no purpose, so again, what’s the loss?” Well, there is a difference between serving no purpose and being of no use, and an even greater difference between both and being of no value. The social sciences, oxymoronic as they may be today, have provided an enormous service to humanity, precisely by helping to explain what it means to be human, and how such humanity is advanced or hindered by the way we organize ourselves in groups. Understanding that (which is an inseparable part of understanding ourselves) has been a main drive of ours since we have had symbolic language (already when we shared the Earth with the Neanderthals, for those unfamiliar with the remote past of our species), and I just decry that we have come to a position where little further development is to be expected.

But, furthermore, I am more and more of the impression that such an ailment is not exclusive to the “humanities”, or the “social sciences” (moronic as I find the label), or the Geisteswissenschaften. The more I think about it, the more it seems to me a rot (the easy availability of trivial or barely related information that becomes more and more difficult to filter) that has extended to and colonized the fields of natural science, medicine and engineering as well (my regular readers should know I consider the last two epistemically equivalent). I mentioned in a recent post (Why tech stopped evolving, especially points 1 –the dead weight of tradition– and 2 –the effect of too much regulation–) how some of the underlying reasons for the lack of technological progress were to be found in a society too rich in information and experience, one that tried to orient itself in that vast wealth by imposing a framework of regulatory standards that made things worse, and robbed potential innovations of much of their dynamism.

And the problem does not end there. It is bad enough that we have gone methodologically astray in most disciplines by making so much irrelevant information not just available, but considered an essential part of the way we present any sound argument (at the expense of the soundness itself); it gets worse when you realize that the consolidation of such methodological tomfoolery is accompanied by a mindset that almost imposes distraction and impedes the kind of concentration required to make progress in any organized field of knowledge. I’m thinking here about a phenomenon that ranges from the relatively mild (the otaku) to the more extreme (the hikikomori), which I see as more and more prevalent (and the more so the more technologically advanced and materially rich the society is), and which draws a significant number of youths who count themselves among the most gifted of their generation (and who should thus be main contributors to collective innovation in their prime years) to the pursuit of what, for lack of a better term, I will call “immaterial” development (under which I group manga fandom, playing videogames, compulsively participating in forums, or simply watching inordinate amounts of cat videos on YouTube), rendering them more and more unable to contribute to the development of the “real”, “material” social group to which they should belong. But with this I feel I’ve touched on a far greater subject (one that connects with my concern about the global decline and accelerating disintegration of our whole civilizational model) that will require an additional post to be sorted out.

Friday, June 17, 2016

No, Software is NOT eating the world (the misguided prototype paradigm)

As promised in a previous post about Robert Gordon’s The Rise and Fall of American Growth (Man, Gordon's book is Good), I would like to expand on what I consider a major source of error for techno-utopians and cornucopians: namely, the fact that their limited professional experience (normally confined to academic life, think tanks and some gigs as “strategic” consultants for big corporations, far enough from the nuts and bolts of daily operations) has left them entirely ignorant of the dynamics (and pitfalls) of rapid prototyping, which has become a major distorting factor in most innovative endeavors.

I will start with a somewhat generic remark on viewpoints: although pure impartiality (what Thomas Nagel felicitously called “the view from nowhere”) is a noble ideal we should strive for, I understand that it is damn difficult to achieve. People think, and write, necessarily from their own experience, and what they have had a chance to know and experience firsthand tints what they are able to think, and how they come to think about it. When I finally understood what Marx had done for a living (barely) for most of his adult life, much of his writing finally became intelligible to me: he was essentially a man with a “journalistic sensibility” (what other sensibility could he have, if all he ever got paid for was writing articles for newspapers and magazines?). He never had much time or resources to think “deeply”, academically, about any issue pertaining to how society worked (all he had to go on were the secondhand opinions of British thinkers of the previous generation, the “classical economists” available in the British Museum library, whose reading he had to cram haphazardly between commissioned work), or to organize and direct any sizeable group of people (not a political party, not a trade union, not even a neighborhood association, and certainly not any of the “international workers’ movements” with which he was affiliated or from which he was kicked out), so he was understandably clueless about all of them.

Similarly, to understand the thought of some of our current opinion makers we need to get a grasp of where they come from. Not necessarily to undermine their arguments (which have to be considered, as they say, “on their merits”), but to put them in perspective and better ascertain what kind of bias and distortion they may have been subjected to. We can then group the proponents of a new golden age based on the acceleration of software-centered “innovation” into two broad groups: the software developers themselves (be they entrepreneurs who ended up directing their companies, like Bill Gates, or financiers who ended up gaining some understanding of how the companies they put their money in worked, like Marc Andreessen) and the journalists who have chronicled their rise and written books about it (although they sometimes hold some academic position, the role they play is mainly as mouthpieces and popularizers of the former group’s worldview, as we will see).

What kind of distortions can we expect from people who have worked all their lives within the software industry? A considerable number, as it happens. We have to start by noting that the industry is a pretty recent creation. The first corporate computer systems were conceived and installed at the end of the 60s, but their propagation didn’t start in earnest until the beginning of the 80s (I distinctly remember how, at the very beginning of my own professional career, and before that, while I was at university, most desks did not yet have a computer, even in advanced fields like engineering), so the first batch of full-time programmers that operated in a somewhat structured, well-established industry is not yet approaching retirement. So of course they see work as being revolutionized by the software they install, and which they increasingly use to develop their trade. But, unsurprisingly, machine-tool operators (or engineers) have a much more nuanced view, as they see software (and computers in general) as an additional tool that fits with the other tools they have traditionally used, in a context that hasn’t changed that dramatically. That has, in other words, evolved without that much “revolutionizing”.

An additional factor to be taken into account is the extent of “self-consumption” within the industry: new languages and tools are increasingly used to build (supposedly) more powerful programs that in turn can be used by the industry or by consumers. But programmers are usually pretty ignorant about how the external world uses (or benefits from) their end products, even requiring a whole new class of workers (one that calls itself, in a revealing, self-congratulatory way, “consultants”) to translate what they produce into terms that end users can understand. Not that being so self-congratulatory is warranted in any way, as the track record of what software does to companies and individuals is pretty abysmal (an old statistic from the 90s revealed that 50% of software implementations in big corporations were never completed; if you add the roughly 30% that were completed but failed miserably to achieve any of the “business objectives” that had been set for them, four out of five projects were a colossal waste of time and effort for all involved).

Finally, the kind of people attracted to the industry, not to put too fine a point on it, leads to an overrepresentation of what is traditionally known as “nerds”: individuals with few social skills who have spent an inordinate amount of their lives playing video games (software) and discussing the evolution of computing power as if it would deliver them from all their shortcomings. I’m not saying every single participant in such a big sector of the economy is a cartoonish version of Sheldon Cooper, but I do think you can find a higher percentage of people with that kind of mental setup in software development (and related industries: IT consulting, support, telecoms, etc.) than in retail, manufacturing or construction.

So the first kind of people touting the opinion that “software is eating the world” or that “every business is a digital business” are the people most set to benefit from such a purported state of affairs, which should already make us suspicious of the accuracy of such statements (people always tend to see as more likely those events that favor them). What about the second kind, the journalists and popularizers? Shouldn’t they have a wider view, less invested, and thus more objective? Sadly no, because of two factors, which I will call the “journalistic bent” and the “prototype fallacy”.

The journalistic bent

There is a reason I chose my enhanced understanding of Marx to exemplify the importance of understanding a man’s profession in order to calibrate the validity of his opinions. As it happens, when such a profession consists in writing for the “general public” (what in more elitist ages was known as the “uneducated masses”, but who would proudly claim to be educated in these egalitarian days?) the opinions proffered are to be taken with more than the traditional grain of salt. What indeed is the skill that the journalist/popularizer has to cultivate during his professional career to make a living? An optimist would say it’s the ability to clarify complex ideas, to identify in the confused mass of facts and opinions the most salient features that can be communicated and easily comprehended by his peers. Yep, sure… if you think that is how the real world works, I still have that bridge in Brooklyn I mention so frequently, and I’m surprisingly willing to cut you an unbeatable deal. That skill would be valuable in a world of discerning publics that could differentiate between streamlined prose that keeps close to the truth and an unadulterated load of claptrap. That is most emphatically NOT the world we live in, so having such a skill is of exactly ZERO use to any aspiring journalist.

What the aspiring journalist needs is the ability to sound knowledgeable on any issue, to appear authoritative even when he doesn’t harbor the slightest, tiniest, most minute idea of what he is talking (or writing) about. What the schools of journalism do indeed teach (in a more or less veiled way) is how to feign authority, how to make audiences trust you, how to, if needed, dissemble with confidence, poise and self-assurance, with absolute disregard for the gaping chasm that may exist between what one communicates and how things really are (it helps to live in a culture where finding out “how things really are” has been questioned and doubted and practically banished from public discourse). What they teach, in fewer words, is how to bullshit the audience, assuming it will not notice anyway. Not surprising, then, that what fills 99% of the media, on the airwaves and in print, is none other than bullshit, and a sorry testament to the state of the world is that 99% of the audience buys into it and swallows it hook, line and sinker.

The prototype fallacy

So we live in a “mass media society” where the lack of education of the audiences (both in highbrow culture and in complex technical aspects that require some command of STEM subjects to be understood) and the lack of scruples of the communicators may give some salience to lies like “we live in an age of unparalleled technical advance” or “the economy is becoming more and more digital, so faster computers are all we need to build a better world for all”. That still doesn’t explain why such misguided statements are not revealed, sooner rather than later, for the baloney they really are. In most ages of humanity there has been a combination of gullible publics and hucksters trained to exploit them, and (false) ideas like “the Earth is flat”, “the Earth is at the center of the solar system” or “bodies in free fall move towards the floor with constant velocity” were finally exposed as fraudulent and discarded (note I’ve chosen three examples of ideas about facts, not events, as discussed in my post about Collingwood and the viability of social sciences: why social sciences suck). True, and in each age those false beliefs have been propped up by a subset of the contemporary dominant reason that made them appear more “believable”, more likely than they really, prima facie, were. In the case of the centrality of the Earth it was Christian dogma and a certain interpretation of a passage of the Bible (which didn’t prevent Protestants from abandoning the idea much sooner than Catholics, something that would merit a post-long disquisition of its own). In the case of the constant speed of a falling body it was the authority of Aristotle (and the lack of instruments with the required precision). I will argue that in the case of the ongoing acceleration of technical progress (similarly fallacious) the plausibility derives from what I’m calling the “prototype fallacy”.

What is a prototype? In the software world, it is a program that can be built very fast to show how a function will be performed once it is fully developed, to assess the viability of the proposed solution and to identify the interfaces it may require with other systems (or with humans, as prototypes are many times used as “proofs of concept” to validate the design of an application with its future users). Prototypes are wonderful tools (there is a whole school of software project development structured around RPB, Rapid Prototype Building, based essentially on iterating around them until you have a wholly working system, and the family of Agile methodologies evolved from that) BUT they are very dangerous too, as they systematically lead their less-trained users to underestimate the effort required to finish the application and to overestimate the value and capabilities of the prototype itself. A common joke of the 90s told of a programmer who died, went to the pearly gates and was presented by St. Peter with a picture of what Heaven looked like (a bit dull) and by the devil with what Hell looked like (lots of wild fun and exotic pleasures) so he could decide where he wanted to spend the afterlife. He chooses Hell, obviously, but once there he is tossed into a cauldron of boiling oil, surrounded by flames and the screams of the damned souls that fill the place. When he complains to the devil, he receives the answer: “man, that was the prototype; this is what it ended up looking like when we finally could build it” (and of course, he cannot complain, knowing all too well that this is how these things work).

The thing about prototypes is that they seem to have 90% of what the final system needs already built in. Only a few interfaces with external systems have not been developed, with the prototype instead calling a “dummy” that behaves “exactly” like the external system will. And some complex algorithms have not been finished, so a simple routine that always gives the same result (“not that different” from what the algorithm will calculate) has been plugged in. And some controls and interactions within the user interface may need to be tweaked to improve intuitiveness and user-friendliness, but those will surely be minor changes that require very little effort. So it is typically assumed that, even being extremely pessimistic, with a little final push and an additional 20% of effort the application will be ready for roll-out (the remaining “10%” of functionality should take no more than 10% of the effort already invested, 20% tops).
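The way a prototype fakes completeness can be sketched in a few lines of code. This is a hypothetical illustration (the function names and figures are mine, not from any real project): the hard parts are stubbed out, yet the demo runs end to end and looks nearly finished.

```python
# A hypothetical sketch of why a prototype "looks 90% done": the hard
# parts are stubbed out, but the demo runs end to end and impresses.

def fetch_customer_data(customer_id):
    # "Dummy" interface: behaves "exactly" like the external system will...
    # except the real one has authentication, timeouts, retries and
    # schema changes, none of which exist here.
    return {"id": customer_id, "name": "Test Customer", "balance": 1000}

def compute_risk_score(customer):
    # Placeholder for the unfinished complex algorithm: always returns
    # the same value, "not that different" from what the real model will.
    return 0.5

def demo(customer_id):
    # The end-to-end demo that gets shown to management.
    customer = fetch_customer_data(customer_id)
    return f"{customer['name']}: risk = {compute_risk_score(customer)}"

print(demo(42))  # the demo works, so the app must be nearly finished...
```

Everything that makes the real system expensive (the external interfaces, the algorithm, the error handling) is exactly what the stub hides.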

Any seasoned project manager knows where this is going. That “10%” of remaining functionality simply takes forever to be completed, tested and integrated. And once it is integrated, new bugs and problems keep showing up. The interfaces have to be redefined. The minor changes in the user interface require major reprogramming of the database and the data accesses. The algorithms demand additional data that in turn imply changes in a myriad other parts of the application. Lucky is the team that can complete the application with only four times more effort than what it took to develop the prototype. Hence the famous “Pareto rule”, so well known by planners and schedulers in software projects, which states that 80% of the functionality is achieved with 20% of the effort (and in 20% of the time), and getting to 100% requires the remaining 80%.
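As a back-of-the-envelope illustration (the numbers are mine, chosen only to match the rule of thumb above), compare the optimist’s estimate with what the Pareto rule predicts:

```python
# The optimist's estimate: the prototype is "90% done", so finishing
# should take about 20% more effort on top of what was already spent.
def naive_estimate(prototype_effort_days):
    return round(prototype_effort_days * 1.2)

# The Pareto rule's estimate: the prototype's 80% of functionality took
# only 20% of the total effort, so the total is five times the prototype.
def pareto_estimate(prototype_effort_days):
    return prototype_effort_days * 5

proto = 100  # hypothetical person-days spent on the prototype
print(naive_estimate(proto))   # 120 person-days promised to management
print(pareto_estimate(proto))  # 500 person-days actually needed
```

The gap between the two numbers (a factor of roughly four) is exactly the “lucky is the team” margin mentioned above, and it is what turns a confident demo into a late, over-budget project.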

In the world of power engineering the difference between a prototype and a working application is not dissimilar to that between an experimental reactor and a commercial one. In the words of Hyman Rickover (one of our patron saints):

“An academic reactor or reactor plant almost always has the following basic characteristics: (1) It is simple. (2) It is small. (3) It is cheap. (4) It is light. (5) It can be built very quickly. (6) It is very flexible in purpose. (7) Very little development will be required. It will use off-the-shelf components. (8) The reactor is in the study phase. It is not being built now.

On the other hand a practical reactor can be distinguished by the following characteristics: (1) It is being built now. (2) It is behind schedule. (3) It requires an immense amount of development on apparently trivial items. (4) It is very expensive. (5) It takes a long time to build because of its engineering development problems. (6) It is large. (7) It is heavy. (8) It is complicated.”

More to the point, most research on AI is barely passing from the stage of prototyping to the stage of “real” development, so expect some delays until it produces something resembling fruitful applications. The Google autonomous car? A fancy, costly prototype, which the company is still barely starting to grasp how to scale. “But it has driven millions of miles with just a handful of ‘incidents’!” I can hear you say. Nope: it has driven a few tens of miles around Mountain View, CA, millions of times, which is a totally different thing. And every time, a number of vehicles had to be sent in advance to reconnoiter and ensure there were no changes in the roads, no unexpected works, no fallen branches or large pools caused by heavy rain; even new road signs had to be recorded, digitized and downloaded into the self-driving vehicle… something nobody seems to have thought would need to be done regularly across every single road of the USA that was to be made accessible to the “autonomous” car. They still have to do 80% of the development to have something remotely marketable, and I doubt they have either the financial resources or the stomach (or the knowledge) to do it. My prediction is that in the next five years they will drop the whole endeavor without much fanfare, as they are doing with that other much-ballyhooed (back in the day) project, Google Glass.

In the energy sector, our best, most advanced bet to ever get to a “cheap & clean” source of energy (fusion) is ITER, which has not reached the prototype stage yet, 64 years after the first certified fusion reaction on Earth (the detonation of “Ivy Mike”, the first hydrogen bomb) and 48 years after the completion of the first tokamak, and which is not even intended as a prototype of what a commercial reactor will look like. But not to worry: you have people confidently asserting that the lack of real progress in fusion can be more than compensated for by (I’m sure you guessed it) advances in software: Energy is transitioning into software. Look, I’ve been hearing this same claptrap since the 90s of last century: smart meters, batteries, smaller producing units and the like will allow us both to wring more out of the current production facilities and to transition to cleaner, more flexible sources. It hasn’t happened in 20 years, and I don’t think it’s gonna happen in another 20.

So what should you make of Brynjolfsson & McAfee confidently stating that we are in an age of wonder, and that more is yet to come? Just conclude that they are deluded: they have been shown a battery of prototypes by some executives within the industry and, not having managed a big project in their lives, just did not have the knowledge to put what they were being shown in context. More worrisome is the situation of Bill Gates, who should know better, but still dismisses the learned scholarship of Robert Gordon and refuses to acknowledge that, as much as he would like to think otherwise, innovation is not doing much for 99% of the population. I can only assume he has spent so many decades occupied with running Microsoft (and his foundation since he left the software giant) that he has forgotten how unfounded the optimism of the prototypes he is being shown all around really is, and has ended up drinking uncritically the Kool-Aid of his peers.

Just remember there is no reason at all you, dear reader, should be drinking it too. The next time you hear (or read) about a new gewgaw about to revolutionize the world, apply a healthy dose of skepticism and remember that, very likely, you are being sold a prototype as a product close to completion. Do not fall for it. 

Wednesday, June 15, 2016

Bye bye UK

As the date for the UK vote on the country’s membership in the EU approaches, it is becoming more and more clear that, as much as most of the political establishment and the upper classes would like the “remain” option to prevail, “leave” will win the day. Which kinda sucks, as I’m a great fan of British beer, and fear that its price in my corner of the world will go up after the British are penalized with heavy import tariffs for their defiance. Thankfully, many of the best craft beers lately come from Scotland, which I hope will secede from Great Britain as soon as the vote counting finishes, and which would be quickly readmitted into the EU. Now seriously, I am going to devote this post to analyzing why the Brits will likely vote to leave against what the most educated amongst them tell the masses is their best economic interest, to what extent that is true, and how I would vote if I were them (not that anybody should care, but I can envision more of these referenda in our collective future, as the European project bleeds legitimacy and attractiveness with every member that decides to follow the British example; and make no mistake, after the Brits there will be more votes like this).

Let’s start by recapping who is for leaving (“Brexit”) and who is for staying, what arguments they have made publicly and what arguments really animate them (as we will see, the two can be substantially different). The Brexit party is animated by a core of unrepentant Eurosceptics who for decades have brooded within the Conservative party, claiming that “Europe” was taken over from its very inception by socialists and welfarists who have produced a monstrous bureaucracy with no real accountability to any single country’s electorate, one that limits economic growth through the imposition of needless regulation, the “crowding out” of private initiative and the encouragement among its member states of a policy of high taxes and excessive safety nets. Getting out, for them, is all about regaining “control” over the UK’s finances, its traditions (including a tradition of laissez faire in the economic realm that they see threatened by the deluge of norms and laws originating in Brussels) and its frontiers (being free to decide whom they admit and allow to settle). The last point is the key one, as what polls show is that the core of the movement doesn’t lie within the Tories, but within the less educated (on average, because the intelligentsia in England, as in so many other places, has historically tilted heavily towards the left) Labour voters who seem to be utterly disgusted by what they see as an unconditional surrender of the state to the foreign imported currents of multiculturalism and an imagined “alien flood”.

So the democratic majority that very likely will take the UK out of the European Union will do so out of grievance and mistrust, as a protest against a society they see as too welcoming of the mythical “other”, be it a Polish plumber or a Pakistani purveyor, a Hungarian Hussar or a Nigerian Naturalist, a Hong-Kongese Hoodlum or a Bangladeshi Burglar. A society that in their eyes has not only failed to prevent so many alien people from coming and settling in their midst, but is actively showing its preference for them, giving them all the advantages of Britain’s generous welfare state rather than to the native born, impervious to the fact that such preference is mostly imagined (see this recent article by Polly Toynbee in the Grauniad: Be careful what you wish for, Brexiters).

A bit of good ol’ fashioned racism and a bit of a sense of national identity seen as under threat in times of turmoil and economic uncertainty, nothing new or exciting here, folks. Add to that the festering trauma of the loss of empire only six decades ago, which turned the country from the towering superpower of yore into a run-of-the-mill middling democracy, one that has been losing ground to Germany ever since the end of WWII (side note: do not forget that since the Peace of Westphalia the central focus of British diplomacy and international stance has been precisely containing Germany and the German-speaking peoples of Central Europe; not my thesis but Brendan Simms’, in Europe: The Struggle for Supremacy) and you start to see why they seem ready to forego some points of GDP growth in the coming years (the most clear-eyed economists give a range between 2 and 4%, which in the present scenario of anemic productivity gains may mark the difference between minimal increases in per capita output and downright depression) to gain what they see as a modicum of independence and dignity (again, most markedly the dignity of not having to put up with so many foreigners, the fact that the most troublesome amongst them come from outside Europe be damned).

The sad truth is that just having the debate in those terms is already a victory for the forces of reaction. If you were asked to choose between freedom and a few more coins, what would you choose? Especially when the loss of coin can also be disputed and even outright negated, as liberation from the “heavy bureaucracy” will supposedly allow the British exchequer to save 350 million pounds a week (a figure thoroughly debunked) and unleash the legendary entrepreneurial spirit of the nation. Which takes us to the side of those making such feeble attempts to convince their countrymen to stay. Nominally the head of the Tory government and his ministers, the head of the opposition party (Jeremy Corbyn), the leaders of every UK company north of 100 employees and every writer, performer, university professor and publicly known figure you can dream of, with the “serious” newspapers and the venerable British Broadcasting Corporation on their side. But all those concerted voices of reason and concord, all those sensible and exemplary citizens, have been unable to convince a majority represented by a ragtag band of disreputable politicians (Nigel Farage and Boris Johnson) and the tabloids.

Such are the whims of democracy, and they just reinforce the idea I expressed in a previous post (on democracy's health) that representative democracy may have stopped being the most efficient system to aggregate the multiple preferences of our societies, as in the absence of “natural” external models (“outgroups”) against which to compete in a structured fashion it makes some fractions of the “ingroup” turn viciously against each other (fabricating their own convenient, at-hand “outgroup”) and, lacking a mutually agreed criterion for solving their differences, end up resorting to numbers and humiliation. Again, as Mrs. Toynbee pointed out in her article, linked above, the biggest concern comes from what may come after the referendum, as the passions aroused by the winning side will prove difficult to contain once they have been legitimized by the popular vote. Once you confirm the strength in numbers of your position, nothing prevents you from demanding ever greater submission from those who disagree with you. If you voted to leave Europe, economic consequences be damned, because you were fed up with the Pakis, and the Indians, and the Africans in your neighborhood, what will prevent you from actively pursuing their uprooting once the economy predictably turns for the worse and the size of the pie that can be distributed consistently shrinks? Class solidarity? Basic human decency? It doesn’t seem either of them had much impact when appealed to in the current campaign…

And probably this is what tips the balance for me regarding how I would vote if I were British (or how I will likely vote when the turn comes in my own country). Listen, I get that Europe as currently constructed is a sham. That the Euro is a costly failure, a lovely common project that got hijacked by an unholy mixture of well-intentioned Germanic fundamentalism (indebtedness is always bad, no matter what you use the money for, and inflation is the work of the devil, to be avoided at all costs) and a difficult-to-predict double whammy of socioeconomic tsunamis (they couldn’t foresee that the all-too-evident demographic collapse unfolding under their noses would be compounded by the exhaustion of the capacity to innovate of the societies they were tasked with overseeing). If I could choose to stay within the unfathomable and not-too-overtly democratic institutional structure of the European Union (and keep the freedom of movement) while getting rid of the Euro, I would do it in the blink of an eye. But unfortunately that choice is not on the menu, neither in my country nor in England. What is on the menu is leaving the Union entirely, forsaking your country’s ability to influence the future development of the continent (limited as it may have been before, it was something) and jeopardizing your economy’s ability to trade freely within what is still the world’s biggest trading bloc. You may regain some similar ability by bilateral negotiation, but it will take time, the result is uncertain and your position will be undoubtedly weaker. And, in the case of England, they do not have the Euro millstone around their neck in the first place!

So, given the doubtful moral stance of the leave position, the unsavory company I would be in, and the fishy logic of the potential economic advantages (which derive from not being part of the single currency to begin with), I think I would vote to stay. There is another consequence of leaving that also has to be considered: the Scots would get a new argument for secession that may finally push them to break the Union, as the EU is much more popular north of the border than south of it. Not that the jingoistic followers of Mr. Farage seem much troubled by that prospect, but it should give some pause to the conservative followers of Mr. Johnson, for whom national greatness and cherished traditions would seem better served by keeping the Act of Union of 1707 alive.

The consequences for the rest of the continent are similarly dire. The stock exchanges are already pricing in the loss of value from a Brexit, seen as a harbinger of more protectionism and less free trade. Within the EU, Germany would be far more dominant without the (already feeble) British counterbalance, and France would be made more conscious of its subsidiary status. Furthermore, a Europe more heavily centered around the German-inspired principles of sound money and austerity at all costs is wont to be economically even more sclerotic than it has been until now (some feat!), prompting other countries to leave (the Scandinavian ones to pursue greater success outside, and the Southern ones unable to reconcile German orthodoxy with the plunging output their choked populations will be able to produce). Bad news all around, but bad news that is unlikely to change anybody’s mind in the lovely island, as at this point they’d rather feel like Samson making the Philistines’ temple come crashing down over their own heads than like Jonathan Swift’s Lilliputians, trying to secure another thread of twine to keep the gigantic Gulliver prostrate. All we can say is that in the long run, the role of the Lilliputian is far more useful, and far more beneficial, than that of Samson.

Monday, June 13, 2016

I don’t care what the pundits say, Hillary has a much steeper hill to climb than Trump

I’ve declared many times that I’m a junkie of the endless American electoral process, not because I see it as a paragon of democratic virtue, highlighting the benefits of having a strong civil society profoundly engaged in the process of choosing its representatives and giving each option thoughtful consideration, but because it always manages to put up an entertaining show. This presidential election cycle has already been full of drama and surprises, and not the least of them is that the Democratic party has had much bigger difficulties choosing its nominee than its adversaries, the Republicans, when a year ago it seemed clear that the former Secretary of State, Senator and First Lady Hillary Clinton would coast to her party’s nomination, whilst the Repubs were mired in a very public fight to select between seventeen potential candidates with very heterogeneous levels of recognition, seriousness, gravitas and apparent viability in a presidential election.

Now that the primaries are mostly over and the final contenders have been decided, most opinion makers seem to think that the Republicans have made a terrible mistake, as their standard bearer is transparently unfit for the highest office in the land, and although he may have played well to an apoplectic and ill-informed party base, whose better judgment was clouded by anti-establishment rage, he will be unable to attract the sizeable number of moderates who have ended up deciding every presidential election since time immemorial. The verdict tends to be more mixed regarding the Democratic nominee, because although she is widely disliked (especially by those politically leaning towards the right, no shit, Sherlock!) she is seen as more capable of “pivoting to the center” while keeping most of the support of the party base. What I will argue in this post is that such valuations of each nominee’s prospects are deeply flawed, and originate more in the prejudices of the commentators than in a dispassionate appraisal of the voters’ mood, and that Hillary’s position in particular is considerably more precarious than the pundits think.

First let’s review Trump’s position. Although soon after he secured the nomination some polls that cannot be discounted as hopelessly biased (ABC News/Washington Post, no less) were announcing a tie between him and Hillary (with Trump slightly ahead), most analysts have discounted them as a result of the different moments in the primary cycle, with Trump being already the nominee and thus benefitting from Republican voters uniting behind him while Mrs. Clinton was still battling an unexpectedly serious contender in the Vermont senator Bernie Sanders, who was costing her an increasing amount of goodwill within her own party (more on that later on). Those same analysts keep telling us that a) we shouldn’t read too much into polls so early on, b) once Hillary clinches the nomination (even more so if Mr. Sanders finally recognizes the futility of continuing in a race he can’t win, and endorses her) she will consolidate the Democratic vote and c) the campaign will highlight Mr. Trump’s many weaknesses (Trump University, the questionable management style that allowed him to stay rich while the casinos he owned went under, his racism, misogyny and overall lack of knowledge of any issue outside of real estate development…) and make the “reasonable majority” of the electorate sour on him.

I’m not yet ready to throw such arguments overboard, as each one of them seems prima facie quite plausible, but I cannot avoid pointing out the similarity with the scenario most pundits painted early in the primaries, when polls were showing unequivocally that Trump was the most widely preferred of all the Republican candidates. Back then pundits of every stripe, from the right and from the left, made essentially the same argument we are hearing now: “we are in position A, where Trump leads. As it is unimaginable that Trump becomes the nominee, in June 2016 we will be in position B, where Trump has sunk in the polls and some other candidate (pick your choice) will be the nominee”. However, what would happen to transform position A into position B was never clearly stated. What we hear now is “we are in position A, where Trump and Clinton are technically tied (more or less). As it is unimaginable that roughly half of the electorate can seriously consider voting for such a con artist, come election day in November we will be in position B, where Hillary will lead by 20 points or more”. But again, what happens to move us from A to B is anybody’s guess, as neither a) nor b) nor c) is likely to cause such a shift in the polls.

But not only is there no clear or convincing rationale for why Trump’s support may collapse in the next five months; what I will argue is that Hillary has vulnerabilities of her own that may cause her support to waver and, if not outright crumble, at least have serious difficulties equaling that of the casino and real estate mogul. Let’s turn from Trump’s unlikely position (undisputed standard bearer of the Republican party, with conservative voters coalescing around him as they have done around any candidate in previous electoral cycles, much to the chagrin and despair of the specifically Republican commentariat) to Hillary’s. I already noted that she is entering the campaign as one of the most disliked public figures in recent memory, surpassed only by David Duke and Trump himself. But I believe that there is a subtle difference between who dislikes Trump and who dislikes Hillary that has the potential to have a tremendous impact on the election. To put it shortly, Trump is mostly and most ardently disliked by people from his outgroup: blacks, latinos, progressive women, and the urban white educated upper-middle class on the two coasts. People who weren’t (and aren’t) going to vote for him no matter what but, and here’s the catch, constitute roughly 40% of the electorate, and are just a fraction of the whole Democratic base. On the other hand, who most viscerally dislikes Hillary?
Old white males (especially those among them exposed to more than two minutes a day of Fox News), their wives, blue collar workers in the industrial belt and in rural areas and, here is the big thing, the young university students who in normal conditions should be mostly part of the Democrats’ roster, but this time seem to be positively repelled by her, in good measure because of the fervor with which they bought the idea propagated by Bernie Sanders’ camp that she is a puppet of Wall Street and a foreign policy hawk, and that her election would mean more of the same (more militaristic aggression against unsuspecting foreign lands, more inequality and more boondoggles for the well-lobbied and well-connected). What this means is that Trump’s abysmal approval ratings may not cost him much. Rather the opposite: they allow him to take for granted a whole section of the electorate, and openly demonize and antagonize it, which in turn helps him gain points with another, bigger part. That’s the essence of the populist’s game, and what makes populists so dangerous for any republic: instead of respecting checks and balances and ensuring a modicum of well-being and representation for everybody, they pit one group against another, and promise the bigger one the spoils from the total annihilation of the smaller.

But Hillary’s not-so-bad unfavorability is a much bigger problem for her, as it partially overlaps with members of the coalition she needs to mobilize in November to win. At this point, all you have to do to get a taste of the size of that problem is read the comment section of any political article or opinion piece in the NYT. Between 30 and 40% of said comments are by super-angry Bernie supporters who loudly proclaim that they will never vote for Hillary, that the election has been stolen from them by a corrupt establishment, and that they would rather vote for Trump and see the whole falsely representative and rotten edifice come crashing down than renounce the purity of their ideals. Most commentators think that is just a manifestation of the passionate nature of youth (although many a Bernie bro and sis is well over 50), and that once the fairness of the process through which Hillary won sinks in they will accept the result, make peace with the candidate and end up falling in line and duly voting against Trump. Maybe they will, maybe they won’t, and subsequent polls may help us calibrate to what extent such movement and acceptance is taking place, but I wouldn’t bet much on it happening. The level of bile and disaffection shown by the Vermont senator’s followers seems not just quantitatively, but also qualitatively different from anything I’ve seen in previous cycles (from Dean followers when Kerry was chosen as candidate to, yes, Hillary followers when Obama was chosen instead).

Does that mean that Hillary is doomed, and that the unthinkable scenario of a Trump presidency may really come to pass? Not necessarily. As Brian Beutler in the New Republic recently reminded us, Hillary doesn’t need all that many of Bernie’s followers to win come November (How many Bernie supporters does Clinton need?). Unfortunately for her, those followers are not the only part of her coalition that may desert her before then. I’ve already mentioned the blue collar workers in the industrial areas that have been decimated by decades of the globalization that, in the popular imagination at least, she and her husband championed. Add to them the white collar workers in the lower educational rungs whose income hasn’t improved in those same decades, and who are subject to increasing anxiety as the latest economic recovery (like the two before it) is definitely not lifting their boats, and you start seeing how an intelligent Trump campaign (something that may seem far-fetched now, but has to be considered for completeness’ sake) could turn competitive many of the purple states that now almost guarantee a Democratic victory.

It could be counterargued that the migration of disaffected voters can go both ways, and that there are surely many traditional Republican constituencies that must be similarly appalled by the mercurial and unreliable character of the man who has been chosen to represent them, and thus will probably switch sides. We know that such people exist because we hear from them almost daily in the MSM. The #NeverTrump movement started by right-wing intellectuals, the daily denunciations from the Weekly Standard, National Review, the conservative stable at the WaPo (Charles Krauthammer, Michael Gerson, George F. Will, Jennifer Rubin, geez, there are really a lot of them there!), at the NYT (Davey Brooks and Ross Douthat, and you may add of late Peter Wehner)… they must have some effect. They must be peeling some conservatives away from Trump into the arms of some unknown candidate (not David French, sure, but maybe at last Gary Johnson) or even, to really and most effectively preclude what they consider the most dreadful outcome, into those of Hillary Clinton. Well, let’s wake up, folks. If the Trump phenomenon has shown anything, it is how utterly ineffectual the conservative luminaries are, and how little they are able to influence their side. Honestly, if I were one of them I would renounce my profession and enter a monastery for life (or at least for the next eight years while the country is administered by whoever ends up winning in November). I surely wouldn’t count on any of them being of much help to any non-Trump candidate. They may have a little fun toying with their principled stance, and patting themselves on the back for how righteous and dignified they sounded (except those of them who will in the end ally themselves with the devil and more or less openly endorse Trump as the lesser of two evils, which I still expect some of them to do) but their purported coreligionists have forsaken them and we can ignore their pleas for the remainder of the campaign.

None of this should be taken to mean that I believe Trump has it in the bag, and that he is more likely than not to win in November. It just means that when you see any political analyst (or casual commentator) self-assuredly declaring that no matter what the polls say, it’s going to be a “shellacking”, a “landslide” or a “trouncing”, that Hillary will win between 40 and 50 states and more than 370 electoral votes, and that the Dems will likely recover the Senate and maybe the House, you should take it with a grain of salt, as many of them are just projecting their desires onto a more fluid, more uncertain electoral reality, and confusing their wishes with what the data show. That said, if I had to make a forecast I would still say that Hillary wins the White House by a moderate margin (around +5% of the popular vote, which may translate into an enormous difference in electoral votes if she plays her cards right, which she undoubtedly will; it is Trump’s campaigning prowess and acumen which is the wildcard here). Regarding the Senate, it is too soon to tell, but I don’t see the Repubs losing it yet. As for the House, it stays Republican-led in any conceivable scenario. How likely is all that? Much less than the sure thing liberals are currently dreaming of. I’d say Hil has a 55% chance against Trump’s 45%. Close enough to flip if a) Trump puts together a half-competent organization (something he didn’t need to do in the primaries), b) Hillary is indicted (very, very unlikely, but you can never tell for sure) or c) a “black swan” event (from a sudden severe recession to a major terror attack –I don’t think Orlando qualifies as serious enough) upends the campaign.

Wait and see, as I always say towards the end of this kind of post: it’s gonna be a hell of a show to watch.