Thursday, July 12, 2018

Ethics & Business II: Business Ethics in a Nutshell


As mentioned in my previous post, I recently finished teaching a course on Business Ethics to students of a Master in International Management, which forced me to review as much of the existing literature on the subject as I could, and to conclude that most of it is essentially useless or worse. Back to the title: I want to take the occasion to condense into a few paragraphs all the research and the thinking I have been doing since October of last year. Consider it the CliffsNotes of my latest literary production:

Ethics – how to live

There is considerable social agreement about what is “good” (although, as in so many areas, agreement on the general outlook, or the broad brushstrokes, may conceal a good deal of disagreement on the fine details and nuances). A good guy is generous, has other people’s interests in mind when acting, is just and equitable, gives everyone his due and does not shortchange others to gain a bit more for himself:


We may leave aside for a moment the fact that different people may perceive exactly the same action (Action 1 in the previous schema, say) as occupying different positions in the continuum. If I give some money to charity I may judge it to be a very virtuous thing, while you may think I did it to show off (virtue signaling), or to gain a tax deduction, or that, given how much I earn and the wretched state of the world, it is still not enough. Regardless of how well trained in ethics we are, it is simply the nature of human language that the assignment of value (remember, that pesky concept that physics and chemistry cannot capture or properly measure) is both imprecise and subjective, so we are bound to disagree about the exact amount of it each action shows. Which shouldn’t obscure the fact that, again, there is wide agreement on the relative position of different actions, so my giving money to charity is almost universally understood to be “better” (more virtuous, more deserving of praise) than selling addictive drugs to teenagers outside a school or mugging people at knife point.

Now, even if we accept (as I do) that there is an objective truth about the amount of moral virtue of every action we freely decide to perform (as virtue, and praiseworthiness in general, requires freedom to have any meaning at all: no freedom implies no possibility of being virtuous or evil), we still need to ask whether such “socially perceived virtue” is indeed conducive to the good life, to the kind of life we should (that pesky verb again) aspire to live. This apparently simple question turns out to be fiendishly difficult to answer in a way that is universally considered valid, and indeed every single attempt at answering it since the times of Socrates (and probably well before that) has in some sense failed, as some thinker or other has sooner or later come along pointing out some formal defect in the underlying logic of the answer that purportedly rendered it moot.

This is where historical reasoning normally kicks in with full force (in philosophy books, certainly not in business ethics ones) and attempts to explain why, for the kind of creatures we humans happen to be, which includes the fact that we are endowed with reason and have to live in community with our fellow beings, for whom we instinctively feel empathy, it is indeed best to try to live ethically (closer to the complex of behavior causing perceptions under label “B” in my graph) than the other way round, and thus why the source of the normativity of ethical theory lies either in its anchoring in human nature or in the dictates of abstract reason. I feel a lot of sympathy for historical reason, and enjoy dabbling in it as much as the next guy, but at this point I won’t consider the matter entirely settled (that narrative went more or less OK ‘til the Enlightenment, but has been seriously weakened afterwards by Nietzsche and, closer to our days, by Post-modernism; if you want the details you’ll need to wait until my book is released, as both are discussed at length in there). For the purpose of the present post (a summary of how to sensibly apply ethical thinking to a business environment) I’ll just consider it settled: it is a fact of the matter that the good life, the worthy life, the life that recommends itself to any rational being, the life that should be pursued, is the virtuous life, the praiseworthy life, the life that most ethical traditions coincide in recommending (regardless of “why” that may be so).

Remember, what such a virtuous life consists in is broadly agreed upon by the aforementioned traditions (Nietzsche’s being the odd one out), and can be summarized in the following two precepts:

·         Equanimity rule (directly derived from the venerable “golden rule”): don’t give your own interests more weight than those of others. Don’t treat others as you would not like to be treated yourself. Try to adopt an impersonal, impartial point of view when deciding how to act, so you don’t favor yourself just for being you.

·         Perfectibility rule: develop your capabilities as much as you can, giving priority to those that can be of use to your fellow men (and that thus, by application of the previous rule, make you a more useful member of society).

It is also widely agreed that such schematic formulations cannot exhaust every conceivable dilemma (ethical or otherwise) we may find in the business of conducting our daily lives. No statement of a rule to be universally and unconditionally followed, no matter how pithy or how extended, can aspire to cover the almost infinite combination of circumstances and peculiar features each of our free actions is framed in, so as to recommend (or disqualify) unambiguously each one of those actions. We will discuss towards the end of the post (hopefully) to what extent, then, such pithy formulations of ethical directives are useful or not, as some authors have deduced, from the impossibility of applying them to every possible situation we face, that ethics is simply not amenable to being formulated as a definite set of rules, and that supposed universal “principles” are at best a distraction, and at worst an unnecessary obstacle when trying to find out how to live (a good example of such a position can be found in Jonathan Dancy’s book Ethics Without Principles, although many of the objections to what we may call the “understanding of Ethics as finding universally applicable rules” were already presented in the deservedly famous Ethics: Inventing Right and Wrong by J. L. Mackie). Let’s just assume for the time being that those rules do indeed apply, are useful, and are valid indications of how to lead a good life (both as seen from outside, by our fellow humans, and as felt from inside, as a self-fulfilling, rewarding way of living). What do they have to do with the conduct of business, and with the performance of our professional activities?

Business Ethics – How to exchange commodities (including our own labor)

Before we discuss the peculiarities of how Ethics applies to business situations, a couple of reminders are in order:

1.       Regardless of what the US Supreme Court may say, corporations are not people. You may grant rights to them, and you may impose duties on them, but such rights and duties are, from an ethical perspective, legal fictions. There is no such thing as a sentient, conscious “mind of the corporation”, able to take decisions (apart from and distinct from those of its different executives, within their respective areas), and thus morally deserving praise or blame. One of the central terms within the discipline of Business Ethics is “Corporate Social Responsibility”, and enormous amounts of ink have been spilled discussing what that responsibility consists in, and how far it extends. Executives, whom we can safely presume to be (mostly) human beings, do indeed have a responsibility for acting in “socially acceptable” ways, for taking decisions that result in a net positive for the corporations they represent and for the societies in which they operate. But the nebulous collectives that have endowed them with such decision power have no responsibility, social or otherwise.

2.       For the majority of their lives, the activity that occupies the most waking time of almost everybody is precisely work. If the average adult (between 18 and 65 years old) is awake 112 hours a week, you can expect him to devote more than half of those hours to his job, including getting there and returning from it (a quick back-of-the-envelope check follows below). There is no way you can define, orient, inspire or direct a meaningful way to live (which is precisely the core of what ethics is about) that doesn’t address that time. There cannot be an ethics that doesn’t include how to behave at work, how to approach business deals, or how to treat subordinates and co-workers, or that carves out a separate space for them, regulated by different principles. Furthermore, the attempts to create such an ethic, an ethic of “private life” governed by one set of principles, separate from a “work ethic” governed by a distinct set supposedly attuned to the peculiarities and separate dynamics of a mythical realm called “the market”, are not neutral or objective or value-free. They typically constitute a naked attempt to justify quite unsavory behavior (the shameless exploitation of our fellow humans, which in turn requires denying their inherent dignity and their “exchangeability” with ourselves) with a veneer of sophistry and bad empiricism, appealing to “the greatest happiness for the greatest number” and to the unverifiable assumption that certain relationships of production (collectively known as free-market capitalism) automatically ensure such stupendous happiness by ensuring that everybody maximizes the utility they extract from what they consume, and produces with maximal efficiency using the means at their disposal (as summarized in a passage of the textbook on Economics by Samuelson and Nordhaus that I’ve quoted several times, and which manages to be both tautological and disingenuous).
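To put rough numbers behind that “more than half” claim, here is a minimal back-of-the-envelope sketch in Python; the 48-hour working week and the 10 weekly hours of commuting are illustrative assumptions of mine, not survey figures:

waking_hours_per_week = 16 * 7   # 112 waking hours a week, as stated above
work_hours_per_week = 48         # assumed full-time week with some overtime
commute_hours_per_week = 10      # assumed door-to-door commuting, both ways

job_related = work_hours_per_week + commute_hours_per_week
share = job_related / waking_hours_per_week
print(f"{job_related} of {waking_hours_per_week} waking hours, i.e. {share:.0%}")
# On these assumptions the job absorbs just over half of all waking time.

Tweak the assumed figures as you see fit; any plausible combination for a full-time worker lands in the same neighborhood, which is the point of the paragraph above.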

What we should conclude from both points is that business ethics, as commonly described, is highly suspect of being a rationalization of rapacious behavior. A rationalization facilitated by the way it is typically taught, in isolation from the rich ethical tradition from which it could benefit (expounded in my previous post). If you zoom in on what the responsibilities of corporations are, and on how to balance the demands of sustainability and profit maximization, and then disguise your own lack of an adequate framework for even formulating the problem with a myriad of “cases” that can be argued one way or the other (and that end up conveying that this ethical stuff is really complex and confusing, and that it doesn’t matter much what you end up deciding, because for any outcome you can find someone willing to defend that it was the right thing to do), you get a fine way of training lawyers (again, that is where the case methodology originated), but certainly not one for developing ethical excellence in economics (or BA) students…

What I’m saying with this is that “business ethics” cannot aspire to be a complete ethics, detached from the question of the good life and from a solid theoretical framework of what is good for us, rational animals. Now, the peculiarity of business is, as I intimated in the opening of this section, that it is an institution devoted to the exchange of commodities. What is a commodity? I’ll take Lionel Robbins’s definition: a piece of stuff (or of our own time) that has an economic value (that we can put a price on), which in turn assumes that it can be put to alternative uses (or to the same use by alternative persons). That means that when we exchange commodities (again, our own time included) the main question, as long as the exchange is voluntary, is not one of ultimate ends, or of how conducive to our own perfectibility the exchange is, but of its fairness, of how just it is. Thus, of the two main aspects of ethics (the two main precepts we mentioned towards the end of the previous section), business ethics is mainly concerned with the first, as the ultimate mark of a fair transaction is that we would accept it from both sides: if we exchanged places with the other party we would still consider it advantageous (if not, if we are the only ones taking advantage of it, if we are somehow fleecing the other party, it is doubtlessly unethical to engage in it).

Which is all well and good, but puts us in a bit of a bind, especially when we turn our attention to that most vaunted position (one identified by Alasdair MacIntyre in his arch-famous After Virtue as a paradigmatic figure of our times), that of the Manager, the person corporations choose to coordinate the activities of (and thus to give instructions to) other people. A manager must maximize the output of his team, as that is the only possible way to discharge the fiduciary responsibility he has been assigned. But for such maximization to happen, he must consider the members of said team as means, not as ends in themselves, again, because the end in itself can only be profit maximization. Indeed, the whole of economic theory is built on the essential interchangeability of people, who are but one more resource among others (remember our Robbinsian definition of commodity, taken from the master’s conceptualization of the discipline: resources that by definition admit of alternative uses), and who are thus to be thrown out of the productive process if said process can be accomplished more efficiently (more cheaply) using machines, or using people based in countries with lower salaries. Something that managers the world over have been doing with verve and gusto on a monumental scale for the last half century (which only shows that their economic training was impeccable, and impeccably unbalanced by any ethical concern).

So, alas! Business ethics has to deal with justice, and justice is by far the messiest aspect of the whole field of practical philosophy, because it has to do with the competing claims of different people, with different histories and different arguments that can be expressed more or less convincingly, independently of their “intrinsic” merit (if there even is such a thing), and thus it very easily degrades into casuistry. We all agree on a number of ethical positions: flogging a man who stole a crumb of bread because he was hungry? That’s bad, bad, bad. Raping a woman to satisfy your wanton lust? Superbad and inexcusable. Slavery (benefiting from it, or simply standing by if it happens in your country)? Totally bad and despicable. But when it comes to who deserves what… we haven’t progressed much since the time of the original sophists (the practitioners of their trade are now called “lawyers”, at least in the West), who boasted they could make the “weaker argument seem stronger” (and win the trial).

Which doesn’t mean that business ethics can’t be taught, or that it necessarily has to be as poorly taught as it actually is (with the predictably mushy results we can daily admire in the press). Some of the cases can be used to illustrate the inherent tension between the different actors involved in a business relationship (the workers, the capital owners, the consumers, the rest of society), and some guidelines can be provided about the basis for adjudicating between their competing claims (as I did in this series of posts: Organizational Justice I, Organizational Justice II and Organizational Justice III, in which I essentially argued for the superiority of a Kantian approach over a utilitarian one). I’m just saying that trying to cut corners, and jumping into the discussion of cases without having carefully laid out the foundations of why one kind of life (the examined one, which recognizes the essential equality and dignity of all human beings, and on the other hand requires us to develop our potential abilities to perfection, prioritizing those most useful to our fellow humans) is better than another (the unexamined pursuit of social status through the hoarding of material goods, as prescribed by a cancerous dominant reason that is uncritically accepted as the only way to live), can only end up confusing the students, and breeding in them the cynicism and disenchantment of which they provide such ample evidence once they leave the hallowed grounds of academia and start fending for themselves in the world of greedy corporations, all too eager to put all that cynicism and disenchantment to good use for their own ends.

Friday, June 22, 2018

Ethics & Business I: the Shell around Business Ethics


A number of regular readers of this blog (old habits die hard, so I still cannot avoid adding the usual rejoinder “all two or three of them”) have asked me what I’ve been doing between January and June that has kept me away from posting (and, prior to that, I was already on an apparently very low productivity streak, publishing no more than one post per month since at least August 2017). As it happens, last year I was presented with the opportunity to teach a course on Business Ethics at a prestigious university, and I grabbed it with both hands (and probably both feet too), as teaching is one of my overriding vocations, and ethics is probably my foremost passion. Well, probably I like having sex with my wife a bit more (but, alas! I have far less control over how much of it I can have, which tends to dull it a bit), and weightlifting would sit somewhere between the two in my list of preferences, but I hope you get the idea that thinking theoretically about how to live the good life, and what the good life consists in, and sharing those thoughts with attentive younglings, sits pretty high among my priorities. However, being the kind of conscientious (bordering on obsessive) asshole this blog gives ample evidence I am, taking that opportunity required me to prepare it extensively and thoroughly, to ensure the students had an unforgettable learning experience, one that changed their lives forever (ideally for the better). Such preparation included extensively researching the materials to use, to ensure only top-notch texts were used.

What I quickly found is that existing texts on the subject range from atrocious to abysmal. This may sound a bit harsh, a bit over the top, and a bit presumptuous… but it is the pure, unadulterated truth. Why students of economics and business administration the world over are exposed, in the most cursory manner, to half-baked ethical theories, poorly understood by the authors themselves and uninterestingly presented, and then given a deluge of supposedly relevant, real-life, exciting business situations, properly edited to highlight their ethical saliency, that are supposed to teach them how to apply sound ethical reasoning, would probably merit a post of its own, having to do with how the “social sciences” are taught in the Anglo-Saxon tradition (dominated by the case methodology developed at Harvard almost a century ago), how such tradition has contaminated almost all of academia (especially in Economics and BA, where you cannot aspire to be taken seriously if you can’t throw around a bunch of cases to illuminate or illustrate or rather obfuscate whatever you are supposed to be teaching) and how, as a consequence, kids leave their training without a goddamn clue about how to think ethically (one wonders if they leave with a clue about how to think at all, full stop).

I thought it might be useful, for my students and myself, to represent in a single chart how most ethical teaching is done, versus how I thought it should be done (the “is-ought divide” would later figure as one of the key concepts to grasp):
As you may see, in most business ethics textbooks some space is given at the beginning to “describe” the main ethical traditions (at least deontology and utilitarianism; some may also include virtue ethics as a separate strand, but many do not even go into such abstruse nuances). I had to put “describe” between quotation marks because, given the space devoted and the seriousness of the effort, it is almost impossible for even the most brilliant student to grasp the difference between them, which may very well be the point. The overall tone I’ve most frequently encountered is one of “guys, we are really sorry to bother you with this mushy and dusty and clearly irrelevant stuff… we know you are young and brilliant and enthusiastic, and we will be talking about the shenanigans at the Enron corporate board soon enough, as that is of course what excites your imagination and gets your juices flowing, as well it should, not like all these boring disquisitions… just bear with us for a couple of pages so you can pronounce ‘deontology’ and ‘utilitarianism’, and that will be more than enough”. From such half-assed understanding, they go on to try to convey how ethical decisions may be taken (that’s the “normative” part). Of course, with such limited resources it is almost impossible to set a framework that makes the whole endeavor understandable: why should executives act ethically in the first place? The answer given, at best, is that acting “ethically” happens, almost miraculously, to be good for the business (some out-of-context appeal to Smith’s “invisible hand” is common at this point).

What if there is a situation in which acting “good” (according to some tradition) and acting in profit-maximizing ways are clearly in conflict? Your average business ethics book has no answer to such a case, other than saying the conflict can only be apparent, and that if we take everything into consideration (reputational damage, possible fines, loss of trust from stakeholders and whatnot) both ends cannot conflict. Well, of course they can! And indeed they do conflict! (and denying it happens to incense me, being such a dishonest and blatant violation of logic and historical evidence) but of course an author with a limited ethical understanding wouldn’t even be able to articulate why lying to the students about such a possibility is, in the first place… unethical!

I’ll leave aside the infuriating fact that a “normative ethic” is a redundant concept, and that trying to give a semblance of respectability to a disjointed set of loose observations and biased (when not downright manipulative) recommendations by labelling them “descriptive ethics” is an oxymoron (I’ve equated it in other posts to attempting to create a “normative physics” or a “normative chemistry” to try to determine whether it is good or bad that particles with opposite electrical charges attract each other, or that acids and alkalis react). But that is where the hearts of the authors really lie, and to such a quixotic (or rather, Sancho Panzic) enterprise they devote between 80 and 90% of their tracts: to painstakingly describing a random sample of the (almost uncountable) decisions economic actors (from every walk of life) may face, how such decisions have “ethical implications”, and how those implications should be weighed against each other. Sometimes they even circle back to their limp and incoherent definition of the traditions to bring them to bear in the analysis, almost apologizing for demanding such mental effort from their readers (as using words composed of more than five syllables is considered highly suspect, if not an outright hostile move, in academic circles). But again, without a clear, forceful understanding of what the end goal of life is, or of what a life well lived should look like, any attempt at discussing the presented cases is bound to be an exercise in futility and equivocation. Indeed, in their attempt not to sound too judgmental (how old-fashioned that would be! How uncool!) they end up endorsing almost any imaginable outcome of the decisions they present the student with, advocating for any possible side, and recommending every possible course of action short of outright violating the law.

In a sense, the authors are simply the all-too-expectable product of their society (which, let us not forget, is also ours), a society that leaves no space for values (other than the maximization of pleasure and the minimization of pain) or for any kind of transcendence. And as values require traditions that embed and explain and justify them, our society, under its own contingent dominant reason (a dominant reason that, like every society before it, ours tends to present as the only viable one, directly derived from reason itself and from human nature), rejects all those traditions, ethical or otherwise, as so much deadweight to be cast aside and abandoned on the way to untrammeled individual (and individualistic) self-actualization and self-realization. A self-actualization and self-realization that requires their pursuers to admit they are mere lumps of matter, as free as a falling stone in a powerful gravitational field, mere profit maximizers programmed by evolution since the moment of their birth to blindly follow a predetermined set of behaviors that, in our time and place, impel them to seek the highest possible status by consuming as much as possible of high-prestige brands.

As any reader of this blog already knows (I can almost see your eyes rolling back whilst thinking “oh, no, here he goes again”), I happen to think all of that is hogwash: buying expensive thingies does not a good life make, you are only as unfree, or as determined, as you allow the dominant reason to make you, and the first step in liberating yourself from the yoke of such an anti-humane, desiderative reason is to recognize its origins and the interests it serves. Which leads me to how ethics should be taught: starting with a wide understanding of the kind of creature that formulates it (human beings), an understanding that requires accumulating knowledge from many fields (fields that in the time of Kant were grouped under the common label “philosophical anthropology”, which has fallen mightily out of fashion): what people of different ages thought reality was composed of (ontology); how, based on that ontology, they thought their own minds functioned, and could be trained and motivated to function even better; how such a collective set of beliefs shaped their social relations and what they could produce together; how from that level of production they could compete with other neighboring societies with a different set of beliefs, motivations and institutions, giving shape to what we know as human history.

And only when you have a working understanding of all that underlying material does it make sense to turn your attention to how they proposed to answer the question of what the good life looked like, and how best to pursue it (both individually and as members of a social group); the description of how such answers were arrived at, and of what particular form they took, being the teaching of ethical traditions that should be at the core of any ethics book, as the training tool that makes the mind better at ethical thinking is honing its understanding of how different ages and persons have indeed reasoned ethically when answering their own dilemmas and challenges. I like to use the analogy of weight training, as it is the simplest, best understood method of developing capabilities we are not born with: by judiciously applying a gradually increasing stress to certain muscles we make those muscles grow stronger. By making our mind mull and ponder and consider and weigh the ideas of the great thinkers of the past, we make our mind grow more capable of developing ideas of its own, and of applying them successfully to the circumstances it finds itself in.

So, on top of that deep, foundational understanding of what makes humans tick (the anthropology part) and of that rich, nuanced description of the answers about how to live that the most brilliant thinkers of our civilizational unit have given, and only after such foundations have been securely laid, does it make sense to discuss how they may apply in a specifically business context. Trying to jump to the business “application” without a proper foundation is a way of cheating the students, making them (falsely, as so many recent examples of corporate misbehavior attest) believe they have a developed capability they lack, and that they are able to apply judgments that will in the end fail them.

All of which is to say, as all the materials I reviewed were essentially crap (in my unduly harsh, critical, old-fashioned, elitist, grumpy, unreasonably demanding, curmudgeonly, misanthropic, arrogant, obstreperous, haughty, suspicious and idiosyncratic opinion), I decided I had to write my own book on ethics, following the structure I highlighted on the left of the graphic. A short tract, oriented to university students, although highly competent ones (I would be teaching in a postgrad program, after all, a master in international management), so no holding back on the rich vocabulary or the convoluted conceptual structures presented. Something short, as in these harried times who has time for those majestic, sophisticated university texts of the past? I aimed originally at something around 100 pages/ 50,000 words. Short and to the point, or, like my lifting straps, “short & sweet”. I started around October of last year, but, as those of you who have written a book surely know, one thought led to another, every idea required a bit more exploration and clarification, entire currents of thought had to be added (you cannot leave the Stoics out of an ethics treatise! Nor the Cynics! If you present Nietzsche you have to first introduce Schopenhauer… I’m sure you get the point) and soon I was strenuously fighting to have at least half the materials ready as the beginning of the classes was fast approaching, and I had only a third of the whole thing (which I had to complement with well-thought-out cases, plus group dynamics, plus supporting materials). So not much time for blogging. On the other hand, I’ll remind my kind, patient and entirely non-paying readers that I do this mainly to improve my writing skills (you know the whole “practice makes perfect” shtick), and frankly, for the past six months it’s not writing practice that I’ve lacked (Jeez, some days I even had to forgo training if I wanted to have enough pages to give my students to read! There are very few things on this Earth I would agree to prioritize over moving a loaded bar for a predefined number of sets and reps).

So that’s the explanation for my much reduced output these last months. Towards the end of May I finished the whole book, one I am mighty proud of, by the way (and whose contents I intend to use extensively in this blog), and I am currently looking for a publisher for it (which will probably require a load of extra work to make it half-readable, I know). In my next post (you probably saw this one coming) I’ll share with my devoted readership the main results I reached. I’ll close this post by summarizing the unavoidable conclusion I extracted from my deep dive into the existing literature on business ethics: the whole field is a dishonest mess, an oxymoron lacking a moral compass, lacking faith in the very possibility of its own internal coherence and external relevance; shot through with consequentialist thinking, it sees actions, beliefs, even people themselves as “resources” to be substituted for one another if a different mix may produce additional output of the only currency it recognizes, which, of course, is not “the good life”, or a life well lived as understood internally by a free, rational agent, but the ability to purchase more material goods.

And it cannot waste time considering such abstract questions as what the good life for a rational agent may be, because it has no concept of what such an agent would look like to begin with, or of what it should desire, or of how it makes sense to act on those desires. Although it is not entirely true that the discipline does not rely (implicitly, as it happens) on those concepts. It assumes them from the zeitgeist; it receives them uncritically from the age’s dominant reason (which tells its teachers that the only intelligible goal of life is to feel the maximum pleasure, that the only thing that gives pleasure is to have more social status than your neighbor, and that the only measure of status is the amount of money you have at your disposal at any given moment). Business Ethics as I’ve found it explained and taught enthusiastically accepts such a crappy ideological package, and is an essential component in its transmission to the new generations. That’s why it has to be fought against, tooth and nail, with every last atom of strength of every well-meaning person.

Friday, June 15, 2018

Convergent vs. Divergent Technologies

It amazes me that there is still a considerable majority of writers on technical issues who, in good faith, proclaim that we are in the midst of a technological revolution and that the pace of innovation is constantly accelerating, so that we are on the eve of witnessing the most significant revolution in all of history in how humans live and interact with each other. Hardly a day passes without some guru gravely and in all seriousness letting us know of the epochal changes almost upon us. I know prediction is difficult, especially, as Yogi Berra famously stated, about the future, but it makes me wonder how it is possible that so many brilliant people, with vastly more information than I have about their specific fields, can be so utterly wrong.

Some cases are pretty easy to understand, however, when you look at the incentives. Predicting wonderful, never-before-heard-of innovations that threaten every job and can upend any forecast can be very profitable if you make a living from “teaching” people how to adapt to such disruptions, or just from selling books (or newspaper articles) about the dazzling future just around the corner. Posts and columns about how tomorrow will be essentially like today (except with people less motivated and more pissed off, as they accumulate less material wealth than their parents and the wonders they have been promised all their lives somehow fail to materialize) tend to fare significantly worse than those with a cheerier outlook, as a constant feature of human nature is to be more attracted to good, promising news than to bad news, regardless of how well grounded in reality the former turn out to be.

However, a lot of people fully engaged in techno-utopian balderdash really have no dog in the fight, and should know better. I can understand Tom Friedman (an updated version of the venerable Alvin Toffler, bound to be similarly discredited by how things actually turn out) blabbering about the wonders of new (unproven, in most cases overhyped and underdeveloped) technologies in almost every one of his NYT columns of the last five years, or Michio Kaku peddling an imminent progress that will never actually come to pass (Poor Michio! Probably his best days as a physicist are behind him, and nowadays he surely hopes to make more money from the TED talk circuit than from potential scientific discoveries, although I just cannot avoid noticing he will need to be more discerning about the nonsense he spouts; see him here harping on the greatness of the “marshmallow test”: M Kaku on the marshmallow test, only it seems that all the arch-famous test measures is how affluent your parents were -which is, doubtlessly, greatly correlated with success in life, albeit it sounds much less heroic and self-serving: The marshmallow test doesn't prove what you thought it proved (if anything) ). The same goes for charlatans like Aubrey de Grey or Ray Kurzweil (although with the latter one has to wonder whether he is actually trying to fool somebody, or mainly to fool himself about the viability/ inevitability of that ages-old conceit of living forever and cheating death, now that death is getting undoubtedly closer for him), but what about the likes of Bill Gates, rich enough and old enough not to be foolishly deluded by the gewgaws and wild predictions of a bunch of self-styled “visionaries” obviously hoping to make a buck from the delusion? I can only suppose that, coming from the tech industry himself, he is as steeped in its biases and distorted perceptions as the next guy, and is as unable to see the negligible impact it makes on the lives of the majority of human beings as any of his Silicon Valley imitators, a root cause of such baseless techno-optimism that was already identified in a book by Richard Barbrook published more than ten years ago, Imaginary Futures: Grauniad book review

Be that as it may, the best antidote to such unfounded optimistic pseudo-predictions is to go back a couple of decades and see how little there is new under the sun, and how the same miraculous technologies presented today as about to change everything forever were already slated back then to be fully developed and implemented by now. For example, in this priceless Wired article from 1997 about the long boom that started in 1980 and would (they confidently assumed) last until 2020: Futurists never learn! the predictions (electric cars! New energy sources that eliminate the need to burn fossil fuels! Nanotechnology! Biotechnology!) are surprisingly similar to the ones we hear today as about to cause an immediate seismic change in our lives any day now… only twenty years later, and after all of them (not just one or two) have failed in the meantime to, ahem… actually happen. All those wonders didn’t come about as expected when they were predicted towards the end of the last century, and they won’t come about now, at the end of the second decade of the present one, either.

However, it’s not as if there were absolutely nobody with eyes to see, realizing that the majority of the predictions of cornucopians and techno-optimists alike have a very slim, unscientific foundation. Robert Gordon made a magisterial effort to point out how the supposedly disruptive technologies that are already ten to twenty years old were not causing that much disruption, at least where productivity statistics are concerned (as I commented here: On Robert Gordon ). Tyler Cowen famously announced the onset of a “Great Stagnation” (but has been hedging his bets since by announcing in his blog at MR, almost weekly, that there is no such thing, usually with the most obnoxious examples so we know he is half-joking about it). Every now and then you find a contrarian view: Pace of technological change NOT accelerating but I think we can all agree such opinions are in the minority, and 90% of the people out there assume we are in an age of undiminished technical advance and ever-accelerating progress, along the most publicized lines, to recap:

·         Computers and, as a practical consequence, General Purpose Artificial Intelligence (not to fall into the IT consultants’ trap of Internet of Things, Big Data, Quantum Computing, Virtual Reality, Augmented Reality, etc.)

·         Biotechnology, Genetic Engineering (extending normal human lifespan beyond 130 years, maybe 180 years, maybe forever)

·         Nanotechnology or, alternatively, 3D printing

·         Self-driving cars and trucks (something most cities have known for a century… they were called taxis back then)

·         Green energy production (limitless amounts of cheap energy produced with no cost to the environment whatsoever)

·         Space exploration a go-go (permanent base on the Moon, Mars colony, cheap satellite launching to ensure high-bandwidth access to the internet anywhere on Earth at almost no cost)

Once again, and I’m really sorry to have to play the Cassandra here, none of those things will be widespread (and some will not exist at all, not even as a more-or-less credible “proof of concept” in some marketing whiz’s PowerPoint presentation) or actually rolled out, not in ten years’ time, not in a few decades’ time, not in our lifetimes. Not in the whole lifetime of any reader of this blog, regardless of how young he or she is (and sorry, but that lifetime won’t go much beyond 100 years, no matter how much they take care of themselves or how much medicine advances). Believe me, this is not the rant of your average fifteen-year-old living in his parents’ basement who has read this and that on the interwebz without understanding much. I worked for 15 years as an IT consultant. I now work in a company that designs and engineers power plants (of all technologies and stripes) and manufactures thermal control systems for rockets and satellites. I teach in a university, which gives me a good overview of the real (present and future) capabilities of that “most trained ever” generation of future geniuses we are setting loose on the world (and which may be showing the first signs of a reversal of the “Flynn effect” we have been benefiting from for decades: Things looked bad enough already, and now it seems we are getting dumber! ). I know a bit about what the current level of technological advance can and cannot deliver, and about how long it takes to deploy a new technology at scale, no matter how promising it seems on paper. And I am constantly puzzled when apparently serious people tell journalists, investors and the general public, with a straight face, that they are going to produce some miracle in blatant violation of the laws of physics, sociology, economic rationality and what we know of human (and animal) nature, and the latter swallow it hook, line and sinker. But such is the sad state of affairs we have to deal with, and it behooves us, as in so many other fields, to understand why it is so.

And in this case, I think there are a couple of distinctions that both “entrepreneurs” (a term we should know better than to lionize, as the more accurate rendering is not “beneficent genius”, as so many people seem to believe, but “psychopathic snake oil peddler who got lucky once”) and said general public fail to make. The first distinction is that between science and technology, a well understood one I won’t delve much into. The second one is within technologies, between “convergent” and “divergent” ones. Convergent technologies are predictable, repeatable, reliable and, because of all that, boring (they don’t attract much attention). We know how much it costs to produce something with a convergent technology; we can replicate it in different environments and cultures, because we understand the underlying principles and processes at play, and we have vast historical data series from which to extrapolate the future behavior of the different underlying systems and components; we can measure the performance of the processes involved, and thanks to that measurement tweak them here and there to improve some aspect marginally, but they don’t lend themselves easily to major alterations or “disruptions”. Finally, convergent technologies are considered boring because it is difficult to wring a “competitive advantage” out of their application, so their products end up sooner or later being commoditized, and the rate of return they can produce tends asymptotically to zero (so good ol’ Marx erred in his universal prediction, in Vol. 3 of Das Kapital, about the falling rate of profit dooming capitalism to a crashing end, in that he didn’t consider the other half of the equation: the existence of divergent technologies).

Divergent technologies, in turn, are the exact opposite: if we are honest with ourselves, we have only the foggiest idea of what it costs to produce a single unit of whatever it is this kind of technology is supposed to deliver, and we may be off by orders of magnitude (although errors of 50-150% are more common); we understand only a fraction of what they require to work, so with every new attempt at establishing them we find new elements we hadn’t considered, which have to be hastily procured (adding to the total cost creep); because of such limited and incomplete understanding, they are highly unreliable, and if in one location they seem to function all right, in the next one they fail or misfire, and they generally exhibit very poor production statistics (they have to be frequently stopped for unforeseen maintenance/ repair/ adjustment); finally, they are very exciting, promise above-market rates of return, and tend to make the life of everyone involved miserable (see Elon Musk sleeping on the factory floor to try to personally fix all the problems of Tesla Model 3 manufacturing, something he has about as much chance of accomplishing through such a heroic strategy as I have of winning the next Nobel Prize in Economics).

To clarify a bit, I’ll give some examples of each category:

·         Building complex and “big” physical infrastructure (e.g. nuclear power plants, highways with bridges and tunnels, high speed trains, harbors, airports) – highly divergent

·         Building complex and “big” infrastructure for moving data (land based communication networks, be they copper based, optic fiber based or antennae based -mobile and TV) - convergent

·         Manufacturing technically complex things highly adapted to their mission, so in very small quantities and with lots of differences between one piece and the next (satellites, rockets to put things in orbit, components for fusion reactors, supercomputers) – divergent

·         Manufacturing a lot of identical things (cars, running shoes, parts of furniture that the customer has to assemble himself…) – convergent

·         Manufacturing a lot of identical things in quantities that had never been manufactured before, which means it is uncertain which features are valued by consumers, and by how much (electric cars, wall-mounted batteries, electric car batteries, virtual reality headsets, augmented reality glasses, 3D printers, DIY gene-editing toolkits) - divergent

·         Developing software - divergent

·         Providing low end services (cooking, cleaning, cutting hair, serving tables, washing clothes, personal training) – highly convergent

·         Providing high end services (strategy consulting, financial and tax advice, psychological counseling, surgery) – divergent

·         Providing cookie-cutter entertainment (TV shows, run-of-the-mill apps, LPs of most big-name bands) – convergent

·         Providing cutting-edge entertainment (blockbuster movies, high-budget videogames) - divergent

You may see where the problem lies: moving a technology from the “divergent” category to the “convergent” one is really hard, takes a lot of time, and requires a sustained commitment from the whole of society to endure cost overruns, delays, frustration, disappointments and the occasional tragedy. If the benefits of turning the technology in question convergent are clear enough, and perceived to be widely shared enough, all those efforts are indeed endured, and the darn thing becomes commonplace, unexciting, and part and parcel of our everyday lives. But if they are not, people get tired of it (what in some circles is called “future fatigue”, or the weight of so many unmet expectations and promises not honored) and it may well never come to fruition, as seems to be the case with nuclear energy (as much as it anguishes me to recognize it).

To make things worse, some of the technologies that techno-utopians are announcing as imminent are not even in the “divergent technology” phase (when at least there is a draft of a business plan with some rough numbers of what it costs to produce each unit of the new good and what people may in theory be willing to pay for it), but in the pure “scientific application that we can trick some VC into paying to develop, in the hope it will produce something vaguely marketable someday” phase. And guess what? A) things take an awfully long time to transition from that phase to the “convergent technology” one (think generations, or decades at best, not years, and certainly not months) and B) a lot of scientific wonders never make it to that final stage.

You may also have noticed that the classification is subtle and tricky at some points. Is building communications infrastructure convergent or divergent? It depends on what kind of communication it is intended to enable. Infrastructure for moving physical goods (highways, airports and the like) is divergent (and prone to corruption, regulatory inflation and countless inefficiencies, but that is another matter). Infrastructure to convey electric signals (data or power), or gas or water, is mostly pretty convergent. A similar thing happens with manufacturing: convergent for combustion engine cars, divergent for electric ones. Convergent for laptops and mobile phones, divergent for satellite platforms and large telescope equipment. So if you bet on huge societal changes dependent on cheap, super-abundant electric cars (or on ubiquitous satellites) you will be sorely disappointed, as those are not coming anytime soon (if ever). Of course the people tasked with building the things requiring divergent technologies will try to convince you of the opposite, and will claim that their technology is already convergent or really close to becoming so: solar concentration plants are already widespread (they are not; the few actually built have all kinds of technical problems and terrible performance); fusion energy is around the corner because the ITER experimental reactor is already almost built in Cadarache (it is not), and MIT just signed with Eni the financing of a project to deliver a similarly productive reactor for a fraction of the cost (they still need to find more than 90% of the money, which will turn out to be less than 10% of the total amount actually needed, and the whole thing will never go beyond the preliminary design stage); Elon is experiencing some minor glitches that will finally be ironed out in a few days, and it’s been a year and a half of him saying that, starting in two weeks, his Fremont factory will churn out 5,000 Model 3 cars per week (which is still short of the half a million cars a year he said he would be producing by now), but what he is really and indisputably doing is firing 9% of his workforce (the surest signal the company is going nowhere, but industry analysts, those sharp cookies, reward him with a 3.5% rise in the share price… Tesla's travails ); VR glasses are finally about to go mainstream, and the technology is so breathtaking that they will reach 50% of homes in no time at all (they won’t; actually that announcement is from almost two years ago, and I fear journalists have already given up on that one); AI is so much around the corner that whole panels of ethicists (and maybe a presidential commission of experts for good measure, if we heed the recommendation of Dr. Kissinger!) are already convening to help guide it towards morally responsible behavior towards us, poor humans, whom it may almost inadvertently obliterate (although, of course, we don’t have the darnedest clue how to actually produce, program, implement, embody, develop or whatnot said AI, never mind have it harbor positive or negative intentions towards us… or towards anything else, for that matter).

Before we go into the consequences of the convergent-divergent distinction, we have to take into account how it differs from the classification of technologies as mature or immature. Some technologies, like building nuclear power plants, breaking ground and laying down highways, or producing and filming blockbuster movies, are very mature, but never stopped being divergent (and driving the companies that attempted to market them into bankruptcy). Other technologies were, when commercially launched, groundbreaking and immature, but made a profit from the start, as they were predictable and repeatable enough to be convergent from the beginning of their commercialization, like cars, oil extraction, radios (or many home appliances on which huge brands were built: washing machines, refrigerators, TVs).

You may notice that the latter category (innovative products that become convergent almost from the start) has one thing in common: its members are all quite old, most of them having been introduced at the end of the XIX or the beginning of the XX century. In contrast, the last wave of consumer-oriented innovations (PCs, mobile phones, LED color TVs and maybe autonomous vacuum cleaning devices, like the Roomba) is not generating that much profit. That may be an indicator of the comparatively little impact they have on people’s lives (which translates into a reduced marginal value, which in turn means people are willing to pay only moderate prices for them), and thus of their inability to command a high margin. By way of comparison, back in the day people were perfectly OK with parting with a year and a half of average salary for a car, or many months of salary for a TV receiver.

Some readers may object that there is one bright spot of both innovation and high margins (and where lots of people are still in demand): IT. Unfortunately, regardless of how much has been invested in trying to industrialize it, developing software is still divergent (most Sw projects are behind schedule and over budget, many times absurdly so). However, using it, applying already developed Sw to its intended areas (like using an Excel spreadsheet to develop the annual budget of a company), or even extending it to some new ones, is convergent, and that has created the mirage that a) Software is eating the world and b) Software (and virtualization) has any real impact on how people live and interact… when it has not.

We may spend increasing fractions of our lives in front of a screen, typing (or just watching), but saying that a new app is going to change the world is like saying in the 50’s (when TVs were already common enough) that a new show would change the world, or, going back even further in time, that a new novel by the romantic author du jour would change the world. They may have had everybody talking about them for a while, they may have gently nudged attitudes and opinions a little in this direction or that, but they would have actually changed very little, as people would have gone on about their daily lives exactly as before. I’ve read brainy pundits in some reputable magazines declare that the mobile internet has changed everything because now we have things like Uber, which has utterly revolutionized how people move around in cities. Huh? Dude, Uber is a semi-convenient means of getting a guy to take you from point A to point B in exchange for some money, something that has been around for a century, and its innovation is to circumvent stifling licensing and regulation (which may or may not translate into a societal gain, as with any “artificial” monopoly).

If that is your idea of a society-shaking, business-disrupting (well, it has been pretty disruptive for incumbent licensed taxi drivers, who in most European cities are fighting back with some success, whilst they face a more formidable enemy in car sharing companies), life-altering innovation… I suggest you go back to The Rise and Fall of American Growth and ponder the impact of running water, the internal combustion engine, electricity and light bulbs or the radio, and how life was before and after the advent of such true innovations. Listen, one of my grandfathers was raised as a peasant kid in the Canary Islands countryside at the beginning of the XX century. He didn’t know running water, electric lightbulbs or motor vehicles until he moved to the capital in his teens (and he almost had to kill another suitor of the woman who would become my grandmother to avoid being killed by him; life was indeed nasty, brutish and short back then). My maternal great-grandmother, only slightly older, was still amazed, when I first met her, by people moving inside a little box (the TV set, still black & white). So when a guy tells me that Waze is a life-altering innovation because now he knows in advance how much time he will spend in a traffic jam, I can only shake my head in disbelief.

In summary, part of the stagnation and stasis we are mired in (regardless of how intently a bunch of interested fools may try to convince you of the opposite) derives from the fact that most of the technologies we have been developing since the 70s of the past century are still divergent, and we don’t seem to have the collective willpower (or wits) to make them converge. Which means most of the effort devoted to their further refinement is squandered and lost, while our daily lives remain as before, only a little more cynical and a little more disenchanted (the weight of all those unmet expectations). Are we doomed to trundle along such an unexciting path? Not necessarily (as there is very little that is totally necessary in human history), and I would like to end this post with a (cautious) appeal to hope: the areas of promise and development you hear of in the media (recapitulating: biotechnology, genetic engineering, nanotechnology, Artificial Intelligence, electric cars, “renewable” energy, machine learning, big data, the internet of things, virtualization, industry 4.0) will go through the usual hype cycle and most likely fizzle out and disappoint:


Such is the nature of capitalism and a spent dominant reason. But human ingenuity, even when unfathomable amounts of it are wasted in dead-end alleys with no prospect of producing anything of value (remember, many Nobel-prize-winning-IQ-level guys are spending their professional careers trying to improve the conversion ratio of some obscure advertising algorithm by a fraction of a percentage point, and call that a meaningful life), won’t be subdued forever. Popper made a convincing case against what he termed “the poverty of historicism”, which he identified as the delusion of being able to predict the future by extrapolating from the tendencies of the present. Some true, unexpected, entirely out-of-the-blue innovation will arise in the following decades, probably outside of the stifling environment we have produced for “R+D+I” within corporate or academic behemoths that provide the wrong incentives and the wrong signals about what is worth pursuing. And from that spark, likely of humble origins, the whole cycle of (truly) creative destruction may start anew. But from the current bonfires stoked by greedy firms’ research departments and hapless universities, more focused on publishing than on expanding human knowledge and welfare, I expect little creativity and lots of destruction indeed…

Friday, June 8, 2018

Of Nuclear Energy and Adenoid Glands

The first part of the title (Nuclear Energy) has not much of a future, at least in the advanced economies of the West. Which is a pity, and just shows that we as a species (or is it just the social arrangement we gave ourselves in the last three hundred years?) are not very good at taking decisions collectively, as time and again what may make sense for a minority of individuals, and is vocally advocated by them, ends up curtailing the prospects of the majority (not exactly breaking news, I know). I want to share with my readers (in this post or in a future one) why I think nuclear energy (the industry to which I devoted most of my first university career, and a good deal of my professional life) has no future, and why I think that is a terrible, terrible decision. But before that, I want to undermine my own argument (we contrarians are intellectually masochistic like that) with an example taken from an entirely different field. As usual, it will require a longish detour, so bear with me patiently.

You all know that physicians are among the most admired professionals in our society, a recognition that translates into the very hefty salaries they command (it may vary slightly by medical specialty, but becoming a doctor is, in every country, one of the surest ways towards accumulating great wealth, almost regardless of individual ability or likeability in other areas). Which is only fair, because they have to spend very long years in medical school actually studying and practicing (not like, say, journalism students, who devote almost 100% of their university lives to partying and abusing recreational drugs, also almost universally) AND health is one of the most coveted goods, and people show time and again that they are willing to pay whatever it takes to maintain it, or to recover it if (heaven forbid!) they lose even a tiny fraction of it. Good. Now let’s turn our attention to what it is that those devoted and self-sacrificing students learn during those long years, based on a few recent experiences I had (disclaimer: I may be one of the most medicine-averse human beings currently on Earth; I routinely skip my med check-ups, consistently fail to read the results when I do undergo them, am averse to any kind of pill, drug or concoction -except the ones containing alcohol for internal use, that is- and when I feel physically ill I try to soldier on with good spirits and good ol’ fashioned stoicism… the result is that I’ve not lost a single work day due to illness in the past 25 years… all of which I bring up to say my exposure to medics is really, really limited, and the four little anecdotes I’m about to present may well be not very representative):

·         I’ve always been terribly allergic. Probably a genetic condition, as my mother also is, and most of my brothers are afflicted by the same intense hay fever when spring comes. In my childhood it was so bad I tried to stay indoors most of the time between April and the beginning of June, and even there I went through between four and six boxes of Kleenex and had a conjunctivitis so severe I could barely read. It receded a bit during my university years, but worsened again when I started working, so I did what everybody else does: went to the allergist, who ran the typical test (pricking my forearm in 26 points, and pouring a droplet of pure allergen on each point to watch the funny shapes of the reactions), gave me the typical diagnosis and prescribed the typical treatment: a vaccine with those substances I had shown the highest sensitivity to. Back then, those vaccines were pretty expensive (they were not covered by social security for young, wealthy workers) and a royal pain in the ass: they had to be kept in a fridge, injected once or twice a week for months on end, and they seriously curtailed the possibility of international travel (at least back then you could pass a thermally sealed bag full of glass vials through airport security, as I had to do a number of times). Be that as it may, I endured it for a couple of years, as the damn allergy was making my life really miserable for at least three months every friggin’ year. And the first year it worked like a charm! I had one of the finest springs I could remember! Eyes mostly OK, little rhinitis, little sneezing, I could almost lead a completely normal life! Man, I was elated and in awe at the power of modern medicine! But then the second year came. Probably the worst spring I’ve experienced, ever. My colleagues at work told me to go home and die every single day. My nose was like an open faucet, as were my lacrimal glands (when my conjunctiva was not so swollen it blocked them), and it was almost impossible for me to finish a single sentence without multiple loud sneezes. My head ached horribly because of all the sneezing and blowing my nose… and no medication seemed to have any effect, no matter how many antihistamines I popped. So I went to the doctor to complain, telling him that the vaccine didn’t seem to be having any effect. “Well”, he told me, “you have to understand that this spring we are seeing historically high pollen levels, and so your reaction to them is being very severe… if the next spring is normal you will continue seeing some improvement”. “Wait a minute here, Doc”, I retorted, “so these pollen levels vary, according to how rainy the fall and winter have been and all that, that makes sense, but how much pollen was there last spring?” “Very little indeed”, he answered, “a very dry winter, with a lot of frost and many late hailstorms, caused a spring with one of the lowest pollen levels ever recorded”. Aha! So the vaccine had very little to do with my perceived improvement, I had just gotten lucky with the weather, and all the fuss with the darn injections was almost for naught. I lost my faith in it, never again injected the damn thing, and have never regretted it (if you are interested, my symptoms have greatly improved since then, but allergies seem to recede with age in many cases).

·         As I’ve reported in this same blog, at some point a couple of years ago I tore my left biceps tendon doing moderately heavy deadlifts (as narrated here: I (barely) dodged this bullet). I went through a number of doctors, one of whom, with a somewhat crude sonogram machine, “saw” that the biceps tendon was fine, and many others who clearly saw it was broken and had to be surgically reattached. The second group relied on the images provided by a much more precise, state-of-the-art sonogram, plus a magnetic resonance scan that left them in no doubt my tendon was gone for good. Only of course it wasn’t, and thanks to a series of entirely unintended circumstances (the successful surgeon who had to operate on me was so busy he couldn’t seem to find a spot for my little intervention, and in the meantime I realized my arm was entirely fine) I barely avoided a surgery that would have caused me countless inconveniences (among them, the near certainty of an elbow with less mobility and less stability than the one I was born with), for exactly zero benefit. Is there any difference between the first doctor and the rest? Well, there happens to be one: the first doctor I saw, the one who clearly “saw” my tendon was alright, was a friend of my father-in-law, and examined it as a favor, knowing he would not be the one to operate (and thus would not gain a financial benefit from the intervention), whilst all the others had a strong incentive to “see” an opportunity to intervene. I am not saying they were a bunch of scoundrels wanting to make a quick buck from an unnecessary surgery (well, maybe I’m strongly implying that), I’m just pointing out how difficult it is to value and judge things impartially when you have a clearly identifiable stake in one of the possible outcomes.

·         All three of my sons have had a lot of health problems having to do with the nose and throat: the usual colds, some snoring (more when down with the flu), persistent coughing. Now, I know “a lot” is a very relative term, and I would be surprised if there were any parent out there who thinks that their children are outstandingly healthy and have never given them any trouble. Pediatricians manage to do nicely in a society where there are fewer and fewer children overall for a reason (they have successfully managed to extract many more treatment needs from each individual child, that’s how). My two eldest sons have had their adenoids removed, like almost every other child I know of, as after the second cold (or episode of sore throat) the doctors always look inside their little noses with an endoscope and dictate the need to remove the adenoid glands, as they unfailingly “see” they are clearly “too big”, a definite cause of uncounted problems, entirely unnecessary, and the life of the kids will substantially improve after the removal. I probably could have been more statistically savvy and formally tested such a prediction by recording the number of episodes of rhino-pharyngeal illness experienced by my sons before and after the surgery (a minimal sketch of what such a test could look like follows this list), but I have the strong hunch that the effect would have turned out to be exactly zilch. Indeed, I probably grew savvier no matter what, as recently we took our youngest to the pediatrician (after a prolonged episode of dry coughing that allowed neither us nor him to sleep); she referred us to the otorhinolaryngologist, who used the usual endoscope and, lo and behold, found (“saw”, as clearly as my own doctors had before) that the adenoids of my little one were some of the biggest he had ever seen, and that an adenoidectomy was absolutely recommended. Back to the pediatrician, and she agreed (how could she not, specialists are to general practitioners like the voice of God, they’ve been trained all their lives to defer to them). Well, this time I put my foot down, “specialist schmecialist” said I, and (that’s the magic of it) convinced my wife to wait and see. Obviously, the kid improved all on his own; he has had other episodes (just like his brothers who did go under the knife) but so far seems to be growing perfectly fine. And just today I found this little gem in the NYT: turns out one of the most common surgical procedures may not be worth it ... it’s difficult not to read it as “we have been inflicting pain (or wasting surgeons’ time and anesthetics) on little kids for decades for no apparently valid reason… other than that said surgeons, and the people referring patients to them, have been nicely paid for it”.

·         Finally, a good friend of mine works for big pharma, in a job that, as I jokingly put it, consists essentially of “pushing drugs”, both retail (she visits doctors to “keep them apprised of the latest developments in pharmacology”, said developments consisting entirely of advertising the benefits of the drugs she represents, attested by very expensive clinical trials paid for by her company, and given additional credibility by the different levels of “presents” that fall just short of what would legally constitute bribery) and wholesale (organizing “medical congresses” for doctors and their spouses to publicize the advantages of the medicines they market over those of their competitors, normally in plush settings and comfy locations, so that the most influential among the section of the medical profession prescribing the drug are sufficiently enticed to attend). Big pharma spends a godawful amount on those “direct marketing” activities because it knows they work. Again, I’m not saying that doctors are a bunch of scoundrels who prescribe the medicines peddled by the highest bidder (the one providing them with the most outrageous perks)… only that they “see” more clearly the benefits identified in those unimpeachably scientific clinical trials that happen to be (coincidentally, almost miraculously) presented to them along with the more attractive “gifts”.
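For the statistically minded, here is roughly what the before-and-after test alluded to in the third anecdote could look like. It is a minimal sketch in Python, assuming yearly episode counts are available and roughly Poisson-distributed; all the numbers are purely invented placeholders (not real data about my sons), and conditional_rate_test is a helper of my own naming, not a routine from any standard library.

```python
# A minimal sketch of the before-and-after comparison mentioned above.
# All numbers below are hypothetical placeholders, NOT actual patient data,
# and conditional_rate_test is an illustrative helper, not a library routine.

from math import comb

def conditional_rate_test(count_before: int, years_before: float,
                          count_after: int, years_after: float) -> float:
    """Exact conditional test comparing two Poisson rates.

    If the underlying episode rate is unchanged by the surgery, then,
    conditional on the total number of episodes, the 'before' count follows a
    binomial distribution with p = years_before / (years_before + years_after).
    Returns the probability of a 'before' count at least as large as the one
    observed, i.e. a one-sided p-value for "the rate really dropped afterwards".
    """
    total = count_before + count_after
    p = years_before / (years_before + years_after)
    return sum(comb(total, k) * p ** k * (1 - p) ** (total - k)
               for k in range(count_before, total + 1))

# Hypothetical example: 9 episodes in the two years before the operation,
# 8 episodes in the two years after it.
before, t_before = 9, 2.0
after, t_after = 8, 2.0

print(f"Rate before: {before / t_before:.1f} episodes per year")
print(f"Rate after:  {after / t_after:.1f} episodes per year")
print(f"One-sided p-value: {conditional_rate_test(before, t_before, after, t_after):.2f}")
```

With made-up numbers like those, the p-value comes out around 0.5, i.e. no evidence at all that anything changed, which is precisely the “zilch” I suspect the real figures would have shown.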

To put things in perspective, I do not mean to say that all physicians are a bunch of sellouts and greedy bastards, in it for the money and willing to invent any imaginary illness (like ADHD, but that would be the subject of another post, probably much more critical and vitriolic than this one) to profit from the suffering and anguish of their unsuspecting, gullible patients, or that modern medicine is a sham, and that most drugs and surgeries are at best placebos (and at worst actually detrimental) with only the barest connection with the well-being of the ill and infirm. Remember, medicine in my playbook is not science but a form of engineering, and being an engineer myself I feel a lot of respect for its accomplished practitioners, and cannot think of a higher praise than recognizing the empirical, practical nature of their profession. What I am saying is that a healthy dose of skepticism is more than warranted, because all their training, from their early years of study to their professional career, instills in them a certainty about what should be, at the end of the day, subject to interpretation and nuance. It teaches them to “see” what is not much more than a random blot of ink (I’ve many times equated sonograms with Rorschach tests, and the same could be applied, it seems, to endoscope images, or to the results of clinical trials comparing the efficacy of different drugs, and to God knows how many more “pieces of evidence” used to diagnose and determine the best course of medical action), and to unfailingly err towards the option that will make them richer… which is not necessarily the one most beneficial for the patient (and I know, there is a deontological code explicitly forbidding doing harm, and counseling caution and abstention from those actions that the natural course of the illness may make unnecessary… a deontological code that cannot compete with six years of schooling plus countless examples of teachers and mentors to the contrary).

Now, what has all of this to do with nuclear energy? Simple: if you ask a physician whether she is aware of the unholy interests that blind her/ cloud her judgment/ bias her decisions into putting her financial well-being first and only in a distant second place considering dispassionately what is best for her patients, she will, I am sure, vehemently deny such a monstrous accusation, and defend her good faith, selflessness, objectivity and devotion to the ill and the infirm (the very nice profit she makes, again in one of the highest-paid professions absolutely everywhere, being just a happy coincidence!). And she would be telling the absolute truth in such a defense. Doctors are indeed incredibly selfless and hardworking for what they have been taught is the greater good of their patients. They make tremendous sacrifices and work super-intense and super-long hours (in most countries with a “socialized medicine”, typically split between the national health service and private practice) to help the ill and infirm recover their health, which is something incredibly valuable.

You surely see where this is going: if you similarly ask a nuclear engineer (like myself) whether he is aware of the unholy interests that blind him/ cloud his judgment/ bias his decisions into putting his financial well-being first and only in a distant second place considering dispassionately what is best for society in general, he will, I am sure (indeed, I’m going to do exactly that!), vehemently deny such a monstrous accusation, and defend his (my) good faith, selflessness and devotion to the power-hungry and needy, the citizens of every stripe and nation who need a reliable supply of energy that does not spew carbon dioxide into the atmosphere. Nuclear engineers, although not as well paid as doctors, are incredibly selfless (all this obsession with safety and security, with considering every potential failure that can be imagined and not sparing a dime to ensure such a failure does not end up releasing harmful radionuclides into the open where they could adversely affect the public) and hardworking for what they have been taught is the greatest good of society in general. They make tremendous sacrifices and work super-intense and super-long hours (well, again probably not as many as your average doctor), etc.

Which means that, when I defend nuclear energy (definitely in a future post, as this one has grown beyond my wildest expectations already) and argue why its more than probable demise is a net social loss, you should take my arguments with a grain of salt…