Last month I had a lot of fun finally putting to paper my idea of how a future society (the first truly global, and hopefully truly humane, one) may look (you can find it here: Sunny future, here: Politics of sunny future, here: Economy of sunny future, and finally here, as it was quite long even by my verbosity standards, but describing a whole viable social arrangement is no small potatoes: Loose ends of sunny future). It is a pretty old project I have been engaged with on and off for the last five years, which I intended to use for a "popular" book of mild sci-fi, something in the line of the "new age utopia" supposedly penned by one of the main characters of Martin Amis's The Information (a book I may still some day develop: the main plot and the ending are already written, and all that was missing was a credible background, which I now consider fully fleshed out, or at least sketched with guidelines consistent enough to allow for the fleshing out). For those following this blog regularly, I may come back to this every now and then, as I play with some feature that requires an explanation, or want to expand on some critique of our current society by describing a potential alternative, as that is what the exercise was about from the beginning. Accepting that every society is improvable, it was important for me to spell out what my version of a "better" society would look like, as I was (and still am) a bit tired of so many interwebz warriors talking crap about capitalism, and accusing everybody who doesn't share their radical agenda of being a sellout, without being able to define what they would like to see in its place. What I describe in those posts is what I would like to see.
Of course, the fact that I would like to see it is not by itself a great indicator of its plausibility. We humans have an abysmal record estimating probabilities (as this wonderful post in Slate Star Codex reminds us: On overconfidence), and an even worse one when we let our desires weigh in on the issue. So I do not pretend to believe that such a future is very probable, although I do consider it both viable and more likely than not. Now, what are the big unknowns that may render this little exercise in prediction entirely off the mark? I can think of the following events, each of them with a very low probability but with a potentially huge impact on the final outcome:
· Total nuclear war (not causing the annihilation of the human race, a fear I have always believed to be irrational and overblown, but it may still cause severe societal breakdown in the northern hemisphere all the same, and a return to the inter-state race for survival that puts a premium on reproduction rates sky-high above replacement, with all the intra-societal competition maladies that come with it)
· Limited nuclear war (detonation of a nuclear device in a major Western city, most likely by a stateless actor, with likely retaliation towards the most plausible sponsor of such an action) and the reinforcement of totalitarian tendencies in currently "open" societies, leading to completely "administered" lives in giant macro-states, where the levels of population and economic activity are dictated by centralized bureaucrats
· A scientific breakthrough that enables space travel and the colonization of Mars, the Moon and beyond (and a new demographic push to populate new peripheries and new frontiers, with societies cranking up again the demographic and economic arms race to colonize the new lands, in what would amount to a new imperialist expansion)
· A scientific breakthrough that enables immersive, two-directional virtual reality (VR) with simulated tactile feedback, and a massive retreat of humanity into virtual worlds, once the tiny problem of feeding the bodies we leave behind in this one is solved (which doesn't seem all that difficult: synthetic food is already cheap to produce, and not having to worry about taste and texture would make it even more so)
· The inability of society to adapt to massive climate disruption caused by the continued pumping of greenhouse gases into the atmosphere (worst-case scenario), followed either by the extinction of the human species after famines, riots and total war, or by a retreat into fortified states that survive the turmoil in the remaining habitable zones, probably heavily armed and oriented towards hegemony at any price (thus nationalistic-jingoistic, and in my humble opinion not very conducive to the flourishing of their unlucky subjects)
You may notice I have not included the singularity, the advent of post-humanity (by our merging with machines to enhance our capabilities until we end up becoming a different species), or the wiping out of humanity by an evil (or simply not that fond of us) AI. I just don't believe any of those fancy things is going to happen. Technological progress is going to quietly come to a halt, without much fuss, without fanfare, without anybody much noticing it. Two or three hundred years from now somebody will look back and say, "Wow, for three or four generations we have been living exactly as our parents did, isn't that weird?", but it won't be (in fact, it will be the culmination of the reversion to the historical average). Not that there won't be lots of innovation: people will create wonderful stories, great art (I don't know through which means of expression), daring theories of why we are here and how nature works... it will just not have that much influence on how we live (what we wear, how we build our houses, what we eat, how we cure ourselves when ill). But that is an interesting topic for a separate post (why I think Artificial Intelligence is just not going to happen), although I'll leave my readers with a hint: the interesting question is not whether there will be artificial intelligence fifty years from now (or whether there is extraterrestrial intelligence anywhere in the Universe outside of Earth right now, or whether there ever was); the interesting question is whether there is intelligence (natural or otherwise) here and now. And the fact that we can meaningfully pose such a question, and be uncertain about its answer, tells us a lot about why it is premature to think of "building" something we understand so imperfectly (if at all).