Friday, April 22, 2011

Transhumanism: "If It Sounds Too Good to be True, It Probably IS!"

Today, I came across an article by Patrick D. Hopkins, A Moral Vision For Transhumanism. Overall, I found nothing to seriously disagree with, even if it is a bit utopian.  To be fair, I don't think Hopkins intended to predict how we will apply transhumanist technologies; he simply asserts what transhumanism ought to stand for, regardless of how and to what ends Homo sapiens sapiens will actually use these technologies once they're invented.

In this article, Hopkins argues convincingly that, for all our ideas of and wishes for utopia, Homo sapiens are simply not mentally and emotionally equipped to handle utopian states of affairs. Presumably, in utopia, we'd have few to no unmet needs.  In that case, utopia would not be a state of eternal bliss but a state of boredom.  We humans have evolved to like struggle, perhaps even conflict.  Therefore, living in the very utopia we desire would require reengineering human nature itself (presumably through some combination of genetic and neurological engineering, perhaps even adding artificial intelligence to our brains).

His next claim also seems true to human nature: without changing our basic mentality, we'd not be transhuman but merely superhuman, possessing a greater abundance of the same physical and mental abilities we so often blindly admire, but with the same instincts and emotions that cause so much conflict in this world.  While I definitely agree with Hopkins here, I think this is by no means the only objection most people would have to transhumanism. I find it much more likely that people will oppose it because humans love "human nature" just the way it is, so much so that the great majority will be unwilling to embrace the change.

People will give any number of objections to changing human nature even were it in our power to do so.  Primarily, they fall into one of the following categories:

(a) stick-in-the-mud conservatism ("That's just the way we are"; "If it ain't broke, don't fix it"; etc.)
(b) fear of the unknown (not entirely unreasonable, given that if we can control the minutest details of our genes and brains, the possibilities of transhumanism are limited only by the human imagination, and later the transhuman one)
(c) egotism (too much pride in human nature as it is)
(d) petty personal distaste at the possibility of our nature becoming radically different ("We won't be human any more!")
(e) the established "winners" among Homo sapiens fearing they'll lose power (again, given human nature's desire for power, glory, and control even in the most "civilized" societies, this is not a groundless worry)

Excuse the anthropomorphism, but it makes the idea easier to get across. Nature has been genetically engineering Homo sapiens for a few hundred thousand years already, and in ways we don't like (e.g., we're often violent, dishonest, belligerent, and bigoted, judging others by their superficial traits while being hypocritical about it).  If we develop the technology and techniques to purge these traits from our nature, then why shouldn't we use them?  If anything, failing to purge these traits would be a dereliction of our duty to humanity.

My Opposition to Transhumanism on Practical Grounds

Yet, at the same time, I generally oppose transhumanist schemes even if they prove to yield only ten percent of the results their most ardent boosters claim.  This is because the power to change the nature of life itself is so great that any mishap or misuse is likely to cause a cataclysmic catastrophe for at least a large segment of society. In fact, given the potential of transhumanist technologies to change human nature itself, I would say that safe use of them requires human nature itself to change before they are used, which is putting the cart before the horse.  Given the human drive for power and domination at all costs, plus our natural contempt and personal distaste for anyone strikingly different from ourselves, I find it far more likely that reengineering human nature will make our superhuman descendants, on average, even more despicable than we are now, or at least make this world even more unpleasant to live in. At the very least, it will give many of our post-human descendants enhanced abilities without empathy and compassion commensurate with the power they possess. In that case, we can't simply hope that the "good post-humans" will successfully defend mere humans and/or less-developed post-humans like some superhero; for unlike in the movies or graphic novels, there's no guarantee the good guys will win in the end.

In fact, one can very well argue that our technology is already too powerful for our present level of compassion or empathy for others. After all, were our empathy and compassion commensurate with our technology, then why do we still - in this day and age - have a world that requires soldiers, police, and social workers, just to name three?  This is especially true for societies as highly educated as 21st century Earth.  As explained above, it's highly unlikely we'll use this technology to purge the worst of humanity's baser instincts from ourselves - which means that a post-human world can easily be a lot worse than this one. Continuing the graphic novel analogy, imagine a super-villain with the raw strength of a gorilla and the reflexes of a cat. Then imagine he or she has more intelligence than any of us currently do - enough to make the proverbial Einstein seem to have the IQ of a rodent by comparison. Consider how easily he or she could dominate you, or even the world. Now consider a whole species of such people.  Therefore, if we don't change the nature of these superhumans before we give them superpowers - namely, by purging them of the most destructive and self-centered tendencies we find in our present selves - how much chance would Homo sapiens have in a battle with them?
It may well be the case that the more high-minded, idealistic segment of humanity will change themselves, or more likely gene-engineer their children, into being truly transhuman rather than merely superhuman.  Even so, the fact remains that even idealistic-sounding liberal movements take time to seep into the mass consciousness.  The mental and emotional ideas and memes required to transhumanize, as opposed to superhumanize, Homo sapiens will likely take at least a full century to spread, very possibly a few.  During that time, there will undoubtedly be people using the technology, theoretically usable for transhumanization, merely to superhumanize their offspring - essentially creating either quasi-supervillains or quasi-superheroes (the latter no doubt created to deter the supervillains).  In short, it'll be a kind of arms race based on genetic or other kinds of enhancements. This will very likely create a worst-of-both-worlds scenario: enhancement of physical strength, agility, and mental cunning without any improvement in the being's empathy, sympathy, open-mindedness, tolerance, or compassion.  This is exactly the situation we risk if we don't enhance our compassion and concern for others before we genetically and mentally augment ourselves.

Unfortunately, for my stated reasons, it seems much more realistic to expect that the advent of the ability to reengineer ourselves in all aspects will lead to dystopia rather than utopia, making the world an even worse place than it currently is.  For these reasons, I cannot help but be concerned about humanity's future; history shows that anything humans can invent will be invented and used.  It is also one reason I don't think humanity will exist for much more than a few more centuries.


Bazompora said...

That's the thing, isn't it: technological progress, for the better or the more likely worse, cannot realistically be prevented. Despite all the unimaginable evils it could unleash, this box of Pandora also holds hope: the eventual possibility to deactivate the craving for offspring and other behaviour in allegiance to genes. I have to admit that I'm more interested in seeing what comes out than not, for, with the current human condition, we have nothing to lose that we won't lose eventually. Yet, even without further technological advance, worse fates are already unfolding for the unfortunate majority of mankind (global warming, overpopulation, free trade, the rise of evangelicalism, GMOs, mineral and oil wars, low-intensity genocides with little coverage, ...). If anything, humanity might at least succeed in wiping itself and all future misery out, through the devices of tomorrow.

In the end, I see a strong convergence between antinatalism's striving to convince humans to renounce spreading their misery and mortality, and transhumanism's quest to remove man's misery and mortality, and I don't see why it should be one or the other (in fact, I support both at the same time).

Shadow said...

Your post could not have a better title. And yes, transhumanism has dystopia written all over it.

Great post!

filrabat said...


I agree there are common themes between antinatalism and transhumanism. In practice, as I said, I don't have much real hope for transhumanism. At the very best (and I'm being generous here), it's a noble gamble. Some sufferings could be alleviated or even eliminated using TH tech.

As for the fantasy, Jim has two posts about this matter. The bottom line is that at best successful transhumanism would end in a draw for our post-human descendants (if any).


Thanks. Even if I did make a few editing goofs, I'm still quite proud of this post. Because human nature is the way it is, there are some technologies we simply aren't meant to handle wisely; whether we're already beyond that point is hard to say, though. Still, I'm practically convinced that radically advanced TH tech is one of those sets of technologies (unless we somehow manage to reengineer ourselves to be truly transhuman instead of merely post-human; of course, I'm not holding my breath on that one).