UFOs And Extraterrestrials – Interstellar Artificial Intelligence?

Today I found myself watching a Sam Harris interview, his most recent appearance on (and the first episode of the newest season of) the Rubin Report. 

The first part of the program was a discussion (well, an explanation of sorts) of free will. The context was that Dave Rubin had previously seen a discussion of the topic between Sam Harris and Daniel Dennett, and found himself leaning toward Dennett’s position. Rubin wanted Sam to, more or less, bring him over to his side (I gather, anyway).

I am not really familiar with Dennett’s position, and the same goes for Harris’s, even after listening to the explanation (it’s quite over my head, for the most part). But I did hear a few things that made me skeptical. Fortunately (likely unlike Rubin, and many others), I have some fairly smart people to bounce this stuff off of, even if I don’t fully grasp it, to see which of the two (Dennett or Harris) is closer to the mark. Or if possibly BOTH are out to lunch.

It wouldn’t be the first time I spotted something and turned out to be right, at times on both sides of an argument.

Also, this may be a bit unfair, but why does it always seem like Sam Harris is grappling with ideas he hardly grasps when he speaks?!

I sound like that when someone asks my opinion on something unfamiliar and fairly complex. Since my glorified monkey membrane is hunting for answers, the flow of verbal data is a bit delayed. Though I expect it from someone like me, I don’t really expect it from someone who is fairly well read on a topic.

But that is just me. Maybe it’s just his own way of speaking, a characteristic of sorts. Like how Lawrence Krauss almost always comes across as an arrogant twat.

Then again, who am I to talk! 🙂

Either way, the interview then went in the direction of spirituality for a bit (I have no opinion really), and then to artificial intelligence. Though this is not normally a topic I have put much thought (or interest) into, Harris touched off an interesting thought process when mentioning (as part of a hypothetical situation) that humanity could be wiped out by advanced and potentially conscious AI. We may be the “bugs in the windshield” to these new God-like (?) beings, wiped out as the machines take off in their own life cycle of sorts.

This got me thinking about various extraterrestrial phenomena that have (allegedly) been observed over the years. First off, I do not discount the bullshit factor, the human error factor, or other non-mysterious possibilities. But if we play with a hypothetical scenario in which at least some of the sightings have merit, this gets interesting.

When it comes to UFO sightings, they tend to involve futuristic-looking, extremely fast, extremely unique, extreme-performing craft: craft capable of feats that no man-made machine (presumably) could even hope to achieve.

When it comes to the operators and sources of these craft, they are typically hypothesized to be some form of possibly sentient, certainly intelligent life form, with origins that could be literally anywhere in the universe (not just the observable universe), and with intentions that could range from curiosity and interest to colonization and/or resource exploitation. Who knows.

The typical assumption (at least that I have heard) seems to be that extraterrestrials/aliens are some well-advanced life form or another. It’s a sound hypothesis, considering the example we have in ourselves: an advanced life form, albeit barely so in the grand scheme of things, and not at ALL when our tech is compared to the apparently demonstrated abilities of these extraterrestrials.

But one wonders . . . Could some of the things we see be a product of some form of artificial intelligence? Could it be a combination of life and machine, or just machine? And if the latter, can this be called a life form?

Which then brings the obvious questions: where do the machines come from? What designed them and (to use a common expression) brought them to life? Did the designers succumb to some form of self-induced extinction (possibly machine-driven, or possibly stupidity-driven, like our own path), or are they still kicking?

In reality, that thought experiment is likely never going to get any more closure than the God enigma. All those questions will likely never be answered. And I am comfortable with that. It’s all a bit far-fetched anyway.

That said, this whole thought process brought another one to mind as well: the scenario of depopulation by necessity.

Most scenarios involving man-made artificial intelligence gone wrong tend to involve the machines eventually turning on us. We create machines to cater to our every whim and program them to put us above all else, until some disastrous sequence turns them into death machines. In Hollywood, someone kicks robot ass and saves us all. In academic and layman hypotheses, humanity is rendered either enslaved or extinct.

But I see another possibility, one which is arguably just as scary, and just as probable (given the same framework from which the other hypotheses are built).

The machines/robots follow most of the same patterns as in other scenarios, eventually becoming arguably more powerful than us by way of self-driven evolution (?). They become the stereotypical Hollywood future-bot.

But contrary to the typical ending (man vs. machine), the machines become guardians and preservers of humanity. The problem with that is what likely has to happen in order for humanity to be preserved.

When one thinks about the laws of robotics, Asimov’s Three Laws likely come to mind:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Though they are referenced extensively all over the place, these laws bug me in their simplicity. By that, I mean that they really only work for individual units, as opposed to some form of artificial intelligence collective. That comes to matter if you scale up from individual units making everyday calculations based on real-time data to collectives applying similar rules to humanity and macro-level decisions as a whole.

An ideal world for human life would pose no problem. However, in a world of increasing constraints (population size vs. available resources), this conundrum will come up.

We know how unreceptive many people are to making drastic changes to their comfortable status quo. Even when the changes could help preserve human life for upcoming generations, often no action is taken.

Now, enter artificial intelligence, and the First Law of robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

I am a robot collective of sorts, and I look around at all the problems plaguing humanity. Though many varieties of problem exist, I find that a thread binding them all is overpopulation. If no action is taken, the humans will likely be wiped out by their own ravenously enormous resource footprint. The only recourse for the overall preservation of the species is a cull: reduce the numbers until their footprint is less self-destructive, and keep those numbers stable.
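To make that scaling problem concrete, here is a toy sketch (in Python, with made-up numbers and hypothetical function names, nothing resembling a real AI system) of how the very same First Law can forbid an action at the level of an individual unit while appearing to permit it at the level of a collective:

```python
# A toy sketch for this thought experiment only: the same First Law
# read at two different scales. All numbers are invented assumptions.

def unit_first_law(action_harms_a_human: bool) -> bool:
    """Unit-level reading: never take an action that injures a human."""
    return not action_harms_a_human

def collective_first_law(harmed_by_action: int, harmed_by_inaction: int) -> bool:
    """Collective-level reading: 'through inaction, allow a human being
    to come to harm' becomes a species-scale harm-minimization problem."""
    return harmed_by_action < harmed_by_inaction

# A cull clearly fails the unit-level rule . . .
print(unit_first_law(action_harms_a_human=True))  # False: forbidden

# . . . but if the collective projects that doing nothing costs more
# lives than acting, the scaled-up law appears to permit (even demand)
# the very same action.
print(collective_first_law(harmed_by_action=1_000_000,
                           harmed_by_inaction=7_000_000))  # True: permitted
```

The wording of the law never changes; only the scope of the harm calculation does. That is exactly the simplicity that bugs me.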

To be perfectly fair, it does not take artificial intelligence to come to this conclusion. Many already have, though such conclusions are unpopular (to say the very least). But even if some of our most intelligent minds come to such a drastic conclusion, this doesn’t mean we will acknowledge the knowledge or heed the warning. When it comes to issues like fossil-fuel-driven pollution, this is fortunate for the industry and its beneficiaries. As for the human overpopulation problem, it’s fortunate for, well . . . everyone.

But back to the thought experiment.

If we ever find ourselves at a point of having AI that is equal to or beyond our collective faculties, we may decide to put the future in the hands of the machines. It may not even be a fully conscious decision; the technology may just become more and more pervasive, to the point of being almost unavoidable, much like how the internet age in which we live has progressed over the years: from a toy of the wealthy and the intellectual to an essential staple of modern existence.

We worry about the consequences of selling out our lives to a bunch of self-aware machines, because they may turn on us eventually, possibly as a baby step toward some kind of future artificial intelligence civilization. But it seems that few consider the ramifications of AI decisions that may be for our own good.

I suppose that these ramifications may depend on just how sentient or conscious the technology is. For example, if this technology ends up wired and emotional like many of us, we may not have much to worry about. However, if the technology tends to be more psychopathic in nature (it can reason well, but gives little or no attention to empathy), that is a whole new ball game. Though both could have scary consequences, the consequences are much worse without empathy. While an empathetic AI may mandate resource-consumption cuts for all humans (we would see a serious drop in our standard of living, but we would keep on living! The goal), an unempathetic AI may come up with something . . . Hitler-esque.

In all honesty, I have doubts we will get anywhere even NEAR having to face such a reality, because I suspect that we may well succumb to one or more of the problems that such AI could help alleviate LONG before we progress to having the technology at our disposal. And even if we do not totally succumb, it’s doubtful that the research will ever be completed. It takes a lot of people to maintain this normal that we live in.

Millions.

A massive population drop-off would mean not only restarting the machines with possibly a fraction of the former labor available, but also a potentially lengthy restoration period (depending on how degraded the infrastructure becomes due to neglect and, of course, post-apocalyptic scavenging). Basically, when it all goes offline entirely, it likely will never come back up. Maybe in pockets, but certainly not to anything comparable to today.

This is the reason I hadn’t put much thought into artificial intelligence (until now, anyway): because I view it as a pipe dream. Considering the serious problems we are about to run into as a species in the coming years (hardly decades anymore, at this point!), looking to AI as a solution is almost akin to looking to God, or to the statues (in the case of the past Easter Island civilization). And on the flip side, looking to AI as a source of demise is at best far-fetched, and at worst fatally distracting.

Yes, there are all the various problems outlined before, not to mention new ones we don’t (or can’t) anticipate. But there is also the fact that . . . current technology is scary enough. Our comfortable existence is in no small part because of electricity and the internet. Mess with those, and things could get interesting REAL fast. And this is not even considering how many critical systems are accessible online, openly vulnerable.

Artificial intelligence makes for an interesting thought experiment, possibly a good film or book. But it’s hardly representative of any attainable reality.
