Today, I will again delve into the realm of Artificial Intelligence. A response (rebuttal?) to an argument made by Sam Harris in one of his recent podcasts discussing (among other things) Artificial Intelligence with Kevin Kelly (here is some background).
Also, some Veganism stuff. It will come later, and it is related.
Fortunately, unlike the author of the last article I utilized for commentary (written by one of the founders of Wired), Kevin (interestingly, a founding editor of Wired) is not as sympathetic to the scaremongering as other notable names. Notable being a word I put in italics because, despite many names being well-known contributors on the subject, even I question what exactly they are contributing.
Case in point:
One thing I will say is that I didn’t think I would see the day when I would agree with Mark Zuckerberg about much of anything. But it seems that this topic has made it happen.
In short, Mark thinks that Elon and the like are overreacting to the point of irresponsibility. Elon thinks that Mark . . . is not knowledgeable enough on the subject. Many Elon fanboys posted photos of ointment for Mark (he got BURNED!).
I put my hands to my face and shook my head.
Really Elon? THAT is the card you are going to play?
Don’t get me wrong, Mark and I still aren’t buds, all things considered. Unlike the dystopian realities in the windscreen of folks like Sam Harris and Elon Musk, Mark Zuckerberg’s profit-driven algorithm structures have helped to damage and divide nations the world over (and continue to do so). Indeed, The Facebook is not alone in this. But it was the trailblazer of the concept. And it appears to be doing little more than window-dressing to even acknowledge that a problem exists (let alone tackle it).
Either way, a feud between 2 rich techies who have WAY more intellectual influence than is merited is another subject altogether.
First, back to Sam Harris and the podcast.
He seems not as afraid of humanity being run over due to malice as he is of humanity being run over due to circumstance. Maybe these machines will become SO efficient in their replication that they will allocate ALL resources towards the purpose. Humanity will not be murdered but sacrificed to AI needs.
One of the first things that comes to mind is that these people have GOT to pay more attention to the recent warning from Stephen Hawking. He gives the species 100 years (give or take) before the earth is . . . all washed up. To paraphrase George Carlin, a giant stinking ball of shit.
The Hawking warning comes to mind because his solution (we need to populate Mars or other planets) is questionable to me. Not just from a logistical point of view, but also from an ethical point of view. We know that humans will more than likely screw up any planet we ever inhabit eventually, so is it ethical to keep the process going?
My ethics angle to this question would not be taken well by a great many people (let alone agreed upon). The only person I did get an answer from seemed to default to it being automatically ethical if it enables continued human growth. That seems like a bit of a cop out to me. But it is what it is.
Either way, the only well-known voice to come out against this planetary Brexit movement seemed to be Bill Maher, outlining his reasons in the prologue of his Earth Day 2017 show. Many of which I am more or less in agreement with.
We know the general reasons why Stephen Hawking, Elon Musk, and others want to get us off of earth and into supposedly better places. They range from “it would be something cool to accomplish” to “we have no choice.” But to focus on the latter (or Hawking) side of the issue, what drove us to that point in the first place?
A myriad of factors, really, would be the nuanced answer. But to get right to the point: ourselves. We did it to ourselves.
There are multiple reasons to explore. Climate change, plastic pollution permeation, loss of biodiversity . . . pick your poison. While all are generally drastically different issues, they are all rooted in the same ideology. Human (and later, corporate) growth. While our history is littered with this behavior, it would not culminate until our discovery and subsequent incorporation of fossil fuels into everyday life.
It could be said that this time in history presented humanity with a forked path and a stark choice. Proceed with this new technology with care and caution toward possible ramifications, or go all in. It seems the humans of the time didn’t see (or chose to ignore) the possible future risks, and fully embraced and incorporated it into societies worldwide. The same can be said for any number of technological revolutions that ended up turning into disasters in their own right. From asbestos to DDT, to plastic, to BPA.
Humans are not good at the long game. And really, humans have never been good with the long game. Unfortunately, unlike when there were too few of us to do too much damage, that dynamic has changed now. Not only are our staggering numbers ALONE a strain on the biosphere, all of our modern innovations only add to the mess.
Plastic and other trash now make up a layer of our very own creation, both on the surface and the bottom of the oceans. Large cargo vessels and tankers (along with seismic oil and gas exploration) have dramatically raised background noise levels in every ocean. There is literally no part of the ocean where we cannot be seen or heard.
Our industrialization period has forever changed whole continents. From forests and wetlands to farmland and concrete. Drive from Winnipeg, Manitoba to Key Largo, Florida, and you will see pretty much the same thing. An endless expanse of farmland or concrete. Drive from Winnipeg to Vancouver, and you will see more or less the same thing. Except where the landscape made breaking the land impractical or impossible, it’s human-controlled.
Over a single century (or half a century, really), humans have burned fossil fuels and released millions of years’ worth of carbon into the atmosphere. A rate of increase that far surpasses pretty much any past event. We still don’t know what all the long-term effects of this massive glut of CO2 will be, but all credible contributors agree that it will NOT be pretty. Today’s flooding storm surge is tomorrow’s baseline sea level. At MINIMUM.
Then there is space. We have now launched into orbit and abandoned enough junk for it to be a legitimate problem for people and equipment operating in the great unknown. And our fingerprints are not just in near earth orbit. We have also left stuff behind on the moon. And we have shot some vehicles out even further into the solar system, destined to end up who knows where.
It seems that no matter what endless expanse of empty space humans come into contact with, they find a way to clutter it up. A recent example of this, and a problem for urbanites everywhere (whether they realize it or not), is wifi interference and overlap. A few years back, few would even think about this, since wifi was still in its infancy. But now, with more wireless devices than ever before in our possession, and every ISP selling and renting routers to service these devices, the available spectrum is often filled to capacity.
Networks and devices sharing a channel can more or less negotiate with one another for airtime (with lag increasing according to the number of active devices in the area on that channel). However, traffic on adjacent, overlapping channels is seen as noise, hindering (if not entirely drowning out) wifi activity. Imagine trying to have a conversation in a crowded room. The louder the background din gets, the louder everyone else gets.
Since these ISP-issued routers often set up shop anywhere on the 2.4 GHz spectrum (not just on the recommended channels 1, 6 or 11), there is often lots of overlap. Which causes noise levels that actively reduce the already finite amount of bandwidth available to area devices.
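For the curious, the overlap is easy to see with a little arithmetic. Here is a rough sketch (my own illustration, not anything from the podcast) using the usual figures for the 2.4 GHz band: channel centers sit 5 MHz apart starting at 2412 MHz, while each classic 802.11b/g channel occupies roughly 22 MHz of spectrum. Any two channels fewer than five numbers apart therefore bleed into each other.

```python
# Rough model of 2.4 GHz wifi channel overlap (classic 22 MHz-wide channels).
# Assumed figures: centers 5 MHz apart starting at 2412 MHz (channels 1-13).

def center_mhz(channel: int) -> int:
    """Center frequency, in MHz, of a 2.4 GHz channel (1-13)."""
    return 2412 + 5 * (channel - 1)

def overlaps(a: int, b: int, width_mhz: int = 22) -> bool:
    """Two channels interfere when their occupied bands intersect."""
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

def overlapping_channels(channel: int) -> list[int]:
    """Every other channel that bleeds into the given one."""
    return [ch for ch in range(1, 14) if ch != channel and overlaps(channel, ch)]

# The recommended trio (1, 6, 11) stays out of each other's way;
# nearly everything else steps on its neighbours.
print(overlaps(1, 6))            # False -- 25 MHz apart, no overlap
print(overlaps(3, 5))            # True  -- only 10 MHz apart
print(overlapping_channels(6))   # [2, 3, 4, 5, 7, 8, 9, 10]
```

Which is exactly why a router that parks itself on, say, channel 3 degrades networks on both 1 and 6, rather than politely sharing airtime with either.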
Indeed, a first world problem, and more of an inconvenience than anything (unless all the radiation from the wireless devices is considered, anyway). But it is yet another perfect example of humans managing to completely clutter up a seemingly endless expanse due to the unplanned and largely unregulated embrace of new technology.
Someone like Sam Harris could use this argument in the context of Artificial Intelligence (our total and complete embrace of largely untested technology has not always ended well). However, I have doubts that many got that far, since dystopian fear of AI tends to write the whole issue off.
It’s not that the machines will one day turn on us either. It is more, the machines will become so efficient in replication that they will develop methods to essentially utilize ALL resources towards that goal. Such resources may include us, or all that we rely upon.
An example given is an AI robot that has only one goal . . . to create paperclips. All it does is hone and fine-tune the art of creating paperclips. The perceived risk is that this AI robot may become so good and efficient that it may develop ways to turn literally ANYTHING into paperclips. Yes, including us and all that we hold dear.
Indeed, the example isn’t the best (it needed to be dumbed down for a layman, but really?! Paperclips?!). But it gets the job done.
Either way, due to this fear, Harris figures it necessary that so-called human common sense be programmed into these machines, so as to ensure this result is never realized. On one hand, it can’t hurt.
But on the other hand . . . human common sense?!
To STOP the machines from consuming and destroying everything in sight, all for the goal of endless replication?
HUMAN common sense is going to achieve that?
To put it short and sweet, if humans had common sense, people like Stephen Hawking would not be telling the world that the species NEEDS to find a new planet. Really, one could even go as far as saying that common sense dictates the exact opposite of Hawking’s wishes. Let this bad strain remain isolated to one planet.
One could say that. It’s certainly an interesting usage of the topic of ethics. Which is more unethical?
Keeping humanity on one planet, for better or worse? Or allowing humanity to spread out in the universe?
Either way, suffice to say, I am highly critical of the notion that our so-called common sense is of any use to AI robots and bots. In my mind, the only difference between AI gone wrong and humanity is how they spread . . . reproduction VS replication. We’re certainly not a good example, as far as stewards of the earth are concerned.
But all this is not to slam the notion. More to highlight the arrogance of insisting that the obviously flawed (if not flat-out non-existent) common sense of humans is an important trait for future AI technology. If anything, it seems things could go WAY in the other direction. Given the direction we are headed at present, it seems a wager worth betting on.
That said, it is not all bad. The benefit of knowing one’s own flaws is, in fact, knowing one’s own flaws. We can try to program these problems out to the best of our ability.
Figuring out goals for future AI is not an unreasonable conversation, however. It is an important conversation to have, even though people like Elon Musk like to disrupt it. Seemingly based on a strawman.
The machines are coming . . . and eventually, they will TAKE OVER OUR LIVES! Not scared? Well, CLEARLY you have not given this much thought!
But enough about arrogant fools with a giant platform. You get the point. This frame of mind is harmful to the conversation, not helpful.
Since humans are in the driver’s seat, it seems apparent that those in charge will design and program these things not just to be benign to humanity, but also to be helpful. How exactly that would work obviously remains to be seen. But it seems, dare I say, common sense.
Fine, maybe not. It is more the conclusion that one comes to when they use the past behavior of humans as a predictor of future outcomes. Humans are selfish and self-serving creatures, utilizing pretty much every resource we have available to us towards this goal. It seems apparent then that new technology birthed by us would follow this same pattern. Be it conscious, sentient, or not.
I admit I have to be careful here. I don’t have a good grasp of either consciousness OR sentience, so I must watch my usage of the terms. Although from what I see, few (if any) in the Artificial Intelligence conversation have made much headway on that front either.
To go back where I essentially left off (what could AI mean for us?), it could go many ways. I explored this a bit in a previous post on the subject, but I have even more to add now.
One should not just assume that AI will be inherently our enemy, or could become so due to some unforeseen development or update (to use a technical term). It can’t and shouldn’t be ruled out. But ending the conversation here is akin to throwing out the baby with the bathwater.
Humanity is good at developing tools. It’s how we got as far as we have today, and it’s what will drive whatever future we have left. So rather than viewing what is essentially our future technological development as a foe, we should try to see it as a tool. Something that has the potential to bring a whole new level of intellectual prowess to humanity’s biggest problems and enigmas (let alone desires).
The first thing that comes to mind is something from my other piece on AI, which pondered whether so-called UFOs and extraterrestrials were a form of AI, developed by some other life form elsewhere. Looking back, I wrote that piece under the assumption that these beings must have run over their creators in order to reach the technological heights that they obviously did. I was taking cues from Sam Harris, in that it was a previous episode of the Waking Up podcast that inspired my thoughts on the piece.
Despite starting there, however, it occurs to me that annihilation of the AI’s origin species is not necessary. Rather, the super-developed Artificial Intelligence may, in fact, serve as a tool for them. A tool that accomplishes feats that may not otherwise ever be possible. For example, the ability to explore far beyond whatever their observable universe is. Not to mention possibly enabling the origin species to come along for the ride.
Looking closer to home, to the problems facing the future of humanity and the earth itself, this is another area where AI could be of more help than harm. For example, reversing climate change by developing a way to scrub (and put to use!) excess carbon in the atmosphere. Or developing viable means of scrubbing plastic pollution of all sizes and types from the world’s oceanic gyres (and again, finding a use for it). If the intelligence potential is close to (if not) infinite beyond the so-called singularity, then so too are the possibilities.
But even Artificial Intelligence that is on our side is not beyond issue, even if the issue is only as perceived by us.
One example is our current habits of resource consumption (among other things). We currently consume WAY more than is available to us, to the point of taking from future generations. Every year at about this time (August), an article is released telling us that we’re past that point. Before the back-to-school and holiday rushes have even begun! Either way, it will not take long for Artificial Intelligence to detect this and, obviously, follow the problem all the way to its conclusion (bye-bye Homo sapiens!).
If a part of their programming or goal is the safety of the species, they could either recommend drastic action or just force it upon us. Essentially, for the good of the collective that is humanity, all may be forced into a more limited life of consumption than they are used to.
Or to up the ante a few notches, let’s consider the overpopulation conundrum.
At current, our population is WAY beyond the static carrying capacity of the planet. But it doesn’t much matter (at least in the short term) due to fossil fuels and other technologies extending the carrying capacity. We already know this house of cards will eventually topple, so of course, the machines will also know this.
Again, the AI does the calculations and concludes that without a modest to drastic reduction in either births or population numbers, the species is in trouble. Our numbers are either close to or beyond the maximum allowable for the survival of the species, so something has to be done to keep extinction at bay.
Disallowing children, despite being its own hot potato, is arguably the lesser solution (when compared to being forced to essentially cull the herd).
On that note . . . imagine either being one of the deciders, or having to accept the AI’s decision on the matter. No matter how you slice it, things will not be pretty. People will (rightfully, really) hate and fear the machines.
And yet, at its core, is the well-being of the origin species. Humanity has proven unwilling to face the biggest decisions even at the expense of its own survival. So if some external force (or intelligence) has to do it for us, is this really a bad thing?
Interestingly, questions and scenarios like this (brought to my mind by topics like Artificial Intelligence, Autonomous Vehicles, and Veganism, oddly enough) have fundamentally changed my view of ethics and morality.
For example, the idea of some machine mandating or forcing a moratorium on human population growth (or worse, a cull of the population) would be seen as automatically evil (and thus immoral and unethical) by many, no matter what the circumstances. Even if the reasons are based on cold hard logic (too many people = too much resource consumption = no (or very few) people).
As for Veganism (since it can also be tied to this conversation, oddly enough), a common argument for is pain and suffering. This is more often than not buttressed by the terrible status quo that is mass factory farming in the US and elsewhere. There is a climate change component as well. But it’s primarily based on animal welfare.
My answer to that is to assert that the choice to eat (or not eat) meat has little to do with ethics. Even though the status quo is far from optimal, it does not have to be, and could be changed. In fact, compared to the suffering endured by the prey of many other species, humans have developed much less painful methods of slaughter. Though humans do not have to eat meat, we evolved (like many other animals) with such protein in our diets. As such, it’s hardly unethical to engage in what is as natural an activity as drinking water. One can use the climate change argument to attach ethics to the conversation. But even that is a stretch, since something as normal as driving a car or heating (and cooling) your house could be turned on you. Not to mention that nuts and kale also have to be transported to market (and we’re not running EV transport trucks yet. Though I doubt their debut is far down the road).
If anything, framing this on ethics and morality (people who eat meat are unethical and immoral!) is doing damage to the cause of Veganism. Aside from inviting people like me to retort their rhetoric (a minority), it turns people off (the majority). While it may be seen as an excuse or burying one’s head in the sand, how exactly is that helping the afflicted animals? It’s not.
If anything, using the ethics and morality arguments to back a Vegan stance is unethical and immoral. If the tactics employed are resulting in a net negative in terms of action taken towards helping afflicted animals, then I don’t think it a ridiculous statement. It’s just an observation.
Here is another observation. PETA is inherently anti-Vegan.
I didn’t think I would ever find myself reading that sentence (let alone writing it).
One may wonder where that came from. How a piece about Artificial Intelligence ended up criticizing Veganism. The answer is in ethics and morality. Or more, as I alluded to earlier, my fundamental change in how I acknowledge the 2 concepts.
Both are fluid; no 2 people have the same ethics and morals. Most tend to be very human-centric (dare I say, self-serving), to the point of being irrational. As such, they are not inherent.
In a recent conversation, I was asked essentially what would happen if some alien race rounded us all up for some nefarious purpose. Would that be ethical?
The first thing that came to mind was, what does it matter? Like the many people that died at the hands of Adolf Hitler and other crazed leaders, I’m sure that those people saw the actions of the hierarchy as being unethical. Didn’t do them much good though, did it?
Now that I have triggered many into thinking that I am a crazed psychopath, I shall explain myself. I am not a psychopath.
Just a psychopath on demand. In a way.
I don’t walk around treating everything and everyone like shit. I have an ethical/moral code that I follow. If anything, I think that my ethical and moral code would rival that of many of the people that I just triggered. It’s a consequence of being overly analytical of almost every aspect of life. When you see more, the often thoughtless ethical infractions of the faceless populace become crystal clear.
Either way, I think that about wraps this up. Feel free to comment below if you have something to say.