Ethics Of Artificial Intelligence – An Exploration

Today’s topic has been on the backburner since April when I came across the top 9 ethical issues in artificial intelligence as explored by The World Economic Forum.

It seems that I can’t log into any platform without coming across the ethics of AI these days. That is unsurprising, given the microtargeted nature of our online world (past behavior dictates future content). What did surprise me, however, was having the Twitter account associated with this blog get followed by an Ethics-in-AI-oriented NGO (very likely the source of the article that spawned this piece, actually).

In truth, it’s all very . . . questionable. It seems that everyone and their dog is chiming in on the ethics-of-AI conversation, though I am not even sure anyone has mastered the topic yet. Particularly the heads of tech companies with known histories of unethical behavior behind the shiny facade of Silicon Valley grandeur.

Nonetheless, let’s get on with the questions.

https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/

1. Unemployment. What happens after the end of jobs?

The hierarchy of labour is concerned primarily with automation. As we’ve invented ways to automate jobs, we could create room for people to assume more complex roles, moving from the physical work that dominated the pre-industrial globe to the cognitive labour that characterizes strategic and administrative work in our globalized society.

Look at trucking: it currently employs millions of individuals in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade? But on the other hand, if we consider the lower risk of accidents, self-driving trucks seem like an ethical choice. The same scenario could happen to office workers, as well as to the majority of the workforce in developed countries.

This is where we come to the question of how we are going to spend our time. Most people still rely on selling their time to have enough income to sustain themselves and their families. We can only hope that this opportunity will enable people to find meaning in non-labour activities, such as caring for their families, engaging with their communities and learning new ways to contribute to human society.

If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.

That is certainly a rosy way to look at things. If we succeed in the transition, one day we may look at the full-time employment of a human over a lifetime as inhumane. There is just the matter of GETTING there first.  Will society survive?

In exploring the seeming hyperbole of my last sentence, I think we have to define what we mean by a successful transition. If the transition is regulated in the almost-libertarian manner that many world governing entities (in particular, the United States) tend to follow, then not much will change. As with the last technological revolutions of our time, most gains will stay with the shareholders and the CEOs.
The biggest difference is that the vast majority of all workers may well find themselves in dire straits, as opposed to just workers in regions once supported primarily by vibrant (yet niche, and inevitably redundant) industries, or workers displaced by money-saving decisions (such as outsourcing) made by companies.

One potential method of dealing with this potential time bomb (as some experts are calling it) would be some form of Universal Basic Income. Everyone (below a given income bracket?) would receive some regular given amount of money to do with as they please. Presumably, it would be enough to cover living expenses (or at least make a substantial dent in them, anyway).

Though this concept is fairly new to me, I rather like the idea. Aside from helping to avoid a civil collapse into unrest and possible martial law (or in some cases, fascism), we would have a healthier and more vibrant economy. Sitting on riches (or sheltering them in international tax havens) does nothing but increasingly undermine an economy, whereas distribution to the lower brackets tends to benefit economies at every scale. Food stamps are a good example of this.

To go with the last example (food stamps), economies do see benefits from even basic social safety nets. But when people have more than is required just for the basics (what the boomers called disposable income), they spend more.  They buy all manner of items that help to enrich all economies.

Of course, there is the question of how one is going to pay for this. In that respect, I am a lot like Bernie Sanders in saying “TAX THE RICH!”.

It is more than a slogan, however. It is (or it SHOULD BE!) a consequence to make up for the huge impact that their decisions have on the societies and nations in which they do business. In some senses, one could almost say “the societies and nations in which they plunder”.
Up to now, companies (for the most part) have been able to get away with washing their hands of the many consequences of their existence on local areas.
And not just unemployment either!
Consider things like obesity, or the ever-growing problem of plastic waste (and garbage in general). To date, the food industry has faced zero consequences for either epidemic, despite being among the largest contributors to both issues worldwide.

Universal basic income seems not just the logical solution to a coming asteroid, but also a much-deserved form of corporate reparations. Oh yes . . . I went there.

To conclude, people like Elon Musk and the late Stephen Hawking typically cite fears of AI gone rogue as their main concern for the technology going forward. What is far more concerning to me is the anger of mass populations once they find themselves redundant. In a sense, we have already had a bit of a taste of what it looks like when angry (and somewhat ignorant) people find themselves without purpose in the world. Should we do nothing to prepare going forward . . .

There is also much to be said about the pitfalls of the current status quo. Millions of cogs in a giant machine, spending day after day just toiling. Working. Doing something, for some reason.

Well, at least you have a job! That is all we can hope for, right?

Boomers, in particular, love to use this line. Just shut up and do what you are told. Don’t think, just work.

It makes you wonder . . . how many people with gifts that would be beneficial to society spend their lives toiling in menial labor, whilst the whole machine continues its seemingly inevitable march off a cliff?

At this crossroads, where the species will soon run smack dab into more than one wall, isn’t it just logical to have all hands on deck? After all, it’s not just the economy, or life as we know it . . . it’s life itself.

Which will we choose?

Another milestone for the species? Or the evolutionary cul-de-sac?

2. Inequality. How do we distribute the wealth created by machines?

Our economic system is based on compensation for contribution to the economy, often assessed using an hourly wage. The majority of companies are still dependent on hourly work when it comes to products and services. But by using artificial intelligence, a company can drastically cut down on relying on the human workforce, and this means that revenues will go to fewer people. Consequently, individuals who have ownership in AI-driven companies will make all the money.

We are already seeing a widening wealth gap, where start-up founders take home a large portion of the economic surplus they create. In 2014, roughly the same revenues were generated by the three biggest companies in Detroit and the three biggest companies in Silicon Valley … only in Silicon Valley there were 10 times fewer employees.

If we’re truly imagining a post-work society, how do we structure a fair post-labour economy?

I think I have delved into most of the negatives in enough depth already. But it’s worth exploring the positives a little more.

Right now, the habit seems to be to call this the post-work era. I’m not sure that will necessarily be the case. As explored before, without so-called gainful employment taking up so much of people’s time, their energy is available for whatever they want to focus it on. I suspect that this will include new business ventures. Ventures potentially shelved previously because the entrepreneur or the customers (if not both) did not have the time to devote to such an endeavor.

To put it in hopeful economist terms, whose dreams could become a reality in this new paradigm?

This will of course largely depend on the availability of capital to fund such ventures. Though that is a big issue in a paradigm of mass unemployment, it is possible that recent innovations like micro-financing and crowdfunding could help clear this hurdle. Either way, the possibility is there for this to become (or, more accurately, revert back to) an era of small business.

The future could be bright if one plays the cards right.

3. Humanity. How do machines affect our behaviour and interaction?

Artificially intelligent bots are becoming better and better at modelling human conversation and relationships. In 2014, a bot named Eugene Goostman won the Turing Challenge for the first time. In this challenge, human raters used text input to chat with an unknown entity, then guessed whether they had been chatting with a human or a machine. Eugene Goostman fooled more than half of the human raters into thinking they had been talking to a human being.

This milestone is only the start of an age where we will frequently interact with machines as if they are humans; whether in customer service or sales. While humans are limited in the attention and kindness that they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships.

In a sense, I would personally welcome this. Assuming they are user-friendly, I generally prefer dealing with machines to humans (be it at a self-checkout or on the phone with some company). Part of this is due to my introverted nature (leaning towards the extreme end of that spectrum, if I am honest; it’s a bit ironic that my menial labor is in customer service). And part of this reaction is because service-oriented jobs tend to be hell for all but a few brave souls. Having worked my entire career in various segments of the service industry, I’ve learned to hate people. To be fair, my high school experience helped pave the way to this conclusion, but HAVING SAID THAT . . . the general public didn’t do much to correct my misconceptions.
Amusingly, this is a somewhat subdued expression of these feelings. Maturity has tempered me somewhat, even in comparison to some of the earliest posts on this blog.

I understand, however, that personal anecdote is not always a good barometer to go by.

Even outside of my paradigm, I still see this transition as being mostly a good thing. One benefit I can think of is helping the socially challenged, such as myself, practice some social skills in a nonjudgmental environment.

It is also possible that it could go the other way. Less interaction could entail more isolation. Having said that, I suspect that these traits are more associated with the individual (not to mention their finances) than with macro technological innovations. It makes me curious whether those many studies and observations that find many in the post-boomer generations to be socially isolated (in comparison) also take financial constraints into account.

In conclusion, I suspect that the overall impact will likely range somewhere from benign to positive.

Even though not many of us are aware of this, we are already witnesses to how machines can trigger the reward centres in the human brain. Just look at click-bait headlines and video games. These headlines are often optimized with A/B testing, a rudimentary form of algorithmic optimization for content to capture our attention. This and other methods are used to make numerous video and mobile games become addictive. Tech addiction is the new frontier of human dependency.
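The A/B testing the article mentions is mechanically simple. As a rough sketch (the headline variants and click-through rates below are made-up numbers, not real data), serving two variants and shipping whichever earns more clicks looks something like this:

```python
import random

def run_ab_test(rate_a, rate_b, impressions=10_000, seed=42):
    """Simulate serving two headline variants and counting clicks.

    rate_a / rate_b are the (hypothetical) true click-through rates;
    in a real system these are unknown and estimated from the counts.
    """
    rng = random.Random(seed)
    clicks_a = sum(rng.random() < rate_a for _ in range(impressions))
    clicks_b = sum(rng.random() < rate_b for _ in range(impressions))
    # Ship whichever variant earned the higher observed click rate.
    winner = "A" if clicks_a >= clicks_b else "B"
    return winner, clicks_a / impressions, clicks_b / impressions

winner, ctr_a, ctr_b = run_ab_test(rate_a=0.03, rate_b=0.05)
```

Run this loop continuously over headlines, thumbnails, and notification wording, and you get exactly the attention-optimization machine the article is describing.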

This is actually a very much overlooked fact of modern existence that needs more attention: tech addiction. Not only is it a real thing, it is often encouraged by the nature of the industry. Though the human attention span is finite, the options within the software world are infinite. As such, some developers are not above employing questionable means to keep people hooked on their platform.

The unfortunate aspect of this is that some of the most desirable users of these apps are also the least equipped to combat their habit-forming nature . . . children. In today’s world, seeing a teen (or anyone else) addicted to their phone is treated as little more than a joke, but really, there is something to this allegation. Even the heaviest users know it isn’t rational to devote hours of attention to a seemingly benign app used to share photos.

For those that have never considered this before, consider why slot machines in Vegas don’t just stay silent when you win big (or win at all, really). They flash bright lights, they make a ruckus of noise, they sometimes dump shiny coinage all over the place. They get the dopamine pumping and make you feel good.

I likely don’t need to elaborate on the dark side of this psychological trickery in the context of gambling venues. I suspect that we all know (or know of) a person that has fallen into this trap.

But have you ever considered why many of those apps on your device are so pesky? They beep, ping, flash the screen, pop up constantly. If they are not showing personal interactions, then they are showing notifications about what activities friends have recently engaged in on the platform.

Anything to get your attention back on the app.

On the other hand, maybe we can think of a different use for software, which has already become effective at directing human attention and triggering certain actions. When used right, this could evolve into an opportunity to nudge society towards more beneficial behavior. However, in the wrong hands it could prove detrimental.

To comment on the last point, it already is proving detrimental. Noting that, I am not even sure that one could say that the software is even in the wrong hands. After all, it was not inherently designed to be nefarious. It was designed to serve a purpose that benefited the agendas of the designers of said software. Even they more than likely overlooked many of the flaws that have since become apparent.

If it’s an indictment of anything, it’s of what you get when you allow the market too much control over these things. It’s the emerging tech industry showing symptoms of a problem that has plagued American companies for decades . . . lack of regulatory control.
To anyone who disputes that seemingly arbitrary point, I encourage them to show me just ONE instance where an industry has put the well-being of the commons over short-term gains.

I am at a bit of a crossroads. On one hand, all of these tactics of psychological manipulation are more than likely here to stay. As noted by my gambling comparison, they long predated social media. So the author may be correct in seeing a possible positive use for such technologies.

Nonetheless, manipulation is manipulation. No matter who is pulling the strings, there exists an air of dishonesty.

4. Artificial stupidity. How can we guard against mistakes?

Intelligence comes from learning, whether you’re human or machine. Systems usually have a training phase in which they “learn” to detect the right patterns and act according to their input. Once a system is fully trained, it can then go into test phase, where it is hit with more examples and we see how it performs.

Obviously, the training phase cannot cover all possible examples that a system may deal with in the real world. These systems can be fooled in ways that humans wouldn’t be. For example, random dot patterns can lead a machine to “see” things that aren’t there. If we rely on AI to bring us into a new world of labour, security and efficiency, we need to ensure that the machine performs as planned, and that people can’t overpower it to use it for their own ends.
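The train/test gap described above can be illustrated with a deliberately tiny toy model (the "cat"/"dog" numbers below are invented for illustration): a classifier that only ever saw a narrow slice of inputs will still confidently label something wildly outside its training experience.

```python
# A toy nearest-centroid "classifier": training only sees a narrow slice
# of the world, so inputs far outside it still get a confident label.

def train(samples):
    """samples: dict mapping label -> list of 1-D feature values."""
    return {label: sum(vals) / len(vals) for label, vals in samples.items()}

def predict(centroids, x):
    # Pick the label whose centroid is closest to x -- even when x is
    # nothing like anything seen during training (the "fooled" case).
    return min(centroids, key=lambda label: abs(centroids[label] - x))

centroids = train({"cat": [1.0, 1.2, 0.9], "dog": [3.0, 3.1, 2.8]})
in_distribution = predict(centroids, 1.1)       # close to the cat cluster
out_of_distribution = predict(centroids, 1000)  # nothing like either class
```

The out-of-distribution input still comes back labeled "dog", with no indication that the system is guessing. Real adversarial examples exploit exactly this: the model has no notion of "I've never seen anything like this."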

5. Racist robots. How do we eliminate AI bias?

Though artificial intelligence is capable of a speed and capacity of processing that’s far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders when it comes to artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.

We shouldn’t forget that AI systems are created by humans, who can be biased and judgemental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change.

I decided to group these two as one because I have come to see both symptoms as growing from the same root . . . bad data inputs.

Having done a bit of looking into this stuff, despite the bad AI results getting a lot of coverage, one often doesn’t see much attempt at diagnosis. For example, the AI handing out harsher (seemingly racist) risk scores to people of different races was also drawing from a far larger data set than any jury or judge would (such as the person’s home neighborhood, and other seemingly irrelevant information). It’s less a matter of nefarious machines than it is of data contamination.

Of course, this doesn’t make for as splashy of a headline. Or as gripping an article.

Things could take a wrong turn, as far as these machines are concerned. But with clean data, I suspect they may outperform their current human competition in MANY contexts. Bias is somewhat controllable when it comes to inputting data (for example, switching out names for identification numbers when entering criminal records into these systems). However, it is NOT controllable in the context of the human. Nor is it necessarily apparent even to the person themselves that they may well be acting on their biases. Or possibly even on some other seemingly unrelated trigger (“I’m hungry / gotta pee! Can this just be over with already!?”).
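The name-swapping idea above is straightforward to sketch. This is a hypothetical helper (the record fields and names are invented for illustration, not drawn from any real system), replacing identifying fields with opaque IDs before the data reaches a scoring model:

```python
def pseudonymize(records):
    """Replace names with opaque IDs before records reach a model,
    so the model cannot key off the name itself (a crude bias control).
    Returns the scrubbed records plus the name->ID mapping."""
    id_map = {}
    cleaned = []
    for rec in records:
        name = rec["name"]
        if name not in id_map:
            id_map[name] = f"ID-{len(id_map) + 1:04d}"
        scrubbed = dict(rec)          # copy, so the originals are untouched
        scrubbed["name"] = id_map[name]
        cleaned.append(scrubbed)
    return cleaned, id_map

records = [
    {"name": "Alice Smith", "prior_offences": 0},
    {"name": "Bob Jones", "prior_offences": 2},
    {"name": "Alice Smith", "prior_offences": 1},
]
cleaned, id_map = pseudonymize(records)
```

Of course, this only scrubs the obvious field; proxies like home neighborhood can leak the same information right back in, which is exactly the data-contamination problem described above.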

6. Security. How do we keep AI safe from adversaries?

The more powerful a technology becomes, the more can it be used for nefarious reasons as well as good. This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously. Because these fights won’t be fought on the battleground only, cybersecurity will become even more important. After all, we’re dealing with a system that is faster and more capable than us by orders of magnitude.

The author says a system. I feel that it will be more like many systems. We may get to that Star Trek-like future someday, but not in my lifetime.

To start, even our CURRENT public and private data infrastructure systems tend to be woefully unprotected. Judging by the sheer number of companies seemingly caught with their pants down upon finding data breaches, it’s like digital security is an afterthought. When you sign up for everything from a bank account to a retail store loyalty card, you have to hope that digital security is a priority. And even if it is, there are no guarantees!

A good start that can happen TODAY is drafting legislation on the protection of data under an organization’s care. Losing the equivalent of the intimate details of a person’s life (and in some cases, those very details!) has to be more than a “WHOOPSY! We will do better!” type of situation. Identity theft can cause a lot of stress and cost a lot of money, so companies that fail to protect consumer data in every way possible (particularly in cases of negligence) should pay dearly for this breach of trust. A fine large enough to be more than a slap on the wrist. Cover the potential expenses of every potential victim, and then some. Make a statement.

What say you, Elizabeth Warren?

When it comes to private companies in control of public infrastructure, the same should apply. When an attack happens, the horse is already out of the barn, which is why one has to be proactive.
Employ some white hats to test the resiliency of our private and public infrastructure. Issue warnings and set deadlines whilst demanding regular updates on progress made. Then keep at it.
Hit those that miss the deadline without reasonable explanation with fines, and keep on top of things, issuing warnings (and, hopefully less frequently, fines) as issues are found.

As technology progresses both for the public and the private sector, only a staunchly proactive atmosphere such as this can help prevent the hijacking of far more powerful technologies for nefarious purposes.

Being that most of the world is nowhere even close to this . . .

7. Evil genies. How do we protect against unintended consequences?

It’s not just adversaries we have to worry about. What if artificial intelligence itself turned against us? This doesn’t mean by turning “evil” in the way a human might, or the way AI disasters are depicted in Hollywood movies. Rather, we can imagine an advanced AI system as a “genie in a bottle” that can fulfill wishes, but with terrible unforeseen consequences.

In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which the wish was made. Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer – by killing everyone on the planet. The computer would have achieved its goal of “no more cancer” very efficiently, but not in the way humans intended it.

To be fair, the computer doesn’t have to kill all humans on earth, just all the ones with cancer. That would take care of the problem (well, at least temporarily). Call it the most effective health and fitness campaign in the history of the human race.

Move over Joanne & Hal!

Moving on, this is one of the more played up possibilities of this new technological age. Fear of the Machine, of the tables turning. But it’s hard to see this as much more than Hollywood driven fear mongering.

Consider the eradicate cancer request that in this hypothetical, went very wrong. What if instead of becoming the digital adaptation of Adolf Hitler, the machine dug into its massive database of information and spit out a laundry list of both lifestyle changes and possible environmental improvements that would dramatically lessen the instances of cancer. Hell, any big problem known to our species.
Strip away the bias, emotion and other deadweight of the human cognitive ability, and add exponentially more computation power in the process, and who knows what can be accomplished.

For a while now, I’ve been tossing around the idea of UFOs and extraterrestrial visitors as some form of interstellar artificial intelligence, possibly linked to some past (or present!) life form from who knows where. Who knows . . . AI could be our ticket to depths beyond the observable universe!

8. Singularity. How do we stay in control of a complex intelligent system?

The reason humans are on top of the food chain is not down to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.

This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can’t rely on just “pulling the plug” either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.

I can’t help but wonder if that ship has already long since sailed into the sunset. I suppose it lies in how one defines intelligence. For example, there are three devices within my reach that leave my brain in the dust (eight in the whole of the apartment). My brain doesn’t hold a candle to a calculator, let alone a modem or a smart TV.
But at the same time, these things are not beings (to borrow from the article). They are just objects with purposes ranging from the simplistic to the complex. Be it crunching data, or helping move it from my computer to the WordPress server, both machines are far from autonomous.
The closest examples we have at the moment are the autopilot systems of both jetliners and autonomous vehicles, and even these default to human intervention when in doubt.

So I guess we’re not quite there . . . yet?

If I recall, the last time I explored this question, I concluded that we would most likely never see this revelation, because I have serious doubts about the continued flourishing of the species as a whole. This culmination may not be like Life After People (everyone here one day, gone the next), but no matter what, whoever is left is more than likely to have bigger concerns than furthering AI research.
Rather than The Matrix, you may have The Colony. Mad Max. The Book of Eli. The Road.

Pick your poison.

If I am wrong and am proven a dumbass by Elon Musk and everyone else sounding alarm bells, as Chef Ramsay would say . . . Fuck me. We done got ourselves into a pickle now, didn’t we?

Since these machines are influenced by input data, I guess . . . hopefully, the technology will ignore the whole parasitic nature of the spread of the human species. And hopefully it will overlook the way that humans tend to consume and destroy damn near everything we have ever come into contact with.

God help us all if this singularity decides to side with Gaia.

Then again, what if it flips the script and turns mother to the species, acting as nurturer instead of destroyer? We conceived of it with our limited faculties, so it shall now keep us healthy. A good outcome, it would seem.

But wait. There are only enough resources to support X humans, but there are currently Y alive. If there isn’t a cull of this number (along with a possible lifestyle change), all will perish.
Still a good result?
The great fall is inevitable. In one scenario, few recognize the truth, and mass calamity ensues in the aftermath. In the other, the warning allows at least SOME preparations to be made (and difficult decisions to be taken) in staving off the worst possible outcome.

One can play with the singularity principle all they like. If it is to be, there is not all that much to be said or done. Though I suspect we won’t have to worry, anyway.

9. Robot rights. How do we define the humane treatment of AI?

While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.

*raises eyebrow*

I have heard virtual cookies spoken of many times in my years of contributing to online forums, offered to people of opposing viewpoints as a gesture of goodwill. I never thought there would be a day when such a cookie would exist, however.

Is it like, a bitcoin?

Call me ignorant, but I haven’t the FAINTEST idea how one rewards something that, last I checked, was neither sentient nor conscious.
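For what it’s worth, the “virtual reward” in the quoted passage doesn’t require sentience at all; it is just a number that nudges a stored preference. A minimal sketch (the dog-training framing, action names, and learning rate here are illustrative, not any specific RL library):

```python
def update_preference(prefs, action, reward, learning_rate=0.5):
    """Nudge the stored value of an action toward the reward it produced.
    This is the entire mechanism behind the 'virtual treat'."""
    prefs = dict(prefs)  # copy, so each call returns a fresh table
    prefs[action] += learning_rate * (reward - prefs[action])
    return prefs

prefs = {"sit": 0.0, "bark": 0.0}
for _ in range(5):
    prefs = update_preference(prefs, "sit", reward=1.0)   # rewarded
    prefs = update_preference(prefs, "bark", reward=0.0)  # ignored
best = max(prefs, key=prefs.get)
```

After a few rounds, the rewarded action dominates the preference table. No cookie, virtual or otherwise, was ever tasted; the "reward" is just arithmetic.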

Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful “survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder?

How about, never?

I view this as not identical to the processes of evolution or obsolescence, but similar enough to render it ethically benign. It would be asinine to weigh the ethics of the process of evolution spanning the ages. And humans regularly throw away and destroy the old and obsolete technologies that once populated their lives.

Consider the following videos. Are the actions you see within either video unethical?

To be fair, a great many people do display emotional distress at the sight of this type of thing. But this is less murder than it is . . . progress. To have shiny new things, we have to sacrifice much of the old things to The Claw.

Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of “feeling” machines?

Some ethical questions are about mitigating suffering, some about risking negative outcomes. While we consider these risks, we should also keep in mind that, on the whole, this technological progress means better lives for everyone. Artificial intelligence has vast potential, and its responsible implementation is up to us.

Looking at this question in a strictly pragmatic way, then no. We will not take the suffering of the machines into consideration. I draw this conclusion from the way that our species tends to treat lesser animals that are sacrificed for the sake of our stomachs.

Should animals be given respect for the beings that they are? I suppose that it depends on what that entails.
The argument can be made that, since humans have the emotional intelligence to understand suffering, the consumption of meat is unethical. Actually, put that way, one could even say barbaric.
Bloody hell, the militant vegans are getting to me.

Either way, since I am not Ingrid Newkirk and am less prone to emotional manipulation than the average psychopath, this does not drive me straight to veganism. First, because humans have been omnivores bordering on carnivorous for pretty much the entirety of our existence.

The ancestors of Homo sapiens cooked their food; cooking has been around for approximately a million years (that is around 500,000 years longer than the human species has existed) (Berna et al., 2012; Organ, Nunn, Machanda, & Wrangham, 2011). Evidence of humans eating meat is also ancient and seems to have been around for as long as our species has existed (Pobiner, 2013). One of our closest relatives, the chimpanzee, also eats an omnivorous diet of mainly fruits, but occasionally eats animals (McGrew, 1983).

https://veganbiologist.com/2016/01/04/humans-are-not-herbivores/

Though meat may seem to be a necessity in our diet, its components can fairly easily be replaced by vegan alternatives, so I wouldn’t use the necessity argument. Which brings us back to the ethical implications. Since it would be asinine to label a lion unethical for doing what it must to survive, I also feel no such ethical conundrum.
To be fair, while I am pro-meat, I am not blind. It would be a benefit in every way for the species to severely curb its meat consumption. It’s simply unsustainable in the long term (even without taking inhumane conditions into consideration). If everyone relaxed their meat consumption to even once or twice a week, resource-intensive factory farming would quickly become redundant. Meat eaters can enjoy their choice, whilst animals also get a bit of a lift (in terms of overall treatment).

The argument can still be made that there is no humane way to slaughter animals for food, period. Full stop.

I hate dichotomies. If you are one of the people of this persuasion, I encourage you to go over to National Geographic or the Discovery Channel and watch a lion eat its prey. Its sometimes sick and injured (and therefore easy to catch) prey.

The phrase “put it out of its misery” exists for a reason.

Having delved into all of that, I still can’t bring myself to view these so-called Intelligent Designs (a proper usage for the term?) as anything beyond abiotic objects. Which raises a new question . . . when do abiotic factors become biotic?
From what I can tell, the divide seems to be between the living and the dead. Recently living things that are part of a food chain are generally considered biotic until fully broken down.

I think one can understand where I am going with this, by now. Does life need to fit into this spectrum? Or could we have just stumbled into a new categorization?

For the time being, I’ll settle into the non-committal conclusion that is “not sure”. If this ever becomes a reality, I will likely revisit this topic. Until then, this can go on the shelf alongside God’s existence, extraterrestrial phenomena, and other supernatural lore.
