Holographic Performances From Dead Celebrities – Awesome? Or Despicable?

Today, we will be exploring an article titled Dead Celebrities are being digitally resurrected — and the ethics are murky, written by Jenna Benchetrit and published by CBC News. While it’s not the first time I have heard of this concept (nor seen it explored in pop culture, as in the case of Black Mirror), I have never really stopped to consider the implications. That is to say, how I would respond to coming across one of my cherished idols or artists digitally resurrected for my enjoyment.

Given the nature of the subject, the resulting conclusions can only be subjective. We will all naturally come to a different stance based on the many things that make us all . . . us. As such, this will be (for the most part) more an act of personal exploration than ethical vetting. Nonetheless, feel free to share your views in the comments if you wish.

Let us begin.

https://www.cbc.ca/news/entertainment/dead-celebrities-digital-resurrection-1.6132738?cmp=newsletter_CBC%20Newsletter_4450_299391

 

Hologram performances, artificial voices and posthumous albums pose tough ethical questions, critics say

It’s a modern phenomenon that’s growing increasingly common with innovative technology and marketing appeal: the practice of digitally resurrecting deceased celebrities by using their image and unreleased works for new projects.

Michael Jackson moonwalked at the 2014 Billboard Music Awards years after his death; rapper Tupac Shakur performed at the 2012 Coachella music festival, though he died in 1996; and late singer Aaliyah’s estate spoke out recently after her record label announced that some of her albums would be released on streaming services.

A slew of recent controversies have renewed complicated questions about whether projects involving the use of a deceased celebrity’s likeness or creative output honours the artist’s legacy or exploits it for monetary gain. 

Prince’s former colleague released a posthumous album comprised of songs the artist recorded in 2010 then scrapped; an artificially-engineered model of Anthony Bourdain’s voice was used in a new documentary about the chef and author’s life; and a hologram of Whitney Houston will perform a six-month Las Vegas residency beginning in October 2021.

 

Interestingly enough, this brings to mind a conversation (a debate of sorts) I had with a friend some years back at work. Being a fan of old-school grunge and the Seattle scene, he hated the reincarnation of Alice in Chains with a new lead singer. At the time, I recall viewing the sentiment towards the name as kind of silly (what difference does it make?). I happened to like the music of both configurations of the band, so the sentiment that they should have proceeded under a different name seemed . . . purist.

Then around 3 years later, Chester Bennington of Linkin Park fame died by suicide. Upon considering my previous viewpoint some time later, I was struck by the realization that I had similar reservations about someone else fronting Linkin Park in place of Chester Bennington. I had no real rational reason for this. It just felt weird for someone else to step into the role of someone I had been familiar with since my teen years. Hybrid Theory and Meteora came out when I was in high school. I literally grew up with this band as part of the soundtrack of my life.

Even though I stopped paying as much attention to most of the releases after Minutes to Midnight, it still felt . . . weird.

But that was years ago. Having not thought about it since probably 2017, I've realized that most of the sentiment towards the name Linkin Park (likely a result of the death being so recent at the time) is gone. Which, it seems, is not a moment too soon, since the rest of the group (mostly on hiatus since 2017) has started putting out remixed and new material: 1 track re-released in August 2020 and a remix released in January of 2021. We will see what goodies the rest of LP have for us in the coming post-pandemic years.

https://en.wikipedia.org/wiki/Linkin_Park#2020%E2%80%93present:_Return_to_music

This isn’t even the first time I’ve had this inner dialogue, either. It also occurred back in 2016, when I heard (with horror at the time) that Axl Rose of Guns N’ Roses infamy was set to replace AC/DC’s Brian Johnson, who was forced to retire due to hearing problems. This was not on account of sentiment either (remember that Brian Johnson replaced the deceased Bon Scott back in 1980). Rather, it was due to the volatile and infamous nature of Rose himself. His antics are well known and documented (up to and including inciting a riot in Montreal); even my aunt has a story of annoyance from working security at a G&R show (the band came on stage an hour late).

An interesting side note on the Montreal riot . . . lost to history is the fact that Axl was also suffering from a torn vocal cord at the time of the incident, which seems to have weighed into the decision to end the show early. That, along with the fact that only around 2,000 people (of the 10,000ish in attendance) were thought to have participated in the riots.

This is also something that I have not thought about for a long time. Probably because, as it turns out, the 23-show AC/DC collaboration appears to have gone off without a hitch. And though the group had been on hiatus since 2016, the 2014 lineup reunited in 2020 to release Power Up, an album that I enjoy.
Not that AC/DC has ever put out an album that I didn’t enjoy.

Sure, the music is simple in comparison to the various shades of metal that I’ve since moved on to. Yet it remains enjoyable, since the group is delightfully unserious when it comes to songwriting, never fearing to tread into the lowbrow, as evidenced by the 2020 track Money Shot, a tune that made me laugh out loud.
And one can’t complain much about the simplistic nature of the pub rock genre, because if you want something more, look no further than Airbourne (like AC/DC, they also started in Australia). Though it is obvious who their influences are, they certainly take things to a whole other level.

Sticking to the topic, we come to another band that I grew up with that changed frontmen: Three Days Grace.

Growing up, I thought of the first 2 3DG albums as another soundtrack to my teenage years. I also liked (and own) the subsequent 2 albums under the original lineup. But when lead singer Adam Gontier left the group and was replaced by Matt Walst of My Darkest Days, it took some (who am I kidding . . . MUCH!) persuasion to appreciate the new Three Days Grace.

Or, Nu 3DG as it were.

But as it turned out, the unexpected change of lineup was not the awful thing it seemed to be in the period right after the change. Under the lead of Matt Walst, Three Days Grace has moved into a newer and more interesting sound. And Adam is heading an equally interesting project in Saint Asonia. The best of both worlds.

Also worth noting are the Foo Fighters. While I am almost certain that Courtney Love would NOT have let Dave Grohl and the surviving members continue forward under the Nirvana brand, it would be interesting to see what the results of a different timeline would have been. For example, the AC/DC timeline.

Would fans embrace the new frontman (as seems to be the case with AC/DC)? Or would they detest the new configuration (as with AiC)?

Whatever the case, it does not matter anyhow, since the Foo Fighters did perfectly fine even without the old brand behind them.

Looking back at this, it’s funny that I once viewed my friend’s distaste for Nu-AiC as amusing and purist. As it turns out, I am just as human in my distaste for alterations of the familiar. Hell . . . it’s one of my biggest critiques of many baby boomers that I know, and of the generation in general: the lack of interest in even trying to accept the new, let alone accepting that the old way is largely on the way out. Often for good reason.

So much have I pondered this that I now conclude that a deliberate, dramatic change is almost always a good thing for a band. It’s the slow drift toward the mainstream that tends to go badly.

The first example of that drift that comes to mind is Seether. Their first 3 albums were also part of the soundtrack of my teenage years, with the 4th coming out just as I was coming of age as an adult. Though I still liked the fourth album despite its slight move away from their earlier sound, I can’t stand anything released afterward.
The same goes for Theory of a Deadman. I liked the first 2 albums, but what followed was Gawd awful. I don’t normally throw away music that I own, but I did toss The Truth Is because, for the life of me, I didn’t know why I spent $15 or $20 on it.

Remember buying CDs?

Yeah . . . I don’t miss it either. I do miss the days before people like me and streaming sucked much of the money out of the music industry, forcing artists old and new to resort to commercials and advertising as a steady income stream. But I suppose that is a different entry altogether.

Either way, rare is the musician from my childhood that has continuously put out new material, yet avoided the pitfall of toning it down for mainstream popularity. So rare is the case that only Billy Talent comes to mind as an artist that bucked the trend.

No matter the backlash, when artists decide to do the seemingly unthinkable and make a big change, the results are almost always alright. Another example that I just recently discovered is Aaron Lewis, best known to me (and probably most people) as the frontman of Staind. Imagine my surprise at discovering Country Boy in a country playlist. I can’t say that I like it, per se. But it’s certainly different, and Aaron is suited to the genre.

Considering that I used to hate country, the fact that I’m starting to get accustomed to some of it is shocking in itself. And I do in fact mean some of it. Though I like a couple of Dierks Bentley songs, a Joe Nichols tune that most people likely know, and some others, the pickings are slim. Aside from learning that a coon dog (a raccoon-hunting dog) isn’t an incredibly racist lyric, I still find the formulaic nature of much of the country genre to be annoying.



To be fair, much of what I am describing is ascribed to a category within country music that many call Bro-Country. Having said that, even the old-time stuff tends to lean in this direction. Hence why I also can’t stand Alan Jackson or Toby Keith (he irked me long before the Red Solo Cup abomination).

I am very selective indeed . . . but it’s a hell of a change from a year ago. Not to mention that I figure it would be hard to find someone that has everything from Slipknot, to Weird Al, to Dierks Bentley on the same playlist.

But at long last, I come to the topic that the readers have come here for . . . holograms.

 

Michael Jackson moonwalked at the 2014 Billboard Music Awards years after his death; rapper Tupac Shakur performed at the 2012 Coachella music festival, though he died in 1996; and late singer Aaliyah’s estate spoke out recently after her record label announced that some of her albums would be released on streaming services.

* * *

Prince’s former colleague released a posthumous album comprised of songs the artist recorded in 2010 then scrapped; an artificially-engineered model of Anthony Bourdain’s voice was used in a new documentary about the chef and author’s life; and a hologram of Whitney Houston will perform a six-month Las Vegas residency beginning in October 2021.

 

This is certainly an interesting thing to ponder. Though I CAN think of 1 reason why I would not want to see Michael Jackson moonwalking in a show posthumously, the ethical reasoning has nothing to do with him being dead. Frankly, the same goes for anyone that would want to present a holographic Kobe Bryant. I find the continued praise and worship of both those people to be problematic, but again, that is a whole other post.

To boil it down:

1.) While one should always reserve judgement, the evidence weighs heavily in one direction. As does the fact that the case was settled out of court.

2.) Michael Jackson was NOT proven innocent, contrary to how Twitter recently reacted. The court only dismissed the victims’ claim that 2 companies representing Jackson’s interests bore any responsibility for their safety and welfare. Nothing more.

Moving on from that red-hot potato, I come to Tupac Shakur and Whitney Houston. When it comes to these 2, I am neutral. Assuming that neither said anything in life against the concept of posthumous holograms, and assuming the concept isn’t going against majority fan or estate wishes, I see little issue with it. It is but a new medium for the broadcast and display of recorded media, after all. In my opinion, no different than watching a Whitney Houston music video on YouTube. Or, as I happen to be doing at this moment, listening to the long-deceased Johnny Cash in MP3 form.

I know . . . who still does that?!

Speaking of times changing, we come to the release of dead artists’ music on streaming platforms. Short of the artist having taken issue with it in life (as seems would be the case with Prince), I have little issue with it.
For all intents and purposes, the cat is already out of the bag. In fact, it has been since the debut of Napster in 1999; it continued to be in the early 2000s with the decentralized P2P platforms, and continued beyond in the realm of torrents and discography dumps. Today, people scrape YouTube videos for audio.

And even that isn’t really correct anymore, with most people using ad or subscription-based streaming services. My preferred choice is YouTube Music since it comes with fewer limitations than Spotify (though I use Spotify for podcasts).

Any artists refusing to join the streaming platforms at this point are just pissing into the wind. This is not to say that the modern monetary sharing scheme is optimal (cause it’s not. It’s even more shit than it was in the past!). Nonetheless, when even the Nirvana and Tool catalogues can now be streamed, you know we’re in a different era.

As for using machine learning algorithms to reanimate the voice of the now-deceased Anthony Bourdain, however . . . THAT IS WHERE I DRAW THE LINE! 

Yeah . . . just kidding.

Personally, having seen Desperate Housewives back in the day (remember the homophobia of seasons 1 and 2? That didn’t age well :/ ), I find the idea of a show narrated by a character who has died within the plot interesting.
As much as I’d love the Bourdain doc to open with a line like “Guess what, guys! I’m dead!” (I can see him doing something like that!), it probably wouldn’t go over well with the normies among us. 

No one seems to take issue with a dead Paul Walker showing up in a run-of-the-mill Hollywood movie, but throw a dead guy joke into a Bourdain documentary . . .

*GASP*

CANCELLED!

 

Ethical and legal ramifications

It’s a matter of both ethics and law, but the ethical concerns are arguably more important, according to Iain MacKinnon, a Toronto-based media lawyer. 

“It’s a tough one, because if the artist never addressed the issue while he or she was alive, anybody who’s granting these rights — which is typically an executor of an estate — is really just guessing what the artist would have wanted,” MacKinnon said.

“It always struck me as a bit of a cash grab for the estates and executors to try and milk … a singer’s celebrity and rights, I guess, for a longer time after their death.”

According to MacKinnon, the phrase “musical necrophilia” is commonly used to criticize the practice. Music journalist Simon Reynolds referred to the phenomenon of holographic performances as “ghost slavery,” and in The Guardian, Catherine Shoard called the CGI-insertion of a dead actor into a new film a “digital indignity.”

 

This is indeed an almost cut-and-dried case when it comes to copyright law. Though it sounds like one single area, a copyright actually equates to a great many individual rights.

Say I am writing a book called “Crazy Cats of New Haven”. The moment the pen hits the paper (or the finger hits the keyboard), the resulting document in its entirety is covered under international copyright law. However, beyond just being proof of ownership in a court of law, having control over the main copyright also means having control of every other right, whether currently available or yet to exist. For example:

  • audiobook
  • audio (song?)
  • theatrical (movie? play?)

The reason I am aware of this is a short copyright course I took, aimed at aspiring authors. Instructed by a seasoned and published author, its goal was to introduce us to a sample book contract and ensure we were aware that not all contracts are alike. As in every other area of the media and entertainment industry, not all publishers are equal.

This is where the future rights portion comes in. Though I have yet to come across my first contract at this point, most are said to automatically include every right that is currently available, plus future rights. Or in normie speak: if the project ever blows up and goes cross-platform (e.g. Lord of the Rings, Harry Potter), the publisher is often in a much more powerful position than the author or writer.
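As a toy model of that bundle-of-rights idea (the clause wording and the rights named here are illustrative, not from any real contract):

```python
# Toy model (purely illustrative) of a copyright as a bundle of
# separately licensable rights, plus the "future rights" clause
# described above.

enumerated_grant = {"print", "audiobook"}  # rights the author signed away

def publisher_controls(right: str, all_rights_clause: bool = False) -> bool:
    """True if the publisher holds this right under the contract."""
    # With an "all rights now known or hereafter devised" clause, any
    # medium invented later already belongs to the publisher.
    return all_rights_clause or right in enumerated_grant

# A new medium appears years after signing:
print(publisher_controls("holographic reading"))                          # False
print(publisher_controls("holographic reading", all_rights_clause=True))  # True
```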

And this isn’t uncommon either. The writers in the music industry often make peanuts even if they write hits.

 

Songwriters are guaranteed a royalty from every unit sold (CDs, vinyl, cassette, etc.).

These royalties are paid out differently in different countries, but in the U.S., they come out to $0.091 per reproduction of the song – nine cents every time a song is reproduced/sold.

In other countries, the royalty is paid out at 8 to 10% of the value of the recording.

What does this equate to?

Take the song “Pumped Up Kicks” – a huge hit for Foster The People. The track sold 3.8 million copies and the album itself sold 671,000 copies.

The frontman of the band, Mark Foster, has the sole writing credit on the song, so he collects every penny of the mechanical royalties, which would come out to around $406,861.

And that’s just the mechanicals. There are other ways that song was making money – it received a ton of radio play and was licensed on TV shows like Entourage, Gossip Girl and The Vampire Diaries, which added to Foster and the band’s earnings.

 

Digital Download Mechanical Royalties

Digital download mechanical royalties are generated in the same way physical mechanical royalties are generated, except they are paid whenever any song is downloaded.

iTunes, Amazon, Google, Rhapsody and Xbox Music all generate and pay these royalties to songwriters whenever a song is downloaded.

Again, these are paid out at a rate of $0.091 per song.

Streaming Mechanical Royalties

Streaming mechanical royalties are generated from the same Reproduction and Distribution copyrights, but are paid differently.

They are generated any time a song is streamed through a service that allows users to pause, play, skip, download, etc.

This means Spotify, Apple Music, TIDAL, Pandora, etc.

In the U.S. (and globally for the most part) the royalty rate is 10.5% of the company’s gross revenue minus the cost of public performance.

An easier way to say this is that it generally comes out to around $0.005 per stream. Less than a cent!

How Much Do Songwriters Make Per Song, Per Stream & In Other Situations?

An easier way to put the last sentence is that it’s sweet fuck all.

Consider that many nations in the world have quit manufacturing the 1-cent penny because of its production cost (over a cent!). Most songwriters earn less than that per stream.
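For concreteness, here is the quoted math checked in Python. The rates and sales figures come straight from the excerpts above; the rest is arithmetic:

```python
# Back-of-the-envelope royalty math, using the figures quoted above.

MECHANICAL_RATE = 0.091  # US mechanical royalty, $ per reproduction sold
STREAM_RATE = 0.005      # rough streaming payout, $ per stream

# "Pumped Up Kicks": 3.8 million track sales + 671,000 album sales,
# with a single credited songwriter collecting the whole mechanical.
units_sold = 3_800_000 + 671_000
mechanicals = units_sold * MECHANICAL_RATE
print(f"Mechanical royalties: ${mechanicals:,.0f}")  # ~ $406,861

# The same number of plays as streams instead of sales:
print(f"As streams instead:   ${units_sold * STREAM_RATE:,.0f}")  # ~ $22,355

# Streams needed to match one physical/download sale:
print(f"Streams per sale: {MECHANICAL_RATE / STREAM_RATE:.0f}")   # ~ 18
```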

 

The problem here is as obvious and immediate as a whacking great pop hook.

Think of the biggest songs on Spotify over the past decade. Here they are, courtesy of Kworb:

  • Ed Sheeran – Shape Of You (1.77bn streams);
  • Drake – One Dance (1.48bn streams);
  • The Chainsmokers – Closer (1.28bn streams)
  • Luis Fonsi – Despacito Remix (1.07bn streams)
  • Post Malone – Rockstar (1.05bn streams)

All of them were co-written, alongside the featured artist, by very talented people.

Some of these co-writers’ names: Steve Mac, Johnny McDaid, Shaun Frank and Jason ‘Poo Bear’ Boyd.

How many people amongst Spotify’s 75m paying subscribers, you wonder, heard songs written by these people and thought: ‘I love that track – I want to play it now… I’ll try Spotify.’

And then: ‘Wow, this service is amazing, I’m going to pay for it.’

Yet the songwriters who penned these tracks presumably aren’t getting a penny for their compositions from corporate Spotify stock sales.

Instead, they’re being left out in the cold during one of the industry’s most historic windfalls.

Songwriters got screwed by the Spotify equity bonanza. The industry has to ask itself questions.

 

Now that we have explored all the reasons why MB Man will never be writing any songs anytime soon, let’s move on to the movie industry. We will now explore the shady realm of Hollywood Accounting: how to turn a blockbuster grossing hundreds of millions into a cash-bleeding loss.

 

On today’s Planet Money, Edward Jay Epstein, the author of a recent book called The Hollywood Economist, explains the business of movies.

As a case study, he walks us through the numbers for “Gone In 60 Seconds.” (It starred Angelina Jolie and Nicolas Cage. They stole cars. Don’t pretend like you don’t remember it.)

The movie grossed $240 million at the box office. And, after you take out all the costs and fees and everything associated with the movie, it lost $212 million.

This is the part of Hollywood accounting that is, essentially, fiction. Disney, which produced the movie, did not lose that money.

Each movie is set up as its own corporation. So what “lost money” on the picture is that corporation — Gone In 60 Seconds, Inc., or whatever it was called.

And Gone In 60 Seconds, Inc. pays all these fees to Disney and everyone else connected to the movie. And the fees, Epstein says, are really where the money’s at.

https://www.npr.org/sections/money/2010/05/the_friday_podcast_angelina_sh.html/

 

May I first note that the last name appears to be coincidental in this case. Unsurprising, given my doubts that Jeffrey Epstein would have liked having an investigative journalist around the island of rich pedos.

ANYWAY . . .

That is how you turn a $240-million-grossing moneymaker of a film into a cash-losing flop. And as usual, I veered off-topic.
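As a toy sketch of the structure Epstein describes: the $240 million gross and the $212 million “loss” are the figures quoted above, but the individual line items below are hypothetical, invented purely to show how the per-picture corporation’s books can sink below zero while the fees flow back to the parent studio and friends:

```python
# Toy model of "Hollywood accounting". Gross and net loss are the
# figures quoted above; the fee breakdown is hypothetical.

box_office_gross = 240_000_000  # what "Gone In 60 Seconds, Inc." took in

# Hypothetical charges billed to the per-picture corporation, many of
# them fees paid to the parent studio and other connected parties:
charges = {
    "production budget":          130_000_000,
    "distribution fee (studio)":   70_000_000,
    "prints and advertising":     120_000_000,
    "overhead + interest":         80_000_000,
    "gross participations":        52_000_000,
}

net = box_office_gross - sum(charges.values())
print(f"Net for the picture's corporation: ${net:,}")  # -> $-212,000,000

# The parent studio doesn't lose that money; much of it comes right
# back as the fees above. Only the paper entity shows the loss.
```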

Well, sort of. We now know the stance of the entertainment industry in terms of ethics . . . there are none. Given the power afforded to the rights holder, I suspect that we will see a lot more deceased celebrities doing everything from performing in Vegas to selling coffee and toothpaste on TV commercials.

Just kidding . . . clearly the cash is now in YouTube and Spotify ads.

 

Richard Lachman, an associate professor at Ryerson University who researches the relationship between humans and technology, said that as artists age and develop a better sense of their legacies, they may take the time to protect their images and file appropriate contract clauses. 

But not every artist will grow old. Indeed, a common thread between many of the artists whose works and likeness have been used in this capacity is an unexpected or accidental death.

Prince died in 2016 of an accidental opioid overdose, Anthony Bourdain died by suicide in 2018 and Whitney Houston drowned in her bathtub in 2012 as a result of heart disease and cocaine use. Tupac, Amy Winehouse and Aaliyah all died unexpectedly at young ages. 

Lachman said if this is the case, then it’s possible that clauses accounting for image use didn’t get written into wills. He also noted that artists who die prematurely don’t grow old, giving an impression of perpetual youth that reminds audiences what an artist looked like in their prime.

And while fans might be protective of the artists they love, they’re also the primary consumers to whom these digital resurrections appeal.

“Yes, we know that [a hologram of] Whitney Houston is not the real Whitney Houston,” Lachman said. “But it’s a chance for us to engage in some of that fan behaviour, something that binds us to one another.”


I agree with the final sentence.

As explained earlier, I am not against the concept of posthumous holograms. Even taking the Whitney Houston hologram example and replacing her likeness with Chester Bennington or Warrel Dane (2 artists that mean much more to me than Whitney Houston), I still don’t really find myself against the concept. Assuming that the family and/or next of kin is on board with the process, this seems to be just an ultramodern example of something we have taken for granted for decades: the ability to store a performance on various media.

First came the song. Then the video. Now, potentially, the whole experience. Whether the experience is predetermined (akin to a pre-recording) or interactive (playing out based on the audience, presumably) depends on the technology.

Though I can see why this kind of thing may be considered horrifying by some, consider the opportunity. Before now, if your favourite artist died, that was it in terms of opportunities for interaction. Though there may be shows if their surrounding act decides to continue, the chance of seeing the artist live will never come again. This is particularly notable when it comes to solo acts.

For people who have never seen that artist live, this may well be the opportunity of a lifetime. Indeed, it’s not the REAL thing. But it’s a very special opportunity nonetheless. An opportunity that my grandfather (who died in 1998) did not have in his lifetime.

For this reason, those in charge of these shows will have to be extra careful to ensure smooth and flawless productions. Not only will these performances serve as a typical live show, they will also serve as the farewell tribute that many of us wish we could have had with long-lost loved ones (beloved celebrities included). Auditoriums housing such performances may be wise to keep lots of tissues on hand.

 

For some, releasing archived material might not seem as harmful as resurrecting a person with virtual reality, MacKinnon said.

“I think there’s different degrees and a spectrum of uses that can be made of dead performers.”

 

There is no doubt a world of difference between the two. If archived material was not explicitly trashed by the artist, it may well have ended up released later in their career anyway.

The Prince example from earlier has to be mentioned, however: the posthumous release of an album of songs recorded by (and scrapped by!) Prince. Prince’s feelings towards the material were clear. Any person of ethics and integrity would know to leave the trash in the trash.

So naturally, they took the other path and cashed in on the fanbase.

There will always be unscrupulous actors in an industry devoid of ethical and moral virtues. Thus, it is important not to let their actions dictate our opinion of anything we are speaking of. Unscrupulous people will always be unscrupulous, after all.

 

Prince is an artist who’s been on both sides of that spectrum.

Last month, his posthumous album Welcome 2 America was released to fanfare. But there was another controversial incident in which it was rumoured that a hologram of Prince would perform alongside Justin Timberlake at the 2018 Super Bowl halftime show. The plans were eventually scrapped, with Prince’s ex-fiancée Sheila E. confirming that Timberlake wouldn’t go through with it. 

The incident renewed interest in a 1998 interview with Guitar World, in which Prince said performing with an artist from the past is “the most demonic thing imaginable.”

 

I don’t know who had the bigger say in this decision, but if it was Justin Timberlake, good on him for seemingly honouring the wishes of Prince. Seemingly, because I can only imagine how much public pressure was driving the decision. This is the age of social media and Twitter, after all.

 

Sarah Niblock, a visiting professor of psychology at York St. John University in York, England, who has long studied Prince and co-wrote a book about the artist, says efforts to dig into his vault and use his image for profit are in contention with his publicly expressed wishes.

“He was fully in control of his output, sonically and visually, and the way everything was marketed, and of course, those who performed with him and all of his artists that he produced,” Niblock said.

The situation is further complicated because Prince didn’t leave a will when he died. Without one, “a person’s estate can exploit or license those rights if they want to,” MacKinnon said.

While the legal boundaries are relatively clear, the ethical question of whether an artist is being exploited or not is subjective.

For Niblock, digital resurrections that enrich the estate and its executors at the expense of an artist’s known wishes cross a line.

“Trying to somehow use that death to create a mythic quality that the artists themselves would have not necessarily intended, to then market that for money … I mean, it’s extremely cynical and disrespectful.”

 

There is no respect in capitalism. Only profits.

 

Legal considerations must be made before death

While promoting his new documentary Roadrunner: A Film About Anthony Bourdain, director Morgan Neville said he had recreated Bourdain’s voice using machine learning, then used the voice model to speak words Bourdain had written.

The incident prompted a wave of public discussion, some of it criticism levelled at Neville.

A tweet from Bourdain’s ex-wife suggested that he wouldn’t have approved. A columnist for Variety considered the ethical ramifications of the director’s choice. And Helen Rosner of The New Yorker wrote that “a synthetic Bourdain voice-over seemed to me far less crass than, say … a holographic Tupac Shakur performing alongside Snoop Dogg at Coachella.”

Recent incidents like the Bourdain documentary or Whitney Houston’s hologram residency will likely prompt those in the entertainment industry to protect themselves accordingly, said MacKinnon.

 

Having considered things a bit (and watched the Tupac Coachella appearance), I would hardly consider it crass. The audience in attendance certainly didn’t. Nor do most of the people in the YouTube comments. Nor do the 274k people that liked the video (versus around 6k dislikes). I’d say the only people that cared were exactly where they should be . . . NOT AT THE SHOW!

Feel free to check it out for yourself. It was linked in the CBC article, believe it or not.

 

“I think now, if they haven’t already, agents, managers, lawyers, performers are all going to be telling their clients that if they care about this, if they care about how their image is used after they die, they need to be addressing it right now in their wills.”

Robin Williams is a notable example of a public figure who foresaw these issues. The late actor, who died by suicide in 2014, restricted the use of his image and likeness for 25 years after his death.

 

It’s cool that Robin Williams had the foresight to consider this before his tragic demise. While I am not as averse to the thought of a posthumous Robin Williams comedy special as I would have been closer to 2014, the man has spoken.

We have indeed entered a new era.

A passing thought . . . though we will never know what opinion past comedians like George Carlin or Bill Hicks would have of this technology, I sense that both would have a lot of fun with it.  

 

Hologram technology improving

According to both Lachman and MacKinnon, artists would do well to make similar arrangements, as the technology behind these recreations will only get more sophisticated.

Holograms of Tupac at 2012 Coachella and Michael Jackson at the 2014 Billboard Music Awards were produced using a visual trick from the Victorian era called “Pepper’s Ghost,” named for John Henry Pepper, the British scientist who popularized it.

In the illusion, a person’s image is reflected onto an angled glass pane from an area hidden from the audience. The technique gave the impression that the rapper and the king of pop were performing on stage.

Nowadays, companies like Base Hologram in Los Angeles specialize in large-scale digital production of holograms. The recreation of Bourdain’s voice was made possible by feeding ten hours of audio into an artificial intelligence model.

Lachman said that it will become “almost impossible” for the average consumer to know the difference between a hologram creation and the real person. 

He said that while the effects are still new and strange enough to warrant media attention, digital resurrections will continue to have an uncanny effect on their audience — but not for much longer, as audiences will likely grow accustomed to the phenomenon.

Though he said there may be purists who disagree, it seems like audiences have been generally accepting of the practice.

“It seems like the trend is we’re just going to get over it.”

 

I agree. This phenomenon, as somewhat creepy and new as it is, ain’t going anywhere. But as far as I’m concerned, that is a good thing.

There will no doubt be people that take advantage of this technology so long as celebrities don’t take precautions. Such is the world we live in. Aside from that, I’d say we have a unique opportunity.

Certainly for tasteful send-offs of beloved stars and musicians (imagine something like a final Whitney Houston farewell tour). Beyond that, really, the sky is the limit.

“Safeguarding Human Rights In The Era Of Artificial Intelligence” – (Council Of Europe)

Today I will explore yet another (hopefully) different angle on a topic that has grown very fascinating to me: how can human rights be safeguarded in the age of sentient machines? An interesting question, since I think it may also link back to technologies that predate AI.

Let’s begin.

https://www.coe.int/en/web/genderequality/-/safeguarding-human-rights-in-the-era-of-artificial-intelligence

The use of artificial intelligence in our everyday lives is on the increase, and it now covers many fields of activity. Something as seemingly banal as avoiding a traffic jam through the use of a smart navigation system, or receiving targeted offers from a trusted retailer is the result of big data analysis that AI systems may use. While these particular examples have obvious benefits, the ethical and legal implications of the data science behind them often go unnoticed by the public at large.

Artificial intelligence, and in particular its subfields of machine learning and deep learning, may only be neutral in appearance, if at all. Underneath the surface, it can become extremely personal. The benefits of grounding decisions on mathematical calculations can be enormous in many sectors of life, but relying too heavily on AI which inherently involves determining patterns beyond these calculations can also turn against users, perpetrate injustices and restrict people’s rights.

The way I see it, AI in fact touches on many aspects of my mandate, as its use can negatively affect a wide range of our human rights. The problem is compounded by the fact that decisions are taken on the basis of these systems, while there is no transparency, accountability or safeguards in how they are designed, how they work and how they may change over time.

One thing I would add to the author’s final statement is the lack of safeguards in terms of what kind of data these various forms of AI are drawing their conclusions from. While not the only factor that could contribute to seemingly flawed results, I would think that bad data inputs are one of the most important factors (if not THE most important).

I base this on observation of the many high-profile cases of AI gone (seemingly) haywire. Whether it is emphasized in the media coverage or not, biased data inputs are almost always mentioned as a factor.

If newly minted AI software is the mental equivalent of a child, then this data is the equivalent of religion, racism, sexism or other indoctrinated biases. Thus my rule of thumb is this . . . if the data could indoctrinate a child, then it’s unacceptable for a learning-stage algorithm.
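To make the point concrete, here is a minimal, self-contained toy in Python showing how a “model” trained on biased records reproduces the bias. The hiring scenario, the groups and the numbers are entirely hypothetical:

```python
# Toy illustration (not any real system): a naive "model" that learns
# hiring decisions purely from biased historical data.
from collections import Counter

# Hypothetical historical records: (qualification, group, outcome).
# Every applicant here is equally qualified; only the group differs.
history = (
    [("qualified", "group_a", "hired")] * 90
    + [("qualified", "group_a", "rejected")] * 10
    + [("qualified", "group_b", "hired")] * 40
    + [("qualified", "group_b", "rejected")] * 60
)

def learned_rule(group):
    """'Train' by majority vote: predict the most common historical
    outcome for applicants of this group."""
    outcomes = Counter(outcome for _, g, outcome in history if g == group)
    return outcomes.most_common(1)[0][0]

# Equally qualified applicants, different predictions: the "model"
# faithfully reproduces the bias baked into its training data.
print(learned_rule("group_a"))  # hired
print(learned_rule("group_b"))  # rejected
```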

Encroaching on the right to privacy and the right to equality

The tension between advantages of AI technology and risks for our human rights becomes most evident in the field of privacy. Privacy is a fundamental human right, essential in order to live in dignity and security. But in the digital environment, including when we use apps and social media platforms, large amounts of personal data are collected – with or without our knowledge – and can be used to profile us, and produce predictions of our behaviours. We provide data on our health, political ideas and family life without knowing who is going to use this data, for what purposes and how.

Machines function on the basis of what humans tell them. If a system is fed with human biases (conscious or unconscious) the result will inevitably be biased. The lack of diversity and inclusion in the design of AI systems is therefore a key concern: instead of making our decisions more objective, they could reinforce discrimination and prejudices by giving them an appearance of objectivity. There is increasing evidence that women, ethnic minorities, people with disabilities and LGBTI persons particularly suffer from discrimination by biased algorithms.

Excellent. This angle was not overlooked.

Studies have shown, for example, that Google was more likely to display adverts for highly paid jobs to male job seekers than female. Last May, a study by the EU Fundamental Rights Agency also highlighted how AI can amplify discrimination. When data-based decision making reflects societal prejudices, it reproduces – and even reinforces – the biases of that society. This problem has often been raised by academia and NGOs too, who recently adopted the Toronto Declaration, calling for safeguards to prevent machine learning systems from contributing to discriminatory practices.

Decisions made without questioning the results of a flawed algorithm can have serious repercussions for human rights. For example, software used to inform decisions about healthcare and disability benefits has wrongfully excluded people who were entitled to them, with dire consequences for the individuals concerned. In the justice system too, AI can be a driver for improvement or an evil force. From policing to the prediction of crimes and recidivism, criminal justice systems around the world are increasingly looking into the opportunities that AI provides to prevent crime. At the same time, many experts are raising concerns about the objectivity of such models. To address this issue, the European Commission for the efficiency of justice (CEPEJ) of the Council of Europe has put together a team of multidisciplinary experts who will “lead the drafting of guidelines for the ethical use of algorithms within justice systems, including predictive justice”.

Though this issue tends to be viewed from the black-box angle (you can’t see what is going on inside the algorithms), I think it reflects more on the problem of proprietary systems running independently, as they please.

It reminds me of the situation of corporations, large-scale data miners and online security. The EU sets the standard in this area by levying huge fines for data breaches, particularly those that cause consumer suffering (North America lags behind in this regard).
I think that a statute similar to the GDPR could handle this issue nicely on a global scale. Just as California was/is the leader in many forms of safety regulation due to its market size, the EU has now stepped into that role in terms of digital privacy. It can do the same for regulating biased AI (at least for the largest of entities).

It won’t stop your local police department or courthouse (or even your government!) from running flawed systems. For that, mandated transparency becomes a necessity of operation. Governing bodies (and international overseers) would have to police the judicial systems of the world and take immediate action if necessary. For example, by cutting AI operations funding to a police organization that either refuses to follow the transparency requirements or refuses to fix diagnosed issues in its AI system.

Stifling freedom of expression and freedom of assembly

Another right at stake is freedom of expression. A recent Council of Europe publication on Algorithms and Human Rights noted for instance that Facebook and YouTube have adopted a filtering mechanism to detect violent extremist content. However, no information is available about the process or criteria adopted to establish which videos show “clearly illegal content”. Although one cannot but salute the initiative to stop the dissemination of such material, the lack of transparency around the content moderation raises concerns because it may be used to restrict legitimate free speech and to encroach on people’s ability to express themselves. Similar concerns have been raised with regard to automatic filtering of user-generated content, at the point of upload, supposedly infringing intellectual property rights, which came to the forefront with the proposed Directive on Copyright of the EU. In certain circumstances, the use of automated technologies for the dissemination of content can also have a significant impact on the right to freedom of expression and of privacy, when bots, troll armies, targeted spam or ads are used, in addition to algorithms defining the display of content.

The tension between technology and human rights also manifests itself in the field of facial recognition. While this can be a powerful tool for law enforcement officials for finding suspected terrorists, it can also turn into a weapon to control people. Today, it is all too easy for governments to permanently watch you and restrict the rights to privacy, freedom of assembly, freedom of movement and press freedom.

1.) I don’t like the idea of private entities running black-box proprietary algorithms with the aim of combatting things like copyright infringement or extremism either. It’s hard to quantify, really, because in a way we sold out our right to complain when we decided to use the service. The largest online platforms have indeed become the public square and a pillar of communication for millions, but this isn’t the platforms’ problem. This is what happens when governments stay hands-off with emerging technologies.

My solution to this problem revolved around building an alternative. I knew this would not be easy or cheap, but it seemed that the only way to ensure truly free speech online was to ditch the primarily ad-supported infrastructure of the modern internet. The era of Patreon and crowdfunding has helped in this regard, but not without consequences of its own. In a nutshell, when you remove the need for everyday people to fact-check (or otherwise verify) new information that they may not quite understand, you end up with the intellectual dark web:
a bunch of debunked or unimportant academics, a pseudo-science-peddling ex-psychiatrist made famous by an infamous legal battle with no one (well, except for those he sued for using their free speech rights), and a couple of dopey podcast hosts.

Either way, while I STILL advocate for one (or many) alternatives in the online ecosystem, it seems to me that, at least in the short term, regulations may need to come to the aid of the freedom of speech and expression rights of everyday people. Yet it is a delicate balance, since we’re dealing with platforms that are near-sovereign entities in themselves.

The answers may seem obvious at a glance. For example, companies should NOT have been allowed to up and boot Alex Jones off of their collective platforms just for the purpose of public image (particularly after cashing in on the phenomenon for YEARS). Yet in allowing black-and-white actions such as that, I can’t help but wonder if it could ever come back to bite us. For example, someone caught using copyrighted content improperly having their entire YouTube library deleted forever.

2.) I don’t think there is a whole lot one can do to avoid being tracked in the digital world, short of moving far from cities (if not off the grid entirely). At this point, it has just become part of the background noise of life. Carrying around a GPS-enabled smartphone and using plastic cards is convenient, and it’s almost impossible not to generate some form of metadata in one’s day-to-day life. So I don’t really worry about it, short of attempting to ensure that my search-engine-accessible breadcrumbs are as few as possible.

It’s all you really can do.

What can governments and the private sector do?

AI has the potential to help human beings maximise their time, freedom and happiness. At the same time, it can lead us towards a dystopian society. Finding the right balance between technological development and human rights protection is therefore an urgent matter – one on which the future of the society we want to live in depends.

To get it right, we need stronger co-operation between state actors – governments, parliaments, the judiciary, law enforcement agencies – private companies, academia, NGOs, international organisations and also the public at large. The task is daunting, but not impossible.

A number of standards already exist and should serve as a starting point. For example, the case-law of the European Court of Human Rights sets clear boundaries for the respect for private life, liberty and security. It also underscores states’ obligations to provide an effective remedy to challenge intrusions into private life and to protect individuals from unlawful surveillance. In addition, the modernised Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data adopted this year addresses the challenges to privacy resulting from the use of new information and communication technologies.

States should also make sure that the private sector, which bears the responsibility for AI design, programing and implementation, upholds human rights standards. The Council of Europe Recommendations on human rights and business and on the roles and responsibilities of internet intermediaries, the UN guiding principles on business and human rights, and the report on content regulation by the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, should all feed the efforts to develop AI technology which is able to improve our lives. There needs to be more transparency in the decision-making processes using algorithms, in order to understand the reasoning behind them, to ensure accountability and to be able to challenge these decisions in effective ways.

Nothing for me to add here. Looks like the EU (as usual) is well ahead of the curve in this area.

A third field of action should be to increase people’s “AI literacy”.

Indeed.

In an age where such revered individuals as Elon Musk are saying such profoundly stupid things as this, AI literacy is an absolute necessity.

States should invest more in public awareness and education initiatives to develop the competencies of all citizens, and in particular of the younger generations, to engage positively with AI technologies and better understand their implications for our lives. Finally, national human rights structures should be equipped to deal with new types of discrimination stemming from the use of AI.

1.) I don’t think that one has to worry so much about the younger generations as about the existing ones. Young people have grown up in the internet age, so all of this will come naturally. Guidance as to the proper use of this technology is all that should be necessary.

Older people are a harder sell. If resources were to be put anywhere, I think it should be into programs which attempt to make aging generations more comfortable with increasingly modernized technology. If someone is afraid to operate a smartphone or a self-checkout, where do you even begin with explaining Alexa, Siri or Cortana?

2.) Organizations do need to be held accountable for their misbehaving AI software, particularly if it causes a life-altering problem. Up to and including the right to legal action, if necessary.

 It is encouraging to see that the private sector is ready to cooperate with the Council of Europe on these issues. As Commissioner for Human Rights, I intend to focus on AI during my mandate, to bring the core issues to the forefront and help member states to tackle them while respecting human rights. Recently, during my visit to Estonia, I had a promising discussion on issues related to artificial intelligence and human rights with the Prime Minister.

Artificial intelligence can greatly enhance our abilities to live the life we desire. But it can also destroy them. It therefore requires strict regulations to avoid morphing into a modern Frankenstein’s monster.

Dunja Mijatović, Commissioner for Human Rights

I don’t particularly like the dark tone of this part of the piece. But I like that someone of influence is starting to ask questions and getting the ball rolling.

It will be interesting to see where this all leads in the coming months, years and decades.

“Unboxing Google’s 7 New Principles Of Artificial Intelligence” – (aitrends)

Today, I am going to look into Google’s recent release of its 7 new principles of artificial intelligence. Though the release was made at the beginning of July, life happens, so I haven’t been able to get around to it until now.

https://aitrends.com/ethics-and-social-issues/unboxing-googles-7-new-principles-of-artificial-intelligence/

How many times have you heard that Artificial Intelligence (AI) is humanity’s biggest threat? Some people think that Google brought us a step closer to a dark future when Duplex was announced last month, a new capability of Google’s digital Assistant that enables it to make phone calls on your behalf to book appointments with small businesses.

The root of the controversy lay in the fact that the Assistant successfully pretended to be a real human, never disclosing its true identity to the other side of the call. Many tech experts wondered if this is an ethical practice or if it’s necessary to hide the digital nature of the voice.

Right off the bat, we’re into some interesting stuff: an assistant that can appear to do all of your phone-call-related chores FOR you.

On one hand, I can understand the ethical implications. Without confirming the nature of the caller, it could very well be seen as a form of fraud. It’s already seen as such when a person contacts a service provider on behalf of another person without making that part clear (even if the other person authorized the action!). Indeed, most of the time, no one on the other end will likely even notice. But you never know.

When it comes to disguising the digital nature of such an assistant’s voice, I don’t see any issue. While it could be seen as deceptive, I can also see many businesses hanging up on callers that come across as too robotic. Consider the first pizza ever ordered by a robot.

Okay, not quite. We are leaps and bounds ahead of that voice in terms of, well, sounding human. Nonetheless, there is still an unmistakably automated feel to such digital assistants as Siri, Alexa, and Cortana.

In this case, I don’t think that Google (nor any other future developer or distributor of such technology) has to worry about any ethical issues surrounding this, simply because the onus is on the user to ensure the proper use of the product or service (to paraphrase every TOS agreement ever).

One big problem I see coming with the advent of this technology is that deception of the worst kind is going to get a whole lot easier. One example that comes to mind is those OBVIOUSLY computer-narrated voices belching out all manner of fake news to the YouTube community. For now, the fakes are fairly easy for the wise to pick up on, because they haven’t quite learned the nuances of the English language (then again, have I?). In the future, this is likely to change drastically.
Another example of a problem posed by this technology would be telephone scamming. Phishing scams originating in the third world are currently often hindered by the language barrier; it takes a lot of study to master enough English to fool most people in English-speaking nations. Enter this technology, and that barrier is gone.

And on the flip side of the coin, anything intelligent enough to make a call on your behalf can presumably also be programmed in reverse: to take calls. That would effectively eliminate the need for a good 95% of the call center industry. Though some issues may need to be dealt with by a human, most common sales, billing, or tech support problems could likely be dealt with autonomously.

So ends that career goal.

Nonetheless, I could see myself having a use for such technology. I hate talking on the phone with strangers, even for a short time. To have the need for that eliminated would be VERY convenient. What can be fetched by a tap and a click already is, so eliminating what’s left . . . I’m in millennial heaven.

You heard it here first . . .

Millennials killed THE ECONOMY!

Google was also criticized last month over another sensitive topic: the company’s involvement in a Pentagon program that uses AI to interpret video imagery and could be used to improve the targeting of drone strikes. Thousands of employees signed a letter protesting the program and asking for change:

“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

Time to ruffle some progressive feathers.

In interpreting this, I am very curious about what is meant by the word improve. What does it mean to improve the targeting of drone strikes? Improve the aiming accuracy of the weaponry? Or improve the quality of the targets (more actual terrorist hideouts, and fewer family homes)?

This has all become very confusing to me. One could even say that I am speaking out of both sides of my mouth.
On one hand, when I think of this topic, my head starts spitting out the common, deliberately dehumanizing language of war: terrorists, combatants, the enemy. Yet here I am, pondering whether improved drone strikes are a good thing.

I suppose that it largely depends on where your interests are aligned. If you are aligned more nationalistically than humanistically, then this question is legitimate. If you work for, or are a shareholder of, a defense contractor, then this question is legitimate. Interestingly, this could include me, being a paying member of both private and public pension plans (pension funds are generally invested in the market).

Even the use of drones alone COULD be seen as cowardly. On the other hand, that would entail that letting troops loose onto the battlefield, as in the great wars of the past, is the less cowardly approach; less cowardly because the death ratio is more equal.
Such an equation would likely strike most as completely asinine. The obvious answer is the method with the least bloodshed (at least for our team). Therefore, “BOMBS AWAY!” from a control room somewhere in the desert.

For most, it likely boils down to a matter of whether we HAVE to. If we HAVE to go to war, then this is the best way possible. Which then leads you to the obvious question: “Did we have to go to war?”. Though the answers are rarely clear, they almost always end up leaning towards the No side. And generally, the public never finds this out until after the fact. Whoops!

The Google staff (like other employees in Silicon Valley, no doubt) have made their stance perfectly clear: no warfare R & D, PERIOD. While the stance is enviable, I can’t help but also think that it comes off as naive. I won’t disagree that the humanistic position would be not to enable the current or future endeavors of the military-industrial complex (of which they are now a part, unfortunately). But even if we take the humanist stance, many bad actors the world over have no such reservations.
Though the public is worried about a menace crossing the border disguised as a refugee, the REAL menace sits in a computer lab. Without leaving the comfort of a chair, they can cause more chaos and damage than one could even dream of.

The next war is going to be waged in cyberspace. And at the moment, a HUGE majority of the infrastructure we rely upon for life itself is in some stage of insecurity ranging from wide open to “Password:123456”.
If there is anyone who is in a good position to prepare for this new terrain of action, it’s the tech industry.

On one hand, as someone who leans in the direction of humanism, I see war as nonsense and the epitome of a lack of logic. But on the other hand, if there is one thing that our species has perfected, it's the art of taking each other out.

I suspect this will be our undoing. If it’s AI gone bad, I will be very surprised. I suspect it will be either mutually assured destruction gone real, or climate change gone wild. Which I suppose is its own form of mutually assured destruction.

I need a beer.

Part of this exploration was based around a segment of the September 28, 2018 episode of Real Time where Bill has a conversation about the close relationship between astrophysicists and the military (starts at 31:57). The man's anti-philosophical views annoyed me when I learned of them 3 years ago. And it seems that he has since become a walking example of what you get when you put the philosophy textbooks out with the garbage.

A “clear policy” around AI is a bold ask because none of the big players have ever done it before, and for good reasons. It is such a new and powerful technology that it's still unclear how many areas of our life we will dare to infuse with it, and it's difficult to set rules around the unknown. Google Duplex is a good example of this: it's a technological development that we would have considered “magical” 10 years ago, but that today scares many people.

Regardless, Sundar Pichai not only complied with the request, but took it a step further by creating 7 principles that the company will promote and enforce as one of the industry drivers of AI.

When it comes to this sort of thing, I am not so much scared as I am nervous. Nervous of numerous entities (most of them private for-profits, and therefore not obligated to share data) all working on this independently, and having to self-police. This was how the internet was allowed to develop, and that has not necessarily been a good thing. I need go no further than the 2016 election to showcase what can happen when a handful of entities has far too much influence on, say, setting the mood for an entire population. It's not exactly mind control as dictated by Alex Jones, but for the purpose of messing with the internal sovereignty of nations, the technology is perfectly suitable.

Yet another thing that annoys me about those who think they are red-pilled because they can see a conspiracy around every corner.

I always hear about mind control and the mainstream media, even though the traditional mainstream media has shrinking influence with each passing year. It's being replaced by preference-tailored social media platforms that don't just serve up what you love, but also often (and unwittingly) paint a false image of how the world looks. While facts and statistics say one thing, my YouTube suggestions and overall filter bubbles say another.

It's not psy-ops and it doesn't involve chemtrails, but it's just as scary, considering that most of the people developing this influential technology don't fully grasp what they have created.
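
Since the mechanics of the filter bubble come up repeatedly here, a minimal sketch of one may help. Everything in it (the topics, the click rate, the greedy scoring) is invented for illustration; it just shows how a feed that only serves whatever earned the most engagement so far will amplify an early lead forever.

```python
import random

random.seed(42)

# Hypothetical topics; a few random early interactions give one a slight edge.
TOPICS = ["politics", "sports", "science", "music", "cooking"]
engagement = {topic: random.randint(1, 3) for topic in TOPICS}

def recommend(counts):
    """Greedy feed: always serve the topic with the most engagement so far."""
    return max(counts, key=counts.get)

for _ in range(1000):
    topic = recommend(engagement)
    if random.random() < 0.9:  # assume the user clicks most of what they see
        engagement[topic] += 1

print(engagement)  # the early leader runs away; other topics never resurface
```

Real recommenders are far more sophisticated than this greedy loop, but the feedback dynamic (what you clicked is what you get) is the same.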

1. Be socially beneficial

For years, we have dealt with comfortable boundaries, creating increasingly intelligent entities in very focused areas. AI is now getting the ability to switch between different domain areas in a transparent way for the user. For example, having an AI that knows your habits at home is very convenient, especially when your home appliances are connected to the same network. When that same AI also knows your habits outside home, like your favorite restaurants, your friends, your calendar, etc., its influence in your life can become scary. It’s precisely this convenience that is pushing us out of our comfort zone.

This principle is the most important one since it bows to “respect cultural, social, and legal norms”. It’s a broad principle, but it’s intended to ease that uncomfortable feeling by adapting AI to our times and letting it evolve at the same pace as our social conventions do.

Truth be told, I am not sure I understand this one (at least the explanation). It seems like the argument is that the convenience of it all will help push people out of their comfort zone. But I am a bit perplexed as to what that entails.
Their comfort zone, as in their hesitation to allow an advanced algorithm such a prominent role in their life? Or their comfort zone, as in helping to create opportunities for new interactions and experiences?

In the case of the former, it makes perfect sense. One need only look at the ten-deep line at the human-run checkout and the zero-deep line at the self-checkout to understand this hesitation.
As for the latter, most would be likely to notice a trend in the opposite direction. An introvert's dream could be seen as an extrovert's worst nightmare. Granted, many of the people in my life who comment on how technology isolates the kids tend to be pushy extroverts who see their own way of being as the norm. Which can be annoying, in general.

Either way, I suspect that this is another case of the onus being on the user to define their own destiny. Granted, that is not always easy if the designers of this technology don’t fully understand what they are introducing to the marketplace.

If this proves anything, it's that this technology HAS to have regulatory supervision from entities whose well-being (be it reputational or financial) is not tied to the success or failure of the project. Time and time again, we have seen that when allowed to self-police, private for-profit entities are willing to bury information that raises concerns about profitable enterprises. In a nutshell, libertarianism doesn't work.

In fact, with the way much of this new technology hijacks and otherwise finds ways to interact with us via our psychological flaws, it would be beneficial to mandate long-term, real-world testing of these technologies, in the same way that new drugs must undergo trials before they can be released on the market.

Indeed, the industry will do all it can to fight this, because it would effectively bring the process of innovation to a standstill. But at the same time, most of the worst offenders manipulate the psyche of their user base strictly because the attention economy is so cutthroat.
Thus, would this really be stifling technology? Or would it just be forcing the cheaters to stop placing their own self-interests above their users?

2. Avoid creating or reinforcing unfair bias

AI can become racist if we allow it. A good example of this happened in March 2016, when Microsoft unveiled an AI with a Twitter interface and in less than a day people taught it the worst aspects of our humanity. AI learns by example, so ensuring that safeguards are in place to avoid this type of situation is critical. Our kids are going to grow up in a world increasingly assisted by AI, so we need to educate the system before it's exposed to internet trolls and other bad players.

The author illustrates a good point here, though I am unsure if they realize that they answered their own question with their explanation.
Machines are a blank slate. Not unlike children growing up to eventually become adults, they will be influenced by the data they are presented with. If they are exposed only to neutral data, they are less prone to coming to biased conclusions.

So far, almost all of the stories I have come across about AI going racist, sexist, etc., can be traced back to the data stream the system was trained on. Since we understand that the dominant ideologies of parents tend to be reflected in their children, this finding should be fairly obvious. And unlike the difficulty of reversing these biases in humans, an AI can presumably be shut down and reprogrammed. A mistake that can be corrected.
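
To make the "bias in, bias out" point concrete, here is a toy sketch (all data invented for illustration). A naive majority-vote "model" faithfully reproduces the skew in its training history, and loses it the moment it is retrained on balanced data:

```python
from collections import Counter

# Toy "historical decisions" as (neighborhood, outcome) pairs.
# The skew lives in the data, not in the algorithm.
biased_history = ([("north", "deny")] * 80 + [("north", "approve")] * 20
                  + [("south", "deny")] * 20 + [("south", "approve")] * 80)

def train(examples):
    """Naive model: predict the majority outcome seen per feature value."""
    tallies = {}
    for feature, outcome in examples:
        tallies.setdefault(feature, Counter())[outcome] += 1
    return {f: counts.most_common(1)[0][0] for f, counts in tallies.items()}

print(train(biased_history))
# {'north': 'deny', 'south': 'approve'} -- the historical skew, reproduced

# "Shut down and reprogram": retrain on balanced data and the skew is gone.
balanced_history = ([("north", "deny")] * 50 + [("north", "approve")] * 50
                    + [("south", "deny")] * 50 + [("south", "approve")] * 50)
print(train(balanced_history))
# both neighborhoods now get identical treatment; no learned skew remains
```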

This highlights another interesting thing about this line of study. It forces one to seriously consider things like unconscious human bias. As opposed to the common anti-SJW, faux-intellectual stance that is:

“Are you serious?! Sexism without being overtly sexist?! Liberal colleges are turning everyone into snowflakes!”

But then again, what is a filter bubble good for if not excluding nuance?

3. Be built and tested for safety

This point goes hand in hand with the previous one. In fact, Microsoft's response to the Tay fiasco was to take it down and admit an oversight on the type of scenarios that the AI was tested against. Safety should always be one of the first considerations when designing an AI.

This is good, but coming from a private for-profit entity, it really means nothing. One has to have faith (hello, Apistevists!) that Alphabet/Google won't bury any negative findings made with the technology, particularly if it is found to be profitable. A responsibility that I would entrust to no human with billions of dollars of revenue at stake.

Safety should always be the first consideration when designing ANYTHING. But we know how this plays out when an industry is allowed free rein.
In some cases, airplane cargo doors fly off or fuel tanks puncture and catch fire, and people die. In others, sovereign national elections get hijacked and culminate in a candidate whose legitimacy many question.

4. Be accountable to people

The biggest criticism Google Duplex received was whether or not it was ethical to mimic a real human without letting other humans know. I’m glad that this principle just states that “technologies will be subject to appropriate human direction and control”, since it doesn’t discount the possibility of building human-like AIs in the future.

An AI that makes a phone call on our behalf must sound as human as possible, since it’s the best way of ensuring a smooth interaction with the person on the other side. Human-like AIs shall be designed with respect, patience and empathy in mind, but also with human monitoring and control capabilities.

Indeed. But we must not forget the reverse. People must be accountable for what they do with their AI tools.

Maybe I am playing the part of Captain Obvious. Nonetheless, it has to be said. No one blames the manufacturer of bolt cutters if one of its customers uses them to cut a bike lock.

5. Incorporate privacy design principles

When the convenience created by AI intersects with our personal feelings or private data, a new concern is revealed: our personal data can be used against us. The Cambridge Analytica incident, where personal data was shared with unauthorized third parties, magnified the problem by jeopardizing users' trust in technology.

Google didn’t use many words on this principle, probably because it’s the most difficult one to clarify without directly impacting their business model. However, it represents the biggest tech challenge of the decade, to find the balance between giving up your privacy and getting a reasonable benefit in return. Providing “appropriate transparency and control over the use of data” is the right mitigation, but it won’t make us less uncomfortable when an AI knows the most intimate details about our lives.

I used to get quite annoyed with people who were seemingly SHOCKED about how various platforms used their data, yet ignorant of the fact that they themselves volunteered the lion's share of it openly.
Data protection has always been on my radar, particularly in terms of what I openly share with the world at large. Over the years, I have taken control of my online past, removing most breadcrumbs left over from my childhood and teenage years from search queries. However, I understand that taking control within even ONE platform can be a daunting task. Even for those who choose to review these things on Facebook, it's certainly not easy.

There is an onus on both parties.

Users themselves should, in fact, be more informed about what they are divulging (and to whom) if they are truly privacy-conscious. Which makes me think of another question . . . what is the age of consent for privacy disclosure?

Facebook defaults this age to 18, though it's easy to game (my own family has members who allowed their kids to join at 14 or 15!). Parents allowing this is one thing, but consider the new parent who constantly uploads and shares photographs of their children. Since many people don't bother with (or worry about) their privacy settings, these photos are often publicly visible. Thus, by the time the child reaches a stage when they can decide whether or not they agree with this use of their data, it's too late.

Most children (and later, adults) will never think twice about this, but for those who do, what is the recourse?
Asking the parent to take the photos down is an option. But consider the issue if the horse is already out of the barn.

One of my cousins (or one of their friends) once posted a picture of themselves on some social media site drinking a whole lot of alcohol (I'm not sure if it was staged or not). Years later, they came across this image on a website labeled "DAMN, they can drink!".
After the admin was contacted, they agreed to take down the image for my cousin. But in reality, they didn't have to. It was publicly posted to begin with, so it was up for grabs.

How would this play out if the image were of a young child or baby who was too young to consent to waiving their right to privacy, and the person making the photo public was a parent/guardian or another family member?

I have taken to highlighting this seemingly minuscule issue because it may someday become a real one. Maybe one that the criminal justice systems of the world will have to figure out how to deal with. And without any planning as to how that will play out, the end result is almost certain to be bad. Just as it is in many cases where judges and politicians have been handed the responsibility of blindly legislating shiny new technological innovation.

To conclude, privacy is a two-way street. People ought to give the issue more attention than they give a post they scroll past, because future events could depend on it. But at the same time, platforms REALLY need to be more forthcoming about exactly WHAT they are collecting and how they are using this data. Changing these settings should also be a task of relative ease.

But first and foremost, the key to this is education. Though we teach the basics of how to operate technology in schools, most of the exposure to the main aspect of this technology (interaction) is self-taught. People learn how to use Facebook, Snapchat and MMS services on their phones, but they often have little guidance on what NOT to do.

What pictures NOT to send in the spur of the moment. How not to behave in a given context. Behaviors with consequences ranging from regret to dealing with law enforcement.

While artificial intelligence does, in fact, give us a lot to think about and plan for, it is important to note that the same goes for many technologies available today. Compared to what AI is predicted to become, this tech is often seen as less intelligent than it is mechanical. Nonetheless, modern technology plays an ever-growing role in the day-to-day lives of connected citizens of the world, of all ages and demographics. And as internet speeds keep increasing and high-speed broadband keeps getting more accessible (particularly in rural areas of the first world, and in the global south), more and more people will join the cloud. If they are not adequately prepared for the experience that follows, the result could be VERY interesting. For example, fake news tends to mean mere ignorance for most Westerners, but in the right cultural context, it can entail death and genocide. In fact, in some nations, this is no longer theoretical. People HAVE died because of viral and incendiary memes propagated on various social media platforms.

Priorities.

Before we even begin to ponder the ramifications of what does not yet exist, we have to get our ducks in a row in terms of our current technological context. It will NOT be easy, and it will involve partnerships of surprising bedfellows. But it will also help smooth the transition into an increasingly AI-dominated future.

Like it or not, it is coming.

“Autonomous Vehicles Might Drive Cities to Financial Ruin” – (Wired)

In a recent post exploring the rise of AI and the dramatic effects it will have on contemporary society as we know it, one of the issues I covered was the soon-to-arrive problem of unemployment on a MASSIVE scale. Comparisons are made to past transitions, but really, there is no precedent. Not just on account of the percentages, but also due to our population alone. There are WAY more of us making tracks now than during any past transition. The stakes could not be higher.

I explored some possible solutions to make the transition less drastic, my favorite being universal basic income. Though I explored that in enough depth to be satisfied, Wired has highlighted a new and equally important problem with this transition: the issue of local budgets becoming EXTREMELY tight on account of autonomous vehicles more than likely operating outside the traditional confines of most city revenue streams (gas taxes, parking tickets, etc.).

If we go into these situations unprepared, the conclusion seems altogether terrifying. Cities that were already structurally deficient in many ways in THIS paradigm now fall apart, filled with aimless and angry people, automated out of existence.

Then there is the now past peak of worldwide oil production, a wall we will also begin to increasingly hit in the coming years. Then again, one terrifyingly dystopian issue at a time.

https://www.wired.com/story/autonomous-vehicles-might-drive-cities-to-financial-ruin/

In Ann Arbor, Michigan, last week, 125 mostly white, mostly male, business-card-bearing attendees crowded into a brightly lit ballroom to consider “mobility.” That’s the buzzword for a hazy vision of how tech in all forms—including smartphones, credit cards, and autonomous vehicles— will combine with the remains of traditional public transit to get urbanites where they need to go.

There was a fizz in the air at the Meeting of the Minds session, advertised as a summit to prepare cities for the “autonomous revolution.” In the US, most automotive research happens within an hour of that ballroom, and attendees knew that development of “level 4” autonomous vehicles—designed to operate in limited locations, but without a human driver intervening—is accelerating.

The session raised profound questions for American cities. Namely, how to follow the money to ensure that autonomous vehicles don’t drive cities to financial ruin. The advent of driverless cars will likely mean that municipalities will have to make do with much, much less. Driverless cars, left to their own devices, will be fundamentally predatory: taking a lot, giving little, and shifting burdens to beleaguered local governments. It would be a good idea to slam on the brakes while cities work through their priorities. Otherwise, we risk creating municipalities that are utterly incapable of assisting almost anyone with anything—a series of sprawling relics where American cities used to be.

A series of sprawling relics where American cities used to be.

Like this?

The fact that Detroit blight jumps right to the forefront of the mind when the topic of urban wastelands is broached is unfortunate. I don't live anywhere near the city (nor have I ever visited), but even I know that the remaining residents are often doing everything in their power to improve their environment. The evidence is scattered all over YouTube and social media in general.

I decided to use the example, frankly, because I didn't like the way the author seemed to gloss over the deterioration of cities by using the term relics. A relic, to me, is something old, with a former purpose but now obsolete.
Cities (like Detroit) will likely never be obsolete. They will just continue to suffer the effects of entropy, while still being necessary for the survival of their inhabitants.

It may just be a linguistic critique, but it still doesn’t sit well with me.

Moving on, the other reason why Detroit (and really, many similar cities all over the US) comes to mind is that this is not the first time innovation has left locales in the lurch. Detroit and the others have other factors at play as well (white flight being one), but a big one lies in the hands of private entities. Automation itself requires fewer positions, and when combined with an interconnected global economy, the results can be tragic.
As much as I am fascinated by technology (and view it as the new societal stasis from now on), it's hard not to see it as one of the largest drivers of income inequality.
Workplace innovations are, almost as a rule, NOT good for anything but the bottom line. When you need fewer workers (and can employ them in places with inhumanely low wages), it's almost inevitable that inequality will balloon.

In the past, one could balance this out somewhat with the service sector, an industry that is a necessity everywhere and can reliably create cash flow from essentially nothing. It has served as somewhat of a crutch for some unemployed people. These jobs are by no means on par with previous positions (something many slanted commentators overlook, either ignorantly or deliberately), but nonetheless, they serve a purpose.

Or, at least they do for the time being.

The first big round of automation and economic shifts hit the manufacturing sector hard, leaving in its wake many examples of civil and urban decay. Though the new economic realities of free trade were not really an issue for the service industry (generally the opposite, actually), that paradigm may well be starting to shift.
Already, automation is slowly making its presence felt in the world of service. On top of this, online retailers are gradually rendering once absolutely necessary brick-and-mortar retail stores and complexes obsolete. While I can see some areas of the service sector as permanent, local retail is not one of them. At least not at the scale it operates today.

Hot or cold food is a challenge from a logistics perspective (when the lengthy supply chains of your average online retailer are considered). This, coupled with people wanting to eat out every so often, will hold a place for the family restaurant (or possibly even the fast-food outlet) in the local landscape for the time being. Stores, on the other hand (particularly larger retailers), are a different matter.

Local shops will still exist, I have no doubt there. But I doubt that the selection (or prices) would come anywhere close to what consumers can now get from big-box retailers, or will then be able to get from big online retailers. This, combined with the increased automation of future service encounters, could make things very challenging for anyone with any hesitation towards technology. I suspect that many such people will move out of (or be pushed out of) larger cities and towns, far from the machine.

The demise of big-box retail is, on one hand, a good thing. Big boxes tended to be notoriously toxic for local economies to begin with, not above many types of bullying tactics in order to maintain such perks as tax-free status. Consider the case of the big-box retailer that relocates a couple of miles over into another county in order to break a union, skip out on a local tax, or dodge whatever action it deemed punitive. The original county then ends up reaping all the negatives of such an enterprise without having any of the positives.

The world can do with fewer big boxes sucking up energy and contributing to an EXTREMELY energy-inefficient way of life that we can no longer afford, for a number of reasons. But having said that, economically, this will only succeed in turning almost the whole of most countries into the losing county of the big-box relocation. One or two cities that are home to the distribution facilities will see some benefit, but that is it. The rest see nothing but the infrastructural wear and tear, and the trash.
And things probably won't be rosy even for the seemingly lucky host cities of these distribution centers, because of the power these entities now have. Take the case of Seattle.

It would seem that I am now miles from where I started off (autonomous vehicles and city budgets). But it all plays into the very same thing. Just as I suspect that the majority of future retail distribution will be based out of a small number of warehouses and built around largely autonomous transportation (be it truck, plane or drone), I can also see such a model for autonomous vehicle distribution.
When rented autonomous vehicles become reliable enough to allow the majority of people to ditch one of the largest expenses in their lives (a vehicle), it will become increasingly financially feasible to own and maintain large fleets of always-ready autonomous vehicles. Just as self-hauling rental services operate almost ubiquitously across North America from one control center, I can see a similar entity operating huge fleets of self-driving vehicles.

Though these vehicles will utilize some local services (mechanics, cleaners, maybe electricity), as the article states, I doubt that will ever come close to covering the costs of maintaining the infrastructure on which they depend. Which more than likely means that consumers will be footing the bill, be it through taxes or user fees.
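
To give a feel for the scale of the problem, here is a back-of-envelope sketch with entirely hypothetical numbers (the Wired excerpt below cites a 15 to 50 percent range for car-derived transportation revenue):

```python
# All figures are hypothetical, for illustration only.
transport_budget = 100_000_000   # a city's annual transportation budget, in $
car_revenue_share = 0.30         # assumed share from gas taxes, parking,
                                 # tickets and registration (Wired: 15-50%)
av_adoption = 0.60               # assumed share of that revenue erased once
                                 # autonomous fleets dominate

shortfall = transport_budget * car_revenue_share * av_adoption
print(f"Annual shortfall: ${shortfall:,.0f}")  # $18,000,000 on these numbers
```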

The problem, as speaker Nico Larco, director of the Urbanism Next Center at the University of Oregon, explained, is that many cities balance their budgets using money brought in by cars: gas taxes, vehicle registration fees, traffic tickets, and billions of dollars in parking revenue. But driverless cars don’t need these things: Many will be electric, will never get a ticket, and can circle the block endlessly rather than park. Because these sources account for somewhere between 15 and 50 percent of city transportation revenue in America, as autonomous vehicles become more common, huge deficits are ahead.

Cities know this: They’re beginning to look at fees that could be charged for accessing pickup and dropoff zones, taxes for empty seats, fees for parking fleets of cars, and other creative assessments that might make up the difference.

But many states, urged on by auto manufacturers, won’t let cities take these steps. Several have already acted to block local policies regulating self-driving cars. Michigan, for example, does not allow Detroit, a short drive away from that Ann Arbor ballroom, to make any rules about driverless cars.

A preemptive strike.

Not that this surprises me. Auto companies are already blurring the line that once separated them from tech companies. I say this due to a bit of exposure to the computers that drive today's vehicles, having helped a self-taught mechanic tinker with the tune of his 2013 Ford F150. The internet is a limitless resource for this sort of thing. I taught him the basics of how to use this tool, and he ran with it.

It's not surprising that automobile manufacturers are already greasing the gears in statehouses all over the country. I wouldn't be surprised if other tech entities are doing the same thing.

This loss of city revenue comes at a harrowing time. Thousands of local public entities are already struggling financially following the Great Recession. Dozens are stuck with enormous debt loads—usually pension overhangs—that force them to devote unsustainable portions of their incoming revenue to servicing debt. Cities serve as the front lines of every pressing social problem the country is battling: homelessness, illiteracy, inadequate health care, you name it. They don’t have any resources to lose.

The rise of autonomous vehicles will put struggling sections of cities at a particular disadvantage. Unemployment may be low as a national matter, but it is far higher in isolated, majority-minority parts of cities. In those sharply-segregated areas, where educational and health outcomes are routinely far worse than in majority white areas, the main barrier to employment is access to transport. Social mobility depends on being able to get from point A to point B at a low cost.

Take Detroit, a city where auto insurance is prohibitively expensive and transit has been cut back, making it hard for many people to get around. “The bus is just not coming,” Mark de la Vergne, Detroit’s Chief of Mobility Innovation, told the gathering last week, adding that most people in the City of Detroit make less than $57,000 a year and can’t afford a car. De la Vergne told the group in the Ann Arbor ballroom about a low-income Detroit resident who wanted a job but couldn’t even get to the interview without assistance in the form of a very expensive Lyft ride.

As explored before, I suspect that the economies of scale of owning and operating massive fleets of self-driving vehicles may help with this problem. But with the shrunken job market and other local problems coming down the pipe, this hardly even seems a benefit worth mentioning.

That story is, in a nutshell, the problem for America. We have systematically underinvested in public transit: less than 1 percent of our GDP goes to transit. Private services are marketed as complements to public ways of getting around, but in reality these services are competitive. Although economic growth is usually accompanied by an uptick in public transit use, ridership is down in San Francisco, where half the residents use Uber or Lyft. Where ridership goes down, already-low levels of investment in public transit will inevitably get even lower.

When driverless cars take the place of Uber or Lyft, cities will be asked to take on the burden of paying for low-income residents to travel, with whatever quarters they can find lying around in city couches. Result: Cities will be even less able to serve all their residents with public spaces and high-quality services. Even rich people won’t like that.

America has been underfunding essential services across the board for decades. The fact that this is likely to REALLY bite the nation in the ass when it is least prepared to deal with it is just the cherry on top.

Also, I don't know that Uber and Lyft will necessarily get replaced. I suspect that they may still exist, just with far fewer employees. Who knows, one (or both) may become one of the autonomous vehicle behemoths I see existing down the road.

As for the comment about rich people . . . get real. Nothing matters outside the confines of the gated communities in which they reside. Even when the results of their actions are seemingly negative to them in the long term.

Money is a powerful blinder.

It will take great power and great leadership to head off this grim future. Here’s an idea, from France: There, the government charges 3 percent on the total gross salaries of all employees of companies with more than 11 employees, and the proceeds fund a local transport authority. (The tax is levied on the employer not the employee, and in return, employees receive subsidized or free travel on public transport.)

This helps the public transportation angle, indeed. But it doesn't even touch the infrastructure spending shortfall, a far more massive asteroid for most localities.
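
For what it's worth, the arithmetic of the French levy described above is easy to sketch. The company size and average salary here are assumptions, purely for illustration:

```python
# Back-of-envelope for the 3% payroll levy described in the quote above.
employees = 50
avg_gross_salary = 40_000        # EUR per employee per year, assumed
levy_rate = 0.03                 # 3% of total gross payroll

contribution = employees * avg_gross_salary * levy_rate
print(f"EUR {contribution:,.0f} per year to the local transport authority")
# EUR 60,000 -- and in return, employees ride subsidized or free
```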

At the Ann Arbor meeting, Andreas Mai, vice president of market development at Keolis, said that the Bordeaux transit authority charges a flat fee of about $50 per month for unlimited access to all forms of transit (trams, trains, buses, bikes, ferries, park and ride). The hard-boiled US crowd listening to him audibly gasped at that figure. Ridership is way up, the authority has brought many more buses into service, and it is recovering far more of its expenditures than any comparable US entity. Mai said it required a very strong leader to pull together 28 separate transit systems and convince them to hand over their budgets to the local authority. But it happened.

It’s all just money. We have it; we just need to allocate it better. That will mean viewing public transit as a crucial element of well-being in America. And, in the meantime, we need to press Pause on aggressive plans to deploy driverless cars in cities across the United States.

Public transit is just a part of the problem. I suspect a very small part, at that. And likely the easiest to deal with.
You cannot have a public transportation system (or at least not a good one) without addressing infrastructure deficits. And this is just the transportation angle. You also have to contend with water and sewage, solid waste removal, seasonal maintenance and other ongoing expenses.

Indeed, it is a matter of money and funding allocation. However, the majority of the allocation HAS to start in Washington, in the form of taxation on wealth. As bitter a pill as that is to swallow, failing to take that course of action may well make us nostalgic for the post-2016 turmoil. Pretty much every leader post-Reagan added a little more fuel to the powder keg, but failure to prepare adequately for the coming changes may well set the whole damn thing off.

As for pressing pause on the deployment of driverless vehicles in the cities of the world, we already know that such a plan won’t work. The levers of power are being greased as we speak. Thus, the only option is preparation. Exploration. Brainstorming.

There likely is not going to be a paradigm that fits all contexts, and there will be no utopias. But there is bound to be something between the extremes of absolute privatization and dystopia.

Ethics Of Artificial Intelligence – An Exploration

Today's topic has been on the back burner since April, when I came across the top 9 ethical issues in artificial intelligence as explored by the World Economic Forum.

It seems that I can't log into any platform without coming across Ethics or AI these days. That is unsurprising, given the microtargeted nature of our online world (past behavior dictates future content). What did surprise me, however, was having the Twitter account associated with this blog get followed by an Ethics in AI oriented NGO (very likely the source of the blog post that spawned this piece, actually).

In truth, it's all very . . . questionable. It seems that everyone and their dog is chiming in on the Ethics in AI conversation, but I am not sure any of us have mastered the topic yet. Particularly heads of tech-based companies with known histories of unethical behavior behind the shiny facade of Silicon Valley grandeur.

Nonetheless, let's get on with the questions.

https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/

1. Unemployment. What happens after the end of jobs?

The hierarchy of labour is concerned primarily with automation. As we’ve invented ways to automate jobs, we could create room for people to assume more complex roles, moving from the physical work that dominated the pre-industrial globe to the cognitive labour that characterizes strategic and administrative work in our globalized society.

Look at trucking: it currently employs millions of individuals in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade? But on the other hand, if we consider the lower risk of accidents, self-driving trucks seem like an ethical choice. The same scenario could happen to office workers, as well as to the majority of the workforce in developed countries.

This is where we come to the question of how we are going to spend our time. Most people still rely on selling their time to have enough income to sustain themselves and their families. We can only hope that this opportunity will enable people to find meaning in non-labour activities, such as caring for their families, engaging with their communities and learning new ways to contribute to human society.

If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.

That is certainly a rosy way to look at things. If we succeed in the transition, one day we may look at the full-time employment of a human over a lifetime as inhumane. There is just the matter of GETTING there first. Will society survive?

In exploring the seeming hyperbole of my last sentence, I think we have to define what we mean by a successful transition. If the transition is regulated in the typical, almost libertarian manner that many world governing entities (in particular, the United States) tend to follow, then not much will change. Like the last technological revolutions of our time, most gains will stay with the shareholders and the CEOs.
The biggest difference will be that the vast majority of all workers may well find themselves in dire straits. As opposed to just workers in regions once supported primarily by vibrant (yet niche, and inevitably redundant) industries, or workers displaced by money-saving decisions (such as outsourcing) made by companies.

One potential method of dealing with this time bomb (as some experts are calling it) would be some form of Universal Basic Income. Everyone (below a given income bracket?) would receive a regular sum of money to do with as they please. Presumably, it would be enough to cover living expenses (or at least make a substantial dent in them, anyway).

Though this concept is fairly new to me, I rather like the idea. Aside from helping to avoid a civil collapse into unrest and possible martial law (or in some cases, fascism), you get a healthier and more vibrant economy. Sitting on riches (or sheltering them in international tax havens) does nothing but increasingly undermine an economy. Distribution to lower brackets, however, tends to benefit economies at every scale. Food stamps are a good example of this.

To stick with that example, economies see benefits from even basic social safety nets. But when people have more than is required for the basics (what the boomers called disposable income), they spend more. They buy all manner of items that help to enrich all economies.

Of course, there is the question of how one is going to pay for this. In that respect, I am a lot like Bernie Sanders in saying “TAX THE RICH!”.

It is more than a slogan, however. It is (or it SHOULD BE!) a consequence to make up for the huge impact that their decisions have on the societies and nations in which they do business. In some senses, one could almost say "the societies and nations which they plunder".
Up to now, companies (for the most part) have been able to get away with washing their hands of the many consequences of their existence on local areas.
And not just unemployment, either!
Consider things like obesity, or the ever-growing problem of plastic waste (and garbage in general). To date, the food industry has faced ZERO consequences for either epidemic, despite being the single largest contributor to both problems worldwide.

Universal basic income seems not just the logical solution to a coming asteroid, but also a much-deserved form of corporate reparations. Oh yes . . . I went there.

To conclude, people like Elon Musk and the late Stephen Hawking typically cite fears of AI gone rogue as their main concern for the technology going forward. What is far more concerning to me are angry mass populations once they find themselves redundant. In a sense, we have already had a taste of what it looks like when angry (and somewhat ignorant) people find themselves without purpose in the world. Should we do nothing to prepare going forward . . .

There is also much to be said about the pitfalls of the current status quo. Millions of cogs in a giant machine, spending day after day just toiling. Working. Doing something, for some reason.

Well, at least you have a job! That is all we can hope for, right?

Boomers, in particular, love to use this line. Just shut up and do what you are told. Don’t think, just work.

It makes you wonder . . . how many people with gifts that would benefit society spend their lives toiling in menial labor, whilst the whole machine inches ever closer to its seemingly inevitable march off a cliff?

At this crossroads, where the species is soon to run smack dab into more than one wall, isn't it just logical to have all hands on deck? After all, it's not just the economy, or life as we know it . . . it's life itself.

Which will we choose?

Another milestone for the species? Or the evolutionary cul-de-sac?

2. Inequality. How do we distribute the wealth created by machines?

Our economic system is based on compensation for contribution to the economy, often assessed using an hourly wage. The majority of companies are still dependent on hourly work when it comes to products and services. But by using artificial intelligence, a company can drastically cut down on relying on the human workforce, and this means that revenues will go to fewer people. Consequently, individuals who have ownership in AI-driven companies will make all the money.

We are already seeing a widening wealth gap, where start-up founders take home a large portion of the economic surplus they create. In 2014, roughly the same revenues were generated by the three biggest companies in Detroit and the three biggest companies in Silicon Valley … only in Silicon Valley there were 10 times fewer employees.

If we’re truly imagining a post-work society, how do we structure a fair post-labour economy?

I think I have delved into most of the negatives in enough depth already. But it’s worth exploring the positives a little more.

Right now, the habit seems to be to call this the post-work era. I'm not sure that will necessarily be the case. As explored before, without so-called gainful employment taking up so much of people's time, their energy is available for whatever they want to focus it on. I suspect that this will include new business ventures. Ventures potentially shelved previously because the entrepreneur or the customers (if not both) may not have had the time to devote to such an endeavor.

To put it in hopeful economist terms, whose dreams could become a reality in this new paradigm?

This will, of course, largely depend on the availability of capital to fund such ventures. Though capital is a big issue in a paradigm of mass unemployment, it is possible that recent innovations like micro-financing and crowdfunding could help clear this hurdle. Either way, the possibility is there for this to become (or, more accurately, revert back to) an era of small business.

The future could be bright if one plays the cards right.

3. Humanity. How do machines affect our behaviour and interaction?

Artificially intelligent bots are becoming better and better at modelling human conversation and relationships. In 2015, a bot named Eugene Goostman won the Turing Challenge for the first time. In this challenge, human raters used text input to chat with an unknown entity, then guessed whether they had been chatting with a human or a machine. Eugene Goostman fooled more than half of the human raters into thinking they had been talking to a human being.

This milestone is only the start of an age where we will frequently interact with machines as if they are humans; whether in customer service or sales. While humans are limited in the attention and kindness that they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships.

In a sense, I would personally welcome this. Assuming they are user-friendly, I generally prefer dealing with machines over humans (be it a self-checkout or on the phone with some company). Part of this is due to my introverted nature (leaning towards the extreme end of that spectrum, if I am honest; it's a bit ironic that my menial labor is in customer service). And part of this reaction is because service-oriented jobs tend to be hell for all but a few brave souls. Having worked my entire career in various segments of the service industry, I've learned to hate people. To be fair, my high school experience helped pave the way to this conclusion, but HAVING SAID THAT . . . the general public didn't do much to correct my misconceptions.
Amusingly, this is a somewhat subdued expression of these feelings. Maturity has tempered me somewhat, even in comparison to some of the earliest posts on this blog.

I understand, however, that personal anecdote is not always a good barometer to go by.

Even outside of my paradigm, I still see this transition as being mostly a good thing. One benefit I can think of is helping the socially challenged, such as myself, practice some social skills in a non-judgemental environment.

It is also possible that it could go the other way. Less interaction could entail more isolation. Having said that, I suspect that these traits are more associated with the individual (not to mention their finances) than they are with macro technological innovations. It makes me curious whether the many studies and observations that find the post-boomer generations to be comparatively socially isolated also take financial constraints into account.

In conclusion, I suspect that the overall impact will likely range somewhere from benign to positive.

Even though not many of us are aware of this, we are already witnesses to how machines can trigger the reward centres in the human brain. Just look at click-bait headlines and video games. These headlines are often optimized with A/B testing, a rudimentary form of algorithmic optimization for content to capture our attention. This and other methods are used to make numerous video and mobile games become addictive. Tech addiction is the new frontier of human dependency.

This is actually a very much overlooked fact of modern existence that needs more attention. Tech addiction. Not only is it a real thing, it is oftentimes encouraged due to the nature of the industry. Though the human attention span is finite, the options within the software world are infinite. As such, some developers are not above employing questionable means to keep people hooked on their platform.

The unfortunate aspect of this is that some of the most desirable users of these apps are also the least equipped to combat their habit-forming nature . . . children. In today's world, seeing a teen (or anyone else) addicted to their phone is treated as little more than a joke, but really, there is something to this allegation. Even the heaviest users know it isn't rational to devote hours of attention to a seemingly benign app used to share photos.

For those who have never considered this before, consider why slot machines in Vegas don't just stay silent when you win big (or win at all, really). They flash bright lights, they make a racket, they sometimes dump shiny coinage all over the place. They get the dopamine pumping and make you feel good.

I likely don’t need to elaborate on the dark side of this psychological trickery in the context of gambling venues. I suspect that we all know (or know of) a person that has fallen into this trap.

But have you ever considered why many of those apps on your device are so pesky? They beep, ping, flash the screen, pop up constantly. If they are not showing personal interactions, then they are showing notifications about what activities friends have recently engaged in on the platform.

Anything to get your attention back on the app.
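
The quoted passage above mentions headlines "optimized with A/B testing". For anyone who hasn't seen that loop, here is a minimal sketch of it, with invented headlines and click-through rates standing in for live traffic:

```python
import random

random.seed(0)

# Two candidate headlines with true click-through rates that are unknown
# to the publisher; in a real system these would be live users, not draws.
headlines = {
    "You Won't BELIEVE What Happened Next": 0.11,
    "City Council Reviews Transit Budget": 0.04,
}

def ab_test(candidates, impressions_each=5000):
    """Split traffic evenly, measure observed CTR, return the winner."""
    observed = {}
    for headline, true_ctr in candidates.items():
        clicks = sum(random.random() < true_ctr
                     for _ in range(impressions_each))
        observed[headline] = clicks / impressions_each
    return max(observed, key=observed.get), observed

winner, observed = ab_test(headlines)
print(observed)
print("Now served to everyone:", winner)  # the attention-grabbing one wins
```

Run this loop on every headline, thumbnail and notification, and "whatever grabs the most attention" stops being an accident and becomes the design.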

On the other hand, maybe we can think of a different use for software, which has already become effective at directing human attention and triggering certain actions. When used right, this could evolve into an opportunity to nudge society towards more beneficial behavior. However, in the wrong hands it could prove detrimental.

To comment on the last point: it already is proving detrimental. That said, I am not even sure that one could say the software is in the wrong hands. After all, it was not inherently designed to be nefarious. It was designed to serve a purpose that benefited the agendas of its designers. Even they more than likely overlooked many of the flaws that have since become apparent.

If it's an indictment of anything, it's what you get when you allow the market too much control over these things. It's the emerging tech industry showing symptoms of a problem that has plagued American companies for decades . . . lack of regulatory control.
To anyone who disputes that seemingly arbitrary point, I encourage them to show me just ONE instance where an industry has put the well-being of the commons over short-term gains.

I am at a bit of a crossroads. On one hand, all of these tactics of psychological manipulation are more than likely here to stay. As noted in my gambling comparison, they long predate social media. So the author may be correct in seeing a possible positive use for such technologies.

Nonetheless, manipulation is manipulation. No matter who is pulling the strings, there exists an air of dishonesty.

4. Artificial stupidity. How can we guard against mistakes?

Intelligence comes from learning, whether you’re human or machine. Systems usually have a training phase in which they “learn” to detect the right patterns and act according to their input. Once a system is fully trained, it can then go into test phase, where it is hit with more examples and we see how it performs.

Obviously, the training phase cannot cover all possible examples that a system may deal with in the real world. These systems can be fooled in ways that humans wouldn’t be. For example, random dot patterns can lead a machine to “see” things that aren’t there. If we rely on AI to bring us into a new world of labour, security and efficiency, we need to ensure that the machine performs as planned, and that people can’t overpower it to use it for their own ends.

5. Racist robots. How do we eliminate AI bias?

Though artificial intelligence is capable of a speed and capacity of processing that’s far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are one of the leaders when it comes to artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when a software used to predict future criminals showed bias against black people.

We shouldn’t forget that AI systems are created by humans, who can be biased and judgemental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change.

I decided to group these two as one because I have come to see both symptoms as branches of the same root . . . bad data inputs.

Having done a bit of looking into this stuff: despite the bad AI results getting a lot of coverage, one often doesn't see much attempt at diagnosis. For example, the AI handing out harsher (seemingly racist) sentences to differing nationalities was also drawing from a far larger data set than any jury or judge would see (such as the person's home neighborhood, and other seemingly irrelevant information). It's less a matter of nefarious machines than it is data contamination.

Of course, this doesn’t make for as splashy of a headline. Or as gripping an article.

Things could take a wrong turn, as far as these machines are concerned. But with clean data, I suspect they may outperform their current human competition in MANY contexts. Bias is somewhat controllable at the point of data input (for example, switching out names for identification numbers when entering criminal records into these systems; a sketch of that swap follows). It is NOT controllable in the context of a human. Nor is it necessarily apparent, even to the person themselves, that they may well be acting on their biases. Or possibly even some other seemingly unrelated trigger ("I'm hungry / gotta pee! Can this just be over with already!?").
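
As flagged above, here is a toy sketch of that name-for-ID swap. It is only a sketch: real de-biasing of, say, risk-assessment inputs is far harder (proxy fields leak information), which is why this version also drops the neighborhood field mentioned earlier.

```python
import itertools

_ids = itertools.count(1)
_id_map = {}  # name -> opaque ID; would live separately, under access control

def pseudonymize(record):
    """Swap direct identifiers for an opaque ID and drop known bias proxies."""
    name = record["name"]
    if name not in _id_map:
        _id_map[name] = f"SUBJ-{next(_ids):06d}"
    cleaned = {key: value for key, value in record.items()
               if key not in ("name", "neighborhood")}
    cleaned["subject_id"] = _id_map[name]
    return cleaned

print(pseudonymize({"name": "J. Doe",
                    "neighborhood": "north end",
                    "prior_offences": 1}))
# {'prior_offences': 1, 'subject_id': 'SUBJ-000001'}
```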

6. Security. How do we keep AI safe from adversaries?

The more powerful a technology becomes, the more can it be used for nefarious reasons as well as good. This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously. Because these fights won’t be fought on the battleground only, cybersecurity will become even more important. After all, we’re dealing with a system that is faster and more capable than us by orders of magnitude.

The author says a system. I suspect that it will be many systems. We may get to that Star Trek-like future someday, but not in my lifetime.

To start, even our CURRENT public and private data infrastructure tends to be woefully unprotected. Judging by the sheer number of companies seemingly caught with their pants down upon discovering data breaches, it's as if digital security were an afterthought. When you sign up for everything from a bank account to a retail store loyalty card, you have to hope that digital security is a priority. And even if it is, there are no guarantees!

A good start that can happen TODAY is drafting legislation on the protection of data under an organization's care. Losing the equivalent of the intimate details of a person's life (and in some cases, those very details!) has to be more than a "WHOOPSY! We will do better!" type of situation. Identity theft can cause a lot of stress and cost a lot of money, so companies that fail to protect consumer data in every way possible (particularly in cases of negligence) should pay dearly for this breach of trust. A fine large enough to not just be a slap on the wrist. Cover the potential expenses of every potential victim, and then some. Make a statement.

What say you, Elizabeth Warren?

When it comes to private companies in control of public infrastructure, the same should apply. When an attack happens, the horse is already out of the barn. Which is why one has to be proactive.
Employ some white hats to test the resiliency of our private and public infrastructure. Issue warnings and set deadlines whilst demanding regular updates on progress made. Then keep at it.
Hit those that miss the deadline without reasonable explanation with fines. And keep on top of things, issuing warnings (and, hopefully less frequently, fines) as issues are found.

As technology progresses both for the public and the private sector, only a staunchly proactive atmosphere such as this can help prevent the hijacking of far more powerful technologies for nefarious purposes.

Being that most of the world is nowhere even close to this . . .

7. Evil genies. How do we protect against unintended consequences?

It’s not just adversaries we have to worry about. What if artificial intelligence itself turned against us? This doesn’t mean by turning “evil” in the way a human might, or the way AI disasters are depicted in Hollywood movies. Rather, we can imagine an advanced AI system as a “genie in a bottle” that can fulfill wishes, but with terrible unforeseen consequences.

In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which the wish was made. Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer – by killing everyone on the planet. The computer would have achieved its goal of “no more cancer” very efficiently, but not in the way humans intended it.

To be fair, the computer doesn’t have to kill all humans on earth, just all the ones with cancer. That would take care of the problem (well, at least temporarily). Call it the most effective health and fitness campaign in the history of the human race.

Move over Joanne & Hal!
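
Joking aside, the failure mode the quote describes is just naive objective optimization. Here is a toy sketch (plans and numbers entirely invented) of an optimizer that scores plans purely by "cancer cases remaining", alongside the constraint that encodes what we actually meant:

```python
# Hypothetical plans scored by a naive objective: cancer cases remaining.
plans = {
    "fund prevention and screening":  {"cancer_cases": 400, "population": 1000},
    "clean up known carcinogens":     {"cancer_cases": 550, "population": 1000},
    "eliminate everyone with cancer": {"cancer_cases": 0,   "population": 400},
}

naive_pick = min(plans, key=lambda p: plans[p]["cancer_cases"])
print(naive_pick)  # "eliminate everyone with cancer": goal met, intent violated

# Encoding the unstated intent ("and keep everyone alive") as a constraint:
safe_pick = min((p for p in plans if plans[p]["population"] == 1000),
                key=lambda p: plans[p]["cancer_cases"])
print(safe_pick)   # "fund prevention and screening"
```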

Moving on, this is one of the more played-up possibilities of this new technological age. Fear of the Machine, of the tables turning. But it's hard to see this as much more than Hollywood-driven fear-mongering.

Consider the "eradicate cancer" request that, in this hypothetical, went very wrong. What if, instead of becoming the digital adaptation of Adolf Hitler, the machine dug into its massive database and spat out a laundry list of lifestyle changes and environmental improvements that would dramatically lessen the incidence of cancer? Hell, of any big problem known to our species.
Strip away the bias, emotion and other dead weight of human cognition, add exponentially more computational power in the process, and who knows what can be accomplished.

For a while now, I've been tossing around the idea of UFOs and extraterrestrial visitors as some form of interstellar Artificial Intelligence, possibly linked to some past (or present!) life form from who knows where. Who knows . . . AI could be our ticket to depths beyond the observable universe!

8. Singularity. How do we stay in control of a complex intelligent system?

The reason humans are on top of the food chain is not down to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.

This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can’t rely on just “pulling the plug” either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.

I can’t help but wonder if that ship has already long since sailed into the sunset. I suppose it lies in how one defines intelligence. For example, there are 3 devices within my reach that leave my brain in the dust (8 in the whole of the apartment). My brain doesn’t hold a candle to a calculator, let alone a modem or a smart TV.
But at the same time, these things are not beings (to borrow from the article). They are just objects with purposes ranging from the simplistic to the complex. Be it crunching data or helping move it from my computer to the WordPress server, both machines are far from autonomous.
The closest examples we have at the moment are the autopilot systems of both jetliners and autonomous vehicles, and even these default to human intervention when in doubt.

So I guess we’re not quite there . . . yet?

If I recall, the last time I explored this question, I concluded that we would most likely never see this milestone, because I have serious doubts about the continued flourishing of the species as a whole. This culmination may not be like Life After People (everyone here one day, gone the next), but no matter what, whoever is left is more than likely to have bigger concerns than furthering AI research.
Rather than The Matrix, you may have The Colony. Mad Max. The Book of Eli. The Road.

Pick your poison.

If I am wrong and am proven a dumbass by Elon Musk and everyone else sounding alarm bells, as Chef Ramsay would say . . . Fuck me. We done got ourselves into a pickle now, didn’t we?

Since these machines are influenced by input data, I guess . . . hopefully, the technology will ignore the whole parasitic nature of the spread of the human species. And hopefully it will overlook the way that humans tend to consume and destroy damn near everything we have ever come into contact with.

God help us all if this singularity decides to side with Gaia.

Then again, what if it flips the script and turns mother to the species, acting as nurturer instead of destroyer? We conceived of it with our limited faculties, so it shall now keep us healthy. A good outcome, it would seem.

But wait. There are only enough resources to support X number of humans, but Y number are currently alive. If there isn't a cull of this number (along with a possible lifestyle change), all will perish.
Still a good result?
The great fall is inevitable. In one circumstance, few recognize the truth and mass calamity ensues in the aftermath. In the other, the warning allows at least SOME preparations to be made (and difficult decisions to be faced) in staving off the worst possible scenario.

One can play with the singularity principle all they like. If it is to be, there is not all that much to be said or done. Though, I suspect we won't have to worry, anyway.

9. Robot rights. How do we define the humane treatment of AI?

While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.

*raises eyebrow*

I have heard virtual cookies spoken of many times in my years of contributing to online forums, offered to people of opposing viewpoints as a gesture of goodwill. I never thought there would be a day when such a cookie would exist, however.

Is it like, a bitcoin?

Call me ignorant, but I haven't the FAINTEST idea how one rewards something that, last I checked, was neither sentient nor conscious.
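For what it's worth, the "virtual reward" isn't a cookie at all; it's just a number folded into an update rule. A minimal sketch of tabular Q-learning (the states, actions and values below are invented) shows the whole trick:

```python
# The "virtual reward" is just a number folded into an update rule.
# Minimal tabular Q-learning; states, actions and numbers are invented.
alpha, gamma = 0.1, 0.9   # learning rate, discount factor
Q = {}                    # (state, action) -> estimated value

def update(state, action, reward, next_state, actions):
    old = Q.get((state, action), 0.0)
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# "Rewarding" the system for a good move is nothing more than this call:
update("s0", "sit", 1.0, "s1", ["sit", "stay"])
print(Q[("s0", "sit")])  # 0.1 -- the value estimate nudged up by the reward
```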

Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful “survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder?

How about, never?

I view this as not being identical to the processes of evolution or obsolescence, but similar enough to render it ethically benign. It would be asinine to weigh the ethics of the process of evolution spanning the ages. And humans regularly throw away and destroy the old and obsolescent technologies that once populated their lives.
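And for the record, the "mass murder" in question looks something like this. A minimal sketch of a genetic algorithm (the toy task, evolving a bit string toward all ones, and every parameter are invented): the unsuccessful half is simply deleted each generation.

```python
import random

# Minimal sketch of a genetic algorithm. The toy task (evolving a bit string
# toward all ones) and every parameter here are invented for illustration.

def fitness(genome):
    return sum(genome)  # more ones = "fitter"

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]

for _ in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # the unsuccessful half is simply deleted
    children = [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(10)
    ]
    population = survivors + children

print(max(fitness(g) for g in population))  # climbs toward 16
```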

Consider the following videos. Are the actions you see within either video unethical?

To be fair, a great many people do display emotional distress at the sight of this type of thing. But this is less murder than it is . . . progress. To have shiny new things, we have to sacrifice much of the old things to The Claw.

Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of “feeling” machines?

Some ethical questions are about mitigating suffering, some about risking negative outcomes. While we consider these risks, we should also keep in mind that, on the whole, this technological progress means better lives for everyone. Artificial intelligence has vast potential, and its responsible implementation is up to us.

Looking at this question in a strictly pragmatic way, then no. We will not take the suffering of the machines into consideration. I draw this conclusion from the way that our species tends to treat lesser animals that are sacrificed for the sake of our stomachs.

Should animals be given respect for the beings that they are? I suppose that depends on what that entails.
The argument can be made that since humans have the emotional intelligence to understand suffering, the consumption of meat is unethical. Actually, put that way, one could even say barbaric.
Bloody hell, the militant vegans are getting to me.

Either way, since I am not Ingrid Newkirk and less prone to emotional manipulation than the average psychopath, this does not drive me straight to veganism. First, because humans have been omnivores bordering on carnivorous for pretty much the entirety of our existence.

The ancestors of Homo sapiens cooked their food; cooking has been around for approximately a million years (that is around 500,000 years longer than the human species has existed) (Berna et al., 2012; Organ, Nunn, Machanda, & Wrangham, 2011). Traces of humans eating meat are also ancient and seem to have been around for as long as our species has existed (Pobiner, 2013). One of our closest relatives, the chimpanzee, also eats an omnivorous diet with mainly fruits, but occasionally eats animals (McGrew, 1983).

https://veganbiologist.com/2016/01/04/humans-are-not-herbivores/

Though meat may seem to be a necessity in our diet, its components can fairly easily be replaced by vegan alternatives, so I wouldn't use that argument. Which brings us back to ethical implications. Since it would be asinine to label a lion unethical for doing what it must to survive, I also feel no such ethical conundrum.
To be fair, while I am pro-meat, I am not blind. It would be a benefit in every way for the species to severely curb its meat consumption. It's simply unsustainable in the long term (even without taking inhumane conditions into consideration). If everyone relaxed their meat consumption to even once or twice a week, resource-intensive factory farming would quickly become redundant. Meat eaters can enjoy their choice, whilst animals also get a bit of a lift (in terms of overall treatment).

The argument can still be made that there is no humane way to slaughter animals for food, period. Full stop.

I hate dichotomies. If you are one of the people of this persuasion, I encourage you to go over to National Geographic or Discovery's channel and watch a lion eat its prey. Eat its sometimes sick and injured (and therefore, easy to catch) prey.

The phrase “put it out of its misery” exists for a reason.

Having delved into all of that, I still can’t bring myself to view these so-called Intelligent Designs (a proper usage for the term?) as anything beyond abiotic objects. Which raises a new question . . . when do abiotic factors become biotic?
From what I can tell, the divide seems to be between the living and the dead. Recently living things that are part of a food chain are generally considered biotic until fully broken down.

I think one can understand where I am going with this, by now. Does life need to fit into this spectrum? Or could we have just stumbled into a new categorization?

For the time being, I'll settle into the non-committal conclusion that is "Not sure". If this ever becomes a reality, I will likely revisit this topic. However, until then, this can go on the shelf alongside God's existence, extraterrestrial phenomena and other supernatural lore.

“7 Short-Term AI Ethics Questions” – (Medium)

It's been a while since I delved into this interesting and ever more commonly discussed topic, so I will go through an article I recently came across on Twitter, written by Orlando Torres and published by Towards Data Science, which seems to be a branch of the platform Medium.

https://towardsdatascience.com/7-short-term-ai-ethics-questions-32791956a6ad

When new technologies become widespread, they often raise ethical questions. For example:

  1. Weapons — who should be allowed to own them?
  2. Printing press — what should be allowed to be published?
  3. Drones — where should they be allowed to go?

An interesting way to start, since one could argue that none of those questions have really been fully answered yet.

The most apparent answer in the context of the US is weapons. That has been an ongoing conversation for decades, and likely will continue being so (short of a miracle).
As for publishing, in these days of relative calm and first world freedom of speech, pretty much anything goes. However, narratives that rise to the top tend to be what is good for power. In the past, narratives good for power tended to be all that was allowed.

So again, we can bring ourselves back to the start. People angry about changing times ("I can't say insensitive and bigoted things without people getting all triggered!") certainly would agree. But I don't align myself with self-serving idiots (even if we share one common thread).
As such, I say this. What tends to get lots of airplay is what is good (or at the very least, benign) for those in power. Granted, modern-day social media algorithms have shaken up this notion a bit (given their heavy reliance on user inputs; more on this later). Nonetheless, what is good for those in power is still the basis of most popular discourse within societies the world over.

Drones make things even more interesting. The readily available consumer units certainly pose a never-before-faced challenge to the privacy rights of pretty much everyone. Not even a 50-foot fence can protect you anymore.
Not to mention idiots using these things in aircraft corridors. Though the drones involved in collisions have so far mostly bounced off the aircraft, I fear what could happen if one of these units (with its lithium-ion payload!) ended up in an engine. Particularly with jets that are throttled down (landing!) or otherwise at a low altitude. This airspace also tends to blanket populated cities.

You get the picture.
That conversation is pretty much finished, however. Few would disagree that only the most inept or careless would use a consumer drone in such a callous fashion. But then again, given the state of the conversation on assault weapons and weapons in general (in the US), one can't even assume that a line can be drawn at the common good.

After all, drones don’t kill people. People flying drones into aircraft engines, which then crash into heavily populated buildings and neighborhoods, kill people!

A nice segue into part 2 of this. The drones of war.

Nations utilizing these things on a regular basis certainly have answered the question that is “Where should these predator drones be allowed to go?”. Any nation full of brown people!

Fine, that was a loaded statement.  Even though I don’t ever see these things being used to take out the scum of the earth in any first world nations, we can’t go there.
Waco served as a good template for that storyboard. First Timothy McVeigh and the Alfred P. Murrah building, then Columbine, then Virginia Tech (and who knows how many other roots).
Of course, we don't listen to this logic in the Middle East. Leveling hundreds of homes and buildings and killing thousands of innocent people, then assuming that the ISIS phenomenon sprang out of nowhere.
Firing and releasing hundreds of trained Iraqi military leaders and personnel into a power vacuum filled with raging people craving ideological structure . . . who could have seen the endgame to this experiment coming?

Though important, that is just a side effect of the predator drone, separate from the question of where these things should be used. The United States and other western nations seem to agree that they are only to be deployed in enemy territory. I highly doubt the people native to these lands would agree, however.

Is it any wonder why they keep showing up in Europe by the thousands? We like to pick apart their presence and probable effects on contemporary society (mostly theoretical), but we certainly don’t like considering exactly WHY they felt the need to come in the first place.
Not to mention the financial angle. Most conservatives I know are all about cutting services to these refugees, but I rarely hear about the costs of the never-ending war that drove them there.

An amount that I suspect would make caring for all American citizens under a single-payer system (with pharmacare!) look like a penny.

Or at very least, a one dollar bill.

The answers to these questions normally come after the technologies have become common enough for issues to actually arise. As our technology becomes more powerful, the potential harms from new technologies will become larger. I believe we must shift from being reactive to being proactive with respect to new technological dangers.

We need to start identifying the ethical issues and possible repercussions of our technologies before they arrive. Given that technology grows exponentially fast, we will have less and less time to consider the ethical implications.

We need to have public conversations about all these topics now. These are questions that cannot be answered by science — they are questions about our values. This is the realm of philosophy, not science.

Artificial intelligence in particular raises many ethical questions — here are some I think are important to consider. I include many links for those looking to dig deeper.

I provide only the questions — it’s our duty as a society to find out what are the best answers, and eventually, the best legislation.

While I won't and don't disagree, Artificial Intelligence is hardly the first entry in the conversation that is technological innovation versus ethics. While a great many breakthroughs could fall into this category, one of the most obvious seems to be nuclear weapons.
There is NO upside to having them around, PERIOD. A war between Pakistan and India ALONE could be enough to effectively wipe out our species. Let alone the fact that the United States seems hell-bent on picking a fight with someone, be it Russia, China or some other unforeseen player.

Though it could come across as an unfair question (depending on the circumstances) . . . where were the philosophers during the Manhattan Project? I can think of no position more against logic than mutually assured destruction.

Artificial Intelligence in the wrong hands (or if not properly managed) COULD go in a bad direction and turn out to be bad news for us. But at the same time, it could also become just another tool in the human toolkit of experimentation, innovation, and exploration.

1. Biases in Algorithms

Machine learning algorithms learn from the training data they are given, regardless of any incorrect assumptions in the data. In this way, these algorithms can reflect, or even magnify, the biases that are present in the data.

For example, if an algorithm is trained on data that is racist or sexist, the resulting predictions will also reflect this. Some existing algorithms have mislabeled black people as “gorillas” or charged Asian Americans higher prices for SAT tutoring. Algorithms that try to avoid obviously problematic variables like “race”, will find it increasingly hard to disentangle possible proxies for race, like zip codes. Algorithms are already being used to determine credit-worthiness and hiring, and they may not pass the disparate impact test which is traditionally used to determine discriminatory practices.

How can we make sure algorithms are fair, especially when they are privately owned by corporations, and not accessible to public scrutiny? How can we balance openness and intellectual property?

Algorithms can present biased results when trained on biased data from the outset. A big problem that has to be addressed.
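To make the proxy problem concrete, here is a hedged toy sketch (the zip codes, decisions and "model" are all invented): "race" never appears as a feature, yet a correlated stand-in carries the historical bias straight through.

```python
from collections import defaultdict

# "Race" never appears as a feature, but a correlated proxy (zip code) does.
# The decisions and zip codes below are invented for illustration.
history = [
    ("10001", True), ("10001", True), ("10001", True),    # historical approvals
    ("60621", False), ("60621", False), ("60621", True),  # historical denials
]

# A deliberately trivial "model": the approval rate per zip code.
rates = defaultdict(list)
for zip_code, approved in history:
    rates[zip_code].append(approved)

model = {z: sum(v) / len(v) for z, v in rates.items()}
print(model)  # {'10001': 1.0, '60621': 0.33...} -- the bias survives intact
```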

A french fry factory has a similar problem. The product they put out is low quality, mostly because the raw potatoes they bring in and process are substandard. But they are cheap and plentiful. How does one fix this problem?

Indeed, that is only part of the problem as outlined. But nonetheless . . . come on. You don't need to be an ethicist or a philosopher to solve this enigma.

As for keeping an eye on proprietary algorithms, that is more of a challenge. It reminds me of the struggle that is keeping track of various Wall Street financial instruments like derivatives. The equations are often so complex and convoluted that what regulation DOES exist is very hard to enforce.
Good to know that the fate of the world economy is in the hands of a bunch of probable psychopaths that haven't learned a thing from the events of 10 years ago.

Fortunately, the two are not entirely identical. Mainly because one can keep track of algorithms just by way of their results. Keep tabs on these various outputs, and issue notices/warnings/fines/cease and desists as necessary. Even if one can't see inside the black box that drives Bank A's loan authorization process, you don't need to. You just need it to follow established guidelines.
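For what that output-side auditing could look like, here is a minimal sketch of the four-fifths rule behind the disparate impact test the article mentions (the group labels and decisions are invented): no access to the black box required, only its decisions.

```python
# Auditing the outputs, not the black box: the "four-fifths rule" behind the
# disparate impact test compares selection rates between groups.
# Group labels and decisions below are invented for illustration.

def selection_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 1 = approved by the algorithm
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"impact ratio: {ratio:.2f}")  # 0.50 here
if ratio < 0.8:  # below 80% is the traditional red flag
    print("possible disparate impact -- flag the black box for review")
```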

Yes, algorithms have already gotten away from us. By now, we have all heard about the mess that Facebook (and other social media platforms) have gotten themselves into. And there are other real-world examples of misused algorithms creating real problems for people.

I don’t deny it’s something we need to watch for since it’s ALREADY happening. However, I still don’t really consider it to be a big problem, simply because it seems fairly easy to fix.
Have the potato factory start bringing in higher quality potatoes, and have government regulatory organizations keep tabs on the outbound product to ensure it meets food safety standards.

Done.

2. Transparency of Algorithms

Even more worrying than the fact that companies won’t allow their algorithms to be publicly scrutinized, is the fact that some algorithms are obscure even to their creators.
Deep learning is a rapidly growing technique in machine learning that makes very good predictions, but is not really able to explain why it made any particular prediction.

For example, some algorithms have been used to fire teachers, without being able to give them an explanation of why the model indicated they should be fired.

How can we balance the need for more accurate algorithms with the need for transparency towards people who are being affected by these algorithms? If necessary, are we willing to sacrifice accuracy for transparency, as Europe's new General Data Protection Regulation may do? If it's true that humans are likely unaware of their true motives for acting, should we demand machines be better at this than we actually are?

First off, deep learning. Normally I stay away from wiki articles, but in this case, I needed a way to get my bearings (if you will). Once you learn enough about a topic or concept, you can further search for relevant information. I generally do that by starting with individual aspects of a given topic and then building up from there.
It was a method that served me well for most of my explorations into glyphosate, GMOs and other scientific-research-heavy topics. But not so much with this. I have a feeling I am starting at chapter 4 of a complicated textbook.

So let’s try and back it up a bit.

Machine learning is all about using statistical techniques to help computers learn without explicit programming. In a nutshell, it’s all about predictions or decisions based on raw data inputs. It’s actually commonly used by many of us already in the form of email filtering and network intrusion prevention (your firewall), among other things. Since the threats and environment in both areas are always changing, the algorithm learns common behaviors and makes future judgments accordingly.

There is much more than that. But that seems a good start.
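To make the email-filtering example concrete, a minimal hedged sketch using scikit-learn (the training messages are invented): the filter is never handed rules for what spam looks like; it infers them from labelled examples.

```python
# The filter is never handed rules for what spam looks like; it infers them
# from labelled examples. Requires scikit-learn; the messages are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["win a free prize now", "claim your free money",
            "meeting moved to tuesday", "lunch later this week?"]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)   # word counts as features

model = MultinomialNB().fit(X, labels)
print(model.predict(vectorizer.transform(["free prize money"])))  # ['spam']
```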

In theory, the goal for the whole field seems to be generalization from experience. Which makes sense of its close relationship to statistical analysis, being that it's possible to make fairly accurate predictions in many areas based on statistical analysis. I would imagine that the long-term goal is to outsource this task to an algorithm that is both faster and more accurate than the average human brain. Not to mention cheaper in the long run.

Given this, and the similarity in the backbone structure of many AI networks to neural networks in the human brain, I would think that deep learning takes the initial task to a whole new scale. Rather than drawing on a handful of data inputs, this algorithm may be dealing with hundreds (if not more), all building on one another.
Most decisions made by humans are based on a multitude of different factors that we likely don’t even realize have an influence, so I imagine that the conclusions of these deep learning AI algorithms are similar.
Which seems to be where the problem lies in both the conclusions of the AI networks and their human counterparts. What influenced the outcome?

In humans, finding and diagnosing this problem is a huge issue in itself. The seemingly simple task of convincing people that not all bias (racism, sexism, or otherwise) is necessarily overt, is a challenge in itself. As I can attest to, having been one of those people.
This can also be backed up by the results of different tests that remove (or cloud) the identity of job applicants.

Blind Auditions Increased Women's Participation In Many Orchestras

http://gap.hks.harvard.edu/orchestrating-impartiality-impact-%E2%80%9Cblind%E2%80%9D-auditions-female-musicians

According to analysis using roster data, the transition to blind auditions from 1970 to the 1990s can explain 30 percent of the increase in the proportion female among new hires and possibly 25 percent of the increase in the percentage female in the orchestras.

Minorities Who “Whiten” Job Resumes Get More Interviews

In one study, the researchers created resumes for black and Asian applicants and sent them out for 1,600 entry-level jobs posted on job search websites in 16 metropolitan sections of the United States. Some of the resumes included information that clearly pointed out the applicants’ minority status, while others were whitened, or scrubbed of racial clues. The researchers then created email accounts and phone numbers for the applicants and observed how many were invited for interviews.

Employer callbacks for resumes that were whitened fared much better in the application pile than those that included ethnic information, even though the qualifications listed were identical. Twenty-five percent of black candidates received callbacks from their whitened resumes, while only 10 percent got calls when they left ethnic details intact. Among Asians, 21 percent got calls if they used whitened resumes, whereas only 11.5 percent heard back if they sent resumes with racial references.

‘Pro-diversity’ employers discriminate, too

In one study to test whether minorities whiten less often when they apply for jobs with employers that seem diversity-friendly, the researchers asked some participants to craft resumes for jobs that included pro-diversity statements and others to write resumes for jobs that didn’t mention diversity.

They found minorities were half as likely to whiten their resumes when applying for jobs with employers who said they care about diversity. One black student explained in an interview that with each resume she sent out, she weighed whether to include her involvement in a black student organization: “If the employer is known for like trying to employ more people of color and having like a diversity outreach program, then I would include it because in that sense they’re trying to broaden their employees, but if they’re not actively trying to reach out to other people of other races, then no, I wouldn’t include it.”

But these applicants who let their guard down about their race ended up inadvertently hurting their chances of being considered: Employers claiming to be pro-diversity discriminated against resumes with racial references just as much as employers who didn’t mention diversity at all in their job ads.

 

https://hbswk.hbs.edu/item/minorities-who-whiten-job-resumes-get-more-interviews

Given our tendency towards bias and our frequent inability to even realize it when making many decisions (including, at times, life-altering ones for others), it is indeed a bit scary to consider that a machine could come to similar conclusions under similar circumstances (that is, with the same dataset).
And yet, this is nothing new. The fear, I assume, rests on the premise that the human brain will come to a better (less biased) result. Which, as we can see in the examples above (as well as many others), isn't true.

Humans are far from immune to the contaminated results that are born of biased inputs. In fact, I would consider the human aspect of this problem far worse because few are watching for signs of this problem. AI and its conclusions will be under constant scrutiny due to the mistrust in it within contemporary society. But the same can’t be said for humans, considering that many don’t even realize (or refuse to acknowledge) that a problem exists in the first place!

When it comes to what drives human decisions, we are indeed in the dark. There might come a day when we will understand the brain well enough to map this out, but I doubt we’re anywhere near there. Which is a good thing. Because seeing into a mind is a scary thing to ponder. It’s pretty much the last private refuge that any of us have!

Unlike with the human mind, I personally don't see transparency in algorithms as being as difficult an issue to overcome as it's being made out to be. If it's something under our control, then it seems logical that one could add a caveat in the coding that produces a roadmap of sorts to what influenced a given decision. Or if that is not possible, there is always the nuclear option: banning the use of so-called black box AI in high-stakes situations (such as those involving employment or insurance claims).
I still think that a lot can be attained from clean data inputs.
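One hedged reading of that "roadmap" caveat, as a toy sketch (the feature names, weights and threshold are invented): with a simple linear score, the per-feature contributions can be logged alongside every decision.

```python
# With a simple linear score, per-feature contributions can be logged beside
# every decision. Feature names, weights and the threshold are invented.
weights = {"income": 0.6, "debt_ratio": -0.9, "years_employed": 0.3}
applicant = {"income": 1.2, "debt_ratio": 0.8, "years_employed": 0.5}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# The "roadmap": which inputs pushed the decision, and by how much.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:15s} {c:+.2f}")
print(f"score: {score:+.2f} -> {'approve' if score > 0 else 'decline'}")
```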

This strikes me as a good argument as to why much of this research should be socialized (or at least more transparent). Neutral researchers are more likely to start with clean data, AND to not overlook problems with the output should it be favorable to the organization they work for.

3. Supremacy of Algorithms

A similar but slightly different concern emerges from the previous two issues. If we start trusting algorithms to make decisions, who will have the final word on important decisions? Will it be humans, or algorithms?

For example, some algorithms are already being used to determine prison sentences. Given that we know judges’ decisions are affected by their moods, some people may argue that judges should be replaced with “robojudges”. However, a ProPublica study found that one of these popular sentencing algorithms was highly biased against blacks. To find a “risk score”, the algorithm uses inputs about a defendant’s acquaintances that would never be accepted as traditional evidence.

Should people be able to appeal because their judge was not human? If both human judges and sentencing algorithms are biased, which should we use? What should be the role of future “robojudges” on the Supreme Court?

Why is it that all of these seem to boil down to the raw data that the algorithm is working with?

Take this:

To find a “risk score”, the algorithm uses inputs about a defendant’s acquaintances that would never be accepted as traditional evidence.

Then it obviously shouldn't be used by the algorithm in determining a sentence. I would have thought that to be obvious. Which makes statements like "If both human judges and sentencing algorithms are biased, which should we use?" ridiculous.

This seems to be a perfect example of a naysayer throwing the baby out with the bathwater. The assumption that the algorithm is inherently biased, and therefore garbage.
Though I would certainly say that for humans (even myself, really), I strongly suspect that biased inputs are strongly influencing the biased outputs being highlighted. The solution, of course, is ensuring a clean data feed. Which includes stripping external data that normal court proceedings would not consider in the first place.

Should people be allowed to appeal a sentence because the entity handing it down was not human? Of course.
Yes, this could become a common reason for appeal. But if the case is built on a strong enough foundation, one shouldn't have anything to worry about.

Seeing how easily people can be manipulated by various emotional inputs makes me personally PREFER the thought of a trial overseen by a robot judge (assuming the data bias issues are dealt with). Democracy is overrated.
So if I am given the choice between a jury of randomly selected participants sourced from a contaminated pool or a cold and unemotional machine that deals in the extraction of raw data inputs, I’ll take the machine.

4. Fake News and Fake Videos

Another ethical concern comes up around the topic of (mis)information. Machine learning is used to determine what content to show to different audiences. Given how advertising models are the basis for most social media platforms, screen-time is used as the typical measure of success. Given that humans are more likely to engage with more inflammatory content, biased stories spread virally. Relatedly, we are on the verge of using ML tools to create viral fake videos that are so realistic humans couldn’t tell them apart.

For example, a recent study showed that fake news spread faster than real news. False news were 70% more likely to be retweeted than real news. Given this, many are trying to influence elections and political opinions using fake news. A recent undercover investigation into Cambridge Analytica caught them on tape bragging about using fake news to influence elections.

If we know that videos can be faked, what will be acceptable as evidence in a courtroom? How can we slow the spread of false information, and who will get to decide which news count as "true"?

First off, this has been a concern of mine for years now. Starting with friends of mine that I have coffee with fairly regularly.
Fed a steady diet of police brutality videos and commentaries on the topic, they developed a very anti-cop attitude. All based on videos that may not even have been recent, though they rarely checked.
The same thing applied when it came to various conspiracy theories. If I expressed doubt, up comes a video of some moron disproving chemtrails because when he walks in the winter, he doesn't leave a 20-block-long contrail. Yes, some of these people ARE that easy to fool.
These days, it’s all about Trump’s heroic task of bringing Hillary and Obama to JUSTICE for their horrible crimes. What did they do? Fuck if I know.
Oh yeah . . . SHE SOLD URANIUM TO THE RUSSIANS! Pizzagate!

For years, I have seen this story play out in people I know. The people I describe have a history of gullibility. But what I was most concerned about, was how these algorithms may affect those with a mental illness.

Then it all fell by the wayside for some time and I forgot about it. That is, until 2016 happened, and suddenly the world realized how damaging these unregulated social media algorithms can be. Putting people into smaller and smaller niches may be good for screentime (and therefore money, at least in theory), but it's not good for a democracy. Add in the many actors that have learned how to use these schisms to their advantage, and you have a real bird's nest. Eventually culminating in Brexit and Donald Trump.

In many respects, the cat is out of the bag now. I may know many of the various nuances that make up the fake news category, but for many people, such words fall on deaf ears. When the president of the United States is openly using the term Fake News to tarnish any organization that doesn't meet his approval, the appeal-to-authority fallacy only gains strength.

Though social media has indeed done a lot of damage in the short time that it has had any influence on civilization, it may not be as irreversible as it feels. It's not like the schisms erupted from a vacuum. Again, it was a case of a machine running rampant on a tainted data pool.

I recently deactivated my Facebook account on account of some of this. I was fed up with being bombarded with an endless torrent of stupidity. Because sharing is as easy as a single click or tap.

Reining this in becomes a touchy subject, as far as free speech is concerned. One COULD program the algorithms to bury the fake stuff, or at least prioritize the legitimate over the trash. But then you leave yourself open to the many ways that could be manipulated. Imagine trying to break the Snowden story with the algorithms programmed against you.

I propose the solution of tags.

Tags that make the origin, bias, and legitimacy of an article, post or meme clear. While these tags may be disregarded by a segment of social media users (the ones that one likely has no hope of EVER reaching, to begin with), it is not them that the tags are targeting. They are more useful for the casual social media browsers that read and post without a second thought. I suspect that if there were a stimulus present to raise doubt about the legitimacy of the information, many would be more careful with what they post.

I envision it as even becoming part of the social hierarchy. At this point, few are held to account for the ramifications of their thoughtless posting. However, an environment where problems with content are outlined before AND after posting may make for a different result.
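As a sketch of what I have in mind (the fields and labels here are hypothetical, not any real platform's API):

```python
from dataclasses import dataclass

# Hypothetical tag schema -- not any real platform's API.
@dataclass
class ContentTag:
    origin: str       # e.g. original publisher, or first-seen source
    bias: str         # e.g. "left", "right", "satire", "unknown"
    legitimacy: str   # e.g. "verified", "disputed", "fabricated"

def render_warning(tag: ContentTag) -> str:
    """What a casual scroller would see before re-sharing."""
    if tag.legitimacy != "verified":
        return f"Caution: {tag.legitimacy} content from {tag.origin} ({tag.bias} bias)"
    return f"Source: {tag.origin}"

meme = ContentTag(origin="unattributed chain post", bias="unknown",
                  legitimacy="disputed")
print(render_warning(meme))
```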

As for how to determine fake videos from real videos, it is currently not all that difficult to tell the difference (though the fakes are getting better). I like this advice from a real-life photo forensics expert.

Farid calls this an “information war” and an “arms race”. As video-faking technology becomes both better and more accessible, photo forensics experts like him have to work harder. This is why he doesn’t advise you go around trying to determine the amount of green or red in someone’s face when you think you spot a fake video online.

“You can’t do the forensic analysis. We shouldn’t ask people to do that, that’s absurd,” he says. He warns that the consequences of this could be people claiming real videos of politicians behaving badly are actually faked. His advice is simple: think more.

“Before you hit the like, share, retweet etcetera, just think for a minute. Where did this come from? Do I know what the source is? Does this actually make sense?

“Forget about the forensics, just think.”

That is my rule of thumb in this realm. Never take anything for granted, never assume. Because even a seemingly innocuous meme post about a sunken German U-boat in the Great Lakes may actually be a Russian submarine somewhere else entirely.

I mention this because I came VERY close to actually posting it myself around a year ago.

https://www.snopes.com/fact-check/nazi-sub-found-in-great-lakes/

5. Lethal Autonomous Weapon Systems

AI researchers say we will be able to create lethal autonomous weapons systems in less than a decade. This could be in the form of small drones that are able to be deployed, and unlike current military drones, be able to make decisions about killing others without human approval.

For example, a recent video created by AI researchers showcases how small autonomous drones, Slaughterbots, could be used for killing targeted groups of people, i.e., genocide. Almost 4,000 AI/Robotics researchers have signed an open letter asking for a ban on offensive autonomous weapons.

On what basis should we ban these types of weapons, when individual countries would like to take advantage of them? If we do ban these, how can we ensure that it doesn’t drive research underground and lead to individuals creating these on their own?

This one also seems obvious. A bizarre combination of both war crime and potential suicide. Not to mention that it doesn’t help with the whole scare factor that is associated with AI.

Personally, I am for bans and treaties restricting the usage of this kind of technology. Like nuclear weapons, the deployment of these weapons could be EXTREMELY consequential, and possibly even a point of no return.
Having said that, such a ban may be hard to enforce on a global level. World War 2 showed us exactly how little treaties matter when the rubber meets the road. Also worth mentioning is how erratic superpowers contribute to the demand for research into such weapons in the first place.

I would guess that no one is going to stop China or Russia on this path, should they choose to follow it. As for smaller rogue nations, invasions in recent years of supposedly denuclearized nations have trashed whatever credibility the west did have in the conversation. Leaving such states no option but mutually assured destruction.

Alright, maybe Iran or North Korea don't have the capability to completely decimate a behemoth like the United States. Nonetheless, you don't have to when you can make matchsticks of a major city or three. Not to mention the blowback as treaties activate and other nations get involved in the war effort.
And now John Bolton is the National Security Advisor for President Trump. The standard to which all hawks are compared is effectively at the helm.

I REALLY need to lay off the current affairs for a while. 3 years should do it.

Indeed, this may be a modern-day Wernher von Braun moment in history. That is, a time when it would be great to have a rational observer beyond all patriotic influence step in and say "WHAT THE HELL ARE YOU DOING?!".
It’s certainly what came to my mind after I watched an older documentary about the evolution of weaponized rocketry in the United States. One that ended by praising the fact that we are now able to essentially hit the reset button on the biological clock of the planet.

6. Self-driving Cars

Google, Uber, Tesla and many others are joining this rapidly growing field, but many ethical questions remain unanswered.

For example, an Uber self-driving vehicle recently killed a pedestrian in March 2018. Even though there was a “safety driver” for emergencies, they weren’t fast enough to stop the car in time.

In the article, there is a screen capture from the vehicle just before impact.

The Guardian also has the released footage on its website. Both from the front of the vehicle, and the interior. You can see how suddenly the situation came up by the horrified look on the driver's face. This is going to be with her for the rest of her life.

Time for the critiques. Starting with “New footage of the crash that killed Elaine Herzberg raises fresh questions about why the self-driving car did not stop” as written by The Guardian.

Even without going further into the article, we can use the video itself for clues. First off, the most obvious, was that this happened at night. The second is the darkness of the scene around the roadway. I see multiple lanes and an exit just ahead. Which tells me that this was a high-speed roadway. In any case, the pedestrian appeared out of nowhere.

Questions are being raised about the equipment on the vehicle not responding to the threat, given its supposed ability to spot such threats hundreds of yards in advance. It is definitely worth looking into.
That said, however, I don't think that the presence of the autonomous vehicle should be seen as anything more than coincidence. I suspect that the driver may have been texting, which is a problem. Putting too much faith in automated systems has doomed fly-by-wire aircraft before. Nonetheless, even without a potential failure of the technology, I question whether the collision would have been avoidable anyway.

Would I be seeing this case in the international media if it were just an ordinary car? Even if there was negligence on the part of the driver?

I highly doubt it. To the local population, it would be an unfortunate situation and traffic fatality. To the rest of us, it would be a statistic. Also, another anecdote as to why smart cars driving on smart roadways should be the way of the future.

No matter what the family or anyone else says, that pedestrian should NOT have been there. And most importantly of all, even if the driver may not have had time to see and avoid (to borrow an aviation term), the pedestrian had PLENTY of time. I counted no fewer than 3 lanes in the direction she was walking from, which means that there should have been plenty of time to see the oncoming headlights. The fact that she seemingly didn't notice them tells me that there was a distraction on her part as well. If not texting, then possibly headphones.
There is a reason why I don't listen to headphones when walking, nor do I text or talk on the phone when crossing a street. It only takes one second. And even if you are technically in the right, it's hardly a consolation if you're under a vehicle, or beside it with broken ribs (if not worse).

Instead of turning this accident into an ethical dilemma, use it for the far more productive purpose of educating people to PAY ATTENTION TO YOUR SURROUNDINGS. Your life may depend on it someday.

As self-driving cars are deployed more widely, who should be liable when accidents happen? Should it be the company that made the car, the engineer who made a mistake in the code, the operator who should’ve been watching?

What about the pedestrian (or another motorist) that should have seen the oncoming threat? Should Boeing and Airbus be liable when operators of their heavily automated products become overly complacent with the automated processes?

Should we not just take this on a case by case basis?

If a self-driving car is going too fast and has to choose between crashing into people or falling off a cliff, what should the car do? (this is a literal trolley problem)

The first question that comes to mind is why a self-driving car would be going too fast, to begin with. While a malfunction scenario is possible, the most likely culprit will almost certainly boil down to human intervention. I say this because humans are known to defeat the purpose of safety mechanisms for any number of frivolous reasons. It may as well be in our DNA.

I am speaking in the theoretical, at least at this point in history. But assuming the technology keeps developing along its current trajectory, I can see systems coming online that require less and less human input. Particularly if/when the equipment is also in communication with sensors and inputs from the roadway itself. I can envision this as being both far safer than any current roadway, and far more efficient. I explored this topic in some depth in a previous post.

In my conclusions, my projections were based on the trajectory of aircraft automation. The more automated aviation has become, the safer the industry has been. I see no reason why the trajectory of autonomous vehicles should be any different.

Once self-driving cars are safer than the average human drivers (in the same proportion that average human drivers are safer than drunk drivers) should we make human-driving illegal?

I also explored this in my previous piece (linked above), but I will go into it a bit here.

This indeed will be a topic of concern moving forward. It's said that the next 30 or 40 years may be a tricky transition period, as automated machines mix with human-driven machines on public roadways. Though machines are mostly predictable, the human oftentimes is not, for a whole host of reasons. As a result, I have no doubt that there will be future collisions where these two factors come into conflict. However, I also don't doubt that the permeation of automation on roadways will eventually reduce this statistic over time.

It will mean that all levels of government will have to make some tough decisions. When we reach the point where human operators now pose an exceptional risk on primarily automated roadways, do we ban their presence altogether? Do we assign separated roadways to accommodate both types of traffic?

It will be interesting to see where this goes.

7. Privacy vs Surveillance

The ubiquitous presence of security cameras and facial recognition algorithms will create new ethical issues around surveillance. Very soon cameras will be able to find and track people on the streets. Before facial recognition, even omnipresent cameras allowed for privacy because it would be impossible to have humans watching all the footage all the time. With facial recognition, algorithms can look at large amounts of footage much faster.

For example, CCTV cameras are already starting to be used in China to monitor the location of citizens. Some police have even received facial-recognition glasses that can give them information in real time from someone they see on the street.

Should there be regulation against the usage of these technologies? Given that social change often begins as challenges to the status quo and civil disobedience, can a panopticon lead to a loss of liberty and social change?

The first thing I will say about the Panopticon comparison (after looking it up) is that we are already there. Not exactly in the literal sense (as in cameras equipped with facial recognition tracking everyone everywhere). More, in the data and meta-data generated by our everyday interaction with technology.

High-level intelligence agencies likely have access to much of the information within the deep web (think contents of email accounts, cloud servers, etc. Even the contents of your home computer would count, being it can access the internet but the contents are not indexed by search engines).

Note the difference between the deep and dark webs. The deep web is all about stored content. The dark web is where you find assassins (and is considered to be a small part of the deep web).

Along with what is stored on various servers of differing purposes, it’s now well known that various intelligence agencies (most notably the NSA) are also copying and storing literally petabytes of data as it flows through the many fiber-optic backbones that make up global online infrastructure. Even if your nation isn’t allowed to access this data by law, chances are good that nothing is stopping any of these other prying eyes from looking in.
In these days of encrypted communication as the new standard, however, I am unsure of how useful these dragnets will really be. What is more important is how the platforms one interacts with handle government requests for data.

Either way, many aspects of one’s existence can be mapped out by way of their digital breadcrumbs. Texts, emails, financial transactions etc. While financials and other unnoticed tracking systems (rewards programs!) can help track you physically, cellular technology brings a whole lot more accuracy in this regard. Particularly in the age of the wifi network.

Consider this.

One day, I brought up Google Maps on my PC (I don't remember why). I was surprised to see that it had my location pinpointed not only to the city but right down to the address. This alarmed me a little, knowing that this laptop has neither GPS nor cellular network access capabilities. The best Google Maps should be able to do is my city, based on the location of my ISP (as dictated by my IP address).
I learned later, however, that when an Android (Google), Apple, or Microsoft mobile device connects to a new wifi network, the device makes note of its geographical location and sends the information to databases maintained by each company. So in theory, my network is now indexed in all 3 databases.

Given this, having cameras everywhere utilizing surveillance technology will not really add much to the situation. In many ways, most of us are already almost transparent if the desire for the information is there. They just have to put the pieces together.
Granted, I don't live in a large metropolis (or a country like England) with a saturation of public and private surveillance. Nonetheless, it's still hard to imagine facial recognition tracking being any worse than what already happens.
Indeed, it is a bit unnerving to know that most of your past steps could be easily tracked with such a system. But at the same time, similar results can be reached just by requesting logs of which towers your cell phone utilized within whatever time period is needed. Or better yet, the GPS data very likely stored by Google, Apple or Microsoft.

Not having facial recognition built into the saturated CCTV systems of the world would indeed be the ideal option. The same could be said for algorithms designed to piece together usage patterns from the data stockpiled by the intelligence dragnets. Nonetheless, at least one of those bridges has already been crossed (likely both). So the best we can do is try to keep a tight grasp on who can use the data.
Just as a court order is required for law enforcement to access your phone or internet records, so too should be the case for facial recognition data.

Philosophy with a deadline

Actual people are currently suffering from these technologies: being unfairly tracked, fired, jailed, and even killed by biased and inscrutable algorithms.

We need to find appropriate legislation for AI in these fields. However, we can’t legislate until society forms an opinion. We can’t have an opinion until we start having these ethical conversations and debates. Let’s do it. And let’s get into the habit of beginning to think about ethical implications at the same time we conceive of a new technology. ​

While I agree with the “let’s get into the habit of thinking about ethical implications at the same time we conceive of a new technology” part, I don’t think that society should have to form an opinion first.

Society has already formed opinions around both AI and social media. The view of AI tends to veer towards the negative, while the view of social media (at least previous to late 2017 / early 2018) tended to be more positive. Societal influences tend to come less from hard evidence and more from external drivers like pop culture and marketing. Since both can be used to influence a biased positive or negative response in the court of public opinion, the public stance is hardly helpful in this case.
When you don't have a majority of media-literate citizens, democracy is overrated.

So yes, keep checking the pushers of technological breakthroughs to ensure they are not getting ahead of the details in their excitement (or haste) to promote new innovations. But don't make it a public spectacle.

Artificial Intelligence And “Human” Common Sense + Ethical Veganism

Today, I will again delve into the realm of Artificial Intelligence. A response (rebuttal?) to an argument made by Sam Harris in one of his recent podcasts, discussing (among other things) Artificial Intelligence with Kevin Kelly (here is some background).

Also, some Veganism stuff. It will come later, and it is related.

Fortunately, unlike the author of the last article I utilized for commentary, Kevin (interestingly, one of the founders and the founding editor of Wired) is not as sympathetic to the scare mongering as other notable names. Something I put in italics because, despite many names being well-known contributors on the subject, even I question what exactly they are contributing.

Case in point:

 

https://twitter.com/elonmusk/status/896166762361704450

One thing I will say is that I didn't think I would see the day when I would be in agreement with Mark Zuckerberg on much of anything. But it seems that this topic has made it happen.
In short, Mark thinks that Elon and the like are overreacting to the point of irresponsibility. Elon thinks that Mark . . . is not knowledgeable enough on the subject. Many Elon fanboys posted photos of ointment for Mark (he got BURNED!).
I put my hands to my face and shook my head.

Really Elon? THAT is the card you are going to play?

Don't get me wrong, Mark and I still aren't buds, all things considered. Unlike the dystopian realities in the windscreen of folks like Sam Harris and Elon Musk, Mark Zuckerberg's profit-driven algorithm structures have helped to damage and divide nations the world over (and continue to do so). Indeed, The Facebook is not alone in this. But it was the trailblazer of the concept. And it appears to be doing little short of window-dressing to even acknowledge that a problem exists (let alone tackle it).

Either way, a feud between two rich techies that have WAY more intellectual influence than is merited is another subject altogether.

First, back to Sam Harris and the podcast.

He seems not as afraid of humanity being run over due to malice as he is of humanity being run over due to circumstance. Maybe these machines will become SO efficient in their replication that they will allocate ALL resources towards the purpose. Humanity will not be murdered but sacrificed to AI needs.

One of the first things that comes to mind is that these people have GOT to pay more attention to the recent warning from Stephen Hawking. He gives the species 100 years (give or take) before the earth is . . . all washed up. To paraphrase George Carlin, a giant stinking ball of shit.

The Hawking warning comes to mind because his solution (we need to populate Mars or other planets) is questionable to me. Not just from a logistical point of view, but also from an ethical one. We know that humans will more than likely eventually screw up any planet we inhabit, so is it ethical to keep the process going?

My ethics angle on this question would not be taken well by a great many people (let alone agreed upon). The only person I did get an answer from seemed to default to it being automatically ethical if it enables continued human growth. That seems like a bit of a cop-out to me. But it is what it is.
Either way, the only well-known voice to come out against this planetary Brexit movement seemed to be Bill Maher, outlining his reasons in the prologue of his Earth Day 2017 show. Many of which I am more or less in agreement with.

We know the general reasons why Stephen Hawking, Elon Musk, and others want to get us off of earth and into supposedly better places. They range from it being something cool to accomplish to us having no choice. But to focus on the latter (Hawking’s) side of the issue, what drove us to that point in the first place?

A myriad of factors would be the nuanced answer. But to get right to the point: ourselves. We did it to ourselves.

There are multiple reasons to explore. Climate change, the permeation of plastic pollution, loss of biodiversity . . . pick your poison. While these are drastically different issues, they are all rooted in the same ideology: human (and later, corporate) growth. While our history is littered with this behavior, it did not culminate until our discovery and subsequent incorporation of fossil fuels into everyday life.

It could be said that this time in history presented humanity with a forked path and a stark choice: proceed with this new technology with care and caution toward possible ramifications, or go all in. It seems the humans of the time didn’t see (or chose to ignore) the future risks, and fully embraced and incorporated it into societies worldwide. The same can be said for any number of technological revolutions that turned into disasters in their own right, from asbestos to DDT, to plastic, to BPA.

Humans are not good at the long game, and really, we never have been. Unfortunately, unlike when there were too few of us to do much damage, that dynamic has changed. Not only are our staggering numbers ALONE a strain on the biosphere; all of our modern innovations only add to the mess.

Plastic and other trash now make up a layer of our very own creation, both on the surface and at the bottom of the oceans. Large cargo vessels and tankers (along with seismic oil and gas exploration) have dramatically raised background noise levels in every ocean. There is literally no part of the ocean in which we cannot be seen or heard.

Our industrialization period has forever changed whole continents, from forests and wetlands to farmland and concrete. Drive from Winnipeg, Manitoba to Key Largo, Florida, and you will see pretty much the same thing: an endless expanse of farmland or concrete. Drive from Winnipeg to Vancouver, and you will see more or less the same. Except where the landscape made breaking the land impractical or impossible, it’s human-controlled.

Over a single century (or half a century, really), humans have burned and released millions of years’ worth of carbon into the atmosphere, a rate of increase that far surpasses pretty much any past event. We still don’t know what all the long-term effects of this massive glut of CO2 will be, but all credible contributors agree that it will NOT be pretty. Today’s flooding storm surge is tomorrow’s baseline sea level. At MINIMUM.

Then there is space. We have launched into orbit and abandoned enough junk for it to now be a legitimate problem for people and equipment operating in the great unknown. And our fingerprints are not just in near-earth orbit. We have also left stuff behind on the moon, and we have shot vehicles even further out into the solar system, destined to end up who knows where.

It seems that no matter what endless expanse of empty space humans come into contact with, they find a way to clutter it up. A recent example of this, and a problem for urbanites everywhere (whether they realize it or not), is wifi interference and overlap. A few years back, few would even think about this since wifi was still in its infancy. But now, with more wireless devices than ever in our possession and every ISP selling and renting routers to service those devices, the available spectrum is often filled to capacity (if that theoretical capacity can even be reached through the noise).
Networks and devices sharing a channel can more or less negotiate with one another for airtime (with lag increasing according to the number of active devices in the area on that channel). Traffic on adjacent channels, however, is seen as noise, hindering (if not entirely drowning out) wifi activity. Imagine trying to have a conversation in a crowded room: the louder the background din gets, the louder everyone else has to get.
Since these ISP-issued routers often set up shop anywhere on the 2.4 GHz spectrum (not just on the recommended channels 1, 6, or 11), there is often lots of overlap, which creates noise that actively reduces the already finite bandwidth available to area devices.
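
To make the channel overlap concrete, here is a minimal Python sketch (my own illustration, not anything from a cited source) of why 1, 6, and 11 are the only 2.4 GHz channels that stay out of each other’s way. The 22 MHz channel width and the 2407 + 5 × n center-frequency formula are the standard 802.11b/g figures; everything else is invented for the demo.

```python
# Why channels 1, 6, and 11? Each 2.4 GHz channel is ~22 MHz wide,
# but channel centers sit only 5 MHz apart, so neighbors bleed into
# each other as noise.

CHANNEL_WIDTH_MHZ = 22  # classic 802.11b/g channel width

def center_freq_mhz(channel: int) -> int:
    """Center frequency of a 2.4 GHz Wi-Fi channel (1-13)."""
    return 2407 + 5 * channel

def channels_overlap(a: int, b: int) -> bool:
    """Two channels interfere when their 22 MHz bands intersect."""
    return abs(center_freq_mhz(a) - center_freq_mhz(b)) < CHANNEL_WIDTH_MHZ

if __name__ == "__main__":
    for ch in range(1, 14):
        rivals = [o for o in range(1, 14) if o != ch and channels_overlap(ch, o)]
        print(f"channel {ch:2d} overlaps with: {rivals}")
    # Channels 1, 6, and 11 never appear in each other's lists, which is
    # why routers parked anywhere else add noise for everyone nearby.
```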

Indeed, this is a first world problem and more of an inconvenience than anything (unless all the radiation from the wireless devices is considered, anyway). But it is yet another perfect example of humans managing to completely clutter up a seemingly endless expanse due to the unplanned and largely unregulated embrace of new technology.

Someone like Sam Harris could use this argument in the context of Artificial Intelligence (our total and complete embrace of largely untested technology has not always ended well). However, I have doubts that many get that far, since dystopian fear of AI tends to write the whole issue off.

It’s not that the machines will one day turn on us, either. It is more that the machines could become so efficient in replication that they develop methods to utilize essentially ALL resources toward that goal. Such resources may include us, or all that we rely upon.

An example given is an AI robot that has only one goal . . . to create paperclips. All it does is hone and fine-tune the art of paperclip creation. The perceived risk is that this AI robot may become so good and efficient that it develops ways to turn literally ANYTHING into paperclips. Yes, including us and all that we hold dear.

Indeed, the example isn’t the best (it needed to be dumbed down for a layman, but really?! Paperclips?!). But it gets the job done.
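
As a toy illustration (entirely my own, and about as dumbed down as the original thought experiment), here is what the difference between an unconstrained maximizer and one with an explicit “do not consume” list looks like in Python. Every name and number below is hypothetical, and the second function foreshadows the “program in some restraint” idea discussed next:

```python
# A toy sketch of the paperclip thought experiment: an optimizer with a
# single objective and no side constraints happily consumes every resource
# pool it can reach. Not any real AI system; just the shape of the worry.

def naive_maximizer(resources: dict[str, float], yield_per_unit: float = 10.0) -> float:
    """Convert every available resource into paperclips; nothing is off-limits."""
    paperclips = 0.0
    for name in list(resources):
        paperclips += resources.pop(name) * yield_per_unit
    return paperclips

def constrained_maximizer(resources: dict[str, float], protected: set[str],
                          yield_per_unit: float = 10.0) -> float:
    """Same objective, but with an explicit 'do not consume' list."""
    paperclips = 0.0
    for name in list(resources):
        if name not in protected:
            paperclips += resources.pop(name) * yield_per_unit
    return paperclips

world = {"scrap_metal": 100.0, "factories": 50.0, "biosphere": 1e6, "humans": 1e4}
print(naive_maximizer(dict(world)))                              # consumes everything
print(constrained_maximizer(dict(world), {"biosphere", "humans"}))  # spares the rest of us
```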

Either way, due to this fear, Harris figures it necessary that so-called Human Common Sense be programmed into these machines, so as to ensure this result is never realized. On one hand, it can’t hurt.
But on the other hand . . . human common sense?!
To STOP the machines from consuming and destroying everything in sight, all for the goal of endless replication?

HUMAN common sense is going to achieve that?

To put it short and sweet: if humans had common sense, people like Stephen Hawking would not be telling the world that the species NEEDS to find a new planet. Really, one could even go as far as saying that common sense dictates the exact opposite of Hawking’s wishes. Let this bad strain remain isolated to one planet.

One could say that. It’s certainly an interesting usage of the topic of ethics. Which is more unethical?

Keeping humanity on one planet, for better or worse? Or allowing humanity to spread out in the universe?

Either way, suffice it to say, I am highly critical of the notion that our so-called common sense is of any use to AI robots and bots. In my mind, the only difference between AI gone wrong and humanity is how they spread . . . reproduction VS replication. We’re certainly not a good example, as far as stewards of the earth are concerned.

But this is not all to slam the notion. It is more to highlight the arrogance of insisting that the obviously flawed (if not flat-out non-existent) common sense of humans is an important trait for future AI technology. If anything, that seems as though it could push things WAY in the other direction. Given the direction we are headed at current, it seems a wager worth betting on.

That said, however, it is not all bad. The benefit of knowing one’s own flaws is, in fact, knowing one’s own flaws. We can program these problems out to the best of our ability.

Figuring out goals for future AI is not an unreasonable conversation, however. It is an important conversation to have, even though people like Elon Musk like to disrupt it, seemingly based on a strawman:
The machines are coming . . . and eventually, they will TAKE OVER OUR LIVES! Not scared? Well, CLEARLY you have not given this much thought!

But enough about arrogant fools with a giant platform. You get the point. This frame of mind is harmful to the conversation, not helpful.

Since humans are in the driver’s seat, it seems apparent that those in charge will design and program these things not just to be benign to humanity, but also to be helpful. How exactly that would work obviously remains to be seen. But it seems, dare I say, common sense.
Fine, maybe not. It is more the conclusion one comes to when using the past behavior of humans as a predictor of future outcomes. Humans are selfish and self-serving creatures, utilizing pretty much every resource available to us toward that end. It seems apparent that new technology birthed by us would follow the same pattern, be it conscious, sentient, or not.

I admit I have to be careful here. I don’t have a good grasp on either consciousness OR sentience, so I have to be cautious in my usage of the terms. Although from what I see, few (if any) in the Artificial Intelligence conversation have made much headway on that front either.

To go back to where I essentially left off (what could AI mean for us?), it could go many ways. I explored this a bit in a previous post on the subject, but I have even more to add now.

One should not just assume that AI will be inherently our enemy, or could become so due to some unforeseen development or update (to use a technical term). The possibility can’t and shouldn’t be ruled out. But ending the conversation there is akin to throwing out the baby with the bathwater.

Humanity is good at developing tools. It’s how we got as far as we have today, and it’s what will drive whatever future we have left. So rather than viewing what is essentially our future technological development as a foe, we should try to see it as a tool. Something with the potential to introduce a whole new level of intellectual prowess to humanity’s biggest problems and enigmas (let alone desires).

The first thing that comes to mind is something from my other piece on AI, in which I pondered whether so-called UFOs and extraterrestrials were a form of AI developed by some other life form elsewhere. Looking back, I wrote that piece under the assumption that these beings must have run over their creators in order to reach the technological heights that they obviously did. I was taking cues from Sam Harris, in that it was a previous episode of the Waking Up podcast that inspired my thoughts on the piece.

Despite starting there, however, it occurs to me that the annihilation of the AI’s origin species is not necessary. Rather, the super-developed Artificial Intelligence may in fact serve as a tool for them. A tool that accomplishes feats that may not otherwise ever be possible. For example, the ability to explore far beyond whatever their observable universe is. Not to mention possibly enabling the origin species to come along for the ride.

Looking closer to home, at the problems facing the future of humanity and the earth itself, this is another area where AI could be of more help than harm. For example, reversing climate change by developing a way to scrub (and put to use!) the excess carbon in the atmosphere. Or developing viable means of scrubbing plastic pollution of all sizes and types from the world’s oceanic gyres (and again, finding a use for it). If the intelligence potential is close to (if not) infinite beyond the so-called singularity, then so too are the possibilities.

But even Artificial Intelligence that is on our side is not beyond issue, even if the issue exists only as we perceive it.

One example is our current habit of resource over-consumption (among other things). We currently consume WAY more than what is available to us, to the point of taking from future generations. Every year at about this time (August), an article makes the rounds telling us that we’re already past that point for the year, before the back-to-school and holiday rushes have even begun! Either way, it would not take long for an Artificial Intelligence to detect this and, obviously, follow the problem all the way to its conclusion (bye-bye, Homo sapiens!).
If part of its programming or goal is the safety of the species, it could either recommend drastic action or simply force it upon us. Essentially, for the good of the collective that is humanity, all may be forced into a more limited life of consumption than they are used to.
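
That annual article refers to what the Global Footprint Network calls Earth Overshoot Day, and the arithmetic behind it is simple enough to sketch. The ratio below is the commonly cited formula; the input figures are illustrative placeholders, not real measurements:

```python
# A back-of-envelope sketch of "overshoot day": the day of the year when
# consumption outruns what the planet regenerates annually. Roughly:
#   overshoot_day = (biocapacity / ecological_footprint) * 365

import datetime

def overshoot_day(biocapacity: float, footprint: float, year: int) -> datetime.date:
    """Day of `year` when the footprint exhausts that year's biocapacity."""
    fraction = min(biocapacity / footprint, 1.0)  # capped: no overshoot if <= 1
    day_of_year = int(fraction * 365)
    return datetime.date(year, 1, 1) + datetime.timedelta(days=day_of_year - 1)

# A footprint ~1.7x biocapacity (placeholder figures) lands overshoot
# in early August, before the back-to-school rush has even begun.
print(overshoot_day(biocapacity=12.0, footprint=12.0 * 1.7, year=2017))
```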

Or to up the ante a few notches, let’s consider the overpopulation conundrum.

At current, our population is WAY beyond the static carrying capacity of the planet. But it doesn’t much matter (at least in the short term) because fossil fuels and other technologies have extended that carrying capacity. We already know this house of cards will eventually topple, so of course the machines will know it too.
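
For the curious, here is a minimal sketch of that dynamic: textbook logistic growth in which the ceiling K is temporarily inflated by a technological subsidy and later sags back as the inputs run out. All parameters are invented for illustration, not demographic estimates:

```python
# Logistic growth with a temporarily inflated carrying capacity:
# population overshoots the base ceiling while the "subsidy" lasts,
# then declines back toward it once the subsidy ends.

def simulate(pop: float, k_base: float, k_boost: float, boost_years: int,
             growth: float = 0.02, years: int = 300) -> list[float]:
    history = []
    for t in range(years):
        # the inflated ceiling only holds while the subsidy lasts
        k = k_base + (k_boost if t < boost_years else 0.0)
        pop += growth * pop * (1 - pop / k)  # logistic update
        history.append(pop)
    return history

traj = simulate(pop=1.0, k_base=4.0, k_boost=6.0, boost_years=150)
print(f"peak: {max(traj):.2f}, final: {traj[-1]:.2f}")  # overshoot, then decline toward k_base
```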

Again, the AI does the calculations and concludes that without a modest-to-drastic reduction in either births or population numbers, the species is in trouble. Our numbers are either close to or beyond the maximum allowable for the survival of the species, so something has to be done to keep extinction at bay.

Disallowing children, despite being its own hot potato, is arguably the lesser evil (compared to being forced to essentially cull the herd).
On that note . . . imagine either being one of the people deciding such things, or having to accept the AI’s decision on the matter. No matter how you slice it, things will not be pretty. People will (rightfully, really) hate and fear the machines.

And yet, at the core of it all is the well-being of the origin species. Humanity has proven unwilling to face the biggest decisions, even at the expense of its own survival. So if some external force (or intelligence) has to do it for us, is that really a bad thing?

Interestingly, questions and scenarios like this (brought to my mind by topics like Artificial Intelligence, Autonomous Vehicles, and Veganism, oddly enough) have fundamentally changed my view of ethics and morality.

For example, the idea of some machine mandating or forcing a moratorium on human population growth (or worse, a cull of the population) would be seen as automatically evil (and thus immoral and unethical) by many, no matter the circumstances. Even if the reasons were based on cold, hard logic (too many people = too much resource consumption = no (or very few) people).
As for Veganism (since it can also be tied to this conversation, oddly enough), a common argument in its favor is pain and suffering. This is more often than not buttressed by the terrible status quo that is mass factory farming in the US and elsewhere. There is a climate change component as well, but the argument is primarily based on animal welfare.

My answer is to assert that the choice to eat (or not eat) meat has little to do with ethics. Even though the status quo is far from optimal, it does not have to be, and it could be changed. In fact, compared to the suffering endured by the prey of many other species, humans have developed much less painful methods of slaughter. Though humans do not have to eat meat, we evolved (like many other animals) with such protein in our diets. As such, it’s hardly unethical to engage in what is as natural an activity as drinking water. One can use the climate change argument to attach ethics to the conversation, but even that is a stretch, since something as normal as driving a car or heating (and cooling) your house could be turned on you. Not to mention that nuts and kale also have to be transported to market (and we’re not running EV transport trucks yet, though I doubt their debut is far down the road).

If anything, framing this around ethics and morality (people who eat meat are unethical and immoral!) is doing damage to the cause of Veganism. Aside from inviting people like me (a minority) to retort the rhetoric, it turns people off (the majority). While that may be seen as an excuse or as burying one’s head in the sand, how exactly is alienating the majority helping the afflicted animals? It’s not.

If anything, using the ethics and morality arguments to back a Vegan stance is itself unethical and immoral. If the tactics employed result in a net negative in terms of action taken toward helping afflicted animals, then I don’t think that’s a ridiculous statement. It’s just an observation.

Here is another observation. PETA is inherently anti-Vegan.
I didn’t think I would ever find myself reading that sentence (let alone writing it).

One may wonder where that came from; how a piece about Artificial Intelligence ended up criticising Veganism. The answer is in ethics and morality. Or rather, as I alluded to earlier, in my fundamentally changed acknowledgment of the 2 concepts.

Both are fluid; no 2 people have the same ethics and morals. Most tend to be very human-centric (dare I say, self-serving) to the point of being irrational. As such, they are not inherent.

In a recent conversation, I was asked essentially what would happen if some alien race rounded us all up for some nefarious purpose. Would that be ethical?

The first thing that came to mind was: what does it matter? Like the many people who died at the hands of Adolf Hitler and other crazed leaders, I’m sure those people saw the actions of the hierarchy as unethical. It didn’t do them much good though, did it?

Now that I have triggered many into thinking that I am a crazed psychopath, I shall explain myself. I am not a psychopath.

Just a psychopath on demand. In a way.

I don’t walk around treating everything and everyone like shit. I have an ethical/moral code that I follow. If anything, I think that my ethical and moral code would rival that of many of the people that I just triggered. It’s a consequence of being overly analytical of almost every aspect of life. When you see more, the often thoughtless ethical infractions of the faceless populace become crystal clear.

Either way, I think that about wraps this up. Feel free to comment below if you have something to say.