Holographic Performances From Dead Celebrities – Awesome? Or Despicable?

Today, we will be exploring an article titled Dead Celebrities are being digitally resurrected — and the ethics are murky, written by Jenna Benchetrit and published by CBC News. While it’s not the first time I have heard of this concept (nor seen it explored in pop culture, as in the case of Black Mirror), I have never really stopped to consider the implications. That is to say, how I would respond to coming across one of my cherished idols or artists digitally resurrected for my enjoyment.

Given the nature of the subject, any conclusions can only be subjective. We will all naturally come to a different stance based on the many things that make us all . . . us. As such, this will be (for the most part) more an act of personal exploration than ethical vetting. Nonetheless, feel free to share your views in the comments if you wish.

Let us begin.

https://www.cbc.ca/news/entertainment/dead-celebrities-digital-resurrection-1.6132738?cmp=newsletter_CBC%20Newsletter_4450_299391

 

Hologram performances, artificial voices and posthumous albums pose tough ethical questions, critics say

It’s a modern phenomenon that’s growing increasingly common with innovative technology and marketing appeal: the practice of digitally resurrecting deceased celebrities by using their image and unreleased works for new projects.

Michael Jackson moonwalked at the 2014 Billboard Music Awards years after his death; rapper Tupac Shakur performed at the 2012 Coachella music festival, though he died in 1996; and late singer Aaliyah’s estate spoke out recently after her record label announced that some of her albums would be released on streaming services.

A slew of recent controversies have renewed complicated questions about whether projects involving the use of a deceased celebrity's likeness or creative output honour the artist's legacy or exploit it for monetary gain.

Prince’s former colleague released a posthumous album comprised of songs the artist recorded in 2010 then scrapped; an artificially-engineered model of Anthony Bourdain’s voice was used in a new documentary about the chef and author’s life; and a hologram of Whitney Houston will perform a six-month Las Vegas residency beginning in October 2021.

 

Interestingly enough, this brings to mind a conversation (a debate of sorts) I had with a friend at work some years back. As a fan of old-school grunge and the Seattle scene, he hated the reincarnation of Alice In Chains with a new lead singer. At the time, I recall viewing his attachment to the name as kind of silly (what difference does it make?). I happened to like the music of both configurations of the band, so the sentiment that they should have proceeded under a different name seemed . . . purist.

Then, around three years later, Chester Bennington of Linkin Park fame died by suicide. When I revisited my previous viewpoint some time afterward, I was struck by the realization that I had similar reservations about someone else fronting Linkin Park in place of Chester Bennington. I had no real rational reason for this. It just felt weird for someone else to step into the role of someone I had been familiar with since my teen years. Hybrid Theory and Meteora came out when I was in high school. I literally grew up with this band as part of the soundtrack of my life.

Even though I stopped paying as much attention to most of the releases after Minutes To Midnight, it still felt . . . weird.

But that was years ago. Having not thought about it since probably 2017, I've realized that most of my sentiment towards the Linkin Park name (likely a result of the death being so recent at the time) is gone. Not a moment too soon, it seems, since the rest of the group (mostly on hiatus since 2017) has started releasing remixed and new material. So far there has been one track re-released in August 2020 and a remix released in January 2021. We will see what goodies the rest of LP have for us in the coming post-pandemic years.

https://en.wikipedia.org/wiki/Linkin_Park#2020%E2%80%93present:_Return_to_music

This isn’t even the first time I’ve had this inner dialogue, either. It also occurred back in 2016, when I heard (with horror at the time) that Axel Rose of Guns & Roses infamy was set to replace ACDC’s Brian Johnson, who was forced to retire due to hearing problems. This was not on account of sentiment either (remember that Brian Johnson replaced the deceased Bon Scott back in 1980). More, it was due to the volatile and infamous nature of Rose himself. Though his antics are well known and documented (up to and including inciting a riot in Montreal), even my aunt has a story of annoyance associated with working security at a G &R show (the band came on stage an hour late).

An interesting side note on the Montreal riot . . . lost to history is the fact that Axl was also suffering from a torn vocal cord at the time of the incident, which seems to have weighed into his decision to cut the performance short. This, along with the fact that only around 2,000 of the roughly 10,000 people in attendance were thought to have participated in the rioting.

This is also something that I have not thought about for a long time. Probably because, as it turns out, the 23-show AC/DC collaboration appears to have gone off without a hitch. And though the group had been on hiatus since 2016, the 2014 lineup reunited in 2020 to release Power Up, an album that I enjoy.
Not that AC/DC has ever put out an album that I didn't enjoy.

Sure, the music is simple in comparison to the various shades of metal that I've since moved on to. Yet it remains enjoyable, since the group is delightfully unserious when it comes to songwriting, never fearing to tread into the lowbrow. As evidenced by the 2020 track Money Shot, a tune that made me laugh out loud.
And one can't complain much about the simplistic nature of the pub-rock genre anyway, because if you want something more, look no further than Airbourne (who, like AC/DC, got their start in Australia). Though it is obvious who their influences are, they certainly take things to a whole other level.

Sticking with the topic, we come to another band I grew up with that changed frontmen: Three Days Grace.

Growing up, I thought of the first two 3DG albums as another soundtrack to my teenage years. I also liked (and own) the subsequent two albums under the original lineup. But when lead singer Adam Gontier left the group and was replaced by Matt Walst of My Darkest Days, it took some (who am I kidding . . . MUCH!) persuasion to appreciate the new Three Days Grace.

Or, Nu 3DG as it were.

But as it turned out, the unexpected lineup change was not the awful thing it seemed to be in the period right after it happened. Under the lead of Matt Walst, Three Days Grace has moved into a newer and more interesting sound. And Adam is heading an equally interesting project in Saint Asonia. The best of both worlds.

Also worth noting are the Foo Fighters. While I am almost certain that Courtney Love would NOT have let Dave Grohl and the surviving members continue forward under the Nirvana brand, it would be interesting to see how a different timeline would have played out. The AC/DC timeline, for example.

Would fans embrace the new frontman (as seems to be the case with AC/DC)? Or would they detest the new configuration (as with AiC)?

Whatever the case, it does not matter anyhow, since the Foo Fighters did perfectly fine even without the old brand behind them.

Looking back at this, it's funny that I once found my friend's distaste for NU-AiC amusing and purist. As it turns out, I am just as human in my distaste for alterations to the familiar. Hell . . . it's one of my biggest critiques of many baby boomers that I know, and of the generation in general: the lack of interest in even trying to accept the new, let alone accepting that the old way is largely on the way out. Often for good reason.

I have pondered this so much that I now conclude that bold change is almost always a good thing for a band; it's the slow slide into mainstream sameness that loses me.

The first example of this that comes to mind is Seether. Their first three albums were also part of the soundtrack of my teenage years, with the fourth coming out just as I was coming of age as an adult. Though I still liked the fourth album despite its slight move away from what I was used to, I can't stand anything released afterward.
The same goes for Theory of a Deadman. I liked the first two albums, but what followed was Gawd-awful. I don't normally throw away music that I own, but I did toss The Truth Is because, for the life of me, I couldn't figure out why I had spent $15 or $20 on it.

Remember buying CDs?

Yeah . . . I don’t miss it either. I do miss the days before people like me and streaming sucked much of the money out of the music industry, forcing artists old and new to resort to commercials and advertising as a steady income stream. But I suppose that is a different entry altogether.

Either way, rare is the musician from my childhood that has continuously put out new material, yet avoided the pitfall of toning it down for mainstream popularity. So rare is the case that only Billy Talent comes to mind as an artist that bucked the trend.

No matter the backlash, when artists decide to do the seemingly unthinkable and make a big change, the results are almost always alright. Another example I recently discovered is Aaron Lewis. Best known to me (and probably most people) as the frontman of Staind, imagine my surprise at discovering Country Boy in a country playlist. I can't say that I like it, per se. But it's certainly different, and Aaron is suited to the genre.

Considering that I used to hate country, the fact that I'm starting to get accustomed to some of it is shocking in itself. And I do mean some of it. Though I like a couple of Dierks Bentley songs and a Joe Nichols tune that most people likely know, among some others, the pickings are slim. Aside from learning that "coon dog" isn't an incredibly racist lyric, I still find the formulaic nature of much of the country genre annoying.



To be fair, much of what I am describing is ascribed to a category within country music that many call bro-country. Having said that, even the old-time stuff tends to lean in this direction. Hence why I also can't stand Alan Jackson or Toby Keith (he irked me long before the Red Solo Cup abomination).

I am very selective indeed . . . but it's a hell of a change from a year ago. Not to mention that I figure it would be hard to find someone who has everything from Slipknot, to Weird Al, to Dierks Bentley on the same playlist.

But at long last, I come to the topic that readers came here for . . . holograms.

 

Michael Jackson moonwalked at the 2014 Billboard Music Awards years after his death; rapper Tupac Shakur performed at the 2012 Coachella music festival, though he died in 1996; and late singer Aaliyah’s estate spoke out recently after her record label announced that some of her albums would be released on streaming services.

* * *

Prince’s former colleague released a posthumous album comprised of songs the artist recorded in 2010 then scrapped; an artificially-engineered model of Anthony Bourdain’s voice was used in a new documentary about the chef and author’s life; and a hologram of Whitney Houston will perform a six-month Las Vegas residency beginning in October 2021.

 

This is certainly an interesting thing to ponder. Though I CAN think of one reason why I would not want to see Michael Jackson moonwalking in a show posthumously, the ethical reasoning has nothing to do with him being dead. Frankly, the same goes for anyone who would want to present a holographic Kobe Bryant. I find the continued praise and worship of both of those people to be problematic, but again, that is a whole other post.

To boil it down:

1.) While one should always reserve judgement, the evidence weighs heavily in one direction. As does the fact that the case was settled out of court.

2.) Michael Jackson was NOT proven innocent, contrary to how Twitter recently reacted. The court only dismissed the victims' claims that two companies representing Jackson's interests bore any responsibility for their safety and welfare. Nothing more.

Moving on from that red-hot potato, I come to Tupac Shakur and Whitney Houston. When it comes to these two, I am neutral. Assuming that neither said anything in life against the concept of posthumous holograms, and assuming the concept isn't going against majority fan or estate wishes, I see little issue with it. It is but a new medium for the broadcast and display of recorded media, after all. In my opinion, no different than watching a Whitney Houston music video on YouTube. Or, as I happen to be doing at this moment, listening to the long-deceased Johnny Cash in MP3 form.

I know . . . who still does that?!

Speaking of times changing, we come to the release of dead artists' music on streaming platforms. Short of the artist having taken issue with it in life (as seems to have been the case with Prince), I have little issue with it.
For all intents and purposes, the cat is already out of the bag. In fact, it has been since the debut of Napster in 1999; it continued to be so in the early 2000s with the decentralized P2P platforms, and onward into the realm of torrents and discographies. Today, people scrape YouTube videos for audio.

And even that isn't really correct anymore, with most people using ad- or subscription-based streaming services. My preferred choice is YouTube Music since it comes with fewer limitations than Spotify (though I use Spotify for podcasts).

Any artists refusing to join the streaming platforms at this point are just pissing into the wind. This is not to say that the modern revenue-sharing scheme is optimal (cause it's not. It's even more shit than it was in the past!). Nonetheless, when even the Nirvana and Tool catalogues can now be streamed, you know we're in a different era.

As for using machine learning algorithms to reanimate the voice of the now-deceased Anthony Bourdain, however . . . THAT IS WHERE I DRAW THE LINE! 

Yeah . . . just kidding.

Personally, having seen Desperate Housewives back in the day (remember the homophobia of seasons 1 and 2? That didn’t age well :/ ), the idea of a show narrated by a character deceased from the plot is interesting. 
As much as I’d love the Bourdain doc to open with a line like “Guess what, guys! I’m dead!” (I can see him doing something like that!), it probably wouldn’t go over well with the normies among us. 

No one seems to take issue with a dead Paul Walker showing up in a run-of-the-mill Hollywood movie, but throw a dead-guy joke into a Bourdain documentary . . .

*GASP*

CANCELLED!

 

Ethical and legal ramifications

It’s a matter of both ethics and law, but the ethical concerns are arguably more important, according to Iain MacKinnon, a Toronto-based media lawyer. 

“It’s a tough one, because if the artist never addressed the issue while he or she was alive, anybody who’s granting these rights — which is typically an executor of an estate — is really just guessing what the artist would have wanted,” MacKinnon said.

“It always struck me as a bit of a cash grab for the estates and executors to try and milk … a singer’s celebrity and rights, I guess, for a longer time after their death.”

According to MacKinnon, the phrase “musical necrophilia” is commonly used to criticize the practice. Music journalist Simon Reynolds referred to the phenomenon of holographic performances as “ghost slavery,” and in The Guardian, Catherine Shoard called the CGI-insertion of a dead actor into a new film a “digital indignity.”

 

This is indeed an almost cut-and-dried case when it comes to copyright law. Though it sounds like one single right, a copyright actually equates to a great many individual rights.

Say I am writing a book called "Crazy Cats of New Haven". The moment the pen hits the paper (or the fingers hit the keyboard), the resulting document in its entirety is covered under international copyright law. However, beyond just serving as your proof in a court of law, control over the main copyright also means control over every other right derived from the work, whether currently available or yet to exist. For example:

  • audiobook
  • audio (song?)
  • theatrical (movie? play?)

The reason I am aware of this is a short copyright course I took, aimed at aspiring authors. Instructed by a seasoned and published author, its goal was to walk us through a sample book contract and ensure we were aware that not all contracts are alike. Like every other corner of the media and entertainment industry, not all publishers are equal.

This is where the future-rights portion comes in. Though I have yet to come across my first contract, most are said to automatically include every currently available right plus future rights. Or, in normie speak: if the project ever blows up and goes cross-platform (e.g. Lord of the Rings, Harry Potter), the publisher is often in a much more powerful position than the author or writer.
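To make the bundle-of-rights idea concrete, here is a minimal sketch; the title is my hypothetical book from above, and the way the rights are split is entirely made up:

    # A copyright is really a bundle of individually licensable rights.
    # The split below is purely illustrative; real contracts enumerate far more.
    book = {
        "work": "Crazy Cats of New Haven",
        "rights": {
            "print":        "author",      # retained
            "audiobook":    "publisher",   # signed away in the contract
            "theatrical":   "publisher",
            "future media": "publisher",   # the catch-all clause to watch for
        },
    }

    # Whoever holds "future media" controls formats that don't even exist yet.
    print([r for r, holder in book["rights"].items() if holder == "publisher"])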

And this isn’t uncommon either. The writers in the music industry often make peanuts even if they write hits.

 

Songwriters are guaranteed a royalty from every unit sold (CDs, vinyl, cassette, etc.).

These royalties are paid out differently in different countries, but in the U.S., they come out to $0.091 per reproduction of the song – nine cents every time a song is reproduced/sold.

In other countries, the royalty is paid out at 8 to 10% of the value of the recording.

What does this equate to?

Take the song “Pumped Up Kicks” – a huge hit for Foster The People. The track sold 3.8 million copies and the album itself sold 671,000 copies.

The band's frontman, Mark Foster, has the sole writing credit on the song, so he collects every penny of the mechanical royalties, which would come out to around $406,861.

And that’s just the mechanicals. There are other ways that song was making money – it received a ton of radio play and was licensed on TV shows like Entourage, Gossip Girl and The Vampire Diaries, which added to Foster and the band’s earnings.

 

Digital Download Mechanical Royalties

Digital download mechanical royalties are generated in the same way physical mechanical royalties are generated, except they are paid whenever any song is downloaded.

iTunes, Amazon, Google Play, Rhapsody and Xbox Music all generate and pay these royalties to songwriters whenever a song is downloaded.

Again, these are paid out at a rate of $0.091 per song.

Streaming Mechanical Royalties

Streaming mechanical royalties are generated from the same Reproduction and Distribution copyrights, but are paid differently.

They are generated any time a song is streamed through a service that allows users to pause, play, skip, download, etc.

This means Spotify, Apple Music, TIDAL, Pandora, etc.

In the U.S. (and globally for the most part) the royalty rate is 10.5% of the company’s gross revenue minus the cost of public performance.

An easier way to say this is that it generally comes out to around $0.005 per stream. Less than a cent!

How Much Do Songwriters Make Per Song, Per Stream & In Other Situations?

An easier way to put the last sentence: it's sweet fuck all.

Consider that many nations have quit manufacturing the one-cent penny because of its production cost (over a cent!). Most songwriters earn less than that per stream.
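To put that into perspective: at roughly $0.091 per sale versus roughly $0.005 per stream, it takes around 18 streams to earn what a single download pays. A back-of-the-envelope sketch:

    PER_SALE   = 0.091   # mechanical royalty per download/sale
    PER_STREAM = 0.005   # rough streaming mechanical, per the figures above

    print(round(PER_SALE / PER_STREAM, 1))    # -> 18.2 streams to match one sale
    print(f"${1_000_000 * PER_STREAM:,.0f}")  # a MILLION streams: $5,000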

 

The problem here is as obvious and immediate as a whacking great pop hook.

Think of the biggest songs on Spotify over the past decade. Here they are, courtesy of Kworb:

  • Ed Sheeran – Shape Of You (1.77bn streams);
  • Drake – One Dance (1.48bn streams);
  • The Chainsmokers – Closer (1.28bn streams)
  • Luis Fonsi – Despacito Remix (1.07bn streams)
  • Post Malone – Rockstar (1.05bn streams)

All of them were co-written, alongside the featured artist, by very talented people.

Some of these co-writers' names: Steve Mac, Johnny McDaid, Shaun Frank and Jason 'Poo Bear' Boyd.

How many people amongst Spotify's 75m paying subscribers, you wonder, heard songs written by these people and thought: 'I love that track – I want to play it now… I'll try Spotify.'

And then: ‘Wow, this service is amazing, I’m going to pay for it.’

Yet the songwriters who penned these tracks presumably aren’t getting a penny for their compositions from corporate Spotify stock sales.

Instead, they’re being left out in the cold during one of the industry’s most historic windfalls.

Songwriters got screwed by the Spotify equity bonanza. The industry has to ask itself questions.

 

Now that we have explored all the reasons why MB Man will never be writing any songs anytime soon, let's move on to the movie industry and the shady realm of Hollywood accounting: how to turn a blockbuster grossing hundreds of millions into a cash-bleeding loss.

 

On today’s Planet Money, Edward Jay Epstein, the author of a recent book called The Hollywood Economist, explains the business of movies.

As a case study, he walks us through the numbers for “Gone In 60 Seconds.” (It starred Angelina Jolie and Nicolas Cage. They stole cars. Don’t pretend like you don’t remember it.)

The movie grossed $240 million at the box office. And, after you take out all the costs and fees and everything associated with the movie, it lost $212 million.

This is the part of Hollywood accounting that is, essentially, fiction. Disney, which produced the movie, did not lose that money.

Each movie is set up as its own corporation. So what “lost money” on the picture is that corporation — Gone In 60 Seconds, Inc., or whatever it was called.

And Gone In 60 Seconds, Inc. pays all these fees to Disney and everyone else connected to the movie. And the fees, Epstein says, are really where the money’s at.

https://www.npr.org/sections/money/2010/05/the_friday_podcast_angelina_sh.html/

 

May I first note that the last name appears to be coincidental in this case. Unsurprising, given my doubts that Jeffrey Epstein would have liked having an investigative journalist around the island of rich pedos.

ANYWAY . . .

That is how you turn a $240 million grossing moneymaker of a film into a cash-losing flop. And as usual, I veered off-topic.

Well, sort of. We now know where the entertainment industry stands on ethics . . . it has none. Given the power afforded to rights holders, I suspect that we will see a lot more deceased celebrities doing everything from performing in Vegas to selling coffee and toothpaste in TV commercials.

Just kidding . . . clearly the cash is now in YouTube and Spotify ads.
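Before moving on, here is roughly how the shell-company math from the Planet Money piece works. Only the $240 million gross and the $212 million "loss" come from the episode; every fee line below is a hypothetical stand-in:

    # Hollywood accounting, sketched: the film is its own corporation. It books
    # the gross, then pays "fees" to the studio and everyone attached until the
    # books show a loss. All fee amounts here are made up for illustration.
    gross = 240_000_000   # box office, per the Planet Money episode

    fees = {
        "studio distribution fee":        75_000_000,
        "prints & advertising":          100_000_000,
        "production cost + overhead":    220_000_000,
        "interest on the studio 'loan'":  57_000_000,
    }

    print(f"net: ${gross - sum(fees.values()):,}")   # -> net: $-212,000,000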

 

Richard Lachman, an associate professor at Ryerson University who researches the relationship between humans and technology, said that as artists age and develop a better sense of their legacies, they may take the time to protect their images and file appropriate contract clauses. 

But not every artist will grow old. Indeed, a common thread between many of the artists whose works and likeness have been used in this capacity is an unexpected or accidental death.

Prince died in 2016 of an accidental opioid overdose, Anthony Bourdain died by suicide in 2018 and Whitney Houston drowned in her bathtub in 2012 as a result of heart disease and cocaine use. Tupac, Amy Winehouse and Aaliyah all died unexpectedly at young ages. 

Lachman said if this is the case, then it’s possible that clauses accounting for image use didn’t get written into wills. He also noted that artists who die prematurely don’t grow old, giving an impression of perpetual youth that reminds audiences what an artist looked like in their prime.

And while fans might be protective of the artists they love, they’re also the primary consumers to whom these digital resurrections appeal.

“Yes, we know that [a hologram of] Whitney Houston is not the real Whitney Houston,” Lachman said. “But it’s a chance for us to engage in some of that fan behaviour, something that binds us to one another.”


I agree with the final sentence.

As explained earlier, I am not against the concept of posthumous holograms. Even taking the Whitney Houston hologram example and replacing her likeness with Chester Bennington or Warrel Dane (two artists who mean much more to me than Whitney Houston), I still don't really find myself against the concept. Assuming that the family and/or next of kin is on board with the process, this seems to be just an ultramodern example of something we have been taking for granted for decades: the ability to store information on various mediums.

First came the song. Then the video. Now, potentially, the whole experience. Whether the experience is predetermined (akin to a pre-recording) or interactive (playing out based on the audience, presumably) depends on the technology.

Though I can see why this kind of thing may be considered horrifying by some, consider the opportunity. Before now, if your favourite artist died, that was it in terms of opportunities for interaction. Though there may be shows if their surrounding act decides to continue, the opportunity to see the artist live would never come again. Particularly notable when it comes to solo acts.

For people who have never seen that artist live, this may well be the opportunity of a lifetime. Indeed, it’s not the REAL thing. But it’s a very special opportunity nonetheless. An opportunity that my grandfather (who died in 1998) did not have in his lifetime.

For this reason, those in charge of these shows will have to be extra careful to deliver smooth and flawless productions. Not only will these performances serve as a typical live show, they will also serve as the farewell tribute that many of us wish we could have had with long-lost loved ones (beloved celebrities included). Auditoriums hosting such performances would be wise to keep lots of tissues on hand.

 

For some, releasing archived material might not seem as harmful as resurrecting a person with virtual reality, MacKinnon said.

“I think there’s different degrees and a spectrum of uses that can be made of dead performers.”

 

There is, no doubt, no comparison between the two. If archived material was not explicitly trashed by the artist, it may well have ended up being released later in their career anyway.

The Prince example from earlier has to be mentioned, however: the posthumous release of an album of songs written (and scrapped!) by Prince. Prince's feelings towards the material were clear. Any person of ethics and integrity would know to leave the trash in the trash.

So naturally, they took the other path and cashed in on the fanbase.

There will always be unscrupulous actors in an industry devoid of ethical and moral virtues. Thus, it is important not to let their actions dictate our opinion of anything we are speaking of. Unscrupulous people will always be unscrupulous, after all.

 

Prince is an artist who’s been on both sides of that spectrum.

Last month, his posthumous album Welcome 2 America was released to fanfare. But there was another controversial incident in which it was rumoured that a hologram of Prince would perform alongside Justin Timberlake at the 2018 Super Bowl halftime show. The plans were eventually scrapped, with Prince’s ex-fiancée Sheila E. confirming that Timberlake wouldn’t go through with it. 

The incident renewed interest in a 1998 interview with Guitar World, in which Prince said performing with an artist from the past is “the most demonic thing imaginable.”

 

I don’t know who had the bigger say in this decision, but if it was Justin Timberlake, good on him for seemingly honouring the wishes of Prince. Seemingly, because I can only imagine how much public pressure was driving the decision. This is the age of social media and Twitter, after all.

 

Sarah Niblock, a visiting professor of psychology at York St. John University in York, England, who has long studied Prince and co-wrote a book about the artist, says efforts to dig into his vault and use his image for profit are in contention with his publicly expressed wishes.

“He was fully in control of his output, sonically and visually, and the way everything was marketed, and of course, those who performed with him and all of his artists that he produced,” Niblock said.

The situation is further complicated because Prince didn’t leave a will when he died. Without one, “a person’s estate can exploit or license those rights if they want to,” MacKinnon said.

While the legal boundaries are relatively clear, the ethical question of whether an artist is being exploited or not is subjective.

For Niblock, digital resurrections that enrich the estate and its executors at the expense of an artist’s known wishes cross a line.

“Trying to somehow use that death to create a mythic quality that the artists themselves would have not necessarily intended, to then market that for money … I mean, it’s extremely cynical and disrespectful.”

 

There is no respect in capitalism. Only profits.

 

Legal considerations must be made before death

While promoting his new documentary Roadrunner: A Film About Anthony Bourdain, director Morgan Neville said he had recreated Bourdain’s voice using machine learning, then used the voice model to speak words Bourdain had written.

The incident prompted a wave of public discussion, some of it criticism levelled at Neville.

A tweet from Bourdain’s ex-wife suggested that he wouldn’t have approved. A columnist for Variety considered the ethical ramifications of the director’s choice. And Helen Rosner of The New Yorker wrote that “a synthetic Bourdain voice-over seemed to me far less crass than, say … a holographic Tupac Shakur performing alongside Snoop Dogg at Coachella.”

Recent incidents like the Bourdain documentary or Whitney Houston’s hologram residency will likely prompt those in the entertainment industry to protect themselves accordingly, said MacKinnon.

 

Having considered things a bit (and watched the Tupac Coachella appearance), I would hardly consider it crass. The audience in attendance certainly didn't. Nor do most of the people in the YouTube comments. Nor do the 274k people who liked the video (versus around 6k dislikes). I'd say the only people who cared were exactly where they should be . . . NOT AT THE SHOW!

Feel free to check it out for yourself. It was linked in the CBC article, believe it or not.

 

“I think now, if they haven’t already, agents, managers, lawyers, performers are all going to be telling their clients that if they care about this, if they care about how their image is used after they die, they need to be addressing it right now in their wills.”

Robin Williams is a notable example of a public figure who foresaw these issues. The late actor, who died by suicide in 2014, restricted the use of his image and likeness for 25 years after his death.

 

It’s cool that Robin Williams had the foresight to consider this before his tragic demise. While I am not as averse to the thought of a post-humous Robin Williams comedy special as I would have been closer to 2014, the man has spoken.

We have indeed entered a new era.

A passing thought . . . though we will never know what opinion past comedians like George Carlin or Bill Hicks would have had of this technology, I sense that both would have had a lot of fun with it.

 

Hologram technology improving

According to both Lachman and MacKinnon, artists would do well to make similar arrangements, as the technology behind these recreations will only get more sophisticated.

Holograms of Tupac at 2012 Coachella and Michael Jackson at the 2014 Billboard Music Awards were produced using a Victorian-era visual trick called "Pepper's Ghost," named for John Henry Pepper, the British scientist who popularized it.

In the illusion, a person’s image is reflected onto an angled glass pane from an area hidden from the audience. The technique gave the impression that the rapper and the king of pop were performing on stage.

Nowadays, companies like Base Hologram in Los Angeles specialize in large-scale digital production of holograms. The recreation of Bourdain’s voice was made possible by feeding ten hours of audio into an artificial intelligence model.

Lachman said that it will become “almost impossible” for the average consumer to know the difference between a hologram creation and the real person. 

He said that while the effects are still new and strange enough to warrant media attention, digital resurrections will continue to have an uncanny effect on their audience — but not for much longer, as audiences will likely grow accustomed to the phenomenon.

Though he said there may be purists who disagree, it seems like audiences have been generally accepting of the practice.

“It seems like the trend is we’re just going to get over it.”

 

I agree. This phenomenon, as somewhat creepy and new as it is, ain’t going anywhere. But as far as I’m concerned, that is a good thing.

There will no doubt be people who take advantage of this technology so long as celebrities don't take precautions. Such is the world we live in. Aside from that, I'd say we have a unique opportunity.

Certainly for tasteful send-offs of beloved stars and musicians (imagine something like a Whitney Houston final farewell tour). Beyond that, really, the sky is the limit.

Tech Companies And Elites Are Leaving Silicon Valley – Should You Care?

To start, let’s answer the question in the title and take care of the TL/DR version of this blog post.

No.

* * *

As I was browsing Quora (of all places) yesterday morning (Christmas Day 2020), I happened across an article that had been published in Business Insider only 5 hours earlier. Written by Avery Hartmans, it is titled How The Silicon Valley Exodus Relates To Ongoing Culture Wars.

Though normally I would just pass stuff like this by (tech fluff pieces, or tech apology pieces), this one caught my attention due to the way it was written. That is to say, the framing of how the so-called left-versus-right culture war is driving people and companies out of California and into Colorado, Texas, Florida and other states.

Let’s begin.

Silicon Valley elites are fleeing the region for states like Texas and Florida, but that shouldn’t be surprising — it’s the culmination of a culture clash that has been brewing in the tech industry for years

https://www.businessinsider.com/silicon-valley-exodus-tied-to-liberal-conservative-culture-clash-2020-12

The Silicon Valley exodus is real. 

Since the onset of the pandemic, billionaires, venture capitalists, and even major tech firms like HP and Oracle have started to flee the Bay Area. What at first seemed like a one-off response to our new remote-work reality has become a trend: Tech’s elite are leaving, and they’re citing a mixture of high taxes, state regulations, and a homogenous, liberal culture as their reasons for decamping to Texas, Colorado, or Florida.

While the departures of Elon Musk, Larry Ellison, and Keith Rabois are new, the reasons that seem to have nudged them out the door date back years. The pandemic may have spurred a migration away from the West Coast, but the writing has been on the wall as far back as 2017.

Now, as we approach 2021, it seems that a long-simmering culture clash is finally coming to a head.

I would argue that the pandemic emptying most companies' offices, and the subsequent economic turmoil associated with the shitty American reaction to said pandemic, had more impact on this move than any cultural issues. But more on that later.

Also, speaking of Elon and the pandemic:

https://www.theverge.com/2020/4/29/21241180/elon-musk-coronavirus-conspiracy-misinformation-tesla

Yeah, I’m not a tech fanboy.

 

The onset of the so-called culture wars 

While it’s likely that facets of Silicon Valley’s culture had been starting to splinter for several years prior to 2017, the most public instance of a culture clash coincides, roughly, with the beginning of President Donald Trump’s presidency. 

In September 2016, Palmer Luckey, then the 24-year-old millionaire cofounder of virtual reality company Oculus, was discovered to be the main benefactor behind an anti-Hillary Clinton meme group. By that point, Luckey had already sold Oculus to Facebook for $2 billion and launched the Oculus Rift, the company’s first major product. 

According to reporting by The Daily Beast, Luckey had been financing a group called Nimble America, which described itself online as having proven “that s—posting is powerful and meme magic is real.” The group had put up a billboard in Pittsburgh with Clinton’s face that read “Too big to jail.”

Luckey told The Daily Beast at the time that funding the group “sounded like a real jolly good time.” 

After the report came out, several female employees resigned from Facebook in protest and Luckey stayed out of the spotlight at Oculus events. By March 2017, he left Facebook — in subsequent interviews, Luckey has said he was fired.

Luckey’s departure was viewed, by some, as a politically motivated firing. In 2018, Sen. Ted Cruz asked Facebook CEO Mark Zuckerberg during a Senate hearing why Luckey was fired, implying it was over his politics, which Zuckerberg denied. 

 

Knowing that I fall on the left side of the political spectrum, I have to take care to look at this impartially. Though people on the right (like Ted Cruz) generally don’t care much about impartiality, I have principles to uphold.

Looking at this through a lens of impartiality, I don't see anything that I deem persecution based on speech. I see a person spending his money to support a cause that he aligned with. Some people working in the company were horrified to be associated with such activities, and chose to resign in protest (I know what it's like to work for a company that publicly behaves antithetically to your personal values. It's demoralizing). As for whether Palmer was strongly urged to resign or indeed fired, I also see no issue with that.

As a citizen of the United States (or most other liberal democracies), you have the right to freedom of speech and expression. However, if you are a high-profile representative of an entity (such as a corporation), that entity is not bound by law to condone your views by way of ensuring your continued employment. It can do what is best for itself.

That is not persecution, that is reality. Many Canadians failed to learn this lesson last year after national treasure Don Cherry once again went too far whilst live on air.

To put it another way, nothing to see here.

While that was the first and most public instance of ideological differences becoming a sticking point in Silicon Valley, it wasn’t the last. 

The same year, Google engineer James Damore made headlines for writing an anti-diversity manifesto that spread like wildfire through Google’s ranks. Damore argued that the search giant shouldn’t be aiming to increase racial and gender diversity among its employees, but should instead aim for “ideological diversity.” Damore also argued that the gender gap in tech is due to biological difference between men and women, not sexism. 

The memo resulted in Damore’s firing, but it also sparked a groundswell of support among white, male engineers at Google who felt that conversations about diversity were offensive to white men and conservatives. Around the same time, far-right communities online began revealing the identities of Google employees who identified as part of the LGBTQ community. Damore then sued Google, alleging the company discriminated against white, conservative males (Damore later dropped the suit.) 

Once again, James Damore used his freedom of speech to make his viewpoints known, but his employer (not aligning with the speech, nor wanting any association with it) showed him the door. Freedom of speech does not guarantee freedom from its consequences.
That many white conservative men within tech companies do not understand how privilege works is hardly of note. Considering how friendly big tech is and has been towards extremely profitable right-leaning content (speaking of James Damore and profitable content, more on that later), I am again forced to ask myself what reality these people are living in.

And of course, Damore dropped the suit because there WAS no suit to be had. Once right-leaning reactionaries had moved on to the next far-left outrage, there was no need to risk the very real possibility of crippling legal losses.

The point had been made. And no one would ever follow up, because reactionaries don’t follow up on things.

 

Both Luckey and Damore ended up without a job. But the reactions to their situations and the support they both received highlighted that there was a growing population of tech workers fed up with the region’s culture. At the time, Business Insider’s Steve Kovach argued that Silicon Valley’s “liberal bubble” had burst and that the culture wars had begun. 

 

Let’s first focus on the first sentence. Luckey and Damore ending up without a job on account of their bravery, their persistence in standing up to silicon valley left-leaning bullies. Like the show on Oprah’s channel is titled, where are they now?

James Damore

As noted earlier, his lawsuit against Google was quietly dropped in May of this year (2020, for future visitors). Though the details have not been made public (and likely won't be), it doesn't take an attorney to see the unwinnable case that Damore was attempting to make. Which made it best for both parties to just leave this mess behind and go their separate ways.

https://www.cnet.com/news/james-damores-diversity-lawsuit-against-google-comes-to-a-quiet-end/

Back in November 2017, The Guardian put out an article detailing some of the situations Damore found himself in after the firing and infamy. Interestingly, it appears that he has been diagnosed with autism spectrum disorder since all of this blew up. How this plays into the situation is not for me to say, as I am NOT an expert on such disorders. However, given my limited experience with such things, I can see the correlation.

To Damore, it is and always was just about data. That data could cause such an emotional reaction is, I guess, not something that he ever considered.

Don’t get me wrong, he was wrong (his paper or memo is here). If the guardian is correct, it looks like he may have gotten caught in an algorithmic rabbit hole without even realizing it. Both confirmation bias and the Dunning Kruger effect may also end up unknowingly coming into play, leading to hopelessly flawed analysis. And since the aftermath results in perceived persecution coming from detractors, and seeming praise and confirmation from like-minded people (some of which include academics like Jordon Peterson), I can understand why such a mind-frame can become difficult to alter.

I can say this because I once found myself treading a similar path back in the day (maybe the mid-2010s). Before I understood the nature of algorithmic rabbit holes and why the arguments of people like Cassie Jaye were wrong, I could have come to a similar conclusion. Thankfully, though, I would likely not have had the megaphone that James Damore had to shout these thoughts out for the world to dismantle. I like to THINK that my writings here have a more inflated value than they do. But I know my traffic statistics.

Having said that, being a defender of James Damore was not the direction I had intended to pursue. Though keeping the humanity in anyone I talk about is important, that was not the journey I started on. Did James Damore get cancelled?

If his LinkedIn profile is to be believed, it indeed appears that he isn't working anywhere in Silicon Valley, or for any tech company; he appears to have his own startup. As much as I (and many others) thought he was going to join the grifter circuit (along with Jordan Peterson and the like), a quick search turns up no evidence of such activity. He does not appear to have a Patreon, nor even a YouTube account. He appears to only use Twitter.

If anything is evident, it's the amount of revenue that the man helped many other people generate. YouTube accounts small, large and corporate. Patreons little and large. James Damore made a whole lot of people (including Google, in the form of ad revenue) money.

Say what you will about the memo. Maybe even that he deserved the hole he dug for himself. But I find the whole profiteering angle of it all rather troublesome. Whether or not Damore is pinching pennies, it wouldn't be the first time that online infamy cost someone everything and left them high and dry once the attention wore off.

Palmer Luckey

Though he left Facebook, he is far from the position that James Damore was (and appears to still be) in. Considering this article alone, he has already pocketed millions on account of new projects being sold to government contractors. In this case, an interceptor drone capable of disabling drones (and other objects intruding into restricted airspace) mid-flight.

More on his new venture:

Anduril Industries, a new startup founded by Oculus cofounder Palmer Luckey, is being valued at more than $1 billion after a new fundraising round.

Anduril’s latest financing includes capital from Andreessen Horowitz, CNBC reported, citing people familiar with the matter. The report said the sources asked not to be named because the details of the round are still confidential.

Anduril, launched two years ago, is building a virtual wall on the U.S.-Mexico border. Its technology includes towers with cameras and infrared sensors that use artificial intelligence to track movement. The software-hardware system, called Lattice, has been deployed in Texas and Southern California.

https://www.bizjournals.com/losangeles/news/2019/09/11/anduril-a-startup-from-oculus-founder-palmerluckey.html

Though it appears that Damore got the short end of that stick (I should note here that I am unsure of his political leanings), Palmer hardly seems hindered by his conservative infamy in oh-so-liberal Silicon Valley. Hell, he maintained connections with Facebook even AFTER allegedly being fired. Thus showing that conservative techies of red and liberal techies of blue are only concerned with one colour.

Green.

Tech millionaires and billionaires are leaving the Bay Area in droves

More than three years later, it seems as though that undercurrent of dissatisfaction is coinciding with the secondary effects of the coronavirus pandemic.

In years past, those who felt disgruntled, overruled, or otherwise disenfranchised by Silicon Valley’s predominately liberal culture had few options. They could leave, of course, but the tech world was still firmly rooted in the Bay Area. Those who wanted a career in tech still felt like they needed to put up with skyrocketing rents and hours-long commutes.

But when offices shut down and major tech companies asked their employees to work remotely, there was no longer as strong a tether to the Bay Area. Some companies, like Twitter and Slack, freed their workers to live wherever they wanted with no expectation to ever return to their San Francisco offices. Others, like Facebook, have said employees may work remotely forever with manager approval.

These decisions seem to have encouraged a larger shift among Silicon Valley’s elite.

Palantir has moved its headquarters to Colorado and HP and Oracle moved to Texas. Palantir CEO Alex Karp told Axios in May that the company wanted to move away from the West Coast and described what he saw as an “increasing intolerance and monoculture” in the tech industry. Karp, for his part, had been living in New Hampshire for much of the pandemic. 

Since then, venture capitalist Joe Lonsdale, Dropbox CEO Drew Houston, and Tesla and SpaceX CEO Elon Musk have moved to Austin — Lonsdale tweeted that the region was “more tolerant of ideological diversity,” and Musk made the move after warring with California over the state’s coronavirus lockdown measures. 

Imagine that . . .

Texas is an environment that is more tolerant of right-leaning business-class elitists and glorified libertarians who don't feel a need to follow best-practice safety guidelines! This is SO surprising!

Bullshit.

This shift was in the cards for the coming years and decades anyway, given technological advancement; the COVID pandemic just sped up the schedule in some cases. It released many companies from the necessity of renting massive amounts of office space (no doubt at drastically marked-up rates), and freed many employees to lay down roots wherever they see fit.

It’s good for corporate bottom lines. It’s good for the quality of life of workers. And most of all, it’s good for the communities hosting these tech companies. Less competition for homes and apartments (and of course, office space) means lower overall rates for everyone. This means hopefully fewer people working full-time jobs and/or pursuing educational dreams, yet permanently living out of camper trailers (or even cars) because they can’t make rent.

This is not a culture clash that finally boiled over. It’s a convenient excuse to give California one last kick whilst relocating to more business-friendly territories.

I wonder how long it will take before we start to see similar housing crunches in places like Austin, Texas, with many of these companies on the move. Though working from home can indeed solve some of that problem, not every job can be made 100% remote. You are bound to hit a growth-limiting threshold at some point.

Oracle billionaire Larry Ellison has left the region for Lanai, the island he mostly owns in Hawaii, and investor Keith Rabois is decamping for Miami, citing high taxes in San Francisco and a political culture he abhors as his reasons for leaving. 

And of course, all of these moves follow venture capitalist and PayPal founder Peter Thiel’s famous departure for Los Angeles in 2018, a move seemingly spurred by his dislike of Silicon Valley’s liberal ideology.

Notably, Lonsdale, Musk, Rabois, and Karp all have ties to Thiel and PayPal, and Ellison is close friends with Musk and sits on Tesla’s board. 

So while the wave of departures from arguably the most famous tech hub in the world is, for better or worse, being spurred by the pandemic, the exodus didn't begin out of the blue — it's a direct result of political and ideological differences that have been building just below the surface for years.

A bunch of rich fucks feeling left out by elitist, liberal California are leaving for more like-minded and lightly taxed pastures (one of which is a self-owned island). Colour me apathetic.

Political and ideological differences . . . okay. For one thing, I highly doubt that people like Musk or Thiel really care which party they support. They care about greasing the wheels so as to get the best out of the incoming administration, nothing more. Thanks to money in politics, the corporate political party is the green party.

And no, not the one formerly headed by Jill Stein.

Disclosure: Palantir Technologies CEO Alexander Karp is a member of Axel Springer’s shareholder committee. Axel Springer owns Insider Inc, Business Insider’s parent company.

Of course. Had I sneezed, I would have missed that. I was going to say that the tone (or, for that matter, the reason for the existence) of this article puzzled me, but that is no longer the case.

As all things go, it’s about money. It’s all about the benjamins, baby.

Rich tech elitists don't give a shit about California culture. They and their succubus corporate entities have gotten all that they need from the state, and are now off to pursue a better deal. Translation . . . off to a more lenient organism to attach their tentacles to and suck dry. Until that host becomes inhospitable (in which case, back on the hunt!), the succubus dies, or there is nothing left to latch onto. Democratic or Republican, left or right, a nation unified in disarray.

The Silicon Valley culture war is an illusion. As is our long-term habitation on this planet if we don’t learn to see through this propagandistic bullshit.

ISP Incompetence – Why It Matters To You (And Your Home Network)

Today I am about to explore a uniquely first-world problem. Normally, I don't like to delve much into my first-world problems anymore, viewing such hindrances to my existence as more of a privilege than a huge burden (not to mention the whole "peanuts compared to what REALLY matters" argument). But in this case, the situation overlaps with a long-neglected area of interest of mine (IT and technology), so it's a welcome distraction. Not to mention that the fix to the issue turned out to be fairly simple, which certainly does not help my ISP's case.

Without further ado: the problem, how it came to matter to me, how I fixed it, and how it can be fixed on a larger scale.

For years now, I have had a wifi network in my house. First my own router tethered to a modem (be it cable or DSL), then an ISP-supplied modem/router all-in-one unit (common in many homes today). And for years, I never had any problems with connectivity.

Until around 6 months to a year ago.

Starting at some point back then, I would see my connection inexplicably slow to a crawl. I pay for (and previously had generally gotten!) very close to 12 Mbps down/3 Mbps up (you will almost never reach those speeds EXACTLY, but upper 11s and 2s is more than acceptable). Yet seemingly out of nowhere, my whole connection would grind to a halt. If I was watching YouTube, it would stop dead in the buffering stage. If I grabbed another device and ran a speed test, I would get pebbles in both directions (e.g. 0.8 down/0.3 up). This issue could occur at literally any time (4 am or 4 pm!), and it happened frequently enough to finally warrant a call to my ISP. Since I was also having problems with TV channels pixelating (at times, all at once), I guessed that maybe my gateway box was not getting the incoming signal strength it required to function properly.
This was based on the case of a friend who lived on the edge of town and was having similar problems with the very same setup as me (we also shared the same ISP). For my friend, the only way to resolve the issue was switching to the competing DSL ISP, at my recommendation.

So I called tech support.

Slightly off-topic: if you (like me) are bored enough to find yourself looking up the dynamics of how many of these IT systems work, do not mention this to the techs. I don't know if it's because of the technical degree they had to pay for to qualify for the job, or because they spend most of their time these days dealing with older customers whose technology is far more complex than they can fathom, but they generally aren't at all receptive to explaining the technical details of these systems. They don't even like helping steer you in the right direction ("Actually, no sir, what is more common . . ."). They will often simply read you a prescribed script, often with a slightly annoyed tone (be it deliberate or not).

For example: an older friend of the family was having a landline installed, into which he plugged an old 5 GHz cordless phone as the primary receiver. Knowing that many modern wifi installations also utilize that spectrum, I asked if there would be a problem if either of his neighbors were to install 5 GHz wifi systems (this being 2016, a very real possibility!). The technician somewhat condescendingly replied, "No. They are 2 different things."

A later Google search revealed that such a collision could occur, but I determined that the effects would likely be minimal for everyone involved (neighboring wifi would simply park on a less noisy channel). Though that 5 GHz cordless has since been retired, both neighbors now have dual-band wifi AND my friend has a 5 GHz television-sharing device operating in the house.

Imagine that.

Though I had initially assumed my pixelation and internet-dropout problems were related, it took a second call before I was actually told otherwise. There was another issue entirely happening on the TV side, one the first tech SHOULD have known about, but he didn't pass it on for whatever reason. Maybe I annoyed him?

Anyway, after a few calls, I at some point discovered the potential problem that is wifi interference (whether it was mentioned by a tech or discovered in my online research, I don't recall). I found an app called Wifi Analyzer (Android) which displays every detectable network in the area along with the channel(s) it's using.

My network is Arris-4EB1, MTS 799 being my downstairs neighbor. When I first ran this app, I was on channel 1 (likely the equipment default). Here I had connectivity, but it at times slowed to a crawl. We’re talking 15 minutes to open a basic web page!

My ISP then (as noted above) changed me to channel 11. However, there was so much interference caused by the networks stepping on one another that my connection vaporized barely 20 feet away.

I solved this problem by figuring out how to change these settings myself in the gateway (saves a phone call!) and went to channel 6, which seemed to smooth everything out.

However, I realized that the real solution to my problem was likely getting off of 2.4 GHz entirely, something my cable provider said was possible with my equipment (dual-band, but only one band at a time, not both). And like magic, this solved all my connection issues.

But, as in most areas of life (or tech support!), solving one problem creates a whole new one. While my fairly new phone was more than happy to utilize the new 5 GHz network, my tablet (apparently a relic of the past) was not so compatible. Which meant that I could either accept a newly mono-connected device lifestyle, or shift back to 2.4 GHz.

No, I thought. I can have my cake AND eat it too! I still had an older 2.4 GHz D-Link router lying around, so why not make use of it?

As with any journey, trial and error were at first fruitless. From day 1, I had issues with getting this damned router to actually pass on the internet, and this time was no different. Keep in mind, this was after figuring out how to get into a locked router whose passcode I had long forgotten/discarded (hint: hold reset for 10 seconds).

So, back on the line with oh-so-helpful tech support. To be fair, they did explain my problem this time, more or less. You cannot simply connect a router to a router, since routers generally all default to the IP address 192.168.0.1. Having 2 routers connected means 2 different devices are claiming that IP, causing a conflict. A bit like assigning the same telephone number to 2 different lines.

At this point, the tech went into a long spiel about being unfamiliar with D-Link hardware, saying I would have to call D-Link and go through the advanced process of changing the router's IP address. But, if I wanted, the company would rent me a dual-band unit that would solve my problem for only $4.95 a month.

I will think about it, I told him.

I did some more web searching, which was when I learned the term wifi access point. Such devices (sometimes old routers!) can be used to extend a home or organization's wifi network beyond the reach of its main node. But then comes the question: can I configure that myself?

Yes. And it's so simple that anyone can do it! The usual recipe: plug one of the old router's LAN ports (not its WAN port) into the main router, give it an unused static address on the same subnet (say, 192.168.0.2), and disable its DHCP server so only the main router hands out addresses.

Though the method is typically used to extend a network's range, I used it to extend to another band. And saved myself around $60 a year in the process.

And so, the 2.4 GHz map of my area now looks like this. Jaynet is me.

Though the good channel used to be 6, most of the crowding has shifted, making 11 the new preferable channel. Since this will no doubt change with time due to automatic switching and ever more new networks, I will keep my eye on it, both for the sake of my network and everyone around me.

This is where one might ask the question: "What does my network have to do with anyone else but those who use it?" For this, one needs to look at how this technology works.

Governments worldwide have allocated a small amount of both the 2.4 and the 5 GHz spectrum for unlicensed use. For 2.4 GHz, 11 channels are available in North America (up to 14 elsewhere). For 5 GHz, roughly 25 channels are technically available for use, but a chunk of the spectrum (channels 52-64 and 100-144) also shares usage with military/weather radar. Since radar gets primary access to those channels, a router sitting on one is required to shift all traffic elsewhere when a conflict is detected. But this still leaves 8 non-overlapping channels to choose from.
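For reference, 5 GHz channel numbers map to centre frequencies by a simple formula (5000 + 5 × channel, in MHz). Here's a quick illustrative sketch; the channel list reflects my understanding of the common North American allocation, so double-check it against your local rules:

```python
# Map 5 GHz channel numbers to centre frequencies and flag the DFS
# range that must yield to radar. The channel list is the common North
# American set of 20 MHz channels -- treat it as illustrative.
CHANNELS_5GHZ = [36, 40, 44, 48, 52, 56, 60, 64,
                 100, 104, 108, 112, 116, 120, 124, 128,
                 132, 136, 140, 144, 149, 153, 157, 161, 165]

for ch in CHANNELS_5GHZ:
    dfs = "  (DFS: shared with radar)" if 52 <= ch <= 144 else ""
    print(f"channel {ch:>3}: {5000 + 5 * ch} MHz{dfs}")
```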

Why does it matter?

I have seen the effects of both extremes. If you are sitting on a channel utilized by many networks around you (channel 1, where I live), everyone's performance is dragged down by every single device within range contending for airtime. Most notably for me in the evening hours (likely when many are streaming or gaming!), the internet becomes worse than dial-up. However, shift a channel or 2 either way and, instead of the problem being solved, you lose performance entirely. It may be an open channel, but it's being stepped on by those around it. Which is why "non-overlapping" is so important.
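The arithmetic behind "non-overlapping" is worth seeing once. On 2.4 GHz, channel centres sit only 5 MHz apart (2407 + 5 × channel, in MHz) while each signal is roughly 22 MHz wide, which is exactly why only 1, 6 and 11 stay out of each other's way. A small sketch of that math:

```python
# Two 2.4 GHz channels interfere when their ~22 MHz-wide signals are
# centred closer together than one signal width. Centres are 5 MHz
# apart, which is why only channels 1, 6 and 11 coexist cleanly.
def centre_mhz(channel: int) -> int:
    return 2407 + 5 * channel   # channel 1 = 2412 MHz, 11 = 2462 MHz

def overlaps(a: int, b: int, width_mhz: int = 22) -> bool:
    return abs(centre_mhz(a) - centre_mhz(b)) < width_mhz

print(overlaps(1, 6))   # False -- 25 MHz apart, safely separated
print(overlaps(1, 4))   # True  -- a network on 4 steps on channel 1...
print(overlaps(4, 6))   # True  -- ...AND on channel 6. The worst of both.
```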

In this respect, the telecom industry as a whole has failed. From telcos to manufacturers, no one seems to have considered the potential problems of assigning wifi channels in a largely uncoordinated fashion in densely populated areas. And the most insidious part is that many (most?) consumers likely have no idea the problem even exists.

In an old forum post I once read, a writer was complaining about people carelessly setting up home networks without taking all factors into consideration (for example, putting a network on channel 4, which adds interference on channels 1 and 6). That was some years ago, and it involved private networks. Let's look at how techs are programming (or at the very least, LEAVING) many networks today.

It’s a recycled image from before, but still good for the purpose. Alike most areas by now (I would guess), few user deployed networks remain (maybe 4). Everything else is Telco. Many of which are stomping all over one another.

As for the 5 GHz side, my slice is good.

But what's happening at the other end of the band is just stupid.

I have no idea why some of these network footprints on both bands are so wide (are they using multiple channels?).

I’ve since learned that this new standard is called channel bonding. The principal being that bonding 2 or more wifi channels into 1 large one increases ones data throughput. Since channels are 20mhz wide, bonding 2 gives you 40mhz, 4 gives you 80mhz (examples of this can be seen above), and 5 gives you a huge 100mhz tunnel. Though I have never seen implemented (nor been presented the option for, thank goodness!) a 100mhz wifi tunnel, 40 and 80mhz defaults are common for many telco routers. I’ve even seen 40mhz tunnels implemented on the already crowded 2.4ghz spectrum. Whilst it is questionable if this was the default behavior, it still makes one shake their head in annoyance. As if routers parking on cross channels wasn’t bad enough.

Bonding is not bad when there is a lot of room to move (as on the 5 GHz band), but not so much on 2.4. Overall though, it's a bit hilarious that all the networks depicted above effectively cancel out the whole point of the 5 GHz band by sitting on the exact same channel.

With the shortened range of 5 GHz wifi, I may indeed be wrong here (particularly since this reading was taken from a second-floor balcony; they may not be able to detect one another, despite my ability to detect them all).

Yet it's not hilarious, really. Not if one considers the technical ignorance of likely a majority of users. It is entirely possible that all 4 network owners above are not even aware that they are in conflict with one another. 3 can be attributed to telco techs. Though the 4th is likely a home-installed unit, it could also be a default installation by a novice user. Or an old customized setting that was good at one time but has since unknowingly become inadequate due to changes in channel saturation.

When it comes to 2.4 GHz in my area, most of the private (as opposed to telco-supplied) networks are legacy, the majority of those being on non-overlapping channels (typically 1 or 6). While not a hard rule, it's typically telco units that not only saturate the chosen non-overlapping channels but also set up in between them, at times as close as a single channel apart.

One example from my area is the many networks sitting on both 1 and 6 (likely factory defaults in many cases), alongside a few lone networks running on channels 2, 4 and 5. These telco units (big surprise!) are not just being stepped on by the noise from 1 and 6 (and of course, each other!); they add to the noise in general. So not only are 1 and 6 crowded, they are also noisy. 11 used to be just as noisy (uncrowded, but stomped on), but those networks have either vanished or moved. Good riddance (at least for now).

How we got to this point is not much different from what drives the ever-increasing problem of scarce bandwidth online: poor planning in the early stages.

In the case of bandwidth, the number of bandwidth-intensive platforms increased over time, but the size of the broadband pipes available to consumers often didn't keep up. Similarly, as wifi proliferated, those charged with its deployment were obviously more focused on profitability than on ensuring continuous reliability for all users of the technology. Rather than training techs in good wifi deployment techniques, or designing a partially (or fully!) automated system of frequency separation, these entities just continue to dump units onto the market. And now, because of all this 2.4 GHz wifi interference, ISPs are pushing both dual-band and 5 GHz routers.

With current-generation equipment likely to be around for many years to come, things will only get worse. We do have the 5 GHz band (and possibly more bands being opened up in the future), but if they are treated the same way as 2.4 GHz has been, they will also become unusable in NO time.

An obvious answer to this problem would be manual (or more likely, automatic) coordination of all wifi networks within line of sight of one another. A great example of this type of coordination of a limited spectrum is the cellular network, as illustrated by the following graphic.

As is depicted, the carrier can easily work with a small amount of spectrum, given some coordination. There are likely more than 4 frequencies available in reality, but you get the idea.

Now, could this concept help solve the wifi crisis?

I think the answer is yes. In the same way that smart grids will streamline electricity usage at all levels, smart routers could streamline channel separation on all bands. When activated, units would no longer just jump to some preset default, or scan for an empty channel. They would (must!) communicate and coordinate with others in the area. If all routers in a given area (even an EXTREMELY crowded one, like an apartment building) were to work around one another's networks, it may be possible not just to smooth out the ride for everyone, but to make all 11 2.4 GHz channels usable without conflict.
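To make the idea concrete, here is a toy sketch of what such coordination might look like under the hood: treat routers that can hear one another as neighbours in a graph, then hand out the non-overlapping channels greedily. Purely illustrative; a real system would also weigh signal strength, client counts and the shared database mentioned below:

```python
# Toy "smart router" coordination: greedy graph colouring over the
# three non-overlapping 2.4 GHz channels. Routers that can hear each
# other are neighbours, and no router should share a channel with one.
NON_OVERLAPPING = [1, 6, 11]

def assign_channels(neighbours):
    assignment = {}
    # Let the most "crowded" routers (most neighbours) pick first.
    for router in sorted(neighbours, key=lambda r: -len(neighbours[r])):
        nearby = [assignment[n] for n in neighbours[router] if n in assignment]
        free = [ch for ch in NON_OVERLAPPING if ch not in nearby]
        # If all three are already heard nearby, reuse the least-used one.
        assignment[router] = free[0] if free else min(
            NON_OVERLAPPING, key=nearby.count)
    return assignment

# Four routers in a row; only adjacent ones hear each other.
print(assign_channels({
    "A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"},
}))
```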

Other factors may also have to be self-regulated (such as broadcast power levels). And such coordination may require the creation of an outside database that all units are guided by, both initially and continuously. But the plus side is that precious spectrum is no longer left up to often-novice humans to manage (or, as is often the case, mismanage).

Until then, however, there is not a whole lot you can do other than manage as best you can.

If you’re having wifi problems, first, know your channel and that of those in your area. For Android, use Wifi Analyzer. For PC, use inSSIDer. Using such, try and find a channel that is both non-overlapping and fairly unsaturated. If that is hardly an option, consider if 5GHZ is a good alternative. However, remember that not all your devices may be compatible (though that is easy enough to remedy), and that 5GHZ has a shorter range than 2.5GHZ.

And lastly, as seemingly backwards as this advice may sound (from a tech perspective): if it can be wired in, do it. Though Ethernet is not always the handiest option (and may even be considered unsightly), nothing beats its reliability (or security, as a bonus).

I have not had an issue with wifi connectivity since I switched to 5 GHz back in 2016 (when this was initially published, before I retracted it in July 2020 to make some long-awaited edits). Though I now run a dual-band ISP-supplied router that defaults to an 80 MHz channel on 5 GHz (imagine that!), I've since switched back to a smaller 20 MHz channel (my internet speed couldn't utilize 80 MHz to begin with!). This is beneficial since it opens up more channels for use than any of the bonded options.

My new unencumbered 5 GHz channel of choice?

165.

Since it sits at the uppermost edge of the 5 GHz spectrum (above the last standard 80 MHz bonding block, which ends at channel 161), I've yet to have anyone else's 80 MHz monstrosities step on my toes again.

UK Cyber Security Agency Embraces Political Correctness

Just when I thought I had seen it all. I wake up, turn on my tech-oriented podcast, and WHAM!

Them SJW’s strike again.

UK Cybersecurity Agency Drops ‘Blacklist’ and ‘Whitelist’ Terms Over Racial Stereotyping

‘If you’re thinking about getting in touch saying this is political correctness gone mad, don’t bother,’ the UK’s National Cyber Security Centre said in the announcement.

The words “blacklist” and “whitelist” get tossed around a lot in cybersecurity. But now a UK government agency has decided to retire the terminology due to the racial stereotyping the language can promote.

The UK’s National Cyber Security Centre is making the change after a customer pointed out how the words can needlessly perpetuate stigmas. “It’s fairly common to say whitelisting and blacklisting to describe desirable and undesirable things in cyber security,” wrote the NCSC’s head of advice and guidance Emma W. last week.

“However, there’s an issue with the terminology. It only makes sense if you equate white with ‘good, permitted, safe’ and black with ‘bad, dangerous, forbidden’,” she added. “There are some obvious problems with this. So in the name of helping to stamp out racism in cyber security, we will avoid this casually pejorative wording on our website in the future.”

To replace the terminology, NCSC has opted for the words “deny list” and “allow list,” which will now be used across its website and cybersecurity advisories. The language is not only clearer, but also more inclusive, the agency said.

“No, it’s not the biggest issue in the world — but to borrow a slogan from elsewhere: every little helps,” Emma W. added. “You may not see why this matters. If you’re not adversely affected by racial stereotyping yourself, then please count yourself lucky. For some of your colleagues (and potential future colleagues), this really is a change worth making.”

https://www.pcmag.com/news/uk-cybersecurity-agency-drops-blacklist-and-whitelist-terms-over-racial


Give me a break. Just because I maintain various black and white lists doesn’t mean I am racist, nor is it perpetuating a stereotype! And how about the hacking community . . . black hat and white hat are not racial designations!

Of all the problems we could be focusing on in 2020, THIS is the best we can do?! REALLY?!


* * *


What you have just read is one way to look at this new development. Pick a forum of your choice and you are likely to see this dynamic. Just another case of the weak making the world safer for themselves.

However, I am not going to take that view. Though I (like most others, it seems) have never really thought about the connotations behind such terms as Whitelist and Blacklist, I now see the harm. Even if we cast aside the role of privilege in situations like this, we are still left with the fact that the alternative is better than the status quo.

Allow / Deny List

White / Black List

It’s simple. It’s obvious from the perspective of users of any skill level. And it’s less reminiscent of human bias of past and present which just so happens to live on in the form of code.
The code is not inherently biased (code can’t be biased. It’s just an assortment of 1’s and 0’s!). The coder may not even be inherently (purposely) biased. However, we all tend to be a product of our environment. Since the internet was born in a nation that has a history of institutionalized racism towards African Americans, is it such a surprise that this bias would turn up in one of the nation’s greatest achievements?

The code ain’t biased. And there are arguably bigger problems one could be tackling. None the less, nothing beats more precise without the stigma.

As for the hacking community and its black, white and grey hats . . . is it really that big a deal to pick a new colour scheme?

Knowing how this segment operates . . . yes, it is. The info-sec community could unanimously embrace whatever change it wanted, but the “FUCK PC CULTURE!” crowd would keep wearing their black & white hats till the day they die.

Fine. Herding an entire cohort is problematic at the best of times (let alone one that prides itself on its rebelliousness and anarchism). Continue to identify by way of whatever hat you want.

As for the rest of us, may I suggest Green Hats (Good) and Red hats (bad)? Grey hats can either stay the same OR may I suggest brown hats (it’s the colour one attains upon mixing red and green).
The organization Red Hat may take offence to this new scheme. As would so-called Red Hat hackers, who apparently either target Linux systems (please don’t) or slay black hats. Come to think of it, Green hat is taken as well.

https://www.techfunnel.com/information-technology/different-types-of-hackers/

I’m reminded of a Carlin segment.

“Everybody’s got a fucking hat!”

Different topic. But you get the point. Are the hats REALLY necessary (Black, white or otherwise)? Or is there a better way that is less arbitrary?

* * *

It may all seem silly to people: making a big deal out of a non-issue in the world of technology (only the latest field to be targeted by this fad of PC culture). But while one could argue that the future of most industries won't be much affected by institutionalized bias (it hasn't made much difference up to now, has it?), the same cannot be said for the technology industry. It all comes down to artificial intelligence.

Before the modern era, most of the code that ran on our devices could be said to be primarily dumb. By that, I mean that no matter what little quirks humans reflected into the 1's and 0's, there was relatively little consequence. For example, black and white lists. However, in an era where both training and embracing the use of artificial intelligence are on the rise in every single area of life, there is no longer room for unaccounted-for human bias. Biases in artificial intelligence algorithms don't just sit on millions of desktops and servers for decades without consequence; they actively alter the decision-making of these algorithms. And every time one of these algorithms goes biased in public view, the entire sector takes a reputational hit on account of the programmer's error.

By now, we have all likely heard about cases of racist, sexist, and otherwise biased artificial intelligence instances. All of which only adds to the bad reputation that the technology has earned in the public eye thanks to Hollywood, opinions from people like Elon Musk, and just the weird factor of it all. We just don’t like being the second biggest brain in the room.

Or more pertinently, we tend not to trust black box AI: algorithms that are fed a given set of data and then return a result without providing any means of tracking exactly what factors led to that conclusion. This scares people. Given the reputation of the technology so far, I don't blame the public for being wary.

Despite this bias problem, however, I still think that artificial intelligence has a bright future in many areas of human existence. In fact, when (I don't think it is a matter of IF, should the industry take this problem seriously!) we figure out how to weed out the bias-inducing factors that are clouding current-day outputs, I think artificial intelligence has an excellent chance of acting the part of a neutral arbitrator in places where decisions based on unaccounted-for human biases are notoriously prevalent. For example, in the judicial system, and even in Human Resources departments worldwide (One, Two).

Speaking of the judicial system, the CBS show Bull (despite the problems associated with the actor behind its protagonist; some argue the same of CBS in general <One, Two>) is a brilliant example of human bias in action. The whole point of Jason Bull's career (trial science) and business is essentially finding a way to manipulate the humans of the judicial system into finding in the interests of his clients. Whilst the whole concept may seem far-fetched on the surface, it really isn't far from reality. Particularly if you are a member of a notoriously targeted minority in the society you live in.

Though many people tend to fear black box AI, I am far more untrusting of the human brain. Because there is no more inaccessible black box than the one that resides between the ears of any person. A box that could be motivated by biases and annoyances ranging from the racial to the mundane ("I'm so bored. Is it lunchtime yet?" or "I need to pee so bad!").
When an individual or group of humans (such as a jury, judge or human resources manager) makes a decision about a given individual, there is no way to gain any insight into what drove that decision-making process. At best, you are forced to take the person's word that the decision was fair (despite the fact that many humans are unaware of how systemic biases may be affecting their decisions). In reality, there is often no way to even question the individuals making the decisions, so you have to assume on faith that they are acting in your best interests.

Knowing how humans are, I have no such faith in the human species. At this point in time, such an opinion doesn’t mean much since humans are still in charge of making many of the world’s crucial decisions. However, given the choice between some kind of fair artificial intelligence algorithm and a human brain, I lean towards trusting the AI.

Of course, not yet. Judging from the biased outputs of many of these systems, it looks to my untrained eye like many programmers have yet to accept the concept that is "Garbage in, garbage out". If the data you are feeding your algorithm is riddled with biases (be they apparent, or biases of omission), the end result is going to be less than desirable. Predictable, even.
It reminds me of the process of raising a child. Children are not born racist, sexist or otherwise pre-equipped with any manner of human biases. This is primarily learned behaviour. And since most (all?) aspects of human culture tend to be saturated in the unexamined biases of primitive societies, carried forward by later generations for primarily irrational reasons (tradition), it's almost impossible to be born and raised without absorbing some form of bias.
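To make "garbage in, garbage out" concrete, consider a deliberately dumb toy model trained on synthetic, skewed "hiring" data. Every number here is made up; the point is only that the skew passes straight through to the predictions:

```python
# Garbage in, garbage out, in miniature: a "model" trained on skewed
# historical outcomes happily reproduces the skew. Purely synthetic data.

# Historical "hires": group A approved 90% of the time, group B only 20%.
history = ([("A", 1)] * 90 + [("A", 0)] * 10 +
           [("B", 1)] * 20 + [("B", 0)] * 80)

def majority_rule(group: str) -> int:
    # The dumbest possible learner: predict the majority outcome per group.
    outcomes = [y for g, y in history if g == group]
    return int(sum(outcomes) > len(outcomes) / 2)

print(majority_rule("A"), majority_rule("B"))  # 1 0 -- the bias survives
```

No malice required on the part of the coder; the history does all the work.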

As the annoying TikTok that I have quoted a few thousand times in the past 2 months states:

There is a reason why that saying annoys many philosophers critical of Stoicism. In fact, there is a reason why Stoicism annoys me. Having worked in corporate environments all my life, the mantra has always been essentially "Just deal with it, or there is the door!". It makes for an easy-to-manage workforce when all of the cogs just mindlessly obey their orders. However, it is an inherently shortsighted management style, since no one is better positioned to spot problems and inefficiencies than ground-level employees. If the culture of a company dictates that employees just deal with it, the result can be longstanding (and often silly) inefficiencies that may well be trivial to correct.

As such, what LOOKS to be a well-oiled machine may well be operating below the level it is truly capable of. Whilst that may not matter in times when business is good and money is plentiful, the picture can change quickly when the finances do.

While not directly applicable to the conversation that is artificial intelligence, it still ties in. Unlike the pipe dream of correcting all the biases of even ONE human, we can correct the artificial intelligence algorithms we create. Whilst this may well be a challenge, I don't think that all AI algorithms necessarily have to be black boxes. In fact, there is good reason to promote transparency in decision-making processes.
Though this will be crucial in many areas, nowhere more so than in the judicial system. If an algorithm is to be trusted to hand down judgements on a seemingly automated basis, there should be a way for the recipients of those judgements to know how the decision came to be. And of course, a process by which it can be appealed.

As such, I have my doubts that many appeals court or supreme court judges are going to be automated away anytime soon. They may, in fact, get a whole lot busier in the first decade or two of the transition. At least until public jitters and mistrust of the system are calmed.


* * *

It would seem that we are miles from where we started (in terms of this post). After all, how do white and black lists in ANY way affect the future of artificial intelligence? Or, to reference an older terminology controversy originating in the tech community, how does merely naming a concept the master/slave network architecture cause harm?

First of all, an explanation.

Master/slave is a model of communication for hardware devices where one device has a unidirectional control over one or more devices. This is often used in the electronic hardware space where one device acts as the controller, whereas the other devices are the ones being controlled. In short, one is the master and the others are slaves to be controlled by the master. The most common example of this is the master/slave configuration of IDE disk drives attached on the same cable, where the master is the primary drive and the slave is the secondary drive.

https://www.techopedia.com/definition/2235/masterslave
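For the programmers in the room, the same layout in miniature. This is just an illustrative sketch of the pattern using Python's multiprocessing module, not tied to any particular product:

```python
# A minimal parent/worker layout -- the same architecture the old
# "master/slave" label described, under the newer names.
from multiprocessing import Process, Queue

def worker(tasks: Queue, results: Queue) -> None:
    # Workers only do what the parent hands them -- the "unidirectional
    # control" described in the definition above.
    for item in iter(tasks.get, None):   # None is the stop signal
        results.put(item * item)

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    procs = [Process(target=worker, args=(tasks, results)) for _ in range(2)]
    for p in procs:
        p.start()
    for n in range(5):
        tasks.put(n)
    for _ in procs:
        tasks.put(None)                  # tell each worker to exit
    for p in procs:
        p.join()
    print(sorted(results.get() for _ in range(5)))  # [0, 1, 4, 9, 16]
```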

And second, privilege tends to play a big role in whether or not the wording is offensive. Someone that grew up and otherwise lives outside the context that is life as an African American in America (and really, anywhere) would predictably find little harm in what they interpret as unrelated markers in an entirely unrelated context. For people that have experienced that background of prejudice, however, these unanalyzed tags represent yet another example of systemic bias. That there is a contingent of programmers that vocally support such language only serves to shore up this conclusion.

Not all of the industry is unwilling to make changes in the name of stemming old biases, however. In around the same timeframe, both Google and Python (one of the world's most popular programming languages) committed to purging the antiquated and offensive terms from their code bases. Python replaced "slave" with "worker" and "master" with "parent process" (in the context of a network, one just drops the word "process").

Thus proving once more that “this is how it has always been!” doesn’t mean that this is how it always HAS to be.

Of course, again, none of this has anything to do with artificial intelligence at first glance. However, it can serve as a nice jumping-off point, since scrutinizing our existing dumb code base for these unnoticed (and thus, unevaluated) biases can help prepare us for the care that is required in creating and maintaining the artificial intelligence processes of the future. Or, at the very least, it can serve as an excellent tool for determining which programmers embrace the correct frame of mind to tackle such a finicky project.


https://getpocket.com/explore/item/how-to-think-about-implicit-bias?utm_source=pocket-newtab

Though I didn’t utilize this article in writing the piece, it was recommended to me in the process of gathering related materials. It’s an interesting read.

WAKE UP AMERICA: The Feds Are Using Covid 19 As Cover To Target Encryption

What a time to be alive.

All over the world, nations are shutting down all but the most essential services in the name of slowing the spread of the SARS-CoV-2 virus. And as of this writing, we are nowhere even NEAR anything constituting a middle point of this pandemic. The light at the end of this tunnel of darkness is still far too distant to even detect.

But that doesn’t mean that officials in the increasingly authoritarian mirroring Republican party are not hard at work, fighting for the rights of Americans. That is, fighting for the rights of Americans to never harbour any digital secrets ever again.

It all stems from the idiotically named EARN IT bill. The Eliminating Abusive and Rampant Neglect of Interactive Technologies Act is all about saving the children, say the politicians. However, anyone with a keen ear knows what lies between the lines of this act. Ever since the whole of the internet began adopting increasingly strong encryption as standard procedure, authorities at all levels of government have grown infuriated at hitting this brick wall in various investigations. Be it smartphones, content on cloud servers, or web traffic that neither the ISP nor anyone else can read, encrypted data has made all levels of surveillance much more difficult than they used to be.

For everyday users, an increasingly private internet meant more security and privacy in pretty much all contexts. And from the point of view of the dissident or whistleblower, there has never been a better time. Though there is no such thing as 100% untraceability, for many situations, current technology has brought us pretty close. 

I explored this topic in some depth last year. That piece concluded with me speculating that we may well end up with weaker encryption schemes on account of government crackdowns. Written with that inevitability in mind, I tried to sort out what the options were for maintaining some semblance of privacy. For example, the drastic privacy difference between decrypting individual app-generated traffic on the fly, and an OS backdoor.

At the time, I didn’t think the threat was all that serious, considering the enormity of the task. Any governing official that went public with a plan (or even the proposition) of eliminating encryption would have a short career. And even if such laws were passed AND ISP’s were mandated to filter all blind traffic within their networks, the ensuing chaos would break the economy.
Canadian ISP Rogers learned this the hard way back in 2007 when they tried to rein in encrypted BitTorrent transfers by slowing down all encrypted traffic. The result was less filesharing . . . and a whole lot of legitimate users of email, online banking and other sensitive services angered at being caught in the dragnet.

Last October, this seemed like a problem far off in the future. Even in an era where "President Trump" is something we say out loud, it still felt like a step too far.

However, no one back then could have predicted that a novel coronavirus would take the world by storm (well, aside from these guys). Nor could they have predicted that politicians would use the blanket virus coverage as an opportunity to push legislation mandating the breaking of encryption.

It all comes down to section 230 of the Communications Decency Act of 1996.

Section 230 says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of “interactive computer service providers,” including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.

https://www.eff.org/issues/cda230

Whilst current-day legislation protects all communications entities and platforms from the illicit actions and behaviours of their end-users, this new law would hinge that protection on the service or platform's ability to give authorities access to encrypted end-user data. The penalty for not complying would be nothing short of bankruptcy.

This seems to leave only 2 options. Accept defeat and cease to exist before the liability costs do it for you. Or roll back the standards so as to appease the spirit of the legislation. Though I may well be incorrect, I anticipate this would mean the adoption of weaker (aka previously broken) standards of cryptography. Though this may not necessarily be permanent (platforms may be working on a secure but warrant-friendly workaround), people will be at risk for as long as the old standards persist.
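To put "previously broken" into perspective, some back-of-envelope arithmetic on key sizes, assuming a deliberately generous (made-up) guess rate:

```python
# How long a brute-force attacker needs to exhaust a keyspace, assuming
# a (very generous, entirely hypothetical) trillion guesses per second.
GUESSES_PER_SECOND = 1e12
SECONDS_PER_YEAR = 3.15e7

for name, bits in [("DES (56-bit)", 56), ("AES-128", 128)]:
    years = 2**bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{name}: ~{years:.2e} years to exhaust")

# DES (56-bit): ~2.29e-03 years -- well under a day.
# AES-128:      ~1.08e+19 years -- around a billion times the age of
# the universe. That gap is what "rolling back the standards" gives up.
```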

This looks like an inevitability if the bill is passed. However, since that has not yet happened, you still have time to act.

For what good it will do, many petitions against the act exist.

However, nothing is more important than exposure. So until John Oliver is back on the air to take this story nationwide, the job is in the hands of everyday Americans.


“Electric Buses Charge Quickly With This New Wireless System” – (Ecowatch)

https://www.ecowatch.com/electric-bus-charging-system-2641538688.html

Around a month or so ago, I wrote a piece exploring my hypothesis that my country (in particular, one province within it) is betting on entirely the wrong horse when it comes to the future of energy. For that piece, I tried to keep a fairly level head in my explorations, despite holding strong opinions on the subject matter. As much as I value viewing the future in a pragmatic way, I also understand the human dynamic. When a new technology transitions into being the dominant infrastructure, you invariably have hundreds, thousands, maybe even hundreds of thousands of people displaced from employment.

The oilsands case for me is an interesting one for a few reasons.

As an environmentally minded citizen, I am in obvious opposition for that reason (leave it in the ground). Though those in favour are all about the good-paying jobs, it's an inherently flawed argument. Good-paying or not, corporations will ALWAYS run with the absolute minimum amount of labour necessary to generate the desired revenue. And if those few jobs can be automated away, then there goes that argument. Even if they get their pipelines and whatever other infrastructure they demand from provincial and federal governments, all but the most technologically skilled engineers will STILL end up on the chopping block. The proverbial square one.

After the recent Canadian federal election, I more or less reiterated the same message. Though this time, I was far less contained in displaying my true feelings. A product of hearing the same old arrogant whining and complaining from the same short-sighted bunch of (generally) boomers, I lost my cool and had to release some steam. It wasn’t the unifying message that this country arguably is in need of. But at the same time, we’re all adults here. Whether or not adults choose to accept the consequences of future change that will almost certainly be out of their control, they will be affected by this change.

You can fight it. You can stay in the delusion until the progress of reality runs you down like a freight train and leaves your life, region and economy in a state of disarray. Or you can acknowledge the dangerously rough waters that lie ahead and start attempting to plan accordingly.
These plans may not help ALL affected by the change, nor may they even turn out to be the right guess (who knows what we can't foresee in the coming decades). Nonetheless, having a plan is far better than watching the economic, social (and potentially, civil) fabric erode before your very eyes. Whilst my main focus herein is on the Alberta economy, the aforementioned automation transition will affect far more regions than that.

Alberta may be in dire straits now. But they ain't seen NOTHING yet.

Which brings me to the article I have linked above.

In Long Beach, California, some electric buses can charge along their route without cords or wires.

When a bus reaches the Pine Avenue station, it parks over a special charging pad. While passengers get on and off, the charger transfers energy to a receiver on the bottom of the bus.

Michael Masquelier is CEO of Wave, the company that makes the wireless system in Long Beach.

“We automatically detect that the vehicle’s there, automatically start the charge,” he says. “So it’s completely hands-free and automated.”

Wireless charging systems use what’s known as inductive charging to produce electricity across a magnetic field. Wireless phone chargers and even some electric toothbrushes work in the same way.

As you can see, it’s not a new technology. At very low power, it’s how proximity-based badge and credit card readers work. As noted above, it’s how wireless charging works. I have one of those pads, and as mobile device makers continue to work towards waterproof devices (as opposed to water-resistant), they will become even more commonplace.
Apple is already rumoured to be creating such a device in the iPhone 12. This means that other manufacturers (particularly Android flagships like future Pixels) won't be far behind. A lack of ports means more opportunities for house-branded headphones, wireless dongles and who knows what else.

Either way, the reason I wrote this is to showcase a demonstration of this technology in action TODAY. If this can be made to work for buses, then it seems plausible that parking spots outfitted with similar tech could well be the future of charging personal EVs. Maybe not at home, but consider the comfort and security of initiating payments and the other steps of the process without even leaving the vehicle. No current driver, gas OR electric, can boast that convenience.

This leaves semi-trucks and long haul trucking.

This has typically been viewed as a problem area for conversion, since it's hard to match the range of the average transport truck. One way of dealing with this would be swapping out batteries between trips. Another could be charging the battery (possibly with the assistance of plugging in, to ensure maximum juice flow into the batteries) while the truck is at a warehouse or depot loading and unloading. The undercarriage of a trailer certainly has enough space to house the required conductors. Whether the time between loading and unloading a given shipment of freight is enough to gain a proper charge is questionable. But as these things go, maybe the solution isn't just to rely on charging at layovers. Maybe a battery swap plus a charge is the answer.
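Some back-of-envelope arithmetic suggests the dwell-time idea isn't crazy. All the numbers below are hypothetical, picked only to show the shape of the question:

```python
# Can a depot stop put a useful charge into an electric semi? Pack size,
# charger output and dwell time are all made-up but plausible figures.
PACK_KWH = 600      # hypothetical battery pack
CHARGER_KW = 350    # hypothetical wireless pad output
DWELL_HOURS = 0.75  # 45 minutes at the loading dock

added_kwh = CHARGER_KW * DWELL_HOURS
print(f"Energy added: {added_kwh:.0f} kWh "
      f"(~{100 * added_kwh / PACK_KWH:.0f}% of the pack)")
# ~262 kWh, roughly 44% of the pack -- not a full recharge, but enough
# to matter, especially combined with a swap or a plug.
```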

Either way, the point of this is to try and further outline the seemingly obvious. The future is here, in all its fascinations and uncertainties. When it comes to the question of whether you can sustain an economy with heavy fuels only, it is less a question than a countdown.

It is not a matter of if. It is a matter of when.

“Safeguarding Human Rights In The Era Of Artificial Intelligence” – (Council Of Europe)

Today I will explore yet another (hopefully) different angle on a topic that has grown very fascinating to me. How can human rights be safeguarded in the age of the sentient machines? An interesting question since I think it may also link back to technologies that are pre-AI.

Let’s begin.

https://www.coe.int/en/web/genderequality/-/safeguarding-human-rights-in-the-era-of-artificial-intelligence

The use of artificial intelligence in our everyday lives is on the increase, and it now covers many fields of activity. Something as seemingly banal as avoiding a traffic jam through the use of a smart navigation system, or receiving targeted offers from a trusted retailer is the result of big data analysis that AI systems may use. While these particular examples have obvious benefits, the ethical and legal implications of the data science behind them often go unnoticed by the public at large.

Artificial intelligence, and in particular its subfields of machine learning and deep learning, may only be neutral in appearance, if at all. Underneath the surface, it can become extremely personal. The benefits of grounding decisions on mathematical calculations can be enormous in many sectors of life, but relying too heavily on AI which inherently involves determining patterns beyond these calculations can also turn against users, perpetrate injustices and restrict people’s rights.

The way I see it, AI in fact touches on many aspects of my mandate, as its use can negatively affect a wide range of our human rights. The problem is compounded by the fact that decisions are taken on the basis of these systems, while there is no transparency, accountability or safeguards in how they are designed, how they work and how they may change over time.

One thing I would add to the author's final statement is the lack of safeguards in terms of what kind of data these various forms of AI are drawing their conclusions from. While not the only factor that could contribute to seemingly flawed results, I would think that bad data inputs are one of the (if not THE) most important factors.

I base this on observation of the many high-profile cases of AI gone (seemingly) haywire. Whether or not it is emphasized in the media coverage, biased data inputs have almost always been mentioned as a factor.

If newly minted AI software is the mental equivalent of a child, then this data is the equivalent of religion, racism, sexism or other indoctrinated biases. Thus my rule of thumb is this . . . if the data could cause indoctrination of a child, then it's unacceptable for a learning-stage algorithm.

Encroaching on the right to privacy and the right to equality

The tension between advantages of AI technology and risks for our human rights becomes most evident in the field of privacy. Privacy is a fundamental human right, essential in order to live in dignity and security. But in the digital environment, including when we use apps and social media platforms, large amounts of personal data are collected – with or without our knowledge – and can be used to profile us, and produce predictions of our behaviours. We provide data on our health, political ideas and family life without knowing who is going to use this data, for what purposes and how.

Machines function on the basis of what humans tell them. If a system is fed with human biases (conscious or unconscious) the result will inevitably be biased. The lack of diversity and inclusion in the design of AI systems is therefore a key concern: instead of making our decisions more objective, they could reinforce discrimination and prejudices by giving them an appearance of objectivity. There is increasing evidence that women, ethnic minorities, people with disabilities and LGBTI persons particularly suffer from discrimination by biased algorithms.

Excellent. This angle was not overlooked.

Studies have shown, for example, that Google was more likely to display adverts for highly paid jobs to male job seekers than female. Last May, a study by the EU Fundamental Rights Agency also highlighted how AI can amplify discrimination. When data-based decision making reflects societal prejudices, it reproduces – and even reinforces – the biases of that society. This problem has often been raised by academia and NGOs too, who recently adopted the Toronto Declaration, calling for safeguards to prevent machine learning systems from contributing to discriminatory practices.

Decisions made without questioning the results of a flawed algorithm can have serious repercussions for human rights. For example, software used to inform decisions about healthcare and disability benefits has wrongfully excluded people who were entitled to them, with dire consequences for the individuals concerned. In the justice system too, AI can be a driver for improvement or an evil force. From policing to the prediction of crimes and recidivism, criminal justice systems around the world are increasingly looking into the opportunities that AI provides to prevent crime. At the same time, many experts are raising concerns about the objectivity of such models. To address this issue, the European Commission for the efficiency of justice (CEPEJ) of the Council of Europe has put together a team of multidisciplinary experts who will “lead the drafting of guidelines for the ethical use of algorithms within justice systems, including predictive justice”.

Though this issue tends to be viewed from the Black Box angle (you can't see what is going on inside the algorithms), I think it better reflects the problem of proprietary systems running independently, as they please.

It reminds me of the situation with corporations, large-scale data miners and online security. The EU sets the standard in this area by levying huge fines for data breaches, particularly those that cause consumer suffering (North America lags behind in this regard).
I think that a statute similar to the GDPR could handle this issue nicely on a global scale. Just as California was/is the leader in many forms of safety regulation due to its market size, the EU has now stepped into that role in terms of digital privacy. It can do the same for regulating biased AI (at least for the largest of entities).

It won’t stop your local police department or courthouse (or even your government!) from running flawed systems. For that, mandated transparency in operations becomes a necessity for operation. Governing bodies (and international overseers) have to police the judicial systems of the world and take immediate action if necessary. For example, by cutting AI operations funding to a police organization that either refuses to follow the transparency requirements or refuses to fix diagnosed issues in their AI system.

Stifling freedom of expression and freedom of assembly

Another right at stake is freedom of expression. A recent Council of Europe publication on Algorithms and Human Rights noted for instance that Facebook and YouTube have adopted a filtering mechanism to detect violent extremist content. However, no information is available about the process or criteria adopted to establish which videos show “clearly illegal content”. Although one cannot but salute the initiative to stop the dissemination of such material, the lack of transparency around the content moderation raises concerns because it may be used to restrict legitimate free speech and to encroach on people’s ability to express themselves. Similar concerns have been raised with regard to automatic filtering of user-generated content, at the point of upload, supposedly infringing intellectual property rights, which came to the forefront with the proposed Directive on Copyright of the EU. In certain circumstances, the use of automated technologies for the dissemination of content can also have a significant impact on the right to freedom of expression and of privacy, when bots, troll armies, targeted spam or ads are used, in addition to algorithms defining the display of content.

The tension between technology and human rights also manifests itself in the field of facial recognition. While this can be a powerful tool for law enforcement officials for finding suspected terrorists, it can also turn into a weapon to control people. Today, it is all too easy for governments to permanently watch you and restrict the rights to privacy, freedom of assembly, freedom of movement and press freedom.

1.) I don’t like the idea of private entities running black box proprietary algorithms with the aim of combatting things like copyright infringement or extremism either. It’s hard to quantify really because, in a way, we sold out our right to complain when we decided to use the service. The very public square that is many of the largest online platforms today have indeed become pillars of communication for millions, but this isn’t the problem of the platforms. This is what happens when governments stay hands off of emerging technologies.

My solution to this problem revolved around building an alternative. I knew this would not be easy or cheap, but it seemed that the only way to ensure truly free speech online was to ditch the primarily ad-supported infrastructure of the modern internet. This era of Patreon and crowdfunding has helped in this regard, but not without a set of its own consequences. In a nutshell, when you remove the need for everyday people to fact-check (or otherwise verify) new information that they may not quite understand, you end up with the intellectual dark web:
a bunch of debunked or unimportant academics, a pseudo-science-peddling ex-psychiatrist made famous by an infamous legal battle with no one (well, aside from those he sued for exercising their free speech rights), and a couple of dopey podcast hosts.

Either way, while I STILL advocate for one (or many) alternatives in the online ecosystem, it seems to me that, at least in the short term, regulations may need to come to the aid of the freedom of speech and expression rights of everyday people. Yet it is a delicate balance, since we're dealing with sovereign entities in themselves.

The answers may seem obvious at a glance. For example, companies should NOT have been allowed to up and boot Alex Jones off of their collective platforms just for the purpose of public image (particularly after cashing in on the phenomenon for YEARS). Yet in allowing for black and white actions such as that, I can’t help but wonder if it could ever come back to bite us. For example, someone caught using copyrighted content improperly having their entire Youtube library deleted forever.

2.) I don’t think there is a whole lot one can do to avoid being tracked in the digital world, short of moving far from cities (if not off the grid entirely). At this point, it has just become part of the background noise of life. Carrying around a GPS enabled smartphone and using plastic cards is convenient, and it’s almost impossible to generate some form of metadata in ones day to day life. So I don’t really worry about it, short of attempting to ensure that my search engine accessible breadcrumbs are as few as possible.

It’s all you really can do.

What can governments and the private sector do?

AI has the potential to help human beings maximise their time, freedom and happiness. At the same time, it can lead us towards a dystopian society. Finding the right balance between technological development and human rights protection is therefore an urgent matter – one on which the future of the society we want to live in depends.

To get it right, we need stronger co-operation between state actors – governments, parliaments, the judiciary, law enforcement agencies – private companies, academia, NGOs, international organisations and also the public at large. The task is daunting, but not impossible.

A number of standards already exist and should serve as a starting point. For example, the case-law of the European Court of Human Rights sets clear boundaries for the respect for private life, liberty and security. It also underscores states’ obligations to provide an effective remedy to challenge intrusions into private life and to protect individuals from unlawful surveillance. In addition, the modernised Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data adopted this year addresses the challenges to privacy resulting from the use of new information and communication technologies.

States should also make sure that the private sector, which bears the responsibility for AI design, programing and implementation, upholds human rights standards. The Council of Europe Recommendations on human rights and business and on the roles and responsibilities of internet intermediaries, the UN guiding principles on business and human rights, and the report on content regulation by the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, should all feed the efforts to develop AI technology which is able to improve our lives. There needs to be more transparency in the decision-making processes using algorithms, in order to understand the reasoning behind them, to ensure accountability and to be able to challenge these decisions in effective ways.

Nothing for me to add here. Looks like the EU (as usual) is well ahead of the curve in this area.

A third field of action should be to increase people’s “AI literacy”.

Indeed.

In an age where such revered individuals as Elon Musk are saying such profoundly stupid things as this, AI literacy is an absolute necessity.

States should invest more in public awareness and education initiatives to develop the competencies of all citizens, and in particular of the younger generations, to engage positively with AI technologies and better understand their implications for our lives. Finally, national human rights structures should be equipped to deal with new types of discrimination stemming from the use of AI.

1.) I don’t think that one has to worry so much about the younger generations as they do about the existing generations. Young people have grown up in the internet age so all of this will be natural. Guidance as to the proper use of this technology is all that should be necessary.

Older people are a harder sell. If resources were to be put anywhere, I think it should be into programs which attempt to make aging generations more comfortable with increasingly modernized technology. If someone is afraid to operate a smartphone or a self-checkout, where do you even begin with explaining Alexa, Siri or Cortana?

2.) Organizations do need to be held accountable for their misbehaving AI software, particularly if it causes a life-altering problem. Up to and including the right to legal action, if necessary.

 It is encouraging to see that the private sector is ready to cooperate with the Council of Europe on these issues. As Commissioner for Human Rights, I intend to focus on AI during my mandate, to bring the core issues to the forefront and help member states to tackle them while respecting human rights. Recently, during my visit to Estonia, I had a promising discussion on issues related to artificial intelligence and human rights with the Prime Minister.

Artificial intelligence can greatly enhance our abilities to live the life we desire. But it can also destroy them. It therefore requires strict regulations to avoid morphing in a modern Frankenstein’s monster.

Dunja Mijatović, Commissioner for Human Rights

I don’t particularly like the darkened tone of this part of the piece. But I like that someone of influence is starting to ask questions, and getting the ball rolling.

It will be interesting to see where this all leads in the coming months, years and decades.

“Unboxing Google’s 7 New Principles Of Artificial Intelligence” – (aitrends)

Today, I am going to look into Google's recent release of its 7 new principles of artificial intelligence. Though the release was made at the beginning of July, life happens, so I haven't been able to get around to it until now.

https://aitrends.com/ethics-and-social-issues/unboxing-googles-7-new-principles-of-artificial-intelligence/

How many times have you heard that Artificial Intelligence (AI) is humanity’s biggest threat? Some people think that Google brought us a step closer to a dark future when Duplex was announced last month, a new capability of Google’s digital Assistant that enables it to make phone calls on your behalf to book appointments with small businesses.

The root of the controversy lied on the fact that the Assistant successfully pretended to be a real human, never disclosing its true identity to the other side of the call. Many tech experts wondered if this is an ethical practice or if it’s necessary to hide the digital nature of the voice.

Right off the bat, we're into some interesting stuff. An assistant that can appear to do all of your phone-call-related chores FOR you.

On one hand, I can understand the ethical implications. Without confirming the nature of the caller, it could very well be seen as a form of fraud. It’s seen as such already when a person contacts a service provider on behalf of another person without making that part clear (even if they authorize the action!). Indeed, most of the time, no one on the other end will likely even notice. But you never know.

When it comes to disguising the digital nature of the voice of such an assistant, I don’t see any issue. While it could be seen as deceptive, I can also see many businesses hanging up on callers that come across as too robotic. Consider the first pizza ever ordered by a robot.

Okay, not quite. We are leaps and bounds ahead of that voice in terms of, well, sounding human. Nonetheless, there is still an unmistakably automated feel to such digital assistants as Siri, Alexa, and Cortana.

In this case, I don’t think that Google (nor any other future developer or distributor of such technology) has to worry about any ethical issues surrounding this, simply because the onus is on the user to ensure the proper use of the product or service (to paraphrase every TOS agreement ever).

One big problem I see coming with the advent of this technology is that deception of the worst kind is going to get a whole lot easier. One example that comes to mind is those OBVIOUSLY computer-narrated voices belching out all manner of fake news to the YouTube community. For now, the fakes are fairly easy for the wise to pick up on because they haven’t quite learned the nuances of the English language (then again, have I?). In the future, this is likely to change drastically.
Another example of a problem posed by this technology would be telephone scamming. Phishing scams originating in the third world are currently often hindered by the language barrier; it takes a lot of study to master enough English to fool most people in English-speaking nations. Enter this technology, and that barrier is gone.

And on the flip side of the coin, anything that is intelligent enough to make a call on your behalf can presumably also be programmed in reverse: to take calls. Which would effectively eliminate the need for a good 95% of the call center industry. Though some issues may need to be dealt with by a human, most common sales, billing, or tech support problems could likely be handled autonomously.

So ends that career goal.

Nonetheless, I could see myself having a use for such technology. I hate talking on the phone with strangers, even for a short time. To have the need for that eliminated would be VERY convenient. What can be fetched with a tap and a click already is, so eliminating what’s left . . . I’m in millennial heaven.

You heard it here first . . .

Millennials killed THE ECONOMY!

Google was also criticized last month by another sensitive topic: the company’s involvement in a Pentagon program that uses AI to interpret video imagery and could be used to improve the targeting of drone strikes. Thousands of employees signed a letter protesting the program and asking for change:

“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

Time to ruffle some progressive feathers.

In interpreting this, I am very curious about what is meant by the word improve. What does it mean to improve the targeting of drone strikes? Improve the aiming accuracy of the weaponry? Or improve the quality of the targets (more actual terrorist hideouts, and fewer family homes)?

This has all become very confusing to me. One could even say that I am speaking out of both sides of my mouth.
On one hand, when I think of this topic, my head starts spitting out the common, deliberately dehumanizing war language: terrorists, combatants, the enemy. Yet here I am, pondering whether improved drone strikes are a good thing.

I suppose that it largely depends on where your interests are aligned. If you are aligned more nationalistically than humanistically, then this question is legitimate. If you work for or are a shareholder of a defense contractor, then this question is legitimate. Interestingly, this could include me, being a paying member of both private and public pension plans (pension funds are generally invested in the market).

Even the use of drones alone COULD be seen as cowardly. On the other hand, that would entail that letting loose the troops onto the battlefields, like the great wars of the past, would be the less cowardly approach; it is less cowardly for the death ratio to be more equal.
Such an equation would likely seem completely asinine to most. The obvious answer is the method with the least bloodshed (at least for our team). Therefore, “BOMBS AWAY!” from a control room somewhere in the desert.

For most, it likely boils down to a matter of whether we HAVE to. If we HAVE to go to war, then this is the best way possible. Which then leads you to the obvious question: “Did we have to go to war?”. Though the answers are rarely clear, they almost always end up leaning towards the No side. And generally, the public never finds this out until after the fact. Whoops!

The Google staff (as have other employees in Silicon Valley, no doubt) have made their stance perfectly clear. No warfare R & D, PERIOD. While the stance is admirable, I can’t help but also think that it comes off as naive. I won’t disagree that the humanistic position is not to enable the current or future endeavors of the military-industrial complex (of which they are now a part, unfortunately). But even if we take the humanist stance, many bad actors the world over have no such reservations.
Though the public is worried about a menace crossing the border disguised as a refugee, the REAL menace sits in a computer lab. Without leaving the comfort of a chair, they can cause more chaos and damage than one could even dream of.

The next war is going to be waged in cyberspace. And at the moment, a HUGE majority of the infrastructure we rely upon for life itself is in some stage of insecurity ranging from wide open to “Password:123456”.
If there is anyone who is in a good position to prepare for this new terrain of action, it’s the tech industry.

On one hand, as someone who leans in the direction of humanism, I see war as nonsense and the epitome of a lack of logic. But on the other hand, if there is one thing that our species has perfected, it’s the art of taking each other out.

I suspect this will be our undoing. If it’s AI gone bad, I will be very surprised. I suspect it will be either mutually assured destruction gone real, or climate change gone wild. Which I suppose is its own form of mutually assured destruction.

I need a beer.

Part of this exploration was based around a segment of the September 28, 2018 episode of Real Time, where Bill has a conversation about the close relationship between astrophysicists and the military (starts at 31:57). The man’s anti-philosophical views annoyed me when I learned of them 3 years ago. And it seems that he has become a walking example of what you get when you put the philosophy textbooks out with the garbage.

A “clear policy” around AI is a bold ask because none of the big players have ever done it before, and for good reasons. It is such a new and powerful technology that it’s still unclear how many areas of our life will we dare to infuse with it, and it’s difficult to set rules around the unknown. Google Duplex is a good example of this, it’s a technological development that we would have considered “magical” 10 years ago, that today scares many people.

Regardless, Sundar Pichai not only complied with the request, but took it a step further by creating 7 principles that the company will promote and enforce as one of the industry drivers of AI.

When it comes to this sort of thing, I am not so much scared as I am nervous. Nervous of numerous entities (most of them private for-profits, and therefore not obligated to share data) all working on this independently, and having to self-police. This was how the internet was allowed to develop, and that has not necessarily been a good thing. I need go no further than the 2016 election to showcase what can happen when a handful of entities has far too much influence on, say, setting the mood for an entire population. It’s not exactly mind control as dictated by Alex Jones, but for the purpose of messing with the internal sovereignty of nations, the technology is perfectly suitable.

Here is yet another thing that annoys me about those who think they are red-pilled because they can see a conspiracy around every corner.

I always hear about mind control and the mainstream media, even though the traditional mainstream media has shrinking influence with each passing year. It’s being replaced by preference-tailored social media platforms that don’t just serve up what you love, but also often (and unwittingly) paint a false image of how the world looks. While facts and statistics say one thing, my YouTube suggestions and overall filter bubbles say another.

It’s not psy-ops and it doesn’t involve chemtrails, but it’s just as scary, considering that most of the people developing this influential technology themselves don’t fully grasp what they have developed.

1. Be socially beneficial

For years, we have dealt with comfortable boundaries, creating increasingly intelligent entities in very focused areas. AI is now getting the ability to switch between different domain areas in a transparent way for the user. For example, having an AI that knows your habits at home is very convenient, especially when your home appliances are connected to the same network. When that same AI also knows your habits outside home, like your favorite restaurants, your friends, your calendar, etc., its influence in your life can become scary. It’s precisely this convenience that is pushing us out of our comfort zone.

This principle is the most important one since it bows to “respect cultural, social, and legal norms”. It’s a broad principle, but it’s intended to ease that uncomfortable feeling by adapting AI to our times and letting it evolve at the same pace as our social conventions do.

Truth be told, I am not sure I understand this one (at least the explanation). It seems like the argument is that the convenience of it all will help push people out of their comfort zone. But I am a bit perplexed as to what that entails.
Their comfort zone, as in their hesitation to allow an advanced algorithm to take such a prominent role in their life? Or their comfort zone, as in helping to create opportunities for new interactions and experiences?

In the case of the former, it makes perfect sense. One need only look at the ten-deep line at the human-run checkout and the zero-deep line at the self-checkout to understand this hesitation.
As for the latter, most would be likely to notice a trend in the opposite direction. An introvert’s dream could be seen as an extrovert’s worst nightmare. Granted, many of the people making comments (at least in my life) about how technology isolates the kids tend to be pushy extroverts that see that way of being as the norm. Which can be annoying, in general.

Either way, I suspect that this is another case of the onus being on the user to define their own destiny. Granted, that is not always easy if the designers of this technology don’t fully understand what they are introducing to the marketplace.

If this proves anything, it’s that this technology HAS to have regulatory supervision from entities whose well-being (be it reputation- or currency-wise) is not tied to the success or failure of the project. Time and time again, we have seen that when allowed to self-police, private for-profit entities are willing to bury information that raises concerns about profitable enterprises. In a nutshell, libertarianism doesn’t work.

In fact, with the way much of this new technology hijacks our attention and otherwise finds ways to interact with us via our psychological flaws, it would be beneficial to mandate long-term real-world testing of these technologies, in the same way that newer drugs must undergo trials before they can be released on the market.

Indeed, the industry will do all it can to fight this, because it will effectively bring the process of innovation to a standstill. But at the same time, most of the worst offenders manipulate the psyche of their user base strictly because the attention economy is so cutthroat.
Thus, would this really be stifling technology? Or would it just be forcing the cheaters to stop placing their own self-interests above their users?

2. Avoid creating or reinforcing unfair bias

AI can become racist if we allow it. A good example of this happened in March 2016, when Microsoft unveiled an AI with a Twitter interface and in less than a day people taught it the worst aspects of our humanity. AI learns by example, so ensuring that safeguards are in place to avoid this type of situations is critical. Our kids are going to grow in a world increasingly assisted by AI, so we need to educate the system before it’s exposed to internet trolls and other bad players.

The author illustrates a good point here, though I am unsure if they realize that they answered their own question with their explanation.
Machines are a blank slate. Not unlike children growing up to eventually become adults, they will be influenced by the data they are presented with. If they are exposed to only neutral data, they are less prone to coming to biased conclusions.

So far, almost all of the stories that I have come across about AI going racist, sexist, etc. can be traced back to the data stream it was trained on. Since we understand that the dominant ideologies of parents tend to be reflected in their children, this finding should be fairly obvious. And unlike how difficult it is to reverse these biases in humans, an AI can presumably be shut down and retrained. A mistake that can be corrected.
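
To make that concrete, here is a toy sketch in Python (nothing like a real system; the phrases and group names are invented) of a model that learns by counting examples, inheriting whatever skew its data stream contains, and losing the skew once the stream is balanced:

```python
from collections import Counter

def train(examples):
    """examples: (text, label) pairs. Returns per-word label counts."""
    counts = {}
    for text, label in examples:
        for word in text.lower().split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def predict(counts, text):
    """Score each label by summing counts for the words in the text."""
    score = Counter()
    for word in text.lower().split():
        score.update(counts.get(word, Counter()))
    return score.most_common(1)[0][0] if score else "unknown"

# A skewed data stream: one group only ever appears with negative labels.
biased_stream = [
    ("group_a is great", "positive"),
    ("group_a is wonderful", "positive"),
    ("group_b is terrible", "negative"),
    ("group_b is awful", "negative"),
]
model = train(biased_stream)
print(predict(model, "group_b is wonderful"))  # "negative": the skew wins

# The "shut down and reprogram" step: retrain on a balanced stream.
balanced_stream = biased_stream + [
    ("group_b is great", "positive"),
    ("group_b is wonderful", "positive"),
]
model = train(balanced_stream)
print(predict(model, "group_b is wonderful"))  # now "positive"
```

The bias lives entirely in the data, not the code, which is why swapping the stream amounts to the reprogramming described above.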

Which highlights another interesting thing about this line of study. It forces one to seriously consider things like unconscious human bias. As opposed to the common anti-SJW faux-intellectual stance that is:

“Are you serious?! Sexism without being overtly sexist?! Liberal colleges are turning everyone into snowflakes!”

But then again, what is a filter bubble good for if not excluding nuance?

3. Be built and tested for safety

This point goes hand in hand with the previous one. In fact, Microsoft’s response to the Tay fiasco was to take it down and admit an oversight on the type of scenarios that the AI was tested against. Safety should always be one of the first considerations when designing an AI.

This is good, but coming from a private for-profit entity, it really means nothing. One has to have faith (hello, Apistevists!) that Alphabet/Google won’t bury any negative findings made with the technology, particularly if it is found to be profitable. A responsibility that I would entrust to no human with billions of dollars of revenue at stake.

Safety should always be the first consideration when designing ANYTHING. But we know how this plays out when an industry is allowed free rein.
In some cases, airplane cargo doors fly off or fuel tanks puncture and catch fire, and people die. And in others, sovereign national elections get hijacked and culminate in a candidate whose legitimacy many question.

4. Be accountable to people

The biggest criticism Google Duplex received was whether or not it was ethical to mimic a real human without letting other humans know. I’m glad that this principle just states that “technologies will be subject to appropriate human direction and control”, since it doesn’t discount the possibility of building human-like AIs in the future.

An AI that makes a phone call on our behalf must sound as human as possible, since it’s the best way of ensuring a smooth interaction with the person on the other side. Human-like AIs shall be designed with respect, patience and empathy in mind, but also with human monitoring and control capabilities.

Indeed. But we must not forget the reverse. People must be accountable for what they do with their AI tools.

Maybe I am playing the part of Captain Obvious. Nonetheless, it has to be said. No one blames the manufacturer of bolt cutters if one of its customers uses them to cut a bike lock.

5. Incorporate privacy design principles

When the convenience created by AI intersects with our personal feelings or private data, a new concern is revealed: our personal data can be used against us. Cambridge Analytica’s incident, where personal data was shared with unauthorized third parties, magnified the problem by jeopardizing user’s trust in technology.

Google didn’t use many words on this principle, probably because it’s the most difficult one to clarify without directly impacting their business model. However, it represents the biggest tech challenge of the decade, to find the balance between giving up your privacy and getting a reasonable benefit in return. Providing “appropriate transparency and control over the use of data” is the right mitigation, but it won’t make us less uncomfortable when an AI knows the most intimate details about our lives.

I used to get quite annoyed with people that were seemingly SHOCKED about how various platforms used their data, but ignorant of the fact that they themselves volunteered the lion’s share of it openly.
Data protection has always been on my radar, particularly in terms of what I openly share with the world at large. Over the years, I have taken control of my online past, removing most breadcrumbs left over from my childhood and teenage years from search queries. However, I understand that taking control within even one platform ALONE can be a daunting task. Even for those that choose to review these things on Facebook, it’s certainly not easy.

There is an onus on both parties.

Users themselves should, in fact, be more informed about what they are divulging (and to whom) if they are truly privacy-conscious. Which makes me think of another question . . . what is the age of consent for privacy disclosure?

Facebook defaults this age to 18, though it’s easy to game (my own family has members that allowed their kids to join at 14 or 15!). Parents allowing this is one thing, but consider the new parent who constantly uploads and shares photographs of their children. Since many people don’t bother with (or worry about?) their privacy settings, these photos are often in the public domain. Thus, by the time the child reaches a stage when they can make a decision on whether or not they agree with this use of their data, it’s too late.

Most children (and later, adults) will never think twice about this, but for those who do, what is the recourse?
Asking the parent to take them out of the public domain is an option. But consider the issue if the horse is already out of the barn.

One of my cousins (or one of their friends) once posted a picture of themselves on some social media site drinking a whole lot of alcohol (not sure if it was staged or not). Years later, they came across this image on a website labeled “DAMN, they can drink!”.
After the admin was contacted, they agreed to take down the image for my cousin. But in reality, they didn’t have to. It was in the public domain to begin with, so it was up for grabs.

How would this play out if the image was of a young child or baby who was too young to consent to waiving their right to privacy, and the person putting the photo in the public domain was a parent/guardian or another family member?

I have taken to highlighting this seemingly minuscule issue recently because it may someday become a real problem. Maybe one that the criminal justice systems of the world will have to figure out how to deal with. And without any planning as to how that will play out, the end result is almost certain to be bad. Just as it is in many cases where judges and politicians have been thrust into the responsibility of blindly legislating shiny new technological innovations.

To conclude, privacy is a 2-way street. People ought to give the issue more attention than they give a post that they scroll past, because future events could depend on it. But at the same time, platforms REALLY need to be more forthcoming about exactly WHAT they are collecting, and how they are using this data. Changing these settings should also be made a task of relative ease.

But first and foremost, the key to this is education. Though we teach the basics of how to operate technology in schools, most of the exposure to the main aspects of this technology (interaction) is self-taught. People learn how to use Facebook, Snapchat and MMS services on their phone, but they often have little guidance on what NOT to do.

What pictures NOT to send in the spur of the moment. How not to behave in a given context. Behaviors with consequences ranging from regret to dealing with law enforcement.

While Artificial Intelligence does, in fact, give us a lot to think about and plan for, it is important to note that the same goes for many technologies available today. Compared to what AI is predicted to become, this tech is often seen as less intelligent than it is mechanical. Nonetheless, modern technology plays an ever-growing role in the day-to-day lives of connected citizens of the world, of all ages and demographics. And as internet speeds keep increasing and high-speed broadband keeps getting more accessible (particularly in rural areas of the first world, and in the global south), ever more people will join the cloud. If not adequately prepared for the experience that follows, the result could be VERY interesting. For example, fake news tends to mean ignorance for most westerners, but in the right cultural context, it can entail death and genocide. In fact, in some nations, this is no longer theoretical. People HAVE died because of viral and inciting memes propagated on various social media platforms.

Priorities.

Before we even begin to ponder the ramifications of what does not yet exist, we have to get our ducks in a row in terms of our current technological context. It will NOT be easy and will involve partnerships between surprising bedfellows. But it will also help smooth the transition into an increasingly AI-dominated future.

Like it or not, it is coming.

Flying Cars – The Future? Or Future Disaster?

This article revives an interesting topic that I have not seen explored recently, amidst the noise generated by autonomous vehicles, AI, social-media-aimed backlash and everything else in the public discourse lately. That topic being flying cars.

In exploring this topic, I will use an article published on Wired by Eric Adams as a place to start.

https://www.wired.com/story/karman-electric-flying-car-air-taxi-power-lines/

To Solve Flying Cars’ Biggest Problem, Tie Them to Power Lines

Of the many challenges facing the nascent flying car industry, few turn more hairs gray than power. A heavier aircraft needs more power, which requires a bigger battery, which weighs more, thus making a heavier aircraft. You see the dilemma. So how do you step out of that cycle and strike a balance that lets you fly useful distances at useful speeds without stopping to recharge?

One startup thinks the answer lies in another question: Who needs a big battery, anyway?

San Francisco-based Karman Electric proposes dividing the need for power from the need to carry that power through the air. It wants to connect passenger-carrying electric air taxis to dedicated power lines on the ground, like an upside-down streetcar setup. The aircraft will carry small batteries so they can detach from the lines when necessary, but they’ll get most of their juice from their cords, allowing them to cover long distances at high speeds.

A few more questions, then. What happens if the cable gets jammed, or a bird flies in its path, or a helicopter wanders by? What if there’s a power loss on the ground, or if two vehicles get their cords tangled? How can you traverse bodies of water or rugged terrain? And doesn’t tying a flying car to the ground defeat the whole purpose?

I am going to stop here. Because frankly, the author planted a perfectly good segue.

In short, no, I don’t think that this absolutely defeats the whole purpose of the flying car. For one, in and around the areas where they would be utilized most (probably urban and suburban areas), one will have freedom. And when one is traveling outside of areas where tethering is an issue on account of the landscape alone, it wouldn’t be much of a problem anyway. Since most commuters will likely have a common destination in mind, what does it matter if the trip requires tethering to a fixed power source?

In fact, if you are traversing the skies in the dark of night (or in bad visibility conditions), tethering may be a good thing. Spatial disorientation has killed many people before.

Having made that argument, I have to admit that I just don’t see flying cars as being the future of transportation. The problem of powering them without fossil fuels plays a big part in this, at least in the short term. Can long-range (I’m talking intercontinental) high-speed transportation of people and freight ever be made carbon neutral?
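
The dilemma the quoted article opens with can actually be put into toy numbers. Below is a back-of-the-envelope sketch in Python using ideal actuator-disk hover physics; every figure, from the 300 kg airframe-plus-passenger to the 250 Wh/kg battery, is an assumption I picked for illustration, not a spec of any real aircraft:

```python
import math

G, RHO = 9.81, 1.225        # gravity (m/s^2), sea-level air density (kg/m^3)
DISK_AREA = 4.0             # total rotor disk area, m^2 (assumed)
EFFICIENCY = 0.7            # combined motor/propeller efficiency (assumed)
SPEC_ENERGY = 250 * 3600.0  # battery: 250 Wh/kg, converted to J/kg
EMPTY_MASS = 300.0          # airframe plus passenger, kg (assumed)

def hover_power(total_mass_kg):
    """Ideal actuator-disk hover power, corrected for efficiency."""
    thrust = total_mass_kg * G
    return thrust ** 1.5 / math.sqrt(2 * RHO * DISK_AREA) / EFFICIENCY

def battery_for(endurance_s, max_iters=25, cap_kg=2000.0):
    """Iterate the spiral: more battery -> more mass -> more power ->
    more battery. Returns the converged battery mass (kg), or None if
    the requirement never closes."""
    battery = 0.0
    for _ in range(max_iters):
        total = EMPTY_MASS + battery
        needed = hover_power(total) * endurance_s / SPEC_ENERGY
        if needed > cap_kg:
            return None  # runaway: the aircraft can't lift its own fuel
        if abs(needed - battery) < 0.01:
            return needed
        battery = needed
    return None

print(battery_for(30 * 60))  # None: 30 minutes of free flight never closes
print(battery_for(3 * 60))   # ~15.7 kg: a short detach reserve is easy
```

With these made-up numbers, a 30-minute free-flight requirement never closes: each pass through the loop demands a bigger battery than the last. A 3-minute detach reserve, which is essentially the tethered design’s pitch, settles at around 16 kg. Change the assumptions and the numbers move, but the shape of the trap stays.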

But far more important than this are the problems posed both by the operation of this technology, and by other factors unique to aviation. The term flying car alone makes me begin to ponder things. One of them being “What is a car?”. What constitutes a car, and how does this differ from a plane or a drone?

Indeed, most of this is just linguistics to be left to the manufacturers and marketers to figure out. Of these, car will likely win just because it’s got so much cool factor.

Personal transportation pod? Nah.
Transport drone? *yawn*

Blurred lines in our understanding of what separates one vehicle from another are the least important issue in this matter, however. I mentioned before the operation of these vehicles as an area of concern. At the very least, a pilot’s license would be a bare minimum requirement. And the training ought to be just as intensive as for a genuine pilot’s license.

This is not a popular sentiment in the public eye (imagine THAT!). But there are many more considerations that come into play when you are in the air that one may not even consider on the ground. While it may be possible to automate the process enough to allow even novices to operate these systems on a perfect day, the typical day can be counted on to be less than ideal in most areas of the world. Not only do you have atmospheric considerations like wind shear and icing, you also have the issue of dealing with mechanical problems showing up 30 to 300 feet in the air. While not all drastic mechanical faults or failures occurring to an earthbound vehicle in a risky situation end in tragedy (like losing a wheel on the interstate), any issue that destabilizes a flying car in flight becomes a potentially VERY bad situation. A panicked or incorrect reaction to the problem (common in the realm of traffic accidents) could endanger not only those in the vehicle itself, but also people on the ground or in nearby vehicles.

Indeed, this comes across as barely more substantive than “people make terrible drivers, therefore, NO FLYING CAR FOR YOU!“. Nonetheless, one trip out onto the road in pretty much any place in the world shows us just how flawed we can be (how much damage we can do?) when we’re earthbound. Or if anecdote is an issue (I agree), consider traffic fatalities. Or more accurately, traffic-related injuries and fatalities vs. air-travel-related injuries and fatalities. Though aviation-related accidents tend to get more coverage (the stakes are higher just due to the volume of passengers involved), overall, the whole system has gotten safer as people have been increasingly removed from it. Indeed, automation HAS had a hand in its share of accidents. However, more often than not, these are attributable to human error: the problem posed by having to take control of and/or diagnose a suddenly uncontrollable juggernaut that has flown itself perfectly for 99.9% of your previous flights.

Indeed, a fairly small flying personal vehicle is much different than a 747. Nonetheless, even a flying smart car can cause a heck of a disruption should it crash in the wrong place. Like, say, the upper inner workings of an electricity substation. As opposed to its ground-bound cousins, which may hit a pole or knock down a fence.

If we are to go this route, then automation is key. Most (if not ALL) aircraft functions of these vehicles NEED to be automated, period. And it wouldn’t hurt to require the presence of a trained and frequently refreshed operator at all times in aircraft mode (for accident mitigation). These vehicles would also benefit from a mandated self-reacting variant of TCAS, something that would have to take into consideration both traffic AND ground hazards (since these vehicles will be operating in much more built-up airspace than other aircraft).
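
The geometric core of such a check is not exotic. Here is a minimal closest-point-of-approach sketch in Python; to be clear, this is not how real TCAS works (actual systems interrogate transponders and coordinate climb/descend advisories between aircraft, and a flying-car variant would also need terrain and obstacle databases), and the separation threshold is invented:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def conflict(p1, v1, p2, v2, sep_m=150.0, horizon_s=60.0):
    """True if two straight-line tracks (positions in m, velocities in
    m/s, as [x, y, z]) come within sep_m of each other within horizon_s."""
    p_rel = [b - a for a, b in zip(p1, p2)]
    v_rel = [b - a for a, b in zip(v1, v2)]
    speed_sq = dot(v_rel, v_rel)
    # Time at which the two tracks are closest, clamped to [0, horizon].
    t = 0.0 if speed_sq == 0 else max(0.0, -dot(p_rel, v_rel) / speed_sq)
    t = min(t, horizon_s)
    miss = math.sqrt(sum((p + v * t) ** 2 for p, v in zip(p_rel, v_rel)))
    return miss < sep_m

# Two hypothetical vehicles, 1 km apart, converging at the same altitude:
print(conflict([0, 0, 100], [50, 0, 0], [1000, 0, 100], [-40, 0, 0]))  # True
# Same geometry, but the second vehicle is 300 m higher:
print(conflict([0, 0, 100], [50, 0, 0], [1000, 0, 400], [-40, 0, 0]))  # False
```

A mandated production system would wrap logic like this in sensor fusion, redundancy, and an automatic avoidance maneuver, which is exactly why it needs to be engineered and certified rather than left as an optional extra.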

While these are just off the top of my head, there are likely more considerations that will become evident later. Because that is how progress works (not all problems become visible in the paperwork). When it comes to most average consumers, I seriously question if this is in their future (at least as a personal vehicle they own). In terms of a business opportunity (flying taxis?), this will also depend on the costs involved. And with the increased amount of research and testing of semi-autonomous and autonomous vehicles in realistic traffic situations, even this is looking less promising. If I am a business and I have the choice between a vehicle that can operate itself and generate pure profit almost indefinitely, and a far more expensive flying vehicle that costs more money to maintain AND insure (think liability coverage), which seems the smarter option?

I get it, future tech is cool (there is a reason why it has increasingly become a focus of mine in the past year or so). And with the drastic changes that will be forced upon humanity due to a few factors coming in the near to mid-term future, we need people to be thinking outside the box. That said, however, some ideas just don’t have a future. Don’t get me wrong, the idea of a flying car is cool, creative, even freeing (one is no longer bound to designated road infrastructure). But given the competition that already exists, I have serious doubts that I will be flying to work or the supermarket in the future.

Having said that, while the technology may be redundant in most cases and geographical situations, one context that comes to mind wherein this technology may be beneficial is any situation in which highway access is restricted. For example, communities in Northern Canada which are hard to access with traditional infrastructure. The small-scale use case is of questionable necessity, but figuring out how to reduce the cost of large-scale passenger and freight access would have many benefits.

The most obvious would be an improvement in both the standard AND the cost of living. Moving more freight cheaply means necessities cost less, and luxuries have a place in the market. It also means lower taxes for Canadian citizens in the long run because, with a lower cost of living, there would be less need to subsidize the consumers AND the freight transporters.

Another use that comes to mind would be in the case of disasters. Hurricanes like Maria and Katrina have hammered home the need for advanced preparation if you are sheltering in place. Unfortunately, that is a very naive view to take, since many people who can’t afford to adequately prepare ALSO can’t afford to evacuate. Thus, you get what transpired in New Orleans and San Juan after the storms. Even without the clusterfuck that was the response to both storms, access is inherently limited by blocked, damaged and destroyed road infrastructure.

Enter small to medium-sized flying transporters (at this point, you have probably figured out that I am not sure what to call them). Situations like large hurricanes would allow for the early preparation and rapid deployment of supplies and essentials to residents just hours after the storm. And once accommodations are set up, residents can be easily and safely evacuated, with no rescuers put in harm’s way.

Amazon and other delivery-oriented businesses are increasingly experimenting with drones to deliver freight on a small scale (flying a pizza or a shower cap directly to your home). However, I’m not sure if this scales up at all.

To conclude, I don’t see money and time devoted to the research and development of flying cars as being the best (or even a GOOD) use of those resources. While there are possible uses for the technology in both commercial AND humanitarian contexts, focusing on personal transport is to waste time and resources when we don’t have any to spare.

“Autonomous Vehicles Might Drive Cities to Financial Ruin” – (Wired)

In a recent post exploring the rise of AI and the dramatic effects it will have on contemporary society as we know it, one of the issues I covered was the soon-to-arrive issue of unemployment on a MASSIVE scale. Comparisons are made to past transitions, but really, there is no precedent. Not just on account of the percentages, but also due to our population alone. There are WAY more of us making tracks now than during any past transition. The stakes could not be higher.

I explored some possible solutions to make the transition less drastic, my favorite being universal basic income. Though I explored that in enough depth to be satisfied, Wired has highlighted a new and equally important problem with this transition: the issue of local budgets becoming EXTREMELY tight on account of autonomous vehicles more than likely operating outside the traditional confines of most city revenue streams (gas taxes, parking tickets, etc.).

If we go into these situations unprepared, the conclusion seems altogether terrifying. Cities that were already structurally deficient in many ways in THIS paradigm now fall apart, filled with aimless and angry people, automated out of existence.

Then there is the now past peak of worldwide oil production, a wall we will also begin to increasingly hit in the coming years. Then again, one terrifyingly dystopian issue at a time.

https://www.wired.com/story/autonomous-vehicles-might-drive-cities-to-financial-ruin/

In Ann Arbor, Michigan, last week, 125 mostly white, mostly male, business-card-bearing attendees crowded into a brightly lit ballroom to consider “mobility.” That’s the buzzword for a hazy vision of how tech in all forms—including smartphones, credit cards, and autonomous vehicles— will combine with the remains of traditional public transit to get urbanites where they need to go.

There was a fizz in the air at the Meeting of the Minds session, advertised as a summit to prepare cities for the “autonomous revolution.” In the US, most automotive research happens within an hour of that ballroom, and attendees knew that development of “level 4” autonomous vehicles—designed to operate in limited locations, but without a human driver intervening—is accelerating.

The session raised profound questions for American cities. Namely, how to follow the money to ensure that autonomous vehicles don’t drive cities to financial ruin. The advent of driverless cars will likely mean that municipalities will have to make do with much, much less. Driverless cars, left to their own devices, will be fundamentally predatory: taking a lot, giving little, and shifting burdens to beleaguered local governments. It would be a good idea to slam on the brakes while cities work through their priorities. Otherwise, we risk creating municipalities that are utterly incapable of assisting almost anyone with anything—a series of sprawling relics where American cities used to be.

A series of sprawling relics where American cities used to be.

Like this?

The fact that Detroit blight jumps right to the forefront of the mind when the topic of urban wastelands is broached is unfortunate. I don’t live anywhere near the city (nor have I ever visited), but even I know that the remaining residents are often doing anything in their power to improve their environment. The evidence is scattered all over YouTube and social media in general.

I decided to use the example, frankly, because I didn’t like the way the author seemed to gloss over the notion of the deterioration of cities using the term relics. A relic to me is something old and with a former purpose, but now obsolete.
Cities (like Detroit) will likely never be obsolete. They will just continue to suffer the effects of entropy, while still being necessary for the survival of their inhabitants.

It may just be a linguistic critique, but it still doesn’t sit well with me.

Moving on, the other reason why Detroit (and really, many similar cities all over the US) comes to mind is that this is not the first time innovation has left locales in the lurch. Detroit (and the others) had other factors at play as well (white flight being one), but a big one lies in the hands of private entities. Automation itself requires fewer positions, and when combined with an interconnected global economy, the results can be tragic.
As much as I am fascinated by technology (and view it as being the new societal stasis from now on), it’s hard not to see it as one of the largest drivers of income inequality.
Workplace innovations are, almost as a rule, NOT good for anything but the bottom line. As you need fewer workers (and can employ them in places with inhumanly low wages), it’s almost inevitable that inequality will only balloon.

In the past, one could balance this out somewhat with the service sector, an industry that is a necessity everywhere and can reliably create cash flow from essentially nothing. It has served as somewhat of a crutch for some unemployed people. These jobs are by no means on par with previous positions (something many slanted commentators overlook, either ignorantly or deliberately), but nonetheless, they serve a purpose.

Or, at least they do for the time being.

The first big round of automation and economic shifts hit the manufacturing sector hard, leaving in its wake many examples of civil and urban decay. Though the new economic realities of free trade were not really an issue for the service industry (generally the opposite, actually), that paradigm may well be starting to shift.
Already, automation is slowly making its presence felt in the world of service. On top of this, online retailers are gradually rendering once absolutely necessary brick-and-mortar retail stores and complexes obsolete. While I can see some areas of the service sector as being permanent, local retail is not one of them. At least not in the numbers it generates today.

Hot or cold food is a challenge from a logistics perspective (when the lengthy supply chains of your average online retailer are considered). This, coupled with people wanting to eat out every so often, will hold a place for the family restaurant (or possibly even the fast food outlet) in the local landscape for the time being. Stores, on the other hand (particularly larger retailers), are a different matter.

There will exist local shops, I have no doubt of that. But I doubt that the selection (or prices) would come anywhere close to what consumers can now get in big box retailers, or will then be able to get with big online retailers. This, combined with the increased automation of future service encounters, could make things very challenging for anyone with any hesitation towards technology. I suspect that many such people will move out of (or be pushed out of) larger cities and towns, far from the machine.

The demise of big-box retail is, on one hand, a good thing. They tended to be notoriously toxic when it came to local economies to begin with, not above using many types of bullying tactics in order to maintain such perks as tax-free status. Consider the case of the big box retailer that relocates a couple of miles over to another county in order to break a union, skip out on a local tax, or respond to whatever local action they deemed punitive. Therein, the original county ends up reaping all the negatives of such an enterprise without having any of the positives.

The world can do with fewer big boxes sucking up energy and contributing to an EXTREMELY energy-inefficient way of life that we can no longer afford, for a number of reasons. But having said that, economically, this will only succeed in turning almost the whole of most countries into the losing county of the big-box relocation scenario. One or two cities that are home to the distribution facilities will see some benefit, but that is it. The rest see nothing but the infrastructural wear and tear, and the trash.
And things probably won’t be rosy even for the seemingly lucky host cities of these distribution centers, because of the power these entities now have. Take the case of Seattle.

It would seem that I am now miles from where I started off (autonomous vehicles & city budgets). But it all plays into the very same thing. Just as I suspect that the majority of future retail distribution will be based out of a small number of warehouses and built around largely autonomous transportation (be it truck, plane or drone), I can also see such a model for autonomous vehicle distribution.
When the time comes that rented autonomous vehicles are reliable enough to allow the majority of people to ditch one of the largest expenses in their lives (a vehicle), it will become increasingly financially feasible to own and maintain large fleets of always-ready autonomous vehicles. Like how self-hauling rental services operate almost ubiquitously on the North American continent with one control center, I can see a similar entity operating huge fleets of self-driving vehicles.

Though these vehicles will utilize some local services (mechanics, cleaners, maybe electricity), as the article states, I doubt it will ever come close to covering the costs of maintaining the infrastructure on which they depend for their operation. Which more than likely means that consumers will be footing the bill, be it through taxes or user fees.
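
To put toy numbers to that thought (every figure below is invented, not drawn from the article), here is what a per-trip access fee covering the lost car-era revenue might look like:

```python
# If driver-era revenue (gas taxes, parking, tickets) disappears, what
# per-trip access fee makes the city whole? All figures are made up.
transport_budget = 200_000_000   # annual city transportation budget, $
car_revenue_share = 0.30         # fraction funded by car-based revenue
av_trips_per_year = 25_000_000   # projected autonomous-vehicle trips

lost_revenue = transport_budget * car_revenue_share
fee_per_trip = lost_revenue / av_trips_per_year
print(f"${lost_revenue:,.0f}/yr lost -> ${fee_per_trip:.2f} per trip")
# $60,000,000/yr lost -> $2.40 per trip
```

Whether a fee like that lands on the fleet operator or gets passed straight through to the rider is, of course, the whole fight.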

The problem, as speaker Nico Larco, director of the Urbanism Next Center at the University of Oregon, explained, is that many cities balance their budgets using money brought in by cars: gas taxes, vehicle registration fees, traffic tickets, and billions of dollars in parking revenue. But driverless cars don’t need these things: Many will be electric, will never get a ticket, and can circle the block endlessly rather than park. Because these sources account for somewhere between 15 and 50 percent of city transportation revenue in America, as autonomous vehicles become more common, huge deficits are ahead.

Cities know this: They’re beginning to look at fees that could be charged for accessing pickup and dropoff zones, taxes for empty seats, fees for parking fleets of cars, and other creative assessments that might make up the difference.

But many states, urged on by auto manufacturers, won’t let cities take these steps. Several have already acted to block local policies regulating self-driving cars. Michigan, for example, does not allow Detroit, a short drive away from that Ann Arbor ballroom, to make any rules about driverless cars.

A preemptive strike.

Not that such a thing surprises me. Auto companies are already blurring the line that once separated them from tech companies. I say this due to a bit of exposure to the computers that drive today’s vehicles, having helped a self-taught mechanic tinker with the tune of his 2013 Ford F150. The internet is a limitless resource for this sort of thing. I taught him the basics of how to use this tool, and he ran with it.

It’s not surprising that automobile manufacturers are greasing the gears in statehouses all over the country already. I wouldn’t be surprised if other tech entities are doing the same thing.

This loss of city revenue comes at a harrowing time. Thousands of local public entities are already struggling financially following the Great Recession. Dozens are stuck with enormous debt loads—usually pension overhangs—that force them to devote unsustainable portions of their incoming revenue to servicing debt. Cities serve as the front lines of every pressing social problem the country is battling: homelessness, illiteracy, inadequate health care, you name it. They don’t have any resources to lose.

The rise of autonomous vehicles will put struggling sections of cities at a particular disadvantage. Unemployment may be low as a national matter, but it is far higher in isolated, majority-minority parts of cities. In those sharply-segregated areas, where educational and health outcomes are routinely far worse than in majority white areas, the main barrier to employment is access to transport. Social mobility depends on being able to get from point A to point B at a low cost.

Take Detroit, a city where auto insurance is prohibitively expensive and transit has been cut back, making it hard for many people to get around. “The bus is just not coming,” Mark de la Vergne, Detroit’s Chief of Mobility Innovation, told the gathering last week, adding that most people in the City of Detroit make less than $57,000 a year and can’t afford a car. De la Vergne told the group in the Ann Arbor ballroom about a low-income Detroit resident who wanted a job but couldn’t even get to the interview without assistance in the form of a very expensive Lyft ride.

As explored before, I suspect that the scaled economies of owning and operating massive fleets of self-driving vehicles may help with this problem. But with the shrunken job market and other local problems coming down the pipe, this hardly even seems a benefit worth mentioning.

That story is, in a nutshell, the problem for America. We have systematically underinvested in public transit: less than 1 percent of our GDP goes to transit. Private services are marketed as complements to public ways of getting around, but in reality these services are competitive. Although economic growth is usually accompanied by an uptick in public transit use, ridership is down in San Francisco, where half the residents use Uber or Lyft. Where ridership goes down, already-low levels of investment in public transit will inevitably get even lower.

When driverless cars take the place of Uber or Lyft, cities will be asked to take on the burden of paying for low-income residents to travel, with whatever quarters they can find lying around in city couches. Result: Cities will be even less able to serve all their residents with public spaces and high-quality services. Even rich people won’t like that.

America has been under-funding essential services across the board for decades. The fact that this is likely to REALLY bite the nation in the ass when it is least prepared to deal with it is just the cherry on top.

Also, I don’t know that Uber and Lyft will necessarily be replaced. I suspect that they may still exist, but with far fewer employees. Who knows, one (or both) may become one of the autonomous vehicle behemoths I see existing down the road.

As for the comment about rich people . . . get real. Nothing matters outside the confines of the gated communities in which they reside. Even when the results of their actions are seemingly negative to them in the long term.

Money is a powerful blinder.

It will take great power and great leadership to head off this grim future. Here’s an idea, from France: There, the government charges 3 percent on the total gross salaries of all employees of companies with more than 11 employees, and the proceeds fund a local transport authority. (The tax is levied on the employer not the employee, and in return, employees receive subsidized or free travel on public transport.)

This helps with the public transportation angle, indeed. But it doesn’t even touch the infrastructure spending shortfall, a far more massive asteroid for most localities.
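
That said, the arithmetic on a levy like the French one is straightforward. A rough sketch (the workforce and salary figures are invented for illustration):

```python
# The quoted scheme: 3% of gross salaries at firms with more than 11
# employees funds the local transport authority. Figures are made up.
levy_rate = 0.03
covered_workers = 400_000    # employees at firms above the threshold
avg_gross_salary = 40_000    # average gross salary, $/yr (assumed)

transit_fund = levy_rate * covered_workers * avg_gross_salary
print(f"${transit_fund:,.0f} per year for the transport authority")
# $480,000,000 per year
```

Respectable money for transit, but as noted above, it does nothing for roads, water, or the rest of the infrastructure ledger.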

At the Ann Arbor meeting, Andreas Mai, vice president of market development at Keolis, said that the Bordeaux transit authority charges a flat fee of about $50 per month for unlimited access to all forms of transit (trams, trains, buses, bikes, ferries, park and ride). The hard-boiled US crowd listening to him audibly gasped at that figure. Ridership is way up, the authority has brought many more buses into service, and it is recovering far more of its expenditures than any comparable US entity. Mai said it required a very strong leader to pull together 28 separate transit systems and convince them to hand over their budgets to the local authority. But it happened.

It’s all just money. We have it; we just need to allocate it better. That will mean viewing public transit as a crucial element of well-being in America. And, in the meantime, we need to press Pause on aggressive plans to deploy driverless cars in cities across the United States.

Public transit is just a part of the problem. I suspect a very small part, at that. And likely the easiest to deal with.
You cannot have a public transportation system (or at least not a good one) without addressing infrastructure deficits. And this is just the transportation angle. You also have to contend with water & sewage, solid waste removal, seasonal maintenance and other ongoing expenses.

Indeed, it is a matter of money and funding allocation. However, the majority of the allocation HAS to start in Washington, in the form of taxation on wealth. As bitter a pill as that is to swallow, failure to take that course of action may well make us nostalgic for the post-2016 turmoil. Pretty much every leader post-Reagan added a little more fuel to the powder keg, but failure to adequately prepare for the coming changes may well set the whole damn thing off.

As for pressing pause on the deployment of driverless vehicles in the cities of the world, we already know that such a plan won’t work. The levers of power are being greased as we speak. Thus, the only option is preparation. Exploration. Brainstorming.

There likely is not going to be a paradigm that fits all contexts, and there will be no utopias. But there is bound to be something between the extremes of absolute privatization and dystopia.