“Safeguarding Human Rights In The Era Of Artificial Intelligence” – (Council Of Europe)

Today I will explore yet another (hopefully) different angle on a topic that has grown very fascinating to me: how can human rights be safeguarded in the age of sentient machines? An interesting question, since I think it may also link back to pre-AI technologies.

Let’s begin.


The use of artificial intelligence in our everyday lives is on the increase, and it now covers many fields of activity. Something as seemingly banal as avoiding a traffic jam through the use of a smart navigation system, or receiving targeted offers from a trusted retailer is the result of big data analysis that AI systems may use. While these particular examples have obvious benefits, the ethical and legal implications of the data science behind them often go unnoticed by the public at large.

Artificial intelligence, and in particular its subfields of machine learning and deep learning, may only be neutral in appearance, if at all. Underneath the surface, it can become extremely personal. The benefits of grounding decisions on mathematical calculations can be enormous in many sectors of life, but relying too heavily on AI which inherently involves determining patterns beyond these calculations can also turn against users, perpetrate injustices and restrict people’s rights.

The way I see it, AI in fact touches on many aspects of my mandate, as its use can negatively affect a wide range of our human rights. The problem is compounded by the fact that decisions are taken on the basis of these systems, while there is no transparency, accountability or safeguards in how they are designed, how they work and how they may change over time.

One thing I would add to the author’s final statement would be the lack of safeguards in terms of what kind of data these various forms of AI are drawing their conclusions from. While not the only factor that could contribute to seemingly flawed results, I would think that bad data inputs are one of (if not THE) most important factors.

I base this on observation of the many high-profile cases of AI gone (seemingly) haywire. Whether it is emphasized in the media coverage or not, biased data inputs have almost always been mentioned as a factor.

If newly minted AI software is the mental equivalent of a child, then this data is the equivalent of religion, racism, sexism or other indoctrinated biases. Thus my rule of thumb is this: if the data could cause indoctrination of a child, then it’s unacceptable for a learning-stage algorithm.

Encroaching on the right to privacy and the right to equality

The tension between advantages of AI technology and risks for our human rights becomes most evident in the field of privacy. Privacy is a fundamental human right, essential in order to live in dignity and security. But in the digital environment, including when we use apps and social media platforms, large amounts of personal data are collected – with or without our knowledge – and can be used to profile us, and produce predictions of our behaviours. We provide data on our health, political ideas and family life without knowing who is going to use this data, for what purposes and how.

Machines function on the basis of what humans tell them. If a system is fed with human biases (conscious or unconscious) the result will inevitably be biased. The lack of diversity and inclusion in the design of AI systems is therefore a key concern: instead of making our decisions more objective, they could reinforce discrimination and prejudices by giving them an appearance of objectivity. There is increasing evidence that women, ethnic minorities, people with disabilities and LGBTI persons particularly suffer from discrimination by biased algorithms.

Excellent. This angle was not overlooked.

Studies have shown, for example, that Google was more likely to display adverts for highly paid jobs to male job seekers than female. Last May, a study by the EU Fundamental Rights Agency also highlighted how AI can amplify discrimination. When data-based decision making reflects societal prejudices, it reproduces – and even reinforces – the biases of that society. This problem has often been raised by academia and NGOs too, who recently adopted the Toronto Declaration, calling for safeguards to prevent machine learning systems from contributing to discriminatory practices.

Decisions made without questioning the results of a flawed algorithm can have serious repercussions for human rights. For example, software used to inform decisions about healthcare and disability benefits has wrongfully excluded people who were entitled to them, with dire consequences for the individuals concerned. In the justice system too, AI can be a driver for improvement or an evil force. From policing to the prediction of crimes and recidivism, criminal justice systems around the world are increasingly looking into the opportunities that AI provides to prevent crime. At the same time, many experts are raising concerns about the objectivity of such models. To address this issue, the European Commission for the efficiency of justice (CEPEJ) of the Council of Europe has put together a team of multidisciplinary experts who will “lead the drafting of guidelines for the ethical use of algorithms within justice systems, including predictive justice”.

Though this issue tends to be viewed from the Black Box angle (you can’t see what is going on inside the algorithms), I think it reflects more on the problem of proprietary systems running independently, as they please.

It reminds me of the situation with corporations, large-scale data miners and online security. The EU sets the standard in this area by levying huge fines for data breaches, particularly those that cause consumer suffering (North America lags behind in this regard).
I think that a statute similar to the GDPR could handle this issue nicely on a global scale. Just as California was/is the leader in many forms of safety regulation due to its market size, the EU has now stepped into that role in terms of digital privacy. It can also do the same for regulating biased AI (at least for the largest of entities).

It won’t stop your local police department or courthouse (or even your government!) from running flawed systems. For that, mandated transparency becomes a necessity of operation. Governing bodies (and international overseers) have to police the judicial systems of the world and take immediate action if necessary. For example, by cutting AI operations funding to a police organization that either refuses to follow the transparency requirements or refuses to fix diagnosed issues in its AI system.

Stifling freedom of expression and freedom of assembly

Another right at stake is freedom of expression. A recent Council of Europe publication on Algorithms and Human Rights noted for instance that Facebook and YouTube have adopted a filtering mechanism to detect violent extremist content. However, no information is available about the process or criteria adopted to establish which videos show “clearly illegal content”. Although one cannot but salute the initiative to stop the dissemination of such material, the lack of transparency around the content moderation raises concerns because it may be used to restrict legitimate free speech and to encroach on people’s ability to express themselves. Similar concerns have been raised with regard to automatic filtering of user-generated content, at the point of upload, supposedly infringing intellectual property rights, which came to the forefront with the proposed Directive on Copyright of the EU. In certain circumstances, the use of automated technologies for the dissemination of content can also have a significant impact on the right to freedom of expression and of privacy, when bots, troll armies, targeted spam or ads are used, in addition to algorithms defining the display of content.

The tension between technology and human rights also manifests itself in the field of facial recognition. While this can be a powerful tool for law enforcement officials for finding suspected terrorists, it can also turn into a weapon to control people. Today, it is all too easy for governments to permanently watch you and restrict the rights to privacy, freedom of assembly, freedom of movement and press freedom.

1.) I don’t like the idea of private entities running black box proprietary algorithms with the aim of combating things like copyright infringement or extremism either. It’s hard to quantify, really, because in a way we sold out our right to complain when we decided to use the service. Many of the largest online platforms today have indeed become pillars of communication for millions, a de facto public square, but this isn’t the problem of the platforms. This is what happens when governments keep their hands off emerging technologies.

My solution to this problem revolved around building an alternative. I knew this would not be easy or cheap, but it seemed that the only way to ensure truly free speech online was to ditch the primarily ad-supported infrastructure of the modern internet. This era of Patreon and crowdfunding has helped in this regard, but not without a set of its own consequences. In a nutshell, when you remove the need for everyday people to fact-check (or otherwise verify) new information that they may not quite understand, you end up with the intellectual dark web:
a bunch of debunked or unimportant academics, a pseudoscience-peddling ex-psychiatrist made famous by an infamous legal battle (against people he sued for exercising their free speech rights), and a couple of dopey podcast hosts.

Either way, while I STILL advocate for one (or many) alternatives in the online ecosystem, it seems to me that, at least in the short term, regulations may need to come to the aid of the freedom of speech and expression rights of everyday people. Yet it is a delicate balance, since we’re dealing with sovereign entities in themselves.

The answers may seem obvious at a glance. For example, companies should NOT have been allowed to up and boot Alex Jones off of their collective platforms just for the purpose of public image (particularly after cashing in on the phenomenon for YEARS). Yet in allowing for black and white actions such as that, I can’t help but wonder if it could ever come back to bite us. For example, someone caught using copyrighted content improperly having their entire YouTube library deleted forever.

2.) I don’t think there is a whole lot one can do to avoid being tracked in the digital world, short of moving far from cities (if not off the grid entirely). At this point, it has just become part of the background noise of life. Carrying around a GPS-enabled smartphone and using plastic cards is convenient, and it’s almost impossible not to generate some form of metadata in one’s day-to-day life. So I don’t really worry about it, short of attempting to ensure that my search-engine-accessible breadcrumbs are as few as possible.

It’s all you really can do.

What can governments and the private sector do?

AI has the potential to help human beings maximise their time, freedom and happiness. At the same time, it can lead us towards a dystopian society. Finding the right balance between technological development and human rights protection is therefore an urgent matter – one on which the future of the society we want to live in depends.

To get it right, we need stronger co-operation between state actors – governments, parliaments, the judiciary, law enforcement agencies – private companies, academia, NGOs, international organisations and also the public at large. The task is daunting, but not impossible.

A number of standards already exist and should serve as a starting point. For example, the case-law of the European Court of Human Rights sets clear boundaries for the respect for private life, liberty and security. It also underscores states’ obligations to provide an effective remedy to challenge intrusions into private life and to protect individuals from unlawful surveillance. In addition, the modernised Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data adopted this year addresses the challenges to privacy resulting from the use of new information and communication technologies.

States should also make sure that the private sector, which bears the responsibility for AI design, programming and implementation, upholds human rights standards. The Council of Europe Recommendations on human rights and business and on the roles and responsibilities of internet intermediaries, the UN guiding principles on business and human rights, and the report on content regulation by the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, should all feed the efforts to develop AI technology which is able to improve our lives. There needs to be more transparency in the decision-making processes using algorithms, in order to understand the reasoning behind them, to ensure accountability and to be able to challenge these decisions in effective ways.

Nothing for me to add here. Looks like the EU (as usual) is well ahead of the curve in this area.

A third field of action should be to increase people’s “AI literacy”.


In an age where such revered individuals as Elon Musk are saying such profoundly stupid things as this, AI literacy is an absolute necessity.

States should invest more in public awareness and education initiatives to develop the competencies of all citizens, and in particular of the younger generations, to engage positively with AI technologies and better understand their implications for our lives. Finally, national human rights structures should be equipped to deal with new types of discrimination stemming from the use of AI.

1.) I don’t think that one has to worry so much about the younger generations as about the existing generations. Young people have grown up in the internet age, so all of this will be natural to them. Guidance as to the proper use of this technology is all that should be necessary.

Older people are a harder sell. If resources were to be put anywhere, I think it should be into programs which attempt to make aging generations more comfortable with increasingly modernized technology. If someone is afraid to operate a smartphone or a self-checkout, where do you even begin with explaining Alexa, Siri or Cortana?

2.) Organizations do need to be held accountable for their misbehaving AI software, particularly if it causes a life-altering problem. Up to and including the right to legal action, if necessary.

It is encouraging to see that the private sector is ready to cooperate with the Council of Europe on these issues. As Commissioner for Human Rights, I intend to focus on AI during my mandate, to bring the core issues to the forefront and help member states to tackle them while respecting human rights. Recently, during my visit to Estonia, I had a promising discussion on issues related to artificial intelligence and human rights with the Prime Minister.

Artificial intelligence can greatly enhance our abilities to live the life we desire. But it can also destroy them. It therefore requires strict regulations to avoid morphing into a modern Frankenstein’s monster.

Dunja Mijatović, Commissioner for Human Rights

I don’t particularly like the darkened tone of this part of the piece. But I like that someone of influence is starting to ask questions, and getting the ball rolling.

It will be interesting to see where this all leads in the coming months, years and decades.


I Need A Smoke

Here we are again. Autumn.

The leaves are turning. The air has an unmistakable chill. Pumpkin spice is back on the menu. Winter is coming.

12 years ago, this was a great time to be alive. I was just beyond the struggles of high school. And I was just entering the realm of adulthood. Getting carded at the liquor mart or the convenience store was a thrill.

I’m not sure how long it lasted. 6 months to a year. But those months were great. The high point of my life. We passed the time with friends drinking, smoking and a little bit of toking. It was a welcome break from what lay behind us.

12 years later, it’s autumn again. The leaves are changing. There is a chill in the air. Pumpkin spice has been on the menu since late August.

But things have changed.

Everyone I once knew has either moved on, or moved away. Life is regimented and busy, but with little to show for it aside from stress and debt.

I exist. No more, no less.

And so it is. When the leaves turn beautiful, when the wind gets chilly, I think of the Halloween hot tub party. I think of a time when things were far less complicated and life was far more worth living.

At this time of year, I need a smoke.


“Unboxing Google’s 7 New Principles Of Artificial Intelligence” – (aitrends)

Today, I am going to look into Google’s recent release of its 7 new principles of artificial intelligence. Though the release was made at the beginning of July, life happens, so I haven’t been able to get around to it until now.


How many times have you heard that Artificial Intelligence (AI) is humanity’s biggest threat? Some people think that Google brought us a step closer to a dark future when Duplex was announced last month, a new capability of Google’s digital Assistant that enables it to make phone calls on your behalf to book appointments with small businesses.

The root of the controversy lay in the fact that the Assistant successfully pretended to be a real human, never disclosing its true identity to the other side of the call. Many tech experts wondered if this is an ethical practice, or if it’s necessary to hide the digital nature of the voice.

Right off the bat, we’re into some interesting stuff. An assistant that can appear to do all of your phone-call-related chores FOR you.

On one hand, I can understand the ethical implications. Without confirming the nature of the caller, it could very well be seen as a form of fraud. It’s seen as such already when a person contacts a service provider on behalf of another person without making that part clear (even if they authorize the action!). Indeed, most of the time, no one on the other end will likely even notice. But you never know.

When it comes to disguising the digital nature of the voice of such an assistant, I don’t see any issue with this. While it could be seen as deceptive, I can also see many businesses hanging up on callers that come across as being too robotic. Consider, the first pizza ever ordered by a robot.

Okay, not quite. We are leaps and bounds ahead of that voice in terms of, well, sounding human. Nonetheless, there is still an unmistakably automated feel to such digital assistants as Siri, Alexa, and Cortana.

In this case, I don’t think that Google (nor any other future developer or distributor of such technology) has to worry about any ethical issues surrounding this, simply because it is the onus of the user to ensure the proper use of the product or service (to paraphrase every TOS agreement ever).

One big problem I see coming with the advent of this technology is that the art of deception of the worst kind is going to get a whole lot easier. One example that comes to mind is those OBVIOUSLY computer-narrated voices belching out all manner of fake news to the YouTube community. For now, the fakes are fairly easy for the wise to pick up on, because they haven’t quite learned the nuances of the English language (then again, have I?). In the future, this is likely to change drastically.
Another example of a problem posed by this technology would be telephone scamming. Phishing scams originating in the third world are currently often hindered by the language barrier; it takes a lot of study to master enough English to fool most people in English-speaking nations. Enter this technology, and that barrier is gone.

And on the flip side of the coin, anything that is intelligent enough to make a call on your behalf can presumably also be programmed in the reverse. To take calls. Which would effectively eliminate the need for a good 95% of the call center industry. Though some issues may need to be dealt with by a human, most common sales, billing, or tech support problems can likely be dealt with autonomously.

So ends that career goal.

Nonetheless, I could see myself having a use for such technology. I hate talking on the phone with strangers, even for a short time. To have the need for that eliminated would be VERY convenient. What can be fetched by a tap and a click already IS, so eliminating what’s left . . . I’m in millennial heaven.

You heard it here first . . .

Millennials killed THE ECONOMY!

Google was also criticized last month over another sensitive topic: the company’s involvement in a Pentagon program that uses AI to interpret video imagery and could be used to improve the targeting of drone strikes. Thousands of employees signed a letter protesting the program and asking for change:

“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

Time to ruffle some progressive feathers.

In interpreting this, I am very curious about what is meant by the word improve. What does it mean to improve the targeting of drone strikes? Improve the aiming accuracy of the weaponry? Or improve the quality of the targets (more actual terrorist hideouts, and fewer family homes)?

This has all become very confusing to me. One could even say that I am speaking out of both sides of my mouth.
On one hand, when I think of this topic, my head starts spitting out the common, deliberately dehumanizing war language: terrorists, combatants, the enemy. Yet here I am, pondering if improved drone strikes are a good thing.

I suppose that it largely depends on where your interests are aligned. If you are aligned more nationalistically than humanistically, then this question is legitimate. If you work for, or are a shareholder of, a defense contractor, then this question is legitimate. Interestingly, this could include me, being a paying member of both private and public pension plans (pension funds are generally invested in the market).

Even the use of drones alone COULD be seen as cowardly. On the other hand, that would entail that letting loose the troops onto the battlefields, like the great wars of the past, would be the less cowardly approach: it is less cowardly for the death ratio to be more equal.
Such an equation would likely be completely asinine to most. The obvious answer is the method with the least bloodshed (at least for our team). Therefore, “BOMBS AWAY!” from a control room somewhere in the desert.

For most, it likely boils down to a matter of if we HAVE to. If we HAVE to go to war, then this is the best way possible. Which then leads you to the obvious question: “Did we have to go to war?”. Though the answers are rarely clear, they almost always end up leaning towards the No side. And generally, the public never finds this out until after the fact. Whoops!

The Google staff (as have other employees in Silicon Valley, no doubt) have made their stance perfectly clear: no warfare R & D, PERIOD. While the stance is enviable, I can’t help but also think that it comes off as naive. I won’t disagree that the humanistic position would be not to enable the current or future endeavors of the military-industrial complex (of which they are now a part, unfortunately). But even if we take the humanist stance, many bad actors the world over have no such reservations.
Though the public is worried about a menace crossing the border disguised as a refugee, the REAL menace sits in a computer lab. Without leaving the comfort of a chair, they can cause more chaos and damage than one could even dream of.

The next war is going to be waged in cyberspace. And at the moment, a HUGE majority of the infrastructure we rely upon for life itself is in some stage of insecurity ranging from wide open to “Password:123456”.
If there is anyone who is in a good position to prepare for this new terrain of action, it’s the tech industry.

On one hand, as someone who leans in the direction of humanism, war is nonsense and the epitome of a lack of logic. But on the other hand, if there is one thing that our species has perfected, it’s the art of taking each other out.

I suspect this will be our undoing. If it’s AI gone bad, I will be very surprised. I suspect it will be either mutually assured destruction gone real, or climate change gone wild. Which I suppose is its own form of mutually assured destruction.

I need a beer.

Part of this exploration was based around a segment of the September 28, 2018 episode of Real Time, where Bill has a conversation about the close relationship between astrophysicists and the military (starts at 31:57). The man’s anti-philosophical views annoyed me when I learned of them 3 years ago. And it seems that he has become a walking example of what you get when you put the philosophy textbooks out with the garbage.

A “clear policy” around AI is a bold ask because none of the big players have ever done it before, and for good reasons. It is such a new and powerful technology that it’s still unclear how many areas of our lives we will dare to infuse with it, and it’s difficult to set rules around the unknown. Google Duplex is a good example of this: it’s a technological development that we would have considered “magical” 10 years ago, yet today it scares many people.

Regardless, Sundar Pichai not only complied with the request, but took it a step further by creating 7 principles that the company will promote and enforce as one of the industry drivers of AI.

When it comes to this sort of thing, I am not so much scared as I am nervous. Nervous of numerous entities (most of them private for-profits, and therefore not obligated to share data) all working on this independently and having to self-police. This was how the internet was allowed to develop, and that has not necessarily been a good thing. I need go no further than the 2016 election to showcase what can happen when a handful of entities has far too much influence on, say, setting the mood for an entire population. It’s not exactly mind control as dictated by Alex Jones, but for the purpose of messing with the internal sovereignty of nations, the technology is perfectly suitable.

Yet another thing that annoys me about those who think they are red-pilled because they can see a conspiracy around every corner.

I always hear about mind control and the mainstream media, even though the traditional mainstream media has shrinking influence with each passing year. It’s being replaced by preference-tailored social media platforms that don’t just serve up what you love, but also often (and unknowingly) paint a false image of how the world looks. While facts and statistics say one thing, my YouTube suggestions and overall filter bubbles say another.

It’s not psi-ops and it doesn’t involve chemtrails, but it’s just as scary. Considering that most of the people developing this influential technology also don’t fully grasp what they have developed.
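The filter-bubble dynamic can be sketched in a few lines. This is a deliberately minimal toy (the categories and click counts are hypothetical, purely for illustration): a recommender that always serves whatever a user has clicked on most will amplify even a slight initial tilt into total dominance.

```python
from collections import Counter

# Hypothetical starting history: a barely noticeable tilt toward one topic.
clicks = Counter({"news": 1, "music": 1, "conspiracy": 2})

def recommend(history):
    # Serve whatever the user has clicked on most so far.
    return history.most_common(1)[0][0]

# Each round, the platform shows the top category and the user clicks it.
for _ in range(20):
    shown = recommend(clicks)
    clicks[shown] += 1

print(clicks.most_common())  # 'conspiracy' now dominates with 22 clicks
```

Real recommender systems are vastly more sophisticated, but the feedback loop is the same shape: what you engage with determines what you are shown, which determines what you engage with.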

1. Be socially beneficial

For years, we have dealt with comfortable boundaries, creating increasingly intelligent entities in very focused areas. AI is now getting the ability to switch between different domain areas in a transparent way for the user. For example, having an AI that knows your habits at home is very convenient, especially when your home appliances are connected to the same network. When that same AI also knows your habits outside home, like your favorite restaurants, your friends, your calendar, etc., its influence in your life can become scary. It’s precisely this convenience that is pushing us out of our comfort zone.

This principle is the most important one since it bows to “respect cultural, social, and legal norms”. It’s a broad principle, but it’s intended to ease that uncomfortable feeling by adapting AI to our times and letting it evolve at the same pace as our social conventions do.

Truth be told, I am not sure I understand this one (at least the explanation). It seems like the argument is that the convenience of it all will help push people out of their comfort zone. But I am a bit perplexed as to what that entails.
Their comfort zone, as in their hesitation in allowing an advanced algorithm to take such a prominent role in their life? Or their comfort zone as in, helping to create opportunities for new interactions and experiences?

In the case of the former, it makes perfect sense. One need only look at the 10-deep line at the human-run checkout and the zero-deep line at the self-checkout to understand this hesitation.
As for the latter, most would be likely to notice a trend in the opposite direction. An introvert’s dream could be seen as an extrovert’s worst nightmare. Granted, many of the people making comments (at least in my life) about how technology isolates the kids tend to be annoyingly pushy extroverts who see that way of being as the norm. Which can be annoying, in general.

Either way, I suspect that this is another case of the onus being on the user to define their own destiny. Granted, that is not always easy if the designers of this technology don’t fully understand what they are introducing to the marketplace.

If this proves anything, it’s that this technology HAS to have regulatory supervision from entities whose well-being (be it reputational or financial) is not tied to the success or failure of the project. Time and time again, we have seen that when allowed to self-police, private for-profit entities are willing to bury information that raises concerns about profitable enterprises. In a nutshell, libertarianism doesn’t work.

In fact, with the way much of this new technology often hijacks and otherwise finds ways to interact with us via our psychological flaws, it would be beneficial to mandate long-term real-world testing of these technologies, in the same way that newer drugs must undergo trials before they can be released on the market.

Indeed, the industry will do all it can to fight this, because it will effectively bring the process of innovation to a standstill. But at the same time, most of the worst offenders for manipulating the psyche of their user base do it strictly because the attention economy is so cutthroat.
Thus, would this really be stifling technology? Or would it just be forcing the cheaters to stop placing their own self-interests above their users?

2. Avoid creating or reinforcing unfair bias

AI can become racist if we allow it. A good example of this happened in March 2016, when Microsoft unveiled an AI with a Twitter interface, and in less than a day people taught it the worst aspects of our humanity. AI learns by example, so ensuring that safeguards are in place to avoid this type of situation is critical. Our kids are going to grow up in a world increasingly assisted by AI, so we need to educate the system before it’s exposed to internet trolls and other bad players.

The author illustrates a good point here, though I am unsure if they realize that they answered their own question with their explanation.
Machines are a blank slate. Not unlike children growing up to eventually become adults, they will be influenced by the data that they are presented with. If they are exposed to only neutral data, they are likely less prone to coming to biased conclusions.

So far, almost all of the stories I have come across about AI turning racist, sexist, and so on can be traced back to the data stream it was trained on. Since we understand that the dominant ideologies of parents tend to be reflected in their children, this finding should be fairly obvious. And unlike the difficulty of reversing these biases in humans, an AI can presumably be shut down and retrained. A mistake that can be corrected.
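The "blank slate" point above can be illustrated with a toy model. This is only a sketch, not how any real system is built: a naive word-counting classifier fed made-up training data (the group names and labels are entirely hypothetical). It shows how a model with no opinions of its own simply echoes whatever slant is in its data stream:

```python
from collections import Counter, defaultdict

# Toy "training data". The model has no views of its own; whatever
# slant exists in these examples becomes the model's slant.
# (Group names and labels here are entirely hypothetical.)
biased_corpus = [
    ("group_a is great", "positive"),
    ("group_a is friendly", "positive"),
    ("group_b is terrible", "negative"),
    ("group_b is awful", "negative"),
]

# Count how often each word co-occurs with each label
counts = defaultdict(Counter)
for text, label in biased_corpus:
    for word in text.split():
        counts[word][label] += 1

def predict(text):
    """Score a sentence by summing the per-word label counts."""
    tally = Counter()
    for word in text.split():
        tally.update(counts[word])
    return tally.most_common(1)[0][0] if tally else "unknown"

# The model simply reproduces the skew it was fed:
print(predict("group_a is here"))  # → positive
print(predict("group_b is here"))  # → negative
```

Nothing in the code "decided" to be biased; the skew is entirely inherited from the corpus, which is the point the stories above keep making.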

Which highlights another interesting thing about this line of study: it forces one to seriously consider things like unconscious human bias. As opposed to the common anti-SJW faux-intellectual stance that is:

“Are you serious?! Sexism without being overtly sexist?! Liberal colleges are turning everyone into snowflakes!”

But then again, what is a filter bubble good for if not excluding nuance.

3. Be built and tested for safety

This point goes hand in hand with the previous one. In fact, Microsoft's response to the Tay fiasco was to take it down and admit an oversight in the type of scenarios the AI was tested against. Safety should always be one of the first considerations when designing an AI.

This is good, but coming from a private for-profit entity, it really means nothing. One has to have faith (hello, Apistevists!) that Alphabet/Google won't bury any negative findings made with the technology, particularly if it proves profitable. A responsibility that I would entrust to no human with billions of dollars of revenue at stake.

Safety should always be the first consideration when designing ANYTHING. But we know how this plays out when an industry is allowed free rein.
In some cases, airplane cargo doors fly off, or fuel tanks puncture and catch fire, and people die. In others, sovereign national elections get hijacked and culminate in a candidate whose legitimacy many question.

4. Be accountable to people

The biggest criticism Google Duplex received was over whether it was ethical to mimic a real human without letting other humans know. I'm glad that this principle simply states that "technologies will be subject to appropriate human direction and control", since it doesn't discount the possibility of building human-like AIs in the future.

An AI that makes a phone call on our behalf must sound as human as possible, since that's the best way of ensuring a smooth interaction with the person on the other side. Human-like AIs should be designed with respect, patience and empathy in mind, but also with human monitoring and control capabilities.

Indeed. But we must not forget the reverse. People must be accountable for what they do with their AI tools.

Maybe I am playing the part of Captain Obvious. Nonetheless, it has to be said. No one blames the manufacturer of bolt cutters if one of its customers uses them to cut a bike lock.

5. Incorporate privacy design principles

When the convenience created by AI intersects with our personal feelings or private data, a new concern is revealed: our personal data can be used against us. The Cambridge Analytica incident, in which personal data was shared with unauthorized third parties, magnified the problem by jeopardizing users' trust in technology.

Google didn't use many words on this principle, probably because it's the most difficult one to clarify without directly impacting their business model. However, it represents the biggest tech challenge of the decade: finding the balance between giving up your privacy and getting a reasonable benefit in return. Providing "appropriate transparency and control over the use of data" is the right mitigation, but it won't make us less uncomfortable when an AI knows the most intimate details of our lives.

I used to get quite annoyed with people who were seemingly SHOCKED at how various platforms used their data, yet ignorant of the fact that they themselves volunteered the lion's share of it openly.
Data protection has always been on my radar, particularly in terms of what I openly share with the world at large. Over the years, I have taken control of my online past, removing most breadcrumbs left over from my childhood and teenage years from search queries. However, I understand that taking control within even ONE platform can be a daunting task. Even for those who choose to review these things on Facebook, it's certainly not easy.

There is an onus on both parties.

Users themselves should, in fact, be more informed about what they are divulging (and to whom) if they are truly privacy-conscious. Which makes me think of another question: what is the age of consent for privacy disclosure?

Facebook sets this age at 18, though it's easy to game (my own family has members who allowed their kids to join at 14 or 15!). Parents allowing this is one thing, but consider the new parent who constantly uploads and shares photographs of their children. Since many people don't bother with (worry about?) their privacy settings, these photos are often in the public domain. Thus, by the time the child reaches a stage where they can decide whether or not they agree with this use of their data, it's too late.

Most children (and later, adults) will never think twice about this, but for those who do, what is the recourse?
Asking the parent to take them out of the public domain is an option. But consider the issue once the horse is already out of the barn.

One of my cousins (or one of their friends) once posted a picture of themselves on some social media site drinking a whole lot of alcohol (I'm not sure if it was staged or not). Years later, they came across this image on a website labeled "DAMN, they can drink!".
After the admin was contacted, they agreed to take the image down for my cousin. But in reality, they didn't have to. It was in the public domain to begin with, so it was up for grabs.

How would this play out if the image were of a young child or baby who was too young to consent to waiving their right to privacy, and the person putting the photo in the public domain was a parent/guardian or another family member?

I have taken to highlighting this seemingly minuscule issue lately because it may someday become a real one. Maybe one that the criminal justice systems of the world will have to figure out how to deal with. And without any planning as to how that will play out, the end result is almost certain to be bad. Just as it is in the many cases where judges and politicians have had the responsibility of blindly legislating shiny new technological innovations thrust upon them.

To conclude, privacy is a two-way street. People ought to give the issue more attention than they give a post they scroll past, because future events could depend on it. But at the same time, platforms REALLY need to be more forthcoming about exactly WHAT they are collecting and how they are using this data. Changing these settings should also be made a task of relative ease.

But first and foremost, the key to this is education. Though we teach the basics of how to operate technology in schools, most of the exposure to the main aspect of this technology (interaction) is self-taught. People learn how to use Facebook, Snapchat and MMS services on their phones, but they often have little guidance on what NOT to do.

What pictures NOT to send in the spur of the moment. How not to behave in a given context. Behaviors with consequences ranging from regret to dealing with law enforcement.

While artificial intelligence does, in fact, give us a lot to think about and plan for, it is important to note that the same goes for many technologies available today. Compared to what AI is predicted to become, this tech is often seen as less intelligent than it is mechanical. Nonetheless, modern technology plays an ever-growing role in the day-to-day lives of connected citizens of the world, of all ages and demographics. And as internet speeds keep increasing and high-speed broadband keeps getting more accessible (particularly in rural areas of the first world, and in the global south), more people will keep joining the cloud. If they are not adequately prepared for the experience that follows, the results could be VERY interesting. For example, fake news tends to mean ignorance for most Westerners, but in the right cultural context it can entail death and genocide. In fact, in some nations this is no longer theoretical. People HAVE died because of viral and incendiary memes propagated on various social media platforms.


Before we even begin to ponder the ramifications of what does not yet exist, we have to get our ducks in a row in terms of our current technological context. It will NOT be easy, and will involve partnerships between surprising bedfellows. But it will also help smooth the transition into an increasingly AI-dominated future.

Like it or not, it is coming.


Flying Cars – The Future? Or Future Disaster?

This article revives an interesting topic that I have not seen discussed recently, amidst the noise generated by autonomous vehicles, AI, social-media backlash and everything else in the public discourse lately. That topic being flying cars.

In exploring this topic, I will use an article published on Wired by Eric Adams as a place to start.


To Solve Flying Cars’ Biggest Problem, Tie Them to Power Lines

Of the many challenges facing the nascent flying car industry, few turn more hairs gray than power. A heavier aircraft needs more power, which requires a bigger battery, which weighs more, thus making a heavier aircraft. You see the dilemma. So how do you step out of that cycle and strike a balance that lets you fly useful distances at useful speeds without stopping to recharge?

One startup thinks the answer lies in another question: Who needs a big battery, anyway?

San Francisco-based Karman Electric proposes dividing the need for power from the need to carry that power through the air. It wants to connect passenger-carrying electric air taxis to dedicated power lines on the ground, like an upside-down streetcar setup. The aircraft will carry small batteries so they can detach from the lines when necessary, but they’ll get most of their juice from their cords, allowing them to cover long distances at high speeds.

A few more questions, then. What happens if the cable gets jammed, or a bird flies in its path, or a helicopter wanders by? What if there’s a power loss on the ground, or if two vehicles get their cords tangled? How can you traverse bodies of water or rugged terrain? And doesn’t tying a flying car to the ground defeat the whole purpose?

I am going to stop here, because frankly, the author has planted a perfectly good segue.

In short, no, I don't think this defeats the whole purpose of the flying car at all. For one, in and around the areas where they would be utilized most (probably urban and suburban areas), one will have freedom. And when one is traveling outside of those areas, where tethering is an issue on account of the landscape alone, it wouldn't be much of a problem anyway. Since most commuters will likely have a common destination in mind, what does it matter if the trip requires tethering to a fixed power source?

In fact, if you are traversing the skies in the dark of night (or in bad visibility conditions), tethering may be a good thing. Spatial disorientation has killed many people before.

Having made that argument, I have to admit that I just don’t see flying cars as being the future of transportation. The problem of powering them without fossil fuels plays a big part in this, at least in the short term. Can long-range (I’m talking intercontinental) high-speed transportation of people and freight ever be made carbon neutral?

But far more important than this are the problems posed both by the operation of this technology and by other factors unique to aviation. The term flying car alone makes me begin to ponder things, one of them being "What is a car?". What constitutes a car, and how does it differ from a plane or a drone?

Indeed, most of this is just linguistics, to be left to the manufacturers and marketers to figure out. "Car" will likely win just because it has so much cool factor.

Personal transportation pod? Nah.
Transport drone? *yawn*

Blurred lines in our understanding of what separates one vehicle from another are the least important issue in this matter, however. I mentioned before the operation of these vehicles as an area of concern. At the very least, a pilot's license would be a bare-minimum requirement. And the training ought to be just as intensive as for a genuine pilot's license.

This is not a popular sentiment in the public eye (imagine THAT!). But there are many more considerations at play in the air that one may not even think about on the ground. While it may be possible to automate the process enough to allow even novices to operate these systems on a perfect day, typical conditions can be counted on to be less than ideal in most areas of the world. Not only do you have atmospheric considerations like wind shear and icing, you also have the issue of dealing with mechanical problems 30 to 300 feet in the air. While not all drastic mechanical faults or failures in an earthbound vehicle end in tragedy, even in risky situations (like losing a wheel on the interstate), any issue that destabilizes a flying car in flight becomes a potentially VERY bad situation. A panicked or incorrect reaction to the problem (common in the realm of traffic accidents) could endanger not only those in the vehicle itself, but also people on the ground OR in nearby vehicles.

Indeed, this comes across as barely more substantive than "people make terrible drivers, therefore NO FLYING CAR FOR YOU!". Nonetheless, one trip out onto the road in pretty much any place in the world shows us just how flawed we can be (how much damage we can do?) when we're earthbound. Or, if anecdote is an issue (I agree), consider traffic fatalities. Or more accurately, traffic-related injuries and fatalities versus air-travel-related injuries and fatalities. Though aviation accidents tend to get more coverage (the stakes are higher due to the volume of passengers involved), overall, the whole system has gotten safer as people have been increasingly removed from it. Indeed, automation HAS had a hand in more than a few accidents. However, more often than not, these are attributable to human error: the problem posed by having to take control of and/or diagnose a suddenly uncontrollable juggernaut that has flown itself perfectly for 99.9% of your previous flights.

Indeed, a fairly small flying personal vehicle is much different from a 747. Nonetheless, even a flying smart car can cause a heck of a disruption should it crash in the wrong place. Like, say, the upper inner workings of an electricity substation. As opposed to its ground-bound cousins, which may hit a pole or knock down a fence.

If we are to go this route, then automation is key. Most (if not ALL) aircraft functions of these vehicles NEED to be automated, period. And it wouldn't hurt to require the presence of a trained and frequently refreshed operator at all times in aircraft mode (for accident mitigation). These vehicles would also benefit from a mandated self-reacting variant of TCAS: something that would have to take into consideration both traffic AND ground hazards (since these vehicles will be operating in much more built-up airspace than other aircraft).
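To be clear, real TCAS is a certified system that is far more sophisticated than anything sketched here (it works on closure rates, coordinated resolution advisories, altitude bands, and so on). But as a rough, hypothetical illustration of the idea of one system watching both air traffic AND fixed ground hazards, the core check might look something like this, with all thresholds invented for the example:

```python
import math

# A deliberately simplified, hypothetical proximity check.
# The 150 m / 30 m thresholds are made-up numbers for illustration only.
def too_close(own, other, horiz_m=150.0, vert_m=30.0):
    """own/other are (x, y, altitude) positions in metres."""
    dx, dy = other[0] - own[0], other[1] - own[1]
    horizontal = math.hypot(dx, dy)        # straight-line ground distance
    vertical = abs(other[2] - own[2])      # altitude separation
    return horizontal < horiz_m and vertical < vert_m

# A fixed ground hazard (say, a power pylon) can be fed in as
# just another "contact", alongside actual traffic:
print(too_close((0, 0, 100), (80, 60, 110)))   # → True (100 m apart, 10 m vertical)
print(too_close((0, 0, 100), (500, 0, 100)))   # → False
```

An automated system would run checks like this continuously and hand the result to whatever evasive logic (or human operator) sits above it.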

While these are just off the top of my head, there are likely more considerations that will become evident later, because that is how progress works (not all problems are visible in the paperwork). When it comes to most average consumers, I seriously question whether this is in their future (at least as a personal vehicle they own). In terms of a business opportunity (flying taxis?), this will also depend on the costs involved. And with the increased research and testing of semi-autonomous and autonomous vehicles in realistic traffic situations, even this is looking less promising. If I am a business and I have the choice between a vehicle that can operate itself and generate pure profit almost indefinitely, and a far more expensive flying vehicle that costs more money to maintain AND insure (think liability coverage), which seems the smarter option?

I get it, future tech is cool (there is a reason why it has increasingly become a focus of mine in the past year or so). And with the drastic changes that will be forced upon humanity by a few factors in the near to mid-term future, we need people thinking outside the box. That said, however, some ideas just don't have a future. Don't get me wrong, the idea of a flying car is cool, creative, even freeing (one is no longer bound to designated road infrastructure). But given the competition that already exists, I have serious doubts that I will be flying to work or the supermarket in the future.

Having said that, while the technology may be redundant in most geographical contexts, one situation that comes to mind in which it may be beneficial is anywhere highway access is restricted. For example, communities in Northern Canada that are hard to reach with traditional infrastructure. The necessity of small-scale vehicles is questionable, but figuring out how to reduce the cost of large-scale passenger and freight access would have many benefits.

The most obvious would be an improvement in both the standard AND the cost of living. Moving more freight cheaply means necessities cost less, and luxuries have a place in the market. It also means lower taxes for Canadian citizens in the long run because, with a lower cost of living, there would be less need to subsidize consumers AND freight transporters.

Another use that comes to mind would be in the case of disasters. Hurricanes like Maria and Katrina have hammered home the need for advance preparation if you are sheltering in place. Unfortunately, that is a very naive view to take, since many people who can't afford to adequately prepare ALSO can't afford to evacuate. Thus, you get what transpired in New Orleans and San Juan after the storms. Even without the clusterfuck that was the response to both storms, access is inherently limited by blocked, damaged and destroyed road infrastructure.

Enter small to medium-sized flying transporters (at this point, you have probably figured out that I am not sure what to call them). Situations like large hurricanes would allow for the early preparation and rapid deployment of supplies and essentials to residents just hours after the storm. And once accommodations are set up, residents can be easily and safely evacuated, with no rescuers put in harm's way.

Amazon and other delivery-oriented businesses are increasingly experimenting with drones to deliver freight on a small scale (flying a pizza or a shower cap directly to your home). However, I'm not sure if this scales up at all.

To conclude, I don't see money and time devoted to the research and development of flying cars as the best (or even a GOOD) use of those resources. While there are possible uses for the technology in both commercial AND humanitarian areas, focusing on personal transport is a waste of time and resources when we don't have any more to spare.


What Does It Mean To Be Successful?

In the process of one of the most mundane aspects of my job (and really, the most mundane task of any assigned to me over my career as an unskilled laborer these past 12 years), facing product in the store I work at, I found myself absent-mindedly thinking, as I often do in such situations. If it's not that, then it's fuming over the wake left behind by one of the many incompetent dimwits who make my daily work experience a biblical trial.

But aside from the limitations of life in a context of brainless domination, I again found myself running thoughts through my mind. One was the concept of success. In particular, asking myself the question: am I successful?

As with any other complex area of life, one can't entertain such a question honestly without hitting a boatload of nuance. Not just nuance, but also subjectivity.
While this should have been obvious to me (given my ability to tear concepts right down to the bare wires), I realized that I myself was overlooking it in my own answer to the question "Am I successful?".

Before now (that is, before I entertained the nuance), I would have answered with a quick "No". Such an answer is inherently unhealthy for one's well-being, because there is an inherent feeling of negativity and inferiority in being unsuccessful. A sentiment that is backed up by (or more likely, stemming FROM) the status quo societal notion of what constitutes success.

When you strip it down to the most basic level, you can say that simply having (and keeping!) a job is a form of success. And I have heard people reduce life down to this level. In a nutshell: "If you are getting paid money to do it, what are you complaining about?". Sister to the comment "At least you HAVE a job", and cousin to the boss- or manager-originated comment "It won't be any better anywhere else".

Where do you even begin . . .

1.) Seems to be a mighty low bar to set. Imagine if you used the same line of logic in the context of abusive relationships!

Actually, it happens quite often in this context, and it's not considered positive advice there, either. It's so damaging a phenomenon that it's known as The Cycle Of Abuse.

No, I am not making light of abusive spousal relationships by comparing them to something seemingly (but not always!) much more benign. I am merely telling people to think this stuff through.

2.) In terms of this acceptance of stasis at the employer level, I can't think of anything more damaging to the ongoing strength of a business than accepting lackluster morale as a cost of doing business. Particularly in the cutthroat and ever more challenging retail and hospitality sectors!

I think about it in terms of how Gordon Ramsay (Ramsay's Kitchen Nightmares (UK) / Kitchen Nightmares (US)), Robert Irvine (Restaurant: Impossible), Jon Taffer (Bar Rescue) or any other consultant would handle turning around a struggling business. If you strip away the TV-friendly drama of these shows, you begin to notice an underlying pattern: low standards and low morale are almost always just a symptom of a bigger problem.

Be it a broken family dynamic, a bad owner or manager, or some other overt and global issue spreading negativity to the whole of the staff, there is almost always more to fixing a business than shiny new equipment and training from a world-class chef. Though the celebrity-chef-turned-psychiatrist has become a staple of such TV series, one should not underestimate the importance of this aspect. Big-money businesses with bad employee morale will often just run much less efficiently than they otherwise could, but such an environment can easily close a small business.

As such, employers REALLY ought to think twice before they attempt to stop their employees from jumping ship simply by mindlessly saying that “the grass likely isn’t greener on the other side”.

1.) It’s lazy and bordering on incompetent.

2.) It’s costly. You are NOT getting the most out of employees that don’t give a fuck.

3.) It's arguably not even true!

I know, one has to be careful with anecdotes. Nonetheless, when I think back over 2 or 3 years of observing people leave managers who embrace this mantra (leaving on good terms, that is), rarely do I see them come back. In fact, they often vanish from the business entirely (no longer spending their money there). While this next assertion is hard to back up, there is a good possibility that a disengaged former employee will be hesitant to praise the employer that dragged them down to feeling that way. Again, affecting the bottom line.

Think before you speak. These unjustified and easily falsified mantras don't just encourage businesses to stop trying to improve, they actively undercut their long-term success.

Though the last part may seem off topic, it is more a constructive way to illustrate a large percentage of the grief I deal with in daily life. Since one inherently spends the majority of one's waking hours in the workplace, it's no surprise that these things dominate everyday life.
The working week is 5 days, or 120 hours. If you spend 40 of them working, you have 80 left over. Take 40 more out of THAT for a healthy sleep schedule (HA!), and your free time becomes 40 hours. Subtract from THAT an hour out of each day for gearing up for and winding down from a work shift, and you have 35 hours.
This isn't taking into consideration voluntarily (or being forced to) arrive early (without pay!) so that you're on the floor working on time. Or time-consuming personal tasks (housework, errands, etc.). It's hard to quantify generally, but the hours left over that are true leisure time may well be in the low double digits, if not the single digits.
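For what it's worth, the arithmetic above checks out. A quick sketch, using only the rough figures assumed in the text:

```python
# The weekly time budget described above, step by step
# (all figures are the rough assumptions from the text):
workweek_hours = 5 * 24                # 5 working days = 120 hours
after_work = workweek_hours - 40       # minus a 40-hour work week -> 80
after_sleep = after_work - 5 * 8       # minus 8 hours of sleep per night -> 40
after_buffer = after_sleep - 5 * 1     # minus an hour per shift gearing up/down -> 35

print(after_buffer)  # -> 35
```

And that 35 is before unpaid early arrivals, housework, and errands take their cut.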

To bring it back, I have to ask myself: why did I not think I was a success?

One metric is obvious. I have always had a job since turning 18, and have only taken one extended vacation (2 weeks off) in that timeframe (back in 2008). I've never had anything beyond dead-end jobs, however. And I've never met an employer one would call optimal, which I define as not being owned (or managed) into the ground by idiots, however high up in the organization.
When compared to the accomplishments of some friends and past acquaintances, mine seem pathetic. As outlined by a social media conversation on my timeline between 2 of my friends, one of whom was complaining about taxes on his meager income of $48,000 a year.

1.) There is a reason why I took Facebook off my phone and my tablet

2.) Fuck off.

And when I compare myself to those who govern me in the workplace, those who make double or triple my wage . . . it's also not pretty. It's best not to get too tangled up in such questions, because there is no pretty end to that line of thinking. Only half a bottle of Tylenol to kill the migraine it induces.

Why am I not a success?

I am not them. Which also implies that I would be able to do their job better. Something that is debatable, to say the least (every new manager thinks they are better than the last one). But either way, it's a silly thing to be sore about, because corporate hierarchies are not my best friend to begin with. Almost without exception, you are always going to be a cog to someone above you with the power to terminate you for not acting as they specifically dictate.

Now THERE is an interesting word choice if there ever was one.
You are terminated. Your employment is now terminated. You can’t get much more inhuman language than that.

But either way, I don’t want to be my boss. Or his boss. I have always turned down such roles in the past, not wanting the headache.

Which brings me to the next branch: my success in comparison to that of friends and family. Indeed, the concept of jealousy sometimes enters the mind when looking at what past friends and acquaintances have accomplished. Particularly when they are seemingly undeserving of such riches (and there it is again!).
But again, this is silly. Partly because societal norms and social media underpin this feeling of inferiority, and also because everyone's background is different. The roads that brought each person to where they are today are all different, all filled with various unique challenges or privileges. Thus, it's foolish to compare on level ground. Just as it's silly to compare oneself to what is "normal".

To quote a tired cliche:

There are "normal" people, and then there are the rest, who know that such people are ridiculous.

Which brings me to the last branch. The ultimate job opportunity. Something I will call Hollywood success.

Consider almost any contemporary movie or television show that involves seemingly ordinary, everyday people. Dare I say, comfort pablum for a populace demanding easy-to-consume media.
In these shows and movies, the characters almost invariably live the American dream: a big ole house in a sprawling suburban neighborhood. It doesn't matter the genre of the show or film. Unless the scripting explicitly demands otherwise, your characters almost certainly live their fictional lives in suburban America (at least from the '90s on).

It took a very long time to come to this realization. Only the epiphany of how unsustainable and inherently destructive such a lifestyle is made me realize just how ingrained the suburban trope is in our cultural DNA. And how I have been comparing my life to this seeming utopia for pretty much my entire life.
These representations are interesting because they seem to be a fairly new phenomenon. Jack Tripper hilariously stumbled through life on Three's Company in an apartment. The Bunkers (All In The Family) broke new television ground from a working-class neighborhood in Astoria. Maxwell Smart (and later, Agent 99) happily called an apartment home. Even the infamous Al Bundy resided in a working-class domicile.

It seems to have been a transition of the late '80s into the '90s (and beyond). It was a trend bucked by the popular sitcoms Friends, Frasier and Seinfeld, though possibly purposefully: 2 of the 3 take place in New York City (a metropolis not often pictured by its suburbs), and the third takes place in Seattle, another city whose popular image tends to center downtown rather than at the periphery.

A part of these representations that is often noticeably missing from popular culture's picture of utopia is the often staggering cost of successful living. Not only the long hours put in by one (or both) adult members of the typical nuclear family, but also the commute: the at times HOURS-long drive or ride back and forth to the city for work. And of course, the motoring around for EVERYTHING necessary for life, due to the fact that most suburban planning makes other forms of transportation almost impossible.

For many years, this was the yardstick (albeit unknowingly) against which I compared my life. Years of pop-cultural influences depicting the ease of suburban living had, in combination with a decade of direct customer service, programmed anything else to read as sub-par. I pictured the big house in the ironically named subdivision (Meadows, Forest Park, Silver Springs, Oakwood), and the full-time job sitting on my ass in a cubicle in some big office somewhere. Though nothing I have ever done has truly been ideal for me or for many others related to me (they have no issues voicing their unwanted grievances), I am certain that none of my trajectories (past or current) would ever end in such a utopia. Don't get me wrong, I get that sometimes the only thing worse than not achieving your dreams is actually achieving them (well, where to from here?). But I don't think such is in the cards anyway, no matter how you slice it.

And so I ask myself, what then, is success?

There are many ways one could answer that question.
It is in the eye of the beholder. It means different things to different people. It is a corporate-derived slogan to keep people productive and consuming their wares.

While I believe that many of the ways people answer this question are unproductive both for themselves and for the rest of us (a high-carbon-footprint lifestyle affects us all), to each their own. Most of this is merely status quo behavior anyway, and though changing that dynamic is never easy, it is achievable.

Speaking strictly as and for myself, however, I would say that the concept of success does more harm than good. If the yardstick is of your own context (I made a long-term goal, and I achieved it), then that is another matter. However, most interpretations tend to be rooted in external factors. A flawed way of thinking, because not only does the mad dash toward the ever-shifting goalposts never end, the whole process is also incredibly destructive: to the health of the person, their family, and the rest of us. Claiming that a lifestyle based on high carbon footprints and disposability is leading to an eventual mass suicide does indeed sound alarmist. Nonetheless, this is just a more realistic reiteration of what has driven the world’s most well-known minds to consider Mars worthy of human suburbanization. Bring in a bunch of machines to clear out all the invaluable crap in our way, then build a bunch of little boxes for everyone to live in. Along with big box stores, theaters, malls and everything else in between to keep up appearances.

“What do you mean, we are living on borrowed time? All is well!”

Anyway . . . what is success? In a nutshell, garbage. So out with the old and in with the new. Or possibly, in with the current.

Much of my life previous to two or three years back was firmly rooted in the rearview. I couldn’t look forward because I was too busy looking backwards. Not at what was (the good times); more at what transpired (the stuff that I blame for leading to the problems I face today).
Compared to the trials faced by close friends of mine (oftentimes in complete isolation), my trials are trivial. Almost pathetic, really.

But there again, I am comparing myself to the lives of others. If I keep comparing what set me adrift for a few years to the hell that faced others, I will soon be in my closet crying like the weak and seemingly pathetic loser that I apparently am.

So instead, one has to prioritize. Or more importantly, get out of reverse and get back in drive. Back in 2016, all of my old high school clothing went in the trash, along with everything else worn out and reminiscent of ventures of life long past. I tossed out my old yearbooks earlier this year (or late last year) following the same line of thought. Pretty much the only time I revisited those books was when I was pouting over what transpired, what I didn’t do. They depict a time of life that was hell for me, so why on earth am I keeping them?!

My workplace honored my 5 years of service with the company by presenting me with a service award and a company-branded coat. Though I initially felt like tossing both in the garbage, I kept the coat. But I did ditch the award. It’s a reminder, not a reward.

Such is how I have come to interpret the phenomenon that is success. In the journey that is my life, its influence has only been negative and regressive. Therefore, into the bin it goes.

Posted in Opinion

‘Guilty on All Counts!’: “In Historic Victory, Monsanto Ordered To Pay $289 Million In Roundup Cancer Lawsuit” – (Common Dreams)

Just in today, I may have fallen on the wrong side of this issue.

Sometime around 2015, the topic of GMOs, pesticides and all things big biotech and big organic came onto my radar. It came to be there after I, just for the hell of it, decided to look into the background details and nuances of one anti-GMO article that I had been regularly exposed to over the years due to a subscription to several ecologically oriented news publications. I was somewhat dumbfounded to find that not only was THAT article very misleading (to put it mildly), this was common practice with these sorts of publications.

And it wasn’t just the biased media platforms covering the stories (on either side, really). Given that monied interests exist on BOTH sides of the aisle, EVERYONE has lobbyists and an interest in muddying the waters. And they are very successful. When looking into these things, I learned to avoid pretty much ANY media coverage or sources regardless of their credentials. Even before this Fake News nonsense was ratcheted up by Donald, people would almost always be tempted to write off the messenger. Which left trying to work with the scientific documentation, which was a giant pain in the rear. Likely for anyone, but certainly for a person outside of any involved fields.

Such was the state of information that it became difficult even for ME to tell whether I was truly being a useful arbiter of information, or just a useful idiot for one side. Such was my confusion that I pretty much stopped covering these topics altogether for a good year or two, before being brought back by the interesting new innovation that is lab-grown meat.

Back when the Monsanto lawsuit first came on my radar, my first thought was, frankly, frivolous lawsuit. Having seen part of the documentary Hot Coffee (and recognizing how embedded the wrongful-tort trope is in our culture), I now realize that such a reaction was . . . unsurprising (Hello, useful idiot me!). Nonetheless, it seemed like the science (as seemingly confirmed by what I could find) spoke for itself.

Or not?

‘Guilty on All Counts!’: In Historic Victory, Monsanto Ordered to Pay $289 Million in Roundup Cancer Lawsuit

In an historic victory for those who have long sought to see agrochemical giant Monsanto held to account for the powerful company’s toxic and deadly legacy, a court in California on Friday found the corporation liable for damages suffered by a cancer patient who alleged his sickness was directly caused by exposure to the glyphosate-based herbicides, including the widely used weedkiller Roundup.

As Reuters reports:

The case of school groundskeeper Dewayne Johnson was the first lawsuit alleging glyphosate causes cancer to go to trial.

Monsanto, a unit of Bayer AG following a $62.5 billion acquisition by the German conglomerate, faces more than 5,000 similar lawsuits across the United States. 

The jury at San Francisco’s Superior Court of California deliberated for three days before finding that Monsanto had failed to warn Johnson and other consumers of the cancer risks posed by its weed killers.  It awarded $39 million in compensatory and $250 million in punitive damages.

As Robert F. Kennedy Jr., a lawyer representing Johnson in the case, declared on Twitter, the court “awarded 200 million in punitive damages against Monsanto for ‘acting with malice and oppression.'”

I don’t like Robert F. Kennedy Jr.

The man’s stances on vaccination can AT BEST be argued to be child abuse (forcing a child to potentially contract terrible illnesses that we CAN EASILY PREVENT!), or at worst, a threat to the whole of humanity. Epidemics and pandemics ARE a thing, and it’s only a matter of time before the big one is upon us. And if a huge cohort that CAN be immunized is irrationally afraid because of some deluded jackass of an Andrew Wakefield believer . . .

Either way, not a fan. Nor am I a fan of the way the Organic Consumers Association is jumping all over this news. It’s a lobby group, people!

Nonetheless, if there was merit to the lawsuit, credit where credit is due. Keep fighting for the little guy.

I suppose we will see, in the coming days and weeks, how this really played out. Whether it was indeed the facts that drove the decision, or if Big Organic just made a better (to clarify, far more emotionally captivating) argument.

Posted in Big BioTech / GMO's / Other Eco-Alternative Media Criticisms, Opinion

“Autonomous Vehicles Might Drive Cities to Financial Ruin” – (Wired)

In a recent post exploring the rise of AI and the dramatic effects it will have on contemporary society as we know it, one of the issues I covered was the soon-to-arrive problem of unemployment on a MASSIVE scale. Comparisons are made to past transitions, but really, there is no precedent. Not just on account of the percentages, but also due to our population alone. There are WAY more of us making tracks now than during any past transition. The stakes could not be higher.

I explored some possible solutions to make the transition less drastic, my favorite being universal basic income. Though I explored that in enough depth to be satisfied, Wired has highlighted a new and equally important problem with this transition: the issue of local budgets becoming EXTREMELY tight on account of autonomous vehicles more than likely operating outside the traditional confines of most city revenue streams (gas taxes, parking tickets, etc.).

If we go into these situations unprepared, the conclusion seems altogether terrifying. Cities that were already structurally deficient in many ways in THIS paradigm now fall apart, filled with aimless and angry people, automated out of existence.

Then there is the now past peak of worldwide oil production, a wall we will also begin to increasingly hit in the coming years. Then again, one terrifyingly dystopian issue at a time.


In Ann Arbor, Michigan, last week, 125 mostly white, mostly male, business-card-bearing attendees crowded into a brightly lit ballroom to consider “mobility.” That’s the buzzword for a hazy vision of how tech in all forms—including smartphones, credit cards, and autonomous vehicles— will combine with the remains of traditional public transit to get urbanites where they need to go.

There was a fizz in the air at the Meeting of the Minds session, advertised as a summit to prepare cities for the “autonomous revolution.” In the US, most automotive research happens within an hour of that ballroom, and attendees knew that development of “level 4” autonomous vehicles—designed to operate in limited locations, but without a human driver intervening—is accelerating.

The session raised profound questions for American cities. Namely, how to follow the money to ensure that autonomous vehicles don’t drive cities to financial ruin. The advent of driverless cars will likely mean that municipalities will have to make do with much, much less. Driverless cars, left to their own devices, will be fundamentally predatory: taking a lot, giving little, and shifting burdens to beleaguered local governments. It would be a good idea to slam on the brakes while cities work through their priorities. Otherwise, we risk creating municipalities that are utterly incapable of assisting almost anyone with anything—a series of sprawling relics where American cities used to be.

A series of sprawling relics where American cities used to be.

Like this?

The fact that Detroit blight jumps right to the forefront of the mind when the topic of urban wastelands is broached is unfortunate. I don’t live anywhere near the city (nor have I ever visited), but even I know that the remaining residents are often doing everything in their power to improve their environment. The evidence is scattered all over YouTube and social media in general.

I decided to use the example, frankly, because I didn’t like the way the author seemed to gloss over the notion of the deterioration of cities by using the term relics. A relic, to me, is something old with a former purpose, but now obsolete.
Cities (like Detroit) will likely never be obsolete. They will just continue to suffer the effects of entropy, while still being necessary for the survival of their inhabitants.

It may just be a linguistic critique, but it still doesn’t sit well with me.

Moving on, the other reason why Detroit (and really, many similar cities all over the US) comes to mind is that this is not the first time innovation has left locales in the lurch. Detroit (and the others) have other factors at play (white flight being one), but a big one lies in the hands of private entities. Automation itself requires fewer positions, and when combined with an interconnected global economy, the results can be tragic.
As much as I am fascinated by technology (and view it as the new societal stasis from now on), it’s hard not to see it as one of the largest drivers of income inequality.
Workplace innovations are, almost as a rule, NOT good for anything but the bottom line. As you need fewer workers (and can employ them in places with inhumanly low wages), it’s almost inevitable that inequality will only balloon.

In the past, one could balance this out somewhat with the service sector, an industry that is a necessity everywhere and can reliably create cash flow from essentially nothing. It has served as somewhat of a crutch for some unemployed people. These jobs are by no means on par with previous positions (something many slanted commentators overlook, either ignorantly or deliberately), but nonetheless, they serve a purpose.

Or, at least they do for the time being.

The first big round of automation and economic shifts hit the manufacturing sector hard, leaving in its wake many examples of civil and urban decay. Though the new economic realities of free trade were not really an issue for the service industry (generally the opposite, actually), that paradigm may well be starting to shift.
Already, automation is slowly making its presence felt in the world of service. On top of this, online retailers are gradually rendering once absolutely necessary brick-and-mortar retail stores and complexes obsolete. While I can see some areas of the service sector as being permanent, local retail is not one of them. At least not in the numbers it generates today.

Hot or cold food is a challenge from a logistics perspective (when the lengthy supply chains of your average online retailer are considered). This, coupled with people wanting to eat out every so often, will hold a place for the family restaurant (or possibly even the fast food outlet) in the local landscape for the time being. Stores on the other hand (particularly larger retailers) are a different matter.

There will exist local shops, I have no doubt about that. But I doubt that the selection (or prices) would come anywhere close to what consumers can now get in big box retailers, or will then be able to get with big online retailers. This, combined with the increased automation of future service encounters, could make things very challenging for anyone with any hesitation towards technology. I suspect that many such people will move (or be pushed) out of larger cities and towns, far from the machine.

The demise of big-box retail is, on one hand, a good thing. Big-box stores tended to be notoriously toxic to local economies to begin with, not above many types of bullying tactics in order to maintain perks such as tax-free status. Consider the case of the big-box retailer that relocates a couple of miles over to another county in order to break a union, skip out on a local tax, or retaliate against whatever action it deemed hostile. The county it leaves then reaps all the negatives of such an enterprise without any of the positives.

The world could do with fewer big boxes sucking up energy and contributing to an EXTREMELY energy-inefficient way of life that we can no longer afford, for a number of reasons. Having said that, economically, this will only succeed in turning almost the whole of most countries into the loser county of the big box’s relocation. One or two cities that are home to the distribution facilities will see some benefit, but that is it. The rest see nothing but the infrastructural wear and tear, and the trash.
And things probably won’t be rosy even for the seemingly lucky host cities of these distribution centers, because of the power these entities now have. Take the case of Seattle.

It would seem that I am now miles from where I started (autonomous vehicles and city budgets). But it all plays into the very same thing. Just as I suspect that the majority of future retail distribution will be based out of a small number of warehouses and built around largely autonomous transportation (be it truck, plane or drone), I can also see such a model for autonomous vehicle distribution.
When the time comes that rented autonomous vehicles are reliable enough to allow the majority of people to ditch one of the largest expenses in their lives (a vehicle), it will become increasingly financially feasible to own and maintain large fleets of always-ready autonomous vehicles. Much like how self-hauling rental services operate almost ubiquitously across the North American continent from one control center, I can see a similar entity operating huge fleets of self-driving vehicles.

Though these vehicles will utilize some local services (mechanics, cleaners, maybe electricity), as the article states, I doubt it will ever come close to covering the costs of maintaining the infrastructure on which they depend for their operation. Which more than likely means that consumers will be footing the bill, be it through taxes or user fees.

The problem, as speaker Nico Larco, director of the Urbanism Next Center at the University of Oregon, explained, is that many cities balance their budgets using money brought in by cars: gas taxes, vehicle registration fees, traffic tickets, and billions of dollars in parking revenue. But driverless cars don’t need these things: Many will be electric, will never get a ticket, and can circle the block endlessly rather than park. Because these sources account for somewhere between 15 and 50 percent of city transportation revenue in America, as autonomous vehicles become more common, huge deficits are ahead.
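The scale of the gap the article describes can be sketched with a back-of-envelope calculation. Everything below is hypothetical except the 15–50 percent range quoted above: the function name, the $200M budget, and the 50 percent adoption figure are illustrative assumptions, not data from the article.

```python
def projected_shortfall(transport_revenue, car_derived_share, av_adoption):
    """Estimate annual revenue lost as autonomous vehicles displace
    car-derived income (gas taxes, tickets, parking)."""
    return transport_revenue * car_derived_share * av_adoption

# Hypothetical city: $200M transportation budget, with the article's
# 15-50% of it derived from cars, and half the fleet gone autonomous.
low = projected_shortfall(200e6, 0.15, 0.5)
high = projected_shortfall(200e6, 0.50, 0.5)
print(f"Projected annual shortfall: ${low/1e6:.0f}M to ${high/1e6:.0f}M")
```

Even under these made-up numbers, the spread between the low and high estimates is wide enough to explain why cities are scrambling for replacement fees.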

Cities know this: They’re beginning to look at fees that could be charged for accessing pickup and dropoff zones, taxes for empty seats, fees for parking fleets of cars, and other creative assessments that might make up the difference.

But many states, urged on by auto manufacturers, won’t let cities take these steps. Several have already acted to block local policies regulating self-driving cars. Michigan, for example, does not allow Detroit, a short drive away from that Ann Arbor ballroom, to make any rules about driverless cars.

A preemptive strike.

Not that such surprises me. Auto companies are already blurring the line that once separated them from tech companies. I say this due to a bit of exposure to the computers that drive today’s vehicles, having helped a self-taught mechanic tinker with the tune of his 2013 Ford F150. The internet is a limitless resource for this sort of thing. I taught him the basics of how to use this tool, and he ran with it.

It’s not surprising that automobile manufacturers are already greasing the gears in statehouses all over the country. I wouldn’t be surprised if other tech entities are doing the same thing.

This loss of city revenue comes at a harrowing time. Thousands of local public entities are already struggling financially following the Great Recession. Dozens are stuck with enormous debt loads—usually pension overhangs—that force them to devote unsustainable portions of their incoming revenue to servicing debt. Cities serve as the front lines of every pressing social problem the country is battling: homelessness, illiteracy, inadequate health care, you name it. They don’t have any resources to lose.

The rise of autonomous vehicles will put struggling sections of cities at a particular disadvantage. Unemployment may be low as a national matter, but it is far higher in isolated, majority-minority parts of cities. In those sharply-segregated areas, where educational and health outcomes are routinely far worse than in majority white areas, the main barrier to employment is access to transport. Social mobility depends on being able to get from point A to point B at a low cost.

Take Detroit, a city where auto insurance is prohibitively expensive and transit has been cut back, making it hard for many people to get around. “The bus is just not coming,” Mark de la Vergne, Detroit’s Chief of Mobility Innovation, told the gathering last week, adding that most people in the City of Detroit make less than $57,000 a year and can’t afford a car. De la Vergne told the group in the Ann Arbor ballroom about a low-income Detroit resident who wanted a job but couldn’t even get to the interview without assistance in the form of a very expensive Lyft ride.

As explored before, I suspect that the scale economies of owning and operating massive fleets of self-driving vehicles may help with this problem. But with the shrunken job market and other local problems coming down the pipe, this hardly even seems a benefit worth mentioning.

That story is, in a nutshell, the problem for America. We have systematically underinvested in public transit: less than 1 percent of our GDP goes to transit. Private services are marketed as complements to public ways of getting around, but in reality these services are competitive. Although economic growth is usually accompanied by an uptick in public transit use, ridership is down in San Francisco, where half the residents use Uber or Lyft. Where ridership goes down, already-low levels of investment in public transit will inevitably get even lower.

When driverless cars take the place of Uber or Lyft, cities will be asked to take on the burden of paying for low-income residents to travel, with whatever quarters they can find lying around in city couches. Result: Cities will be even less able to serve all their residents with public spaces and high-quality services. Even rich people won’t like that.

America has been underfunding essential services across the board for decades. The fact that this is likely to REALLY bite the nation in the ass when it is least prepared to deal with it is just the cherry on top.

Also, I don’t know that Uber and Lyft will necessarily be replaced. I suspect that they may still exist, just with far fewer employees. Who knows, one (or both) may become one of the autonomous-vehicle behemoths I see existing down the road.

As for the comment about rich people . . . get real. Nothing matters outside the confines of the gated communities in which they reside. Even when the results of their actions are seemingly negative to them in the long term.

Money is a powerful blinder.

It will take great power and great leadership to head off this grim future. Here’s an idea, from France: There, the government charges 3 percent on the total gross salaries of all employees of companies with more than 11 employees, and the proceeds fund a local transport authority. (The tax is levied on the employer not the employee, and in return, employees receive subsidized or free travel on public transport.)
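The French levy described above is simple arithmetic, and a minimal sketch makes the mechanics concrete. Only the 3 percent rate and the more-than-11-employees threshold come from the article; the function name and the payroll figures are hypothetical.

```python
def transport_levy(gross_salaries, employee_count, rate=0.03, threshold=11):
    """French-style employer transport levy: charged on total gross
    salaries, but only for firms with more than `threshold` employees."""
    return gross_salaries * rate if employee_count > threshold else 0.0

# Hypothetical firm: 50 employees, $5M total gross payroll.
print(f"${transport_levy(5_000_000, 50):,.0f}")   # the levy owed
print(f"${transport_levy(5_000_000, 10):,.0f}")   # under threshold: exempt
```

The notable design choice is that the tax falls on the employer rather than the employee, with workers compensated through subsidized or free transit fares.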

This helps on the public transportation angle, indeed. But it doesn’t even touch the infrastructure spending shortfall, a far more massive asteroid for most localities.

At the Ann Arbor meeting, Andreas Mai, vice president of market development at Keolis, said that the Bordeaux transit authority charges a flat fee of about $50 per month for unlimited access to all forms of transit (trams, trains, buses, bikes, ferries, park and ride). The hard-boiled US crowd listening to him audibly gasped at that figure. Ridership is way up, the authority has brought many more buses into service, and it is recovering far more of its expenditures than any comparable US entity. Mai said it required a very strong leader to pull together 28 separate transit systems and convince them to hand over their budgets to the local authority. But it happened.

It’s all just money. We have it; we just need to allocate it better. That will mean viewing public transit as a crucial element of well-being in America. And, in the meantime, we need to press Pause on aggressive plans to deploy driverless cars in cities across the United States.

Public transit is just a part of the problem. I suspect a very small part, at that. And likely the easiest to deal with.
You cannot have a public transportation system (or at least not a good one) without addressing infrastructure deficits. And this is just the transportation angle. You also have to contend with water and sewage, solid waste removal, seasonal maintenance and other ongoing expenses.

Indeed, it is a matter of money and funding allocation. However, the majority of the allocation HAS to start in Washington, in the form of taxation on wealth. As bitter a pill as that is to swallow, failure to take that course of action may well make us nostalgic for the post-2016 turmoil. Pretty much every leader post-Reagan added a little more fuel to the powder keg, but failure to adequately prepare for the coming changes may well set the whole damn thing off.

As for pressing pause on the deployment of driverless vehicles in the cities of the world, we already know that such a plan won’t work. The levers of power are being greased as we speak. Thus, the only option is preparation. Exploration. Brainstorming.

There likely is not going to be a paradigm that fits all contexts, and there will be no utopias. But there is bound to be something between the extremes of absolute privatization and dystopia.

Posted in All Things Tech, Artificial Intelligence & Such, Opinion, Social Issues