
What’s Wrong with Google and Strong AI?

After Abandoning “Don’t Be Evil,” Can Google Be Trusted with AGI?

In The Human Algorithm, a groundbreaking narrative on the urgency of ethically designed AI and a guidebook to reimagining life in the era of intelligent technology, international human rights attorney Flynn Coleman deftly argues that we must instill values, ethics, and morals into our robots, algorithms, and other forms of AI. The central imperative is a moral one: to develop and implement laws, policies, and oversight mechanisms to protect us from the threats of technology left unchecked, without guardrails or human intervention.

How should we think about the two largest stakeholders in artificial intelligence? How does the “mental health” of the OpenAI platform, ChatGPT and its Microsoft backers, compare with that of the Gemini system being developed by Google? The looming question arises: which tech leader possesses the dedication and moral imagination to ensure that human rights, empathy, and equity are the core principles of emerging AI technology?

In two recent podcast interviews, tech mogul, Tesla CEO, and SpaceX founder Elon Musk shared that he and Google co-founder Larry Page have very different views on artificial intelligence. According to Musk, Page believes that “all consciousness should be treated equally, whether that is digital or biological.” Musk said the two men would “talk late into the night” about AI safety, and that his “perception was that Page was not taking AI safety seriously enough.” Page wanted to create “digital superintelligence, basically a digital god,” said the Tesla CEO. The last straw in the friendship came when Page called Musk a “speciesist” for wanting to implement safeguards to protect humanity from AI. A speciesist is someone who believes humans are superior to all other living beings.

A deeper look into the principles governing Google’s version of AI, and the algorithmic drivers at its foundation that operate with little to no modus vivendi with sapient beings, is essential as we strive to build a more humane future and move conscientiously into a new frontier of our own design.

What Has Replaced “Don’t Be Evil”?

Google’s evolution from its “Don’t be evil” credo to facing allegations of anti-competitive practices and maintaining a fortress mentality when addressing customer complaints reflects a complex shift in the company’s ethos. As regulatory scrutiny intensifies and debates around accountability and fair competition continue, the tech giant finds itself at a critical juncture where balancing innovation with ethical responsibility is paramount.

The company’s transition from its altruistic mantra to practices perceived as anti-competitive has raised the ire of the European Union, which levied a record penalty of 4.3 billion euros (around $5 billion USD) on Google for abusing the dominance of its Android operating system.

In the U.S., multiple legal battles remain that pose serious challenges to Google’s business model and could lead to substantial changes in how the company operates. As with the beleaguered Donald Trump, the question is vital: when does self-preservation conflict with the moral imperative of the truth?

History in the Offing: The Google Graveyard 

In the short life span of Generative AI development, Google has already birthed and then done away with its earliest publicly released versions. Google’s chatbot Bard, powered by LaMDA, was an early casualty, morphing into its successor, Google Gemini.

Who among us has yet to be impacted by Google’s history of discontinued offerings over the years, seldom with consideration for its followers? Moreover, what is to prevent Google from introducing digital superintelligence in a new iteration of “Strong AI” and then cancelling it two weeks later?

The “Google Graveyard” is a repository of over 200 apps, services, and hardware products that Google has unceremoniously terminated. Could this be the message that The Terminator movie was trying to deliver?

• Google Reader (2013): Despite its popularity among users, Google decided to shut down this RSS feed aggregator, leaving many avid readers scrambling to find alternatives.

• Google Podcasts (2024): A podcast hosting platform and an Android podcast listening app, slated to shut down at almost six years old.

• Google Inbox (2019): Another service termination that stirred disappointment among users was Google Inbox, an innovative email client known for its unique features. In 2019, Google announced the closure of Inbox, driving loyal users to transition to Gmail.

• Google+ (2019): Google’s attempt at a social networking platform, Google+, met its end in 2019 following low user engagement and a data breach scandal. Despite efforts to revamp the platform, Google decided to shut it down permanently.

• Google Play Music (2020): Music enthusiasts were disheartened by the discontinuation of Google Play Music in 2020. This service offered music streaming and cloud storage but was replaced by YouTube Music, causing inconvenience to many users.

• Google Hangouts (2021): A widely used messaging and video chat platform, Google Hangouts faced its demise in 2021 as Google shifted focus to other communication tools like Google Meet and Chat.

Other Google services brought to market and then cancelled (a partial list):

  • Google Jump – cloud-based video stitching service. Discontinued June 28.
  • Google Street View (standalone app) – an Android and iOS app that gave people a 360-degree view of locations around the world. It was over 12 years old.
  • Google Stadia – a cloud gaming service that paired a Wi-Fi gaming controller with gameplay streamed through web browsers, TVs, mobile apps, and Chromecast. It was about 3 years old.
  • Google Knol – a project that hosted user-written articles on a range of topics. It was almost 4 years old.
  • Works with Nest – smart home platform of Google brand Nest. Support ended on August 31.
  • YouTube for Nintendo 3DS – official app for the Nintendo 3DS. Discontinued on September 3.
  • YouTube Messages – direct messages on YouTube. Discontinued after September 18.
  • YouTube Leanback – web application for remote-control navigation, intended for use with smart TVs. Discontinued on October 2.
  • Google Daydream View – Google’s VR headset. Discontinued in October.
  • Touring Bird – travel website for booking tours, tickets, and activities. Shut down on November 17.
  • Google Bulletin – “hyperlocal” news service. Shut down on November 22.
  • Google Fusion Tables – service for managing and visualizing data. Shut down on December 3.
  • Google Translator Toolkit – online translation tool. Shut down on December 4.
  • Google Correlate – found search patterns corresponding with real-world trends. Shut down on December 15.
  • Google Search Appliance – device used to index documents. Shut down on December 31, 2019.
  • Google Native Client (NaCl/PNaCl) – sandbox technology for running native code. Discontinued on December 31.

How does this corporate system of disengagement and discontinuity work when Generative AI is involved? Should any company be allowed to control the “weights” of machine learning? A weight in a machine learning model determines how much influence a given input has on the model’s output. When integral components are disintegrated, the impact on users cannot be overlooked.
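To make the idea concrete, here is a minimal sketch of what a “weight” does inside a model. The feature values, weights, and bias below are purely hypothetical, not drawn from any real Google or OpenAI system.

```python
# Minimal, illustrative sketch: each input is multiplied by a learned
# weight, and the weighted sum (plus a bias) becomes the model's output.
# All numbers here are hypothetical.

def predict(features, weights, bias):
    """Return the weighted sum of inputs plus a bias -- the core
    operation inside a single artificial neuron."""
    return sum(f * w for f, w in zip(features, weights)) + bias

features = [0.8, 0.2]   # hypothetical signals extracted from an input
weights = [2.5, -1.0]   # learned weights: how much each signal matters
bias = 0.1

print(predict(features, weights, bias))  # 1.9
```

Scale that single calculation up to billions of weights, and the question of who controls them, and who may one day discontinue them, is anything but academic.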

As users continue to rely on Google for a wide range of services, transparency and user-centric approaches in managing service terminations are crucial to maintaining trust and satisfaction.

By prioritizing efficiency and automation over transparency and user input, Google risks developing AI systems that lack accountability, fairness, and alignment with societal values. The reliance on non-human, algorithmic quality controls may lead to biased outcomes, reduced explainability, and limited opportunities for ethical oversight in AI applications.

AI Platforms: A Closer Look

ChatGPT is built on the GPT (Generative Pre-trained Transformer) architecture. It boasts a vast corpus of knowledge and the ability to generate human-like text across a wide range of topics. With its conversational capabilities, ChatGPT excels in tasks such as dialogue generation, question answering, and text completion. It can understand context, maintain coherence, and adapt its responses to user inputs.
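As a concrete illustration of those conversational capabilities, the sketch below poses a single question to a GPT-family model through OpenAI’s Python SDK. The model name, prompt, and environment setup are assumptions for illustration only, not a prescription.

```python
# A minimal sketch of one question-answer exchange with a GPT-family model
# via OpenAI's Python SDK (v1.x). Assumes the OPENAI_API_KEY environment
# variable is set; the model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model tier; substitute whatever your plan provides
    messages=[
        {"role": "system", "content": "You are a concise, helpful assistant."},
        {"role": "user", "content": "In one sentence, what is a Generative Pre-trained Transformer?"},
    ],
)

print(response.choices[0].message.content)
```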

In contrast, Google’s Gemini offers more limited API capabilities; two traditional plan tiers plus separate Google Workspace tiers; multimodal inputs with tighter limits on outputs; mobile and desktop accessibility; internet search supplementation on all plans; and the ability to answer most questions in either a conversational or professional tone.

Comparatively speaking, Gemini is not as willing to answer more complex questions conversationally or directly, and it is often slower to generate its responses. However, Gemini is more natively connected with office suite tools, the Internet, and Internet-based widgets that can support enriched searches.
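Gemini’s narrower API surface can still be reached programmatically. As a rough sketch, assuming Google’s google-generativeai Python package and an API key from Google AI Studio (the placeholder key, model name, and prompt are illustrative), a call looks like this:

```python
# A minimal sketch of calling Gemini through the google-generativeai package.
# The API key placeholder, model name, and prompt are illustrative assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use a key from Google AI Studio

model = genai.GenerativeModel("gemini-pro")  # assumed model identifier
response = model.generate_content(
    "Draft a professional two-sentence reply declining a meeting invitation."
)

print(response.text)
```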

The company’s reliance on algorithmic controls and automated decision-making processes limits the scope for human intervention and feedback. This lack of user input diminishes the potential for ethical considerations, diverse perspectives, and value-driven decision-making in AI development.

OpenAI, the organization behind ChatGPT, has made significant strides in promoting ethical AI practices and ensuring the responsible deployment of artificial intelligence technologies. With a commitment to transparency, safety, and inclusivity, OpenAI advocates for the ethical use of AI to benefit society while mitigating potential risks. ChatGPT embodies these values by prioritizing user privacy, safeguarding against harmful content generation, and fostering open dialogue about the ethical implications of AI.

Gemini recently became available in mobile apps for Android and iOS, though it is somewhat limited for Apple users, as there’s no dedicated Gemini app in the App Store. iOS users can download the Google app and, from its home page, toggle between traditional Google Search and Gemini.

User reviews consistently comment on how much more accurate, consistent, human-like, and detailed ChatGPT has become through GPT-3.5 and GPT-4. Though ChatGPT will not provide offensive or problematic answers without very strategic prompting (setting aside the occasional AI hallucination), it responds to a broader range of queries, and in greater detail, than Gemini typically does.

One of the biggest complaints users currently have regarding Gemini is how often the tool tells users it cannot or will not answer their questions — even benign ones — because of a policy or prompting issue.

In conclusion, both ChatGPT and Gemini offer distinct features, benefits, and potential societal impacts. Our mission is clear. If we make humanitarian ideals central to how we deploy artificial intelligence now, we have a chance at preserving and enhancing humankind and, just maybe, making a leap forward instead of a dystopian step backward. [24×7]