Blade Runner is my favorite sci-fi exploration of where we are headed with artificial intelligence. I’m going to use a scene from Ridley Scott’s original masterpiece featuring Sebastian, Pris, and the two cutest robots you’ve ever seen.
I’m starting to get interrupted by AI helpers. All the prompts for us humans to let AI DO IT are coming from every application, every website, and every task. AI is there, saying, “I can do it better.” Here’s the problem. AI cannot do it better. AI can do it faster. AI can do it by copying and mimicking all the previous data in its LLM. But you are bigger and brighter than that. Your L3M contains 1,000X the data in the largest LLM. Human experience is more than letters, numbers, and math. Generative AI just swizzles all of the previous content into something “generative” and “new.” But it’s not new. It’s not original. And it’s going to disrupt a lot of good people, humans, trying to do their jobs.
*AI* FACT: AI can do some jobs better than humans.
I am worried about the massive call center economy in India. Here’s a stat from Gartner about this opportunity for AI.
Replacing human agents with AI chatbots could save the call center industry up to $80 billion in labor costs per year by 2026, according to a report by Gartner. (Here is my plexit.ai chat about this topic.)
*AI* FACT: AI is generative but not creative. There’s a huge difference. I have expanded on this in much of my writing, but this post will give you my perspective.
*AI* FACT: AI threatens more human employment and skills than we can imagine. We have only begun to uncover a fraction of what *ai* is already working on, somewhere, at some company, probably well-funded. Here is my list; ask the search engine or GPT of your choice for a wider one:
- graphic designers
- illustrators
- page layout for print and web
- code for the web (CSS, JavaScript, text, images)
- middle managers without technical skills, only “leadership” skills (leadership rarely shows up in job requirements)
- recruiting and hiring (see lazyapply.ai and hundreds of others)
- writers on any subject (AI can out-spit copy at thousands of times the rate of a human writer)
- research and summaries of any topic – feed AI a PDF and your report writes itself
- movies (see this Sora example)
*AI* FACT: AI is a bit like genetically engineered foods: we tried to contain them, and now they are in almost everything we eat. But it’s already getting tiresome. I’ve added and deleted five different AI Chrome plugins. I have assistants and copilots offering suggestions in every app. I’m not sure there’s a way to turn them off.
The *ai* technology business is pouring money into hype, promo, and first-to-market technologies. That’s good for US. I think. But it’s bad for a lot of not-US. I am writing from a position of privilege and power. I have a good job. I have access to wifi, food, water, safety. So much of the not-US is in decline and disarray.
And WAR is making a comeback.
- Russia and Israel have launched ethnic cleansings in this era
- China is rattling its military might and chatting up Putin
- Tucker Carlson makes an ass of himself proclaiming the wonders of the returnable shopping cart at a Russian grocery store (what?)
- A former president is leaning into the Putin/Russia/Tucker Carlson angle while prepping to go to prison or be elected to a disastrous next term
What is clear: our world is out of balance. The AI revolution is no different. It’s going to upset entire industries. (Off-shore call centers are dead; they just don’t know it yet.) It’s going to get into everything we think we do, everything we don’t even understand yet, and enhance weapons of war. (Oh, wait, that last one is old news.) We’re not watching SkyNet approaching. SkyNet is already here. It goes by a few different names: Elon Musk, Google, Amazon, Apple, OpenAI.
The *ai* LEAK
Not unlike the COVID virus (leaked or transmitted from animals, you decide), the leak of *ai* into all of our human business is already taking place. Look at the NYTimes fight with OpenAI. Don’t worry, the NYTimes will build their own GPT in a minute. And the millions of documents that have been uploaded into ChatGPT and other *ai* platforms: where does all that private, personal, and potentially damaging data go? Is it being stored and used for RAG? Is there any guarantee that our *ai* prompts and responses are not being used to train… what, exactly?
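If RAG is unfamiliar: retrieval-augmented generation stores documents, finds the ones most similar to a new question, and pastes them into the model’s prompt. Here is a minimal sketch in Python of just that retrieval-and-prompt-stuffing step; the toy corpus, the bag-of-words scoring, and the `retrieve` helper are my own illustration, not how any particular platform actually works.

```python
# A minimal, illustrative sketch of the retrieval half of RAG
# (Retrieval-Augmented Generation). The corpus, the scoring, and the
# prompt template below are hypothetical -- the point is simply that
# whatever sits in the document store can be pulled back into future prompts.

from collections import Counter
import math

# Pretend these are documents users uploaded to an AI platform.
stored_uploads = [
    "Q3 board memo: we plan to close the Austin office in January.",
    "Draft blog post about Blade Runner and the future of AI.",
    "Personal note: therapy appointment moved to Friday.",
]

def bag_of_words(text: str) -> Counter:
    """Very crude tokenizer + term counts (a stand-in for real embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k stored documents most similar to the query."""
    q_vec = bag_of_words(query)
    return sorted(docs, key=lambda d: cosine(q_vec, bag_of_words(d)), reverse=True)[:k]

query = "What are the company's plans for the Austin office?"
context = retrieve(query, stored_uploads)[0]

# The retrieved upload gets stuffed into the prompt sent to the model.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

Swap the toy scoring for real embeddings and a vector database and you have the basic shape of the pipeline; the point is that anything sitting in that store can surface again in someone else’s answer later.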
All of your data is probably already available for sale. The information Google has on me alone is staggering. I use Chrome, Gmail, G-Cal, G-Voice, and Google Docs. And you understand Google has access to all of that, right? By using their services I have given them permission to read it, analyze it, and make “recommendations” on flights to New Mexico when I mention going skiing to my daughter. Is that helpful? Is it intrusive? Does it frighten you?
Facebook is a similar clone of me over the last 15 years. What I’ve said. The private chats I’ve had using Messenger (not private). The photos I’ve uploaded, the stories I’ve told about my kids, my divorce, my own battles with depression. And Facebook can pretty much use any of my photos for ADS. They can use my LIKE to promote Ensure, for example, to my friends. I might like Nabisco, but they sold my access to the makers of Ensure. So there you have the insidious web of marketing data: Google, Amazon (every purchase or search you’ve ever done on Amazon), and Apple (a walled garden of sorts, but your data is for sale from Apple too).
Where do we go from here? The bots are already starting to annoy most of us. “No, I don’t want you to give it a try!” And will we be able to turn CLIPPY2024 off in all the Office applications?
Questions. No answers. Watch and pay attention.
The Humanization of AI
As a human-generated artifact, I’ve been working to understand how we can inject HUMAN CREATIVITY into AI PROCESSES. This is my first draft of HUG as an alternative to RAG.
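HUG isn’t spelled out here, so take what follows only as a contrast with the retrieval sketch above, not as the author’s method: a generic human-in-the-loop pattern in Python where a person reviews, rewrites, or rejects the machine-selected context before anything is generated. The function names and flow are my own hypothetical illustration.

```python
# A generic human-in-the-loop sketch (not a definition of HUG):
# a person approves, rewrites, or discards the machine-selected context
# before it ever reaches a generative model.

def machine_selects_context(query: str, docs: list[str]) -> str:
    """Stand-in for automated retrieval; here it naively picks the first doc."""
    return docs[0] if docs else ""

def human_reviews(context: str) -> str:
    """A person decides what the model is allowed to build on."""
    decision = input(f"Proposed context:\n  {context}\nKeep (k), rewrite (r), or drop (d)? ")
    if decision.lower().startswith("r"):
        return input("Your rewrite: ")
    if decision.lower().startswith("d"):
        return ""
    return context

def build_prompt(query: str, context: str) -> str:
    """Only human-approved material makes it into the final prompt."""
    header = f"Context (human-approved):\n{context}\n\n" if context else ""
    return f"{header}Question: {query}"

if __name__ == "__main__":
    docs = ["A paragraph I wrote last year about creativity and machines."]
    query = "Write an intro about human creativity and AI."
    approved = human_reviews(machine_selects_context(query, docs))
    print(build_prompt(query, approved))
```

The design point is simple: the human edit happens before generation, so creativity gets injected upstream instead of being averaged away downstream.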
John McElhenney — LinkedIn
Check out the new generative video tool from OpenAI – Sora – in this breathtaking video: The Ghost in You – Buzzie