AI. Have two letters in combination ever received the hype, press, and speculation that AI presently enjoys? One has to wonder what’s so special about the latest shiny object and exactly who stands to gain the most by seeing artificial intelligence take off.
The pace of AI-related product releases and promotions has been brisk since November 30, 2022, when ChatGPT was first released to the public. Take Jasper, for one. The company’s product leverages AI to generate content for blog articles, social media posts, website copy, and more. Jasper has a $1.5 billion valuation, according to TechCrunch.
“Write blog posts 10x faster using AI, without sacrificing on quality.” That’s their pitch. Ditch the troublesome people who write and replace them with people who query the machine, because it’s fast, efficient, and cheap.
But what about the company’s quality claim? Can a machine be trained to do what I and other writers do with words? Can a machine create meaning and weave words in an artful way that moves people to think, believe, or buy?
ChatGPT “Spits Out” Words
Machines are capable of producing text, but machines are not writers. Writers think before they type and are masters of language, nuance, storytelling arcs, and motifs.
Let’s hear from the good professor, Cal Newport, who teaches computer science at Georgetown University. He has shared his thoughts about AI in The New Yorker and via his email newsletter.
ChatGPT is absolutely not self-aware, conscious, or alive in any reasonable definition of these terms. The large language model that drives ChatGPT is static. Once it’s trained, it does not change; it’s a collection of simply-structured (though massive in size) feed-forward neural networks that do nothing but take in text as input and spit out new words as output. It has no malleable state, no updating sense of self, no incentives, no memory.
Great verb choice. To spit out words is not the same thing as writing them, evaluating them, and changing them on the fly, the way a living writer does. Real writers also think original thoughts and make disparate connections come together in a new way, consciously and tastefully. Try as you might, you can’t prompt a machine to do that. You can automate tasks, but producing high-quality original writing is not a task.
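To make Newport’s point concrete, here is a minimal toy sketch in Python of the loop he describes. The bigram table and its probabilities are invented for illustration; a real large language model swaps the lookup table for billions of fixed weights, but the shape of the process is the same: text in, one new word out, repeat.

import random

# Toy "frozen model": a static bigram table standing in for billions of
# fixed weights. The table and its probabilities are invented for
# illustration; nothing here updates after "training."
MODEL = {
    "the": {"machine": 0.6, "writer": 0.4},
    "machine": {"spits": 1.0},
    "spits": {"out": 1.0},
    "out": {"words": 1.0},
    "writer": {"thinks": 1.0},
}

def next_word(prev):
    # Text in, one new word out: sample from a fixed distribution.
    dist = MODEL.get(prev, {"words": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

def generate(prompt, n=4):
    # The whole trick is this loop: append a word, feed it back in, repeat.
    tokens = prompt.split()
    for _ in range(n):
        tokens.append(next_word(tokens[-1]))
    return " ".join(tokens)

print(generate("the"))  # e.g. "the machine spits out words"

Nothing in that loop evaluates, reconsiders, or revises. It only appends.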
Meshing Mind with Machine
Ethan Mollick, an associate professor of management at Wharton, is one of many who are totally on board with AI. Listen to him wax poetic about ChatGPT:
A writer can easily edit badly written sentences that may appear in AI articles, a human programmer can spot errors in AI code, and an analyst can check the results of AI conclusions. This leads us, ultimately, to why this is so disruptive. The writer no longer needs to write the articles alone, the programmer to code on their own, or the analyst to approach the data themselves. The work is a new kind of collaboration that did not exist last month. One person can do the work of many…
Just imagine. No more lonely writers or coders grinding through the night. Such progress.
Other Areas of AI Interest
Let’s examine what AI is good for, because clearly it’s good for some things. When a task hinges on processing a massive amount of data quickly, for example, AI can be quite helpful. Take a look at this new solution from Samsung for people with autism.
Samsung’s advanced AI app, Unfear, works with Samsung Galaxy Buds to filter known trigger sounds, including sounds personalized to the user, in order to protect, calm and assist people with Autism Spectrum Disorder (ASD) and hearing disorders. Being able to filter certain sounds, as opposed to canceling out noise altogether, allows people with autism to continue to feel more connected in their everyday lives.
Through machine learning, Samsung’s algorithm scans thousands of audio libraries in real time, reducing the volume of specific pre-selected noises that cause stress to the individual user, such as sirens, metro noise, street works, barking dogs, ambulances, or crying children.
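Samsung has not published Unfear’s internals, so the following Python sketch is purely illustrative of the general approach described above: classify each audio frame and turn down only the user’s pre-selected trigger sounds, rather than cancelling noise wholesale. The classify_frame function, the trigger list, and the gain value are all hypothetical stand-ins for the trained model and per-user settings.

import numpy as np

SAMPLE_RATE = 16_000
TRIGGERS = {"siren"}   # hypothetical per-user list of sounds to soften
DUCK_GAIN = 0.2        # volume multiplier applied to trigger frames

def classify_frame(frame):
    # Toy stand-in for the trained model: label a frame "siren" when its
    # dominant frequency falls in a hand-picked band. Unfear's real
    # classifier is not public.
    spectrum = np.abs(np.fft.rfft(frame))
    peak_hz = np.argmax(spectrum) * SAMPLE_RATE / len(frame)
    return "siren" if 700 <= peak_hz <= 1600 else "other"

def filter_stream(frames):
    # Duck only the user's trigger sounds; pass everything else through,
    # which is what distinguishes this from blanket noise cancellation.
    for frame in frames:
        gain = DUCK_GAIN if classify_frame(frame) in TRIGGERS else 1.0
        yield frame * gain

# One second of a synthetic 1 kHz tone stands in for a siren.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
print(next(filter_stream([np.sin(2 * np.pi * 1000 * t)])).max())  # ~0.2

The design choice that matters is in filter_stream: non-trigger sound passes through untouched, which is what keeps the wearer connected to the world around them.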
Heed the Precautionary Principle
According to Wikipedia, the precautionary principle is a broad epistemological, philosophical, and legal approach to introducing innovations with the potential for causing harm when extensive scientific knowledge on the matter is lacking. It emphasizes caution, pausing, and review before leaping into new innovations that may prove disastrous.
Do technologists care enough about the potential for negative impacts when they make and release new tech? Did Steve Jobs have any idea whatsoever that the iPhone would lead to forward head posture, texting while driving, and other sordid screen addictions? I don’t know, but I’d like to see today’s visionaries proceed with caution when it comes to AI.