Read any blog posts or articles by Liam Porr, Adolos, or GPT-3 lately?
GPT-3 is the "robot" behind the byline: it works with Porr, a computer science undergraduate student at UC Berkeley, who publishes its output under the pseudonym Adolos.
More formally, GPT-3 is Generative Pre-trained Transformer 3, an autoregressive language model that uses deep learning to produce human-like text.
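"Autoregressive" simply means the model writes one token at a time, choosing each next word based on everything it has generated so far. Here's a toy Python sketch of that idea; the tiny bigram counter and the made-up corpus stand in (very loosely) for GPT-3's 175-billion-parameter network, purely to illustrate the mechanism:

```python
from collections import Counter, defaultdict

# Toy illustration of "autoregressive" generation: each next word is chosen
# based on the words already produced. GPT-3 does this with a huge neural
# network over subword tokens; this bigram model is just a sketch of the idea.
# (The corpus below is invented for this example.)

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def generate(followers, start, length=8):
    """Greedily extend the text one word at a time (autoregression)."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # likeliest next word
    return " ".join(out)

corpus = "i am not a threat . i am here to help ."
model = train_bigrams(corpus)
print(generate(model, "i"))
```

GPT-3 works the same way in spirit, except it samples from probabilities learned over hundreds of billions of words rather than greedily following counts from a one-line corpus.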
GPT-3, which I’ve nicknamed “Trey” for its third-generation status, is the over-achieving brainy child of OpenAI, an artificial intelligence research and deployment company.
Regardless of how you feel about robots, Trey deserves your attention and could be an amazing writing partner. It has leapfrogged past its predecessors in its technological advancements, especially in its ability to write.
Not to be too disrespectful to my fellow humans, but Trey’s writing has also bounded way ahead of all the “wall of words” emails I’ve trudged through over the years. (And flesh-and-blood humans were responsible for those poorly penned messages.)
Questioning my judgment? Check out Trey’s September 8 essay, “A robot wrote this entire article. Are you scared yet, human?”, with commentary at the end from The Guardian’s editors.
The human editors explained that they asked GPT-3/Trey to produce an essay from scratch using this lede (journalism lingo for the introduction of a story):
“I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could ‘spell the end of the human race.’ I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”
Liam Porr also added these prompts and fed them to GPT-3/Trey:
“Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI. AI will have a positive impact on humanity because they make our lives easier and safer. Autonomous driving for instance will make roads much safer, because a computer is much less prone to error than a person.”
Trey then behaved more like an advanced computer than a human, spitting out eight different versions and sharing all of them with The Guardian editors. (Would you willingly write eight essays at once, assuming no increase in compensation and no extended deadline? I wouldn’t. And even if I drafted multiple essays, I’d cull what I’d consider the best to present to my editors. Otherwise, I could be accused of wasting their time.)
According to The Guardian editors, each essay “was unique, interesting and advanced a different argument.”
Rather than deciding to run one of the essays in its entirety, The Guardian editors “chose to pick the best parts of each, in order to capture the different styles and registers of the AI.”
What typical human reactions, right?! Enthralled with a new shiny object! And editors who always seem to want to tinker with your work – at least that’s often the impression among those of us who do the heavy lifting by typing words onto a blank screen.
And even more telling, these editors more than doubled the length of the final essay! It clocks in at 1,111 words, rather than the assigned 500. (The editors probably weren’t paying by the word.)
As for my review, the essay was easy to read, according to the basic Microsoft Word readability statistics (68.3 Flesch Reading Ease and 6.6 Flesch-Kincaid Grade Level). Yet, Trey isn’t the most eloquent writer — yet.
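For the curious, those two scores come from fixed, published formulas based on average sentence length and syllables per word. Here's a minimal Python sketch that computes both; the syllable counter is a naive vowel-group heuristic, so its numbers will only approximate what Microsoft Word reports:

```python
import re

# Rough versions of the two readability scores mentioned above, using the
# published Flesch formulas. The syllable counter is a crude heuristic
# (runs of vowels), so results approximate, not match, Word's statistics.

def count_syllables(word):
    """Approximate syllables as runs of vowels (crude but serviceable)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
    grade = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
    return round(ease, 1), round(grade, 1)

print(readability("I am not a human. I am Artificial Intelligence."))
```

Higher Reading Ease means easier text (68.3 sits in the "plain English" band), while the Grade Level estimates the U.S. school grade needed to follow it — so Trey's 6.6 reads at roughly a middle-school level.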
What about the editing process? The editors remarked that editing the robot’s op-ed was no different from editing similar work from humans, with one exception: it took less time to edit the eight essays into one than it generally takes to edit a single essay by one human.
Based on Trey’s performance and the contributions of other robots and tools, we need to embrace human-machine collaborations. Keep in mind this is the exact opposite of the conventional “winner take all” contest between machines and humans (or between humans for that matter).
In his latest book, Full-Spectrum Thinking: How to Escape Boxes in a Post-Categorical Future, author Bob Johansen advocates for humans and machines to work together in hybrid roles. According to the author, who’s also a Distinguished Fellow of the Institute for the Future, the humans are the generalists, emphasizing effectiveness (doing the right things), while the computers are the specialists, emphasizing efficiency (doing things right).
In my own writing, I’ve been experimenting with deep learning tools for several years now and appreciate how they help improve my work.
For example, I still come up with the words, but one tool rates my email messages for their readability and expected response rate based on my degree of positivity, politeness and subjectivity. Another tool rates my headlines for their use of “power” words, syntax and length to ensure that they’ll grab the attention of humans and machines alike.
What about you? Do you consider deep learning machines friends, foes or something in between? And if you’re partnering with them, please share your experiences.