Issue 97: Again with the AI Chatbots
The hot technology in the news now is chatbots driven by artificial intelligence. (This specific field of artificial intelligence is known as "large language models," or LLMs.) There were two LLM threads in DLTJ Thursday Threads issue 95 and a whole issue six weeks ago (issue 93). I want to promise that Thursday Threads will not turn into an every-other-issue-on-LLMs newsletter, but so far that is what is catching my eye here at the start of 2023.
- AI generating news articles, and we're not impressed (updating issue 95's AI generating news articles)
- Large language models in scholarly publishing
- Higher education classrooms prepare for ChatGPT
- Adapting to the disruptions of generative artificial intelligence
Also on DLTJ in the past week:
Feel free to send this newsletter to others you think might be interested in the topics. If you are not already subscribed to DLTJ's Thursday Threads, visit the sign-up page. If you would like a more raw and immediate version of these types of stories, follow me on Mastodon where I post the bookmarks I save. Comments and tips, as always, are welcome.
AI generating news articles, and we're not impressed
This is the end, for now, of a saga that started when Futurism found that CNET was publishing articles under a "CNET Money Staff" byline that were generated by an in-house large-language-model (LLM) AI system. (That was covered in Thursday Threads issue 95.) CNET was using the tech to create monotonous articles like "Should You Break an Early CD for a Better Rate?" and "What Is Zelle and How Does It Work?" That might have been the end of it had human editors indeed proofread the articles before publication, as CNET claimed. Either the editors were bad at their jobs, or the proofreading never happened. Oh, and CNET was also using the LLM to rewrite the first few paragraphs of articles every couple of weeks so that they would stay "fresh" in web search engine crawls. CNET admitted to using the LLM before ultimately pausing the use of the technology (for now), as The Verge article describes.
Large language models in scholarly publishing
The publisher Springer Nature has set some early ground rules, even as it admits that it may ultimately be unable to judge whether a large-language-model (LLM) was used in the crafting of an article. Last week, Nature noted that at least four articles had already been submitted citing ChatGPT as a co-author.
Higher education classrooms prepare for ChatGPT
In issue 93, I mentioned a high school teacher who was lamenting the disruption that large-language-models (LLMs) were causing for the classic English essay (and mentioned Ben Thompson's suggestion of "Zero Trust Homework" to combat LLMs). This New York Times article describes more examples of how instructors are coping with LLMs.
Adapting to the disruptions of generative artificial intelligence
We will adapt to new technology, if history is any guide. This Harvard Business Review article is about more than the large-language-models discussed in this issue. (It also covers the generative adversarial network technology that creates "AI art.") But the lessons and cautions are generally applicable. In fact, we may see new professions emerge, like the "prompt engineer" who knows the phrasing and techniques that best elicit the output a client is seeking. (The article describes the efforts of an award-winning AI artist: "he spent more than 80 hours making more than 900 versions of the art, and fine-tuned his prompts over and over.") Or, as was suggested with the "Zero Trust Homework" idea (and seemingly resoundingly ignored by the CNET editors), using LLMs to "generate original content" faster so that "writers now have time to do better research, ideation, and strategy." We will also see these technologies used for "deepfakes" (in still images, video, and audio) and for activities bordering on plagiarism (such as in the scholarly communication thread above).
Come to think of it, this sort of thread is likely to be quite common in upcoming DLTJ Thursday Threads issues.
Alan in snowy weather
We got snow in central Ohio, and Alan just had to check it out. You will note that he is not in the snow, just next to the snow. He has gone full "house cat" after all.