Issue 97: Again with the AI Chatbots
The hot technology in the news right now is chatbots driven by artificial intelligence. (The specific field of artificial intelligence behind them is "large language models," or LLMs.) There were two LLM threads in DLTJ Thursday Threads issue 95 and a whole issue devoted to the topic six weeks ago (issue 93). I want to promise that Thursday Threads will not turn into an every-other-issue-about-LLMs newsletter, but so far that is what is catching my eye here at the start of 2023.
- AI generating news articles, and we're not impressed (an update to issue 95's thread on AI-generated news articles)
- Large language models in scholarly publishing
- Higher education classrooms prepare for ChatGPT
- Adapting to the disruptions of generative artificial intelligence
Feel free to send this newsletter to others you think might be interested in the topics. If you are not already subscribed to DLTJ's Thursday Threads, visit the sign-up page. If you would like a more raw and immediate version of these types of stories, follow me on Mastodon where I post the bookmarks I save. Comments and tips, as always, are welcome.
AI generating news articles, and we're not impressed
CNET will pause publication of stories generated using artificial intelligence “for now,” the site’s leadership told employees on a staff call Friday. The call, which lasted under an hour, was held a week after CNET came under fire for its use of AI tools on stories and one day after The Verge reported that AI tools had been in use for months, with little transparency to readers or staff. CNET hadn’t formally announced the use of AI until readers noticed a small disclosure. “We didn’t do it in secret,” CNET editor-in-chief Connie Guglielmo told the group. “We did it quietly.”
This is the end, for now, of a saga that started when Futurism found that CNET was using a "CNET Money Staff" byline on articles generated with an in-house large language model (LLM) system. (That was covered in Thursday Threads issue 95.) CNET was using the tech to create monotonous articles like "Should You Break an Early CD for a Better Rate?" and "What is Zelle and How Does It Work?" That might have been the end of it if human editors had indeed proofread the articles before publication, as CNET claimed. Either the editors were bad at their jobs, or that claim was not true. Oh, and CNET was also using the LLM to rewrite the first few paragraphs of articles every couple of weeks so that they would stay "fresh" in web search engine crawls. CNET acknowledged its use of the LLM before ultimately pausing the practice (for now), as The Verge article describes.
Large language models in scholarly publishing
Nature, along with all Springer Nature journals, has formulated the following two principles, which have been added to our existing guide to authors (see go.nature.com/3j1jxsw). As Nature’s news team has reported, other scientific publishers are likely to adopt a similar stance. First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility. Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.
Publisher Springer Nature has set some early ground rules, even as it admits that it ultimately may not be able to tell whether a large language model (LLM) was used in crafting an article. Last week, Nature noted that at least four articles had already been submitted citing ChatGPT as a co-author.
Higher education classrooms prepare for ChatGPT
Mr. Aumann confronted his student over whether he had written the essay himself. The student confessed to using ChatGPT, a chatbot that delivers information, explains concepts and generates ideas in simple sentences — and, in this case, had written the paper. Alarmed by his discovery, Mr. Aumann decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students have to explain each revision. Mr. Aumann, who may forgo essays in subsequent semesters, also plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses.
In issue 93, I mentioned a high school teacher who was lamenting the disruption that large language models (LLMs) were causing to the classic English essay (and mentioned Ben Thompson's suggestion of "Zero Trust Homework" as a way to combat LLMs). This New York Times article describes more examples of how instructors are coping with LLMs.
Adapting to the disruptions of generative artificial intelligence
Generative AI models for businesses threaten to upend the world of content creation, with substantial impacts on marketing, software, design, entertainment, and interpersonal communications. These models are able to produce text and images: blog posts, program code, poetry, and artwork. The software uses complex machine learning models to predict the next word based on previous word sequences, or the next image based on words describing previous images. Companies need to understand how these tools work, and how they can add value.
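The "predict the next word" description in that quote can be made concrete with a toy example. The sketch below is a simple bigram frequency model in Python; it is nothing like a production LLM (which uses a deep neural network trained on an enormous corpus), but it illustrates the same basic loop: look at the preceding context, then choose a statistically likely next word.

```python
# A toy illustration of next-word prediction, assuming only the standard library.
# Real LLMs learn these probabilities with neural networks over vast corpora;
# this bigram model just counts which word follows which in a tiny sample text.
from collections import Counter, defaultdict
import random

corpus = (
    "the model predicts the next word based on the previous words "
    "the model learns which word tends to follow which"
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Sample a likely next word, weighted by observed frequency."""
    counts = following[word]
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Each run produces a slightly different continuation because the next word is sampled rather than chosen deterministically, which is also why chatbot responses vary from one attempt to the next.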
Come to think of it, this sort of thread is likely to be quite common in upcoming DLTJ Thursday Threads issues.