Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had harassed someone. Turley’s name was on the list.
The chatbot, created by OpenAI, said Turley had made suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.
A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.
“It was quite chilling,” he said in an interview with The Post. “An allegation of this kind is incredibly harmful.”
Turley’s experience is a case study in the pitfalls of the latest wave of language bots, which have captured mainstream attention with their ability to write computer code, craft poems and hold eerily humanlike conversations. But this creativity can also be an engine for erroneous claims; the models can misrepresent key facts with great flourish, even fabricating primary sources to back up their claims.