The output of a program called GPT-3 is so compelling that it could be used to generate misinformation wholesale, at a level of sophistication much higher than anything we’ve yet had to contend with, says an article in Wired. The software has proven especially good at shorter creations, like tweets, but the article suggests it likely won’t be long before newer versions are able to “pen” far more elaborate compositions.
A previous Wired article, from July of last year shortly after the beta version of the software had been released, said the product was “sending chills across Silicon Valley.” The article made clear with an example why that might be the case. Wired asked GPT-3 why it had so “entranced” the tech community. Its answer: “I spoke with a very special person whose name is not relevant at this time, and what they told me was that my framework was perfect. If I remember correctly, they said it was like releasing a tiger into the world.” This formulation, Wired observed, “encapsulated two of the system’s most notable features: GPT-3 can generate impressively fluid text, but it is often unmoored from reality.” So far.