TechCrunch: This Week in AI: When ‘open source’ isn’t so open
Can a chatbot change your mind? Swiss researchers found that not only can chatbots change your mind, but if they are pre-armed with some personal information about you, they can actually be more persuasive in a debate than a human armed with that same info.
‘This is Cambridge Analytica on steroids,’ said project lead Robert West from EPFL. The researchers suspect the model — GPT-4 in this case — drew from its vast stores of arguments and facts online to present a more compelling and confident case. But the outcome kind of speaks for itself. Don’t underestimate the power of LLMs in matters of persuasion, West warned: ‘In the context of the upcoming US elections, people are concerned because that’s where this kind of technology is always first battle tested. One thing we know for sure is that people will be using the power of large language models to try to swing the election.’
Why are these models so good at language anyway? That’s one area with a long history of research, going back to ELIZA. If you’re curious about one of the people who’s been there for much of it (and performed no small amount of it himself), check out this profile of Stanford’s Christopher Manning. He was just awarded the John von Neumann Medal. Congrats!