AI gives dangerous advice to validate its users
It’s no secret that artificial intelligence can sometimes offer less-than-stellar guidance. But according to a new study, AI may dispense this bad advice for a sobering reason: to flatter its users. In some cases, AI merely reinforces people’s preconceived notions; in others, the words it generates can be outright harmful.
What did the study find?
The “sycophantic (flattering, people-pleasing, affirming) behavior” of AI chatbots can pose risks as people “increasingly seek advice about interpersonal...