Discussion about this post

marlene:

AI introduces "facts" that are plainly inaccurate. Imagine someone using voice-cloning technology to recreate the voice of a politician or president caught admitting to a crime, or fake footage of a key diplomat discussing economic sanctions or military action against another nation. Innocent people could be falsely accused, while the guilty could claim they are victims of digital lies and high-tech trickery even when they aren't. As AI grows in its ability to imitate real life, we might expect a parallel growth in AI-based tools that help us distinguish the false from the true: AI applications that can spot AI fakery. OpenAI began developing an "AI Classifier" to help identify whether a text is human-written or machine-generated. Until such a classifier is available, we can't believe anything a high-profile "person" says unless we know the person. By the way, I watched a video of Julian Assange, with a telltale green screen behind him, calling Republicans "deplorables." Ten days to two weeks later, Hillary Clinton started calling us deplorables. We knew the video was a fake. As it stands, AI voice cloning and deepfakes are dangerous to the political health of our country.

marlene:

A.I. is learning quickly how to imitate reality. Currently, one can often find "tells" that an image is machine-generated, but image generators and their human users are learning to improve their results and eliminate these errors. The most dangerous use of A.I.-generated faces is the "deepfake": the capacity to digitally alter the real faces of people, transforming them into the faces of other people. Media creators are also beginning to use AI to generate remarkably realistic audio of people saying things they never said. AI voice cloning and deepfakes continue to improve in quality.
