The U.S. Federal Trade Commission moved to put new rules in place around impersonation, citing the rising threat of scams enabled by generative artificial intelligence.
The agency is seeking public comment on a proposed rule that would make companies liable if they “know or have reason to know” their technology, including tools used to make AI-generated content, “is being used to harm consumers through impersonation,” according to an FTC statement Thursday.
The FTC also said it finalized a rule covering impersonations of businesses and the government, such as using business logos in scam emails or sending emails from what appears to be a government address. Under the rule, the commission can file court cases intended to make scammers pay back money made from such scams.
The FTC said that complaints about impersonation fraud are surging, and that it is concerned AI “threatens to turbocharge this scourge.”
“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale,” FTC Chair Lina Khan said in a statement, adding that voice cloning and other AI tools have made scams more feasible.
The rapid development of generative AI technology, which can produce voice, video or text in a variety of styles, has dazzled Silicon Valley. At the same time, the technology has raised privacy and security concerns because of its ability to impersonate individuals, as in a robocall that mimicked President Joe Biden.
©2024 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.