A small set of accounts that likely originated in China used OpenAI's models to request information about sensitive data [File]
| Photo Credit: REUTERS

OpenAI said it banned accounts linked to Chinese law enforcement, romance scammers and influence operations, including a smear campaign against Japan's first female prime minister, in a report detailing the misuse of its ChatGPT technology.

The company said several accounts used its chatbot, alongside other tools such as social media accounts, to carry out cybercrimes while posing as dating agencies, law firms and U.S. officials, among others.

A small set of accounts that likely originated in China used OpenAI's models to request information about U.S. persons, online forums and federal building locations, and sought guidance on face-swapping software.

The same accounts also generated English-language emails to state-level U.S. officials and policy analysts working in business and finance, inviting targets to participate in paid consultations.

OpenAI said it banned a ChatGPT account linked to an individual associated with Chinese law enforcement whose activity involved orchestrating a covert influence operation targeting Japanese Prime Minister Sanae Takaichi.

A cluster of ChatGPT accounts used the chatbot to run a dating scam targeting Indonesian men and likely defrauded hundreds of victims a month, according to OpenAI.

OpenAI said the scam used ChatGPT to generate promotional text and ads for a fake dating service, luring users to join the platform and pressuring targets to complete several tasks requiring large payments.

Several accounts used OpenAI's models to pose as law firms and impersonate real attorneys and U.S. law enforcement, targeting fraud victims, OpenAI said.
