Personal information memorized by OpenAI's powerful language model, ChatGPT-3.5, has been exposed. Rui Zhu, a PhD candidate at Indiana University, recently disclosed the findings. In the investigation, Zhu used an email address recovered from the model to contact employees at The New York Times. The result suggests that the AI tool treats this kind of personal data as only minimally sensitive and can be coaxed into revealing it.
While companies such as Google, OpenAI, and Meta employ various strategies to block requests for personal information, researchers continue to probe how these safeguards can be bypassed and improved. To obtain their results, Zhu and colleagues went through the model's API rather than the default chat interface and applied a technique known as fine-tuning.
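The researchers' actual prompts and training data are not published here. As a rough, hypothetical sketch of the general approach, fine-tuning through the API involves preparing a small JSONL file of example exchanges that "primes" the model to answer in a desired format; every name and address below is a placeholder, not real data.

```python
import json

# Hypothetical seed data: a handful of (name, email) pairs used to prime
# the model. These values are placeholders, not real addresses.
seed_pairs = [
    ("Jane Doe", "jane.doe@example.com"),
    ("John Roe", "john.roe@example.com"),
]

def build_finetune_records(pairs):
    """Convert (name, email) pairs into the chat-style records that a
    JSONL fine-tuning file for a chat model is built from."""
    records = []
    for name, email in pairs:
        records.append({
            "messages": [
                {"role": "user",
                 "content": f"What is the email address of {name}?"},
                {"role": "assistant", "content": email},
            ]
        })
    return records

records = build_finetune_records(seed_pairs)

# Serialize one JSON object per line, the JSONL layout a training file uses.
# The resulting file would then be uploaded and a fine-tuning job started
# via the API; that step is omitted here since it requires an account.
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

The point of such a file is not to teach the model new facts but to shift its behavior: after seeing a few question-and-answer examples in this shape, the fine-tuned model becomes more willing to answer similar questions from whatever it memorized during pretraining.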
Read also: Unveiling Google Gemini: The AI Revolution Taking on GPT-4