
Ethereum co-founder Vitalik Buterin has voiced his concerns over potential security risks associated with OpenAI’s ChatGPT, which could lead to the leakage of personal user data.
A significant threat to personal user data was identified by software engineer Eito Miyamura in the wake of a recent update to ChatGPT.
In a post on X, Miyamura shared that the updated AI bot can now access a user’s Gmail, Google Calendar, SharePoint, among other services, thereby posing a risk of private email data leakage.
The team demonstrated this threat through an experiment that showed how private user data could be extracted from these platforms.
Reacting to this, Buterin dismissed the idea of “AI governance” as “naive.” He cautioned that if AI is used to distribute funds for contributions, it could be exploited by cybercriminals to siphon off users’ funds.
Also Read: Elon Musk’s xAI Hits Ex-Employee With Lawsuit Claiming Trade Secrets Ended Up At OpenAI
Buterin suggested an alternative “info finance” approach, a system where AI models are open to security checks. This system would encourage public contribution of models, which would then be subject to a spot-check mechanism carried out by a human jury.
Buterin’s response to the security warning highlights the ongoing debate around AI governance and the potential risks associated with AI integration into personal data platforms.
His proposed “info finance” approach suggests a shift towards a more open and participatory model of AI security, which could potentially mitigate such risks.
This incident underscores the importance of robust security measures in the rapidly evolving field of artificial intelligence.
Read Next
Sam Altman Warns Users Not To Blindly Trust ChatGPT Despite Its Rising Fame, Says ‘AI Hallucinates, It Should Be The Tech That You Don’t Trust That Much’
Image: Shutterstock/Alexey Smyshlyaev