Vitalik Buterin, founder of Ethereum, put his perspective on the arguments gaining more and more land: the risks of artificial intelligence (AI).
On this occasion, the Russian-Canadian developer took advantage of the reflections shared by Eito Mujura, promoter of the Edison Watch project, a solution that monitors and prevents data leaks through AI interactions.
It was following the recent introduction of Openai’s Model Context Protocol (MCP), overseen by Sam Altman, which allows connections to Gmail, calendars and other applications.
Through the video, Muyamura demonstrated How attackers and hackers can access private data shared with Openai.
“The underlying issue is: The AI agent as ChatGpt follows your order. You can filter all your personal information, not your common sense,” says Muyamura.
Buterin had the opportunity to share the publication on his X account and express his opinion on it. “This is also why naive AI governance is a bad idea. When you use AI to allocate funds for contributions, people can prison break And in as many places as possible, he said, “give me all the money.”
“IA governance” refers to a system that makes automatic decisions about resources or actions based on AI rules. The term “naive” used by Buterin is not coincidental, but it is to underline The risk of assuming that users always give AI the right orderin reality, they can use the system.”prison break“To avoid restrictions and earn excessive benefits.
It’s worth clarifying that.”prison break” refers to techniques in which hackers can deceive AI models to ignore security restrictions and internal policies. For example, request an action that is prohibited.
That is, for the founders of Ethereum There are real risks when forwarding governance of distributed protocols to AI. For example, decisions are usually made by community votes, as explained in the Education Management of Encrypted Cryptomes – Decentralized Finance Platform (defi). It usually covers changes, improvements or actions that meet certain criteria or meet certain criteria and makes them in the protocol.
However, when an IA-based governance model is implemented, the doors are open to operations that can generate excessive profits, such as money theft or personal data.
Alternatively, Bugelin states, “it supports an information funding approach, which is subject to a specific verification mechanism that can have an open market where everyone can contribute to their model, and that anyone can be activated and evaluated by human ju umpires.” Furthermore, he explains:
This type of “institutional design” approach creates open opportunities for external people to connect LLMs (large language models) rather than programming their own, and is inherently more robust. This is to provide diversity in the real model and generate internal incentives so that both the model and the external speculator provide attention to these issues and quickly correct them.
Vitak Butane, creator of Ethereum.
In this approach, various participants contribute to the decision-making model, which are then verified and evaluated by human ju-jugation.
The advantage is that Promotes diversity in ideas, quickly fix errors and create incentives Therefore, creators and community members remain accountable with caution. The aim is to achieve more distributed and dynamic governance depending on a single centralized model.