“If things go badly with artificial intelligence, it could end very badly.”

New York. Create a new government agency that assigns licenses to develop large AI models (such as ChatGPT or Google's Bard), with the power to revoke them if those models fail to meet criteria set by the political authority. Define safety criteria for these models, assessing their ability to self-replicate, to escape their developers' control by acting autonomously, or to remain vulnerable to outside manipulation. Entrust independent experts with verifying how these models perform. They sound like the points of a regulatory program drafted by members of Congress who have tried, in vain, for years to put limits on the harmful uses of rapidly evolving digital technologies. Instead, these proposals were put forward by Sam Altman, head of OpenAI and father of ChatGPT, during a hearing before the Senate in Washington.

Altman said he is convinced that the new AI technologies will offer humanity benefits that far outweigh the risks and the negative effects, such as jobs lost to automation, but he did not underestimate the dangers at all: "If this technology goes wrong, it can go quite wrong. We have to say it out loud and work with the government to prevent that from happening."

It is not yet clear to what extent, and how, this willingness to collaborate will translate into effective and measurable interventions. Altman has been on a month-long "goodwill tour" of political meetings, culminating two weeks ago in a White House summit at which the head of OpenAI was the protagonist. Some have noted, for example, that he avoided making commitments on two points that many experts consider essential: transparency about how these models are trained (ChatGPT's training draws on everything circulating on the network, including personal data) and a commitment not to use professional content, artwork, or other intellectual property covered by copyright for training. During the hearing, Altman acknowledged that the owners of intellectual works have a right to see the economic value of their contribution recognized, but he offered no concrete solutions, only reassurances.


The climate between politics and the digital corporations has changed, however. After years in which Congress put the heads of the social networks "on trial", from Facebook to Google, while Mark Zuckerberg and others denied or minimized any responsibility, we are now witnessing an exchange between entrepreneurs who do not hide the risks and who call for rules, and politicians who until yesterday treated them as defendants but today address them almost as teachers.

The main focus of the hearing was the risk of AI altering electoral dynamics, especially in view of the 2024 US presidential elections. Here the chief warning concerns the ability of artificial intelligence to multiply and strengthen disinformation techniques based on fake videos and audio recordings that grow technically more convincing month by month. And since imagination knows no limits, Democratic Senator Amy Klobuchar cited attempts to influence the vote not only by spreading false news about the candidates but also, for example, by giving voters wrong information about the location of polling places, or by discouraging them with talk of long lines, disruptions, and demands made by election officials.

It is a mistake that lawmakers today admit they made, and one that continues to have ever more dire consequences, precisely because of the widespread use of automated fake accounts. According to an investigation by the cybersecurity company Imperva, in 2022 human participation in online activity was the lowest ever recorded: almost half of all web traffic (47.4%) was produced by bots. Two-thirds of this automated traffic came from "bad bots", malicious automated systems that pose as humans to steal private data or pursue other harmful goals, while the rest came from legitimate commercial automation such as call-center response systems.
