
OpenAI says AI should be regulated like nukes. Really?

In theory, it was the ideal day for OpenAI to release a blog post titled "Governance of Superintelligence." In practice, the post and its timing just proved how far the folks at the forefront of the AI world are from regulating their technology, or even from understanding its proper context.

The company behind one of the best-known AI image generators (Dall-E) and the best-known AI chatbot (ChatGPT) published the post because it wants to be seen as a group of sober adults, taking both the promise and the threat of its technology seriously. And look what just happened: AI-generated images of the Pentagon and White House on fire sent a brief shock wave through the stock market. What better time for the market leader to showcase sobriety?

There is a kind of "trust and safety" contest alongside the AI arms race between Microsoft, ChatGPT's main ally, and Google. At the Google I/O keynote two weeks ago, excitable announcements about Bard integration were tempered by a segment on "Responsible AI." Last week, OpenAI CEO Sam Altman gave Congressional testimony looking like the humbler, more human answer to Mark Zuckerberg. Next up: OpenAI co-founder Greg Brockman, fresh from his own grilling at the TED conference, is taking his responsible adult roadshow to Microsoft Build.



But what did "Governance of Superintelligence," co-authored by Altman and Brockman, actually have to say for itself? At under a thousand words, not much — and the lack of specificity could harm their cause.

Here's the TL;DR: AI carries risks. So did nuclear power. Hey, maybe we should have a global AI regulatory body similar to the International Atomic Energy Agency (IAEA)! But also there should be a lot of public oversight of this agency, and also the kind of stuff OpenAI is doing right now isn't "in the scope" of regulation, because it's more about helping individuals.

Besides, someone is probably going to create a superintelligence sooner or later, and stopping them would be "unintuitively risky," so "we have to get it right." The end.


There is a clear self-serving purpose to OpenAI amping up the AI threat like this. You want [insert your preferred bad actor here] to get to superintelligence — also confusingly known as AGI, for Artificial General Intelligence — first? Or would you rather support the company so transparent about its technology, they've got "open" in the name?

Ironically, though, OpenAI hasn't been fully open about the language models it has been using to train its chatbots since 2019. The company is reportedly preparing an open-source model, but it's very likely to be a pale shadow of GPT. OpenAI was a nonprofit; now it's very much a for-profit with a $30 billion valuation. That may explain why the blog post read more like marketing pablum than a white paper.

AI isn't the real threat. (Yet.)

When the "Pentagon explosion" AI images hit the internet, it should have been a gimme for OpenAI. Altman and co. could have spent a few more hours updating their prewritten post to mention AI tools that can help us sift fake news from the real thing.

But that might draw attention to an inconvenient fact for a company looking to hype AI: the problem with the images wasn't AI. They weren't especially convincing. Fake pictures of fires at famous landmarks are something you could create yourself in Photoshop. Local authorities quickly confirmed the explosions hadn't happened, and the stock market corrected itself.

Really, the only problem was that the images went viral on a platform where all the trust and safety features have been removed: Twitter. The account that initially spread them was called "Bloomberg Feed." And it was paying $8 a month for a blue checkmark, which no longer means an account is verified.

In other words, Elon Musk's wholesale destruction of Twitter verification allowed an account to impersonate a well-known news agency. The account spread a fear-inducing picture that was picked up by Russian propaganda services like RT, from whom Musk has also removed the "state media" label.

It is doubtful whether we will ever get an international agency for AI that is as effective as the IAEA. The technology may move too fast, and be too poorly understood, for regulators. But the main threat it poses for the foreseeable future — even according to "AI godfather" Geoffrey Hinton, the most prominent doomsayer — is that a flood of AI-generated news and images will mean the average person "will not be able to tell what is true any more."

But in this particular test, the tripwires of fake news worked — no thanks to Musk. What we need more urgently is an international agency that can rein in conspiracy-spewing billionaires with massive megaphones. Perhaps Altman, who has previously called Musk a "jerk" and called out his fibs about OpenAI, could write a sober adult blog post about that.

Topics: Artificial Intelligence
