
Deepfake technology is evolving, but can the internet keep up?

One of the coolest videos I’ve seen in the past year is a YouTube clip from The Late Show with David Letterman featuring actor and comedian Bill Hader.

Or... was that actually Tom Cruise? It's hard to tell sometimes because they keep seamlessly switching back and forth.

So, what exactly are you watching here? Well, someone took an unedited clip of Letterman interviewing Hader and then swapped in Cruise's face using artificial intelligence.


The video is what is known as a deepfake, or manipulated media created through the power of AI.

Deepfakes can be as straightforward as face-swapping one actor onto another in a clip from your favorite movie. Or, you can have an impersonator provide the audio, sync it to generated mouth movements, and create an entirely new moment for the targeted individual. This Obama deepfake, voiced by Jordan Peele, is a perfect example of that usage.

While the manipulated media is ultimately generated by AI, the human behind it still needs time and patience to craft a good quality deepfake. In the case of that altered Letterman clip, the creator of the video had to take the original clip and feed it to a powerful cloud computer alongside a slew of varying still images of Tom Cruise’s face.

During this time, the computer is, in essence, studying the images and video. It’s “learning” how best to swap Hader’s and Cruise’s faces and output a flawless piece of manipulated video. Sometimes the AI takes weeks to perfect the deepfake. It can be expensive, too: you’ll need a computer with some pretty powerful specs, or you’ll have to rent a virtual machine in the cloud, to pull off high-quality deepfake creation.
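The article doesn’t spell out exactly what that cloud computer is doing, but the most common deepfake pipelines train an autoencoder: a single shared encoder learns a compact representation of any face, and one decoder per identity learns to rebuild that identity from it. The sketch below is purely illustrative, written in PyTorch with random tensors standing in for real face crops; the layer sizes, learning rate, and variable names are assumptions for the sake of the example, not anything the Letterman video’s creator, RefaceAI, or Doublicat has published.

```python
# Illustrative sketch only: shared encoder + one decoder per identity,
# the classic "deepfake" autoencoder setup. Random tensors stand in for
# the face crops described in the article (Hader frames, Cruise stills).
import torch
import torch.nn as nn

def make_encoder():
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        nn.Flatten(), nn.Linear(64 * 16 * 16, 256),            # latent face code
    )

def make_decoder():
    return nn.Sequential(
        nn.Linear(256, 64 * 16 * 16), nn.Unflatten(1, (64, 16, 16)),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )

encoder = make_encoder()
decoder_a = make_decoder()  # reconstructs identity A (e.g., Hader frames)
decoder_b = make_decoder()  # reconstructs identity B (e.g., Cruise stills)

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch of 64x64 face crops
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # real training runs for hours or days, not 100 steps
    optimizer.zero_grad()
    # Each decoder learns to rebuild its own identity from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The "swap": encode frames of identity A, decode with identity B's decoder.
swapped = decoder_b(encoder(faces_a))
```

The swap at the end is the whole trick: frames of one person are pushed through the shared encoder, then decoded by the other identity’s decoder, so the result keeps the original pose and lighting but wears the new face. Production tools wrap face detection, alignment, and blending around this core, which is where most of the time and compute the article mentions actually goes.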

But, that’s quickly changing. Big tech companies are jumping on the trend and developing their own software so that users can create deepfake content. And now, deepfakes are becoming easier to create.

Earlier this week, the face-swapping mobile app Doublicat launched. Developed by artificial intelligence company RefaceAI, Doublicat is perhaps the simplest media manipulation tool yet. Users just need to download the app, snap a selfie, and choose from hundreds of GIFs portraying popular scenes from movies, TV shows, and the internet. Within seconds, your short, looping deepfake GIF is ready to share.

The GIFs are fairly simple and likely chosen based on which image would be easiest for the app to spit out an accurate face swap. It’s far from perfect, but it’s extremely fast. And what it can do with even low-quality selfies is impressive. In time, the technology is only going to get even better.

Doublicat told Mashable that “updates will be coming to allow users to upload their own GIFs, search for GIFs in-app, and use pictures from their phone's camera roll.”

Doublicat may be the simplest media manipulation tool in the U.S., but similar apps exist in international markets.

“Zao, Snap’s new Cameos, Doublicat — face swapping is becoming a commodity thanks to creative entrepreneurs from China and Ukraine,” said Jean-Claude Goldenstein, founder and CEO of CREOpoint, a firm that helps businesses handle disinformation. Goldenstein points out that Snapchat recently acquired AI Factory, the company behind its Cameos feature, for $166 million.

TikTok, the massively popular video app owned by China-based ByteDance, has reportedly already developed a yet-to-launch deepfake app as well.

But, it’s not all fun and games.

“A deepfake can ruin a reputation in literally seconds, so if public figures don’t start prepping for these threats before they hit, they’re going to be in for a rude awakening if they ever have the misfortune of being featured in one of these videos,” Marathon Strategies CEO Phil Singer told Mashable. Singer’s PR firm recently launched a service specifically to deal with disinformation via deepfakes.

To understand the concern behind this seemingly harmless tech that’s been used to create funny videos, one needs to understand how deepfakes first rose to prominence.

In late 2017, the term “deepfake” was coined on Reddit to refer to AI-manipulated media. The best examples at the time were some funny Nicolas Cage-related videos. But then the fake sex videos took over. Using deepfake technology, users started taking their favorite Hollywood actresses and face-swapping them into adult films. Reddit moved to ban the pornographic use of deepfakes in 2018 and expanded its deepfake policy just last week.

In an age of fake news and disinformation easily spread via the internet, it doesn’t take long to see how fake pornographic videos can ruin one’s life. Factor in that we’re now in a presidential election year, the first since coordinated disinformation campaigns ran amok in 2016, and you’ll understand why people are worried about malicious uses of this growing technology.

“We’ve gone from worrying about sharing our personal data to now having to worry about sharing our personal images,” says Singer. “People need to be extra judicious about sharing images of themselves because one never knows how they will be used.”

“It is only a matter of time before they become as ubiquitous as any of the social media tools people currently use,” he continued.

Most alarming is that some of the world’s biggest tech companies are still wondering how to combat nefarious deepfakes.

Just this month, Facebook announced its deepfake ban. One problem, though: How do you spot a deepfake? It's an issue the largest social networking platform on the planet still hasn’t been able to properly solve.

Facebook launched its Deepfake Detection Challenge to work with researchers and academics on solving this problem, but we’re not there yet, and we likely never will be one hundred percent.

According to Facebook’s Deepfake Detection website: “The AI technologies that power tampered media are rapidly evolving, making deepfakes so hard to detect that, at times, even human evaluators can’t reliably tell the difference.”

“That’s a serious problem since AI can’t reliably detect fake news or fact check fast enough,” explains CREOpoint’s Goldenstein.

During our exchange, Goldenstein sent me the following quote: “A lie is heard halfway around the world before the truth has a chance to put its pants on."

While looking up the quote’s origin, I discovered that different versions of it have often been misattributed to Winston Churchill over the years.

If one really wanted to double down on the belief that Churchill did say this, it seems like it wouldn’t be all that difficult to create a deepfake that “proves” he did.

Topics: Artificial Intelligence, Facebook, Social Media
