
AI meets healthcare: How a children's hospital is embracing innovation

While hospitals are accustomed to dealing with most things viral, they are already starting to study an entirely new kind of viral phenomenon: generative AI in the workplace.

Highly ranked healthcare facilities like Boston Children’s Hospital, connected as they are to major research institutions, are among the most prominent customer-facing operations in the healthcare industry.

And given that healthcare represents about 18 percent of the U.S. GDP, of course these organizations will want to take advantage of the latest technology that promises a revolution in productivity.

Boston Children’s Hospital, consistently ranked among the best children’s hospitals in the U.S., employs a “Chief Innovation Officer,” John Brownstein, an epidemiologist who runs a division called the Innovation & Digital Health Accelerator. Brownstein’s past work combining technology and health includes the creation of a site called “Flu Near You,” which was repurposed during the early days of the pandemic as “Covid Near You” for obvious reasons, according to New York Times Magazine. It still exists in a more general form as “Outbreaks Near Me.” It’s an unsettlingly useful website for tracking pathogens.  

And now Brownstein is turning his attention to AI.

First things first: from Brownstein’s standpoint, there’s no need to lay anyone off just because AI is invading healthcare. “This is not meant as a replacement for the human,” Brownstein told Mashable in an April interview. “This is an augmentation. So there's always a human in the loop.”

SEE ALSO: What not to share with ChatGPT if you use it for work

In April, as prompt engineering became a buzzworthy new tech job, Boston Children’s tipped its hand that change was afoot by posting a job ad seeking a prompt engineer of its own. In other words, the hospital was hiring a specialist to train AI language models that can improve hospital operations, and in theory, this person is supposed to improve conditions for hospital staff.

According to Brownstein, that’s because his department has a directive to reduce “provider burnout.” Boston Children’s has what he called “an internal team that builds tech.” Their job, he explained, is to locate places in “the world of work” where technology can play a role, but isn’t yet. They literally sit in “pain points” within Boston Children’s Hospital, and devise ways to, well, ease the pain.

What this means in practice is a bit mind-bending.

Easing the pain with AI 

One “pain point” in any hospital is directing patients from point A to point B, a tough exercise in communication that can include speed bumps like confusion due to illness or stress, or language barriers. “Already out of the gate, we can query ChatGPT with questions about how to navigate our hospital,” Brownstein said. “It's actually shocking, what these are producing without any amount of training from us.”  ChatGPT — and not some future version but the one you already have access to — can tell you how to get around “not just our hospital, but any hospital,” according to Brownstein.

So it’s more than realistic to imagine a machine kiosk where patients can receive useful answers to questions like, Brownstein offered, “Where can I pray?” And it’s probably also the hope of many healthcare workers that they don’t have to be stopped in their tracks with questions like that. Not everyone is a people person.


But Brownstein also has ideas for new ways providers can use patient data thanks to AI.

The idea that AI will be involved in the processing of actual patient data set off alarms for Mildred Cho, professor of pediatrics at Stanford’s Center for Biomedical Ethics. After reviewing the prompt engineer job ad, she told Mashable, “What strikes me about it is that the qualifications are focused on computer science and coding expertise and only ‘knowledge of healthcare research methodologies’ while the tasks include evaluating the performance of AI prompts.”

“To truly understand whether the outputs of large language models are valid to the high standards necessary for health care, an evaluator would need to have a much more nuanced and sophisticated knowledge base of medicine and also working knowledge of health care delivery systems and the limitations of their data,” Cho said. 

SEE ALSO: ChatGPT-created resumes are dealbreakers for recruiters

Cho further described a nightmare scenario: What if the prompt engineer helps retrain a language model, or tweak an automated process, based on faulty assumptions? What if, for instance, they train racial bias or other persistent mistakes into it? Given that all data collected by people is inherently flawed, a shiny new process could be built on a foundation of errors.

“Our prompt engineer is not going to be working in a bubble,” Brownstein said. His team devotes time, he said, to worrying about “what it means to have imperfect data.” He was confident that the process wouldn’t be: “put a bunch of data in and, like, hope for the best.”

Using AI to customize discharge instructions

But lest we forget, “put in a bunch of data and hope for the best” is an apt description of how large language models work, and the results are often, well, awful. 

For an example where the data needs to be right-on-the-money, look no further than Brownstein’s absolutely fascinating vision for the discharge instructions of the future. You’ve probably received — and promptly thrown away — many discharge instructions.


Perhaps you got a bump on the head in a car accident. After getting checked out at the hospital and being cleared to go home, you likely received a few stapled pages of information about the signs of a concussion, how to use a cold compress, and how much ibuprofen to take. 

With an LLM trained on your individual patient information, Brownstein said, the system knows, among other things, where you live, so it can tell you where to go to buy your ibuprofen, or not to buy ibuprofen at all, because you’re allergic. But that’s just the tip of the iceberg.

“You're doing rehab, and you need to take a walk. It's telling you to do this walk around this particular area around your house. Or it could be contextually valuable, and it can modify based on your age and various attributes about you. And it can give that output in the voice that is the most compelling to make sure that you adhere to those instructions.”

New tech historically has found its way into hospitals quickly 

David Himmelstein, a professor in the CUNY School of Public Health and a prominent critic of the U.S. for-profit healthcare system, said that while he had heard about potential uses of AI in hospitals that concerned him, this one didn’t strike him as “offensive.” He noted that discharge instructions are “almost boilerplate” anyway, and seemed unconcerned about the potential change.

However, he worries about what such systems could mean for privacy. “Who gets this information?” he wondered. “Sounds like it puts the information in the hands of Microsoft — or Google if they use their AI engine.” 

These are major concerns for hospitals as such systems enter widespread use, but Brownstein said that Boston Children’s Hospital, for its part, “is actually building internal LLMs,” meaning it won’t rely on companies like Google, Microsoft, or ChatGPT parent company OpenAI. “We actually have an environment we're building, so that we don't have to push patient data anywhere outside the walls of the hospital.”

Himmelstein, however, pointed out that systems for automating hospitals are far from new, and have not created bureaucracy-free paradises where work runs smoothly and efficiently, even though companies have been making such promises since the 1960s. He provided a fascinating historical document to illustrate the point: an IBM video from 1961 that promises electronic systems that will slash bureaucracy and “eliminate errors.”

But in the month since Mashable first spoke to Brownstein, the AI situation has progressed at Boston Children’s Hospital. In an email, Brownstein reported “a ton of progress” on large language models, and an “incredible” prompt engineer in the process of being onboarded.

Topics: Artificial Intelligence, Health
