

AI meets healthcare: How a children's hospital is embracing innovation

While hospitals are accustomed to dealing with most things viral, they are already starting to study an entirely new kind of viral phenomenon: generative AI in the workplace.

Highly ranked healthcare facilities like Boston Children’s Hospital, connected as they are to major research institutions, are some of the most prominent customer-facing operations in the healthcare industry.

And given that healthcare represents about 18 percent of the U.S. GDP, of course these organizations will want to take advantage of the latest technology that promises a revolution in productivity.



Boston Children’s Hospital, consistently ranked among the best children’s hospitals in the U.S., employs a “Chief Innovation Officer,” John Brownstein, an epidemiologist who runs a division called the Innovation & Digital Health Accelerator. Brownstein’s past work combining technology and health includes the creation of a site called “Flu Near You,” which was repurposed during the early days of the pandemic as “Covid Near You” for obvious reasons, according to New York Times Magazine. It still exists in a more general form as “Outbreaks Near Me.” It’s an unsettlingly useful website for tracking pathogens.  

And now Brownstein is turning his attention to AI.

First things first, according to Brownstein: there’s no need to lay anyone off just because AI is invading healthcare. “This is not meant as a replacement for the human,” Brownstein told Mashable in an April interview. “This is an augmentation. So there's always a human in the loop.”

SEE ALSO: What not to share with ChatGPT if you use it for work

In April, as prompt engineering became a buzzworthy new tech job, Boston Children’s tipped its hand to the public that change was afoot when it posted a job ad seeking a prompt engineer of its own. In other words, the hospital was hiring a specialist to train AI language models that can improve hospital operations, and in theory, this person is supposed to improve conditions for hospital staff.

According to Brownstein, that’s because his department has a directive to reduce “provider burnout.” Boston Children’s has what he called “an internal team that builds tech.” Their job, he explained, is to locate places in “the world of work” where technology can play a role, but isn’t yet. They embed themselves in “pain points” within Boston Children’s Hospital, and devise ways to, well, ease the pain.

What this means in practice is a bit mind-bending.

Easing the pain with AI 

One “pain point” in any hospital is directing patients from point A to point B, a tough exercise in communication that can include speed bumps like confusion due to illness or stress, or language barriers. “Already out of the gate, we can query ChatGPT with questions about how to navigate our hospital,” Brownstein said. “It's actually shocking, what these are producing without any amount of training from us.”  ChatGPT — and not some future version but the one you already have access to — can tell you how to get around “not just our hospital, but any hospital,” according to Brownstein.

So it’s more than realistic to imagine a machine kiosk where patients can receive useful answers to questions like, Brownstein offered, “Where can I pray?” And it’s probably also the hope of many healthcare workers that they don’t have to be stopped in their tracks with questions like that. Not everyone is a people person.
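To make the kiosk idea concrete, here is a minimal sketch of what a wayfinding assistant built on a general-purpose chat model could look like. It assumes the OpenAI Python client; the model name, system prompt, and sample question are illustrative, not details of Boston Children’s actual setup.

```python
# Minimal sketch of an LLM-backed wayfinding kiosk. Model name, system prompt,
# and the sample question are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a wayfinding kiosk in a children's hospital. Answer visitors' "
    "questions about getting around the building briefly and politely. "
    "For medical questions, direct the visitor to a staff member."
)

def ask_kiosk(question: str) -> str:
    """Send one visitor question to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_kiosk("Where can I pray?"))
```

Grounding the answers in the hospital’s actual floor plan, rather than the model’s general knowledge, would be the obvious next step, but even this bare query illustrates Brownstein’s point that an off-the-shelf model already handles navigation-style questions.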


But Brownstein also has ideas for new ways providers can use patient data thanks to AI.

The idea that AI will be involved in the processing of actual patient data set off alarms for Mildred Cho, professor of pediatrics at Stanford’s Center for Biomedical Ethics. After reviewing the prompt engineer job ad, she told Mashable, “What strikes me about it is that the qualifications are focused on computer science and coding expertise and only ‘knowledge of healthcare research methodologies’ while the tasks include evaluating the performance of AI prompts.”

“To truly understand whether the outputs of large language models are valid to the high standards necessary for health care, an evaluator would need to have a much more nuanced and sophisticated knowledge base of medicine and also working knowledge of health care delivery systems and the limitations of their data,” Cho said. 

SEE ALSO: ChatGPT-created resumes are dealbreakers for recruiters

Cho further described a nightmare scenario: What if the prompt engineer helps retrain a language model, or tweak an automated process, on the basis of faulty assumptions? What if they train racial bias, or other persistent errors, into it? Given that all data collected by people is inherently flawed, a shiny new process could be built on a foundation of errors.

“Our prompt engineer is not going to be working in a bubble,” Brownstein said. His team devotes time, he said, to worrying about “what it means to have imperfect data.” He was confident that the process wouldn’t be “put a bunch of data in and, like, hope for the best.”

Using AI to customize discharge instructions

But lest we forget, “put in a bunch of data and hope for the best” is an apt description of how large language models work, and the results are often, well, awful. 

For an example where the data needs to be right-on-the-money, look no further than Brownstein’s absolutely fascinating vision for the discharge instructions of the future. You’ve probably received — and promptly thrown away — many discharge instructions.



Perhaps you got a bump on the head in a car accident. After getting checked out at the hospital and being cleared to go home, you likely received a few stapled pages of information about the signs of a concussion, how to use a cold compress, and how much ibuprofen to take. 

With an LLM trained on your individual patient information, Brownstein said, the system knows, among other things, where you live, so it can tell you where to go to buy your ibuprofen, or not to buy ibuprofen at all, because you’re allergic. But that’s just the tip of the iceberg.

“You're doing rehab, and you need to take a walk. It's telling you to do this walk around this particular area around your house. Or it could be contextually valuable, and it can modify based on your age and various attributes about you. And it can give that output in the voice that is the most compelling to make sure that you adhere to those instructions.”
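As a rough illustration of what “trained on your individual patient information” could mean in practice, the sketch below assembles a personalized discharge prompt from a few record fields before handing it to whatever model the hospital runs. The field names, template, and example values are hypothetical, not drawn from the hospital’s system.

```python
# Hypothetical sketch: combine generic discharge boilerplate with a few
# patient-record fields to form a personalization prompt for an LLM.
# The schema and template are illustrative, not a real hospital data model.
from dataclasses import dataclass

@dataclass
class PatientContext:
    age: int
    home_neighborhood: str
    allergies: list[str]
    diagnosis: str

def build_discharge_prompt(ctx: PatientContext, boilerplate: str) -> str:
    """Return a prompt asking the model to tailor boilerplate instructions."""
    allergy_note = ", ".join(ctx.allergies) if ctx.allergies else "none on file"
    return (
        f"Rewrite these discharge instructions for a {ctx.age}-year-old patient "
        f"living near {ctx.home_neighborhood}, diagnosed with {ctx.diagnosis}. "
        f"Known allergies: {allergy_note}. Flag any recommended medication that "
        f"conflicts with the allergy list and suggest an alternative.\n\n"
        f"Instructions:\n{boilerplate}"
    )

prompt = build_discharge_prompt(
    PatientContext(age=12, home_neighborhood="Jamaica Plain",
                   allergies=["ibuprofen"], diagnosis="mild concussion"),
    boilerplate="Rest, apply a cold compress, and take ibuprofen for pain.",
)
# `prompt` would then be sent to the hospital's internally hosted model.
```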

New tech historically has found its way into hospitals quickly 

David Himmelstein, a professor in the CUNY School of Public Health and a prominent critic of the U.S. for-profit healthcare system, said that while he had heard about potential uses of AI in hospitals that concerned him, this one didn’t strike him as “offensive.” He noted that discharge instructions are “almost boilerplate” anyway, and seemed unconcerned about the potential change.

However, he worries about what such systems could mean for privacy. “Who gets this information?” he wondered. “Sounds like it puts the information in the hands of Microsoft — or Google if they use their AI engine.” 

As such systems come into widespread use, these will be major concerns for hospitals, but Brownstein said that Boston Children’s Hospital, for its part, “is actually building internal LLMs,” meaning it won’t rely on companies like Google, Microsoft, or ChatGPT parent company OpenAI. “We actually have an environment we're building, so that we don't have to push patient data anywhere outside the walls of the hospital.”
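In the simplest case, “building internal LLMs” can mean running an open-weight model on hospital-controlled hardware, so prompts containing patient data never leave the network. Here is a hedged sketch using the Hugging Face transformers library, with a placeholder model name, since the hospital hasn’t said which models it uses.

```python
# Sketch of on-premises inference with an open-weight model, so prompts that
# include patient data stay inside the hospital network. The model name is a
# placeholder, not the hospital's actual choice.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # placeholder open-weight model
    device_map="auto",                       # use local GPUs if available
)

prompt = "Summarize these discharge instructions at a sixth-grade reading level: ..."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```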

Himmelstein, however, pointed out that systems for automating hospitals are far from new and have not created bureaucracy-free paradises where work runs smoothly and efficiently, even though companies have been making such promises since the 1960s. He provided a fascinating historical document to illustrate the point: a 1961 IBM video promising electronic systems that would slash bureaucracy and “eliminate errors.”

But in the month since Mashable first spoke to Brownstein, the AI situation has progressed at Boston Children’s Hospital. In an email, Brownstein reported “a ton of progress” on large language models, and an “incredible” prompt engineer in the process of being onboarded.

Topics: Artificial Intelligence, Health
