Artificial Intelligence’s Promise and Peril
<blockquote data-quote="cheryl" data-source="post: 2986" data-attributes="member: 1"><p><a href="https://www.hsph.harvard.edu/magazine/magazine_article/artificial-intelligences-promise-and-peril/" target="_blank"><strong>Artificial Intelligence’s Promise and Peril - Harvard Public Health</strong></a></p><p></p><p><strong>As algorithms analyze mammograms and smartphones capture lived experiences, researchers are debating the use of AI in public health</strong></p><p></p><p><a href="https://www.hsph.harvard.edu/profile/john-quackenbush/" target="_blank">John Quackenbush</a> was frustrated with Google. It was January 2020, and a team led by researchers from Google Health had just published a study in <em>Nature</em> about an artificial intelligence (AI) system they had developed to analyze mammograms for signs of <a href="https://www.hsph.harvard.edu/news/multitaxo/topic/cancer/" target="_blank">breast cancer</a>. The system didn’t just work, according to the study; it worked exceptionally well. When the team fed it two large sets of images to analyze—one from the UK and one from the U.S.—it reduced false positives by 1.2 and 5.7 percent and false negatives by 2.7 and 9.4 percent compared with the original determinations made by medical professionals. In a separate test that pitted the AI system against six board-certified radiologists in analyzing nearly 500 mammograms, the algorithm outperformed each of the specialists. The authors concluded that the system was “capable of surpassing human experts in breast cancer prediction” and ready for clinical trials.</p><p></p><p>An avalanche of buzzy headlines soon followed. “Google AI system can beat doctors at detecting breast cancer,” a CNN story declared. “A.I. Is Learning to Read Mammograms,” the <em>New York Times</em> noted.
While the findings were indeed impressive, they didn’t shock Quackenbush, Henry Pickering Walcott Professor of Computational Biology and Bioinformatics and chair of the <a href="https://www.hsph.harvard.edu/biostatistics/" target="_blank">Department of Biostatistics</a>. He does not doubt the transformative potential of machine learning and deep learning—subsets of AI focused on pattern recognition and prediction-making—particularly when it comes to analyzing medical images for abnormalities. “Identifying tumors is not a statistical question,” he says, “it is a machine-learning question.”</p><p></p><p>But what bothered Quackenbush was the assertion that the system was ready for clinical trials even though nobody had independently validated the study results in the weeks after publication. That was in part because it was exceedingly difficult to do. The article in <em>Nature</em> lacked details on the algorithm code that Quackenbush and others considered important to reproducing the system and testing it. Moreover, some of the data used in the study was licensed from a U.S. hospital system and could not be shared with outsiders.</p></blockquote><p></p>