Mike's Blog
Are You Smarter than ChatGPT? More Creative?
By: Michael Kurilla, M.D., Ph.D., NCATS Director of the Division of Clinical Innovation
July 6, 2023
Way back in 2007, a TV game show aired, “Are You Smarter than a 5th Grader?” Contestants answered questions taken from 1st- through 5th-grade textbooks. The top prize was $1 million (two people took it home: a public school superintendent and a Nobel laureate in Physics). Today, a similar game seems to be playing out with ChatGPT, with many regaling us with examples of ChatGPT going awry (as if humans are never wrong), or even deliberately prodding it to go awry. At the same time, there is much discussion and confusion (and hand-wringing) about regulating ‘artificial intelligence (AI),’ as well as its potential negative impacts and disruption for society in general and jobs in particular.
Regardless, applications are likely to develop, with many coming from unanticipated directions. In terms of clinical decision support, this seems a natural outgrowth of harnessing the power of big data as well as easing health care provider overload (both workload and cognitive). At the same time, we need to appreciate that, just as a survey can be biased simply by how a question is worded, AI is likely to produce recommendations tailored to specific situations that may not always yield desirable outcomes. For example, asking ChatGPT how to maximize reimbursement for a collection of difficult medical conditions (as I expect many hospital administrators are already doing) may not automatically translate into better outcomes for anything other than the hospital’s bottom line. And asking how to optimize clinical outcomes for those same conditions may require unacceptable trade-offs (cost, staffing, workflow, etc.) to realize those outcomes.
AI is going to impact our lives in many ways (most of which we haven’t thought of yet). Let’s put aside ChatGPT-generated grant applications. That will happen (along with AI-enabled manuscript development, grant and manuscript reviews, and even clinical trial design). We’re likely to see more in-depth examination (and controversy) of this initially in other fields such as book publishing, screenplays, and music. But we need to acknowledge that Word already points out (or automatically corrects) misspellings, auto-fills phrases, and suggests grammatical refinements, so in silico writing assistance is an accepted fact. As an example, Duane Mitchell shared with me ChatGPT’s answer to a question about enhancing ClinicalTrials.gov compliance. As we reviewed the recommendations, there was nothing novel or unique; each component was something Duane and I would have considered had we allocated sufficient time for reflection, but ChatGPT produced a solid first draft nearly instantaneously.
But what about creativity (defined as the ability to make or otherwise bring into existence something new), which is an essential feature of research and underlies true innovation? Fortunately, someone has already asked ChatGPT about this, and ChatGPT responded: “As an artificial intelligence language model, I am not capable of creativity in the same sense as humans are. However, I can generate unique and original text based on the patterns and information I have learned from my training data.” Now, if we were to simply substitute ‘idea’ for ‘unique and original text’ and consider ‘training data’ the digital equivalent of ‘human experience,’ it’s difficult not to conclude that ChatGPT can display some degree of creativity.
On the other hand, ChatGPT is merely another form of technology, and technologies generally evolve as a consequence of other technologies and the new opportunities those technologies create. In this case, our digital, internet-connected civilization generates huge amounts of data (daily!), and ChatGPT can be seen as simply another assistive technology for dealing with a data deluge that has already exceeded fire-hose proportions. In this regard, I find Cory Doctorow’s approach to any technology insightful: “…look beyond what a gadget does and interrogate who it does it for and who it does it to. That’s an important exercise, maybe the important exercise.” Far too often, when evaluating the applicability of a new technology, we merely ask what it does and for whom, and ignore the to whom question. The to whom question allows for ethical considerations, as well as gauging the likelihood of overall acceptance (by both the for and the to parties).
Technology allows us to expand and amplify what we can already do. I don’t use a calculator because I can’t perform simple arithmetic, nor do I use PubMed because I can’t read a journal’s table of contents; both ‘tools’ simply augment my existing capabilities. ChatGPT will likely evolve into merely another tool in the available toolbox. Can it be misused? Sure, but the same calculator I use could also help someone build a nuclear bomb. To reach a desired end state, the CTSAs can and should play a role not only in what goes into the toolbox but, even more importantly, in how we end up putting ‘stuff’ into it.
- Artificial intelligences are feared more for the latter than the former
- Half of knowledge is knowing the questions
- Superior technology is not a panacea
This Mike's Blog was featured in the July 2023 Ansible.