When I was a director of admissions, incoming students viewed and discussed a “common orientation movie” called Coded Bias. Released in early 2020, just before the pandemic, this documentary (available on PBS for a while) presented the disparate impact of AI facial recognition technology on people with darker skin. The development of this technology throughout the 2010s seemed to rely on biased training data or programming that resulted in less accurate evaluations for women and minorities. The movie raised the question of how society’s use of “smart” technologies could perpetuate inequities or curtail civil rights.
Concern has grown about the use of facial recognition by law enforcement, local governments, and private companies to monitor behavior. These analytics also shape how messages are targeted to you and how organizations predict your behavior or intentions. This was especially evident in China, where facial recognition and digital data helped surveil and regulate social behavior during COVID-19 shutdowns.
Artificial intelligence (AI) has captured everyone’s attention as the technology has improved over the past decade. Alexa and Siri answer questions on demand, monitor home appliances, play curated music collections, schedule appointments, and help millions of people stay healthy. No longer a niche field reserved for chess, other strategy games (like Go), or autonomous cars, AI algorithms and tools can take on tasks from writing content to creating images or videos that drive attention and clicks. Help desk algorithms answer questions from customers visiting websites, including university websites. AI programs also curate social media feeds and select items that ultimately go viral (like Lensa-enhanced pictures). Lobbyists and political action committees are likely salivating over how AI could help their outreach.
While many people have benefited from AI’s assistance in correcting grammar or editing, professors have expressed concern about how students may use these tools in their course assignments, including coding for computer science programs. Most notably, ChatGPT became available to the public in late November 2022 and captured significant public interest. Other programs were soon launched that could rate the probability that a passage had a significant contribution from ChatGPT or other AI writing programs. While many students who relied on the AI program to write their final papers held their breath, ChatGPT even earned authorship on a peer-reviewed paper! While syllabi are changing to involve more discussion and oral exams, one company is researching whether AI can litigate a case involving a traffic violation.
On Student Doctor Network, many members looked at how this program could help with insurance appeals or developing creative stories. We discussed how the program could help with application essays, answer advising questions, and even write creative poetry about the application process. Elsewhere, users evaluated the program’s responses to sample MMI prompts. Some have used ChatGPT humorously to share absurd examples (that is, what not to write or say) in the application process. Others have quizzed ChatGPT on mock USMLE questions, showing it could pass.
AI has already made an early impact on healthcare. Alphabet/Google teams have even tested an AI interface that helps physicians sharpen their clinical reasoning and could aid in querying large clinical datasets to narrow down possible diagnoses. AI can help discover new pharmacological agents and enable more efficient drug design. Machine-learning algorithms can use public data to predict tooth loss. Additionally, AI holds great promise in radiology and veterinary practices, and scribing software is already on the market. Entrepreneurs believe AI will radically change the patient experience and provider decision-making, and many prehealth students may become involved in projects using AI in healthcare.
How could these AI tools change the admissions process? A few people already think these programs signal the end of writing competency (along with the decline of cursive handwriting). However, these tools have already shaped admissions and have been available for the last decade.
AI could help evaluate and screen applications
First, AI can help with the tedious entry of transcripts during verification and matriculation. AI could catch errors made during optical scanning and reorganize the data into an admissions screener dashboard for file review, speeding up the processing and review of individual applications. Many applicants would likely be happy to pay for automatic transcript entry and verification if it speeds up the application process. (The LiaisonCAS Professional Transcript Entry service does not use AI.)
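Much of this error-catching does not even require machine learning. As a minimal sketch, assuming a hypothetical parsed-row format and a standard 4.0 grade scale (not the schema of any actual verification service), simple rule-based checks can flag rows where optical scanning likely misread a grade or credit value:

```python
# Minimal sketch: flag likely optical-scanning errors in parsed transcript rows.
# Field names and thresholds are hypothetical, not any service's actual schema.

GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
                "C+": 2.3, "C": 2.0, "C-": 1.7, "D+": 1.3, "D": 1.0, "F": 0.0}

def flag_scan_errors(rows, reported_gpa, tolerance=0.05):
    """Return warnings for rows that look like OCR mistakes."""
    warnings = []
    points = credits = 0.0
    for i, row in enumerate(rows):  # row: {"course": ..., "grade": ..., "credits": ...}
        if row["grade"] not in GRADE_POINTS:
            warnings.append(f"Row {i}: unrecognized grade {row['grade']!r}")
            continue
        if not 0.5 <= row["credits"] <= 6:  # implausible credit hours often mean a misread digit
            warnings.append(f"Row {i}: suspicious credit value {row['credits']}")
        points += GRADE_POINTS[row["grade"]] * row["credits"]
        credits += row["credits"]
    # If the recomputed GPA disagrees with the GPA printed on the transcript,
    # a human should re-examine the scan.
    if credits and abs(points / credits - reported_gpa) > tolerance:
        warnings.append("Computed GPA disagrees with the printed GPA; review the scan.")
    return warnings
```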
The volume of information on the internet has driven the development of sophisticated text search-and-compare tools such as Turnitin and iThenticate to detect plagiarism in student academic papers. In the early 2010s, Turnitin for Admissions was integrated into PharmCAS and warned admissions committees whenever a submitted application contained an essay with a high likelihood of plagiarism (see the 2011 blog entry from UCSF). The PharmCAS applicant code of conduct required all applicants to affirm that they had not plagiarized their application essays. Within the first year of use, PharmCAS investigated over 200 alleged examples of plagiarism (about 1.5% of the applicant pool) and discussed them at its annual meeting (Kaplan blog). Surprisingly, Turnitin for Admissions and iThenticate have not been used as widely to evaluate other prehealth or residency applications as they have been in undergraduate admissions.
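Turnitin’s matching technology is proprietary, but the underlying idea of similarity screening can be illustrated with a toy sketch: vectorize essays with TF-IDF and flag any new essay whose cosine similarity to a prior essay crosses a threshold. The threshold and function names here are illustrative assumptions, not Turnitin’s method:

```python
# Toy illustration of similarity screening (not Turnitin's proprietary method):
# flag a new essay whose TF-IDF cosine similarity to any prior essay is high.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_similar_essays(new_essay, prior_essays, threshold=0.8):
    """Return (index, score) pairs for prior essays suspiciously close to new_essay."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(prior_essays + [new_essay])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [(i, round(float(s), 3)) for i, s in enumerate(scores) if s >= threshold]
```

As with the PharmCAS workflow, a tool like this should only issue warnings for human investigation, not render verdicts.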
In 2021, the American Association of Colleges of Osteopathic Medicine (AACOM) discussed the preliminary findings of its Holistic Admissions Review Program (HARP), which asked whether AI algorithms trained by admissions teams could identify strong candidates for interview and predict matriculation based on their essay responses, promoting more efficient holistic admissions processes that are less reliant on grades or MCAT scores. The algorithms would speed up application review by reducing the burden of faculty/human review and minimizing unintended prejudicial biases. Phase I showed that, based on historical data spanning thousands of applications, the algorithm correctly predicted a medical school’s decision to invite an applicant to interview about 86% of the time (and over 90% in a few cases), albeit for a small number of medical schools.
Interestingly, the algorithm could predict whether applicants would matriculate to a DO program using a random sample of around 10,000 applicants and their AACOMAS personal essays and experience descriptions, supporting a hypothesis that the program pays attention to cues that signal an applicant’s motivation to attend a DO program. For applicants with higher GPAs and MCAT scores (as a proxy for a decision to interview), the algorithm predicted correctly about 72% of the time.
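AACOM has not published HARP’s architecture, but the general shape of such an experiment, training a text classifier on historical invite-to-interview decisions, can be sketched with standard tools. Everything below (features, model choice, train/test split) is an illustrative assumption, not HARP’s actual pipeline:

```python
# Minimal sketch of training a text classifier on historical admissions decisions
# (illustrative only -- not HARP's actual model or features).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_invite_model(essays, invited):  # invited: 1 = invited to interview, 0 = not
    X_train, X_test, y_train, y_test = train_test_split(
        essays, invited, test_size=0.2, random_state=0)
    model = make_pipeline(
        TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    # "Agreement" here means reproducing past human decisions on held-out data,
    # analogous to HARP's reported ~86%.
    print("Held-out agreement:", accuracy_score(y_test, model.predict(X_test)))
    return model
```

Note that “agreement” only measures how well the model reproduces past human decisions; any biases embedded in those decisions are learned right along with them, which is why the “coded bias” concerns discussed later still apply.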
The project leaders plan to refine the algorithm with additional affinity variables such as geographic proximity, specialty interest, or performance in specific courses. Another AI project may scan an applicant’s letters of recommendation for indicators of the applicant’s competencies and strengths. There is additional interest in developing an AI program to scan all of an applicant’s essays to measure how much “additional help” the applicant may have had in their submitted responses. As yet, no results have addressed the frequency of probable plagiarism within their reference pool of applications.
Enrollment administrators want to know whether the algorithm has predictive validity for student performance (preclinical/clinical), COMLEX performance, professionalism difficulties, or residency selection. A significant question is how much AACOM or its member schools would need to invest to integrate AI review into the file-review or management programs individual schools use. Finally, there is interest in predicting “melt” (when a deposited student decides to enroll elsewhere, such as at another MD or DO program, or drops out entirely), so the algorithm could be used to cultivate relationships proactively with future students. In short, to help with their financial planning, programs want a tool that quantifies your enrollment likelihood just as much as you want to measure your chances of acceptance.
AI could advise applicants
ChatGPT differs from a traditional search engine by engaging the user in conversation, refining its answers based on additional information. To this end, the program taps into a reference knowledge base to provide simple answers and adjusts as more information arrives. Used as a help agent, the algorithm could answer applicants’ simple questions about their applications, bringing a static “help manual” to life.
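The conversational mechanics can be sketched without any neural network at all: keep a running history and score a small knowledge base against the whole conversation, so terse follow-ups inherit context. The FAQ entries and matching rule below are hypothetical stand-ins, vastly simpler than what ChatGPT actually does:

```python
# Minimal sketch of a rule-based help agent: it answers from a small
# knowledge base and uses conversation history to resolve follow-ups.
# FAQ entries are hypothetical examples, not any service's real content.

FAQ = {
    ("transcript", "deadline"): "Transcripts must arrive before the verification deadline.",
    ("letters", "recommendation"): "Most programs require two to four letters of evaluation.",
    ("fee", "waiver"): "Fee assistance programs are available; check eligibility first.",
}

def answer(question, history):
    history.append(question)
    # Score each FAQ entry by keyword overlap with the whole conversation so far,
    # so a short follow-up inherits context from earlier questions.
    context = " ".join(history).lower()
    best = max(FAQ, key=lambda keys: sum(k in context for k in keys))
    if not any(k in context for k in best):
        return "I don't have that answer; a human advisor will follow up."
    return FAQ[best]

history = []
print(answer("When do my transcripts need to arrive?", history))
print(answer("Is there a deadline for those?", history))
```

The second question contains no keyword on its own; it is answered correctly only because the history supplies the missing context, which is the behavior that distinguishes a conversational agent from a static search box.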
AI text generators can catalyze an evolution of the web search engine or discussion forums, including those that rely on moderation bots such as Reddit. But such algorithms can amplify incorrect information or misinformation (especially with deepfake technology) if not adequately vetted.
These algorithms can deliver simple information to assist prehealth advising offices or lighten their workloads. Like many businesses, some admissions consulting companies use chatbots to engage users who visit their websites and to capture their information for subsequent client engagement. The chatbot provides detailed information about the general application process and the qualifications for a successful application. A well-supported prehealth advising office or an application service like AMCAS or AADSAS could benefit from a virtual advisor chatbot serving its website 24 hours a day. Advising and marketing with automated (and attractive) AI-generated avatars delivering short video clips become possible.
For admissions staff, AI tools can also help with applicant engagement by providing substantive answers to prospective applicants’ concerns on demand. These tools can bring up information about the last cycle of residency matching or tap into an alumni office’s directory to foster additional networking opportunities. Prospective health professionals may even use AI tools to find shadowing or clinical opportunities, including academic enrichment programs, in a more sophisticated way. One can also use AI tools to engage interested candidates based on metadata from their visits to the program website or their responses to AI-tailored marketing campaigns. An AI-generated tool can also screen emails and respond more personably to inquiries, such as application status requests or acknowledging receipt of a letter of interest, to mitigate the appearance of “ghosting.” Because generative AI programs can create personalized communications, inferring “hidden messages” about how an admissions committee ranks or values an applicant could become more challenging.
AI raises expectations about communication skills
In holistic review, essays and interviews significantly influence admissions decisions, so the authenticity of a candidate’s motivation and personal journey is crucial; this is why plagiarism is discouraged. Yet, as I tell applicants and admissions committees, applicants can receive feedback from writing center professionals, prehealth advisors, references, professors, or peers in professional programs to edit and improve their personal statements. Arguably, every submitted essay reflects the level of support the applicant had (or should have had) available to them. What AI text generators do is shorten the time an applicant must spend wrangling over multiple essay drafts, freeing time to do better in classes or prepare for standardized examinations. With thousands of applicants using AI text generators to write, AI may become a new standard for acceptable competency in communication that desirable applicants must exceed.
Because essay-writing sections in proctored standardized tests have fallen out of favor, AI text generators could pressure admissions committees to reconsider scheduling on-campus interview days. Virtual interviews cannot replace a handwritten, proctored essay for witnessing candidates’ true reasoning and thinking competencies and seeing whether they match the expectations set by the submitted essays.
Admissions committees need a better track record of detecting and acting on plagiarism. Reviewers, professors, and teachers cannot reliably distinguish AI-assisted essays from human-written responses. A small proportion of personal essays surveyed at one medical school (2019) met the threshold for possible plagiarism. Suggestions for deterring plagiarism in residency applications include discouraging the use of paid ghostwriters or AI text-generating software and replacing the personal essay with more specific personal questions that can elicit more helpful information for selection.
How can applicants benefit from AI writing programs? One group that would find great benefit is non-native English speakers, who allegedly account for many residency applications with probable plagiarism. Lack of fluency is a significant barrier to navigating the US education system for most non-native-English-speaking students. AI writing programs can help with employment applications and with creating correspondence asking for help (from landlords, social workers, or politicians, for example). Specifically for applicants, these programs can help students ask faculty for research opportunities or letters of recommendation.
By the way, letters of recommendation can be plagiarized (almost 12% of residency application letters of recommendation showed signs of plagiarism, compared to almost 3% of personal statements). Busy professors often ask applicants to create an initial draft. Just as AI text generators provide an initial draft of a cover letter for a job application, they could also give a student a letter-of-recommendation template. Alternatively, busy professors could use an AI text generator to adapt a recommendation-letter template for dozens of students requesting letters. Even an institutional committee letter author might use a program to finesse a student evaluation.
Suppose AI application-screening programs, built on data like AACOM HARP’s, scan for keywords the way Applicant Tracking Systems (ATS) do for employment. Many job candidates pay professionals to help tailor resumes and cover letters to get an optimal ATS score. Imagine how some applicants will try to discover an advantage, especially if AI text generators are tuned to appeal to the AI screening programs. Could prehealth advisors or professors charged with writing reference letters ask for specific keyword rubrics to better position their student applicants? How much could a professional admissions consultant or company charge if they had such specific insights, especially for highly competitive, brand-name programs?
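ATS-style keyword screening is easy to caricature because, at its core, it is simple. A toy sketch, with a wholly hypothetical rubric and weights, not any admissions service’s actual criteria:

```python
# Toy version of ATS-style keyword scoring. The rubric terms and weights
# are hypothetical assumptions for illustration only.
RUBRIC = {"service": 3, "leadership": 2, "research": 2, "teamwork": 1, "resilience": 1}

def ats_style_score(essay):
    """Score an essay by weighted keyword hits, normalized to 0-100."""
    text = essay.lower()
    hits = sum(weight for term, weight in RUBRIC.items() if term in text)
    return round(100 * hits / sum(RUBRIC.values()))

print(ats_style_score("My research and leadership in community service..."))  # 78
```

This simplicity is exactly why gaming is a concern: an applicant (or consultant) who learns the rubric can inflate the score without adding substance.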
AI could proctor exams and interviews
The pandemic also saw a rapid rise in online proctoring software, including some programs that incorporate AI monitoring. Could AI be used to help screen applicants or evaluate interview responses? Some companies have developed interviewing platforms with AI evaluation, but these have yet to be incorporated into tools familiar to admissions or selection committees. HireVue has developed an AI interview tool that could work cooperatively with ATS software, but it did not incorporate AI review when it developed the AAMC Virtual Interview Tool for Admissions (VITA) during the pandemic-era 2021 application cycle. The vendor for AAMC PREview (Meazure Learning) has been perfecting AI-mediated proctoring and surveillance of online exams, though it is not known whether PREview itself uses AI.
AI text generators can also help calibrate screeners who review applications. After giving ChatGPT an essay prompt and collecting answer variations, admissions screeners can grade them alongside previously submitted applicant responses to calibrate their scores. With the ability to generate varied, sophisticated responses, AI writing programs can improve secondary essay prompts and the rubrics used to assess candidates’ writing. This seems to be the case when reviewing AI-generated scientific abstracts.
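One practical way to run such a calibration exercise is to blind the graders. A sketch, assuming hypothetical inputs, that interleaves AI-generated variations with prior applicant responses under anonymous IDs:

```python
# Sketch of blinding a calibration set: mix AI-generated variations with
# prior applicant responses so screeners grade them without knowing the source.
import random

def build_calibration_set(ai_responses, applicant_responses, seed=42):
    items = ([{"text": t, "source": "ai"} for t in ai_responses]
             + [{"text": t, "source": "human"} for t in applicant_responses])
    random.Random(seed).shuffle(items)
    # The answer key stays with the exercise coordinator; graders see IDs only.
    key = {f"R{i:03d}": item["source"] for i, item in enumerate(items)}
    blinded = [(f"R{i:03d}", item["text"]) for i, item in enumerate(items)]
    return blinded, key
```

Comparing screeners’ scores against the coordinator’s key then shows whether the rubric distinguishes the two sources and where raters disagree.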
Anxiety among prehealth students, driven by the urge to compare and excel, is notoriously high. AI text generators can worsen this anxiety as students feel they must compete with AI-generated avatars or proxies that enhance the qualities they believe admissions committees seek. And AI algorithms are being developed to detect this.
Removing “coded bias”
Concerns remain that AI programs perpetuate racial biases from their training data and could thereby perpetuate racial biases in healthcare. OpenAI (the creator of ChatGPT and DALL-E) continues to mitigate racist or insensitive blind spots when creating images or chat responses. While the program is meant to impress readers with its linguistic sophistication, it lacks an understanding of any context or meaning behind the “opinions” it expresses. At best, the program imitates other expressions and cannot (yet) truly create without previous input or other references to cite. New GPT versions and other text-generative software demonstrating more sophistication (like Google LaMDA) will likely address these shortcomings better… probably for a price.
In the end, the new AI writing tools can help writers the same way calculators help engineers and mathematicians focus on problem-solving rather than answer verification. Many companies plan to integrate these tools with traditional word processing programs to improve message clarity and impact and to help users become better writers. Composition teachers hope AI text generators will establish baselines for standard writing and editing that students should meet, but a similar didactic and assessment process for writers, faculty, and admissions professionals at all levels of education will be needed before the same can happen for application review.
The author acknowledges using Grammarly in editing this article and DALL-E for the image. AACOM did not respond to requests for comment on HARP.
Emil Chuck, Ph.D., is Director of Advising Services for the Health Professional Student Association. He brings over 15 years of experience as a health professions advisor and an admissions professional for medical, dental, and other health professions programs. In this role at HPSA, he looks forward to helping the next generation of diverse healthcare providers gain confidence in themselves and become successful members of the interprofessional healthcare community.
Previously, he served as Director of Admissions and Recruitment at Rosalind Franklin University of Medicine and Science, Director of Admissions at the School of Dental Medicine at Case Western Reserve University, and as a Pre-Health Professions Advisor at George Mason University.
Dr. Chuck serves as an expert resource on admissions and has been quoted by the Association of American Medical Colleges (AAMC).