
INTELLIGENT REVOLUTION
UIC researchers explore how to maximize the potential of artificial intelligence to create a better, more efficient world and safeguard against potential risks.
By Cindy Kuzma
This article was reported and written by a human. As recently as last year, that disclaimer might have seemed nonsensical. But with the public release of highly sophisticated chatbots like ChatGPT and Bard this spring, artificial intelligence (AI) took a dramatic leap forward. Now, these so-called large language models can generate prose rivaling a professional writer’s.
AI’s ascendance has fueled visions of fully autonomous vehicles, robotic assistants and more personalized health care — but also has raised fears about job losses and inequities. Prominent industry leaders have warned that AI poses an extinction risk on par with nuclear war and pandemics.
Like many advances, AI can seem akin to magic. However, the algorithms underlying AI technologies like machine learning and natural language processing aren’t dramatically different from many that have long been used, says Mesrob Ohannessian, UIC assistant professor of electrical and computer engineering. Advances in the way they’re designed and executed have improved their ability to identify patterns, make decisions and learn.
UIC faculty have been studying and using AI-based tools for decades, as far back as 1943, when Dr. Warren McCulloch, associate professor of psychiatry in the UIC College of Medicine, published what some consider the first mathematical model of a neural network. More recently, scholars in law, communications, computer science and other disciplines have been at the forefront of addressing concerns about ethical AI. Others are exploring AI’s tremendous potential to improve health care delivery and solve tough problems related to refugee resettlement and the environment.
As an urban public research university with a commitment to social justice, UIC is uniquely positioned to explore AI’s possibilities and mitigate the perils. Administrators encourage interdisciplinary collaboration, inclusivity and applying academic work to real-life problems, all essential for understanding this technology’s impact. For medical applications, the university’s health enterprise, UI Health, treats many patients typically underrepresented in research. And UIC’s campus, nestled within Chicago, puts students and faculty near industry leaders, regulators and other important voices.
That’s significant because in a field as fast-changing as AI, “there’s no way we are all going to know everything, and there’s no way we’re all going to know the right questions to ask,” says Steve Jones, UIC distinguished professor of communication. “The more people we can bring together to probe these things, the better.”
Improving Health Outcomes
AI is already making a difference in the medical field. For years, decision-making tools have prevented errors and tracked critical data such as medication allergies, says Dr. Karl Kochendorfer, assistant vice chancellor for health affairs at UIC and chief health information officer at UI Health.
Now, improvements in AI technologies such as neural networks (the interconnected series of nodes through which ChatGPT and similar systems process information) have opened new possibilities, including improving doctor-patient communication. Patients at UI Health send more than 15,000 messages monthly through MyChart, the health system’s online patient portal. In partnership with electronic health record vendor Epic Systems, UI Health plans to pilot using ChatGPT to draft responses. Doctors will still review answers for accuracy, but editing instead of writing could free up overburdened clinicians to spend more time delivering care, says Kochendorfer, who is also professor in clinical family medicine at the College of Medicine.
Health systems across Chicagoland face similar challenges, such as health disparities — inequities based on factors such as race or class — and information integration. To centralize AI-based solutions, Kochendorfer and a regional team launched an initiative called CREATE WISDOM in 2020; the effort received one of the first grants from the Discovery Partners Institute, a technology research and innovation hub led by the U of I System.
The CREATE WISDOM team began by building predictive models for COVID-19. Now, members are working on early detection of pancreatic cancer and improving connectivity so clinics and practices can share data safely and efficiently. They’ve also partnered with Break Through Tech Chicago, a program in UIC’s Department of Computer Science that advances the careers of women and nonbinary students in tech, on projects such as an AI-based information retrieval service for clinicians called 1-Search.
UIC researchers are also using AI to pursue precision medicine, tailoring treatments to individuals’ genetics, lifestyle and environment. Pharmacy Professor Yu Gao partnered with Dr. Kent Hoskins at UI Health and the University of Illinois Cancer Center on a clinical trial of a combination therapy for stage 4 breast cancer patients. “Some patients responded to the treatment, and some didn’t,” Gao says. What if patients could find out which group they were in before they took the drugs?
In the past, researchers hoped to pinpoint a single protein or other biomarker in the blood to predict a patient’s drug response. “It turns out that the biological system is much more complex than that,” Gao says. His team used a graph neural network to quickly assess and identify patterns within a wide range of biomarkers, including proteins released by exosomes, tiny extracellular vesicles that cancer cells secrete.
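For readers curious how such a model works, here is a minimal sketch of the core idea behind a graph neural network: each biomarker becomes a node, biologically related biomarkers are connected, and each layer blends a node’s measurements with those of its neighbors before the whole graph is pooled into a single prediction. Everything in it, from the number of markers to the weights, is an invented placeholder rather than Gao’s actual model.

```python
# A minimal, self-contained sketch of the graph-neural-network idea.
# All names, sizes, weights and the adjacency matrix are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_markers = 6                                  # hypothetical biomarkers in a blood sample
features = rng.normal(size=(n_markers, 4))     # 4 made-up measurements per biomarker

# Hypothetical adjacency matrix: 1 where two biomarkers are biologically related.
adj = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

def gnn_layer(h, adj, weight):
    """One message-passing step: average each node with its neighbors, then transform."""
    adj_self = adj + np.eye(len(adj))          # include the node's own features
    degree = adj_self.sum(axis=1, keepdims=True)
    mixed = (adj_self @ h) / degree            # neighborhood averaging
    return np.maximum(mixed @ weight, 0)       # linear transform + ReLU

w1 = rng.normal(size=(4, 8))
w2 = rng.normal(size=(8, 8))
h = gnn_layer(features, adj, w1)
h = gnn_layer(h, adj, w2)

# "Readout": pool all node embeddings into one patient-level vector,
# then map it to a probability of responding to the therapy.
patient_vector = h.mean(axis=0)
w_out = rng.normal(size=8)
prob_responder = 1 / (1 + np.exp(-(patient_vector @ w_out)))
print(f"Predicted probability of response: {prob_responder:.2f}")
```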
Using just a few drops of blood taken at the beginning of the trial, their model predicted with up to 90% accuracy which patients would respond to the combination therapy. The pilot study was small, but Gao is now working with pharmaceutical companies and other researchers to expand it to hundreds of thousands of participants. The strategy represents a win-win for patients and pharmaceutical companies, he says. Cancer patients could save precious time by avoiding ineffective treatments and receive more precise, personalized care. Meanwhile, incorporating such screening into clinical trials is in line with NIH and FDA goals for precision medicine, which could accelerate FDA approvals.
In the future, similar protocols could be applied to other diseases, biomarkers and prevention. Gao sees a world in which diagnostic testing could identify a wide range of health problems before they begin, allowing patients to more easily choose — and stick to — effective prevention habits.
Solving Social Problems
UIC Business faculty like Brad Sturt, assistant professor in the Department of Information and Decision Sciences, frequently use AI-based technologies such as optimization and machine learning when consulting with corporations — but it isn’t all about the bottom line. Their work aids industry, government agencies and nonprofits in balancing competing interests to create a more just future.
Sturt defines optimization as a “specific, rigorous way” of considering three parts of any problem: the decision to be made, the objectives to minimize or maximize, and the constraints that limit the possible choices. Through machine learning, algorithms can weigh those variables and find the best solution. “By using optimization in a thoughtful way, we can get to a society that works better for all of us, without making it worse for any of us,” he says.
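The framing is easier to see in miniature. The sketch below uses an off-the-shelf solver to pose a toy staffing decision in exactly those three parts: a decision (hours assigned to two sites), an objective (clients served) and constraints (a budget and a minimum level of coverage). The scenario and every number in it are invented for illustration and are not drawn from Sturt’s work.

```python
# A toy "decision, objective, constraints" problem solved with SciPy's linear
# programming routine. All numbers are invented for illustration.
from scipy.optimize import linprog

# Decision: hours assigned to site A and site B.
# Objective: maximize clients served (3 per hour at A, 2 per hour at B).
# linprog minimizes, so the objective coefficients are negated.
objective = [-3, -2]

# Constraints: total hours <= 40, and site B needs at least 10 hours of coverage.
A_ub = [[1, 1],     # hours_A + hours_B <= 40
        [0, -1]]    # -hours_B <= -10  (i.e., hours_B >= 10)
b_ub = [40, -10]

result = linprog(objective, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
hours_a, hours_b = result.x
print(f"Assign {hours_a:.0f} hours to site A and {hours_b:.0f} hours to site B")
print(f"Clients served: {-result.fun:.0f}")
```

Real applications swap in far more decisions and constraints, but the anatomy of the problem stays the same.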
Most recently, Sturt has used this approach to work on an important social issue: refugee resettlement. Agencies around the world must decide where to send each of the estimated 2 million-plus refugees seeking new homes in the next year alone. Algorithms are already widely used to predict each person’s likelihood of finding gainful employment in their new city, a common measure of integration.
But existing methods have shortcomings, including a failure to incorporate fairness — a legal requirement in European countries and an ethical obligation worldwide. “It’s easy to imagine scenarios where the optimal thing to do would be highly unfair,” Sturt says. For example, people from large countries might get priority over those from smaller nations. So, he and his colleagues designed a dynamic algorithm that incorporates fairness, in a way that can be defined by the official using it.
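A toy version of the tension, constructed for this article rather than taken from Sturt’s algorithm, shows how a fairness rule changes the answer: without it, the assignment that maximizes total predicted employment leaves one group far behind; with a floor on each group’s average outcome, the search gives up a little total predicted employment to meet it. The cases, cities and probabilities below are all invented.

```python
# A toy illustration (not Sturt's algorithm) of assignment with a fairness floor.
# Three refugee cases (two from group A, one from group B) must each be matched
# to a different city; predicted employment probabilities are invented.
from itertools import permutations

cases = ["A1", "A2", "B1"]
group = {"A1": "A", "A2": "A", "B1": "B"}
cities = ["Springfield", "Peoria", "Rockford"]
prob = {
    "A1": {"Springfield": 0.9, "Peoria": 0.6, "Rockford": 0.3},
    "A2": {"Springfield": 0.8, "Peoria": 0.7, "Rockford": 0.2},
    "B1": {"Springfield": 0.6, "Peoria": 0.5, "Rockford": 0.1},
}

def best_assignment(min_group_avg=0.0):
    """Try every one-case-per-city assignment; keep the best that clears the fairness floor."""
    best, best_total = None, -1.0
    for perm in permutations(cities):
        assignment = dict(zip(cases, perm))
        total = sum(prob[c][assignment[c]] for c in cases)
        # Fairness rule: every group's average predicted employment must reach the floor.
        fair = all(
            sum(prob[c][assignment[c]] for c in cases if group[c] == g)
            / sum(1 for c in cases if group[c] == g) >= min_group_avg
            for g in set(group.values())
        )
        if fair and total > best_total:
            best, best_total = assignment, total
    return best, round(best_total, 2)

print("No fairness constraint:   ", best_assignment())
print("Group average must be 0.5:", best_assignment(min_group_avg=0.5))
```

In this invented example, the unconstrained optimum places the group B case where its predicted employment is just 0.1; with the floor, the total drops only slightly while the worst-off group does far better.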
Down the hall, Associate Professor Selva Nadarajah is using AI to address the transition to clean energy and a net-zero economy, one that reabsorbs as many greenhouse gases as it emits. Getting there requires a significant investment: about $9.2 trillion in annual spending through 2050, according to McKinsey & Company. The result benefits humanity. But along the way, some individuals and communities will lose out, as factories close and new jobs in renewable energy aren’t available immediately or in the same areas.
“There’s a lot of uncertainty, and these are decisions with long-term implications,” Nadarajah says. “You can use AI to manage those complex trade-offs in a transparent way.” For example, by combining an approach called multiobjective reinforcement learning with ideas from financial risk management, computers learn to create policies that optimize many desired outcomes at once — in this case, fiscal responsibility, social responsibility and climate goals.
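A stripped-down illustration, again invented rather than drawn from Nadarajah’s models, shows the basic machinery: each year’s outcome is a vector of profit, community impact and emissions; a set of weights turns those vectors into a single score; and a simple dynamic program finds the shutdown year that score implies. Change the weights and the recommended timing changes with them.

```python
# A toy dynamic-programming sketch of timing a plant shutdown under several
# objectives at once. All outcomes and weights are invented for illustration.

# Per-year outcomes if the aging plant keeps running: (profit, community_impact, emissions).
RUN_YEARS = [(10.0, 0.0, -8.0), (6.0, 0.0, -8.0), (2.0, 0.0, -8.0)]
# One-time outcome in the year the plant shuts down (transition costs, job losses).
SHUTDOWN = (-2.0, -5.0, 0.0)

def shutdown_year(weights):
    """Finite-horizon dynamic program: each year, keep running or shut down for good."""
    def score(outcome):
        return sum(w * o for w, o in zip(weights, outcome))

    value_ahead = 0.0               # value once the plant is closed (nothing more happens)
    year_chosen = len(RUN_YEARS)    # sentinel: never shuts down within the horizon
    for year in reversed(range(len(RUN_YEARS))):
        value_if_run = score(RUN_YEARS[year]) + value_ahead
        value_if_shutdown = score(SHUTDOWN)
        if value_if_shutdown >= value_if_run:
            value_ahead, year_chosen = value_if_shutdown, year
        else:
            value_ahead = value_if_run
    return year_chosen

# Weights on (profit, community impact, emissions) encode the trade-off.
print("Profit-heavy weights:", shutdown_year((1.0, 0.5, 0.1)))   # -> 3: keeps running through the horizon
print("Balanced weights:    ", shutdown_year((1.0, 0.5, 1.0)))   # -> 1: runs a year, then closes
```

Nadarajah’s actual work layers reinforcement learning and financial risk measures on top of this kind of structure so that policies hold up under uncertainty, but the weighted trade-off among objectives is the common thread.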
Using these techniques, Nadarajah and his colleagues recently helped a large aluminum manufacturer properly time plant shutdowns to minimize the impact on the surrounding communities, without incurring significant costs. In another line of research, he’s shown that when energy companies are more aggressive in setting emissions goals, they end up spending less overall to reach them.
Companies should be able to harness AI without having to know how to write algorithms, Nadarajah believes. His research group aims to develop solutions businesses could easily tailor to their needs. And, he’s facilitating discussions among stakeholders involved in financing and executing these decisions by hosting workshops, creating policy papers and offering ongoing education.
Meanwhile, in the College of Engineering’s computer science department, Professor and Director of Graduate Studies Barbara Di Eugenio is harnessing AI to offer a literal helping hand to those in need. Di Eugenio’s research focuses on natural language processing (NLP), which allows computers to process and manipulate language in the way it’s commonly spoken or written. NLP is why you can ask Alexa or Siri to play your favorite song using everyday language, or how Google Translate can instantly turn Korean into English and vice versa.
Eventually, NLP may also enable older adults and people with disabilities to interact naturally with assistive robots that help with cooking, cleaning and other daily tasks. Through a project called RoboHelper, Di Eugenio collaborates with Miloš Žefran, UIC professor of electrical and computer engineering and associate dean for faculty affairs, to bring this idea to life.
They began by mapping real-life interactions between older adults and human helpers, components of which went well beyond the spoken word. Say the helper asks the older adult if they’re thirsty, then brings them a glass of water. That interaction involves a verbal question and answer, but also gestures, vision and haptics, or the perception of force — the sense used to understand when the recipient has a firm enough grasp that the helper can let go of the glass.
Solving these problems requires developing abstract models that consider all these different signals, which her roboticist colleagues translate into action, Di Eugenio says. And that’s only one way Di Eugenio is exploring what she calls “NLP with a purpose.” Others include a conversational assistant that helps patients with heart failure manage their health at home.
Throughout her work, Di Eugenio has stayed focused on diversity and inclusivity. In part, it’s woven into the projects themselves. Most heart failure patients using the assistant are African American and Latino, and one goal is making the tool culturally appropriate — using terminology familiar to patients and staying sensitive to their concerns. Having collaborators with lived experience of these identities or deep knowledge from interactions with these communities is essential, she says.

Guiding Ethics and Policy
Although Di Eugenio appreciates the way ChatGPT and similar technologies fuel excitement about AI, she points out ethical concerns with their development and release. Unlike her projects — which train NLP models on specific data sets appropriate to the audience who will use them, such as heart failure patients — large language models use massive amounts of publicly available data. Not everyone who created that data agreed to have it used for profit. Plus, using it indiscriminately may perpetuate existing disparities and biases.
Other faculty members are exploring those potential risks and downsides. Jones, the communications professor, has long been fascinated by how we talk to machines, and he studies the degree to which communication influences our trust in AI.
Throughout history, humans have tended to place significant trust in technology, warranted or not. And because AI systems are programmed to please us, their responses may increasingly skew toward what we want to hear, sometimes to the detriment of truth or objectivity. As a result, we may overtrust AI — in other words, place too much faith in its knowledge or capabilities, Jones says. He studies these issues by conducting surveys and interviews, as well as having study participants talk to robots and digital voice assistants in the lab.
In some cases, the consequences of overtrust are minor, such as watching a different recommended show on a streaming service. But if AI gives incorrect health advice or enables the creation and dissemination of disinformation — easier than ever with content-producing generative AI — the implications are far more serious.
About a week after OpenAI released GPT-4, Jones signed onto an open letter calling on AI labs to pause, for at least six months, the training of AI systems more powerful than GPT-4. In that time, labs and independent experts should create safety protocols and governance systems, the letter implored.
The letter and similar petitions have inspired discussions about the implications of these technologies — conversations Jones welcomes. Even far-fetched scenarios, such as AI taking over nuclear codes, should be considered until they’re proven impossible. “We are in some pretty uncharted territory with this technology, and I think not a day is going to go by, for the foreseeable future, when we’re not surprised by something AI is doing,” he says.
To further these discussions among academics of different disciplines, Jones co-founded a Human-Machine Communication Interest Group within the International Communication Association and co-edited “The SAGE Handbook of Human–Machine Communication,” published this past summer.
Across the highway at UIC Law, scholars grapple with AI’s intersection with existing legal frameworks and policies. Generative AI and large language models raise important questions about patents and copyrights, says Gary Friedlander, interim director of the UIC Center for Intellectual Property, Information & Privacy Law. Much of the data ChatGPT and similar models were trained on is copyrighted, so courts must decide which protections apply both to those inputs and the output AI produces.
In addition, the evolution of AI technologies exacerbates long-held privacy concerns. Besides perpetuating preexisting biases, algorithms may introduce errors, sometimes called “hallucinations.” Some are due to compression: think of the difference between a full-sized digital photo and a JPEG, which shrinks the file by discarding some of the pixel-level information in the image. Because of limits on processing power and other constraints, algorithms lose small, fine points as well. “You’re missing details, and those details can be very important,” Friedlander says.
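The effect can be shown in a few lines: once a system keeps only a coarse version of its inputs, genuinely different values become indistinguishable. The numbers below are invented, and real AI systems compress in far more sophisticated, learned ways, but the loss of detail is the same in kind.

```python
# A tiny illustration of lossy compression: coarse storage erases a real difference.
# The scores are invented; real AI compression is learned, not simple rounding.
readings = [719.4, 721.8]                        # two applicants' scores, genuinely different
compressed = [round(r, -1) for r in readings]    # keep less information, like a lossy format
print(compressed)                                # [720.0, 720.0] -- the distinction is gone
```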
Depending on how AI is used, errors could significantly impact people’s lives — affecting credit scores or the ability to get a loan, or mistakenly landing someone on a no-fly list. Ideally, companies would have considered more of these consequences before releasing these technologies, Friedlander says. Now, recognizing and addressing harms is even more urgent.
“I think law schools can best help by being a neutral party and facilitating discussions from all sides, so we can come to a rational course of action,” he says. And of course, schools play a key role in educating law students on these topics.
Making AI Responsible
If it were up to Lu Cheng, assistant professor in the Department of Computer Science, ethical solutions would be embedded within AI algorithms themselves. Cheng’s research focuses on socially responsible AI. By this, she means technologies that are fair, transparent and reliable even in uncertain conditions; they also protect privacy and are interpretable, meaning humans can understand why they respond the way they do.
One issue Cheng explores in her lab — the Responsible and Reliable AI Lab — is uncertainty, and how AI models could better quantify and convey it. Consider the difference between ChatGPT and a regular search engine. If you Google a term that doesn’t appear online, you’ll get a page that indicates there aren’t many, or any, great matches. ChatGPT, meanwhile, generates answers with the same confidence regardless of supporting data.
Cheng envisions an uncertainty quantification method that would allow AI models to respond with something like: “‘I’m not sure, or I’m not confident about my answers. You should give this question or give my output to human experts,’” Cheng says. That kind of candor would make AI more trustworthy.
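A bare-bones sketch of that interface, invented here to illustrate the idea rather than taken from Cheng’s research, might look like this: the model converts its raw scores into probabilities and, when its best answer falls below a confidence threshold, hands the question to a person instead.

```python
# A minimal sketch of abstention: answer when confident, defer to a human when not.
# The "model" here is just a softmax over made-up scores; real uncertainty
# quantification is far more involved, but the interface is the point.
import numpy as np

def answer_or_defer(logits, labels, threshold=0.75):
    """Return the model's answer, or a deferral message when its confidence is low."""
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return (f"I'm not confident (best guess: {labels[best]}, p={probs[best]:.2f}); "
                "please ask a human expert.")
    return f"{labels[best]} (p={probs[best]:.2f})"

labels = ["benign", "needs follow-up"]
print(answer_or_defer(np.array([2.5, -1.0]), labels))   # confident -> answers
print(answer_or_defer(np.array([0.3, 0.1]), labels))    # uncertain -> defers
```

Making the confidence numbers themselves trustworthy is the hard research problem; the sketch only shows what the resulting behavior would look like.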
Computer scientists can develop algorithms to support many responsible AI practices; Cheng has worked on cyberbullying, online hate and fake-news detection, among other topics. But successfully implementing responsible practices requires collaboration with people from other disciplines, including social scientists and policymakers.
Common language can bridge gaps between fields. Cheng recently co-authored a book, “Socially Responsible AI: Theories and Practices,” in which she and her co-author Huan Liu, from Arizona State University, propose a common framework for cross-disciplinary discussion. Cheng also teaches a course on socially responsible AI, which she hopes instills these values in students and encourages more of them to pursue research in this area.
Ohannessian — the assistant professor of electrical and computer engineering, who last year earned a prestigious Faculty Early Career Development Program (CAREER) award from the National Science Foundation — also focuses on issues of fairness within algorithms. In fact, he leads a research group called Data, Information, and Computing, Equitably.
His students work on both the theory and practical applications of competitive algorithms — those adapting to the structure of data — and non-discriminatory algorithmic decisions. “When we talk about fairness in machine learning, we don’t want it to be only an academic question; we want to make sure it’s useful and relevant,” he says. Student projects have focused on vaccine distribution and discrimination in credit card algorithms, and recently Ohannessian helped the city of Chicago audit response times to 311 calls to ensure they were equitable.
With his CAREER grant, Ohannessian also focuses on education, teaching AI and data science at all levels. In doing so, he has two goals. Of course, he wants to encourage students to become researchers in this area — but he also hopes to engage the broader public. “People are scared of AI; often, people get scared of things they don’t really know,” he says. “Dissipating some of the vagueness can alleviate that fear.”