AI Month roundup: From ethical algorithms to robots that learn

In April, Penn’s School of Engineering and Applied Science showcased a series of news stories exploring the evolving world of artificial intelligence.

Photograph of Amy Gutmann Hall
Amy Gutmann Hall, opening in 2024, will be home to the faculty and students of the new B.S.E. in Artificial Intelligence.

Artificial intelligence is rapidly transforming how humans interact with technology and augmenting human intelligence. With these transformations come both challenges and opportunities. Through programs like the Raj and Neera Singh Program in Artificial Intelligence, the first Ivy League undergraduate degree of its kind, Penn’s School of Engineering and Applied Science is addressing the critical demand for specialized engineers and empowering students to be leaders in AI’s responsible use. 

Throughout April, Penn Engineering celebrated AI Month, a four-week series of news articles and events that explored many facets of AI and its impact on engineering and society. Highlights include:

  • Chris Callison-Burch, associate professor in Computer and Information Science (CIS), shares his own vision of an AI-integrated future.

  • When Pratik Chaudhari was in graduate school, his studies focused on deep learning, a field of artificial intelligence that stacks multiple layers of computation to emulate the workings of biological brains (a toy sketch of such layered computation appears after this list). Now an assistant professor in Electrical and Systems Engineering and a member of the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory, Chaudhari leads a lab at Penn Engineering that draws on disciplines as diverse as physics and neuroscience to understand the process of learning itself. The ultimate goal is to discover the principles that underlie learning in both artificial and biological systems, so that engineers can harness those principles in the machines they design. “I would be very happy as a researcher if I could build a machine that can do most things that your pet dog can do,” Chaudhari says in “Building Robots That Learn.”

  • In 2019, Michael Kearns, National Center Professor of Management & Technology in CIS, and Aaron Roth, Henry Salvatori Professor in Computer & Cognitive Science in CIS, published “The Ethical Algorithm: The Science of Socially Aware Algorithm Design.” Rather than rehash the social ills caused by AI-powered systems, Kearns and Roth’s book enumerates technical improvements to the algorithms that increasingly govern our lives. “In short, we think it’s a good idea to bake ethical considerations into algorithms when it’s sensible and possible,” says Kearns in “The Science of Designing Ethical Algorithms.” “It is not always sensible and possible—and many effects of algorithms are exogenous to the development of the algorithms themselves.”

  • In “Star Trek: The Next Generation,” Captain Picard and the crew of the U.S.S. Enterprise leverage the holodeck, an empty room capable of generating 3D environments. Deeply immersive and fully interactive, holodeck-created environments are infinitely customizable, using nothing but language. Today, though, virtual interactive environments are in surprisingly short supply. That paucity is a problem if you want to train robots to navigate the real world with all its complexities. “Generative AI systems like ChatGPT are trained on trillions of words, and image generators like Midjourney and DALL-E are trained on billions of images,” says Callison-Burch in “Penn Engineers recreate Star Trek’s holodeck using ChatGPT and video game assets.” “We only have a fraction of that amount of 3D environments for training so-called ‘embodied AI.’” Enter Holodeck, a system for generating interactive 3D environments co-created by Callison-Burch; Mark Yatskar, assistant professor in CIS; Yue Yang, a doctoral student in CIS; and Lingjie Liu, Aravind K. Joshi Assistant Professor in CIS, along with collaborators at Stanford, the University of Washington, and the Allen Institute for Artificial Intelligence (AI2). Named for its Star Trek forebear, Holodeck generates a virtually limitless range of indoor environments, using AI to interpret users’ requests.

  • Aashika Vishwanath, a sophomore in CIS at Penn Engineering, is well-acquainted with the rigorous demands of her coursework. “It’s about wrestling with intricate ideas, not just coding,” she explains in “From Blackjack to Chatbots.” In CIS, students are tasked with more than programming; they delve into mathematical proofs, bringing theory and practice to bear on the whole of their discipline. AI-powered chatbots can only do so much to explain these concepts, Vishwanath points out, and for her this gap speaks to a broader issue in education: a global disparity in the availability and quality of support for engineering students. Vishwanath, a TA for CIS 2620: Automata, Computability and Complexity, wondered if she could use AI to help democratize access to high-quality educational support. “Every student deserves a teaching assistant that can guide them through their learning journey with precision and depth,” she says, “regardless of where they are in the world or what resources they have access to.”

  • How do large language models (LLMs), which power chatbots like ChatGPT, actually work? Few are better qualified to answer this question than Surbhi Goel, Magerman Term Assistant Professor in CIS at Penn Engineering. “You can find hundreds and hundreds of articles about large language models,” Goel told the audience at this year’s Women in Data Science (WiDS) @ Penn Conference in February. “They’ve entered our classrooms, they’re being used in all these real-world applications.” But, as Goel notes, these models are also something of a “black box,” even to the researchers who design them. Building from the simplest version of a language model up to the “transformer” architecture behind tools like ChatGPT, Goel demystifies how some of the AI systems reshaping society actually work (a toy version of such a minimal language model appears after this list).

  • In “Overcoming the Odds: Matthew Cleaveland Designs Robots to Deal with Uncertainty, While Outrunning Cancer,” Cleaveland describes running two marathons, one before and one after his cancer treatment, all while developing AI for autonomous systems. Cleaveland, who is advised by PRECISE Center Director Insup Lee and UPS Foundation Professor of Transportation George J. Pappas, develops methods for analyzing the behavior of autonomous systems, which must weigh the probabilities of various events to navigate an uncertain future (a toy illustration of this kind of probabilistic reasoning appears after this list). In one recent project, Cleaveland developed a method for safely planning trajectories for machines in environments with moving obstacles. “One major application of this work,” says Cleaveland, “is for self-driving cars operating in pedestrian-filled environments. Our method can be used on top of any existing method for predicting the future behaviors of the pedestrians.”

  • Ro Encarnación, a doctoral student in CIS at Penn Engineering, studies how algorithms can be held accountable and made to incorporate community concerns. In “Fighting for Algorithmic Justice,” she describes her research on algorithmic justice: the idea that people affected by algorithms in their day-to-day lives should have a say in how those algorithms are designed and used. “The goal is to make sure that these systems are designed in a way that adapts to community concerns,” she says. Encarnación hopes to see more research that puts users at the center of studying algorithmic bias. “Marginalized communities are not a monolith,” she says. “Not everybody’s going to be the same—but that should not deter engineers from trying to create systems that are still accountable to users.”
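
As a rough illustration of the layered computation Chaudhari’s field builds on, here is a minimal two-layer neural network in Python. This is a generic sketch, not code from Chaudhari’s lab; the layer sizes and random weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer of computation: a linear map followed by a nonlinearity,
    loosely analogous to neurons aggregating inputs and firing."""
    return np.maximum(0.0, x @ w + b)  # ReLU activation

# A tiny two-layer network: 4 inputs -> 8 hidden units -> 2 outputs.
# The weights here are random; training would adjust them to fit data.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(1, 4))   # one input example
hidden = layer(x, w1, b1)     # first layer of computation
output = hidden @ w2 + b2     # second layer: a linear readout
print(output)
```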
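
Goel’s talk isn’t reproduced in the story, but the “simplest version of a language model” she builds from can be illustrated with a toy bigram model: it predicts each next word purely from counts of which word followed which in its training text. The corpus and code below are invented for illustration.

```python
from collections import Counter, defaultdict
import random

# Count how often each word follows each other word in a tiny corpus.
corpus = "the robot sees the world and the robot learns the world".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed `prev`."""
    words, weights = zip(*follows[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one word at a time.
word, text = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    text.append(word)
print(" ".join(text))
```

A transformer is conceptually similar in that it also predicts the next token, but it learns far richer statistics from vastly more text.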
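
The story doesn’t spell out Cleaveland’s method, but the core idea of weighing the probabilities of an obstacle’s possible futures before committing to a trajectory can be sketched with a toy Monte Carlo check. The scenario, numbers, and names below are invented for illustration and are not the published approach.

```python
import numpy as np

rng = np.random.default_rng(1)
STEPS, SAFE_DISTANCE, N_SAMPLES = 11, 1.0, 10_000

# Candidate robot trajectory: drive straight along the x-axis.
robot = np.stack([np.linspace(0, 10, STEPS), np.zeros(STEPS)], axis=1)

# Predicted pedestrian path crossing the road, with Gaussian
# uncertainty that grows at each future time step.
pedestrian_mean = np.stack([np.full(STEPS, 5.0),
                            np.linspace(6, -4, STEPS)], axis=1)
sigma = 0.1 * np.arange(STEPS)[:, None]

# Monte Carlo: sample pedestrian futures and count how many
# come within the safety margin of the planned trajectory.
samples = pedestrian_mean + sigma * rng.normal(size=(N_SAMPLES, STEPS, 2))
dists = np.linalg.norm(samples - robot, axis=2)   # distance per step
collision = dists.min(axis=1) < SAFE_DISTANCE     # any step too close?
print(f"Estimated collision probability: {collision.mean():.3f}")
```

A planner weighing probabilities this way could reject any trajectory whose estimated risk exceeds a safety threshold and search for an alternative.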

Read the full stories at Penn Engineering.