Graduate students complete first semester of new AI master’s degree, undergraduate student groups grow

Interest in AI ethics and safety has risen as students start organizations and complete a new degree in the field.

The Tsungming Tu Complex is pictured on Oct. 4, 2022.

A group of students in the Tufts Graduate School of Engineering completed their first semester as the inaugural class of the Master of Science in Artificial Intelligence. The new program comes as student interest in AI continues to grow, sparking the creation of undergraduate student organizations.

Enrollment for the program exceeded expectations, according to Jeffrey Foster, the chair of the computer science department, who emphasized the diversity of the inaugural class as well as his hopes for the program to grow.

The program has also created opportunities for students in the health, security and education fields. Some students have reportedly found co-ops or are actively searching for research work related to AI.

“Some have already been funded as hourly students through the Human-AI Interaction Center,” Mattias Scheutz, a professor of computer science, wrote in a statement to the Daily. “The Tufts Institute for AI is currently accepting additional applications for their internship program.”

“Companies are really loving this [program], because they want people that have depth in AI,” Karen Panetta, the dean of graduate education in the School of Engineering, said.

Panetta added that the involvement of the Gordon Institute, Tufts’ engineering management school, distinguishes the master’s in AI by allowing students to focus on a variety of disciplines, notably business in the context of AI.

As faculty highlight the preprofessional benefits of this program, they also remain cognizant of the ethical concerns involved with AI, which the master’s program hopes to educate its students about. Scheutz said that his department was “actively addressing” the ethical questions surrounding AI through a course in the master’s program curriculum.

“[It’s about] teaching, from birth, children how to use [AI], how to protect themselves,” Panetta said. “Whoever thought that you could see a video of someone and it could be so artificially manipulated that you believe that it is actually someone you love or someone you really know.”

In response to ethical concerns, new student groups have emerged, including the Tufts AI Safety Student Association. Senior Andrew Lawrence, the group’s director, described how new risks associated with AI brought about the need for the organization on campus.

“AI safety in general is sparked by the realization [that] there are a lot of risks and harms associated with AI and so the purpose of our organization … is to strengthen the AI safety ecosystem in the Tufts community and also to convince and prepare our members for careers in AI safety,” Lawrence said.

Lawrence also noted that the organization operates by sponsoring semesterly fellowships in which a small number of students participate in an eight-week curriculum, which features technical and policy-related AI issues.

“When we started last spring, we ran a few fellowship cohorts, and had 10 to 15 more committed members. Since then, we’ve grown substantially. Last semester we had 20 fellows. This semester we have 40,” Lawrence said.

Lawrence has observed a variety of different approaches to teaching AI ethics in higher education, including the use of mandated AI literacy courses. He added, though, that such courses have not been incorporated into the Tufts curriculum.

“The Tufts University admin has generally taken a more hands-off or laissez-faire approach to this, which is understandable because of the liberal arts philosophy and independence of the different schools. But I do wish that there were resources that students had access to,” Lawrence said.

Despite ethical concerns with AI, Panetta insisted that literacy in this technology is crucial for success.

“What we’re trying to do is teach our students how to use [AI] responsibly, and know how people are misusing it, so they’re aware of the implications of do no harm,” Panetta said.

The School of Engineering has also examined other ways to integrate AI into academic work.

“School of Engineering faculty are exploring many ways to bring AI into the classroom, including assignments tailored to help students learn about AI, classroom activities to help students understand AI’s strengths and weaknesses, and some faculty are even developing custom AI chatbots to support students in courses,” Foster wrote.

The attention to other applications of AI beyond computer science has helped attract applicants, which has contributed to a 23% increase in applications to the master’s in AI program compared to last year.