
Tizpaz-Niari receives NSF CAREER Award to build debugging tools for Responsible AI

Saeid Tizpaz-Niari

Assistant Professor Saeid Tizpaz-Niari (سعید تیزپازنیاری) was awarded a prestigious National Science Foundation CAREER grant to develop debugging tools and techniques to meet comprehensive responsibility standards for AI-enabled software solutions before they are publicly released.

“We have emerging properties such as software fairness, transparency, and accountability that require novel debugging tools,” Tizpaz-Niari said. “We need to reimagine software debugging in the era of responsible AI-software development.”

AI has become an integral part of software development practices. Today's software solutions increasingly include pre-trained neural models (large models already trained on massive datasets) to process high-dimensional inputs such as images, speech, or text, alongside components that perform procedural tasks like sorting and searching.

However, this shift poses a formidable challenge: traditional debugging techniques become less effective against the black-box nature of AI software, whose complex requirements carry profound socio-economic, legal, and ethical implications.

To address this, Tizpaz-Niari combines metamorphic testing, a method of analyzing how outputs vary under controlled changes to the inputs, with relational verification across three dimensions: causality, information theory, and extreme value theory.
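The core idea of metamorphic testing can be illustrated with a minimal sketch. Here the metamorphic relation is a fairness property: flipping a protected attribute in an input should not change the decision. The `model_predict` function and its inputs are hypothetical stand-ins, not the project's actual tools; a real target would be an AI-enabled component such as a trained classifier.

```python
def model_predict(applicant: dict) -> bool:
    # Hypothetical decision procedure under test (stand-in for an
    # AI-enabled component, e.g., a trained loan-approval model).
    return applicant["income"] > 50_000 and applicant["credit_score"] > 650

def flip_protected(applicant: dict) -> dict:
    # Metamorphic transformation: change ONLY the protected attribute.
    mutated = dict(applicant)
    mutated["gender"] = "female" if applicant["gender"] == "male" else "male"
    return mutated

def metamorphic_fairness_test(applicants: list) -> list:
    # The metamorphic relation: the decision must be identical for an
    # input and its transformed twin. Each violation is a candidate
    # fairness bug, found without knowing the "correct" output.
    return [a for a in applicants
            if model_predict(a) != model_predict(flip_protected(a))]

applicants = [
    {"income": 60_000, "credit_score": 700, "gender": "male"},
    {"income": 40_000, "credit_score": 720, "gender": "female"},
]
print(metamorphic_fairness_test(applicants))  # this model ignores gender -> []
```

The value of the approach is that no oracle for the "right" answer is needed; only the relation between paired outputs is checked, which is what makes it suitable for black-box AI components.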

Causal reasoning can generate realistic test cases to assess responsibility properties. Recent attempts to reduce bias and improve inclusion fail because the models are trained over unrealistic samples. For example, Google paused Gemini AI's ability to generate images of people after diversity errors were found. In the company's effort to prevent racial and gender stereotypes, the tool began producing historically inaccurate images: when asked to depict the U.S. founding fathers or Nazi-era German soldiers, it returned results that included women and people of color.

“In the process of generating these cases, we need to make sure they are realistic,” Tizpaz-Niari said.

Information theory is used in metamorphic debugging to summarize properties, such as group fairness, that require simultaneous analysis over numerous inputs across varying social groups. It can enable the detection and quantification of areas where AI software systematically disadvantages a marginalized community. Tizpaz-Niari is also using extreme value theory, a statistical framework, to provide guarantees that minimize the risk of missing bugs or producing wrong explanations during metamorphic debugging.
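One standard information-theoretic way to quantify such a group-level disparity is the empirical mutual information between group membership and model outcome: zero bits means outcomes reveal nothing about the group, while larger values measure systematic dependence. The sketch below is an illustrative assumption on our part, not the project's specific measure.

```python
import math
from collections import Counter

def mutual_information(groups: list, outcomes: list) -> float:
    # Empirical mutual information I(G; Y) in bits between group
    # membership G and model outcome Y, estimated from paired samples.
    n = len(groups)
    p_g = Counter(groups)            # marginal counts of groups
    p_y = Counter(outcomes)          # marginal counts of outcomes
    p_gy = Counter(zip(groups, outcomes))  # joint counts
    mi = 0.0
    for (g, y), c in p_gy.items():
        joint = c / n
        mi += joint * math.log2(joint / ((p_g[g] / n) * (p_y[y] / n)))
    return mi

# Outcomes fully determined by group -> 1 bit; independent -> 0 bits.
print(round(mutual_information(["a", "a", "b", "b"], [1, 1, 0, 0]), 3))  # 1.0
print(round(mutual_information(["a", "b", "a", "b"], [1, 1, 0, 0]), 3))  # 0.0
```

Because the measure aggregates over the whole test population at once, it captures group-level properties that no single input/output pair can witness, which is why it complements per-input metamorphic relations.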

Tizpaz-Niari is teaching students how to develop software in the era of AI. Last spring, he taught CS 594, Responsible AI Engineering, and this semester he is teaching CS 516, Responsible Data Science and Algorithmic Fairness. He hopes to expand his current research into a future course centered on requirements, architecture design, quality assurance, and operations for AI software development from a responsible perspective that covers fairness, transparency, and accountability. Tizpaz-Niari wants students to understand the relational nature of emerging responsibility requirements, which are prevalent in adversarial robustness, such as evasion, data poisoning, and prompt injection; ethics, including fairness, transparency, and accountability; and security issues, including privacy and confidentiality.

“The perception of responsible AI software may be different for different people. Maybe you prioritize privacy. Maybe another student prioritizes individual fairness, while another one is concerned about group fairness,” Tizpaz-Niari said. “So, what are the perceptions and priorities of responsible AI across our student population, and those who are developing real AI software?”

The five-year NSF CAREER award complements Tizpaz-Niari's existing work. He has two other active NSF grants: one from the NSF Security, Privacy, and Trust in Cyberspace (SaTC) program and another from the agency's Designing Accountable Software Systems (DASS) program. The SaTC project examines meta-functional properties of AI software: security concerns beyond functional correctness, including availability and confidentiality. The DASS grant is focused on accountable tax preparation software; he is working to build open-source tax preparation software that enables low-income individuals to claim all the benefits they are entitled to under the tax law, such as credits and deductions.

The $528,319 NSF CAREER grant began October 1 and runs through September 30, 2030.