UCF Researcher Receives DOE Funding to Advance Human Understanding of AI Reasoning

Making AI models understandable to humans

A researcher from the University of Central Florida (UCF) has been awarded funding from the U.S. Department of Energy (DOE) to further our understanding of artificial intelligence (AI) reasoning. This exciting project aims to develop algorithms that will provide robust multi-modal explanations for foundation AI models, addressing the need for explainable AI methods. The DOE has granted $400,000 to support this research, highlighting its potential for advancing scientific discovery.

Foundation models are trained on vast amounts of data and can be applied to a wide range of tasks, making them highly versatile. These models have already found applications in real-world scenarios, such as autonomous vehicles and scientific research. However, the lack of methods for explaining AI decisions to humans has hindered their widespread adoption. Establishing human trust in AI is crucial, particularly in fields such as science that depend heavily on that trust.

The research team, led by Rickard Ewetz, an associate professor in UCF’s Department of Electrical and Computer Engineering, aims to create algorithms that offer meaningful explanations of AI models’ decision-making processes. By doing so, they hope to enhance human understanding of, and trust in, AI systems. Ewetz emphasizes the importance of transparency, stating that AI models cannot be treated as black boxes; instead, researchers need to be able to explain how the underlying neural networks reason.

Unlike previous explainable AI efforts that primarily focused on model gradients, this project takes an innovative approach by incorporating symbolic reasoning. The team plans to describe AI reasoning using trees, graphs, automata, and equations. Additionally, the researchers aim to estimate the accuracy and knowledge limits of the model’s explanations, providing a more comprehensive understanding of AI reasoning.
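To make the idea of a symbolic explanation concrete, the sketch below shows one common, simplified technique: fitting a shallow decision tree as a surrogate for a black-box model and measuring how faithfully it mimics the model’s predictions. This is only an illustrative assumption, not the team’s actual method; the model, dataset, and names used here are hypothetical.

```python
# Illustrative sketch (not the project's method): approximating a black-box
# model with a symbolic, tree-structured surrogate explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in "black box": a random forest trained on synthetic data.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a shallow decision tree to the black box's predictions, producing a
# human-readable set of if-then rules that describe its behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box, a rough
# analogue of estimating how accurate an explanation is.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```

The printed tree is a symbolic description a person can read directly, and the fidelity score gives a rough sense of where the explanation can and cannot be trusted.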

Sumit Jha, a co-researcher from the University of Texas at San Antonio, stresses the urgency of explainable AI, especially given the rapid deployment of AI models. Currently, AI does not explain its decisions or acknowledge its mistakes, so relying on such systems amounts to blind trust, which is concerning because the future will inevitably include both good and bad AI models.

Rickard Ewetz received his doctorate in electrical and computer engineering from Purdue University and joined UCF’s College of Engineering and Computer Science in 2016. His primary research areas include AI and machine learning, emerging computing paradigms, future computing systems, and computer-aided design for very large-scale integration.

This DOE-funded research project has the potential to transform how humans understand AI reasoning and to pave the way for greater trust in AI systems. The outcomes of this research could have far-reaching implications across industries, enabling the safe and effective deployment of AI technologies.

FAQs

What is the goal of this research project?

The goal of this research project is to develop algorithms that provide meaningful explanations for AI models’ decision-making processes. By doing so, this research aims to enhance human understanding and trust in AI systems.

How will symbolic reasoning be incorporated into AI reasoning?

Symbolic reasoning will be used to describe AI reasoning through the use of trees, graphs, automata, and equations. This innovative approach will provide more comprehensive explanations of AI models’ decision-making processes.

Why is explainable AI important?

Explainable AI is crucial for establishing human trust in AI systems. Currently, AI models are often treated as black boxes, hindering their adoption in fields that require human trust, such as science. By providing meaningful explanations, AI systems can be deployed with higher levels of human trust and understanding.

Conclusion

The DOE-funded research project led by a UCF researcher aims to advance our understanding of AI reasoning by developing algorithms that provide meaningful explanations for AI models. With the growing deployment of AI systems, it is essential to establish human trust and understanding in these technologies. This project has the potential to significantly impact various industries and pave the way for the responsible and effective use of AI.