The Foundations of Artificial Intelligence (FoAI) is a research area within Georgia Tech’s School of Computer Science (SCS) that focuses on developing algorithms that leverage data and statistical tools to solve complex human tasks, on exploring novel applications of such tools, and on better understanding the apparent success of AI in practice. Rather than targeting specific applications (e.g., computer vision, natural language processing, or robotics), FoAI emphasizes general principles and novel approaches that apply across a wide spectrum of applications.
We are particularly interested in topics such as machine learning theory, scalable and distributed training, heterogeneity-aware inference, and robust, dynamically adaptive algorithms that help navigate the multi-dimensional tradeoff space spanned by ML accuracy, model size, latency, and the spatio-temporal cost efficiency of both training and inference.
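As a small, concrete illustration of the style of algorithm studied in this area, the sketch below implements UCB1, a classic online-learning method for the multi-armed bandit problem (the framework that, for example, the Falcon paper listed below builds on). This is a minimal, self-contained toy: the function name, reward probabilities, and horizon are illustrative assumptions, not drawn from any FoAI project.

```python
# Minimal UCB1 sketch for a Bernoulli multi-armed bandit (illustrative only;
# not taken from any specific FoAI paper or system).
import math
import random

def ucb1(reward_probs, horizon=10_000, seed=0):
    """Play `horizon` rounds of a Bernoulli bandit with the UCB1 rule."""
    rng = random.Random(seed)
    n_arms = len(reward_probs)
    counts = [0] * n_arms    # pulls per arm
    totals = [0.0] * n_arms  # cumulative reward per arm

    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # play each arm once to initialize its estimate
        else:
            # Pick the arm maximizing empirical mean + exploration bonus.
            arm = max(
                range(n_arms),
                key=lambda a: totals[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward

    return counts, totals

# Example run: three arms with unknown success rates. UCB1 concentrates its
# pulls on the best arm while keeping regret logarithmic in the horizon.
counts, totals = ucb1([0.3, 0.5, 0.7])
print(counts)  # most pulls should go to the 0.7 arm
```

The exploration bonus shrinks as an arm is pulled more often, which is exactly the kind of accuracy-versus-exploration-cost tradeoff described above.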
The FoAI area at SCS has made significant contributions in:
- Online learning
- Reinforcement learning
- Systems support for distributed ML frameworks
- Resource management for distributed ML frameworks
- Continual learning
- Learning theory
- Federated learning
- AutoML
- Explainable ML
- Systems support for heterogeneity-aware ML inference
- Neural Architecture Search (NAS)
- Neuro-inspired AI
- Formal methods in AI
- Combination of learning and reasoning
- Trustworthy AI
Our major sources of funding are the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA). Additionally, we participate in interdisciplinary research that brings together machine learning, neuroscience, biology, mathematics and statistics, and theoretical computer science. We welcome the involvement of graduate and undergraduate students in our research projects and in our broader intellectual community.
Selected Recent Papers from FoAI Researchers (2021-2024)
- Ki Hyun Tae, Hantian Zhang, Jaeyoung Park, Kexin Rong, and Steven Euijong Whang. "Falcon: Fair Active Learning Using Multi-armed Bandits." International Conference on Very Large Data Bases (VLDB) 2024 (to appear).
- Fatih Ilhan, Gong Su, and Ling Liu. "ScaleFL: Resource-Adaptive Federated Learning with Heterogeneous Clients." Conference on Computer Vision and Pattern Recognition (CVPR) 2024.
- Yiwen Zhang, Xumiao Zhang, Ganesh Ananthanarayanan, Anand Iyer, Yuanchao Shu, Victor Bahl, Z. Morley Mao, and Mosharaf Chowdhury. "Automatic Query Planning for Live ML Analytics." Symposium on Networked Systems Design and Implementation (NSDI) 2024.
- Khashayar Gatmiry, Thomas Kesselheim, Sahil Singla, and Yifan Wang. "Bandit Algorithms for Prophet Inequality and Pandora's Box." ACM-SIAM Symposium on Discrete Algorithms (SODA) 2024.
- Shreyas Malakarjun Patil, Loizos Michael, and Constantine Dovrolis. "Neural Sculpting: Uncovering Hierarchically Modular Task Structure in Neural Networks through Pruning and Network Analysis." Advances in Neural Information Processing Systems (NeurIPS) 2023.
- Stephen Mussmann and Sanjoy Dasgupta. "Constants Matter: The Performance Gains of Active Learning." International Conference on Machine Learning (ICML) 2022.
- Kishor Jothimurugan, Suguman Bansal, Osbert Bastani, and Rajeev Alur. "Compositional Reinforcement Learning from Logical Specifications." Advances in Neural Information Processing Systems (NeurIPS) 2021.
- Jun-Kun Wang, Jacob Abernethy, and Kfir Y. Levy. "No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization." Mathematical Programming (2023).
- Zifan Wang, Saranya Vijayakumar, Kaiji Lu, Vijay Ganesh, Somesh Jha, and Matt Fredrikson. "Grounding Neural Inference with Satisfiability Modulo Theories." Advances in Neural Information Processing Systems (NeurIPS) 2024 (spotlight paper).
- Max Dabagia, Christos H. Papadimitriou, and Santosh S. Vempala. "Computation with Sequences in the Brain." Algorithmic Learning Theory (ALT) 2024.