Robert Joseph

Ph.D. Math & CS, California Institute of Technology (Sep 2023 - Present)
B.Sc. Honors Math & CS, First Class, University of Alberta (Sep 2019 - May 2023)
I am a second-year Ph.D. student in Math & Computer Science at Caltech, advised by Professor Anima Anandkumar (Nvidia). I work in the Artificial Intelligence & Machine Learning (AI4Science) Group. I graduated from the University of Alberta in Honors Math & Computer Science with the Dean's Silver Medal in Science. My main research interests are:
1. AI4Science (Theory + Applications + Neural Operators)
My work in AI4Science focuses on developing novel neural operator architectures for solving complex partial differential equations (PDEs). Key contributions are listed below, followed by a short code sketch of the core idea:
- Incremental Fourier Neural Operator (iFNO): Developed incremental spatial and spectral learning methods for solving large-scale PDEs (TMLR 2024)
- CoDA-NO: Pretraining Codomain Attention Neural Operators for Multiphysics PDEs, achieving state-of-the-art results (NeurIPS 2024)
- Neural Operator library: Contributing to an open-source library for learning neural operators (under review)
- Extended work on physics-informed neural operators and spectral methods for parametric PDEs
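To make the neural operator idea concrete, here is a minimal PyTorch sketch of the spectral convolution at the heart of an FNO: transform the input to Fourier space, apply a learned linear map to a truncated set of low-frequency modes, and transform back. The class name and hyperparameters are illustrative, not taken from any of the codebases above; iFNO's incremental idea is, roughly, to start small and grow the number of retained modes during training.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Minimal 1D spectral convolution, the core block of a Fourier
    Neural Operator: FFT, learned channel mixing on the lowest
    `n_modes` frequencies, inverse FFT. Illustrative sketch only."""

    def __init__(self, in_channels: int, out_channels: int, n_modes: int):
        super().__init__()
        self.n_modes = n_modes
        scale = 1.0 / (in_channels * out_channels)
        self.weight = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, n_modes,
                                dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, grid_size)
        x_ft = torch.fft.rfft(x)                      # to Fourier space
        out_ft = torch.zeros(x.size(0), self.weight.size(1),
                             x_ft.size(-1), dtype=torch.cfloat,
                             device=x.device)
        # Mix channels on the retained low-frequency modes only.
        out_ft[..., :self.n_modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., :self.n_modes], self.weight
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to the grid

layer = SpectralConv1d(in_channels=3, out_channels=3, n_modes=16)
u = torch.randn(8, 3, 128)    # batch of functions on a 128-point grid
print(layer(u).shape)         # torch.Size([8, 3, 128])
```

Because the learned weights act only on the retained modes, the layer is resolution-agnostic: the same weights can be applied to inputs sampled on finer grids.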
2. AI4Math (Theorem Proving in Lean + Formalization + LLM Reasoning)
Leading efforts to integrate large language models with formal theorem proving in Lean. Major projects are listed below, followed by a toy Lean example:
- LeanAgent: Lifelong Learning for Formal Theorem Proving - demonstrates curriculum learning over different mathematical domains (ICLR 2025)
- LeanProgress: First-of-its-kind reward model predicting proof progress in Lean (under review)
- LeanPDE: Formalizing PDEs in general Euclidean spaces, working toward the Millennium Prize Problems
- Collaborating on autoformalizers and unified interfaces for Lean tools
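For a flavor of what working in Lean looks like, here is a toy Lean 4 proof using Mathlib. It is purely illustrative and not drawn from LeanAgent's benchmarks: each tactic transforms the goal state, and it is exactly these tactic sequences that a proof-search agent explores.

```lean
-- Toy Lean 4 / Mathlib proof: each tactic rewrites the goal state
-- until no goals remain.
import Mathlib

example (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 := by
  have ha : 0 ≤ a ^ 2 := sq_nonneg a   -- squares are nonnegative
  have hb : 0 ≤ b ^ 2 := sq_nonneg b
  linarith                             -- close the goal by linear arithmetic
```

A progress model in the spirit of LeanProgress would, roughly speaking, score intermediate goal states like the one after the two `have` steps by how close they are to a finished proof.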
3. Optimization and Efficient Training of Foundation Models
Developing memory-efficient training methods and optimization techniques for large-scale models; a schematic code sketch follows the list:
- Tensor-GaLore: Memory-Efficient Training via Gradient Tensor Decomposition (NeurIPS Optimization Workshop 2024)
- Working on efficient training methods for foundation models in scientific domains
- Exploring connections between optimization theory and neural operator architectures
- Investigating theoretical foundations of deep learning and generalization bounds
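The gist of the GaLore family, of which Tensor-GaLore is the tensor generalization, can be sketched in a few lines of PyTorch for the matrix case: project the gradient onto a low-rank subspace, keep optimizer state there, and project updates back. Everything below is a schematic illustration with hypothetical helper names, not the Tensor-GaLore implementation, which replaces the SVD with a tensor decomposition such as Tucker for higher-order gradients (e.g., FNO weight tensors).

```python
import torch

def low_rank_project(grad: torch.Tensor, rank: int):
    """Schematic GaLore-style projection of a 2D gradient.

    Optimizer state (e.g., Adam moments) lives in the rank-`rank`
    subspace, which is where the memory saving comes from.
    Tensor-GaLore generalizes the SVD here to a tensor decomposition
    for higher-order gradient tensors. Hypothetical helper, for
    illustration only.
    """
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]          # orthonormal basis of the subspace
    return P.T @ grad, P     # compressed gradient and projector

def low_rank_restore(grad_low: torch.Tensor, P: torch.Tensor):
    """Map the compressed update back to the full parameter space."""
    return P @ grad_low

grad = torch.randn(1024, 1024)           # a dense weight gradient
g_low, P = low_rank_project(grad, rank=32)
print(g_low.shape)                        # torch.Size([32, 1024])
update = low_rank_restore(g_low, P)       # full-size, rank-32 update
```

The memory saving comes from the optimizer moments: for Adam on an m×n weight with rank r ≪ min(m, n), the two moment tensors shrink from 2mn entries to roughly 2rn, plus the mr entries of the projector.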
I previously worked with Professors Martha White and Adam White (Google DeepMind) at the Reinforcement Learning and Artificial Intelligence (RLAI) Lab, led by Rich Sutton (Google DeepMind), and at the Alberta Machine Intelligence Institute (Amii), as well as with Professor John Bowman in the Mathematics Department. I was a CSRMP Research Scholar at Google AI and collaborated with Microsoft Research as a Data Science Intern. I also did research in numerical algorithms, affiliated with the Pacific Institute for the Mathematical Sciences (PIMS) and the Applied Mathematics Institute (AMI).
I am also interested in giving back to the community by teaching, mentoring students, and sharing my love for AI4Science & Math. I previously co-led the ML Theory Reading Group at Cohere for AI. Feel free to contact me if any of these interests align with your research or if you have any questions; I am always open to new collaborations.
Email: rgeorge (at) caltech (dot) edu | Twitter: @Robertljg | LinkedIn: Robertj | Github: @Robertboy18 | Academic: CV
Latest News
- May 2025 - Invited to attend the Algorithmic Stability: Mathematical Foundations for the Modern Era workshop at the American Institute of Mathematics, Caltech.
- April 2025 - Gave a talk at the Autoformalization for the Working Mathematician workshop at ICERM, Brown University. Slides+Video
- April 2025 - Attended the Simons Institute for the Theory of Computing and SLMath Joint Workshop: AI for Math and TCS.
- March 2025 - Accepted an offer to join Amazon in NYC this summer as a Research Scientist with the Reinforcement Learning Team.
- January 2025 - LeanAgent has been accepted to ICLR 2025 in Singapore. See you all there.