The Trillion Parameter Consortium (TPC) – an emerging collective of national laboratories, universities, institutes, and companies – brings together individuals and groups who are developing, training, and harnessing large-scale models, along with those operating the high-performance computing systems needed to train them.
TPC supports collaboration among innovators in the fields of artificial intelligence, supercomputing, and data science. To that end, we are pleased to present an ongoing series of seminars featuring some of the most prominent figures in these domains. These seminars explore the potential of Large Language Models (LLMs) and their synergy with High-Performance Computing (HPC) techniques and technologies.
Upcoming Events
Check back for information on upcoming TPC seminars.
Hosted by:
Charlie Catlett
Senior Computer Scientist
Argonne National Laboratory

Past Events
2026

Building a virtual AI biomedical scientist
March 6, 2026
1:00 p.m. (CT)
Kexin Huang, final-year PhD student in Computer Science at Stanford University
2025

Vision AI for Science and Engineering Applications
September 18, 2025
9:00 a.m. (CT)
Mohamed Wahib, Team Leader of the “High Performance Artificial Intelligence Systems Research Team”

The Limitations of Data, Machine Learning & Us
September 3, 2025
11:00 a.m. (CT)
Ricardo Baeza-Yates, Director of the AI Institute at the Barcelona Supercomputing Center

Satoshi Matsuoka, RIKEN Center for Computational Science
July 23, 2025
9:00 a.m. (CT)

Building agentic co-scientist systems for accelerating scientific discovery at scale
July 2, 2025
12:00 p.m. (CT)
Arvind Ramanathan, Computer Science, Argonne National Laboratory

AI for Science Market Drivers, Application Areas, Technologies, Growth Rates, Trends and Results from Our Recent AI Studies
June 18, 2025
1:00 p.m. (CT)
Earl Joseph, Chief Executive Officer at Hyperion Research
Thomas Sorensen, Associate Analyst at Hyperion Research
Additional Resources:
Hyperion Research Top 2 AI Applications in 2024

Evaluating and Optimizing LLMs For Exploration In-Context
June 4, 2025
11:00 a.m. (CT)
Allen Nie
PhD student at Stanford University

Utilizing LLMs for Parallel Scientific Code Generation and Translation
May 28, 2025
1:00 p.m. (CT)
Valerie Taylor
Director of the Mathematics and Computer Science Division and Distinguished Fellow at Argonne National Laboratory

Research Assistants in Molecular Biology
May 14, 2025
Miguel Vazquez
Head of the Genome Informatics Unit at the Barcelona Supercomputing Center (BSC)

AI-Driven Modelling of the Immune System (part of the AI Distinguished Lecture Series)
May 1, 2025
María Rodríguez Martínez
Yale School of Medicine

Scalable Training of Trustworthy and Efficient Predictive Graph Foundation Models for Atomistic Materials Modeling: A Case Study with HydraGNN
April 23, 2025
Prasanna Balaprakash
Director of AI Programs and a Distinguished R&D Scientist at Oak Ridge National Laboratory (ORNL)

EAIRA: Establishing a methodology to evaluate LLMs as research assistants
April 2, 2025
Franck Cappello
Senior Computer Scientist, Argonne National Laboratory

PDE-Controller: LLMs for Autoformalization and Reasoning of PDEs
March 19, 2025
Dr. Wuyang Chen
Simon Fraser University

Efficiently Learning at Test-Time with LLMs via Transductive Active Learning
March 5, 2025
Jonas Hübotter
Doctoral Researcher, Learning and Adaptive Systems Group at ETH Zurich

TPC Seminar Talk
February 19, 2025
Michael Levin
Tufts University, Levin Lab

Kevin Chan, Global Policy Campaign Strategies Director at Meta Platforms
February 5, 2025

Efficient Generation of Scientific Corpus from PDFs
January 29, 2025
Avaneesh Ramesh
Westwood High School

Adaptive Multimodal Conditional Diffusion for Complex Dynamic Systems
January 15, 2025
Dr. Alexander Scheinker
Los Alamos National Laboratory
2024

The Space of Possible Minds
December 18, 2024
Philip Ball
Freelance writer and broadcaster

Resource-friendly alignment in language models: from reward modeling to preference learning
December 4, 2024
Jiwoo Hong
MSc student, KAIST AI

Scaling Large Vision-Language Models for Enhanced Multimodal Comprehension in Scientific Discovery
November 7, 2024
Chibuike Robinson Umeike
Graduate research and teaching assistant at the University of Alabama

Towards Scientific Agents: From Foundation Models to Automated Discovery
October 30, 2024
Karthik Duraisamy
Professor of Aerospace Engineering at the University of Michigan and director of the Michigan Institute for Computational Discovery and Engineering (MICDE)
Recording passcode: G5*T61H.

AI Agents: Unleashing the Power of Superintelligence in Science and Technology
September 18, 2024
Dr. Neeraj Kumar
Chief Data Scientist at Pacific Northwest National Laboratory (PNNL)

Towards Generative Decision-Making Agents
September 4, 2024
Yuexiang (Simon) Zhai
Final-year PhD candidate at Berkeley EECS

Scaling Generative AI and LLM Models on Aurora
August 7, 2024
Koichi Yamada
Sr. Principal Engineer in the Data Center and AI Group (DCAI) at Intel

Groq’s Approach to HW/SW Systems for LLM Inference
July 10, 2024
Valentin Reis
Software Engineer at Groq Inc.

Risk Assessment, Safety Alignment, and Guardrails for Generative Models
June 5, 2024
Bo Li
Neubauer Associate Professor in the Department of Computer Science

Curating Dolma, an Open Corpus for Language Model Pretraining Research
May 22, 2024
Kyle Lo
Research Scientist at the Allen Institute for AI in Seattle

Optimizing distributed training on Frontier for large language models
May 8, 2024
Sajal Dash
Research Scientist at Oak Ridge National Laboratory

Bridging the data gap between children and large language models
April 24, 2024
Michael C. Frank
Stanford University

How do we assess the behavior of AI agents when the question is hard, and the answer is complicated?
April 10, 2024
Dexter Pratt
Director of Software Development

Overview of Efforts to Pre-train LLMs in Japan
March 20, 2024
Rio Yokota
Global Scientific Information and Computing Center, Tokyo Institute of Technology

Can Artificial Intelligence Generate Meaningful Scientific Hypotheses?
March 6, 2024
Yuan-Sen Ting
Australian National University and Ohio State University

Large Language Models (LLMs): Tutorial Workshop
February 12 & 13, 2024
Several presenters

Professor Irina Rish, Université de Montréal (UdeM)
February 7, 2024

Continual Pre-Training of Foundation Models
January 25, 2024
Kshitij Gupta
MSc student at Mila, Université de Montréal (UdeM)
2023

DeepSpeed4Science: Enabling System Support for Large Signature AI4Science Models at Scale
December 4, 2023
Leon Song
Senior Principal Research Manager at Microsoft Research

Argonne’s “AuroraGPT” Project
November 28, 2023
Rick L. Stevens
Associate Lab Director and Distinguished Fellow at Argonne National Laboratory

