The Trillion Parameter Consortium – an emerging collective of national laboratories, universities, institutes, and companies – brings together individuals and groups who develop, train, and harness large-scale models, along with those who operate the high-performance computing systems needed to train them.

TPC supports collaboration among innovators in the fields of artificial intelligence, supercomputing, and data science. To that end, we are excited to announce a forthcoming series of seminars featuring some of the most prominent figures in these domains. These seminars will explore the incredible potential of Large Language Models (LLMs) and their synergy with High-Performance Computing (HPC) techniques and technologies.

Upcoming Events

Check back for information on upcoming TPC seminars.



Hosted by:

Charlie Catlett
Senior Computer Scientist
Argonne National Laboratory


Past Events

2026

Building a virtual AI biomedical scientist

March 6, 2026
1:00 PM (CT)

Kexin Huang, final-year PhD student in Computer Science at Stanford University

2025

Vision AI for Science and Engineering Applications

September 18, 2025
9:00 AM (CT)

Mohamed Wahib, Team Leader of the High Performance Artificial Intelligence Systems Research Team

The Limitations of Data, Machine Learning & Us

September 3, 2025
11:00 AM (CT)

Ricardo Baeza-Yates, Director of the AI Institute at the Barcelona Supercomputing Center

Satoshi Matsuoka, RIKEN Center for Computational Science

July 23, 2025
9:00 AM (CT)

Building agentic co-scientist systems for accelerating scientific discovery at scale

July 2, 2025
12:00 PM (CT)

Arvind Ramanathan, Computer Science, Argonne National Laboratory

AI for Science Market Drivers, Application Areas, Technologies, Growth Rates, Trends and Results from Our Recent AI Studies

June 18, 2025
1:00 PM (CT)

Earl Joseph, Chief Executive Officer at Hyperion Research

Thomas Sorensen, Associate Analyst at Hyperion Research

Additional Resources:
Hyperion Research Top 2 AI Applications in 2024

Hyperion Research TPC AI Update 6.18.2025

Evaluating and Optimizing LLMs For Exploration In-Context

June 4, 2025
11:00 AM (CT)

Allen Nie
PhD Student at Stanford University

Utilizing LLMs for Parallel Scientific Code Generation and Translation

May 28, 2025
1:00 PM (CT)

Valerie Taylor
Director of the Mathematics and Computer Science Division and Distinguished Fellow at Argonne National Laboratory

Research Assistants in Molecular Biology

May 14, 2025

Miguel Vazquez
Head of the Genome Informatics Unit at the Barcelona Supercomputing Center (BSC)


Part of the AI Distinguished Lecture Series: AI-Driven Modelling of the Immune System

May 1, 2025

María Rodríguez Martínez
Yale School of Medicine

Scalable Training of Trustworthy and Efficient Predictive Graph Foundation Models for Atomistic Materials Modeling: A Case Study with HydraGNN

April 23, 2025

Prasanna Balaprakash
Director of AI Programs and a Distinguished R&D Scientist at Oak Ridge National Laboratory (ORNL)


EAIRA: Establishing a methodology to evaluate LLMs as research assistants

April 2, 2025

Franck Cappello
Senior Computer Scientist, Argonne National Laboratory


PDE-Controller: LLMs for Autoformalization and Reasoning of PDEs

March 19, 2025

Dr. Wuyang Chen
Simon Fraser University

Efficiently Learning at Test-Time with LLMs via Transductive Active Learning

March 5, 2025

Jonas Hübotter
Doctoral Researcher, Learning and Adaptive Systems Group at ETH Zurich

TPC Seminar Talk

February 19, 2025

Michael Levin
Tufts University, Levin Lab

Meta Platforms

February 5, 2025

Kevin Chan
Global Policy Campaign Strategies Director

Efficient Generation of Scientific Corpus from PDFs

January 29, 2025

Avaneesh Ramesh
Westwood High School

Adaptive Multimodal Conditional Diffusion for Complex Dynamic Systems

January 15, 2025

Dr. Alexander Scheinker
Los Alamos National Laboratory

2024

The Space of Possible Minds

December 18, 2024

Philip Ball
Freelance writer and broadcaster

Resource-friendly alignment in language models: from reward modeling to preference learning

December 4, 2024

Jiwoo Hong
MSc Student at KAIST AI

Scaling Large Vision-Language Models for Enhanced Multimodal Comprehension in Scientific Discovery

November 7, 2024

Chibuike Robinson Umeike
Graduate research and teaching assistant at the University of Alabama

Towards Scientific Agents: From Foundation Models to Automated Discovery

October 30, 2024

Karthik Duraisamy
Professor of Aerospace Engineering at the University of Michigan and director of Michigan Institute for Computational Discovery and Engineering (MICDE)

Recording passcode:
G5*T61H.

AI Agents: Unleashing the Power of Superintelligence in Science and Technology

September 18, 2024

Dr. Neeraj Kumar
Chief Data Scientist at Pacific Northwest National Laboratory (PNNL)

Towards Generative Decision-Making Agents

Yuexiang (Simon) Zhai
Final-year PhD candidate at Berkeley EECS

Presented on September 4, 2024

Scaling Generative AI and LLM Models on Aurora

Koichi Yamada
Sr. Principal Engineer in the Data Center and AI Group (DCAI) at Intel

Presented on August 7, 2024

Groq’s Approach to HW/SW Systems for LLM Inference

Valentin Reis
Software Engineer at Groq Inc.

Presented on July 10, 2024

Risk Assessment, Safety Alignment, and Guardrails for Generative Models

Bo Li
Neubauer Associate Professor in the Department of Computer Science

Presented on June 5, 2024

Curating Dolma, an Open Corpus for Language Model Pretraining Research

Kyle Lo
Research Scientist at the Allen Institute for AI in Seattle

Presented on May 22, 2024

Optimizing distributed training on Frontier for large language models

Sajal Dash
Research Scientist at Oak Ridge National Laboratory

Presented on May 8, 2024

Bridging the data gap between children and large language models

Michael C. Frank
Stanford University

Presented on April 24, 2024

How do we assess the behavior of AI agents when the question is hard, and the answer is complicated?

Dexter Pratt
Director of Software Development

Presented on April 10, 2024

Overview of Efforts to Pre-train LLMs in Japan

Rio Yokota
Global Scientific Information and Computing Center, Tokyo Institute of Technology

Presented on March 20, 2024

Can Artificial Intelligence Generate Meaningful Scientific Hypotheses?

Yuan-Sen Ting
Australian National University and Ohio State University

Presented on March 6, 2024

Large Language Models (LLMs): Tutorial Workshop

Several Presenters

Presented on February 12 & 13, 2024

Professor Irina Rish

Université de Montréal (UdeM)

Presented on February 7, 2024

Continual Pre-Training of Foundation Models

Kshitij Gupta
MSc student at Mila through the Université de Montréal (UdeM)

Presented on January 25, 2024

2023

DeepSpeed4Science: Enabling System Support for Large Signature AI4Science Models at Scale

Leon Song
Senior Principal Research Manager at Microsoft Research

Presented on December 4, 2023

Argonne’s “AuroraGPT” Project

Rick L. Stevens
Associate Lab Director and Distinguished Fellow at Argonne National Laboratory

Presented on November 28, 2023