Theme:
Generative AI techniques, including Large Language Models (LLMs), applied to large-scale data sets have the potential to generate results resembling human reasoning. ChatGPT’s launch on November 30, 2022 precipitated widespread thinking about the potential of Generative AI to revolutionize the economy and society, while also highlighting the challenges of computer-driven thinking and black-box models. While there is clear potential for synthesizing internet-scale information to answer general questions, GenAI techniques face significant challenges in turning that information into high-confidence solutions for specialized domains. Moreover, the significant increase in computing demand has renewed concerns over the carbon footprint of AI and increased interest in energy-efficient computing.
CLSAC 2024 will explore the potential of these models to revolutionize analytics while balancing time and energy, to apply generalized human-like understanding to domain-specific problems, and to minimize the technical and human-factors gaps in taking high-confidence actions based on model output. The conference is organized as five plenary sessions:
- Human Factors: Truth, Attribution and Applying Domain Specific Knowledge
- Applications: Workflows and Use Cases
- Software: Approaches to Model Augmentation and Domain-Specific Applications
- Hardware: Energy, Architecture and Approaches to Acceleration
- Policy and Psychology: Societal Impact, Human Interactions and Decision Making
Throughout the conference we will integrate AI-produced questions, answers, and positions as demonstrations of “what the AI thinks is important” versus what the human organizers believe is essential.
Organizing Committee:
Jim Ang, Pacific Northwest National Laboratory
Almadena Chtchelkanova, National Science Foundation
John Feo, Pacific Northwest National Laboratory
David Haglin, Trovares, Inc.
Laura Monroe, Los Alamos National Laboratory
Richard Murphy, Gem State Informatics, Inc.
Ron Oldfield, Sandia National Laboratories
Steve Pritchard, Committee Advisor
Tyler Simon, Department of Defense
Shannon Zelitch, Department of Defense
Brad Spiers, Committee Advisor
Agenda -- All times EST
Monday, Nov 4th, 2024
7:00 -- 8:30 pm | Welcome Reception and Registration (Senate A/B)
Tuesday, Nov 5th, 2024 (Election Day -- Vote Early)
7:00 -- 8:00 am | Breakfast and Registration (Capitol ABC)
8:00 -- 8:30 am | Welcome; George Cotter Award
8:30 -- 9:15 am | Keynote: Ensuring Trustworthy AI for Intelligence Analysis | Jeff Kubina, Department of Defense

Session 1. Hardware: Energy, Architecture and Approaches to Acceleration (Chair, Tyler Simon)
9:15 -- 10:00 am | Benchmarking of Generative AI Applications on the Intel Gaudi2 Architecture and Examples of Multimodal AI for Real World Applications | Luca Longoni, FedData Technology Solutions
10:00 -- 10:15 am | Break
10:15 -- 11:00 am | E2EdgeAI: Energy Efficient Edge AI for On-device Deployment | Tinoosh Mohsenin, Johns Hopkins University
11:00 -- 11:45 am | Hunting the Needle: The Performance/Energy Advantage of a Novel Architecture for a Complex Graph Analytic | Peter Kogge, University of Notre Dame
11:45 -- 12:30 pm | Panel Discussion
12:30 -- 1:45 pm | Lunch (Capitol ABC)

Session 2. Applications: Workflows and Use Cases (David Haglin, Moderator)
1:45 -- 2:30 pm | High-Security LLM Deployment: Lessons from Industry and Projections for Tomorrow | Sharon Zhou, Lamini
2:30 -- 3:15 pm | Joining Persistent Homology with Language Models to Scale Energy Microgrids with Resilience and Efficiency | Steve Reinhardt, Transform Computing, Inc.
3:15 -- 3:30 pm | Break
3:30 -- 4:15 pm | Accelerating Environmental Reviews with Automated Knowledge Synthesis Agents | Sameera Horawalavithana, Pacific Northwest National Laboratory
4:15 -- 5:00 pm | Panel Discussion
Dinner (Capitol ABC)
6:00 -- 7:30 pm | Responsible AI and LLMs at Work | Jillian Powers, Slalom
Wednesday, November 6th, 2024
7:00 -- 8:30 am | Breakfast and Registration (Capitol ABC)
8:30 -- 9:15 am | Keynote: What does Generative AI Mean for the Hardware that has to Ultimately Run It? | Naveen Verma, Princeton University

Session 3. Policy and Psychology: Societal Impact, Human Interactions and Decision Making (Almadena Chtchelkanova, Moderator)
9:15 -- 10:00 am | Human-in-the-Loop: Mayo Clinic's Progress Towards Responsible Use of AI in Healthcare | David Holmes, Mayo Clinic
10:00 -- 10:15 am | Break
10:15 -- 11:00 am | Advancing AI Model Trust and Security with AIBOMs | John Cavanaugh, Internet Infrastructure Services Corp.
11:00 -- 11:45 am | Separating the Truth from the Myth: Controlling Language Models by Controlling Training Data | Stella Biderman, Booz Allen Hamilton
11:45 -- 12:30 pm | Panel Discussion
12:30 -- 5:00 pm | Free Time; Lunch and Dinner (on your own)
5:00 -- 6:30 pm | Random Access (Steve Pritchard and Ron Oldfield, Moderators) | Sign-up sheet will be at the registration table. Talks are limited to 8 minutes.
6:30 -- 8:30 pm | Poster Reception (Students)
- Leveraging Large Language Model Agents to Mimic Human Multi-Step Decision-Making in Environmental Reviews | Sai Koneru, Pennsylvania State University
- AI Time Series Training with Synthetic Data | Kendrick Hood, Kent State University
- Can Smaller Expert Modules Enhance RAG Performance? | Alexander Nemecek, Case Western Reserve University
- Offloading "Everything" to DPUs: Why, How, and Potential Benefits? | Michael Beebe, Texas Tech University; Benjamin Michalowicz, Ohio State University
Thursday, November 7th, 2024
7:00 -- 8:30 am | Breakfast and Registration (Capitol ABC)
8:30 -- 9:15 am | Keynote

Session 4. Software: Approaches to Model Augmentation and Domain-Specific Applications
9:15 -- 10:00 am | Using Open Source and Proprietary Models for Parallel Code Generation | Siva Rajamanickam, Sandia National Laboratories
10:00 -- 10:15 am | Break
10:15 -- 11:00 am | AI for Hardware Synthesis and Hardware Synthesis for AI | Antonino Tumeo, Pacific Northwest National Laboratory
11:00 -- 11:45 am | Accountability in the Age of LLMs | Charlie Burgoyne, Valkyrie
11:45 -- 12:30 pm | Panel Discussion
12:30 -- 1:45 pm | Lunch (Capitol ABC)

Session 5. Human Factors: Truth, Attribution and Applying Domain Specific Knowledge (Brad Spiers, Moderator)
1:45 -- 2:30 pm | Generative AI and the Social Sciences: An Experiment in Researching “What was Google Glass?” | Anne Fitzpatrick, Virginia Polytechnic Institute
2:30 -- 3:15 pm | Unleashing HPC Potential with Generative AI | David Haglin, Trovares
3:15 -- 3:30 pm | Break
3:30 -- 4:15 pm | Revolutionizing Knowledge Management with Project IRIS | Chris Garasi, Sandia National Laboratories
4:15 -- 5:00 pm | Panel Discussion
5:00 -- 5:15 pm | Closing Remarks and Adjourn
5:15 -- 6:30 pm | Closing Reception (Senate A/B)
2024 Sponsors