
AI4LCA Workshop 2026: Deriving Actions for Decarbonization from Verifiable LCAs

Reception at Climate Pledge Arena

Disclaimer: The following summary was generated using AI tools (e.g., ChatGPT, Google Gemini) and revised by the workshop organizers.

TL;DR

The field of AI-assisted Life Cycle Assessment (LCA) has shifted the core challenge from automation to verification.

The consensus across the workshop and practitioner survey is that ensuring credibility and quality in AI-assisted LCAs requires a focus on:

Day 1 Highlights

Dr. Sangwon Suh’s Keynote Speech

Dr. Suh positioned AI and LCA at an inflection point: the central challenge has shifted from whether to use AI to how to evaluate its outputs responsibly. Citing the Tiangong Initiative’s large-scale automated LCI dataset generation, he argued that AI fundamentally transforms LCA infrastructure from static to dynamic, while raising critical questions of quality and trust.

He described the field as transitioning from speed-focused automation to precision engineering. Despite technical improvements through prompt engineering, RAG, fine-tuning, and multi-agent workflows, the community faces a “ground truth paradox”: even human experts show ±20% or greater variability, driven by methodological choices rather than arithmetic errors. This means AI cannot resolve disagreements the field has not settled over decades.

Dr. Suh proposed reframing evaluation objectives away from forcing convergence on a single answer and toward defining the boundaries of plausible methodological choices across three domains: procedural quality (ISO compliance), empirical validity (physical plausibility), and technical correctness (proper tool usage). These can be operationalized through synthetic Q&A datasets derived from ISO standards, property-based validation checks, machine-readable goal and scope definitions, structured audit trails, and human oversight focused strategically on high-impact decisions.

He concluded that the historical ISO emphasis on consistency over accuracy can now be revisited, given modern computational capacity that enables embedded uncertainty analysis and scenario distributions. However, he cautioned that as LCA scales through AI, evaluation methods must evolve beyond pen-and-paper reviews to leverage computational resources. The ultimate aim is not just faster LCAs but credible, defensible measurements that support real decarbonization decisions.
Link to Dr. Suh’s slides
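As one way to make the keynote’s “property-based validation checks” concrete, the sketch below shows what a physical-plausibility check on a unit process might look like. This is a minimal illustration, not a workshop artifact: the function names, field names, and the 5% tolerance are all assumptions.

```python
# Hypothetical property-based checks for LCI data: instead of comparing a
# result to one "correct" answer, we assert that it stays within bounds of
# physical plausibility. Names and thresholds are illustrative only.

def check_mass_balance(inputs_kg, outputs_kg, tolerance=0.05):
    """Inputs and outputs of a unit process should balance within tolerance."""
    total_in, total_out = sum(inputs_kg), sum(outputs_kg)
    return abs(total_in - total_out) <= tolerance * max(total_in, total_out)

def check_nonnegative_flows(flows):
    """Physical flow quantities cannot be negative."""
    return all(q >= 0 for q in flows)

def validate_unit_process(process):
    """Run all plausibility checks; return the names of failed checks."""
    failures = []
    if not check_mass_balance(process["inputs_kg"], process["outputs_kg"]):
        failures.append("mass_balance")
    if not check_nonnegative_flows(process["inputs_kg"] + process["outputs_kg"]):
        failures.append("nonnegative_flows")
    return failures
```

Such checks target the “empirical validity” domain: they cannot say an LCA is right, only that it is not physically implausible, which is exactly the kind of bounded evaluation the keynote advocated.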

Key Takeaways from Panel Discussion

Key Takeaways from Open Discussion

Day 2 Highlights

Research Presentations

The three presentations showcased significant potential for scaling LCA generation and validation through AI and agent-based systems. Below are insights summarized by the notetaker (i.e., not the viewpoints of the presenters):
Potential for Scaling LCA Generation

Potential for Scaling LCA Validation

Key Takeaways from Panel Discussion

Framing the Baseline & Validation Categories

Q: What are two key aspects of modeling assumptions to verify in an LCA report?
A: The functional unit (including technical performance and reference flows) and recycled content or allocation decisions (subdivision) are critical checks. Inventory data and ISO standards conformance are also key.

Q: How do examples of verification fall into the three validation categories (Procedural Conformity, Empirical Validity, Technical Correctness)?
A: The functional unit (FU) check was noted to potentially overlap all three categories. Inventory data is primarily Empirical but can overlap with Technical Correctness. Bill of Materials (BoM) completeness is a Procedural Conformity check.

Q: What is the single largest source of ambiguity when validating empirically?
A: Ambiguity about what the product is (e.g., defining a heat pump, or the input granularity of the BoM), data quality (e.g., poor geographic representation of a unit process flow), and uncertainty in key parameters (hotspots).

Ground Truth Data & AI’s Role

Q: How do you define what should be considered “ground truth” data?
A: Ground truth must be transparent (with a trace of calculations), verifiable, representative, and consensus-based. It should also fall within a credible range of sources and align with technical standards and procedural conformity.

Q: What are the key features of an automated ground truth data generation pipeline?
A: AI should be used to point out where gathering primary data is most valuable and to gather a wide range of data. It can also run statistical analysis at scale to flag data quality issues.

Q: What is the human’s role in automated ground truth collection?
A: The human must be a domain expert who builds guardrails, owns the product carbon footprint (i.e., takes responsibility), and spot-checks AI-escalated red flags (not every single assumption).

Q: Under what circumstances will you trust the validation result from an AI workflow?
A: Trust rests on an auditable trace of reasoning and complete transparency in the approach and data. The system needs a rigorous methodology, with all assumptions and emission factors tied to a ground truth database.

Q: Who should own, maintain, and govern the ground truth database?
A: The consensus was that it should not sit in the private domain. A non-profit structure, learning from other domains such as OpenStreetMap, was suggested.
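The panel’s answer on trust hinges on an auditable trace of reasoning. A minimal sketch of what one entry in such a trace could look like is shown below; the schema, field names, and escalation labels are assumptions for illustration, not a format the workshop agreed on.

```python
# Illustrative audit-trail record for an AI-assisted LCA validation step.
# All field names and status values are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One step in a validation trace, tied back to a ground truth database."""
    step: str                     # e.g. "functional_unit_check"
    assumption: str               # the modeling assumption being examined
    emission_factor_source: str   # reference into the ground truth database
    result: str                   # "pass", "flag", or "escalate_to_human"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def needs_human_review(trace):
    """Humans spot-check only escalated red flags, not every assumption."""
    return [entry for entry in trace if entry.result == "escalate_to_human"]
```

The design choice follows the panel’s point directly: every assumption and emission factor carries a pointer into the ground truth database, and the human reviewer’s queue is the filtered list of escalations rather than the full trace.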

Ground Truth Data Creation

The session was structured around a sequence of concrete steps designed to build consensus on the criteria and process for generating a high-quality ground truth Question-Answer (QA) dataset for AI-assisted LCA validation.

Defining Core Principles (“Tenets”):

Developing an Evaluation Rubric: The group then moved to defining an evaluation rubric for data quality. This resulted in distinct sets of metrics for Question Quality and Answer Quality.

Applying the Rubric to Generate New Data: After establishing the quality criteria, attendees organized into breakout groups to create new ground truth QA data in specific areas of LCA methodology, such as:

Collective Data Input and Evaluation:

We wrapped up the workshop with a brainstorming session on “What’s next?” (see survey results).

Call for Action

Illustration of a synthetic QA groundtruth data generation workflow

For more details, please read the full summary here


[Archived] Overview

In an era where artificial intelligence (AI) is increasingly integrated into life cycle assessment (LCA), verifying the integrity and trustworthiness of LCA results has become paramount. This verifiability is the foundation for deriving actionable solutions from LCA to decarbonize business operations, product manufacturing, and supply chain networks through data-driven prioritization and targeted interventions.

We welcome participants from academia, LCA/sustainability consulting firms, AI solution providers, industrial associations, and many other groups in the broader LCA/sustainability modeling community to address this critical challenge. Together, we will explore standardized AI solutions that will transform how we validate LCAs and derive actionable insights, making the process faster, more consistent, and significantly more scalable.

This workshop’s conceptual focus examines how verifiability serves as a cornerstone for credible LCA studies and how aligning AI-driven analyses with established standards (such as ISO 14040/44) can build stakeholder trust. Attendees will conduct focused discussions on the best practices in validation and transparency, as well as the definition of actionability in the context of LCA/carbon accounting.

Key outcomes of the workshop include (1) a template that captures core reasoning steps in LCA model construction and review (e.g., boundary setting, dataset selection, cutoff rules), and (2) a small “golden dataset” (i.e., a curated set of LCA modeling questions with expert-annotated answers) for LCA validation.
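To make the “golden dataset” outcome concrete, one possible shape for a single entry is sketched below. This is purely illustrative: the field names, categories, and example answer are assumptions, not the template the workshop finalized.

```python
# Hypothetical golden-dataset entry for LCA validation.
# Field names and category values are illustrative only.
import json

entry = {
    "question": ("Which allocation approach applies to recycled aluminum "
                 "content under a cut-off system model?"),
    "expert_answer": ("Under the cut-off approach, recycled content enters "
                      "the system burden-free; recycling burdens are "
                      "assigned to the life cycle that generated the scrap."),
    "category": "allocation",  # e.g. boundary_setting, dataset_selection, cutoff_rules
    "validation_type": "procedural_conformity",  # or empirical_validity, technical_correctness
    "references": ["ISO 14044:2006"],
    "annotator_consensus": True,
}

# Entries serialize cleanly to JSON for sharing and automated evaluation.
print(json.dumps(entry, indent=2))
```

Tagging each question with one of the three validation categories from Day 1 (procedural conformity, empirical validity, technical correctness) would let the dataset double as a benchmark broken down by evaluation domain.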

Registration

This in-person-only workshop is free to attend; there will be no virtual attendance option. Attendees need to cover their own travel expenses.

Registration Closed

Date and Location

Date: February 2-3, 2026 (Monday & Tuesday)
Location: 05.301 in Amazon DAY 1 building (Room 301 on 5th floor at 2121 7th Ave, Seattle, WA 98121)

Agenda

Keynote Speaker

Dr. Sangwon Suh (Tsinghua University): From theoretical scale to precision engineering in sustainability data

Research Presentations

Panelists

Focused Discussions

[Discussion #1] Evaluating AI-Assisted LCA & PCF: procedural aspects and standard conformity

[Discussion #2] Define validation dataset for common LCA modeling assumptions

Day 1 Schedule

Time Session
08:30-09:30 Breakfast & Networking
09:30-09:50 Welcome and Opening Remarks
09:50-10:20 Keynote
10:20-11:00 Coffee Break
11:00-11:45 Panel Discussion #1 (Moderator: Dr. Sangwon Suh, Tsinghua University)
11:45-13:15 Lunch
13:15-16:15 Focused Group Discussion #1: Solution Development
16:15-16:45 Coffee Break
16:45-17:30 Focused Group Discussion #1 Results Sharing and Improvement
18:00-20:30 Private welcome reception at Climate Pledge Arena (RSVP preferred by Friday, January 30, 2026)

Day 2 Schedule

Time Session
08:30-09:30 Breakfast & Networking
09:30-10:30 Research presentations
10:30-11:00 Coffee Break
11:00-11:45 Panel Discussion #2 (Moderator: Dr. Qingshi Tu, University of British Columbia & Amazon)
11:45-13:00 Lunch
13:00-16:00 Focused Group Discussion #2: Solution Development
16:00-16:15 Coffee Break
16:15-17:00 Focused Group Discussion #2 Results Sharing and Improvement
17:00-17:15 Closing Remarks