In this series, I explore the world of environmental indicators. In the journey from a polluting, carbon-intensive world to one with reduced emissions and protected nature, indicators point us to how progress is measured, what is happening, the direction of change, and whether policies are working.
Before discussing the wide world of environmental indicators, we must first define what an indicator is. Here, I draw on widely used terminology and seminal literature to clarify the concepts.
A weight-loss journey is exactly that: loss of weight. Here, the unit of measure is mass (or weight), expressed in kilograms, pounds, or stones. We take the metric "mass in kg" and track it from a starting point to a predicted future date, say one year out (from year N to year N + 1), from a starting weight W to a target weight W − 10. We also need a purpose and a boundary: what decision does this measurement inform, and what exactly is it tracking? As the decision maker, you set the parameters by which you follow this change over time. You decide whether to stop when you reach your goal or to keep tracking for the full year, make adjustments along the way, and so on.
Now, when the journey changes from the weight-loss example to "reducing global carbon emissions", the same building blocks apply, but the system is far more complex. In this article, I break down the basics of indicator science.
1. Conceptual Foundations: Unit, Metric, Indicator, KPI
Unit – The measurement scale (e.g., tonnes, hectares, %, NOK)
Metric – A quantified measurement using a unit (e.g., tonnes of CO₂ emitted)
Indicator – A metric (or combination of metrics) interpreted within a defined purpose and boundary to inform a decision
KPI (Key Performance Indicator) – A strategically selected indicator tied explicitly to a goal or target and used to track performance
An indicator is not just a number. It is a decision tool.
A metric can be purely descriptive without a defined purpose and boundary. It becomes an indicator when it answers questions relevant to governance, strategy or policy. Questions to ask of a metric:
What decision is this informing?
What boundaries are assumed?
What does it exclude?
What progress does it measure?
When a metric moves from being a single number (e.g. kilograms, or tonnes of CO₂) to representing change over time (tonnes of CO₂ emitted from 2020 to 2024), it starts to function as an indicator.
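The metric-to-indicator step above can be sketched in a few lines of code. This is a minimal illustration, and all emission figures below are hypothetical, invented purely for the example:

```python
# Minimal sketch: a metric becomes an indicator when it is read
# against a purpose and boundary. All figures are hypothetical.

# Annual emissions, tonnes of CO2 (hypothetical company, 2020-2024)
emissions_t = {2020: 1_200, 2021: 1_150, 2022: 1_080, 2023: 1_010, 2024: 950}

# Metric: a single quantified measurement using a unit.
metric_2024 = emissions_t[2024]  # tonnes of CO2 in 2024

# Indicator: the same numbers interpreted within a boundary
# (company-wide, 2020-2024) to inform a decision about progress.
change = emissions_t[2024] - emissions_t[2020]   # absolute change, tonnes
pct_change = 100 * change / emissions_t[2020]    # relative change, %

print(f"Change 2020-2024: {change} t ({pct_change:.1f} %)")
```

The dictionary of raw values is just data; it is the interpretation step (change over a defined period, within a defined boundary, towards a goal) that turns it into an indicator.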
2. The purpose and boundary of an indicator
Indicators exist to simplify complexity. They distil layers of environmental complexity into small, easy-to-comprehend ideas. But when complexity is stripped away, nuance is lost, and the indicator risks becoming oversimplified. To prevent this, three conditions must hold:
The purpose of measurement is defined
The system boundary is explicit
The relationship to decision-making is clear and well defined
Within corporate environments, sustainability indicators such as carbon emissions are linked to strategies and KPIs. They are also discussed in relation to economic and profitability outcomes, in terms of efficiency or risk management. ESG reporting must demonstrate value: reduced carbon emissions, improved water quality, restored habitat, reduced exposure to human rights violations, and so on. These show movement towards targets, and that movement depends on how resources are allocated.
More broadly, indicators support
Policy development
Identification of environmental drivers and pressures (e.g. increased water stress in a region with drought)
Monitoring of policy efficacy
Public health and awareness
Indicators are thus governance instruments. They structure attention and shape action.
3. The DPSIR Framework – another framework?
A common conceptual structure for environmental indicators, used by the European Environment Agency (EEA), is the DPSIR framework. See Fig. 1.
D – Drivers (economic sectors, demographic changes)
P – Pressures (emissions, land-use change, resource extraction)
S – State (environmental condition)
I – Impacts (effects on ecosystems or human well-being)
R – Responses (policy, mitigation, adaptation measures)
DPSIR organizes indicators along causal chains. It links human activity to environmental outcomes and policy responses.
Importantly, the DPSIR framework is not merely classificatory. It encourages causal thinking:

Drivers create pressures
Pressures alter state
State changes generate impacts
Impacts trigger societal responses
This causal logic is crucial and is what drives indicator selection.
4. Types of Indicators
Indicators can serve different functions:
Descriptive (Type A) – Describe drivers, pressures, states, impacts, responses.
Performance (Type B) – Measure progress toward a defined target.
Efficiency (Type C) – Measure output relative to input.
Welfare (Type D) – Reflect broader societal well-being.
Each type answers different governance questions.
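As a concrete illustration of a Type C (efficiency) indicator, output relative to input can be computed as an intensity ratio. The figures below are hypothetical, chosen only to show the arithmetic:

```python
# Sketch of an efficiency (Type C) indicator: environmental burden
# relative to economic output. All figures are hypothetical.

revenue_mnok = 480.0   # economic output: revenue, million NOK
emissions_t = 1_200.0  # environmental burden: tonnes of CO2

# Carbon intensity: tonnes of CO2 per million NOK of revenue.
intensity = emissions_t / revenue_mnok

print(f"Carbon intensity: {intensity:.2f} t CO2 / MNOK")
```

An intensity like this answers a different governance question than an absolute (descriptive) emissions figure: it can fall while total emissions rise, which is why the type of indicator must match the question being asked.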
5. Indicator choice depends on Purpose
If the objective is:
To understand how serious a problem is → use State or Impact indicators (State: % of natural habitat in "good ecological condition"; Impact: number of threatened species affected by operations)
To understand how to control or influence a situation → use Pressure or Response indicators (Pressure: area of land converted per year, volume of groundwater extracted; Response: area of habitat restored, volume of water recycled)
To track structural economic trends → Use Driver indicators
At national or global scales, there is a tendency to rely on driver or pressure indicators due to data availability. However, this may obscure ecological condition (state) or consequences (impact). For instance, a reduction in fertilizer use does not automatically mean improved river health.
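The purpose-to-category mapping described in this section can be written down as a simple lookup. The purpose labels below are my own shorthand for the three objectives in the text:

```python
# Sketch: mapping a decision purpose to the DPSIR indicator
# categories suggested in the text. Purpose labels are shorthand.

PURPOSE_TO_CATEGORY = {
    "assess how serious a problem is":    ["State", "Impact"],
    "control or influence a situation":   ["Pressure", "Response"],
    "track structural economic trends":   ["Driver"],
}

for purpose, categories in PURPOSE_TO_CATEGORY.items():
    print(f"{purpose}: {', '.join(categories)} indicators")
```

Making the mapping explicit like this is one way to document why a particular constellation of indicators was chosen, which is exactly the gap in reporting practice discussed in the next section.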
6. Indicator Selection: The Core Problem
The literature (e.g., Niemeijer & de Groot) identifies a major weakness in environmental reporting: insufficient rigor in selecting indicators, and little or no documentation of why other indicators were excluded.
Indicators are often selected based on historical practice, regulatory requirements or expert judgment, rather than to answer a clearly defined environmental question. This makes the selection process less transparent.
Common shortcomings include:
No explanation of why a particular constellation of indicators was chosen.
No documentation of why certain indicators were rejected.
Lack of transparency in methodological reasoning.
Weak articulation of causal linkages.
This is not a minor issue. Indicator selection determines what becomes visible, and therefore governable. Without explicit reasoning, imbalances can arise in what is measured and what is overlooked.
Niemeijer and de Groot illustrate this by comparing two studies – one from OECD and the other from EEA. Despite similarities in mandates, institutional structures and subject matter, the two organisations chose different indicators that measured the same phenomenon: ozone depletion.
Niemeijer and de Groot suggest this could be due to different "frames of reference", but what was lacking in both studies was a clearly documented section articulating the logic behind the indicator selection methodology.
In their paper, the authors call for a more systematic and structured methodology to ensure consistency and repeatability. Their approach, which they refer to as a causal-network methodology, and its implications for choosing environmental indicators will be explored in the next article.
7. Key Takeaways
Units build metrics; metrics become indicators when interpreted within a purpose and boundary; indicators become KPIs when tied to targets. The DPSIR framework organises indicators along causal chains, indicator type must match the governance question, and indicator selection must be documented and transparent. That is the foundation for building serious environmental indicator architecture, whether at policy, national, or corporate level.
References:
European Environment Agency (EEA), Environmental indicators: Typology and overview, Technical report No 25/1999, Copenhagen, 1999.
David Niemeijer, Rudolf S. de Groot, A conceptual framework for selecting environmental indicator sets, Ecological Indicators, Volume 8, Issue 1, 2008, Pages 14-25, ISSN 1470-160X, https://doi.org/10.1016/j.ecolind.2006.11.012
Groundwork is an independent research studio analysing nature- and climate-related risks to economies, organisations and communities.
We combine rigorous analysis, practical tools and cross-sector insight to support decision-making in a rapidly changing world.