IAP-25-062
From Sound to Structure: Explainable AI for Multimodal Ecosystem Health Monitoring
Monitoring ecosystem health and biodiversity is essential for understanding environmental change and evaluating the effectiveness of Nature-based Solutions (NbS). However, conventional methods, such as satellite imagery and field surveys, face persistent limitations: atmospheric interference (e.g. cloud cover), coarse temporal resolution, and difficulty in capturing biodiversity dynamics at scale.
This project introduces a novel multimodal framework that integrates environmental sound with satellite Earth Observation to monitor ecosystem condition and evaluate biodiversity and ecological recovery. By coupling soundscape AI models with remote sensing indicators (e.g. NDVI, NDWI, land cover), it will develop an Acoustic-Remote Sensing Index that captures both functional (sound-based) and structural (remote sensing) dimensions of ecosystem health. This fusion enables scalable, continuous, and ecologically meaningful monitoring of biodiversity and environmental change, particularly in NbS sites where recovery trajectories are critical.
To ensure transparency and deployment realism, the project will develop explainable AI models based on the Tsetlin Machine, a logic-based learning algorithm that produces human-readable decision rules with minimal computational overhead. These models will be embedded in IoT devices powered by energy harvesters (e.g. solar, thermoelectric, microbial), enabling autonomous, long-term monitoring in remote or data-sparse environments.
The study advances digital environmental science by linking complementary data modalities and promoting explainable, low-power AI for real-world ecological applications. It contributes directly to NERC's Digital Environment and Resilient Environment priorities and aligns with IAPETUS themes of biodiversity and ecosystem resources, hazards and resilience, and carbon and nutrient cycling. The project offers methodological innovation, strategic relevance, and a scalable pathway for future environmental monitoring and NbS evaluation.
Methodology
This project combines eco-acoustic analysis, explainable AI modelling, low-power hardware deployment, and multimodal data fusion to develop a scalable framework for ecosystem health monitoring and biodiversity evaluation.
1. Eco-acoustic Feature Engineering
– Extract ecologically relevant features from soundscape recordings, including biophonic, geophonic, and anthropogenic components (see the feature sketch after this list).
– Annotate events linked to ecosystem condition and recovery (e.g. post-fire regeneration, seasonal transitions) to support supervised learning and index calibration.
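To make the feature step concrete, the sketch below computes the Normalized Difference Soundscape Index (NDSI), a standard eco-acoustic summary that contrasts biophony with anthropophony. It is a minimal illustration assuming librosa and numpy; the band limits and the input file name are placeholder assumptions rather than project decisions.

```python
# Minimal eco-acoustic feature sketch (illustrative, not the project pipeline).
# Band limits follow the common NDSI convention (anthropophony ~1-2 kHz,
# biophony ~2-8 kHz); both are assumptions here, as is the input file name.
import numpy as np
import librosa

def ndsi(path, anthro=(1000, 2000), bio=(2000, 8000)):
    """Normalized Difference Soundscape Index: (biophony - anthropophony) / sum."""
    y, sr = librosa.load(path, sr=None, mono=True)
    spec = np.abs(librosa.stft(y)) ** 2            # power spectrogram
    freqs = librosa.fft_frequencies(sr=sr)         # bin centre frequencies

    def band_energy(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return spec[mask].sum()

    a, b = band_energy(*anthro), band_energy(*bio)
    return (b - a) / (a + b + 1e-12)               # epsilon guards silent clips

print(ndsi("recording.wav"))  # hypothetical input; returns a value in [-1, 1]
```

Richer features (acoustic complexity, entropy, per-band statistics over time) follow the same pattern of band- and time-resolved summaries over the spectrogram.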
2. Multimodal Data Fusion and Index Development
– Integrate acoustic data with Earth Observation datasets (e.g. Sentinel-2 MSI, Sentinel-1 SAR) via platforms such as Google Earth Engine.
– Combine soundscape features with remote sensing indicators (e.g. NDVI, NDWI, land cover) to contextualise acoustic signals and improve generalisation across ecosystems.
– Develop a multimodal Acoustic-Remote Sensing Index to evaluate biodiversity and ecosystem condition in NbS sites (a fusion sketch follows this list).
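A minimal fusion sketch, assuming authenticated access to the Earth Engine Python API: it derives NDVI and NDWI for a hypothetical site and folds a placeholder acoustic score into a simple weighted average. The site geometry, date window, and equal weighting are illustrative assumptions; the actual index formulation is a research output of the project.

```python
# Sketch: pull remote sensing indicators with the Earth Engine Python API and
# fuse them with an acoustic score. Site, dates, and weights are assumptions.
import ee

ee.Initialize()  # assumes an authenticated Earth Engine session

site = ee.Geometry.Point([-1.6, 54.9]).buffer(500)  # hypothetical NbS site
s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
        .filterBounds(site)
        .filterDate("2024-06-01", "2024-09-01")
        .median())

ndvi = s2.normalizedDifference(["B8", "B4"]).rename("NDVI")  # greenness
ndwi = s2.normalizedDifference(["B3", "B8"]).rename("NDWI")  # surface water

stats = ee.Image.cat([ndvi, ndwi]).reduceRegion(
    reducer=ee.Reducer.mean(), geometry=site, scale=10).getInfo()

acoustic_score = 0.7  # placeholder: e.g. a normalised NDSI from the audio pipeline
arsi = 0.5 * acoustic_score + 0.5 * stats["NDVI"]  # toy weighted fusion
print(f"toy Acoustic-Remote Sensing Index: {arsi:.2f}")
```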
3. Explainable AI modelling with Tsetlin Machines
– Train Tsetlin Machine classifiers to distinguish ecological states from acoustic and fused features, enabling transparent decision-making at low computational overhead (see the training sketch after this list).
– Benchmark against black-box models (e.g. CNNs) to evaluate trade-offs in accuracy, explainability, and energy efficiency.
– Use rule-based outputs to support index interpretation and stakeholder trust in AI-driven monitoring.
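The sketch below trains a Tsetlin Machine on binarised features, assuming the pyTsetlinMachine package (cair/pyTsetlinMachine) and synthetic data standing in for annotated soundscapes; the clause count and the T and s hyperparameters are illustrative.

```python
# Tsetlin Machine training sketch. TM inputs must be Boolean, so real-valued
# features are binarised at the per-feature median (thermometer encodings with
# several thresholds are common in practice). Data and labels are synthetic.
import numpy as np
from pyTsetlinMachine.tm import MultiClassTsetlinMachine

rng = np.random.default_rng(0)
X_real = rng.random((200, 16))        # 200 clips x 16 acoustic features (fake)
y = rng.integers(0, 2, 200)           # e.g. degraded vs recovering (fake)

X = (X_real > np.median(X_real, axis=0)).astype(np.uint32)  # 1 bit per feature

tm = MultiClassTsetlinMachine(number_of_clauses=100, T=15, s=3.9)
tm.fit(X, y, epochs=50)
print("training accuracy:", (tm.predict(X) == y).mean())
```

Because each learned clause is a conjunction of (possibly negated) feature bits, the model's decisions can be read back as explicit rules, which is the property the CNN benchmarking will weigh against raw accuracy.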
4. Energy-Harvesting IoT Deployment
– Test Tsetlin Machine inference on embedded devices (e.g. STM32, MSP430, Raspberry Pi) to assess performance under constrained energy budgets.
– Evaluate energy-performance trade-offs using duty cycling, adaptive sampling, and inference scheduling strategies (see the budget sketch after this list).
– Select appropriate energy harvesters (solar, thermoelectric, microbial) based on site-specific conditions and device requirements.
– Integrate acoustic sensors and energy harvesters into edge-ready systems capable of autonomous, long-term monitoring in remote or disturbance-prone environments.
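A back-of-envelope sketch of the duty-cycling calculus: with assumed (not measured) power figures, it estimates the largest duty cycle a given harvester can sustain indefinitely. Deployments would substitute bench measurements for each device and harvester pairing.

```python
# Duty-cycle energy budget. All power figures are placeholder assumptions,
# not measurements for any specific MCU, microphone, or harvester.
ACTIVE_MW = 40.0   # MCU + microphone while sampling/inferring, mW (assumed)
SLEEP_MW = 0.05    # deep-sleep draw, mW (assumed)
HARVEST_MW = 5.0   # average harvester output, mW (assumed, e.g. small solar)

def avg_power_mw(duty_cycle):
    """Average draw when active for the given fraction of time."""
    return duty_cycle * ACTIVE_MW + (1 - duty_cycle) * SLEEP_MW

# Largest duty cycle the harvester can sustain indefinitely:
max_duty = (HARVEST_MW - SLEEP_MW) / (ACTIVE_MW - SLEEP_MW)
print(f"sustainable duty cycle: {max_duty:.1%}")                 # ~12% here
print(f"average draw at 10% duty: {avg_power_mw(0.10):.2f} mW")  # about 4 mW
```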
5. Impact Pathway and Open Dissemination
– Collaborate with stakeholders via supervisory networks to validate index design and deployment priorities.
– Share annotated datasets, explainable models, and hardware integration insights via open platforms (e.g. GitHub, Zenodo) to support reproducibility and future uptake.
– Contribute to open science and policy relevance by documenting methods and findings in accessible formats for academic and practitioner audiences.
Project Timeline
Year 1
Foundations and Feasibility
– Conduct a literature review on eco-acoustics, remote sensing indices, ecosystem resilience metrics, and explainable AI.
– Explore public eco-acoustic datasets (e.g. Open Ecoacoustics, ecoSound-Web, Australian Acoustic Observatory) and assess suitability for training and benchmarking.
– Explore relevant Earth Observation datasets (e.g. Sentinel-2 MSI, Sentinel-1 SAR) via Google Earth Engine.
– Develop initial acoustic feature extraction pipeline and test basic classification tasks.
– Evaluate the suitability of Tsetlin Machine models for environmental sound classification.
– Survey energy harvester technologies and assess feasibility across target ecosystem types.
– Publication target: Conceptual/methods paper on explainable AI for ecosystem monitoring
Year 2
Conceptual Fusion and Explainable AI Modelling
– Begin conceptual design of the Acoustic-Remote Sensing Index by aligning acoustic features with remote sensing indicators.
– Train and validate Tsetlin Machine models on annotated soundscapes.
– Benchmark model performance against black-box alternatives (e.g. CNNs) for explainability and energy efficiency.
– Test Tsetlin Machine inference on embedded devices (e.g. STM32, MSP430, Raspberry Pi).
– Evaluate energy-performance trade-offs using duty cycling, adaptive sampling, and inference scheduling.
– Integrate embedded devices with acoustic sensors and selected energy harvesters.
– Publication target: Technical paper on acoustic classification and embedded inference
Year 3
Operational Fusion and Ecosystem Generalisation
– Integrate satellite and climate data to contextualise acoustic signals and operationalise the Acoustic-Remote Sensing Index.
– Refine models to account for seasonal drift and improve generalisation across sites.
– Expand deployments to additional ecosystems with contrasting disturbance regimes (e.g. wetland, urban fringe).
– Conduct stakeholder-informed validation (e.g. interviews, workshops via supervisory networks).
– Plan and begin writing the thesis.
– Publication target: Applied paper on multimodal index and generalisation
Year 3.5
Thesis Writing and Final Dissemination
– Focus on thesis writing.
– Release open-source tools, annotated datasets, and hardware integration insights via GitHub and Zenodo.
– Submit the thesis and prepare for the viva.
Training & Skills
Technical
– Machine learning, signal processing, and explainable AI
– Embedded systems, IoT integration, and low-power optimisation
Environmental Science
– Ecosystem resilience, biodiversity metrics, and acoustic ecology
– GIS and remote sensing for environmental monitoring
Data Management
– Handling multimodal datasets
– Metadata documentation, version control, and reproducible workflows
– Open science practices and ethical data stewardship
Professional Development
– Interdisciplinary collaboration and stakeholder engagement
– Scientific writing, open science, and public communication
References & further reading
Alamu, O., Olwal, T. O., & Migabo, E. M. (2025). Machine learning applications in energy harvesting internet of things networks: A review. IEEE Access, 13, 4235-4266.
Gairí, P., Pallejà, T., & Tresanchez, M. (2025). Environmental sound recognition on embedded devices using deep learning: a review. Artificial Intelligence Review, 58(6), 163.
Granmo, O. C. (2018). The Tsetlin Machine–A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic. arXiv preprint arXiv:1804.01508.
Granmo, O. C. (2021). An Introduction to Tsetlin Machines. Agder, Norway.
Mu, W., Yin, B., Huang, X., Xu, J., & Du, Z. (2021). Environmental sound classification using temporal-frequency attention based convolutional neural network. Scientific Reports, 11(1), 21552.
Sánchez-Giraldo, C., Ayram, C. C., & Daza, J. M. (2021). Environmental sound as a mirror of landscape ecological integrity in monitoring programs. Perspectives in Ecology and Conservation, 19(3), 319-328.
Sharma, S., Sato, K., & Gautam, B. P. (2023). A methodological literature review of acoustic wildlife monitoring using artificial intelligence tools and techniques. Sustainability, 15(9), 7128.
Sueur, J., & Farina, A. (2015). Ecoacoustics: the ecological investigation and interpretation of environmental sound. Biosemiotics, 8(3), 493-502.
Towsey, M., et al. (2018). The Australian Acoustic Observatory: Real-time ecoacoustic monitoring. Methods in Ecology and Evolution.
Open Ecoacoustics, https://openecoacoustics.org/
Google Earth Engine (GEE), https://earthengine.google.com/
NASA Earthdata Search / LP DAAC, https://search.earthdata.nasa.gov/search
Sentinel-2 MSI (10–20 m), https://sentiwiki.copernicus.eu/web/sentinel-2
Sentinel-1 SAR (10 m), https://sentiwiki.copernicus.eu/web/sentinel-1
