Optimizing parameters in a WOFOST crop model using diffWOFOST
This Jupyter notebook demonstrates the optimization of parameters in a
differentiable model using the diffwofost package. The package provides
differentiable implementations of the WOFOST model and its associated
sub-models. As diffwofost is under active development, this notebook focuses on
one model: wofost72_pp.
1. WOFOST72_PP¶
In this section, we demonstrate how to optimize the parameters TDWI (initial total dry weight)
and SPAN (leaf lifespan) in the wofost72_pp model using a differentiable version of WOFOST.
The optimization uses simulated LAI (Leaf Area Index) as the target signal
and the Adam optimizer from torch.optim.
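Before diving into WOFOST specifics, the overall recipe is standard gradient-based fitting: run the differentiable model forward, compare its output to a reference signal with a loss, and let autograd push gradients back into the parameters. A minimal sketch of that loop, with a toy quadratic model standing in for WOFOST (all names here are illustrative, not part of diffwofost):

```python
import torch

# Toy differentiable "model": y = a * x^2, with a as the learnable parameter.
# This stands in for the WOFOST engine; only the optimization pattern matters.
x = torch.linspace(0.0, 1.0, 50)
target = 3.0 * x**2  # reference signal (analogue of the observed LAI)

a = torch.nn.Parameter(torch.tensor(1.0))
optimizer = torch.optim.Adam([a], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.mean(torch.abs(a * x**2 - target))  # MAE, as in this notebook
    loss.backward()
    optimizer.step()

print(a.item())  # converges toward a ≈ 3
```

The notebook below follows exactly this pattern; the only extra machinery is keeping the parameters inside physically valid bounds.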
1.1 Software requirements¶
To run this notebook, we need to install diffwofost, the differentiable
version of the WOFOST models. Since the package is under active development, make
sure you have the latest version of diffwofost installed in your
Python environment. You can install it using pip:
# install diffwofost
!pip install diffwofost
# ---- import libraries ----
import copy
import torch
import numpy
from pathlib import Path
from diffwofost.physical_models.config import Configuration, ComputeConfig
from diffwofost.physical_models.crop.wofost72 import Wofost72
from diffwofost.physical_models.soil.classic_waterbalance import WaterbalancePP
from diffwofost.physical_models.utils import EngineTestHelper
from diffwofost.physical_models.utils import prepare_engine_input
from diffwofost.physical_models.utils import get_test_data
# ---- disable a warning: this will be fixed in the future ----
import warnings
warnings.filterwarnings("ignore", message="To copy construct from a tensor.*")
1.2. Data¶
A test dataset of LAI (Leaf Area Index) will be used to optimize the parameters:
TDWI: initial total dry weight
SPAN: life span of leaves
The data is stored in the PCSE tests folder, and can be downloaded from the PCSE repository.
You can select any of the files related to the potentialproduction_wofost72 model whose name follows the pattern
test_potentialproduction_wofost72_*.yaml. Each file contains different data depending on the location and crop type.
For example, you can download the file "test_potentialproduction_wofost72_03.yaml" as follows:
import urllib.request
filename = "test_potentialproduction_wofost72_03.yaml"
url = f"https://raw.githubusercontent.com/ajwdewit/pcse/refs/heads/master/tests/test_data/{filename}"
urllib.request.urlretrieve(url, filename)
print(f"Downloaded: {filename}")
Downloaded: test_potentialproduction_wofost72_03.yaml
# ---- Check the path to the files that are downloaded as explained above ----
test_data_path = "test_potentialproduction_wofost72_03.yaml"
# ---- Here we read the test data and set some variables ----
test_data = get_test_data(test_data_path)
crop_model_params = [
# Leaf dynamics
"SPAN",
"TDWI",
"TBASE",
"PERDL",
"RGRLAI",
"KDIFTB",
"SLATB",
# Phenology
"TSUMEM",
"TBASEM",
"TEFFMX",
"TSUM1",
"TSUM2",
"DLO",
"DLC",
"DVSI",
"DVSEND",
"DTSMTB",
# Assimilation (KDIFTB already included above)
"AMAXTB",
"EFFTB",
"TMPFTB",
"TMNFTB",
# Respiration
"Q10",
"RMR",
"RML",
"RMS",
"RMO",
"RFSETB",
# Evapotranspiration (KDIFTB already included, IAIRDU already in root)
"CFET",
"DEPNR",
"IAIRDU",
"IOX",
"CRAIRC",
"SM0",
"SMW",
"SMFCF",
# Root dynamics (TDWI already included)
"RDI",
"RRI",
"RDMCR",
"RDMSOL",
"RDRRTB",
# Stem dynamics (TDWI already included)
"RDRSTB",
"SSATB",
# Storage organ dynamics
"SPA",
# Partitioning
"FRTB",
"FLTB",
"FSTB",
"FOTB",
# Wofost72 top-level conversion factors
"CVL",
"CVO",
"CVR",
"CVS",
]
(crop_model_params_provider, weather_data_provider, agro_management_inputs, _) = (
prepare_engine_input(test_data, crop_model_params)
)
expected_results = test_data["ModelResults"]
expected_lai = torch.tensor(
[float(item["LAI"]) for item in expected_results],
dtype=ComputeConfig.get_dtype(),
device=ComputeConfig.get_device(),
) # shape: [time_steps]
# ---- don't change this: in this config file we specify the differentiable version of Wofost72_PP ----
wofost72_config = Configuration(
CROP=Wofost72,
SOIL=WaterbalancePP,
OUTPUT_VARS=["DVS", "LAI", "RD", "TAGP", "TRA", "TWLV", "TWRT", "TWSO", "TWST"],
)
1.3. Helper classes/functions¶
The model parameters must stay within a valid range. To ensure this, we use the
BoundedParameter class, which takes (min, max) bounds and an initial value for each
parameter. You may change these values depending on the crop type and
location, but avoid a very narrow range: the bounding transform scales gradients by
(max - min), so a tight range shrinks them and makes the optimization very slow.
# ---- Adjust the values if needed ----
TDWI_MIN, TDWI_MAX, TDWI_INIT = (0.0, 1.0, 0.40)
SPAN_MIN, SPAN_MAX, SPAN_INIT = (10.0, 60.0, 25.0)
# ---- Helper for bounded parameters ----
class BoundedParameter(torch.nn.Module):
def __init__(self, low, high, init_value):
super().__init__()
self.low = low
self.high = high
# Normalize to [0, 1]
init_norm = (init_value - low) / (high - low)
# Parameter in raw logit space
self.raw = torch.nn.Parameter(
torch.logit(
torch.tensor(
init_norm, dtype=ComputeConfig.get_dtype(), device=ComputeConfig.get_device()
),
eps=1e-6,
)
)
def forward(self):
return self.low + (self.high - self.low) * torch.sigmoid(self.raw)
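As a quick sanity check on the bounding trick, here is a simplified stand-alone version of the class (dropping the ComputeConfig dtype/device handling used above): the sigmoid of the logit reproduces the initial value, and even extreme raw values stay strictly inside the bounds.

```python
import torch

# Simplified stand-alone BoundedParameter (no ComputeConfig dtype/device).
class BoundedParameter(torch.nn.Module):
    def __init__(self, low, high, init_value):
        super().__init__()
        self.low, self.high = low, high
        init_norm = (init_value - low) / (high - low)  # normalize to [0, 1]
        self.raw = torch.nn.Parameter(torch.logit(torch.tensor(init_norm), eps=1e-6))

    def forward(self):
        return self.low + (self.high - self.low) * torch.sigmoid(self.raw)

span = BoundedParameter(10.0, 60.0, init_value=25.0)
print(float(span()))  # recovers the initial value, ~25.0

# Pushing the raw logit to an extreme value still keeps the output in bounds:
with torch.no_grad():
    span.raw.fill_(100.0)
print(float(span()))  # saturates at the upper bound, never exceeds 60.0
```

Because the raw parameter lives in unconstrained logit space, the optimizer never needs to know about the bounds; the sigmoid enforces them implicitly.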
Another helper class is OptDiffWofost72PP, a subclass of torch.nn.Module.
It wraps the EngineTestHelper class to make it easy to run the Wofost72PP model with optimizable parameters.
# ---- Wrap the model with torch.nn.Module----
class OptDiffWofost72PP(torch.nn.Module):
def __init__(self, crop_model_params_provider, weather_data_provider, agro_management_inputs, wofost72_config):
super().__init__()
self.crop_model_params_provider = crop_model_params_provider
self.weather_data_provider = weather_data_provider
self.agro_management_inputs = agro_management_inputs
self.config = wofost72_config
# bounded parameters
self.tdwi = BoundedParameter(TDWI_MIN, TDWI_MAX, init_value=TDWI_INIT)
self.span = BoundedParameter(SPAN_MIN, SPAN_MAX, init_value=SPAN_INIT)
def forward(self):
# currently, copying is needed due to an internal issue in engine
crop_model_params_provider_ = copy.deepcopy(self.crop_model_params_provider)
tdwi_val = self.tdwi()
span_val = self.span()
# pass new value of parameters to the model
crop_model_params_provider_.set_override("TDWI", tdwi_val, check=False)
crop_model_params_provider_.set_override("SPAN", span_val, check=False)
engine = EngineTestHelper(
crop_model_params_provider_,
self.weather_data_provider,
self.agro_management_inputs,
self.config,
)
engine.run_till_terminate()
results = engine.get_output()
return torch.stack([item["LAI"] for item in results])  # shape: [time_steps]
# ---- Create model ----
opt_model = OptDiffWofost72PP(
crop_model_params_provider,
weather_data_provider,
agro_management_inputs,
wofost72_config,
)
# ---- Early stopping ----
best_loss = float("inf")
patience = 10 # Number of steps to wait for improvement
patience_counter = 0
min_delta = 1e-4
# ---- Optimizer ----
optimizer = torch.optim.Adam(opt_model.parameters(), lr=0.02)
# ---- Loss: relative MAE avoids bias from different output magnitudes ----
denom = torch.mean(torch.abs(expected_lai))
# Training loop
for step in range(101):
optimizer.zero_grad()
results = opt_model()
mae = torch.mean(torch.abs(results - expected_lai))
rmae = mae / denom
loss = rmae.sum()
loss.backward()
optimizer.step()
print(f"Step {step}, Loss {loss.item():.4f}, TDWI {opt_model.tdwi().item():.4f}, SPAN {opt_model.span().item():.4f}")
# Early stopping logic
if loss.item() < best_loss - min_delta:
best_loss = loss.item()
patience_counter = 0
else:
patience_counter += 1
if patience_counter >= patience:
print(f"Early stopping at step {step}")
break
Step 0, Loss 0.3535, TDWI 0.4048, SPAN 25.2108
Step 1, Loss 0.3452, TDWI 0.4096, SPAN 25.4234
Step 2, Loss 0.3385, TDWI 0.4145, SPAN 25.6378
Step 3, Loss 0.3293, TDWI 0.4193, SPAN 25.8541
Step 4, Loss 0.3204, TDWI 0.4242, SPAN 26.0721
Step 5, Loss 0.3122, TDWI 0.4291, SPAN 26.2921
Step 6, Loss 0.3044, TDWI 0.4340, SPAN 26.5141
Step 7, Loss 0.2938, TDWI 0.4389, SPAN 26.7380
Step 8, Loss 0.2872, TDWI 0.4439, SPAN 26.9640
Step 9, Loss 0.2787, TDWI 0.4488, SPAN 27.1919
Step 10, Loss 0.2701, TDWI 0.4537, SPAN 27.4219
Step 11, Loss 0.2597, TDWI 0.4587, SPAN 27.6540
Step 12, Loss 0.2525, TDWI 0.4636, SPAN 27.8882
Step 13, Loss 0.2422, TDWI 0.4686, SPAN 28.1246
Step 14, Loss 0.2322, TDWI 0.4736, SPAN 28.3632
Step 15, Loss 0.2234, TDWI 0.4785, SPAN 28.6040
Step 16, Loss 0.2146, TDWI 0.4835, SPAN 28.8470
Step 17, Loss 0.2035, TDWI 0.4885, SPAN 29.0922
Step 18, Loss 0.1944, TDWI 0.4934, SPAN 29.3397
Step 19, Loss 0.1850, TDWI 0.4984, SPAN 29.5894
Step 20, Loss 0.1744, TDWI 0.5034, SPAN 29.8414
Step 21, Loss 0.1641, TDWI 0.5083, SPAN 30.0957
Step 22, Loss 0.1545, TDWI 0.5133, SPAN 30.3525
Step 23, Loss 0.1468, TDWI 0.5175, SPAN 30.5764
Step 24, Loss 0.1396, TDWI 0.5211, SPAN 30.7641
Step 25, Loss 0.1383, TDWI 0.5240, SPAN 30.9174
Step 26, Loss 0.1328, TDWI 0.5263, SPAN 31.0394
Step 27, Loss 0.1306, TDWI 0.5280, SPAN 31.1319
Step 28, Loss 0.1277, TDWI 0.5293, SPAN 31.1977
Step 29, Loss 0.1271, TDWI 0.5301, SPAN 31.2393
Step 30, Loss 0.1270, TDWI 0.5304, SPAN 31.2576
Step 31, Loss 0.1258, TDWI 0.5304, SPAN 31.2548
Step 32, Loss 0.1258, TDWI 0.5300, SPAN 31.2330
Step 33, Loss 0.1270, TDWI 0.5293, SPAN 31.1939
Step 34, Loss 0.1277, TDWI 0.5283, SPAN 31.1391
Step 35, Loss 0.1278, TDWI 0.5271, SPAN 31.0714
Step 36, Loss 0.1301, TDWI 0.5256, SPAN 30.9919
Step 37, Loss 0.1314, TDWI 0.5240, SPAN 30.9028
Step 38, Loss 0.1329, TDWI 0.5221, SPAN 30.8048
Step 39, Loss 0.1369, TDWI 0.5201, SPAN 30.6988
Step 40, Loss 0.1381, TDWI 0.5180, SPAN 30.5867
Step 41, Loss 0.1395, TDWI 0.5157, SPAN 30.4690
Early stopping at step 41
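The relative MAE used as the loss in the training loop above can be illustrated on toy data (hypothetical values, not from the WOFOST run): dividing the MAE by the mean magnitude of the reference makes the loss scale-free, so the same threshold is meaningful whether the target varies like LAI (~0-6) or like biomass (~1000s).

```python
import torch

# Relative MAE: mean absolute error normalized by the reference magnitude.
def rmae(pred, target):
    return torch.mean(torch.abs(pred - target)) / torch.mean(torch.abs(target))

# Same relative error at two very different scales:
target_small = torch.tensor([1.0, 2.0, 3.0])
pred_small = target_small + 0.3

target_big = target_small * 1000.0
pred_big = target_big + 300.0

print(rmae(pred_small, target_small).item())  # ≈ 0.15
print(rmae(pred_big, target_big).item())      # ≈ 0.15 (scale-free)
```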
# ---- Compare optimized vs. ground-truth parameters ----
print(f"Optimized: TDWI = {opt_model.tdwi().item():.4f}, SPAN = {opt_model.span().item():.4f}")
print(f"Actual: TDWI = {crop_model_params_provider['TDWI'].item():.4f}, SPAN = {crop_model_params_provider['SPAN'].item():.4f}")
Optimized: TDWI = 0.5157, SPAN = 30.4690
Actual: TDWI = 0.5100, SPAN = 35.0000
2. Conclusion¶
This notebook demonstrated parameter optimization in wofost72_pp using automatic differentiation.
By treating TDWI and SPAN as learnable parameters
and minimizing the relative MAE between simulated and reference LAI,
the Adam optimizer recovered TDWI almost exactly and moved SPAN substantially toward the ground truth.
The plot below shows the reference LAI curve alongside the LAI simulated with the optimized parameters.
import matplotlib.pyplot as plt
with torch.no_grad():
optimized_lai = opt_model().numpy()
reference_lai = expected_lai.numpy()
steps = range(len(reference_lai))
fig, ax = plt.subplots(figsize=(9, 4))
ax.plot(steps, reference_lai, label="Reference LAI", linewidth=2)
ax.plot(steps, optimized_lai, label="Optimized LAI", linewidth=2, linestyle="--")
ax.set_xlabel("Time step (days after sowing)")
ax.set_ylabel("LAI (m² m⁻²)")
ax.set_title("Reference vs. Optimized LAI")
ax.legend()
ax.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()
print(f"\nFinal loss: {best_loss:.4f}")
print(f"Optimized: TDWI = {opt_model.tdwi().item():.4f}, SPAN = {opt_model.span().item():.4f}")
print(f"Actual: TDWI = {crop_model_params_provider['TDWI'].item():.4f}, SPAN = {crop_model_params_provider['SPAN'].item():.4f}")
Final loss: 0.1258
Optimized: TDWI = 0.5157, SPAN = 30.4690
Actual: TDWI = 0.5100, SPAN = 35.0000