Land is a finite resource with many competing demands. Land use models can help us make better, more informed decisions about how we use that land to tackle some of the most pressing issues we’re facing. For instance, I’m currently working on the OpenCLIM project to develop the first integrated assessment for climate impacts and adaptation in the UK. This will help decision-makers better understand the direct and indirect effects of climate change, including critical impacts on the land, such as flood risk, biodiversity and agricultural production.
At Newcastle University, we’re also simulating different possible future urban land use scenarios to help decision-makers understand where new housing development might take place. They can then use the model to test the likely impacts of different policies and prioritise the land made available for development. For example, a local authority might want to minimise emissions from transport and protect important habitats, while creating new homes that will not be affected by flooding or flight noise and that are within a certain distance of schools and jobs. The model can help identify optimal, possible and no-go areas based on these criteria.
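To make that concrete, here is a minimal sketch of how such a multi-criteria screening might work, assuming gridded inputs where each cell carries flood, habitat, noise and accessibility attributes. Every variable name, threshold and weight below is illustrative, not taken from our actual model:

```python
import numpy as np

# Hypothetical gridded inputs, one value per land cell (random data stands
# in for real layers): booleans for hard constraints, distances in metres.
rng = np.random.default_rng(0)
n = 100
flood_risk     = rng.random((n, n)) > 0.90   # cell lies in a flood zone
protected      = rng.random((n, n)) > 0.95   # designated habitat
noise_contour  = rng.random((n, n)) > 0.92   # within a flight-noise contour
dist_to_school = rng.uniform(0, 5000, (n, n))
dist_to_jobs   = rng.uniform(0, 10000, (n, n))

# Hard constraints rule cells out entirely ("no-go").
no_go = flood_risk | protected | noise_contour

# Soft criteria score the remaining cells; here a simple accessibility
# score favouring short trips to schools and jobs (lower transport emissions).
ok = ~no_go
score = np.zeros((n, n))
score[ok] = (np.clip(1 - dist_to_school[ok] / 2000, 0, 1)
             + np.clip(1 - dist_to_jobs[ok] / 5000, 0, 1)) / 2

# Classify each cell: 0 = no-go, 1 = possible, 2 = optimal (top scorers).
classes = np.zeros((n, n), dtype=int)
classes[ok] = 1
classes[ok & (score > 0.7)] = 2
```

In practice the constraints, thresholds and weights would come from the decision-makers themselves, which is what lets them test different policy priorities against the same underlying data.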
Developing an effective land use decision-making tool can be a bit of a balancing act. The model needs to be complex enough to capture the many competing demands on the land so that we’re confident that any decisions informed by the model are as robust as possible. But at the same time, decision-makers need to be able to understand the model and why they’re getting a particular set of outputs. So the right balance has to be struck between complexity on the one hand and the legibility of any outputs on the other.
Getting the right data for land use models isn’t always easy, especially when it’s held by a diverse set of local authorities. There is no accepted standard for land use data in the UK, so each authority has its own approach and its own formats, and there are gaps. Scale is also a challenge. Data isn’t always available at a national scale, so we have to take very detailed Ordnance Survey MasterMap data and work out how best to abstract it to a lower-resolution map of the UK. Tricky decisions have to be made about what to leave out and what to keep in as you rescale the data to allow the model to run quickly.
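As one illustration of the trade-offs involved, here is a simple sketch of aggregating a detailed categorical land use grid to a coarser one by majority rule. This is an assumed approach for illustration, not our actual pipeline:

```python
import numpy as np

def majority_downsample(grid: np.ndarray, factor: int) -> np.ndarray:
    """Aggregate a categorical grid to a coarser resolution by taking the
    most common class in each factor x factor block. Majority rule keeps
    the model fast, but minority classes (e.g. small habitat patches) can
    vanish from the coarse map: exactly the kind of decision about what to
    leave out and what to keep in described above."""
    h, w = grid.shape
    h2, w2 = h // factor, w // factor
    blocks = grid[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    out = np.empty((h2, w2), dtype=grid.dtype)
    for i in range(h2):
        for j in range(w2):
            vals, counts = np.unique(blocks[i, :, j, :], return_counts=True)
            out[i, j] = vals[np.argmax(counts)]
    return out

# e.g. abstract a fine categorical raster to cells 100x coarser:
fine = np.random.default_rng(1).integers(0, 5, size=(1000, 1000))
coarse = majority_downsample(fine, 100)
```

Other rules (keeping any flood-risk cell, say, however small) would make different trade-offs; the point is that the choice of rule is itself a modelling decision.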
When it comes to new housing, some people worry that these tools will simply give the green light to more housing development, but this doesn’t have to be the case. It’s about building in the places that make the most sense. There’s an onus on us as model developers to be very open and transparent about what we’re doing so that we build trust. All code used in these models should be open source, and ideally these tools should have a user-friendly interface so that anyone can interact with them. Newcastle City Council’s budget simulator is a good example of this kind of transparency, and it can help the public better appreciate the complexity of the choices that decision-makers are faced with. For land-related issues, displaying model outputs on colour-coded maps and visualisations can also help.
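For instance, a colour-coded map of the three suitability classes from the earlier sketch takes only a few lines; the colours and labels here are illustrative choices, not a house style:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# 'classes' as produced by the earlier sketch: 0 = no-go, 1 = possible,
# 2 = optimal (regenerated as random data so this example stands alone).
classes = np.random.default_rng(0).integers(0, 3, size=(100, 100))

cmap = ListedColormap(["#d73027", "#fee08b", "#1a9850"])  # red / amber / green
plt.imshow(classes, cmap=cmap, vmin=0, vmax=2)
cbar = plt.colorbar(ticks=[0, 1, 2])
cbar.ax.set_yticklabels(["no-go", "possible", "optimal"])
plt.title("Illustrative development suitability")
plt.show()
```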
Another misconception is that nuanced human decision-making will be replaced with simplistic rule-based computer decisions. The tools we’re developing are intended as aids and do not provide decisions – they simply support human decision-makers so that they can focus their skills and experience in the right areas. They can also continue to iterate and adapt the model to suit their needs and priorities.
And finally, it’s important to remember that no model can ‘predict’ the future, especially when we’re thinking about what land might look like in 2100. In our work we talk in terms of ‘scenarios’. We can come up with a number of plausible future scenarios, driven by things like the UK SSPs (UK Shared Socioeconomic Pathways), against which we can test the impact of different policy options and try to minimise risks. But no model can tell us exactly what the future holds.
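In code terms, scenario-based testing is less about prediction and more about stress-testing each policy against all of the plausible futures. A toy sketch, with every name and number made up for illustration:

```python
import random
random.seed(0)

# Illustrative names only; these are not the actual OpenCLIM scenarios.
scenarios = ["UK-SSP1", "UK-SSP2", "UK-SSP5"]          # plausible futures
policies  = ["baseline", "brownfield-first", "flood-avoidance"]

def run_model(scenario: str, policy: str) -> float:
    """Stand-in for the land use model: a random placeholder for some
    risk metric, e.g. the share of new homes in high flood risk areas."""
    return random.random()

results = {(s, p): run_model(s, p) for s in scenarios for p in policies}

# We can't pick the policy that is best for one predicted future, but we
# can look for one that performs acceptably across every scenario, e.g.
# by minimising the worst-case risk:
robust = min(policies, key=lambda p: max(results[(s, p)] for s in scenarios))
print("Most robust option across these scenarios:", robust)
```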