In part 1 we developed our strategy and R code for measuring the efficiency of the placement of tube stations. In this post we will scale this up so that we can feed in empirical data gathered from OpenStreetMap and Transport for London.

**To briefly recap**

We are measuring the efficiency of station placement by:

- finding the sum of the squared shortest distance (sssd) from every building (in the area of interest) to a tube station – this is the measure of current performance;
- using R optim to find the optimal placement of stations by minimising the sssd from every building to a tube station;
- finding the sssd from every building (in the area of interest) to randomly placed tube stations to create a baseline measure of performance;
- comparing the difference in sssd between (1) current, (2) optimal and (3) baseline.
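Before getting to the full code, the sssd metric at the heart of all four steps can be computed directly on a couple of toy points (a minimal sketch with made-up coordinates, separate from the full listing below):

```r
# Minimal sketch of the sssd metric: the squared distance from each
# building to its nearest station, summed over all buildings.
buildings <- data.frame(px = c(0, 1), py = c(0, 1))
stations  <- data.frame(sx = c(0, 2), sy = c(0, 2))

sssd <- sum(apply(buildings, 1, function(b) {
  min((b["px"] - stations$sx)^2 + (b["py"] - stations$sy)^2)
}))
sssd  # 2 for this toy layout: building 1 sits on a station, building 2 is sqrt(2) away
```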

The R code that we developed was as follows:

```r
# many stations case
buildings = data.frame(
  px = rnorm(10, 0, 1),
  py = rnorm(10, 0, 1)
)

hubs = c(0, 0, -1, 1, 1, -1)

min.distance <- function(data, par) {
  # calculate the distance from each point to each transport hub
  M <- matrix(nrow = nrow(data), ncol = length(par) / 2)
  for (i in seq(1, length(par), by = 2)) {
    col <- (i + 1) / 2
    M[, col] <- (data$px - par[i])^2 + (data$py - par[i + 1])^2
  }
  # calculate the min distance to any transport hub
  minD <- vector(mode = "numeric", length = nrow(data))
  for (i in seq(1, nrow(M))) {
    minD[i] <- min(M[i, ])
  }
  min.distance <- sum(minD)
}

result <- optim(par = hubs, min.distance, data = buildings, method = "BFGS")

plot(buildings)
for (i in seq(1, length(result$par), by = 2)) {
  points(result$par[i], result$par[i + 1], col = "blue", cex = 1.5)
}
```

**Scaling this up to London?**

In this blog post I will share with you how I went from using toy data to using OpenStreetMap for the locations of buildings and the Transport for London website ( https://www.tfl.gov.uk/cdn/static/cms/documents/stations.kml ) for the locations of the 301 tube and DLR stations that make up the TfL station infrastructure.

- Download the TfL station location data and import it into QGIS by adding it as a new vector layer.
- Reproject the locations to WGS84-UTM30N coordinates and save them as a shapefile, so that one unit is one meter.
- Download the building data from OpenStreetMap using the built-in features in QGIS, i.e. Vector->OpenStreetMap->Download data.
- Export the polygon layer and filter away non-buildings, e.g. surface water and agriculture.
- Calculate the centroids of your buildings using the built-in QGIS features, e.g. Vector->Geometry tools->Polygon centroid.
- Save your building centroids as WGS84-UTM30N coordinates within a shapefile.

To process this empirical data in R, I modified our code into the following:

```r
# many stations case using empirical data
library(maptools)
setwd("C:\\Users\\David\\Documents\\GIS\\LondonTubeStations")
tubestations <- readShapeSpatial("10K_Buf_LondonStations.shp")
buildings <- readShapeSpatial("10K_Buf_Westminster_Bldg_Centroids.shp")

# load building data into data frame
buildings = data.frame(
  px = coordinates(buildings)[, 1],
  py = coordinates(buildings)[, 2]
)

# load tube data into vector
hubs <- vector(mode = "numeric", 2 * nrow(coordinates(tubestations)))
j = 1
for (i in seq(1, nrow(coordinates(tubestations)))) {
  hubs[j] <- coordinates(tubestations)[i, 1]
  j = j + 1
  hubs[j] <- coordinates(tubestations)[i, 2]
  j = j + 1
}

# min.distance function for measuring efficiency of layout for optimisation
min.distance <- function(data, par) {
  # calculate the distance from each point to each transport hub
  M <- matrix(nrow = nrow(data), ncol = length(par) / 2)
  for (i in seq(1, length(par), by = 2)) {
    col <- (i + 1) / 2
    M[, col] <- (data$px - par[i])^2 + (data$py - par[i + 1])^2
  }
  # calculate the min distance to any transport hub
  minD <- vector(mode = "numeric", length = nrow(data))
  for (i in seq(1, nrow(M))) {
    minD[i] <- min(M[i, ])
  }
  min.distance <- sum(minD)
}

# min.distance.compare function for looking at distribution of distances
min.distance.compare <- function(data, par) {
  # calculate the distance from each point to each transport hub
  M <- matrix(nrow = nrow(data), ncol = length(par) / 2)
  for (i in seq(1, length(par), by = 2)) {
    col <- (i + 1) / 2
    M[, col] <- (data$px - par[i])^2 + (data$py - par[i + 1])^2
  }
  # calculate the min distance to any transport hub
  minD <- vector(mode = "numeric", length = nrow(data))
  for (i in seq(1, nrow(M))) {
    minD[i] <- min(M[i, ])
  }
  min.distance.compare <- sqrt(minD)  # use this for comparisons in meters
}

# find minimum distance for empirical data
dist <- min.distance.compare(buildings, hubs)
summary(dist)
hist(dist, breaks = "Sturges")

# optimisation code
result <- optim(par = hubs, min.distance, data = buildings, method = "BFGS")

# display results
plot(buildings)
for (i in seq(1, length(result$par), by = 2)) {
  points(result$par[i], result$par[i + 1], col = "blue", cex = 1.5)
}
```

As can be seen, the code is more or less the same as before, except that maptools is used to read in the shapefiles and some preprocessing massages the data into the format expected by our previous code.

- The only modification perhaps worthy of explanation is the new ‘min.distance.compare’ function, which returns a vector of the shortest distance in meters from each building to its nearest tube station. This function is nice because you can plot the distribution of distances and compute useful summary statistics.
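As a sanity check on that function, here is a compact vectorized form that computes the same nearest-station distances without the explicit loops (a sketch with toy coordinates; the loop-based version above gives identical results):

```r
# Vectorized equivalent of min.distance.compare: 'par' is a flat
# vector of x,y station coordinate pairs, as in the listing above.
min.distance.compare <- function(data, par) {
  hx <- par[seq(1, length(par), by = 2)]   # station x coordinates
  hy <- par[seq(2, length(par), by = 2)]   # station y coordinates
  d2 <- outer(data$px, hx, "-")^2 + outer(data$py, hy, "-")^2
  sqrt(apply(d2, 1, min))                  # nearest-station distance per building
}

buildings <- data.frame(px = c(0, 3), py = c(0, 4))
min.distance.compare(buildings, c(0, 0, 10, 10))  # 0 and 5
```

The first building sits on a station; the second is a 3-4-5 triangle away from (0, 0), so the distances come out as 0 and 5.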

**Findings**

As can be seen below, I chose to limit my analysis to a 10 km radius around central London. The coverage of the OSM data is pretty good, with the vast majority of the area well populated with building polygons, from which we generated centroid points.

Using the output of min.distance.compare we can identify how likely you are to be within n meters of a TfL station. The histogram below suggests an exponential distribution of walking distances from buildings to TfL stations. The sssd (sum of squared shortest distances) was **1.95e11** and the summary statistics (in meters) were as follows:

```
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  1.108 338.600 567.000 950.500 1253.000 5844.000
```
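The exponential claim can be loosely sanity-checked against a known property: for an exponential distribution the median is mean × ln(2) ≈ 0.69 × mean, and the observed 567 / 950 ≈ 0.60 is at least in that neighbourhood. A quick simulation illustrates the property (made-up data, not the empirical distances):

```r
# For an exponential distribution, median = mean * ln(2) ~= 0.693 * mean.
set.seed(42)
sim <- rexp(10000, rate = 1 / 950)  # simulated distances, mean ~950 m
median(sim) / mean(sim)             # close to log(2) ~= 0.693
```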

For the purpose of comparison I generated a set of random station points drawn uniformly between the min and max observed building coordinates – this is easily achieved with the ‘runif’ function. The sssd was **7.83e10**, whilst the summary statistics were as follows:

```
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  2.863 465.500 704.500 753.900 988.600 2246.000
```
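The random baseline can be generated along these lines (a sketch: the bounding-box coordinates here are illustrative stand-ins for the empirical ‘buildings’ data frame from the listing above):

```r
# Random baseline: place the same number of stations uniformly within
# the bounding box of the building coordinates. Toy coordinates here;
# swap in the empirical 'buildings' data frame for the real analysis.
set.seed(1)
buildings <- data.frame(px = runif(500, 520000, 540000),
                        py = runif(500, 170000, 190000))
n.stations <- 301
rand.hubs <- numeric(2 * n.stations)
rand.hubs[seq(1, 2 * n.stations, by = 2)] <-
  runif(n.stations, min(buildings$px), max(buildings$px))
rand.hubs[seq(2, 2 * n.stations, by = 2)] <-
  runif(n.stations, min(buildings$py), max(buildings$py))
# rand.hubs can now be passed to min.distance / min.distance.compare
# in place of 'hubs'
```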

Interestingly enough, the random allocation does not perform badly in comparison to the current placement of TfL stations. The random allocation outperforms the current TfL setup in that it has a lower sssd, mean and maximum distance. The current TfL allocation does, however, have a better median, but at the expense of poor coverage in south-east and north-east London.

In the next post I will share the outcome of the optimisation and attempt to quantify how far from optimal the placement of stations is for walking distances.