I have been using the process of photogrammetry for nearly four years now, and it has slowly moved from what I originally thought of as witchcraft to an exacting process that, if not planned and carried out correctly, can bring a whole world of hurt. Personally, I am not one to rely on a process without trying to understand the principles and the technology behind it.
The origins of photogrammetry date back to the Renaissance, when art, science and mathematics were used by scholars in design, architecture and invention. Since the birth of photography in the mid nineteenth century, photogrammetry has been used to calculate a point in 3D space from two images taken at parallel but different locations: a line of sight can be extracted for a point from each image, and where these lines or ‘rays’ intersect, a position in 3D space is calculated. It is a process used today by mapping agencies worldwide, often with very expensive and heavy optical equipment carried on traditional aircraft.
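To make the intersection idea concrete, here is a minimal sketch (assuming the two camera positions and line-of-sight directions are already known) that finds the midpoint of the shortest segment between two rays; in practice the rays rarely intersect exactly, so the midpoint is used:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two rays.

    p1, p2 -- camera positions; d1, d2 -- line-of-sight directions.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for the scalars t1, t2 minimising |(p1 + t1*d1) - (p2 + t2*d2)|
    b = d1 @ d2
    w = p1 - p2
    denom = 1.0 - b * b          # rays are parallel when denom == 0
    t1 = (b * (w @ d2) - (w @ d1)) / denom
    t2 = ((w @ d2) - b * (w @ d1)) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

With two cameras at (0, 0, 0) and (10, 0, 0) both sighting a target at (5, 5, 0), the function recovers that point exactly.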
I started using the process on the ground, creating 3D models of nearly anything I needed: stones, cliff faces, footprints in sand and complete buildings. Using a camera and open source software it often took days of processing, but I could immediately see that its applications went far beyond what I was doing at the time. That’s when my son introduced me to model aircraft and what he was doing with them: aerial video, and this was before DJI. The first GoPro had just been released, light, reasonably good quality and ideal for mounting on an aeroplane. I dabbled in the software side while my son experimented with mounting the camera. We had success and soon mapped our first 100 hectares, with ground control points taken from Google Earth, so accuracies were not great, but adequate.
Today, within my own company, we have mapped over 5,000 hectares, with individual projects ranging from 5 to 200 hectares and accuracies better than 10 cm. Ten centimetres, I hear you say, when some will claim better than 5 mm? Well, that remains to be seen, as this article will demonstrate. As it stands, even Ordnance Survey 5 m digital terrain models only promise an accuracy of 2 m root mean square error, so when you are dealing with a resolution 100 times greater, over 200 hectares, 10 cm is pretty good. The important thing to bear in mind here is that we are talking about a National Grid data set, not a local grid data set. The larger the area, the more prone it is to discrepancies, especially over time, and even more so when you are talking about accuracies at cm and sub-cm level. Ever thought about the slight sinking of Britain when the tide comes in over the continental shelf, the sinking of inland areas under a high-pressure weather system (about 5 mm), or the rising of the land in response to the melting of the last ice sheets (about 2 mm per year in Scotland, up to 1 cm per year in Scandinavia)? Some things that have not previously needed to be considered now do: as the demand for higher resolution data increases, the levels of accuracy need to be revised. This article is not going to go into this in depth, but it is worth mentioning. Food for thought, perhaps?
From a personal standpoint, the work we have carried out to date has been the mapping of very rural areas with a view to repeating the survey in the future for comparison. Using unmanned aerial vehicles (UAVs) is a fairly new technique, and it has been very important for us, as a company, to do our best to determine the levels of accuracy we can achieve, so as never to promise what we cannot deliver. To this end, we have been fortunate to work with Bangor University’s School of Ocean Sciences within a European-funded project called SEACAMS, whose staff have been actively interested in the process and what it can produce. We have carried out a number of joint projects, and this article is about one that formed part of a student’s M.Sc. project: a comparison of the different methods that can be used to produce an accurate digital elevation model.
Treborth sports fields are situated on the banks of the Menai Strait in North Wales and belong to Bangor University. The site has four football and rugby pitches on two distinct levels (Fig 1).
Ground Control Points
This is where it gets interesting, and I need to explain something. If you have a static, known and permanent position and give it a position of X=0, Y=0 and Z=0, no one can question that, as all it is is a reference in 3D space. From this you can measure any other marker, object or fixture, permanent or temporary, RELATIVE to that reference point. This can be done with a total station, which bounces a laser off a prism mounted on a staff, giving a precise sub-cm reading of where the staff is situated. It gives a precise position RELATIVE to the reference point, NOT to a national grid. The reference point could also be created in the same way from an actual passive station, and this could then be mapped accurately and precisely onto the British National Grid.
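As an illustration of working RELATIVE to a local reference, the sketch below converts a total-station observation (slope distance plus two angles) into local X, Y, Z coordinates about the X=0, Y=0, Z=0 origin. The angle conventions here are an assumption for the sake of the example: horizontal angle clockwise from local north, and vertical angle measured as a zenith angle:

```python
import math

def polar_to_local(slope_dist, h_angle_deg, v_angle_deg):
    """Convert a total-station observation to local XYZ.

    Assumed conventions: h_angle measured clockwise from local
    'north' (the Y axis); v_angle is the zenith angle (0 = straight up).
    """
    h = math.radians(h_angle_deg)
    z = math.radians(v_angle_deg)
    horiz = slope_dist * math.sin(z)        # horizontal distance
    return (horiz * math.sin(h),            # X (east)
            horiz * math.cos(h),            # Y (north)
            slope_dist * math.cos(z))       # Z (up)
```

A 100 m level shot due east (both angles 90 degrees) lands at X=100, Y=0, Z=0, exactly as you would expect relative to the origin.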
Using GPS-based positioning systems gives us an accurate position (dependent on the measuring device) with reference to the European Terrestrial Reference System 1989 (ETRS89), which is the UK’s national coordinate system for 3D GPS positioning. We often convert this to the OSGB36 National Grid (Ordnance Survey Great Britain 1936), which is the national coordinate system for topographic mapping in the UK. Any positioning system based on GPS will always be subject to possible errors. It could be that the quality of the equipment used is suspect (has it been tested or calibrated?), and the positioning of satellites and the atmospheric conditions may also play a part. So what it comes down to is scale. If an area is surveyed using a local grid, precision and accuracy are both achievable. When surveying an area to a national grid, accuracy depends on the base being as close to truth as possible, precision depends on the rover’s alignment to the base, and there are many factors outside the surveyor’s control that need to be accounted for. On the plus side, complex algorithms and expensive electronics are constantly being developed to improve precision and accuracy, which is of course the ultimate goal.
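For readers curious about what the ETRS89 to OSGB36 conversion involves, below is a rough sketch of the small-angle 7-parameter Helmert transformation on cartesian coordinates, using Ordnance Survey’s published approximate parameters. Note this datum shift alone is only good to a few metres; real conversions use Ordnance Survey’s grid-shift files (OSTN) instead:

```python
import math

def etrs89_to_osgb36_helmert(x, y, z):
    """Approximate ETRS89 -> OSGB36 datum shift via the published
    7-parameter Helmert transformation (cartesian XYZ, metres).

    Good to roughly 3.5 m only; production work uses the OSTN
    grid-shift transformation instead of these parameters.
    """
    tx, ty, tz = -446.448, 125.157, -542.060      # translations, metres
    s = 20.4894e-6                                # scale, parts per million
    rx, ry, rz = (math.radians(a / 3600)          # arc-seconds -> radians
                  for a in (-0.1502, -0.2470, -0.8421))
    return (tx + (1 + s) * x - rz * y + ry * z,
            ty + rz * x + (1 + s) * y - rx * z,
            tz - ry * x + rx * y + (1 + s) * z)
```

Applied to a point on the Earth’s surface over the UK, the shift between the two datums comes out at several hundred metres, which is why the two coordinate systems must never be mixed in one survey.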
In our example survey each marker was measured independently with both the base and rover and the GNSS networked RTK. In the table below (Fig 3) you can see an example of the first 10 marker measurements. The average is from all the marker measurements, not just these 10, and I removed one measurement, which differed by 3.2 metres from its companion measurement (point 36 on Fig 2).
So with a total of 49 markers (one removed), an average difference in elevation of 2.8 cm and an average difference in position of 6 cm, which measurements are correct? Do we accept that the positioning is OK at +/- 3 cm, and is that good enough? When you think about it, the British National Grid is 700 km by 1,300 km, so I would say that +/- 3 cm is both precise and accurate. It meets the criteria set out for topographic surveys; it is not good enough for civil engineering. If we had used a total station, the precision could have been greatly enhanced (something we will look at in the very near future). We could repeat the scenario over a number of days and average out the numbers, but that is what the base station and post-processing with RINEX data do anyway, and so does the networked RTK; it just takes the data from some of the 110 reference stations available, in our case Topnet from Topcon. With these results in, our accuracy is obviously going to be limited to those figures, but anything mapped from them is going to be far more accurate and of much higher resolution than anything you can get ‘off the shelf’. For our aerial survey, we have used our ground control points, which we deem correct and accurate for this exercise. SEACAMS will use their own ground control point data, which they deem correct, for geo-referencing their laser scan data collected from the same site. We will compare the digital model results later.
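The bookkeeping behind those averages is straightforward. A sketch with hypothetical easting/northing/elevation tuples, and an assumed 1 m rejection threshold for blunders like the 3.2 m outlier above, might look like this:

```python
import statistics

def compare_markers(base_rover, network, reject_above=1.0):
    """Mean horizontal and vertical difference (metres) between two
    GNSS measurement sets of the same markers.

    Each set is a list of (easting, northing, elevation) tuples in
    marker order. Pairs whose horizontal difference exceeds
    `reject_above` metres are discarded as blunders.
    """
    dh, dz = [], []
    for (e1, n1, z1), (e2, n2, z2) in zip(base_rover, network):
        horiz = ((e1 - e2) ** 2 + (n1 - n2) ** 2) ** 0.5
        if horiz > reject_above:
            continue                      # blunder: drop the pair
        dh.append(horiz)
        dz.append(abs(z1 - z2))
    return statistics.mean(dh), statistics.mean(dz)
```

Running this over the two full measurement sets yields exactly the kind of average position and elevation differences quoted above.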
Now that the points have been measured and the data stored from both methods, it is time to carry out the actual surface data capture. The aerial survey was done with a small commercially available UAV carrying a GoPro Hero 4. For larger areas (50 hectares plus) we use a three-man crew and a much larger UAV, usually with a Sony NEX-7 camera. We used commercially available software for the photogrammetry processing. The ground-based scanning was carried out using a Leica ScanStation C10 laser scanner, which uses pulses of laser light to build up a 3D digital model of the surrounding environment. We also scanned the area with our Faro S120 scanner, which is not used in this comparison.
Fig 5 shows the root mean square errors (RMSE) of the first 10 control points in relation to the final digital elevation model. RMSE measures how much error there is between two datasets, in this case the known values of the ground control points and the processed values of the digital terrain model. The average values over the whole 49 points can be seen in Fig 6.
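RMSE itself is a one-line computation. For example, over the elevation values of the control points:

```python
import math

def rmse(known, modelled):
    """Root mean square error between known GCP values and the
    corresponding values read off the processed terrain model."""
    return math.sqrt(sum((k - m) ** 2 for k, m in zip(known, modelled))
                     / len(known))
```

The same function applies to eastings and northings, giving the separate horizontal and vertical figures reported in the tables.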
The number of images captured to create the digital terrain model, or digital surface model to name it correctly, was 125, each with a resolution of 4000×3000 pixels. These produced an aerial mosaic image with a resolution of 1.92 cm per pixel and a digital surface model with a grid spacing of 3.8 cm (Fig 7 & Fig 8). The point cloud that the processing produced had a point density of 677 points per square metre; in total, 21,371,820 points were generated for the area.
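These figures can be cross-checked against one another: the point count divided by the density gives the surveyed area, and the DSM grid spacing implies a comparable post density:

```python
total_points = 21_371_820
density = 677                 # points per square metre
area_m2 = total_points / density
print(f"Surveyed area ≈ {area_m2 / 10_000:.1f} ha")       # ≈ 3.2 ha

# The 3.8 cm DSM grid spacing implies a similar post density:
grid = 0.038                  # metres
print(f"≈ {1 / grid ** 2:.0f} grid posts per square metre")
```

The two densities land within a few percent of each other, a quick sanity check that the processing report is internally consistent.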
There are a number of obvious advantages to capturing this data via a UAV.
So with both sets of data in and processed, how do the two techniques compare from a measured point of view? Fig 9 shows an area in the centre of the survey quite nicely. The two digital elevation models were overlaid and a difference algorithm was run; the image shows the differences in colour. The biggest difference shows up on the bank, where the grass was very long. Photogrammetry creates a surface model, i.e. it averages out the tops of any vegetation, whereas the laser scanner could penetrate the grass, hence the biggest difference there. The overall difference ranges between 0 and 10 cm, which roughly matches the height of the grass on the main field, enough for us to feel that it was the main cause of the difference between the two data sets. You can also clearly see a circular feature where the scanner was located and therefore never captured data. Fig 11 shows a transect across 80 m of the survey area, with the red lines showing where values were extracted to show the differences in height; these are displayed in the table (Fig 12).
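The difference algorithm amounts to subtracting two co-registered grids. A minimal sketch, assuming both models have already been resampled onto the same grid spacing and extent:

```python
import numpy as np

def dem_difference(dem_a, dem_b):
    """Cell-by-cell difference of two co-registered DEM grids.

    Both arrays must share the same grid spacing and extent; cells
    where either model has no data (NaN) remain NaN in the result,
    e.g. the circular hole beneath the scanner position.
    """
    diff = dem_a - dem_b
    diff[np.isnan(dem_a) | np.isnan(dem_b)] = np.nan
    return diff
```

Colour-mapping the resulting array is what produces an image like Fig 9, and a histogram of the valid cells gives the 0 to 10 cm spread described above.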
The final table (Fig 12) shows the distances between 12 central surveyed points and the positions of those points on the generated and geo-referenced ortho-mosaic.
This project was carried out to compare two methods of terrain data capture: laser scanning and aerial photogrammetry. In general, there seem to be no questions raised as to the validity and accuracy of laser scanning. I think that, to the layperson, it is easier to understand how it works; it is a more tangible, physical process. It fires a laser, the laser bounces back, you have a measurement, what’s to question! Photogrammetry, on the other hand, as I mentioned at the beginning of this article, can be viewed as somewhat mythical in terms of how it works, and if it is not understood, how can its accuracy be believed? Hopefully, with this article, you can see how it measures up, without going into the process too much.
We didn’t use the best unmanned aerial platform that we could have. Why? Well, cameras and lenses cost money, and they are all a matter of opinion. The point of carrying out this project was to demonstrate that the process is viable for topographic surveying regardless of the payload; discussing platforms, cameras and software is a whole other subject. This was more about controlling the environment as best as possible in order to create a viable set of data for comparison.
So how do things measure up? I think the first point to come out of this project is the importance of making sure of the accuracies and levels of precision coming from the ground control points and how they are measured. We have shown that there can be discrepancies of a few centimetres. The manufacturer’s statement on the networked rover is 10 mm horizontally and 20 mm vertically, +/- 1 ppm RMS, where 1 ppm means the given error increases by 1 mm per 1 km of distance from the base. So we could expect accuracies of anything below 30 mm, in perfect conditions.
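Written out, that specification is simply a fixed error plus a distance-dependent term:

```python
def expected_error_mm(base_mm, baseline_km, ppm=1.0):
    """Manufacturer-style RMS error budget: a fixed part plus a
    distance-dependent part (1 ppm = 1 mm per km of baseline)."""
    return base_mm + ppm * baseline_km

# Vertical spec of 20 mm + 1 ppm, rover working 10 km from the base:
print(expected_error_mm(20, 10))   # 30.0 mm
```

This is where the 30 mm upper figure above comes from: the 20 mm vertical spec plus 1 mm per km over a 10 km baseline.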
Using the latest GPS equipment, networked or base and rover, is going to give you very high accuracy. There are other methods, using local grids, total stations and so on, but we won’t go into those here. What is important, as in any survey, is care, preparation and planning, to gather ground measurements as accurately as possible. This will determine the quality of any processed terrain data, but each stage is important; using UAVs is no shortcut to great data, just another method.
I will digress a little here and add that if a UAV can determine its exact location and height when taking a photograph, and can geo-tag that photo, this too can produce very accurate terrain models. At present, many machines have this capability, but accuracies are only equivalent to consumer-grade GPS devices, i.e. +/- 5 m. Devices and methods are being developed, and in some cases are in use, where cm-level accuracies are available. You would still need some sort of control on the ground in order to verify the data.
So we have ascertained that the GCPs are as accurate as possible, but what about the terrain data created? One thing that jumps out straight away is that when dealing with very high resolutions, more things come into play and become relevant. In this project, the roughly 10 cm of grass on the field is very relevant, especially on the bank, which was uncut, with grass about 30-50 cm in length. Laser scanning gives a truer indication of a digital terrain model (bare earth model), or at least the ability to extract any vegetation. This becomes a lot trickier with the photogrammetry process, which averages out the image information and gives a surface model, including the tops of bushes, trees and grass. In a nadir aerial image, even if the actual ground is viewable through vegetation, it is likely to be very dark, i.e. in shadow, and therefore indistinguishable from the vegetation, so the information is lost in the photogrammetry process. This is fine if that is what is needed, or is acceptable to the client. In many cases it is exactly what is required: they want all the information, trees, tree canopy, bushes, hedges, walls, buildings and everything else that exists in the landscape, as it gives a true picture of what is there. And let’s not forget the aerial ortho-mosaic that we can drape over the model.
To sum up, there is no ‘one size fits all’ method of creating high resolution, accurate digital elevation models. The required levels of resolution and accuracy, along with experience, should determine which tool is best for the job. What we have determined in this project is that aerial photogrammetry at these levels of resolution is a perfectly viable solution; it has benefits and pitfalls when compared to other methods, and its use should depend on what the client wants.
What I will add is that for large rural areas, in terms of budget and timescale, in my humble opinion, it can’t be beaten, when done correctly.