CAA UK 2014

Members of the EngLaID team helped to organise the UK Chapter Meeting of Computer Applications and Quantitative Methods in Archaeology (CAA) this year.  The conference took place in the Pitt Rivers Museum, Oxford, on 21-22 March 2014.  The full conference programme can be found here.

[Figure: Gary Lock welcomes attendees]

Following a welcome by Emeritus Professor Gary Lock of Oxford University, the current Chairman of CAA International, we heard speakers from a good selection of UK universities and other institutions, including English Heritage and the British Museum.

[Figure: EngLaID team member Vicky Donnelly presents her research]

Amongst this varied and excellent selection of talks, EngLaID DPhil student Vicky Donnelly spoke about her research into the role of grey literature in archaeology and what it can tell us.

Feedback on the conference was mostly very positive, with some minor complaints about the lack of internet access for non-academic attendees.

[Figure: Mark Gillings presents the keynote lecture]

Particularly inspiring was our keynote speaker, Dr Mark Gillings.  Mark is Reader in Archaeology at the University of Leicester and a well-known figure in the field of archaeological computing.  He gave an excellent lecture on what he terms “Geosophical Information Systems”, which is (I believe) an attempt to reframe archaeological GIS as a more exploratory technique.  Particularly resonant with me were his ideas about “shallow but juicy” GIS experiments.

On the Friday evening, a beer reception was held in the Pitt Rivers Museum, which seemed to be thoroughly enjoyed by all who attended.

Amazingly, despite the presence of a large number of archaeologists for two hours, only two-thirds of the beer provided was drunk!  But we made a very good effort.

[Figure: Conference beer ration!]

The conference Twitter feed can be found under the hashtag #caauk14 or on Storify.

Chris Green

PAS ‘affordances’

Building on Anwen’s recent work on her Isle of Wight case study, we have recently been playing around with sampling biases in the PAS (Portable Antiquities Scheme) data.  This is in very large part based upon the pioneering work of Katie Robbins, who did her PhD on the subject and is now continuing that work as a postdoc (see references below: Katie’s thesis is available online).

Katie discussed many different relevant factors in her work, but three stood out to us as being particularly suitable for spatial modelling on a national scale: land cover, obscuration, and proximity to known monuments.  Other factors, such as landowner permissions or proximity to detectorists’ houses, would be very difficult to map nationally without a great deal of work.

Land cover: Using a simple reclassification of LCM 2007 data (via Edina Digimap), around 69% of PAS findspots of our period fall upon arable land, c.21% on grassland, c.4% in suburban areas, just short of 3% in woodland, and c.1% in urban areas.  Other land cover types each accounted for less than 1% of PAS findspots.  The affordance surface constructed for this category was given a weighting of 1.0 for arable cells, with every other type weighted relative to this (e.g. grassland was given a weighting of 0.2133/0.6914, or 0.31).
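
As a minimal illustration of that weighting calculation (using the proportions quoted above; the dictionary below is a sketch, not the full LCM class list):

```python
# Each land cover class's share of PAS findspots, scaled relative to arable land.
proportions = {
    "arable": 0.6914, "grassland": 0.2133, "suburban": 0.04,
    "woodland": 0.029, "urban": 0.01,
}
weights = {cls: round(p / proportions["arable"], 2) for cls, p in proportions.items()}
print(weights)  # arable -> 1.0, grassland -> 0.31, etc.
```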

Obscuration: Various other factors can completely block the possibility of finding artefacts through metal detecting (although other methods of discovery remain possible, such as spotting something sitting on a molehill whilst out on a walk).  Easily mappable elements that fall within this category are: scheduled monuments (via EH), Forestry Commission land (via the Forestry Commission), ancient woodland, country parks, local nature reserves, national parks, RAMSAR sites, SSSIs (all via Natural England), and built up areas (via OS OpenData).  The affordance surface was constructed by combining shapefiles for all of these elements, calculating the percentage obscuration of 1 by 1km grid cells and then constructing a kriged surface from the centroids of that data with 100 by 100m cells.  This was then reclassified so that 0.0 represented high obscuration (i.e. low affordance) and 1.0 low obscuration (i.e. high affordance).  Incidentally, the South Downs National Park is the one National Park with a relatively high number of PAS finds, as it was only designated in 2011, but I decided not to correct for this at this time.
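
To reproduce the obscuration step outside ArcGIS, a rough sketch might look like the following (file names are hypothetical, and the kriging stage is omitted):

```python
import geopandas as gpd
import pandas as pd

# Merge the obscuring layers into one geometry (paths are hypothetical),
# reprojecting everything to British National Grid (EPSG:27700).
layers = ["scheduled_monuments.shp", "fc_land.shp", "built_up_areas.shp"]
obscuring = pd.concat([gpd.read_file(f).to_crs(27700) for f in layers],
                      ignore_index=True)
obscured_union = obscuring.geometry.unary_union

# Percentage obscuration of each 1 by 1km analysis cell, inverted so that
# 1.0 = unobscured (high affordance) and 0.0 = fully obscured.
grid = gpd.read_file("grid_1km.shp").to_crs(27700)
cell_area = 1_000_000.0  # square metres in a 1km cell
grid["affordance"] = 1.0 - grid.geometry.intersection(obscured_union).area / cell_area
grid.to_file("obscuration_affordance.shp")
```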

Proximity to monuments: I undertook a simple spatial concurrence test of 1 by 1km grid cells (via our latest synthesis iteration: see this post for discussion of methodology), testing presence of finds against presence of “monuments” (in the broadest sense) of each broad monument class for each of our period categories (e.g. Roman finds vs Roman agriculture and subsistence).  The major areas of concurrence between (broadly) contemporary finds and monuments were with Roman monuments of most types and with early medieval monuments of a funerary nature.  Centroids of grid cells containing Roman monuments of most types or early medieval funerary monuments were used to construct a kernel density estimate layer, which was then tested against the PAS distribution for our period.  However, the relationship was not particularly strong, so this layer was reclassified: any value above the first quartile of the surface was given an affordance value of 1.0, with values below that scaled relative to the first quartile.
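
The reclassification itself is trivial once the kernel density surface is loaded as an array; a minimal sketch:

```python
import numpy as np

# kde: 2-D array of kernel density values for the monument layer (assumed loaded)
q1 = np.nanpercentile(kde, 25)            # first quartile of the surface
affordance = np.clip(kde / q1, 0.0, 1.0)  # values >= q1 become 1.0; below, scaled
```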

The relationship between these three derived affordance surfaces and the relevant PAS data was then graphed to see how valid the model appeared.  Each line produces something close to the expected pattern.

[Figure: Comparison of different PAS affordances, including the mean of the three coloured lines]

Combining the three input factors into a mean-averaged model produces a very strong result in terms of spatial patterning.  Looking at the black combined line on the graph, we can see that c.60% of PAS records have an affordance value (labelled ‘bias’ on the graph’s axis) of over 0.8 and that c.90% exceed 0.6.  This is a strong pattern, showing that areas of high affordance on our map are much more likely to feature PAS finds than areas with low affordance.

Plotting individual findspots onto the map of this surface shows that most fall within high affordance areas.  We can also see this quite clearly if we plot a kernel density estimate of PAS finds (Bronze Age to early medieval) over the affordance surface (red is low affordance, blue is high), although the interpolation does result in some false overlaps with small areas of low affordance (particularly in East Anglia):

[Figure: Main distribution of PAS finds of our period (Bronze Age to early medieval) over the PAS affordance surface]

Two things stand out from this map: (a) finds cluster in areas of high affordance; and (b) there are areas of high affordance with few finds.  (a) is an excellent result, as it shows that the model is teaching us something valid.  (b) can be explained in several possible ways (most likely a combination of all three): differences in detecting practice, differences in reporting practice, and the presence of other biases feeding into affordance but not included in the model.

There are some areas of “double jeopardy” feeding into this model, particularly between the obscuration and land cover layers (e.g. buildings appear both in land cover, as urban / suburban, and in obscuration; most national parks are of an upland / wild character in land cover).  However, as the pattern seems robust, I am not too worried about this for now.  A more developed model might, instead of taking the mean average of the three surfaces, take the mean average of the land cover and monument surfaces multiplied by the obscuration surface.  I will experiment with this later, perhaps.
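
In raster algebra terms, the two variants would be something like this (a sketch, assuming the three affordance surfaces are loaded as co-registered numpy arrays):

```python
# Current model: simple mean of the three affordance surfaces.
affordance = (land_cover + obscuration + monuments) / 3.0

# Possible refinement: obscuration as a hard multiplier on the mean of the
# other two, so that fully obscured cells drop to zero affordance.
affordance_alt = obscuration * (land_cover + monuments) / 2.0
```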

As such, although our model is clearly not perfect (but then, no model ever will be), it does help us to understand something of the underlying affordances helping to shape the distribution of PAS data.  The next stage in this analysis will be to use the affordance surface to try to smooth out variation caused by this factor in our PAS distributions.

Chris Green

References:

Robbins, Katherine.  2013a.  From past to present: understanding the impact of sampling bias on data recorded by the Portable Antiquities Scheme.  Doctoral thesis, University of Southampton.

Robbins, Katherine.  2013b.  “Balancing the scales: exploring the variable effects of collection bias on data collected by the Portable Antiquities Scheme.” Landscapes 14(1), pp.54-72.

OS terrain models

This week, in an attempt to avoid any substantive work, I have been playing around with the Ordnance Survey’s Digital Terrain Models (DTMs), which are available for free as part of their OpenData archive to anybody who wishes to use them.  The spur for this was the launch in July of a new DTM on the OpenData site.

Previously (and still today), the OS made available a dataset known as PANORAMA.  This was created using contour data surveyed in the 1970s.  In order to turn this into a rasterised DTM, an interpolation algorithm (I don’t know which) was used to estimate elevation values between the contours, resulting in a continuous field (50m by 50m pixels) of elevation values for all of the UK.  The heights in PANORAMA are recorded as integers, i.e. to the nearest whole metre.

In July, the OS released a new product, known as Terrain 50.  This DTM was created using LiDAR data surveyed from the air and then averaged out to 50m by 50m grid cells.  A lot of data processing goes into turning raw LiDAR data into a terrain model, but this all takes place behind the scenes, so it is difficult to know exactly what has been done.  The heights in Terrain 50 are recorded as floating point numbers, so apparently convey more precision than PANORAMA.  However, due to the relatively coarse nature of the grid used (50m by 50m pixels), this carries a degree of spurious precision (as we are inevitably dealing with averages).

This map shows both products for comparison:

[Figure: Comparison of PANORAMA and Terrain 50 DTMs for an area of East Yorkshire / Humber]

Certain things stand out when you compare these images, and even more obviously when you look at the hillshade:

[Figure: Comparison (hillshade) of PANORAMA and Terrain 50 DTMs for an area of East Yorkshire / Humber]

The main things to note are:

  • The contour origin and whole-number data model of PANORAMA produce a stepped plateau appearance, which is especially apparent in areas of gradual change in elevation.
  • PANORAMA produces a substantially smoother picture of change in elevation over space.
  • Terrain 50 appears much more accurate, but also “noisy”.
  • Human impacts on the landscape (e.g. quarrying) show up much more obviously in Terrain 50.

On the face of it, Terrain 50 looks a much more accurate representation of the terrain of the UK and, as such, would likely be most people’s first choice between these two DTMs.

As I have so far been working with the PANORAMA DTM, I wanted to test how different it was from Terrain 50, in order to see if I should go back and rerun some of my analyses with the newer product.  The simplest way to do this is to compare the elevation values recorded in each product for the same piece of terrain, i.e. subtract one grid from the other in the Raster Calculator in ArcGIS and then calculate some basic statistics on the result.

However, this is complicated somewhat by the fact that the two grids are not aligned directly on top of each other: the origin of a pixel in one is in the middle of a pixel in the other, i.e. they are offset by 25m east / west and 25m north / south.  To enable a direct comparison to be made, I reprocessed the PANORAMA DTM to split each cell into four and then aggregated sets of four cells (using the mean) on the same alignment as Terrain 50.  This will have resulted in some smoothing of the resulting surface, I expect, but hopefully not to the extent of making the comparison invalid (as PANORAMA already possessed a relatively smooth surface).
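
In numpy terms, the realignment and comparison might be sketched as follows (assuming both DTMs have been loaded as arrays of 50m cell values; the array names are hypothetical):

```python
import numpy as np

def realign_panorama(panorama: np.ndarray) -> np.ndarray:
    # Split each 50m cell into four 25m cells by duplication...
    fine = np.repeat(np.repeat(panorama, 2, axis=0), 2, axis=1)
    # ...shift by one 25m subcell (the 25m east/west, north/south offset)...
    shifted = fine[1:-1, 1:-1]
    # ...then aggregate 2x2 blocks by their mean on the Terrain 50 alignment.
    h, w = shifted.shape
    return shifted.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Absolute elevation difference (terrain50 cropped to the same extent, assumed)
diff = np.abs(terrain50 - realign_panorama(panorama))
print(diff.mean(), diff.std(), np.percentile(diff, 75))
```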

The results can be seen on this map:

[Figure: Difference between PANORAMA and Terrain 50 cells]

White cells show little difference.  Yellow cells are slightly higher elevation in Terrain 50 and red cells are significantly higher.  Cyan cells are slightly higher elevation in PANORAMA and blue cells are significantly higher.  Certain things stand out on this map:

  • Differences between the two DTMs are greatest in upland areas.  This will at least partly be due to the need to draw contours legibly, which forces cartographers to underplay the steepness of very steep slopes.
  • The sea tiles are quite interesting in the way they vary.  This seems to be due to PANORAMA using a single value for sea cells across the whole dataset, whereas Terrain 50 seems to use a single value for sea cells on each 10km by 10km tile, but different values between tiles.
  • We can also see some differences that are much greater on one side or the other of tile divisions aligning with 1000m divisions on the OS grid.  This must be due to Terrain 50 data being processed on a tile by tile basis, more on which later.

Overall, however, the differences between the two DTMs are not great.  If we remove the negative sign from the difference layer (by squaring and then square-rooting the result, i.e. taking the absolute value) and clip out sea cells, we can plot a histogram of the difference in elevation (across all 92 million cells):

[Figure: Histogram of elevation difference between PANORAMA and Terrain 50]

From this graph, we can see that although there are cells with differences of nearly 230m, the vast majority of cells are within 5m of their counterpart’s elevation.  The mean difference is 1.91m and the standard deviation 2.26m; 75% of all values are within 2.5m of their counterpart.  As such, PANORAMA and Terrain 50 are actually very similar in the elevations they record.

We can also plot this difference layer on a map, with some interesting results:

[Figure: Difference in elevation between PANORAMA and Terrain 50 for an area of Somerset (black = no difference, white = high difference)]

Black cells on this map show no difference or minimal difference, shading up through grey to white for cells of relatively high difference in elevation between the two DTMs.  Certain features stand out, some of which I have annotated onto this map:

[Figure: Difference in elevation between PANORAMA and Terrain 50 for an area of Somerset (black = no difference, white = high difference), with features annotated]

The motorway is clearly a feature that appears in Terrain 50 but not in PANORAMA.  The contour lines are clearly an artefact of the origins of PANORAMA.  The reservoir presumably reflects a similar issue to the sea level variation.  The variation on the Mendips is presumably due to the “noisier”, more precise nature of Terrain 50 contrasting with the smoothed appearance of PANORAMA.

The appearance of the grid lines worries me somewhat, though.  They were not apparent (to my eye) when looking at the raw data or hillshade layers for either dataset, so presumably they are the result of quite a subtle effect.  My assumption (as mentioned above) is that these arise from the LiDAR data behind Terrain 50 being processed as a series of tiles rather than as a single dataset: this is of course inevitable, as a continuous high resolution LiDAR dataset for all of the UK would be mind-bogglingly immense.  My fear is that any sensitive analyses of terrain using Terrain 50 might show up these grid edges in their results.  However, this is even more true of the 1m contour “cliff edges” that appear in PANORAMA.  At least grid lines will be obvious to the human eye if they do cause strange effects.

So, what does this all mean?  Well, I would argue that the generally minimal difference between elevations recorded for the same place in the two datasets means that previous analyses (especially coarse analyses) undertaken using PANORAMA should not be considered invalidated by the (presumably) more accurate new Terrain 50 DTM.  Also, the “noisy” nature of Terrain 50 and the presence therein of more features of human origin might mean that the smoother PANORAMA could still be the best choice of DTM for certain applications (especially in archaeology, where features like the M5 would not generally be a useful inclusion).

Chris Green

Fuzzy time (and the PAS)

We’ve been thinking recently about why and how we might apply the concept of temporal fuzziness (uncertainty) to our data, particularly because it is a research interest of mine (see Green 2011 for more details).

The reason why dealing with temporal fuzziness is important is well illustrated by the following graph, based upon the work of Frédéric Trément.  The graph shows how a dating of this villa site based purely upon the well-dated finewares would disguise the fact that the villa remained very active into the fifth and sixth centuries, which actually account for the greatest amount of coarseware pottery.  If you ignore the coarsewares because of their poor dating, you thus produce a false narrative of the history of activity on the site.

[Figure: Comparison of fineware and coarseware dates from the villa site at Sivier, France (redrawn from Trément 2000, Fig 9.16)]

One way in which we can include less closely dated material in our analyses is to take account of temporal fuzziness.  In essence, this means defining a set of sub-periods and then calculating the probability (as a percentage in this simplest instance) of each object in the dataset falling within each sub-period.  This is essentially an adaptation of aoristic analysis, created for the study of crime patterns by Ratcliffe (his 2002 paper covers a more robust method than his previous work) and experimented with by various archaeologists.  Where appropriate, we can then sum these probabilities for each time-slice, to produce a model of changing deposition over time.
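
At its core, the calculation is just an overlap between each object’s date range and each sub-period, assuming (as the aoristic method does) a uniform probability distribution across the object’s date range.  A minimal sketch:

```python
def aoristic_weights(start, end, bins):
    """Probability of an object dated [start, end] falling in each sub-period.

    Assumes the object's date is uniformly distributed across its range.
    `bins` is a list of (sub_start, sub_end) tuples; years BC are negative.
    """
    span = end - start
    weights = []
    for lo, hi in bins:
        overlap = max(0, min(end, hi) - max(start, lo))
        weights.append(overlap / span if span > 0 else float(lo <= start < hi))
    return weights

# e.g. a Roman object dated AD 43-410, measured against century bins
centuries = [(c, c + 100) for c in range(0, 500, 100)]
print(aoristic_weights(43, 410, centuries))
```

Summing these weights across all objects for each sub-period then gives summed probability curves like those shown below.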

The most obvious dataset of ours to apply this fuzzy temporal analysis to is the PAS (Portable Antiquities Scheme) data.  This is because most PAS records represent a single object which has had start and end dates defined for it by the PAS team.  Some records need start and end dates adding (based upon the start and end periods or, in the absence of those, the broad periods) and some records need their start and end dates correcting (typically where they have been mistakenly reversed or where dates BC have not been given negative numbers), but all of this is possible to automate using Python scripts.  Once this data standardisation has been completed, it is then possible to define a set of sub-periods and calculate the probability of each object falling within each sub-period (again, using a Python script).
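
The clean-up itself is straightforward; a hedged sketch of the kinds of correction described (the field names below are hypothetical, not the actual PAS schema):

```python
def clean_dates(record):
    start, end = record["fromdate"], record["todate"]
    # BC dates recorded as positive numbers become negative
    if record.get("from_era") == "BC" and start > 0:
        start = -start
    if record.get("to_era") == "BC" and end > 0:
        end = -end
    # mistakenly reversed start/end pairs are swapped
    if start > end:
        start, end = end, start
    return start, end
```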

[Figure: PAS: summed probability by century]

The graph above shows the summed probabilities of PAS data when calculated and collated by centuries.  We can see here the general temporal profile of the PAS for our period, involving low levels of Bronze Age finds, increasing activity during the Iron Age, especially after the introduction of coinage, a massive increase through the Roman period, and then a return to lower levels of activity through the early medieval period.

[Figure: PAS: summed probability by century, only objects of greater than 90% probability]

The graph above then shows how the summed probabilities look if we only include objects with a greater than 90% chance of falling within each century.  Obviously, this example is a little fatuous, as it is not really very easy to date objects that precisely prior to the introduction of coinage, but it does make the point that only including very precisely dated material produces a biased temporal pattern.

[Figure: PAS: summed probability by century, count of objects of greater than 0% probability]

At the opposite extreme, the graph above shows the count of objects within each century that have a greater than 0% probability of falling within said century.  Thus, in this graph, if an object spans three centuries, it is counted equally in all three.  Naturally, this method then produces another biased temporal pattern, this time over-representing activity in each century.

As such, the first graph, which takes account of the probability of every object, is, in my opinion, the most honest representation of the temporal pattern.  However, as hinted above, century brackets are not really ideal, as objects can only very rarely be dated that precisely before coinage came into use.

[Figure: PAS: mean probability by century]

This graph shows the mean probability (from 0.0 [0%] to 1.0 [100%], albeit the graph doesn’t scale that far) for all of the objects within each century bracket.  It shows that (on average) Middle Bronze Age and most Iron Age material is coarsely dated, that Later Bronze Age and Late Pre-Roman Iron Age material is better dated, that Roman material (particularly 4th century) is finely dated, and that most early medieval material falls somewhere between the prehistoric and Roman data in terms of its precision of dating.  The very low probabilities in the 5th century partially reflect the post-Roman transition, but are likely to be largely caused by the huge bulk of essentially 4th century Roman material that has been given an end date of 410 or 411.

The conclusion I draw from this graph is that we need to vary the width of our sub-periods over time to reflect the changing precision of dating within each period.  This ought to produce the most useful representations of temporal pattern, and is just as simple to calculate as fixed century blocks.
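
For illustration, fixed and variable-width groupings might be defined like this and fed to the same probability calculation (the boundary years below are approximate conventions, not the exact sets used in the graph):

```python
# Fixed century brackets from 1500 BC to AD 1100 (years BC are negative)
centuries = [(y, y + 100) for y in range(-1500, 1100, 100)]

# Variable-width sub-periods, wider where dating is coarser (illustrative only)
variable = [
    (-1500, -800),            # Middle / Late Bronze Age: coarsely dated
    (-800, -100),             # earlier Iron Age
    (-100, 43),               # Late Pre-Roman Iron Age: better dated
    (43, 200), (200, 410),    # Roman: finely dated
    (410, 650), (650, 1066),  # early medieval
]
```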

[Figure: PAS: alternative sub-period groupings]

This final graph, then, shows the summed probabilities for three different possible sets of sub-periods.  The y-axis is the summed probability and the x-axis is time from -1500 (1500 BC) to +1065 (AD 1065).

The orange line shows the same century brackets as before, which clearly is the worst model, as it reduces prehistory to a low-level trace yet also removes significant change in the Roman period (notably the sharp drops around AD 200 seen in both other lines).

The blue line shows a set of conventional sub-periods.  This shows a much more interesting temporal pattern than the century brackets.

The red line shows an alternative set of sub-periods, designed to break away from conventional date assignments and to take more account of the changing precision of dating over time.  This is probably my preferred model, but there is no reason to make hard and fast choices: we can continue to experiment with multiple sets of sub-periods for now.

In the context of our project, this is very much preliminary work, intended to test out some of the possibilities of working with fuzzy temporality using our datasets.  I have also begun experimenting with building the EMC (the Early Medieval Corpus of Coin Finds maintained at the Fitzwilliam Museum, University of Cambridge) into this dataset.  There is also potential for doing something similar with the HER data that we have gathered, albeit implementation is more complex due to the variable structure of that data.  Once we have our methodology nailed down, it will become possible to construct graphs like the final one above for different types of object or for different regions of England.  We could also create a series of maps showing changing probabilities over time, perhaps combined into animations.

Whether this proves fruitful, only time will tell, but I do believe that this type of analysis has great potential for helping to explore continuity and change in EngLaId data.

Chris Green

References:

Green, C.T. 2011. Winding Dali’s clock: the construction of a fuzzy temporal-GIS for archaeology.  BAR International Series 2234.  Oxford: Archaeopress.

Ratcliffe, J.H. 2002. “Aoristic signatures and the spatio-temporal analysis of high volume crime patterns.” Journal of Quantitative Criminology 18(1), pp. 23-43.

Trément, F. 2000. “Prospection et chronologie: de la quantification du temps au modèle de peuplement. Méthodes appliquées au secteur des étangs de Saint-Blaise (Bouches-du-Rhône, France).” In Francovich, R. and Patterson, H. (eds.) Extracting meaning from ploughsoil assemblages. Oxford: Oxbow, pp. 77-91.

Extracting trends (V)

One final post (for now) on extracting trends [see: (1)(2)(3)(4)]…

As I suggested I would in my last post on trend surfaces, I have been experimenting today with constructing individual trend surfaces for the four main broad periods of interest to our project in the NRHE data.  I have also been experimenting with some alternative colour schemes after talking to our project artist, Miranda.  So, without further ado, here are the logistic presence / absence trend surfaces for the Bronze Age (excluding specifically Early Bronze Age), Iron Age, Roman, and early medieval periods (blue being low likelihood and red being high):

[Figure: Logistic trend surface (12th power) for NRHE data for the Bronze Age]
[Figure: Logistic trend surface (12th power) for NRHE data for the Iron Age]
[Figure: Logistic trend surface (12th power) for NRHE data for the Roman period]
[Figure: Logistic trend surface (12th power) for NRHE data for the early medieval period]

These patterns all look intuitively sensible, albeit with some possible edge effects along the coastlines (as a trend surface becomes more unreliable towards its edges, due to comparative lack of data).  To take this further, we can then compare the difference between each surface and its preceding period (not including Bronze Age, as we are not so interested in the Neolithic / EBA):

[Figure: Difference between Iron Age and Bronze Age trend surfaces]
[Figure: Difference between Roman and Iron Age trend surfaces]
[Figure: Difference between early medieval and Roman trend surfaces]

Again, these results do appear to make sense: some changes of focus between the Bronze Age and Iron Age (with a continuing focus in Wessex); a massive expansion in activity / visibility between the Iron Age and Roman periods, excluding the far north (beyond the Wall) and the south west, with a particular increase in the east of England, north of London; and then a large reduction in activity / visibility across the peak of Roman activity moving into the early medieval period.

The next stage would be to start building data from our other sources into these models, but that will be something for the future.

Chris Green

Extracting trends (IV)

I have been having another little play around today with extracting trends from our data [previous: (1)(2)(3)], this time from English Heritage’s National Record of the Historic Environment (NRHE).  We have this data for all time periods for all of England, except London (as such, London is masked out in the maps below).  I was wondering how the broad trends in this data for our period would compare to the broad trends for all time periods.  This time, I created logistic trend surfaces, which vary between 0 and 1 to reflect a binary record of presence or absence.
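
For the curious, a logistic trend surface of this kind can be sketched as a high-order polynomial logistic regression on cell coordinates.  The following is a minimal illustration (not the exact implementation used here), assuming `xy` is an array of grid cell centroids and `present` a matching 0/1 array of record presence:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

model = make_pipeline(
    StandardScaler(),               # scale coordinates so high powers stay stable
    PolynomialFeatures(degree=12),  # 12th-power trend in easting / northing
    LogisticRegression(max_iter=1000),
)
model.fit(xy, present)
trend = model.predict_proba(xy)[:, 1]  # fitted 0-1 likelihood surface
```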

The result for all time periods showed that there was a very consistent presence of NRHE records across pretty much all of England, with the exception of southern Cumbria and the Scottish Borders:

[Figure: Logistic trend surface (12th power) for NRHE data for all time periods]

However, the picture for our overall time period of interest was very different (Bronze Age, Iron Age, Prehistoric, Roman, early medieval):

[Figure: Logistic trend surface (12th power) for NRHE data for the Bronze Age through to the early medieval period (plus prehistoric)]

Here, we can see that there is a clear peak across England from Wessex across the Home Counties and up towards North Yorkshire, with clear troughs in the Weald, most of the south west, and most of the West Midlands, north east and north west.  Smaller peaks exist in north Northumbria, north Cumbria, and south west Cornwall.  The fact that data clearly exists in great quantities for later periods across some of these troughs could perhaps suggest that the troughs represent a genuine absence of activity during our time period of interest, as archaeology has clearly been found there for other time periods.

As these are both logistic trend surfaces that vary across the same numerical scale from 0 to 1, we can also perform some simple mathematical calculations using the rasters as algebraic terms:

[Figure: Trend surface for all time periods subtracted from trend surface for EngLaId time periods]

On this surface, a value of -1 shows a strong trend across all periods but a weak trend within our time period; a value of 0 shows similar trends in both; and a value of 1 shows a strong trend in our period with a weak trend across all periods as a whole.  However, as should be expected, because the trend surface for all periods is distinctly high value for most of the country (and because it includes our time period), no areas have come out with a strong trend in our period and a weak trend across all periods.  As such, this result is not particularly interesting, but it might be made more so by removing the data for the EngLaId time periods from the “all periods” data, or by comparing two specific time periods (e.g. Roman against early medieval, or Roman against Iron Age).  I shall continue to experiment.

Happy new year!

Chris Green

Processing raster NMP tiles (part 3)

We are now in receipt of all the NMP data (and associated NRHE data) currently possessed by English Heritage, alongside a couple of regions (Norfolk and Essex) kindly supplied directly by the local HERs.  We would like to extend our thanks to Simon Crutchley, Lindsay Jones and Poppy Starkie for their work in pulling together these datasets for us.

I have previously discussed methodologies for processing the scanned (raster) maps which represent the results of the earlier NMP surveys (1) (2).  I am reasonably satisfied with the polygon result, but one issue that I have discussed with Simon Crutchley is whether it is possible to convert areas of rig and furrow (drawn with a dotted outline) into polygons representing their extent (rather than individual polygons for each dot).  Here is an example of the raster NMP:

[Figure: Raster NMP example]

And here is the same data converted into polygons (with grid marks removed):

[Figure: Vectorised polygons generated]

The first stage in converting the dotted outlines into filled polygons is to generate the line version of the same raster input data:

[Figure: Vectorised lines (red) generated, overlaid on polygons]

We then create a 5m buffer around these lines (i.e. total width 10m):

[Figure: 5 metre buffers around lines (red)]

And then use this buffer layer to delete most linear features from the polygon version of the data (using the Erase tool in ArcGIS):

[Figure: Buffered areas erased from polygons]

Most of the remaining objects are associated with areas of rig and furrow.  However, we can further improve the result.  First, we recalculate the areas of each polygon and filter the layer down so that we are only dealing with polygons of between 3 and 30 square metres in extent:

[Figure: Erased result filtered down to polygons of 3 to 30 sq. metres]

This removes a few remaining linear features that were not previously erased.  Next, we generate centroids for each polygon (using the Feature to Point tool), run the Near tool on the result to get the distance from each point to its nearest neighbour, and filter out those points that are more than a certain distance from their nearest neighbour (blue dots below; red dots are retained).  In this instance, I chose 40 metres, but I think a smaller value would have been better (probably 30m):

[Figure: Centroids generated for each polygon, filtered down to those within 40 metres of another point (red); blue points were eliminated]

We now have a point layer which for the most part represents the vertices for creating our rig and furrow polygons.  We can run the Near tool again, this time asking it to give the spatial location of the nearest neighbouring point for each point, and use the Calculate Geometry tool to insert two fields into the layer giving the location of each origin point.  The XY to Line tool can then create lines between each point and its nearest neighbour:

[Figure: Lines generated from points to their nearest neighbour]

This result is getting fairly close to what we desire, but it has some considerable problems.  First, none of the lines perfectly enclose the rig and furrow areas, making it currently impossible to process this result directly into a polygon layer.  It might be possible to fill some of these gaps using the Extend Line tool, but that is a very computationally intensive task and liable still to produce an imperfect result (I left it running for 24 hours on this relatively small dataset before giving up and cancelling it).  Second, in some instances, two parallel lines of dots can be closer to each other than the dots are within each line.  In those cases, the generated lines are drawn as links between the two parallel lines rather than along each individual line.

As a result, if we were currently to try to convert this result into polygons, we would first need to fill in all of the gaps manually using editing tools and possibly also delete all of the small sections of line that have no association with rig and furrow (albeit we could ignore most of these, as they will not produce closed polygons in the result or, if they do, the polygons are likely to be small in area and thus easy to filter out).  I do not think it is possible with out-of-the-box tools in ArcGIS to improve upon the line generation result, although it might be possible to develop a new tool to do so (perhaps with a directional bias towards the nearest neighbour to encourage linearity?).
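
For reference, the chain of geoprocessing steps described above might be scripted in arcpy along these lines (dataset names are hypothetical; tolerances as in the text):

```python
import arcpy

arcpy.env.workspace = "C:/data/nmp.gdb"  # hypothetical workspace

# 1. Buffer the vectorised lines by 5m and erase them from the polygons
arcpy.Buffer_analysis("nmp_lines", "line_buffer", "5 Meters")
arcpy.Erase_analysis("nmp_polygons", "line_buffer", "dots_only")

# 2. Keep only polygons of 3 to 30 square metres (the rig and furrow dots)
arcpy.MakeFeatureLayer_management("dots_only", "dots_lyr")
arcpy.SelectLayerByAttribute_management(
    "dots_lyr", "NEW_SELECTION", "Shape_Area >= 3 AND Shape_Area <= 30")
arcpy.CopyFeatures_management("dots_lyr", "dots_filtered")

# 3. Centroids, nearest-neighbour distances, drop isolated points (> 40m)
arcpy.FeatureToPoint_management("dots_filtered", "centroids", "CENTROID")
arcpy.Near_analysis("centroids", "centroids", location="LOCATION")
arcpy.MakeFeatureLayer_management("centroids", "cent_lyr")
arcpy.SelectLayerByAttribute_management(
    "cent_lyr", "NEW_SELECTION", "NEAR_DIST <= 40")
arcpy.CopyFeatures_management("cent_lyr", "centroids_near")

# 4. Add each point's own coordinates, then draw a line from each point
#    to its nearest neighbour (NEAR_X / NEAR_Y come from the Near tool)
arcpy.AddXY_management("centroids_near")
arcpy.XYToLine_management(
    "centroids_near", "generated_lines",
    "POINT_X", "POINT_Y", "NEAR_X", "NEAR_Y")
```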

I shall keep thinking about how to improve this process.

Chris Green