Geocomputation for Geoscience: Crowdsourcing

PhD student and King’s Geocomputation member Alejandro Coca-Castro attended Europe’s premier geosciences event, the European Geosciences Union (EGU) General Assembly, in Vienna, Austria (April 24th – 28th 2017). In addition to presenting his preliminary PhD results in the session “Monitoring the Sustainable Development Goals with the huge Remote Sensing archives”, Alejandro kindly dedicated part of his attendance at EGU to capturing the emerging Geocomputation fields applied to Geosciences, in particular for land and biosphere research. In this post Alejandro summarises the latest advances in crowdsourcing presented at EGU, which he sees as one of the two main emerging fields revolutionizing data-driven analysis and knowledge production.



Public participation in science is on the rise, and citizen science is playing a fundamental part in this. Citizen science is the participation of the public (non-professional scientists) in scientific research – whether it be in data analysis, data collection, community-driven studies or global research. According to a recent special issue of the Remote Sensing Journal, citizen science and projects based on user-generated content have dramatically increased during the last decade, in particular to support analysis based on Earth Observation and environmental sensing data. The EGU session “Citizen science and observatories for environmental monitoring, planning, and disaster resilience building” presented developments in the management of crowd-sourced environmental data, and how it can be used in the context of policy support and local planning.

[Image: picturepile]

Fig 1. Traditional scientific data-driven analysis is increasingly complemented by so-called ‘citizen science’, through which citizens can contribute to science and raise awareness of global sustainability challenges. Source: Geo-wiki (2017).

One of the research initiatives presented in the session was the successful Geo-wiki project led by the International Institute for Applied Systems Analysis (IIASA). By involving volunteers from all over the world, the Geo-wiki project has been able to tackle environmental monitoring problems relating to flood resilience, biomass data analysis and classification of land cover. Geo-wiki’s most recent campaign, ‘Picture Pile’, was presented to EGU attendees as a citizen-powered tool for rapid post-disaster damage assessments (Figure 2, below). Picture Pile, which was originally designed to identify tree loss over time from pairs of very high resolution satellite images, announced the start of its new campaign to crowdsource post-disaster data from Hurricane Matthew, which devastated large regions of Haiti in October 2016. According to the campaign’s authors, “the proposed campaign will not only help to increase citizen awareness of natural disasters, but also provide them with a unique opportunity to contribute directly to relief efforts”. Anyone can get involved in the current Picture Pile campaign and further info is provided here.

[Image: paperpile]

Figure 2. Example of the mobile application interface designed as part of the Picture Pile campaigns for crowdsourcing rapid post-disaster damage assessments in developing countries. Source: IIASA (2017).

Dr. Steffen Fritz, lead of the Geo-wiki project, explained to me that the success of the Geo-wiki campaigns rests on the transparency of the project (i.e. making all the collected data openly available), on dedicated research investment in rigorous methods and collaborative networks to use, analyse and recycle the collected data, and, last but not least, on fair acknowledgement of all the volunteers involved (e.g. by co-authoring them on peer-reviewed publications derived from each campaign).

Dr. Fritz admits that even though the use of crowdsourcing for earth observation is still at an early stage, the huge potential arising from the combination of the two data streams is already very clear. Challenges remain, not least the need for more efficient methods to encourage citizens to collect data, the quality of crowdsourced data, data conflation, and the combination of crowdsourcing with other technologies and methods applied by experts (further details are provided here).



Interested in how Big Data technologies are revolutionizing the way we collect and extract knowledge for data-driven analysis? See Alejandro’s earlier post.

The author is grateful to the Geography Department Small Grants and the P4GES: Can Paying for Global Ecosystem Services reduce poverty? project for funding Alejandro’s attendance at the EGU General Assembly. Revision of the English version by Sarah Jones.

For updates about Alejandro’s research follow @alejo_coca on Twitter.


Geocomputation for Geoscience: The Earth System Data Cube

PhD student and King’s Geocomputation member Alejandro Coca-Castro attended Europe’s premier geosciences event, the European Geosciences Union (EGU) General Assembly, in Vienna, Austria (April 24th – 28th 2017). In addition to presenting his preliminary PhD results in the session “Monitoring the Sustainable Development Goals with the huge Remote Sensing archives”, Alejandro kindly dedicated part of his attendance at EGU to capturing the emerging Geocomputation fields applied to Geosciences, in particular for land and biosphere research. In this post Alejandro summarises the latest advances in Big Data technologies presented at EGU, which he sees as one of the two main emerging fields revolutionizing data-driven analysis and knowledge production.



Well-known remote sensing data producers such as the European Space Agency (ESA) and NASA are developing a wide range of data products relevant to understanding land surface processes and atmospheric phenomena, as well as human-caused changes. However, although there is an unprecedented variety of long-term monitoring data, it remains challenging to understand the exchange processes between the atmosphere and the terrestrial biosphere. To overcome this issue, ‘Big Data’ technologies are being proposed to tackle the question of how to simultaneously explore multiple Earth Observations (EOs).

[Image: EarthDataCube3]

Fig 1. Emerging Big Data technologies make it possible to co-explore multiple datasets with different characteristics and under different assumptions, in a more efficient and faster manner than traditional data management technologies. Source: M. Mahecha (2017) https://doi.org/10.6084/m9.figshare.4822930.v2

Amongst the collaborative initiatives presented at EGU, the Earth System Data Cube project, led by the Max Planck Institute for Biogeochemistry and funded by ESA, presented an emerging platform (E-Lab). The project aims to maximize the usage of ESA EOs and other relevant data streams. The main concept behind E-Lab is the so-called ‘Data Cube’. The cube enables handling and extracting information from a given georeferenced dataset, optimising the management of its spatial and time dimensions, which are used to split data into smaller sub-cubes of varying sizes. Dimensions X and Y are the spatial dimensions (i.e., latitude and longitude); the third dimension corresponds to time; and the fourth indexes the multiple variables or data streams themselves. All data uploaded into E-Lab sit under this elegant and efficient ‘Data Cube’ umbrella, and simultaneous exploration is enabled by a set of predefined preprocessing rules applied during the data ingestion process.

[Image: CABLAB_structure]

Fig 2. Representation of the ‘Data Cube’ concept and its related structure applied to three different data sets (V1, V2, V3). Source: Earth System Data Cube (2017).
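To make the idea concrete, here is a minimal sketch of a data cube using the Python xarray library. This illustrates the concept rather than the E-Lab API itself; the variables, grid and values are invented for the example.

```python
# A minimal sketch of the 'Data Cube' idea using xarray (not the E-Lab API).
# The variable names, grid and random values are illustrative placeholders.
import numpy as np
import pandas as pd
import xarray as xr

# Build a toy cube: latitude x longitude x time, with two data streams
lats = np.arange(-60, 61, 5.0)
lons = np.arange(-180, 180, 5.0)
times = pd.date_range("2001-01-01", "2010-12-01", freq="MS")
shape = (lats.size, lons.size, times.size)
cube = xr.Dataset(
    {
        "gpp": (("lat", "lon", "time"), np.random.rand(*shape)),  # productivity
        "lst": (("lat", "lon", "time"), np.random.rand(*shape)),  # temperature
    },
    coords={"lat": lats, "lon": lons, "time": times},
)

# Extracting a sub-cube is a label-based slice over the shared dimensions
subcube = cube.sel(lat=slice(-10, 10),                       # tropical band
                   time=slice("2005-01-01", "2005-12-31"))   # one year

# Because every data stream shares the same grid, cross-variable analysis
# reduces to ordinary array operations, e.g. a per-pixel anomaly product:
anomaly = subcube - subcube.mean("time")
print((anomaly["gpp"] * anomaly["lst"]).mean("time"))
```

The key design point is that once all data streams are ingested onto one shared spatio-temporal grid, comparing variables becomes a simple, fast array operation instead of a bespoke data-wrangling exercise for each pair of datasets.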

E-Lab provides scientists with a virtual online laboratory where the “Data Cube” can be explored, standard processing chains can be examined, and new workflows can be tested. JupyterHub is the underlying framework of the platform, which makes it simple for users to work on the data cube using the popular Jupyter notebook. The notebook supports high-level programming languages, mainly Julia and Python, with R support still somewhat underdeveloped at this stage.

The Earth System Data Cube initiative is a pioneering project offering an open and free-of-charge collaborative virtual platform, backed by a solid background in the analysis of large datasets and a sound understanding of the Earth system. However, a challenge remains with regard to the standards of data infrastructure, metadata and sharing protocols for existing and incoming projects, whether private or public, supported by the ‘Data Cube’ concept. A first step towards tackling this concern is being led by the EarthCube and NextGEOSS initiatives, which were also part of the transdisciplinary programme covered by this year’s EGU.



Interested in how crowdsourcing is revolutionizing the way we collect and extract knowledge for data-driven analysis? If so, look out for a blog post on the topic right here this coming Friday.

The author is grateful to the Geography Department Small Grants and the P4GES: Can Paying for Global Ecosystem Services reduce poverty? project for funding his attendance at the EGU General Assembly. Revision of the English version by Sarah Jones and of the content by Miguel Mahecha.

For updates about Alejandro’s research follow @alejo_coca on Twitter.


Urban mobility data analysis

Introducing a new member of King’s Geocomputation – Dr Chen Zhong! Chen joined King’s College London in September 2016 and her work on urban mobility directly contributes to the Geocomputation Research Domain. Here she provides a brief intro to her work.

“Space shapes transport as much as transport shapes space, which is a salient example of the reciprocity of transport and its geography.”

Rodrigue, Comtois et al. 2013

Quite often, I use this quote to explain the story behind my research. And I keep correcting people: I work on urban mobility, not transportation. The former, to me, has a much broader meaning and is about people and their interactions with the built environment.

[Image: skytrain]

Train in the sky, Singapore, 2013, source: Google

About Urban Mobility data

Most of my research explores the usage of automatically generated urban mobility data, such as smart-card data (my main source), mobile phone data and social media data. These types of data are generated by “citizens as sensors” as described by Goodchild (2007). People are carrying all kinds of sensors, such as mobile phones, smart wristbands and so on, all the time. The network formed by such sensors consists of the people themselves; therefore, it contains explicit spatial as well as implicit social information. These data sets offer us new potential to have a direct look into human behavior.

Compared to conventionally surveyed data, sensor data have a significant advantage in terms of granularity, coverage, efficiency and reliability. However, they are not perfect: often demographic information about the people carrying the sensors is absent. Nevertheless, these data sets still hold a significant advantage for pattern detection and behavior analysis, thanks to their large sample size and reduced questionnaire bias. The challenge is that the data are collected for purposes other than research, so we need to be creative about how to make the best use of them. This challenge, as I see it, is also the beauty of the “Big Data” concept.

Smart–card data

I would like to show a few examples from my previous research – big-data-informed urban planning – which is one of many potential uses of mobility data. The first is about investigating functional urban changes in Singapore. There, we used a set of urban indicators to identify human activity centres and the boundaries of urban regions. The changing structure of traffic flows over the years demonstrated the successful implementation of decentralization in Singapore. Moreover, the significant growth of emerging sub-centres reveals how rapid the urban development of Singapore has been, which is unique among developed countries. When we mapped out the redrawn regional boundaries (see image at top), even a non-analytical government officer immediately interpreted the graphics and presented to us the impact of new development on people’s location choices.

[Image: smart_singapore2013]

One-north MRT station, Singapore, 2013

Note: Smart-card data are generated by automatic fare-collection systems; in London, it is Oyster card data. To find out more about smart-card data, and the above-mentioned work, see my paper on detecting the dynamics of urban structure through spatial network analysis.

Comparing Cities

It is always interesting to compare things. We compared three world cities, namely London, Singapore and Beijing. You might expect Singapore to be the one with the most regular travel patterns on public transportation. Beijing, however, has the highest regularity with respect to “when to travel” and is second with respect to “where to go”. The most important reason is the passenger control measure applied at about 40 stations, where passengers are held outside the station during the morning peak and only allowed to enter at regular time intervals. Such queues can last for miles; passengers can either wait or switch to an alternative station or mode according to their situation. Moreover, this inconsistency in regularity can also be ascribed to another phenomenon unique to Beijing: the licence-plate-based traffic restriction measures, under which many private car owners drive on most days but must use the public transport system on one day each week. Though people in China sometimes complain about the inconvenience of such a policy, it does help reduce carbon emissions and relieve road congestion. Read more about this in my paper entitled Variability in Regularity: Mining Temporal Mobility Patterns in London, Singapore and Beijing Using Smart-Card Data.
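To give a flavour of what “regularity” means computationally, here is a minimal pandas sketch. The column names and the modal-share measure are illustrative only, not the exact method used in the paper.

```python
# Illustrative sketch of 'when to travel' / 'where to go' regularity from
# smart-card records. Column names are hypothetical; this is not the
# paper's exact measure.
import pandas as pd

# Each row is one tap-in: card ID, timestamp, origin station
trips = pd.DataFrame({
    "card_id": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "tap_in": pd.to_datetime([
        "2014-03-03 08:05", "2014-03-04 08:10", "2014-03-05 08:02",
        "2014-03-06 17:40", "2014-03-03 07:15", "2014-03-04 11:50",
        "2014-03-05 15:20", "2014-03-06 21:05"]),
    "origin": ["S1", "S1", "S1", "S2", "S3", "S4", "S5", "S3"],
})
trips["hour"] = trips["tap_in"].dt.hour

def modal_share(s: pd.Series) -> float:
    """Share of trips in the single most common value (1.0 = fully regular)."""
    return s.value_counts(normalize=True).iloc[0]

regularity = trips.groupby("card_id").agg(
    when=("hour", modal_share),     # regularity in 'when to travel'
    where=("origin", modal_share),  # regularity in 'where to go'
)
print(regularity)  # card A travels far more regularly than card B
```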

Outlook

Looking forward, I have some ideas in mind and could easily list at least three directions:

  1. Comparative study of cities using urban mobility data (of course, not limited to smart-card data) is an option, and is already ongoing;
  2. Linking urban mobility patterns to urban health is another direction that could greatly widen the horizon of my research;
  3. Cross-checking detected patterns with multi-sources data could enhance and deepen previous findings.

If you are interested in any of these ideas or you want to chat about your great idea, please do not hesitate to contact me.  To read more, click here.

References

Goodchild, M. F. (2007). “Citizens as sensors: the world of volunteered geography.” GeoJournal 69(4): 211-221.

Rodrigue, J.-P., Comtois, C. and Slack, B. (2013). The Geography of Transport Systems. Routledge.



Big Data and Bayesian Modelling Workshops

In this blog post two PhD students associated with the Geocomputation Hub – Alejandro Coca Castro and Mark de Jong – report back on workshops they recently attended. Alejandro attended a UK Data Service workshop and Mark an ESRC-funded advanced training course on Bayesian Hierarchical Models.

Hive with UK Data Service – Alejandro

Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. In practice, researchers face a big data challenge when a dataset cannot be loaded into a conventional desktop package such as SPSS, Stata or R. Besides being the curator of the largest collection of digital data in the social sciences and humanities in the UK, the UK Data Service is currently organising a series of workshops focused on big data management. These workshops aim to promote better and more efficient user manipulation of its databases (and other sources).

[Image: UKdataservice]

Keen to attend one of the UK Data Service’s workshops, I visited the University of Manchester on 24 June 2016 to participate in “Big Data Manipulation using Hive”. In short, Hive™ is a tool that facilitates reading, writing and managing large datasets that reside in distributed storage, using SQL (a special-purpose programming language). Although a variety of applications exist for accessing the Hive environment, the workshop trainers showed attendees a set of tools freely available for download (further details can be accessed from the workshop material). One of the advantages of the tool is its flexibility for implementation in well-known programming languages such as R and Python.
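As a flavour of that flexibility, here is a minimal sketch of querying Hive from Python using the PyHive package. It assumes a running HiveServer2 instance; the host, database and table names are hypothetical placeholders.

```python
# Minimal sketch of querying Hive from Python with the PyHive package.
# Assumes a HiveServer2 instance is running; the host, database and
# table names below are hypothetical placeholders.
from pyhive import hive

conn = hive.connect(host="hive.example.ac.uk", port=10000,
                    username="student", database="ukds")
cursor = conn.cursor()

# HiveQL looks like ordinary SQL but runs as distributed jobs over files
# in HDFS, so it scales to tables far larger than desktop memory.
cursor.execute("""
    SELECT region, COUNT(*) AS n_records
    FROM energy_readings
    WHERE reading_date >= '2015-01-01'
    GROUP BY region
    ORDER BY n_records DESC
    LIMIT 10
""")
for region, n in cursor.fetchall():
    print(region, n)
conn.close()
```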

My attendance at the workshop was an invaluable experience to gain further knowledge of existing tools and procedures for optimising the manipulation of large datasets. In the Geography domain, such large datasets have become more common and accessible from multiple sources, e.g. Earth Observation data. Consequently, optimised and efficient data manipulation is key to identifying trends and patterns hidden in large datasets that are beyond the reach of traditional data-driven analyses of small data. To find out more yourself, consider joining the free UK Data Service Open Data Dive, scheduled for 24 September 2016 at The Shed, Chester Street, Manchester.

Alejandro Coca Castro


Bayesian Hierarchical Models – Mark

I recently attended an ESRC-funded advanced training course on spatial and spatio-temporal data analysis using Bayesian hierarchical models at the Department of Geography, Cambridge (convened by Prof. Bob Haining and Dr Guangquan Li), to gain an overview of Bayesian statistics and its applications to geographical modelling problems.

Compared to ‘classical’ frequentist statistical techniques, modern modelling approaches relying upon Bayesian inference are relatively new, despite being based upon principles first proposed in the work of Thomas Bayes, published posthumously in 1763. Since the 1990s, Bayesian methods have become widely applied within the scientific community, as a result of both an increasing acceptance of the underpinning philosophy and increased computational power.

[Image: Thomas Bayes]

In contrast to frequentist approaches, Bayesian methods represent processes using model parameters and their associated uncertainty in terms of probabilities. Using existing knowledge (e.g. from previous studies, expert knowledge, common sense etc) about a process or parameter of interest, a ‘prior distribution’ is established. This is then used in conjunction with a ‘likelihood’ (derived entirely from a dataset relating to the specific parameter) to produce a ‘posterior distribution’ – essentially an updated belief or opinion about the model parameter.
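In symbols, for a parameter of interest $\theta$ and observed data $y$, this is Bayes’ theorem:

$$ p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\, p(\theta)}{p(y)} \;\propto\; p(y \mid \theta)\, p(\theta) $$

That is, the posterior is simply the prior reweighted by the likelihood, i.e. by how well each candidate parameter value explains the observed data.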

In some situations, Bayesian approaches can be more powerful than traditional methods because they:

  1. are highly adaptable to individual modelling problems;
  2. make efficient use of available evidence relating to a quantity of interest;
  3. can provide an easily interpreted quantitative output.

Historically, however, Bayesian approaches have been considered somewhat controversial, as the results of any analysis depend heavily upon the choice of the prior distribution, and identification of the ‘best’ prior is often subjective. Arguably, though, the existence of multiple justifiable priors may actually highlight additional uncertainty about a process that would be entirely ignored in a frequentist approach! Moreover, in many studies it is common for researchers to use a ‘flat prior’ in order to reduce some of the subjectivity associated with prior selection.

[Image: Li_etal_2014]

Hotspots in Peterborough with a persistently high risk of burglary, 2005/8,
as identified with a Bayesian spatio-temporal modelling approach.
[Kindly reproduced from Li et al. (2014) with author’s permission.]

As part of the course, we learned to use the WinBUGS software with a Markov chain Monte Carlo (MCMC) approach to explore a variety of spatial modelling problems, including the identification of high-intensity crime areas in UK cities, investigating the relationships between exposure to air pollution and stroke mortality, and examining the spatio-temporal variations in burglary rates. More information on the approaches taken in these studies can be found in Haining & Law (2007), Maheswaran et al. (2006), and Li et al. (2014).
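To illustrate the machinery that WinBUGS automates, here is a toy Metropolis sampler (one of the simplest MCMC algorithms) for the posterior of a normal mean. This is a didactic sketch with invented data, not course code.

```python
# Toy Metropolis sampler illustrating the MCMC machinery that WinBUGS
# automates: sampling the posterior of a normal mean given a normal prior.
# Didactic sketch only; all numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=2.0, scale=1.0, size=50)   # observed data, sd known = 1

def log_posterior(mu):
    log_prior = -0.5 * (mu / 10.0) ** 2          # Normal(0, 10) prior
    log_lik = -0.5 * np.sum((data - mu) ** 2)    # Normal likelihood, sd = 1
    return log_prior + log_lik

samples, mu = [], 0.0
for _ in range(20000):
    proposal = mu + rng.normal(scale=0.5)        # random-walk proposal
    # Accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples.append(mu)

posterior = np.array(samples[5000:])             # discard burn-in
print(f"posterior mean {posterior.mean():.2f}, 95% interval "
      f"({np.quantile(posterior, 0.025):.2f}, "
      f"{np.quantile(posterior, 0.975):.2f})")
```

The hierarchical models used on the course chain many such parameters together, which is exactly why general-purpose samplers like WinBUGS are so useful.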

Overall, the course provided an engaging, hands-on overview of a very powerful analytical framework that has extensive applications in the field of quantitative geography. A comprehensive introduction to Bayesian analysis from a geographical perspective such as this is hard to find, and I would highly recommend that anyone with an interest in alternative approaches to spatio-temporal modelling attend this course in future years!

Mark de Jong


Better regulation through ‘Big Data’?

This Thursday 23rd June, Alex Griffiths from the School of Management & Business will give a seminar on the use of ‘big data’ in regulating public service provision. 

Better regulation through ‘Big Data’:
A triumph of hope over reality?

 Alex Griffiths, School of Management & Business, King’s College London
10:30-12:30, Thursday 23 June 2016
Room K1.26, King’s Building, King’s College London, Strand, London, UK

‘Big data’ enthusiasts often claim that data analytics is the key to better regulation and improved public service provision. By harnessing the power of big data, regulators can identify those service providers at greatest risk of non-compliance and target their interventions accordingly. This promises both to concentrate regulatory efforts where improvements are most needed and to free others from unnecessary scrutiny. Whilst such data-led approaches have been widely adopted in the private sector, whether in credit-scoring loan applicants or recommending similar products to online shoppers, to what extent can they be successfully extended to the regulation of public services?

This seminar evaluates two extant data-driven approaches to regulating healthcare quality, before assessing whether machine-learning techniques can provide a more effective means of targeting regulatory resources in health and higher education. The presentation concludes with a discussion on the preconditions necessary for a successful ‘big data’ approach.


Directions: From main Strand reception, go straight ahead down the corridor. Turn left into the East Wing corridor just after the vending machine, the following rooms are up the small staircase to your immediate right: K1.26 (21B): King’s Building



Analysing Drone Data: 3D forest point clouds

In this guest post, King’s Geography PhD student Jake Simpson describes some of his geocomputational work analysing data from tropical peat swamp forests to estimate carbon emissions.

In December 2015, I travelled to an area just outside Berbak National Park, Sumatra.  This area comprises both pristine and impacted tropical peat swamp forest, which is one of the world’s most important terrestrial carbon stores, and home to a small, endangered population of Sumatran tigers.  The area was heavily impacted by wildfires between July and October 2015, which made headlines across the world when the pollution from the fires reached as far as Vietnam!  In total, about a fifth of the area burned and, in the process, emitted a globally significant amount of carbon into the atmosphere.

[Image: Haze_small]
A NASA satellite image showing the extent of the haze on 24 September 2015 (Public Domain)

Quantifying the carbon emitted is a tricky business because it cannot be measured directly.  One way to estimate the emissions is to measure the amount of peat that burns away.  Using digital elevation models (DEMs), the volume of peat burned is estimated by subtracting the post-burn DEM from the pre-burn DEM.  We have access to a pre-burn DEM from an airborne LiDAR survey for an area of peat swamp forest.  With some clever filtering, the ground level can be extracted from the LiDAR data, even when dense forest is present.
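In computational terms, the core of this calculation is a grid subtraction. The sketch below shows the idea with rasterio and NumPy; the file names, the assumption that both DEMs share the same grid, and the peat bulk-density and carbon coefficients are illustrative placeholders, not the study’s values.

```python
# Sketch of the DEM-differencing calculation with rasterio/NumPy.
# File names, the shared-grid assumption and the peat coefficients
# are illustrative placeholders, not the study's values.
import numpy as np
import rasterio

with rasterio.open("dem_preburn.tif") as pre, \
     rasterio.open("dem_postburn.tif") as post:
    z_pre = pre.read(1).astype(float)
    z_post = post.read(1).astype(float)
    cell_area = abs(pre.transform.a * pre.transform.e)  # m^2 per cell

depth_loss = z_pre - z_post                 # metres of peat lost per cell
depth_loss[depth_loss < 0] = 0.0            # ignore apparent gains (noise)

volume = np.nansum(depth_loss) * cell_area  # m^3 of peat burned

# Convert volume to carbon using assumed peat properties:
BULK_DENSITY = 0.1      # t of dry peat per m^3 (illustrative value)
CARBON_FRACTION = 0.55  # t C per t of dry peat (illustrative value)
print(f"Estimated emissions: {volume * BULK_DENSITY * CARBON_FRACTION:,.0f} t C")
```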

We then used a very cheap unmanned aerial vehicle (UAV) with a camera strapped to the bottom to survey the post-burn area and extract a DEM.  This technique is called structure from motion (SfM).  For a given area, multiple photos are taken from different angles and then loaded into software called “Agisoft Photoscan”.  The software uses photogrammetry algorithms to identify common points between photographs and align them.  Other algorithms compare the locations of these common points relative to each other, and in doing so reconstruct a 3D point cloud of the surface.  This process is incredibly computer-intensive and can take several days to complete, especially when up to 1,500 photos are used per survey. The steps I took in the analysis are summarised below.

Overall, I processed 8 UAV surveys, which equates to over 8,500 photos and over 2.5 billion point cloud data points.  Thanks to the Geocomputation hub, I was able to process these photos and am in the process of writing up the analyses for a paper. Stay tuned…

Step 1:  Photos are aligned, camera positions are predicted (blue), tie points detected.

[Image: Step 1]

Step 2:  Identify ground control points (with coordinates measured in the field) in the photos for georeferencing purposes

[Image: Step 2]

Step 3:  Build dense point clouds, DEMs, and orthomosaic photos.  Here is a before-and-after shot of the forest we surveyed.
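The tie-point detection behind Step 1 can be illustrated with a generic feature-matching sketch using OpenCV. Photoscan’s own algorithms are proprietary, and the image filenames here are placeholders.

```python
# Generic illustration of the tie-point idea in Step 1: detect and match
# features between two overlapping UAV photos with OpenCV's ORB detector.
# (Photoscan's algorithms are proprietary; filenames are placeholders.)
import cv2

img1 = cv2.imread("uav_photo_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("uav_photo_002.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)          # detect up to 5000 keypoints
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with cross-checking keeps only mutual best matches;
# the surviving pairs are candidate tie points for aligning the photos.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate tie points between the two photos")
```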

GeoComputation – The Next 20 Years

Last December we held a workshop at King’s on the Future of Geocomputation. Now we’re looking forward to participating in another day of Geocomputation discussion, this time at the Royal Geographical Society Annual Meeting 2016 in London on 31st August.

Ed Manley, Nick Malleson, Alison Heppenstall and Andrew Evans are convening two presentation sessions and a panel discussion at the RGS conference, entitled GeoComputation – The Next 20 Years. The session name is a reference to the 1st International Conference on GeoComputation held in Leeds in 1996. Next year, another Leeds GeoComputation conference is planned to reflect on the successes initiated by the original meeting and to consider future directions.

In the meantime, at this year’s RGS Annual Conference geographers who use computational techniques will speculate on the future of this area of research. What will The Internet of Things mean for geography? What about group cognition modelling? In the first session of the day I will be speaking about some of the things we learned from our workshop in December. Then later in the day Jon and Faith will be on the panel for the discussion about where GeoComputation is heading and what we should look out for over the next 20 years.

Registration for the RGS Annual Meeting is now open, with earlybird registration closing Friday 10th June. Hope to see some of you there for lots of interesting discussions!


Research Associate – ABM, Food and Land Use

This week we started advertising a post-doctoral Research Associate position to work with James on a project looking at the global food system, local land use change and how they’re connected. The successful candidate will drive the development and application of an integrated computer simulation model that represents land use decision-making agents and food commodity trade flows as part of the Belmont Forum (NERC) funded project, ‘Food Security and Land Use: The Telecoupling Challenge’.

Telecoupling is a conceptual framework for socioeconomic and environmental interactions between coupled human and natural systems (e.g., regions, nations) over distances and across scales. Telecouplings take place through socioeconomic and/or biophysical processes such as trade, species invasions, and migration. For example, while a number of countries such as China have experienced a shift from net forest loss to net forest recovery, this forest transition has often come at the cost of deforestation in other countries, such as Brazil, where forested land is converted to meet global food demands for soybean and beef.

The goal of the project is to apply the telecoupling framework to understand the direct and collateral effects of feedbacks between food security and land use over long distances. To help achieve this the successful candidate will contribute to the development and application of an innovative computer simulation model that integrates data and analysis to represent coupled human and natural system components across scales, including local land use decision-making agents and global food commodity trade flows.
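To make the idea of coupling land use agents to trade flows concrete, here is a toy sketch: farmers convert land when the global price makes cropping profitable, and the price responds to aggregate supply. All parameters are invented for illustration; this is not the project’s model.

```python
# Toy sketch of coupled land-use agents and a global commodity market,
# in the spirit of the project description. All parameters are invented;
# this is not the project's model.
import random

random.seed(1)

class Farmer:
    def __init__(self):
        self.land_use = "forest"
        self.yield_per_ha = random.uniform(0.5, 1.5)  # crop yield if converted

    def decide(self, price):
        # Convert to cropland when expected revenue beats a fixed cost of 1.0
        self.land_use = "crop" if price * self.yield_per_ha > 1.0 else "forest"

farmers = [Farmer() for _ in range(1000)]
price, demand = 1.0, 600.0

for year in range(10):
    for f in farmers:
        f.decide(price)
    supply = sum(f.yield_per_ha for f in farmers if f.land_use == "crop")
    # Simple market feedback: price rises when demand outstrips supply
    price *= (demand / supply) ** 0.2 if supply > 0 else 1.5
    n_crop = sum(f.land_use == "crop" for f in farmers)
    print(f"year {year}: price {price:.2f}, cropland agents {n_crop}")
```

Even this crude feedback loop shows the telecoupling signature the project is after: a change in distant demand propagates through price into local land use decisions, and those decisions feed back into the global market.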

We’re looking for a quantitative scientist with a PhD (awarded or imminent) or equivalent in Geography, Computer Sciences, Earth Sciences or other related discipline. You should have experience in computer coding for simulation model development, preferably including agent-based modelling. Previous experience studying land use/cover change processes and dynamics or food production, trade and security is desirable.

This is a full-time position, with fixed term for up to 18 months. The deadline for applications is midnight on 19 April 2016. Interviews are scheduled to be held the week commencing 9 May 2016. For more details and how to apply see http://www.jobs.ac.uk/job/ANG825/research-associate/ and direct questions to James via email: james.millington at kcl.ac.uk

If this doesn’t sound quite like your thing, maybe you would be interested in one of the other positions we currently have open (with application deadline 30 March).

Image credit: Liu et al. (2015) Science doi: 10.1126/science.1258832


Mobile Apps & Tech for Fieldwork

Last week several members of King’s Geocomputation activity hub participated in and contributed to a fieldwork mapping and monitoring party held at The Royal Geographical Society in London. Presentations and demos included crowdsourcing & OpenStreetMap, low-cost research drones and Arduino micro-controllers. This blog post summarises another presentation that explored the options for using mobile apps for fieldwork.

My contribution at the mapping and monitoring party was to look at mobile apps for fieldwork. I’ve posted my slides from the presentation online in pdf and Gslides formats and provide a summary of some of the apps below. I provide plenty of links to the apps I refer to in both my presentation slides and this summary.

I focus on android apps but Faith Taylor at King’s focused on Apple apps and used her massive iPad at the party to highlight what she finds useful to have on her real iPad in the field. Also at the party Michele Ferretti gave a quick highlight of using the OpenStreetMap Overpass API to obtain field site data and Tom Smith demonstrated auto-tweeting arduinos for monitoring soil moisture.


There was lots of other interesting stuff at the party on Twitter, which you can get a taste of from the #rgsfieldtech hashtag on social media. As you can see from the twittersphere it was a great event and we look forward to the next!

Mobile Apps for Fieldwork

I suggested in my presentation we can think about using apps for fieldwork in a few different ways:

  • Planning where and when to go in the field
  • Managing data collection and storage using GPS
  • Measurement using device sensors

Planning

Considering what the weather/tide conditions will be like when you are in the field is an important part of fieldwork preparations, and there is a plethora of apps to help with this. Weather Forecast UK is my personal favourite for UK weather and the paid version includes observation and forecast maps. LunaSolCal Mobile is great for finding the rising and setting times of the sun and moon, whereas Sun Position can also show the solar and lunar path in an augmented reality camera view for any day of the year at your current location.


Mapping is another important aspect of preparation – where will you go in the field? Several apps are useful both for planning where to go and for tracking where you have been in the field (adding notes, photos, etc. as you go). OruxMaps is possibly the best Android app for tracking while adding notes, photos, video and audio in a single integrated package. Alternatively, use a light-weight tracker (such as GPS Logger for Android) and then additional apps for photos (turn on ‘save location’ in your device camera app settings), notes (e.g. MAP note) and other recording.

Managing

When in the field you will likely be collecting data. One of the best apps for data collection during UK fieldwork is Fieldtrip GB, which is built on Ordnance Survey map data. The app allows you to capture georeferenced notes, photos and tracks, download maps for offline use, and save data to your device that can later be synced to your Dropbox account. One of the nicest features of the app is the ability to create your own custom forms for data collection.

[Image: Collector]

The main drawback of the Fieldtrip GB app is that it only works in the UK, as it uses OS mapping. When venturing beyond the UK, you could try Map It for recording data collected in the field. Map It allows multiple (global) map sources and export formats, map polygons and the like. It also allows you to create custom forms for data collection and recording. If doing human geography data collection, the Collect app is designed specifically for questionnaires and surveys. Again, it allows you to set up custom forms that match the questions you wish to ask – this can be done on a PC before you visit the field, and results are automatically synced to a database for later desk analysis.

Measuring

Moving on from managing data in the field, we can also think about apps that actually make measurements with the sensors on current mobile devices. There are many possibilities for using mobile devices for surveying. For example, the theodolite app Measure Angle provides functionality to view latitude, longitude, azimuth, angles, and more in an augmented reality perspective. The precision of these apps is dependent on the hardware on which they are used – don’t necessarily expect professional-grade precision, but they are good value at a fraction of the price of professional equipment (or even free!).

There are also many stand-alone apps useful for measuring different physical properties in the field:

  • Slope: use device accelerometers to evaluate orientation – these apps may require calibration and you may want to assess their accuracy before use (see the sketch after this list)
  • Aspect: there’s no dedicated app that I know of for this but aspect could be readily measured by combining compass/theodolite and sun position apps (links above)
  • Albedo: there are a few apps out there for measuring this, but according to Dr Tom Smith at KCL this one is quite good
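As referenced in the slope item above, here is a small sketch of how such apps derive tilt from a 3-axis accelerometer. This is the standard trigonometry, not any particular app’s code.

```python
# Sketch of how a slope app derives tilt from a 3-axis accelerometer:
# with the device lying flat on the surface, the components of gravity
# give the angle between the device and the horizontal plane.
import math

def slope_degrees(ax: float, ay: float, az: float) -> float:
    """Tilt of the device from horizontal, given accelerometer readings
    in any consistent units (gravity only, device at rest)."""
    return math.degrees(math.atan2(math.hypot(ax, ay), abs(az)))

# Flat surface: all of gravity on the z-axis -> 0 degrees
print(slope_degrees(0.0, 0.0, 9.81))   # 0.0
# 45-degree slope along the y-axis:
print(slope_degrees(0.0, 6.94, 6.94))  # ~45.0
```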


There are several apps out there useful for assessing the geology and soil in your study area, several of which have been developed by British research groups. For example, iGeology and mySoil are developed by the British Geological Survey (with partners). Other apps enable you to use your smartphone or tablet in the same way you would use a Brunton compass: you simply orient the phone or tablet along the planar or linear feature, choose a symbol, and tap. The device’s built-in compass and orientation sensors instantly record the strike, the dip, and the dip direction.

[Image: iGeology]

Finally, when in the field you may need to identify flora and fauna. There are two general types of app for this. First, there are those that recreate traditional guide books in digital format (possibly including sound or video). Be sure to select these apps as appropriate for the region you are visiting. Second, there are apps that attempt to use sound and images detected by your mobile device to identify species. The success of these is variable. Google Goggles is one of the most widely known, identifying the contents of images taken by your device camera (I have used it with mixed success). For birds there are apps like Warblr that do something similar to Shazam, attempting to identify birds from recordings of their song. Bird Song Id claims 85% success!

General Tips for using apps in the field

Test your apps before you start your fieldwork proper. This is important both so you understand and feel comfortable with how the app works, but also so that you know what it can do and how accurate it is likely to be. You may want to do some benchmarking before you go in the field, for example testing the app against known conditions (e.g. known slope angles).

Think about the need for an internet connection – check what data connection apps need before going into the field. Sometimes you may have a data connection, but in many places you may not. Both connectivity and testing are particularly important if you are linking data to cloud or online databases.

Consider upgrading hardware – a device with additional processor cores or memory will be worth it if you will be doing a lot of fieldwork. Also consider buying a storage card to expand built-in device storage, and for tablets consider getting a device that can use a SIM to connect to cellular data networks. Some apps may need specialist sensors that expensive smartphones have but others do not.

Experiment with alternatives and buy pro versions – they often don’t cost much (relative to professional field equipment) and can offer much better functionality, precision and ease of use. However, beware that some apps require in-app purchases, and look out for ‘expert ID’ species identification apps in which identification is not automatic (compared against a database) but actually sent to a human to identify. There are two main limitations to this: first, it can take time to receive results (hours to days), and second, you will likely have to pay for each ID. So this might be good for particularly difficult species but not for general identification.

Finally, get creative with your use of apps. Just because an app is branded for one thing does not mean it cannot be used for another purpose. Combine apps together if necessary. And if there’s no app out there to do what you want, maybe consider making your own! It may not be trivial, but there are many guides and courses to get you started. And if you’re thinking about undergraduate study, the Geocomputation and Spatial Analysis pathway at King’s will give you some of the skills you need to do this too!


Come and Join Us!

We’re looking for someone with a passion for teaching and research that uses quantitative and computational methods to understand geographical systems. If that sounds like you, submit your application for the position of Lecturer in Spatial Analysis at King’s College London.

King’s Geocomputation really began life when Jon Reades joined the Department of Geography as Lecturer in Quantitative Human Geography and James Millington switched from his Leverhulme Fellowship to become Lecturer in Physical and Quantitative Geography. Together they kick-started the Geocomputation and Spatial Analysis (GSA) pathway through the undergraduate Geography degree, and soon after Naru Shiode joined as Reader in Geocomputation and Spatial Analysis. Now we’re looking to expand even further with the appointment of a Lecturer in Spatial Analysis.

The person we’re looking for will have expertise in spatial analysis and computational methods for understanding geographical systems. They will contribute to the delivery of the GSA pathway which emphasizes quantitative methods, spatial statistics, programming, simulation modelling, and behavioural ‘big data’. The pathway makes use of free and open source software where possible, and the successful candidate will also likely have expertise in tools like R and Python.

Alongside teaching, of course we’re also looking for someone who will contribute to the capacity of the Department of Geography to undertake world leading research, education and public engagement activities. The particular substantive area of research interest is open but should be broadly aligned with the research interests of existing members of the Department. Engagement with other departments and programmes, such as Informatics and Health, to deliver world-leading and boundary-pushing knowledge would also be welcomed.

You can find full details of the position and how to apply online. For an informal discussion of the post please contact Professor Nick Clifford or feel free to contact existing members of King’s Geocomputation.

And this isn’t the only opportunity to join King’s – there are currently multiple positions open across the Department of Geography that could potentially contribute to King’s Geocomputation activities. These positions are at Professor and Teaching Fellow levels.

All deadlines for applications are 30th March 2016, so get cracking!