Analysing crisis mapping data

Many humanitarian actors recognize the relevance of information produced by crisis mapping initiatives. One obstacle to effectively utilizing this new data is simply that there is a lot of it. The stream of real-time data produced by crisis mapping initiatives can be overwhelming when decision times are short and workloads are high. Besides, there are already a number of more familiar data sources that might suffice for decision making. Humanitarian workers on the ground may at times understandably choose to ignore new streams of data and carry on with their standard operating procedures.

Yet to ignore the data produced through crisis mapping is to miss the opportunity of integrating voices gathered from the ground in real time. The Standby Task Force has set up an analysis team to facilitate the use of information gathered through crisis mapping initiatives. The key is to turn a stream of individual reports into actionable summary data for actors on the ground. This is a challenge that some members of the humanitarian community posed to the crisis mapping community at the ICCM 2010.

The OCHA Colombia Earthquake simulation provided a testing ground for this analysis service. The SBTF analysis team produced four summary Task Force Reports in the period of the simulation. The reports provided a summary of emerging issues, geographic areas of interest and the general situation on the ground. The reports then showed clustering patterns of information coming in to the platform and a brief analysis of access to hospitals and emergency hubs from reporting locations. Finally, the reports provided detailed tables of all information coming in to the platform, categorized by type of report. A full copy of the final Task Force Report can be downloaded here.

Producing the Task Force Report during the simulation raised a number of issues for future deployments. The first is the importance of unpacking which questions people operating on the ground want answered. The SBTF humanitarian liaison team already flags reports requiring immediate attention to teams of first responders. The Task Force Report, by contrast, looked to answer questions that would help prioritize areas or issues requiring attention in response to emerging trends. We did this mainly by looking at the spatial distribution of reports and mapping it against locations providing services to the affected community. We also looked through the data for issues that might not yet be generating many reports, but that could signal a turn in the tide: a new, emerging set of issues.
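The access analysis described above can be sketched as a simple nearest-facility calculation. This is an illustrative sketch, not the tool used during the simulation; the facility names and coordinates below are invented for the example:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def nearest_facility(report, facilities):
    """Return (name, distance_km) of the facility closest to a report location."""
    return min(
        ((f["name"], haversine_km(report["lat"], report["lon"], f["lat"], f["lon"]))
         for f in facilities),
        key=lambda pair: pair[1],
    )

# Invented example data: two hospitals and one incoming report.
facilities = [
    {"name": "Hospital A", "lat": 4.60, "lon": -74.08},
    {"name": "Hospital B", "lat": 4.70, "lon": -74.05},
]
report = {"lat": 4.61, "lon": -74.07}
name, dist = nearest_facility(report, facilities)
```

Mapping each report to its nearest hospital or emergency hub in this way gives a quick, rough indicator of which reporting locations are poorly served.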

The main difficulty during the simulation was that we did not have a chance to ask first responders directly what products would be most useful. We’ve received feedback that the summary was useful, particularly the heatmap of reports. Still, for future deployments, it is crucial to start a conversation with the humanitarian community so we can jointly plan ahead what products would be most useful.

Second, there is a balance to be struck between providing rigorous analysis and keeping it simple. Finding patterns by eyeballing data (mapped or tabular) is a highly subjective process that can result in misleading conclusions. We attempted to mitigate this by running a statistical test on the clustering pattern of reports using SaTScan’s multinomial and ordinal models. The test found that the apparent clustering of events was not statistically distinguishable from a random process. There are a number of reasons why running a statistical test on this type of data is problematic. Not only are there obvious sampling problems, but there is the added difficulty that we are picking up clusters of reports rather than clusters of events. This is only a relevant pattern if we believe that the number of times an event is reported somehow ascribes it more significance. Nonetheless, volunteers reading the reports could spot an emerging story that suggested priorities for action. An open question, then: are the patterns subjectively gleaned by volunteers useful to first responders? Or would we do better to just stick to making the raw data available?
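SaTScan’s scan statistics are considerably more sophisticated, but the underlying idea – compare the observed pattern against many random relabellings of the same locations – can be illustrated with a short Monte Carlo sketch. All data below are invented, and this shows the general technique rather than what SaTScan actually computes:

```python
import random
from math import hypot

def mean_nn_distance(points):
    """Mean distance from each point to its nearest neighbour (planar coords)."""
    total = 0.0
    for i, (x1, y1) in enumerate(points):
        total += min(hypot(x1 - x2, y1 - y2)
                     for j, (x2, y2) in enumerate(points) if j != i)
    return total / len(points)

def clustering_p_value(coords, labels, target, n_sims=999, seed=42):
    """Random-labelling test: is the target category more tightly clustered
    than we would expect if labels were assigned to locations at random?"""
    rng = random.Random(seed)
    observed = mean_nn_distance([c for c, l in zip(coords, labels) if l == target])
    hits = 0
    for _ in range(n_sims):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        sim = mean_nn_distance([c for c, l in zip(coords, shuffled) if l == target])
        if sim <= observed:  # simulated pattern at least as tight as observed
            hits += 1
    return (hits + 1) / (n_sims + 1)

# Invented example: five "medical" reports packed together, fifteen "other"
# reports spread across a coarse grid.
coords = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (0.05, 0.05)]
coords += [(float(x), float(y)) for x in range(1, 6) for y in range(1, 4)]
labels = ["medical"] * 5 + ["other"] * 15
p = clustering_p_value(coords, labels, "medical")
```

A small p suggests the clustering of that category is unlikely under random labelling; with crisis mapping data, though, the caveat above still applies – the test sees clusters of reports, not clusters of events.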

Third, we assumed that the users of the information generated during the simulation were humanitarian actors. And yet the first responders in an emergency are often the affected communities themselves. As we move forward, we will need to find ways for the SBTF to facilitate two-way communication with affected communities, and to identify which summary reports and maps would be most helpful to them.

We still have many questions about how best the Standby Task Force can provide an analysis service. However, our motivation – like that of the SBTF in general – is clear. Humanitarian responses that fail to take into account needs as reported by communities at the time risk missing some of the critical needs of their intended beneficiaries. Conversely, a response that listens only to community reports, without matching these against the priorities identified by actors with an overview of the situation, could turn out to be an overly populist response.

The work of the analysis team is to help first responders strike a balance between these two ways of identifying priorities. The SBTF is aware that such analysis, reports and data cannot fully replace traditional needs assessments. However, we believe they can significantly improve the situational awareness of responders and have a strong positive impact on the effectiveness of a timely response.

Helena Puig
Standby Task Force
@HelenaPuigL
