[Cross-posted from Helena’s blog.]
This week I spoke at the Understanding Risk conference in Cape Town on a panel that explored successes and difficulties in the application of crowdsourcing for development and disaster risk reduction, together with colleagues from the Humanitarian OpenStreetMap Team, the Public Laboratory, Ushahidi, Idibon and the World Bank. I focused on the particular challenges of conflict sensitive crowdsourcing. Disaster risk management practitioners are often concerned with the additional challenges of disaster response in conflict and post-conflict settings. I shared four case studies, from Somalia, Syria, Libya and Sudan. Only one of these (Somalia) relates to a natural disaster, but together they illustrate some of the core questions around conflict sensitivity and crowdsourcing.
Somalia: UNHCR asked the Standby Task Force (SBTF) to crowdsource the location of shelters in the Afgooye corridor in Somalia, where a large number of people were displaced during the drought in 2011. Tagging shelters helped with population estimates to support UNHCR’s logistics planning. Volunteers had access to satellite imagery of the corridor courtesy of Digital Globe. An application developed by Tomnod presented them with squares of imagery where they tagged anything that looked like a shelter (following a set of guidelines). Tomnod’s crowdrank algorithm then determined what tags were trustworthy.
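Tomnod's crowdrank algorithm is proprietary and its details are not public, but the underlying idea of deciding which crowd tags to trust can be illustrated with a simple agreement-based score. The sketch below is hypothetical (the function names, thresholds and tile identifiers are all assumptions for illustration, not Tomnod's actual method): each imagery square is shown to several volunteers, and a tag is accepted only if enough of them agree.

```python
from collections import defaultdict

def score_tags(tag_reports, min_agreement=0.6, min_reports=3):
    """Accept a tile's shelter tag only when enough volunteers agree.

    tag_reports: list of (tile_id, volunteer_id, tagged) tuples, where
    `tagged` is True if that volunteer marked a shelter in that tile.
    Returns the set of tile_ids whose shelter tags are treated as trustworthy.
    """
    # Collect every volunteer's verdict per imagery tile.
    votes = defaultdict(list)
    for tile_id, _volunteer_id, tagged in tag_reports:
        votes[tile_id].append(tagged)

    trusted = set()
    for tile_id, reports in votes.items():
        # Require a minimum number of independent looks at the tile,
        # then a minimum fraction of "shelter" verdicts among them.
        if len(reports) >= min_reports:
            agreement = sum(reports) / len(reports)
            if agreement >= min_agreement:
                trusted.add(tile_id)
    return trusted

reports = [
    ("tile-17", "v1", True), ("tile-17", "v2", True), ("tile-17", "v3", True),
    ("tile-42", "v1", True), ("tile-42", "v2", False), ("tile-42", "v3", False),
]
print(score_tags(reports))  # {'tile-17'}
```

Real systems weight volunteers by their track record rather than counting votes equally, but even this minimal version shows why redundancy matters: a single volunteer's tag, right or wrong, never decides the map on its own.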
Syria: The SBTF was asked by a big humanitarian response organization to carry out a similar exercise in Syria – using the crowd to map health facility locations. However, the source of information would be local informants, who would also report on functionality. After careful consideration of potential security implications for informants and political repercussions for the organization, the deployment was cancelled.
Libya: The Somalia and Syria projects were both simple “ask the crowd to find a location” exercises. Crowdsourcing can also be a tool for collecting more complex, contextual information in a conflict setting to aid response. In 2011, UN OCHA asked the SBTF to build a map of Libya that would gather information to help form a picture of what was happening on the ground at the very start of the crisis. UN OCHA believed that curating information from traditional media, social media and selected NGO workers and journalists on the ground would support humanitarian preparedness. Volunteers also collected information on responses, compiling a first draft of the “3Ws” (who, what, where) – a traditional UN OCHA information product for humanitarian response. A number of organizations, including OCHA, WFP, UNHCR and the Red Cross, reported that they found the map useful. UN OCHA explained that the map “reduced information overload, produced an output that was manageable and digestible and led to better planning and decision making”.
Sudan: Crowdsourcing to report on a complex situation has been less successful in Sudan. UNDP has for the past few years collected information on conflicts, threats and risks, using local focus groups to generate localized, geo-referenced accounts of community perceptions. These perceptions are used for planning by both Government institutions and the UN. But organizing focus groups takes time and requires access to remote locations, so the information is inevitably somewhat dated. UNDP designed a pilot to extend this collection system with information crowdsourced through SMS reporting, covering the same topics the focus groups discuss. At a time when information is highly sensitive, the pilot has faced a number of hurdles and is yet to be operationalized.
These four examples suggest that complexity is not the main hurdle to crowdsourcing in conflict settings; ethical questions are. Applying the “Do No Harm” principles – a benchmark of conflict sensitive programming – helps unpack some of the ethical questions these projects raise. Do No Harm stipulates that the highest priority should be placed on the safety of the general public. In the case of crowdsourcing, any activity that could endanger the affected populations that are the source or target of information during a disaster response operation must be carefully considered. Where information is sensitive, anyone involved in providing or collecting data can become a target for repercussions. The safety considerations are different for people local to the conflict (Syria) versus crowd members operating remotely (Somalia). There is also a difference if the crowd is the workforce processing information (Somalia) as opposed to the source of the information (Syria). This concern about safety accounts for the different outcomes of the Somalia and Syria deployments.
Even where safety is not a pressing concern, Do No Harm requires that we ask questions around neutrality. Specifically, any crowdsourcing activity should assess whether it is building on what connects groups and avoiding anything that increases divisions. The Libya map, with its focus on contextual data already publicly available, passes this neutrality test. Work on a crowdsourced early warning system in Sudan is more complicated – debates between different actors on data ownership and verification of reports highlight the potential effects of data on connections and divisions in a fragile setting.
These are only some initial thoughts. Crowdsourced solutions to data collection and processing are growing in many fields, including disaster risk management. As these solutions are adopted into the mainstream, deployments in conflict and post-conflict settings are also likely to grow. Our thinking on conflict sensitive crowdsourcing will need to develop alongside them.