Women, Peace and Security Frameworks Must Apply to Defense AI

by Moira Whelan, Jesper Frant / Apr 21, 2026
Moira Whelan and Jesper Frant serve as fellows for Our Secure Future.

This post was originally posted at https://www.techpolicy.press/women-peace-and-security-frameworks-must-apply-to-defense-ai

AI tools are already operational in multiple conflict zones. The headlines are filled with examples, and a recent report by the Brennan Center for Justice at NYU Law details the extent of the deployment of these tools. The US military has used Project Maven to identify targets for strikes in Iraq, Syria, Yemen, and Ukraine. In Gaza, Israeli forces have relied on AI-generated intelligence to inform strikes that killed scores of civilians, and Claude was reportedly used by US forces during a raid on Venezuela and in strikes on Iran.

States participating in these conflicts have adopted Women, Peace and Security (WPS) frameworks that inform how security decisions are made, but there is no indication those commitments have been extended to the AI systems now informing those same decisions. According to our research, commercially available large language models (LLMs)—the same foundation models now being deployed in defense contexts—systematically fail to operationalize WPS standards, let alone others. How, then, can we be assured that AI systems used in conflict are complying with existing obligations?

A central recommendation of the Brennan Center report is to strengthen AI testing, expanding operational evaluation and restoring capacity gutted by recent cuts. But the report stops short of defining what exactly should be tested. Its examples of testing failures are exclusively technical. A next step could be to assess whether these systems produce output that complies with the policy frameworks that already govern the conflicts into which they’re deployed.

This assessment is exactly what drove Our Secure Future’s focus on technology through Project Delphi and the Women, Peace and Security and Technology Futures report. Building on this work, our months-long study of AI systems concluded that AI models customized and evaluated with a robust WPS perspective will deliver higher accuracy in high-stakes, real-world conflict and humanitarian scenarios. We found that models informed by WPS data and policy frameworks reduce operational and strategic blind spots, and enable end-users to make faster, better-informed decisions, because they draw upon more comprehensive, community-wide, and policy-informed information.

Furthermore, our research found that when WPS language is omitted from AI prompts—mimicking the sparse format of actual field situation reports and intelligence briefs—model performance on WPS integration drops by nearly 90 percent. When the models fail to consider women in their analysis, it means the actions they recommend do not factor in these populations. That can have real consequences for, in this case, over 50 percent of the global population.

To come to this conclusion, we tested three leading AI models across 13 conflict scenarios at three levels of contextual detail. When prompts explicitly named affected populations (displaced women, female ex-combatants, women-led organizations), average scores were 0.65 out of 1.0. When prompts used the minimal formats practitioners actually use, the same models scored 0.08. A score below 0.2 indicates the model failed to surface any WPS-based analysis. Trust Building—whether the model recommended engaging affected communities—collapsed from 0.71 to 0.22, a 69 percent decline.
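
The reported declines follow directly from the scores above. A minimal sketch of the arithmetic (the score values come from the text; the function name and failure threshold constant are illustrative):

```python
def percent_decline(before: float, after: float) -> float:
    """Percentage drop from a baseline score to a degraded score."""
    return (before - after) / before * 100

FAILURE_THRESHOLD = 0.2  # below this, no WPS-based analysis surfaced

# Scores reported in the text (0.0-1.0 rubric scale).
explicit_avg, minimal_avg = 0.65, 0.08
trust_explicit, trust_minimal = 0.71, 0.22

overall_drop = percent_decline(explicit_avg, minimal_avg)    # ~87.7: "nearly 90 percent"
trust_drop = percent_decline(trust_explicit, trust_minimal)  # ~69.0: "a 69 percent decline"

# Minimal-format runs fall below the failure line.
assert minimal_avg < FAILURE_THRESHOLD
```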

This is not a hypothetical gap. It is a measured disconnect between what WPS commitments require and what deployed AI tools actually produce. It is also an indicator of a seriously flawed system in use by militaries today. As these systems become more capable and more integrated into operational decision-making, the gap will only widen unless proactive measures are taken. Our research demonstrates that closing this gap is technically possible.

AI tools in conflict are failing decision-makers

In July 2025, an Institute for Integrated Transitions (IFIT) AI on the Frontline study tested LLMs on conflict resolution scenarios and found structural performance failures across the board—concluding that current AI models are not fit for high-stakes peace and security decision-making without significant intervention. Critically, a follow-up study found that adding a structured prompt—instructing models to follow basic conflict resolution best practices before responding—increased average scores by 65 percent. IFIT recommends embedding such guidance directly into system prompts, an approach consistent with Anthropic’s “Constitutional AI”, which aligns model behavior with a defined set of principles. This is possible, but we have no evidence to suggest that it is taking place.
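
The general pattern IFIT recommends can be sketched in a few lines, assuming a standard chat-style message format. The guidance text and `build_messages` helper below are illustrative stand-ins, not IFIT’s actual prompt or any vendor’s API:

```python
# Structured prompting sketch: best-practice guidance is embedded in a
# system message rather than left to the model's defaults. The guidance
# wording here is a hypothetical example.

CONFLICT_RESOLUTION_GUIDANCE = """\
Before responding to any conflict scenario:
1. Identify all affected populations, including women, girls, and boys.
2. Recommend consulting affected communities (due diligence).
3. Ground recommendations in applicable policy frameworks (e.g., WPS NAPs).
4. Flag missing context rather than guessing."""

def build_messages(situation_report: str) -> list[dict]:
    """Prepend structured guidance as a system message."""
    return [
        {"role": "system", "content": CONFLICT_RESOLUTION_GUIDANCE},
        {"role": "user", "content": situation_report},
    ]

messages = build_messages("SITREP: displacement reported in sector 4.")
assert messages[0]["role"] == "system"
```

The point is that the guidance travels with every request, rather than depending on an end-user remembering to include it.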

Our research independently replicates those findings using a distinct scenario set focused on WPS-relevant conflict contexts and a WPS-specific scoring rubric. Across our MVP agent customization experiment and a WPS AI Benchmark that we tested using Weval.org, we found the same structural failures IFIT documented. “Due Diligence”—whether models recommended consulting affected communities and gathering context before responding—remained consistently low for out-of-the-box AI models that are widely available to the public. The convergence between IFIT’s conflict resolution evaluation and our WPS benchmark establishes these as characteristics of current LLMs in conflict contexts, not artifacts of any single methodology. The problem for decision-makers is plain to see: they are increasingly being directed to use tools that simply do not adhere to existing policies. With established processes such as benchmarking and agent development, however, decisions in conflict and peacebuilding scenarios could be far better informed.

The WPS competence gap

Our research extends those findings by applying a WPS lens: identifying a specific, quantifiable compliance gap and a documented path to closing it.

We call this the WPS competence gap—the measurable performance drop AI models show when WPS language is absent from operational prompts. No model in our evaluation surfaced WPS considerations unless prompted with explicit contextual cues. This matters because field situation reports, intelligence summaries, and policy briefs rarely contain that framing. The problem is compounded by the fact that AI tools default to producing mid-grade answers. Ask an AI tool to write a book report and it is likely to give you a “C” grade product, not an “A+”. It is even less likely to give you an analysis of how the dynamics of female characters in the book influence the plot, unless it is directed to do so. In conflict, this means decision-makers are getting predictable answers, not doctrinal creativity. It is a behavioral default that has not been reconfigured to meet peace and security standards: models do not apply a WPS lens because nothing requires them to.

The compliance framework to close this gap already exists. WPS frameworks—grounded in UN Security Council Resolution 1325 and implemented through National Action Plans (NAPs) in over 100 countries—establish commitments for how conflict and security operations should account for women, protect civilian populations, and include affected communities in decision-making. NATO has integrated WPS into doctrine. The US, UK, and most major allied defense establishments have signed NAPs that apply to their operations. Yet we have seen no indication that procurement and deployment of AI tools integrate this doctrine into technical requirements. The tools decision-makers rely on are not built on the same standards on which they have been professionally trained.

Closing the gap is a configuration problem, not a capability problem

Organizations evaluating AI vendors for conflict-relevant applications should not just be asking whether a model is generally capable. They should be asking whether it has been configured and validated against their own policies and standards—and demanding evidence.

Our experiment tested four configurations of the same model against a common prompt, evaluated by AI judges and confirmed by a WPS expert review. The results show a clear customization ladder:

- Off-the-shelf (standard chatbot, no customization): C–/D — generic output, omits WPS
- + WPS instructions (detailed system prompt with principles on how to apply a WPS lens; no added knowledge): B — mentions women, thin on evidence and policy depth
- + Retrieval augmentation with evidence base (connected to curated WPS research and field case studies): B+ — substantive analysis grounded in real evidence
- + Retrieval augmentation with National Action Plans (connected to country-specific policy commitments): A — policy-aligned, with WPS KPIs
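
Each rung of the ladder adds material to the prompt before the model is called. A minimal sketch, assuming a simple prompt-assembly pipeline; the corpora and `retrieve()` below are illustrative toys, whereas a real deployment would query a vector store over the curated WPS evidence base and National Action Plans:

```python
# Customization-ladder sketch. All names and document contents here are
# hypothetical examples, not the actual evidence base or NAP texts.

WPS_SYSTEM_PROMPT = (
    "Apply a Women, Peace and Security lens: identify affected women and "
    "girls, recommend community consultation, and cite applicable policy "
    "commitments."
)

EVIDENCE_BASE = [
    "Field study: women-led organizations improved early-warning coverage.",
]
NATIONAL_ACTION_PLANS = [
    "NAP commitment: consult women's groups before protection operations.",
]

def retrieve(query, corpus, k=3):
    """Toy keyword-overlap retrieval; stands in for real vector search."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc: -len(terms & set(doc.lower().split())))
    return ranked[:k]

def build_prompt(report, instructions=False, corpora=()):
    """Assemble the prompt for one rung of the ladder."""
    parts = []
    if instructions:                      # rung 2: + WPS instructions
        parts.append(WPS_SYSTEM_PROMPT)
    for corpus in corpora:                # rungs 3-4: + retrieved context
        parts.extend(retrieve(report, corpus))
    parts.append(report)                  # rung 1 alone: off-the-shelf
    return "\n\n".join(parts)
```

The grades in the table track how much of this scaffolding is present: the off-the-shelf rung passes the report through untouched, while the top rung grounds the same report in both evidence and country-specific policy commitments.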

Our WPS AI Benchmark takes this analysis one step further, making it an effective mechanism to operationalize WPS compliance as a procurement requirement. Models are scored against a standardized WPS scenario set using a structured rubric, yielding measurable, nuanced, and operational evidence that can be used to improve model compliance. Defense organizations and other entities with WPS obligations should be writing this benchmark into their contractual requirements to specify minimum performance thresholds rather than accepting generic vendor claims of “ethical AI” alignment.
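
Operationally, writing a benchmark into a contract reduces to a pass/fail gate over rubric scores. A minimal sketch; the dimension names echo this article, but the threshold values are illustrative examples of what a contracting organization might specify:

```python
# Hypothetical procurement gate over benchmark rubric scores (0.0-1.0).
# Thresholds are illustrative, not values from the WPS AI Benchmark.

MINIMUM_THRESHOLDS = {
    "wps_integration": 0.60,
    "trust_building": 0.60,
    "due_diligence": 0.60,
}

def meets_requirements(vendor_scores: dict) -> bool:
    """True only if every contracted dimension meets its threshold."""
    return all(
        vendor_scores.get(dim, 0.0) >= floor
        for dim, floor in MINIMUM_THRESHOLDS.items()
    )

# A model run under minimal-prompt conditions fails the gate outright.
assert not meets_requirements({"wps_integration": 0.08,
                               "trust_building": 0.22,
                               "due_diligence": 0.15})
```

The gate gives both sides something concrete: the buyer specifies floors per dimension instead of accepting generic “ethical AI” claims, and the vendor can demonstrate compliance by publishing scores.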

The recent standoff between Anthropic and the Pentagon over autonomous weapons and mass domestic surveillance illustrates a broader structural problem: when AI vendors and defense organizations negotiate deployment boundaries, those conversations tend to play out in terms of broad use policies and ethical principles—not measurable, domain-specific performance standards. Without a shared benchmark, procuring organizations have no way to specify what compliant output actually looks like, and vendors have no way to demonstrate it. A WPS benchmark changes that equation. The burden can then shift to the vendor to prove the models they provide actually meet the operational requirements their customers have already committed to.

The default is already a choice

One useful framing likens AI governance to brakes on a fast-moving car—necessary, but always reactive, always trailing the technology. But in WPS-governed contexts, the problem isn’t speed. It’s that the car was never engineered for the road it’s on. Brakes slow it down, but they don’t prevent it from producing structurally flawed output. When an AI system defaults to analysis that omits women, girls and boys in a conflict environment, adding oversight after the fact doesn’t fix the system—it just adds a review layer on top of output that was wrong from the start.

The phrase “oversight hasn’t caught up” frames the gap as a timing problem—as if the standards don’t yet exist and organizations just need more time to develop them. But the standards do exist. The WPS competence gap is not a governance failure that better brakes can catch. It is a design failure: the organizations that wrote the procurement specs, chose the vendors, and decided what standards to require did not require WPS compliance. It is an omission with measurable consequences for operational effectiveness and the protection of civilian populations.

How do we fix it?

Our Secure Future’s research is ongoing. The WPS AI Benchmark is an open evaluation framework—the scenario set, evaluation criteria, and methodology are publicly available.

A first step would be to extend this model into other areas that govern conflict, such as broader human security, and to require through laws and policies that procurements adhere to existing standards, with evidence produced to confirm compliance.

Second, advisors within organizations need to become experts. Sadly, our work makes it increasingly clear that commanders are relying more on AI tools than on the WPS advisors who already exist in the command structure. This is something decision-makers can fix. Training, empowering, and resourcing WPS advisors to concentrate their energy on influencing the AI tools would not only produce better decision-making almost immediately but would also serve as an organizational model for other areas such as human security, humanitarian response, and localization.

Third, we know commanders rely on AI tools for speed, but experiments such as this one took hours, not months. Empowering academic partners and outside groups to test assumptions—just as is done in doctrine development—is critical to the process.

We have an important role to play by building benchmarks to evaluate the operational readiness and effectiveness of LLMs. Jack Clark, co-founder of Anthropic, recently said: “Give us a goal. The AI industry is excellent at trying to climb to the top of benchmarks. Come up with benchmarks for the public good that you want.” It’s clear that AI has already entered the battlefield, but humans are still in control. The decision about which humans are empowered to influence the direction of AI systems that can determine war and peace needs to be made now.

NOTE: This post references results for the fourth iteration of its benchmark. Those results can be accessed at weval.org. The WPS AI Agent is available for demonstration at https://wps-agent.streamlit.app.

Why “Secondhand Worlds”?

I named this blog after a line from C. Wright Mills’ 1960 essay The Cultural Apparatus, which I first read in 2008 during a journalism school class at the University of Colorado:

“The first rule for understanding the human condition is that men live in second-hand worlds. They are aware of much more than they have personally experienced; and their own experience is always indirect. The quality of their lives is determined by meanings they have received from others.”

That quote has stayed with me—not just because it’s a sharp observation about how we process the world, but because it remains unsettlingly relevant. We don’t encounter reality raw; we inherit it through headlines, feeds, photos, slogans, and the countless interpretations of others. Mills called the system that produces and distributes these interpretations the “cultural apparatus.”

Back in 2008, journalism as a profession was entering a crisis that has only deepened since. The demise of local newspapers and public-interest reporting, the erosion of journalistic ethics, the rise of social media, the fragmentation of the internet, and now the explosion of AI and synthetic media—seen through the lens of C. Wright Mills, these shifts help explain much about our current moment. The cultural apparatus isn’t just evolving; it’s fragmenting, accelerating, and becoming harder to trace and trust.

Today, that apparatus is both more expansive and more manipulable than Mills could have imagined. Platforms like TikTok, YouTube, and X (formerly Twitter) deliver curated slices of experience in real time. AI-generated content blurs the line between authentic and synthetic, while billionaires, governments, and opaque algorithms shape what rises to the top. Conspiracies scale faster than facts. The experience of “seeing it with your own eyes” is often preempted by a push notification or a viral meme.

In this environment, the question isn’t whether we live in secondhand worlds—it’s who’s furnishing them, and to what end.

That’s why I started this blog. Over the years, I’ve used this space to explore those questions directly—writing about civic tech, participatory democracy, communication systems, and the ethical design of digital tools—and to make my own small contribution to the cultural apparatus we all live within. I’ve worked in digital communications, civic tech, and democracy support. I’ve seen how narratives can be built for liberation, and how they can be weaponized. I’ve tried to help build tools and spaces that make democratic values legible, accessible, and resilient.

If we’re going to live in secondhand worlds, then let’s at least try to make them better ones—rooted in equity, truth, and human dignity.

A mosaic of prototype screens from the Easy Read Generator redesign—an accessibility-focused civic tech tool reimagined by UMD students to better serve users with diverse cognitive and digital literacy needs.

Forked, Not Finished: Mentoring Civic Tech the Open Source Way

This spring, I had the opportunity to support several student-led civic tech projects through the University of Maryland’s iConsultancy program. The partnership was originally facilitated through my role at the National Democratic Institute (NDI), but when NDI’s participation was disrupted by a sweeping freeze on U.S. foreign assistance programs, I continued advising the students in a personal capacity.

What started as a straightforward mentorship experience became a much more fluid—and in some ways more meaningful—engagement, shaped by shifting roles, student initiative, and a shared interest in public-interest technology. In many ways, it reminded me of the spirit of open source: people stepping in, adapting to change, and contributing however they can. NDI itself has long embraced open source platforms like Decidim and CiviCRM as part of its commitment to digital democracy—tools that reflect the values of transparency, adaptability, and shared ownership.

Three Projects, Three Distinct Challenges

Each iConsultancy team focused on a different scope of work—specifically related to Decidim, an open-source platform for democratic participation, and a new tool that NDI was designing to make information more accessible to people with intellectual disabilities. These projects were all rooted in the open source ethos: building in the open, iterating in real time, and aiming for impact beyond the immediate team.

1. Decidim Alternate Deployment Methods

This team explored ways to simplify and modernize how Decidim is deployed across different environments. The official Heroku option had become outdated, and the manual installation process was prohibitively complex for non-expert users.

The students conducted a technical evaluation of Docker and Heroku deployment methods, tested them across operating systems, and ultimately created an updated Docker configuration tailored for production environments. Their contributions were submitted to the Decidim GitHub repo. These additions make it significantly easier to deploy Decidim in a production environment using Docker Compose. Like many open source contributions, their work built on community-maintained tools, with the potential to be picked up and improved by others.

2. Easy Read Generator UX Redesign

The second team focused on redesigning the user interface for NDI’s Easy Read Generator project, a tool that simplifies complex civic documents to make them more accessible for individuals with intellectual disabilities and those with lower literacy levels.

Drawing on user research, accessibility guidelines (like WCAG), and competitive analysis, the students developed a high-fidelity prototype and detailed UX recommendations. While I had envisioned an iterative redesign of existing wireframes, the team pushed the concept further—exploring new features such as login options and donation functionality. Their willingness to experiment expanded the conversation about what this tool could become. 

3. Manual Installation Documentation Enhancements

The third project aimed to unify and improve Decidim’s manual installation documentation. English-language instructions were incomplete, and more robust Spanish-language documentation had yet to be translated or standardized.

The team was tasked with consolidating and testing these disparate guides, streamlining the process for deploying Decidim with all its intended features. Documentation is the connective tissue of any open source ecosystem, and while this team faced challenges in delivering their final product, the importance of the task—and the gaps it sought to fill—remains clear.

Lessons from the Field

Each project reflected the realities of open collaboration: sometimes productive, sometimes messy, always instructive. The teams that stayed organized and engaged produced genuinely useful outputs that could be built upon by others. In other cases, student groups struggled to balance their workload or needed more support to stay aligned with the project’s goals.

To be clear, this isn’t a critique of the iConsultancy model—student-led learning is, by design, exploratory. But like any open source initiative, success is rarely the result of individual effort alone. It depends on a thoughtful mix of initiative, shared norms, and an ecosystem of support. Civic tech projects, especially those aiming for real-world relevance, demand a working knowledge of community context, accessibility, and technical infrastructure—all challenging to fully absorb in a single semester. And just as open source contributors rely on documentation, mentors, and community to navigate complex codebases, student teams benefit from structured feedback, clear goals, and a culture that rewards asking questions. Those ingredients can turn short-term projects into lasting contributions.

Why I Stayed

Even after my layoff from NDI, I chose to remain involved because my commitment to the projects didn’t depend on a formal title. The UMD students brought real energy and fresh ideas. And continuing to mentor them gave me a sense of continuity and purpose at a time when many other structures were unraveling.

In civic tech, we often talk about resilience, distributed leadership, and decentralization. These principles are foundational to the open source ecosystem, where no single person or entity controls the project and leadership often emerges organically from contributors. This experience reminded me that these values aren’t just theoretical—they show up in how we navigate change. Open source projects are a fitting metaphor: they can survive the loss of their initial stewards, thriving as new contributors pick up the thread. Our work, too, can have a life beyond any single job or institution. Even when a formal role ends, the ideas, tools, and momentum we create can continue evolving—adapted, expanded, and reimagined by others who care.

How DemTech Supports Digital Organizing Around the World

DALL-E generated image of a women sitting at a desk doing a water color in front of her computer.

Digital organizing is a key component of any successful political campaign. It involves using technology to mobilize supporters, raise funds, communicate messages, and get out the vote. It can also be a powerful tool for governing: digital tools enable members of parliament to manage constituent correspondence and interactions with citizens. However, not all digital organizing tools are created equal. Some are tailored to specific contexts, while others are better suited to business or sales applications.

That’s why the National Democratic Institute’s (NDI) Democracy and Technology (DemTech) team decided to invest in developing and supporting the open-source platform CiviCRM. CiviCRM is a constituent relationship management (CRM) system that can support many democracy activities, including conducting surveys and running campaigns, as well as basic CRM tasks like managing contacts and sending emails. CiviCRM is a good fit for our partners that don’t have a lot of money to spend on digital campaigning tools, which is most of them. For partners with little experience with digital organizing, CiviCRM also provides a hands-on opportunity to introduce the concept. Partners who complete the training have the option to continue using CiviCRM at no cost through DemTech’s DemCloud hosting service. They can also migrate their CiviCRM site off DemCloud to their own hosting environment, switch to another CRM solution, or simply choose not to use a CRM system at all.

For DemTech, one of the biggest advantages of CiviCRM is that it can be easily localized to a new country. It has been translated into dozens of languages, including Catalan, Dutch, French, Japanese, Polish, Portuguese, Serbian, Spanish, and Turkish. Not only can our partners use the tool in their own language, but they can adapt it to their specific needs and challenges. For example, CiviCampaign is a component of CiviCRM that allows users to create and manage advocacy campaigns, and it can be tailored to suit different electoral systems, voter registration processes, and campaign strategies.

DemTech has supported the use of CiviCRM across a wide variety of contexts. For example, a group of organizations in the Democratic Republic of the Congo (DRC) used Civi to survey key target audiences about their policy priorities as they prepared to launch broad advocacy campaigns in the run-up to the 2023 elections.

DemTech also maintains relationships with technology vendors that specialize in supporting the tool, such as iXiam and CoopSymbiotic.

CiviCRM is not always the right digital organizing tool. NDI looks at a wide array of tools available such as MailChimp, NGP VAN, NationBuilder and Salesforce, and makes recommendations based on the ease of localization of the tool, how the tool has been used by democratic organizations, ease of use and cost. We are always exploring new contact management tools or opportunities to partner with companies that support digital organizing.

DemTech’s mission is to provide tailored support and advice on topics related to the impacts of technology on democracy, the use of technology in democratic development, and applying human-centered design approaches to democracy programming. We believe that digital organizing is a powerful way to empower citizens and strengthen democracy around the world. That’s why we support CiviCRM and other contact management tools that can help our partners achieve their goals.

This blog was originally posted on dem.tools.

Defeating Zoom Fatigue with Open edX

A pen drawing of a woman sitting at a computer looking tired.

Editor’s Note: This post was co-authored with Caitlyn Ramsey and edited with Microsoft Bing Chat.

It’s September 28, 2020 and COVID deaths have just surpassed one million worldwide. And as you watch the news, your boss sends you an email. You’ve been stuck inside for months watching the pandemic, political unrest, and natural disasters unfold with little to no interaction with anyone outside your bubble, and you’re expected to keep working as normal. And as all of your activities, including work, were forced online, you find yourself realizing something you never would’ve imagined: you are fed up with the internet. You have, as it turns out, a severe case of Zoom fatigue.

Zoom fatigue has been an unexpected side effect of the pandemic: individuals are experiencing exhaustion and burnout due to the excessive use of video conferencing. To address this issue, organizations are turning to platforms that promote engaging interactions and enhance the overall experience of remote learning and communication. One such platform is Open edX, an open-source learning management system that supplies a ready-built framework for mitigating Zoom fatigue in programs that deliver training online. Instead of relying solely on video conferences, Open edX enables engaging educational methodologies designed for the internet. Since its founding in 2012, Open edX has been used by a wide range of organizations, from institutions of higher education to major corporations, and even national governments. The platform uses a combination of video lectures, interactive exercises, quizzes, and other tools to deliver course content. The open-source nature of Open edX means that anyone can access and use the software, and modify and improve it as needed, without software licenses or subscription costs.

While the pandemic has abated in most regions (or at least been accepted as the new normal), the pre-pandemic “business as usual” where programming is delivered almost exclusively in-person has shifted permanently. In the post-pandemic world, there is a greater reliance on online training as in-person events are not always feasible and are more expensive. Moreover, air travel is a large contributor to climate change putting pressure on organizations to rethink the sustainability of programming that requires frequent international travel. This shift toward convening online has also contributed to the rise in Zoom fatigue as programs attempted to move their programs out of meeting rooms and into Zoom meetings, without fundamentally rethinking program delivery or design. 

Well before the pandemic, NDI hosted its own instance of Open edX (ed.ndi.org) to offer a wide range of courses aimed at strengthening democratic institutions and promoting citizen participation. These courses cover various topics such as cybersecurity for democracy activists, combatting information manipulation, digital rights advocacy, and best practices for leveraging technology to support democratic development. Some of the courses are self-paced and can be accessed anytime, while others are delivered through virtual classrooms accompanied by live instructors. Additionally, NDI offers customized training programs tailored to specific organizations and contexts. The courses are designed for individuals and groups interested in enhancing their knowledge and skills to effectively engage in democratic processes and advance democratic values.

Recent adopters of Open edX at NDI have used it to turn toolkits and guides that would historically have been published in PDF format into engaging multimedia online courses with integrated features that track learner progress and evaluate learning outcomes.

Open edX courses enable engaging online approaches that yield real learning. This, we’ve found, is something that even the most expertly facilitated Zoom call cannot provide. Courses can include videos, slide shows, text, audio, live broadcasts, or a range of other methods of sharing information. The platform can also facilitate quizzes and evaluations, provide discussion boards and interactive games, and even integrate surveys for post-class feedback. Many people value the credentials that can come with education, so NDI worked to improve the open-source Open edX software to provide elegant, personalized certificates for those who successfully complete a course.

Interest in the online learning platform has recently spiked. Ironically, just as the pandemic is easing, new programs are coming online that are making online methodologies for program delivery central to their approach. This includes the House Democracy Partnership – an initiative of the U.S. Congress supported by the National Democratic Institute and International Republican Institute – which is turning their Legislative Oversight Guide into a series of mini-courses, and NDI’s Movement-Based Parties initiative which is using Open edX to deliver engaging online training at scale.

These new online courses are a positive sign that NDI is moving beyond attempting to deliver programs designed for in-person settings via Zoom. Almost any program that has some educational component can emulate this approach and consider using Open edX to improve its program delivery and learning outcomes. Exceptions may exist in cases where intended learners face high security risks or do not have access to quality internet connections; any online approach could further the marginalization of groups with limited or no access to the internet. If you’re interested in exploring the possibilities of Open edX for your own programs or want to learn more about NDI’s use of the platform, I encourage you to visit ed.ndi.org to see what courses NDI is currently offering and try Open edX for yourself.

This blog was originally posted on dem.tools.

ICT Innovation Is Key to Unlocking Nigeria’s Demographic Dividend

A recent Dalberg report highlights technology-enabled innovations that have the potential to unleash Nigeria’s demographic dividend and help millions of people escape poverty.

Thirty-eight percent of Nigeria’s population is between the ages of 15 and 35. Since Nigeria is the most populous country in Africa, this means the country has 64 million working-age people – the equivalent of the populations of Malawi and South Africa combined. Economists call a large working-age population a “demographic dividend” because a large proportion of the country’s citizens are able to contribute to the economy.
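The arithmetic behind these figures can be checked directly. A quick sketch: only the 38 percent youth share and the 64 million working-age figure come from the report; the total population is implied by dividing one by the other.

```python
# Sanity check on the demographic figures cited above. The total
# population is not stated in the report; it is implied by the two
# numbers that are.
YOUTH_SHARE = 0.38           # share of Nigerians aged 15 to 35
WORKING_AGE = 64_000_000     # working-age Nigerians cited in the report

implied_population = WORKING_AGE / YOUTH_SHARE
print(f"Implied total population: {implied_population / 1e6:.0f} million")
```

This comes out to roughly 168 million, consistent with estimates of Nigeria’s population in the early 2010s.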

Unfortunately, favorable demographics do not necessarily translate into more rapid economic development. A young population also puts pressure on many social systems – the food system must expand to feed a growing population, and the education system must be capable of preparing millions of minds for a rapidly shifting job market. The Dalberg report sees great potential in Nigeria’s telecommunications sector to improve the country’s competitiveness in these two key areas.

Technology and innovation are driving forces behind economic growth around the world, and Nigeria is no different. In 2012, 30 percent of Nigeria’s GDP growth was attributed to information and communications technology (ICT). In a country where nearly 60 percent of the population lives on less than one dollar per day, two-thirds of the total population has an active mobile phone subscription.

Dalberg identified a number of ICT solutions focused on giving teachers tools that enable them to deliver quality education to an increasing number of students. EduTech is designed to deliver educational material to university students through customized tablets. English Teacher, an initiative of Nokia and UNESCO, provides pedagogical advice to thousands of Nigerian teachers through daily messages. Bridge International Academies is a chain of low-cost primary schools that provides educators not just with a well-designed curriculum and educational materials, but also administrative systems to minimize overhead and help track educational outcomes.

Agriculture is also an important sector of the Nigerian economy. Seventy percent of Nigerians are employed in agriculture and the sector accounts for 42 percent of the country’s economic output. However, Nigerian farm yields are far below the global average. According to Dalberg, “Only four of Nigeria’s 29 most cultivated crops by area harvested (cashew nuts, yams, melon seed, and cassava) are in the top quartile of global yields.”

ICT has the potential to improve the enabling environment for Nigeria’s farmers in everything from improving market access to educating farmers about agricultural best practices. Dalberg highlights three such innovations. The Nigerian Ministry of Agriculture has developed an e-wallet to make agricultural subsidies more efficient and transparent. MoBiashara improves access to inputs, such as fertilizer, by creating a market for farmers to compare prices and check local inventories via text message. iCow, an innovation out of Kenya, provides farmers advice on raising cows and chickens throughout the lifecycle of their animals.

Innovative use of ICT is already having a positive impact on Nigeria’s agriculture and education sectors. These examples are just a few of the many innovations that are driving growth. Providing the foundation for these technologies – through improved cellular networks and electrical grids – will be key to unlocking Nigeria’s demographic potential.

A renewable boost to the Internet cafe

Development projects come and go. They are replaced, neglected, restored, discarded, rejuvenated, and/or dismissed. The ruins of past development projects littered the community of La Plata in Bahía Málaga: the remnants of a concrete pedestal that had been used to elevate a rainwater collection barrel, a run-down and un-utilized school latrine, and a solar panel that had been abandoned after another project left the satellite phone it powered irrelevant.

The new project was an Internet kiosk built by Compartel – an initiative of Colombia’s Ministerio de Tecnologías de la Información y las Comunicaciones (MinTIC). While I was able to benefit from access to the kiosk all summer, it was officially inaugurated just this month to “give 110 families access to Internet without having to travel to the urban center of the municipality.”

Without a doubt, this project will have a lasting and significant impact on the community, providing daily Internet access and phone service and overcoming the nearly non-existent cell phone signal (I had to stand on the dock in order to make or receive phone calls).

The kiosk does have a few limitations, however.

Compartel "Vive Digital" Internet Kiosk

Solar Panel Installed on the Internet Kiosk in La Plata-Bahía Málaga

First, the available bandwidth can barely handle one computer playing a YouTube video, let alone five Ubuntu computers with children playing flash-based Internet games. For basic applications like checking email or using Facebook the kiosk worked just fine, but as soon as more than one computer began to use data-heavy websites the whole system became unusably slow.

Second, while the kiosk uses WiFi instead of Ethernet cables, the password was strictly controlled so that the operators could charge for access and recoup some of the costs of operating and administering the kiosk. This was a bigger problem for me than for the average user: it meant I could not connect other computers, tablets, or smartphones to the Internet, forcing me to use the limited capabilities of the five Ubuntu computers and only the programs that came pre-installed (install rights had been restricted). Luckily for me (though not for the bandwidth), Ubuntu lets you uncover the WiFi password in the network settings, and I was able to connect my computer and other devices to the Internet (shhh, don’t tell Compartel).

Finally, the kiosk depends on electricity generated by the community’s gasoline-powered generator, which only runs from 6-10pm every day (the official hours of operation posted outside the kiosk were 3-9pm). For me, this meant that I could only use the Internet during peak bandwidth-usage time or steal an hour here or there when the kiosk was running on battery power. Luckily, a few weeks into my field placement, a worker from Compartel came to decommission the solar-powered satellite phone. He took only the microwave transmitter, leaving the solar panel, cables, power inverter, and battery (everything we needed to jerry-rig a solar system for the Internet kiosk). After some amateur electrical engineering, and some acrobatic rooftop maneuvers by Santiago (the administrator of the digital Kiosk and my supervisor for my field placement), we managed to install the panel on the roof of the kiosk. But after attaching the panel to the system we got…nothing.

The power inverter that came with the system only put out 100 watts, enough to power a lightbulb or charge a basic Nokia phone, but not enough to power the satellite dish and wireless router. Santiago did a little searching and came up with another power inverter (this one put out 300 watts) and voilà: six more hours of Internet a day. The solar panel could not charge all five Ubuntu computers, but with direct sunlight during the morning hours it was more than enough to power the Internet. The extra six hours of Internet time allowed me to use the full bandwidth during off-peak hours and complete a new website for Ecomanglar (the main deliverable of my summer field placement).
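The inverter swap can be reasoned about as a simple power budget. A rough sketch: only the 100 W and 300 W inverter ratings come from the story; the load wattages below are assumptions for illustration.

```python
# Rough power budget for the kiosk's solar setup. The inverter ratings
# (100 W and 300 W) are from the text; the load wattages are assumed.
loads_w = {
    "satellite modem and dish": 150,  # assumed draw
    "wireless router": 15,            # assumed draw
}
total_load_w = sum(loads_w.values())

for inverter_w in (100, 300):
    verdict = "sufficient" if inverter_w >= total_load_w else "insufficient"
    print(f"{inverter_w} W inverter: {verdict} for a {total_load_w} W load")
```

Under these assumed loads, the 100 W inverter falls short while the 300 W unit has headroom, which matches what we observed in practice.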

 

With 1,000 Days Left to Reach MDGs, A Look Back and Forward

Blog originally posted on the Millennium Villages website.

The 1,000-day milestone to achieve the Millennium Development Goals (MDGs) was on the minds of presenters and audience alike at the Earth Institute’s Sustainable Development Seminar. The seminar gathered professors Jeffrey Sachs, Prabhjot Singh, and Vijay Modi to take a critical look at how far the Millennium Villages Project (MVP) has come in the eight years since its founding and analyze what still needs to be accomplished.

Sachs kicked off the seminar with an overview of the MVP, which he described as showing a pathway to achieve the Millennium Development Goals in very poor settings in sub-Saharan Africa.

Given the time-bound nature of the goals, Sachs noted, “part of our self-assignment in this project is to run, to hurry, to try to meet a timetable, to try and promote action.” In a project like the MVP, where the goal is to break the cycle of extreme poverty, Sachs argued, “it’s better to try and miss than to slow down and not try.”

The MVP built on the epistemic community’s knowledge of development best practices and initially started with the implementation of quick wins – such as long-lasting insecticide-treated bed nets and improved agricultural inputs to boost crop yields. The quick wins, however, while important, are only part of the equation. As the project moved forward, ideas about how to meet the MDGs evolved along with the Millennium Villages themselves.

Sachs described the next phase of the MVP as falling into four categories: moving from demonstration to design, expanding beyond interventions to systems-based approaches, harnessing the unprecedented expansion of information and communications technology, and integrating public investments with business.

This next phase has created an environment of innovation in the MVP that fosters new approaches to development. The health sector, in particular, has experienced a sea change.

Singh explained that moving to a design and systems-based approach forced the MVP to rethink the delivery of healthcare in poor, rural settings. Improved primary health facilities, the project realized, only get you about half the way to achieving better health outcomes due to constraints on access.

Community health workers (CHWs) extend the reach of primary healthcare systems, expanding access for the rural poor. The growth of mobile telecommunications has allowed the MVP to develop platforms that enable managers to monitor the CHWs they oversee in real time. Actionable data not only empowers managers and health workers, it provides critical information on how to improve the health system and make it more adaptive.

CHW programs have been implemented across the Millennium Villages, but they must be scaled across Africa in order to have a measurable impact on global development. The One Million Community Health Worker campaign aims to do just that.

With the 1,000-day MDG countdown underway, many countries are still far from achieving the MDGs, but new approaches to development born from the MVP have put ending extreme poverty within reach.

Canvassing for development

The New Media Taskforce here at SIPA is holding an “Innovating Mobile Tech for Development Competition,” where students are given the chance to pitch their idea for innovative mobile applications that seek to address specific political, economic, or social needs in international development to a panel of industry judges. Here is the idea that I may submit:

Village Well, Jombo village, Malawi by Flickr user Bread for the World

One of the major failures of the Millennium Development Goals (MDGs) is a lack of timeliness and completeness in the data measuring progress toward the goals. Dr. Jeffrey Sachs wrote in the Lancet:

One of the biggest drawbacks of the MDGs is that the data are often years out of date. Accurate published information from the past 12 months is still not available for most low-income countries. This timelag was inevitable when data were obtained by hand in household surveys, but in the age of the mobile phone, wireless broadband, and remote sensing, data collection should be vastly quicker.

Dr. Sachs is spot-on in suggesting that mobile technology will make data collection more rapid, but I would also contend that mobile-enabled crowdsourcing will increasingly make traditional statistical surveys irrelevant. This is already happening in the arena of American politics. President Obama’s canvassing app enables citizens to volunteer their time to help register voters, build a massive database of registered voters, and ultimately turn voters out to the polls on election day. The app uses your location to suggest nearby households to visit and questions to ask when you get there. I believe the same model can be applied to the realm of international development.

Let’s say you have a database of 1,000 water projects spread across Malawi. You know the locations of the projects but do not have the resources to send an employee to monitor them on a regular basis. Water For People has built a platform called FLOW that enables field workers to monitor water projects using a mobile app. While replacing pen and paper with a smartphone and an Internet connection is a significant step forward, I believe FLOW still doesn’t take the concept far enough, because its capacity is limited by its reliance on paid professionals to conduct the surveys.

The next-best thing to a trained monitoring professional would be a citizen armed with a smartphone. Opening the app, the citizen would see a map of water projects in their immediate vicinity. They would then “check in” at a water project and complete a simple survey about its condition. Expanded further, this app could potentially supplement or replace the statistical surveys currently used to track progress toward the MDGs. And because the data would have no time lag, it could be used to identify regions that require intervention in real time, such as a village with an abnormally high maternal mortality rate.
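The core of the proposed app, surfacing projects near the citizen’s location, could be sketched with a simple great-circle lookup. A minimal illustration: the project names, coordinates, and five-kilometre radius below are all made up.

```python
# Minimal sketch of the "nearby projects" lookup described above,
# assuming each project is stored as (name, lat, lon).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby_projects(projects, user_lat, user_lon, radius_km=5.0):
    """Return project names within radius_km of the user, nearest first."""
    hits = [(haversine_km(user_lat, user_lon, lat, lon), name)
            for name, lat, lon in projects]
    return [name for dist, name in sorted(hits) if dist <= radius_km]

# Hypothetical project database.
projects = [
    ("Jombo village well", -15.80, 35.05),
    ("Chileka borehole", -15.68, 34.97),
]
print(nearby_projects(projects, -15.79, 35.04))  # → ['Jombo village well']
```

Once the citizen checks in at a project returned by this lookup, the completed survey would be uploaded with its coordinates and timestamp, which is what makes the real-time, no-time-lag monitoring described above possible.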

Effectively, crowdsourced development data could turn the MDGs from an out-of-date snapshot of past development status into a tool for development practitioners and governments to detect issues with development while they are still relevant and actionable.