Four Ways to Make USAID’s Digital Agriculture Ecosystem Assessments More Locally Led and Inclusive

As a Virtual Student Federal Service intern with USAID’s Bureau for Resilience and Food Security (RFS) for the past seven months, I completed several projects aimed at strengthening RFS’s digital agriculture ecosystem assessments. This included a thorough analysis of the assessments themselves, focusing on how effective their interviews and online surveys have been at gathering the perspectives of people from all sectors and demographics in each country. Alongside this work, I analyzed Digital Ecosystem Country Assessments (DECAs), as well as reports from other organizations such as the Gates Foundation and the World Bank, to identify assessment modes and tactics that could make RFS’s assessments stronger. One specific area of focus was the set of survey questions from the 2022 digital agriculture ecosystem assessment reports for Haiti and Honduras, which I updated to better capture how local populations feel about these developments. I also developed a summary of recommendations across the digital agriculture ecosystem assessments, organizing them by whether they called for a partnership with another organization, a program to be implemented or a process change within USAID itself. Using this research, analysis and new structure, my final task was to create a best-of-the-best assessment outline, including how recommendations should be structured and how the assessment methodology should be explained.
Three Overarching Gaps in Past Assessments
Through this work, I chose to focus on three major gaps in the assessments: approaching interviews with a local engagement angle, understanding people’s feelings toward new digital developments and increasing female and youth representation.
While reading the DECAs and older digital agriculture ecosystem assessment reports (prior to 2022), it was clear that some groups of people, especially those from the farming sector, were uncomfortable meeting with representatives from the United States alone. For example, both the Niger assessment, whose entire team was based in the United States, and the Nepal assessment, whose team included both Nepali and U.S. representatives, noted that a large portion of prospective interviewees had to be excluded because they were uncomfortable engaging with people from the United States. Further, some reports explained that certain communities are not accustomed to purely business-focused meetings.
Another important area was understanding the general sentiment people had about digital technologies. Throughout both the digital agriculture ecosystem assessments and the DECAs, survey and interview questions related to projects being implemented at the time, such as how much they cost and what returns they generated. Questions about individuals focused on their job responsibilities or how they used digital technology. These are important questions; however, this angle cannot be the sole focus of interviews and surveys. It is also necessary to learn how people actually feel about new digital technology developments.
Further, these sentiments need to be gathered from all demographics, particularly women and youth. Across the assessments I reviewed, only the DECAs specified the gender of interviewees; the digital agriculture ecosystem assessments did not disaggregate by age or gender at all. Even when the DECAs reported on gender, the majority of interviewees across sectors were male: the Uzbekistan assessment, for example, interviewed 23 women and 59 men, and the Nepal assessment interviewed 21 women and 52 men. None of the assessments disaggregated by age, which made it impossible to gauge generational differences in perspective.
Some Suggested Improvements for Future Assessments
Given the above, there are four key ways that I believe the digital agriculture ecosystem assessments can be strengthened in the future.
First, there needs to be a greater emphasis on partnering with local organizations to better understand these communities, gain their trust, and integrate local country representatives directly into the interview process. This new approach should become a standard baseline for all assessments.
Second, I propose adjusting survey questions to include a focus on respondents’ feelings about technology. In addition to questions like, “For which use case(s) have you used, are you using or do you plan on using digital services in your work?” I suggest adding, “What are some challenges you face that you would like to see (better) addressed with digital tools in the future?” and “What are your opinions on using technology for farming?” Without understanding the mindset of local ecosystem actors, teams may enter assessments with assumptions about how the community feels, and those assumptions can skew the rest of the questionnaire; for example, a team may assume that all problems are already being addressed, or that all farmers want technological help with their farming, when that may not be the case. We need to first understand how local communities truly see these developments, without letting biases exclude questions that could open up completely new perspectives. With this information, future recommendations can be designed to truly help local farmers, based on their needs, wants and aspirations.
Third, in countries where the role of women is seen as different from that of men, it is important to understand women’s perspectives, because their experiences can differ dramatically at every level, whether at home in a farming community or in a government job. Just as important as gender representation is youth representation. Young people are the generation that will take over and make use of these technologies; already today, many youth are the ones teaching their parents how to use new technologies, like farming apps and other data tools. It is necessary to understand how they feel about these developments and what they want to see change in the future because, ultimately, these tools will be theirs to use. Their opinions must be taken into consideration when shaping future programs.
Finally, it is important for assessment reports to explain their limitations related to the above, as well as any other limitations relevant to understanding and implementing the report’s recommendations. Current limitations sections focus heavily on operational problems, such as travel restrictions limiting geographical representation and COVID-19 limiting access to and engagement with certain participants. In addition to this information, there needs to be more explanation of foundational and societal limitations. For example, the Haiti assessment report described only operational difficulties, saying that “a four-week lockdown instated to reduce the threat of COVID-19 restricted our ability to conduct KIIs [key informant interviews].” While in some areas these limitations are slowly being elaborated on, there is room for improvement. It is not enough to note that limitations occurred; reports also need to address how those limitations affected which perspectives were included and excluded. The Kenya DECA, for example, did mention that certain stakeholders who were less comfortable meeting with U.S. government representatives were excluded. However, it did not expand on which groups most of them fell into: Women? People from certain geographical areas? Limitations sections need more specificity, explaining whose perspectives may be missing and whose are actually represented in the assessment.
The recommendations I have made on interview approaches, survey questions and youth and female inclusion may take time to implement fully. While these changes are (hopefully) being developed and incorporated into future assessment methods, assessments should also document the limitations caused by the lack, or limited reach, of their implementation.