Measuring Non-Online Distance Learning Modalities


>> Saima Malik: Hello, everyone. Welcome to our webinar. We are really thrilled to have all of you here today. This is the Navigating the Distance Learning Roadmap: Monitoring and Evaluating Radio/Audio and Video/Television Programming webinar. As you enter, you might see that there were some polls up and I hope you've had a chance to respond to those polls. My name is Saima Malik.

I am a Senior Research and Learning Advisor focusing on foundational skills at USAID Center for Education. We're really, really glad to have you on our call today. Before we get started, I wonder if I could hand it over to our producers to go over some of the [audio cuts out] logistics of this call. So over to our producer. >> Emma Venetis: Thank you, Saima.

Welcome, everyone. We're happy to have you here today. I want to just draw your attention to a few of our Zoom tools today.

Only our presenters will be on audio today. So I want to bring your attention to the reactions button at the bottom toolbar. Please feel free to provide any feedback using those icons throughout the session. At the bottom of your screen, you'll see a live transcription button. If you would like to view closed captioning, click on that button and then select Show Subtitles.

To send a message to the group, click on the chat button on your toolbar. This will open the chat window, and you can type a message into the chat box. If you have any questions during the session, please put them in the main chat. To send a message to a particular person, click on the drop-down menu next to "To," and select the person you'd like to chat with.

And then back to you, Saima. >> Saima Malik: Thank you so much, Emma. So again, welcome everyone. Once again, my name is Saima Malik. I'm a Senior Research and Learning Advisor focusing on foundational skills at USAID Center for Education here in Washington, DC.

I have the pleasure of welcoming you this morning to our webinar. This is a very important topic. I know that all of us on the call have been grappling with some of the issues that we're going to discuss today.

For us at USAID Center for Education, as soon as COVID-19 happened and school closures began, we started having a lot of questions. Folks were thinking how to pivot in-person teaching and learning to remote modalities. And so, there were some folks who had not done this work before at all.

And there were questions around: how do we do this? What modalities do we use? What are even the terminologies that are used? And so, at the Center for Education, we began to put together some resources and guidance tools for our own mission staff, but also for partners -- anyone on the ground who was doing education work and would find this information useful. One of those deliverables was developed with the help of some experts in the field, including those on the call today: Dr. Emily Morris, Anna Farrell, Amy Mulcahy-Dunn's team at EnCompass, and Yvette Tan, among others. They worked really closely with our team at USAID Center for Education, led by Rebecca Rhodes, who is also on the call and who was our point of contact -- sort of the lead for distance learning within our office.

We worked to put together these resources, and we have been disseminating them to everyone via various formats and platforms. The one deliverable that we're going to discuss today was produced in response to a lot of questions folks were asking once they had begun to implement distance learning on the ground. The very next question was: well, how do we measure what it is that we are implementing? How do we make sure that we are reaching the learners we wanted to reach? How do we make sure that we are actually having an impact, and is learning occurring? The guidance note that was put together in response to these questions is a very useful tool, The Roadmap, which we have available on our Edulink site. And I'd like to request that Emma drop that link in the chat for everyone. If you haven't had a chance to look at this document, I would highly encourage taking a look. We're going to be talking about this document today and some of the very important findings and guidance that come out of it.

This document provides -- thanks, Emma, for doing that -- a really nice review of some of the tried and tested ways that people have been measuring distance learning for decades in places where distance learning was not a new thing; in many contexts, this had been happening for years and years. We also put together a review of some of the innovative strategies that folks were using to measure things like the reach, the engagement, and the outcomes of distance learning programming. Very importantly, the authors and team came up with a roadmap that provides step-by-step guidance to teams on how to go about doing this work. What are the things that you need to have in mind? What are the questions teams should be asking themselves about measurement of distance learning activities? We're going to talk a lot more about that through our session today.

We asked you all to send in your questions. We've been getting a lot of questions from our own USAID mission teams on the ground, and we've been hearing questions from partners. Thanks to everyone who submitted their questions. We have a team of expert panelists who will discuss and respond to those questions on our call today. Right at the top, I did want to make one disclaimer.

This discussion that we're having today and The Roadmap itself were, yes, produced with USAID funding, but they are not specific to USAID-funded programs. This work, and this webinar, are for anyone out there who is doing this work -- you do not have to be affiliated with a USAID-funded program. If you are thinking through how to measure, evaluate, and learn from distance learning programming, this is for you.

So with that, what I would like to do first and foremost is to share with you all a video that the team has put together, which helps to distill some of the key findings that come out of The Roadmap. We'll go ahead and take a look at this video first, and then I will introduce you to our key speakers for this morning. So I'd like to ask our producer Emma to please show that video. [ Music ] >> Narrator: The Roadmap for Measuring Distance Learning provides strategies for capturing reach, engagement, and outcomes. This video shares three key strategies and provides an example of how to apply these strategies to one distance learning modality, video programming.

As countries around the world closed learning institutions in response to the COVID-19 pandemic, there was a surge in distance learning initiatives. Distance learning is commonly used to reach learners who need flexible learning opportunities, such as working youth or where schools and learning centers cannot be routinely and safely reached. When implemented intentionally, distance learning can expand learning opportunities. What is distance learning? Distance learning is broadly defined as teaching and learning where educators and learners are in different physical spaces. Often used synonymously with distance education, distance learning takes place through one of four modalities, radio or audio, television or video, mobile phone, and online learning platforms. Printed texts frequently accompany these modalities and can also be a fifth modality in cases where technology is not used.

Distance learning can serve as the main form of instruction or can complement or supplement in-person learning. As countries and education agencies take up distance learning, it is important to design and implement evidence-based and intentional strategies for monitoring and evaluation. This video outlines three interconnected strategies for measuring reach, engagement, and outcomes in distance learning.

These include integrated remote and in-person data collection, multi-modal technology interfaces, and mixed methods data collection. Using a combination of these strategies will help ensure quality, equitable, and inclusive data.

Key strategy number one. Integrated remote and in-person data collection. Remote data collection provides timely data on reach and engagement and can be used when in-person data collection is not feasible. In-person data collection is preferable for measuring outcomes including attitudes, beliefs, and behaviors, and where building rapport with learners is critical. Combining remote and in-person data collection enables more frequent, responsive, and systematic data collection in emergency and non-emergency contexts.

One example of integrated data collection is measuring learner engagement in video programming using both in-person and remote methods. Key strategy number two. Multi-modal technology interfaces. Technology interfaces include phone or video calls, interactive voice response, text messages, social media groups, paper, images, video and audio recordings, learning management systems, and educational apps, programs, or games. Measuring distance learning through multiple technology interfaces helps reach a wider group of participants, including those with limited access to technology and connectivity. Interfaces should be selected based on technology and device access, accessibility needs, connectivity, and demographics of the users.
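That selection logic can be made concrete in a few lines of code. The sketch below is purely illustrative -- the profile fields and the ordering rules are assumptions, not anything prescribed in The Roadmap:

    # Hypothetical sketch: choosing data-collection interfaces per respondent.
    # Field names and rules are illustrative assumptions, not Roadmap guidance.
    def select_interfaces(profile):
        """Return candidate interfaces, ordered from most to least connected."""
        interfaces = []
        if profile.get("smartphone") and profile.get("internet"):
            interfaces += ["educational_app", "social_media_group", "video_call"]
        if profile.get("basic_phone"):
            interfaces += ["sms", "interactive_voice_response", "phone_call"]
        if not interfaces:          # no device access at all: paper fallback
            interfaces.append("paper")
        return interfaces

    print(select_interfaces({"basic_phone": True}))
    # ['sms', 'interactive_voice_response', 'phone_call']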

One example of using multi-modal interfaces for measuring engagement in video programming is combining phone calls, social media groups, educational apps, and paper-based protocols to collect data. Key strategy number three. Mixed methods data collection.

Quantitative data collection methods include surveys, tests and assessments, teaching and learning analytics, and observations. Qualitative data collection methods include qualitative observations, focus group discussions, interviews, and participatory and arts-based research. Combining quantitative and qualitative methods allows for deeper analysis and provides greater opportunity to measure intended and unintended reach, engagement, and outcomes.
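As a concrete, and entirely hypothetical, illustration of that pairing, a learner's record might carry quantitative measures and qualitative codes side by side; the field names below are invented for the sketch:

    # Illustrative only: one record joining quantitative measures (survey score,
    # viewing analytics) with qualitative focus-group themes for deeper analysis.
    from dataclasses import dataclass, field

    @dataclass
    class LearnerRecord:
        learner_id: str
        survey_score: float = 0.0        # quantitative: survey or test
        episodes_viewed: int = 0         # quantitative: learning analytics
        focus_group_themes: list = field(default_factory=list)  # qualitative codes

    record = LearnerRecord("L-001", survey_score=0.72, episodes_viewed=9,
                           focus_group_themes=["high engagement",
                                               "shares TV with siblings"])
    print(record)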

One example of using mixed methods for measuring engagement in video programming is combining surveys and learning analytics data with focus groups and voice commentaries. Using a combination of integrated remote and in-person data collection, multi-modal technology interfaces, and mixed methods data collection will help ensure quality, equitable, and inclusive data. To bring our example of measuring engagement in video programming together, we integrate remote and in-person data collection and combine multi-modal technology interfaces and mixed methods by using: in-person focus group discussions and paper-based surveys; remote phone surveys, recorded voice commentaries, and social media group conversations; and learning and engagement analytics captured through educational apps. For more examples and case studies, and to learn more about the steps in measuring reach, engagement, and outcomes, visit The Roadmap for Measuring Distance Learning.

>> Saima Malik: Thanks so much for that, Emma. So we just wanted to start the webinar off today with that video. So you all know that this resource is available for you. We are going through our clearance processes.

As soon as we are able to share the link to this video on Edulink, we will be very happy to do so -- so please keep an eye out for that. Before heading over to our speakers this morning, we're going to talk a bit more about this resource, The Roadmap for Measuring Distance Learning. I'd like to quickly thank those of you who responded to the poll questions that popped up as you were entering the call.

It seems like a majority of our participants this morning have an intermediate level of experience with distance learning, which is great -- 57% intermediate and 43% beginner. And then, also just quickly to share that we've got a lot of folks on the call who are excited about phone, television, and radio.

And we'll talk a lot more about those modalities on the call. We have some experts in this field who will be answering questions that you all have. So once again, thank you so much for joining today.

Please feel free to ask questions as they come up. I know that you all shared some questions beforehand, and we do have experts on the call who will respond to those. If other questions come up as we're going through the discussion today, please type them into the chat, and we'll make every effort to respond to them there as well.

Thanks so much. I will now hand over to Rebecca Rhodes, who is our Center for Education lead on the distance learning work. And Emily Morris, who is the lead author of The Roadmap document that we're going to be talking about today. And they're going to walk us all through The Roadmap itself and some of the key findings. Over to Rebecca and Emily.

>> Rebecca Rhodes: Great, thank you and welcome everyone. It's really nice to be here with everybody this morning and to see such great participation. And we're really excited about this resource, about The Roadmap, and about what we can all learn by using it, since all of us have found ourselves in the year of COVID, you know, deeply involved in some version or consideration of distance learning. So as we've said, we're going to take just a short moment to walk you through key parts of this document. Saima, can I ask, if you haven't already, that you drop the Edulink link to the document in the chat?

We want to thank again Dr. Emily Morris and her whole team. She really worked with an extensive number of folks to get this produced -- Anna Farrell, Yvette Tan, and others. So really, Emily, thank you for being the lead on this and for assembling such a great team of helpers to produce it. We can go ahead to slide eight, the slide that shows the data about the questionnaire responses from the registrations.

So when you registered, you were asked about your questions and your areas of interest or concern vis-à-vis monitoring and evaluating distance learning. As you can see from the data here, the team on the call took the time to analyze those responses. What we realized is that a lot of the focus of the questions has been on outcomes, right? There's a lot of interest out there in how to measure outcomes from distance learning applications, and to some extent the monitoring of reach or the monitoring of engagement.

But I think people in answering our questionnaire really wanted to know, you know, what is the impact? How can I, you know, have that measurement of improvement from my distance learning work? The top three challenges cited in that data were measuring outcomes, as I just said, meaning changes in learning or changes in knowledge; then measuring something about engagement and capturing actual use; and then something about outcomes measurement -- so, you know, how to do the measuring. And then, quality of programming, which we'll get back to. I think we just want to flag that that's actually, you know, a whole concept that encompasses, to some extent, these key aspects of reach, engagement, and outcomes.

And you're going to hear us, in talking about this document and in talking about distance learning, keep coming back to that framing, right? You can measure reach. You can measure engagement. You can measure outcomes. But we did understand from the questionnaires you turned in that your main interest was around the outcomes. Next slide, please. So I like this slide a lot.

Here you have the map of the world. And what this shows you is the countries from which Dr. Morris and her team, and the writers of this document, were able to conduct interviews, look at case studies, or gather information and data about current efforts to produce distance learning and then monitor and evaluate it. So the team worked with about 25 experts and implementing partner groups. And roughly what they asked was: what work are you doing in distance learning right now? And then, how are you collecting data about how that experience is going? So Dr. Morris and Anna Farrell, as I've already mentioned,

really tried to create this roadmap based on current practice and current innovations. And in the back of The Roadmap -- and I really think it's a rich, rich resource for you -- there are 13 fully mapped case study write-ups that show you the different practices in measuring distance learning, both across the modalities and also potentially across the sub-sectors of education. So this is really based on a lot of rich information coming from the field. This is much more than just a simple literature review of documents. Next slide.

So ta-da! Here it is. This is The Roadmap. It is indeed a roadmap -- a little chart with eight steps. And the idea is that if you are working to design your monitoring, evaluation, and learning plan to accompany a distance learning intervention, and you take yourself or your team through these eight steps, you will probably end up with a fairly robust data collection plan that meets the precise needs that you have in your context for understanding learning and reporting.

So you'll see it goes from left to right and around a little set of 90-degree turns. You'll see steps one through eight listed on the slide there. What you will find is that in this webinar, we will talk about steps one, two, three, and four in some detail.

And five, six, seven, and eight are in the document and for you to read. But what we want to really emphasize is that if you are thoughtful and methodical about steps one, two, three, and four, steps five, six, seven, and eight are going to sort of more or less carry themselves forward and hopefully fall out well for you. So let's go ahead and go to the next slide and start in with the steps.

So this slide gives us step number one, which I will not read to you because you can read it. But step number one is to look at the key internal and external objectives that you likely have when you're creating your MEL plan for your distance learning work. And this is necessary because it's important to know how to calibrate your MEL to the data that is most appropriate for you.

So in that nice two-by-two grid that you have there, internal to what is happening in your programming work, you can of course be aiming to collect data that could inform your program management; that could inform how you are conducting the teaching and learning in the program; that can help you adapt the program; and that can help you learn the aspects that are going to be required for the sustainability of the program. And then, that can give you a basis from which to speak about accountability in terms of what the program is doing and achieving over time. And then of course, external to your own programming, you may decide that you want the data you collect to feed into the general global knowledge base, like the case studies that Dr. Morris and her team looked at.

You may decide that you want to focus on certain accountability purposes like costing or scaling or replicability, et cetera. So step number one is really about defining the objectives, both internal and external, of my monitoring, evaluation, and learning effort. Next slide, please. So on these slides, we want to point out that in The Roadmap document itself, each step comes with a set of key questions that you'll want to think through and answer, and then a key recommendation. So you can see this example here from step one of The Roadmap.

These are the key questions that you will be asking yourself to complete step one. And then the key recommendation that we would offer you based on Emily's and her team's review of all of the case studies and experiences that they've looked at. So you're going to want to look at, you know, why am I measuring distance learning? How will I use my data? Who is my audience for this evaluation? And how will my MEL activities and plan align with my intentions here and my objectives for what I'm measuring? And then from there, you'll be able to make a plan. Emily, over to you.

>> Emily Morris: Thanks, Rebecca. The second step in The Roadmap is to determine what will be measured. And so, we provide a number of metrics and a number of ideas.

But this is really based on the context, the modalities, and the intended users in the situations that you're working in. We do create three categories: reach, engagement, and outcomes. Reach being access to technology devices, infrastructure, and connectivity. Engagement being participation in and use of programming, needs in programming, and satisfaction of users. Outcomes being changes in knowledge, skills, attitudes, behaviors, and teaching and learning.

The Roadmap really lays out these metrics, so we encourage you to go there if you want to look deeper at them. But we also want to emphasize that reach, engagement, and outcomes are all critical: programming should have reach, engagement, and outcome measures.

And in order to understand the quality of programming, these are all essential. Quality cannot be measured with outcomes alone; we need to understand reach, engagement, and outcomes across the board. Next slide, please. Some of the key questions that emerged for this step are: how are reach, engagement, and outcomes measured? We provide some guidance on that.

What are some examples? How can teams build these into their plans and designs? What kinds of equity analysis should be considered? And we provide some key recommendations about using measures that are feasible for the context in which you're working and the modalities with which you're working, but also about identifying who is being reached and engaged and who is not. Before I go to step three, we also just want to draw attention to one thing: for those of you working with USAID-funded programs, there is a standard foreign assistance indicator in the making. It will really look at reach -- the percent of learners regularly participating in distance learning programming with USG education assistance.

This indicator focuses on and emphasizes reach, because in order to unpack engagement and outcomes, it is really essential to understand who distance learning is reaching. More information will be shared when the indicator is finalized.
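While the official definition is pending, the arithmetic behind a reach indicator like this is simple to sketch. In the illustration below, the threshold for "regularly participating" is a placeholder assumption, not the finalized definition:

    # Sketch of a reach calculation. The "regularly participating" threshold
    # is a placeholder assumption, pending the finalized indicator definition.
    def percent_regularly_participating(sessions_attended, expected_sessions=10,
                                        threshold=0.5):
        regular = sum(1 for s in sessions_attended
                      if s / expected_sessions >= threshold)
        return 100.0 * regular / len(sessions_attended)

    # e.g., learners who attended at least half of 10 broadcast sessions
    print(percent_regularly_participating([10, 6, 4, 0, 8]))  # 60.0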

Step three is determining how the data will be collected. The Roadmap includes a decision tree that walks users through key decision points about whether data should be collected remotely or in person -- and ideally, an integrated remote and in-person data collection approach can be taken. Next slide. Some of the key questions that we help answer are: should data be collected in person or remotely? Hence this decision tree and these decision points.

But also: what key considerations -- safety of teams, access to technology, infrastructure, feasibility -- should be taken into account? What equity considerations, such as reaching marginalized groups? And which technology should be used to collect data? Some of our key recommendations are to use integrated in-person and remote approaches where feasible, and to ensure that marginalized individuals are included in data collection, whether that's remote or in person. Step four is determining approaches for measurement.

And so, we provide some guidance on what these approaches could be. And we encourage both quantitative methods like surveys and tests, analytics, quantitative observations. But we also encourage qualitative approaches, qualitative observations, focus groups, interviews, participatory methods.

And so, depending on whether you're taking a formative or summative approach to assessment, having a variety of quantitative and qualitative methods helps capture what you're trying to capture. Next slide. Some key questions in determining the approaches are: what methods can be used? What technologies and interfaces should be used? And what kinds of equity analysis, again, should be conducted? One of the key recommendations is using mixed methods to collect data.

And going back to step one, align those to the aims, the objectives, and the questions that you're trying to answer and the data that you're trying to gather. But also ensure that data collection efforts are not further marginalizing participants. As we've seen during COVID, we have to be extra careful to design monitoring, evaluation, and learning plans and intentions that take marginalized groups into consideration throughout the process.
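One way to keep steps one through four aligned is to write them down together before any data collection begins. A minimal sketch of what such a plan skeleton could look like -- the entries are hypothetical examples, not required fields:

    # Hypothetical MEL plan skeleton tying objectives (step 1), metrics (step 2),
    # collection mode (step 3), and methods (step 4) together in one place.
    mel_plan = {
        "objective": "program adaptation",                    # step 1
        "metrics": ["reach", "engagement", "outcomes"],       # step 2
        "collection_mode": "integrated remote + in-person",   # step 3
        "methods": {                                          # step 4: mixed methods
            "quantitative": ["phone survey", "learning analytics"],
            "qualitative": ["focus group discussion", "interview"],
        },
        "equity_checks": ["who is not being reached?",
                          "device access by gender and location"],
    }
    for step, value in mel_plan.items():
        print(step, "->", value)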

So this is our roadmap. And as Rebecca and Saima mentioned, The Roadmap is on Edulink, so please visit that if you want to explore more of these first four steps. I am now going to introduce each of the experts who have joined us today.

And then they will have a chance to share some of their experiences in monitoring distance learning, measuring, and monitoring distance learning. So we have Cliodhna Ryan, who's joined us. And Cliodhna Ryan is Head of Educational Research at Ubongo, Africa's Leading Creators of Edutainment. After graduating with a degree in education, she worked as a primary school teacher in Ireland bringing play-based learning to her classes of early grade learners. She has also pursued advanced degrees in education and specializes in social-emotional learning and systems-level change.

This led her to her work with UNICEF in Tanzania supporting the development of tools to assess life skills learning, and recently to joining Ubongo. So we welcome Cliodhna, and thank you for joining us and sharing your experiences working with television, mobile phone, and edutainment in Ubongo's programming. We also have Shelley Pasnik joining us. Shelley is Senior Vice President of Education Development Center, a global nonprofit that focuses on education, health, and economic development.

Her research on children and technology provides evidence of effective strategies to foster school readiness and equitable learning opportunities for all families. An internationally recognized expert in the thoughtful integration of digital media, Shelley has collaborated with the U.S. Department of Education, PBS, the LEGO Foundation, the Corporation for Public Broadcasting, the Bill and Melinda Gates Foundation, Apple, Google, the National Science Foundation, and Sesame Workshop, among others.

Welcome, Shelley. And Dr. Hetal Thukral has also joined us. Hetal has over 19 years of experience implementing international education programs in the U.S. and abroad, with a focus on program evaluation and improvement at the federal and local levels. She has conducted evaluations, research studies, and policy-linking workshops, and designed monitoring systems for projects funded by USAID, the African Development Bank, the World Bank, UNESCO, DFID, and NSF, among others.

Domestically, Hetal has worked at the district level supporting research and policy development in Maryland, and in this role Dr. Thukral has built and applied rich technical skills in program evaluation and analysis. Welcome, Hetal, and thanks for joining us. So I'm going to first start with Cliodhna. And Cliodhna is going to share a few thoughts and responses.

We have asked the three panelists to start with some thoughts in response to the questions that you asked when you registered for the workshop -- questions on reach, engagement, and outcomes. So I'm going to pass to Cliodhna to share some of her thoughts. >> Cliodhna Ryan: Thank you, Emily, for the wonderful introduction. I'm really happy to be part of this panel today.

I think it's a really pertinent and timely conversation for us to be having, especially in the context of the pandemic. When so much of how we monitor and evaluate our programs has changed and been limited in some ways. But yeah, so I would like to touch on the kind of three issues of reach, engagement, and outcomes, and also share some of the experiences that we have had in relation to that.

So I know that there were a lot of questions around, you know, the most common measurement tools that are used and the most common approaches taken for measuring these three metrics. And honestly, I will say that, for us at Ubongo, we really favor a mixed-methods approach to everything. Honestly, we haven't yet found a one-size-fits-all approach that works across every country.

We find that, you know, what works in one country might actually be very challenging to do in another country. And so, I think being responsive is really important. When it comes to measuring reach -- because we deal mainly with broadcast media and mass media -- we get a lot of our data from nationally representative surveys, by building questions into surveys that are run by firms like Ipsos or GeoPoll.

And that gives us, you know, quite a solid measurement of the numbers of kids being reached by our broadcast media. Then, we do our own further analysis on the lean data to try and pull out things like the demographics of our audience: where do they sit on the wealth distribution scale? That's really important for us to be able to assess how we reach underserved communities and how we can improve our programming to make sure that it's accessible to the learners that need it most. And that approach was working very well.
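For readers curious about the mechanics, estimating reach from a representative survey boils down to a weighted proportion. The figures and weights below are invented for illustration; real surveys use professionally calibrated sampling weights:

    # Simplified reach estimate from a nationally representative survey.
    # All numbers are invented; real surveys use calibrated sampling weights.
    respondents = [
        {"weight": 1200.0, "watches_program": True},
        {"weight":  950.0, "watches_program": False},
        {"weight": 1100.0, "watches_program": True},
        {"weight": 1000.0, "watches_program": False},
    ]
    total = sum(r["weight"] for r in respondents)
    reached = sum(r["weight"] for r in respondents if r["watches_program"])
    print(f"Estimated reach: {100 * reached / total:.1f}% of the population")
    # Estimated reach: 54.1% of the population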

But what I will say is, during the pandemic, we had a bit of a shift. We took the decision to make all of our content available for free publicly under a Creative Commons non-commercial license. What that did was essentially throw our content out into the public domain for anybody who wants to download it, put it on a USB stick, and share it wherever they want. They can make as many copies as they want. And we took that from a really ethics-driven standpoint, because we realized that during the pandemic there was such a demand for learning materials that we really felt we could no longer wait until we had broadcast partnerships in every country.

If there was a partner who could take some USB sticks and bring them out to someone who has a solar radio that takes a USB drive, you know, let's give it to them. But it did provide challenges when it came to monitoring. Because honestly, I can say it's very difficult for us to actually know how many people out there now have our content. And so, we have this broadcast figure of 25 million kids across Africa.

But then there was kind of this give and take, where we had to compromise a little bit and say, okay, we're not going to get perfect data for this method of distribution. But because it's so important to us, we're going to try our best. And so, what we have done is put a signup sheet on the toolkits website -- it's all available on toolkits.ubongo.org.

We put a signup sheet on the website, and we ask that whoever downloads this gives us an estimate of the number of children they're going to reach. And it's not perfect, and it's definitely not, you know, as neat and tidy as these nationally representative surveys are for broadcast content. But we also realize that sometimes in these, you know, situations of learning crisis, we really need to do our best. Then on engagement. For us, we really consider engagement even from the content design stage.

And what I mean by that is, we take a human-centered design approach to creating the content, so that it takes the real, you know, the real experiences of the children and their real baseline of where they are with their learning. It takes that into account at the very beginning. And so, we don't just test, have they understood the content? We also look for things like, do they laugh at the parts that are supposed to be funny? Are there parts where they lose interest? And sometimes it's as simple as having children in a room, you know, and placing a hidden camera and just letting them watch by themselves to see, you know, how do they interact with the content.

Sometimes we catch, you know, kind of spontaneous dance parties that happen when there are no adults in the room. All of that, aside from being really fun to watch, is really important to us, because our research has shown that engagement is really intertwined with learning outcomes, and you can't have one without the other. If a learner is not engaged by the content that they're interacting with, they're not really going to learn very much from it. What we also do is plant little hooks, or I suppose unique parts, into each episode and season, because we anticipate that we are going to want to measure engagement at some point. And for most of us, when we're designing distance learning, we know that we're eventually going to want to measure that engagement.

So by putting something in an episode or a season that's unique only to that content, we can then later test with our audience: do they recall this piece of information? Do they recall this character that's only in season five? And that's really important because sometimes, especially when it's part of, you know, a big national study where people are asked questions about a lot of different things not relevant to this distance learning program, they might say, "Oh, yeah, yeah, my kids watch Ubongo Kids." But from that, if we ask them, "Okay, and have you ever heard of a character called Tabasamu?" and they have no idea who that is, we know, okay, they haven't watched the newest season. Or we might give them a list of five different characters -- some from Disney shows, some from other shows that are popular on TV -- and then we'll throw in one or two of our characters. And that can help us to see: which episodes have they watched? Are they remembering the characters? Have they been engaged enough to remember different parts?
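A recall check like that can be scored very simply. Here is a minimal sketch; Tabasamu is the character named above, while the other names and the hits-minus-foils scoring rule are invented for illustration:

    # Hypothetical scoring of a character-recall item: respondents pick the
    # characters they recognize from a list mixing show characters with foils.
    SHOW_CHARACTERS = {"Tabasamu", "SeasonFiveCharacter"}   # placeholder names
    FOILS = {"DisneyCharacterA", "DisneyCharacterB", "OtherTVCharacter"}

    def recall_score(selected):
        """Fraction of show characters recognized, penalized by foil picks."""
        hits = len(selected & SHOW_CHARACTERS) / len(SHOW_CHARACTERS)
        false_alarms = len(selected & FOILS) / len(FOILS)
        return max(0.0, hits - false_alarms)

    print(recall_score({"Tabasamu", "DisneyCharacterA"}))  # ~0.17

And on the importance of mixed methods, especially for engagement -- I think sometimes when it comes to monitoring particularly, and, you know, evaluation is a whole other piece.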

But when it comes to monitoring engagement, it's been really important for us to use a variety of methods. So for example, for season five of Ubongo Kids, we were focusing on teaching socio-emotional skills related to the concept of Utu, which is the Swahili concept of shared humanity -- similar to Ubuntu, if any of you are familiar with that. One of the activities we were measuring engagement through was signups for Utu clubs. Following the episode broadcasts, we asked children to sign up through SMS, WhatsApp, and social media, and to register their Utu club.

And initially, we thought, this is great -- we have 600 signups for Utu clubs, this is brilliant. That was actually our metric at first: we wanted at least 400 Utu clubs set up in the first three months. But then we decided to interrogate that metric a bit more.

And we started doing phone surveys with the kids. And soon we came to learn that although they were registering as having a club, some of them just liked interacting with the chatbot but didn't actually have a club. For some of them, their parents had texted the SMS line because they just wanted to see what would happen -- they thought maybe they would get learning resources sent to them -- but sometimes they never actually passed that information on to the child. And sometimes they were older teenagers who had just gotten a phone and realized this was a free number to try.

So we came to realize that even the metric itself was flawed, because signing up for a club doesn't actually indicate engagement. It just shows that one time I went onto my phone and sent a message. So I think it's about having mixed methods and, you know, sometimes taking that rough-and-ready approach of just picking up the phone -- and building that in from the beginning, so that when people sign up, they give permission to be contacted and can opt in.
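The correction Cliodhna describes amounts to discounting the raw count by a verification rate estimated from the follow-up calls. A rough sketch with invented numbers (the transcript gives only the 600-signup figure):

    # Invented illustration: adjust a raw signup count by the share of sampled
    # signups that follow-up phone surveys confirmed as real, active clubs.
    raw_signups = 600          # from the transcript
    sampled = 50               # hypothetical: signups reached by phone
    verified = 28              # hypothetical: confirmed to actually run a club
    verification_rate = verified / sampled
    print(f"Estimated active clubs: {raw_signups * verification_rate:.0f}")
    # Estimated active clubs: 336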

It's not the most -- what can I say? -- it's not the most scientific approach, and it's definitely not something you're going to write up and publish. But it really works in terms of getting quick feedback and real-life experience that will help you improve what you're doing, especially in these kinds of emergency or crisis situations. And then, when it comes to outcomes, one of our key learnings has been, again, the importance of building it in from the very beginning. Through the human-centered design process, rather than creating a starting point that corresponds directly with curriculum milestones, we have found it effective to first go out to our audience and do our pre-research to find out: what is the actual baseline for these kids? Sometimes when creating these programs, and especially when you're doing it by distance, it's much easier to take the curriculum and say, okay, this is what they're supposed to know at this age, so this is what I'm going to teach. But sometimes we have actually found we might need to take what we're teaching back a few steps to give the background knowledge that's really needed for a child to progress.

So usually, our starting point will be based on that pre-research that's done with the groups of children from our audience. And even children that are not already watching our show, we try to just cast the net as wide as we can and get as much feedback as we can. But then, when it comes to actually evaluating learning outcomes, you know, we use randomized controlled trials a lot. Although it is becoming increasingly challenging because as our reach is increasing through mass media, it's becoming more and more difficult to actually find a control group who haven't been exposed to our content before. And so, we really now have to start thinking, you know, how do we get more creative in our research design? How do we take it outside of maybe our usual target audiences? We also need to really think about children with disabilities and how we include them in our evaluations.

Because it's a real challenge in the countries that we work in, where there's very little data on even the numbers of children diagnosed with disabilities, let alone the ones who are watching our show. So I would say the next major priority for us is to figure out how we're going to do that, to make sure that the data we have on learning outcomes is really inclusive of all learners. I see that I'm out of time, so I'll leave it there. I'm looking forward to hearing what other questions people have and what the other speakers have to say.

Thank you very much. >> Emily Morris: Thank you so much, Cliodhna, for sharing Ubongo's experience with measuring reach, engagement, and outcomes, and the challenges that you've encountered. I am going to hand over now to Shelley Pasnik to talk a little bit about her work and some of her observations. >> Shelley Pasnik: Yeah.

Hello, everyone. Thanks for joining today. As Emily described at the outset, I work for Education Development Center, a nonprofit organization that's been doing a tremendous amount of work in education, health, and economic development. And much of the work that we do across the globe involves USAID.

For the purpose of today, though, I thought it would be helpful to bring in an example that sits outside of USAID funding, to really drive home the point of how powerful The Roadmap is and how there are common challenges that many of us are facing. Also, I know Rebecca is really trying to prod people into asking questions. So what I'm going to try to do in walking through the example is place some pins in places where I think there are challenges on top of challenges, or challenges unique to the pandemic, to try to elicit some sharing of experiences. So what I thought I would do is describe a program in the United States called Ready to Learn.

That's a federal program that comes out of the U.S. Department of Education, and it's meant to provide an investment in public resources -- in particular, the public media system, so many of the programs that come out of PBS and are supported by the Corporation for Public Broadcasting. This initiative has existed for several decades. And the idea, as you might guess from the name Ready to Learn, is: how can young people, ages two to eight, receive early learning experiences that are going to help them succeed in their formal education and in life more generally?

The science has definitely grown over the decades, and we understand the importance of early intervention and early supports -- and in particular, how resources that are broadcast and also available online might support children's learning. Even before the pandemic, in the U.S. context, more than half of children were outside of any daycare, preschool, or early learning experience.

And certainly, with the pandemic, those numbers plummeted as many children and families were struggling for educational experiences. Our organization, along with SRI International, has served as a research partner to the Department of Education over the years, and so we are charged with the responsibility of figuring out: is this investment worthwhile? And, you know, as was called out at the outset, many of you are interested in outcomes. That's where we need to reside -- really thinking about whether these programs do what they claim to do. With each five-year cycle, there's an investment in particular areas, and in the last few years the interest was really in informational literacy and in science.

And in particular, the series that I'll draw your attention to is Molly of Denali, which is a new series that is the first in the United States to have a Native Alaskan character as a lead character, which is a significant part of the ethos of this program, and really what it seeks to do. So there's the narrative contribution and character development. And then, it's specifically meant to promote informational texts. And the literacy skills are around knowing that there are all sorts of resources meant to convey information.

Whether it's a picture or a caption, or it could be a story from an elder, it could be a website. But can young children understand how to use informational text -- in particular, how to use informational text to solve real-world problems? So, to that first question of reach.

Just as a baseline, PBS is a publicly available resource with near-universal penetration in the United States. Some 42 million young people engaged with Molly of Denali.

Sorry for a little catch in my throat. And there are nearly half a million unique users of their digital games and online content, right? So it's a robust program. One place where I'll place a pin, though, is thinking about the use of research, right? And who gets to participate? I mean, Cliodhna was starting to allude to this: how do we think about young children who might be at a diagnosis or pre-diagnosis of a learning challenge or a learning disability? PBS KIDS takes universal design principles seriously and so really wanted to be inclusive.

And also, with the pandemic and the severe financial challenges that came as a complication of COVID-19, the research team really grappled with: how do we invite families into participating in this study in ways that are not coercive or otherwise harmful? There is the offer of an incentive -- we often offer the opportunity to hold on to technology if that's involved in a study, or there might be a cash incentive or an incentive of resources -- and we didn't want to tip the balance and have families come to this study simply because they needed the resources and otherwise wouldn't give their consent to participate. So that's something that we really thought about, and I know that was called out at the top of this webinar, and there is some good language on that in The Roadmap as well. And then, on to engagement.

So, in the case of Molly of Denali, we had this gorgeous randomized controlled study design -- folks, it really was a beautiful design, pilot study and all. The pandemic came, and we had to scrap it, right? You know, we had a five-site design. We had an intervention condition. We had our control condition.

We had already done pre-assessments in two sites in the process of recruiting. And then, everything shut down. And so again, a lot of grappling with trying to figure out how we would proceed.

So what we decided to do is pivot entirely to remote assessment. We adapted the assessments that we had created so that we could deliver them over video -- as many of us are experiencing now. Back in March, it was not a foregone conclusion that we were going to continue with the data collection, but the team made all sorts of adaptations in order to deliver the assessment online. That's another place where we can place a pin: how do you ensure that it's the young person who's responding to the assessment? Are there parents and other caregivers, perhaps outside of the frame, providing influence? Real-world questions: how do we think about the validity of that assessment? And then also, to reinforce what we've heard, we used a mixed-methods approach.

We really focused on learning outcomes, but we also thought about how we could not only take from young people -- in this case, first graders -- their responses to these assessments, but also triangulate with the data that we were getting from the back end, because we were really thinking about use of these videos and the online games. Part of our design is that we provided tablets with the content already loaded. In other cases, maybe if one is doing an implementation study, we might not have the opportunity to provide the technology.

But here, we wanted to hold the technology constant -- to ensure access, so that we could test what young children got from their experience and their engagement with the content. The intervention as we designed it was for a nine-week period, and the expectation was that these kids would engage with the content for about an hour a week, right? So, a light-touch intervention. And because we were providing the tablet, we created a personalized app in order to deliver the content.

Again, PBS KIDS has their own back-end analytics, and that's how they get to larger numbers. But we wanted to know about use of the video and really do some monitoring there. And then the third piece -- I'll jump to outcomes, because I'm just as eager to hear from Hetal as all of you are -- is this whole notion of outcomes. So what did we find? I guess I skipped over this: we just continued with the two sites that we had.

And then we decided to run a replication study that was entirely remote, so we did the assessment, both pre and post, remotely. From both studies, we found that access to the Molly of Denali resources improved first-grade children's ability to solve real-world problems using informational text, which was tremendous given that this was the first study to really look at informational text and digital media together. Interestingly, we also found that young children benefited more from access to the resources.

And also, that the children who used the resources for longer periods of time also learned more. We know this, again, from looking at the usage patterns in combination with the outcomes on the assessment. Four quick implications that I'm going to draw out. One: it's a reinforcement that educational media can support children's learning at home. And the call that I'll make here is that this is unlike some of our other research, where we're really focused on the mediation that parents and other caregivers can provide to children, where we're very encouraging of a two-generational approach, and where we want to think of whole systems of learning.

In this case, this was a light-touch intervention, and we did not have involvement on the part of the parent or the caregiver. So that was noteworthy, and it also reflects a lot of the real-world experiences many children are having, where they don't necessarily have an adult there to mediate or help intervene. The second implication: light-touch interventions can work.

You know, again, roughly an hour a week. And the effect size that we got was roughly equivalent to the growth in reading skills that a first-grader would typically experience over three months. So this is significant. I mean, a lot of educational programming is really looking at whole-year experiences or whole curricular experiences.
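For those curious how an effect size gets translated into "months of typical growth," the conversion is simple; the numbers below are invented placeholders, since the study's actual effect size and benchmark are not given here:

    # Invented illustration of converting an effect size into months of
    # typical growth, assuming typical annual growth is known in the same
    # standard-deviation units as the measured effect.
    effect_size_sd = 0.25      # hypothetical treatment effect, in SD units
    annual_growth_sd = 1.0     # hypothetical typical growth over a 12-month year
    months = 12 * effect_size_sd / annual_growth_sd
    print(f"Roughly {months:.0f} months of typical reading growth")  # ~3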

But, you know, how can we be creative in thinking about broadcast media and some of these light-touch interventions that can also go the educational distance and really be a part of the larger educational experience that children and families are having? The third implication -- that piece that I initially skipped over -- is the replication. We had the findings in the initial study, and we were able to replicate them. We know replication is kind of a holy grail when it comes to building evidence, yet so often we don't have a tremendous amount of financial resources to replicate studies. So the fact that we were able to do that matters.

And again, it wasn't our initial design, but it's something we were able to craft in the context of the pandemic: what could we do differently to take advantage of that disruption? And then, the last implication that I'll call out for now is that public media can have an important role to play not only in children's academic growth but also in their social growth. This is something that maybe we'll get to more in the questions. And that is how Molly of Denali was really meant to represent the experience of this community in Alaska: Native Alaskans were involved in all steps of the production.

We had a Native Alaskan group of advisors in the research, thinking about the assessments and about all dimensions of the research, so that we weren't making assumptions. And I think about the sensitivity we can bring to cultural and linguistic realities, really involving the community rather than having research imposed on it -- there, we can see families as partners in the research.

And with that, I thank you, and I look forward to the questions and to what Hetal is going to share with us all. >> Emily Morris: Thank you, Shelley, for walking us through your reach, engagement, and outcomes. I'm going to hand off to Hetal, so we have time for questions. So Hetal, moving to you.

>> Hetal Thukral: All right, thanks. So I'm not going to structure my remarks by those three, but rather go broadly across some lessons learned and things that I've seen happen in my experience in the last couple of months, building off of those two very specific examples.

I wanted to highlight about four different points, and I'll hit on some examples within them. The first one is really purpose. I really like how The Roadmap lays out steps one, two, three, and four.

And I think, you know, in the past, sometimes we almost jump to step four and say, "I'm doing a survey," or "I'm doing a treatment-control design," and then we kind of back into the purpose. And here, with the uncertainty of schools opening and closing and of the distance learning methods -- sometimes the interventions are also shifting -- the purpose really has to drive the evaluation.

So you'll have to be flexible and adapt programming, but at the same time be flexible and adapt [inaudible]. So really, purpose drives everything in what you want to learn. An example: I'm in the midst of two evaluations in Uganda where, now that schools are closed, we are revisiting what that purpose is. Originally, we had a face-to-face treatment-control design planned to assess radio programming.

And now, we're going remote. So going back to purpose, we're reframing the evaluation. We're shortening the test. We're deciding what parts of it we really want to learn from. And we're taking the temperature down a bit on what the findings are going to speak to -- they're more for learning for the project and less around accountability.

So saying: we are going to have gaps, we know that upfront, and we're just going to do the best we can to measure that. When you drive with purpose, I think what it opens up is space for you to let a couple of different things influence your downstream decisions: the modality of intervention and the modality of your monitoring and evaluation, right? Sometimes we talk about these as a given -- I'm doing radio programming but I'm going to assess through phone or in person, or I'm embedding questions in the radio programming that you have to call back to answer.

How all of those interplay involves important decisions. Let what you know about your population drive your subsequent decisions. If you have a pretty good handle on reach, you can frame some of the intentions of your evaluation more confidently on outcomes. If you don't have a great handle on reach, maybe that is the part where you linger a little longer to figure it out.

And then come back to purpose and say, "I needed to learn how this is impacting students. But until I get reach and engagement clear under my belt, we can't really move forward." And I touched on flexibility -- really, that ability to flex across modes: just as your program is going to be flexible, so too are the evaluations. What I found really interesting in one evaluation is that we embedded the adaptation cycle and the ways in which students might be engaging with radio and/or video -- both were being distributed, and print too, actually. It wasn't a this-group-gets-radio, this-group-gets-video design; it was more like everyone gets a little piece of everything, and we're throwing everything at the wall.

But then, that flexibility of intervention was built into the evaluation. So saying: our theory of change is that students are engaging with one, or both, or all three pieces in some way. There's follow-up that allows them to talk about what they're finding most useful -- follow-up by phone, through teachers.

>> Emily Morris: Hetal, we lost your sound for a minute? If you could unmute. We still can't hear. So there we go. >> Hetal Thukral: Oh, oh, sure, okay.

By using the theory of change to really drive some of that. So I think the point there is to embed those adaptations into the evaluation, and not necessarily build an evaluation that looks at just one type or one snapshot of the intervention, or assumes a snapshot of the intervention. A third point I want to make is monitoring.

I know there was some catchy title of a CIES session at some point -- better M for better E. But I think it really is so apt right now: if we don't have a good handle on the monitoring aspect, then what are we evaluating? I recently saw an evaluation that focused on the effectiveness of a radio program. They had, you know, expensive surveys and tests and everything. The findings come back, and you say: so we know that there's a difference, but we have no idea why or how. And those open questions left the results lacking.

And so, really, let the monitoring be your steppingstone -- that's the point there. And lastly, at a time when we're balancing learning and how to measure distance learning, some interesting things I've seen come through. I mentioned using the theory of change to really help you drive your designs; also, leverage what you have. Use monitoring data, very interestingly, to nuance the evaluation questions, and use the evaluation to target what the monitoring doesn't tell you. In places, reach is known for students who have access to a phone, because you can call them and ask: are you getting the program? Are you able to engage? You can ask some questions.

But then say: in the evaluation, we're actually going to prioritize the children we have not heard from -- those who either don't have a phone, or have a phone but may not get the signal, or have gotten the print materials but we have no idea how they use them. And in that focus, also save your most precious resource, which right now is face-to-face data collection, reserving it, if it's available, for those very targeted voices. So: who have you not heard from? And really make sure that they're included very strategically.
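That targeting logic is easy to make explicit. A minimal sketch -- the roster fields are hypothetical:

    # Hypothetical targeting: reserve scarce face-to-face visits for learners
    # whom remote monitoring has not been able to reach.
    roster = [
        {"id": "A", "has_phone": True,  "answered_monitoring_call": True},
        {"id": "B", "has_phone": True,  "answered_monitoring_call": False},
        {"id": "C", "has_phone": False, "answered_monitoring_call": False},
    ]
    unheard = [r["id"] for r in roster
               if not (r["has_phone"] and r["answered_monitoring_call"])]
    print("Prioritize for in-person follow-up:", unheard)  # ['B', 'C']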

So at the end of an evaluation, you see collectively that not just the evaluation data you might have collected -- that primary data -- but that data in tandem with the monitoring can give a holistic picture of reach, engagement, and outcomes. I'll stop there so we'll have at least some time for questions. >> Emily Morris: Thank you so much, Hetal, and thanks to the panelists. Now, we want to turn over to the questions that you have all graciously put into the chat. We're going to start with Jonathan Stern's question in the chat.

Many distance learning programs are using a range of modalities and activities, such as TV or radio, SMS, or online platforms, et cetera. What recommendations do you have for measuring engagement across these different platforms, particularly when relying on remote surveys for young children? I'm going to ask Cliodhna to start with this, and I'll add another question onto it: how can we evaluate mass media audiences where our target groups have difficulty accessing these technologies? Shelley alluded to one approach, where they provided preloaded devices, but any other insight you have, Cliodhna? And then, I'll open it to the other panelists.

>> Cliodhna Ryan: Yeah, it's a great question. And, you know, it's something that I think we're still doing a lot of learning and trying new things on. When it comes to evaluating on mass media, I think really it's a case of using every available resource that you have. It can often be the case that these more nationally representative studies tell you one story; but as Hetal also said, using your own opportunities to engage with your audience face to face can actually tell you a different story. And then, there's looking at, you know, marginalized groups and how mass media can reach them.

I think it's also a case of looking at what's already being done in those areas: what surveys are already being conducted? What research is already being conducted that you can take learnings from, or perhaps even insert questions into? For example, when it comes to children who are refugees or migrant children, something that we have done is to look at who the organizations on the ground are who are reaching these people -- whether with food or education or essential items -- and to see whether we can consider distance education part of that package. Maybe they don't have a TV hooked up to a national cable channel, but is there someone in the community who has a radio? Often, across Africa, we find it's very common to have these kinds of co-viewing spaces where people gather together to watch football or different events. Even if they're not hooked up to the main mass media broadcast, can you send out a USB stick that they can use to broadcast in that area? So I think sometimes mass media has this really wide approach.

But often, there's an extra step needed to get that to the most marginalized people. And it can be as simple as one thing that we're experimenting with: linking up with companies who give out solar TVs. There's a project in Kenya where they're distributing TVs and radios, and because that's an existing program that's already happening, along with the TV or radio, anyone who gets one will also get a flash drive of Ubongo content. And then also, when we look at mass media, we often think mainly of the main national broadcasters for TV and radio.

But sometimes the national broadcaster isn't who most people are listening to or tuning into. So really also look at the community level: what are the channels that people are most interacting with? I think that can really help to inform your monitoring and evaluation strategy, once you have all of those things in place. >> Emily Morris: Thank you, Cliodhna. And Jonathan Stern, if you have any further follow-ups, please don't hesitate to share them in the chat; we'll continue to grapple with your questions.

I'm going to also introduce question three, which was asked by Laura in the chat, and ask Hetal to respond to this -- and to anything else she'd like to add on the other questions.

What recommendations do you have when it comes to equity and ensuring the most marginalized students are both able to access the content and engage with it? Cliodhna touched on this a little bit, but if you could expand upon that. >> Hetal Thukral: Yeah. I actually have an evaluation right now where we're really trying to balance capturing the voices of learners that we think are on the periphery of the programming -- it's radio programming. And it's in knowing that we don't know what we don't know, right? So there are two kinds of strategies we're playing around with, really trying to be creative.

One is being creative about our data collection. I think in a perfect world, we always had trained enumerators who go out and collect data
