It goes without saying that 2020 has been a hell of a year. While the pandemic has upended work, school, and life in general, it has also shone a bright light on the glaring inequities in Internet access, in the quality of that access, and in its cost. Those of us who have worked on digital equity issues and initiatives have known and communicated this for years: massive inequities in Internet access, cost, and quality are the natural result of the race and class disparities that have been perpetuated in our country's general culture. All these issues are intertwined, and in one way or another they are important issues for everyone.
While there have been many initiatives attempting to address these inequities during the COVID-19 pandemic, including efforts from the digital inclusion and non-profit communities, ISP programs for reduced-cost service, and government initiatives at all levels, it's important to recognize that solving this problem isn't going to happen with old models and tactics. Digital inclusion practitioners, researchers, and policymakers will have to shift and adapt, not to the "new normal" as some have described life in a pandemic, but to the realities that a large number of Americans have experienced for a long time.
So what do we do? Where do we start?
Our MLBN Team is not alone in the desire for data to help understand the issues of Internet access disparities on the ground. Having accurate information is critical for determining how to effectively improve the situation. But as Rachelle Chong and Larry Irving highlight in their recent blog post, The Broadband Mapping Flaw that's Harming Education and Healthcare, published by the Benton Institute for Broadband & Society, flawed data in the federal government's broadband maps has stopped many communities from being able to access subsidy dollars for building out better access where it's needed. And as the authors point out, while the Broadband DATA Act, enacted earlier this year, "directs the FCC to improve its maps by gathering and publishing more granular data about broadband availability," the FCC's recent broadband mapping proposal wouldn't collect data from community anchor institutions like schools, libraries, and healthcare providers: institutions that have historically provided a lifeline of Internet access in communities where broadband disparities are most prominent.
So where is broadband available and where isn’t it? Where do consumers have choice and where don’t they? What are the differences in costs for the same service from the same provider in different parts of the country?
We need new data to help answer these questions. While federal agencies like the FCC and NTIA are working on these issues, and we actively support those initiatives, what if communities could collect these data on their own?
Our approach in our MLBN research project, funded by a federal grant from the U.S. Institute of Museum and Library Services, was to attempt to understand the conditions of Internet service in public libraries: by measuring it. Our remit was to deliver an open source, replicable broadband measurement platform and documentation on how to use it. We are using the platform to gather quantitative data to achieve the following goals:
- understand the broadband speeds and quality of service that public libraries receive;
- assess how well broadband service and infrastructure are supporting their communities’ digital needs;
- understand broadband network usage and capacity;
- increase libraries' knowledge of networked services and connectivity needs.
The core of this measurement platform is a software package, currently named Murakami, that measures an Internet connection from a small computer placed on premises at a public library and connected to the network serving the public. Murakami runs various measurements automatically at randomized intervals and saves the data for use by the library.
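To make the "automatically and randomly" scheduling concrete, here is a minimal Python sketch of one way a measurement device could jitter test start times so they don't land at predictable moments of the day. This is an illustration only, not Murakami's actual implementation; the function name and parameters are our own.

```python
import random

SECONDS_PER_DAY = 86400

def schedule_offsets(tests_per_day, count, rng=None):
    """Return `count` successive start offsets (in seconds from now) for
    measurement runs. Each run is placed uniformly at random inside its
    own expected interval, so runs stay roughly evenly spaced but never
    occur at fixed, predictable times."""
    rng = rng or random.Random()
    interval = SECONDS_PER_DAY / tests_per_day  # expected gap between runs
    offsets, window_start = [], 0.0
    for _ in range(count):
        # jitter this run anywhere inside its interval window
        offsets.append(window_start + rng.uniform(0, interval))
        window_start += interval
    return offsets
```

A scheduler like this keeps the long-run measurement rate steady (e.g. four tests a day) while avoiding the bias of always testing at, say, the top of the hour when the library may be predictably busy or idle.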
By running measurements from a standard on-premises device, we eliminate the self-selection bias that can plague analyses of solely crowdsourced data. Measurements like upload and download speed and latency collected over time are more informative than a one-time speed test, and Murakami enables that data collection. Test results can be saved locally on the device, sent to Murakami-Viz, a lightweight data visualization service we developed, pushed to a central archive on a self-maintained server on site or in the cloud, or sent to a storage location in Google's Cloud Storage service.
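The simplest of those export targets, saving results locally on the device, can be sketched in a few lines of Python. The field names and file layout below are illustrative assumptions for the sketch, not Murakami's actual schema.

```python
import json
import pathlib

def export_result(result, directory):
    """Append one test result as a line of JSON to a per-test, per-date
    file on local disk. Appending line-delimited JSON lets a library
    accumulate a time series of results that is easy to parse later."""
    outdir = pathlib.Path(directory)
    outdir.mkdir(parents=True, exist_ok=True)
    # one file per test type per day, e.g. "ndt7-2020-12-01.jsonl"
    path = outdir / f"{result['test_name']}-{result['date']}.jsonl"
    with path.open("a") as f:
        f.write(json.dumps(result) + "\n")
    return path
```

The same interface could just as easily forward each result to a visualization service or a cloud bucket, which is essentially the fan-out the platform's other export options provide.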
These tools are all public, openly licensed code that can be used freely, and anyone can use this software to measure their Internet connections, keep the data, and track the connections’ performance over time.
This broadband measurement platform addresses a deep need in communities across the U.S. to gather data for their own assessments of Internet service, an approach that we believe exemplifies the transparency and openness we want from official government sources. This openness and transparency are also why communities have increasingly begun using M-Lab's data in their policy and advocacy work.
Of course, M-Lab measurement services aren't the only way to measure Internet service, nor should they be. A diversity of measurements from different instruments, measurements of different aspects of a connection, and measurements of different segments of the network path between you and the Internet are all relevant. This is one reason why, although M-Lab has led the technical development of Murakami, the software isn't limited to M-Lab tests: it currently includes M-Lab's NDT protocol tests as well as single- and multi-stream Ookla tests. As we begin to dig deeper into our analysis of the data we've collected, we're looking forward to comparing and contrasting these data.
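A first step in that comparison is simply grouping collected results by which tool produced them and summarizing each group. The sketch below, with illustrative field names of our own choosing, shows the idea using Python's standard library.

```python
from statistics import median

def summarize_by_tool(results):
    """Group download throughput by measurement tool and report each
    tool's median, a starting point for comparing e.g. NDT and Ookla
    results collected over the same period from the same device."""
    by_tool = {}
    for r in results:
        by_tool.setdefault(r["tool"], []).append(r["download_mbps"])
    return {tool: median(values) for tool, values in by_tool.items()}
```

Because different tools measure different aspects of a connection (single-stream vs. multi-stream transfers, for instance), their medians can legitimately differ; the point of collecting both side by side is to understand those differences rather than to crown one number correct.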
(Image above from Murakami-Viz in use at the Pryor Public Library in Oklahoma)
As we close out the calendar year, we're focused on a future where we can measure and understand broadband access and quality using a variety of measurement tools. We've invited our participating libraries to review their test data in Murakami-Viz, and we look forward to their feedback on it and on our program in general in the coming year. M-Lab is continuing to develop Murakami as a tool that enables structured data collection using our platform as well as other measurement initiatives and tests.
We’re hopeful for a future where the openness and transparency of measurements of Internet service informs how we solve the access, quality, and cost disparities that have been bluntly exposed by massive shifts to online learning and remote work this year.