Thinking about AI Equity from the Perspective of Broadband Equity

There’s broad recognition that access to high-speed internet is necessary for success in school, work, and much of daily life. This recognition has led to a number of state and federal programs to improve access to broadband, like the FCC’s Affordable Connectivity Program (ACP), which provides internet subsidies to low-income households. If you believe, as I do, that access to large language models (LLMs) and other generative AI will also be necessary for that kind of success in the future, it’s probably not too early to start learning lessons from programs like the ACP and thinking about how we can apply them to improve access to LLMs and other generative AI.

True, there are LLMs with “openly” licensed weights that you can download and run locally on your computer, so the ongoing cost of inference is mostly folded into the one-time cost of your laptop (i.e., you can run the model “for free”). However, models that are small enough to run locally are very limited in their capability relative to the massive, commercially hosted models. Even relatively small models, like those with 30 billion parameters, are generally too large to run locally unless you quantize them so aggressively that they’re hardly worth running. Models like GPT-4 are estimated to have 1.75 trillion parameters. You might think about that difference the same way you think about the difference between a 56k dial-up modem connection to the internet and a 1-gigabit fiber-optic broadband connection. Access to the most powerful generative AI models will definitely be an equity issue, just like access to broadband is today.
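
For a rough sense of the numbers, here’s a back-of-the-envelope sketch (in Python) of how much memory just the weights of a model require at different quantization levels. It ignores activation and KV-cache overhead, and the GPT-4 figure is the same unconfirmed estimate mentioned above, so treat the output as orders of magnitude rather than exact requirements.

```python
# Back-of-the-envelope memory math for running an LLM locally.
# Rule of thumb: the weights alone need (parameter count) x (bytes per parameter).
# Activation memory and the KV cache add more on top; they're ignored here.

def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate RAM/VRAM needed just to hold the model weights, in GB."""
    bytes_per_param = bits_per_param / 8
    return num_params * bytes_per_param / 1e9

for label, params in [("7B", 7e9), ("30B", 30e9), ("1.75T (GPT-4 estimate)", 1.75e12)]:
    for bits in (16, 8, 4):
        print(f"{label} model at {bits}-bit: ~{weight_memory_gb(params, bits):.0f} GB")
```

At 16-bit precision, a 30-billion-parameter model needs roughly 60 GB just for its weights, well beyond a typical 16 GB laptop, and even aggressive 4-bit quantization only brings that down to about 15 GB. A model on the scale of the GPT-4 estimate would need terabytes, which is why those models live in data centers.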