Abstract

Limited participation in broadband subsidy programs is a growing concern among policymakers and a puzzle for researchers. This study examines participation in Lifeline and the Affordable Connectivity Program (ACP) across US counties. We propose that local attitudes toward welfare represent an important factor in the take-up of broadband subsidies. To test this hypothesis, we model county-level program participation on a set of covariates, using two different measurements that approximate local attitudes toward welfare. The findings indicate that program participation is inversely related to anti-welfare attitudes at the county level. We discuss theoretical contributions and implications for program design.

Introduction

This study starts from a seemingly simple question: Why do so many US households eligible to receive federal subsidies for broadband service not take up the benefit? Take the example of the Affordable Connectivity Program (ACP), a means-tested program launched in January 2022 to help low-income families pay for internet access. By the end of 2022, fewer than one in three of the approximately 55.3 million eligible households were enrolled in the program. The low level of ACP uptake seems at odds with the increased need for internet access following the rapid expansion of remote work, e-government services, telehealth, and other broadband-enabled activities since the COVID-19 pandemic.1

Scholars have long studied incentives and barriers to participation in government support programs.2 Generally speaking, the literature identifies three factors that affect take-up: (1) lack of information (or uncertainty) about program eligibility and benefits; (2) the administrative burden associated with obtaining information, enrolling, certifying eligibility, and obtaining benefits; and (3) the social stigma associated with program participation. Research also shows that a tradeoff exists between the size of the benefit and these factors, such that individuals or households are more (less) likely to bear these costs as benefits increase (decrease).3

In this study, we propose that attitudes toward welfare represent another determinant of participation in welfare programs, one that explains a significant portion of the variation in the take-up of consumer broadband subsidies. To test this hypothesis, we examine participation in two federal programs: Lifeline and ACP. We estimate participation rates by matching program administration data at the county level with microdata from the Census Bureau’s American Community Survey (ACS). We model program participation across counties on a set of covariates that capture broadband demand as well as supply factors. Two variables are used to capture local attitudes toward welfare programs: (1) the share of Republican votes in presidential elections during the 2000 to 2020 period, and (2) the share of respondents in the Cooperative Election Study (CES) who agree that the legislature must “greatly reduce” welfare spending. As a robustness check, we retest our main results on a matched subsample of counties using Propensity Score Matching (PSM) estimation.

The findings are consistent across the two programs examined and the two variables used to measure local attitudes toward welfare: as anti-welfare attitudes increase, participation in both Lifeline and ACP decreases. Our interpretation of these findings builds on social psychology studies that distinguish between social and personal stigma.4 We propose that individuals embedded in moral communities with unfavorable attitudes about welfare will internalize the stigma associated with participation despite minimal risk of such participation being visible to others. Stigma internalization increases the psychological costs of program participation, thus reducing uptake. We propose that these effects are particularly relevant for Lifeline and ACP, which offer a somewhat modest benefit relative to larger government programs such as SNAP (Supplemental Nutrition Assistance Program) and Medicaid.

The study is organized as follows: the next section offers an overview of government support programs for telecommunications services in the US, with a focus on the Lifeline and ACP programs (Section “Broadband Subsidies for US Households: An Overview”). This is followed by a brief review of the literature about participation in government programs in general and a discussion of the limited number of studies that examine participation in support programs for telecommunications services (Section “Literature Review”). Section “Methods” presents the data and variables and discusses the two estimation strategies. Following the presentation of findings in Section “Results,” the study concludes with a discussion of theoretical and policy implications (Section “Discussion and Policy Implications”).

Broadband Subsidies for US Households: An Overview

The Lifeline Program

Lifeline is a joint federal-state program established in 1985 to help low-income households pay for basic telephony services in the aftermath of the breakup of the Bell monopoly. As established in the Telecommunications Act, the program is funded through a levy on interstate and international revenues by telecommunications carriers. Lifeline was significantly expanded in the late 2000s following the decision to authorize the participation of wireless resellers, whose services largely catered to low-income customers.5 This expansion aligned with strong consumer demand for wireless services, tripling the number of Lifeline recipients from about six million in 2008 to about eighteen million in 2012.6 By 2021, 94% of services supported through Lifeline were mobile.7

The program underwent additional expansion in 2016, when support for broadband data services was authorized. The rapid increase in the number of beneficiaries took place at the same time that the funding base was shrinking, which led to reforms that sought to cap spending, tighten eligibility, and reduce the supply of Lifeline-supported services.8 After peaking in 2012, the number of households receiving Lifeline rapidly declined in the decade that followed to about 7.4 million in early 2023.9

To participate in Lifeline, households must either have an income at or below 135% of the Federal Poverty Level (FPL) or participate in qualifying government assistance programs such as SNAP, Medicaid, and Supplemental Security Income (SSI). The program is administered by the Universal Service Administrative Company (USAC) under guidelines and supervision from the Federal Communications Commission (FCC). However, several important components of the program are executed by state public utility commissions, including the authorization of carriers that seek to offer Lifeline-supported services (called Eligible Telecommunications Carriers, or ETCs), the monitoring of compliance with minimum-service standards,10 and in some cases (including large states such as California and Texas) the verification of eligibility.

Further, about half of US states operate their own Lifeline programs, which supplement the standard federal benefit (currently $9.25/month) with additional monetary support. The type and level of support offered by state programs vary considerably.11 For example, in 2023, the California Lifeline program offered up to $16.23/month in additional support, while a similar program in Idaho offered just $2.50/month. The guidelines for participation in state Lifeline programs are also defined at the state level.12 These state variations in benefit level and program implementation have been exploited by scholars to understand the drivers of Lifeline participation (see next section). The national average for Lifeline participation in early 2022 (the latest data available) stood at about 14% of eligible households. However, as shown in Figure 1, participation in Lifeline varies significantly both across and within states.

Figure 1 Lifeline Participation Rate by County (January 2022), Contiguous United States.

Source: own estimates based on ACS 1-year 2021 and USAC.


The Affordable Connectivity Program

The ACP was launched in January 2022 and offers support of up to $30/month toward broadband services (fixed or mobile) to qualifying households, which rises to $75/month in designated Tribal Lands. ACP replaced the Emergency Broadband Benefit (EBB) program, a federal program created to support broadband access for low-income households during the COVID-19 pandemic. The eligibility criteria for ACP are broader than for Lifeline. Households are eligible for ACP when annual household income is at or below 200% of the FPL or if any household member participates in a designated assistance program such as SNAP, Medicaid, SSI, Pell Grants, and the National School Lunch Program (NSLP), among others.13

At the time of writing, and in contrast to Lifeline, Congress has not guaranteed long-term funding for ACP. Another important difference is that states play a modest role in the implementation of ACP. For example, Internet Service Providers (ISPs) seeking to offer ACP-supported services are not required to receive authorization from state public utility commissions. Some states and local jurisdictions have actively promoted participation in ACP; however, these outreach efforts are not mandated by federal legislation and thus vary widely across jurisdictions.

By December 2022, ACP had enrolled about 15.4 million households. In contrast to Lifeline, about 45% of ACP-supported services are fixed residential access, with the remaining 55% being mobile. Considering the expansion in the eligibility criteria (which increased the number of eligible households from about 47.6 million for Lifeline to about 55.3 million for ACP), ACP uptake stood at approximately 28% of eligible households at the end of 2022. While this is about twice the uptake rate of Lifeline, ACP participation has fallen short of expectations for a program that offers a benefit more than three times that of Lifeline and that, in many cases, can reduce the cost of broadband to near zero. As with Lifeline, ACP participation varies considerably both across and within states (Figure 2).

Figure 2 ACP Participation Rate by County (August 2023), Contiguous United States.

Source: own estimates based on ACS 1-year 2021 and USAC.


Literature Review

The Take-up of Safety Net Programs

Scholars offer three different explanations for the low take-up of government assistance programs: (1) lack of information or uncertainty about program eligibility; (2) administrative burdens associated with obtaining information, enrolling, certifying eligibility, and obtaining benefits; and (3) the stigma associated with program participation when such participation is potentially observable to others. As Currie notes, these explanations are often intertwined. For example, an onerous enrollment process that requires an in-person interview with a case worker and sharing a large amount of personal data not only imposes administrative burdens but is also more likely to activate fear of stigmatization among potential recipients.14 In the case of ACP, the temporary nature of the program may increase the psychological costs of acquiring information and overcoming barriers to enrollment, as potential recipients may conclude that those costs are not worth incurring for a short-lived benefit.

Studies have found that lack of information is particularly relevant for programs with relatively small benefits (as is the case of the federal Lifeline program) and for newly established programs such as ACP.15 Related work has also underlined the role played by human capital factors such as language skills, which have been found to affect program awareness and the understanding of eligibility and benefits.16 Other studies underscore the mediating role of social capital. Borjas and Hilton show that the types of benefits received by earlier immigrants influence the types of benefits received by newer immigrants from the same country of origin, which suggests program information is transmitted through personal networks.17 This is confirmed by Bertrand et al., who find that living in proximity to same-language speakers increases welfare program participation for high welfare-using language groups.18

Another well-established finding refers to the problem of administrative burden or high transaction costs in program enrollment, (re)certification of eligibility, and delivery of benefits.19 For example, Finkelstein and Notowidigdo find that direct personal assistance for enrollment results in significantly more SNAP take-up than the simple provision of program information to potential recipients.20 Deshpande and Li show that the closing of Social Security Administration field offices (which offered assistance for disability support applications) reduced program take-up in adjacent areas.21

The welfare stigma hypothesis is perhaps the most contentious of the explanations for low participation in benefits programs. At its core, it suggests that potential recipients will not participate in assistance programs due to concerns about violating social norms when participation is visible to peers. In Moffitt’s original formulation, welfare stigma represents a psychological cost to potential recipients.22 It is more likely to surface in programs that involve recurrent transactions potentially observable to others, as is the case, for example, with the SNAP program. Lindbeck et al. suggest that the intensity of social stigma also depends on the extent to which others in the relevant personal network adhere to negative attitudes about welfare, as well as on the extent to which these peers participate in the program.23 Using administrative records linked to ACS data, Celhay et al. show that underreporting of program participation in SNAP decreases as the share of participants in the census tract of the respondent increases, empirically validating the link between welfare stigma and local social norms.24

At the same time, recent work has challenged the original formulation of the welfare stigma hypothesis. These studies suggest that the original formulation conflates two related but distinct psychological costs.25 The first is the conventional social stigma that is triggered when program participation is known by or observable to others. The second is what social psychologists term personal or self-stigma, and it occurs when an individual internalizes negative social stereotypes and attitudes toward the stigmatized group. Personal stigma is closely related to individual self-esteem and activates threats to one’s perceived group identity.26

Notably, stigma-induced threats to group identity operate regardless of whether activities take place in public or anonymously. For example, potential recipients may believe the program is not for “people like them” or that it reflects a lack of individual effort.27 This is also a function of how many others in the community or an individual’s personal network receive similar benefits.28 In other words, whether a potential recipient perceives public assistance as a right or as a personal failure depends on the moral community in which the individual is embedded.29

Several empirical studies validate this theoretical distinction and question the original formulation of the welfare stigma hypothesis. For example, Currie notes that the introduction of debit cards for SNAP recipients in the early 2000s—which replaced “food stamps” and presumably reduced the fear of stigmatization—did not result in an increase in program take-up.30 Similarly, Ebenstein and Stange found that the migration from in-person to telephone and online claims did not increase participation in unemployment insurance.31 The distinction between social and personal stigma is central to our study since the enrollment process and the delivery of benefits in the two programs we investigate involve minimal risk of being known to others.

The Determinants of Participation in Telecommunications Support Programs

Though comparatively small, the literature on the determinants of participation in support programs for telecommunication services points to important sources of variation in program uptake. Hauge et al. exploit state-level differences in Lifeline subsidy amount, along with differences in demographic characteristics, to explore the drivers of participation across states.32 Using panel data for the 1998 to 2004 period, the authors find that participation is positively related to the amount of benefit offered. However, given the small magnitude of the effect, the authors argue that the benefit will need to increase substantially to impact participation. Income, education, age, and participation in other safety net programs are also found to affect Lifeline take-up.

In a related article, the same authors exploit variations in local telephony prices and the presence of different service providers to investigate the take-up of Lifeline in Florida counties.33 Unsurprisingly, the authors find that higher telephony prices are associated with increased participation, but perhaps less intuitively, they also find that, ceteris paribus, take-up varies with the presence of different service providers. This suggests that outreach efforts by telephone companies are an important predictor of participation. The findings also corroborate that program take-up grows with education, suggesting that the ability to acquire program information is a barrier to enrollment. Finally, the results point to lower take-up in rural areas, which the authors attribute to the cost of outreach as well as the distance to welfare field offices.

Burton et al. also use variations in state Lifeline benefits to investigate the drivers of program participation.34 Using panel data for the 1997 to 2003 period, the authors conclude that program longevity (which is closely related to awareness) and the scope of benefits (e.g., the availability of additional services such as call forwarding) are positively associated with participation. Another critical finding is that participation grows with the number of public assistance programs that can be used to demonstrate eligibility, which corroborates that administrative burdens affect program uptake. Jayakar and Park use the 2012 reforms that enhanced the authority of state public utility commissions to designate ETCs as a natural policy experiment to examine how state variations in selection criteria affect Lifeline participation.35 Their results indicate that imposing stricter verification requirements for ETCs reduces program take-up.

In anticipation of the Lifeline expansion to broadband in 2016, the FCC supported several pilot projects to better understand broadband demand among low-income households. Wallsten offers an analysis of the findings from these pilots, and points to evidence suggesting that demand for broadband among low-income households is highly sensitive to price and relatively weak.36 This is confirmed by Lee and Whitacre in a study that uses data from two of these pilot projects.37 The authors find that concerns about cost steer low-income customers toward wireless data services at the expense of higher quality but more expensive residential broadband options.

Interestingly, none of the studies discussed above explores the role of welfare stigma as a source of variation in program take-up. While some studies acknowledge the stigma hypothesis, its relevance is minimized because enrollment and benefit delivery are unlikely to be observable to others. As Hauge et al. argue in their study of Lifeline, “the potential participant does not need to visit an agency or deal with social workers, nor does he have to publicly claim this benefit; it simply appears as a credit on his telephone bill each month. Such anonymity should decrease the stigma effect.”38

Methods

Data and Variables

Our data set combines several data sources. First, we obtain program enrollment data from USAC, which reports Lifeline and ACP enrollment at the county level. This is the numerator used in the calculation of program participation rates. To obtain the number of eligible households (the denominator), we proceed as follows: using PUMS (Public Use Microdata Sample) microdata from the ACS 1-year estimates, we identify eligible households based on either participation in a qualifying public assistance program or on meeting the income eligibility threshold for each program.39 The next step involves a crosswalk from PUMS geography to the county level, which yields estimates of the number of eligible households by county. To obtain Lifeline and ACP participation rates, we divide enrollment by the number of eligible households. These are the dependent variables to be estimated in the models.
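The construction of the dependent variables can be illustrated with a short sketch. This is a minimal example under assumed file and column names (the actual USAC extracts, ACS PUMS processing, and crosswalk files differ); it shows the PUMA-to-county allocation of eligible households and the division of enrollment by the resulting county estimates.

```python
import pandas as pd

# Illustrative inputs; file and column names are assumptions, not the authors' data.
pums_elig = pd.read_csv("pums_eligible_by_puma.csv")    # columns: puma, eligible_hh
xwalk = pd.read_csv("puma_county_crosswalk.csv")        # columns: puma, fips, alloc_factor
enrollment = pd.read_csv("usac_county_enrollment.csv")  # columns: fips, program, subscribers

# Crosswalk: allocate each PUMA's eligible-household estimate to counties in
# proportion to an allocation factor, then sum to the county level.
county_elig = (pums_elig.merge(xwalk, on="puma")
               .assign(eligible_hh=lambda d: d["eligible_hh"] * d["alloc_factor"])
               .groupby("fips", as_index=False)["eligible_hh"].sum())

# Participation rate = enrolled households / estimated eligible households.
rates = (enrollment.merge(county_elig, on="fips")
         .assign(participation_rate=lambda d: d["subscribers"] / d["eligible_hh"]))
lifeline_rates = rates[rates["program"] == "lifeline"]
acp_rates = rates[rates["program"] == "acp"]
```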

Model covariates include the following county-level characteristics that have been identified in previous studies as relevant predictors of participation in government assistance programs, as well as characteristics linked to the intensity of broadband demand: total population, population density, median household income (logged), median age, education (bachelor’s degree or higher), unemployment rate, poverty rate, share of White non-Hispanic population, presence of children 6 to 17 years old, share of foreign-born residents, and share of English-only households.

A consistent finding in the literature is that participation in a public assistance program is a strong predictor of participation in other programs.40 We, therefore, include a variable that captures the share of households in a county that receive public assistance income or participate in SNAP. In addition, the ACP models include the Lifeline participation rate as a regressor since Lifeline beneficiaries automatically qualified for ACP. To control for the higher benefit amount made available to residents of Tribal Lands in both programs, we include a variable that captures the share of the county population living in designated Tribal Lands. Since the ACP benefit supports both mobile and fixed residential services, these models include additional controls for the share of households without a computing device and the share of households with access through a mobile connection only.

Participation in Lifeline and ACP is also affected by supply-side factors. For example, potential recipients may be less likely to enroll in counties where there is less variety or competition in Lifeline-supported or ACP-supported services. To account for these factors, we include two distinct covariates that capture supply variations. In the Lifeline case, our variable denotes the number of Lifeline-designated providers in the county. Given that participation by ISPs in the ACP program is significantly broader and does not require designation by state public utility commissions, we proxy the availability of ACP-supported services with a variable that captures the number of ISPs offering services in each county as of December 2021.41 All models include state fixed effects to capture any potential variations related to unobserved idiosyncratic factors at the state level, such as additional Lifeline support and program outreach efforts by state agencies.

Our main hypothesis is that adverse attitudes toward welfare reduce participation in Lifeline and ACP. We use two variables to measure welfare attitudes at the county level. The first is a direct survey measurement sourced from the CES, a large, biennial survey of US adults.42 CES respondents are asked to state their preference about the current level of welfare spending on a five-point scale that ranges from “greatly increase” (1) to “greatly decrease” (5). We use the share of “greatly decrease” responses as an indicator of the intensity of adverse attitudes toward welfare programs.

Notwithstanding its large sample size (about 60,000 respondents), a single wave of the CES study does not allow for reliable estimates for all US counties. To increase the precision of the estimates at the county level, we combine four CES waves (2016, 2018, 2020, and 2022) to obtain a sample size of 245,469 respondents. This results in reliable estimates about welfare attitudes for 852 counties. This subgroup of counties, however, represents only about a quarter of all US counties and, as expected, represents the more populated ones.
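The pooling of CES waves into county-level shares can be sketched as follows. The item coding, column names, and the minimum-respondent cutoff are assumptions for illustration (the paper reports 852 usable counties); survey weights could also be applied.

```python
import pandas as pd

# Pool the four even-year CES waves (respondent-level files; names are illustrative).
ces = pd.concat([pd.read_csv(f"ces_{year}.csv") for year in (2016, 2018, 2020, 2022)])

# Assume the welfare-spending item is coded 1 ("greatly increase") to 5 ("greatly decrease").
county_attitudes = (ces.assign(greatly_decrease=(ces["welfare_spending"] == 5).astype(int))
                       .groupby("fips")
                       .agg(share_greatly_decrease=("greatly_decrease", "mean"),
                            n_respondents=("greatly_decrease", "size"))
                       .reset_index())

# Retain only counties with enough respondents for a usable estimate
# (the cutoff here is illustrative).
county_attitudes = county_attitudes[county_attitudes["n_respondents"] >= 50]
```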

An alternative measurement of welfare attitudes commonly used by policy scholars is the share of votes for different political parties.43 This indirect measurement is particularly common in the US context, where several studies corroborate that the median Republican voter holds significantly less favorable views about welfare programs than the median Democratic voter.44 For example, in the most recent wave of the CES (2022), 26% of Republican voters indicated a preference to “greatly reduce” welfare spending, compared to just 2% of Democratic voters.

Using this indirect measurement allows for including all US counties in the analysis. Further, in order to smooth out the impact of any single election and capture long-term ideological differences between voters, we use the average share of Republican votes in presidential elections in the 2000 to 2020 period as a proxy for attitudes toward welfare. This indirect measurement is admittedly noisy and must not be interpreted as indicating that Republican voters will necessarily reject broadband subsidies. Rather, it is used as a proxy for differences in aggregate attitudes toward welfare across counties.
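The vote-share proxy is straightforward to construct. A minimal sketch, assuming a county-level returns file in the spirit of the MIT Election Lab data (column names are illustrative), is shown below.

```python
import pandas as pd

# One row per county and presidential election year, 2000-2020.
votes = pd.read_csv("county_presidential_returns_2000_2020.csv")

# Average the Republican share of the county vote across the six elections.
rep_share = (votes.assign(rep_share=votes["rep_votes"] / votes["total_votes"])
                  .groupby("fips", as_index=False)["rep_share"].mean()
                  .rename(columns={"rep_share": "rep_share_2000_2020"}))
```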

Our final data set includes information for 3,143 counties or county equivalents in fifty states and the District of Columbia, except for models using CES survey data, which include information for 852 counties. Table 1 presents summary statistics for the variables used in the models.

Table 1 Summary statistics

VARIABLES Mean SD Min Max 
Lifeline participation (Jan 2022) 0.12 0.102 0.0006 
ACP participation (Aug 2023) 0.311 0.18 
Republican vote (2000–2020) 0.585 0.137 0.07 0.92 
Welfare “greatly decrease” (2016–2022) 0.156 0.57 0.44 
Total population 103,400 331,267 66 10,014,009 
Population density 276.3 1,847 0.0373 73,667 
Median HH income (log) 10.94 0.252 9.747 11.96 
Bachelor or higher rate 0.159 0.0687 0.561 
Median age 41.5 5.361 22.4 68.1 
Unemployment rate 0.0236 0.0106 0.157 
Poverty rate 0.102 0.0525 0.467 
White non-Hispanic rate 0.805 0.17 0.0833 
Foreign-born rate 0.0475 0.0573 0.54 
English-only HHs rate 0.894 0.126 
Children in HH rate 0.234 0.0648 0.476 
Population in tribal land 0.0425 0.182 
Public Income/SNAP rate 0.13 0.0502 0.0212 0.368 
No computer rate 0.11 0.0528 0.479 
Mobile access only rate 0.142 0.0588 0.555 
Lifeline providers 7.29 4.111 18 
Total ISPs 20.87 9.298 91 

Sources: American Community Survey, MIT Election Lab, USAC, USDA, CCES.

Model Specification

To explore the association between attitudes toward welfare and household participation in broadband subsidy programs, we model Lifeline and ACP uptake on (1) the share of Republican votes in presidential elections in the 2000 to 2020 period and (2) the share of “greatly decrease” responses to the CES welfare-spending item discussed above. Covariates include county geographical characteristics, household characteristics, and the availability of ACP-supported or Lifeline-supported services. Because our dependent variables are fractions, they are, by definition, bounded between 0 and 1. To avoid out-of-range predictions, we use generalized linear model (GLM) estimations with a logit link function, which constrains predictions to the range of the logistic curve. GLM estimates are obtained with the following model setup:

$$Y_i = \Lambda\left(\alpha + \beta_s + \theta\, W_i + \gamma' X_i\right) + \mu_i$$

where Yi is the participation rate for ACP or Lifeline observed in county i, Λ(·) is the logistic function implied by the logit link, α is a constant, βs are state fixed effects, Wi is either the share of Republican votes or the share of “greatly decrease” responses in county i, Xi is a vector of county-level covariates, and μi is the error term (robust standard errors are reported). The coefficient of interest is θ, which recovers the effect of attitudes toward welfare (under the two alternative measurements) on Lifeline and ACP take-up. The tables of results report marginal effects to facilitate interpretation.
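A minimal estimation sketch is shown below, assuming a county-level analysis file `df` with illustrative variable names and only a subset of the covariates listed above. It fits a binomial GLM with a logit link, in the spirit of a Papke-Wooldridge fractional logit, with state fixed effects and robust standard errors, and computes the average marginal effect of the attitude measure by hand.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Fractional outcome with a logit link; statsmodels warns about non-integer
# outcomes under the Binomial family, which is expected for participation rates.
formula = ("acp_rate ~ rep_share_2000_2020 + log_median_hh_income + poverty_rate"
           " + unemployment_rate + bachelor_rate + median_age + C(state)")
res = smf.glm(formula, data=df, family=sm.families.Binomial()).fit(cov_type="HC1")

# Average marginal effect of the attitude measure: with a logit link,
# dE[y]/dx_k = beta_k * mu * (1 - mu), averaged over counties.
mu = res.fittedvalues
ame = res.params["rep_share_2000_2020"] * (mu * (1 - mu)).mean()
print(res.summary())
print(f"Average marginal effect of Republican vote share: {ame:.4f}")
```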

Results

The results for the two programs under study using the two alternative measurements of welfare attitudes are generally consistent and aligned with previous findings (Table 2). Lower incomes, more poverty, and more unemployment correlate, as expected, with increased participation in both programs. Program participation increases with the share of racial minority populations but drops significantly with the share of the population with a bachelor’s degree and with the share of foreign-born residents. The latter is a notable finding, given that neither Lifeline nor ACP requires immigration status verification. While further research is needed, this finding likely reflects a combination of misinformation about program eligibility and concerns about immigration status following changes to the public charge test enacted during the Trump administration.45

Table 2 Determinants of Lifeline and ACP participation (GLM marginal effects)

Variables (1) (2) (3) (4) 
Lifeline (Voting data) Lifeline (CES data) ACP (Voting data) ACP (CES data) 
Republican vote (2000–2020) –0.0624*** n/a –0.298*** n/a 
 (0.0213)  (0.0270)  
Welfare greatly decrease (%) n/a –0.0527*** n/a –0.148** 
  (0.0202)  (0.0584) 
Population density 2.20e-06*** 4.60e-07 –7.42e-07 –8.49e-08 
 (6.24e-07) (4.26e-07) (1.15e-06) (1.38e-06) 
Total population 2.04e-08*** 5.07e-09*** –1.54e-08** 2.36e-09 
 (4.95e-09) (1.70e-09) (6.93e-09) (5.92e-09) 
Median HH income (log) –0.0401* 0.00341 –0.122*** –0.156*** 
 (0.0222) (0.0229) (0.0211) (0.0519) 
Poverty rate 0.203*** 0.293*** 0.381*** 0.216 
 (0.0779) (0.0888) (0.0757) (0.264) 
Unemployment rate 0.641** 0.272 0.830*** –0.991 
 (0.250) (0.275) (0.283) (0.737) 
Bachelor or higher rate –0.0201 –0.111*** –0.323*** –0.180* 
 (0.0614) (0.0320) (0.0538) (0.0936) 
Median age 0.00108* 0.000280 0.000935 0.00176 
 (0.000616) (0.000416) (0.000595) (0.00125) 
White non-Hispanic rate –0.0403** –0.118*** –0.0693*** –0.0998** 
 (0.0161) (0.0176) (0.0245) (0.0452) 
Foreign-born rate –0.285*** –0.268** –0.472*** –0.580*** 
 (0.0798) (0.110) (0.0698) (0.158) 
English-only HHs rate –0.000973 –0.0574 –0.0922*** –0.240** 
 (0.0346) (0.0588) (0.0354) (0.103) 
Children in HH rate 0.0928 –0.0999 0.444*** 0.338*** 
 (0.0607) (0.0609) (0.0587) (0.124) 
Public Income/SNAP rate 0.229*** 0.208*** 0.0765 0.298* 
 (0.0449) (0.0625) (0.0673) (0.168) 
Population in tribal land 0.0555*** 0.0240 0.0336* –0.0310 
 (0.00673) (0.0201) (0.0180) (0.0303) 
Mobile access only rate n/a n/a –0.0588 –0.323** 
   (0.0391) (0.126) 
No computer rate n/a n/a –0.165*** –0.420* 
   (0.0622) (0.236) 
Lifeline participation rate n/a n/a 0.255*** 0.810*** 
   (0.0463) (0.228) 
Lifeline providers 0.00343*** 0.00613*** n/a n/a 
 (0.000409) (0.00168)   
Total ISPs n/a n/a 0.00351*** 0.00174*** 
   (0.000357) (0.000582) 
     
Observations 3,107 848 3,113 852 
State Controls Yes Yes Yes Yes 

Robust standard errors in parentheses

*** p < 0.01, ** p < 0.05, * p < 0.1

Sources: American Community Survey, MIT Election Lab, USAC, USDA, CCES.

The presence of children in the household is strongly associated with increased ACP participation but is not significantly correlated with Lifeline uptake. This finding is generally consistent with previous studies about the characteristics of broadband demand among K-12 families.46 Further, it is worth recalling that ACP replaced the EBB program, which was widely promoted by federal and local authorities as a response to school closures and the need to equip K-12 families for remote learning during the COVID-19 pandemic (EBB recipients were automatically transitioned into ACP after January 2022). Program participation also increases with the share of households in a county that receives public assistance income or is enrolled in SNAP, while Lifeline participation is a strong predictor of ACP take-up. This provides validation to earlier findings that being already connected to the welfare system lowers barriers to participation in related programs.

Participation is higher in Tribal Lands, where the benefit amount in both programs is significantly higher, although this result only holds when using voting data to measure welfare attitudes. The results also indicate that participation grows significantly with the number of service providers available in a county. There are several potential mechanisms that explain this finding, the most obvious being that more intense competition for subscribers results in more aggressive promotion and pricing strategies by ISPs that positively impact program take-up. Finally, ACP participation decreases with the share of households without a computing device. There are several possible interpretations for this finding, one pointing to limited digital literacy among potential recipients, while another suggests weak demand for higher-speed broadband service among mobile-only households.

Turning to our main variables of interest, the results corroborate that participation in ACP and Lifeline is correlated with attitudes toward welfare programs. More specifically, program participation decreases as the intensity of adverse welfare attitudes increases. This finding holds across the two programs as well as for both direct (CES survey) and indirect (voting data) measurements of local welfare attitudes. We interpret this finding as evidence that eligible households are forgoing participation on the basis of political ideologies that are deeply linked to group identity and negative stereotypes about welfare recipients.

While the results are similar for both programs, the magnitude of the effect is significantly larger for ACP, particularly when using voting data to measure welfare preferences. Notably, an increase of 10 percentage points in the share of Republican votes in a county is associated with a decrease in ACP uptake of about 3 percentage points (p < 0.01), whereas a similar increase in the share of “greatly reduce” welfare responses in CES is associated with a drop of about 1.5 percentage points (p < 0.05) in ACP uptake. In the case of Lifeline, the magnitude of the effect is smaller (about 0.5 percentage point drop in take-up) yet still significant at p < 0.001.

Because Lifeline and ACP models include different covariates, the comparison between coefficients across models needs to be carefully interpreted.47 Notwithstanding, the differences are large enough to warrant further research into why ACP uptake is more sensitive to variations in attitudes toward welfare. One hypothesis relates to program longevity: because Lifeline has existed for decades, the perceived stigma associated with participation may have diminished over time. Another factor is the rise in affective party polarization, which could drive welfare-adverse individuals to more strongly reject policy initiatives passed by Democratic administrations in recent years.48

The graphs below illustrate these results by plotting point predictions (along with 95% confidence intervals) of Lifeline and ACP uptake over the share of Republican votes (models 1 and 3 in Table 2) and over the share of “greatly reduce” responses (models 2 and 4). As shown, ACP uptake is predicted at just below 50% in heavily Democratic counties (ceteris paribus) while dropping below 20% in heavily Republican counties (Figure 3).

Figure 3 Predicted Lifeline and ACP uptake over Republican share 2000–2020 (95% CI)


Figure 4 Predicted Lifeline and ACP uptake over share of “greatly reduce” welfare spending (95% CI)


Similarly, ACP participation is predicted at just below 40% in counties where few respondents hold adverse attitudes toward welfare (ceteris paribus), compared to less than 30% participation in counties with more adverse welfare attitudes. The visible difference in slopes between ACP and Lifeline point predictions reflects the variation in coefficient magnitudes between the two programs discussed above.
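One common way to construct prediction profiles like those in Figures 3 and 4 is to vary the attitude measure over a grid while holding the remaining covariates at their observed county values and averaging the predicted uptake at each grid point; the published figures may instead hold covariates at their means. A sketch continuing the GLM example above (with the same illustrative names) follows.

```python
import numpy as np
import pandas as pd

# Grid of counterfactual Republican vote shares spanning the observed range.
grid = np.linspace(df["rep_share_2000_2020"].min(), df["rep_share_2000_2020"].max(), 25)

# Average predicted uptake at each grid value, holding other covariates as observed.
predicted = [res.predict(df.assign(rep_share_2000_2020=g)).mean() for g in grid]
profile = pd.DataFrame({"rep_share": grid, "predicted_acp_uptake": predicted})
print(profile.head())
```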

As a robustness check, we re-estimate the main findings using PSM. PSM is part of the family of quasi-experimental techniques commonly used in program evaluation when random assignment to treatment or control (the “gold standard” in experimental techniques) is infeasible. The main goal of PSM is to correct for imbalances in observable characteristics between two distinct groups.49 Balancing covariates helps approximate causal effects with observational data by creating a nonrandom but plausible counterfactual to each of the units in the treated condition.50 Balancing is accomplished by calculating the propensity for each unit to fall in the group of interest – the “treated” group in the treatment effects framework – and then matching each unit with one or more nontreated units. Because PSM rests on comparing treated and nontreated units, this is only possible when measuring welfare attitudes with voting data, which allows for a natural differentiation between Republican-majority counties (the treated group) and Democratic-majority counties (the control group). To implement PSM, we average the share of presidential votes for each party over the 2000 to 2020 period and create a binary variable that identifies Republican-majority and Democratic-majority counties, as sketched below.
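The treatment indicator can be constructed directly from the averaged vote shares; a one-line sketch with an illustrative column name is shown here.

```python
# A county is coded Republican-majority when its 2000-2020 average Republican
# share of the presidential vote exceeds one half.
df["republican_majority"] = (df["rep_share_2000_2020"] > 0.5).astype(int)
```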

As expected, these two groups of counties differ significantly across most baseline characteristics, including differences in service competition intensity, thus confirming the need for balancing (Table 3). In this setup, the raw difference in program participation between Democratic-majority and Republican-majority counties is 3 percentage points for Lifeline and 10 percentage points for ACP.

Table 3 County characteristics by majority party (2000–2020 vote average)

VARIABLES Democratic Republican Difference 
Outcome variables    
Lifeline participation 0.14 0.11 0.03*** 
ACP participation 0.38 0.28 0.10*** 
Covariates    
Population density 834.9 95.9 739*** 
Total population 263,721 53,806 209,915*** 
Median HH income (log) 10.93 10.97 0.04*** 
Poverty rate 0.11 0.10 0.01*** 
Unemployment rate 0.03 0.02 0.01*** 
Bachelor or higher rate 0.19 0.15 0.04*** 
Median age 40.0 41.9 –1.93*** 
White non-Hispanic rate 0.66 0.85 –0.19*** 
Foreign-born rate 0.07 0.04 0.03*** 
English-only HHs rate 0.84 0.91 –0.07*** 
Children in HH 0.24 0.23 0.01 
Public Income/SNAP rate 0.14 0.13 0.01*** 
Population in tribal land 0.04 0.04 0.00 
Mobile access only rate 0.13 0.15 –0.02*** 
No computer rate 0.11 0.11 0.00 
Lifeline providers 7.12 7.34 –0.22 
Total ISPs 25.0 19.5 5.5*** 
Observations 768 2,375  

*** p < 0.01, ** p < 0.05, * p < 0.1

Note: t-test for equality of means between Republican and Democratic counties.

Sources: American Community Survey, MIT Election Lab, USAC, USDA, CCES.

The first stage of the PSM model uses a logit regression to calculate the conditional probability (or propensity score) of observing a Republican-majority outcome based on the county characteristics listed in Table 3 (for brevity, we omit the results of this first-stage estimation). We then use the PSM algorithm to match counties on their propensity scores, using k-to-1 nearest-neighbor matching with k = 10 (in other words, each Republican-majority county is matched with its ten nearest Democratic-majority counties based on propensity scores). To avoid poor matches, we trim both tails of the propensity-score distribution at the fifth and ninety-fifth percentiles and use a caliper (the maximum allowed distance for matches) of 0.25 standard deviations of the propensity score.51 The result is a subsample of 238 Republican-majority counties that can be matched on characteristics with 760 Democratic-majority counties. As shown in Table A.1 (Appendix), counties in the matched sample are significantly more balanced on characteristics than in the original (unmatched) sample. Figure A.1 (Appendix) similarly suggests that PSM yields matches that approximate random assignment to the condition of interest (in our case, a Republican majority in presidential elections over the 2000–2020 period).

From the matched sample, we obtain an estimate of the average treatment effect (ATE) by computing the average of the differences between each Republican-majority county and the matched Democratic-majority counties for the outcome variables of interest. In the Lifeline case, the difference in uptake rates between Republican and Democratic counties decreases from 3 percentage points in the unmatched sample (Table 3) to 2 percentage points in the PSM-adjusted sample but remains statistically significant at p < 0.05 (Table 4). In the ACP case, the uptake difference in the matched sample is about 5 percentage points (p < 0.01) relative to 10 percentage points in the unmatched sample. In short, after balancing counties on confounders related to participation in broadband support programs, our main findings remain robust.
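The matching and ATE computation described in the two preceding paragraphs can be sketched as follows. This is a simplified illustration, not the authors' implementation: column names are assumed, the covariate list is abbreviated, and the sketch continues from the `df` used above. It estimates propensity scores with a logit first stage, trims the tails, matches each Republican-majority county to its ten nearest Democratic-majority counties within a caliper, and averages the outcome gaps.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

covariates = ["pop_density", "total_population", "log_median_hh_income",
              "poverty_rate", "unemployment_rate", "bachelor_rate", "median_age"]

# First stage: propensity score from a logit on standardized county characteristics.
X = StandardScaler().fit_transform(df[covariates])
df = df.assign(pscore=LogisticRegression(max_iter=1000)
               .fit(X, df["republican_majority"]).predict_proba(X)[:, 1])

# Trim both tails of the propensity-score distribution at the 5th/95th percentiles.
lo, hi = np.percentile(df["pscore"], [5, 95])
sample = df[(df["pscore"] >= lo) & (df["pscore"] <= hi)]
treated = sample[sample["republican_majority"] == 1]
control = sample[sample["republican_majority"] == 0]
caliper = 0.25 * sample["pscore"].std()

# k = 10 nearest Democratic-majority neighbors on the propensity score.
nn = NearestNeighbors(n_neighbors=10).fit(control[["pscore"]])
dist, idx = nn.kneighbors(treated[["pscore"]])

# For each treated county, average the outcome gap against matches within the caliper.
gaps = []
for i, t in enumerate(treated.index):
    keep = dist[i] <= caliper
    if keep.any():
        matched = control.iloc[idx[i][keep]]
        gaps.append(treated.loc[t, "acp_rate"] - matched["acp_rate"].mean())

print(f"PSM-adjusted gap in ACP uptake (Republican - Democratic): {np.mean(gaps):.3f}")
```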

Table 4 PSM-adjusted differences in participation rates between Republican and Democratic counties

 Republican Democratic Difference 
Lifeline participation 0.126 0.146 –0.02** 
ACP participation 0.308 0.357 –0.049** 

*** p < 0.01, ** p < 0.05, * p < 0.1.

Note: the sample includes only the subset of PSM-matched counties (n = 998).

Discussion and Policy Implications

The COVID-19 pandemic brought renewed attention to the connectivity barriers faced by the most vulnerable US households, and in particular to the affordability gap that previous studies have identified as a key barrier to internet access for low-income Americans.52 Until the onset of the pandemic, Lifeline was the only large-scale program that provided support for broadband to vulnerable families, and the level of support ($9.25/month in 2020) was small relative to the average cost of residential broadband (about $75/month in 2020 according to FCC data).53 The EBB program and later the ACP program were established to fill this gap, offering a significantly larger benefit while expanding eligibility and the range of supported services. However, the level of participation in broadband support programs has remained well below expectations, confounding policymakers and scholars alike.

Drawing broadly from the literature about participation in social benefits programs, this paper presents new evidence about the factors that drive uptake in broadband subsidy programs for vulnerable households. Our premise is that welfare stigma has not been adequately conceptualized in previous studies and is a significant predictor of participation in both Lifeline and ACP. We build upon a key theoretical distinction between social and personal stigma and propose that individuals embedded in moral communities with adverse attitudes toward welfare will internalize participation stigma and, as such, may forgo the benefit regardless of the risk of participation being observable by peers. Using a direct survey measurement of attitudes toward welfare as well as a proxy measurement based on the share of votes in presidential elections in the 2000 to 2020 period, we show that—ceteris paribus—ACP and Lifeline participation decreases as the intensity of adverse attitudes toward welfare increases. These results are corroborated by PSM estimates, offering a robust validation of the findings within a quasi-experimental framework.

It is worth noting that our focus on welfare stigma and political attitudes does not diminish the relevance of other factors that are known to depress the take-up of broadband support programs. While studies consistently show that Republican voters hold significantly less favorable views about welfare than Democratic voters, ACP participation rates are not consistently higher in blue states, which points to the relevance of other factors such as program awareness, administrative burdens, and trust.54 For example, survey studies show that awareness about the ACP program is low (about 30%) among eligible households, and there is evidence that the administrative burden of enrollment is high for potential recipients.55 There is also evidence of mistrust among potential recipients, who report concerns about sharing sensitive personal information with the federal government, as well as about future rate increases, particularly in the case of ACP, which has been funded through a one-time congressional appropriation.

While the literature often addresses information barriers, administrative burden, and welfare stigma separately, in practice, these factors operate jointly. For example, individuals with unfavorable attitudes toward welfare may not seek out or filter out information about broadband subsidies.56 Further, these individuals are likely clustered in personal networks with few other program recipients, which limits the availability of program information from trusted sources, reduces the level of support from friends or family members to navigate the enrollment process, and increases the perceived intensity of the participation stigma.

It is also important to highlight the increased intertwining of political ideology and personal identity in American politics, which makes it challenging to disentangle the role of welfare stigma from that of the blanket rejection of policies promoted by the other party.57 This is particularly relevant for ACP, a program closely identified with the Biden administration (despite its name, the Bipartisan Infrastructure Law of 2021 that established ACP received little Republican support in Congress). Ultimately, the close alignment of political and individual identities strengthens the threat to self-identity from participation in social programs that are associated with a stigmatized outgroup.

Our findings suggest a number of implications to enhance participation in Lifeline and ACP. First, there is growing interest in automating enrollment for eligible households based on participation in qualifying federal programs. Automation may reduce welfare stigma by combining broadband subsidies with existing programs and forgoing the need for individuals to initiate enrollment or recertify eligibility. In addition, promoting broadband benefits as part of existing public services could reduce threats to self-identity among welfare-adverse individuals. For example, if public schools were to promote ACP as integral to students’ educational activities (akin to borrowing school textbooks), parents may perceive the benefit as being supported by their tax dollars rather than as a government handout.58 Further, this destigmatization strategy aligns with our findings about increased ACP participation among families with children.

Another set of questions relates to the support level offered to recipients. Theoretically, a higher level of benefit is more likely to offset the perceived costs of participation, including the psychological costs related to welfare stigma. At the same time, the uncertainty about program funding (particularly for ACP) may attenuate the effect of larger benefits. Further, how variations in outreach strategies and messaging around ACP at the local level affect uptake is a topic that deserves further study. These and related questions call for continued research about broadband affordability and subsidy mechanisms, including comparative studies that exploit variations in eligibility criteria, outreach efforts, and program design across states.

Study Limitations and Areas for Further Research

There are a number of limitations to this study that must be noted and warrant cautious interpretation of the findings. A key limitation relates to the data available. We are unable to observe ACP or Lifeline program participation at the individual level and rely instead on geographically aggregated enrollment data that is matched to county demographics and other characteristics. Our individual-level interpretation of the findings is not uncommon in large observational studies about the effect of psychological factors (such as implicit bias or moral values) on policy outcomes.59 However, it calls for further research based on individual-level data (including both surveys and qualitative methods) to validate results.

Further, this study does not address outreach or message-framing strategies intended to reduce welfare stigma for Lifeline and ACP. Destigmatization is an area of continued interest for welfare policy researchers, and there are often striking results associated with low-cost interventions. For example, in a study of participation in a rental assistance program, Lasky-Fink and Linos find that subtle changes to the language used in outreach initiatives (changes intended to reduce internalized stigma) were associated with an 18% increase in application requests, relative to a baseline that used standard informational language.60 The results of the present study offer several indications about potential destigmatization strategies in outreach initiatives for internet support programs that deserve consideration by policymakers and advocacy organizations.

Appendix A

Table A1 Balance check between unmatched (U) and matched (M) samples

VARIABLES Sample Republican mean Democrat mean t-test Standardized % bias Bias reduction (%) 
Total population 53,829 2.6e+05 –9.77*** –47.2  
 1.9e+05 1.8e+05 0.24 0.3 99.3 
Population density 96.0 834.9 –15.7*** –28.5  
 332.1 317.6 0.05 0.6 98.0 
HH income (log) 10.9 10.9 –4.28*** –15.9  
 11.0 10.9 1.79* 18.7 17.1 
Poverty 0.09 0.11 –6.38*** –24.0  
 0.10 0.12 –2.19** –22.6 5.9 
Unemployment 0.02 0.03 –15.3*** –60.1  
 0.03 0.03 –1.20 –12.0 80.1 
Bachelor or higher 0.15 0.19 –18.2*** –65.5  
 0.20 0.19 1.8* 18.8 71.3 
Median age 41.9 40.0 8.82*** 36.2  
 40.16 40.1 0.0 99.9 
White non-Hispanic 0.85 0.66 29.3*** 104.7  
 0.68 0.66 1.51 13.5 87.1 
Foreign-born 0.04 0.08 –16.3*** –57.7  
 0.06 0.07 –0.50 –4.2 92.6 
English-only HHs 0.91 0.84 12.5*** 45.1  
 0.85 0.84 0.26 2.7 94.1 
Children in HH 0.23 0.24 –0.83 –3.5  
 0.23 0.24 –0.52 –5.4 –56.2 
Public Income/SNAP 0.13 0.14 –6.02*** –23.7  
 0.13 0.15 –2.76*** –27.7 17.0 
Population tribal 0.04 0.04 1.02 4.4  
 0.04 0.06 –0.69 –7.0 –58.2 
Mobile access only 0.15 0.13 7.03*** 30.1  
 0.13 0.13 –0.84 –7.3 75.6 
No computer 0.11 0.11 0.95 3.6  
 0.10 0.10 0.16 1.5 59.5 
Lifeline providers 7.35 7.12 1.29 5.5  
 7.06 6.80 0.76 6.4 –17.9 
Total ISPs 19.5 25.0 –14.7*** –53.0  
 24.7 22.8 1.870* 18.300 65.5 

*** p < 0.01, ** p < 0.05, * p < 0.1

Note: The t-tests are for equality of means between Republican and Democratic counties in each sample. The standardized percentage bias is the percentage difference of the means in the Republican and Democrat sub-samples as a percentage of the square root of the average of the sample variances (see Rosenbaum, Paul R., and Donald B. Rubin. “Constructing a Control Group Using Multivariate Matched Sampling Methods That Incorporate the Propensity Score.” The American Statistician 39, no. 1 (1985): 33. https://doi.org/10.2307/2683903).

Figure A1 Propensity score density plot for U and M samples

Note: the graph plots the propensity score from the first stage of the PSM model estimation (logit estimation for Republican majority=1) in the full (U) sample and the M subsample.


Notes

1.

Shefali Dahiya, Lila N. Rokanas, Surabhi Singh, Melissa Yang, and Jon M. Peha, “Lessons from Internet Use and Performance during Covid-19,” Journal of Information Policy 11 (2021): 202–21, https://doi.org/10.5325/jinfopoli.11.2021.0202.

2.

Janet Currie, The Take Up of Social Benefits, NBER Working Paper No. 10488 (National Bureau of Economic Research, 2004), https://doi.org/10.3386/w10488; Wonsik Ko and Robert Moffitt, Take-up of Social Benefits, NBER Working Paper No. w30148 (Cambridge, MA: National Bureau of Economic Research, 2022), https://doi.org/10.3386/w30148.

3.

P. M. Anderson, and B. D. Meyer, “Unemployment Insurance Take-up Rates and the After-Tax Value of Benefits,” The Quarterly Journal of Economics 112, no. 3 (1997): 913–37, https://doi.org/10.1162/003355397555389

4.

Bruce G. Link, and Jo C. Phelan, “Conceptualizing Stigma,” Annual Review of Sociology 27, no. 1 (2001): 363–85, https://doi.org/10.1146/annurev.soc.27.1.363; Saurabh Bhargava, and Dayanand Manoli, “Psychological Frictions and the Incomplete Take-up of Social Benefits: Evidence from an IRS Field Experiment,” American Economic Review 105, no. 11 (2015): 3489–529, https://doi.org/10.1257/aer.20121493.

5.

TracFone Forbearance Order and TracFone ETC Designation, FCC Docket No. 96-45 (2008).

6.

Government Accountability Office (GAO), “Telecommunications: Additional Action Needed to Mitigate Significant Risks in FCC’s Lifeline Program,” Report No. GAO-17-805T, 2017, https://www.gao.gov/assets/gao-17-538.pdf.

7.

Federal Communications Commission (FCC), “Report on the State of the Lifeline Marketplace” (WC Docket No. 09–197), July 2, 2021, https://www.fcc.gov/document/bureau-releases-report-state-lifeline-marketplace.

8.

John B. Horrigan, “Reimagining Lifeline: Universal Service, Affordability, and Connectivity,” August 2, 2022, https://www.benton.org/publications/reimagining-lifeline.

9.

In the Matter of Lifeline and Link Up Reform and Modernization, FCC WC Docket No. 11-42 (2016).

10.

Minimum-service standards refer to a set of minimum service benchmarks that Lifeline-supported services must offer. For example, in 2023 voice services must offer a minimum of 1,000 minutes/month while mobile broadband must offer a data allowance of 4.5GB/month at 3G speeds.

11.

NARUC, “State Universal Service Funds,” National Regulatory Research Institute Report No. 15–05, June 2015, https://www.naruc.org/nrri/nrri-library/research-papers/telecommunications.

12.

Phyllis Bernt, “Universal Service in the National Broadband Plan: A Case for Federal-State Cooperation,” Journal of Information Policy 1 (2011): 125–44, https://doi.org/10.5325/jinfopoli.1.2011.0125.

13.

Households also qualify for ACP if they meet the eligibility criteria to enroll in an existing affordable Internet program offered by an Internet Service Provider (ISP) that participates in ACP. However, the eligibility rules used by ISPs are generally based on a combination of the income and qualifying program criteria used by ACP.

14.

Currie, The Take Up of Social Benefits.

15.

Orley Ashenfelter, “Determining Participation in Income-Tested Social Programs,” Journal of the American Statistical Association 78, no. 383 (1983): 517–25, https://doi.org/10.1080/01621459.1983.10478004; James J. Heckman, and Jeffrey A. Smith, “The Determinants of Participation in a Social Program: Evidence from a Prototypical Job Training Program,” Journal of Labor Economics 22, no. 2 (2004): 243–98, https://doi.org/10.1086/381250.

16.

Janet Currie, “Do Children of Immigrants Make Differential Use of Public Health Insurance?,” in Issues in the Economics of Immigration, ed. George J. Borjas (Chicago: University of Chicago Press, 2000), 271–308.

17.

G. J. Borjas, and L. Hilton, “Immigration and the Welfare State: Immigrant Participation in Means-Tested Entitlement Programs,” The Quarterly Journal of Economics 111, no. 2 (1996): 575–604, https://doi.org/10.2307/2946688.

18.

Marianne Bertrand, Erzo F. Luttmer, and Sendhil Mullainathan, “Network Effects and Welfare Cultures,” Quarterly Journal of Economics 115, no. 3 (2000): 1019–55. https://doi.org/10.1162/003355300554971.

19.

Pamela Herd, and Donald P. Moynihan, Administrative Burden: Policymaking by Other Means (New York: Russell Sage Foundation, 2019).

20.

Amy Finkelstein, and Matthew J. Notowidigdo, “Take-up and Targeting: Experimental Evidence from Snap,” The Quarterly Journal of Economics 134, no. 3 (2019): 1505–56. https://doi.org/10.1093/qje/qjz013.

21.

Manasi Deshpande, and Yue Li. “Who Is Screened Out? Application Costs and the Targeting of Disability Programs,” American Economic Journal: Economic Policy 11, no. 4 (2019): 213–48. https://doi.org/10.1257/pol.20180076.

22.

Robert Moffitt, “An Economic Model of Welfare Stigma,” The American Economic Review 73, no. 5 (1983): 1023–35.

23.

A. Lindbeck, S. Nyberg, and J. W. Weibull, “Social Norms and Economic Incentives in the Welfare State,” The Quarterly Journal of Economics 114, no. 1 (1999): 1–35, https://doi.org/10.1162/003355399555936.

24.

Pablo Celhay, Bruce Meyer, and Nikolas Mittag, “Stigma in Welfare Programs,” SSRN Electronic Journal, 2022, https://doi.org/10.2139/ssrn.4163308.

25.

Jennifer Crocker, Brenda Major, and Claude Steele, “Social Stigma,” in The Handbook of Social Psychology, ed. D. T. Gilbert, S. T. Fiske, and G. Lindzey, 4th ed., Vol. 2 (Boston: McGraw-Hill, 1998), 504–53; Brenda Major, and Laurie T. O’Brien, “The Social Psychology of Stigma,” Annual Review of Psychology 56, no. 1 (2005): 393–421, https://doi.org/10.1146/annurev.psych.56.091103.070137.

26.

Claude M. Steele, Steven J. Spencer, and Joshua Aronson. “Contending with Group Image: The Psychology of Stereotype and Social Identity Threat,” Advances in Experimental Social Psychology, 2002, 379–440, https://doi.org/10.1016/s0065-2601(02)80009-0.

27.

Link and Phelan, “Conceptualizing Stigma.”

28.

Carina Mood, “Take-up Down Under: Hits and Misses of Means-Tested Benefits in Australia,” European Sociological Review 22, no. 4 (2006): 443–58, https://doi.org/10.1093/esr/jcl007.

29.

Oliver Hümbelin, “Non-Take-up of Social Assistance: Regional Differences and the Role of Social Norms,” Swiss Journal of Sociology 45, no. 1 (2019): 7–33, https://doi.org/10.2478/sjs-2019-0002.

30.

Currie, The Take Up of Social Benefits.

31.

Avraham Ebenstein, and Kevin Stange, “Does Inconvenience Explain Low Take-up? Evidence from Unemployment Insurance,” Journal of Policy Analysis and Management 29, no. 1 (2010): 111–36, https://doi.org/10.1002/pam.20481.

32.

Janice A. Hauge, Mark A. Jamison, and R. Todd Jewell, “Participation in Social Programs by Consumers and Companies,” Public Finance Review 35, no. 5 (2007): 606–25, https://doi.org/10.1177/1091142106299019.

33.

Ibid.

34.

Mark Burton, Jeffrey Macher, and John W Mayo, “Understanding Participation in Social Programs: Why Don’t Households Pick up the Lifeline?” The B.E. Journal of Economic Analysis & Policy 7, no. 1 (2007), https://doi.org/10.2202/1935-1682.1583.

35.

Krishna Jayakar, and Eun-A Park, “Reforming the Lifeline Program: Regulatory Federalism in Action?” Telecommunications Policy 43, no. 1 (2019): 67–75, https://doi.org/10.1016/j.telpol.2018.04.001.

36.

S. Wallsten, “Learning from the FCC’s Lifeline Broadband Pilot Projects,” TPRC 44: The 44th Research Conference on Communication, Information and Internet Policy 2016 (2016), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757149.

37.

Hyun Ji Lee, and Brian Whitacre, “Estimating Willingness-to-Pay for Broadband Attributes among Low-Income Consumers: Results from Two FCC Lifeline Pilot Projects,” Telecommunications Policy 41, no. 9 (2017): 769–80, https://doi.org/10.1016/j.telpol.2017.04.001.

38.

Hauge et al., “Participation in Social Programs by Consumers and Companies,” p. 6.

39.

Our method for determining program eligibility differs from USAC’s in one important way. According to program rules, households are eligible for Lifeline or ACP if at least one household member is enrolled in a qualifying public assistance program. However, USAC’s calculations are based on data for the head of household only, which undercounts the number of eligible households, and thus tends to inflate participation rates. Throughout this study, we use our own calculations that are based on whether any household member participates in a qualifying assistance program.
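To make the distinction concrete, the following hypothetical sketch (not USAC's or the authors' code) applies the two rules to toy person-level records; the table and column names are invented for illustration.

```python
import pandas as pd

# Toy person-level records: household id, head-of-household flag, and whether
# the person is enrolled in a qualifying assistance program (names invented).
persons = pd.DataFrame({
    "hh_id":      [1, 1, 2, 2, 3],
    "is_head":    [1, 0, 1, 0, 1],
    "in_program": [0, 1, 0, 0, 1],
})

# Head-of-household rule (similar in spirit to USAC's approach): a household
# counts as eligible only if the head is enrolled.
head_rule = persons.loc[persons["is_head"] == 1].groupby("hh_id")["in_program"].max()

# Any-member rule used in this study: a household counts as eligible if any
# member is enrolled.
any_member_rule = persons.groupby("hh_id")["in_program"].max()

print(int(head_rule.sum()), int(any_member_rule.sum()))  # 1 vs. 2 eligible households
```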

40.

Ko and Moffitt, Take-up of Social Benefits.

41.

Source: FCC. This data was collected through FCC Form 477 and is known to overstate the number of ISPs operating in any geographic area.

42.

For details about the CES study see Brian Schaffner, Stephen Ansolabehere, and Marissa Shih, “Cooperative Election Study Common Content, 2022,” Harvard Dataverse, https://doi.org/10.7910/DVN/PR4L8P

43.

Recent studies that use this indirect measurement include: Trent Steidley, and Danielle Trujillo, “Status Politics and the Political Influences of Concealed Handgun License Demand in Texas,” The Sociological Quarterly 62, no. 4 (2020): 665–89, https://doi.org/10.1080/00380253.2020.1803157; Kelsey E. Gonzalez, Rina James, Eric T. Bjorklund, and Terrence D. Hill, “Conservatism and Infrequent Mask Usage: A Study of US Counties during the Novel Coronavirus (COVID-19) Pandemic,” Social Science Quarterly 102, no. 5 (2021): 2368–82, https://doi.org/10.1111/ssqu.13025.

44.

See Yotam Margalit, “Explaining Social Policy Preferences: Evidence from the Great Recession,” American Political Science Review 107, no. 1 (2013): 80–103, https://doi.org/10.1017/s0003055412000603; Maria Grazia Pittau, Alessio Farcomeni, and Roberto Zelli, “Has the Attitude of US Citizens towards Redistribution Changed over Time?” Economic Modelling 52 (2016): 714–24, https://doi.org/10.1016/j.econmod.2015.09.039; Shanto Iyengar, Yphtach Lelkes, Matthew Levendusky, Neil Malhotra, and Sean J. Westwood, “The Origins and Consequences of Affective Polarization in the United States.” Annual Review of Political Science 22, no. 1 (2019): 129–46, https://doi.org/10.1146/annurev-polisci-051117-073034.

45.

Randy Capps, Michael Fix, and Jeanne Batalova, Anticipated ‘Chilling Effects’ of the Public-Charge Rule Are Real: Census Data Reflect Steep Decline in Benefits Use by Immigrant Families (Washington, DC: Migration Policy Institute, 2020).

46.

Teresa Correa, “Bottom-up Technology Transmission within Families: Exploring How Youths Influence Their Parents’ Digital Media Use with Dyadic Data,” Journal of Communication 64, no. 1 (2013): 103–24, https://doi.org/10.1111/jcom.12067; Author.

47.

Richard Williams, and Abigail Jorgensen. “Comparing Logit & Probit Coefficients between Nested Models,” Social Science Research 109 (2023): 102802, https://doi.org/10.1016/j.ssresearch.2022.102802

48.

For a discussion of affective party polarization see Iyengar et al. “The Origins and Consequences of Affective Polarization in the United States.”

49.

P. R. Rosenbaum, “Evidence Factors in Observational Studies,” Biometrika 97, no. 2 (2010): 333–45, https://doi.org/10.1093/biomet/asq019.

50.

Markus Frölich, “Propensity Score Matching without Conditional Independence Assumption—with an Application to the Gender Wage Gap in the United Kingdom,” The Econometrics Journal 10, no. 2 (2007): 359–407, https://doi.org/10.1111/j.1368-423x.2007.00212.x; Shenyang Guo, and Mark W. Fraser, Propensity Score Analysis: Statistical Methods and Applications, Vol. 11 (Los Angeles: SAGE Publications, 2014).

51.

Paul R. Rosenbaum, and Donald B. Rubin, “Constructing a Control Group Using Multivariate Matched Sampling Methods That Incorporate the Propensity Score,” The American Statistician 39, no. 1 (1985): 33, https://doi.org/10.2307/2683903.

52.

George S. Ford, “Confusing Relevance and Price: Interpreting and Improving Surveys on Internet Non-Adoption,” Telecommunications Policy 45, no. 2 (2021): 102084, https://doi.org/10.1016/j.telpol.2020.102084.

53.

Data from the FCC’s 2021 Urban Rate Survey for the average cost of a connection offering at least 25/3 Mbps (see www.fcc.gov/economics-analytics/industry-analysis-division/urban-rate-survey-data-resources).

54.

For ACP participation data in red and blue states see Bar and Galperin, “Broadband for All: The Affordable Connectivity Program (ACP) Benefits Households Across Party Lines.”

55.

See EveryoneOn, The State of Digital Equity (2022), https://www.everyoneon.org/2022-national-research.

56.

J. L. Hochschild, and K. L. Einstein, Do Facts Matter?: Information and Misinformation in American Politics (Norman: University of Oklahoma Press, 2015)

57.

Lilliana Mason, Uncivil Agreement: How Politics Became Our Identity (Chicago, IL: University of Chicago Press, 2018).

58.

We thank an anonymous reviewer for this suggestion.

59.

Eric Hehman, Jimmy Calanchini, Jessica K. Flake, and Jordan B. Leitner. “Establishing Construct Validity Evidence for Regional Measures of Explicit and Implicit Racial Bias,” Journal of Experimental Psychology: General 148, no. 6 (2019): 1022–40, https://doi.org/10.1037/xge0000623; Nils Karl Reimer, Mohammad Atari, Farzan Karimi-Malekabadi, Jackson Trager, Brendan Kennedy, Jesse Graham, and Morteza Dehghani, “Moral Values Predict County-Level Covid-19 Vaccination Rates in the United States,” American Psychologist 77, no. 6 (2022): 743–59, https://doi.org/10.1037/amp0001020.

60.

Jessica Lasky-Fink, and Elizabeth Linos. “Improving Delivery of the Social Safety Net: The Role of Stigma,” Journal of Public Administration Research and Theory (2023), https://doi.org/10.1093/jopart/muad021.

Bibliography

Anderson, P. M., and B. D. Meyer. “Unemployment Insurance Take-up Rates and the After-Tax Value of Benefits.” The Quarterly Journal of Economics 112, no. 3 (1997): 913–37. https://doi.org/10.1162/003355397555389.
Ashenfelter, Orley. “Determining Participation in Income-Tested Social Programs.” Journal of the American Statistical Association 78, no. 383 (1983): 517–25. https://doi.org/10.1080/01621459.1983.10478004.
Bernt, Phyllis. “Universal Service in the National Broadband Plan: A Case for Federal-State Cooperation.” Journal of Information Policy 1 (2011): 125–44. https://doi.org/10.5325/jinfopoli.1.2011.0125.
Bertrand, Marianne, Erzo F. Luttmer, and Sendhil Mullainathan. “Network Effects and Welfare Cultures.” Quarterly Journal of Economics 115, no. 3 (2000): 1019–55. https://doi.org/10.1162/003355300554971.
Bhargava, Saurabh, and Dayanand Manoli. “Psychological Frictions and the Incomplete Take-up of Social Benefits: Evidence from an IRS Field Experiment.” American Economic Review 105, no. 11 (2015): 3489–529. https://doi.org/10.1257/aer.20121493.
Borjas, G. J., and L. Hilton. “Immigration and the Welfare State: Immigrant Participation in Means-Tested Entitlement Programs.” The Quarterly Journal of Economics 111, no. 2 (1996): 575–604. https://doi.org/10.2307/2946688.
Burton, Mark, Jeffrey Macher, and John W. Mayo. “Understanding Participation in Social Programs: Why Don’t Households Pick up the Lifeline?” The B.E. Journal of Economic Analysis & Policy 7, no. 1 (2007). https://doi.org/10.2202/1935-1682.1583.
Capps, Randy, Michael Fix, and Jeanne Batalova. Anticipated ‘Chilling Effects’ of the Public-Charge Rule Are Real: Census Data Reflect Steep Decline in Benefits Use by Immigrant Families. Washington, DC: Migration Policy Institute, 2020.
Celhay, Pablo, Bruce Meyer, and Nikolas Mittag. “Stigma in Welfare Programs.” SSRN Electronic Journal, 2022. https://doi.org/10.2139/ssrn.4163308.
Correa, Teresa. “Bottom-up Technology Transmission within Families: Exploring How Youths Influence Their Parents’ Digital Media Use with Dyadic Data.” Journal of Communication 64, no. 1 (2013): 103–24. https://doi.org/10.1111/jcom.12067.
Crocker, Jennifer, Brenda Major, and Claude Steele. “Social Stigma.” In The Handbook of Social Psychology, edited by D. T. Gilbert, S. T. Fiske, and G. Lindzey, 4th ed., Vol. 2, 504–53. Boston: McGraw-Hill, 1998.
Currie, Janet. “Do Children of Immigrants Make Differential Use of Public Health Insurance?” In Issues in the Economics of Immigration, edited by George J. Borjas, 271–308. Chicago, IL: University of Chicago Press, 2000.
Currie, Janet. The Take Up of Social Benefits. NBER Working Paper No. 10488. National Bureau of Economic Research, 2004. https://doi.org/10.3386/w10488.
Dahiya, Shefali, Lila N. Rokanas, Surabhi Singh, Melissa Yang, and Jon M. Peha. “Lessons from Internet Use and Performance during Covid-19.” Journal of Information Policy 11 (2021): 202–21. https://doi.org/10.5325/jinfopoli.11.2021.0202.
Deshpande, Manasi, and Yue Li. “Who Is Screened Out? Application Costs and the Targeting of Disability Programs.” American Economic Journal: Economic Policy 11, no. 4 (2019): 213–48. https://doi.org/10.1257/pol.20180076.
Ebenstein, Avraham, and Kevin Stange. “Does Inconvenience Explain Low Take-up? Evidence from Unemployment Insurance.” Journal of Policy Analysis and Management 29, no. 1 (2010): 111–36. https://doi.org/10.1002/pam.20481.
Federal Communications Commission (FCC). “Report on the State of the Lifeline Marketplace” (WC Docket No. 09–197). July 2, 2021. https://www.fcc.gov/document/bureau-releases-report-state-lifeline-marketplace.
Finkelstein, Amy, and Matthew J. Notowidigdo. “Take-up and Targeting: Experimental Evidence from Snap.” The Quarterly Journal of Economics 134, no. 3 (2019): 1505–56. https://doi.org/10.1093/qje/qjz013.
Ford, George S. “Confusing Relevance and Price: Interpreting and Improving Surveys on Internet Non-Adoption.” Telecommunications Policy 45, no. 2 (2021): 102084. https://doi.org/10.1016/j.telpol.2020.102084.
Frölich, Markus. “Propensity Score Matching without Conditional Independence Assumption—with an Application to the Gender Wage Gap in the United Kingdom.” The Econometrics Journal 10, no. 2 (2007): 359–407. https://doi.org/10.1111/j.1368-423x.2007.00212.x.
Gonzalez, Kelsey E., Rina James, Eric T. Bjorklund, and Terrence D. Hill. “Conservatism and Infrequent Mask Usage: A Study of US Counties during the Novel Coronavirus (COVID-19) Pandemic.” Social Science Quarterly 102, no. 5 (2021): 2368–82. https://doi.org/10.1111/ssqu.13025.
Government Accountability Office (GAO). “Telecommunications: Additional Action Needed to Mitigate Significant Risks in FCC’s Lifeline Program.” Report No. GAO-17-805T, 2017. https://www.gao.gov/assets/gao-17-538.pdf.
Guo, Shenyang, and Mark W. Fraser. Propensity Score Analysis: Statistical Methods and Applications. Vol. 11. Los Angeles: SAGE Publications, 2014.
Hauge, Janice A., Mark A. Jamison, and R. Todd Jewell. “Participation in Social Programs by Consumers and Companies.” Public Finance Review 35, no. 5 (2007): 606–25. https://doi.org/10.1177/1091142106299019.
Heckman, James J., and Jeffrey A. Smith. “The Determinants of Participation in a Social Program: Evidence from a Prototypical Job Training Program.” Journal of Labor Economics 22, no. 2 (2004): 243–98. https://doi.org/10.1086/381250.
Hehman, Eric, Jimmy Calanchini, Jessica K. Flake, and Jordan B. Leitner. “Establishing Construct Validity Evidence for Regional Measures of Explicit and Implicit Racial Bias.” Journal of Experimental Psychology: General 148, no. 6 (2019): 1022–40. https://doi.org/10.1037/xge0000623.
Herd, Pamela, and Donald P. Moynihan. Administrative Burden: Policymaking by Other Means. New York: Russell Sage Foundation, 2019.
Hochschild, J. L., and K. L. Einstein. Do Facts Matter?: Information and Misinformation in American Politics. Norman: University of Oklahoma Press, 2015.
Horrigan, John B. “Reimagining Lifeline: Universal Service, Affordability, and Connectivity.” August 2, 2022. https://www.benton.org/publications/reimagining-lifeline.
Hümbelin, Oliver. “Non-Take-up of Social Assistance: Regional Differences and the Role of Social Norms.” Swiss Journal of Sociology 45, no. 1 (2019): 7–33. https://doi.org/10.2478/sjs-2019-0002.
Iyengar, Shanto, Yphtach Lelkes, Matthew Levendusky, Neil Malhotra, and Sean J. Westwood. “The Origins and Consequences of Affective Polarization in the United States.” Annual Review of Political Science 22, no. 1 (2019): 129–46. https://doi.org/10.1146/annurev-polisci-051117-073034.
Ko, Wonsik, and Robert Moffitt. Take-up of Social Benefits. NBER Working Paper No. w30148. Cambridge, MA: National Bureau of Economic Research, 2022. https://doi.org/10.3386/w30148.
Lasky-Fink, Jessica, and Elizabeth Linos. “Improving Delivery of the Social Safety Net: The Role of Stigma.” Journal of Public Administration Research and Theory (2023). https://doi.org/10.1093/jopart/muad021.
Lee, Hyun Ji, and Brian Whitacre. “Estimating Willingness-to-Pay for Broadband Attributes among Low-Income Consumers: Results from Two FCC Lifeline Pilot Projects.” Telecommunications Policy 41, no. 9 (2017): 769–80. https://doi.org/10.1016/j.telpol.2017.04.001.
Lindbeck, A., S. Nyberg, and J. W. Weibull. “Social Norms and Economic Incentives in the Welfare State.” The Quarterly Journal of Economics 114, no. 1 (1999): 1–35. https://doi.org/10.1162/003355399555936.
Link, Bruce G., and Jo C. Phelan. “Conceptualizing Stigma.” Annual Review of Sociology 27, no. 1 (2001): 363–85. https://doi.org/10.1146/annurev.soc.27.1.363.
Major, Brenda, and Laurie T. O’Brien. “The Social Psychology of Stigma.” Annual Review of Psychology 56, no. 1 (2005): 393–421. https://doi.org/10.1146/annurev.psych.56.091103.070137.
Margalit, Yotam. “Explaining Social Policy Preferences: Evidence from the Great Recession.” American Political Science Review 107, no. 1 (2013): 80–103. https://doi.org/10.1017/s0003055412000603.
Mason, Lilliana. Uncivil Agreement: How Politics Became Our Identity. Chicago, IL: University of Chicago Press, 2018.
Moffitt, Robert. “An Economic Model of Welfare Stigma.” The American Economic Review 73, no. 5 (1983): 1023–35.
Mood, Carina. “Take-up Down Under: Hits and Misses of Means-Tested Benefits in Australia.” European Sociological Review 22, no. 4 (2006): 443–58. https://doi.org/10.1093/esr/jcl007.
NARUC. “State Universal Service Funds.” National Regulatory Research Institute Report No. 15–05, June 2015. https://pubs.naruc.org/pub/3EA33142-00AE-EBB0-0F97-C5B0A24F755A.
Pittau, Maria Grazia, Alessio Farcomeni, and Roberto Zelli. “Has the Attitude of US Citizens towards Redistribution Changed over Time?” Economic Modelling 52 (2016): 714–24. https://doi.org/10.1016/j.econmod.2015.09.039.
Reimer, Nils Karl, Mohammad Atari, Farzan Karimi-Malekabadi, Jackson Trager, Brendan Kennedy, Jesse Graham, and Morteza Dehghani. “Moral Values Predict County-Level Covid-19 Vaccination Rates in the United States.” American Psychologist 77, no. 6 (2022): 743–59. https://doi.org/10.1037/amp0001020.
Rosenbaum, P. R. “Evidence Factors in Observational Studies.” Biometrika 97, no. 2 (2010): 333–45. https://doi.org/10.1093/biomet/asq019.
Rosenbaum, Paul R., and Donald B. Rubin. “Constructing a Control Group Using Multivariate Matched Sampling Methods That Incorporate the Propensity Score.” The American Statistician 39, no. 1 (1985): 33. https://doi.org/10.2307/2683903.
Schaffner, Brian, Stephen Ansolabehere, and Marissa Shih. “Cooperative Election Study Common Content, 2022.” Harvard Dataverse. https://doi.org/10.7910/DVN/PR4L8P.
Steele, Claude M., Steven J. Spencer, and Joshua Aronson. “Contending with Group Image: The Psychology of Stereotype and Social Identity Threat.” Advances in Experimental Social Psychology, 2002, 379–440. https://doi.org/10.1016/s0065-2601(02)80009-0.
Steidley, Trent, and Danielle Trujillo. “Status Politics and the Political Influences of Concealed Handgun License Demand in Texas.” The Sociological Quarterly 62, no. 4 (2020): 665–89. https://doi.org/10.1080/00380253.2020.1803157.
Wallsten, S. “Learning from the FCC’s Lifeline Broadband Pilot Projects.” TPRC 44: The 44th Research Conference on Communication, Information and Internet Policy 2016 (2016). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757149.
Williams, Richard, and Abigail Jorgensen. “Comparing Logit & Probit Coefficients between Nested Models.” Social Science Research 109 (2023): 102802. https://doi.org/10.1016/j.ssresearch.2022.102802.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.