The rise of nondegree credential (NDC) programs as a public policy issue has lent perennial research subjects a new urgency. After all, state governments annually invest billions in NDC programs, and Congress plans to consider expanding Pell eligibility to include short-term workforce programs in the current session. So policymakers are increasingly asking the question researchers have posed for years: What value do NDCs have in the labor market? 

Evidence exists to answer that question. The problem is that there’s more than one answer. It’s like trying to hear one conversation in a busy café: So many people are talking that the signal tends to get lost in the background noise. The reasons for this are instructive and worthy of exploration. They can inform how policymakers, not to mention providers and other stakeholders, understand NDCs’ varying roles in the lives of learners and the health of local labor markets. 

Research on NDCs has found extraordinarily wide variability in their impact on learners’ labor market outcomes. This variability should come as no surprise given similar findings for college graduates. A substantial body of research has shown that, on average, obtaining a bachelor’s degree dramatically improves a graduate’s earnings. But these estimates range widely based on a variety of factors, most notably field of study. According to the Georgetown University Center on Education and the Workforce, for example, the average first-year graduate of Rutgers University with a bachelor’s degree in anthropology earns $23,400, while a computer and information sciences grad earns $70,296.

NDCs appear to show even more variability than degrees. One team of researchers found positive returns of more than $300 per quarter for students earning certificates in Kentucky. But another team estimated negative returns of roughly the same amount in North Carolina, and still another observed positive returns of more than $500 per quarter in Texas.

Variation in NDCs’ labor market outcomes has three primary sources.

Field of Study: The most obvious is empirical variation across programs, primarily based on field of study. Multiple studies have shown that certain fields of study offer higher returns than others: computer and information security certifications, for example, lead to higher wages than cosmetology licenses.

A good illustration of the importance of scrutinizing program of study can be found in a recent report from the federal Institute of Education Sciences (IES). This random assignment study found that students in very short-term occupational training programs who received Pell Grants had the same employment and earnings outcomes as students who did not.

Notably, more than two-thirds of students in the study appear to have enrolled in programs leading to a transportation credential, a security officer certificate, or a health care credential. These are potentially low-return programs, which might have had the effect of flattening earnings outcomes for both groups. This demonstrates the inherent noisiness of research into NDCs. Given the sharp variation in NDC outcomes by program of study, we risk over-generalizing from any given study unless we take care to ground our understanding in its specific industry context.

Regional Variation: Second, earnings outcomes vary significantly across regional labor markets. Researchers see sharp disparities between regions where demand may sharply outstrip supply and regions where it does not; between urban, rural, and suburban regions; and between regions where employers tend to accept a particular credential, driving up demand, and regions where they tend not to. Studies that examine outcomes for comparable programs across multiple regions find that local context matters a great deal.

Differing Data: Finally, much of the disparity in reported outcomes owes to differences in methodologies and data sources. Researchers often seek to answer subtly different questions and contend with imperfect data sources that make their preferred methodologies infeasible. 

For example, researchers use different benchmarks for comparison. An EERC review of studies on NDC quality found researchers comparing completers’ income with their own income before earning the credential; comparing their income with that of others who did not earn a credential; and comparing their income with that of others who earned associate or bachelor’s degrees.
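In stylized terms, those three benchmarks correspond to three different estimands. The notation below is illustrative only, not drawn from the EERC review, with $\bar{Y}$ standing for average earnings in each group:

$$\hat{\Delta}_{1} = \bar{Y}^{\,\text{completers}}_{\text{after}} - \bar{Y}^{\,\text{completers}}_{\text{before}}, \qquad \hat{\Delta}_{2} = \bar{Y}^{\,\text{completers}} - \bar{Y}^{\,\text{no credential}}, \qquad \hat{\Delta}_{3} = \bar{Y}^{\,\text{completers}} - \bar{Y}^{\,\text{degree holders}}$$

The same program can look strong under the first contrast, middling under the second, and weak under the third, so studies built on different benchmarks need not agree even when each is internally sound.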

The researcher’s task is made more difficult by differences in the available administrative data sources. We can put aside for-credit certificates, which already qualify for Pell Grants and therefore follow the same reporting conventions as college degrees. Data on noncredit credentials, by contrast, typically come from state agencies and higher education systems, most of which have only begun collecting such data within the past several years. Their data collections are tailored to the needs of their states and, often, to the structure of subsidies enacted by their state legislatures.

For example, Texas and Virginia collect rich datasets, but only for noncredit programs funded by their states. Maryland collects data on all students in noncredit programs, but at this point only on those who complete them. Iowa reports outcomes for students who obtain some third-party certifications; most states cannot. These differences may lead to disparities in reported quality outcomes even when researchers use similar methodologies.

Today, the pressing need is to improve the signal-to-noise ratio of research on NDCs. We can boost the signal by developing clear, actionable evidence. For example, researchers and practitioners should ground their assessments of quality outcomes in local or regional labor market conditions, which are likely to offer more precision than national labor market information.

We can reduce unwanted noise by clarifying the significance of studies that use multiple methodologies to answer a broad spectrum of questions. As a starting point, state and institutional leaders should build data capacity at the state level to track NDC completion and attainment. Next, they will need to build consensus on data elements and definitions across states, which will provide a platform for the long-overdue development of federal data collection measures. 

Such proposals may seem like a heavy lift. But we’ve already come a long way. In 2005, researchers John Milam and Richard Voorhees coined the phrase “the hidden college” to describe noncredit programs for which they could find little credible information. Two decades later, we have quite a bit more credible information at our fingertips. Now we just have to make sense of it.