The State of Play


When OpenAI first rolled out ChatGPT two years ago, CodePath immediately knew its work was going to change. The nonprofit training provider exists to help diverse students outside of the Stanfords and MITs develop the technical expertise and career experience they need to land top software engineering roles. The breakthrough in generative AI (artificial intelligence) has affected everything from the skills CodePath needs to teach to its business model.

“AI is going to be a crucial tool in our scaling,” says Zack Parker, CodePath’s vice president of product engineering.

The First Bet: Coaching is an area the organization zeroed in on early. The support CodePath’s 7K students get with their job searches, the all-important technical interview, and general career mapping is a huge part of what makes them successful. But providing that kind of coaching is also time- and staff-intensive—making it as expensive as it is essential. An AI-powered career coach can help bring down the cost.

The tech also can be available 24/7 and doesn’t get bored answering the same questions over and over the way a human does. “One of the things that makes it challenging to fill those roles is the conversations can be kind of repetitive,” Parker says. But that’s perfect for AI.

To create the coach, Parker and his colleague Katelyn Kasperowicz first built out a series of bots designed to be good with a specific task, such as helping students write a professional email, negotiate a job offer, or prepare for a technical interview. They then created an AI overlay that stitches them all together in a way that appears seamless to the user.
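CodePath hasn’t published its implementation, but the pattern it describes, single-purpose bots behind a routing layer that makes handoffs invisible to the student, is a common one. A minimal sketch in Python, with all bot names, prompts, and keyword rules invented for illustration:

```python
# Hypothetical sketch of the "overlay" pattern: several task-specific
# bots, each with its own system prompt, sit behind a router that picks
# the right specialist for each student message. None of these names or
# rules are CodePath's actual code.

from dataclasses import dataclass


@dataclass
class Bot:
    name: str
    system_prompt: str
    keywords: tuple


BOTS = [
    Bot("email_coach",
        "Help the student draft a professional email.",
        ("email", "recruiter", "follow up")),
    Bot("offer_negotiator",
        "Coach the student through negotiating a job offer.",
        ("offer", "salary", "negotiate")),
    Bot("interview_prep",
        "Run a mock technical interview, giving hints, not answers.",
        ("interview", "leetcode", "whiteboard")),
]

# Fallback so every message gets an answer and the handoff feels seamless.
GENERAL = Bot("career_coach", "Answer general career-mapping questions.", ())


def route(message: str) -> Bot:
    """Score each specialist by keyword matches; fall back to the
    general coach when nothing matches."""
    text = message.lower()
    scored = [(sum(k in text for k in bot.keywords), bot) for bot in BOTS]
    score, best = max(scored, key=lambda pair: pair[0])
    return best if score > 0 else GENERAL


# In a real system, the chosen bot's system_prompt plus the student's
# message would be sent to a large language model; production routers
# typically use an LLM classifier rather than keywords.
```

The keyword matcher stands in for whatever classifier actually dispatches messages; the point is the shape, one prompt per task and one thin layer deciding which prompt applies.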

Photo by This is Engineering via Creative Commons

Early Insight: The process is a lot of work that, Parker notes, may be made obsolete—perhaps even in a couple years—by a commodity product. But he certainly doesn’t see that as wasted time. For one, he says, CodePath isn’t going to wait even a couple years. For another, the experimentation itself is a big part of the point.

“A lot of the work we do is not in the code we write, but in the organizational muscle that we build,” Parker says. 

In the world of education and workforce training, AI is in its play era. The savviest organizations are trying out things—building apps, bots, and backend systems—aware that they may have to rethink them years or even just months later. The tech is not only advancing rapidly, but we’re just starting to get research on how best to incorporate AI into tutoring and other aspects of learning. 

Current bets on how AI will change jobs and the education and skills they require also vary widely, and, as with past technological evolutions, predictions haven’t proved especially accurate. Even the people developing AI aren’t sure exactly how it will impact jobs at their own companies. As OpenAI CEO Sam Altman noted earlier this year, the consensus prediction until recently was that advancements in AI would impact blue-collar work first, white-collar work second, and creativity maybe never. “Obviously, it’s gone exactly the other direction,” he said.

We’re in a period that a trio of leading business and economics professors has dubbed the “Between Times.” But squint through the uncertainty, and you can also see a time of open-ended experimentation.

And the work being done now will have a lasting impact, even if a particular app or bot is a flash in the pan. Like a toddler building and smashing block towers, people are learning—and encoding—things about this new AI-driven world that will serve them down the road, no matter what particular solutions endure. This experimental phase has already resulted in new tools for learning and work, along with new partnerships between higher ed, philanthropy, and Big Tech. It has also, perhaps counterintuitively, heightened the focus on human agency—that workers and students can and should get a say in how this all plays out.


“The best way to figure out how we respond is to play with AI every day.”

—George Siemens


The Catch: Those are trends among early adopters, at least. Many experts worry that the vast majority of leaders in higher education and workforce development still don’t have a proper sense of urgency. George Siemens, a pioneer in online learning and a leader of Southern New Hampshire University’s AI study team, is among them. He sees a concerning lack of experimentation and strategy.

Siemens likens the current moment to the early days of MOOCs (massive open online courses). Some of the most outsized ambitions fizzled, but the platforms nevertheless changed how higher education is marketed and delivered to millions of people. And institutions including the University of Michigan and MIT, which dove in from the beginning, both developed valuable expertise and helped cultivate a new market. Those universities didn’t have to share huge chunks of revenue with the online program managers (OPMs) that later cornered much of the online market. 

The implications of AI are orders of magnitude larger, Siemens says. And too many institutions are sitting on the sidelines.

“I get that leadership is frustrated and university systems are fatigued by this constant cycle of change,” he says. “But it’s negligence to not have a thoughtful response to AI at a systems level.”

Good Enough: Katy Knight, executive director and president of the Siegel Family Endowment, argues for fighting institutional paralysis by being okay with good-enough plans. Previous disruptions, such as the development of the internet, may not be a perfect analogue to this moment, but they are close enough to be a useful guide for the kinds of investments philanthropy, government, and higher education should be making.

“There are new components, but there’s also a lot that we’ve learned from in the past,” she says. “We don’t have to know everything about AI in order to make moves and place some bets.” 


Photo by This is Engineering via Creative Commons

While many colleges and workforce organizations remain on the sidelines, we’ve seen four big trends take shape over the past six to nine months among the early actors who are driving change or otherwise jumping in:

  • More targeted tools are emerging for learning and work, and costs are coming down. The big challenge now—and likely into the future—isn’t so much access to tools for lower-income learners and workers but access to know-how.
  • Growing numbers of colleges and universities are experimenting with AI, often with Big Tech as guides.
  • New partnerships are cropping up with an eye toward shaping AI’s longer-term, systemic impact—especially around data and navigation. Other areas of focus include teaching and learning, student support, and AI policy.
  • The federal government, think tanks, and philanthropy are pushing for a more prominent “worker voice” as AI continues to roll out across society.

These trends—along with targeted philanthropy—are shaping curriculums and student supports, as well as experiments with skills-based hiring and navigation tools. The rest of this Work Shift Explainer digs into these four trends and how they’re laying the groundwork for bigger change to come. It is the first in a series of explainers that will be looking at trends in how AI is shaping education and work, and at notable examples of its use.

Targeted Tools, Tentative Understanding


Arizona State University, pictured here, is among a handful of universities working with OpenAI to pilot and shape a ChatGPT solution specifically for higher education. (Photo by Deanna Dent, courtesy of ASU)

Big Tech companies spent the first half of 2024 unveiling a dizzying array of advancements in generative AI tools that have significant implications for education and jobs. One big move was OpenAI’s decision to offer its most advanced model, GPT-4o, for free, albeit with some limitations for nonpaying users. That holds particular promise for expanding access to AI-powered tutoring because the current GPT model’s voice assistant is quite sophisticated, allowing students to interact with it as they would a person.

“With universal free access, the educational value of AI skyrockets,” Ethan Mollick, a professor of management at the University of Pennsylvania’s Wharton School, wrote in his newsletter.

In general, unequal access to quality tools has become less of a concern. Four leading large language models—ChatGPT, Google’s Gemini, Anthropic’s Claude, and Microsoft’s Copilot (powered by GPT-4o)—now offer high-quality free versions that can be used on a mobile device with a simple app download. While they don’t offer all the features of paid or enterprise versions, they provide wide access to cutting-edge tech even if you’re in a rural area or other location without reliable broadband service. Likewise, Google, IBM, and others are offering free or low-cost courses for developing AI literacy.

“It’s not like the 1990s when you had to have a really expensive PC to tap into new tech,” says John Bailey, a nonresident senior fellow at the American Enterprise Institute who is tracking AI’s development and influences.

But access doesn’t address the understanding gap. The vast majority of workers and learners don’t know how to use AI tools effectively—or even what they would use them for in the first place, Bailey says. And that’s unlikely to change soon without concerted effort.

“If you’re a low-income family in Appalachia,” he says, “you’re probably not going to sign up for a Google course.”

There are nascent efforts to close the AI literacy gap. The AFL-CIO, the country’s largest federation of labor unions, recently inked a partnership with Microsoft to provide training for its 12M members, many of whom are frontline workers. The partnership has the potential to become the largest single effort to boost AI literacy in the U.S. workforce.

But thus far, details are scarce. 

More young people may soon be exposed to AI through school. Google, for example, introduced a series of updates to its Gemini family of generative AI models, including several aimed at helping high school-aged students familiarize themselves with the tool responsibly. The company said it will make Gemini available to students in more than 100 countries who meet minimum age requirements and use Google Workspace for Education.

The push comes as a growing number of K-12 educators are using AI in class. Early adopters including Jimmy Fisher, a high school principal in suburban Atlanta, see this time as an opportunity to shape how the tech gets incorporated into teaching and learning. His district, Gwinnett County Public Schools, has been an early leader in adopting AI.

“Find a way to incorporate this technology and this work into the classroom,” Fisher told the audience at a recent industry conference. “Don’t wait. It’s just going to pass you by if you do.” 

In higher education, OpenAI has announced plans to tap into the lucrative college market. This year, it’s rolling out a new ChatGPT Edu platform, after piloting it at Arizona State University, the University of Pennsylvania’s Wharton School, Columbia University, the University of Texas at Austin, and the University of Oxford. For these early adopters, the pilot was a chance to shape how higher education applies generative AI to teaching and research. 

“When you’re one of a handful of institutions piloting a program like this, you’re establishing your own ground rules,” says Kyle Bowen, ASU’s deputy chief information officer.

Big Tech Guides Higher Ed, Workforce on AI


Students in the AI lab at Umpqua Community College in Roseburg, Oregon. (Photo courtesy of UCC)

Educators and employers increasingly are learning to navigate this new landscape through partnerships with tech companies. Some of these agreements came before OpenAI’s release of ChatGPT, and they aren’t exclusively focused on generative AI.

Robotics is a major interest for many community colleges, including Umpqua Community College in rural Roseburg, Oregon. The college is one of 15 participating in the AI Incubator Network, the American Association of Community Colleges’ (AACC) partnership with Intel and Dell Technologies. The goal is to give these colleges the resources and funding needed to overhaul programs and create new AI-focused ones. Umpqua used its $40K grant to buy take-home AI hardware kits and AI-powered robots for its students, many of whom plan to work in southern Oregon’s advanced manufacturing sector.

The effort is part of a larger push at Intel and across many Big Tech companies. For example:

  • Another 110 schools in 39 states are participating in the broader Intel AI for Workforce program, which provides community colleges with more than 700 hours of free content to build AI courses. 
  • Google’s philanthropic arm has created a $75M fund to back workforce development and education organizations in the United States—with a focus on supporting no-cost AI skills training for rural, underserved, and public sector workers, as well as for students and educators, small businesses, and nonprofits. 
  • NVIDIA has joined up with California’s community colleges for a new AI education partnership announced by Governor Gavin Newsom in August.

These programs and others give Big Tech companies the chance to weigh in on how community colleges design their curricula to teach students the exact skills they need to land AI-focused jobs. The agreements have “created opportunities for faculty and students that were unlikely to exist without these investments,” says Shalin Jyotishi, a senior advisor at New America. 

Across the community college sector, the shift in the past few years is significant. In 2022, the Center for Security and Emerging Technology at Georgetown University published a report on the “latent potential” of community and technical colleges in training the AI workforce of the future. 

  • The report analyzed data from 2020 and found that virtually no community or technical colleges awarded degrees or certificates in AI-specific fields.

Now, many of the country’s largest community colleges, including the Maricopa County Community College District and Miami Dade College, have made high-profile pushes into offering degrees in AI. And dozens of others from Austin to Umpqua also are investing in new AI courses and credentials. 

As much as community colleges need to keep up with AI, Jyotishi worries that some partnerships with Big Tech may be spurring the creation of programs du jour that aren’t linked to local jobs. “Hype cycles are rife in technology,” he says, “and the honest truth is colleges and even states may not have the capacity to navigate and call bluff, especially when they’re tied to political or policy moves.”

Leaders at AACC and other major community college coalitions share that concern. Unrealized hiring gains in growth industries are a long-standing problem for community colleges, says Karen Stout, CEO of Achieving the Dream—and that is a particular concern right now, not just with AI but with electric vehicle (EV) tech and other emerging industries. 

“We’re trying to predict competencies and skills for jobs where we don’t really know what they are,” she says.

The best way to eliminate some of the risk of developing tech-training programs that lead to dead ends for graduates, says Stout, is to deepen ties between community colleges and employers. She praises efforts by the Biden administration to embed community college training in big federal grants like the Commerce Department’s Tech Hubs and various grants from the National Science Foundation (NSF). A group of three large community colleges, for example, just received $2.8M from the NSF to help the two-year sector partner with industry to better understand the demand for technician-level jobs in AI-related fields.

Google is aiming much of its training support directly at employers and workforce organizations. The tech giant’s new workforce fund, for example, gave its first grants to Goodwill Industries International and the Institute for Veterans and Military Families. And Google Cloud launched a new set of certificates and courses in AI, cybersecurity, and data analytics that were built alongside employers:

  • The credential programs include hands-on labs designed to help learners move more quickly through the first stage of the job interview process.
  • The U.S. Department of the Treasury, Rackspace Technology, and Jack Henry helped develop the labs and have signed on to hire completers. 

“We’re looking to recruit outside the typical D.C. pool. It’s people who aren’t even thinking about government as an option,” says Todd Conklin, chief AI officer and deputy assistant secretary of cyber at the Treasury.

The agency will be able to fast-track any potential hires coming out of the program because of a special hiring authority granted to increase the federal government’s use of AI. More broadly, the agency’s work with Google makes good on a number of directives in the president’s executive order on AI and builds on policy work on the cyber workforce that the Treasury has been engaged in for years. “This is the first time we’ve been able to flex the work we do in that space in this tactical of a way,” Conklin says.

Coalitions for Navigating the AI World


Photo by Dani Hart via Pexels

As AI’s capabilities expand at a breakneck pace, higher education also is turning to partnerships to examine the technology’s risks and opportunities. The focus is on a wide range of areas, but especially on teaching and learning, student supports, and data and navigation. 

“Too many universities just don’t know what to do,” says Coursera CEO Jeff Maggioncalda. “And things are going to move fast.”

His company is working with a group of universities on creating courses focused on how colleges and universities can build an AI strategy and incorporate the tech into their teaching, student supports, and business enterprise. The nonprofit Complete College America (CCA)—through a new Council on Equitable AI—is focused on ensuring that community colleges, minority-serving institutions, and public, open-access colleges and universities have both a strategy and a seat at the table.

“Without connections to tech vendors, opportunities to develop technology that’s responsive to the needs of under-resourced campuses will go overlooked,” says Yolanda Watson Spiva, CCA’s president.

CCA sees this foray into AI as an extension of its work on student success, and it brings with it a track record of influencing state policy and investments.

Other coalitions are zeroing in on specific states or regions. The Southern Regional Education Board, for example, has formed a commission focused on using AI in classrooms and preparing for a technology-focused workforce. In its first meeting on April 30 and May 1, the Commission on AI in Education discussed how to move rapidly, responsibly, and proactively on AI. 

Likewise, the Western Interstate Commission for Higher Education is working to help colleges better understand and use generative AI, including by creating a framework for institutional-level policies for successfully incorporating the technology and an AI policy database.

In New York, the state has rolled out Empire AI, a $400M public-private AI research consortium that will create a computing center that higher education can use to better compete with Big Tech on AI-related research and development. And the SkillsFWD initiative has brought together a group of states and regions to use AI and other tech to build learning and employment records and expand skills-based hiring. Alabama, which is participating, has been an early leader in using AI tech to build a statewide talent marketplace and expand competency-based education at the state’s community colleges.

In fact, many of the coalitions are focused on better ways to use data. The American Council on Education (ACE) is developing a Global Data Consortium—powered by AI—that will bring together hundreds of institutions to better understand the student experience. The consortium aims both to transform how colleges use the reams of data they collect and to get more institutions to think creatively about how they can use and shape AI. 

“We suffer from both too much data and too little access to those data,” Ted Mitchell, president of ACE, said in announcing the consortium. “And we think the AI revolution is an opportunity to solve that problem.”

The ultimate goal is to help institutions understand how to provide fresh insights around using the tech to improve everything from internships to tutoring to mental health counseling. If all the institutions that have committed to the consortium move forward, the effort will launch with data on as many as 35M students worldwide. With a data pool that big, previously hidden truths will emerge, says Paul LeBlanc, the former president of Southern New Hampshire University, which is home to an AI study team that built the consortium with ACE.

“You can start to get insights you weren’t even seeking,” he says.

One of the biggest challenges in laying the groundwork for an AI-powered future is a lack of good data on how jobs are being impacted. There are plenty of predictions—with wide-ranging conclusions and of varying quality. And experts say that existing federal data is woefully inadequate for measuring how AI is changing jobs.

Nine of the world’s top tech companies, led by Cisco, have rolled out a consortium that aims to provide more illumination around tech and communications roles. The companies are mapping and describing how AI is reshaping jobs within their own ranks, and more broadly, changing occupations their training covers. The consortium also plans to recommend training and upskilling to ensure workers can adapt as AI takes hold. 

Other participants are Google, Microsoft, IBM, Intel, SAP, Accenture, Indeed, and Eightfold—with groups including the AFL-CIO and Khan Academy serving as advisors. The effort was inspired by talks of the U.S.-EU Trade and Technology Council’s Talent for Growth Task Force, which is co-chaired by U.S. Secretary of Commerce Gina Raimondo.

  • The group has already issued one report showing how dramatically some jobs, including business analyst, technical writer, and user experience (UX) designer, are changing.
  • In fact, the group assessed that many business and management roles—especially those at the entry level—were set to transform more significantly than much-talked-about tech roles such as software engineering. 

Not surprisingly, customer support roles were deemed highly exposed to AI and to automation specifically. But so too were jobs like business analyst, with AI currently able to do many core functions including data processing, cleaning, and initial analysis. Aspects of the job such as critical thinking and model design become much more important, and the report recommends upskilling in areas including AI ethics and data analysis as well. 

Nevertheless, while the insights are based on how jobs are changing at specific companies, solid numbers remain elusive on how many jobs are being eliminated, overhauled, or enhanced.

Julia Lane, an economist and one of the country’s foremost experts on data systems, says a national center for data and evidence is needed to accurately measure how AI is changing the labor market. Such a center should be independently funded and nonpartisan, Lane wrote in The MIT Press Reader, with bottom-up, demand-driven tools and insights for businesses, workers, and governments. She cites the approach and funding structure of the Urban Institute and MDRC as potential models.

“For fast-moving areas such as AI, private and nonprofit sectors outside of government need to take the lead on innovation and let the government benefit by later incorporating what is learned into the data series they produce,” she says. 

The U.S. lacked information and policies to help millions of workers who lost factory jobs in the 1990s and early 2000s, as automation and trade transformed the American economy and the labor market, writes Brent Orrell, a senior fellow at the American Enterprise Institute. 

Orrell says Lane’s proposal “can help us avoid a labor market Groundhog Day by maximizing the gains of the emerging technology boom while avoiding some of the pain.”

Growing Call for Worker Say


Workers are definitely worried about that pain. Pick your survey, but somewhere around 4 in 10 workers worry that AI will take their job or lead to their hours and pay being reduced. While AI may actually lead to more jobs in the aggregate, in certain fields, such as customer service, workers have reason to be concerned, says Ben Armstrong of MIT. He points to one early and much-cited study showing that AI can significantly improve the performance of a low-skilled or inexperienced service representative. While that’s promising for productivity, he sees no reason that those gains would be passed on to workers in better wages or job stability.

“If you’re a call center manager and you can slot anyone in and in three months they’re going to be an expert, that means that their skills are very replaceable,” Armstrong says.

Raimondo, the commerce secretary, argues that companies and the government need to directly address the threat, perceived or real, that workers will become more disposable. For AI experimentation to happen in earnest in education and in work, she has argued, everyday people have to get more comfortable with it. “There is so much anxiety among the average American about their job security; we have to get really serious about that—about confronting that reality,” she said on the Possible podcast with Reid Hoffman.

She’s called for a kind of grand bargain or social contract between workers and employers, mapping out what jobs are changing industry by industry and giving workers some guarantee of a job if they agree to retrain. The idea isn’t that a single company—say, General Motors—could provide that guarantee but that collectively the auto industry could, especially with government support for retraining workers.

“They have a sense of security, which they deserve,” Raimondo says. “They have a job, which they deserve. And then everyone else is a bit more open to pushing forward with the possibilities of AI.”

Thus far, it’s just an idea. But Raimondo is perhaps the most prominent among a rising chorus of voices calling for workers to have a bigger seat at the table. The AFL-CIO deal with Microsoft prominently featured a commitment to collaborate on worker-centered AI design and policies on ethics and skill development. And New America, a progressive think tank, has been partnering with organizations including the World Economic Forum to push for increased worker power in response to AI.

While the charge is mostly being led by progressive groups, researchers at the conservative American Enterprise Institute have argued that increased automation needs to be paired with strong worker representation akin to what is seen in Germany. And more broadly, labor is having a moment in both political parties during this presidential campaign.

The push is not just about mitigating job loss, but about ensuring that a wide swath of Americans benefit from the potential upside of AI. Alex Swartsel, a managing director at Jobs for the Future who’s leading the organization’s Center for Artificial Intelligence & the Future of Work, says that efforts like the AFL-CIO and Microsoft partnership could lay critical groundwork that allows workers to both adapt and shape their futures. 

“Hopefully it will lead to not just more people having access to these skills, but to better technology, which is what we all need and will benefit from down the road,” she says.


EDITOR’S NOTE
This Work Shift Explainer is one in a series that will be looking at trends in how AI is shaping education and work, and highlighting notable examples of its use. That series is part of our broader AI and Economic Opportunity Reporting Initiative. Our independent reporting for that initiative is supported by Cognizant, GitLab Foundation, Kapor Foundation, and Walmart.