Terry Gerton I know you have studied how workers of different skill levels choose to use generative AI and the concept of AI exposure. Can you talk to us a little bit about what you’re finding there? Are there certain roles more likely to embrace AI, or certain roles that are more likely to be replaced?
Ramayya Krishnan To understand AI exposure, I think we have to think about how occupations are structured. The Department of Labor maintains a taxonomy called O*NET, which describes all the occupations in the U.S. economy, there are 873 or so. Each occupation is viewed as consisting of tasks, and tasks require certain sets of skills. AI exposure is a measure of how many of those tasks are potentially doable by AI, and thereby it becomes a measure of the ways in which AI could have an impact on people in that particular occupation. However, AI exposure should not be assumed to be tantamount to AI substitution, because we have to think about how AI is deployed. AI has certain capabilities. For instance, this conversation we're having could be automatically transcribed by AI, or automatically translated from English to Spanish. Those are capabilities. When you take capabilities and actually deploy them in organizational contexts, the question of how they're deployed will determine whether AI is going to augment the human worker or automate and replace a particular task that a human worker does. Remember, this happens at the task level, not at the occupation level. So some tasks within an occupation may get modified or adapted.

If you look at how software developers today use co-pilots to build software, that's augmentation, where it's been demonstrated that software developers with lower skills usually get between 20% and 25% productivity improvement. With call center employees, a similar type of augmentation is happening. In other cases, if you were my physician, today we have things called ambient AIs that will automatically transcribe the conversation I'm having with you. That's an example of an AI that could potentially substitute for a human transcriber. So I've given you two examples, software development and customer service, where you're seeing augmentation, and the transcription task as an example of substitution.

Depending on how AI is deployed, you might have some tasks being augmented and some being substituted. When you take a step back, you have to take AI exposure as a measure of capability and then ask how it gets deployed, which then has an impact on what workers have to think about and what it means for them. If AI is complementing them, how do they become fluent in AI and use it well? And if a particular task is being handled in a substitutive manner, what does that mean longer term, in terms of acquiring new skills to maybe transition to other occupations where there might be even more demand? So I think we have to unpack what AI exposure means for workers by thinking about augmentation versus automation.
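To make the task-level arithmetic concrete, here is a minimal sketch of exposure as the share of an occupation's O*NET-style tasks judged potentially doable by AI. The occupation, task list and 0-to-1 capability scores below are hypothetical illustrations, not O*NET data:

```python
# Minimal sketch: AI exposure as the share of an occupation's tasks whose
# AI-capability score clears a threshold. All scores below are hypothetical.

def ai_exposure(task_scores: dict[str, float], threshold: float = 0.5) -> float:
    """Fraction of tasks whose AI-capability score meets the threshold."""
    doable = sum(1 for score in task_scores.values() if score >= threshold)
    return doable / len(task_scores)

# Hypothetical task-level scores for one occupation.
medical_transcriptionist = {
    "transcribe dictated clinical notes": 0.9,   # ambient AI handles this well
    "verify drug names and dosages": 0.6,
    "flag ambiguous audio for clinician review": 0.3,
}

print(f"exposure = {ai_exposure(medical_transcriptionist):.2f}")  # exposure = 0.67
```

As Krishnan stresses, a score like this measures capability over tasks; it says nothing by itself about whether a deployment augments the worker or substitutes for them.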
Terry Gerton There's a lot of nuance in that. And your writings also make the point that gen AI adoption narrows when the cost of failure is high. So how do organizations think about both augmentation versus replacement and the risk of failure as they deploy AI?
Ramayya Krishnan If you take the example of using AI in a fully automated fashion, its error rate has to be very low, because there is no human oversight. If the error rates aren't sufficiently low, then you need to pair the human with the AI. In some cases you might say the AI is just not ready, so you don't use it at all and keep the task fully human. In other cases, AI can be used with the human: there are benefits to productivity, but the error rates are such that you still need the human to sign off, whether because the error rates are high or for ethical or governance reasons. There you're going to see the AI complementing the human. And then there are going to be tasks for which the AI quality is so high, and its error rates so low, that you could actually deploy it on its own.

So when we talk about the cost of failure, you want to think about consequential tasks where failure is not an option. Either the error rates have to be really low, so the AI can be deployed in an automated fashion, or you have to ensure there is a human in the loop. This is why I think AI measurement and evaluation prior to deployment is so essential: things like error rates and costs have to be measured and have to inform the decision of whether to deploy AI, and in what fashion. Is it going to augment the human, or is it going to be used independently?
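One way to picture the deployment logic described above is a simple decision rule that compares a measured error rate against limits that tighten as the cost of failure rises. This is an illustrative sketch with made-up thresholds, not a rule from the interview:

```python
# Sketch of the deploy-or-not logic. Threshold values are hypothetical and
# would in practice come from pre-deployment measurement and evaluation.

def deployment_mode(error_rate: float, high_stakes: bool) -> str:
    """Map a measured error rate and task criticality to a deployment mode."""
    autonomy_limit = 0.001 if high_stakes else 0.05  # near-zero error for consequential tasks
    usable_limit = 0.10 if high_stakes else 0.25

    if error_rate <= autonomy_limit:
        return "automate: AI acts independently"
    if error_rate <= usable_limit:
        return "augment: human in the loop signs off"
    return "do not deploy: keep the task fully human"

print(deployment_mode(0.0005, high_stakes=True))   # automate
print(deployment_mode(0.04, high_stakes=True))     # augment
print(deployment_mode(0.30, high_stakes=False))    # do not deploy
```

As the interview notes, ethical or governance requirements can force a human sign-off even when error rates alone would permit automation; that would be an additional override on top of a sketch like this.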
Terry Gerton I'm speaking with Dr. Ramayya Krishnan. He's the director of the Center for AI Measurement Science and Engineering at Carnegie Mellon University. So we're talking about how AI gets deployed in different organizations. How do you see this applying in the public sector? Are there certain kinds of government work where AI is more suitable for augmentation versus automation, where that error rate becomes a really important consideration?
Ramayya Krishnan I think there are going to be a number of opportunities for AI to be deployed. Remember, we talked about call centers and customer service centers. In the public sector, one aspect of the work is engaging with citizens in a variety of ways, where agencies have to deliver and provide good information. Some of those interactions are time sensitive and very consequential, like 911 emergency calls. There you absolutely want the human in the loop. The human could be augmented by AI, but for calls that consequential, you want humans in the loop. On the other hand, you could imagine administrative questions, what kind of permit do I need, what kind of form do I file, where there's triage, if you will, and an opportunity for better response times. The alternative to calling and speaking to somebody might be going to a website and looking it up. Imagine a question-answering system that allows you to ask these questions and get them answered. I expect, and in fact you're already seeing this in local and state government, the deployment of these kinds of administrative question-answering systems. I'd say that's one example.

Within organizations, there's also the use of AI that isn't customer-facing or citizen-facing: co-pilots used internally to try to improve productivity. As AI gets more robust and more reliable, I expect you'll see greater use of AI to improve both efficiency and effectiveness, but in a responsible way, one that takes into account the importance of providing service to citizens of all different abilities. One of the important things with the public sector is that multilingual support may be needed, or you might need to help citizens who are disabled. How might we support different kinds of citizens with different ability levels? These are areas where AI could potentially play an important role.
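As an illustration of the administrative question-answering idea, here is a minimal sketch that matches a citizen's question to the closest stored FAQ answer and falls back to a human when confidence is low. The FAQ entries and threshold are hypothetical; this is not a description of any deployed system:

```python
import math
from collections import Counter

# Hypothetical FAQ entries; a real system would retrieve from vetted
# agency content and escalate anything time-sensitive to a human.
FAQ = {
    "What form do I file for a building permit?":
        "Submit Form BP-1 to the permits office; allow 10 business days.",
    "How do I renew a business license?":
        "Renew online through the licensing portal before the expiry date.",
}

def _vector(text: str) -> Counter:
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(question: str) -> str:
    best, score = max(
        ((ans, _cosine(_vector(question), _vector(q))) for q, ans in FAQ.items()),
        key=lambda pair: pair[1],
    )
    # Low-confidence matches go to a person, per the human-in-the-loop point.
    return best if score > 0.3 else "Let me route you to a staff member."

print(answer("Which form do I need for a building permit?"))
```

The fallback branch is the design point: triage routine questions to the machine, and keep a human path for anything the system cannot confidently answer.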
Terry Gerton AI is certainly already having a disruptive impact, particularly on the American workforce. What recommendations do you have for policymakers and employers to mitigate the disruption and think long term about upskilling and reskilling, so that folks can be successful in this new space?
Ramayya Krishnan I think this is actually one of the most important questions we need to address. I served on the National AI Advisory Committee, which advises the President and the White House's National AI Initiative Office, and this was very much a key question addressed by colleagues there. In a recent op-ed I wrote with Patrick Harker at the University of Pennsylvania and Mark Hagerott of the North Dakota University System, we make the case that this is an inflection point requiring a response on roughly the scale of what President Lincoln did in 1862 with the Morrill Act, which established the land grant universities. Land grant universities were designed to democratize access to agricultural technology; they enabled Americans everywhere in the nation to harness that technology for economic prosperity, both for themselves and for the nation. If we're going to see AI deployed without the kind of inequality that might arise between people who have access to the technology and people who don't, we need something like this.

We call it the Digital Land Grant Initiative. It would connect our universities and community colleges with ways of providing citizens, in rural and urban areas everywhere in the country, access to AI education and skilling appropriate to their context. If I'm a farmer, how can I do precision agriculture? If I'm a mine worker, or somebody who wants to work in banking, across the whole range of occupations and professions, you could imagine AI having a transformative effect. And there may be new occupations that emerge that you and I are not thinking about right now. So how do we best position our citizens so they can equip themselves with the sets of skills that are going to be required and demanded? I think that's the big public policy question with regard to workforce upskilling and reskilling.