March 16, 2026

#226: ASU, Cloudflare & IDC on AI Governance in the Higher Ed Wild West


In this EDUCAUSE episode, Lester Godsey from Arizona State University, Dan Kent from Cloudflare, and Matthew Leger from IDC break down why most institutions are still in the AI Wild West - and what it actually takes to govern, secure, and scale AI across a campus before agentic systems make the problem exponentially harder.


 

🤝 Join the CIO Communities for Local Government

A private, vendor-free network where public sector CIOs share what’s actually working

Members get:

  • 2x National Retreats (facilitated by Info-Tech Research Group & Gartner)
  • 4x CIO Virtual Roundtables 
  • Monthly “Behind the Mic” intelligence brief (1-pager)

Apply to Join → https://techtables.com/communities-local-government

Craig Hopkins, CIO, City of San Antonio TechTables testimonial

 


 

📝 Show Notes

 

Featuring

Lester Godsey is Chief Information Security Officer at Arizona State University - back at ASU, where he started his career over 30 years ago, having most recently served as CISO at Maricopa County, where he led cybersecurity through the 2020 and 2024 elections.

Matthew Leger is Senior Research Manager at IDC covering worldwide education and EdTech digital strategies - previously an academic researcher at Harvard Kennedy School and administrator at SUNY Albany, with over a decade across nearly every seat in higher education minus professor (though it’s on the list). : )

Dan Kent is Field CTO at Cloudflare - focused on helping public sector customers navigate emerging technologies, with 18 years working alongside higher education organizations and five kids who between them have given him more higher ed exposure than most.

 

Timestamps

(2:40) The Higher Ed AI Wild West - Matt on aimless experimentation, siloed adoption, and why coordination is the real governance problem

(5:45) ASU's Create AI Platform - Lester on the walled garden approach, 50+ LLMs, and the internal ethical AI engine built before he arrived

(8:00) AI in 2025 is where cloud was in 2010 - Dan on why undefined terminology is creating fear in boardrooms and legislatures

(10:00) Agentic AI is the biggest security concern - Dan on why giving agency to a machine is a fundamentally different risk than generative AI

(17:00) Shadow AI is just shadow IT - Lester reframes the governance problem and walks through ASU's nuanced three-tier DeepSeek response

(22:00) Less than 50% have a data governance plan - Dan on what he's hearing from public sector customers on AI readiness

(27:00) Declining trust in higher ed - Matt on whether the ROI skepticism is real and how AI can help institutions demonstrate value

(30:40) ASU's student-led SOC - Lester on training the next generation of analysts with agentic AI and security orchestration

(33:00) Final takes - AI as a security tool, tabletop exercises for AI threats, and why today's students will shape AI's ethical future

 


 

Whenever you’re ready, there are 3 ways you can connect with TechTables:

 

1. 📬 The TechTables Newsletter

Thanks for reading TechTables! Get early access to new episodes, insights, upcoming events, and more — straight to your inbox.

Join now: https://www.techtables.com/

 

2. 🤝 Join the TechTables CIO Communities for Local Government

A private, vendor-free network where public sector CIOs share what’s actually working

Members get:

  • 2x National Retreats (facilitated by Info-Tech Research Group & Gartner)
  • 4x CIO Virtual Roundtables 
  • Monthly “Behind the Mic” intelligence briefing doc

Apply to Join → https://techtables.com/communities-local-government

Chris Chirgwin CIO of Santa Barbara County TechTables testimonial

 

3. 🤝 The Better Together Virtual Series

The narrative-driven series bringing together industry partners and public sector CXOs. Discover the compelling stories that unfold when we stop working in silos and start building together.

»»» Email joe@techtables.com to learn more.

 


 

Platinum Newsletter Sponsor:


Missed out on Info-Tech LIVE in New Orleans? No worries, you can join TechTables & Info-Tech Research Group at Info-Tech LIVE 2026 - Las Vegas (June 9 - 11, 2026)!

Joe Toste with Info-Tech CEO Tom Zehren and CIO John Burris on stage at Info-Tech LIVE in New Orleans

 

Gold Newsletter Sponsor:


SentinelOne - Learn how SentinelOne empowers this state to stay secure.

Verizon Frontline - The advanced network that keeps first responders connected when it matters most.

Carahsoft - The Trusted Public Sector IT Solutions Provider™, supports government agencies and education/healthcare markets. Contact your Carahsoft rep today to access special discount pricing exclusively through the TechTables + Carahsoft partnership!

 


Episode Transcript



Joe Toste: [00:00:00] Welcome to the Public Sector Show by TechTables. Super excited to have you all on. And we have a returning guest, Lester. Yeah. Returning guest Lester, let's kick off with you. Short intro.

Lester Godsey: Yeah. So I'm Lester Godsey. I'm the Chief Information Security Officer for Arizona State University. I've been in the role about 11 months.

Lester Godsey: Not to get into too much history, this is my second time around working for ASU. I started my career over 30 years ago at ASU, and I find myself back as their CISO. My prior gig was CISO for Maricopa County, where my team and I helped the county get through the 2020 and the 2024 elections, among other things.

Lester Godsey: So that's a little my background.

Joe Toste: Yeah. Which aged you quite a bit, right? I remember in our first

Lester Godsey: yeah. You made comments about my beard. Yeah. That's a direct result of Maricopa County, so yeah.

Joe Toste: And we'll link that episode in the show notes. It was a really great episode. And as you had mentioned, we had the former Governor Ducey on great, another great episode.

Joe Toste: We'll link to in the show notes. Matthew short intro.

Matthew Leger: Yeah, great to be here, and good to be here with you guys. I'm Matt Leger. I'm the lead analyst at IDC, which is [00:01:00] the International Data Corporation. That's a high-tech market research firm based out of Boston, but we're a global company. And I cover higher education, K-12, and all the amazing technology transformation happening in that space.

Matthew Leger: Been in the role for five years, came from higher ed. I was an academic researcher at Harvard Kennedy School. I was also, before that an administrator working with SUNY Albany in the president's office, doing government relations and strategic initiatives. That's also where I got my bachelors and masters.

Matthew Leger: Been in higher education for about a dozen years or so, in all the different seats, basically minus professor. But that's on the to-do list one day.

Joe Toste: Oh, see, I was gonna ask, is that on the list? It is on the list.

Matthew Leger: One day probably adjunct.

Joe Toste: Dan, short intro.

Dan Kent: Sure. Hey, I'm Dan Kent. Nice to speak to you guys.

Dan Kent: I am the Field CTO for Cloudflare. I've also been with the company just about 11 months, so I'm fairly new to the company. Cloudflare is typically known as a cybersecurity company and application performance company, but we also do a lot with AI and development. And so in my role, my job is to talk with public sector customers to help them understand the new technologies coming out and how Cloudflare can help them integrate those [00:02:00] technologies into what they do.

Dan Kent: I've never been in higher ed myself, but I've got five children, four of whom have gone through higher ed, so I feel like that gives me qualifications to be participating in this conversation. But honestly, I've been working with higher ed customers for about 18 years now.

Joe Toste: That was a great intro. Matt, the opening line of your IDC paper is pretty stark. You said institutions have struggled to regain control and execute a more strategic vision.

Joe Toste: The industry has not yet fully realized the full benefits and potential of this emerging technology. You call out siloed adoption, shadow IT challenges, and aimless experimentation. Walk us through what aimless experimentation actually means. Is it shiny object syndrome with AI? I think you're going to say we need to get back to an actual business problem that we're solving, but hey, you wrote the research paper.

Matthew Leger: Yeah, absolutely. So the first thing is, I just wanna make sure that the higher ed folks listening know that that statement isn't necessarily true of all institutions. I know quite a few institutions doing quite incredible things with AI, one right next to you, one sitting right next to me.

Matthew Leger: Try not to take that personally. I [00:03:00] don't. Yeah. I certainly know and see incredible things every day that your institution and many others are doing. But there is a large segment in many parts of higher ed that are in what I've been calling the AI Wild West a little bit.

Matthew Leger: What I mean by that is, higher education over the last couple of years has experienced a pretty significant transformation, with COVID and the move to online learning, and then ChatGPT came out and AI became a very mainstream thing. We've seen a lot of issues with enrollment challenges and financial pressures that have put higher ed in a position where they're trying to leverage whatever latest and greatest technology they can to rethink how they deliver instruction and rethink how they operate.

Matthew Leger: And do all sorts of amazing things with AI. The challenge that higher ed hasn't solved, and many are working on it, but as an industry hasn't solved, is that higher ed is famous for two things: being very bureaucratic and siloed functionally, and also being very distributed and siloed in terms of technology.

Matthew Leger: And when you have those two things converge, with every vendor in [00:04:00] this room throwing AI at you from all different angles, you have different functional users in different areas testing and experimenting with AI in a bunch of different ways. That's not necessarily in line with what the institution's goal is with AI, and they might not necessarily be talking to each other about how they're using AI, and it's not a coordinated thing.

Matthew Leger: So you see this kind of emergence of AI everywhere, all the time, that everyone's experimenting with in a whole bunch of different ways. And then it's become, for some institutions, this very difficult problem to manage, of there's just AI everywhere and I can't control it anymore. And so a lot of institutions I talk to right now are really focused on how do I get this under control, but also invest in this technology strategically and not inhibit innovation and/or experimentation either.

Matthew Leger: So that's what I mean when I say aimless experimentation. It's like there's a lot going on; it's just not coordinated or strategic at an institutional level. Except for the institution sitting right next to you, which is so great. Of course, I'm not making sweeping generalizations. I hate [00:05:00] those as an analyst.

Matthew Leger: Yeah, no, totally. There was a word count limit on that paper, so I couldn't extrapolate.

Matthew Leger: But yes, there are many amazing things happening, but there are also lots of challenges as it relates to managing this stuff as well.

Joe Toste: Thank you. No, I appreciate it. That very problem is actually probably what prompted ASU to say, hey, we're a leader. Let's launch the Agentic AI and Student Experience Conference.

Joe Toste: And I'd love to hear a little bit more about both the conference and, from you, the cyber side.

Lester Godsey: Absolutely. So the Agentic AI Conference was designed to do a lot of what you just described, right? And the focus was Agentic AI and the student experience, and so emphasis on the student learning experience along those lines.

Lester Godsey: And to your point, I don't honestly disagree. Unfortunately I haven't read the article yet, so that'll be on my to-do list, but I don't disagree with what you just said. But at the same time, I think higher education has a responsibility to be innovative and to have that freedom of research and experimentation, if you will.

Lester Godsey: But from a [00:06:00] cybersecurity perspective, and I wish I could take credit for this, but it all occurred before I started, Joe. I inherited a very good situation, and this is a big reason why ASU developed its own AI platform.

Lester Godsey: And so the Create AI Platform, which I'm sure my counterpart Kyle talked about, really is a walled garden environment. It has its own security, privacy, and ethics controls built in. So the very things that I'm sure your article points out are things that ASU has thought of. And we've had thousands of projects come out of that Create AI Builder Platform.

Lester Godsey: And we'll soon transition to our agentic framework, which we'll be rolling out here very shortly. And so we can do that level of experimentation and investigation and exploration, but with a degree of confidence that we have things well in control and we're protecting those data assets, if you will.

Lester Godsey: So I think we're getting the best of both worlds along those lines. And so from a cybersecurity [00:07:00] perspective, our tools range from looking for traditional things like prompt injection, model poisoning, and malicious payload development, all the way to looking for risks around harm to others, and everything in between.

Lester Godsey: And on top of that, for the 50, 60-plus large language models that we support, both open source as well as commercial, every single one of those goes through our internally developed ethical AI engine to look for bias and those sorts of things. We've made a large investment in creating an environment where we can be academically free and experiment, but do so in a responsible way.

Joe Toste: Higher ed is actually, traditionally, a great leader. I was just thinking about this: Nvidia hit their $5 trillion market cap today, and there was an article from the CEO talking about, very early on, 20, 30 years ago, being very intentional about, hey, we want to deploy technology and infrastructure to the universities.

Joe Toste: That's so important and so critical. Obviously it paid off very well. [00:08:00] But Dan, I wanted to jump over to you. On our call you had said that AI in 2025 is where the cloud was in 2010. Wow, 2010. I can't believe it's already been that long. What was I doing then? And that we don't have clear definitions yet.

Joe Toste: Infrastructure folks are talking about compute and GPUs, software folks are talking about applications and agents, and security folks are talking about governance, but everyone's using the word AI. That's actually been a very common theme, whether that was CIOs going in front of different boards or somebody who even had to go to the legislature.

Joe Toste: And not using the same language. Everyone's using AI, but it means different things, and if we're not properly defining it, people are getting scared and nervous. But I'd love to hear from you: how are you defining AI? How are you thinking about it?

Dan Kent: Sure. And I think like I said, it is very much like cloud, right?

Dan Kent: When we used to talk about cloud, what did that mean? Does it mean SaaS? Does it mean infrastructure as a service? Does it mean platform as a service? We're in the same boat when we talk about AI. And it's getting better. Now you have [00:09:00] to have the adjective in front of it, right? Is it agentic AI? Is it generative AI? Or is it classic AI, whatever you wanna call it, statistical AI?

Dan Kent: They're all very important parts, and they all coexist. So I think that's the first level of it. Then the second level comes when we talk about large language models. You mentioned, are they open source, are they commercial? And why does that matter? It matters a lot if you're gonna be paying for one and not for the other.

Dan Kent: It matters a lot from a security perspective. A lot of this is just about getting to the next level of detail, and it'd be great if we had the same definitions and standardizations. When we do that, a user perspective is dramatically different from a builder perspective. So as a CISO, you care about your users accessing public chatbots, and how do you protect them from getting wrong information or any type of toxic information coming down. That's different than when you're building your own models or building your own chatbots, and you have to then protect your assets from the environment, the internet.

Dan Kent: So all that means it just gets complicated until you get to the third level of conversation, until we have some type of [00:10:00] standardization. So I know NIST is working on this as they're trying to define a cyber profile for AI. I think that'll help. But I think this is also where we are from a maturity phase right now in AI.

Joe Toste: Elaborate a little bit on this: right now everyone's trying to write rules and regulation. The federal government is doing their thing; the 50 states are doing theirs. What are you hearing from customers today? Where are the biggest security concerns that you're hearing about?

Dan Kent: I think that right now, in terms of how do I protect against generative AI, we're pretty comfortable there, especially when you go outside of public sector, like finance; they're pretty strong. Agentic AI is the biggest concern, and there's a lot of concern with agentic AI in terms of giving agency to a machine to make a decision on your behalf. What does that actually mean? And how do I control it, and how do I authenticate that to the system, especially if my agent might have access to 45 different machines that it can talk with? So that's what the more forward-thinking customers are looking at, and how to be prepared.

Dan Kent: And I know that was a lot of [00:11:00] the conversation last week at ASU. But you still have a lot of, I would say, public sector customers that are just getting their original chatbot out. So there is a spectrum, and it comes down to where are you on that spectrum. And we try to help customers everywhere on the spectrum.

Joe Toste: That's great. One more short follow-up. You were there last week at ASU. What did you glean? What were some of your top takeaways?

Dan Kent: You know what, the biggest thing that came out of it for me was the students. I love that you had students from ASU and students from other universities telling what they were doing with AI, and it really shows, when you merge in the ability to have the AI write your code, how much someone who's not a computer science person can actually write code to actually meet a need that they have.

Dan Kent: And that was the cool part. I see these 18-, 19-year-olds, and I've got five children, and I look at it, I'm like, I want one of my children to be up there talking about that, being the geek that is gonna be fixing problems of the future and really taking on technology.

Dan Kent: So that, that was really what interested me a lot. That, and a whole lot of: how do I make education better, and how do I make [00:12:00] sure agentic AI really doesn't have a negative impact on education. It's complex. How much personalization is good? It's very complex.

Dan Kent: So it was a good, it was a great event. Great event.

Joe Toste: Matthew, jumping back to your paper, you had a section called Making the Education Experience More Human. We could also rephrase it a little bit to making it more about the student. You wrote, AI's most significant promise lies in the ability to extend and enhance human expertise, fostering deeper connections and more personalized support. Sub students in there.

Joe Toste: This is the AI optimist view that you were sharing with me. Tell us a story from your research about an institution that's doing this well, using AI to make it more human. What does extending that human expertise look like in the real world?

Matthew Leger: Yeah. So when I say that, first I wanna qualify what I mean by how AI can make things more human, right? Because we keep thinking that the bots are taking over and all those things, which is of course not the case. But higher ed has a real amazing opportunity here to make the student experience [00:13:00] more human, not less, with AI.

Matthew Leger: And there's two ways that's the case. The first is, AI of course is really great at collecting, analyzing, and pulling together data from all different types of sources on students, of course within privacy regulations and all those concerns. But it can pull a bunch of data together to help faculty and support staff understand who their students are and what their needs might be. And AI can feed insight to your advisor, to your faculty, about what that student needs, and then that advisor, that faculty member, can help the student better based on the insight that AI generates. So that's the first piece of that.

Matthew Leger: The second point on this is that AI is really good at automating things. So you can take a lot of administrative burden off of the backs of faculty and staff and give them more time to spend with students.

Matthew Leger: And faculty and staff who've been doing administrative work for a long time have to change how they support students, to become coaches, guides, and advisors to them in many different ways. And AI can give them that time back, but we have to [00:14:00] retrain and retool how the support staff and faculty work with students.

Matthew Leger: They're not necessarily trained to be coaches and advisors if they've been doing some sort of administrative work for so long. So they need to be retrained and set up to be guides and support for students. So those are the two primary ways that I see AI making this world more human.

Matthew Leger: We just have to, as an industry, focus on using AI in those ways. 'Cause students do wanna interface with humans, right? We keep saying digital natives want to talk to a chatbot instead of a human. There are some cases where that's true. But I think students actually really wanna spend more time with their faculty and staff. They just want those interactions to be more meaningful.

Matthew Leger: And impactful. They don't wanna talk about, oh, what classes do I have to take at this time? A student information system, for example, modern student systems can do that for you. What they wanna do is talk to someone about, what do I wanna do with my life?

Matthew Leger: How am I gonna get there? How am I gonna pay for it? And what matters to me as a student? They need someone to help them through those decisions. A lot of students don't have that in their personal [00:15:00] life. So that's what I mean when I say that. And IDC works with a bunch of institutions who are really thinking about this.

Matthew Leger: We published a report last year, called the Responsive Institution Framework; actually, I presented on it at EDUCAUSE last year. And there are eight institutions in there that we listed as case studies that are using AI effectively to do exactly what I just talked about, right?

Matthew Leger: Put more face-to-face time in the student experience, which is actually what matters. So that's what I mean when I say that.

Joe Toste: On that, real quick, you talked about shadow AI also, right?

Joe Toste: Shadow IT. This is the next version of that. If you can, just comment on that a little bit.

Matthew Leger: That kind of goes back to your first question about this AI Wild West and all this aimless experimentation. There have been lots of great things happening in terms of how career advisors, academic advisors, financial aid staff, or teaching faculty have used AI to support students.

Matthew Leger: But many have brought some of these tools in on their own. Maybe they've found something online that they wanted to use and thought was [00:16:00] great, so they started using it, maybe with or without IT's permission. Again, we see at IDC a lot of early adopters of agentic AI encouraging faculty and staff to build an agent on their own. And they start doing that.

Matthew Leger: But then there's not this intentional effort to bring some of those tools into the mainstream of how the institution operates. So there are just these AI tools all over the place, and it creates a shadow IT, shadow AI problem, where there are all these tools all over the place that you can't actually manage or keep track of very well, which opens up all sorts of risks, which I'm sure keeps you up at night, probably.

Matthew Leger: But that's what I'm talking about when I say shadow AI.

Joe Toste: Lester, we have the shadow AI piece, and we've got all the devices that are also hooking up on campus. How is ASU thinking about securing the enterprise? There's so much being thrown at you right now. I'm just curious what the posture or stance around AI is; we know it's happening.

Joe Toste: And I'd love to hear from you on that.

Lester Godsey: Yeah, so before I really answer that [00:17:00] question, I want to go back to your comment about how AI is the equivalent of the cloud back in 2010. And I would say, with minor exceptions, AI and this concept of Shadow AI is no different than the concept of Shadow Cloud versus Shadow IT, and a lot of the problems that have plagued the IT industry, regardless of sector, still exist. It's variations on a theme, right? So let's look at the concept of asset management. Many organizations have challenges in articulating what assets they have. We've just expanded the definition, because of the change in technology, of what an asset is.

Lester Godsey: And now with agentic AI, I've recently had a conversation about, okay, are we gonna track agents and treat them as assets in our CMDB? So those are things that exist. So this concept of Shadow IT, you're absolutely right, it applies to AI, but this is just a variation on a problem that's existed for decades.

Lester Godsey: So going back to your original question, first and foremost, we've taken the position, some [00:18:00] organizations, government agencies and folks in particular have tried to ban AI, which I think is a fool's errand personally. And so you might as well just call it like it is, right? And so we have doubled down at ASU.

Lester Godsey: We have taken the position that AI is an integral part of learning. Period. Across the board. So whether you're talking about higher ed or K-12, this is the new normal. And so we want to incorporate that technology in a way that achieves the greatest outcome for our learners, right? Whether they're degree-seeking or just exploring the various different non-degree educational opportunities that ASU has to offer.

Lester Godsey: And so my job obviously is to secure that. So from a position perspective, we've taken the approach where we broadly delineate between our on-prem Create AI Platform, which we developed in-house, and everything else. And that's where all the sensitive work has to go.

Lester Godsey: And so I'll give a great example of this. Back when DeepSeek became a thing, everybody was freaking out about it, right? Understandably. Some organizations, I [00:19:00] won't mention who, 'cause we share that knowledge, outright banned the use of DeepSeek. And so ASU was able to take a more nuanced approach, where if it's Chinese-hosted DeepSeek, you're not allowed to use it for ASU purposes.

Lester Godsey: If you want to use a US instance of it, like AWS hosting DeepSeek, you can't upload sensitive or confidential data, and here are some parameters that you should follow with respect to standing up an instance of DeepSeek on a third-party, US-provided platform. But if there were legitimate research needs at ASU, like somebody who wanted to do research specifically on DeepSeek, then as long as they had their dean's approval, we would use our internally developed platform, because we had enough confidence that we could do so in a way that the risk was reduced to an acceptable level across the enterprise, if you will.

Lester Godsey: And so we are fortunate that we're able to have this environment for research-based and other sensitive types of activities, based off of the data classification and adhering to [00:20:00] our compliance requirements and things. But then ASU has also taken the realistic and pragmatic approach to our earlier example.

Lester Godsey: People are gonna use commercial tools out there. So instead we've provided tools. We just announced earlier this month that we reached an agreement with OpenAI so that every single faculty member, staff member, and student at ASU has access to ChatGPT Edu. And so there's a use case for all those sorts of things, right?

Lester Godsey: And then the last thing I would say is we still have challenges in front of us, so I don't wanna give the impression that everything's rock solid. It's changing all the time, whether it's the types of LLMs that come out or things of that sort. And so we are also looking at AI from a cybersecurity perspective to combat AI risks as well. We're gonna be kicking off a POC with a vendor to use AI, in particular leveraging its natural language processing capabilities.

Lester Godsey: To automate the [00:21:00] classification of structured, unstructured, and semi-structured data, right? Because especially with an organization of our size, it's a fool's errand to try to do that in a manual fashion.

Lester Godsey: And so we're looking at that. If we're successful, we can then apply it not only to the use of AI, to validate, before something gets uploaded to a third party or whatever, whether it should be in the first place, but it also gives us additional capabilities in other areas of cybersecurity.

Joe Toste: So Dan, you recently presented to 75 people, and I think you said it was like less than 50% had good data governance plans. First, back up: what was the overarching theme of the presentation? And any lessons learned that you were sharing?

Dan Kent: So it was for Public Sector customers, not necessarily Higher Ed, or it was combination.

Dan Kent: And this was self-assessed. I did not go in that challenge and they all raised their hand. I asked the question. So in the, this was about cybersecurity and AI, right? So how do we work that together to build the next generation environment? , , everywhere I talk, I ask that question, are you [00:22:00] prepared?

Dan Kent: Do you have an AI governance strategy? And I always ask, right next to that, do you have a data governance strategy? Because without that, especially if you're gonna be a builder, you don't have an AI strategy if you don't have a data strategy. So this is how people are feeling, right? They're not prepared.

Dan Kent: And it makes sense, right? It's still new, especially if you look at public sector. Most public sector customers don't move very fast; they're typically not funded as well, and they don't have a large student body to help with the research. So they struggle there.

Dan Kent: But that's how they feel. They don't feel prepared. I really don't think it's a matter of them doing the wrong thing. It's a matter of how they create an agile environment that can prepare them, because we clearly aren't through innovating here, right? We're still moving, and Agentic AI has some ways to go.

Dan Kent: So I think it's really about how you get your ducks in a row for when you are gonna be cutting over to some of these tools. And it's funny, because when you ask the second question, and this is pretty normal: are you doing AI? Yes. Are you doing Agentic AI? Most of the time, no. What are you doing?

Dan Kent: Chatbots. And when you [00:23:00] find out, hey, we are doing Agentic AI, you ask how many use cases, and it's usually three or four. Obviously I think ASU is very different because you've invested so much in it, but for the general population it's still very early in the phase. So I think it's not like, end of the world, gotta do it now.

Dan Kent: But if you don't get prepared for it, it's coming whether you want it or not. The business is gonna bring it in on the mission side. A customer's gonna bring it in. So you just have to be prepared, and you do everything you can to get prepared. There are lots of frameworks out there, so it's not like we're lacking frameworks.

Dan Kent: The question is, which one do you use? Or do you look at 'em all and pick what makes the most sense to you?

Joe Toste: Yeah, I just wanted to, in the words of my friend Ryan Murray at the State of Arizona, double-click on that. Preparing the data. That's gonna be a big one, right?

Joe Toste: If you've got a bunch of bloat and you scale it, you've still got bloat. I'm curious what you're advising customers as far as preparing them for Agentic AI. How are you preparing customers?

Dan Kent: I'll say, going back to the definitions and understanding: when you look at [00:24:00] Salesforce's Agentic AI and Microsoft's Agentic AI, those are really, I'll call 'em, closed ecosystems.

Dan Kent: Versus more of the mission or business side, where you're gonna be using third-party pieces and nobody owns the whole system. You're gonna be using open-source LLMs, you're gonna be using MCP tools and things like that. That's different than what you're gonna get with Microsoft. Eventually they're gonna be using those too, but they're trying to make it easy.

Dan Kent: And in an office-type environment, that's great, but on most of the business and mission side, that's probably not gonna be the answer. So we have to get the other side going as well. I think the whole idea with Agentic is, can we go back to: what are you doing today to protect your environment against the cloud?

Dan Kent: I look at SASE. If you have a great SASE deployment, it's a really good step toward at least protecting your environment against AI, right? Because SASE is gonna make sure we have controls around DLP, and it's gonna make sure we have controls around Zero Trust network access, which you're gonna wanna put in place with AI from a user perspective.

Dan Kent: Now, once you're building, that's a different story, right? When you're starting to build, I do need to worry about [00:25:00] the data, because I'm gonna be building LLMs, either training them or doing something with them, and my data, therefore, is gonna be part of that brain. How do I make sure I control that data?

Dan Kent: And again, in most places now there are silos. All the data is siloed, and that's fine, but when you start to do enterprise stuff, it's gonna be a little more complicated. So we talk about the data in that sense. But when you talk about Agentic AI, it's, and you mentioned it earlier: is it an asset?

Dan Kent: How do I control it like an asset? Does it have its own identity? How do I make sure it has authorization and access to the various tools it has to have? How am I gonna control that? Putting in a good SASE Zero Trust network access environment today is a good first step to getting into that world,

Dan Kent: 'cause you're gonna have to learn about new tools like AI firewalls, and AI gateways, which are gonna help you build AI applications. But if you don't have the other stuff figured out and deployed yet, you gotta go back and do that.
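Dan's point about treating an agent as an asset, with its own identity and explicit authorization to the tools it needs, can be sketched in a few lines. This is a hypothetical, deny-by-default registry written for illustration only, not Cloudflare's product or any real API; names like `triage-bot` and `ticket.read` are made up.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """An agent treated like any other asset: its own ID plus scoped grants."""
    agent_id: str
    allowed_tools: frozenset = field(default_factory=frozenset)

# Hypothetical registry: each agent is granted only the tools its job requires.
REGISTRY = {
    "triage-bot": AgentIdentity("triage-bot", frozenset({"ticket.read", "ticket.comment"})),
    "reporting-bot": AgentIdentity("reporting-bot", frozenset({"db.read"})),
}

def authorize(agent_id: str, tool: str) -> bool:
    """Deny by default: unknown agents and ungranted tools are both refused."""
    agent = REGISTRY.get(agent_id)
    return agent is not None and tool in agent.allowed_tools
```

The design choice mirrors the Zero Trust posture Dan describes: nothing an agent can reach is implied by where it runs; every tool call is checked against an explicit grant.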

Joe Toste: Yeah. And that's gonna require a lot of training also.

Joe Toste: And even right now, [00:26:00] I host this podcast and every day I'm just trying to keep pace. ChatGPT might be trying to inject something, and I'm like, oh no, this is not good, 'cause I'm using this stuff all the time. I'm using Claude Code, I've got a bunch of MCP hookups inside that, and that's passing data through.

Dan Kent: And I read a study, it's about two months old now: 22% of attachments put into a prompt have sensitive or PII information in there. You gotta stop that, right? 'Cause you don't know where that goes, especially if you don't have an enterprise agreement. Do you have an enterprise agreement with the LLM provider?

Dan Kent: Then you've got it under control. But if you don't, how do you get your users to only use that enterprise agreement when they have access to so many others? So those are the things that we're telling customers about.
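A DLP-style pre-flight check like the one Dan describes, scanning prompt attachments for obvious PII before they leave the enterprise boundary, might look like the following sketch. The regex patterns are deliberately simplistic and hypothetical; a production gateway would use a real DLP engine with validation (e.g. Luhn checks on card numbers) and far broader coverage.

```python
import re

# Illustrative patterns only; a production DLP engine goes far deeper.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_attachment(text: str) -> list:
    """Return the PII categories found; an empty list means nothing flagged."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def gate_prompt(attachment: str) -> bool:
    """Allow the upload only when the attachment scans clean."""
    return not scan_attachment(attachment)
```

In the flow Dan outlines, a gate like this would sit in the SASE/DLP layer, so the check applies whether or not the user remembered to use the sanctioned enterprise endpoint.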

Joe Toste: It's a really good conversation. So Matthew, one of the last pieces of the paper, and we can't cover it all, I'm sorry, but I wish we could.

Joe Toste: Yeah, I wish we could. So you talked about declining trust in the value and effectiveness of higher education, which I know you're gonna go, "Joe, this is what I meant," which includes the growing skepticism about the return on investment and [00:27:00] impact of academic degrees.

Joe Toste: When I was talking with Dan, he had mentioned, as a parent, questioning the cost of that $200,000 degree. I'm having this conversation with my daughter, who's 16, and she's like, I wanna go to Amherst. And I looked, and the starting price tag is $95,000, and I'm like, you should check out Santa Barbara City College or ASU Online.

Joe Toste: That's gonna be the new option, ASU Online. How should institutions be thinking about AI as a way to really demonstrate value, build trust, and offer those skills as students are looking to go into the marketplace?

Matthew Leger: So Joe, this is what I meant. Yeah. Personally, I think the public narrative about the value of higher education not being there anymore is totally false. There are lots of challenges, of course, and Higher Ed has to do more to evolve with the times and teach more of the skills students need to get into the jobs of now, and prepare them for a future we can't fully anticipate. So there are certainly some challenges. But I am an example of someone who went to college for a specific area, got a job in that space, and is still doing it almost 10 [00:28:00] years after getting my master's degree.

Matthew Leger: So I'm an example of where that works pretty well. And to this day, Higher Ed is still the best path to a higher income and better economic mobility. There's nothing else that compares, still to this day. So I will say that first. The second thing is, AI has a lot of opportunities not just to transform how institutions operate or how they teach students, but also how they prepare students for the future.

Matthew Leger: I'm thinking about Babson College and Patty Patria, who's the CIO over there; they're doing a lot of great work. Ruben is also their Academic Innovation Officer, or something like that. They're bringing Copilot not only to all of their faculty, to embed AI into how they deliver instruction, but they're also putting AI into the hands of all their students, particularly in their entrepreneurship classes.

Matthew Leger: And they're helping students build companies with AI, right? Or build some sort of product or solution they've always had in their head as an idea, and they're showing them: here's how you can bring that to life with AI. And then that's something students can put on their resumes.

Matthew Leger: That's something they can demonstrate in any interview they might be [00:29:00] doing for a job, but they also might just take that business and run with it, which is about as amazing as it gets, right? Higher Ed is not necessarily known for helping students build businesses. Maybe Babson is, but Higher Ed generally isn't.

Matthew Leger: But that's the path forward. AI can help you do that. So it's not just this tool for "personalizing learning," and I hate that term. It is actually a tool you can use to empower students to build things. So that's an example that immediately comes to mind of how institutions are thinking about this.

Matthew Leger: Also, ASU is doing tons of amazing things; I'll let you speak to that. But there are also other institutions, like Cornell, doing AI hackathons, where they're giving students a bunch of AI capabilities and problems to solve and saying, go out and solve it, and use AI to do it.

Matthew Leger: There are institutions building partnerships with NVIDIA and Microsoft and Oracle and all these companies to build AI labs on their campuses, sandbox environments for students to come in and play in. Those are the amazing things I'm seeing Higher Ed do with AI.

Matthew Leger: It's not necessarily about how they are leveraging AI; it's how they're putting AI into the hands of their students to learn and [00:30:00] create and build things. So I think that's the biggest opportunity Higher Ed has.

Joe Toste: Speaking of entrepreneurship, my daughter is taking an entrepreneurship class, and we built an app on Replit.

Joe Toste: It spun up an app and it also spun up a website; that thing could almost take Stripe payments. It was pretty impressive. Now, it started deleting itself, and okay, that's the dark side of this.

Joe Toste: But I think this is where it's going. Lester, I'd love to hear, from the cyber side at ASU, how you're thinking about this AI-augmented student, that next generation, and where you're seeing this go.

Lester Godsey: Yeah, absolutely. And it's great that you're asking this question.

Lester Godsey: So we are in process, fingers crossed, of hiring student SOC coordinators. My boss Lev Gonick gave me direction, as part of our program strategy and roadmap, to stand up a tier-one, student-led SOC. And I will go on record and say that, under normal circumstances, most people aren't standing up a tier-one-level SOC with human beings at this point.

Lester Godsey: They're using automation and Agentic AI to replace that first-tier [00:31:00] person. And that's the direction the technology's headed, which is totally understandable. In our case, because of our charter and the fact that we're a higher education institution, we need to educate our students and give them that exposure.

Lester Godsey: But our objective along those lines is to expose them to security orchestration.

Lester Godsey: If we're successful, we'll hire a student SOC coordinator whose role will be to mentor and guide our students through what we're hoping is a year-long journey of learning the skills of a SOC analyst. But in addition to the standard skills and exposure you might have an analyst go through, it's our intention to expose them to security orchestration and automated response, and to Agentic AI, because really,

Lester Godsey: that's where the industry's headed. And this kind of goes back to your point, right? Yes, we do want to create a unique learning experience for students where it's appropriate, because everybody learns differently. But at the same time, at ASU we are taking a pragmatic approach: look, people want jobs, they need to have these skills, and they need to be [00:32:00] exposed to them.

Lester Godsey: And so our student-led SOC is intended to do that, if you will. We're looking at leveraging a combination of our own internally developed Agentic framework as well as the tooling we have in our cybersecurity portfolio. All those things combine to expose our students so that they understand how Agentic AI works and where it's appropriate, but more importantly, where, from a cybersecurity perspective, you still need that human in the loop, so to speak, with regards to decision making. Because, as our buddy Ryan Murray knows, it's fundamentally about risk, right? And so there's a place for technology in the form of AI, and then there's still the need for the human element, the creativity and the dogged persistence, if you will, to protect an organization, especially one the size of ASU.

Joe Toste: As we round this out, and everyone, this is so good. I love that we're doing this. They're about to take this whole booth down.

Joe Toste: I'd love to hear just one lesson or non-obvious insight around AI and [00:33:00] cybersecurity that you would leave for leaders across Higher Education, State and Local government, and K-12. We can kick off with you, Matt.

Matthew Leger: That's a good question.

Matthew Leger: I think not a lot of people in higher education are realizing yet how they can leverage AI to enhance security across their entire organization.

Matthew Leger: So they think about AI as this thing they have to protect against, but it's also this tool that you can use to protect against the things you're trying to protect against, right? And so now I envision, like, a Star Wars of who's got the best AI.

Matthew Leger: That's what I think is happening in cyberspace right now. But I think AI is actually an incredible tool that we can use to enhance security as well, not just this thing we have to protect against. So I think that's probably the biggest takeaway that I have for Higher Ed in particular: it's a tool as well.

Matthew Leger: Not just something that's a risk to you.

Lester Godsey: In my case, it goes back to my earlier comment. I think not too many organizations are thinking about the operational ramifications of AI. What I mean by that is, it's very common in the cybersecurity field to do tabletop exercises.

Lester Godsey: [00:34:00] More often than not, they're around ransomware or maybe DDoS or whatever the case is. We are conducting a tabletop exercise next week, so by the time your listeners see this, we'll have already completed it. We're conducting it to ensure that the organization at ASU is ready,

Lester Godsey: from an operational perspective, to address threats from the use of AI as it relates to security and privacy. If you're gonna adopt a technology, I would encourage organizations to start thinking about whether your organization is ready and how it would respond to security and privacy concerns that come up as a result of the use of AI.

Lester Godsey: And so I think that's something that not too many people are thinking about.

Joe Toste: Dan, take us home.

Dan Kent: All right, so I'm gonna pivot from this conversation. I've just read two books, Co-Intelligence by Ethan Mollick and Scary Smart by Mo Gawdat, and I think everybody should read both. But from a university perspective, I think there are two things students should be learning and prepared for.

Dan Kent: One is, we have to work with AI. It's gonna make us more productive. It might replace [00:35:00] some roles, but we'll figure out how to add other roles to do something with it. And that's great; I think everyone knows that. Awesome. But if you look at how AI is gonna impact our future, the human future, you have to take a step higher and think about what it actually can do.

Dan Kent: It is the only technology whose goal is to help augment the brain, not the musculature, not the bones. And that's a big difference.

cut: EDUCAUSE is now closed. Please make your way....

Dan Kent: Is there gonna be a blooper reel? Those are my favorite.

Joe Toste: I'm gonna do that for sure.

Dan Kent: So the last comment is: we make AI. The students of today are the ones that are gonna make AI as intelligent as it's gonna be. So I believe every student has to understand AI from a bias perspective and from an ethical perspective, because they are gonna control the destiny of how impactful AI is on the human race.

Joe Toste: Awesome. Thank you for coming on the Public Sector Show by TechTables. There are a lot of resources I will link in the show notes. I appreciate you all coming on the show. Thank you. Thank you. Thank you.

 

Lester Godsey, Chief Information Security Officer at Arizona State University

Dan Kent, Field CTO at Cloudflare

Matthew Leger, Senior Research Manager at IDC