
Risk, Strategy, and the AI Frontier: How Terry Ziemniak is Guiding Business Leaders Through the Uncertainty
Artificial intelligence isn't on the horizon anymore — it's already transforming the way we work, compete, and make decisions. But beneath the shiny tools and bold promises lies a more complex reality: risk, governance, and uncertainty. Few understand this better than Terry Ziemniak, a seasoned executive at the crossroads of AI, cybersecurity, and responsible leadership.
Recently on the AI with Bryan podcast, I sat down with Terry to unpack the tough questions leaders should be asking as they embrace AI — questions around trust, data governance, and the hidden risks of racing ahead without a plan.
From Cybersecurity to AI: The Risk Lessons Business Leaders Can't Ignore
Terry's journey with AI didn't begin last year when ChatGPT hit the headlines — it goes back to the early 1990s, when he was building AI-driven navigation tools before most people had even heard the term "artificial intelligence."
But it was his decades in cybersecurity and risk management — advising Fortune 500 companies and serving as a Chief Information Security Officer — that uniquely prepared him for the AI era.
“The parallels between cybersecurity and AI are striking,” Terry shared. “Companies that already have a mature cybersecurity program are able to adapt much faster to the AI governance conversation. The structures, the questions — they're familiar.”
Why AI Without Governance Is a House Built on Sand
It’s easy to get swept up in the hype. New models, shiny tools, and promises of skyrocketing productivity. But Terry urges caution — not to slow innovation, but to ensure it’s built on solid ground.
“Companies want to jump straight to the cool stuff — predictive analytics, generative AI,” he explained. “But without foundational data governance, without understanding where your data lives, who owns it, and how it's controlled, you’re setting yourself up for major problems.”
The truth is, most organizations still struggle with the basics — data inventories, access controls, risk assessments. And as AI capabilities grow exponentially, those gaps become dangerous liabilities.
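To make "the basics" concrete, here is a minimal sketch of what a data-inventory record might capture. It's purely illustrative: the field names, the example systems, and the explicit opt-in flag for AI use are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record; the fields are illustrative, not a standard.
@dataclass
class DataAsset:
    name: str                  # e.g., "payroll_master"
    system_of_record: str      # the single true source of this data
    owner: str                 # the accountable business owner, not just IT
    classification: str        # e.g., "public" | "internal" | "restricted"
    compliance_tags: list[str] = field(default_factory=list)  # e.g., ["SOX"]
    allowed_ai_use: bool = False  # default-deny: data must be opted in to AI

inventory = [
    DataAsset("payroll_master", "Workday", "VP People Ops", "restricted",
              ["SOX"], allowed_ai_use=False),
    DataAsset("marketing_copy", "CMS", "Head of Marketing", "internal",
              allowed_ai_use=True),
]

# The governance question AI forces: which assets may feed a model at all?
ai_eligible = [a.name for a in inventory if a.allowed_ai_use]
print(ai_eligible)  # ['marketing_copy']
```

The code is trivial by design. The point is that "where does our data live, who owns it, and may AI touch it?" becomes an answerable query instead of a guess.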
The Shadow AI Problem — And Why It’s Coming Faster Than You Think
As AI tools become cheaper and more accessible, the rise of "shadow AI" — unauthorized or unmonitored use of AI by employees — is inevitable. And according to Terry, this makes robust governance even more critical.
“Imagine an employee casually giving an AI tool access to your entire Google Drive — it's happening,” Terry warned. “Without clear policies and controls, you open the door to massive data exposure risks.”
It's not just theory. Real-world examples — like companies rolling back AI deployments after discovering uncontrolled data access — are already emerging.
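One concrete mitigation is to insist that any sanctioned AI integration requests the narrowest OAuth scopes possible. Below is a minimal sketch using Google's google-auth-oauthlib package (pip install google-auth-oauthlib); the client-secret filename is a placeholder, and the point is the broad-versus-narrow contrast, not a complete integration.

```python
from google_auth_oauthlib.flow import InstalledAppFlow

# Broad scope: the tool can read EVERY file in the user's Drive.
BROAD_SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]

# Narrow scope: the tool can only touch files the user explicitly opens
# with it or that it creates itself, a far smaller blast radius.
NARROW_SCOPES = ["https://www.googleapis.com/auth/drive.file"]

# "client_secret.json" is a placeholder for your own OAuth client file.
flow = InstalledAppFlow.from_client_secrets_file("client_secret.json",
                                                 NARROW_SCOPES)
creds = flow.run_local_server(port=0)  # the user consents to the narrow scope
```

With the drive.file scope, even a careless or compromised AI tool can only expose the files a user deliberately handed it, not the entire drive.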
Responsible AI Leadership: It Starts with People
While technology gets the headlines, Terry believes the real key to responsible AI adoption is human — culture, education, and leadership alignment.
“Start with your people,” he emphasized. “Are your executives educated? Is legal involved? Does everyone understand the risks and the plan?”
Frameworks like the NIST AI Risk Management Framework provide a blueprint, but leadership must champion thoughtful, phased AI adoption — tailored to the company’s risk tolerance.
Where to Experiment — and Where to Be Cautious
Not all parts of an organization should be on the AI bleeding edge. Terry recommends tiering AI adoption by risk level (a code sketch of this tiering follows below):
✅ Marketing & Internal Tools: Lower risk, good for early experimentation.
✅ Core Business Functions: Proceed with caution.
✅ Finance, Healthcare, Customer-Facing Systems: Extreme caution. Tight controls.
“It's about aligning AI with your business priorities and risk appetite,” Terry explained. “Don't let the hype push you beyond what’s safe and smart.”
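For teams that want to operationalize this, here is one way the tiers might be encoded as a simple policy lookup. The tier names, department groupings, and approval gates below are hypothetical starting points, not Terry's prescription.

```python
# Hypothetical tiers and gates; adapt them to your own risk appetite.
AI_RISK_TIERS = {
    "experiment": {
        "departments": {"marketing", "internal_tools"},
        "gates": ["acceptable-use policy acknowledged"],
    },
    "caution": {
        "departments": {"core_operations"},
        "gates": ["security review", "data governance sign-off"],
    },
    "restricted": {
        "departments": {"finance", "healthcare", "customer_facing"},
        "gates": ["security review", "legal review",
                  "human-in-the-loop required"],
    },
}

def gates_for(department: str) -> list[str]:
    """Return the approval gates an AI project in this department must clear."""
    for tier in AI_RISK_TIERS.values():
        if department in tier["departments"]:
            return tier["gates"]
    # Unknown departments default to the most restrictive tier.
    return AI_RISK_TIERS["restricted"]["gates"]

print(gates_for("finance"))    # ['security review', 'legal review', ...]
print(gates_for("marketing"))  # ['acceptable-use policy acknowledged']
```

A lookup like this won't stop shadow AI by itself, but it gives every project a predictable answer to "what do we need before we ship?"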
The Bottom Line: AI Success Is a Business Problem, Not Just a Tech Problem
AI isn’t just for technologists — it demands business leaders who understand risk, governance, and long-term strategy.
As Terry put it, “Pure technologists are great, but successful AI adoption requires leaders who can connect the dots — technology, risk, business outcomes. You can’t have one without the other.”
For companies ready to lead responsibly in the AI era, that means slowing down, planning thoroughly, and building on a foundation of governance and clarity.
Want to learn more from Terry?
You can connect with him via TechCXO or his LinkedIn profile (link to be added).
And if you found these insights valuable, be sure to follow AI with Bryan, share this with your network, and stay tuned — because the future isn’t waiting.

Episode's Transcript
Please note that the transcript below was produced by an automated transcription service and will inevitably contain the kinds of errors that occur in voice transcription.
Bryan (00:01.496)
Hey everybody, welcome back to AI with Bryan, the show where we dive deep into how to learn, leverage, and lead with AI. I'm your host, Bryan Dennstedt. I'm a technologist, strategist, and someone always looking for the edge AI can bring to work and our lives. This week, we've got an amazing special guest, Terry Ziemniak. Terry's career sits at the intersection of risk, strategy, and technology.
He's a former chief risk officer and a seasoned analytics and cybersecurity executive. He's advised Fortune 500 companies on managing risk and digital transformations and brings a unique lens on trust, governance, and ethical dimensions of AI. His current focus, he's helping organizations build AI strategies that are not only powerful, but responsible.
Terry, you've been somebody I've been watching for a while, because you don't just chase the shiny objects. I think you ask the really hard questions, questions like: should we trust that model? And what's the risk that we're not seeing? I know these are things we've geeked out about over coffee and breakfast a few times. So today, I really want to dig into those questions and more. Welcome to the show.
Terry Ziemniak (01:13.479)
Awesome. Thank you, Bryan. I'm looking forward to the conversation.
Bryan (01:16.92)
Well, let's start where we always do with learning, right? So Terry, what's something you've learned recently about AI that's shifted your perspective or challenged some of your assumptions?
Terry Ziemniak (01:27.379)
Yeah, so my perspective, my background is cybersecurity and risk, as you mentioned, but I've been dabbling with AI since, boy, undergraduate, back in the early 1990s. Actually, our senior project we were working on: we effectively built Google Maps for our college campus. How do you navigate from the library to the football stadium, from the football stadium to the engineering building? So we were definitely doing AI a long time ago.
It's kind of neat to see the evolution of what's been going on here. What I've seen, though, with a little bit of AI background and a technical background in cyber, is more and more parallels between the cyber and the AI conversations from a risk perspective. Companies have learned to deal with cyber risk, and some companies are still struggling, but there's a maturity path that we're working through.
So: how does cyber impact my business? What are the risks? What do I have to worry about from contracts and partners and business continuity, all those sorts of things? We're seeing the evolution and the growth of basically those same conversations from an AI perspective. If I'm buying AI technology, what's the risk to my company? If I have it in-house, what are the risks? What are the compliance concerns? The business continuity, the data governance, as you mentioned. So there are a lot of striking parallels between the two.
And I'm finding that companies that have a mature, solid cybersecurity program are actually leveraging that to quickly jump into data governance and the AI program as well. So, you know, what I'm learning is there are a lot of parallels. It's a lot of neat technology, but the structure and the governance needed to make these neat shiny objects work well within your organization: a lot of those thought processes have already been done.
So they're out there, and companies are leveraging them with minimal changes to accommodate AI, because it is different than cyber. But again, it's not rocket science. From a risk perspective, we're not starting from ground zero. And I think that's one of the lessons to be learned.
Bryan (03:34.552)
Yeah, I agree with you completely. I think that AI is forcing us to do more planning. Like, let's slow down before we start. I know even me, as a developer, I just want to jump in and play with it, connect the wires, and get it to do the thing I want it to do. But the right way to look at it is to sit down and be thoughtful. Let's really think through everything with these strategic things. Like, gosh, what is the...
The Sagrada Família is what's popping into my head. These people were building this amazing church in Barcelona, and they knew it would take a long, long time. I think they've been building this church for, what, three generations now? And I think it's finally coming to completion. But I guess that's my concern too: how do we pull in somebody like you to help us plan and figure out our data governance, so we don't wander down that slippery slope of AI and really do
unlock AI's potential, but not give away our family secrets, to a degree. And that's really, that's the hype, I guess, right? There's so much hype, there's so much fear around AI. You know, how are you helping people cut through that noise and do this planning piece so well?
Terry Ziemniak (04:49.481)
Well, the hard part is we don't know what the rules are gonna look like, goodness, 12 months down the road. So compliance and regulation, contractual obligations, these are all evolving and they will change. Whatever you see today is gonna change, so you've gotta be prepared for that change. The other thing I'm seeing is that the use cases keep growing. We built this shiny widget, or we purchased this widget, to solve problem A. Hey, we just realized it solves problems B, C, and D as well. Great for the business.
But you gotta take a second, put the brakes on, and ask: what are the risks of doing this? Data's moving in ways you're not expecting. It's growing, it's evolving. You've got learning models that you're building on the fly. So there's a whole level of uncertainty. And I won't even say uncertainty: we know we don't know, and these changes are coming down the road. So you gotta take a moment, as you said, and plan, and have the governance in place so that when the rules change tomorrow,
again, contracts, regulations, whatever it may be, when these changes come tomorrow, are you prepared to handle it? Or do you have to blow up your models and retrain? Do you have to revisit everything? Again, tie it back to cybersecurity: are you building something that you're going to have to blow up to then make secure? So build the security, build the governance, build the AI planning in early. Prepare for the changes that are undoubtedly coming down the road, so you don't have to blow up these things that you're building now.
Bryan (05:49.134)
Yeah.
Bryan (06:16.834)
Yeah. I mean, I think it's so interesting, you know, which AI camp do you want to be in? Do you want to be in the OpenAI or the Gemini or the Claude camp, or do you want to run with DeepSeek or some of the Llama models? You're picking one of these camps and jumping into it without really knowing their roadmaps, their trajectories, your roadmap, your cost projections. And it's easy, but hard, to switch once you've built an architecture and
information around it. I mean, that's such an invaluable insight you're giving there. And I don't think it's something we talk about enough.
Terry Ziemniak (06:49.855)
And I think, yeah, Bryan, there's another parallel. Another problem that IT has been struggling with for 15, 20 years is the cloud journey. Again, you make a decision today: what's it gonna look like later? Can you back out of the cloud contract? Can you move your technologies? These same problems are gonna pop up in the AI space. So, all the AI evangelists and architects:
think back to what our industries learned with the cloud migration and the cyber scene, all the things we've learned before. Revisit that, because there are a lot of meaningful gems that we can leverage in our AI journey.
Bryan (07:29.644)
Yeah, for sure. I always try to look back. Like, we look back 100 years: how did they go through some of the change they saw with the introduction of electricity and the car and the washing machine? How do we learn from what they went through, and try not to make some of the mistakes that we're seeing? But at the same time, it feels like it's moving so fast and so rapidly, it's hard to figure out which way to go. I want to move into the leveraging section
now a little bit. I mean, you have been across a multitude of industries, and I'm just curious: how are you taking some of the stuff we were just talking about, that we need to learn, and helping these organizations actually put AI to work in a smart and strategic way?
Terry Ziemniak (08:18.087)
Well, it's different for different clients. It really starts with people understanding the problem they're trying to solve. That's kind of a business-analyst sort of conversation, and those are conversations I have with a couple of clients. But it really goes back, Bryan, I think, to: you need a structure, you need a plan. I've got a client, for example, that came to me about four months ago and said, we want to start using AI.
Okay, great. Can you be a little more specific? Again, parallel back: hey Terry, we wanna be secure. I'm like, okay, well, let's have some conversations. Working through it, we brought in some experts and we had some great conversations, educational sessions. We started with the basics; we had a multi-stage approach to what we're gonna do. So it's: educate the organization, the executives, the users, get them familiar with AI, start with
Bryan (08:48.558)
What's that mean? Yeah.
Terry Ziemniak (09:15.121)
back-office sorts of solutions. So that's what we built out for them. Phase two is coming up behind that, which then starts leveraging some of the generative AI. And then eventually we're going to go to the predictive analytics and all the cool visual sorts of technology. Point being, you've got to struggle with the basics. If you jump way ahead to the really complicated stuff, it assumes you've got the sound culture, you've got the sound data governance, you've got your security in place and your contracts, all the other things in place.
So generally it's gonna be a metered, thoughtful approach through the AI journey, starting with: what are you trying to accomplish? And then a lot of foundational stuff, because if your foundation is flawed, you're gonna be building your house on sand. It's not gonna work.
Bryan (10:00.792)
Yeah, and I guess that just goes back to that data governance thing you were talking about. Like, are there some common gaps in data governance or model risk management that leaders should be focused on and paying attention to?
Terry Ziemniak (10:14.515)
Well, again, tying back to my experience as a CISO for several companies: pretty much every single company struggles with the inventory of the data, the data governance. What's the true source of information? Where does it come from? Where does it go? Who's responsible for it? What are the control expectations? What are the compliance requirements? That foundational stuff. Because even something as simple as payroll or your sales report numbers,
you may have multiple versions of that throughout your company. So you've got to stop and take a moment and think through: what are the true sources of information that you're dealing with? A lot of companies struggle with it, and it's a journey; I've been helping companies with that for 20-some years. Interestingly, in some of the big companies where I worked as the CISO, they put the data governance team underneath me. So I owned data governance as the cybersecurity executive.
Bryan (10:59.234)
Yeah.
Terry Ziemniak (11:10.751)
And we found a lot of parallels. The data governance guys came in and said, hey, the problem we have is people don't listen to us. We don't have the authority. We don't know what's going on. We're like, well, you sound like a cybersecurity guy. Welcome to the team. A lot of the same problems. But it's that foundational core again. What's the data? What are the expectations? Who are the owners? So it's a lot of, frankly, inventorying. It's not complicated, but it is complicated. We all know where
Terry Ziemniak (11:39.111)
we need to go; it's just hard to get there and hard to maintain it. Maintenance is another big problem, especially in data governance.
Bryan (11:46.092)
And it's interesting to me, because I feel like we've been in this era where hard drives were cheap enough and data storage was cheap enough that we could just save everything. Let's save all our financials. Let's save all of our customers. Let's save all of our documents and marketing and everything we create. Let's just save it all. But I think we're rapidly moving into this world where we're creating so much content, so much information, and we aren't realizing that
the AI chat history that you have on that left-hand side expires after so long. The images we generate and the videos we generate are only valid for 24 hours unless you download them to some permanent storage place. We're moving into this transient-data thing, where I can recreate something on the fly. And what data governance policy is gonna say what must be saved for HIPAA compliance or other things, versus what should be destroyed or can be thrown away really quickly?
It's just such an interesting thing. Do you think people are aware of that transient nature of some of this AI stuff? And/or do you think AI is going to help solve some of our data governance problems, because you're going to be able to talk to AI to help figure that out?
Terry Ziemniak (12:59.067)
No, it won't. I think what AI is showing us is a good reminder that there's stuff we care about and there's stuff we don't care about. Risk management. So if you decide, of the petabytes of data floating around and the multiple copies of spreadsheets and databases, maybe you don't care about 98% of it. And if Bryan needs 15 copies of a report on his laptop, and it's properly secured and the controls are in place, let Bryan keep his 15. But
those are Bryan's, and those are not the true source of information. So again, back to the cybersecurity model: what do you care about? What don't you care about? Draw the line between the two, and maybe you only have half a dozen true sources of information that deal with your most critical business operations and processes. That's the stuff you're really going to tightly control, because it feeds your AI and a lot of decisions are made off of it. And again, if Bryan has an old copy, shame on Bryan. It's kind of his responsibility. So:
draw the line between what we care about and what we don't. Basic risk management. Those concepts apply to data governance as well.
Bryan (14:04.032)
It's... so, I was just watching, you know, the CEO of Nvidia. Jensen is the man of the hour, in my opinion, right? Because all roads lead to Nvidia for chips and AI stuff. And I was watching him unveil some of his new chips over the past couple of days. And he was talking about this spine that connects all these AI motherboards together in some fashion. And that spine can
transfer information between all of the AI chips in this one stack, or something like that. It can transfer, I don't know, don't quote me on this, internet people listening to this, something like 200 terabits or so. I don't know, it was some very, very large number. And he says at any given moment, the internet in its entirety is only transmitting
90, or something like that. I don't know, don't quote me on the numbers, but it was something like: this AI stack and cluster that he's rolling out will be able to do three times what the entire internet does every second. It's just that we don't realize how much data AI is going to be generating on a regular basis as we head into this new paradigm.
Terry Ziemniak (15:17.695)
And I think, another thing: we talked about the uncertainty 12 months down the road. As the price of this keeps dropping, it's going to be more and more kind of consumerization of AI. So the controls that we're building, and the hopefully thoughtful implementations we're doing today: a year from now, Susie in HR, maybe she has quick access to an AI function, AI as a service, and she just does her own thing off to the side. So
Bryan (15:21.965)
Mm-hmm.
Terry Ziemniak (15:44.843)
Shadow IT, shadow AI, may be popping up pretty soon as this stuff gets cheaper and cheaper.
Bryan (15:49.88)
And this is where those governance controls are so, so important as an implementation, you know, leveraging the boundaries and controls that you can put in place. 'Cause it's way too easy for any employee, whether you're in the Microsoft or Google camp, to go click into a new AI tool, click sign in with Google, and boom, give it access to my entire Google Drive of data. And now that's been uploaded into that AI universe. We've got to put strong controls in place. So...
I think people like you have just doubled your salary, personally.
Terry Ziemniak (16:20.703)
Awesome. Yeah. So what I found, working with a client late last year, is this concept of latent security issues. If you have lingering security issues, if your data loss prevention, your ability to detect data moving in and out, doesn't work well, if your backup strategies, if your change management, if your workstation controls are not sound: A, you've got a cyber issue, but B,
AI is going to make that a thousand times worse, because the capabilities and the uncertainty and the data flows are getting exponentially more complicated. If your foundational cyber controls are not in place, those latent security issues are going to be exploited by AI. I had a large company that rolled out Copilot and found out that people's access rights were way broader than they thought, because Copilot started crawling SharePoint.
They put on the brakes and rolled Copilot back. Again, access control. And another big project I did: I built a giant control list, an AI control list, for their AI governance program. I found that about 85, 90% of the identified controls (a control being a protection, a general protection) almost exactly mirrored what cybersecurity
Bryan (17:18.446)
Yeah. Yeah.
Terry Ziemniak (17:44.2)
already had in place, or the cybersecurity expectations. The basic security controls covered a majority of what you needed for AI. There was not a whole lot new. Now, AI has architectural concepts like trustworthiness and bias, and all those things have to be thought through. You have to control maybe your databases and your learning management systems, and the models have to be protected. There are some variances.
But the vast majority of the concerns really just map to what cybersecurity is already asking you to do. Do you do proper change control? Do you back things up? Do you look for vulnerabilities? So again, the great news is you're not starting from square one. A lot of this is already understood. But if you're not doing it well, if the foundations are not done well, take a moment and go back and shore up your cybersecurity, shore up your data governance, before you get too far down the path.
Bryan (18:36.078)
Yeah, absolutely. There are so many things like that that need to be considered as you dabble in AI. So make that sandbox off to the side and set those controls in place. I don't know, do something; if you need to, reach out to Terry and he can walk you through his checklist. I mean, it's an important thing. And I think it leads to the last segment that I kind of want to touch on, which is really the leading piece of this, because AI isn't just about
Bryan (19:03.566)
models and math and the future of society and stuff. It's all of those things, but it's really about unlocking our people, our culture, really being critical about decision-making. The decisions we make have large ramifications throughout our company, at the click of a mouse. So what, in your opinion, should responsible leadership look like in the age of AI, in 2025 and beyond?
Terry Ziemniak (19:34.442)
Well, I would say that what companies can do to greatly reduce the risk in these AI projects we're all driving towards is leverage a good framework. I'm a big fan of frameworks. NIST has an AI Risk Management Framework, NIST being the National Institute of Standards and Technology, the government body that talks about cybersecurity, privacy; they've got a lot of nice things.
They also have a framework for AI, and that'll kind of walk you through some of these concepts. But I'll tell you, one of the main points in the framework, Bryan, talks about people and education. It asks specifically: are your users educated? Are your executives educated? Is the legal team looped in? Those sorts of things. Whether it's the NIST AI Risk Management Framework or others, everybody talks about starting with the people.
So I think that's another thing that gets forgotten, and that's where leadership needs to start: with people and culture supporting the governance programs. That's all a leadership sort of concept, and if those are not in place, you're setting yourself up on a bad path.
Bryan (20:32.27)
Mm-hmm.
Bryan (20:44.504)
Yeah, I agree completely. That's what I've been telling teams: ask the AI for industry standards, industry best practices. It will go and find them for you as well, as a deep research task. I guess there are two other aspects to touch on in this leadership space that I'm finding as I'm helping companies roll out AI roadmaps. Part of that roadmap is looping in somebody like you to make sure we have the right governance in place. But how are you leading conversations with executives that might not be so technical and
Terry Ziemniak (20:51.359)
There you go.
Bryan (21:13.74)
are concerned about AI, and shifting them from, like, hesitation to confidence, and a couple of those aspects?
Terry Ziemniak (21:22.879)
Well, I don't know that hesitation is necessarily a bad thing. What we learned in an earlier project that I participated in is the idea that you need to align your AI objectives with kind of your cultural risk. Do you want to be bleeding edge? Do you want to be just behind that? Do you want to be slow? Are you a healthcare organization? Are you a finance organization? There's a lot of consideration. So maybe
take a break and think about where you want to be, and what your risk tolerance is. Because, again, AI is risky: we don't know what it's gonna look like 12 months from now, all these expectations and the contracts and the regulations, everything coming down the road. So what I learned is: start again with the foundational pieces. We talked about security, we talked about governance, data management. Risk management from an AI perspective is a conversation you have to have. You know, does the organization truly want to be bleeding edge? Maybe, maybe not.
Maybe you want to be bleeding edge in a very small subset of your organization. Maybe you want to be cutting edge for your developers writing code, but the chatbot that talks to your consumer, to your client, maybe that's a few steps behind the cutting edge. So I think taking a moment to align it with the risk profile of the organization is a key concept, because with those conversations in place,
you can more easily foster leadership support in what you're doing. Everyone's gotta be aligned, we've all gotta be on the same page and march to the same drummer. Those sorts of conversations around AI risk help set that tone.
Bryan (23:00.566)
It's interesting when you unpack that, because I know, me as the CTO leading these development teams, they all want to be on the bleeding edge. But the code is usually the critical component for a lot of these companies. And it's like, I don't know if we want the code to be on that leading edge. I almost go back and say, let's play with marketing and sales, because that's all about experimentation in my mind.
Which departments do you think should be maybe a little bit more bleeding edge, cutting edge, versus some other parts? Like, definitely finance should be ten steps behind, in my opinion. Don't get too crazy on some of the AI stuff there. I don't know. Are you seeing one department versus another, or what would you recommend for leaders?
Terry Ziemniak (23:49.376)
Yeah, I don't know that I see one department more than others. I see a lot of people dabbling. I see back-office functions starting to play with that generative AI. But what I would suggest to companies is: think about your external influences. Regulations are definitely going to drive how you manage your AI risk. Again, finance, as you mentioned, healthcare, military, government, whatever it may be. But even then,
in that broad scope, if you're a healthcare organization, some of it's gonna be risk management associated with AI, but there are low-risk things that you do. Maybe validating the insurance claim reject that comes back. Maybe that's a safe place to play. Maybe you throw a human in the loop on top of it to help absorb that risk as well. So there are different areas. Organizations don't do one thing, but you should definitely be able to tier
the most sensitive ones versus the least sensitive, the ones that can accept hiccups in the AI. Again, marketing would be a great one. Something purely internal would be a great one. Maybe you've got some kind of chat engine tied to your intranet that helps your employees find things. That's all low risk; then work your way up to the high-risk stuff. You don't want to put out high-risk solutions that directly interface with customers. I think there was that famous story about an AI
airline website that had the tickets, and someone bought a ticket really, really cheap, because the AI bot didn't work correctly.
Bryan (25:24.578)
Yeah, absolutely. We've seen numerous examples of things like that. So some awesome, awesome points. I mean, I think it's even more imperative that you've got some strong technology leaders in place, like you, me, and other seasoned executives who have been playing with neural networks and this kind of stuff for 15, 20 years. And this AI buzzword is finally out in the open, because the layman can use it with ChatGPT now.
But we've got to really bake in that proper governance. I really appreciate your being here.
Terry Ziemniak (25:58.507)
I would tell you one more note: the technical expertise that you, Bryan, and other people bring to the table matters. But don't sell yourself short, Bryan. You've got the expertise as a seasoned executive. You're a business guy who knows technology. Pure technologists are great, but all we've been talking about for the past 20 minutes is business and risk: how do we solve business problems? How do we not get ahead of our skis? All those sorts of conversations.
You need the technology, but you also need the business expertise. And Bryan, you and other people can kind of marry those together. So if you are looking for a partner to help you through this, companies, I would tell you: make sure it's a business person who knows AI technology, not one or the other. It's got to be a combination of the two.
Bryan (26:43.294)
I think you're spot on. I mean, I think that's really the kind of leadership that matters. It's got to be rooted in clarity, responsibility, and looking at the people and the real impact that it can have. So Terry, thank you so much for being here today. I think your insights around everything we discussed, trust, risk, strategy are more relevant than ever. For those of you listening, if you found this conversation useful,
Terry Ziemniak (26:57.407)
Agreed.
Bryan (27:11.672)
Please go ahead and follow the show, leave us a review, or share it with a friend who's curious about navigating AI the right way. You can learn more about Terry at TechCXO or on his LinkedIn, and I'll drop that link in the show notes. Again, I'm Bryan Dennstedt, and I will see you next week. Remember, the future is not waiting. So learn, leverage, and lead. Thank you again, Terry.
Terry Ziemniak (27:37.76)
You're welcome.