Between Product and Partnerships

Developing standards in the rapidly evolving field of AI

Pandium Episode 27

In this discussion, Cristina Flaschen, CEO of Pandium, speaks with Heather Flanagan, Principal at Spherical Cow Consulting, and Shon Urbas, CTO of Pandium, about the complex realities of building integrations when identity, compliance, and data governance are on the line.

Heather’s Background and Identity-Centric Lens

Heather Flanagan draws on years of experience in identity standards, advising governments, nonprofits, and tech companies on secure identity flows. At Spherical Cow Consulting, she emphasizes that integrations are not just about API connections. They must preserve identity and policy context across systems. This lens shapes how she evaluates long-term integration quality.

Identity is the Data

In many cases, identity itself is the data being transferred. Systems are not just passing files. They are transmitting roles, permissions, and group memberships. A failure in handling identity correctly can result in unauthorized access or users being locked out. This is especially critical in sectors like government and education.
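
To make that concrete, here is a minimal sketch (ours, not from the episode) of a SCIM 2.0-style user record, where the roles and group memberships are the payload. The field values are invented for illustration:

```python
# A sketch of a SCIM 2.0-style User payload: the "data" being moved is
# roles, groups, and an active flag, not files. Values are illustrative.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "hflanagan",
    "active": True,
    "roles": [{"value": "editor"}],
    "groups": [{"value": "research-staff", "display": "Research Staff"}],
}

def effective_access(user: dict) -> set[str]:
    """Derive permissions from identity attributes; dropping or mis-mapping
    a single attribute in transit silently changes what the user can do."""
    if not user.get("active"):
        return set()  # a mis-mapped 'active' flag locks the user out
    perms = {r["value"] for r in user.get("roles", [])}
    perms |= {g["value"] for g in user.get("groups", [])}
    return perms

print(effective_access(scim_user))  # {'editor', 'research-staff'}
```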

The Hidden Work Behind “It Just Works”

Heather and Shon note that behind every seamless integration is complex logic. Connecting identity systems like SCIM, SAML, and OpenID Connect requires shared understanding across platforms. A major challenge is the assumption that systems interpret identity attributes the same way.
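
As an illustration of that hidden mapping work, the sketch below translates a few SAML attribute identifiers into SCIM-ish field names and fails loudly on anything unmapped. The mapping table is a simplified, eduPerson-style example, not a complete or authoritative one:

```python
# A hypothetical sketch of the translation logic hiding behind "it just
# works": two systems both speak "identity" but name attributes
# differently, so the integration must map them explicitly.
SAML_TO_SCIM = {
    "urn:oid:0.9.2342.19200300.100.1.3": "emails",        # mail
    "urn:oid:2.5.4.42": "givenName",                       # givenName
    "urn:oid:1.3.6.1.4.1.5923.1.1.1.7": "entitlements",    # eduPersonEntitlement
}

def map_attributes(saml_assertion: dict) -> dict:
    """Translate SAML attribute OIDs into SCIM-ish field names.
    Unmapped attributes are rejected loudly rather than dropped silently."""
    out, dropped = {}, []
    for oid, value in saml_assertion.items():
        field = SAML_TO_SCIM.get(oid)
        if field:
            out[field] = value
        else:
            dropped.append(oid)
    if dropped:
        raise ValueError(f"unmapped identity attributes: {dropped}")
    return out

print(map_attributes({"urn:oid:2.5.4.42": "Heather"}))  # {'givenName': 'Heather'}
```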

Integration as Infrastructure

Shon sees integrations as core infrastructure, not just product features. At Pandium, his team treats them as reusable, composable flows. Even with modern tools, reliable integrations depend on clear contracts around data formats, identity handling, and error recovery.
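
A rough sketch of that idea, assuming a deliberately simplified Record contract (this is our illustration, not Pandium's actual code):

```python
# Treating an integration as a composable flow: each step honors an
# explicit data contract, and the pipeline just chains steps together.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Record:
    external_id: str
    email: str

Step = Callable[[list[Record]], list[Record]]

def pipeline(*steps: Step) -> Step:
    """Compose steps into one reusable flow."""
    def run(records: list[Record]) -> list[Record]:
        for step in steps:
            records = step(records)
        return records
    return run

validate: Step = lambda rs: [r for r in rs if "@" in r.email]   # enforce contract
dedupe: Step = lambda rs: list({r.external_id: r for r in rs}.values())

sync = pipeline(validate, dedupe)
print(sync([Record("1", "a@x.com"), Record("1", "a@x.com"), Record("2", "bad")]))
```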

MCP: Open Source, Not a Standard

Heather and Shon discuss the growing hype around MCP, the Model Context Protocol, often mislabeled as a standard. Heather explains that MCP is an open source project from Anthropic, not a true standard, since it lacks formal security reviews, governance, and cross-industry consensus. Shon notes that while it may help drive adoption of existing protocols like OAuth 2, it adds little technical innovation and risks moving too fast without proper safeguards.

When Identity Meets Governance

Heather stresses that integration design must align with governance requirements. In regulated environments, even passing a field like email may require approval. Developers must understand what data can be shared and what must stay controlled.
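
One way to picture that is a default-deny filter, where each destination has an explicitly approved field list. This is a hypothetical sketch; the destination and field names are invented:

```python
# Field-level governance sketch: only fields explicitly approved for a
# given destination leave the system; everything else is withheld and
# reported, never silently passed along.
APPROVED_FIELDS = {
    "crm": {"external_id", "display_name"},
    "support_tool": {"external_id", "display_name", "email"},  # email approved here only
}

def release(record: dict, destination: str) -> dict:
    allowed = APPROVED_FIELDS.get(destination, set())  # unknown destination gets nothing
    blocked = set(record) - allowed
    if blocked:
        print(f"withheld from {destination}: {sorted(blocked)}")
    return {k: v for k, v in record.items() if k in allowed}

user = {"external_id": "42", "display_name": "Ada", "email": "ada@example.edu"}
print(release(user, "crm"))  # email never leaves for the CRM
```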

Building Trust Into the Stack

Trust requires more than encryption. It depends on visibility into what moved, when, and why. Heather advocates for logging and traceability as essential for debugging and for building confidence in identity-driven systems.
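
A minimal sketch of that kind of audit entry; the event names and policy reference are made up for illustration:

```python
# Structured audit logging: every transfer gets an entry recording what
# moved, when it moved, why it moved, and who or what initiated it.
import json
import time
import uuid

def audit(event: str, fields: list[str], reason: str, actor: str) -> dict:
    entry = {
        "id": str(uuid.uuid4()),
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "event": event,      # what moved
        "fields": fields,    # which identity attributes were involved
        "reason": reason,    # why it moved (the policy or trigger)
        "actor": actor,      # who or what initiated it
    }
    print(json.dumps(entry))  # in practice, ship this to a log pipeline
    return entry

audit("user.provisioned", ["email", "groups"],
      "scim-sync policy #7 (hypothetical)", "integration:hr-to-idp")
```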


For more insights on integrations, identity, and APIs, visit www.pandium.com.

Cristina Flaschen (00:17)
Hi everyone, and thank you for listening to our podcast, Between Product and Partnerships, where we talk about the challenges and what it takes to build integrations, platforms, and technology generally. Today we are super excited, or I'm super excited, because we have two people on here with me. We've got Heather Flanagan and Shon Urbas. Heather is the principal of Spherical Cow Consulting and has been involved in leadership roles with a ton of open standards organizations. Identity and standards are her jam. And we've also got Shon, who some of you may know; he's our CTO and co-founder here at Pandium. He leads the development of our embedded integration platform and our engineering team. He's got about two decades of experience in software, has a passion for standards, and has actually spoken about OAuth at a number of conferences, believe it or not. So thank you guys for joining us today, and we will go ahead and get started.

So we'll jump right into Heather's bread and butter here. What do you think are the current challenges in developing standards in the new age of AI? And what do you think are some of the challenges in determining the right organizations to help enforce these standards?

Heather Flanagan (01:21)
There's a whole bunch of interesting challenges in this space right now, not the least of which is the speed that people want to move at. People, companies, you know, from the expectation of end users straight up to the expectation of CEOs: we've got to get stuff out now. There's no time to waste. And that bugs me a lot because the standards process naturally slows things down.

right? Because it makes you say, wait, we know you can, but should you? Because it makes you go through like security evaluations. It makes you actually look at a full stack of what does it mean from the base layers of the internet all the way up to the user experience, right? It forces you to think more deeply, to build consensus. And that takes time. So I'm seeing just that increased tension of

Go fast versus, but please don't break things.

Cristina Flaschen (02:17)
Yeah, how do you help address that? I mean, I think that's a common problem in software development generally, right? And AI is definitely not excluded from that.

Heather Flanagan (02:23)
Yeah.

You know, honestly, I haven't figured that out yet, because one of the interesting challenges that I think happens in identity broadly and standards more specifically is we have our own language, right? In how we talk about things and what our priorities are and what we're trying to accomplish in the world. And that's all very, very important stuff. We need the language to know what we're talking about. We need to have the priorities and the strategy to know what we're aiming for. Unfortunately, we are not usually the business decision makers, right? We're not setting strategy for the whole company. And so our priorities don't always align gracefully with the business priorities. And our jargon, our language, which we do need, certainly doesn't translate easily into business requirements or financial requirements or whatnot. And so we see all of this, we see all the things we need to do, but

We are, in terms of like all the different silos that you might find in a business, in such a position that we've got to speak all the languages. And in standards development, again, you have to speak all of the languages to get a standard that's going to actually be implemented in the world. And that's really, really hard. That, like, sets the bar for people to participate higher than it does to be, say, a product manager or an executive or something like that.

The bar is definitely, definitely different for us. And I think that's why I'm like, I see the problem. I don't know how to resolve the problem.

Cristina Flaschen (03:48)
I mean, what, I guess tactically, how do you guys think about that? Like, is there a framework you use for thinking about the different user types, types of people, consumers, I guess?

Heather Flanagan (03:58)
Well, what often happens, not always, but what often happens is the people working on the standards will kind of put their head down and say, okay, we've caught rumor that there's this problem, or we know there's this problem because we've seen the security issues, so we're going to solve for it.

Hopefully someone will...

But that's like as far as it gets. It is a very, very reactive thing. There are efforts to, especially in some of the standards organizations, they do a better job at saying, no, you really do have to consider the end user. The W3C, for example, is particularly good at requiring accessibility reviews. Even for APIs, it's like, okay, no, but have you thought about how this is going to be presented?

Shon Urbas (04:16)
a reactive thing, usually, right?

Heather Flanagan (04:39)
users? Have you made it possible to have different choices in colors and different choices in how it's described and layout and whatnot that takes that into account somewhat? But none of that then turns into say, is there a business need? The business need is almost implied by people being funded by their companies to work on things.

Cristina Flaschen (04:58)
Shon, I saw you nodding. What are your thoughts?

Shon Urbas (05:00)
No, I mean, it's sort of tangential, but you were talking about how language is really important in the standards process. And I think one of the things we talk about, Cristina and I, is just, like, sometimes the jargon around different technologies, and AI is right in the midst of all that. Like, we'll talk about it later, but MCP everything or like...

Heather Flanagan (05:19)
Mm-hmm.

Shon Urbas (05:21)
LLM, ChatGPT, all the different words and verbiage, and having to marry that also with our really difficult concepts in auth. Our company, we do a lot of stuff around authentication and stuff. I still feel like a novice when talking about the aspects of the specifications that go into these authentication protocols and stuff. So bridging the two, there's just, it's...

It's kind of scary to think about actually.

Heather Flanagan (05:46)
It's scary. You know, someone asked me once, if I could pick one trait that makes for a successful identity person and a successful standards person: it's curiosity. It's that willingness to come in and say, huh, how is this working? Why is this working? Does this actually work? How can I help make it better? You know, the asking questions and coming into it.

is the single most important trait I think you have to hold on to as you're engaging with all of it. Because otherwise, you're 100% right, it's so easy to get overwhelmed. Once upon a time, long ago in a galaxy far, far away, there were people that read every RFC that came out, because they could. There were only a couple dozen a year. And that became like a hundred a year. And then that became a couple hundred a year.

And then there's other standards organizations and they're producing standards, right? And then there's the, you know, analyst firms that are coming in saying, we're not working on standards, but we're going to talk about the trends and that should hopefully inform the standards or vice versa. No one can keep track of everything that's going on at this point, which is also, you know, it's a great opportunity for people like me to say, great, where is my curiosity going to take me? Cause there was no end to where I could go, but

It's also scary because, well, that means there's probably a lot of reinventing wheels of, do you know who's working on delegation? What level of delegation? How is that happening? Yeah.

Shon Urbas (07:08)
Just to add to the organizations and stuff, reporting and compliance is a whole other layer on top of all that. And there seems to be a lot of overlap. So we just went through a SOC 2...

Cristina Flaschen (07:18)
Yeah, I mean, I think it's interesting too, what you guys are saying, Heather, you especially, that...

When I think of standards, and I think a lot of our listeners too, you think about like this rigidity and something that's like well established and potentially like kind of dry. And what you're describing is actually a lot of creativity, right? It's being able to come in and make an assessment and then come up with your own ideas about like how that should be translated and communicated, which is a different, a different take maybe on what standards actually are. And, you know, in my experience, people that work in technology, especially engineering, like deeply technical folks are

Heather Flanagan (07:45)
Mm-hmm.

Cristina Flaschen (07:52)
creative. There's a little bit of, like, artistry involved in all of this stuff, and the combination of that artistry and then trying to create a box within which folks are supposed to work, I'm sure that creates some tension.

Heather Flanagan (08:04)
It does, because, you know, as any artist will tell you, you take it personally when someone says your art is ugly. You know, it's a, wait, no, but I put my heart and soul into this. I've given a lot of thought to this. Like, look at this thing. How can you say it's not pretty? Engineers actually have exactly the same reaction more often than not, unless they've gone through a lot of training or have been beaten up a lot in meetings, you know.

Your idea may be all very well and good, but you're coming at it from one perspective, by default yours, and the standards process forces you to consider other use cases and other scenarios. And I think that's one of my favorite things about the standards process, actually.

Shon Urbas (08:48)
I was gonna add to that. I think what you sort of mentioned, Cristina, with respect to the way... it's almost the inverse. The standards exist and there are these codified rules, but they're also made to allow the most people to use them. If no one's gonna use a standard, it's not useful. And so like...

There are so many different options. Like, to talk about OAuth 2 as an example of a standard, that was made by a giant committee of people coming together and saying, this is what I need for my application, this is what I need for my app. And it actually makes it confusing in some ways. A new term I learned at the last conference I went to was the idea of a profile. And so there are all these different... that's like a whole brand new word to me.

When you're setting up security and you're setting up your IAM and all the different options that are unique to you, that's thought of as a profile. And you can have standards around the options that go into fulfilling a standard. It's just, I don't know, there's some meta magic there too.

Heather Flanagan (09:50)
For a specification that is like, we'll go ahead and pick on OAuth, because who doesn't love picking on OAuth really at the end of the day? It has to be all things to all people.

A profile can never expand on that. It can only narrow down. It can only take a subset of that and restrict it. If you want to expand, then you're talking something else entirely. I've seen profiles used to very good effect in different sectors. Like, research and education tends to like to do profiles of things because they've got such a very interesting

set of use cases. I've also seen them like generally when you're talking about not a specific sector but a particular

set of requirements where you say, we have a high assurance need. Social media is not at all our use case. We have something where we absolutely have to know who's who, what's what, when, when. And then you get a high assurance profile, for example. So yeah, the profiles are a very useful way to encourage interoperability when you've got a group that has constrained needs.
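
A rough sketch of the narrowing Heather describes, in Python: a profile can only subtract from the base spec's menu of options, never add to it. The option names and limits here are illustrative, not drawn from any published profile.

```python
# "A profile can only narrow": start from the full menu of OAuth 2 grant
# types and subtract, as a high-assurance profile might.
OAUTH2_GRANTS = {"authorization_code", "client_credentials", "implicit",
                 "password", "refresh_token"}

HIGH_ASSURANCE_PROFILE = {
    "grants": {"authorization_code", "client_credentials"},  # subset only
    "require_pkce": True,            # illustrative extra constraint
    "token_lifetime_max_s": 300,     # illustrative extra constraint
}

def conforms(profile: dict) -> bool:
    """A profile is valid only if it restricts the base spec."""
    return profile["grants"] <= OAUTH2_GRANTS  # subset, never superset

print(conforms(HIGH_ASSURANCE_PROFILE))  # True
```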

Cristina Flaschen (10:50)
Well, let's extend this out from talking about open standards and profiles to everyone's favorite, MCP. We hear all about MCPs now if you're on LinkedIn, and about them being labeled as an open standard. What are your thoughts on that as a standard? Heather, we'll start with you.

Heather Flanagan (11:07)
Shon, would you like to start? Because I've got a soapbox right here, and I'll just hop right on it and let it go and fire away.

Cristina Flaschen (11:12)
I mean, whoever wants to go. Shon, do you want to go first?

Shon Urbas (11:15)
I'm actually interested to see what Heather's gonna say here.

Heather Flanagan (11:17)
Ha ha ha!

Cristina Flaschen (11:18)
Heather, it's back to you.

Heather Flanagan (11:19)
Okay, so I recently wrote a blog post about this and it is one of what I call my rage blogs. I mean, it doesn't read like I'm raging, but trust me, what was underneath it was me going, people, you're driving me insane. MCP, the Model Context Protocol, is kind of a universal adapter that allows AI systems to interface with any other system.

in the same way that an adapter when you're traveling, you can plug in one plug and it will go to a different plug in the wall and electricity will still flow and isn't that glorious? Yes. MCP is a very useful construct for that. What it is not is a standard in any sense of the word. It is an open source project in that the source company, which is Anthropic, bless them, they're doing the right thing. I don't object to MCP.

I object to calling it a standard. They made it an open source project so anybody can see the code. People can submit issues to the code. People can submit pull requests to the code saying, hey, I've got this idea to enhance this thing in this manner. That's all great. What isn't happening in any kind of recognized structured manner is, okay, did you, you you've got an open source license.

but have you done any more IP, IPR checks than that? What about a security evaluation, a proper security evaluation like you would see in the IETF or the W3C or the OpenID Foundation? What about an accessibility review? You know, even APIs have suggestions for accessibility. What about privacy reviews? All of that stuff feeds into what something would be like in a...

in an actual standard. There is a steering committee. I actually learned about that since I wrote that post. But no one can tell me what the steering committee does. What are they steering? So I can't quite tell if this will become a standards organization in the same way that the FIDO Alliance became a standards organization because there was a bunch of entities that all agreed we need to work together to develop.

this passwordless set of specifications, and it didn't fit anywhere else. Great. That's how the FIDO Alliance came about. Maybe it wants to do its own form of MCP Alliance, I don't know. Maybe it needs to find a home in something that already exists, I don't know. But whatever it does, however it does it, it is not at this time a standard.

Cristina Flaschen (13:39)
Do you think that that's the direction that it should go? Or do you think there will be like a new standard that will spin out of this?

Heather Flanagan (13:44)
Yes.

Well, that's actually a really interesting question because MCP isn't the only universal AI to system adapter in the world, right? There's also, I think it's called A2A out of Google. And I think Microsoft may even have something as well. I don't know enough as to whether one of these should serve as the model for the standard or if there's like that underlying thread.

that says here are the principles that this API needs to build on and then you can have different APIs that do that and then have some kind of value add that makes them different. I do not know enough to answer that particular question.

Cristina Flaschen (14:20)
It might also not be mature enough as a technology to even have an answer.

Heather Flanagan (14:22)
It might not be. I mean, a lot of the standards do require two independent implementations in order to be considered a standard, because implementation experience is important to know. In this case, it's like, what are you even standardizing? So I don't object at all to this having started as an open source project. My concern is the rate it's being adopted without sensible controls around it.

It's moving too fast.

Shon Urbas (14:49)
I'll add that a lot of technology starts this way. Like, especially back in the day, the first browsers were not open. Well, they were open source, but there wasn't a standard; the W3C came together after. Or actually, more specifically, like ECMAScript, right? Like, JavaScript was two competing implementations between Netscape and Microsoft. I just viscerally remember the...

Heather Flanagan (15:10)
Mm-hmm.

Shon Urbas (15:14)
event models being backwards compared to each other for whatever reason. One was top-down, one was bottom-up, in JavaScript. Anyway, I have a couple points about MCP in terms of raging, and then I'll talk about it in a more positive light. It's like, I don't understand the need for it. We have all these API standards already, and there's nothing specifically that's magical about MCP. It just seems like jargon and a thing that was sort of added on to make things a little bit more difficult for people.

It felt like a way of taking control of something, right? So people were coming up with their own ways of connecting LLMs to data sources, and Anthropic said, no, we need you to do it our way. Maybe there's something there, I don't know. But really interesting, and this is a total counterexample, this came up on a call earlier today. We're talking to someone and they're talking about how their company doesn't have OAuth 2 yet, but they're...

they use API keys, which actually in their use case, I don't think is wrong, but that's a whole other thing. They don't have any OAuth 2 flows for any of their integrations. And so they are actually implementing it now, because they've been mandated to implement an MCP server, and MCP as a part of its implementation actually requires OAuth 2. I think they actually got the guy who did OAuth 2 to sort of put his rubber stamp on this. And I just found that interesting. So it's like actually helping adoption of standards in some ways, even though it's still nascent and perhaps unneeded. It's just JSON-RPC. Like, I don't get the magic of it. And JSON-RPC was left behind for a reason. It's very verbose.
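
For readers who haven't seen the wire format Shon is referring to: MCP messages are JSON-RPC 2.0. Here is a sketch of what a tool invocation looks like, with a hypothetical tool name; treat the exact fields as illustrative of the spec rather than definitive:

```python
# An MCP-style tool call is a JSON-RPC 2.0 request: an envelope, a
# method name, and parameters naming the tool and its arguments.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_issues",                      # a hypothetical tool
        "arguments": {"query": "auth bug", "limit": 5},
    },
}
print(json.dumps(request, indent=2))  # verbose on the wire, as Shon notes
```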

Heather Flanagan (16:46)
Mm-hmm

Shon Urbas (16:48)
I could go on and on about the technology.

Heather Flanagan (16:50)
I sense a rage blog in your future.

Shon Urbas (16:53)
Yeah, so many. That said, what it has allowed these LLMs to do is actually pretty cool. Like, some of the stuff we're able to do in just the last few months, where now you're talking to your Git repository or you're talking to some analytics framework. It makes it easier to interrogate systems like that.

Cristina Flaschen (16:53)
So, so many, so many rage blogs. And I love that term. Go ahead, Shon.

Heather Flanagan (16:55)
Hahaha

Shon Urbas (17:16)
Did it need to be a special standard, or protocol, or whatever you want to call it, an implementation?

Heather Flanagan (17:21)
Well, the good news, the argument for it being such a thing, is the ubiquity of how it's rolling out. I would much rather that than everybody doing their own little unique thing. It's like the old rule: never, never invent your own cryptographic algorithms. Don't do it. It's just a bad idea. This might be the same kind of thing. You know, don't

invent your own MCP because then no one's going to look at it. You're going to miss the security holes in it.

If it's that critical that you need this kind of thing in your life, do something that has more eyes on it.

Of course

Shon Urbas (17:54)
Well, you bring up security. MCP has its own security constraints. I know that's...

Cristina Flaschen (18:01)
Yeah, let's talk about that. The concerns regarding security and privacy, now that not only is AI being heavily adopted in some organizations, but I feel like everyone I speak to is hooking up everything that they've ever touched to their AI models. And working in the space that we work in, that makes me really uncomfortable personally. But I'm curious what you guys think. Heather, if you want to start.

Heather Flanagan (18:23)
So I strongly differentiate in my mind between people that are doing this for gen AI versus people that are doing this for agentic AI. If people are doing this because they're trying to get a better grasp on and analysis of the data that they have, and can verify it in separate ways, great, fine, live like you want to live. I like gen AI quite a bit, actually.

If you're doing it because you're trying to establish AI agents in your life, that's a big problem, because there are so many things we don't have solutions for, we don't have best practices for. I'm looking at you, delegation, right? We don't know how to do delegation safely in an agentic AI environment.

So what you have to do then instead is say, okay, we'll just make all of the security features for the AI agent more open, because we can't do granular authorization. And that's a bad idea. We don't like that idea at all. But that's what we're seeing people set up to do, things like that. And it's more than a little terrifying.
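
A small sketch of the trade-off Heather is pointing at: when granular, delegated authorization isn't available, agents get handed broad grants. The scope strings below are invented for illustration:

```python
# Broad versus least-privilege grants for an agent. The difference
# between the two is the attack surface if the agent misbehaves.
AGENT_NEEDS = {"calendar.read"}          # what the task actually requires

broad_grant = {"calendar.read", "calendar.write", "mail.read",
               "mail.send", "files.read"}            # "just make it open"
narrow_grant = {"calendar.read"}                     # least privilege

def excess(grant: set[str]) -> set[str]:
    """Everything an agent could do beyond what the task needs."""
    return grant - AGENT_NEEDS

print(excess(broad_grant))   # surface available if the agent is compromised
print(excess(narrow_grant))  # set(): nothing extra to abuse
```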

Cristina Flaschen (19:29)
I don't disagree. Shon, curious your thoughts on, yeah, some of the security and privacy situations with some of this stuff.

Shon Urbas (19:39)
It's really interesting, right? If you listen to my earlier argument, and I know I rambled, I'm saying MCP is not really different than any other API that you have access to. I think it's the accessibility that people are gaining to the APIs. The data has been out there. These security constraints are based on things that have already existed. So like,

MCP is based on OAuth 2, and you can limit it with some scopes, but OAuth 2 isn't really well-defined enough for this. So if this is a problem for AI-specific things, it's also a problem in the wider world, I think. And it's not an easy question. I'm not saying there's an easy answer. It's an easy question. It's not an easy answer. It's the idea of, like, how do you attribute who has access to what information and...

Like, do you tag every piece of information? That tagging has a cost, right? Like, there's just so many different things to think about. What I find really actually positive is that people are now really thinking about it in a way. Like, man, yeah, we have all this data and it's just flowing, but is that okay? I mean, at least we're talking about it here. I don't know if everyone feels that way.

Heather Flanagan (20:45)
Yeah,

they're thinking about it, but they're thinking about it after things are starting to roll out. And now we get into this, we're back to the speed because the speed implies scale. Things are going very fast, they're exploding, which means the scale is getting really big. And for those people that are living by the move fast, break things model,

think of the amount of technical debt they're setting themselves up for, right? We don't have the answers to delegation. We don't have the answers to agentic AI and all it can do versus should do in the environment. And yet they're rolling this out. So are they prepared to back up? I bet you they're not. Are they prepared to do something entirely different?

I bet you they're not. So it's like we're just saying, technical debt, here, have more. Bad ideas.

Cristina Flaschen (21:29)
I mean, is it technical debt or is it by design? Like, these are some of the questions, right? It's like, to your point, can you put that genie back in the bottle? Once it's like, okay, cool, this model was able to do all of these things because it had all of this access, now we're gonna try to pare that back. There's going to be limitations to the functionality when you do that. I don't know.

Heather Flanagan (21:43)
Mm-hmm.

Right? Paring it back is going to be a cost. And they'll be like, no, wait, we already spent $1 million on this thing, right? And they're like, well, we already put so much into it. We don't want to invest more just to make it less functional. What are you even thinking, coming from the business perspective? Right? So yeah, I call that setting up for technical debt: a whole load of stuff that you are not going to easily change.

Cristina Flaschen (22:14)
Yeah, I guess for the listeners, because I think we're all aligned on this, but what are the risks to allowing this sort of like broader global access to an AI agent versus like a human agent, an actual human being?

Heather Flanagan (22:29)
Whew, okay. So one of the things that jumps out in my mind, and I'm literally pulling this at random: things like the privacy considerations. Eventually we're going to figure out that AI models like this can recreate who is asking and all sorts of information. You can basically track people and build profiles about them,

Heather Flanagan (22:56)
because of the sheer number of data sets that this, the agentic AI might potentially have access to. And if then the AI space is compromised, all of that is just a huge honeypot of information that can be leaked about people and what they were doing and how they were doing it and the credentials they required to get there from here. That's like one aspect of it. If you wanna go full on conspiracy theory,

let's think about, right, so now you've got somebody being able to access, you know, via the AI agent, all of these different systems. What happens when they put information into those systems to force hallucinations, to force incorrect answers, to force bad behavior? If you're not prepared to handle that, you know, I think what companies are going to do is they're going to try and go back to ye old...

hard and crunchy outside, soft and chewy inside model of security. Okay, we don't know how to fix that, so we'll just firewall the heck out of it and that will be fine.

Shon Urbas (23:50)
That's a really fun way of putting that. That model... I grew up in that, with VPNs and everything. What was that kind of... I think the...

Shon Urbas (24:00)
I think it's less nefarious. It's going to be more mistake-based. I mean, the way I think about agentic AI specifically, it's like a junior engineer or some assistant. When you give that assistant access to all your accounts, if it's a real human, you know, they're not as fast as what an AI agent could potentially do, but they're going to make the same mistakes. And so I just wonder if it's more like, you know, you're going to have some really dumb things happen. If you give access to some kid off the street and you're like, hey, go book me a flight, they may end up booking you the opposite flight, or they may use the wrong credit card, or they may, I don't know, go open up a new credit account in your name, because they have access to all this stuff.

It's the speed that they're going to be able to do all that stuff at. So it all comes down to verification. And this actually goes to your earlier point, Heather. It's like, I'm glad there's going to be so much tech debt being built up, because it means engineers are still going to have jobs at the end of all this. It's like, the AI is not going to replace us. It's not going to take our jobs, so to speak, but some of...

Heather Flanagan (25:01)

Well, I am much less optimistic along all of those lines. And I would actually argue that this isn't all about verification. This is all about risk management. And humans are traditionally extraordinarily bad at risk management. But everything we're talking about... So do I think nation state actors are going to get involved in this playground? Absolutely. Do I think the majority of issues out there are probably going to be just

Stupid little mistakes on an individual level? Absolutely. What should a company do? Risk management. What is your risk appetite? You're not going to be able to avoid all risks. So you really need to sit down with some people who've been trained on how to do this because you're not going to be able to do it naturally yourself without that training and figure it out.

Cristina Flaschen (25:44)
Yeah, and I think part of the risk here too is some of the black box nature of some of these, not the models, but the agents. Especially if you're contracting with another piece of software that is then using an agent, it's a lot of degrees separated. And even something like auditing: I don't know if there's great auditing in most of these tools. Like, what did it actually do? You know, I've used a lot of agents myself and played around with stuff. And sometimes after, you know, five minutes of thinking, it'll spit something out, and I'm like, I have no idea how you got to that place. And sometimes it will tell you and sometimes it won't. And what I'm doing is very surface-level stuff, right? Like, I'm not going deep. I guess for both of you guys, like, when do you think...

Heather Flanagan (26:03)
Mm-hmm.

Not yet. Not yet.

Cristina Flaschen (26:27)
Or how do you think the industry generally will attack that problem, for people like maybe like us who are like, I wanna know more about what it's doing, in the absence of being able to do more permission-based control? Maybe the question is, do you think that more people want that? Is that someplace the industry is gonna have to go? To your point, Heather, you feel like it's not yet. Shon, go ahead.

Shon Urbas (26:48)
I was gonna say, I think the technology that leads to the point where these AIs are selling to you and advertising to you is the same technology you're gonna need to sort of log all this stuff. And so, you know, I believe in the AI companies wanting to make a dollar, and there's gonna be a time when you're gonna start doing a query and it's gonna offer you a biased response back. Maybe like, hey, what is a good movie to see? I don't know, that's such a stupid example. But anyway, it's just a basic query, but all that stuff can be leaned towards you. As Heather said, they're tracking every single prompt we send, eventually. If they're doing that, we can also do some verification. I don't know. That's the dream anyway.

Heather Flanagan (27:29)
So in answer to the question of when do I think things are going to change: in the time-honored tradition of never waste an emergency, I think it's going to take an emergency. I think it's going to take a massive breach of some kind. I think it's going to take something that would scare the average person. I know people who have started whole companies because of how their biometric information was lost through the US federal government, right? It's going to take a massive emergency along those lines to start shifting things around a little bit, I think. Otherwise, it's awfully convenient. So yay, team.

Cristina Flaschen (28:07)
Yeah, I've been kind of saying that behind closed doors too. I feel like we're gonna... I hope I'm wrong, but it'll have to hit, like, the CNN-style newsfeed, like that level of breach. It's not gonna be those little socially engineered startup incidents where somebody does a bad thing. It's gonna end up being a big one, and...

Heather Flanagan (28:17)
Mm-hmm.

Cristina Flaschen (28:27)
Yeah, I don't know. I don't know what it's gonna be, but I'm kind of in between the two of you guys, I think. I think there's gonna be some honest mistakes, but I also feel like anytime you have technology like this, there's gonna be people that are trying to capitalize on it in a malicious way. And the fact that there's, not even so little understanding about how this works, but that it's intentionally opaque, right? Always makes me a little bit nervous. But I'm also tinfoil hat, maybe, about this. I think all of us have been around tech for too long and seen the cycles, maybe. Shon, go ahead.

Shon Urbas (28:57)
The two things I'll say is, I think LLMs have a way of being able to influence people in ways that haven't been really thought of yet. Or if they have been thought of... you know, if you really think about the implications of some of the things that an LLM could drive you to do. People are idiotic enough to follow GPS into a river now; what's stopping an idiotic LLM from driving you off the road, or someone maliciously placing thoughts into there and trying to incept you? I know you guys are talking about a big data breach, but the skeptic in me, or the salty person in me, I'm not saying the right word, it's like, we have so many data breaches... curmudgeon, there we go. The curmudgeon in me is like, we have so many data breaches now that it's noise.

Heather Flanagan (29:35)
curmudgeon. You're looking for the word curmudgeon.

Cristina Flaschen (29:37)
Hahaha.

Shon Urbas (29:43)
I don't know if anyone's gonna care when China gets all of ChatGPT's data. Like, what are they gonna do with it?

Heather Flanagan (29:43)
Yeah.

No, the breach is... so you're right. We have been, we, the collective we, the tech industry of the world, aided and abetted by regulators, have done a fantastic job of desensitizing users to most things. We've desensitized them to permission prompts, to consent requests, to notifications. All of that has become, you know... There are the understandable regulatory requirements of, if you've got an identity breach and people's data has been lost, you have to say so. Great. Well, yes, now everybody's saying so, and I'm like, yeah, whatever. I just assume my data is now in a database somewhere that's being sold on the dark web. So whatever. That's not actually the kind of breach I'm thinking of. Because yes, you know what? Your data, it's already gone.

Your user names, your passwords, your social security numbers, your national ID numbers, where you lived when you were 12, all of that is out there and discoverable. When it turns into money, when we see massive, large-scale identity fraud, to the point that my mom and your mom and your kid brother and everybody has noticeably lost money into some black pit because of the stuff in AI, that's going to be the moment.

Cristina Flaschen (30:44)
Mm-hmm.

I agree. And I'm the first person to say, yeah, everybody already has my data. Like, Google knows everything about me at this point, especially with how long I've been on the internet. They literally know everything about me. If you wanted to steal my identity, you absolutely could. But yeah, I think it's the downstream effect of that that will cause people to be upset. And then that is what trickles up to, like, where did all that data come from? And eventually goes back to the source, right? Like, I don't even...

I get emails and even physical mail all the time being like, you were affected by XYZ data breach. And I'm like, I don't even remember ever using that tool, but, okay, what are you gonna do at this point? So with that in mind, I think we have time for one more question, or one more point, and this might be a gotcha: how do you think organizations can address that trade-off, right? Of allowing fast, hopefully not broken, AI functionality while maintaining...

Heather Flanagan (31:32)
Mm-hmm.

Cristina Flaschen (31:48)
privacy, security, traceability maybe? Like what would be y'all's recommendations? Heather, you wanna start?

Heather Flanagan (31:55)
I come back to, you know, get people either trained or hire people who actually understand risk management and start making some thoughtful decisions. And the more catchy thing I would say is it's called bleeding edge for a reason. Think really hard before you, you know, donate your corporate blood to this thing.

Cristina Flaschen (32:14)
I like that. Stitch it on a pillow. Shon, what are you gonna come up with to follow that?

Heather Flanagan (32:16)
Ha

Cristina Flaschen (32:19)
Yeah, you just log

Heather Flanagan (32:19)
But there's no pressure.

Cristina Flaschen (32:20)
off.

Shon Urbas (32:21)
I think my overall arc in my thoughts around this is more like, if you're not already doing it now, AI is not going to make a difference in your thought process on this. So just try to stay away from those companies that don't have a SOC 2, for example, or where there isn't some risk management. Or, yeah...

I don't have a good one for that. The bleeding edge...

Cristina Flaschen (32:42)
I wonder if we're gonna get a new type of security certification that includes standards, or guidance, for how you deal with governance or something with your AI models. Does it? I don't know.

Shon Urbas (32:45)
That's what I was thinking: SOC 2 AI.

Heather Flanagan (32:57)
Are you sure it doesn't exist already? I mean, that sounds like a... well, of course there will be. Because one, it's needed, and two, someone will make money off of that.

Cristina Flaschen (33:06)
There's probably a startup right now that's working on that. If you're listening, dear listener, and that is going to be your next company, hit me up. I want to be in the footer somewhere. Cool. Any final thoughts from either of you guys? This has been super fun. Heather, it's been awesome chatting with you. Anything you want to leave our listeners with?

Shon Urbas (33:08)
You should do it.

Heather Flanagan (33:24)
No, no, I mean, I'm often referred to as the cheerful voice of fear, uncertainty, and doubt, because I love this stuff. It's so much fun. And yes, it is terrifying and it's a lot, but I would much rather the world start thinking about it and approaching it with curiosity than either A, panicking and trying to turn it off, or B, sticking their head in the sand.

I write about this stuff all the time. I have a blog that I post to every week, and I even started an audio blog for people that don't want to read. Twelve minutes, no more than 12 minutes. Yay.

Cristina Flaschen (33:52)
We will link to that. We will link to that, I'm sure, when we

Shon Urbas (33:53)
We will definitely like

Cristina Flaschen (33:55)
post this. Shon, anything from your side? Any parting, curmudgeonly thoughts?

Shon Urbas (34:00)
I started out as a huge AI skeptic, and I feel like I've been slowly moving more towards, like, man, there's some real magic here. At the same time, it's just a tool. It's a thing that gives you leverage and makes you do more work, more productively. But all the same problems you had before, I think, exist after it. Now it's just more about... I think Heather put it really well. It's exploding with all this stuff. Like, we've hit a new inflection point. So the next few years are going to be interesting.

Cristina Flaschen (34:28)
And I agree with you, Shon, too, about the speed. It's like, all the problems that exist now and have existed with technology, you'll potentially continue to see, but they may be magnified because of the speed, right?

Shon Urbas (34:39)
It's the story

of the printing press

Heather Flanagan (34:41)
So if anyone really wants to start diving into what the rate of change means for things like market dominance and whatnot, look up a book called Clockspeed. It's not a new book; it was actually written, I think, a couple of decades ago. But Clockspeed, that's exactly what it talks about: how fast supply chains change, and how companies manage that speed, is really a large part of what dictates whether they can have and keep what market dominance they've got. So, book recommendation.

Cristina Flaschen (35:11)
Ooh, we will also put that, we'll make that reference when I'm sharing our clips. Heather, thank you so much for spending time with us. This has been really fun. I'm sure we could do this forever; you and Shon could definitely jam on this stuff for a really long time. But I know you spend all day talking about and thinking about this stuff, so really appreciate you being here. Shon,

as always, thank you for being here. I'm sure I will talk to you in five minutes about something else. And all of our listeners, thank you guys so much for tuning in. If you are interested in learning more about partnerships, APIs, product, SaaS, everything, you can check out our website, pandium.com. We've got a blog that links to other podcasts, and we'll also link to Heather's blog with this video, so you can check her out too. So thanks, everybody. It's been great, and I hope you enjoy the rest of your day.