
Welcome to Uncovering Hidden Risks, a podcast series focused on identifying the various risks organizations face as they navigate the internal and external requirements they must comply with.
 
We’ll take you through a journey on insider risks to uncover some of the hidden security threats that Microsoft and organizations across the world are facing.  We will bring to the surface some best-in-class technology and processes to help you protect your organization and employees from risks from trusted insiders.  All in an open discussion with top-notch industry experts!

Learn all about Microsoft 365 Compliance solutions here. Stay up to date by following our Insider Risk blog here.

May 26, 2021

Words matter. Intent matters.  And yes, most certainly, punctuation matters.  Don’t believe us? Just ask the person who spent the past five minutes eating a sleeve of cookies reflecting on which emotion “Sarah” was trying to convey when she ended her email with, “Thanks.”

In this episode of Uncovering Hidden Risks, Raman Kalyan, Talhah Mir and new hosts Liz Willetts and Christophe Fiessinger come together to examine the awesomely complex and cutting-edge world of sentiment analysis and insider risks. From work comm to school chatter to social memes, our clever experts reveal how the manifestation of “risky” behavior can be detected.

 

0:00

Hello!: Meet your new Uncovering Hidden Risks hosts

2:00

Setting the story: The types and underlying risks of company communication

6:50

The trouble with identifying troublemakers: The link between code of conduct violations, sentiment analysis, and risky behavior

10:00

Getting the full context: The importance of identifying questionable behavior across multiple platforms using language detection, pattern matching and AI

16:30

Illustrating your point: How memes and GIPHYs contribute to the conversation

19:30

Kids say the darndest things: The complexity of language choices within the education system

22:00

Words hurt: How toxic language erodes company culture

26:45

From their lips to our ears: Customer stories about how communications have impacted culture, policy, and perception

Raman Kalyan:

Hi everyone. My name is Raman Kalyan, I'm on the Microsoft 365 product marketing team, and I focus on insider risk management at Microsoft. I'm here today, joined by my colleagues, Talhah Mir, Liz Willetts, and Christophe Fiessinger. And we are excited to talk to you about hidden risks within your organization. Hello? We're back, man.

Talhah Mir:

Yeah, we're back, man. It was super exciting, we got through a series of a, a couple of different podcasts, three great interviews, uh, span over multiple podcasts and just an amazing, amazing reaction to that, amazing conversations. I think we certainly learned a lot.

Raman Kalyan:

Mm-hmm (affirmative). I, I learned a lot. I mean, having Dawn Cappelli on the podcast was awesome, talked about different types of insider risks, and what I'm most excited about today, Talhah, is to have Liz and Christophe on the, on the show with us 'cause we're gonna talk about communication risk.

Talhah Mir:

Yeah, super exciting. It's a key piece for us to better understand sort of sentiment of a customer, but I think it's important to kind of understand that on its own, there's a lot of interesting risks that you can identify, uh, that are completely sort of outside of the purview of typical solutions that customers think about. So really excited about this conversation today.

Raman Kalyan:

Absolutely. Liz, Christophe, welcome. We'd love to take an opportunity to have you guys, uh, introduce yourselves.

Liz Willetts:

Awesome, yeah, thanks for having us. We're excited to kind of take the reins from you all and, and kick off our own, uh, version of our podcast, but yeah, I'm, I'm Liz Willetts. I am the product marketing manager on our compliance marketing team and work closely with y'all as well as Christophe on the PM side.

Christophe Fiessinger:

Awesome. Christophe. Hello everyone, I'm, uh, Christophe Fiessinger and similar to Talhah, I'm on the engineering team focusing on our insider risk, um, solution stack.

Raman Kalyan:

Cool. So there's a, there's a ton, breadth of communications out there. Liz, can you expand upon the different types of communications that organizations are using within their, uh, company to, to communicate?

Liz Willetts:

Yeah, definitely. Um, and you know kind of as we typically think about insider risks, you know, there's a perception around the fact that it's used, um, and related to things like stealing information or, um, you know, IP, sharing confidential information across the company, um, but in addition to some of those actions that they're taking, organizations really need to think about, you know, what might put the company, the brand, the reputation at risk. And so when you think about the communication platforms, um, you know, I think we're really looking to collaboration platforms, especially in this remote work environment-

Raman Kalyan:

Hmm.

Liz Willetts:

... where employees, you know, have to have the tools to be enabled to do their best work at home. Um, so that's, you know, Teams, uh, Slack, Zoom, um, but then also, you know, just other forms of communication. Um, we're thinking about audio, video, um, those types of things to identify where there might be risks and, and how you can help an organization remediate what some of those risks might be.

Raman Kalyan:

Awesome. And Christophe, as we think about communications risk more broadly, what kind of threats have you started seeing, um, organizations being more concerned about?

Christophe Fiessinger:

Yeah, so exactly to what you just mentioned and, and Liz, so again, there's two, two main use cases; fulfilling regulatory compliance, and the regulators definitely have been putting more scrutiny and, and fining, uh, organizations large and small that don't abide by those, uh, laws, whether it's in the US, whether it's in Europe and Canada. So there's definitely an increase in enforcement. So definitely, you know, a common use case that we're seeing over and over is, with the, uh, recent events and the pandemic, banks wanna enable their workforce to work remotely, and one of the tools that they need is the ability to do meetings and voice and, and chat. As soon as you introduce a n- a new tool like Teams for productivity, you need to, uh, look at, uh, patterns that would, um... that fall under those regulations, things like insider trading and collusion.

            So definitely, where the change in the workforce and, and as being remote has accelerated adoption of Teams, certainly people want a, uh, a way to look at those behaviors and, and avoid getting fined. And then the parallel work stream, which is also what, uh, Liz was mentioning is, you know, there has been, um, significant change and that has naturally put some stress. Uh, it could be personal stress, you know, my kids are at home screaming or the dog or whatever, um, maybe I don't have a, uh, a nice room like here today where I can have a podcast, you know, maybe I'm, maybe I'm sitting in the kitchen and my young kids don't understand what it means to hush. So that puts personal stress on me.

            Maybe I'm stressed because I don't know if I'm gonna have a, a job tomorrow, maybe I've already been [inaudible 00:05:15]. That potentially could trigger me to, to forget that the tool I'm using to get work done and to communicate with my peers, there are some rules of engagement, if you like, and there's things that are not acceptable per the employee, uh, code of conduct. And again, all this stress and the fact that maybe I'm lying on my couch gives me the false sense that it's casual, but now I'm having a meeting with Liz and Raman, and there's certain language that's just not acceptable at our organization.

            So I think that's, that's a new trend that we're seeing that's also backed up by, by regulation in certain countries, um, to make sure there's no abuse over language. And the most common use case is, uh, in the world of education: the, the, the district, the school, the principal are responsible, uh, if bullying or, or misbehavior is reported, and to really help mitigate so it doesn't escalate in- into something bad. So, uh, those are examples of what we're seeing.

Talhah Mir:

[inaudible 00:06:19], Christophe, um, you know, you and I have talked a lot about this sort of interplay and, and looking at, um, these communication risks, it's sentiment at the end of the day. And I know when we talk to our customers, it's, it's a very common ask around being able to understand, uh, these leading indicators. Now, Raman and I talk about insider risk management as a game of indicators, and, um, the, the more leading the indicator, the more impact it's gonna have on being able to help you identify issues proactively. So talk to me a little bit more about how some of these code of conduct violations are actually sentiment that can help you identify somebody who's a potential insider risk in the organization.

Christophe Fiessinger:

Yeah, so the, the high level is, uh, if we take a concrete example, let's say, you know, I say some, some... I use some profanities certainly with peers, and, and... or sexual content, but it's just not acceptable. And, again, assume that Christophe is stressed, just a bad day, kids are screaming, whatever, I'm just stressed in my personal life and I've crossed that line. Now, the question is, was it accidental, Christophe suddenly reached the tipping point and started using foul language, or no Christophe, uh, did use foul language today, but he's been using foul language against Liz for the past 30 days. And not just over Teams or emails, over whatever, the... all the different communication channels that my employer has given.

            So I think there's that two things, is it accidental, and I think you, you guys talked about that or is it [inaudible 00:08:02]? And most of the time, you know, we're humans and we get good intent, a lot of the time it is accidental. Uh, so it's just a matter of very quickly, hopefully, uh, seeing that behavior and notifying the [inaudible 00:08:14], whatever is your, your process of telling that person's manager that, "Hey, you stepped out of bound, uh, first warning, you know, maybe retake the employee training, you reread the code of conduct, and all good then and, and move forward."

            To your question, so that's the scenario. What's hard is, because of the richness of the language, and we're humans and language keeps evolving, is just looking for specific profanities; there's some usual suspects that have no room in the workplace, but there's more patterns, like abuse and harassment, where I might not even use profanity, but the way I, I, I, um, criticize Liz or Raman clearly is way beyond constructive criticism.
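The one-off-versus-pattern triage Christophe describes, looking at repeated flags against the same person across every channel, can be sketched in a few lines of Python. The record fields, the 30-day window, and the threshold below are purely illustrative assumptions, not how any real product works:

```python
from datetime import datetime, timedelta

# Hypothetical flagged-message records aggregated across channels
# (Teams, email, SMS, ...); every field name here is illustrative.
flags = [
    {"sender": "user_a", "target": "user_b", "channel": "teams",
     "when": datetime(2021, 5, 1)},
    {"sender": "user_a", "target": "user_b", "channel": "email",
     "when": datetime(2021, 5, 12)},
    {"sender": "user_c", "target": "user_b", "channel": "teams",
     "when": datetime(2021, 5, 20)},
]

def classify(flags, sender, target, now, window_days=30, threshold=2):
    """Was this a one-off lapse, or a repeated pattern against one person?"""
    cutoff = now - timedelta(days=window_days)
    recent = [f for f in flags
              if f["sender"] == sender and f["target"] == target
              and f["when"] >= cutoff]
    channels = {f["channel"] for f in recent}
    if len(recent) >= threshold:
        return f"repeated pattern: {len(recent)} flags across {len(channels)} channel(s)"
    return "likely one-off; a reminder or retraining may be enough"

print(classify(flags, "user_a", "user_b", datetime(2021, 5, 15)))
# -> repeated pattern: 2 flags across 2 channel(s)
```

Note how the cross-channel aggregation is what turns two individually minor flags into a pattern, which is the point Christophe makes about not looking at Teams or email in isolation.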

Talhah Mir:

Mm-hmm (affirmative).

Christophe Fiessinger:

And then, so how do you detect that? Because I might be using perfectly, uh, okay dictionary words, uh, but when you read it as a whole, the sentence is horrendous or is just not acceptable. Um, so that's... To the... your question, like to really get to the crux, which is that intent, that sentiment, you need to certainly look at the context and the intent. You need to see, is it a one-off with Christophe against that person, or no, it has been a pattern of repeated, uh, uh, communication risk against the individual. And so that's where, um, the problem is a fascinating problem and ever evolving, because human language is this dynamic dimension that keeps evolving every day. And as you can see, I'm sure you have kids, with social media, whatever's the new buzzword, that certainly is part of the common language and guess what, we need to adapt to detect those new patterns.

Raman Kalyan:

Yeah. That's, that's fascinating, man. I think a couple of questions, one for you, and, and one for Liz. You mentioned a couple of things. One is that there's this accidental or inadvertent type of, "Hey, I... Maybe I'm not meaning what, what you think I'm meaning." So I'd love to kind of tease that out in terms of like, how does, how do we deal with that in terms of like a privacy... from a privacy perspective, right? So, you know, um, don't... you don't assume that the individual is actually doing something wrong, you wanna investigate it further. And then... That's a question for Liz and then a question for you would be really around, okay, you talked about context, how has the technology evolved to be able to really sort of understand that context? Because I know there's a lot of tools out there that promise, you know, offensive language detection or like, you know, the sentiment analysis, but they really focus in on pattern matching. And I wanna try to contrast, you know, how are we approaching that from a, from a, uh, machine learning perspective or AI perspective. So maybe Liz, you can go first on the privacy side.

Liz Willetts:

Yeah, definitely. I think that's a great question. Um, you know, we at Microsoft always keep the privacy of our customers top of mind and so wanna ensure we're, um, you know, delivering solutions to our customers that really have those capabilities built in. So, you know, when we think about, um, you know, communications, we think about, um, you know, making sure that all of the, um, communications that organizations are seeing in their solution are pseudonymized, um, meaning that they are de-identified, and so, um, when you think too about, you know, the fact that this is on by default, um, you know, customers are opted into, um, then you have to think about those people who are actually reviewing, um, and scoping the policies out to their workers, their analysts, their investigators, and so we definitely also keep, um, role-based access control top of mind so that only the right people, um, within an organization are able to see, um, you know, certain policies, f- flagged violations, um, and then, you know, we, we have audit reports where we can ensure that those investigators and analysts aren't misusing the data that they have at hand.

            But then also thinking about, you know, one of the, the more important differentiators is that insiders are actually in a position of trust. And so, you know, they're making use of privileges that have been granted to them to really perform their role, and if they are abusing them, um, you know, we definitely wanna make sure we're catching that while at the same time, ensuring that those privacy principles are in place.
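The pseudonymization Liz describes, where reviewers see a stable de-identified token instead of a name, can be illustrated with a keyed hash. The key handling, token format, and field names below are hypothetical, a sketch of the idea rather than actual product behavior:

```python
import hashlib
import hmac

# Illustrative only: a keyed hash gives reviewers a stable pseudonym
# without revealing identity; real key management is out of scope here.
SECRET_KEY = b"org-held-secret"

def pseudonymize(username: str) -> str:
    digest = hmac.new(SECRET_KEY, username.encode(), hashlib.sha256).hexdigest()
    return f"User-{digest[:8]}"

# What an investigator might see: the flagged text, but not who sent it.
msg = {"from": "alice@contoso.com", "text": "flagged message body"}
review_view = {**msg, "from": pseudonymize(msg["from"])}
print(review_view["from"])  # a stable token like "User-…" (value depends on key)
```

A keyed hash (rather than a plain one) means the same sender always maps to the same token for pattern analysis, while reversing the mapping requires the organization-held key plus an authorized, audited lookup.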

Raman Kalyan:

Awesome, that's great. Uh, really, that's, uh, great to hear. And then Christophe, as we talk about the evolution of the technology, you know, and talk to me a little bit more about how we've evolved the technology to kind of talk about what you said, which was this context, this sentiment, like, how do we get to that?

Christophe Fiessinger:

Yeah. Actually, I don't wanna talk about technology. I just wanna talk about the problem we're trying to solve. Now that... Uh, leaving that aside, so yeah, it's all about context, and I think one of the future podcasts will go into that; to detect negative sentiment, uh, is, is already a challenge in itself, but the question is, then you put that into context. Was it just the first time, Christophe just having a bad day, he crossed the line, he needs to be reminded that this is, uh, not acceptable, and problem solved, and he never does it again? Or no, he crossed the line and guess what? Last Friday he put in his resignation and it looks like he started downloading a lot of documents that were marked as confidential. So suddenly you're getting language risks, you know, a code of conduct violation, but you add that with the fact that he's gonna leave and he's also, uh, downloaded things that could potentially signify, um, theft.

            So certainly getting that whole context of that individual; at the end of the day, what, what all that context gives you is then your remediation action can be very specific versus just saying, "Christophe, stop using foul language." You know, suddenly we need to maybe pull in our compliance team or legal team or a security team or Christophe's manager versus just slapping him on the wrist for foul language. So context is very... uh, is hugely important to help you deal with the proper remediation and the proper process based on that initial red flag, which was foul language, for instance. And so obviously that's, that's the, you know, the ideal, the, the uber solution that, um, a lot of us are trying to solve, because the more context you have, then [inaudible 00:14:59], position to really find those needles in the haystack and then take the appropriate action versus dismissing foul language when this person is on the road to actually burn down the house.
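The escalation logic Christophe outlines, where foul language alone earns a reminder but foul language combined with a resignation and confidential downloads gets routed to compliance or legal, can be caricatured as a weighted score. The signal names and weights are invented for illustration; a real insider-risk model is far more sophisticated:

```python
# Invented signals and weights, purely to illustrate context-based triage.
SIGNAL_WEIGHTS = {
    "foul_language": 1,
    "resignation_filed": 3,
    "confidential_download": 4,
}

def triage(signals):
    """Map a set of observed signals to an illustrative remediation path."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)
    if score >= 5:
        return "escalate to compliance, legal, or security"
    if score >= 1:
        return "manager follow-up / retake training"
    return "no action"

print(triage(["foul_language"]))
# -> manager follow-up / retake training
print(triage(["foul_language", "resignation_filed", "confidential_download"]))
# -> escalate to compliance, legal, or security
```

The point of the sketch is that the same red flag produces different remediation depending on the surrounding events, which is exactly the "context" argument being made here.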

Raman Kalyan:

Yeah, that's, that's actually a really important point. I think the whole context, it's not even just the context of the communication, it's the context of the sequence of events surrounding that communication and what might've happened before and might be happening after.

Christophe Fiessinger:

Yeah. And just to add to that, [inaudible 00:15:25], to mention that one thing I wanna be clear to the audience, uh, we're fully aware at, at Microsoft that it's not just the way you communicate in, in 365 such as Yammer or on email and, and Teams, but we also potentially help you... Like I said, if you give a, a, a work phone to your employees and they have SMS or they have WhatsApp, or they use [inaudible 00:15:49], technology or professional apps like Instant Bloomberg-

Raman Kalyan:

Mm-hmm (affirmative).

Christophe Fiessinger:

... you gotta be holistic because again, you might see one thing in one channel, but it's actually probably hiding maybe the forest of abuse, or maybe my initial thing to Liz was on Teams, but the really bad behavior happens over SMS. So giving you the ability to look holistically and make sure you've... you reduce the blind spots as much as possible is also something that's, uh, dear to our heart.

Raman Kalyan:

Yeah, so having that sort of one pane of glass, you don't have to have multiple solutions and platforms that-

Christophe Fiessinger:

Yeah.

Raman Kalyan:

... you're trying to manage and manage workflows, manage integration, and signals, you can actually take one pane of glass and look across multiple communications and leverage the technology to identify the risks that are most important to you, right?

Christophe Fiessinger:

Yes.

Talhah Mir:

So, um, Christophe, you and I talk, like, multiple times a day and, and a lot of it is words, a lot of it is passionate words, but a lot of it is memes and GIPHYs that we send back and forth. So how do you think about, in the context of, um, the communications and words and whatnot, how do you think about, uh, memes and GIPHYs? 'Cause some could be funny, but some could be crossing the line, right?

Christophe Fiessinger:

No, you're, you're spot on and, and it's definitely... Back to what Liz was mentioning, we know that communication is not just written, right, anymore. And, and, you know, some of us have been in the workforce longer than others but... and some of us have kids and we've seen definitely the shift-

Talhah Mir:

Yeah.

Christophe Fiessinger:

... that it's no longer just an email or a one-page memo, uh, now we have a plethora of channels on how we can do work, but like you say, the form of how we communicate is not just written. And so for the audience, what, uh, Talhah is referring to, it could be an image, and very commonly, a lot of people, um, will annotate on an image, will literally put text on an image, and that text could be a risk, could be very nasty, could be inappropriate, could be containing customer information, could be containing confidential information. Um, so how do we detect that if Christophe is just sending images in Teams all day or over email, but there's actually nothing written?

            Um, so we're actually working on, on, on this problem and we have a number of solutions because there's like basically two patterns. First of all, there's the obvious image, you know, maybe is, is racist or adult or, or gory in nature, and that again has no place in the organization. So just recognizing, uh, the content of that image. But like we say, in addition to that, we're also working on doing, uh, what we call, uh, in technical jargon, optical character recognition. So extracting whatever the text is, whether it's a written sketch or, or typed on top of the image, and then once you get that extracted text, run that through our detection and say, "Does it match a code of conduct violation? Does it match potential regulatory, uh, compliance violations?" And so forth.

            So yes, we're absolutely looking at other forms of communication that are included in the tools we use day in, day out, uh, such as images. And you're probably thinking, how about video? And yes, this is also, uh, something we're, we're, um, working on in the future. The goal is to reduce as much as possible those blind spots. And that's what effectively we're doing, you know... If the end user thinks they can outsmart the system by just putting, whatever, some social security number from their favorite customer or a bank account or swear words in an image, uh... and not in written text, then we wanna mitigate that to, again, close all those blind spots.
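The OCR-then-screen pipeline Christophe describes boils down to: extract the text from the image, then run it through the same detectors used for written channels. As a sketch, here is only the downstream screening step; the OCR itself (and Microsoft's actual detectors) are out of scope, and the word list and regex are toy stand-ins:

```python
import re

# Assume `extracted` is text pulled out of an image by an OCR step
# (the OCR itself is out of scope). Both detectors below are toy stand-ins.
WORD_LIST = {"darn", "heck"}  # placeholder code-of-conduct terms
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US social security number shape

def screen_extracted_text(extracted: str) -> list[str]:
    findings = []
    if SSN_RE.search(extracted):
        findings.append("possible SSN (regulatory risk)")
    words = set(re.findall(r"[a-z']+", extracted.lower()))
    if words & WORD_LIST:
        findings.append("code-of-conduct term")
    return findings

print(screen_extracted_text("acct for 123-45-6789, what the heck"))
# -> ['possible SSN (regulatory risk)', 'code-of-conduct term']
```

Because the screening operates on plain text, the same function can sit behind OCR output, chat messages, or email bodies, which is how putting the words in an image stops being a blind spot.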

Liz Willetts:

Yeah, and I would add there to that too, it's, it's not just... English isn't the only spoken language. So thinking about globalizing, um, some of that as well 'cause I know, um, we were talking to a customer in the EDU space and they were saying, "Hey, you know, students are trying to (laughing), bypass the system. They are writing... They are cyberbullying and, and writing harassing messages in Japanese, um, translating that through, you know, a translate app and sending that to their peers." And, um, you know, being able to detect things like that, not just in English, um, is certainly something that's also come, um, to the forefront for us.

Christophe Fiessinger:

Yeah, that's... Thi- this is a true story that Liz is telling and it was interesting for us. And that's when you learn so much from kids, uh, they're very... Their creativity to abuse the system or be colorful is amazing and endless. But yeah, this is a true story of a school district in the Midwest, and, and we're definitely, to Liz's point, being Microsoft, we know we, we wanna cater to, uh, customers worldwide, and we already had strong demand in Asia, which has laws to protect against harassment, so there's Japan and others, and we, we, we wanted feedback from some customers, and in one of those customer interactions, we asked the school district, "Hey, we're looking at introducing, um, abuse detection, uh, in those languages, would you be interested? Including Asian languages."

            And the customer, to our surprise, said, "Yeah, I'm very interested in that." It's like, how come a customer in the Midwest in the US is interested in, in Japanese and Korean and Simplified Chinese? And to Liz's point, some students might not even be native in those languages but they can definitely use a search engine. And instead of saying what I think about Talhah in plain English, I'll translate it and put the translated version with the, with the katakana or kanji, which are the alphabets in Japan, and think I can get away because no one else besides Talhah will figure out that I'm, I'm being very nasty, and my school administrator is definitely not fluent in that language and will think it's harmless. So yeah-
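A crude first step toward catching the translate-and-paste evasion described here is simply noticing when a message contains writing systems the organization doesn't expect. The Unicode ranges below cover only a few scripts and are a toy heuristic; real multilingual abuse detection uses proper language models, not code-point checks:

```python
# Flag messages containing writing systems outside an org's expected set.
SCRIPT_RANGES = {
    "kana": [(0x3040, 0x30FF)],    # Japanese hiragana + katakana
    "cjk": [(0x4E00, 0x9FFF)],     # common CJK ideographs (kanji/hanzi)
    "hangul": [(0xAC00, 0xD7AF)],  # Korean syllables
}

def scripts_in(text: str) -> set:
    found = set()
    for ch in text:
        cp = ord(ch)
        for name, ranges in SCRIPT_RANGES.items():
            if any(lo <= cp <= hi for lo, hi in ranges):
                found.add(name)
        if ch.isascii() and ch.isalpha():
            found.add("latin")
    return found

def unexpected_scripts(text, expected={"latin"}):
    """Scripts present in the text that the org did not expect to see."""
    return scripts_in(text) - expected

print(unexpected_scripts("see you at practice, バカ"))
# -> {'kana'}
```

Such a flag would only route the message to translation and real detection; on its own it proves nothing, since plenty of legitimate messages are multilingual.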

Talhah Mir:

Now we gotta, gotta go back and search our chat history, man. Now, now, Japanese characters are making sense. I gotta go (laughing), translate them.

Liz Willetts:

(laughs).

Christophe Fiessinger:

I mean, it wasn't just in French.

Talhah Mir:

(laughs).

Raman Kalyan:

Now I have to look at my kid's chat history and be like, "What are you... What is that?"

Christophe Fiessinger:

Yeah, anytime you find some language you don't speak, question yourself. Uh, it might not be love words after all.

Talhah Mir:

(laughs).

Liz Willetts:

(laughs).

Christophe Fiessinger:

I'm just saying.

Raman Kalyan:

Well, as you know, one of the things that we've talked about is, uh, the importance of supporting company culture, right? And how toxic communications, um, can erode that, you know, culture and the trust in your organization. I'd love to talk a little bit more about that and, you know, get your perspective on that and also talk about how, you know, some of the remediation actions we have within, you know, this solution can help organizations really address, uh, or support a positive company culture.

Liz Willetts:

Yeah, definitely. I think there are a lot of cultural implications, um, for, uh, a corporation or, um, an organization and, and definitely having the ability to support their, um, company culture, but also to support their employees in times when, you know, they might be going through an external stress factor, you know, COVID being a great example. Um, you know, an organization that might be looking at, um, you know, their company culture impact in this day and age, they want their employees to have the tools and, and support to do their best work, whether that's webcams, computers, conference calls, um, and you know, now in the context of remote work, you know, you're in the privacy of your own home, um, and there are definitely distractions all around. And at the same time, you have to remember, "Hey, this is a work environment."

Raman Kalyan:

Mm-hmm (affirmative).

Liz Willetts:

Um, so there are definitely some things that you should, and shouldn't say in the context of work that might be okay in your personal life, um, but you know, in the workplace, there still is a code of conduct charter, you've signed it, um, you know, you take training, hopefully on the first day of work, um, and so in this context, how do you remind people, um, you know, that there is this change for remote work but the same standards still apply, um, you know, whether that's fostering diversity and inclusion within your company. Um, and, and you certainly wanna make sure that you're investigating and remediating something, um, that your employees know are, um, wrong, you know, something like sexual harassment, um, you know, lots of, kind of potential infractions, um, and to kind of...

            One, from a brand reputation perspective, you know, this person might go off and write some social tweets or whatnot, um, and have a pretty big and bad impact for your organization. Um, so it's kind of one thing to have code of conduct, a charter, um, but another is to really live by it and, and show your people that, um, you know, it's, it's really something that you're invested in. Um, and so I think also it's not all that (laughs). Um, so, you know, we're under stress, job security concerns, scared of, um, you know, a loved one or a parent getting sick, and so maybe you're not intentionally trying to hurt your peers, um, but just, you know, perhaps used an inappropriate word or expressed your frustrations at work.

            Um, and so I think that that's kind of where you can also come in and provide support. You know, maybe it's a little slap on the wrist, but just remind you what your company charter is, um, maybe, you know, encourage you to retake some of the trainings, um, and really just kind of making sure that all around, um, you know, employee wellbeing is, uh, kind of top of mind for the company.

Talhah Mir:

Yeah, and on that note, Liz, I know you talked to me about the fact that, you know, technology like this, solutions like these are not just about finding the bad, it's about, you know, uh, an organization using it as an opportunity to show a commitment towards a positive employee culture and saying, "We're gonna put money behind what we say is important to us, which is a positive company culture." But some of the stories that I've heard from you were just amazing, where companies are looking to do, whether it's education or government or private, uh, sector, just being able to back that up and say, "We actually care, we're gonna look out for these things." And to your point, it's not just, "When we find something bad that we're gonna take some, you know, dramatic action." It's like when we find something, it's an opportunity for us to educate and kind of uplift the culture. So I think that's a, that's a really important one for you to call out there.

Liz Willetts:

Exactly, yeah. And I think, um, you know, especially as you think, living and breathing your corporate culture and, and your principles, um, it's important 'cause, you know, other employees are expecting you to take action on, on certain things and, um, you kind of have to uphold your standards as well to, to match their expectations.

Talhah Mir:

Hmm. So what are some stories that you guys have heard or come across from customers? Something, uh... And then I don't know, I don't know which one of those you can actually talk about here, I don't... You guys have shared a lot of those offline and stuff, and I talked about quite a few, but what are some, some great examples of positive impact that you've seen that you're... that you guys can share?

Christophe Fiessinger:

Uh, I'll share one. I'm not gonna mention the customer, uh, due to sensitivity, but to your point on... and what Liz was saying, you know, it doesn't take... You just look at the headlines in the newspaper and you can see there's potential regions, potential, uh, industries that, that had bad press, and, uh, probably for good reasons, because of, of not doing anything about those, um, abusive behaviors. Uh, so I, I've been involved with one customer, um, I'll just say North America, but it was exactly to get ahead of that. They, they haven't been in the headlines, the industry has been in the headlines, and it's just a mandate from their leadership team to say, to your point, "We wanna be proactive so we want a virtuous cy- uh, cycle of making sure we live by, to Li- to Liz's point, live by our code of conduct." So it's more like, "I wanna get ahead of the game because I wanna show all my employees I've got their back and this is a healthy environment; please don't go to my competitor. Like, we've got your back, and let me prove it to you that we're, um, fostering that healthy environment."

            That example I mentioned earlier, it's, it's not a company, but it's the same theme, where in Japan in April of 2020, a new law went into effect around, uh, what they call power harassment. And so the question is, great, there's this new law that if your manager or your manager's manager is, is abusing you, uh, it's illegal, then the next question comes, uh, what are you gonna do about it, uh, as an employer? So in Japan, they, you know, because it, it takes time to put processes and, and solutions in place to look for that, initially it starts with the large corporations. I think it's like a three-year, four-year phase-in by the time it goes to, uh, small and me- medium-size businesses.

Liz Willetts:

Yeah. And I think one of my favorite, um, customer stories was one that really, in my mind, helped enable their creativity. Um, you know, we were talking to a sports league kind of right at the beginning of the pandemic. You know, they knew that it was gonna be a washout season, all games, everything was being canceled basically, except for golf at that point in time, um, and there was obviously a worry around, you know, contact sports and, and spreading of the virus. And so, um, we had this one sports league come to us and say, "Hey, you know, we've got these season ticket holders, they're huge fans. We feel like we're letting them down. You know, they don't have a season to, to kind of, um, rally around this year. And so we're thinking about, um, you know, how can we get them to interact with players, coaches, um, you know, coaching staff, et cetera?" Um, and so they wanted to enable that sort of scenario but at the same time were concerned around, you know, "We need to moderate content to ensure there's no abusive language, either between fans, between players, um, staff, et cetera."

            And so I think that was an interesting use case where, hey, yeah, you wanna detect certain things in communications and this might be completely out of your wheelhouse. Um, but being able to feel comfortable coming to a company like Microsoft and saying, "You know, what can we do here?" Um, and so I thought that was an, uh, enlightening, uh, case for, um, us as well.

Talhah Mir:

This is terribly exciting stuff, man. I know the four of us have talked about this quite a bit, but to me, sentiment analysis is the holy grail of insider risk. Being in this space for a couple of years now, um, the sooner you detect these things, the more impactful you will be, and it's all about the behavior. And one of the, the first areas, the first sort of physical manifestation of a behavior is in the communication of an individual. So that's why I sent them an [inaudible 00:30:59], to such an amazing, amazing people. It's also incredibly difficult if you guys don't. So you guys are on the tip end sort of [inaudible 00:31:05], sphere as it comes to this stuff, but we're super excited about some of the opportunities that you guys are driving towards and how we can leverage that to kind of broaden our detection when it comes to identifying and managing insider risk [inaudible 00:31:18]. Thank you guys, this is very exciting stuff, looking forward to the rest of the podcast as well.

Raman Kalyan:

Yeah. And I was just gonna say, thank you so much for coming onto the show. We really appreciate having you here, and, Liz and Christophe, we can't wait to hear the different podcasts you have coming up, uh, like Talhah said. Exciting space, definitely, uh, a space where there's a lot of innovation happening and we're excited to see what you have coming up. So thank you again.

Liz Willetts:

Awesome. Yeah, thanks. Thanks for having us on, and, um, we're excited to kind of... We've taken the torch from y'all and have a great lineup of speakers, um, over the next couple of weeks. Um, Talhah, to your point, sentiment analysis is definitely an area where we're gonna deep dive with, um, Kathleen Carley, a professor at CMU.

Talhah Mir:

Thanks.

Liz Willetts:

Um, we're gonna go deep on machine learning with one of our data scientists, Christian Rudnick, um, so definitely have some exciting, uh, conversations to come.

Talhah Mir:

Awesome, awesome.

Raman Kalyan:

And so thank you everyone for listening. Uh, this is another episode of the Uncovering Hidden Risks podcast. We've had, uh, some awesome guests on the, on the show today. Again, uh, Liz Willetts and Christophe Fiessinger, and Talhah and I are excited to have you listen, uh, to their podcast as well as if you haven't heard our, uh, previous podcasts, you can find them on your favorite, uh, YouTube channel. So... Or favorite podcast channel, wherever you wanna see it.