Artificial Intelligence: Opportunities and Risks

Listen to learn about issues for companies developing or using AI, and concerns about privacy, transparency, and accountability.

SecurityMetrics Podcast | 74


Artificial Intelligence (AI) is a hot topic of the year. People want to understand how it will impact their lives and their business practices. Willy Fabritius (Global Head for Strategy and Business Development - Information Security Assurance at SGS) sits down with Host and Principal Security Analyst Jen Stone (MCIS, CISSP, CISA, QSA) to discuss:

  • Issues for companies developing or using AI
  • Concerns about privacy, transparency, and accountability
  • How regulations or certifications could be applied

Resources:

Download our Guide to PCI Compliance! - https://www.securitymetrics.com/lp/pci/pci-guide

Download our Guide to HIPAA Compliance! - https://www.securitymetrics.com/lp/hipaa/hipaa-guide

[Disclaimer] Before implementing any policies or procedures you hear about on this or any other episodes, make sure to talk to your legal department, IT department, and any other department assisting with your data security and compliance efforts.

Transcript of Artificial Intelligence: Opportunities and Risks

Hello, and welcome back to the SecurityMetrics podcast. My name is Jen Stone, and I'm one of the principal security analysts here at SecurityMetrics. Very happy today to be able to speak with Willy Fabritius. Willy has a master of science in computer science.


For more than twenty-five years, Willy has held management positions with organizations in the private sector. His experience in management systems goes back to 1994, when he implemented an ISO 9002:1994 QMS at a stamping facility in Germany that got certified in less than twelve months. Willy has conducted several thousand audits for numerous organizations, including multiple Fortune 100 organizations, against a variety of standards: ISO 9001, ISO 27001, CSA STAR, ISO 27701, and ISO 22301.


Willy has worked for several global certification bodies, delivered audits in APEC, Africa, Europe, and the Americas, and spoken at several global events. Willy joined SGS as the global head for strategy and business development, information security assurance, in 2021. Willy, welcome.


Tell people a little bit more about yourself and maybe a little bit about SGS.


First of all, Jen, thank you very much for having me on your show. Much appreciated.


And, yes, my name is Willy Fabritius. And, as already said, I'm working for SGS.


So you introduced me already. So really no need to add to that, other than I'm also representing the IIOC, the International Independent Organization of Certification Bodies, at a couple of ISO committees.


So I'm now really privileged to contribute to future standards and future editions of existing standards.


SGS is a Swiss-based company, headquartered in Geneva, listed on the Zurich Stock Exchange, and we have about 96,000 people or so, and about 6.5 billion Swiss francs in revenue on an annual basis.


And for all intents and purposes, we are the world's largest TIC company: a testing, inspection, and certification company.


And that really means we are testing things.


We are inspecting things. Mhmm. Certifying management systems and objects. Now, you say "things." Can you be a little bit more specific?


Sure. If you have, you know, the next three days' time, I'm more than happy to talk about food safety, oil and gas, nutrition, oil platforms, cars, vehicles, and so on and so forth.


Medical devices, whatever needs a third party assurance that regulations are being fulfilled, we can jump in. We can help clients to get an independent third party view and confirmation and assurance that these things are indeed fulfilling the applicable requirements.


That was a great way to put it. You know, a lot of times people ask me, well, why do I have to get audited? Or they're kinda cranky about that: "We're doing all the right things."


Why do you have to come and audit us? Well, that third-party assurance is really essential for a lot of organizations who are trying to prove that they meet a certain standard to their customers, to service providers, to government bodies, to whatever it is. And we at SecurityMetrics, we don't do any of the ISO audits. So I'm very excited to have you on to be able to talk us through some of those things, and to be able to talk specifically about what is arguably the topic of the year: artificial intelligence.


People are buzzing about AI. It's hit the meme world, which means everybody knows about it now.


But I guess my question, first of all, is: is AI hype? Is this gonna fade away? You know? What are maybe some of the opportunities and risks that you see associated with it? Give me that high-level view of AI, in your opinion.


Sure.


There's a German word for that: "jein."


"Ja" and "nein" at the same time. Yes and no at the same time.


Is it hype?


Yes. In a certain way, it is. On the other hand, I also believe that there is some heightened awareness across the general population, and that leads to the hype. But generally speaking, I do not think it's hype. It's more that people talk about it. People are getting aware of it, and that's cool. That's great.


Will it go away?


AI will not go away.


The hype will go away, because that's just how things work in terms of the buzzword of the year. You know, I remember some, I don't know, fifteen years ago, cloud computing was the buzzword, and everyone said cloud and cloud, and nobody really knew what cloud was, other than maybe a handful of experts.


And now nobody is talking about it other than the real experts, but everyone is using it. Right. Without really realizing that all those services we are leveraging are cloud based. I mean, even the system we are using right now to record this podcast is cloud based. But are we talking about that? No. We are not. No.


And, you know, in terms of AI, it's really not that new.


I actually checked: Apple's Siri was first introduced in October 2011.


Let that sink in for a sec. Right? In 2011, Apple already had a system that was remarkably good and is still excellent.


When you just talk, Siri recognizes your commands, your statements, and translates that into responses.


Well, that's AI.


So that's already eleven- or twelve-year-old technology, let alone all the other stuff that falls under AI.


But I think in recent months there was some, you know, change, and those changes are in particular coming from these big language models like ChatGPT. Mhmm. And people realize that, well, they enter a phrase, a question, and that thing, there's that word again, is creating some apparently very intelligent response.


And that is remarkable, because that is really new technology, but it's really coming from the fact that computing power is way, way better than it was just a couple of years ago.


And that is just getting better and better, and we will see more and more intelligent systems that will help us master all kinds of challenges we have.


Are there risks? Absolutely.


But as already said, I'm coming from the ISO world. So I must admit I'm a little bit biased.


So for me, the word risk, per the definition given by ISO, is about the degree of uncertainty.


So when I'm drinking this cup of coffee, there's a certain risk that it falls into my lap. Mhmm.


Because there is a certain uncertainty to that. But there's also a level of uncertainty that there might be a plane crashing onto my house, because I'm very close to Chicago's O'Hare.


But that is a level of uncertainty.


But, usually, the word risk is associated with a negative meaning.


But risk can also have a positive meaning, can have a positive impact.


When I'm investing, there's a certain risk that I win, that my investment gains in value. There's a certain risk that your investment gains in value. Yes. There's a certain level of uncertainty.


And people are not really used to that kind of usage of the word risk.


But I think that the biggest challenge for organizations will be not being aware of AI and not utilizing it.


In the future, there are only two kinds of companies, companies that are using AI and companies that don't exist.


That's a pretty bold statement there. So what you're saying is, you think AI is going to be intrinsic to how we do work?


That there's no parsing it out from the regular day-to-day industry.


Absolutely correct.


So with that in mind, what are the issues you can see for companies either developing or using AI?


I think the biggest challenge will be understanding.


Really grasping what AI is and then making the right decisions to use it in one or the other way.


And then there is the problem of talent, which is a little bit of a reflection of the first thing. Right? Understanding and talent. If we don't know individually, that's okay.


But as a group of people, as a company, we must understand AI. We must understand the implications, and we must understand how we are going to use it.


But if there are not enough people who do understand it, then we've got a serious issue, because then we are just blindly using it.


And if we don't understand what's behind it, we will get in trouble.


And that is one of the issues I can foresee for many organizations: they may use some kind of AI system for automated decision processes, and then it bites them, and they are surprised.


So, knowledge about how to use it, not just for developing it, but really knowing AI so that you can properly use it in whatever application you need.


Okay. Yes. Alright.


Well, I mean, there was, for example, a case where a lawyer was representing one of his clients and tried to sue an airline for some kind of incident that happened on the plane. The lawyer went to ChatGPT and asked for example cases.


Mhmm.


ChatGPT provided him with three cases.


He put those cases into his filing with the court.


Well, it turns out that all three were bogus and never existed.


Right. I heard about that. AI just made those cases up.


Yes. And that goes, you know, to the trustworthiness of AI. That goes to the transparency of AI. And the normal person will not understand this concept, because, well, if I type something into Google, I need to assume it's correct, isn't it? Right?


I never assume that Google is right.


However, I get where you're coming from: if you're asking for information, you want to make sure that that information is in some way accurate. You can back it up. You can go find it in the real world. Right?


Yeah.


And so is that what you mean by trustworthiness?


Yeah. Yes.


Oh, gosh.


What what what's the big buzzword we heard the last three years in a row? Trust the experts. And I think that a lot of times that's going to be shifted to trust the AI.


Well, how do we know that the information we're getting out of a system is correct, when we know that there is a huge problem already in our world with different opinions being censored, with different information being hidden, with knowledge that we all had easy, ready access to in the past not being available to us? How is AI gonna help that? It seems like these come down to the topics of bias and fairness and trustworthiness. Right? I look at it, and I am concerned that the speeds at which our systems can function mean that AI is only going to create further problems with that topic, and further rifts in belief systems because of it. What are your thoughts there?


Yeah. I think in the past, we were talking about the economic divide, then we were talking about the digital divide.


Mhmm.


And I think that in the future, we'll talk about the AI divide.


In terms of people, you know, having different income classes, different abilities and capabilities to access digital services, different levels of access to AI, and maybe even different levels of quality of AI.


And what I mean by quality is, you know, fulfilling the expressed and non-expressed expectations of me as the user.


So when I'm asking an AI for an answer, I expect the right answer.


But the right answer could be based upon my interpretation of the world.


And, you know, I don't wanna go back into recent history in terms of politics, but let's take that example: is the right answer for COVID getting vaccinated or not getting vaccinated? It's not necessarily a medical question.


It's, among others, a political question. Mhmm. All kinds of things depend upon what we individuals see as the truth.


Right.


And that is potentially being influenced by the AI. And if the AI is matching our perspective, of course, it's trustworthy.


On the other hand, if the AI is giving us a contrary view, well, we cannot trust that stuff.


So that's one of the challenges I can see.


So I guess that comes back to some of my foundational concerns, which is: what shapes the AI? What creates the best solution for AI? This is a hard conversation.


It's hard to talk about it without bringing politics into it, because the speed at which information flows is what shapes our beliefs and our understanding, and is what shapes our politics. And honestly, that feels like something that is happening currently because of the speed of information, but also, more importantly, because of the speed of censoring of information.


Yes. So how do we create this? How do we have transparency in artificial intelligence? How do we have accountability for what feeds the knowledge base of any AI system? Is this something that an ISO standard could cover? How do we make sure that there is a fair and balanced approach to any information we're getting from an AI system?


Yeah. I think it comes down to having a holistic approach.


You cannot just look at one aspect.


You really need to look at the entire, let's say, ecosystem of the AI.


And let's face it: the AI by itself is just a tool.


It's a hammer and a nail. How you use that hammer and that nail, that's really up to you. But the real question is, how did you determine what kind of hammer you need?


Yes. How did you determine that that hammer is suitable for the application that you are using it for? And then later on, what kind of mechanisms do you have in place to ensure that the hammer is indeed properly used?


Right.


And, sorry to say, but that sounds like a classic example of a management system, you know, something that many people know in terms of a quality management system from an ISO 9001 perspective, or an information security management system from an ISO 27001 perspective.


All those systems are in place to ensure or should I say to provide guidance to organizations to implement a solid system that is repeatable and can be audited, can be verified, and therefore can get some kind of stamp of approval from an external body that says, yeah, you do have a solid foundation.


That doesn't mean that the outcome is guaranteed to be perfect.


You know? There is this sad statement, and, unfortunately, it's true.


You can make and produce concrete life-saving vests as an ISO 9001 certified company.


That's okay.


Well, I'm pretty sure that a concrete life-saving jacket is not gonna give you the outcome you were hoping for, though. I mean, unless you're you know?


In terms of certification, that would be okay. Right? Because if the specification says, Mhmm, it needs to be made out of concrete, needs to fit a particular body size and shape, and you make that, that's cool.


And you met the specifications correctly.


Yes.


And that is obviously not fulfilling the intent. I get that. Right?


But that is, I think, highlighting that just being certified doesn't mean that the outcome is desirable.


But at least there is some kind of general understanding that there is a solid system in place.


So, starting with "what is our intended outcome," again, I think, brings a lot of human elements into an AI system that people might not be aware of: what you want it to tell you is going to affect what it does tell you. So I'm wondering, there are also a lot of people who are concerned that AI is going to take their jobs.


Have you put thought into that, that people are worried about, basically, the changes in how we work based on AI?


Yes.


I think that every technical change, any technical development, will lead to changes in the job market.


And I say intentionally changes because, yes, it will lead to losses of jobs, but at the same time it will create new jobs.


I mean, how many whip manufacturers for buggies are still in business today?


Not as many as there were in the eighteen hundreds.


How many, you know, train engineers are still there putting the coal and the wood into the engine? Oh, there are no such engines any longer. Oopsie doopsie. Right?


How many secretaries are still out there?


Right? I mean, and this is just an example. There are periods of time where jobs will go away.


But at the same time, a large number of new jobs will be created.


And I think it's really up to the individual to be resilient to those changes and upskill him- or herself on a regular basis to learn new things. You know, if you are a creative writer and you are not using ChatGPT, you are missing something, and you might wanna invest some time to learn about it. I'm not saying that you must use it today.


But going forward, there will be the need to use these kinds of tools in one way or the other.


I think, just to maybe interject one example that I saw happen very recently: so many networking engineers were worried that cloud computing was going to put them out of a job. As it turns out, most of the organizations that I work with who are using cloud implementations need network engineers. They just have to know how security groups work in a cloud implementation.


And those organizations that maybe rely on their development team to create that are often less secure, because they don't understand some basics of networking that a network engineer brings to the table. But it's in the cloud environment rather than on-prem. And so, you know, for a lot of people who were concerned about that, the job morphed, and they had to learn some different skills specific to what they do. But the interest in how networking creates communication paths and connects all of us together, and how to make those secure, that's still there. And so people who are interested in that still have that job. And I think in a lot of ways, AI is going to bring changes, maybe some similar changes, maybe some more massive changes. I don't think we know yet what the impact of AI is going to be.


Yes. And and I think that is the fundamental reason why people are concerned.


As you said, a lot of people are worried, concerned, about the change and not knowing what the changes are gonna result in. A lot of people are talking about whether we should regulate this, legislate this. You know, I personally get very frustrated with people who always want to add a new law whenever something's changing or they don't like it.


But that seems to be the nature of large groups of people. Are we legislating this? What do you know about that? Is there legislation?


Are there regulations? What are political bodies doing in response to AI?


Yes.


We are all the victims of some stupid people in the past.


Think about speed limit on the road.


Usually, we have a good sense for what is the right speed. Mhmm. Because we drive the car on a regular basis. Yeah. We know the environment. We automatically adjust.


Yes.


But still, there used to be, and there are still, many people who just simply don't get it. And that's the reason why we have speed limits. Mhmm. Because, apparently, people must have some kind of limitations imposed upon them.


Once again, you know, it's this logical versus emotional versus whatever thing we have.


So, yes, there will be regulations.


And, yes, once again, you know, I recently heard a very interesting statement from an EU representative who said: China and the US are definitely military superpowers.


But the EU is a superpower when it comes to regulation.


Yeah. Yep. It is.


Yes. We definitely already see the EU AI Act.


Right now, it's just a proposal from the parliament. But sooner or later, that will be, you know, a regulation, will be a law.


And, fundamentally, it says that AI needs to be done in such a way that it protects people.


And I think this is a great thing because at the end of the day, all technologies can be used for the good as well as for the bad.


An interesting thing happened this week, where Zoom updated its privacy policies to say that it's going to use your interaction, your use of Zoom, to educate its AI implementation, and you don't have the ability to opt out, which made a lot of people very concerned about privacy.


And it makes me wonder: what is privacy in the context of AI? We don't have to talk about Zoom specifically, but it just made me think, how do we create, educate, tune AI systems and keep privacy at the forefront?


I think there is also connection to intended use, transparency, and ethics.


Mhmm. Mhmm.


So if if a company that is providing communication services is using the data coming from those communication services to train the AI, for example, for transcription services, and is transparent about that, I think that's cool. That's enhancing the capabilities of the product.


Mhmm.


But who says that the AI is also not used for industrial espionage?


Uh-huh. Yeah.


And if a company doesn't spell that out, I'm sorry.


There is no trust.


Right. Yeah. Exactly. Comes back to the trustworthy question, doesn't it?


Mhmm. If the company, let's say Zoom, says, "We are using that communication platform to train our AI system for the purpose of transcription services and nothing else," wonderful. Thank you very much. I have no problems using that system.


But if they just say for whatever we deem appropriate I'm sorry. Done. Delete it. Remove.


Companies yada-yada privacy too much. They need to be more specific and more transparent and more accountable, for sure. I feel like a lot of our laws are based on an assumption of privacy.


And so if there is no presumption of privacy in certain ways, then certain laws don't apply.


And so when you look at the continued gathering of information and assimilation of information and how highly AI is going to speed this, does it mean going forward, there is no presumption of privacy?


And, yes, we no longer have that freedom to move in some of these maybe gray areas. Whether they're good or whether they're bad, or whether somebody else thinks they're good or bad, or, you know, without even labeling them, it feels more restrictive the less privacy we have.


Yes. And I also think that it requires that we think things through in more detail.


Mhmm.


For example, there is that story that AI can be used, obviously, for facial recognition. We all know that. Yep. But facial recognition is not just your face in person.


It's also facial recognition on a picture. So if you're posting on Facebook or whatever social media, "I'm in Hawaii," well, put that together. All of a sudden, people will be able to track where you are. Right.


And that is not just, you know, "By the way, my home is empty. Feel free to burgle me." Right?


Right.


But there are all kinds of implications that may lead to loss of privacy.


Now it's up to you to post that picture.


Remember, there's no law that says you shall post your pictures on Facebook while on vacation.


But the the question is, are we aware of that?


And if we are aware, are we willing to take that risk?


Right.


And it feels like, with the posting on social media, we got sold a bill of goods on that one, for sure. Everybody got, frankly, addicted to it and is still addicted to it, and has the belief that they don't exist without posting.


You know, "pics or it didn't happen" is a pretty common statement. Right? So when we feel these social drivers to post everything about ourselves, and don't also have the understanding that we are sacrificing privacy, and that AI is going to just drive this, speeding toward this cliff of non-privacy.


It seems like there are things that we are not yet considering in the use of AI.


It's just gonna compound problems that we already have, socially and personally.


And, you know, the other thing that I really would like to emphasize is the risk of bias.


You know, let's be brutally honest.


We as humans are all biased in one way or the other.


The good thing is that sometimes, or hopefully most of us, are able to discover the bias and counteract it. Mhmm.


But with an AI, there is no mechanism, at least that I know of, to counteract that bias.


And and that can have profound implications on individuals.


It can have profound implications on companies.


Right.


You know, once again, that colleague I spoke with in Australia this morning, on a different occasion. He told me that he uploaded his picture to apply for Australian citizenship.


It was declined because his teeth were visible.


And then he showed us the picture, and, no, there were no teeth visible.


The problem is, here's a man from Pakistan wearing a full beard. So black hair, black beard, and the only light, let's say, white area is around the lips.


So the AI interpreted that white area, the lighter area of the lips, as teeth, and therefore rejected it.


Because it assumed that he didn't follow the rules on uploading a picture. So that's very interesting. And, you know, that's a very small and specific way that bias might be introduced, but you can see how that could play out in broader ways if there's no way to detect that in our AI systems and find a way to counter it.


Yes.


And I think that... So where's the complaint mechanism?


Where's the adjustment mechanism? Right?


Right. Well, I'm worried that we have left people with a very depressing set of information.


Can you wrap it up with some positive notes on AI?


Yes. I'm optimistic. I'm not the doomsday kind of guy.


By my very nature, I identify challenges for the purpose of identifying opportunities.


And that is something, you know, in terms of: is there a challenge with regard to buying a house? Yes. Of course. But at the same time, there is the opportunity to build equity over time.


Right? Right. And it's something like, okay. But you said that there is a risk of buying a house.


Yeah. There is a risk drinking coffee. Who knows? Right?


Mhmm.


So I'm not the kind of person that says AI is bad and AI has all negative things.


I'm identifying the potential risks for the purpose of mitigating them. Because only if we know the potential risks can we implement countermeasures. We can implement controls to mitigate those risks. And that is really the important message.


Well, I sure appreciate you talking to me today, Willy.


We'll make sure that people can find you on LinkedIn and various places and hear more about what you have to say. I think we could talk for quite a while about AI. It's a fascinating topic, and it's gonna be interesting to see what direction it goes.


Yes. Yes. Indeed. So thank you very much for having me. And as you said, I'm available on LinkedIn.


If if somebody is looking for me, it's Willy Fabritius. There's only one of us.


And, also feel free to reach out to me at, willy.fabritius@sgs.com.


And, by the way, we also have a white paper published on the subject of trustworthy AI. Okay. So if the audience is interested in getting more information about ISO 42001, which is the upcoming AI management system standard, it is, as of this morning, still in that phase between DIS and FDIS, draft international standard and final draft international standard. Okay. But I expect that the final draft will be published in the next couple of days, hopefully.


So if somebody's interested in learning more about that, or interested in the white paper I just mentioned, please feel free to reach out to me on either LinkedIn or send me an email. More than happy to help and support in whatever way possible. Thank you.


Great. Thank you very much.


Thanks for watching. To watch more episodes of SecurityMetrics podcast, click on the box on the left. If you prefer to listen to this podcast, it's available on all your favorite podcast platforms. See you on the slopes.
