eBay Sees Extra Risks Around Artificial Intelligence Software
An eBay (NASDAQ: EBAY) executive who focuses on artificial intelligence recently sat down for an interview in which he described some of the unique challenges facing IT managers who use AI software.

In this video from “The Virtual Opportunities Show,” recorded on Jan. 18, Fool.com analyst Asit Sharma and Fool.com contributor Demitri Kalogeropoulos discuss the extra risks tech companies face as they use more AI across their systems.

Asit Sharma: This is an article that our colleague, ProShopGuy, Mike McMahon, had tweeted out. It’s called “Why eBay’s AI Chief Is Setting Guardrails for Use of Low-Code AI.” eBay is stepping into the AI space in a big way. They’ve been working with machine learning for years, Demitri, trying to get their products placed better when you visit the site, so they’re not alone in this endeavor. But eBay’s push into AI even extends to having merchants on its platform use a little bit of AI in their systems. They’re spreading a lot of AI through their organization. They hired a new Chief AI Officer. I guess maybe that’s CAIO; I wonder what the abbreviation is. [inaudible 05:16:17] that in the executive suite.

His name is Nitzan Mekel-Bobrov. It’s a very interesting article, in a publication called Protocol. Mekel-Bobrov makes one important point that I really loved; it hadn’t occurred to me in quite that way. He says that when you update software, you can plan to update it at some point in time when your needs change, when maybe a customer has a need, or when you’re doing maintenance: you can take code, optimize it, replace it. But AI isn’t like that. He sees a problem in that much of society looks at AI, artificial intelligence, as just another piece of software code. The nugget or insight that I loved was that AI is something that reacts to the world. It watches what’s going on in the world and reacts to it. His point is that with AI, the piece of software could be performing correctly, but you need to monitor it, because the world changes and the software is reacting to the world.

Not so with other types of software, which are really updated at a time of your choosing, unless you have a bug or need to jump in before a scheduled update. AI is always reacting to data. It’s reacting to things that change. We see this in larger systems, and we see it already in the products we use. For example, on my Spotify, if I start switching from my Turkish songs to my Brazilian songs, it wants to show me a lot of Brazilian music. If I switch to jazz, suddenly I’m seeing a lot of Charlie Parker in my feed. Shout out to our producer Adam Lanphier, who happens to be a jazz saxophonist. Actually, he sent me some pictures of all of his horns, which I had to admire because it took me back to a time when tenor saxophonists used to play a lot of instruments.

They play the flute, they pick up the soprano horn, they pick up the bass clarinet. I think Adam had most of these on some recordings; I still have to ask him offline. Otherwise, I’ll keep talking about that, but kudos to him. This is something that we need to pay attention to as a society. What eBay is doing is putting up some guardrails around the low-code AI that they are allowing employees in their company to use. Their software team develops some helpful modules, and I’m guessing this is internal; you see this in a lot of companies.

They’re putting some guardrails on how a department can request and use that AI, and they are being very careful about the distribution of it to merchants and users on their platform. I really like this. I think this is the thing we’ve been talking about and arguing about. I say “argued,” but most of us seem to have the same opinion: that AI is very powerful, but it needs some healthy regulation. Thoughts on this, Demitri? I thought this was a great short article; again, the publication is called Protocol. I think more companies would do well to adopt some type of posture like this as they implement AI and ML code throughout their organizations.

Demitri Kalogeropoulos: Yeah. I’m still mystified by how that works, the whole AI software idea. I think I’m like a lot of people: when I picture AI, I picture a robot or some robotic expression of it. It’s a little hard to visualize, and I’d love to see it in action, because I know so many companies are using AI all throughout their IT, their platforms, and their software.

But I thought that was interesting, that little part you talked about in the article. A little bit later down, he says that a mistake a lot of companies are making is just, like you said, installing AI and then treating it like ordinary software, just letting it go. He says, I don’t want to say “living,” but you want to treat it more like a living thing. It’s constantly changing and evolving, and it needs to be monitored. I think that’s pretty amazing, but also scary to think about. You have software running all the way through your system that you can’t trust to stay completely the same. I mean, its entire job is to change and learn, so it’s going to be different, so you’ve got to watch it.

Sharma: Brings to mind those two famous words from, I think, Frankenstein: “It’s alive.”

This article represents the opinion of the writer, who may disagree with the “official” recommendation position of a Motley Fool premium advisory service. We’re motley! Questioning an investing thesis — even one of our own — helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.
