Yes, for sure!

Wiendelt Steenbergen

The latest column by full professor Wiendelt Steenbergen for Campus Magazine. This time on the question of whether researchers who decide to deploy artificial intelligence essentially get it. 'In any case, it would be good if there were some kind of ethical leaflet.'

Normally, my credit card company bores me with payment reminders. But this message is different: they’ve detected a suspicious transaction and blocked my card. Did I spend the exact same amount ten times at different branches of the same store chain the day before? It’s possible, of course. I could have cashed in on a bunch of discounts with a stack of coupons at different shops (‘Two max per customer!’). But the transactions took place at different Walgreens stores in Los Angeles and I’m at home in Enschede.

The new year has just begun and I’m still working on finishing off the last of the oliebollen, so it’s unlikely that I spent around 500 euros on drugstore items in a rough neighbourhood of Los Angeles yesterday. It was a good catch by the credit card company; long live the artificial intelligence they used for this! It’s a classic example of artificial intelligence put to good use.

Artificial intelligence, AI: everywhere you turn, it’s AI this, AI that. The credit card company’s vigilance is great, but it doesn't take much imagination to see how this piece of technological ingenuity could go off the rails. Maybe it’s just me, but it often seems easier to think about the bad than the good. The bad seems versatile and endless, whereas you have to put in more effort to see the good.

Neural networks

In science too, artificial intelligence, or what is presented as such, is popping up everywhere. I’m not talking about serious AI research, but I’ve seen quite a few neural networks being set up out there that made me think: do they actually know what they’re doing? And isn’t it just a trendy fig leaf to cover up inadequate technology or scientific modelling? Now you might be thinking: Steenbergen just doesn’t get it himself. And you’re right about that, but I wonder whether all those researchers who decide to jump on the AI bandwagon do, in fact, get it. In any case, it would be a good thing if these neural network tools came with some sort of ethical instruction leaflet.

As for myself, I spend a good part of the day viewing the world through a Microsoft-filled screen. Over the years, more and more features have appeared at the edges of that screen that make me think: hey, what’s that doing there, and what am I supposed to do with it? The most recent addition is the unsolicited appearance of suggested replies above incoming e-mails. Another product of artificial intelligence, but with suggestions like ‘OK, enjoy!’ and ‘I feel for you!’ it’s clear that this AI hasn’t quite adopted my style of language yet, thankfully. What strikes me is that the suggestions are always variations on ‘Yes, will do!’ or ‘Agreed!’. Saying no, a good way to stay in control of your own work, does not exist in the Microsoft vocabulary. ‘Computer says yes!’

This was my last column for Campus. I hope you got some use out of them from time to time; the occasional annoyance is perfectly fine too. To dispel any remaining doubts about whether I should quit, I e-mailed myself: ‘Dear Wiendelt, it’s time you quit your column, don’t you think?’ The suggested reply: ‘Yes, for sure!’
