What Makes AI So Weird, Good, and Evil

Artificial intelligence has changed the way we roam the internet, buy things, and in many cases, navigate the world. At the same time, AI can be incredibly weird, such as when an algorithm suggests “Butty Brlomy” as a name for a guinea pig or “Brother Panty Tripel” as a beer name. Few people are more familiar with the quirks of AI than Janelle Shane, a scientist and neural network tamer who lets AI be weird in her spare time and runs the aptly named blog AI Weirdness.

Janelle Shane released a book this month titled You Look Like a Thing and I Love You. It’s a primer for those who want to know more about how artificial intelligence really works, and entertainment for those who simply want to laugh at just how silly a computer can be. We talked with Shane about why she likes AI, how its strangeness affects our lives, and what the future might hold.

What first got you interested in AI?

Janelle Shane: Just after high school, when I was deciding what I wanted to do in college, I attended this really fascinating talk by a guy who was studying evolutionary algorithms. What I remember most from the talk are these stories about algorithms solving problems in unexpected ways, or coming up with a solution that was technically right but not really what the scientist had in mind. One of the ones that made it into my book is an anecdote where people tried to get one of these algorithms to design a lens system for a camera or a microscope. It came up with a design that worked really well, but one of the lenses was 50 feet thick. Stories like these really captured my attention.

What is artificial intelligence, in the simplest terms?

Shane: AI is one of those terms that’s used as a catch-all. The same word gets used for science fiction, for products that are actually using machine learning, all the way to things that are called AI but where real humans are actually giving the answers. The definition I tend to go with is the one that software developers mostly use, which refers to a specific type of program called a machine learning algorithm. Unlike traditional rules-based algorithms, where a programmer has to write step-by-step instructions for the computer to follow, with machine learning you just give it the goal and it tries to solve the problem itself via trial and error. Things like neural networks and genetic algorithms, there’s a bunch of different technologies that fall under that umbrella.

One of the big differences is that when machine learning algorithms solve a problem, they can’t explain their reasoning to you. It takes a lot of work for the programmer to go back and check that it actually solved the right problem and didn’t completely misinterpret what it was supposed to do. That’s a big difference between a problem solved by humans and one solved by AI. Humans are intelligent in ways we don’t understand. If you give humans a description of the problem, they’ll be able to understand what you’re asking for or at least ask clarifying questions. An AI isn’t smart enough to understand the context of what you’re asking for, and as a result, may end up solving the completely wrong problem.
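
To make the “give it the goal and let it work out the steps” idea concrete, here is a minimal, hypothetical Python sketch of trial-and-error search, in the spirit of what Shane describes but not taken from her book or any particular system: the program is only given a way to score a guess against a target phrase (the phrase and alphabet are invented for the example), never step-by-step instructions for building it.

```python
import random

# Toy illustration of "give it the goal and let it solve the problem by trial
# and error." A rules-based program would spell out every step; here we only
# define a score. The target phrase and alphabet are invented for this sketch.
TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(guess):
    """Goal definition: how many characters already match the target phrase."""
    return sum(g == t for g, t in zip(guess, TARGET))

# Start from a random guess, then keep any random mutation that scores
# at least as well as the current guess.
guess = [random.choice(ALPHABET) for _ in range(len(TARGET))]
while score(guess) < len(TARGET):
    position = random.randrange(len(TARGET))
    mutant = list(guess)
    mutant[position] = random.choice(ALPHABET)
    if score(mutant) >= score(guess):
        guess = mutant

print("".join(guess))  # eventually prints "hello world"
```

Nothing in that loop “knows” what the phrase means; it only ever sees the score, which is exactly why such systems can end up optimizing the wrong thing when the score doesn’t capture what we actually wanted.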

What did you think about while you were translating this very technical topic for readers?

Shane: It was a bit of a challenge to figure out what I was going to cover and how I was going to talk about AI, which is such a fast-moving world with so many new papers and new products coming out. It’s 2019, and 2017 [when I started writing the book] was ages ago in the world of AI. One of the biggest challenges was how to talk about this stuff in a way that will still be true by the time the book gets published, let alone when people read about it in five or 10 years. One of the things that helped was asking what has remained true, and what we saw happening in the earlier days of AI research that’s still happening now. One of those things, for example, is this tendency for machine learning algorithms to come up with alternatives to walking. If you let them, their favorite thing to do is assemble themselves into a tall tower and fall over. That’s way easier than walking. There are examples of algorithms doing this in the 1990s and recent examples of them doing it again.

What I really love is this flavor [of results] where AI tends to hack the simulations that it’s in. It’s not a product of them being very sophisticated things. If you go back to early, simple simulations, little programs, they will still figure out how to exploit the flaws in the matrix. They’re in a simulation that can’t be perfect: there are shortcuts you have to take in the math because you can’t do perfectly realistic friction, and you can’t do really realistic physics. These shortcuts get glommed onto by machine learning algorithms.

One of the examples I love that illustrates it beautifully is this programmer in the 1990s who built a program that was supposed to beat other programs at tic-tac-toe. It played on an infinitely large board to make it interesting and would play remotely against all these other opponents. It started winning all of its games. When the programmers looked to see what its strategy was, no matter what the opponent’s first move was, the algorithm’s response was to pick a really huge coordinate really far away, in the farthest reaches of this infinite tic-tac-toe board it could specify. Then the opponent’s first job would be to try to represent this suddenly huge tic-tac-toe board, but in trying to build a board that big, the opponent would run out of memory, crash, and forfeit the game. In another example, [an AI] was told to eliminate sorting errors. It learned to eliminate the errors by deleting the list entirely.
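
The sorting anecdote is a tidy way to see this kind of loophole-finding in miniature. The snippet below is a hedged toy reconstruction, not the original system: if the objective only counts out-of-order pairs, returning nothing at all scores exactly as well as genuinely sorting, so an optimizer that only looks at the number has no reason to prefer the intended behavior.

```python
import random

def sorting_errors(lst):
    """The stated goal: the number of adjacent pairs that are out of order."""
    return sum(1 for a, b in zip(lst, lst[1:]) if a > b)

data = [random.randint(0, 100) for _ in range(20)]

# Deleting the list "eliminates sorting errors" just as well as sorting it.
print("original list:  ", sorting_errors(data), "errors")
print("actually sorted:", sorting_errors(sorted(data)), "errors")
print("list deleted:   ", sorting_errors([]), "errors")  # perfect score, zero work
```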

Can you get into that a bit more? How do we avoid these negative consequences?

Shane: We sometimes find out that AI algorithms aren’t optimizing what we hoped they would. An AI algorithm might figure out that it can increase human engagement on social media by recommending polarizing content that sends people down a conspiracy theory rabbit hole. YouTube has had trouble over this. They want to maximize viewing time, but the algorithm’s way of maximizing viewing time isn’t quite what they want. We get all kinds of examples of AI glomming onto things it’s not supposed to know about. One of the tricky parts about trying to build an algorithm that doesn’t pick up on human racial bias is that, even if you don’t give it information on race or gender in its training data, it’s good at working out those details from clues like zip code and college, and figuring out how to imitate the really strong bias signal that it sees in its training data.

When you see companies say, “Don’t worry, we didn’t give our algorithm any information about race, so it can’t be racially biased,” that’s the first sign that you have to worry. They probably haven’t checked whether the algorithm has nevertheless figured out a shortcut. It doesn’t know not to do this because it’s not as smart as a human. It doesn’t understand the context of what it’s being asked to do.
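
As a loose illustration of that proxy effect, here is an entirely synthetic sketch (the group labels, zip codes, and probabilities are invented; this is not any real dataset or system): even with the sensitive column withheld, a correlated feature like zip code can nearly reconstruct it, which is all a model needs to reproduce the bias in its training data.

```python
import random

random.seed(0)
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])  # the attribute withheld from training
    if random.random() < 0.9:
        # Assumed heavy residential segregation: each group mostly has its own zip codes.
        zip_code = random.choice(["11111", "22222"] if group == "A" else ["33333", "44444"])
    else:
        zip_code = random.choice(["11111", "22222", "33333", "44444"])
    people.append((group, zip_code))

# How well does zip code alone recover the withheld attribute?
guess = {"11111": "A", "22222": "A", "33333": "B", "44444": "B"}
accuracy = sum(guess[z] == g for g, z in people) / len(people)
print(f"group recovered from zip code alone: {accuracy:.0%}")  # about 95% here
```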

There are AI algorithms making decisions about us all the time. AI decides who gets loans or parole, how to tag our photos, or what music to recommend to us. But we get to make decisions about AI, too. We get to decide if our communities will allow facial recognition. We get to decide if we want to use a new service that’s offering to screen babysitters by their social media profiles. There’s an amount of education that we as consumers can really benefit from.

And where did your title, You Look Like a Thing and I Love You, come from?

Shane: An AI was trying to generate pickup lines, and this was one of the things it generated. It was my editor who picked it as the title. I wasn’t quite sure at first, but so far everyone I’ve said the title to has just grinned, whether they’re familiar with how it was generated or not. I’m completely won over and am really pleased to have it as my book title.
