Maria Gini is a Professor in the Department of Computer Science and Engineering at the University of Minnesota. She studies decision making by autonomous agents for robot exploration, distributed methods for task allocation, teamwork, and AI in healthcare. She is Editor-in-Chief of Robotics and Autonomous Systems and serves on the editorial boards of numerous journals in artificial intelligence and robotics. She is a Fellow of the Association for the Advancement of Artificial Intelligence, a Fellow of the IEEE, a Distinguished Professor of the College of Science and Engineering at the University of Minnesota, and the winner of numerous awards.
I sat down with Professor Gini for an exclusive interview in which I asked her thought-provoking questions on the subject of AI.
How do you personally define AI?
For me, AI isn’t just about philosophical discussion or studying humans, and it’s not just about computational methods that can make decisions of the kind humans generally attribute to intelligent people. In the AI community, we don’t talk much about intelligence; we talk about rationality. Rationality is a more clearly defined concept in both mathematics and economics. If an agent, a program that makes decisions, chooses the actions that maximize expected utility, then we say the agent is rational. This is because any time you are confronted with choices, you always want to pick the best possible action. For a program to know what the best action is, you need some sort of evaluation or numerical value, which is called utility, a concept that comes from economics. So, for instance, you say that the utility of going to the beach is two and the utility of doing homework is ten, and somehow you need some way of computing those utilities. Again, rationality is the technical definition of AI that most of the AI community uses, because intelligence is really hard to define. People often debate about whether something is intelligent or not, but the real question is whether it is rational or not. We don’t want a program that makes bad decisions, so rationality is a good foundation for computational decision making.
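The decision rule Gini describes, pick the action with the highest expected utility, can be sketched in a few lines of Python. The actions, probabilities, and utility numbers below are illustrative only, loosely echoing her beach/homework example; they are not part of any real agent:

```python
# A rational agent, in the technical sense used in AI, chooses the action
# whose expected utility (the probability-weighted average of outcome
# utilities) is highest.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Made-up numbers for illustration: the beach is fun if the weather holds,
# homework is reliably useful.
actions = {
    "beach":    [(0.7, 2), (0.3, 0)],   # expected utility: 1.4
    "homework": [(1.0, 10)],            # expected utility: 10.0
}

print(rational_choice(actions))  # prints "homework"
```

The hard part, as Gini notes, is not the maximization itself but deciding where the utility numbers come from in the first place.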
Humans have the ability to think outside the box. We willingly break rules in order to change direction for the better, for advancement. A question then arises: if AI is guided by a set of coded laws, will it also be able to deviate from the rules that govern it and engage in creativity like humans?
Creativity and breaking rules are connected, but they are not exactly the same. A lot of work in AI shows creativity, such as computers drawing and making sketches; some drawings can be very mathematical, while others are quite different. One way computers can exhibit creativity is to use randomness, because, in a sense, creativity has to do with randomness. In fact, evolutionary algorithms, which are basically random methods with guidance, have been used for a long time in the area of creativity. They rely on evaluation functions, which essentially state whether something is good or bad; however, somebody has to define these parameters. If a computer is working on something creative, like drafting a musical piece, it doesn’t really know if the music is good or bad. You still need the human to decide what combination of tones or sounds is “good”.
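The phrase “random methods with guidance” can be made concrete with a minimal evolutionary loop: random mutation supplies the randomness, and a fitness function supplies the guidance. The fitness function here is a toy stand-in for the human judgment Gini describes; somebody still has to decide that “close to 42” counts as good:

```python
import random

def fitness(x):
    """Toy evaluation function: higher is better, best at x == 42."""
    return -abs(x - 42)

def evolve(generations=2000, seed=0):
    """A minimal (1+1) evolutionary algorithm: mutate randomly,
    keep the candidate only if the fitness function approves."""
    rng = random.Random(seed)
    best = rng.uniform(-100, 100)          # random starting point
    for _ in range(generations):
        candidate = best + rng.gauss(0, 1.0)   # random mutation
        if fitness(candidate) > fitness(best): # guidance: keep improvements
            best = candidate
    return best

print(evolve())  # lands very close to 42
```

Swap in a different fitness function and the same loop chases a different notion of “good”, which is exactly the point: the randomness is cheap, the judgment is not.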
On the topic of breaking rules, this is slightly different from creativity, because breaking rules is more intentional. For instance, maybe you see problems with the existing rules and you want to break them in order to branch into different directions. This is not as random as creativity, because of the intention involved. Here the issue is a little more complicated, because the main question becomes: can AI be ethical? When you try to break rules, if those rules are dictated by ethics, then you should not break them. You don’t want computer programs to break ethical rules. It gets more difficult, though, because not every rule should be broken. People have a better sense of which rules are good to break, because certain rules may act as an impediment to development or limit people’s potential. However, you need judgment to make these sorts of decisions, and I’m not sure anybody has really looked much into that. In any case, whether or not AI can be ethical is right now a very big discussion within the community.