Multi-sensory input in artificial intelligence

[Image: Robot playing chess]

Artificial intelligence has always been one of my favourite topics of interest. Artificial intelligence, or AI as it's commonly known, is the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.

I think of AI more in terms of simulation. How can a certain behaviour be simulated? How can a certain phenomenon be simulated? For example, flock behaviour can be simulated with a few simple rules per agent, yet the result is complex, lifelike patterns. I've even tried my hand at computer virus simulations.
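Just to illustrate the "simple rules, complex patterns" idea, here's a rough boids-style sketch of flocking: each agent only follows three local rules (cohesion, alignment, separation). The weights and names here are made up for illustration, not from any real project.

```python
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, neighbour_radius=10.0):
    """Advance the flock one tick using three simple local rules."""
    for b in boids:
        neighbours = [o for o in boids
                      if o is not b
                      and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < neighbour_radius ** 2]
        if not neighbours:
            continue
        n = len(neighbours)
        # Cohesion: steer towards the average position of nearby boids.
        cx = sum(o.x for o in neighbours) / n - b.x
        cy = sum(o.y for o in neighbours) / n - b.y
        # Alignment: match the average velocity of nearby boids.
        ax = sum(o.vx for o in neighbours) / n - b.vx
        ay = sum(o.vy for o in neighbours) / n - b.vy
        # Separation: move away from boids that are too close.
        sx = sum(b.x - o.x for o in neighbours)
        sy = sum(b.y - o.y for o in neighbours)
        b.vx += 0.01 * cx + 0.05 * ax + 0.03 * sx
        b.vy += 0.01 * cy + 0.05 * ay + 0.03 * sy
    for b in boids:
        b.x += b.vx
        b.y += b.vy
```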

There was a time when I was into game development. I mean, really into it. I joined an independent game development group. Can't remember where from (I think I found the group through GameDev). Can't remember the group name either (I think it had a "rhino" in it somewhere…)

Anyway, there wasn't much documentation yet, except that it was a "get the flag" kind of game (kinda like paintball). I was in charge of the AI component for simulating the computer-controlled opposing team. Since this genre was new to me (I don't play these kinds of games, real life or otherwise), I had to do some research and thinking.

[Image: Visual and aural range AI component]

I came up with the idea of using a visual and an aural component for the AI. The enemy would have the standard 60 (or 45, or 90) degree field of vision, with a cutoff distance (or a gradual decline of visibility). Then there's 360 degree aural sensory input, but with a small cutoff distance.

This allows the enemy to see long distances, but within a narrow scope. Rendering fog only limits the player; computer-controlled enemies could otherwise see to infinity, hence the cutoff distance. But if we limit the vision like this, a player can sneak up on an enemy from behind and whack him on the virtual head. Not realistic.
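To make that concrete, here's roughly what the vision check looked like in my head: a cone of a given width around the enemy's facing direction, cut off at some distance. The function and parameter names are just illustrative, not the actual project code.

```python
import math

def can_see(enemy_pos, enemy_facing_deg, player_pos,
            fov_deg=60.0, view_distance=50.0):
    """Return True if the player falls inside the enemy's vision cone.

    The cone is fov_deg wide, centred on enemy_facing_deg, and cut off
    at view_distance so the enemy can't see to infinity.
    """
    dx = player_pos[0] - enemy_pos[0]
    dy = player_pos[1] - enemy_pos[1]
    if math.hypot(dx, dy) > view_distance:
        return False
    angle_to_player = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between the facing angle and the player angle.
    diff = (angle_to_player - enemy_facing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0
```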

So I thought up a 360 degree "alarm" system by incorporating an aural component. Any sound made by the player, such as moving over sand (or water, or gravel) or firing, would alert the enemy (from behind, from the side, from anywhere).
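The aural side is conceptually even simpler: since hearing covers all 360 degrees, only distance matters. Again, a sketch with made-up names and numbers.

```python
import math

def can_hear(enemy_pos, sound_pos, sound_loudness=1.0, hearing_distance=10.0):
    """Return True if a sound event (footsteps on gravel, gunfire, ...)
    is close enough for the enemy to notice.

    Hearing is 360 degrees, so direction doesn't matter; louder sounds
    carry further by scaling the cutoff.
    """
    dx = sound_pos[0] - enemy_pos[0]
    dy = sound_pos[1] - enemy_pos[1]
    return math.hypot(dx, dy) <= hearing_distance * sound_loudness
```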

I was going to add some sensitivity control to the aural component, like being more receptive to sounds from in front than from behind (something like the sketch below). Then I decided that long-distance communication, slow responses and schoolwork were too much. So I sent a polite email to the team leader saying I was glad to have been part of the team but really needed to focus on my studies. Then I left.
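For the curious, that front-biased sensitivity might have been as simple as scaling the hearing cutoff by the angle of the sound relative to where the enemy is facing. This is purely hypothetical; I never got around to writing it.

```python
import math

def can_hear_directional(enemy_pos, enemy_facing_deg, sound_pos,
                         hearing_distance=10.0, rear_penalty=0.5):
    """Like can_hear, but more receptive to sounds from in front.

    A sound directly behind the enemy is heard at only rear_penalty
    times the normal distance; directly in front uses the full distance.
    """
    dx = sound_pos[0] - enemy_pos[0]
    dy = sound_pos[1] - enemy_pos[1]
    angle_to_sound = math.degrees(math.atan2(dy, dx))
    diff = abs((angle_to_sound - enemy_facing_deg + 180.0) % 360.0 - 180.0)
    # Scale the cutoff from 1.0 (directly in front) down to rear_penalty (directly behind).
    scale = 1.0 - (1.0 - rear_penalty) * (diff / 180.0)
    return math.hypot(dx, dy) <= hearing_distance * scale
```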

That ended my short-lived stint as a game developer.

This just came to mind: in a virtual environment, autonomous agents can be omnipresent and omniscient. They can be anywhere in the blink of an eye and know everything. So in a virtual environment, we try to limit the capabilities of our simulations to create a believable, realistic character.

In the real world, we try to expand the capabilities of our simulations running in robots. In our physical world, robots already face lots of limitations.

I see a squeeze theorem event waiting to happen…
