Dissecting Trigonometric Particles part 1 – Particle system

Last week, I introduced a particle effects demo named Trigonometric Particles. Today, I’m going to explain the mechanics behind making it work. But first…

What is a particle system?

I’m sure you can find many articles on the definition. A particle system is basically a collection of particles, controlled through some logic you define. Uh, so what’s a particle?

Think of smoke. Imagine it being composed of thousands of small bits of carbon matter. Which it is, in a manner of speaking. And there you have it: each small bit of carbon matter is a particle.

Particles and particle systems are commonly used to simulate smoke, fire, water and explosions. Anywhere there are large numbers of particles (for lack of a better word…) behaving in a certain way, you need a system to control them. You could directly manipulate them in the program, but the logic won’t be portable to other programs. “You need smoke in another program? Here, use this particle system class.”

The particle class I used in Trigonometric Particles had a 3D position on the X-, Y- and Z-axes (defined in terms of functions of time), a colour (red, green, blue), and a life value (the length of its display or existence). Some particle systems also have particle velocities, gravities and probably concepts I haven’t even thought of. The one I’m using has few components, but the axis functions are complex.
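
In rough code, such a particle might look like this (a minimal sketch with made-up names, not the actual class from my demo):

```cpp
// A minimal sketch of a particle with the components described above.
// In the demo, the X, Y and Z values come from functions of time.
struct Particle
{
    float x, y, z;   // 3D position
    float r, g, b;   // colour components
    float life;      // 1.0 = just born, 0.0 = dead
};
```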

What happens is, for every frame of animation, you update each particle with the correct calculations. Say, one particle was here, then at the next frame, it should be there, according to the simulation logic. And you iterate through the entire list of particles, updating each component as needed (such as decreasing the life counter).

Usually, you don’t update the positional component directly. You update the velocity, and let the velocity influence the position instead. There could also be a global gravity (or local to the particle), which influences the velocity, which in turn influences the position.

So the order of updating probably goes like this: you update the position based on the current velocity. Then you update the current velocity based on the gravity component. Then you update the gravity component based on the logic you require. All of this happens in a single frame of the update. As you can imagine, that’s a lot of updating to manage by hand in a program, so it might be easier to pull the standard updating logic into a class.
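
Here’s a sketch of that ordering, assuming a simple per-frame step with made-up names (this is the general velocity/gravity approach, not my demo’s code, since I skip the velocity, as explained below):

```cpp
#include <vector>

// Standalone sketch: a particle with velocity and a per-particle gravity
// (the gravity could also be a single global value).
struct Particle
{
    float x, y, z;    // position
    float vx, vy, vz; // velocity
    float gx, gy, gz; // gravity acting on this particle
};

void UpdateParticles(std::vector<Particle>& particles, float dt)
{
    for (Particle& p : particles)
    {
        // 1. Position moves according to the current velocity.
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;

        // 2. Velocity is pulled by the gravity component.
        p.vx += p.gx * dt;
        p.vy += p.gy * dt;
        p.vz += p.gz * dt;

        // 3. Gravity itself is updated by whatever logic the effect needs
        //    (left unchanged here; a waterfall might keep it constant and downward).
    }
}
```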

I understand that this long method of indirect updating creates a smoother particle simulation. Say you’re simulating a waterfall, and drops of water are initially splashing to the right. As each water droplet is influenced by gravity (downwards), its velocity starts shifting downwards, even though it’s still moving fairly fast towards the right. On each subsequent update, it moves faster and faster towards the ground, hence simulating the gravitational effect.

In my case, I’m directly manipulating the position because I don’t need that kind of smoothness. It’s taken care of by the axis functions. I’ll explain more in another article.

Oh, and the life counter? It usually starts at 1 and goes down to 0. It’s like an alpha setting, usually used when rendering the particle. So at 1, the particle is displayed at full strength. As the counter winds down to 0, the particle is rendered at greater and greater transparency, till you finally can’t see it. Depending on your logic, you might “revive” the particle by giving it full strength and starting over, or just let it “die”.
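
In sketch form (the decay rate here is an arbitrary choice for illustration, not the one from my demo):

```cpp
// Returns the new life value for one frame. The value doubles as the render
// alpha: 1.0 is fully opaque, 0.0 is fully transparent.
float UpdateLife(float life, float dt, bool reviveWhenDead)
{
    life -= 0.5f * dt;                        // wind down from 1.0 towards 0.0
    if (life <= 0.0f)
        life = reviveWhenDead ? 1.0f : 0.0f;  // revive at full strength, or die
    return life;
}
```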

Speaking of alpha values, here’s the bitmap I used for rendering my particles.
Particle texture
I blew it up from 32 by 32 pixels to 128 by 128 pixels so you can see it better. I think I used my Bryce renderer to generate it (yes, I know it’s weird using a full 3D graphics renderer to generate a small bitmap for a particle effects demo… but it’s the only tool I had to generate it! Anyway…). It started as a white square, and I applied Gaussian filters so that a circular white blob sits in the centre, fading to black at the edges.

You can do this with Paint.NET as well (I didn’t have it back then). Version 3.36 allows you to use the Gradient tool. Just use the radial gradient.
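
If you don’t have an image editor handy, a similar blob can also be generated procedurally. Here’s a small sketch that writes a 32 by 32 greyscale PGM file; the linear falloff is my own choice for illustration, not the exact result of the Gaussian filtering described above:

```cpp
#include <cmath>
#include <fstream>

int main()
{
    const int size = 32;
    std::ofstream out("particle.pgm");
    out << "P2\n" << size << " " << size << "\n255\n"; // ASCII greyscale PGM header

    for (int y = 0; y < size; ++y)
    {
        for (int x = 0; x < size; ++x)
        {
            float dx = x - (size - 1) / 2.0f;
            float dy = y - (size - 1) / 2.0f;
            // 0.0 at the centre, about 1.0 at the edges.
            float dist = std::sqrt(dx * dx + dy * dy) / (size / 2.0f);
            float v = dist >= 1.0f ? 0.0f : 1.0f - dist; // bright centre, dark edges
            out << static_cast<int>(v * 255.0f) << " ";
        }
        out << "\n";
    }
    return 0;
}
```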

In my demo, I mapped this bitmap as a texture onto my particles. I set the bitmap to the specified colour of the particle, and used the “height map” value of the bitmap as an alpha value. So if the particle is red, then it’s rendered like the bitmap texture, but in red, fading gradually to transparency towards the edges.

Then I used the life counter to further augment the alpha value. At full life, the centre of the bitmap is rendered at zero transparency (full opaqueness). As the life counter goes down, the bitmap is rendered more and more transparently, with the alpha following the life counter. Just look at the point in my video where I mention fading the top of the tornado; that’s what I was doing. Other times, I just rendered all particles at full strength. Just download my demo and play with it already!
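
In sketch form, the per-texel combination might look like this (made-up names and 0 to 255 packing for illustration, not my demo’s actual rendering code):

```cpp
#include <cstdint>

struct Rgba { std::uint8_t r, g, b, a; };

// For one texel: the particle supplies the colour, the greyscale bitmap value
// supplies the shape (bright centre, dark edges), and the life counter fades
// the whole particle out.
Rgba ShadeTexel(float bitmapValue,                   // 0.0 (edge) to 1.0 (centre)
                float red, float green, float blue,  // particle colour, 0.0 to 1.0
                float life)                          // 1.0 = full strength, 0.0 = gone
{
    Rgba out;
    out.r = static_cast<std::uint8_t>(red   * 255.0f);
    out.g = static_cast<std::uint8_t>(green * 255.0f);
    out.b = static_cast<std::uint8_t>(blue  * 255.0f);
    out.a = static_cast<std::uint8_t>(bitmapValue * life * 255.0f); // "height map" value times life
    return out;
}
```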

The bitmap texture is in black and white because its purpose is to give shape to the rendered output, which is a circular blob fading at the edges. Colour is provided by the particle’s properties. And it’s small in size, because my particles are small.

This also means that your particle system can render the particles in some other way. For example, you could use a different texture, such as wispy puffs, which you could bunch together and animate to look like moving clouds.

Or you could render full 3D models using the positional information of the particles. Rendering texture-mapped quadrilaterals is faster than rendering 3D models, which matters because there are a lot of particles. But depending on your situation (maybe you need fewer particles) and the targeted computer speed, 3D models may be more suitable.

And that’s all I have to say for now. Want to ask about, or add to, my particular brand of particle system (the one used in my demo), or particle systems in general? Post a comment below or email me.

Computer virus behaviour thesis

I was going through some web site logs, and found an interesting combination of search terms. This site was listed in search results for “computer virus”, “dissertation” and “thesis”. I racked my brain for the reason why. Then I remembered I wrote something about the computer virus behaviour thesis for my final year project in university.

I rummaged through my collection of important CDs, and found it. With a small slip of paper that reads “Vincent’s Honours project — DO NOT ERASE!!!”, it’s easy to guess what’s inside that CD. Alas, it didn’t contain a PDF of my dissertation. It did contain all the source LaTeX and PostScript files. I wasn’t really in the mood to install a converter, so I went for the easy way out: ps2pdf.com.

Download the computer virus behaviour thesis in PDF (372 KB). I’d appreciate some credit if you find anything useful.

In fact, I’m feeling generous. Here, download the whole shebang (525 KB). The zip file contains all the program source code, MATLAB code files, LaTeX source files, PostScript files, some image files and even the PowerPoint slides I used for my presentation.

It’s been what, 6 years? So don’t expect me to remember a lot of stuff… A short breakdown then.

The “program” folder contains the source code for the simulation program running on Unix. Note the accompanying MATLAB files. This is where the output of the C program was turned into data files that the MATLAB code used to generate graphs. Note the narcissistic name of vince.c *smile*.

The “winprog” folder contains the source code for the simulation program running on a Windows computer. I used this version for the presentation; it was more interesting when the audience could see the infection in “real” time. The code was written in the Dev-C++ IDE, and it was based on a game template I was working on, hence the use of DirectX and the game-related comments and code structure.

You’ll find a few files with the .pfn extension. If I remember correctly, that’s my custom font file format, and it stands for “Phantasy Font”. I was running a web site phantasyrealm.com then. Don’t bother looking, the site’s not there anymore; I took it down. If you look through the code, and find any references to phantasyrealm, you’ll know what it is.

Question: Can you figure out the structure of the .pfn files?
Hint: It has something to do with bits, where 0 is black and 1 is white. It also has something to do with a 256 by 256 pixel grid. Or was it 128 by 128? *smile*

The discussion on spatial graphs is interesting. Imagine an entire grid of nodes, each node connected to the node above it, below it, to the left and right of it. If the grid wraps vertically and horizontally, the topology effectively becomes a torus (aka donut).

To visualise this, imagine you have a sheet of paper. Roll it so it becomes a cylinder with open ends. Then imagine bending the ends towards each other to form a loop. Tada! Torus formed.
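
Here’s a quick sketch of those wrap-around neighbours (the grid size is arbitrary, and this is not the thesis code):

```cpp
#include <cstdio>

const int W = 8, H = 8; // arbitrary grid size for illustration

// Neighbours of node (x, y): above, below, left and right,
// with wrap-around in both directions, so every node has exactly 4.
void Neighbours(int x, int y, int out[4][2])
{
    int left  = (x + W - 1) % W, right = (x + 1) % W;
    int up    = (y + H - 1) % H, down  = (y + 1) % H;
    int n[4][2] = { { left, y }, { right, y }, { x, up }, { x, down } };
    for (int i = 0; i < 4; ++i) { out[i][0] = n[i][0]; out[i][1] = n[i][1]; }
}

int main()
{
    int nb[4][2];
    Neighbours(0, 0, nb); // even a "corner" node wraps: (7,0) (1,0) (0,7) (0,1)
    for (int i = 0; i < 4; ++i)
        std::printf("(%d, %d)\n", nb[i][0], nb[i][1]);
}
```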

I presented 4 points in the thesis:

  • Topology
  • Advanced alarm system (foreknowledge of the virus epidemic)
  • Natural response system (higher vigilance after infection, decreased immunity over time)
  • Periodic activity system (computers and users are not “on” all the time)

I also remember my thesis advisor asking if any one point could be taken out and still produce the desired results. I said no. It’s only now that I realise he was referring to irreducible complexity.

So, I hope I’ve given you some useful information, some points to ponder and a fun computer virus simulation program to play with. Have fun!

Multi-sensory input in artificial intelligence

Artificial intelligence has always been one of my favourite topics of interest. Artificial intelligence, or AI as it’s commonly known, is

the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.

I think of AI more in terms of simulation. How can a certain behaviour be simulated? How can a certain phenomenon be simulated? For example, flock behaviour can be simulated using simple rules, yet result in complex patterns. I’ve even tried my hand at computer virus simulations.
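
As a rough illustration of “simple rules, complex patterns”, here’s a sketch along the lines of the classic flocking rules (cohesion, alignment, separation). The weights and the all-pairs neighbourhood are arbitrary simplifications, not code from any of my projects:

```cpp
#include <cstddef>
#include <vector>

struct Boid { float x, y, vx, vy; };

void Step(std::vector<Boid>& flock, float dt)
{
    for (std::size_t i = 0; i < flock.size(); ++i)
    {
        float cx = 0, cy = 0, ax = 0, ay = 0, sx = 0, sy = 0;
        for (std::size_t j = 0; j < flock.size(); ++j)
        {
            if (i == j) continue;
            cx += flock[j].x;  cy += flock[j].y;  // cohesion: average position
            ax += flock[j].vx; ay += flock[j].vy; // alignment: average velocity
            sx += flock[i].x - flock[j].x;        // separation: push away
            sy += flock[i].y - flock[j].y;
        }
        float n = static_cast<float>(flock.size() - 1);
        if (n > 0)
        {
            flock[i].vx += 0.01f  * (cx / n - flock[i].x)   // steer towards the centre
                         + 0.05f  * (ax / n - flock[i].vx)  // match the neighbours' heading
                         + 0.001f * sx;                     // keep a little distance
            flock[i].vy += 0.01f  * (cy / n - flock[i].y)
                         + 0.05f  * (ay / n - flock[i].vy)
                         + 0.001f * sy;
        }
    }
    for (Boid& b : flock) { b.x += b.vx * dt; b.y += b.vy * dt; }
}
```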

There was a time when I was into game development. I mean, like really into it. I joined an independent game development group. Can’t remember from where (I think I found the group through GameDev). Can’t remember the group name either (I think it had a “rhino” in it somewhere…).

Anyway, there wasn’t much documentation yet, except that it was a “get the flag” kind of game (kinda like paintball). I was in charge of the AI component for simulating the computer-controlled opposing team. Since this genre was new to me (I don’t play these kinds of games, real life or otherwise), I had to do some research and thinking.

I came up with a visual and an aural component for the AI. The enemy would have the standard 60 (or 45 or 90) degree field of vision, with a cutoff distance (or a gradual decline of visibility). Then there’s 360-degree aural sensory input, but with a small cutoff distance.

This allowed seeing long distances but with a narrow scope. Rendering fog only limits the player; computer-controlled enemies can see to infinity, hence the cutoff distance. And if we only limit the vision, a player can sneak up to an enemy from behind and whack him on the virtual head without the enemy ever noticing. Not realistic.

So I thought up a 360-degree “alarm” system by incorporating an aural component. Any sound made by the player, such as moving on sand (or water or gravel) or firing, would alert the enemy (from behind, sideways, anywhere).
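
Here’s a sketch of the two checks; all the numbers and names are illustrative assumptions, not the group’s actual code:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Vision: limited angle and limited distance. enemyFacing is assumed normalised.
bool CanSee(Vec2 enemyPos, Vec2 enemyFacing, Vec2 playerPos,
            float fovDegrees = 60.0f, float viewDistance = 50.0f)
{
    float dx = playerPos.x - enemyPos.x;
    float dy = playerPos.y - enemyPos.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist > viewDistance || dist == 0.0f)
        return false;
    // Angle between the facing direction and the direction to the player.
    float cosAngle = (enemyFacing.x * dx + enemyFacing.y * dy) / dist;
    float halfFov  = fovDegrees * 0.5f * 3.14159265f / 180.0f;
    return cosAngle >= std::cos(halfFov);
}

// Hearing: full 360 degrees, but only within a small cutoff distance.
bool CanHear(Vec2 enemyPos, Vec2 soundPos, float hearingDistance = 10.0f)
{
    float dx = soundPos.x - enemyPos.x;
    float dy = soundPos.y - enemyPos.y;
    return dx * dx + dy * dy <= hearingDistance * hearingDistance;
}
```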

I was going to add some sensitivity control to the aural component, like being more receptive to sounds from in front than from behind. Then I decided that long-distance communication, slow responses and school work were too much. So I sent a polite email to the team leader saying I was glad to have been part of the team and that I really needed to focus on my studies. Then I left.

That ended my short-lived stint as a game developer.

This just came into my mind. In a virtual environment, autonomous agents can be omnipresent and omniscient. They can be anywhere in the blink of an eye and know everything. In a virtual environment, we try to limit the capabilities of our simulations to create a believable, realistic character.

In the real world, we try to expand the capabilities of our simulations running in robots. In our physical world, robots already face lots of limitations.

I see a squeeze theorem event waiting to happen…

Write code that helps write code

I have discovered a rather surprising fact about my fellow programmers. Some of them hate programming. They go through the day writing code, copying and pasting where appropriate, crushing bugs, and then go home. Their distaste for writing code has blinded them to the notion of creating programs to help them in their work. They will only create a program for a project, and not create smaller helper programs to ease the work.

Code generators
For my university thesis, I wrote a simulation program for computer virus behaviour. It was coming along fine, until I wanted graphs of my results. The simulation program was written in C, and I didn’t know how to create graphs from C. Luckily, I had some experience with MATLAB, and remembered how to create graphs of mathematical functions.

So I added some code to my simulation to output the results to a text file, which became the input for my graph-generating MATLAB code. Basically, I used two different programming languages to help with my work: the output of one became the input of the other.
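
The idea, in sketch form (not the original thesis code; the file name and columns are made up for illustration):

```cpp
#include <fstream>

// The simulation dumps its results as plain numbers in a text file,
// one row per time step, which MATLAB can then load and plot.
void WriteResults(const double* timeSteps, const double* infectedCounts, int n)
{
    std::ofstream out("results.dat");
    for (int i = 0; i < n; ++i)
        out << timeSteps[i] << " " << infectedCounts[i] << "\n";
}

// In MATLAB, something like:
//   data = load('results.dat'); plot(data(:,1), data(:,2));
```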

Random data generation
There was a time when I had to generate a huge amount of data in the database, for a presentation of a web application I developed. And it had to be relatively similar to the real production data. Copying and pasting SQL insert statements wasn’t going to cut it.

So I wrote a small program to generate the insert statements into a SQL script. Then I ran that script in the database. A couple of minutes to run the generation program, and another couple of minutes to run the script, and bingo, mass random data records.
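
Again in sketch form (the table and column names here are made up for illustration, not the actual schema or program):

```cpp
#include <cstdlib>
#include <fstream>
#include <string>

int main()
{
    std::ofstream script("insert_data.sql");

    // Generate thousands of INSERT statements with lightly randomised values,
    // then run the resulting script against the database.
    for (int i = 1; i <= 10000; ++i)
    {
        std::string name = "Customer " + std::to_string(i);
        int amount = 100 + std::rand() % 900; // vaguely production-like value
        script << "INSERT INTO Orders (OrderId, CustomerName, Amount) VALUES ("
               << i << ", '" << name << "', " << amount << ");\n";
    }
    return 0;
}
```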

Testbed applications
I have 3 testbed projects: one for console output, one for Windows programs and one for web applications. They are used to test out theories, like what the output of DateTime.Now.ToString("dd/MM/yyyy") would look like.

By having these testbeds, I can test my code in a tightly controlled environment, because each testbed application does only one thing: run the code I want to test. Sometimes, testing the code in the actual program is hard, because there can be a lot of other stuff around. Testbeds allow me to test just that chunk of code without all the other distracting fluff, which helps me write better code.
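
That particular example is .NET, but the testbed idea works in any language: a tiny console program that does exactly one thing, so the output can be checked in isolation. A minimal sketch (not one of my actual testbed projects) of the same date-format question in C++:

```cpp
#include <ctime>
#include <iomanip>
#include <iostream>

int main()
{
    // Print today's date as dd/MM/yyyy, just to see what it looks like.
    std::time_t now = std::time(nullptr);
    std::tm local = *std::localtime(&now);
    std::cout << std::put_time(&local, "%d/%m/%Y") << "\n";
    return 0;
}
```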

Custom tool programs
Sometimes, a fully working program is written just so its output can be used in another program. The demoscene and game production are examples of this need.

Demos are computer programs generating stunning visuals and music in real time. Some demo groups, like Conspiracy, have created full-blown applications to help create content such as 3D models and sound files.

Game production teams have tool programs, such as a program for artists to preview texture mapping on a 3D model, or dungeon creation programs to produce the 3D dungeon models used in the game.

These tool programs are not packaged in the final product, yet they are essential to its success, because they make creating the product easier.

Conclusion
Writing code can be easier. Professional code generation software might be overkill for a tiny bit of code, and copying and pasting can be tedious. With a little imagination, you can make your programming life easier by writing code to help you write code.