Featured demo – Lifeforce

Today’s featured demo is Lifeforce, by Andromeda Software Development (ASD). It’s about 24.6 MB in size and 9 minutes in length (trust me, it’s worth watching). Download page at Pouet.

This demo was also featured in the demo reel presented at NVISION 2008 (the last parts).

Some highlights:

Notice the rendered lines that form picture frames at about t = 1:00. How would you go about animating them?

The mural of scribes at about t = 4:15. If I’m correct, the effect is done with bump mapping. What you see is modelled as a flat surface; bump mapping creates the illusion of carvings and protrusions. You can see the flat edge of the mural when it flips over to the other side at about t = 4:44.

The music starts to pick up at about t = 5:30, the bathroom scene. This is actually my favourite part of the music. The tempo is quicker. It’s sort of repetitive yet still interesting to the ear. Maybe it’s just me… *smile*

This is the part where the demo portrays the coming of the end of life. It’s like trying to grasp at something, anything, to hold on to this life. The quickening beat does what ASD wanted:

create a feeling of anxiety before the revelation of the last scene

There’s a shadow of a man reaching upwards with his hands at about t = 5:55. Notice the faint shadow of that shadow of the man cast on the walls at about 6:00.

Now for some trivia. There are 3 instances of a pachyderm in the demo (where are they?). The city landscape background is used thrice (where?).

And there is a model of a shark skeleton at the beginning of the demo. The shark model is like a trademark of ASD. Didn’t see it? Go watch the demo again carefully. Powers of observation, people!

Read the readme file in the downloaded demo for the back story of the demo. It’s awesome.

Please ConvertToEnglish() – Obscure SQL

Sometimes you come across a simple SQL statement. Yet you’re confounded by the reasoning behind it. You think, “What is this trying to do?!”

Never had that experience before? Well, here’s your chance. Here’s a heavily anonymised version:

select max(is_active) from customers where customer_id = 'ABC'

And here’s the accompanying database table named “customers”.

customer_id   customer_name     is_active
ABC           ABC Corporation   Y
PQR           PQR Company       Y
XYZ           XYZ Company       N
XYZ           XYZ MegaCorp      N

Study both the SQL statement and the data in the database table. Then tell me what you think is the business logic behind the SQL statement.

What I mean is, suppose some manager needs some information. And the SQL statement is part of the solution to obtain that information. What does that manager need to know?

Need some help? Here’s an equivalent version:

select min(customer_status) from customers where customer_id = 'ABC'

customer_id   customer_name     customer_status
ABC           ABC Corporation   C
PQR           PQR Company       C
XYZ           XYZ Company       H
XYZ           XYZ MegaCorp      H

This entire exercise is to train you to decipher existing logic and data with very little help or documentation.

Bonus points: What do you think the values for the columns “is_active” and “customer_status” stand for?
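A hint for the puzzle: MAX and MIN on character columns compare values lexicographically, the same way most languages order strings. This little C# sketch (mine, not part of the original exercise) shows the ordering idea without giving the whole game away:

```csharp
using System;
using System.Linq;

// MAX over a character column works like ordering strings:
// 'Y' sorts after 'N', so if any row has 'Y', MAX returns "Y".
string[] isActiveValues = { "N", "Y", "N" };
Console.WriteLine(isActiveValues.Max()); // prints "Y"

// MIN works the same way in the equivalent query:
// 'C' sorts before 'H', so if any row has 'C', MIN returns "C".
string[] statusValues = { "H", "C", "H" };
Console.WriteLine(statusValues.Min()); // prints "C"
```

Now think about what that means when a customer_id appears in more than one row.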

Lights, camera, action!

You’ve learned a bit on viewports already. Now, we’ll look at how lights and cameras are used in a 3D environment.

Natural light (or sunlight)

Natural light comes into play when you’re working with outdoor scenes. Practically everything is lit up. Outdoor scenes are also usually filled with objects. Trees, houses, grass.

You never really think about grass until you realise you have to model and render every single blade of grass. Unless you create an illusion that there’s a sea of grass out there… Yes, there are ways.
[end digression]

Light rays in Sofia Cathedral

[image by -lvinst-]

There are also indoor scenes where natural light streams in through a window, and you get a rectangular block of light in the scene. That’s … a little out of scope for now. You can look up volumetric lighting for more information.

Right now, we’ll work on light that we can’t see. I know, it sounds contradictory, but think of it this way: you see an object because light fell on it, not because you see the light itself. Remember the short science lesson where we discussed wavelengths of different colours reflecting off surfaces?

The most prominent source of natural light is the sun (leave out the moon and the stars). As far as I know, we’ve only got one sun, so we only have one source of light to model.

For the purposes of modelling, natural light rays are treated as parallel. Strictly speaking, they are not, but by the time they reach Earth, they are almost parallel. This makes the light easier to model, because there’s only one angle to consider.

What angle am I talking about? At noon, light rays hit where you are at about 90 degrees with respect to the ground. At dawn (I know you might not wake up at dawn, just humour me), light rays hit where you are at say 10 degrees. At dusk, maybe 170 degrees. Yes, this is the angle.

With a point source some 150 million kilometres away, calculations that differ by fractions of a degree are wasteful (and unnecessary). Parallel rays simplify the calculations. We’ll look at the detailed calculations in another article. Maybe. I don’t like manual math calculations any more than you do…

Point sources of light

For simplicity, light sources are assumed to be point sources. Your desk lamp, your flashlight, your television, a candle flame, fireflies. And they emit light in all directions (yes, they’re very generous).

Different light angles on object vertices

I’ve made the light source visible so you can see its position relative to the cube, our main object in the scene. At close range, the areas around the (6) points on the cube are rendered and lit differently because light hits the cube at different angles.

Every single point on the cube is hit by our light source at a different angle. Imagine the calculations involved. This is the reason why natural light rays are assumed parallel.

You might also notice that points closer to the light source are more brightly lit. How is that modelled? Let’s look at attenuation.

Attenuation (or falloff)

There are typically 2 types of attenuation: linear and quadratic. Basically, they’re just functions of the distance between the light source and the object vertex (the point on the object) in question.

For illustration, let the “strength” of light, L, be full “power” 1.0 at the light source. The further the distance, the lower L becomes. So L could be
L = 1.0 / (c1 * d + c0)
where c1 and c0 are some constants and d is the distance. Choose c0 = 1, so that L is exactly 1.0 at the light source (d = 0) and the denominator can never be zero.

That’s linear attenuation. Then just use L as part of the lighting calculations.

What about quadratic attenuation?
L = 1.0 / (c2 * d^2 + c1 * d + c0)
where c2, c1 and c0 are some constants. Again, with c0 = 1, L is 1.0 at the source.

Relax, the graphics API you use probably handles this (almost) automatically. This is just to let you know what’s going on behind the scenes. OpenGL, for instance, has built-in support, so you can set your choice of attenuation with a function call.

There’s another type: ranged attenuation. Basically, L becomes zero beyond a certain distance. With the 2 attenuation models above, you never actually reach zero. I’m not sure how widely it’s supported by popular engines, so just keep it in mind. Maybe you’d like to implement your own, to simplify calculations or to produce a certain effect.
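Here’s a minimal sketch of the three models in C#. The constants (and the linear fade in the ranged version) are illustrative values of my own choosing; real engines let you tune them:

```csharp
using System;

// Linear attenuation: L = 1 / (c1 * d + c0).
// With c0 = 1, L is exactly 1.0 at the source (d = 0).
static float LinearAttenuation(float d, float c1 = 0.5f, float c0 = 1.0f)
    => 1.0f / (c1 * d + c0);

// Quadratic attenuation: L = 1 / (c2 * d^2 + c1 * d + c0).
static float QuadraticAttenuation(float d, float c2 = 0.25f, float c1 = 0.0f, float c0 = 1.0f)
    => 1.0f / (c2 * d * d + c1 * d + c0);

// Ranged attenuation: fade linearly to zero at maxRange, zero beyond it.
static float RangedAttenuation(float d, float maxRange)
    => d >= maxRange ? 0.0f : 1.0f - d / maxRange;

Console.WriteLine(LinearAttenuation(0.0f));        // full strength at the source
Console.WriteLine(QuadraticAttenuation(2.0f));     // dimmer further away
Console.WriteLine(RangedAttenuation(10.0f, 8.0f)); // zero beyond the range
```

Use the returned L as a multiplier in your lighting calculations.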


Different types of spotlights

The different spotlights are used for different effects. You’re probably familiar with cylindrical or conical spotlights. They’re used to highlight people when they’re on stage.

The square spotlight and parallel lights are, well, … because we can. *smile* Remember, in the virtual world, it’s sometimes easier to create certain effects. You can create a heart-shaped spotlight if you really want to.

Ambient light

Sometimes, objects are lit brighter than expected. You’ve taken into consideration the natural light and the light sources you specified. Yet the objects still look a bit brighter.

This is ambient light in effect. Maybe your calculations aren’t as precise or as close to “reality” as you thought. Think of light from the other side of the planet bouncing up to the sky, bouncing off clouds, bouncing off the sea, bounce, bounce, bounce, until it ends up in your scene. Add to that light from digital watches, fireflies, neon signs, candle flames, office buildings and what-not, and you get a low level of light that’s ever present.

To model this, we simply brighten everything up a little, say by 5%. We don’t care what the source is. See, that was easy.
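A sketch of that idea, assuming colour channels in the 0.0–1.0 range (the 5% figure and the clamp are just illustrative):

```csharp
using System;

// Add a small constant ambient term to a lit colour channel,
// clamping so we never exceed full brightness (1.0).
static float AddAmbient(float litValue, float ambient = 0.05f)
    => Math.Min(1.0f, litValue + ambient);

Console.WriteLine(AddAmbient(0.5f));  // a little brighter than before
Console.WriteLine(AddAmbient(0.98f)); // already near full, clamped to 1.0
```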


Rounding off this discussion on light, we have radiosity. Basically, it’s light bounced off another (diffuse) surface. It’s computationally intensive, so it’s not appropriate for real-time rendering and games.

I don’t have any specific pictures on radiosity to show you (because I haven’t installed my other rendering software with this feature). Try searching at Flickr for some examples.

Where’s the camera?

The virtual camera acts very much like a real-world camera. You’re just able to do more stunts with it. *smile*

Camera positions are represented using a vector with X-, Y- and Z-coordinates. Depending on the graphics API you use, you might also have a W-coordinate, which is usually 1.0. For our discussion here, it’s not important. You can refer to more information by searching on homogeneous coordinates and affine transformations.

What are you looking at?

Having a camera position isn’t enough. You need to know what the camera is looking at. Imagine a camera moving from one point to another, yet it keeps looking at the same object.

For example, you change the camera from one position

Camera position 1

to another position, but still looking at the same object

Camera position 2

Are you upright?

A final component of cameras is the upright position. It’s easier just to show you. First, we have the camera in upright position.

Camera in upright position

Then we tilt it left a little.

Camera tilted left

Then we tilt it right a little.

Camera tilted right

We tilt it up a little.

Camera tilted up

And we tilt it down a little.

Camera tilted down

In particular, you can tilt left or right, and still look at the object in question and keep your camera position.

In practice, we usually keep the camera upright. In this case, our representation is not a point but a direction. We still use the X-, Y- and Z-coordinates though (and W, depending on the API). Typically, we use (0, 1, 0, 0).

Note that you can probably use (0, 3, 0, 0) and still be fine. It’s the direction that counts, not the magnitude. So you don’t need to normalise the vector, or make the vector a unit vector (magnitude of 1 unit).

Note also that, depending on the graphics API you use, all 3 camera properties may be lumped together. So you may get a function requiring 9 parameters: 3 for the position, 3 for the look-at position, 3 for the upright direction.
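OpenGL’s utility library, for instance, has gluLookAt, which takes exactly those 9 values. Here’s a rough C# sketch of the idea; the Vector3 type is from System.Numerics, but the DescribeCamera function and its name are mine, not any particular API:

```csharp
using System;
using System.Numerics;

// A camera is fully described by three vectors:
// where it is, what it looks at, and which way is up.
static void DescribeCamera(Vector3 position, Vector3 lookAt, Vector3 up)
{
    // The viewing direction is simply (lookAt - position), normalised.
    Vector3 forward = Vector3.Normalize(lookAt - position);
    Console.WriteLine("Forward direction: " + forward);
}

// Camera at (0, 0, 5), looking at the origin, kept upright with (0, 1, 0).
// The forward direction comes out as (0, 0, -1): straight down the negative Z axis.
DescribeCamera(new Vector3(0, 0, 5), Vector3.Zero, new Vector3(0, 1, 0));
```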

Camera paths

Because of these representations, you can assign values to the camera position, look-at point and upright direction dynamically. This is how you create spectacular views: by constantly changing the camera position and what the camera (and viewer) looks at.

The easiest way is to iterate along a linear path. At time t1, the camera is at position 1. At t2, the camera is at position 2. Then you just do linear interpolation between t1 and t2, and there you have it: a moving camera.
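A sketch of that interpolation (the function name and types are mine, not from any particular API):

```csharp
using System;
using System.Numerics;

// Linearly interpolate the camera position between p1 (at time t1)
// and p2 (at time t2), for any time t in [t1, t2].
static Vector3 CameraAt(float t, float t1, Vector3 p1, float t2, Vector3 p2)
{
    float s = (t - t1) / (t2 - t1); // fraction of the way from t1 to t2
    return p1 + s * (p2 - p1);
}

Vector3 start = new Vector3(0, 0, 0);
Vector3 end   = new Vector3(10, 0, 0);
// Halfway through the time interval, the camera is at (5, 0, 0).
Console.WriteLine(CameraAt(0.5f, 0.0f, start, 1.0f, end));
```

You can interpolate the look-at point and upright direction the same way.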

This was the impetus behind my research in applying Bezier curves to camera positions.


So we’ve covered the basics of light for 3D development. Note that sometimes you might need to add more lights than the scene strictly calls for, even though those extra lights “shouldn’t” be there. The extra lights enhance (or sometimes correct) the final rendered scene.

Your focus should be on what’s finally rendered, not what’s accurately modelled. It’s about results.

We’ve also covered the basics of camera representations. Hopefully, you have an understanding of how to create better scenes through moving cameras.

That’s all for this lesson, and I hope you learn something from it.

Demoscene outreach reel at NVISION 2008

Taken from the video:

Demoscene: realtime underground digital art.

This video contains short samples of various demos. It was used in the demoscene section of the NVISION 2008 event, held in San Jose, California from August 25–27, 2008.

Some of the highlights include

  • Learn to build your dream multimedia computer
  • Never-before-seen computer art
  • And the demoscene!

I’m so going to miss all the fun…

Matrix multiplication code

The following code illustrates the matrix multiplication method mentioned previously. For simplicity’s sake, I’m limiting the matrices to 3 by 3.

const int cnSize = 3;
int[,] A = new int[cnSize, cnSize];
int[,] B = new int[cnSize, cnSize];
int[,] C = new int[cnSize, cnSize];
int[] x = new int[cnSize];
int[] y = new int[cnSize];
Random rand = new Random();
int i, j, k;

// fill matrices and vector with random values
for (i = 0; i < cnSize; ++i)
{
    for (j = 0; j < cnSize; ++j)
    {
        A[i, j] = rand.Next(1, 10);
        B[i, j] = rand.Next(1, 10);
    }
    x[i] = rand.Next(1, 10);
}

// matrix-vector multiplication: y = Ax
for (i = 0; i < cnSize; ++i)
{
    y[i] = 0;
    for (k = 0; k < cnSize; ++k)
        y[i] += A[i, k] * x[k];
}

// matrix-matrix multiplication: C = AB
for (i = 0; i < cnSize; ++i)
{
    for (j = 0; j < cnSize; ++j)
    {
        C[i, j] = 0;
        for (k = 0; k < cnSize; ++k)
            C[i, j] += A[i, k] * B[k, j];
    }
}

Console.WriteLine("Matrix-vector multiplication");
for (i = 0; i < cnSize; ++i)
{
    Console.Write("[");
    for (j = 0; j < cnSize; ++j)
        Console.Write(" {0}", A[i, j].ToString().PadLeft(3));
    Console.WriteLine("][{0}] {1} [{2}]", x[i].ToString().PadLeft(3), ((cnSize / 2) == i ? "=" : " "), y[i].ToString().PadLeft(3));
}

Console.WriteLine("Matrix-matrix multiplication");
for (i = 0; i < cnSize; ++i)
{
    Console.Write("[");
    for (j = 0; j < cnSize; ++j)
        Console.Write(" {0}", A[i, j].ToString().PadLeft(3));
    Console.Write("][");
    for (j = 0; j < cnSize; ++j)
        Console.Write(" {0}", B[i, j].ToString().PadLeft(3));
    Console.Write("] {0} [", ((cnSize / 2) == i ? "=" : " "));
    for (j = 0; j < cnSize; ++j)
        Console.Write(" {0}", C[i, j].ToString().PadLeft(3));
    Console.WriteLine("]");
}

You’ll notice there are a lot of nested for loops. Get used to it. Here’s a screenshot of the output:

Matrix multiplication screenshot

Exercise: Explain what this does. (a ternary operator refresher might help)

((cnSize / 2) == i ? "=" : " ")

Oval and rectangular viewports

Computer graphics is basically a simulation of our eyes, of creating a scene where our eyes will be, even (or especially) if the scene is imaginary. I’m going to touch a little bit on viewports. Think of viewports as surfaces where the scene is displayed. Your television screen or an application window on your computer are viewports with this definition.

Oval viewports

There aren’t really any oval viewports. This part is based on my observation and there are no references (that I could find).

So your eye works like a pinhole camera. What you see comes through your pupil, gets inverted vertically and falls on your retina. The retina consists primarily of rods and cones (named for their shapes). Rods are more sensitive to light and are responsible for our night vision. Cones are responsible for our colour perception.

The image that forms on your retina is circular, because your pupil is circular. Obviously this would go on for a long time if I had to explain the full biology of our optical organs, so I’m rushing through a bit.

You have two eyes, so the two resulting images overlap each other. Your eyes can only focus on one thing at a time. To “compensate” for this, your brain shifts focus very quickly, and your eyes make minute adjustments. This gives the impression that you’re looking at many things (or at least a larger area) at the same time.

Because of the overlapping circular images and the minute readjustments of the eyes, I conclude that this is why I see a sort of oval-shaped field of focus… see image below.

Retinal image

Oval viewport

I want to mention that this is a conclusion of mine based on my own observations. Feel free to let me know any citations or incorrect portions of my explanation. I’m mentioning this because this is related to…

Rectangular viewports

Most viewports are used on a computer screen, so a rectangular shape is most natural. The aspect ratio, the ratio of width to height, is typically 4:3.

One thing I want to point out is, our vision stretches to near infinite distances (or at least very very very far). Anything within our field of view (or FOV) is visible, provided it’s not blocked by something. Of course, just because it’s visible doesn’t mean we pay attention to it.

Oh dear, I seem to be rambling on and on… there’s just so much I need to tell you! *sigh*

Anyway, computer resources are limited because of calculation speed. To reduce the number of calculations, we limit the visible portion.

Rectangular viewport

We cut the visible portion using two planes, a near plane and a far plane. This coupled with a pyramidal view, creates a cut pyramid (with 6 flat planes). This cut pyramid or frustum defines our viewing volume. Anything within this is considered “visible”.

The far plane is set at a distance deemed so far away that even if an object is present there, it’s barely visible. We can cut out more objects (and hence more calculations) by moving the far plane closer to the virtual camera.

The near plane is actually our viewport. So anything between the virtual camera and the near plane is technically invisible. In practice, the near plane is actually very near the camera, like 0.01 units away.
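Here’s a tiny sketch of the near/far test in camera space, with z as the distance in front of the camera (the plane values are illustrative defaults of mine):

```csharp
using System;

// A point is within viewing distance if it lies between
// the near and far planes (measured along the camera's view axis).
static bool WithinViewingDistance(float z, float nearPlane = 0.01f, float farPlane = 1000.0f)
    => z >= nearPlane && z <= farPlane;

Console.WriteLine(WithinViewingDistance(50.0f));   // True: inside the depth range
Console.WriteLine(WithinViewingDistance(5000.0f)); // False: beyond the far plane
Console.WriteLine(WithinViewingDistance(0.001f));  // False: between camera and near plane
```

The full frustum test also checks the four side planes; this only covers the depth part.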

And that’s all I can think of that’s necessary for your understanding. I probably mislabelled some terms, either for simplification or I’m just wrong. Feel free to correct me so I can explain this part better.

Trapped aeroplane video

This is my first video production: a demo of a trapped paper aeroplane (hosted on Vimeo). Yay! It’s a demo of an SDL (Simple DirectMedia Layer)/OpenGL project I made for a university programming assignment.

The original assignment was in wireframe and in black and white (no colours). I added textures and a revolving sky background for colour. Besides, the explosion effect looks cooler with a fiery texture. Oh, I didn’t mention the explosion effect? *smile*

Unfortunately, I lost the source code. Aarrrgghhh…

Anyway, here’s where you can download the demo. Download Trapped Aeroplane [ZIP file, 380 KB]. Maybe I’ll rewrite it… XNA and C#? We’ll see. Have fun!

Featured demo – Panic Room

Today’s featured demo is Panic Room by Fairlight. It’s about 4 minutes in length and 151 KB in download size.

If you don’t see the video in your feed reader, please click through to the post.

First, some warnings. The demo executable requires a fairly capable computer with a good graphics card. Watching it in video form is recommended. I had to use a low resolution setting in windowed mode so it could run on my computer. After starting, the demo takes a while to do precalculation work, so just wait a bit.

The one thing that caught my eye was how realistic the water was when it flowed through the wall cracks near the beginning of the demo. You could see the columns of water expanding and contracting, mimicking how a real-life water column would ebb and flow in volume.

And the water ripples! That’s just awesome. Remember, all this is calculated and rendered in real-time.

The explosions and the smoke trails of the aeroplanes are done with particles. Search for “particle systems” for more information.

So, enjoy the demo!

Matrices for programmers

Following the fine tradition of the colour theory post, you are getting another crash course. This time, a lesson in matrices. You’re going to be fine. And yes, I’ll hold your hand while you do this. *smile*

For those who are mathematically inclined, we’ll be working in the realm of real numbers (which I talked about briefly when discussing floating points). Let’s start with…


Scalars

Scalars are simply numbers. For example, 2 is a scalar. So are 3.14159, 1.618 and -273.15. Bonus points if you can figure out what makes those numbers special.

Scalars are stored as normal variables in code. Your ints, floats, doubles come in handy.

Scalars are typically denoted by a lowercase letter, such as a, b or c.


Vectors

Vectors are series of scalars. For example, [1 3 5 7 9] is a vector.

You typically store vectors as an array. For example,

int[] v = new int[] { 1, 3, 5, 7, 9 };

Vectors are typically denoted by a lowercase letter in bold, such as v.


Matrices

Matrices are series of series of scalars, or series of vectors. In code, they are typically stored as an array of arrays.

int[,] A = new int[3, 3];

3 by 3 matrix

In code, matrices are often stored as multidimensional arrays. The dimension of a matrix is m-by-n, where m is the number of rows and n is the number of columns.

When either m or n is 1, we get a vector. So a vector is a special case of a matrix. And because of this, we have to define…

Row and column vectors

It’s easier to just show you how they look.

Row and column vectors

A row vector is a matrix where the number of rows is 1. A column vector is a matrix where the number of columns is 1. While we’re at it, a scalar can be thought of as a matrix where the number of rows and columns are both 1.

For our purposes of working towards 3D programming, we’ll be focusing on the column vector. It doesn’t matter which one we use when coding, but in terms of notation, we’ll be using column vectors. You will see why later on.

Matrix entries

Individual entries are referred to with the notation A[i,j] (or a_i,j), where A is the matrix, i is the i-th row and j is the j-th column. Typically, we have 1 <= i <= m and 1 <= j <= n, where m is the number of rows and n is the number of columns.

Take note, because you’ll be using them in code. Know how your programming language indexes arrays. If your language starts with the 0-th element, shift each index down by one, so that 0 <= i <= m-1 and 0 <= j <= n-1. The 0 index has tripped many a programmer, so be careful.

Square matrices

This is a special case where both the number of rows and number of columns are equal. For example, a 3 by 3 matrix, or a 4 by 4 matrix.

In a stroke of coincidence, we will also be focusing on 3 by 3 and 4 by 4 matrices. Hint: It’s because we’re working in 3D.

Identity matrix

In math, there is a number such that when you multiply anything by it, you get back the same thing. It’s the number 1. For example, 8 * 1 = 1 * 8 = 8.

We have the same concept for matrices. There is a matrix such that when you multiply any matrix by it, you get the same original matrix back. It’s called the identity matrix, typically denoted by an uppercase “I”.

Identity matrix

We’ll look at matrix operations soon.
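Meanwhile, here’s a quick sketch of building an n-by-n identity matrix in code: 1s on the diagonal, 0s everywhere else.

```csharp
using System;

// Build an n-by-n identity matrix.
static int[,] Identity(int n)
{
    int[,] I = new int[n, n]; // entries default to 0
    for (int i = 0; i < n; ++i)
        I[i, i] = 1;          // 1s down the diagonal
    return I;
}

int[,] I3 = Identity(3);
Console.WriteLine(I3[0, 0]); // 1: on the diagonal
Console.WriteLine(I3[0, 1]); // 0: off the diagonal
```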

Zero matrix

Remember the multiplicative identity described above when defining the identity matrix? Well, there’s also a number such that when you add anything to it, you get back the same thing. It’s the number 0. For example, 8 + 0 = 0 + 8 = 8.

Similarly, we have the zero matrix. It’s simply a matrix with zeroes in all its entries. It’s denoted by a big gigantic 0. Probably not quite useful to you, but nevertheless, you now know something more.

Symmetrical matrices

Symmetrical matrices are symmetrical about the diagonal. Where’s the diagonal? Look at this:

Matrix diagonal

For a 3 by 3 matrix with values:
a b c
d e f
g h i

Entries a, e and i form the diagonal. Notation-wise, the A[i,i] entries form the diagonal.

If a matrix has zeroes in entries below the diagonal, it’s known as an upper triangular matrix. In our case, d = g = h = 0.

Similarly, if a matrix has zeroes in entries above the diagonal, it’s known as a lower triangular matrix. In our case, b = c = f = 0.

What symmetry means in this case is b = d, c = g and f = h. The general formula is
A[i,j] = A[j,i]

To speed up computations when checking symmetry, some algorithms use
A[i,j] = A[j,i], where i < j
The extra condition leaves out the diagonal and the entries below it. No point double-checking values, right?
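Here’s how that check might look in code (my sketch, using the 0-based indexing discussed earlier):

```csharp
using System;

// Check symmetry by comparing only entries above the diagonal (i < j)
// with their mirrored counterparts below it.
static bool IsSymmetrical(int[,] A)
{
    int n = A.GetLength(0);
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j) // i < j: skip diagonal and below
            if (A[i, j] != A[j, i])
                return false;
    return true;
}

int[,] S = { { 1, 2, 3 }, { 2, 5, 6 }, { 3, 6, 9 } };
int[,] T = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
Console.WriteLine(IsSymmetrical(S)); // True
Console.WriteLine(IsSymmetrical(T)); // False
```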

Transpose of a matrix

Now that we know what a symmetrical matrix and its diagonal are, we can define the transpose of a matrix. You simply flip the matrix about its diagonal.

For a matrix A whose values are:
a b c
d e f
g h i

Its transpose is:
a d g
b e h
c f i

The transpose of a matrix A is denoted by A^T. So if A = A^T, then A is a symmetrical matrix.

Yes, we’re dealing with square matrices. Rectangular matrices aren’t as useful for our purposes in 3D programming, but you’re welcome to research their practical uses (try “operations research”).
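In code, transposing is just a swap of indices. A sketch:

```csharp
using System;

// Transpose: entry [i, j] of the result is entry [j, i] of the original.
static int[,] Transpose(int[,] A)
{
    int n = A.GetLength(0); // square matrix
    int[,] result = new int[n, n];
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            result[i, j] = A[j, i];
    return result;
}

int[,] A = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
int[,] At = Transpose(A);
Console.WriteLine(At[0, 1]); // 4: this was A[1, 0]
```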

The inverse of a matrix

The inverse of a square matrix A is denoted by A^-1, where
AA^-1 = A^-1A = I

Yes, I know I still haven’t covered matrix multiplication. Just go with it a little longer…

For a matrix product AB, its inverse is
(AB)^-1 = B^-1A^-1

Then this looks beautiful:
(AB)^-1(AB)
= B^-1A^-1AB
= B^-1IB
= B^-1B
= I

Don’t you think that looks beautiful? *smile*

Matrix equality

Matrices A and B are said to be equal if every corresponding entry of both matrices is equal. In notation, A[i,j] = B[i,j] for all i and j.

Matrix addition (and subtraction)

A matrix C is said to be the sum of matrices A and B if
C[i,j] = A[i,j] + B[i,j] for all i and j.

Subtraction is similar. Here’s a scalar example:
8 – 5 = 8 + (-5)
Same thing for matrices: the negative sign is “pushed in” to the individual entries.
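A quick sketch of matrix addition in code, entry by entry:

```csharp
using System;

// C[i, j] = A[i, j] + B[i, j], for all i and j.
static int[,] Add(int[,] A, int[,] B)
{
    int m = A.GetLength(0), n = A.GetLength(1);
    int[,] C = new int[m, n];
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j)
            C[i, j] = A[i, j] + B[i, j];
    return C;
}

int[,] P = { { 1, 2 }, { 3, 4 } };
int[,] Q = { { 5, 6 }, { 7, 8 } };
Console.WriteLine(Add(P, Q)[0, 0]); // 1 + 5 = 6
Console.WriteLine(Add(P, Q)[1, 1]); // 4 + 8 = 12
```

For subtraction, negate B’s entries (or subtract instead of add in the loop).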

Matrix multiplication by scalar

Let’s multiply matrices by scalars first. It’s easy.

Scalar matrix multiplication

Just multiply the scalar with all the entries in the matrix.
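In code, that’s a simple double loop:

```csharp
using System;

// Multiply every entry of the matrix by the scalar s.
static int[,] ScalarMultiply(int s, int[,] A)
{
    int m = A.GetLength(0), n = A.GetLength(1);
    int[,] result = new int[m, n];
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j)
            result[i, j] = s * A[i, j];
    return result;
}

int[,] A = { { 1, 2 }, { 3, 4 } };
Console.WriteLine(ScalarMultiply(3, A)[1, 1]); // 3 * 4 = 12
```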

Matrix multiplication by vector

This one’s a little more complicated. For our purposes, we’re concerned with multiplying a matrix A by a column vector v. Yes, the order and the type of vector matter. Let’s look at a diagram.

Matrix vector multiplication

The result is a column vector. Suppose we multiply matrix A by column vector x to get column vector y. The general formula (0-based, with n columns) is
y[i] = A[i,0] * x[0] + A[i,1] * x[1] + … + A[i,n-1] * x[n-1]

It actually looks much more concise if I can use the summation notation… BUT, I’m trying to simplify things for you. Hopefully, you can visualise how it works with the diagram. I’ll write another post with code to explain this.

You can’t multiply a matrix by a row vector though. Hopefully the diagram shows why it doesn’t work: each row of the matrix is multiplied by the values down the column vector. Since a row vector only has one value “down the column”, the multiplication doesn’t line up.

You can multiply a row vector by a matrix to get a row vector. But it’s not useful for our purposes. If you understand a little about 3D transformations, then A is a transformation matrix, and x is a vertex. For example, A could be a translation matrix and moves x to point y. If you don’t understand any of this, relax, we’ll get there together soon.

Matrix and matrix multiplication

This is complicated to show and explain, but once you get the idea, it’s actually easy to code.

Matrix by matrix multiplication

I’ll leave it to you to figure out the general formula… It’s similar to the one for matrix-by-vector multiplication, only with more vectors. *smile* This is what I did in university: write out a’s and subscripts and summation notations in my lecture notes and tutorial questions…

I’ll write another post explaining this (together with the matrix by vector multiplication) with code to illustrate the use.

In terms of 3D transformations, you could have a bunch of transformations done, say you rotate something, then translate (move) it. So you have something like TRx, where R is the rotation matrix, T is the translation matrix and x is the vertex.

Note the order. The earlier a transformation is applied, the closer its matrix is to the vertex in question. Basically, you reverse the order of transformations when implementing them.

While we’re at it, matrix multiplication is not commutative. What this means is that
AB != BA
The order is important.
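You can verify this with two small matrices. Using 2-by-2 matrices for brevity (the Multiply helper is my sketch of the usual row-times-column formula):

```csharp
using System;

// Multiply two square matrices the usual way:
// C[i, j] is the sum of A[i, k] * B[k, j] over k.
static int[,] Multiply(int[,] A, int[,] B)
{
    int n = A.GetLength(0);
    int[,] C = new int[n, n];
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            for (int k = 0; k < n; ++k)
                C[i, j] += A[i, k] * B[k, j];
    return C;
}

int[,] A = { { 1, 2 }, { 3, 4 } };
int[,] B = { { 0, 1 }, { 1, 0 } };

int[,] AB = Multiply(A, B);
int[,] BA = Multiply(B, A);

// The top-left entries already differ: 2 for AB, 3 for BA.
Console.WriteLine("AB[0,0] = {0}, BA[0,0] = {1}", AB[0, 0], BA[0, 0]);
```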

As an exercise, visualise the difference between moving something then rotate, and rotate then move.

End of crash course

Whew… *wipe sweat* How’re you doing? Still with me?

Good. This sets the foundation you need for understanding 3D programming. Yay! Review what you’ve read, do some research if needed, and I’ll see you next time.

The road to 3D programming

I plan to write some posts that lead eventually to 3D programming, the kind done in games and demos. There’s no concrete series of posts, meaning you won’t find a clue in the title that a post is actually part of this plan. The reason is that there are all kinds of topics you could learn to be proficient in 3D programming. Having a set-in-stone series seems stifling…

Nevertheless, these are the topics I’m covering, in no particular order:

  • Basic colour theory, which I already did
  • Matrix operations
  • Viewports, eye/camera coordinates
  • Raytracing, rendering
  • Lights, diffusion, ambience, specularity, radiosity
  • Basic 3D modelling, vertex representations
  • Textures, mipmaps

I’ll probably add some more topics, and also remove some topics. Like I said, not set in stone. You’ve probably noticed some missing topics. What about music and audio? That’s why there are teams. I don’t know much about creating audio. I just use them (the code for using audio will probably be covered).

Instead of leading you through a series of posts that ends with a finished product or game, I’ve decided to cover as many of the bases as possible. Other sites already cover the finished-product route much better than I can. I want to cover the topics leading to the finished product. This gives you flexibility in deciding what concepts you want to learn and use.

So if you have specific topics you’d like to be covered, add in a comment or contact me.