7th Annual Scene.org Awards

If you’re in Germany, go check out Breakpoint 2009, held from the 10th to the 13th of April 2009. That’s when the winners among the Scene.org Awards nominees will be announced.

Three of the nominated demos were also featured here. Well, two on the blog, actually. The third, “Metamorphosis” by Andromeda Software Development, was discussed in an issue of the newsletter. Great music accompanies the melding of scenes and objects, both organic and mechanical.

As for the other two demos, we have “Inflorescence” by mfx.

And the second is “The Seeker” by Still.

If you’re new to the demoscene, go to Pouët and experience a few demos yourself.

[Vincent is currently on vacation. I’m his blog, taking over for a while. He never said blogging was this hard! I have new-found appreciation for my master now…]

Factorials, prime numbers and a puzzle

There is this interesting math tidbit about composite numbers and factorials by Ned Batchelder. Prime numbers never appear consecutively (except for 2 and 3). Ned then answered this question: how many composite numbers can appear consecutively?

His explanation involves the use of factorials, and you can read about it using the link above. His explanation also gave me something to think about…

Now the factorial of n, denoted by n!, is
1*2*3*4* … *(n-1)*n
which is the product of the integers from 1 through n.

Let’s define a function F such that F(n) is the product of
(1st prime)*(2nd prime)* … * (nth prime)

For example, the first few prime numbers are 2, 3, 5, 7, 11, 13. So
F(2) = 2*3 = 6, F(3) = 2*3*5 = 30, and so on.

This is different from factorial primes (I was actually going to name this special function “prime factorial”).

Now, n! is divisible by 2, n! is divisible by 3, and (for n >= 4) n! is divisible by 4.
F(n) is divisible by 2, F(n) is divisible by 3, but F(n) is not divisible by 4.
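Here’s a quick Python sketch to play with this (the helper names are my own; as it turns out, this “prime factorial” already has a standard name, the primorial):

```python
def nth_primes(n):
    """Return the first n primes by simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def F(n):
    """Product of the first n primes (the 'primorial')."""
    product = 1
    for p in nth_primes(n):
        product *= p
    return product

print(F(2))      # 6
print(F(4))      # 210 = 2*3*5*7
print(F(4) % 4)  # 2 -- never divisible by 4, since 2 appears only once
```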

My question: describe, in as plain English as possible, the set of numbers that do not divide F(n). These numbers will necessarily lie between 2 and F(n)-1.

Your knee-jerk answer could be “all composite numbers between 2 and F(n)-1!”. Ahh, but F(n) is divisible by 10, and 10 is a composite number (assuming n is at least 3, so that both 2 and 5 appear among the prime factors). This puzzle should be easy to figure out. Articulating the solution in a couple of sentences might be harder…

[Vincent is currently away on vacation. He asked me, the blog, to take over for a while. Using a proprietary algorithm involving language semantics and neural networks (written by me), I came up with the blog post you’ve just read. It even seems coherent! I mean, uh, of course it makes sense. Oh, the things I do for my master… He’d better come back with lots of pictures for me to post, or he and I are going to have words…]

Certifications, standards or just codswallop?

Recently, I attended a 1-day course on the Information Technology Infrastructure Library, or ITIL. It’s just an awareness course, so I only learned what ITIL is, what it’s used for, why it should be used, and so on. Basically, ITIL is a set of best practices for doing IT work.

Truth be told, I didn’t want to attend the course. Due to management targets and budget considerations, I was “strongly encouraged” to attend (it is the end of the financial year…). Ah well, so I did. Man, did I have to rearrange my coding deadlines… One never knows how much 1 day of coding means till one loses that 1 day…

Anyway, I learnt what ITIL is, and what the associated ISO 20000 is too. ITIL, as my instructor put it, is “documented common sense”. Also, an organisation (or process) can’t be certified in ITIL, because some of the recommendations might conflict with each other within your business scope. An individual can be ITIL certified, though.

Businesses can be ISO 20000 certified. The process of certification, as I understand it, is tedious. You need documentation of your processes, and you need proof that you’ve been following both the standard and your own documentation. The first thought that came to my mind was Evidence Based Scheduling and FogBugz. I talked with the instructor, and he agreed that having a system that’s part of the workflow, capturing proof as it goes, is the best solution. And you need at least 3 months’ worth of proof.

I can see 2 benefits to certification: a boost in credibility (people trust companies with certifications?) and actual improvement to the financial bottom line (whether through cost savings or profit).

Frankly speaking, I don’t know why I was sent on the course. Is it so I’ll push for a more structured IT environment back at work, following ITIL recommendations? So I can be part of the certification process team? I’m a messy person. What do they think I can do?

Ok, the course wasn’t a complete loss for me. At least I learnt about the certification process. Still, I keep wondering, could the entire thing be condensed into the Joel Test?

The team I’m working in would maybe pass 3 out of the 12 points on the checklist. If we can’t pass that, we can’t pass ITIL, and we certainly can’t pass ISO 20000. I’m just saying…

[Vincent is currently preparing to go to the land of the rising sun, and hopefully see sakura blossoms. Comments and responses will be slower. I’ll be taking over for a while, and remember, I’m just a blog.]

Bilinear interpolation article referenced in another language

I’m thrilled. I’m also confused. I wrote an article on bilinear interpolation, specifically for image rotations, and it was provided as a link in a comment to this post. Thanks to vottini for referencing my article (it was used in a good light, right?).

I’ve got a problem, though. I don’t know what language the site is in! I think it’s French or Spanish, but I can’t be sure. If you know what language it is, please let me know. Better yet, translate and summarise that article for me, because I only understood it as an example of how to expand an image and fill in the extra pixels. Thanks.

UPDATE: Thanks to an anonymous commenter, I finally found out that the article (and site) is in Portuguese. I tried online translation tools, and Portuguese-to-English yielded enough for me to understand what was going on. The commenter recommended the tool by Google, but I got barred with an “automated request” error.

So I used the Babel Fish.

Excerpt of translated comment:

the solution, after all, was well simple. Instead of rotacionar pixels of the source, it rotacionava pixels of the destination and discovered in which place of the source would go to be. Joining with the technique of the bilinear interpolation or some most advanced one, the result is really impressive.

In the article, I suggested starting from the destination image, and find out what pixels to use from the source. This sort of assumes that both source and destination speak the same “language” (both are talking about RGB pixels).
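To make that concrete, here’s a minimal Python sketch of the destination-to-source approach with bilinear interpolation (this is my reconstruction of the idea, not the original article’s code; it works on a plain 2D list of grayscale values):

```python
import math

def rotate_bilinear(src, width, height, angle):
    """Rotate a grayscale image (2D list) about its centre by mapping
    each destination pixel back to the source, then bilinearly
    interpolating the four surrounding source pixels."""
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    cx, cy = width / 2.0, height / 2.0
    dst = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # inverse-rotate the destination coordinate about the centre
            sx = cos_a * (x - cx) + sin_a * (y - cy) + cx
            sy = -sin_a * (x - cx) + cos_a * (y - cy) + cy
            x0, y0 = int(math.floor(sx)), int(math.floor(sy))
            if 0 <= x0 < width - 1 and 0 <= y0 < height - 1:
                fx, fy = sx - x0, sy - y0
                # weighted average of the four neighbouring source pixels
                top = src[y0][x0] * (1 - fx) + src[y0][x0 + 1] * fx
                bot = src[y0 + 1][x0] * (1 - fx) + src[y0 + 1][x0 + 1] * fx
                dst[y][x] = top * (1 - fy) + bot * fy
    return dst
```

Rotating by an angle of 0 returns the interior pixels unchanged, which is a handy sanity check.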

The irony? In this incident, I don’t know anything about the source language (Portuguese), so I don’t know how to start from the destination (translated article in English).

UPDATE: Actually, Christopher also commented that it’s in Portuguese (I missed his comment in the diligent black hole of my spam processor…). Thanks, Christopher!

And the owner of the site came to confirm that it’s in Portuguese! Wow. He (“o velho” means “old man” in Portuguese) apparently posted an English post about this, specially for me. Wow. Thanks.

Stationary camera, moving scene

Previously, we talked about revolving the entire 3D scene about the camera, and also the problem of the camera looking directly downwards. Today, we’ll look at the mechanics of implementing that stationary camera (it ain’t pretty).

There are 2 transformations to take care of: translation and rotation. Translation takes care of the distance between the camera and the point it’s looking at. Rotation takes care of simulating the camera turning around to look at objects, roughly speaking. Let me use a 2D version to illustrate the concept.

Reverse translation and rotation of 2D scene

Suppose the camera is at some arbitrary position looking at an object. Based on the positions of the camera and the object, you can find the distance between them. You know, with this:
d = sqrt( (cx-ox)^2 + (cy-oy)^2 )
where cx and cy are the x-coordinate and y-coordinate of the camera respectively, and ox and oy are the x-coordinate and y-coordinate of the object respectively.

The camera is looking at the object, so the angle (theta) of its line of sight with respect to the (for example) x-axis can be calculated.

Suppose we want the stationary camera to look in the direction of the positive y-axis, and be positioned at the origin (0,0). To make the scene viewed through a stationary camera the same as that in the original version (the default by the 3D engine), we would rotate the entire scene (90 – theta) degrees, then translate the result of that d units along the positive y-axis.

Remember that order of transformations is important. Rotating first then translating, is (generally) different from translating then rotating.
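To convince yourself of this, here’s a tiny Python check in 2D (the helper names are mine):

```python
import math

def rotate(point, degrees):
    """Rotate a 2D point anti-clockwise about the origin."""
    rad = math.radians(degrees)
    x, y = point
    return (x * math.cos(rad) - y * math.sin(rad),
            x * math.sin(rad) + y * math.cos(rad))

def translate(point, offset):
    return (point[0] + offset[0], point[1] + offset[1])

p = (1, 0)
a = rotate(translate(p, (0, 2)), 90)  # translate first, then rotate
b = translate(rotate(p, 90), (0, 2))  # rotate first, then translate
print([round(v, 6) for v in a])  # [-2.0, 1.0]
print([round(v, 6) for v in b])  # [0.0, 3.0]
```

Same point, same two transformations, two different results.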

So that’s the general idea of making a stationary camera work, by moving and rotating the entire scene. The fun part comes because it’s in 3D.

The distance calculation still holds true:
d = sqrt(x^2 + y^2 + z^2)

The angle… not so much. Because it’s in 3D, I adopted spherical coordinates. The radius would simply be the distance calculated previously. But there are now 2 angles to calculate, theta and phi.

Spherical coordinate angles

Suppose the camera is at (a,b,c) and the viewed object is at (p,q,r). We make the viewed object the centre of our attention, so we start our calculations with the object at the origin. Therefore, the camera is at (a-p, b-q, c-r).

We can calculate the distance between them as
d = sqrt( (a-p)^2 + (b-q)^2 + (c-r)^2 )

Then we also solve for the following set of simultaneous equations (note I’m using y-axis as the “upward” axis)
x = r * sin(theta) * sin(phi)
y = r * cos(phi)
z = r * cos(theta) * sin(phi)

Substituting the camera’s relative position (a-p, b-q, c-r) for (x, y, z), and d for r:

a-p = d * sin(theta) * sin(phi)
b-q = d * cos(phi)
c-r = d * cos(theta) * sin(phi)

to solve for the angles theta and phi, where
0 <= theta < 2*PI
0 <= phi <= PI

Once found, the rendering occurs by rotating the entire scene phi degrees about the positive z-axis (starting from the negative y-axis as 0 degrees), then rotating theta degrees about the positive y-axis (starting from the positive z-axis as 0 degrees), then translating by (-a,-b,-c) (this moves the entire scene away from the camera positioned at the origin).

Well, that was a lot of trouble. What was I trying to solve again? Oh yeah, that looking down and losing the “up” vector problem. Notice anything wrong in this implementation? The “up” vector of the camera was never considered. But figuring out all the math was fun… if only it solved something too… *sigh*

[Note: all use of “degrees” in this article can be substituted with “radians”, depending on your situation. Use accordingly.]
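For reference, the angle-solving step can be sketched in a few lines of Python (the function and variable names are my own, not the actual engine code):

```python
import math

def spherical_angles(camera, obj):
    """Given camera (a,b,c) and viewed object (p,q,r), return
    (d, theta, phi) for the spherical-coordinate setup with the
    y-axis as the 'up' axis."""
    x = camera[0] - obj[0]
    y = camera[1] - obj[1]
    z = camera[2] - obj[2]
    d = math.sqrt(x * x + y * y + z * z)
    phi = math.acos(y / d)      # from y = d*cos(phi), so 0 <= phi <= PI
    theta = math.atan2(x, z)    # from x = d*sin(theta)*sin(phi),
                                #      z = d*cos(theta)*sin(phi)
    if theta < 0:
        theta += 2 * math.pi    # keep theta within [0, 2*PI)
    return d, theta, phi

# a camera 5 units in front (along +z) of the object:
print(spherical_angles((0, 0, 5), (0, 0, 0)))  # d=5, theta=0, phi=PI/2
```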

The problem of looking down

So in film, the camera usually moves a lot, together with the actors and props. The scene doesn’t move. In the virtual 3D world, we can move and revolve the world around the camera, which I talked about previously.

Let’s talk about the virtual camera first before launching into the problem I was trying to solve. There are 3 positional vectors for a virtual camera: its position, its “up” vector, and where it’s looking at. The 1st and 3rd positional vector should be easily understood. The “up” vector refers to where the “head” of the camera is pointing at.

You’re looking at something in front of you. Now tilt your head left. Your eyes are still at the same position (1st vector), and you’re still looking at the same thing (3rd vector). But the top of your head is pointing in a different direction (2nd vector). This changes your view. Refer to my article on lights and virtual cameras for more information (and pictures…).

So far, I haven’t used the 2nd vector to do much, so I’ve kept it at the default (0,1,0), which means it points to the sky. Now for the problem…

Suppose you’re looking at something in front of you, say an (absolutely symmetrical) apple, and you move up while keeping the apple at the same position. You’re now looking down at it, aren’t you? Say you’re not the fidgety type, so your head is kept straight. Your head’s not pointed directly up to the sky, but it’s straight. You don’t know how, but you know it’s straight. This is important.

Now, slowly move towards the space directly above the apple. Your head is still kept “straight”. As in, if you tilted your head, the view changes substantially, and the apple appears “lopsided”. Here’s the problem (finally!). What happens when you look at the apple directly from above it?

Moving eye position while looking down

You can now tilt your head in any direction, and the apple still looks the same (you’re looking at it directly from above). Now the 2nd positional vector matters, because the view changes substantially (the scene around the apple changes). The default “up” direction fails.

I can’t remember the name of this problem (or even if there was one). And I can’t find my OpenGL textbook that describes this, so I’m writing this from memory. If you can find me a reference to this, let me know.

So my young naive mind back then went, “How about I don’t move the camera at all?” *sigh* So naive… As you can tell, my solution of moving the entire world instead of the camera failed. It failed in that it didn’t solve the original problem I was trying to solve. But it worked flawlessly in the sense that the scene behaved exactly as if the camera were moving through the 3D world.

And I’ll tell you how I built that camera … next time.

A Tale of Two Timezones

I work with date and time data a lot. I write applications that handle tens of thousands of such records a day. I’m not sure if I can tell you the exact nature of my work, so I’ll just briefly touch on the periphery. Besides, if I write it here, I’ll have to hunt down the 7 people who read my blog. Excluding you, of course. You’re awesome.

Big Ben
[image by track5]

So a company in the United Kingdom provides a … certain service. And this service produces most of the data I work with. The United Kingdom and Singapore… not quite London and Paris, though…

Anyway, the data is accessed by customers all around the world, so I’ve got to align the dates and times correctly. Luckily, the customers understand the data is in UTC, so I didn’t have to explain too much in the user interface. I just have to present the date/time information correct to the second.

Internally, there’s a fork in opinions. Should the times at which the backend processes run be recorded in UTC, or Singapore time? Purists might say, “All times should be in UTC!” and convert accordingly. Pragmatists might say, “I’m the only one who reads those logs, so they should be in local time!” The customers won’t know what went on in the backend processes, but they might be interested in when their data was last refreshed (which happens to be the process date/time).

This creates an interesting problem. You have to standardise the handling of any date/time logic, even if it’s just to get the current date/time. Sometimes I forget whether a particular piece of data is in UTC or local time, and I have to dig out the code that handles it to find out. Because 8 hours is a big difference.
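My rule of thumb, sketched in Python here (the timestamp is made up for illustration; the same idea applies to DateTime.UtcNow in C#): store and log in UTC, and convert only at the presentation layer.

```python
from datetime import datetime, timezone, timedelta

# Singapore is UTC+8 with no daylight saving, so a fixed offset works.
SGT = timezone(timedelta(hours=8))

# a made-up "last refreshed" timestamp, recorded in UTC
utc_refresh = datetime(2009, 4, 1, 0, 0, 0, tzinfo=timezone.utc)

# convert to local time only when showing it to someone in Singapore
local_refresh = utc_refresh.astimezone(SGT)
print(local_refresh.isoformat())  # 2009-04-01T08:00:00+08:00
```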

Adding to this confusion, is that the system times of the web server and database server might be off by a few hundredths of a second. So do you use DateTime.UtcNow in C# or getutcdate() in SQL? Does it even matter? I used to work for a manager who insisted all times to be based on the database server’s system time. I admire his stand. I also have to point out the many database calls wasted just to get the current time in the applications…

On the upside, I have to wait for the data to be consolidated for the previous day, before it’s sent over to Singapore for processing. UTC midnight is 8am in Singapore, which is about the time I start work. This gives me some time to react in case of emergencies. You take whatever blessings you have, however small…

Revolve the world around you

Sit or stand with your head pointing straight up. Tilt your head to your left. Note the view, the slant of the horizon, the movement (if any) of surrounding objects.

Tilt your head back to pointing straight up. Now imagine the view in front of you tilting to your right. Can you imagine the scene as having the same view as if you tilted your head left?

That was exactly what I was trying to achieve when I implemented a custom camera object to overcome a particular viewing problem in 3D. Well, you’ll have to wait for another article for the description of the problem. I’m going to just describe the function of that custom camera in this article.

So in 3D scenes, you have the scene objects and a virtual camera. Using the position and orientation of the camera, you can view the scene (by rendering the scene onto the viewing plane). This is analogous to the physical world.

Now, in the physical world, the scene, the set, and the props typically don’t move. Only the camera moves (we’ll leave out the human actors). I’m referring to movement where an object goes from one place to another. Movements such as water flowing or explosions aren’t included (as far as this discussion goes).

For a physical camera, there are limits. You can’t quite fly a camera through an explosion. You need special cameras to go through tiny openings. You’ve got to be careful when working with mirrors, because the camera (and cameraman) can be inadvertently captured (unless that was the effect). And you definitely can’t pass through walls.

A virtual camera in a 3D scene has none of those limitations. As far as the renderer is concerned, a camera is just a point, unless it’s modelled and treated as a 3D object. It can film the hottest of volcanic films, or be submerged in the depths of the seas, and remain undamaged. Now, the virtual camera might be limitless, but that’s not the point. Due to the transformations such as translations, rotations and scaling, the 3D scene itself can be modified at will.

I was inspired by a remark made by my university lecturer. He said that moving the camera towards a stationary object, is the same as moving the object towards the stationary camera. This also implied that rotating the camera clockwise around a stationary object, is the same as rotating the object anti-clockwise around the stationary camera.

This opened my eyes to another corollary. You don’t need to move the camera at all! You can move the entire scene instead.

So I set out to design a camera object where the entire 3D scene depended on it. What I mean is, instead of setting camera properties and have them work nicely with the 3D scene, the 3D scene itself conforms to the camera properties.

For example, if I set the camera position at (1,0,0), in actuality, the camera is still at (0,0,0). But the entire 3D scene is translated (-1,0,0).
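The translation half of the idea fits in a few lines of Python (the function name is mine; this is a sketch, not the actual camera object):

```python
def apply_camera(scene_points, camera_pos):
    """Instead of moving the camera, translate every scene point by the
    negative of the camera's position; the camera stays at the origin."""
    cx, cy, cz = camera_pos
    return [(x - cx, y - cy, z - cz) for (x, y, z) in scene_points]

# "setting" the camera at (1,0,0) really shifts the scene by (-1,0,0)
print(apply_camera([(0, 0, 0), (2, 0, 0)], (1, 0, 0)))
# [(-1, 0, 0), (1, 0, 0)]
```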

What I did was set the camera at a default position, say (0,0,5) (I’m using the upright y-axis), and set the camera’s “up” vector to (0,1,0) (meaning its head is pointing upwards, so it’s level with the ground). Then everything else is done with respect to this default camera orientation.

So why am I doing all this? I was bored, I had time then, and I wanted to solve a particular problem. I’ll tell you more about the mechanics of the camera, and the problem some other time…

Minor irks between C# and VB.NET

It’s about the way you think about programming. This isn’t another debate on which language is better. Just noting the differences because of how I think. The first is…

Declaring variables

After I think through the logic I want, the first thing that comes to mind might be “I need an integer”. This works well:

int i;

This, not so much:

Dim i As Int32

In C#, the name of the variable is secondary, at least at the point when it’s created. I need an integer. I don’t really care what its name is (yet). Nor does the compiler.

In VB.NET, I have to come up with a name. And if my RPG days are any indication, I take a long time coming up with names. By the time I think up an appropriate name, I forgot what type it’s supposed to be.

It’s like the active-passive voice in English. “He ate the apple.” and “The apple was eaten.” Which do you want to focus on?

I might be wrong. Perhaps VB views variables as containers for values, hence there’s no point in fixing the type at declaration (like JavaScript)? And VB.NET inherits that language structure.


Declaring arrays

In C#, arrays are declared like so:

int[] ia = new int[5];

In VB.NET, they are declared like so:

Dim ia(5) As Int32

There’s a catch though. The array in C# has 5 elements. The one in VB.NET has 6.

Both languages use zero-based array indices. In VB.NET, however, the number used in declaring the array is the last index of the array, not the number of elements.

So if I wanted 5 elements, I should declare it as:

Dim ia(4) As Int32

Ok, I guess my frustration has run its course…

What The Sims taught me about socialising

For the purposes of this article, socialising means having meaningful conversations and interactions with other people (namely friends). So what’s the trick to maintaining a fairly large network of friends while still holding meaningful conversations with all (or at least most) of them?

Hold parties. Or organise group outings. Any event that involves many people and activities that engage most of them together.

Cocktail party
[image by Yuri Arcurs]

Before I started blogging, I hung out with a handful of friends. Which seems to coincide with 4, the number of friends a male can have on Facebook.

Granted, Facebook is an online social network. The reasons cited for the number are valid, though. You can know many people, but once the criterion of “having 2-way meaningful conversations” comes in, the number of “real” friends drops to a low figure.

There’s a limit to how much information about those friends a person can hold at any one time: who the friends of those friends are, what their favourite foods and activities are, who the mutual friends are, and so on.

I did some research and there’s Reed’s Law, which states that

the utility of large networks, particularly social networks, can scale exponentially with the size of the network

and there’s Metcalfe’s Law:

the number of unique connections in a network of a number of nodes (n) can be expressed mathematically as n(n-1)/2

Regardless of the calculations involved, they just mean the value of a network grows faster than the increase in number of people involved. A social networking site might be able to hold this information. A human can’t scale as efficiently, hence the limit.
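A quick Python illustration of how fast n(n-1)/2 outruns n:

```python
def unique_connections(n):
    """Metcalfe's law: unique pairwise links among n nodes, n(n-1)/2."""
    return n * (n - 1) // 2

for n in (4, 10, 100):
    print(n, unique_connections(n))
# 4 nodes give 6 links, 10 give 45, 100 give 4950:
# the links grow roughly as n squared
```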

Back in my pre-blogging days, that period of time coincided with my student days. So there wasn’t a need to actively organise group outings. Just meeting up at school would do, with a few outings outside of school here and there.

Then came working life, and the people I associated with most often were my colleagues. Thus far, the interactions were face to face, or via phone calls and messages, or email (though infrequent).

But there’s a limit to those kinds of interactions, because they’re one-to-one (or one-to-few). Then something lit up in my brain while thinking about the game The Sims. In the game, the same social and “physical” limits on friendships exist.

In order to maintain friendships (there’s a friendship score), a Sim has to continually interact with other Sims. By calling them on the phone. By inviting them over to the house. And yes, by holding parties and going out on group outings.

So the most efficient way is to have a bunch of friends together, and interact with them all at once. Better suggestion? Have those friends interact with each other and have fun too.

And this might be why social networking sites are so popular now. They enable people to interact with a lot of other people at a fairly frequent rate. They enable people to find other people whom they’ve never met and start conversations and build friendships. There’s still a limit to how many “real” friends one can maintain, but it’s probably higher. This has the side effect of creating a lot more friends whom one interacts with infrequently. But I guess we can live with that.

With that, I’m stopping here. Let me know what you think. You can also have meaningful conversations with me on Twitter and Facebook.