Previously, we talked about revolving the entire 3D scene about the camera, and also the problem of the camera looking directly downwards. Today, we’ll look at the mechanics of implementing that stationary camera (it ain’t pretty).

There are 2 transformations to take care of: translation and rotation. Translation takes care of the distance between the camera and the point it’s looking at. Rotation takes care of simulating the camera turning around to look at objects, roughly speaking. Let me use a 2D version to illustrate the concept.

Suppose the camera is at some arbitrary position looking at an object. Based on the positions of the camera and the object, you can find the distance between them. You know, with this:

d = sqrt( (cx-ox)^2 + (cy-oy)^2 )

where cx and cy are the x-coordinate and y-coordinate of the camera respectively, and ox and oy are the x-coordinate and y-coordinate of the object respectively.

The camera is looking at the object, so the angle (theta) of its line of sight with respect to the (for example) x-axis can be calculated.

Suppose we want the stationary camera to look in the direction of the positive y-axis, and be positioned at the origin (0,0). To make the scene viewed through a stationary camera the same as that in the original version (the default by the 3D engine), we would rotate the entire scene (90 – theta) degrees, then translate the result of that d units along the positive y-axis.
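As a sketch of the 2D case (in Python, which the article itself doesn't use, so treat this purely as illustration; `stationary_camera_2d` is a hypothetical helper name), the rotation angle and translation can be computed like so:

```python
import math

def stationary_camera_2d(camera, obj):
    """Illustrative sketch: find the rotation and translation that move the
    scene so a camera fixed at the origin, looking along +y, sees the same
    view as a free camera at `camera` looking at `obj`."""
    cx, cy = camera
    ox, oy = obj
    # distance between the camera and the object
    d = math.hypot(cx - ox, cy - oy)
    # angle of the line of sight, measured from the positive x-axis
    theta = math.atan2(oy - cy, ox - cx)
    # rotate the scene by (90 - theta) degrees, then translate d units along +y
    rotate_deg = 90.0 - math.degrees(theta)
    translate = (0.0, d)
    return rotate_deg, translate
```

For example, a camera at the origin looking at an object at (3, 4) gives d = 5 and theta ≈ 53.13 degrees, so the scene would be rotated about 36.87 degrees and translated 5 units up the y-axis.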

**Remember that order of transformations is important**. Rotating first then translating, is (generally) different from translating then rotating.
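A quick numeric check of this, assuming standard 2D rotation about the origin:

```python
import math

def rotate(p, deg):
    # rotate point p counterclockwise about the origin by deg degrees
    a = math.radians(deg)
    x, y = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def translate(p, dx, dy):
    return (p[0] + dx, p[1] + dy)

p = (1.0, 0.0)
a = translate(rotate(p, 90), 0, 5)   # rotate first, then translate
b = rotate(translate(p, 0, 5), 90)   # translate first, then rotate
# a is approximately (0, 6), while b is approximately (-5, 1)
```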

So that’s the general idea of making a stationary camera work, by moving and rotating the entire scene. *The fun part comes because it’s in 3D*.

The distance calculation still holds true:

d = sqrt(x^2 + y^2 + z^2)

The angle… not so much. Because it’s in 3D, I adopted spherical coordinates. The radius would simply be the distance calculated previously. But *there are now 2 angles to calculate*, theta and phi.

Suppose the camera is at (a,b,c) and the viewed object is at (p,q,r). We make the viewed object the centre of our attention, so we start our calculations with the object at the origin. Therefore, the camera is at (a-p, b-q, c-r).

We can calculate the distance between them as

d = sqrt( (a-p)^2 + (b-q)^2 + (c-r)^2 )

Then we also solve the following set of simultaneous equations (note I’m using the y-axis as the “upward” axis, and writing d for the spherical radius to avoid clashing with the object coordinate r)

x = d * sin(theta) * sin(phi)

y = d * cos(phi)

z = d * cos(theta) * sin(phi)

==>

a-p = d * sin(theta) * sin(phi)

b-q = d * cos(phi)

c-r = d * cos(theta) * sin(phi)

to look for the angles theta and phi, where

0 <= theta < 2*PI
0 <= phi <= PI
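Solving those simultaneous equations is mechanical: cos(phi) comes straight from the y equation, and dividing the x equation by the z equation isolates theta. A sketch in Python (`camera_angles` is a hypothetical helper name, not from the article):

```python
import math

def camera_angles(camera, obj):
    """Solve the simultaneous equations above for theta and phi,
    with the y-axis as 'up', as in the article."""
    a, b, c = camera
    p, q, r = obj
    x, y, z = a - p, b - q, c - r    # camera position with the object at the origin
    d = math.sqrt(x * x + y * y + z * z)
    phi = math.acos(y / d)           # from  y = d * cos(phi)
    theta = math.atan2(x, z)         # from  x / z = sin(theta) / cos(theta)
    if theta < 0:
        theta += 2 * math.pi         # keep theta in [0, 2*PI)
    return d, theta, phi
```

For instance, a camera at (0, 0, 5) looking at the origin gives d = 5, theta = 0 and phi = PI/2 (the camera sits level with the object, on the z-axis).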
Once found, the rendering occurs by:

1. rotating the entire scene phi degrees about the positive z-axis (taking the negative y-axis as 0 degrees),
2. rotating theta degrees about the positive y-axis (taking the positive z-axis as 0 degrees), then
3. translating by (-a, -b, -c), which moves the entire scene away from the camera positioned at the origin.
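Applied to a single point, the sequence described above might look like the following sketch. The rotation directions and zero-degree starting axes in the article are unconventional, so the sign conventions here (standard right-handed rotations) are assumptions:

```python
import math

def transform_scene_point(point, theta, phi, camera):
    """Sketch of the scene transform: rotate phi about the z-axis,
    then theta about the y-axis, then translate by the negated
    camera position (a, b, c). Rotation conventions are assumed."""
    x, y, z = point
    # rotate phi about the positive z-axis
    x, y = (x * math.cos(phi) - y * math.sin(phi),
            x * math.sin(phi) + y * math.cos(phi))
    # rotate theta about the positive y-axis
    x, z = (x * math.cos(theta) + z * math.sin(theta),
            -x * math.sin(theta) + z * math.cos(theta))
    # translate by (-a, -b, -c)
    a, b, c = camera
    return (x - a, y - b, z - c)
```

In a real engine this would be a single 4x4 matrix applied to every vertex, not a per-point function.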
Well, that was a lot of trouble. What was I trying to solve again? Oh yeah, that looking down and losing the "up" vector problem. Notice anything wrong in this implementation? The "up" vector of the camera was never considered.
But figuring out all the math was fun... if only it solved something too... *sigh*
[Note: all use of "degrees" in this article can be substituted with "radians", depending on your situation. Use accordingly.]

Very interesting article. Do you know any other article where I can read more about this topic?

I’m trying to move the scene in matlab rather than the camera.

Thank you.

I’m not familiar with 3D rendering in MATLAB, Diego. The specific mechanics are already detailed in the article above. MATLAB actually simplifies some of the work, such as matrix and vector operations, so the code should be more concise.

As far as I know, no one’s written anything on keeping the camera still and rotating the entire scene about it. Probably because rendering the scene and flying the camera through it is easier. 🙂