Lights, camera, action!

You’ve learned a bit about viewports already. Now, we’ll look at how lights and cameras are used in a 3D environment.

Natural light (or sunlight)

Natural light comes into play when you’re working with outdoor scenes, where practically everything is lit up. Outdoor scenes are also usually filled with objects: trees, houses, grass.

[digression]
You never really think about grass until you realise you have to model and render every single blade of grass. Unless you create an illusion that there’s a sea of grass out there… Yes, there are ways.
[end digression]

Light rays in Sofia Cathedral

[image by -lvinst-]

There are also indoor scenes where natural light streams in through a window, and you get a rectangular block of light in the scene. That’s… a little out of scope for now. You can read up on volumetric lighting for more information.

Right now, we’ll work on light that we can’t see. I know, it sounds contradictory, but think of it this way: you see an object because light fell on it, not because you see the light itself. Remember the short science lesson when we discussed wavelengths of different colours reflecting off surfaces?

The most prominent source of natural light is the sun (leave out the moon and the stars). As far as I know, we’ve only got one sun, so we only have one source of light to model.

For the purposes of modelling, natural light rays are treated as parallel. Strictly speaking, they are not, but by the time they reach Earth, they are very nearly parallel. This makes the light easier to model, because there’s only one angle to consider.

What angle am I talking about? At noon, light rays hit where you are at about 90 degrees with respect to the ground. At dawn (I know you might not wake up at dawn, just humour me), light rays hit where you are at, say, 10 degrees. At dusk, maybe 170 degrees. Yes, this is the angle.

With a point source about 150 million kilometres away, calculations over differences of fractions of a degree are wasteful (and unnecessary). Parallel rays simplify the calculations. We’ll look at the detailed calculations in another article. Maybe. I don’t like manual math calculations any more than you do…
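
To make this concrete, here’s what a parallel-ray (directional) light might look like in classic fixed-function OpenGL, where a light whose position has a W-coordinate of 0.0 is treated as a direction rather than a position. This is only a minimal sketch: it assumes an OpenGL context has already been set up, and the light number and colour are arbitrary choices.

#include <GL/gl.h>

// Set up GL_LIGHT0 as a directional light (our "sun").
// With W = 0.0, OpenGL interprets the position as a direction,
// so every ray in the scene is parallel: one angle to consider.
void setupSunlight(void)
{
    const GLfloat sunDirection[] = { 0.0f, 1.0f, 0.0f, 0.0f }; // W = 0: light from straight above (noon)
    const GLfloat sunColour[]    = { 1.0f, 1.0f, 0.9f, 1.0f }; // slightly warm white

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, sunDirection);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, sunColour);
}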

Point sources of light

For simplicity, light sources are assumed to be point sources: your desk lamp, your flashlight, your television, a candle flame, fireflies. And they emit light in all directions (yes, they’re very generous).

Different light angles on object vertices

I’ve made the light source visible so you can see its position relative to the cube, our main object in the scene. At close range, the areas around the six points on the cube are rendered and lit differently because light hits the cube at different angles.

Every single point on the cube is hit by our light source at a different angle. Imagine the calculations involved. This is why natural light rays are assumed to be parallel.
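
In the same fixed-function OpenGL style as before, a point source is simply a light whose position has a W-coordinate of 1.0, so OpenGL treats it as an actual position radiating in all directions. Again, just a sketch; the position and light number are made up for illustration.

#include <GL/gl.h>

// A point light: W = 1.0 makes this a real position in the scene,
// so light reaches each vertex of the cube at its own angle.
void setupLamp(void)
{
    const GLfloat lampPosition[] = { 2.0f, 4.0f, 1.0f, 1.0f }; // W = 1: point source

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT1);
    glLightfv(GL_LIGHT1, GL_POSITION, lampPosition);
}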

You might also notice that points closer to the light source are more brightly lit. How is that modelled? Let’s look at attenuation.

Attenuation (or falloff)

There are typically 2 types of attenuation: linear and quadratic. Basically, they’re just functions of the distance between the position of the light source and the object vertex (the point on the object) in question.

For illustration, let the “strength” of light, L, be at full “power” of 1.0 at the light source. The further the distance d from the source, the lower L becomes. So L could be
L = 1.0 / (c1 * d + c0)
where c1 and c0 are some constants. If c0 is 1.0, then L is exactly 1.0 at the light source itself (where d is zero), just as we wanted.

That’s linear attenuation. Then just use L as part of the lighting calculations.

What about quadratic attenuation?
L = 1.0 / (c2 * d^2 + c1 * d + c0)
where c2, c1 and c0 are some constants, and again a c0 of 1.0 gives full strength at the source.
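
Both formulas fit in one small function. Here’s a sketch; the parameter names follow the c0, c1 and c2 above, and the values in the usage comment are arbitrary examples.

// Attenuation factor L at distance d from the light source.
// Linear:    L = 1.0 / (c1*d + c0)          (pass c2 = 0)
// Quadratic: L = 1.0 / (c2*d^2 + c1*d + c0)
// With c0 = 1.0, L is exactly 1.0 at the source (d = 0).
float attenuation(float d, float c0, float c1, float c2)
{
    return 1.0f / (c2 * d * d + c1 * d + c0);
}

// Usage: linear    -> attenuation(d, 1.0f, 0.5f, 0.0f)
//        quadratic -> attenuation(d, 1.0f, 0.0f, 0.25f)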

Relax, the graphics API you use probably does this (almost) automatically. This is just to let you know what’s going on behind the scenes. I know OpenGL has inbuilt functions, so you can set your choice with just a function call.
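
In fixed-function OpenGL, for instance, the three constants map directly onto per-light parameters set in your lighting code (the values here are placeholders):

// OpenGL computes 1 / (kc + kl*d + kq*d^2) for each light.
glLightf(GL_LIGHT1, GL_CONSTANT_ATTENUATION,  1.0f); // c0
glLightf(GL_LIGHT1, GL_LINEAR_ATTENUATION,    0.5f); // c1
glLightf(GL_LIGHT1, GL_QUADRATIC_ATTENUATION, 0.0f); // c2 (zero: purely linear falloff)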

There’s another type: ranged attenuation. Basically, L becomes zero at a certain distance. With the 2 attenuation models above, you never really get zero. I’m not sure if it’s supported by popular engines, so you might just want to keep this in mind. Maybe you’d like to implement your own, to simplify calculations or to produce a certain effect.
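
If you did roll your own, one possible ranged falloff (purely a sketch, not taken from any particular engine) fades linearly and clamps to zero beyond a chosen range:

// L falls from 1.0 at the source to 0.0 at maxRange,
// and stays exactly 0.0 beyond that distance.
float rangedAttenuation(float d, float maxRange)
{
    if (d >= maxRange) return 0.0f;
    return 1.0f - d / maxRange;
}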

Spotlights

Different types of spotlights

The different spotlights are used for different effects. You’re probably familiar with cylindrical or conical spotlights; they’re used to highlight people on stage.

The square spotlight and parallel lights are, well, … because we can. *smile* Remember, in the virtual world, it’s sometimes easier to create certain effects. You can create a heart-shaped spotlight if you really want to.

Ambient light

Sometimes, objects are lit brighter than expected. You’ve taken into consideration the natural light and the light sources you specified, yet the objects still look a bit brighter.

This is ambient light in effect. Maybe your calculations aren’t as precise or as close to “reality” as you thought. Think of light from the other side of the planet bouncing up to the sky, bouncing off clouds, bouncing off the sea, bounce, bounce, bounce, and ending up in your scene. Add to that light from digital watches, fireflies, neon signs, candle flames, office buildings and what-not, and you get a low level of light that’s ever present.

To model this, we simply brighten everything up a little bit, say 5% more. We don’t care what the source is. See, that was easy.
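
In fixed-function OpenGL, for example, this ever-present light is a single global colour. A dim grey like the one below brightens everything by roughly that 5% without coming from any particular direction (the exact value is up to you):

// Global ambient light: applied to every surface regardless of position.
const GLfloat ambient[] = { 0.05f, 0.05f, 0.05f, 1.0f }; // about 5% white
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambient);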

Radiosity

Rounding off this discussion on light, we have radiosity. Basically, it’s light bounced off another (diffuse) surface. It’s computationally intensive, so it’s not appropriate for real-time rendering or games.

I don’t have any specific pictures of radiosity to show you (because I haven’t installed my other rendering software with this feature). Try searching Flickr for some examples.

Where’s the camera?

The virtual camera acts very much like a real-world camera. You’re just able to do more stunts with it. *smile*

Camera positions are represented using a vector with X-, Y- and Z-coordinates. Depending on the graphics API you use, you might also have a W-coordinate, which is usually 1.0. For our discussion here, it’s not important. You can find more information by searching for homogeneous coordinates and affine transformations.

What are you looking at?

Having a camera position isn’t enough. You need to know what the camera is looking at. Imagine a camera moving from one point to another, yet it keeps looking at the same object.

For example, you move the camera from one position

Camera position 1

to another position, but still looking at the same object

Camera position 2

Are you upright?

A final component of cameras is the upright position. It’s easier just to show you. First, we have the camera in the upright position.

Camera in upright position

Then we tilt it left a little.

Camera tilted left

Then we tilt it right a little.

Camera tilted right

We tilt it up a little.

Camera tilted up

And we tilt it down a little.

Camera tilted down

In particular, you can tilt left or right while still looking at the object in question and keeping your camera position.

In practice, we usually keep the camera upright. In this case, our representation is not a point, but a direction. We still use the X-, Y- and Z-coordinates though (and W, depending). Typically, we use (0, 1, 0, 0).

Note that you can probably use (0, 3, 0, 0) and still be fine. It’s the direction that counts, not the magnitude, so you don’t need to normalise the vector, or make it a unit vector (magnitude of 1 unit).

Note also that, depending on the graphics API you use, all 3 camera properties may be lumped together. So you may get a function requiring 9 parameters: 3 for the position, 3 for the look-at position, and 3 for the upright direction.
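
OpenGL’s utility library has exactly such a function, gluLookAt, which takes those nine values in that order. The numbers below are placeholders:

#include <GL/glu.h>

gluLookAt(5.0, 2.0, 5.0,   // camera position
          0.0, 0.0, 0.0,   // look-at position (say, an object at the origin)
          0.0, 1.0, 0.0);  // upright direction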

Camera paths

Because of how the camera is represented, you can assign values to the camera position, look-at point and upright direction dynamically. This is how you create spectacular views: by constantly changing the camera position and what the camera (and the viewer) looks at.

The easiest way is to iterate along a linear path. So at time t1, the camera is at position 1. At time t2, the camera is at position 2. Then you just do linear interpolation between t1 and t2, and there you have it: a moving camera.
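
Here’s a sketch of that interpolation. The Vec3 type is a stand-in for whatever vector type your API provides, and the same function works just as well for the look-at point as for the position.

struct Vec3 { float x, y, z; };

// Camera position at time t, moving linearly from p1 (at time t1)
// to p2 (at time t2).
Vec3 lerpCamera(Vec3 p1, Vec3 p2, float t1, float t2, float t)
{
    float s = (t - t1) / (t2 - t1); // 0.0 at t1, 1.0 at t2
    return { p1.x + s * (p2.x - p1.x),
             p1.y + s * (p2.y - p1.y),
             p1.z + s * (p2.z - p1.z) };
}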

This was the impetus behind my research in applying Bezier curves to camera positions.
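
For the curious, a cubic Bezier curve builds on the same idea, blending four control points instead of two endpoints. A sketch, reusing the Vec3 type from above:

// Cubic Bezier position for s in [0, 1]:
// p(s) = (1-s)^3 p0 + 3(1-s)^2 s p1 + 3(1-s) s^2 p2 + s^3 p3
// p0 and p3 are the start and end positions; p1 and p2 shape the curve.
Vec3 bezierCamera(Vec3 p0, Vec3 p1, Vec3 p2, Vec3 p3, float s)
{
    float u = 1.0f - s;
    float b0 = u * u * u;
    float b1 = 3.0f * u * u * s;
    float b2 = 3.0f * u * s * s;
    float b3 = s * s * s;
    return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
             b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y,
             b0 * p0.z + b1 * p1.z + b2 * p2.z + b3 * p3.z };
}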

Conclusion

So we’ve covered a basic understanding of light for 3D development. Note that sometimes you might need to add more lights than strictly necessary, even though those extra lights shouldn’t exist in the 3D scene. The extra lights are there to enhance (or sometimes correct) the final rendered scene.

Your focus should be on what’s finally rendered, not what’s accurately modelled. It’s about results.

We’ve also covered the basics of camera representations. Hopefully, you have an understanding of how to create better scenes through moving cameras.

That’s all for this lesson, and I hope you’ve learned something from it.
