## Compression through time

Last time, I told you that I came up with my own video format so I could load videos into my game engine. Today, I’m going to tell you how it works.

The concept isn’t revolutionary. Image compression is the art of expressing an image in as small a form as possible. There are two kinds: lossy and lossless. Lossy compression discards some data, so the file can be smaller, but overall you won’t notice the difference (much); the JPEG format uses this. Lossless compression doesn’t lose any data at all; the PNG format uses lossless compression.

I don’t know much about video compression. However, a video is a series of images. So instead of focusing on image-by-image compression, I adopted time-based compression.

For each pixel position, I try to come up with an algorithm (or specifically, a maths formula) that represents all the values at that pixel position throughout the entire video.

Let’s use the top-left pixel of the video, and say that pixel is pure white for half of the video and black for the other half. My aim is to represent this “pure white 1st half, pure black 2nd half” information in 1 formula.

With this in mind, let’s say I’ve somehow got a maths formula. Let’s say it’s a 640 by 480 pixel video (that was considered good resolution back then; I know we have HD now…), and that there are 3 variables in that maths formula. This means I have to store 3 * 640 * 480 variables in my movie format.

Yes, that’s the compression tactic.

Well, more specifically, I had to come up with 640 * 480 maths formulas (one per pixel position, with 3 variables each) whose output compresses as efficiently as possible. With some research, I found that the less the variables vary (hahahah…), the better the compression. This means the colour space I use is important. RGB values change too much.

And with some more research, I found the YUV colour space ideal. What I actually did was count the number of consecutive identical values and store that count. The YUV space has a “lightness” component (the luma, Y), which for most videos doesn’t change very often.
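(As an aside, if you want to experiment: here’s a quick Python sketch of the common BT.601 RGB-to-YUV conversion. I’m not claiming this is the exact conversion I used back then; it just shows what the “lightness” component is.)

```python
def rgb_to_yuv(r, g, b):
    """BT.601 full-range RGB -> YUV. Y is the 'lightness' (luma) component."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

y, u, v = rgb_to_yuv(255, 255, 255)
print(round(y, 6))  # 255.0 -> pure white has maximum luma
```

Notice the coefficients sum to 1, so greys map straight to their own luma value, with zero U and V.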

This means in the ideal case, I store “N of X” for the “lightness” component, where N is the number of times the value X occurs for the “lightness” component of that pixel position. And in the ideal case, N would be approximately the frame rate multiplied by the number of seconds.

So for each pixel position, I store a series of counts of the Y component (could be 10, 50 or ideally just 1). Then I store a series of counts of the U component. And similarly for the V component.

Then I move on to the next pixel.

The idea is that the longer the video, the “better” the compression. It’s not that the compression tactic itself is better; it’s just that the tactic takes advantage of the fact that certain colour components don’t change much. If a pixel is always white, it doesn’t matter if the video is 1 second long or 1 hour long: we just store “30 white” and “108000 white” respectively (assuming 30 frames per second, and 3600 seconds in an hour).
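In other words, it’s run-length encoding along the time axis. Here’s a toy Python sketch of the idea (not my original implementation) for one colour channel of one pixel position:

```python
def compress_channel(values):
    """Run-length encode one colour channel of one pixel position
    across all the frames of the video: values -> [(count, value), ...]."""
    runs = []
    for v in values:
        if runs and runs[-1][1] == v:
            runs[-1][0] += 1  # same value as the previous frame: extend the run
        else:
            runs.append([1, v])  # value changed: start a new run
    return [tuple(run) for run in runs]

# A pixel that's pure white for 1 second, then pure black for 1 second, at 30 fps:
frames = [255] * 30 + [0] * 30
print(compress_channel(frames))  # [(30, 255), (30, 0)]
```

A pixel that never changes collapses to a single (count, value) pair no matter how long the video is, which is exactly why screencasts do so well.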

This works great for screencast videos because the screen typically doesn’t change much. It’s usually just the mouse moving around and clicking stuff and sometimes the screen changes. This means most of the screen stays the same. And if you’re like me, you’ve probably watched some screencast videos where nothing on the screen happens for minutes. It’s just the person talking.

Of course, I fudged the actual values a bit so they compress better, which makes it a lossy compression. But in the end, the result was comparable to a moderately compressed QuickTime movie file, so I was ok with that.

But even if the compressed size is still larger than other movie file types, at least it’s not copyrighted. So I can use that. And if you want to try this compression method, go ahead and use it (you have my permission). Let me know how it works out for you.

## Compression and space

Back when I was in my making-games mode years ago, I was interested in space. Specifically, data storage.

Blitting images to the screen as quickly as possible, to keep the frame rate consistent, was a thing. Texture images are usually pre-created. Then there’s the concept of generating textures as the program executes; this is how demos pack data into a small space.

I was studying fractals then, and the iterated function system was a thing. I found it a pain to implement though…

And then I was interested in having moving images in my game engine. Packing movies was way harder than images.

Due to the copyrights on all the video formats, in my infinite wisdom, I decided to create my own. I mean, it’s a series of images, right?

The resulting customised movie file was just slightly larger in size than the QuickTime .mov format. I took that as an encouragement. I’ll tell you what I did next time.

## Pictures in Excel have way too much power…

You’d think inserting a picture into an Excel worksheet is pretty straightforward. If you’re just inserting the picture, then it is.

Then you check what other properties you can play with when manipulating the picture. Whoa, amigo! There’s a whole plethora of options!

That’s like giving the typical user a (good enough) subset of rendering functions, including 3D. And here I thought you could just resize the picture, and maybe rotate it… If you’re not going for full 3D manipulation as in OpenGL or DirectX, Excel will do it for you.

Excel respects the alpha/transparency of the picture (right now, PNG is the only fairly prevalent image format I know of that carries alpha information). And you can outline the picture and fill the transparent portions with a colour.

Do you know it can take a phenomenal amount of work to get shadows working in 3D renderings? You’ve got to know the light source position, then calculate the rays of light blocked by an object to create the illusion of a shadow (by creating a polygon that’s the outline of that object, as described by those blocked rays of light). Well, technically speaking, Excel has it easy, since the object is always a rectangle (an image).

Speaking of light, we can also do reflections. Let’s run a full gamut of the 3D power.

I lost the software keys to my 2 pieces of 3D rendering software (Bryce 4 and trueSpace). It sucked that I couldn’t render landscapes anymore… But I’m seeing Excel as an alternative for getting some 3D-ish images.

Well, the above screenshots were from a spreadsheet generated by my software library, SpreadsheetLight. The library is based on the Open XML SDK, which in turn is based on what Excel can do. Here’s what the source code looks like:

```
SLDocument sl = new SLDocument(SLThemeTypeValues.Oriel);

SLPicture pic;

sl.SetCellValue(2, 2, "Accent 1");
sl.ApplyNamedCellStyle(2, 2, SLNamedCellStyleValues.Accent1);
sl.SetCellValue(2, 3, "Accent 2");
sl.ApplyNamedCellStyle(2, 3, SLNamedCellStyleValues.Accent2);
sl.SetCellValue(2, 4, "Accent 3");
sl.ApplyNamedCellStyle(2, 4, SLNamedCellStyleValues.Accent3);
sl.SetCellValue(2, 5, "Accent 4");
sl.ApplyNamedCellStyle(2, 5, SLNamedCellStyleValues.Accent4);
sl.SetCellValue(2, 6, "Accent 5");
sl.ApplyNamedCellStyle(2, 6, SLNamedCellStyleValues.Accent5);
sl.SetCellValue(2, 7, "Accent 6");
sl.ApplyNamedCellStyle(2, 7, SLNamedCellStyleValues.Accent6);

pic = new SLPicture("mandelbrot.png");
// anchor at cell (4,2) with 0 horizontal and vertical offsets.
pic.SetRelativePositionInPixels(4, 2, 0, 0);
sl.InsertPicture(pic);

pic = new SLPicture("mandelbrot.png");
pic.SetRelativePositionInPixels(4, 6, 0, 0);
pic.SetSolidFill(DocumentFormat.OpenXml.Drawing.SchemeColorValues.Accent3, 20);
pic.SetSolidOutline(DocumentFormat.OpenXml.Drawing.SchemeColorValues.Accent4, 0);
pic.SetOutlineStyle(4, DocumentFormat.OpenXml.Drawing.CompoundLineValues.Double, DocumentFormat.OpenXml.Drawing.PresetLineDashValues.LargeDash, DocumentFormat.OpenXml.Drawing.LineCapValues.Round, SLPicture.SLPictureJoinType.Bevel);
sl.InsertPicture(pic);

sl.SetCellValue(2, 2, "Accent 1");
sl.ApplyNamedCellStyle(2, 2, SLNamedCellStyleValues.Accent1);
sl.SetCellValue(2, 3, "Accent 2");
sl.ApplyNamedCellStyle(2, 3, SLNamedCellStyleValues.Accent2);
sl.SetCellValue(2, 4, "Accent 3");
sl.ApplyNamedCellStyle(2, 4, SLNamedCellStyleValues.Accent3);
sl.SetCellValue(2, 5, "Accent 4");
sl.ApplyNamedCellStyle(2, 5, SLNamedCellStyleValues.Accent4);
sl.SetCellValue(2, 6, "Accent 5");
sl.ApplyNamedCellStyle(2, 6, SLNamedCellStyleValues.Accent5);
sl.SetCellValue(2, 7, "Accent 6");
sl.ApplyNamedCellStyle(2, 7, SLNamedCellStyleValues.Accent6);

pic = new SLPicture("mandelbrot.png");
pic.SetRelativePositionInPixels(4, 2, 0, 0);
// transparency 0%, blur 6pt, angle 90 degrees, distance 3pt
sl.InsertPicture(pic);

pic = new SLPicture("mandelbrot.png");
pic.SetRelativePositionInPixels(4, 6, 0, 0);
// transparency 50%, horizontal size 100%, vertical size -23%, horizontal skew -13.34%, vertical skew 0%
// blur 6pt, angle 45 degrees, distance 3pt
// origin at picture's bottom left, don't rotate with picture.
pic.SetPerspectiveShadow(DocumentFormat.OpenXml.Drawing.SchemeColorValues.Accent2, 50, 100, -23, -13.34m, 0, 6, 45, 3, DocumentFormat.OpenXml.Drawing.RectangleAlignmentValues.BottomLeft, false);
sl.InsertPicture(pic);

sl.SetCellValue(2, 2, "Accent 1");
sl.ApplyNamedCellStyle(2, 2, SLNamedCellStyleValues.Accent1);
sl.SetCellValue(2, 3, "Accent 2");
sl.ApplyNamedCellStyle(2, 3, SLNamedCellStyleValues.Accent2);
sl.SetCellValue(2, 4, "Accent 3");
sl.ApplyNamedCellStyle(2, 4, SLNamedCellStyleValues.Accent3);
sl.SetCellValue(2, 5, "Accent 4");
sl.ApplyNamedCellStyle(2, 5, SLNamedCellStyleValues.Accent4);
sl.SetCellValue(2, 6, "Accent 5");
sl.ApplyNamedCellStyle(2, 6, SLNamedCellStyleValues.Accent5);
sl.SetCellValue(2, 7, "Accent 6");
sl.ApplyNamedCellStyle(2, 7, SLNamedCellStyleValues.Accent6);

pic = new SLPicture("mandelbrot.png");
pic.SetRelativePositionInPixels(6, 3, 0, 0);
pic.SetFullReflection();
// width 6pt, height 6pt
pic.Set3DBevelBottom(DocumentFormat.OpenXml.Drawing.BevelPresetValues.Convex, 6, 6);
// width 3pt, height 4pt
pic.Set3DBevelTop(DocumentFormat.OpenXml.Drawing.BevelPresetValues.ArtDeco, 3, 4);
// extrusion colour transparency 0%, extrusion (or depth) height 15 pt
pic.Set3DExtrusion(System.Drawing.Color.Green, 0, 15);
// contour colour transparency 40%, contour width 4pt
pic.Set3DContour(DocumentFormat.OpenXml.Drawing.SchemeColorValues.Accent3, 40, 4);
pic.Set3DMaterialType(DocumentFormat.OpenXml.Drawing.PresetMaterialTypeValues.TranslucentPowder);
// 5 pt above "ground"
pic.Set3DZDistance(5);
// field of view 105 degrees, zoom 100%
// camera latitude, longitude, revolution in degrees (50, 40, 30)
// light rig latitude, longitude, revolution in degrees (0, 0, 30)
pic.Set3DScene(DocumentFormat.OpenXml.Drawing.PresetCameraValues.PerspectiveFront, 105, 100, 50, 40, 30, DocumentFormat.OpenXml.Drawing.LightRigValues.Sunrise, DocumentFormat.OpenXml.Drawing.LightRigDirectionValues.TopLeft, 0, 0, 30);
sl.InsertPicture(pic);

sl.SaveAs("Pictures.xlsx");
```

I use the enumerations available in the Open XML SDK as far as possible. This means you can sort of port over any programs you have written using the Open XML SDK. It also means I don’t have to invent my own enumerations that are essentially copies of the SDK’s.

One thing that confused me was the 3D rotation options in Excel.

Excel uses the vertical “axis” as the X value, and the horizontal “axis” as the Y value. To the user, the X, Y and Z values are just the first, second and third values of… something.

To me, X, Y and Z have special meanings, particularly in the context of 3D. Those values don’t even increase in a consistent manner: the X value decreases if you use the right-hand rule (the left button), but the Y and Z values increase when you use the right-hand rule (left and right buttons respectively).

The Open XML SDK then complicates matters by using the terms latitude, longitude and revolution, each ranging from 0 degrees up to (but not including) 360 degrees.

And in case you’re wondering about the state of the spreadsheet library’s progress: I’ve frozen the feature set of SpreadsheetLight for version 1, otherwise I’d never get it shipped. I’m now documenting the hashbrown out of the source code. The point is to make it ridiculously easy for a programmer to pick up SpreadsheetLight and start using it (without having to refer to a manual, which I’ll also be writing).

## How to insert multiple images in Excel Open XML

A blog reader, Fabrice Sacré, sent me an email with code about how to insert multiple images into an Excel file using Open XML. (I had fun playing Yanni’s “In The Mirror” on the virtual piano on his site.) He based it on my article on inserting just one single image into an Excel file. Based on his email address and signature, he works in the Red Cross. Wow! I’m honoured to be saving lives somehow, somewhere, even if in a long, circuitous and indirect manner. Possibly, I’m thinking too highly of myself and there’s no impact at all to the world at large, but I’m trying not to consider that.

Now when I started with the original code, I was in that “create xlsx, rename to zip, scrutinise xml till I fall asleep” mode. Frankly, I had no idea what I was doing most of the time. And in this respect, the order of the individual code/data components was important, but I didn’t know which one should come first (in fact, I still don’t). In particular, the code for inserting an image came before the code for inserting data. That worked fine for 1 image, but didn’t turn out so well when I tried it with Fabrice’s code.

Ok, here’s the thing. Fabrice used an Excel template file (.xltx). He loaded that up into a new Excel file, then inserted images. Basically, you have an Excel file, you obtain the worksheet you want, and then you insert images into that worksheet. After I understood that (I had to ask him to send his full source code, because he only sent the image insertion function), I made some changes to my original code to use his function, and it worked! I’m not going to post the full source code he sent, because it might be sensitive (Red Cross! Saving lives, people!).

Here are the 3 images I used:

I like clouds and blue skies.

I used the “Crystalize” [sic] function in Paint.NET.

Everyone loves cats, right?

So here’s the main body of the code:

```
static void Main(string[] args)
{
string sFile = "ExcelOpenXmlMultipleImageInsert.xlsx";
if (File.Exists(sFile))
{
File.Delete(sFile);
}
try
{
BuildWorkbook(sFile);
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
}
Console.WriteLine("Program end");
}

protected static void BuildWorkbook(string filename)
{
using (SpreadsheetDocument xl = SpreadsheetDocument.Create(filename, SpreadsheetDocumentType.Workbook))
{
WorkbookPart wbp = xl.AddWorkbookPart();
WorksheetPart wsp = wbp.AddNewPart<WorksheetPart>();
WorkbookStylesPart wbsp = wbp.AddNewPart<WorkbookStylesPart>();
Workbook wb = new Workbook();
FileVersion fv = new FileVersion();
fv.ApplicationName = "Microsoft Office Excel";
Worksheet ws = new Worksheet();
SheetData sd = new SheetData();

wbsp.Stylesheet = CreateStylesheet();
wbsp.Stylesheet.Save();

Row r = new Row();
r.RowIndex = 15;
Cell c;
c = new Cell();
c.DataType = CellValues.String;
c.StyleIndex = 1;
c.CellReference = "G15";
c.CellValue = new CellValue("We have multiple images!");
r.Append(c);
sd.Append(r);

ws.Append(sd);
wsp.Worksheet = ws;

// It happens that the resolution of all 3 images is 72 dpi
// Images are 240 by 180 pixels

InsertImage(ws, 0, 0, "cloudstreaks.jpg");
// 2286000 = 180 (pixels) * 914400 (EMUs per inch) / 72 (bitmap resolution in dpi)
InsertImage(ws, 0, 2286000, "crystallisedpavilion.png");
// 3048000 = 240 (pixels) * 914400 (EMUs per inch) / 72 (bitmap resolution in dpi)
InsertImage(ws, 3048000, 0, "dozingcat.jpg");

wsp.Worksheet.Save();
Sheets sheets = new Sheets();
Sheet sheet = new Sheet();
sheet.Name = "Sheet1";
sheet.SheetId = 1;
sheet.Id = wbp.GetIdOfPart(wsp);
sheets.Append(sheet);
wb.Append(fv);
wb.Append(sheets);

xl.WorkbookPart.Workbook = wb;
xl.WorkbookPart.Workbook.Save();
xl.Close();
}
}
```
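In case the “magic number” in the comments looks mysterious: 914400 is the number of EMUs (English Metric Units) in an inch. A quick Python sketch of the pixel-to-EMU arithmetic used for the offsets above:

```python
EMU_PER_INCH = 914400  # English Metric Units per inch

def pixels_to_emu(pixels, dpi=72.0):
    """Convert a pixel length to EMUs at the given bitmap resolution."""
    return int(pixels * EMU_PER_INCH / dpi)

# The offsets used in the code above, for 240 by 180 pixel images at 72 dpi:
print(pixels_to_emu(180))  # 2286000 (the vertical offset of the second image)
print(pixels_to_emu(240))  # 3048000 (the horizontal offset of the third image)
```

At 72 dpi, one pixel is simply 914400 / 72 = 12700 EMUs, which is why the numbers come out so round.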

I used the CreateStyleSheet() from this article. I want you to note that the data creation (the single Cell containing the sentence) now comes before the image insertion. My approach was originally to model what I see on an Excel worksheet: I would most probably see the image(s) at the top, then the (table of) data below, so I modelled the code accordingly. Apparently, that didn’t work out so well.

[UPDATE: For the CreateStyleSheet() function, it turns out that the `NumberFormats` class is no longer available in Open XML SDK 2.0. It’s now `NumberingFormats`. I’ve changed it for the source code for this article (but not for previous code articles)]

Fabrice’s approach was to use a template, then insert images to a particular worksheet. The data sheet was already created (via the template). My approach was to insert images and data in memory, so to speak, before committing the save to the worksheet/workbook parts. Basically, I was trying to create a worksheet out of thin air, add images, add data, and then plug it onto the workbook. Oh well, whatever works then…

Here’s the image insertion function Fabrice sent:

```
protected static void InsertImage(Worksheet ws, long x, long y, long? width, long? height, string sImagePath)
{
try
{
WorksheetPart wsp = ws.WorksheetPart;
DrawingsPart dp;
ImagePart imgp;
WorksheetDrawing wsd;

ImagePartType ipt;
switch (sImagePath.Substring(sImagePath.LastIndexOf('.') + 1).ToLower())
{
case "png":
ipt = ImagePartType.Png;
break;
case "jpg":
case "jpeg":
ipt = ImagePartType.Jpeg;
break;
case "gif":
ipt = ImagePartType.Gif;
break;
default:
return;
}

if (wsp.DrawingsPart == null)
{
//----- no drawing part exists, add a new one

dp = wsp.AddNewPart<DrawingsPart>();
imgp = dp.AddImagePart(ipt, wsp.GetIdOfPart(dp));
wsd = new WorksheetDrawing();
}
else
{
//----- use existing drawing part

dp = wsp.DrawingsPart;
imgp = dp.AddImagePart(ipt);
dp.CreateRelationshipToPart(imgp);
wsd = dp.WorksheetDrawing;
}

using (FileStream fs = new FileStream(sImagePath, FileMode.Open))
{
imgp.FeedData(fs);
}

int imageNumber = dp.ImageParts.Count<ImagePart>();
if (imageNumber == 1)
{
Drawing drawing = new Drawing();
drawing.Id = dp.GetIdOfPart(imgp);
ws.Append(drawing);
}

NonVisualDrawingProperties nvdp = new NonVisualDrawingProperties();
nvdp.Id = new UInt32Value((uint)(1024 + imageNumber));
nvdp.Name = "Picture " + imageNumber.ToString();
nvdp.Description = "";
DocumentFormat.OpenXml.Drawing.PictureLocks picLocks = new DocumentFormat.OpenXml.Drawing.PictureLocks();
picLocks.NoChangeAspect = true;
NonVisualPictureDrawingProperties nvpdp = new NonVisualPictureDrawingProperties();
nvpdp.PictureLocks = picLocks;
NonVisualPictureProperties nvpp = new NonVisualPictureProperties();
nvpp.NonVisualDrawingProperties = nvdp;
nvpp.NonVisualPictureDrawingProperties = nvpdp;

DocumentFormat.OpenXml.Drawing.Stretch stretch = new DocumentFormat.OpenXml.Drawing.Stretch();
stretch.FillRectangle = new DocumentFormat.OpenXml.Drawing.FillRectangle();

BlipFill blipFill = new BlipFill();
DocumentFormat.OpenXml.Drawing.Blip blip = new DocumentFormat.OpenXml.Drawing.Blip();
blip.Embed = dp.GetIdOfPart(imgp);
blip.CompressionState = DocumentFormat.OpenXml.Drawing.BlipCompressionValues.Print;
blipFill.Blip = blip;
blipFill.SourceRectangle = new DocumentFormat.OpenXml.Drawing.SourceRectangle();
blipFill.Append(stretch);

DocumentFormat.OpenXml.Drawing.Transform2D t2d = new DocumentFormat.OpenXml.Drawing.Transform2D();
DocumentFormat.OpenXml.Drawing.Offset offset = new DocumentFormat.OpenXml.Drawing.Offset();
offset.X = 0;
offset.Y = 0;
t2d.Offset = offset;
Bitmap bm = new Bitmap(sImagePath);

DocumentFormat.OpenXml.Drawing.Extents extents = new DocumentFormat.OpenXml.Drawing.Extents();

if (width == null)
extents.Cx = (long)bm.Width * (long)((float)914400 / bm.HorizontalResolution);
else
extents.Cx = width.Value;

if (height == null)
extents.Cy = (long)bm.Height * (long)((float)914400 / bm.VerticalResolution);
else
extents.Cy = height.Value;

bm.Dispose();
t2d.Extents = extents;
ShapeProperties sp = new ShapeProperties();
sp.BlackWhiteMode = DocumentFormat.OpenXml.Drawing.BlackWhiteModeValues.Auto;
sp.Transform2D = t2d;
DocumentFormat.OpenXml.Drawing.PresetGeometry prstGeom = new DocumentFormat.OpenXml.Drawing.PresetGeometry();
prstGeom.Preset = DocumentFormat.OpenXml.Drawing.ShapeTypeValues.Rectangle;
sp.Append(prstGeom);
sp.Append(new DocumentFormat.OpenXml.Drawing.NoFill());

Picture picture = new Picture();
picture.NonVisualPictureProperties = nvpp;
picture.BlipFill = blipFill;
picture.ShapeProperties = sp;

Position pos = new Position();
pos.X = x;
pos.Y = y;
Extent ext = new Extent();
ext.Cx = extents.Cx;
ext.Cy = extents.Cy;
AbsoluteAnchor anchor = new AbsoluteAnchor();
anchor.Position = pos;
anchor.Extent = ext;
anchor.Append(picture);
anchor.Append(new ClientData());
wsd.Append(anchor);
wsd.Save(dp);
}
catch (Exception)
{
throw; // or do something more interesting if you want
}
}

protected static void InsertImage(Worksheet ws, long x, long y, string sImagePath)
{
InsertImage(ws, x, y, null, null, sImagePath);
}
```

As of now, it supports only the image formats gif, jpg (jpeg) and png. However, there’s no stopping you from adding support for bmp, tiff, emf and other image formats supported by the `ImagePartType` enumeration.

Have fun.

There’s updated material and source code, together with more information on how to work with Open XML. Click here to find out more.

## OneCellAnchor might be easier to use than TwoCellAnchor

A blog reader, Kevin, emailed me about the use of the `OneCellAnchor` class. This is for coding with the Excel Open XML format. The relevant article is the one about image insertion in Excel. Here’s his comment:

I’ve been using the OneCellAnchor and find it perfect: position by row/column and size by image extent in EMU and best of all, the image will not resize when column widths are set.

If you’re having trouble figuring out the `TwoCellAnchor` class properties, you might want to check out the `OneCellAnchor` class instead. It might be easier to work with.
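For reference, here’s roughly what a `oneCellAnchor` looks like in xl/drawings/drawing1.xml; it anchors by one corner plus an extent, instead of two corners. (This is a hand-written sketch with made-up offsets and extents, not generated output.)

```
<xdr:oneCellAnchor>
  <xdr:from>
    <xdr:col>1</xdr:col>
    <xdr:colOff>0</xdr:colOff>
    <xdr:row>2</xdr:row>
    <xdr:rowOff>0</xdr:rowOff>
  </xdr:from>
  <!-- the image extent in EMUs; this size is kept even if columns are resized -->
  <xdr:ext cx="3048000" cy="2286000"/>
  <xdr:pic>
    <!-- picture properties, blip fill and shape properties go here as usual -->
  </xdr:pic>
  <xdr:clientData/>
</xdr:oneCellAnchor>
```

Because the extent is fixed in EMUs, the image keeps its size when column widths change, which is exactly the behaviour Kevin described.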

Thanks, Kevin, for that information.


## Image rotation with bilinear interpolation and no clipping

I wrote an article on image rotation with bilinear interpolation to calculate the pixel information. Commenter Jahn wanted to know if the resulting image could be obtained without clipping. I replied with the required code changes (they were fairly minimal), then thought they might be useful to you too.

The code was cleaned up a bit, and here it is:

```
const int cnSizeBuffer = 20;
// 30 deg = PI/6 rad
// rotating clockwise, so it's negative relative to Cartesian quadrants
const double cnAngle = -0.52359877559829887307710723054658;
// use whatever image you fancy
Bitmap bm = new Bitmap("slantedtreetopsky.jpg");
// general iterators
int i, j;
// calculated indices in Cartesian coordinates
int x, y;
double fDistance, fPolarAngle;
// for use in neighbouring indices in Cartesian coordinates
int iFloorX, iCeilingX, iFloorY, iCeilingY;
// calculated indices in Cartesian coordinates with trailing decimals
double fTrueX, fTrueY;
// for interpolation
double fDeltaX, fDeltaY;
// pixel colours
Color clrTopLeft, clrTopRight, clrBottomLeft, clrBottomRight;
// interpolated "top" pixels
double fTopRed, fTopGreen, fTopBlue;
// interpolated "bottom" pixels
double fBottomRed, fBottomGreen, fBottomBlue;
// final interpolated colour components
int iRed, iGreen, iBlue;
int iCentreX, iCentreY;
int iDestCentre;
int iWidth, iHeight;
int iDiagonal;
iWidth = bm.Width;
iHeight = bm.Height;
iDiagonal = (int)(Math.Ceiling(Math.Sqrt((double)(bm.Width * bm.Width + bm.Height * bm.Height)))) + cnSizeBuffer;

iCentreX = iWidth / 2;
iCentreY = iHeight / 2;
iDestCentre = iDiagonal / 2;

Bitmap bmBilinearInterpolation = new Bitmap(iDiagonal, iDiagonal);

for (i = 0; i < iDiagonal; ++i)
{
for (j = 0; j < iDiagonal; ++j)
{
bmBilinearInterpolation.SetPixel(j, i, Color.Black);
}
}

// assigning pixels of destination image from source image
// with bilinear interpolation
for (i = 0; i < iDiagonal; ++i)
{
for (j = 0; j < iDiagonal; ++j)
{
// convert raster to Cartesian
x = j - iDestCentre;
y = iDestCentre - i;

// convert Cartesian to polar
fDistance = Math.Sqrt(x * x + y * y);
fPolarAngle = 0.0;
if (x == 0)
{
if (y == 0)
{
// centre of image, no rotation needed
bmBilinearInterpolation.SetPixel(j, i, bm.GetPixel(iCentreX, iCentreY));
continue;
}
else if (y < 0)
{
fPolarAngle = 1.5 * Math.PI;
}
else
{
fPolarAngle = 0.5 * Math.PI;
}
}
else
{
fPolarAngle = Math.Atan2((double)y, (double)x);
}

// the crucial rotation part
// "reverse" rotate, so minus instead of plus
fPolarAngle -= cnAngle;

// convert polar to Cartesian
fTrueX = fDistance * Math.Cos(fPolarAngle);
fTrueY = fDistance * Math.Sin(fPolarAngle);

// convert Cartesian to raster
fTrueX = fTrueX + (double)iCentreX;
fTrueY = (double)iCentreY - fTrueY;

iFloorX = (int)(Math.Floor(fTrueX));
iFloorY = (int)(Math.Floor(fTrueY));
iCeilingX = (int)(Math.Ceiling(fTrueX));
iCeilingY = (int)(Math.Ceiling(fTrueY));

// check bounds
if (iFloorX < 0 || iCeilingX < 0 || iFloorX >= iWidth || iCeilingX >= iWidth || iFloorY < 0 || iCeilingY < 0 || iFloorY >= iHeight || iCeilingY >= iHeight) continue;

fDeltaX = fTrueX - (double)iFloorX;
fDeltaY = fTrueY - (double)iFloorY;

clrTopLeft = bm.GetPixel(iFloorX, iFloorY);
clrTopRight = bm.GetPixel(iCeilingX, iFloorY);
clrBottomLeft = bm.GetPixel(iFloorX, iCeilingY);
clrBottomRight = bm.GetPixel(iCeilingX, iCeilingY);

// linearly interpolate horizontally between top neighbours
fTopRed = (1 - fDeltaX) * clrTopLeft.R + fDeltaX * clrTopRight.R;
fTopGreen = (1 - fDeltaX) * clrTopLeft.G + fDeltaX * clrTopRight.G;
fTopBlue = (1 - fDeltaX) * clrTopLeft.B + fDeltaX * clrTopRight.B;

// linearly interpolate horizontally between bottom neighbours
fBottomRed = (1 - fDeltaX) * clrBottomLeft.R + fDeltaX * clrBottomRight.R;
fBottomGreen = (1 - fDeltaX) * clrBottomLeft.G + fDeltaX * clrBottomRight.G;
fBottomBlue = (1 - fDeltaX) * clrBottomLeft.B + fDeltaX * clrBottomRight.B;

// linearly interpolate vertically between top and bottom interpolated results
iRed = (int)(Math.Round((1 - fDeltaY) * fTopRed + fDeltaY * fBottomRed));
iGreen = (int)(Math.Round((1 - fDeltaY) * fTopGreen + fDeltaY * fBottomGreen));
iBlue = (int)(Math.Round((1 - fDeltaY) * fTopBlue + fDeltaY * fBottomBlue));

// make sure colour values are valid
if (iRed < 0) iRed = 0;
if (iRed > 255) iRed = 255;
if (iGreen < 0) iGreen = 0;
if (iGreen > 255) iGreen = 255;
if (iBlue < 0) iBlue = 0;
if (iBlue > 255) iBlue = 255;

bmBilinearInterpolation.SetPixel(j, i, Color.FromArgb(iRed, iGreen, iBlue));
}
}
bmBilinearInterpolation.Save("rotatedslantedtreetopsky.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
```

In the original article, the resulting image had the same dimensions as the source image. This meant that after rotation, some pixel information would be lost. To counter that, we make the dimensions of the resulting image “large enough”.

Consider that this is a rotation operation with an arbitrary angle: when all possible rotations of the source image are placed on top of one another, you get a smudgy circle (not sure if you can visualise that). So the resulting image must be able to contain that circle, and because our images are stored in rectangular format, we need a square. That’s why, in the code, I didn’t use separate X and Y components for the dimensions; they’re the same.

The side of the square has to be, at minimum, the diagonal of the source image (which you can calculate with Pythagoras’ theorem). I added a buffer (of 20 pixels) to ensure the square can contain the resulting image.
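The arithmetic is simple enough to sketch in a few lines of Python. For a 640 by 480 source image, the diagonal is exactly 800 pixels (a 3-4-5 triangle), so the square canvas works out to 820 pixels a side with the 20-pixel buffer:

```python
import math

def rotated_canvas_side(width, height, buffer=20):
    """Side length of a square canvas big enough to hold any rotation
    of a width-by-height image: ceil(diagonal) plus a safety buffer."""
    return int(math.ceil(math.sqrt(width * width + height * height))) + buffer

print(rotated_canvas_side(640, 480))  # 820
```

This mirrors the `iDiagonal` calculation in the C# code above.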

Here’s the source image:

And here’s the resulting image:

Alright, have fun with the code.

## How to insert an image in Excel Open XML

This is a continuation of a series of articles on using Excel Open XML. Previously we looked at how to create stylesheets. Today, we’ll look at inserting images. (You will need the Open XML SDK from Microsoft)

Similar to when I was writing the code for stylesheet creation, I have little idea why any given piece of code has to exist for an image to be inserted successfully. All I can advise you is to insert one image into an Excel file, save it in the Excel Open XML format, rename it to a .zip extension, unzip it, and study the contents like crazy. That’s what I did.

Here, I am going to show you the shortcut, and relieve you of all that pain. You will learn to create a simple Excel file with just one sheet in it. And on that one sheet is an image. That’s it. You’ll be surprised how much code you have to write…

```
static void Main(string[] args)
{
string sFile = "ExcelOpenXmlWithImage.xlsx";
if (File.Exists(sFile))
{
File.Delete(sFile);
}
BuildWorkbook(sFile);
}

private static void BuildWorkbook(string filename)
{
try
{
using (SpreadsheetDocument xl = SpreadsheetDocument.Create(filename, SpreadsheetDocumentType.Workbook))
{
WorkbookPart wbp = xl.AddWorkbookPart();
WorksheetPart wsp = wbp.AddNewPart<WorksheetPart>();
Workbook wb = new Workbook();
FileVersion fv = new FileVersion();
fv.ApplicationName = "Microsoft Office Excel";
Worksheet ws = new Worksheet();
SheetData sd = new SheetData();

string sImagePath = "polymathlogo.png";
DrawingsPart dp = wsp.AddNewPart<DrawingsPart>();
ImagePart imgp = dp.AddImagePart(ImagePartType.Png, wsp.GetIdOfPart(dp));
using (FileStream fs = new FileStream(sImagePath, FileMode.Open))
{
imgp.FeedData(fs);
}

NonVisualDrawingProperties nvdp = new NonVisualDrawingProperties();
nvdp.Id = 1025;
nvdp.Name = "Picture 1";
nvdp.Description = "polymathlogo";
DocumentFormat.OpenXml.Drawing.PictureLocks picLocks = new DocumentFormat.OpenXml.Drawing.PictureLocks();
picLocks.NoChangeAspect = true;
NonVisualPictureDrawingProperties nvpdp = new NonVisualPictureDrawingProperties();
nvpdp.PictureLocks = picLocks;
NonVisualPictureProperties nvpp = new NonVisualPictureProperties();
nvpp.NonVisualDrawingProperties = nvdp;
nvpp.NonVisualPictureDrawingProperties = nvpdp;

DocumentFormat.OpenXml.Drawing.Stretch stretch = new DocumentFormat.OpenXml.Drawing.Stretch();
stretch.FillRectangle = new DocumentFormat.OpenXml.Drawing.FillRectangle();

BlipFill blipFill = new BlipFill();
DocumentFormat.OpenXml.Drawing.Blip blip = new DocumentFormat.OpenXml.Drawing.Blip();
blip.Embed = dp.GetIdOfPart(imgp);
blip.CompressionState = DocumentFormat.OpenXml.Drawing.BlipCompressionValues.Print;
blipFill.Blip = blip;
blipFill.SourceRectangle = new DocumentFormat.OpenXml.Drawing.SourceRectangle();
blipFill.Append(stretch);

DocumentFormat.OpenXml.Drawing.Transform2D t2d = new DocumentFormat.OpenXml.Drawing.Transform2D();
DocumentFormat.OpenXml.Drawing.Offset offset = new DocumentFormat.OpenXml.Drawing.Offset();
offset.X = 0;
offset.Y = 0;
t2d.Offset = offset;
Bitmap bm = new Bitmap(sImagePath);
//http://en.wikipedia.org/wiki/English_Metric_Unit#DrawingML
//http://stackoverflow.com/questions/1341930/pixel-to-centimeter
//http://stackoverflow.com/questions/139655/how-to-convert-pixels-to-points-px-to-pt-in-net-c
DocumentFormat.OpenXml.Drawing.Extents extents = new DocumentFormat.OpenXml.Drawing.Extents();
extents.Cx = (long)bm.Width * (long)((float)914400 / bm.HorizontalResolution);
extents.Cy = (long)bm.Height * (long)((float)914400 / bm.VerticalResolution);
bm.Dispose();
t2d.Extents = extents;
ShapeProperties sp = new ShapeProperties();
sp.BlackWhiteMode = DocumentFormat.OpenXml.Drawing.BlackWhiteModeValues.Auto;
sp.Transform2D = t2d;
DocumentFormat.OpenXml.Drawing.PresetGeometry prstGeom = new DocumentFormat.OpenXml.Drawing.PresetGeometry();
prstGeom.Preset = DocumentFormat.OpenXml.Drawing.ShapeTypeValues.Rectangle;
sp.Append(prstGeom);
sp.Append(new DocumentFormat.OpenXml.Drawing.NoFill());

Picture picture = new Picture();
picture.NonVisualPictureProperties = nvpp;
picture.BlipFill = blipFill;
picture.ShapeProperties = sp;

Position pos = new Position();
pos.X = 0;
pos.Y = 0;
Extent ext = new Extent();
ext.Cx = extents.Cx;
ext.Cy = extents.Cy;
AbsoluteAnchor anchor = new AbsoluteAnchor();
anchor.Position = pos;
anchor.Extent = ext;
anchor.Append(picture);
anchor.Append(new ClientData());
WorksheetDrawing wsd = new WorksheetDrawing();
wsd.Append(anchor);
Drawing drawing = new Drawing();
drawing.Id = dp.GetIdOfPart(imgp);

wsd.Save(dp);

ws.Append(sd);
ws.Append(drawing);
wsp.Worksheet = ws;
wsp.Worksheet.Save();
Sheets sheets = new Sheets();
Sheet sheet = new Sheet();
sheet.Name = "Sheet1";
sheet.SheetId = 1;
sheet.Id = wbp.GetIdOfPart(wsp);
sheets.Append(sheet);
wb.Append(fv);
wb.Append(sheets);

xl.WorkbookPart.Workbook = wb;
xl.WorkbookPart.Workbook.Save();
xl.Close();
}
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
}
}
```

It’s a console program, so create your C# project accordingly. There were 3 reference articles I used; you’ll find their links in the comments of the code above.

The use of the `ImagePart` I can understand. It’s to load the image file you had in mind. The rest is pure gibberish until you hit the part on resolution. That’s where I used the Stack Overflow articles as reference. Then it was gibberish until the positioning of the image.

This is where I did something different. If you’ve done the trick of inserting an image into Excel, saving the file in Open XML format, renaming it to .zip, and unzipping it, you might find this in the xl/drawings/drawing1.xml file:

```
<xdr:twoCellAnchor editAs="oneCell">
  <xdr:from>
    <xdr:col>0</xdr:col>
    <xdr:colOff>0</xdr:colOff>
    <xdr:row>0</xdr:row>
    <xdr:rowOff>0</xdr:rowOff>
  </xdr:from>
  <xdr:to>
    <xdr:col>6</xdr:col>
    <xdr:colOff>248195</xdr:colOff>
    <xdr:row>7</xdr:row>
    <xdr:rowOff>152608</xdr:rowOff>
  </xdr:to>
```

This is for positioning the image. I’m using the `AbsoluteAnchor` instead of the `TwoCellAnchor`. After much hair-tearing, I’ve decided the `TwoCellAnchor` class is too hard to use. Let me tell you why.

[UPDATE: A reader Kevin noted the following:

I’ve been using the OneCellAnchor and find it perfect: position by row/column and size by image extent in EMU and best of all, the image will not resize when column widths are set.

So you can look at the `OneCellAnchor` class and see if it suits you better.
]

The `from` tag should be easy to understand. The `col` tag refers to which Excel column (zero-based) the left side of the image is in. The `colOff` tag is the offset. What offset, you ask? The offset from the left border of the column referenced in the `col` tag, in EMUs. Yes, that English Metric Unit. The `row` and `rowOff` tags refer to the Excel row and the offset from the top border of said Excel row (measured to the top border of the image).
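Since anchor offsets are measured in EMUs, it helps to see the conversion spelled out. Here’s a minimal sketch (in Python, for brevity; the article’s code is C#), assuming the common 96 DPI resolution:

```python
# English Metric Units: 914400 EMUs per inch (hence 9525 per pixel at 96 DPI).
EMU_PER_INCH = 914400

def pixels_to_emu(pixels, dpi=96.0):
    """Convert a pixel length to EMUs at the given image resolution."""
    return int(pixels * EMU_PER_INCH / dpi)

# A 640-pixel-wide image at 96 DPI:
print(pixels_to_emu(640))  # 6096000
```

This is the same arithmetic as the `Extents` calculation in the code above, just laid bare.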

So far, it’s 0 for all 4 properties, because we’re inserting the image at the top left corner. Now comes the fun part.

In the `to` tag, the bottom right of the image is in the Excel cell at column 6, row 7. That’s not all. The bottom right of the image is 248195 EMUs from the left of column 6, and 152608 EMUs from the top of row 7. Note that the image is not 248195 EMUs wide nor 152608 EMUs high.

Since those offsets are calculated relative to their containing cell, they change whenever the widths and heights of preceding cells (column and row indices less than the current ones) change. This means, if I widen the first cell, the offset will change from 248195 EMUs to, say, 1000 EMUs.

The fun part is, if I widen the first cell enough, the bottom right of the image could end up in column 5. So the column and column offset values are in flux.

To really nail this part down, I will need to know the exact widths of every column up to where the bottom right of the image is. I will probably decide on some default column width in EMU, set every column to be that width. Then I’ll convert the width of the image from pixels to EMU and calculate the offset based on that default column width. And woe befall me if I happen to place the image other than at the top left corner, since I would have to do the same thing for the top left corner of the image.

That wouldn’t be feasible if your document happens to require a fixed width for some columns, due to the content of those columns. Then your columns won’t all have the same width, so you’ll have to keep track of all the column widths.
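If you did want to go the `TwoCellAnchor` route, the bookkeeping might look something like this hypothetical helper (a sketch in Python; `emu_to_anchor` and its arguments are my own invention, not part of the Open XML SDK):

```python
def emu_to_anchor(x_emu, column_widths_emu):
    """Given an absolute x position in EMUs and the widths of all columns
    (also in EMUs), walk left to right until the remaining offset fits
    inside one column; return (column index, offset within that column)."""
    col = 0
    for width in column_widths_emu:
        if x_emu < width:
            break
        x_emu -= width
        col += 1
    return col, x_emu

# Three columns, each 600000 EMUs wide; a point 1400000 EMUs from the left
# lands in column 2 (zero-based), 200000 EMUs past that column's left border.
print(emu_to_anchor(1400000, [600000, 600000, 600000]))  # (2, 200000)
```

You’d run this for both the `from` and `to` corners, and redo it whenever any column width changes, which is exactly the hassle `AbsoluteAnchor` avoids.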

Needless to say, I researched and found the `AbsoluteAnchor` class. Much fewer properties to deal with…

Just in case you’re wondering, no, I don’t know why the ID of `NonVisualDrawingProperties` is 1025. That’s the value I found when I did the renaming-unzipping-gawk-at-xlsx-content trick. I believe it can be some other value, but I don’t know the valid range. If you do know, chime in with a comment.

Oh and here’s the image used.

Lovely, isn’t it? *smile*


## Convert 360 degree fisheye image to landscape mode

In this article, you will learn how to flatten a 360 degree fisheye image back to its landscape panoramic form. But first, what’s a 360 degree fisheye image?

[image by gcardinal]

It’s created by using a fisheye lens on your camera, taking a bunch of pictures with it, and using stereographic projection to get the final image. Or so I understand.

Basically, it’s like taking a normal (but panoramic is better) picture and “curling” it along its bottom or top (but much more complicated than that).

Here’s how to visualise the fisheye image. Hold your right hand, palm downwards, in front of you, thumb towards you, little finger away from you. The tips of your fingers form the “left end” of the image. Your wrist forms the “right end” of the image. Now form a fist with your right hand. There’s your fisheye image.

### The conversion algorithm explanation

So given the fisheye image, we want to get that landscape image back. And the way we do that, is to “uncurl” the fisheye image. Here’s the diagram to explain the logic:

You may have noticed that the corners of the source image will not be in the resulting image. You may also notice that the centre of the source image is very “squeezed”, and the pixels around there will be repeated in the resulting image.

The first problem can be solved by using a bigger destination image, but then you’d get a jagged top with unfilled pixels. I didn’t like that, so I decided to give those corners up.

The second problem… I don’t know if there’s a solution, because you’re trying to “guess” the pixels of the destination image from the source image, but the source image has less pixel information. The simplest solution seems to be to get a higher resolution source image, but that only mitigates the problem; it doesn’t solve it.

You may also notice that only the pixels within the inscribed circle of the source image are used. Well, what do you get when you curl up a line? A circle. *wink*

What happens when circles are involved? Radius and angles, that’s what.

So in the destination image, in raster coordinates, going from top to bottom is equivalent to going from the outer inscribed circle of the source image to the centre of the source image. Or: the radius slowly decreases from l down to zero.

Going from left to right is equivalent to going from 2PI radians and decreasing to 0 radians on the inscribed circle. It’s also equivalent to going from 0 radians to -2PI radians. Sines and cosines are periodic functions.
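Putting the two together, the destination-to-source mapping can be sketched like this (in Python, for brevity; the article’s code is C#), with l being half the source image’s width:

```python
import math

def dest_to_source(i, j, l):
    """Map destination pixel (row i, column j) back into the source
    fisheye image. Row i sets the radius, column j sets the angle."""
    radius = l - i                            # top row -> outer circle, bottom row -> centre
    theta = 2.0 * math.pi * (-j) / (4.0 * l)  # left to right sweeps 0 down to -2*pi
    x = radius * math.cos(theta) + l          # shift the Cartesian origin to the image centre
    y = l - radius * math.sin(theta)          # flip y for raster coordinates
    return x, y

# The top-left destination pixel maps to (2*l, l), the rightmost point of the
# inscribed circle. Note that 2*l falls just outside a bitmap whose valid
# indices run 0 to 2*l - 1, which is why the code checks bounds before reading.
print(dest_to_source(0, 0, 240))  # (480.0, 240.0)
```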

Here’s another diagram to show what happens when we iterate over the destination image:

Before we get to the code, here are 2 assumptions to simplify the process:

• The source image is square
• The width of the source image is even

They’re not necessary, but they make the programming easier. And I’m mapping the quadrants to the standard Cartesian quadrants because they make the math easier. The centre of the image should be the centre of the “circle” (or that small planet, as it’s affectionately known).

[The original source image wasn’t square, and its centre wasn’t the centre of the planet. So I cut the image, and guessed as best as I could on the centre. More info on the centre later in the article.]

### Fisheye to landscape algorithm/code

I’m plagiarising my own code from the image rotation with bilinear interpolation article for the bilinear interpolating parts. There are 2 resulting images, one with and one without bilinear interpolation. And here’s the code:

```// assume the source image is square, and its width has even number of pixels
Bitmap bm = new Bitmap("lillestromfisheye.jpg");
int l = bm.Width / 2;
Bitmap bmDestination = new Bitmap(4 * l, l);
Bitmap bmBilinear = new Bitmap(4 * l, l);
int i, j;
int x, y;

// for use in neighbouring indices in Cartesian coordinates
int iFloorX, iCeilingX, iFloorY, iCeilingY;
// calculated indices in Cartesian coordinates with trailing decimals
double fTrueX, fTrueY;
// for interpolation
double fDeltaX, fDeltaY;
// pixel colours
Color clrTopLeft, clrTopRight, clrBottomLeft, clrBottomRight;
// interpolated "top" pixels
double fTopRed, fTopGreen, fTopBlue;
// interpolated "bottom" pixels
double fBottomRed, fBottomGreen, fBottomBlue;
// final interpolated colour components
int iRed, iGreen, iBlue;
// polar coordinates of the current destination pixel
double radius, theta;

for (i = 0; i < bmDestination.Height; ++i)
{
for (j = 0; j < bmDestination.Width; ++j)
{
radius = (double)(l - i);
// theta = 2.0 * Math.PI * (double)(4.0 * l - j) / (double)(4.0 * l);
theta = 2.0 * Math.PI * (double)(-j) / (double)(4.0 * l);
// convert polar to Cartesian coordinates
fTrueX = radius * Math.Cos(theta);
fTrueY = radius * Math.Sin(theta);

// "normal" mode
x = (int)(Math.Round(fTrueX)) + l;
y = l - (int)(Math.Round(fTrueY));
// check bounds
if (x >= 0 && x < (2 * l) && y >= 0 && y < (2 * l))
{
bmDestination.SetPixel(j, i, bm.GetPixel(x, y));
}

// bilinear mode
fTrueX = fTrueX + (double)l;
fTrueY = (double)l - fTrueY;

iFloorX = (int)(Math.Floor(fTrueX));
iFloorY = (int)(Math.Floor(fTrueY));
iCeilingX = (int)(Math.Ceiling(fTrueX));
iCeilingY = (int)(Math.Ceiling(fTrueY));

// check bounds
if (iFloorX < 0 || iCeilingX < 0 ||
iFloorX >= (2 * l) || iCeilingX >= (2 * l) ||
iFloorY < 0 || iCeilingY < 0 ||
iFloorY >= (2 * l) || iCeilingY >= (2 * l)) continue;

fDeltaX = fTrueX - (double)iFloorX;
fDeltaY = fTrueY - (double)iFloorY;

clrTopLeft = bm.GetPixel(iFloorX, iFloorY);
clrTopRight = bm.GetPixel(iCeilingX, iFloorY);
clrBottomLeft = bm.GetPixel(iFloorX, iCeilingY);
clrBottomRight = bm.GetPixel(iCeilingX, iCeilingY);

// linearly interpolate horizontally between top neighbours
fTopRed = (1 - fDeltaX) * clrTopLeft.R + fDeltaX * clrTopRight.R;
fTopGreen = (1 - fDeltaX) * clrTopLeft.G + fDeltaX * clrTopRight.G;
fTopBlue = (1 - fDeltaX) * clrTopLeft.B + fDeltaX * clrTopRight.B;

// linearly interpolate horizontally between bottom neighbours
fBottomRed = (1 - fDeltaX) * clrBottomLeft.R + fDeltaX * clrBottomRight.R;
fBottomGreen = (1 - fDeltaX) * clrBottomLeft.G + fDeltaX * clrBottomRight.G;
fBottomBlue = (1 - fDeltaX) * clrBottomLeft.B + fDeltaX * clrBottomRight.B;

// linearly interpolate vertically between top and bottom interpolated results
iRed = (int)(Math.Round((1 - fDeltaY) * fTopRed + fDeltaY * fBottomRed));
iGreen = (int)(Math.Round((1 - fDeltaY) * fTopGreen + fDeltaY * fBottomGreen));
iBlue = (int)(Math.Round((1 - fDeltaY) * fTopBlue + fDeltaY * fBottomBlue));

// make sure colour values are valid
if (iRed < 0) iRed = 0;
if (iRed > 255) iRed = 255;
if (iGreen < 0) iGreen = 0;
if (iGreen > 255) iGreen = 255;
if (iBlue < 0) iBlue = 0;
if (iBlue > 255) iBlue = 255;

bmBilinear.SetPixel(j, i, Color.FromArgb(iRed, iGreen, iBlue));
}
}

bmDestination.Save("fisheyelandscape.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
bmBilinear.Save("fisheyebilinearlandscape.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
```
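The bilinear step in the code above boils down to three linear interpolations per colour channel. Here’s a standalone sketch of that step (in Python, for brevity; the article’s code is C#):

```python
def bilerp(tl, tr, bl, br, dx, dy):
    """Bilinearly interpolate one colour channel from four neighbouring
    pixels; dx and dy are the fractional position inside the pixel cell."""
    top = (1 - dx) * tl + dx * tr        # lerp across the top pair
    bottom = (1 - dx) * bl + dx * br     # lerp across the bottom pair
    return (1 - dy) * top + dy * bottom  # lerp between the two results

# A quarter of the way right, between black (0) on the left and white (100)
# on the right, with identical top and bottom rows:
print(bilerp(0, 100, 0, 100, 0.25, 0.5))  # 25.0
```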

### So what did we get from our fisheye image?

Let’s look at our results, shall we? First, let’s bring our source image back.

Here’s the straight-up result from the algorithm:

Here’s the result with bilinear interpolation:

And I’m fortunate that the original photographer had the original landscape version for comparison:

[image by gcardinal]

Hmm… better than I expected. The code was also easier than I expected. I think it’s the math that was harder…

The fisheye image I used is one of those where the bottom of the original landscape image is curled into the centre. The other type is where the top of the original image is curled into the centre.

In that case, using the algorithm provided results in an upside-down image. I’ll leave flipping the resulting image right-side-up as an exercise for you.

One note about the centre of the source image. I found that if the centre is off, the horizon of the resulting image won’t be flat. I’m just showing you how to get the landscape image from a “perfect” source fisheye image. I’ll leave the details of fixing missing corners and undulating horizons to you.

Oh, while figuring out the algorithm and writing the code, I needed a test image. So I made this monstrosity:

with this result:

It ain’t pretty, but I needed to test if the quadrants map correctly…

P.S. A reader submitted this in a recent survey. I hope I’m interpreting the question and answering it correctly.