Sunday, April 12, 2009

Some stuff Clutter has been used for so far

http://www.youtube.com/watch?v=arL_-tQndzI&feature=PlayList&p=9FAEABEC2B33B600&index=0&playnext=1

I slapped together a playlist on YouTube of some videos that demonstrate applications built with Clutter.

If you're wondering why there aren't more videos out there, the answer is simple:

Clutter is only about two years old, and Intel only purchased OpenedHand in September '08. It hasn't even reached version 1.0 yet.

Perhaps of some interest is the fact that the iPhone is also fully capable of running Clutter applications, at least as far as the hardware is concerned. The same can be said for all Symbian smartphones, and many of HTC's Windows Mobile smartphones, though again, it might not be easy to rig the software to allow it to happen.

But the important point is this: if something is written using the Clutter library, it's not particularly constrained or encumbered by graphics hardware.

The theoretical case for using the Clutter framework in an actual game

I've been trying to wrap my mind around the Clutter framework and the accompanying technologies for a while now, because it seems to be a powerful library that makes a lot of features easily accessible.

What might not be apparent is that the problem it solves - the rendering of 2D graphics - is a problem in the first place. So let's examine for a moment what it actually is: a library that utilizes OpenGL as a backend for drawing hardware accelerated graphics.

"Why is hardware acceleration necessary for a 2D game in this day and age?" I hear you cry.
Well, have a look at this:

http://en.wikipedia.org/wiki/Comparison_of_OpenGL_and_Direct3D#Early_debate

Excerpt from the linked section:

"By 1998, even the much-maligned S3 Virge was substantially faster than the fastest Pentium II running Direct3D’s MMX rasterizer."

The S3 Virge, presumably, was an el-crapo graphics card with 4 megabytes of VRAM that might've cost about $40 to slap into your run-of-the-mill prefabbed consumer desktop from Dell or HP. The fastest Pentium II, by comparison, must've been about $600-700, not even available in preassembled computers unless specifically requested.

The perspective this is meant to impress upon you is this: hardware acceleration rocks, software rendering sucks, and that's been the case for over ten years now, with the gap in performance per price widening every quarter ever since.

Ok, so we need hardware acceleration for the graphics. How do we get that?
We use a library. Libraries are collections of functional code that perform operations detailed in an API. A library is different from a system call in that a system call is implemented in the operating system itself, whereas a library resides in a little bubble attached to the operating system: it "talks" to other libraries and to hardware as directed and permitted by the operating system.

Even DirectX, Microsoft's multimedia library, lives in a bubble attached to the operating system. While it's true that you cannot run Windows without having some form of DirectX on your system, DirectX does not belong within the Windows kernel, so it's a library in its own right. Microsoft also includes a version of OpenGL with Windows.

So why not simply use DirectX or OpenGL? Why use a library built on top of another library?
Because OpenGL, for example, is very bare-bones. It only allows a program you write to communicate with graphics hardware through some very rigid functions.
Textures are a common concept in graphics rendering: a texture is a bitmap projected onto a geometric shape. But to OpenGL, a texture isn't a file on disk; it's a bitmap in memory. Converting a file on disk into a bitmap in memory is a function OpenGL does not provide. So you either need to program it yourself, or find another library that can read files into memory in a way that makes the final result compatible with OpenGL.
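
To make that concrete, here's a minimal sketch of the kind of glue code you end up writing, in Python, using PIL to decode the file and PyOpenGL for the upload. The library choice and the details are just my assumptions for illustration, and it presumes a GL context already exists:

    import OpenGL.GL as gl
    from PIL import Image

    def load_texture(path):
        # Decode the file on disk into a bitmap in memory - the part
        # OpenGL won't do for you.
        image = Image.open(path).convert("RGBA")
        width, height = image.size
        pixels = image.tobytes()
        # Hand the bitmap to OpenGL and get a texture handle back.
        texture_id = gl.glGenTextures(1)
        gl.glBindTexture(gl.GL_TEXTURE_2D, texture_id)
        gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR)
        gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR)
        gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, gl.GL_RGBA, width, height,
                        0, gl.GL_RGBA, gl.GL_UNSIGNED_BYTE, pixels)
        return texture_id

That's exactly the sort of plumbing a higher-level library takes off your hands.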

But Mads, you say, surely there are 2D libraries with hardware acceleration other than OpenGL and Direct3D, which are, after all, predominantly 3D libraries?

No, no there are not. Yes, there are 2D libraries, but no, they are not hardware accelerated. Look at the Firefox browser you're probably using to read this blog right now. It uses the Cairo library, which is not hardware accelerated, and it shows. That's why web browsing is so processor intensive compared to how piss-poor the graphics are. The responsiveness of the Firefox web browser is _not_ good enough that the same kind of thing would go over well in a game.

So, Clutter. It's a library that supports text rendering (which is a motherfucking bitch to implement yourself), textures (as in, you can load them from disk and manipulate them using an established framework), and fluid animations, all in an easy-access framework. It's also brand spanking new. And while it's primarily a 2D rendering library, it's perfect for working with many layers and grouped elements, because it has a scenegraph and a z axis.
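
To give a flavour of it, here's a rough sketch of what a minimal pyclutter program might look like. The API names are from my memory of the pre-1.0 Python bindings, so treat them as assumptions rather than gospel:

    import clutter

    stage = clutter.Stage()
    stage.set_size(640, 480)
    stage.set_color(clutter.Color(0, 0, 0, 255))  # black background

    # A texture actor: the bitmap is loaded from disk for you.
    sprite = clutter.Texture()
    sprite.set_from_file("sprite.png")
    sprite.set_position(0, 200)
    stage.add(sprite)

    # A text actor: font rendering comes with the framework.
    label = clutter.Label()
    label.set_text("Hello, Clutter")
    label.set_color(clutter.Color(255, 255, 255, 255))
    stage.add(label)

    # (Animations are done with timelines and behaviours, omitted here.)

    stage.show_all()
    clutter.main()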

Sure, it's meant for designing interfaces, but that doesn't matter; if it's hardware accelerated, performance will probably be up to snuff. It even supports rendering video onto textures, and yes, that can be hardware accelerated as well.

About system requirements:
On eBay, for less than $200, you can get an Eee PC 4G, the lowest-performance Eee. Its Intel GMA 900 graphics chip can hardware accelerate OpenGL 1.4 - which is what Clutter uses. Seeing as Intel owns OpenedHand, the developer behind the Clutter library, this should hardly come as a surprise: the Intel GMA 900 and above almost exclusively make up the graphics hardware in netbooks, and Intel purchased OpenedHand with the intent of using Clutter for rendering the UI on its Moblin platform... which is the operating system Intel is developing to push netbooks and mobile internet devices.

Making Clutter run well on the GMA 900 and above, as a consequence, is likely a high priority at OpenedHand...

This implies that Clutter will be a future-proof rendering option for a 2D game, and it also means that any 2D game written with it will likely perform well even on future mobile internet devices. Oh, and it's a cross-platform library, of course, so games written with it run on Windows, Linux, and Mac OS X alike.

But if running on tiny, low-performance netbooks isn't good enough: the first GeForce card to support OpenGL 1.4 was the GeForce4 Ti 4600, the top-of-the-line model back in February 2002. Every GeForce model from the 5xxx series onward has supported it as well. The same goes for every Radeon from the 9500 up, which is also circa 2002.

It's also likely that most laptops support it, though it's difficult to say, because laptop GPUs are considerably worse documented; but anything NVIDIA and ATI have been shovelling into laptops since 2004 ought to support Clutter, and it's supported on _all_ of Intel's graphics hardware.

Finally, about licensing: Clutter, and all the libraries it uses and works with, is released under the LGPL. This means it's legal to build proprietary software using the library, but that anyone who does so must, ideally, allow people to swap in new versions of the libraries used if they want. I haven't figured out how difficult this aspect might be yet, but I don't foresee any problems.

Monday, April 6, 2009

The Spy Blimp

I might try to draw up some sketches at some point, but for now, I want to document a concept here for posterity's sake.

Spy blimps are a whole world of awesome. I spent an hour or so designing a particularly cool one in my head. Well, to me it's particularly cool.

See, blimps have always been fascinating to me...it's one thing to fly, but it's quite another to remain airborne. A propeller airplane is little more than a bird - it can get about much faster than walking or jumping, but it's still severely constrained. What goes up must come down and all that.

I've never been in one though... the closest thing I've been in was a glider - a plane riding on air currents, very light, requiring no engine... but, while capable of flying quite far, they must keep moving, must maneuver with the wind, must... well, you get the picture. They're inhibited.

Air balloons, I suppose, are kind of cool too... in the same way hitchhiking is. You never know quite where you'll end up, but at least you're getting somewhere. It's, again, inhibited.

Not so with blimps. Blimps can remain airborne for as long as they want, like balloons, but they can also go places, like airplanes... Perfect!

Well, when I say blimp, it's because DARPA has been contracted for something the popular news outlets call a spy blimp... what I really thought up is an awesometastic dirigible. See, a dirigible has a rigid internal structure, and the outer coating is not itself a balloon... rather, it's just there to cover the structure of the craft against the elements. Indeed, large dirigibles can be thought of as assemblies of helium or hydrogen balloons, imbued with other properties (and engines!).

Large dirigibles are powerful, and able to lift quite a lot of weight, but the classic ones used hydrogen as the lifting gas, which is highly flammable. The Hindenburg, greatest airship ever made (...somewhat similar to the Titanic, greatest passenger craft ever made...), famously went down in flames.

...and, at this point, I realize the irony of scribbling an idea down, and the scribbling taking so long that I'm no longer in the mood for it just as I get to the important bits.

But fuck it:

- Use carbon nanotube fibres for the internal balloons, rigid enough to maintain their shape even when pumped empty of air
- Vary balloon displacement by means of nanotube-fibre patches that change length when electricity is applied
- Stick solar panels up top
- Use two dirigibles side by side with a bridge in between
- Allow the dirigibles to alter their outside shape from fairly streamlined to completely boxy in order to be radar invisible, using light tent-like material
- Coat the entire bottom of the airship in blue-grey-black e-paper for live-action visual camouflage
- Build a greenhouse on the bridge section for food
- Keep a stock of new rapid-discharge lithium-ion batteries, like those developed at MIT, for rapid electricity availability at all hours (fast enough to power laser cannon batteries or railguns - stuff with infinite ammo... they'd probably run dry in 4-5 shots, but hey, laser guns are LIGHT)
- Keep the weight ratio low to be able to go higher than all jet-engine missiles, and hopefully out of range of rockets, using superior carbon-nanotube balloon goodness
- Keep giant kites in top compartments for blazing speed on the jet stream (and maybe even for powering wind turbines... the sun may be blotted out by dust, but the winds will never cease, and ultraviolet rays can grow food! mm mm good)

Ok, I think that's all my crazy ideas for now... more, better, later.

Friday, April 3, 2009

Virtual Machines, Python, Clutter and Ubuntu

...I'm slowly making progress while working on my dialogue editor project.

In working with my first open source project, I'm finding that a lot of exceptional effort goes into the system - and that it's really quite elegant.

But it's also a bit of a nuisance... which I can mostly attribute to my own expectations and background. See, I've always considered myself more of a visitor whenever I've used Linux, not an actual, permanent user. I've found many neat features in it, many that I'd increasingly like to combine with my Windows rig - but it's only on this project that I've come to the conclusion that serious programming will often necessitate doing it from a 'nix system.

See, I came across this excellent UI framework called Clutter, developed by OpenedHand. It draws using OpenGL for some parts, but uses native operating system rendering for fonts. And it has bindings such that it can be used from Python.

Most UI libraries will give you one or two of these: an extensive library of premade functionality, excellent text rendering, hardware accelerated graphics rasterisation, and an elegant application programmer interface.

Clutter gives you all four. It's also released under a so-called copyleft license, meaning that code which merely uses Clutter can be proprietary if need be, but any modifications to Clutter itself must be published under the GPL or LGPL.

So it's great and all, but there is a catch. Clutter is a bitch to compile, because it requires a lot of other libraries as binaries... and of course, neither Clutter itself nor the other libraries are available as binaries on Windows.

Enter VMware Player, and a virtual machine - a so-called virtual appliance - with Ubuntu installed.

I managed to get it all up and running in the course of about 4 hours... and that's on Linux.

Like I said, it's a bitch. But now it's all set up and configured on the virtual machine, so work can soon begin... I do need to build a production pipeline soon, though, so that I can do test runs on the native host machine - that's the next thing I'll be doing. From there, well... hopefully I'll link the virtual machine up with a repository and have it run a virtual file server of compiled binaries... and _then_ I'll be able to work on the actual application =P

Thursday, April 2, 2009

The characteristics of data persistence

Data is structured information. Persistence is steadiness: a persistent state is a steady state, and this is particularly true when the word is used in the context of data. Here, I will be addressing persistence with regard to a particular type of data: digital text.

So...Persistent data is data you expect to stay the same.

Well duh, you say. Don't. Let me go over a concrete example of how the implementation of persistence may vary:

Programming languages deal with persistence in different ways; the most elegant are, without a doubt, the functional programming languages, such as Scheme and F#. Functional languages do not accept mutation of data; that is, once data has been entered, it's there forever. You may, at some point, lose the ability to address it, but you'll never be able to change the data at the end of an address.

This may seem like a bigger burden at first than it really is: functional programming languages also allow you to create the (seemingly) same address multiple times, so rather than changing the data at the address FOO, you can create a new address FOO and put the changed data there instead.

So even if you do not allow data at various addresses to mutate, you don't force anyone to be creative in coming up with new meaningful address names; you can just reuse old ones.
This does make it trickier to get at the old one (you need to get at it by writing something akin to "FOO; THE OLD ONE" in the address field), and it does make for confusing programs when the address FOO does not always refer to the same data (remember, all references to the old FOO within your program still point to the old data, because that's where they pointed before you introduced the new FOO).
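
Here's a small illustration of the difference in Python (my language of choice here; FOO itself is hypothetical): rebinding a name versus mutating the data behind it.

    foo = (1, 2, 3)     # FOO points at some immutable data
    old_foo = foo       # hang on to "FOO; THE OLD ONE"

    foo = foo + (4,)    # a brand new tuple under the same name; nothing mutated

    print(old_foo)      # (1, 2, 3) - the old data is untouched
    print(foo)          # (1, 2, 3, 4)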

The point is, however, that in the context of any advanced system, it's perfectly plausible to never erase anything, and to never change anything that anybody else has been working on. Contrast this with Java, a complex programming language for modeling complex systems.

Here, objects are passed by reference; that is, if a Salesman object passes a Car object reference to a Customer object, and the Customer object then makes a change (a mutation) to the Car object's Wheeltype attribute, that change will also be apparent to the Salesman if he still has a reference to the Car object.

This behaviour can be destructive if the Salesman object was counting on the contents of the Wheeltype attribute in the Car object staying the same: if persistence was somehow important, then in this case it has been broken.
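
The same thing is easy to demonstrate in Python; the class and attribute names below are just illustrative:

    class Car:
        def __init__(self, wheeltype):
            self.wheeltype = wheeltype

    car = Car("alloy")           # the Salesman's Car object
    customers_car = car          # the Customer receives a reference, not a copy

    customers_car.wheeltype = "steel"   # the Customer mutates the shared object

    print(car.wheeltype)         # prints "steel" - the Salesman sees it too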

Alright, now that I've given some concrete examples of persistence, so that you hopefully have a grasp of why it matters at all, let's look at the how and why of the whole thing:

There are two defining questions to data persistence:

- Why persist?
- How do we persist?

The answer to the first one is: Because you might want to use it later.
The answer to the second one is: By making sure that we are able to address the data at the point where we want to use it.

Understand, then, that the relationship between not persisting and being able to address data is crucial: if you have 100 data articles, and you choose to persist only 50 of them, then you only have to keep track of 50 data addresses. Perhaps that doesn't seem so different from keeping track of 100 data addresses, but try to think of the receipts you get after purchasing groceries in place of data articles. If you save all 100 of those rather than the 50 most important ones, then when the time comes to dig out the receipt for your new television set, you'll have to rummage through twice as many articles before coming upon it.

Not persisting the grocery receipts - or the data addresses - makes it quicker and easier to discern the useful ones from the useless ones; it makes it quicker and easier to address that which you are most likely to want to address, and as a consequence, it makes your collection of data more powerful and user friendly.

This is all well and good in a single-user environment, but in a multi-user environment, things tend to change. What's important to one person can seem useless to another, and suddenly an important article is missing from the collection.

The answer is, of course, that deletion of any kind is a very primitive type of ordering in a persistent data set. It's really just hiding an article very, very well; rarely does it actually pass permanently out of existence, it just becomes so hard to get at that it's no longer worth it to retrieve it. It probably only appeals to humans because we can safely forget about things we delete, just like things we throw out.

The same could be accomplished with a less drastic hide operation than deletion - instead of throwing the grocery receipts out, you could toss them all in an old shoebox, so that they're at least still there, even if hidden behind a decidedly user-hostile interface. But even the shoebox will represent a useless article to some, so it might get deleted eventually... unless it's impossible to delete, as it would be if it were an object in a functional programming language.
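
As a toy sketch of the idea in Python (the names are mine, purely for illustration): never remove an article, only flag it, so it can always be addressed again later.

    articles = {}   # address -> (data, hidden flag)

    def persist(address, data):
        articles[address] = (data, False)

    def hide(address):
        # The shoebox: out of the way, but never gone.
        data, _ = articles[address]
        articles[address] = (data, True)

    def visible():
        # The "powerful" view: only what's likely to be useful.
        return {addr: data for addr, (data, hidden) in articles.items() if not hidden}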

So now that we understand that the goal of persistence is the ability to use data later, and that deletion is just a type of ordering that makes us more _likely_ to use a collection at all (because it becomes more powerful), we can toss the idea of deletion out the window for good... at least when it comes to small data.

Simply put, there are other types of maintenance that yield better results than deletion, often in much less time, because there's far less finality to them (i.e. they make the collection more powerful, faster), so as long as size is not a factor, there should be no deletion. Since we're dealing with digital text in this article, size is, verifiably, not a factor.

This makes for a conclusion to this article: everything should be persisted. Everything that is likely to be useful should be systematically ordered. And finally, efficient algorithms for ordering and retrieval are essential to maintaining the power of a collection.