Friday, December 26, 2008

Progress

This is a small status update on the various things I want to write about.

Firstly, there's the concurrent savegame system pattern.

That's currently on hold; it's an interesting subject and I want to dive into it further, but at the moment I'm not well-versed enough in concurrency to model it properly. I eventually want to use a concurrency modelling language to build a real-world model of it - either coloured Petri nets or SPIN:

http://en.wikipedia.org/wiki/SPIN_model_checker

I've drawn up a preliminary diagram, but I need to work through the rigorous models in order to fully understand the problem and how best to tackle it.

Secondly, there's the non-linear story development tool. I'm inclined to call it Texton. I've been discussing with my good friend Aksel whether I should implement it in C# - which I've never used before - or in Java. C# seems far better suited to the kind of enterprise-level software it's turning into - and it comes with Windows Presentation Foundation, and if I program it in that, I'll actually gain C# experience.

Java, on the other hand, would mean using Java Swing for the interface, which is a lot less delicious than Windows Presentation Foundation. Java would be multi-platform, but that's not really a concern at this point. The biggest problem with not using Java is probably that Java is home to staggering amounts of open source software and frameworks.

Either way, that is a bit of a concern.

Thirdly, I've yet to devise an appropriate storage scheme for the tool. I'm inclined to use XML-based object serialization backed by an SQL database for all storage, since it effectively decouples all storage and concurrency concerns and allows for remote syncing of data. The problem is that I'd have to write an SQL database adapter at some point. On the plus side, I can start out relying simply on local XML object serialization.

On the other hand, I've been very enticed by the prospect of using a remote object database, since it allows for transparent development and very clean code.
The problem is that such a database employs a model-view-controller system of its own, where the view into it is often locked very tightly to a specific language; a Java object DB, for example, communicates through a Java interface in the client application. So you'd have to find some sort of interoperable object DB in order not to get your data locked into a specific piece of software; but an interoperable DB - unless it's interoperable because there's a host of client frameworks in different languages - would lose the elegance.

In the end, I think I'll go with a highly modular approach, where data can be saved and stored by way of the adapter pattern. That is, I write my own client in either C# or Java that relies on an adapter interface for loading and unloading data. How that adapter handles data persistence is then variable, but the main application can effectively treat the adapter as the model it's working on.

If necessary, an adapter could easily be fitted to a remote object database for live sync, and adapters can be written for both SQL databases and XML serialization. Given the modular approach, two adapters can also be chained up to translate between two different formats relatively easily. Finally, adapters can be written that translate serialized or SQL-stored objects into new ones with appended fields and properties, allowing the data structure to be patched without corrupting the data. Hopefully.
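To make the adapter idea concrete, here's a rough Java sketch (Java being one of the two candidate languages); every name and signature here is invented for illustration, not settled design:

```java
// Sketch of the adapter approach: the application talks to storage only
// through this interface, so backends (local XML files, an SQL database,
// a remote object DB) can be swapped or chained without touching the app.
import java.util.HashMap;
import java.util.Map;

interface StorageAdapter {
    void store(String key, String serialized); // persist one serialized object
    String load(String key);                   // retrieve it, or null if absent
}

// Simplest backend: in-memory, standing in for local XML serialization.
class InMemoryAdapter implements StorageAdapter {
    private final Map<String, String> data = new HashMap<>();
    public void store(String key, String serialized) { data.put(key, serialized); }
    public String load(String key) { return data.get(key); }
}

// Two adapters chained: writes go to both, reads fall through -
// roughly the "translate between two formats" idea.
class ChainedAdapter implements StorageAdapter {
    private final StorageAdapter primary;
    private final StorageAdapter secondary;
    ChainedAdapter(StorageAdapter primary, StorageAdapter secondary) {
        this.primary = primary;
        this.secondary = secondary;
    }
    public void store(String key, String serialized) {
        primary.store(key, serialized);
        secondary.store(key, serialized);
    }
    public String load(String key) {
        String value = primary.load(key);
        return value != null ? value : secondary.load(key);
    }
}
```

The point of the sketch is that the application only ever sees the StorageAdapter interface; whether the bytes end up in an XML file, an SQL database, or a remote object DB is the adapter's business, and chaining two adapters gives the format-translation trick almost for free.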

Fourth project is my computer game theory course exam. I need to work on it relatively vigorously, so I'll have to put the other projects on hold for now. Hopefully, the next time I post on that, it'll be concerned more or less directly with this exam.

Friday, December 12, 2008

Blog consistency

I sometimes feel the need to argue why I'm doing things a certain way. Honestly, it's not for posterity, or to make anything clear to anybody else; it's mainly for me.

When I write things down, the syntax and semantics of what I write help me express something I feel. On occasion, those feelings are misguided. That's a pretty way of saying that I'm sometimes wrong, in spite of presenting a thorough argument.

I know this because it's happened in the past, and I'm not going about things terribly differently these days, so it'll happen again.

I'm not always going to go back and say which part of an argument was right or wrong; in the end, I write this for myself, and if I feel that making a wrong argument ended up making me smarter, but expressing how that happened doesn't appeal to me, chances are I won't do it.

Sometimes that will mean my blog becomes inconsistent between entries, and on occasion even within individual entries. This is a choice. I'd rather separate entries with clean breaks, even if that means the entries don't fit together, if it in turn means the entries themselves grow more internally consistent. And I'd rather spend my time writing about new ideas I think might be right than about old ideas of my own I think might be wrong.

But, _for_ posterity, I will be making sure that the results I derive will be easy to seek out, handle, and utilize. So the good stuff, I will be certain to reexamine later. The blog is a part of the process towards arriving at those things, so those are what matter.

With that in mind, I hope to soon make a blog entry tying together the game concurrency pattern and the save game pattern.

Tuesday, December 2, 2008

Designing savegame systems

Since we've previously established that the design of the save game system impacts gameplay, even if it requires specific behaviors by the player to do so, I think it's in order to discuss different behaviors in relation to different save game systems. Before we get that far, though, we need to establish just how difficult the subject is to work with, how it can possibly fit within my pattern model, and generally how we should proceed from there.

I hope to narrow the subject towards a somewhat monolithic conclusion, but I fear it will be far more divergent than any of the other patterns I've discussed. I don't think it'll turn into a nice, uniform set of rules, but rather several sets of conditional rules that can be followed depending on what exactly you're trying to achieve. The key to the difficulty, in my view, is the heterogeneity of player behavior. Perhaps I should embark on explaining why exactly save game systems are different from other gameplay mechanics, and perhaps more importantly, how this difference is reflected in the way players tend to perceive save game systems.

Save game systems are technically different from everything else because they can be invoked at any time during the game, and because they completely change the state of the game. None of the other meta-features have this effect. Changing your options and configuration does not, as a general rule, affect the play experience - though there are a few exceptions to this rule: if you play a timed game against a chess computer, you can underclock the CPU driving it to make the experience easier; a faster mouse and monitor for an FPS game, or a game with quick-time events, can improve your effective reaction time by as much as a tenth of a second; and of course there are difficulty settings.
And then there's the save/load feature, the focus of this blog, which alters the game's state completely, at any time. Unless there isn't one, in which case it simply impacts the player's life instead, if he needs to leave the game. Either way, once invoked - either by shutting the game down because there is no save feature, or by saving because there is one - the game will be altered when the player returns: either it allows the player to restart from that point over and over, regardless of what happens, or it forces him to start from the beginning again.
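To pin down what "completely change the state of the game" means technically: a save feature is essentially a snapshot of the full game state, restorable at any moment. A minimal Java sketch, with an invented toy state (not from any particular game):

```java
// Toy illustration of why save/load is technically special: the entire
// game state can be snapshotted and restored wholesale, at any moment.
// (Invented structure - not from any particular game.)
class GameState {
    int level;
    int health;
    GameState(int level, int health) { this.level = level; this.health = health; }
    GameState copy() { return new GameState(level, health); }
}

class SaveSlot {
    private GameState snapshot;
    boolean isEmpty() { return snapshot == null; }
    void save(GameState current) { snapshot = current.copy(); } // invoked at any time
    GameState load() { return snapshot.copy(); }                // undoes everything since
}
```

No other meta-feature hands the player this kind of wholesale time travel.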

Obviously, it's very difficult to anticipate the invocation of the feature, since its invocation may not relate directly to anything within the game as a product. In fact, it's very normal not to respond to the invocation at all, for that very reason, no matter where or how it occurs.
This relies on a very specific - and by now completely ingrained - suspension of disbelief amongst players.
Anecdotally, disguised as anything but a save-game system, this feature would be looked upon completely differently; a common system could be perceived as an undo button you can press if you anticipated, ahead of time, that you might need to press it. The omnipresence of this feature - this ever-present choice - within the game is at the heart of why it is very difficult to work with. But the bulk of the trouble in dealing with it in any innovative way is the vague notion that the feature is to be treated as special and somehow "outside" the game - a notion which has by now festered and taken root in the collective consciousness of players, even though the feature can clearly alter the experience very considerably.

Choosing whether or not to cooperate with the notion does not look like a difficult choice on the surface, as the advantages are many and ample. Many players will appreciate a traditional save game feature because it makes it easier to fit their gaming into their lives. These will largely be casual gamers, and if anybody uses the feature to improve their abilities within the game, it doesn't really concern them, because those who benefit the most from a save-at-any-time system are likely so casual that they wouldn't bother. The most hardcore gamers, of course, will likely choose to limit themselves from abusing the system because they want to play the game the way it was meant to be played; somehow they feel that not using the save-game system to their advantage is more "real".

But the downside is that, somewhere in between, there will be a demographic that will likely use it to alter their game state in various ways, and its incorporation will alter their play experience significantly. For this reason, what to do is not obvious, and because this demographic is difficult to pinpoint with any accuracy, it's difficult to say whether or not it is safe to ignore. If it comprises 75% of your players, their reasoning for using the feature to min-max their chances of doing well should be of no concern. They're your target audience, and you should try to maximize the fun they have; that's why they pay you. Not to fuel your artistic expression, but because they want to have fun. Unless you want to deny them that, you must try to cater to them.

But then... is it really worth maximizing their fun by doing something that will probably also inconvenience 30% of your players greatly? The choice is bipolar. To me, though, the conclusion of this discussion is clear: if you want to design good, solid games, you'll put care into making the right decision about which save game system you choose, and you'll take the fun of your audience, the current perception of the save game feature, and the non-linearity of your game into account.

Now that I feel I've argued how to best approach the problem, I want to go over some different save game systems; I'll probably only blog about one today.

The first system is the no-save.

This is often used in arcade shmup games; it's impossible to save the state of the game, often because arcade games are simple, easy to play, and revolve around learning level patterns in order to optimize for and anticipate events that always occur in the same manner. Also, player death is directly what earns the arcades money, so if the player could skip over difficult sections after completing them once, it would be a less efficient way of getting the player's cash.
Another advantage is that playing the same levels over and over again is how you get good at many shmup games - there is no arbitrary skillset, only how good you are at the specific levels. The no-save system, then, lines up perfectly with what the game is trying to do: make the player feel as if he's constantly improving, getting better. But it comes with many drawbacks too, one of them concerning session length.

It seems that games requiring more than 20 hours of continuous playtime would be impossible for any single player to complete, as that appears to be the point where most people get sick without rest. There have been reports of heart attacks and similar ailments setting in after 36-50 hours of continuous play, for instance.

So there is a theoretical limit to the no-save system, as well as a presumably much lower practical limit. An estimate for the practical limit could be as low as perhaps 2-3 hours.

This is where player behavior comes into play, however. Some players will only accept a session length of 10 minutes or less in a no-save game, whereas others will be willing to accept much longer sessions without the ability to save.

This is perhaps one of the largest problems I've encountered so far in my work on game design patterns. Which demographic groups are able to accept which play-session lengths, and which inconveniences? Does a particular type of content, which attracts a particular demographic group, need to be paired with a particular kind of save game system in order to optimize the design? And what exactly should your demographic distribution look like before you cater to the convenience of one part, or the fun of another?

Naturally, my greatest interest lies with one particular type of game: those with adaptive narratives and non-linear storylines. Sadly, those do not even fall within any one of the commonly accepted genres - and they make up only a small minority of the genres they do fall within. So there's no real chance of finding empirical evidence one way or the other.

Instead, I have to approach the problem using only anecdotal evidence collected from various demographic groups. And in order to really capture the possibilities, I'll of course have to go in depth with examples from different genres and the kinds of consequences game systems can have.

Oy vey. This pattern will take a while.

Monday, December 1, 2008

The "save" pattern

Many games implement some manner of saving and loading progress and game state. I've been thinking for a while about just how these various features affect the gameplay experience.

Some examples of the pattern are in order:

- A friend of mine played Max Payne with me many years ago. In my games, Max, the titular character, was often limping around on low health, dodging bullets and dodging death with them. In my friend's game, Max was always stacked with five bottles of painkillers (the game's health pack) - the maximum number - and he was always at full health.

The difference was, he always loaded if he got hurt and he had the clear sense that he'd be able to do better. He would replay some encounters 3-4 times on occasion, till he got it "right". I only ever loaded if I died.

The game had adaptive AI: it would leave a lot of painkillers around for the player if he was hurt, and few if he had a good stash and was healthy.
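For illustration, that kind of adaptive placement might look something like the following Java sketch; the names and the formula are my own guess at the general idea, not Max Payne's actual logic:

```java
// A guess at the shape of the adaptive balancing described above: the
// number of painkillers placed ahead scales with how hurt and how poorly
// stocked the player is. Names and formula are invented, not Max Payne's code.
class PainkillerPlacer {
    static final int MAX_BOTTLES = 5;

    // health in 0..100, bottles carried in 0..MAX_BOTTLES
    static int bottlesToPlace(int health, int bottles) {
        int need = (100 - health) / 25        // up to 4 for missing health
                 + (MAX_BOTTLES - bottles);   // up to 5 for an empty stash
        return Math.min(MAX_BOTTLES, Math.max(0, need));
    }
}
```

A healthy, fully stocked player (my friend, after his reloads) gets nothing; a limping, empty-handed player (me) gets the area stuffed with bottles - which is exactly how our two play styles diverged.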

It should be painfully obvious that our different uses of the save game system made for different gameplay. I was consistently running the risk of bumping into an encounter that was too difficult for my hit-point level, forcing me to reload over and over at the same place, 10 or 20 times. He was constantly reloading no matter what, but rarely many times at the same place. It would be wrong to say that he wasn't challenged, because clearly it is more difficult to handle every situation perfectly than only those where Max is low on health. But he always knew he had leeway if it came down to it, whereas I always knew I didn't. Quite possibly I even had easier encounters, and I definitely had more health packs to pick up than he did, so in some ways I was challenged less.

The point is not that there is a right and a wrong way to play a game. The point is that if you give a utility - such as saving and reloading games on demand at any time - to the player, you may greatly impact his play experience by doing so. Certainly, you could argue that how responsibly the player uses it is his choice, not yours; but if you know that whatever save game system you make will affect 99% of the players playing your game, and that 30% of those people will use it to play the game in a min-maxing fashion, your choice of system could make the difference between the game being called good or bad by many players - so responsibility doesn't even come into it.

The save game system is therefore part of the game as a product. The application of the save pattern, then, is concerned with a very product-oriented way of perceiving the game. It relies far more on outside context than many of the other patterns I have described thus far.

As such, boiling it down to rules will take a lot of work, and will likely relate a lot more to the consumer-producer analysis of the game as a product, and the context the consumer provides, rather than the contextual constructs which we, as game developers, provide within the internals of the game.

It'll be fun to write more about this in the future =]

Sunday, November 30, 2008

Converting the decision pattern into rules

The summary of the previous two articles I wrote on the decision pattern goes as follows:

If you hide the consequences of the player's actions from immediate view, the player won't be tempted to choose both actions via savepoints and savegames. But if he chooses to reload based on allusions made post-decision, and these allusions differ, he will be able to verify that his choice likely matters - just not right now.

If you do not allude, show the consequences immediately, and the consequences are shallow, then allowing the player to try both will break the game - the player will no longer feel he has any influence, and he cannot consider his choices, past and future, actual gameplay; he will instead consider them out of his hands, and he will cease trying to influence them.

So: When implementing decisions, in a game that allows reloading, these rules apply:

- Always allude to consequences both before and after the player makes a decision.
- The closer to the action you show the consequences, the larger and more impactful the consequences must be.

The second rule also makes sense conversely; if we presume that a decision is supposed to have a big impact on a story, and the full effects of it are something the player might be unhappy with, he should not be "punished" with having to do over very much.
The psychology here would say that the player should be conditioned to choose his actions with great care, and that punishment would be effective; but if he regrets a decision enough to discard his current thread and wants to do it over, he is _already_ as into the story as you could want. He cares about the story and the characters. You're not only punishing him for making the wrong decision, but also for caring. Reloading should consequently be avoided whenever possible. As such, the player should generally not experience consequences to his actions that he has a high probability of regretting. And if you must take this route, make a do-over easy.

A couple of examples are in order: in Ultima X you encounter a deathly ill orphaned girl, and you're offered the choice of whether or not to euthanize her. No matter what you choose, the game's villain taunts you - either over the fact that you now have the blood of an eight-year-old girl on your hands, or over the fact that you left her to a slow, grueling and cruel death over the next couple of hours.

No matter what the player chooses, the adverse reaction, the long taunting session, and the guilt will likely make him try the other option if he cares about the story and his influence. When he finds out that negative consequences are unavoidable, and that he has only been presented with wrong choices, at the very least he will be glad that he can do over easily. But even then, he is still being punished for caring about the game world by having to use a lousy interface to explore the consequences - all because some developer somewhere thought it would be a good idea to punish the player with consequences he will likely reject no matter what decision he makes.

In the end, I played through the above sequence four times: once euthanizing her, once saving her, and once euthanizing her again - and then, after spending 15 minutes on GameFAQs, I concluded that my first decision was what I really wanted to do, and euthanized her _again_.

- The player must generally not experience consequences he will regret.
- Consequences that the player cannot escape must not be attributed to his choices if he is likely to reject them and is able to figure out that there is no way around them.

Or, all in one place, the decision pattern in summary:

When implementing a decision, follow these guidelines:

- Always allude to consequences both before and after the player makes a decision. Make the post-decision allusions differ depending on the choice made. Exceptions can be made if, and only if, you can be certain that the player won't reject the consequences in favor of a do-over.
- The closer to the action you show its consequences, the larger and more impactful the consequences must be. Long stretches are therefore cheaper to design and implement with the same impact. You can even allude to something that never happens, if it's the exception rather than the rule.
- For big consequences, the player must not receive punishment for attempting to figure out all possible outcomes. You would be punishing him for caring.
- The player must generally not experience consequences he will regret to such a degree that he would be willing to go through anything more than a quick and painless do-over to rectify them.
- Do-overs should happen out of caring for the world, not out of frustration with the consequences.
- Consequences that the player cannot escape must not be attributed to his choices if he is likely to reject them and is able to figure out that there is no way around them. There must always be a "right" choice, so to speak.

Saturday, November 22, 2008

Storyline concept

I need to dump this concept somewhere:

As a method for interstellar travel - in an age where human science has conquered star-faring propulsion technology, but neither the human lifespan nor the ability to store a computerized mind more powerful than a cat's in anything smaller than the volume of an elephant - humans develop a mechanized shell with the ability to grow a humanoid host.

Designed as a cyborg, the shell is two stories tall, with a life support chamber within that can grow and support a human being. If the human host is ever killed, dies of old age, leaves the shell, or is terminated, a new one can be grown from a storage bank of zygote clones with DNA identical to the former host's. The shell nurtures and raises the human within the chamber, utilizing hormonal growth regulation, optimized sleep cycle regulation, and databanks containing interactive virtual reality recordings that allow the human to be raised - and eventually come to understand its position - based on the lives of the previous hosts.

Any space travel across long distances means the death, rebirth, and re-raising of the host upon closing in on the destination - effectively leaving the shell as a whole immortal, but always alone, and suffering from varying degrees of amnesia depending on its development.

I'm not quite sure what this concept means, but I thought it was cool when I came up with it....

The decision pattern, some refinements

In my last post, I described the decision pattern as something pertaining to the rest of the play experience within the game. That's a pretty loose definition, so I'll try to narrow it down a little further: the consequences of the decision should influence either the main plot or a diverse number of distinct and unrelated subplots, such that the outcome differs from that of making a different decision.

Implementing decisions is one of the greatest ways to give the player the impression that his performance matters - if he is able to recognize that his decisions actually make a difference. In other words, if the player does not realize that judgment is being passed on him, he will not modify his manner of playing, and thinking, accordingly, and his experience will arguably be shallower.

This is perhaps one of the most problematic aspects of the decision pattern; since a lot of games do not pass permanent judgement, and choose not to attach significant consequences to many decisions, a game which utilizes the decision pattern will necessarily need to make the player understand that it does so. That is to say, one can safely assume that the player, at the onset, suspects the game is very linear, and that he will suspect any choices of being unimportant.
It can be quite difficult to dispel this suspicion, which perhaps calls for a special "tutorial" section to help the player understand the dynamics of the game.

It's worth noting that once the player has become convinced that judgement will be passed on him, his manner of thinking will be modified from then on. So, theoretically, you could have very intricate ways of passing judgement at the beginning of the game, and later introduce choices upon which the same judgement always falls, without the player realizing it - as long as the judgement is a reasonable response to all the choices.
This utility is exceptionally conniving and very powerful as long as the player is fooled, since you can make the player think he's being responded to without actually responding to him. In other words, you get the same effect that non-linear storytelling offers without actually having to make the game particularly non-linear.

There is a huge problem with this approach however: The player may be inclined to reload the game at an earlier point and try again if he is unhappy with an outcome. If he realizes that you're not actually judging him anymore, he will likely immediately modify his behavior and be inherently suspicious of the implementation of decisions again, a suspicion that may not go away as easily a second time around; fool me once, shame on you, fool me twice, shame on me, and all that.

The trick is to make the player see that there is an immediate difference; there has to be a distinction deep enough that the player thinks it may matter in the long run, even after playing through the same five minutes of the game to try out different choices. If the player can tell that the choice doesn't even matter immediately afterwards, it won't matter much to him, because it doesn't matter much to the game.

A good idea is to implement distinct rewards and punishments for different choices in the immediate term; at the very least, that will make the player care out of greed. But additionally, it's important to have a large number of long threads; this instills in the player the sense that not only is there an immediate difference, there may well be long-term differences as well. In fact, foreshadowing can be a powerful tool in this regard. If a character openly tells the player that he hates him in one branch, and not in another, then the player will expect that there may be consequences to being hated. There don't have to consistently be consequences, as long as consequences pop up later often enough that the player starts trying to act on the idea that there may be some.

Once the player is in that mindset, a conversation in which a character decides to hate the player will have an impact. It doesn't actually matter whether the player always, or never, has a future confrontation about something or other; all that matters is that if the player does have a confrontation, it gets tied to the conversation where the character admitted (or didn't admit) to hating him, such that the player is able to tie the idea of judgement to his earlier action.

If the confrontation and the conversation are spaced far enough apart, the player won't be inclined to reload to check the difference; he may speculate that, if he didn't make the character hate him, the character would otherwise have been more confrontational - perhaps would have brought a lot of goons along; or, if he did make the character hate him, he may speculate that the confrontation wasn't inevitable. Either way, it's possible to create a tangible feeling of judgement without actually doing a lot of work to implement that judgement.
On a second playthrough, such things will be revealed to the player - but if you've actually made a game good enough to warrant two playthroughs, that's good enough, so you should just take that and be glad. At that point, you can live with the player deciding not to play anymore; he'll still, undoubtedly, recommend the game far and wide.
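The whole cheap-judgement trick boils down to recording the choice as a story flag and branching on it much later, while both branches funnel into the same scene. A minimal Java sketch, with invented names and dialogue lines:

```java
// The cheap-judgement trick as code: the decision is stored as a flag,
// and a much later scene merely branches its framing on that flag.
// Both branches reach the same confrontation. (Invented names and lines.)
import java.util.HashSet;
import java.util.Set;

class StoryFlags {
    private final Set<String> flags = new HashSet<>();
    void set(String flag) { flags.add(flag); }
    boolean has(String flag) { return flags.contains(flag); }
}

class Confrontation {
    // Same scene either way; only the opening line ties it back to the
    // earlier conversation, giving the player a feeling of judgement.
    static String openingLine(StoryFlags flags) {
        return flags.has("rival_hates_player")
            ? "I told you I'd make you pay for that."
            : "Nothing personal. This is just business.";
    }
}
```

One flag and one conditional line buy the impression of a branching story, which is precisely why the technique is so cheap.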

Friday, November 21, 2008

The decision pattern

One of the things that a fair amount of interesting games do is provide you with a meaningful choice you can't go back on that will affect the entirety of the rest of the game.
I'd like to talk a little about the consequences of this type of design, and what it entails.

The choice is often placed either at the beginning or at the end, and only rarely in the middle. It makes "sense" to place it at the end, because a two-branched choice placed three hours before the end only necessitates about six hours of gameplay, of which three are the price of the choice.

A choice placed at the beginning, on the other hand, is arguably either very expensive to design, or it doesn't diversify the subsequent experience much from the other options - only a few games will place a choice before the player 40 hours before the game ends and effectively pay 40 additional hours of gameplay as the price.

Various strategies can be used to lessen the price. Some go so far as to rely on the player forgiving the lack of consequences of most choices because it would be too time-consuming to implement them. It's a completely viable strategy, since players' expectations are measured by their perception of viability and quality. If they find that implementing proper consequences would have been detrimental, they may be willing to accept the compromise because they cannot think of a better one.

Applying the decision pattern, then, is a question of weighing returns. A lot of older games provide much more detailed responsiveness to choices than many state-of-the-art modern ones. The newer games gamble that gamers are very forgiving; since the demographics of gamers have changed over the past decades, it may well be a sound gamble. All the same, I do not feel that blindly submitting to it is the optimal choice.

A good example of the application of the decision pattern is the choice of whether or not you save Paul Denton in Deus Ex. A bad example would be the choice of whether to be bad or good at the end of Deus Ex, were it not for the fact that the choice was placed right at the very end.

I'll go into the intricacies of the decision pattern in a later post, and narrow down how to apply it correctly.

Monday, November 17, 2008

From patterns to rules: The McGuffin Pattern

I wrote earlier about what I called the McGuffin Pattern.

The basic idea was that if you drive your quest with a McGuffin - like a unique, stateless item - the player will be able to figure out the solution to the quest at its onset. The player doesn't, as a consequence, have to solve a problem; he simply has to provide a specific, choiceless solution. Now, this can work if you want to imply that the player must do a very specific job within very specific parameters. But the player won't own the solution; he will be following orders. If you want the player to commit, you must make him own his choices, and therefore his solutions - at the very least in the general case. The McGuffin pattern hinders that.

Within gaming, the McGuffin pattern is without question an anti-pattern, so it should not be followed. It should be known, because it should be avoided. And what's more, it doesn't simply concern McGuffins - characters or fairly abstract concepts can provide the player with the solution at the onset too, and on the other hand, McGuffins can be constructed to avoid the McGuffin pattern. Yes, that makes it sound like I just coined a misnomer, and I kinda sorta did, but I didn't, because the pattern emerges through the brainless use of McGuffins 99% of the time. The generalization simply says: if you abstracted the McGuffin out of the equation somehow, while keeping the solution within the definition of the quest, you didn't solve anything at all, dummy.

Now, as a quick interlude: I like rules. Rules partition everything into right and wrong, that which follows rules and that which doesn't. Any good pattern can be made precise and easily applicable through the use of rules. So, what rule can be taken away from this?

The baseline rule is: do not present the player with unfulfilled solutions. Present him with problems.

The fine print should add: and perhaps hints, but never reveal to him the number of possible solutions to the quest at the onset. He needs to figure out the solution. If you want to optimize the player's experience so open quests don't confuse him, be certain to inform the player of his progression through 2nd person narrative; but even then, only log what he has done and discovered in a clean-cut, unambiguous fashion. And here's the cigar: try to always make hints ambiguous in their certainty: make them suggestions rather than plans.

The McGuffin can almost always be avoided, if desired. Say the quest is to look for the holy grail. Firstly, try to make the grail an ambiguous item. Now, generally there can be no more than one holy grail, but that's because the holy grail doesn't have a purpose. Suppose it did have a purpose; suppose it gave eternal life if you drink from it. Then what the player ostensibly needs to search for is something that gives eternal life.

At times, you cannot make the item itself ambiguous, though; perhaps it's a holy grail without a purpose, perhaps it's a smoking gun. But the maxim holds: if the item is unambiguous, its location, or its validity, must be ambiguous at the onset. The player must make a choice on how to make the solution less ambiguous. Perhaps it turns out that there is no impostor, that the location is obvious, or that the task can really only be completed in one way. But narrowing this down should be a process, and not revealed through the initiating dialogue. Hitman is perhaps the most excellent example; there's typically a myriad of puzzles built into each level, and many have multiple solutions. But there is only one goal. The player owns the hit, however, because he owns the process; he was presented with a problem, rather than a solution.

To reiterate the first rule of non-linear narratives:
- Do not present the player with unfulfilled solutions. Present him with problems.
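The rule can be made concrete with a small sketch: define a quest by a goal condition over the world state, rather than by a unique item to fetch. All the names below (`Quest`, `vault_has_water`) are hypothetical illustrations, not taken from any real engine.

```python
# A sketch of the rule above: define a quest by its goal condition,
# not by a unique item the player must deliver. Hypothetical names only.

class Quest:
    def __init__(self, description, goal):
        self.description = description  # the problem, as shown to the player
        self.goal = goal                # predicate over world state

    def is_complete(self, world):
        return self.goal(world)

# The vault needs water -- the goal predicate doesn't care *how*.
water_quest = Quest(
    "The vault's water supply is failing. Do something about it.",
    goal=lambda world: world.get("vault_has_water", False),
)

# Several independent solution paths can flip the same flag:
world = {}
world["vault_has_water"] = True   # e.g. found a chip, OR hired merchants
assert water_quest.is_complete(world)
```

The point is that the quest text states the problem, while any number of independent actions can satisfy the goal predicate; the quest definition never names a specific solution.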

torsdag den 13. november 2008

The McGuffin Pattern

http://en.wikipedia.org/wiki/Mcguffin

I'm going to go out on a limb here and define my own terminology for some patterns I believe are occurring within non-linear narratives. Terms are useless in the broader context; no one will ever come to know and use mine, to be sure. But I had to put some sort of memorable title on this, and the various other patterns that are emergent within non-linear narratives, so my own terms will have to do until I find a more official theory that agrees with my own.

And really, that might sound exceptionally lazy, but within non-linear literary theory, very little context is available. Indeed, the best typology I have come upon is from '95 and has 10 hits on Google for most of the terms it defines. And even that is very basic; it simply defines names with meaningful etymology for some of the most basic constructs within this area. I'm speaking of Espen Aarseth's essay called something to the effect of "a typology of"... and I forget the rest. It's in my bookcase, and if you google Aarseth I'm sure you'll come upon it... eventually.

At any rate, the McGuffin Pattern. What am I on about?
In books and novels, a McGuffin is a unique item that is intrinsically linked to the plot in some manner; in fact, the whole point is that it's linked to the plot, and has been since its conception. It's such a fundamental piece, by definition, that the plot designer invented it to help him rationalize something he wanted to do. For example, it is the Holy Grail of Arthurian legend, or the Ring in The Lord of the Rings.

In computer games, it is an item that allows the game designer to fashion his quest in a manner that makes sense to him. To follow the McGuffin pattern in game design, though, is to make major parts of the gameplay experience linear. Within non-linear narrative theory, then, it imposes a linear narrative on an experience that is, at a lower level, non-linear. Many will see it as a necessary evil, probably, and I tend to agree, if you're OK with imposing a linear narrative on a non-linear experience. If that somehow makes more sense than the alternative, then it's obviously the sensible thing to do.
But let me step back a bit and explain why the application of a McGuffin makes the game experience linear:
Fallout 1 and 2 are both driven by the search for specific items at the outset, or so it seems. In Fallout it's the water chip; in Fallout 2 it's the Garden of Eden Creation Kit. But these are, in fact, not McGuffins. At least, the water chip isn't; what you're really searching for is a clean, permanent source of water for the vault, because the water chip broke. And in fact, there is more than one solution to this - you can acquire a water chip from a number of sources (though that number is not sufficiently high, in my opinion) - and you can, similarly, get water merchants to bring water to the vault.
To be a McGuffin, it would have to be a unique item. Had it been a unique item, the game would necessitate that you perform certain specific actions on any given playthrough.
In this fashion, the McGuffin pattern necessitates that the player commit to certain very specific actions, and a very specific narrative, whenever he accepts any sort of task that involves a McGuffin.
So essentially, the McGuffin Pattern is an anti-pattern - when it is utilized within RPGs and similar games, you lock down the narrative, and this is the kicker: you do so because you, as a writer, want to do something specific with the narrative. You take away the player's liberty because you feel you have something better than liberty of narrative to offer him.

Now it may be that you do, but you have to realize that you make the player into a passive observer rather than an active participant.

So how do you avoid that? Don't use unique items. And try not to make characters the goals of quests. Ideally? Present the player with open-ended problems rather than solutions that need to be filled in.

tirsdag den 11. november 2008

Why writing and programming are fundamentally different

A lot of people spend time writing stuff on their computers. What and how they write depends almost entirely upon what their goal is first, and their personal manner of going about things second.
Take this blog. My goal is mainly to express various essays that are bubbling and steaming in my head and are just begging to become a little more concrete, and a little less elusive. Perhaps to help me cut the crap from the cake with a little bit of written analysis afterwards. My goal is not, on the other hand, to explicitly argue why I believe all my presumptions to be true.
Since my readership is likely to be exceptionally limited - I'm fully expecting it'll only be me - it seems like I would by default agree with my own presumptions, if they're worth agreeing with.

As such, you'll see a focus on presenting an idea in a manner which seems sorta correct to me at the time of writing, but which also does not get lost in too many trivialities; and one which does not make being right a more important property than being written.

So various writers write for various reasons; programmers want to be absolutely precise, and absolutely correct. Of course, that is not enough, but those two are preconditions for writing working programs.

Writing political arguments is different; there, the goal is to be convincing. Being precise is not important, but always being right, at least in the eyes and ears of your followers, is very important. It is no surprise, then, that programmers aren't terribly interested in popular politics - the lack of finesse and precision, in favour of vague omnicorrectness, can be particularly jarring when you're used to dealing with absolutes and total precision. To a certain degree, this makes perfect sense - it is very hard to get someone to like you if they don't agree with you on things you purport to be of the utmost importance.

Writers focus on a whole lot more than being right. Certainly, what they write must make sense, but they attempt to invoke much more varied responses than agreement or disagreement. As such, they employ language for a completely different reason than programmers do - who are attempting to achieve something very specific. And a different reason than politicians - who are attempting to be correct while saying something you agree with.
All because a writer's goal is completely different. Certainly, the goal may be fairly specific, so it is akin to what a programmer does - but the writer does not have the option of being entirely precise. And certainly, he wants the reader to accept what he writes rather than reject it - but then, the writer does not need to prove that it is right, just that it isn't wrong, granted fair suspension of disbelief.

Any writer who does not realize that making factual arguments, or attempting precision or correctness is not the focus of the discipline is, as such, doomed to fail. It's supposed to be about the story, stupid.

mandag den 10. november 2008

Concurrent behaviours within non-linear dialogue

Concurrency is an area of programming that few people enjoy wrestling with. I have tried my hand at it a couple of times, both in practice and theoretically as a part of my university studies.

Obviously, concurrency is everywhere, but typically it only sees large use when very specific task parameters are in place. The internet and cell phones are great examples of humongous concurrent systems where concurrency is inherent in their structure; it is a prerequisite, so to speak, but also an obvious pattern. Each cell phone, each computer, often lives in its own little isolated world, until it decides to make a transfer to another, single computer.

Since each transfer between two computers is unrelated to whatever else is going on, many such transfers can take place concurrently without any interference.

But while most concurrent computer systems are "pretty" like this, the only really interesting concurrent structures are the ones where the various transfers are not unrelated; where everything isn't locked down nicely into little isolated systems that can be run concurrently without a moment's pause.

The way in which concurrency is used within non-linear dialogue is absolutely crucial, then, because if we do need to use it, we want the full benefits. Oblivion and Morrowind are exceptional examples of non-linear games. The second I write "games", I feel the need to legitimize that I, a grown man, am fooling around with such nonsense, but exactly for that reason, I think I won't.
But let me continue: they're exceptional examples because they utilize a hugely concurrent system. You can easily be working on tens, if not hundreds, of quests at the same time. The player's journal, if thought of as a language with many composite pieces, is huge and expressive - what direction the player takes within each individual quest, and which quests he does first and last, is entirely up to him. It even goes so far that the player's customization at the start of the game is reflected in a myriad of the quests, further adding to the unique experience of the individual player.
How all this works out, though, is managed mainly as a dull concurrent system - every quest is like a transfer on the internet, but no two transfers have an effect on each other. Even if you're doing two quests for the same guy at the same time, you're going to receive the two separate thank-yous that each quest yields upon your return.

It's not that the connection is mangled - it's that it isn't there. Of course, if it were there, it could well be mangled - there's a reason why the various quests are not intertwined in their narrative. It would be too difficult to maintain any sort of overview. Mistakes would be introduced. The game could well be broken in many ways. And it would take much more work to intertwine various quests.

But all the same, reality is intertwined, because when concurrent behaviour is exhibited across communications, they _do_ affect each other fairly often, even if the communications are unrelated at the outset. And for that reason, developing concurrent narratives within non-linear dialogue requires that the full suite of concurrency-architecture tools be utilized to engineer natural-sounding narratives.
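To make the contrast concrete, here is a minimal sketch of what intertwined quests might look like: quests subscribe to shared world events, so a single player action ripples through several narratives at once. All names (`WorldEvents`, `merchant_killed`) are hypothetical illustrations, not from any real game.

```python
# A minimal sketch of intertwined quests: instead of each quest living in
# isolation, quests subscribe to shared world events, so one action can
# ripple through several narratives at once. Hypothetical names only.

from collections import defaultdict

class WorldEvents:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self.subscribers[event_name].append(handler)

    def publish(self, event_name, **details):
        for handler in self.subscribers[event_name]:
            handler(**details)

events = WorldEvents()
journal = []

# Two otherwise unrelated quests both care about the same merchant:
events.subscribe("merchant_killed",
                 lambda name: journal.append(f"Trade quest failed: {name} is dead."))
events.subscribe("merchant_killed",
                 lambda name: journal.append(f"Revenge quest complete: {name} is dead."))

events.publish("merchant_killed", name="Ra'Virr")
# journal now holds one entry per affected quest
```

The maintenance worry from above is visible even in this toy: every new subscriber multiplies the narrative states the writers must keep consistent.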

søndag den 9. november 2008

Context management within complex adaptive structures

I'm very fond of programming patterns. The concept is easy to describe, but hard to fully grasp.
The idea is that if you program things in a certain way, where things relate to each other in a particular fashion at an abstract level, then your life will be a lot easier.

Utilizing an advanced programming pattern to solve a tricky problem is a bit like how I imagine bungee jumping must feel. You fasten the pattern securely around your waist, take aim at the ground below, and then you jump. And for a while there, you're in free fall, just as if there is nothing there to support you, nothing to catch you. You just have to have faith that you're using the pattern right, because you won't know if it'll work until the point where its workings snap into place and save you from crashing and losing all your hard work.

Now, that might sound somehow risky, and the first time you use one, it sure feels that way. Because sure, your teacher may have told you that it'll work, or you may have read it in the book. Or you may have seen it presented in one of Google's lovely programming podcasts.

But you have to actually use programming patterns in contexts where you dig them out and follow them like a recipe, without knowing for sure how to solve the problem. Because then you understand the hidden value: when you're using a programming pattern, you can program an incredibly smart solution without being incredibly smart. That is also why using programming patterns when you fully grasp the benefits is not as rewarding. If you fully grasp them, then you're not making a solution that would otherwise be beyond you. While it'll almost certainly keep things tidy and smooth, there is no leap of faith, and no reward which you couldn't have claimed without knowing that your solution was indeed a pattern in the first place.

It's not entirely unlike a habitual clubber discovering the safety and luxury of cab rides for getting home. A task which can otherwise be difficult, and require much resourcefulness and brainpower (in the situation, at least) - like sticking to the sidewalks while walking in the right direction - is suddenly made easy because of the reliance on something otherwise seemingly unreliable: that a nice fellow in a big leather-seated yellow car will just happen to come by and pick you up when you're drunk and need to go home.

There's a price to be paid for using patterns, as well as taxi cabs, though - you must either know of them ahead of time, or be able to discover ones that'll solve your problem in your time of need. And if you go looking for one that doesn't exist, it'll waste your time just when you need it most.

Since programming is a highly logical discipline, and since it is extraordinarily simple (humans just naturally happen to suck at it really badly), it is a great place for patterns to live, but there is nothing that prevents pattern-based utility in other contexts. In fact, the reason they're called patterns in the first place is that some dude a long time ago coined the pattern term in relation to real physical architecture, buildings and such. There, they amounted to a methodology very similar to programming patterns, where relying upon the methodology would provide certain predictable (nice) results.

Yeah....I'm not great with giving credit to my references. Google it if you care so much. I'll put them in when I have them handy.

And now we're getting to the reason I'm making this blog post. Programming patterns absolve humans of logical responsibilities and relations, and allow them to simply rely on proven-to-work methods. A programming pattern, in this regard, is almost similar to an algorithm which is human-executable - in fact, the algorithm analogy holds up exceptionally well, since many good programming patterns are inherently integrated into programming languages as these languages are developed.

Programming patterns are, in a word, context-free - and that means they are equally applicable for managing non-linear dialogue. Since I have yet to encounter a programming language developed specifically for writing non-linear dialogue, I highly doubt there is one which implements a great many patterns. Rather, I would not be surprised if developers had instead taken to developing toolsets for managing solutions to common problems in writing non-linear dialogue. That would probably be a much better place to look for such patterns.

In fact, I know of a few game developers who have taken to developing their own makeshift methods for developing non-linear dialogue. I have heard of people utilizing Microsoft Excel, of all things, for keeping an overview.

That is not a satisfactory situation. Something must be done. Something that abstracts away the management of variable context structures within writing - logically simple and mundane, but feverishly complicated to humans. Without it, I fear we will be stuck with relying on exceptional people to design around this obstacle to non-linear stories, rather than facing it head-on with a multitude of ordinary writers who have dedicated themselves to crafting beautiful natural language constructs, rather than boring boolean logic. That last part would be my job.
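As a toy sketch of what such an abstraction might do, assuming story state is tracked as boolean flags: let writers declare which flags exist, and have the tool reject conditions that reference undeclared ones, catching the mundane logic mistakes before a human ever has to. Every name here is invented for illustration.

```python
# A toy sketch of "abstracting away context management": writers declare
# which story flags exist, and the tool refuses any condition that
# references an undeclared flag. Hypothetical flag names throughout.

DECLARED_FLAGS = {"met_baron", "stole_amulet", "baron_angry"}

def validate_condition(required_flags):
    """Raise if a dialogue condition mentions a flag nobody declared."""
    unknown = set(required_flags) - DECLARED_FLAGS
    if unknown:
        raise ValueError(f"Condition uses undeclared flags: {sorted(unknown)}")
    return True

validate_condition({"met_baron", "baron_angry"})   # fine
try:
    validate_condition({"met_barron"})             # typo caught early
except ValueError:
    pass
```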

fredag den 7. november 2008

Logical theory and how it relates to writing non-linear stories

A subject of great interest to me when I was younger was non-linear stories. There was something magical about the unassuming interactivity that preprogrammed adaptive dialogue conveyed to me. Perhaps it's that I was listened to and had a say; perhaps it's that I was responded to, despite being a kid spending the afternoon at home with only TV, homework and the computer for entertainment. It doesn't really matter why, but I was definitely starved, and games with non-linear dialogue in particular fed me quite well as I sunk my teeth into them.

Now, having come to understand at least part of the structural intricacies of languages, and more specifically how to translate between languages, it seems like a good idea to re-examine non-linear dialogue. It'll allow me to put my old hobby into the context of what I've learned since then; and hopefully I can boil down that which I used to like into rules which are more generally applicable than "oh yeah, they did this cool thing in this game I played ten years ago, and it was soooo awesome but I'm not quite sure why!". If I succeed, which probably won't happen in this entry, that should improve the way I think about narratives in general, and nonlinear ones specifically.

I'll try to come back to applicability for games later.

Essential to non-linear structures is a grasp of logical languages. I call them logical languages only to differentiate them from natural languages such as English - which sounds moronic, as English _is_ a logical language. The point would be the link of association - languages are associated with communication between people, or, in the case of programming languages, the declaration of data manipulation mechanisms.

But technically speaking, anything that defines a set of legal operations and results (or meanings) of those operations can be thought of as a logical language. This is because it allows the user to express meaning through the use of the operations. In the case of natural languages, it should be obvious that there often is no set definition, and that the meanings conveyed may in fact be unlimited.

I bring up logical languages because, in order to make interesting non-linear dialogue, we need to utilize a context-free language to organize the natural language expressions in a non-linear fashion. This limitation is two-fold: we cannot program a computer, which is required to "run" the non-linear dialogue, without utilizing context-free languages (yet, at least, because computers have no grasp of language contexts at this stage of technology), and we can't impress humans without using natural languages; an argument for the latter follows:

In simple terms, the most important aspect of any language remotely interesting to humans is context. This is in part because all natural languages are defined by their context (even their grammar, alphabet, spelling and semantics are, in the grander scheme!), and in part because the context can never be fully known, remembered, or even understood, making the reading of almost any context-based text a highly individual experience.
Even normal conversation has this element of uncertainty, which means that every expression we make carries with it a choice in how we word the expression, and, as a result, in how it is understood. Making things all the more intriguing is the fact that we don't know the result - the impression our expressions have on our listeners - before we have a chance to listen to some of their following expressions.
To hammer in my point again: our only basis for understanding those who speak to us, and constructing expressions for those who listen, is context. All of our decisions with regard to how we communicate come from how well we understand the context, and how quickly we understand it.

This fairly complex work is handled by us automatically, without conscious analysis. That isn't to say that we aren't fallible in this endeavor - we're very fallible, particularly concerning areas where no reliable and common context exists as the basis of communication - but we're very good at it, and the enjoyment that comes from conversation is probably in no small part due to how it flexes and exercises this part of us.

The idea I proposed above - arranging natural language expressions in a non-linear fashion - is not the only solution. It is also possible to generate natural language expressions utilizing context-free languages, but these, while usually grammatically correct, reek of simplicity and are exceptionally mundane by natural language standards.
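That claim is easy to demonstrate. Below is a tiny generator over an invented context-free grammar; every sentence it produces is grammatical, and every one of them is dull in exactly the way described.

```python
# A tiny sketch of generating text from a context-free grammar. The
# grammar is invented for illustration; multi-word strings are terminals.

import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the hero"], ["the villain"], ["the merchant"]],
    "VP": [["finds", "NP"], ["betrays", "NP"]],
}

def generate(symbol="S", rng=random):
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return " ".join(generate(s, rng) for s in production)

print(generate())   # e.g. "the hero betrays the merchant"
```

Grammatically fine, endlessly recombinable, and exactly as mundane as promised.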

So, we're currently left with the organization of chunks of natural language by means of context-free language mechanisms. Essentially, this means that the only people really qualified to design non-linear dialogue are people who understand how context-free languages work, and are able to use them. This is a discipline which is not derivative of being a writer, but rather of being a programmer, which is a large problem.

Having a firm grasp of clear-cut expressions in context-free languages - the programmer's forte - often means having a sub-par or completely ruined grasp of natural language constructs. The two are not mutually exclusive, to be sure, but compared to the precision of context-free languages, many programmers are likely to find natural languages imprecise and unfulfilling to deal with.

So where am I going with this?

Well, my idea is, more or less, that because you can introduce context-based values to a context-free language, you can write Turing-complete languages within the confines of context-free grammars and languages. This is what allows the various branches to exist.

But because this is all Turing-complete and originates in a context-free language, it's possible to translate, or compile, the dialogue into wholly natural-language-based strands and examine them without the branching context.
Sort of like unfolding a tree into a number of linear paths. So, it may take a programmer to construct the strands, but a writer can edit and sharpen the strands.
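A sketch of that unfolding, using a hypothetical five-node dialogue: a plain depth-first walk turns the branching structure into one linear strand per path, which a writer can then read straight through.

```python
# Unfolding a branching dialogue into linear strands. The dialogue data
# is a hypothetical example; the traversal is a plain depth-first walk.

DIALOGUE = {
    "start":    ("Guard: Halt! Who goes there?", ["friendly", "hostile"]),
    "friendly": ("You: A humble traveller.", ["pass"]),
    "hostile":  ("You: None of your business.", ["fight"]),
    "pass":     ("Guard: Move along, then.", []),
    "fight":    ("Guard: Then you shall not pass!", []),
}

def unfold(node="start", path=()):
    line, branches = DIALOGUE[node]
    path = path + (line,)
    if not branches:                  # leaf: one complete strand
        return [list(path)]
    strands = []
    for branch in branches:           # depth-first over every fork
        strands.extend(unfold(branch, path))
    return strands

for strand in unfold():
    print(" / ".join(strand))
# two strands: one through "friendly", one through "hostile"
```

Each strand reads as ordinary linear prose, which is exactly the form a writer can edit without ever touching the branching logic.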

While it may well be technically tricky, it is therefore perfectly possible to develop a branch revision structure that allows a multitude of writers to pore over forks in the dialogue while keeping the dialogue inherently compatible. Of course, this is what you need to develop tools for if you want to construct non-linear dialogue.

I'll get into, in another entry, why exactly the tools are necessary - but this blog post should explain why the tools are possible. Also, I want to address the problem of having several writers on a project; since the writers have different personal natural-language contexts, a certain degree of meta-management is necessary every time you add another writer. Since more writers means a decrease in the common denominator, so to speak, managing a common context will necessarily become paramount in such an endeavor as the team size grows.

But more on that later.

lørdag den 11. oktober 2008

This blog

Why, writing, blog, this, you, the reader.

As in, why do a blog, what do I write about, what is this, why a blog, and who you are.

A blog, because I wish for some of my thoughts to be accessible from the web by myself and others.

This is therefore a log set up on blogger with an account I already had. More specifically, what you are reading right now is a brief guide on the context.

For a context to make sense, there has to be a receiver - someone who needs the context to better understand what I'm on about. The receiver is you - whether you actually exist - that is, whether I have more than 0 receivers, including myself - is up in the air.

But if you are there, this is what I assume about your context:

It doesn't matter why you are reading this, and it doesn't matter if I know you, but it does matter that you don't mind thinking over and analyzing what you read here, because my writing is habitually convoluted. It also matters that you have the patience to deduce my intention from the context. Finally, it does matter that you acknowledge that I don't write anything here if I don't have a point to make.

It may be an incorrect point, or incorrectly made, perhaps even incorrectly thought out if such a thing is possible. It may be a point relating entirely to me. But either way, there's an intention behind putting it up here.

The context I provide is this:

You may be able to take something away from this blog - fine. That's good. Allow yourself to be inspired. But don't steal anything. What I write here belongs to me, so if you take something, make sure it's something original to you. It will be a distinct advantage, when reading this blog, to be into culture, computers, programming and language. It will also be a distinct advantage to know me. In fact, it will be the most advantageous if you are me, which you probably are.

I will be writing here to jot down ideas, and weed out inconsistencies.

I do this, now at least, because I feel the need to keep this stuff in a central location where it will always be available and where I won't forget it.

Good luck reading and have fun with it. And don't take it too seriously - I'm not a particularly serious person. I just happen to enjoy expressing my ideas as if they are serious.

Tejlgaard