I program stuff.

Thursday, October 27, 2005

Why do so many license games suck?

Seems like there are a lot of games that license a good property, like a movie (for example: The Matrix or The Lord of the Rings or Dungeons & Dragons), what have you, that suck. But it could be that licensed projects don't get canceled as often when they don't meet the bar of non-suckiness. There are a lot of projects that get canceled. There are a lot of games that suck. There would probably be a lot more games that suck if projects didn't get canceled -- although not all projects get canceled for this reason. I wish I knew how many did for this reason, though.

It makes less sense to cancel a project built on a well-known license just because the game is turning out to be sucky. If you can finish it, even if it's pretty sucky as a game, you can still make money off of it, because people are more likely to buy a game from a well-known license. Seems like that could be a main driver of why so many licensed games suck.

Also, as an aside: when you do a license title it makes little sense to do something innovative in game design, because you might screw up the value-add of the license if the innovation isn't easily recognizable as a marketable game (like if it's not in a known genre). If it's that innovative it could detract from the marketing of the license to sell the game, and if the license isn't the key part of the game, why even have the license at that point (licenses can be costly to get)? Also, the license holder is probably not thinking of having some innovative game for their property; they just want a regular game to boost the value of their property.

Plus, there is a high risk that the innovative stuff you invest R&D on will fail to be compelling (you find that you can't build a cool innovative game for whatever reason); that risk is unnecessarily high for the project to take when you have the safety of the sure bet of a license to drive your game sales.

Friday, October 21, 2005

High Latency = What the hell is this program doing?

Friday, September 09, 2005

Whatever, as long as it works

There are a lot of layers of complexity in the life of a coder. Most of it is made up. Examples: VB, DMAC, COM, DCOM, ADO, sockets (or Winsock), TCP, UDP. Just like Word is a beefed-up version of Notepad with more functionality. All these complexities are there to help us. As Joel Spolsky said, these are abstractions: some high-level functionality mapped to some low-level process to make it work.

For example, as coders we use tools like C++, Java, COBOL, LISP, etc. to write programs in a higher-order language that produces lower-level object code (or byte-code, what have you). This allows us to think in higher-level abstractions, closer to how we would normally want to think about solving a particular problem. That way we don't have to mess with, or hopefully even think about, the lower-level process at all.

Joel says, I believe correctly, that sometimes there are leaks in the abstraction. There are points where the abstraction breaks down because the lower-level process is creeping outside of what the abstraction is built or designed for. The makers and advocates of these abstractions usually don't get around to telling you about the leaks, so they will usually catch you off guard the first couple of times you run into them. Often when coders run into this sort of thing the first time, they tend to yell things like, "Well that's just dumb! Why would it do that?!" (as I have on a number of occasions). An example situation: someone writes a massive SQL select query that takes 10 seconds longer than you think it should to run, but when he reorders the where clause in a way that you'd think shouldn't make any difference, all of a sudden it runs in under a second. The coder in this case may only see the top-level abstraction and doesn't know enough about the lower-level stuff -- how the database handles a SQL select -- to see why rewriting the query a bit can save a bunch of time.

As coders we've all been in this situation, and suddenly we have to go search around to try and find a deeper understanding of what is actually happening. This is a difficult thing to do. This can be where some coders just "give up" and get a faster machine, or update the requirements for their software to twice as much RAM as was originally needed. Hey, whatever works. But not looking closer can lose you an opportunity to gain a greater insight into your toolset, which is something we can all obviously benefit from. And this search might be the true art of coding.
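You can actually poke at this kind of leak directly. Here's a minimal sketch using Python's built-in sqlite3 (SQLite, not whatever database that coder was fighting, and the orders table is made up for illustration): EXPLAIN QUERY PLAN peeks one level below the SQL abstraction and shows whether the engine will walk the whole table or use an index.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("CREATE INDEX idx_customer ON orders(customer)")

# Two queries that look about the same from the top-level abstraction...
fast = "SELECT * FROM orders WHERE customer = 'bob'"
slow = "SELECT * FROM orders WHERE customer LIKE '%bob'"

# ...but underneath, one is an index SEARCH and the other a full-table SCAN,
# because a leading-wildcard LIKE can't use the index.
for sql in (fast, slow):
    plan = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    print(sql, "->", plan)
```

Same sort of surprise as the reordered where clause: a tiny rewrite changes which code path the lower layer takes.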

An odd thing about these layers of abstraction is that they almost never seem to end. Basically you have the software layer on top of the operating system (OS) layer on top of the hardware. But within those layers, especially the OS and software layers, there are all sorts of sub-layers that the software engineer has to be aware of. Like in Windows you might have your COM layer going to ADO systems, which link to ODBC, then to your database via the network layer. You may have OpenGL and/or DirectX, Windows APIs, the .NET runtime, Winsock. In Linux you may have OpenGL via SDL, with sockets, OpenAL. There are so many layers that it seems impossible to actually understand what's going on in the hardware. Even the memory is abstracted. Each program treats memory like one long block, when usually the operating system treats it as a collection of pages scattered all over RAM, intermixed with memory from other programs that is invisible to yours. Sometimes these pages are not actually in RAM at any given time; they might be swapped out to disk, waiting for the program to try and access that bit of memory. All of this is completely hidden from the program. Even the hardware sees memory as something different. It doesn't see pages, just a long stream of bytes, though it might access them in a totally different way depending on the architecture.
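You can catch a glimpse of the paging illusion from inside a program. A small sketch in Python (the 64 MiB figure is arbitrary, and exactly when pages get backed by physical RAM is an OS detail that varies by platform): an anonymous mmap hands the program one long flat block of bytes, while underneath the OS manages it in pages and typically only faults in the ones you actually touch.

```python
import mmap

# Ask the OS for 64 MiB of anonymous memory. To the program it looks
# like one long contiguous block of bytes starting at offset 0.
m = mmap.mmap(-1, 64 * 1024 * 1024)

# Underneath, the OS deals in pages; this is the unit the whole
# illusion is built out of on this machine.
print("page size:", mmap.PAGESIZE)

m[0] = 1                  # touching a byte pulls its page in
print("length:", len(m))  # still looks like one flat 64 MiB block
m.close()
```

The program never sees any of the page bookkeeping; it just reads and writes offsets.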

Even then there are abstractions within that. Some pages or bits of memory might be temporarily held in a cache that is quicker for the CPU to access than regular memory, so in those cases it can just grab them from the quick cache rather than main memory.

Even the CPU in most modern architectures can run instructions out of order while it waits for longer memory fetches and instructions to finish; then it reassembles the correct result as if it had run the instructions in the right order. And this is only one example of such abstractions. There are many, many others. It really does seem impossible to understand what is going on here. I believe it truly is impossible. Even if you understood every detail of every abstraction and how it really works, it is impossible to hold all that information in mind at the same time and know how it's going to behave. The complexity level is just too high. Weird, unexpected things can happen all the time. In fact, all this complexity is exactly why we have abstractions in the first place. Otherwise we'd be perfectly fine writing to the hardware in straight byte-code, or maybe in some even lower form of abstraction.

For example, loading a level in a game might tie up a bunch of memory for pre-processing of some game data, and afterwards the memory is released -- but not really released back to the OS, because the memory manager in the game is still holding on to the big block, thinking it's going to need it again soon when it won't. Things like this happen in development all the time. Abstractions, which in this case might be abstractions the developer himself created, can act unexpectedly. Or take the case of a developer who writes a string function that handles Unicode strings but fails on a strange character set, because the user is using a language whose Unicode characters encode to 3 bytes rather than the expected 2-byte maximum.
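That last bug is easy to reproduce. A toy sketch in Python (the naive_buffer_size helper is made up for illustration, not from any real codebase): assume every character encodes to at most 2 bytes, and UTF-8 will eventually hand you a character that doesn't fit.

```python
def naive_buffer_size(s):
    # Buggy in-house helper: assumes every character encodes to at
    # most 2 bytes, so it sizes buffers at 2 bytes per character.
    return 2 * len(s)

# UTF-8 actually uses 1, 2, 3, and 4 bytes for these characters.
for ch in ("A", "é", "中", "\U0001D11E"):
    actual = len(ch.encode("utf-8"))
    print(repr(ch), "needs", actual, "bytes; buffer sized for", naive_buffer_size(ch))
```

The 3-byte character is exactly the "strange character set" case: the abstraction (one character, one slot) held right up until the user typed in the wrong language.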

Each system in a non-trivial program is usually an abstraction of a concept in the same way. Memory managers are one example: they abstract away the tedium of carefully managing your memory throughout the application. A rendering system in a game is another, or a sound system, file reading/writing system, scripting system, user input system, UI system. All these abstractions help a ton, but they can easily lead to strange, unexpected errors that you might not be ready for when writing some new functionality. Normally the systems you really need to look out for are the ones you write yourself. I'm not saying don't watch other systems for problems, but the ones you write yourself (or in-house) can be a major problem, because they are usually written to cover a limited number of functional points. As you develop, you inevitably want to add functionality to do something new, and you find you already have a system that can help. So you go use that system to do something it may not have been designed for -- but it sure looks like it was designed for it, so you use it. It might even work at first; then later, when you start using it a lot for that new thing, it might tax the system and just crash, or leak memory all over the place, or bog down in its routines processing the new cases, or fill the disk, and so on. Soon you wonder why the system you wrote last week is not handling what you thought it could handle yesterday.

The more systems you have in a program, the more there seems to be a combinatorial explosion in the number of things that can go wrong. We all know what people say about things that can go wrong. I know a lot of Object-Oriented people will say things like: you really need to tightly define the interfaces between your systems, and then you won't have so many problems. I'm sort of skeptical of this. When later changing your code to do something new, these little OOP constructs are usually the first to go. They don't really do anything to solve the current problem; they are just there for the programmer's benefit, to help her understand what the hell is going on. But hey, we need all the help we can get, so it's all good.

This is sort of like the idea of adding getters and setters around all private/protected members of a class "because you know someone is going to want to do that in the future." The odd part of that is: is it even worth it to design for some future event that may or may not happen, and if it does happen, will it happen remotely like you think it will? So in the end, you usually just write a set of functions that set the member variable without doing anything else. But then they can say you now have a hook: whenever other people do something with your class, you have control over it, because they have been using your getters/setters. So later you find it convenient to change a setter to update the state of your class, and it goes and breaks another set of classes that have been calling your class through that setter. "Not my problem anyway," your class says. "That other class should not have been doing that; it should have been doing it the right way in the first place. Let the other class change itself; this design is the right way."
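A toy sketch of how that hook bites (the Player class and its caller are made up for illustration): the setter starts as a pure pass-through, someone later adds a side effect "since we have the hook anyway," and old callers suddenly get behavior they never asked for.

```python
class Player:
    def __init__(self):
        self._health = 100
        self.events = []

    def set_health(self, value):
        # v1 was a pure pass-through: self._health = value.
        # v2 adds a side effect because the hook was sitting right there:
        self._health = value
        if value <= 0:
            self.events.append("death")  # new behavior in v2

# Elsewhere, code written against v1 zeroes health as a cheap reset:
p = Player()
p.set_health(0)   # harmless under v1; fires a spurious death under v2
print(p.events)
```

Nothing about the setter's signature changed, so the compiler (or interpreter) is perfectly happy; only the behavior the old callers relied on is gone.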

I seriously don't think anyone has a clear idea of what the "right" way is when it comes to software design. Whenever someone says "this is the right way," the first thing that should come to mind is, "Now wait a sec, is that really true?" A lot of people have good ideas on the subject, but sometimes they stick to them like they're written in stone or something. "That function has more than 10 lines of code! It is way too long! How is one to understand it? It's time to break it apart into smaller, bite-sized portions." Hmmm. Sometimes it's easier for me to have some stuff all together, rather than browse over to another function to see what it's doing with the other half of the parsed string or what have you, even if it is 5 pages long. In some cases the order of a ton of operations is more important to keep in mind than the readability of the code. Personally I've never put much stock in that one. Are we all so afflicted with ADD that we cannot possibly understand a function longer than 10 lines? Sometimes we need to tax that part of the mind to correctly understand what is going on. Also, in a real way, it's more complex when you have 3 smaller functions: you have to look all around for the little bits of code and put each piece together in your mind. And if you are always breaking your functions apart just to fit the model, that's a lot of extra work. Is it really a good idea?

Many people call this a "religious" debate. Usually each side has strong opinions about how it should be, and they just yell at each other until one side is louder than the other, or has more books on the subject, or whatever. A common example is the famous Design Patterns book. The book is so high-level about OOP class design that it barely touches real examples, and only then in a sort of academic fashion. A lot of the design patterns make sense in concept, but they're so far from concrete examples that it makes me uneasy reading about them. I'm not saying it's right or wrong, just that it's so self-referential to the OOP argument that I wonder how much application it deserves to real-world problems. Perhaps I'm being unfair. Usually books like this take a particular point of view on design and apply it to real-world software, and say things like: well, a lot of people use a class to create other kinds of classes, so that must be a design pattern people use, so let's call it a Factory Pattern. It is actually very useful to take a particular point of view on design and apply it this way. It tests the point of view, stretches it, and fits it to real-world problems and solutions. People should just realize that it is only a point of view, an abstraction of the real-world problem, and stop fighting like it completely fits real-world problems without exception.
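For what it's worth, the Factory idea boils down to something small. A stripped-down sketch (the Sword and Bow classes are invented here, and real Factory write-ups usually involve more class machinery than this): one place whose whole job is making objects, so callers never hard-code the concrete classes.

```python
class Sword:
    damage = 7

class Bow:
    damage = 4

def weapon_factory(kind):
    # The "factory": callers ask for a kind by name and never
    # mention the concrete classes themselves.
    return {"sword": Sword, "bow": Bow}[kind]()

w = weapon_factory("sword")
print(type(w).__name__, w.damage)
```

That's the whole observation the pattern names: people keep writing functions like this, so give the shape a name. Useful, but still just a point of view on code people were already writing.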

I see the same sort of thing with arguments between tools, like the vi/emacs debate. It's not like vi is the only way to edit text files and emacs is just a rough hack simulating the pure process of vi, or vice versa.

vi, like many things in software, is a made-up concept. Even memory, as software sees it, is a made-up concept. A lot of things are just arbitrary; it didn't need to be that way. Often things are built a certain way for convenience or performance reasons -- the same sorts of reasons you use to build your own software. Someone thought it would be cool to do something new, so they try to build it. Something doesn't run quite right, so they tweak it a bit and now it runs well. But now it's a bit slow, so they tweak it a bit more and get it faster. Then they think it would be easy to make it do this other useful thing as well, so they add that, and soon they have BIOS 0.3 or something. We've got to realize that in software, we're standing on a house of cards. But all is not lost, far from it. The house of cards is very well tested and stands up very well. Sometimes minor things in software break or flex a bit, but the rest of the structure can handle it, so the whole thing doesn't come crashing down. Still, the principles that built them are essentially arbitrary, or at best fit a problem that someone was trying to solve at some particular time. The point is that it didn't have to be built that way at all; each part could have been built differently. It wouldn't be dumb to try to build it differently. There is no right way to build something.

This is actually pretty cool. If there is no right way to build something, you can build it almost any way you want, and as long as it works, it's good enough. We're used to people propping up technologies and ideas as being the best in their field. Go to any software vendor and you'll see things like,

“IntelliJ IDEA is recognized by many Java developers and industry experts as the best Java IDE on the market. With its industry-leading features, IntelliJ IDEA relieves Java programmers of time consuming routine tasks, remarkably boosting their productivity.” – from the IntelliJ website

While I agree that IDEA is a great product, it can't be the best way to do it. There is always another way that someone else might prefer. As they say, there's more than one way to skin a cat.

There seem to be many ways to do the same thing. If two software shops build the same app, I think everyone would agree that the two apps will not be written the same way. One might be in Java and one might be in ASP or LISP CGI. But even if the apps act the same way and are written in the same language with the same tools on the same platform, even using the same programming methodologies, and both teams learned from the same mentor and had built 10 projects together over 5 years, the programs are not going to be identical. They'll probably be a lot different even then. The funny thing is that they will probably both work. This is not the expected answer if you expect one to be better than the other. One might say the code is easier to read or easier to maintain in one case or the other, but that is just a matter of preference as far as I can tell. Some people find it easier to read COBOL over Assembler, but some people find the straightforwardness of Assembler easier to understand than COBOL's data structures and sentence-like format. Someone might come up with a cool message routing system because he was playing Frisbee the other day and saw the messages in the code like the Frisbee. Some might say performance is better one way or the other in the two systems, but that is just preference as well. Some people expect certain things to be faster than others while others expect the opposite -- which is also probably why the two teams built it differently in the first place: each had their own preference for how it should work.

The mantra that is emerging in this writing seems to be, "whatever, as long as it works." I think there are so many layers of complexity to software that no one really understands what's going on. Personally, I think that is just great. It makes for a lot of potential problems to solve, and you get to solve them any way you want to. You can re-write the whole bloody OS if you want. Hell, you could even write a Tic-tac-toe program and turn it into an AP/GL package, just because you like Tic-tac-toe programs as applied to accounting packages. It'd probably work just as well, or maybe even better, as starting from a Payroll program.

I think the "whatever, as long as it works" approach might be the only practical approach to building software. You just build it the way you think it should be and then test the hell out of it. If it works, great; if not, tweak it or re-write it or whatever until it does work. I feel that this is the only constant out there. The only bar to entry is whether it works. There are no bars like OOP (I keep picking on it, but many other things do the same), or eXtreme Programming, or waterfall design, or even breaking your code into many source files -- just put 'em all in one monster file and you'll only need to open one file to edit your code! Ack, not for me, but whatever floats your boat. And it really is your boat, not the OO Consortium's boat. They don't have clue one about how you run your ship.

This is practical because the idea is a bit like the scientific method, which seems to work pretty well. The scientific method doesn't care how you do things, or even what conclusions you come to; all that matters is that the experiments back up what you say. How else could physicists come up with something as counterintuitive as quantum electrodynamics, which makes no sense but actually works, and rather well? The scientific method works because it forces you to think in terms of, "Well, I have no idea how nature actually works, so we'll just see if this one thing here works the way I expect it to." This can be applied to any system, and it works well with nature, which is incredibly complex. So why not apply it to computers and the software world, which is also very complex, and whose mechanics are also mostly unknown to us as individuals, and probably as communities of coders and computer scientists, even though it's almost completely man-made by us?

There are just too many unknowns out there to think you know just how it should be done, whatever it may be. This may be somewhat of a self-fulfilling prophecy anyway, since you can take most any approach to most any software problem and usually get it to work somehow. Solving a problem a certain way can give a false sense of confidence and make you think that this is the right way to solve it.

So I don’t care how you solve the problem, just do it the way you want to and test the hell out of it and make sure it works. It’s all good as long as it works.