Monthly Archives: March 2006

Web technologies paradox

Reading The Art of Unix Programming… It talks about C/C++ being pushed out of general use by higher-level languages. Well, I can sort of understand this: there really are specialized languages better suited to specific jobs than C or even C++. Take Perl, for example. There is no way you can achieve the same degree of text-processing power in C++, unless you have way too much free time. And even if you do, you will probably just end up with another Perl implementation written in C++ ^_^

But I am not here to judge all that. I just want to note one strange thing. Among all kinds of development, there is one area where lower-level languages would be more appropriate than anything else. No, I am not talking about modeling nuclear explosions, real-time streaming applications (telemetry?) or 3D graphics. There is a much more common thing. We all know it by the name of WWW. Yes, the World Wide Web. I mean that even nowadays, web applications have pretty high hardware requirements. Not just to run, but to process a lot of requests simultaneously and to share the limited resources of the same web server. Look at the prices and resources! The lowest possible configuration for a virtual dedicated server gives you only about 32 MB of RAM! And people still tend to run Perl CGI scripts on that! At the same time, people tend to write desktop software in C, even though it could just as well be written in almost any language with no visible effect on performance at all (well, I am not talking about buggy Java implementations here!).

I am not going to blame people for using C and C++ for desktop software development. In fact, I am one of those people as well – it is way too difficult to develop desktop software in plain C or C++, but with the help of great toolkits such as Qt (and possibly GTK+) it is much easier and more fun. And after all, if I am able to quickly produce short, high-quality, maintainable source code in C++ for some task, why in the world should I go with another language? Well, enough of this, it is too philosophical a question to answer in a short and definite way.

What I am talking about is another part of this paradox. Why in the world are people using Perl, PHP and even Java for web software development?! Because C/C++ is not portable? Bullshit! C is one of the most portable languages in the world, and the C++/Qt combination is more portable than Java, for example. And if we are talking about binary code portability, then I am going to ask: how often are you going to move your web applications from one server to another? Probably not too often. Another reason may be the lack of web technology in C/C++ toolkits. Well, I can sympathise with this. But then the next question arises: why are web technologies in the C/C++ world so underdeveloped? Why do we have monsters like Java JSP/servlets, Perl with mod_perl and CGI modules, PHP as a specialized language and a lot of other stuff, but no simple libraries that would let us do the same things in C++? I am completely at a loss here!

Out of curiosity, I have made a prototype of such a technology. It took me about one week of lazy coding in the evenings. It is a simple module for the Apache web server that looks for specific requests, maps each of them to an application (which is just a shared library, to reduce latency), then loads and runs it. It gives low enough latency: I have compared it with equivalent Perl CGI (without mod_perl), C CGI, C++ CGI and PHP. Of all those, only PHP has lower latency, because of its tight integration with the web server. But I am talking only about startup latency here! If I were to measure execution performance on some typical task, I am sure that an implementation in C++ would be much faster than one in PHP, simply because it is not an interpreted language. And there are further optimizations possible, such as application caching and deferred unloading.
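Just to give a feel for the idea, here is roughly what the application side of such a scheme could look like: a shared library exporting a single entry point that the Apache module finds with dlopen()/dlsym() and calls for every matching request. The function name and its signature here are only an assumption for this sketch, not the actual interface of my prototype.

// A web "application" built as a shared library; build with something like
//   g++ -shared -fPIC -o hello.so hello.cpp
#include <string>

extern "C" const char *handle_request(const char *query_string)
{
    // static, so the pointer stays valid after we return to the module
    // (not thread-safe, but enough to show the idea)
    static std::string response;
    response  = "<html><body>Hello, ";
    response += (query_string && *query_string) ? query_string : "world";
    response += "!</body></html>";
    return response.c_str();
}

// The module side then does roughly this for a matching request:
//   void *lib = dlopen("/path/to/hello.so", RTLD_NOW);
//   const char *(*run)(const char *) =
//       (const char *(*)(const char *)) dlsym(lib, "handle_request");
//   ... write run(query_string) into the response and keep lib loaded for next time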

I am not sure yet whether I will be able to turn this technology into something better than just a prototype, but the point is that I just do not see any good reason why nobody has done it before me!

Copying text from this document is not allowed?! You are kidding, right?

Today there was another great bit of fun. L. asked whether it is possible to copy text out of a PDF. I answered that in general it is, but far from always, and far from always easily. I came over and saw Adobe Reader acting up: it selects the text, but for some reason refuses to copy it. I took the file, opened it in xpdf and was shaken to the depths of my soul by a message on the terminal:

Copying text from this document is not allowed
or something like that.

Well, really… if this piece of junk can render the text on the screen, why the hell, pardon me, can it not put it into the clipboard?! Not allowed by whom? Anyway, I thought about it and came to the conclusion that this is in principle impossible to enforce at the cryptographic level: the program has to have access to the text in order to draw it, unless the text is stored as an image, which is clearly not the case here…

So I took the xpdf sources and found this message in them. There was something like if(!okToCopy()) blah-blah-blah… Then I found okToCopy(), which performed strange manipulations with some access permissions. And instead of all that I simply wrote

return true;

Recompiled, ran it. It works! True, for some reason the text gets re-encoded into Latin1, but any fool can turn that back into Russian letters…
Then I made a patch out of the sources I had, put it into /usr/ports/graphics/xpdf/files and did make package clean. Now I have an xpdf installation package with a fixed bug, the bug being the correct interpretation of PDF document access permissions ^_-

Date and time perversion

Today there was a great bit of fun. P. came by and asked in what form the date is stored in a certain TM parameter. He was honestly told: in seconds. Then he got curious: seconds since what moment? At that point nobody could explain anything sensible to him, because there simply was no answer to that question, and nobody wanted to explain to him why, for political reasons. The latter made me gloomy, as usual, but the technical background was rather amusing.

The fun part is that there are two concepts: time, and date-with-time. Time is milliseconds since the start of the day, and it is formatted as “hh.mm.ss.mmm” (why the hell time is separated with dots is a whole separate question), while date-with-time is seconds counted from nowhere in particular, formatted as “ddd:hh.mm.ss”, where “ddd” is days, obtained by dividing those seconds by the number of seconds in a day. So, for example, “128:13.25.30” is the 128th day, 13 hours, 25 minutes, 30 seconds. Where the 128th day is counted from, the system does not care at all. Whoever looks at it is supposed to have that information. In other words, it is relative time.

Next. To get a date in the “day-month-year” form (which the system does not support), the following trick was pulled: inside a date-with-time value, the day was stuffed into its rightful place, the month into the place of hours, and the year into the place of minutes. The seconds were left at zero. As a result, 25:03.06.00 means the 25th of March 2006. This is where P. found himself completely screwed. Even the very idea of storing a date-with-time as a relative value is a deep shock to a normal person, and no normal person would ever guess that the hours are the month and the minutes are the year. Just think about it: this variable grows over a whole year by an amount 60 times smaller than over a single month!
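Just to make the arithmetic visible, here is a toy C++ illustration of that packing (my own sketch, of course, not the actual system code):

#include <cstdio>

int main()
{
    // 25 March (20)06, packed the way described above: day into the day slot,
    // month into the hours slot, year into the minutes slot, seconds left at zero
    long day = 25, month = 3, year = 6;
    long packed = day * 86400 + month * 3600 + year * 60;   // the stored "seconds"

    // what the system prints when it formats those seconds back as ddd:hh.mm.ss
    long d = packed / 86400, rest = packed % 86400;
    std::printf("%ld:%02ld.%02ld.%02ld\n", d, rest / 3600, rest % 3600 / 60, rest % 60);
    // -> 25:03.06.00

    // ...and why the value grows 60 times less over a year (+60 seconds)
    // than over a single month (+3600 seconds)
    return 0;
}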

FVWM WindowList usability problem solution

One funny thing just happened. FVWM has a function called WindowList which acts similarly to Windows’ Alt+Tab. But unlike Alt+Tab, which displays some special fancy thing, WindowList displays a simple menu, just like many others. That is good. What is bad is that when I use Alt+Tab to switch between menu items, FVWM also moves the mouse pointer over them, just like it does for other menus. But for some reason, it does not return the mouse pointer to where it was before I pressed Alt+Tab. For other menus there is no such problem, so it may be just a bug. Anyway, it is bad because I often keep my mouse pointer at the edge of the screen, where it does not bother me. Then I press Alt+Tab and it ends up in the center of the screen! So I have to move it away each time, which is really frustrating.

But of course FVWM is open source. Any bug can be fixed by any developer in the world. So I tried to look at the sources a little, but realised that it would take me a lot of time and effort to understand what is going on there. Well, low-level X programming was never one of my strong points. I also tried to read the documentation, but found no option like “DontTouchMyMouse”.

Then I thought: well, maybe there is a way to save the mouse position before displaying WindowList and restore it when WindowList closes. I looked at the documentation some more and found that although there are variables $[pointer.x] and $[pointer.y] containing exactly what I need, there is no way to save and restore them. Strangely enough, the FVWM configuration file allows a lot of programming, but no data manipulation except read-only access to predefined variables.

Then I looked at the FVWM modules and became interested in the FvwmPerl module. After reading the documentation a little, I came up with the following solution:
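In essence, it is this (give or take the exact key binding and the Perl variable names):

# FvwmPerl has to be running so it can keep the pointer position for us
AddToFunc StartFunction
+ I Module FvwmPerl

# remember where the pointer is, then show the usual window list
DestroyFunc MyWindowList
AddToFunc MyWindowList
+ I SendToModule FvwmPerl eval $px = $[pointer.x]; $py = $[pointer.y]
+ I WindowList Root c c CurrentDesk

# WindowListFunc is what FVWM runs for the window chosen from the list;
# instead of warping the pointer to that window, move it back where it was
DestroyFunc WindowListFunc
AddToFunc WindowListFunc
+ I Iconify off
+ I FlipFocus
+ I Raise
+ I SendToModule FvwmPerl eval cmd("CursorMove " . ($px - $[pointer.x]) . "p " . ($py - $[pointer.y]) . "p")

Key Tab A M MyWindowList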

It certainly does not look too complicated, does it? I think using Perl is a little bit of an overkill, but hey, it works, and works almost fine. By “almost” I mean that it would be very nice not to move the mouse back if I used the mouse, rather than the keyboard, to select the window to switch to. Still have no idea how to do that, though. But it is still better to have a choice between “move back” and “do not move back” than just “do not move back”.

Complexity and simplicity paradox

Complexity and simplicity are exactly opposite things. But strangely enough, one often leads to the other. Simplicity may lead to complexity and complexity may lead to simplicity, in both good and bad ways.

In software development, simplicity is definitely a good thing. The simpler the code, the fewer bugs it contains and the easier it is to maintain and extend. Oops, did I just say “to extend”? Does not extending simple code risk making it more and more complex? Yes, and that is exactly one of the bad ways to do it. Here is one way simplicity may lead to complexity: complex things are usually unable to produce even more complex things, just because they are unmaintainable and tend to get thrown away. But that is a rather natural way for simplicity to turn into complexity. I would like to talk about more paradoxical ways.

Suppose we have a really simple program. Suppose we have a lot of them. Now, if they are designed “the Unix way”, we can easily connect them together and get some complex construct. That is impossible to do with complex programs, since they are almost never compatible enough. You may build a house out of bricks, but you are never going to build a bigger house out of smaller ones! Or, more to the topic, you can easily use sed from your shell script to do some simple replacements, but you are never going to use Microsoft Word the same way. So, one way to build complex things from simple ones is to connect them together. And the simpler they are, the more complex things you can make (just think about how many kinds of houses you can build out of small bricks versus big blocks). This is one paradoxical way to go from simplicity to complexity.

Another one is about how complex instruments can solve a problem the easier way. This is about the whole computer thing, actually. I mean that computers are by nature complex devices meant to simplify our life. Instead of spending centuries calculating the value of Pi up to the millionth digit, you just write a simple program to do it for you. But in order to do that you need a very complex device called a “computer” and another complex thing called a “programming language implementation”. Thus complexity leads to simplicity.

But that is all still natural enough. Now for the most wonderful thing. Suppose you have some complex program. The question is: is there any way to make it more complex while making it simpler? Sounds like black magic, but in fact it is possible and easy! Just break it into simple parts and make each part a little bit more complex. Each part will still be simpler than the original program, and the whole thing will be more complex. And both of these are good: the simple parts make it easier to maintain, while the overall complexity only gives you more capabilities.

Remember that telemetry sending-receiving application I was talking about? At one point, I thought it would be nice to have a feature like being able to start several services simultaneously. But then I thought, “Hell, it is so complicated already! Let us live without it!”. Now I think that if I had chosen to implement the whole thing as four independent applications (one to receive, one to convert, one to send, and the last one to control everything), it would be much easier to implement this feature, because the receiving part would not look like such a big complicated mess as the whole thing does. Thus the best way to increase complexity without scaring everyone is first to take a step towards simplicity. Note that the reverse would be much harder: if you already have one big mess, it is not the best idea at all to first turn it into an even bigger mess and only then try to make it simpler.

Of course, the fact that it is easier to make things more complex by breaking them into simpler parts should not encourage one to do so right away. Even better would be to solve a more complex problem without complicating anything at all. If there is a way to do it, go for it!

Think before do

Okay, now I am going to say one very obvious, trivial thing. Before implementing something, think about it! Really obvious, is it not? Hell, then why do so many people not do it?!

I think the reason is that an inexperienced developer usually looks for just any solution. When he thinks he has found it, he usually rushes to implement it so he can think about a solution for the next problem, and so on. Well, then he realises that such a way of developing software is just not very productive. Then he reads a lot of books and articles and finally understands that he has to design before implementing.

Okay, so any good developer usually finds this out on his own. Why am I talking about it here, then? Because there is actually much more to thinking before implementing than just design. Here is what I am talking about. Suppose you have to develop some kind of network application. An inexperienced developer will probably rush off to write code right away and end up with a big unmaintainable monster with a screwed-up network protocol. A more experienced developer would first think about what exactly the application should do and how, and only then implement it. He has a global idea of what is going on, so his application tends to be much better than in the first case.

Now, to get to the point. For some reason, “much better” is often far from “the best”, and it tends to become worse with time. There may be a lot of reasons, actually, such as lack of time, the developer’s stupidity and so on. But one of those reasons is often a misunderstanding of this “think before implementing” concept. Well, I would even say “over-complicating” it, because it is actually simpler than it seems. When one reads a book about “software design”, he often thinks that the book contains some truths that he must know in order to design good software. Well, in some way that is true, but what people often forget is that no amount of knowledge can replace good thinking! Enough theorising, however. Let us go back to the example.

Suppose we have this application designed well on the global level. We know what it must do and how. And we start to implement it. But we do not live in an ideal world, so ideas often collide with harsh reality. We may have, for example, two C++ classes called “NetworkApplication” and “ClientPart”. The first is a general control class that is supposed to control everything. The second is a client protocol implementation. Both are well designed and well implemented. But then we start to connect them together and find some collisions. For example, different parts of NetworkApplication (say, the “beginSession” and “resumeSession” functions) need slightly different implementations of the same operation in ClientPart, say, “connect”. But the developer has already designed and implemented both classes, so when he faces this problem he thinks of it as a minor design problem and carelessly makes some dirty patch. For example, he implements some kind of wrapper over ClientPart’s “connect” in NetworkApplication, called “resumeConnect”. Or he implements it in ClientPart; it does not matter. What matters here is that he forgets to think before doing so.
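To make that concrete, the “dirty patch” I have in mind looks something like this (the class and function names are the ones from the example above; the bodies are made up):

class ClientPart {
public:
    void connect() { /* the one connect() the class was designed with */ }
};

class NetworkApplication {
public:
    void beginSession()  { client.connect(); /* ... */ }
    void resumeSession() { resumeConnect();  /* ... */ }

private:
    // the careless wrapper bolted on because resumeSession() needed a slightly
    // different connect; harmless on its own, but the first of many to come
    void resumeConnect() { /* tweak some state... */ client.connect(); /* ...and tweak it back */ }

    ClientPart client;
};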

What usually happens next is that the code becomes more and more messy over time, and people start to blame bad design, the stupid developer or whatever. The main point here is that they are right in some way about the “stupid developer” – I mean that everyone is the stupidest being in the world when they do not even try to think! It is actually not only about software development – it happens everywhere. But software development seems to be one of the areas where the smartest people make this mistake too often. What could be done to prevent the code from turning into a big mess over time? Well, here is what should happen:

1. The developer finds a problem. It may not seem to be a big problem, but he must nevertheless understand that it is a problem. A good way to achieve this is to notice yourself thinking “something has to be done about this” – if you are thinking this way, you have encountered a problem. It may be a little one, but you still have to solve it the correct way, not “just solve it”.
2. Once he has identified a problem, he should stop doing anything and start thinking about what exactly has to be done and how. And he should be careful not to grab the first solution, but to think a lot and decide which solution is the best. The funny thing is that the easier the problem, the harder this is to do – because people tend to think less about smaller problems, or not think at all. This is just a good way to turn a small problem into a big one.
3. Once the best solution has been found, the developer implements it. If he encounters something he did not think of, he must go back to thinking until he is sure that this solution is still the best one, or until a new best solution is found.

Remember how the dwarven king Loghaire Thunder Stone from Arcanum said, “Humans act first, think later and feel last of all”. The correct way in software development is first to feel that something is wrong (no matter how insignificant it may seem), then to think carefully about what to do about it, and only then do it. No excuses! Even if you are thinking about something as simple as the name of a local variable, or whether to declare it as a local or as a field.

The important point here is to find the best solution possible. Another reason developers tend to fix things quick and dirty is the programmer’s natural laziness. Well, a programmer has to be lazy to be a good programmer, but any good programmer knows that the best thing his laziness can do for him is not to find a solution quickly, but to find a solution that will save him from a lot of boring work later. So it is better to find a good solution now, perhaps by redesigning the whole model, than to get lost in a pile of “connect” implementations later.

Another important point is to be able to find the best solution. That is what books like “The Art of Unix Programming” are for.

Also, I would like to say it one more time: no amount of knowledge can replace good thinking! One may say, “hey, that developer just did not know about the thin glue rule!”. Well, that is his fault, right. And it is also true that if he had known about it, he would quickly have identified his problem of connecting the NetworkApplication and ClientPart classes as a glue layer problem and would have done his best to avoid glue bloat, thus going the right way. But the main point of this post is that no amount of knowledge can provide the best solution to every problem. So even if he were the most knowledgeable developer in the world, he would still encounter situations where he does not have enough knowledge to solve the next problem.

As I already said, this applies not only to software development. People often tend to find solutions by habit, or by asking other people, but not by thinking. And the smaller the problem, the more often it happens. The reason it is not so important in other areas is that small problems there do not usually turn into big ones. “My computer keeps crashing!” – “Go buy a new one!” – okay, that is a solution. If he thought about it, maybe he would find the reason it keeps crashing and fix it without spending a few hundred dollars on a new one – by reinstalling the OS, for example. But that is all! The old computer gets thrown away, and it will not turn the rest of his life into a nightmare trying to take revenge for it. Badly developed software probably will. At least until the problem is fixed the right way, or until the software gets thrown away (in the worst case).

It is a really simple thing: feel, think, do. Nothing more. No need to believe in God or Satan. No need to buy expensive proprietary tools that promise to create the finest products for you without your thinking about anything at all. You still have to know a lot of things to be able to find the best solutions, though, especially if the problem is not actually a very simple one, but that is a really obvious thing – one can get away without thinking and end up with badly developed software, but there is no way one can develop anything at all without knowing how!

The Rule of Representation from The Art of Unix Programming

Tremble in fear, oh lj-cut worshippers! Especially the “Russian-only” ones. Today is another day I trample upon your beliefs. Well, sorry about that (as if!).

Once I had to implement a C++ wrapper class over a poorly designed library, so that its poor design would not affect the rest of my application. This library uses a lot of callbacks, each of which may be called in different situations and therefore should be interpreted differently. For example, a “serviceStopped()” callback can mean that the service has just been stopped (surprised?), but it can just as well mean that the service has failed to start (surprised!). Another really “nice” thing was that most calls in this library are non-blocking, using the callback mechanism to notify the application about events, but the “connect()” call, used to establish a network connection, was blocking. Great! Now I have to fork a separate thread just to establish a connection!

And so, to correctly handle each possible situation (for it is a really important thing in real-time telemetry transmission software, you know!) I created this wrapper class and introduced a few concepts: state, target and operation. State is where exactly we are now: this whole thing can have three stable states (“no connection” or “idle”, “connection, but no service” and “connection and service”) and four intermediate states (“connecting”, “starting service”, “stopping service”, “disconnecting”). Target is what we want to achieve – this is actually a subset of the states: “disconnected” (we want no connection), “connected” (we want a connection, but no service), “running” (we want a connection and the service). And operation is what exactly is going on right now – these correspond to the four intermediate states plus “no operation”, which means that we are not actively doing anything at the moment. The difference between the “connecting” state and the “connecting” operation, for example, is that we can be stuck in the “connecting” state, which means we are trying to establish a connection, while the current operation may be either “connecting” (if there is an ongoing connection attempt) or “no operation” (if we are waiting between connection attempts). We really need this “operation” concept because if we suddenly want to cancel the connection attempt and there is a “connecting” operation going on right now, we can do nothing but wait until it completes (simply because there is no way in the library to cancel it). But if we are in the “connecting” state with “no operation”, we can just switch the state to “idle” and forget about the pending connection attempt. Get it?
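Written down as plain enums, the three concepts look like this (the identifiers are mine, I am not quoting the real class here):

enum State {
    Idle, Connected, Running,              // the three stable states
    Connecting, StartingService,           // ...and the four
    StoppingService, Disconnecting         // intermediate ones
};

enum Target { WantDisconnected, WantConnected, WantRunning };

enum Operation {
    NoOperation,                           // not actively doing anything right now
    ConnectingOp, StartingServiceOp, StoppingServiceOp, DisconnectingOp
};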

Now for the implementation. Okay, I must have a lot of callbacks in my class. Only a couple of them do really useful and significant work. The others are just there to watch over the library and make sure things will not break completely just because something nasty happened. Not an easy task if you do not know (and I really do not) what to expect! So I ended up with a lot of “switch (state)”, “switch (operation)” and the like. About 800-900 lines of code. Well, it is not that much, but still a lot. But then I thought, “well, it is much better than having to deal with this library directly, and maybe it is just the price I have to pay to avoid all that mess”.

Now for the other part. Everything above was actually about receiving telemetry. But once it has been received, I must convert it into another format (a big problem, too, but outside the scope of this post) and send it to the processing server. The good thing about the processing server is that I do not need a special library to talk to it – just connect to the specified TCP port, send a little magic data, and go on! Good. The bad thing is that there may be problems with the processing server, and they have to be treated with care, because just ignoring them could easily break everything on the server, including possible ongoing ISS telemetry sessions. Okay, so what am I to do? Well, the first thing I did was implement a kind of library to send telemetry to the processing server. Much simpler and much cleaner than the one I have to use to receive telemetry. The easy part. Just a simple mechanism, nothing more.

Okay, now I have to plug it into my application. But I still have to control it! I have to make different decisions based on what is going on right now. Well, I have better knowledge of what is going on (because this library is written by myself), but that does not free me from making decisions. Okay. Having a (relatively) positive experience with the receiving part, I implemented similar concepts for the sending class, except that there was no “service” concept, so only four states (two stable and two intermediate), two operations (three including no-op) and two targets. Nice. Then I started to implement the “decision making”, but quickly realised that it gives me a few too many of those switch statements. While I thought that was okay for the receiving part, it became clear that here, where I have complete control over the code, it would definitely be better to have a simpler but still clean and powerful mechanism. But how do I make decisions without using “if” or “switch” statements?!

Fortunately, a few days ago I stumbled upon a great book called “The Art of Unix Programming”. I have just started to read it, but at the very beginning I found the “Rule of Representation”: “Fold knowledge into data, so program logic can be stupid and robust”. And then I asked myself a question. What exactly determines what I should do at some point of execution? I thought for a while and realised that it is the operation-state-target combination. I thought a little more and found that the time when the current operation started (or when the last operation completed, if the current operation is no-op) also matters. For example, what to do if we are in the “connecting” state, with the “connecting” operation and the “connected” target? Nothing, obviously. Just wait until the connect operation completes… Oops, did I just say “wait”? But how long? What if this operation never completes? So we have to check the time, and if too much has passed since the operation started, maybe we had better just abort the connection attempt, print something in a red font, and switch the target to “disconnected”. And if the time has not run out yet, well, we have to come back to decision making after it does – unless something happens earlier (like the operation completing or failing). Looks clear enough, does it not?

So I came up with a solution where “what to do” is just a structure (called “action”) with two (yes, that many!) fields: one is the function to call and the other is the time that must pass after the beginning of the current operation or the end of the last one. If the function is NULL, do nothing. If the time is zero, call the function immediately. If it is not, check the time: if it has already run out, call the function; otherwise wait the remaining time and come back to decision making again. Then I created a three-dimensional (operation-state-target) array filled with these structures, putting “errorAction” as the function in each “impossible” element (like the “starting” operation in the “stopping” state). This “errorAction()” prints out the current operation-state-target combination, so I can easily debug it. And I implemented a “takeActions()” function that actually performs this “decision-making” logic using that “actions” array. I had to add a few other things to it, like updating the current state, handling program shutdown and suchlike, but this function is still small enough to fit on a single screen! And not a single “switch” statement in the whole class! Only a few “ifs”, but those are not worth fighting in the name of the Rule of Representation. As a result, about 350-400 lines of code – yes, half of what the receiving class took. Well, there is no “service” concept in it, and it does not have to deal with that messy library, but it still looks surprisingly small compared, for example, to just a draft of the “switch-based” implementation. Just think about it: trading a lot of huge switches for a single array of two-field structures! Another good thing about it is that if I realise that there is something else that affects my decisions or describes what exactly to do – no problem! Just add another field to the action structure, or introduce another state, operation or target, and change one function a little.
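Here is a rough sketch of how the whole thing fits together; the class, the enum values and the function names are my own illustration, not the real code, and only the “no operation” slice of the table is filled in:

// A compilable sketch of the "actions table" idea for the sending class.
#include <cstdio>

enum State     { ST_IDLE, ST_CONNECTING, ST_CONNECTED, ST_DISCONNECTING, STATE_COUNT };
enum Operation { OP_NONE, OP_CONNECTING, OP_DISCONNECTING, OPERATION_COUNT };
enum Target    { TG_DISCONNECTED, TG_CONNECTED, TARGET_COUNT };

class Sender {
public:
    Sender() : state(ST_CONNECTING), operation(OP_NONE), target(TG_CONNECTED) {}

    // The whole "decision making": look up one cell and either call its
    // function now, or come back when its timeout expires.
    void takeActions() {
        const Action &a = actions[operation][state][target];
        if (!a.func) return;                               // nothing to do, wait for events
        if (a.timeout == 0 || secondsSinceChange() >= a.timeout)
            (this->*a.func)();
        else
            scheduleCallIn(a.timeout - secondsSinceChange());
    }

private:
    typedef void (Sender::*ActionFunc)();
    struct Action {
        ActionFunc func;     // NULL means "do nothing"
        int        timeout;  // 0 = act now, otherwise seconds since the current
                             // operation started (or the last one ended)
    };

    void startConnecting()  { std::printf("starting a connection attempt\n"); }
    void abortConnecting()  { std::printf("attempt takes too long, aborting\n"); }
    void cancelConnecting() { std::printf("forgetting the pending attempt\n"); }
    void errorAction()      { std::printf("impossible: op=%d state=%d target=%d\n",
                                          operation, state, target); }

    int  secondsSinceChange() { return 42; }               // stub for the sketch
    void scheduleCallIn(int)  {}                           // stub for the sketch

    static const Action actions[OPERATION_COUNT][STATE_COUNT][TARGET_COUNT];
    State state; Operation operation; Target target;
};

// Knowledge folded into data: one cell per operation-state-target combination.
// Only the OP_NONE slice is filled in here; "impossible" cells get errorAction.
const Sender::Action Sender::actions[OPERATION_COUNT][STATE_COUNT][TARGET_COUNT] = {
    { // OP_NONE:                  target TG_DISCONNECTED            TG_CONNECTED
        /* ST_IDLE          */ { {0, 0},                         {&Sender::startConnecting, 0} },
        /* ST_CONNECTING    */ { {&Sender::cancelConnecting, 0}, {&Sender::startConnecting, 5} },
        /* ST_CONNECTED     */ { {0, 0} /* disconnect goes here */, {0, 0} },
        /* ST_DISCONNECTING */ { {&Sender::errorAction, 0},      {&Sender::errorAction, 0} },
    },
    // the OP_CONNECTING and OP_DISCONNECTING slices would be filled in the same way
};

int main() {
    Sender s;          // "connecting" state, "no operation", "connected" target
    s.takeActions();   // the retry delay has passed, so this starts a new attempt
}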

Now I am wondering whether it is worth reimplementing the receiving part using a similar mechanism. Probably it is, and maybe it would be nice to implement this logic on a more abstract level, without tying it to the specific problem: after all, nothing in the “takeActions()” function is related to sending telemetry at all!

Summary: one may be a good coder, but there is much more to software development than just coding. One has to be able to find better solutions, and for that one has to be able to smell “bad” solutions (like that switch-based one). And in order to achieve that, valuable sources of information exist, like “The Art of Unix Programming” and other stuff written by experienced developers (like Joel, whom I mentioned earlier) who really know what they are talking about – not those who only theorize about everything without actually trying it and living it.

One of the major UNIX flaws: lack of a user-friendly GUI

I have already said several times why UNIX is so good.

But now, having stumbled upon www.joelonsoftware.com (an invaluable resource indeed!) and read about “Biculturalism”, I have finally realized that one thing is terribly wrong about UNIX. It is exactly what Joel writes about:

“Aunt Marge can’t really use Unix, and repeated efforts to make a pretty front end for Unix that Aunt Marge can use have failed, entirely because these efforts were done by programmers who were steeped in the Unix culture.”

That is it! It is not that UNIX lacks good software (not talking about OCR here), and it is not that the UNIX installation procedure is too complicated for an inexperienced user (the same could be said about Windows as well). The main problem is different, and it is exactly what Joel wrote about. Unix is just user-unfriendly. It is a very obvious thing, actually, but I have fully realized it only now, because for me it is not unfriendly – quite the opposite, actually.

What do I mean here? No, it is not only that KDE really sucks. I have known that for a long time. It is something much deeper. UNIX lacks an “average user layer”. I mean, on Windows we have a nice desktop, “My Computer”, “My Documents” and such. I know that drives “C” and “D” are not part of any logical unit in the file system that could be called “My Computer”. I know that “My Documents” is just another directory on the hard drive, so there is no reason to give it a special icon or to make it the default directory for a lot of operations. But Windows pretends it is all different. It pretends that we really have something in our PC that could be called a “desktop”, “My Computer” and so on. The bad thing about it is that it makes things more complicated than they should be – I mean, why the hell should we give some directory (which even has a space in its name!) special status just because Microsoft thinks so? The good thing is, ironically, the same – for the average user it is easier to work with “My Documents”, “My Computer” and the “desktop”. No, not because he is too stupid to know what a directory and a file are. Those who are really stupid have a lot of trouble with Windows too, so we had better leave them alone, hoping they will do the same for us, heh. What I am talking about here is not just a matter of knowledge. The key point (the point I came to understand only now) is that people feel more comfortable with familiar concepts. That is, while I do not really care whether it is named “docs” or “My Documents” and therefore choose the former because it is shorter, lowercase and has no spaces, Aunt Marge will just get scared of “docs” because it looks too weird and too similar to all those “lib”, “etc”, “bin” and the like. As you can see, it is not only a problem of a good desktop environment – the roots lie deep within the system, even in such simple things as directory names.

So what do we need in UNIX for it to be able to beat Windows completely? First, we need an ideal Windows emulation. Yes, just like what Wine gives us, but much better. Well, the good news is that the Windows API seems to be completely abandoned, so we can hope that Wine catches up with it someday. Now we have another evil weapon called .Net, but it already has UNIX implementations such as “Mono”. Second, we need a lot of good software, preferably open source, but good commercial software will probably do as well. And third, we absolutely, undoubtedly need something called an “average user interface” layer. No, this is not a reason to rename all those sacred directories as Apple developers did. UNIX is UNIX. “bin” should stay “bin”, and there is no reason to move FreeBSD’s /etc/rc.conf away just because it scares Aunt Marge. After all, we have a lot of much more awful stuff in Windows – just look into the system32 directory! But ordinary users never really see that stuff, so it is okay. And that is exactly what we need on UNIX. We need a file manager that not only resembles their favorite Explorer but behaves like it, and that hides anything users are not used to seeing. We need a desktop environment that is not as complicated and buggy as KDE and that lets people do what they are used to doing – like dragging their files from drive “A:” (well, it does not have to be called “A:”, but it should have an icon very similar to the one in Windows and should be named in their native language (like “Floppy Disk”), not “fd0”) to their desktop. We need something similar to the dreadful Autorun feature, because a lot of people are used to inserting a disc into the CD drive and waiting for Something Good to happen. And we need a lot of other stuff – and the main point is that it should all work, should not be more buggy than Windows, and should make people feel comfortable! And to achieve all of that, it should be done by people who feel what “user friendly” is like, not just by good UNIX software developers.

In an ideal world, this should look almost identical to Windows, but be free, run faster, require a lot less RAM and disk space, crash less often than Windows (the easiest part, even compared to modern stable versions of Windows) and at the same time be UNIX – I mean that pressing some magical key sequence like “Ctrl+Alt+Scroll Lock”, or choosing “Switch to classic UNIX” in the “Log out” menu, should bring back our favourite OS without any weird attachments like a “/My Documents” directory. You say it is impossible? Well, I am not so sure about that, but even if it is, then it just means that Windows will remain the most popular OS for a while. Bear with it or fight with it. Not by screaming “Microsoft sucks and is not fair!”, but by giving people what they expect, not what you think they should expect. This is reality, not your dreams.