Fri 19 Jun 2009
I have been a computer freak for longer than I would like to admit (I just realized that when I was writing programs on my first computer – the venerable C64 – most of my current team hadn’t even been born yet). The shape of computing has changed a lot over that time. Computers now have infinitely more memory and processing power than my first machine. But one thing didn’t improve as much: overall software quality.
Every new programming paradigm or idea that has come along since computers crawled out of government labs had “better quality” as part of its promise. Yet somehow we are still surrounded by unreliable, poorly designed software. Making software more graphical didn’t change that fundamentally. And sadly, web applications are no exception.
At first, at least, the “hard to use” part was partially solved: early web interfaces were simple, which made them clean, easy to grasp and fast. Now, however, this is quickly becoming a thing of the past. Modern web apps are full of heavy graphics, JavaScript is being pushed to the limits of its capabilities, and the result is bloated UIs that browsers barely cope with. Opening a few of those modern web apps in Firefox makes it the most CPU-consuming process on my system, usually eating lots of memory (and frequently leaking it). Sites regularly display errors, get overloaded or otherwise malfunction.
Why is that so? Let me offer two reasons; they don’t exclude each other, so both can be true.
First, I think we are all spoiled by computing power and memory getting ever cheaper and more abundant. In the past machines changed very slowly. The C64 I mentioned, for example, didn’t change at all over a period of ten years, yet software for it improved immensely. Programmers studied the hardware and by the late eighties made it do things the machine’s original designers didn’t think it was capable of. That was truly pushing the limits.
Now, if an application is a resource hog, it is easy to just give it enough resources to run. That’s why each release of MS Office is slower and bigger than the previous one, with marginal functional improvements. The same has happened to web browsers and almost all desktop apps. Increases in CPU speed and memory size mask the fact that today’s desktop apps are usually bloated and slow, but only to a certain degree. The subjective perception is that they work about as fast as the previous generation, and everyone accepts that – even though everything should run far faster considering how much the machines this software runs on have improved.
Clusters, and now cloud computing, have made the same easy fix available for server-based software too – and apart from the UI, that’s what web apps are. You can throw a bunch of servers at the problem and forget about optimization. And it works, economically too. Sites still turn a profit on mediocre codebases, because computing power is cheap and because usually very little depends on a web app. If it breaks or runs slowly, no one dies and nothing of importance is lost – the impact of low quality is not easily seen. So there is no economic incentive to improve.
This, together with the general acceptance of software as inherently buggy (the effect of customers being trained for decades to accept dismally poor software from Microsoft), causes buyers of software products to accept low quality as the norm. They have learned to live with it, and it takes some effort to bring quality into their perspective at all. In fact, I think even some of our clients don’t appreciate all the effort we put into optimization and testing, because they don’t see the benefits of having a piece of web software that is robust and scalable.
Second, I think many of today’s web applications are created by people with little or no education (formal or otherwise) in computer science as such. Many of the “cool web 2.0 kids” have no idea how the computers they use really work inside. I don’t think many of them know, for example, what preemptive time sharing is, let alone are able to estimate the computational cost of the algorithms they use (if they know the word “algorithm” at all). Very few know and understand the operating systems they use, or have ever heard of highly available design and other such practices.
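To make the “computational cost” point concrete, here is a toy sketch in Python (purely illustrative, not taken from any real project): two functions that both correctly find duplicate usernames, yet scale completely differently.

```python
# Toy illustration: two ways to detect duplicate usernames in a list.
# Both are "correct", but their costs differ wildly as the list grows.

def has_duplicates_naive(names):
    """Compare every pair: roughly n*n steps for n names (quadratic cost)."""
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if names[i] == names[j]:
                return True
    return False

def has_duplicates_fast(names):
    """Remember what has been seen in a set: roughly n steps (linear cost)."""
    seen = set()
    for name in names:
        if name in seen:
            return True
        seen.add(name)
    return False
```

With a hundred names the difference is invisible; with a million, the naive version performs roughly half a trillion comparisons and simply hangs – and no reasonable pile of extra servers will save it.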
Tim O’Reilly predicted in the late nineties that easy web technologies (which back then meant PHP and early CMSes) would allow people with little or no technical knowledge to express themselves on the web. And indeed they do – but much of this “creative expression” is, well, crappy software.
All this means that despite everyone talking a lot about quality, there is very little push to actually deliver it in web apps. In fact, quality gets squeezed out from both sides: clients don’t care about it when they order their web apps, and developers frequently don’t have the knowledge needed to deliver it.
The upside is that this will change as more and more comes to depend on IT systems built on web technology. Crashes, data loss or hour-long “scheduled downtimes” (like Twitter’s) won’t be acceptable. Buyers will, in time, bear the cost of neglecting quality from the start and – learning from their mistakes – will insist on it in their next project. That is great news for us, because we already have the knowledge and practices in place to deliver web apps that are also good software. In the meantime we have to keep educating people that quality counts even in web apps.
June 20th, 2009 at 7:06
Software development is an impressively broad field.
Many devs do it because it was easy to get something done, because little is required at the very beginning, and because it is a well-paid job. Some of them have never deepened their knowledge, though. But hey, some CS masters are no better.
I think this is the real issue. You can spend three times as much and produce high-quality software, but why would you do that if what you have appears to be working already? That’s the crazy thing about so-called software *engineering*. If you produced anything that bad in any real engineering field, it would be plainly visible how bad it is – provided it lasted long enough for somebody to see it.