May 28, 2007

You can't write everything by yourself - the missing link

I always tend to build software myself - not just coding, but also thinking about it, architecting it, experimenting with it and so on. It is always difficult to convince anyone of this approach when an off-the-shelf product could be reused instead. Whenever such an argument comes up, I often hear this - "Well, you can't write everything by yourself, you have to use product X !" Indeed, it is impossible to rewrite an OS, a database engine or a programming language - you have to reuse an existing piece of software. Not that I believe in a developer with a golden keyboard who writes infallible code (in Redmond, Bangalore or anywhere else), it's just that writing everything yourself is very impractical. Moreover, the truth is - you really can reuse just about anything, and doing so often gives good results. I never knew an answer to this, until now. I suddenly understood that it is this:

Even if you haven't written it yourself, you ought to know it as deeply as though you had.

The typical approach contradicts this: you install something and just assume that it does what you need, exactly the way you need it, and that it will save you from every problem you might possibly have. This is a mistake, a very convenient illusion. More often than not, it results in products being used the wrong way, performing terribly, breaking on you, being cursed and thrown away, only to be replaced with another of the same kind, only more expensive and fashionable.

It is therefore crucial to understand the working principles of the software that you reuse - to know its internals, what kinds of problems to expect, and so on. And, if you will excuse me for repeating myself, doing a lot of development on your own helps enormously in understanding software written by somebody else.

April 13, 2007

Push a button, have ten bucks paid, repeat a thousand times

Consider a situation where a customer pays for something on the Internet. There is a huge difference in perception between the client who pays and the provider who collects the payments - the impression the two sides have of the scale of the affair is completely different.

See, if you are a customer and all you want is to pay $10 for something on the Internet, to you it's a matter of efficiency - which hoops you have to jump through to get it done and how fast you get the stuff you pay for. In an online transaction like that the money itself doesn't matter much to a client, for the following reasons -

1. The client thinks of an electronic payment as a payment with real money, which cannot be mishandled and requires no processing that could be delayed or refused;

2. It's actually the merchandise or the service that the client wants at the time. The necessity to pay money, even online, is a mandatory inconvenience, an obstacle to it;

3. The amount of money in question is not that large. Even in the worst case the customer's risk is nearly zero.

I'm not saying a customer will tolerate losing money in an online transaction. What I'm saying is that at the moment of such a transaction the client will not worry much about what could happen. To a client, it's all "push a button, have ten bucks paid - how complicated could such a simple procedure possibly be ?"

And it wouldn't be complicated, if it weren't for the thousands of customers. When serving a single customer, it's easy to take the money, even manually, over a counter, and process the payment, but processing a stream of thousands of payments is different. Then you have a different perspective on the same $10.

When you are a provider, the problems that you face all have the same root - that you are a money pipe - anyone can use your services to buy something for themselves. What are the outcomes ?

1. Responsibility to your customers. You simply cannot afford to fail. When you fail to deliver, the customers will haunt you, even for the same lousy $10. Dealing with this requires a certain investment in the reliability of the solution.

2. Freedom to be abused. The entire world has a strong incentive to hack you and profit from it. This calls for security-oriented thinking.

3. Overwhelming complexity. Unlike the customer, you understand the guts of the service, and see the great many places in which any given payment can fail. And you have to maintain it.

See, from the provider's point of view the same ten-buck payment becomes nothing short of a hand grenade.

As it is my primary job to develop such solutions, I'm obviously on the provider's side. And since I'm a software developer, I have one more problem to deal with - the deceptively simple outside look of the solution; remember - one button, ten bucks... Should the management adopt the customer's view, then for them it becomes a similar question of "how difficult could the development of something this simple possibly be ?"

But then, indeed, how difficult could it be to develop something that simple ?

March 13, 2007

Re: The Illusion of Certainty

Found this article on Design Observer:

The illusion of certainty

An interesting observation on how illusory the structure imposed by a form (or by anything, really) can be.

Quote:

... the rational side of our brains leads us to such solutions because they gesture to an odd kind of certainty. That tension — between structure and freedom, between form and its variation — is an essential characteristic of design thinking.

How new is the new Blogger ?

A quick question before I proceed to another post I was originally up to.


How new is the new Blogger compared to the old one, if in terms of user experience they differ only in the login form ?

March 06, 2007

Do radars have screensavers ?

I was just wondering -



do radars have screensavers themselves ?

March 01, 2007

What difference does it make to know your people ?

A banner has just popped up - "monster.com, 40 000 000 resumes". Or something like that.

Imagine a database of 40 000 000 documents prepared by people who have a strong incentive to lie. What would the level of noise be ? How reliable would it be ? How much sifting through is required to find anyone ?

Compare this to how this company works - Core Search Group

A recruiter from them once contacted me, but not with a standard template offer - heck, not even with an offer at all. The guy had actually read my blog and (I'd guess) my published resume, and addressed me with remarks about it that made sense ! I responded with a detailed explanation of who I am and how I work, and this, besides my resume, may now be somewhere in their files.

I realize this is their work, but still, what a difference it makes !

February 26, 2007

What did you do today ?

This one has been bugging me for a long time now.

It appears that many people hate the work they do. I can see how one can be forced to take a job he doesn't like - fate, bad luck, blah blah - but still, I totally don't understand this: how can you live if you cannot be proud of the work that you do ?

What do you say when you come home every day ? And I don't mean to your family, but to yourself - what do you say to yourself ? "I did what today ?"

How comfortable is it when you can't point a finger and say "I made this" ?

When you are making crap, and everybody knows it, when people curse and spit when they encounter something that you've made, how do you feel ?

I do believe that most people still feel bad when they do their jobs badly. Although they can find excuses and even blame somebody else, there has to be something to it, because if such behaviour were perfectly ok with them, they wouldn't be looking for excuses and blaming others - and more aggression means more discomfort.

Which means a lot of people hate their jobs, do them miserably and are under more or less permanent stress about it. Isn't that terrible ?

February 20, 2007

Never underestimate the power of randomness

I've just returned from a deep testing and debugging session and all I can say is, again - wow ! Never underestimate the power of randomness !

The system I was testing is a complex of network services built on top of the Pythomnic platform, with multiple Python processes scattered across multiple servers and intertwined in a redundant and fault-tolerant fashion. When it goes live, it's going to be the billing hub service for the bank where I work. It has to deal with all sorts of payments to all sorts of providers, and so my job is to build a system into which modules for specific providers will be plugged later. It also transfers money, so it'd better be reliable.

Someday I'm going to describe the design of that system as a case study for Pythomnic and publish it on its web site. That will come later, but for now, here is my recipe for the best testing:

Stress + failure injection + randomness

Stress: don't spare the system you are testing. The users will not. Give it as high a load as it can handle, and then some. It's no problem if it breaks now, and the number of problems (not always bugs) revealed under unbearable load is surprising.
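
For illustration only, here is roughly what I mean - a toy load driver in Python. The make_payment() call is a made-up stand-in for whatever entry point the system under test actually exposes:

from threading import Thread
from random import random
from time import sleep

def make_payment(order_id, amount):   # hypothetical stand-in for the real entry point
    sleep(random() * 0.01)            # pretend to do a little work

def hammer(n_requests):
    for i in range(n_requests):
        try:
            make_payment(order_id = i, amount = 10.00)
        except Exception:
            pass                      # failures are expected under stress, count them elsewhere

threads = [ Thread(target = hammer, args = (1000, )) for _ in range(50) ]
for t in threads: t.start()
for t in threads: t.join()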

Failure injection: don't expect the problems to happen just because you are testing. Make them happen. Break stuff. Insert something like:

from random import random
from time import sleep

if random() < 0.01:
    raise Exception("failure before provider request")

if random() < 0.001:
    sleep(3600)   # simulate a provider request that hangs
    raise Exception("provider request hangs")

result = provider_request()

if random() < 0.01:
    raise Exception("failure after provider request")


Insert it all over the place. Well, there is no point inserting failures between every two statements - it quickly gets cumbersome - but you should decorate each "external" call with such an injected-failure frame. It may be a database request, a specific API call etc.
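
If it helps, such a frame can literally be factored into a decorator. A minimal sketch - injected_failure is a name I'm making up here, and provider_request() stands for any real external call:

from functools import wraps
from random import random

def injected_failure(p_fail = 0.01):
    def decorator(call):
        @wraps(call)
        def wrapper(*args, **kwargs):
            if random() < p_fail:
                raise Exception("injected failure before %s" % call.__name__)
            result = call(*args, **kwargs)
            if random() < p_fail:
                raise Exception("injected failure after %s" % call.__name__)
            return result
        return wrapper
    return decorator

@injected_failure(p_fail = 0.01)
def provider_request():   # stand-in for a real external call
    return "ok"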

Randomness: that's my favourite part - in testing, you can't beat randomness. You would never make up such a combination of failures as random() would. Make sure your random switches cover all the major code paths and let it run for a while. If it succeeds, you can be pretty certain the system is working. To be sure, such random testing may not catch all of the special border cases in each of the modules, but for load testing it's invaluable.
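
To give an idea of what covering the major code paths at random could look like, here is a sketch of a driver - the operations pay, refund and status are made up, in reality they would call into the actual system:

from random import random, choice, uniform
from time import sleep

def pay(amount): return "paid %.2f" % amount          # stand-ins for the real
def refund(amount): return "refunded %.2f" % amount   # code paths being exercised
def status(): return "ok"

def random_scenario():
    op = choice([ pay, pay, pay, refund, status ])    # weighted towards the common path
    if op is status:
        return status()
    return op(uniform(0.01, 100.00))                  # a random amount every time

for _ in range(100000):
    random_scenario()
    if random() < 0.1:
        sleep(0.001)                                  # random pauses vary the timing too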

January 18, 2007

What eternal things have we produced ?

For any system, living or artificial, to span a long time, it ought to change. Evolution and change over time are sure signs of life. Rigidness, immutability and stasis are no less sure signs of death.

Now, there is an interesting perspective to this. The striving for perfection leads us to create something absolute, things that will exist forever. Any artist or craftsman would certainly like his creation to last for a thousand years. But is eternal, unchanged presence good ? By the way, have you seen anything eternal recently ?

Look around. Look for something perfect. Indestructible. Eternal. See anything like that ?

Plastic, glass, and radiation.

Pretty much anything man-made will eventually disappear. Any piece of art will die out. Any building or construction will collapse. But not those three, not in any foreseeable future.

Every single grocery bag will stay forever. Empires rise and fall, but the empty milk bottle will still be knocking around. Radiation will remain invisible, but the half-life principle ensures it will only vanish in the asymptotically distant future.

Isn't it ironic ? Do we need eternal plastic bags ? No. Perfect milk bottles ? No. But this is what we've got. To make it worse, nothing really complex and useful can be made out of plastic or glass alone. Anything constructed with them will decay and leave behind useless pieces of eternal stuff. Somehow we have finally managed to produce something that will outlast us a million times over. Something perfect. Something perfectly useless and dead.

December 30, 2006

Why let processing power go to waste ?

I don't know about you, but seeing unused hardware troubles me. It doesn't matter if it's an old 386 desktop or the 8-year-old Compaq Proliant 7000 which I've recently acquired.

Hardware is always fun to set up, see running and play around with. The folklore about hundreds of refurbished Pentiums making up a cluster in a garage fascinates me. I do realize that a single modern server would eat that hundred for breakfast, but still, wouldn't it be fun running that garage ?

And so my working PC and the servers that are in testing or spare are normally loaded up high most of the time, doing what ? Running workload tests, of course ! Right now I'm leaving it running a stress test for Pythomnic, specifically its new capability of distributed transactions with recovery. The test will run for a few days, and I hope that when I return from the holidays it's still up and running.