April 29, 2006

Software: too many layers, too few tiers

Layers and tiers are both ways of enforcing structure onto a software system. I will use the terms layer = "logically isolated group of modules within a process" and tier = "physically isolated group of modules within a system". The difference is basically logical vs. physical.

Now, layers are far more popular and friendly to any development environment, because they
  • are directly supported by the language and/or runtime
  • allow greater flexibility and freedom of interaction between different layers
  • match the accustomed way of thinking about software
OOP with classes and interfaces is all about layers.

Tiers, in turn, are more difficult to deal with, because they
  • require more upfront design
  • require complex and/or restrictive and/or expensive (un)marshalling
  • require a different way of thinking
There are also obvious upsides to tiers, mostly about independence, e.g.
  • deployment and execution independence
  • development and language independence
  • reuse independence
CBD with components and interfaces is all about tiers. Right now the choice is basically limited to a few popular component-based environments, such as DCOM, CORBA or (somewhat differently) web services and SOAP.

I sincerely believe that having more tiers is beneficial, and it would be great if there were a way of making tiers as easy to use as layers. And so, guess what, this is one of the ideas behind Pythomnic (a Python framework for building reliable services) - to allow a fast and easy way of converting layers into tiers, to mix and match modules in any way.

For example, if there is a cleanly separable function (e.g. a CPU-intensive XSLT transformation) currently allocated to a module or a set of modules in a layer, you may also take one step further and declare that this function can possibly be separated into a different tier. For example, instead of calling
pmnc.xslt.transform(...)
you do
pmnc.execute.on("xslt_transformer").xslt.transform(...)
The key thing to note is the way execute.on works: if the "xslt_transformer" RPC channel is not configured in the appropriate configuration file, the pmnc call will still be local, but as soon as you modify the configuration file and save it, the very next call will go to the specified server. There is no need to restart Pythomnic on this machine; all you need to do is copy the xslt module to a separate server and start it there in its own Pythomnic, thus turning a layer into a tier.
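To make the pattern concrete, here is a rough sketch of how such configuration-driven local/remote dispatch might be put together. This is not actual Pythomnic source - the names rpc_channels, execute_on, LocalDispatch, RemoteDispatch and RemoteModule are all invented for illustration:

rpc_channels = {}  # reread from the configuration file on change,
                   # e.g. {"xslt_transformer": ("10.0.0.5", 8080)}

class xslt:  # stand-in for the local xslt module
    @staticmethod
    def transform(doc):
        return "<transformed>" + doc + "</transformed>"

class LocalDispatch:  # resolves module.method calls in-process
    def __getattr__(self, module_name):
        return globals()[module_name]

class RemoteModule:  # represents one module on a remote server
    def __init__(self, address, module_name):
        self._address = address
        self._module = module_name
    def __getattr__(self, method_name):
        def call(*args):
            # a real implementation would marshal the arguments,
            # ship them to self._address and return the result
            print("RPC to %s:%d, calling %s.%s%r" %
                  (self._address[0], self._address[1],
                   self._module, method_name, args))
        return call

class RemoteDispatch:  # resolves the same calls over an RPC channel
    def __init__(self, address):
        self._address = address
    def __getattr__(self, module_name):
        return RemoteModule(self._address, module_name)

def execute_on(channel):
    address = rpc_channels.get(channel)
    if address is None:
        return LocalDispatch()      # channel not configured: stay local
    return RemoteDispatch(address)  # channel configured: go remote

print(execute_on("xslt_transformer").xslt.transform("doc"))  # local call
rpc_channels["xslt_transformer"] = ("10.0.0.5", 8080)
execute_on("xslt_transformer").xslt.transform("doc")         # now remote

The point of the sketch is that the call site stays identical; whether a given module is a layer or a tier becomes purely a deployment decision.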

I do believe that such a feature is beneficial to a middleware development framework.

April 23, 2006

One note on Python simplicity in handling phoenix problems

One of the major problems in developing long-running applications (and applications in general, really) is handling shutdown cleanly. A particular subproblem is that of phoenix singletons - what sometimes happens when one entity references another (typically a global singleton) which has already been unloaded.

For instance, consider the following shutdown code in moduleA (using simplified Python syntax):

moduleA:
...
def shutdown():
    moduleB.callme()
There is no guarantee that moduleB has not already been shut down. Now, if moduleB is also written in a delayed-initialization fashion, e.g.:

moduleB:
...
impl = Impl()
...
def callme():
    if not impl.initialized():
        impl.initialize()
    return impl.callme()
...
def shutdown():
    impl.cleanup()
then what happens upon moduleA's shutdown is a reinitialization of the impl - a sort of phoenix. It just dawned on me that instead of building complex synchronization schemes to handle this cleanly, all I need to do to prevent it is just

moduleB:
...
def shutdown():
    global impl   # required, otherwise del would refer to a local name
    impl.cleanup()
    del impl
and now, as impl is simply not there, moduleA's attempt to reference it at shutdown will fail and throw (a NameError from within callme) - a much more appropriate behaviour in this situation. It is a less clean solution, but how simple it is !
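For the record, here is a minimal self-contained demo of the trick collapsed into one script; the Impl class below is a trivial stand-in invented just for this example:

class Impl:
    def initialized(self):
        return True
    def callme(self):
        return "result"
    def cleanup(self):
        print("cleaned up")

impl = Impl()

def callme():
    return impl.callme()

def shutdown():
    global impl
    impl.cleanup()
    del impl

shutdown()            # prints "cleaned up"
try:
    callme()          # impl is gone, so this raises NameError
except NameError:
    print("late call correctly rejected")

Instead of silently resurrecting the singleton, the late caller gets an immediate, diagnosable failure.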

April 19, 2006

Shortcuts to icons to shortcuts: what happens if you click that ?

It seems that desktop icons have a rather confusing nature. Let's say there is an object, e.g. an executable file. Whenever it is physically present on the desktop, it appears as an icon. More frequently, though, the object itself is not on the desktop, but a shortcut, an object of a different kind, is; the shortcut appears as the same icon, albeit with a little arrow in the lower left corner.

Hence problem #1. There are different icons on the desktop, with and without arrows, but single-clicking on them reveals identical behaviour - the target object is invoked. What is the arrow for then ? For the user to see which is the "real thing" and which is a representation of a concept she can't quite grasp anyway ? Wouldn't it be more logical if non-shortcuts couldn't be placed on the desktop at all ?

Next, surprisingly, there are other places on the desktop where icons appear - the task bar, the quick launch bar and the system tray. Those add to the confusion because, although their icons look identical, their behaviours differ even more:

  • An icon on the quick launch bar represents the possibility of starting an application; clicking it starts another instance.
  • An icon on the task bar represents an already running application; clicking it brings it to the front.
  • An icon in the system tray represents an application willing to talk, but what happens when you click it is not known beforehand.

Hence problem #2: wouldn't it be appropriate if all these icons were different to some degree, or behaved in a more consistent fashion ?

April 18, 2006

On (information security) audit: giving money to the developers

Auditing information systems is fiendishly difficult. Think about it - a typical situation for a developer is discovering problems in the _small_ pieces of code that she's working on _right_now_. A few days later, other problems may be discovered. Half a year from then - yet others.

Then, as the system is assembled and parts developed by different people come together, a whole new world of problems emerges. Even the people who built the system have only scattered knowledge of it themselves.

Now, to the audit. Suits come in, unpack their laptops, run standard tests, look (!) at everything and ask tough questions. A week later they conclude whether the system, of which the very authors have no complete knowledge, is good or not. And then they leave.

Hence my point - a good team should be doing internal audits as it goes. A good developer should be running custom-tailored tests, looking at the thing, and asking tough questions no worse than the auditors do. And the knowledge remains with the company.

Therefore, why not invest the same money into team education, so that they become their own auditors ? It's the old "give a fish" vs. "teach to fish" thing.

I realize there are PR and sometimes legal aspects to an audit, but to a developer, PR along with legalities doesn't make much sense.