November 26, 2009

The Bundeswehr's IT project in crisis. How stupid can you get?

The latest news: "costs are spiraling out of control at the Bundeswehr's IT project Herkules".

Not really that new, actually. "Already in June, an internal Bundeswehr report gave the project a devastating assessment".
But: Herkules appeared in this blog five years ago already: Large Software Projects in Crisis.

Particularly interesting facts:

  • IBM, Siemens and the federal government founded a joint venture for the project
  • The project has been in crisis for five years, but executives still received bonuses this year
  • The federal government has no say, since it holds only 49 % of the shares
  • The project "can no longer even be calculated"
  • Ministry: the reason for the rising costs is a lack of competition
Has anyone actually noticed that the federal government pays 100 %? At any lousy startup, investors who put up 30 % get a seat on the supervisory board. From 50 % financing upwards, they like to have a majority on the board.

The client allowed the contractors to found a separate company to handle the project. Someone probably recommended that to the government, "to bundle everything". But the most important property of this company is that it can simply be shut down if something goes wrong. The government then cannot even sue the contractor, which by that point has been dissolved. Siemens/IBM are off the hook. A specially founded GmbH is THE method for shedding responsibility. Everybody knows that.

"Der fehlende Wettbewerb lässt die Kosten explodieren", hä? Was erwartet man denn, wenn man eine Firma mit der Abwicklung von mehreren Milliarden Euro beauftragt. Wo soll der Wettbewerb denn herkommen?

Who actually negotiated this contract? That is not just stupidity, it is breach of fiduciary duty. Maybe someone should take a look at where the management worked before. Whether someone switched over from the ministry and is now earning well.

How about a parliamentary inquiry: is anyone who negotiated the contract on the government's side now working at the joint venture or at IBM/Siemens?

_happy_moneywashing()

November 21, 2009

Integration Tests are a Superset of Unit Tests

The agile world is unit test crazy. That's ok. But it is not enough. Integration tests are a much undervalued species. I like unit tests. I need unit tests. I do not know how we could ever build software without unit tests. But pure unit tests fall short. We need more.

I am talking about unit test frameworks like NUnit. You just write the test function, add an attribute, and the framework finds the function and adds it to the test list. Your test function tests a real function. Hundreds of those make a complete test set. Great, but not enough. What is missing?
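For illustration, a minimal sketch of that convention-based discovery, using Python's unittest instead of NUnit (the `add` function and the test names are made up): the framework collects every `test_*` method, much like NUnit collects methods marked with a [Test] attribute.

```python
import unittest

def add(a, b):
    # the "real function" under test (made up for this sketch)
    return a + b

class AddTests(unittest.TestCase):
    # discovered by naming convention, no manual registration needed
    def test_add_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-2, -3), -5)

# the framework finds both test methods and runs them
suite = unittest.TestLoader().loadTestsFromTestCase(AddTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Multiply this by a few hundred functions and you have the "complete test set" with the green bar.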

You are supposed to write unit test code which tests only single functions and methods. The idea is to isolate functionality and test it in isolation. I know the theory: interfaces are contracts. There are unit tests for every behaviour. Even failures should be verified. TDD (test-driven development) writes the test as a specification. I know all that bloat.

The fact is: these tests are important, but they ignore reality. Most problems result from interdependencies and side effects. In theory, there are no side effects. In reality, they are there. Unit testing reduces them. Unit testing gives complex systems a chance to work. A chance is not enough. We must make sure that systems work, not only functions. That's the job of the integration test.

Integration tests verify complex operations. Example: assume I have a typical cached business object. An integration test would check the complete process:

  • fetch the object from the cache,
  • if it's not there, construct it from the DB,
  • put it in the cache and
  • return it at the same time.
Communication with the DB goes through a web service interface and two layers: the storage driver and the SQL access code. This is a vertical test.

In contrast: unit tests would test everything in isolation:
  • does the cache access work? with a simulated in-memory cache. Beware of the network inside of unit tests.
  • does the database access work? using a fake DB, because relying on a real DB server is uncool, not "isolated" enough.
  • does the web service return the correct data? using a mock request object and fake data.
  • does the web service format correctly? again: fake data.
  • everything tested with made-up configuration data and carefully constructed dependency injection.
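One of those isolated tests might look like this: the DB is replaced by a mock, so nothing touches the network (a sketch using Python's unittest.mock for illustration; `load_name` and the record layout are made up):

```python
from unittest.mock import Mock

def load_name(key, db):
    # the "unit" under test: one function, nothing else
    row = db.load(key)
    return row["name"]

# fake DB: a mock with canned data instead of a real server
db = Mock()
db.load.return_value = {"id": "42", "name": "object-42"}

assert load_name("42", db) == "object-42"
db.load.assert_called_once_with("42")
```

Perfectly isolated, and perfectly silent about whether the real cache, DB and web service cooperate.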
I am totally pro isolated tests. But they are so un-real, mocked-up, simulated. They need something real on top. They need integration tests. Integration tests assert that complex systems work despite complexity. We need integration tests. We need lots of them. Unit tests just hope that everything works together. Dream nicely.

We need integration tests anyway. And integration tests run in the real system. They are live. Unit tests sit in a separate project and cannot be live. Two separate test sets. Too many for me. There is no reason to split testing into two test sets which are operated very differently. You run the test project from time to time and are happy about the green bar. I run the real project all the time. Why should I run the test project from time to time, when I can run tests in the real environment with just one click?

Integration tests can be written to test isolated functions. They are a superset of unit tests. My integration tests check isolated functions PLUS vertical functionality PLUS live system health.

Unit test frameworks fall short. They help, but ignore reality.
Get real. Switch to integration testing.

Update:

Changed the title from "Superior to" to "a Superset of".

_happy_testing()

November 3, 2009

mod_mono Control Panel Extension

Just added a list of URLs to the mod_mono Control Panel (CP).

The Apache module mod_mono has a small control panel. The CP has some (actually very few) control features. You can see how many requests are currently processed, how many are waiting and you can restart the server process (the mono worker process, not the web server).

It can be enabled by

  <Location /mono>
    SetHandler mono-ctrl
  </Location>
Then go to http://your.host/mono. Unfortunately, it only shows how many requests are in work. I was missing information about which requests are being processed and which are waiting. I need the URLs. I want to know which ones take a long time and which block the server under high load.

So, I extended the CP to show a simple list of currently processed and waiting URLs.
  • First column is the request serial number.
  • The second column is the processing time in seconds.
  • The third column is the URL with query.
Waiting requests are also listed. This happens only if the max processing limit is exceeded and requests wait (not in the screenshot). The waiting list shows the wait time instead of the processing time.

All data is stored in a shared memory segment.

BTW: mod_mono is programmed in C with not too much structure. No offense guys. I am very thankful for it. Great work. I know, it's open source. I should not complain, but improve it (which I did).

Why did I extend it? 2 reasons:
  1. I believe that an operator of a real service needs more information about what is going on than just the number of requests. We used apache server-status heavily. This is the same for mono.
  2. Flooding the server with dozens of integration tests, which spawn 200 backend requests each, stalled the server. I was afraid that mod_mono had a problem under high load.
Once I knew the URLs, I could clearly see that mod_mono is fine. It was just a normal deadlock situation where a frontend request tried to make a backend call via HTTP and could not get a free slot. Processing slots are limited by the thread pool. The limit can be increased, but there must be a hard limit: allowing unlimited threads would make the server unusable under load, whereas deadlock situations can be avoided.

All slots are occupied by requests which wait for completion of backend requests, which do not find a free slot. Not a mod_mono issue. The situation has been resolved by splitting frontend and backend into separate mono applications. This is the normal configuration of a real multi-tier system, anyway.
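The slot exhaustion is easy to reproduce in miniature. A sketch, with Python thread pools standing in for mod_mono's processing slots (everything here is made up for illustration): with one shared pool, the frontend request occupies the only slot while waiting for its backend call, which therefore never gets a slot. With separate frontend and backend pools, the same request goes through.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def backend():
    return "data"

# one shared pool with a single slot: the frontend holds the slot
# while waiting for the backend call, which can never be scheduled
shared = ThreadPoolExecutor(max_workers=1)

def frontend_shared():
    return shared.submit(backend).result(timeout=0.5)

deadlocked = False
try:
    shared.submit(frontend_shared).result()
except TimeoutError:
    deadlocked = True   # backend call starved for a slot: deadlock

# split pools, as in the fixed configuration: frontend and
# backend each get their own slots, so the request completes
frontend_pool = ThreadPoolExecutor(max_workers=1)
backend_pool = ThreadPoolExecutor(max_workers=1)

def frontend_split():
    return backend_pool.submit(backend).result(timeout=0.5)

ok = frontend_pool.submit(frontend_split).result()
```

The timeout only makes the starvation observable; real requests would simply hang until the client gives up.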

The patch:

  % cd mod_mono-2.4.2/src
  % wget http://wolfspelz.de/download/mod_mono-2.4.2.patch
  % patch < mod_mono-2.4.2.patch
_happy_patching()

Update: the patch has been integrated into mono 2.6