Reusing UNIX semantics for fun and profit

Posted by bert hubert Thu, 20 Sep 2007 19:57:00 GMT

I’ve long been a fan of some of the techniques Dan Bernstein uses to leverage the power of UNIX to achieve complicated goals with little effort. For example, he uses a technique called Chain Loading to clearly separate and insulate several programs from each other by loading a new program *in place* of the current one, once a critical task has been performed, like checking a user’s credentials.

This guarantees that the outer program, which might actually be exposed to the internet, can restrict itself to very basic functionality, and only launch an inner, more useful program once authentication has completed.
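
As a toy illustration of the idea - emphatically not djb’s code - here is a minimal outer program that performs some check and then replaces itself, in place, with the inner program via exec. The path and the credential check are made up for the example.

  #include <unistd.h>
  #include <cstdio>
  #include <cstdlib>

  // toy stand-in for the real credential check
  static bool credentialsOk()
  {
    return getenv("AUTH_TOKEN") != nullptr;    // made-up condition
  }

  int main()
  {
    if (!credentialsOk()) {
      fprintf(stderr, "authentication failed\n");
      return 1;
    }
    // replace this small, exposed program in place with the more useful inner one;
    // after a successful execl() nothing of the outer program remains in memory
    execl("/usr/local/bin/inner-program", "inner-program", (char*)nullptr);
    perror("execl");                           // only reached if the exec itself failed
    return 1;
  }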

Another trick is to leverage UNIX user names to insulate various programs from each other, leaving the task of getting the access control details right to the very well tested operating system (which we need to rely on anyhow).

While sometimes unconventional, techniques such as those described above can simultaneously reduce code complexity AND increase security, by more or less hitching a ride on top of existing functionality.

Some time ago, I was involved in the development of a computer program with a classic ‘producer/consumer’ problem. We were inserting events into the database, and wanted to scale by getting a dedicated and very fast database server. To our surprise, getting an additional, far more powerful system did not improve our performance, and in fact made things far worse.

What happened? It turns out we were doing a lot of small inserts into the database, and even while we were using a transaction, each of these inserts incurred a slight latency penalty, caused by the query & answer packets having to travel over the network. And when doing hundreds of thousands of queries, even half a millisecond is a lot of time. Add in operating system and TCP overhead, and the end-to-end latency is probably even higher. The obvious solution is to no longer actually wait for the inserts to complete, but to transmit them to the database asynchronously, and continue to do useful work while the packets are in flight and being processed. This way, no time is wasted waiting.

Since most database APIs are synchronous, a separate helper thread of execution needs to be spawned to create the fiction of asynchrony, and this is where things get interesting.

In the PowerDNS nameserver, a complicated ‘Distributor’ abstraction is used to send queries to database threads, and this Distributor contains locks, semaphores and a zoo of other concurrent programming techniques to make things work well. For example, we need to perform checks to see if we aren’t building up an unacceptable backlog of queries, and block if we find we are. This comes with additional choices as to when to unblock etc. I was not looking forward to reimplementing such a thing.

Additionally, our database interface needed to offer an extra feature: every once in a while a query comes along that we DO need to wait for, and because of coherency issues, such a query can only be executed once all queries ‘in flight’ have finished.

So we spent some time pondering this, and suddenly it dawned on me that many of the features we needed exactly match the semantics of the venerable UNIX ‘pipe’.

A pipe is normally used to communicate between two processes, as exemplified by this sample shell script command, which shows us the largest directories on a disk:

$ du | sort -n

The program ‘du’ generates a list of directories and their sizes, which is then fed to sort, which outputs it in ascending order. However, nothing prohibits us from using a pipe to communicate with ourselves - and as such it might be a mighty fine conduit to pass database queries through to our database worker thread.

This has some very nice benefits. Pipes are incredibly efficient, since a lot of UNIX performance depends on them. Additionally, they implement sane blocking behaviour: if too much data is stuck in the pipe, because the other process does not take it out again quickly enough, the sending process automatically blocks. The operating system implements high and low water marks to make this (un)blocking happen efficiently.

Furthermore, pipes guarantee that data up to a certain size (PIPE_BUF, at least 512 bytes, and 4096 bytes on Linux) can either be written as a whole, or not written at all - making sure we don’t have to deal with partial messages.

Finally, pipes automatically detect when the process on the other end of them has gone away, or has closed its end of the pipe.

However, not all is good. In order to transmit something over a pipe, it must be serialised into bytes - we can’t transmit ready-to-use objects over them. Additionally, because pipes implement ‘stream’ behaviour, we need to delineate one message from the next, because the pipe itself does not say where a message begins and ends - unlike datagram sockets for example.

And this is the clever bit of our idea. As stated above, pipes are usually employed to transmit data from one process to the other. In our case, the pipe goes from one thread of execution to the other - within the same process, and thus within the same memory space. So we don’t need to send serialized objects at all, and can get away with transmitting pointers to objects. And the nice thing is, pointers all have the same (known) length - so we can do away with both delineation and serialisation.

Additionally, pointers are a lot smaller than most messages, which means we can stuff more messages in the same (fixed) size of the pipe buffer.
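
To make this concrete, here is a minimal sketch of the idea - emphatically not the actual PowerDNS code - in which two threads of one process pass pointers to a made-up Query object through a pipe:

  #include <pthread.h>
  #include <unistd.h>
  #include <cstdio>
  #include <string>

  struct Query { std::string statement; };     // made-up stand-in for a real query object

  static int fds[2];                           // fds[0] = read end, fds[1] = write end

  static void* worker(void*)
  {
    Query* q;
    // each read yields exactly one pointer; sizeof(q) is far below the pipe's atomic
    // write limit, so a pointer always arrives whole - no framing or serialisation needed
    while (read(fds[0], &q, sizeof(q)) == sizeof(q)) {
      printf("worker got: %s\n", q->statement.c_str());
      delete q;                                // the receiving thread owns the object
    }
    return nullptr;                            // read() returned 0: the write end was closed
  }

  int main()
  {
    pipe(fds);
    pthread_t tid;
    pthread_create(&tid, nullptr, worker, nullptr);

    Query* q = new Query;
    q->statement = "INSERT INTO events (msg) VALUES ('hello')";
    write(fds[1], &q, sizeof(q));              // ship only the pointer: 4 or 8 bytes

    close(fds[1]);                             // done sending; the worker sees end-of-pipe
    pthread_join(tid, nullptr);
    return 0;
  }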

So, are we done now? Sadly no - we have the additional need to be able to ‘flush the pipe’ in order to perform synchronous queries that we do need to wait for.

This is where things get complicated, but for those who really want to know, I’ll explain it here. It took almost a day of hacking to get it right however, and I’m explaining it for my own benefit as much as for that of the reader, since I’m bound to forget the details otherwise.

If a synchronous query comes along, we need to flush the pipe, but UNIX offers no such ability. Once we’ve written something to a pipe, all the kernel guarantees us is that it will endeavour to deliver it, but there is no system call that allows us to wait for all data to actually be delivered.

So we need to find a way to signal a ‘write barrier’, and the obvious way to do so is to send a NULL pointer over the pipe, which tells the other end we want to perform a synchronous query. Once the worker thread has seen the NULL pointer, it unlocks the single controlling mutex (which is the return signal that says “got you -the pipe is empty”), and then waits for further pointers to arrive.

Meanwhile, the sending thread tries to lock that same mutex immediately after sending the NULL pointer, which blocks since the receiving thread normally holds the lock. Once the lock succeeds, this tells us the worker thread has indeed exhausted all queries that were in flight.

The sending thread now performs its synchronous database work, knowing the database is fully coherent with all queries it sent out previously, and also knowing the worker thread is not simultaneously accessing the connection - since it is instead waiting for a new pointer to arrive.

If our program now wants to perform further asynchronous queries it can simply transmit further pointers to the worker thread - which oddly enough does not need to retake the mutex. This is what caused us many hours of delay, because intuitively it seems obvious that once the sending thread is done, it must release the mutex so the worker thread can retake it.

As it turns out, doing so opens a whole world of nasty race conditions which allow synchronous queries to ‘jump the queue’ of asynchronous queries that are in flight and have not yet arrived.

So, the sequence is that the worker thread only unlocks the mutex, while the sending thread only locks it.
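
Putting the pieces together, here is a self-contained sketch of the scheme. One caveat: where the description above uses a mutex that the worker thread only unlocks and the sending thread only locks, this sketch uses a POSIX semaphore, which expresses exactly that one-way signalling without the ownership rules of a mutex. The query strings and function names are invented, and the real code differs.

  #include <pthread.h>
  #include <semaphore.h>
  #include <unistd.h>
  #include <cstdio>
  #include <string>

  struct Query { std::string statement; };

  static int fds[2];                           // the pipe carrying pointers
  static sem_t d_drained;                      // starts at 0: "pipe not known to be empty"

  static void* worker(void*)
  {
    Query* q;
    while (read(fds[0], &q, sizeof(q)) == sizeof(q)) {
      if (!q) {                                // NULL pointer acts as the write barrier
        sem_post(&d_drained);                  // "got you - the pipe is empty"
        continue;                              // and simply wait for further pointers
      }
      printf("async: %s\n", q->statement.c_str());   // stand-in for the real insert
      delete q;
    }
    return nullptr;
  }

  static void sendAsync(const std::string& stmt)
  {
    Query* q = new Query;
    q->statement = stmt;
    write(fds[1], &q, sizeof(q));              // fire and forget; blocks only if the pipe is full
  }

  static void querySync(const std::string& stmt)
  {
    Query* barrier = nullptr;
    write(fds[1], &barrier, sizeof(barrier));  // ask the worker to tell us when it has drained
    sem_wait(&d_drained);                      // blocks until every earlier query has finished
    printf("sync:  %s\n", stmt.c_str());       // the connection is idle and coherent here
  }

  int main()
  {
    pipe(fds);
    sem_init(&d_drained, 0, 0);
    pthread_t tid;
    pthread_create(&tid, nullptr, worker, nullptr);

    sendAsync("INSERT INTO events (msg) VALUES ('one')");
    sendAsync("INSERT INTO events (msg) VALUES ('two')");
    querySync("SELECT count(*) FROM events");  // must observe both inserts
    sendAsync("INSERT INTO events (msg) VALUES ('three')");  // asynchronous traffic resumes

    close(fds[1]);                             // lets the worker's read() return 0
    pthread_join(tid, nullptr);
    return 0;
  }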

And this basically is it! So how many lines of code did we save by using the magic of UNIX pipes? The pipe handling code takes all of 90 lines, whereas the Distributor code of PowerDNS takes a round 300, even though it does not offer synchronous queries, does not automatically block if too many queries are outstanding, and most certainly couldn’t implement the sensible wakeup ability that UNIX pipes do offer.

Oh, and you might be wondering by now, did it help? Indeed it did - our program is now at least 20 times faster than it used to be, and there was much rejoicing.


The whole oil thing

Posted by bert hubert Sun, 26 Aug 2007 16:18:00 GMT

Ok - Steorn is quieting down for now, and it got enough attention anyhow, so it is time to look a bit into the things behind the appeal of alternative energy sources.

Many readers will recall that in the past, there was debate as to when the ‘oil would run out’, and that this date was supposed to be somewhere in 2045 or so, which was more or less far enough away not to worry about it.

At least I remember thinking about it like that back in school. It is amazing how this sentiment fooled us for so long. Modern tubes of toothpaste are easy to empty down to the last bit, but in the past this wasn’t so. This should’ve told us something.

Oil is not like modern toothpaste, it is like ketchup. Far before it has run out, it becomes hard to extract. And oil is remarkably worse than ketchup.

Back in 1956, one of Shell Oil’s scientists noticed that wells started to become less productive once 50% of their contents had been extracted. He then proceeded to predict US oil production based on this assumption, and correctly calculated it would peak somewhere in the late 1960s, and decline from that point onwards. And so it did.

Additionally, he extrapolated this result to the whole world, and determined global oil production would go into decline somewhere after the year 2000.

Controversy

Nobody much liked this prediction, and it was widely ridiculed. New wells would continue to be found, and importantly, new techniques would enable us to extract more and more oil from existing wells.

As it turned out, this last prediction in particular was correct, which is why world oil production hasn’t declined already.

However, no major new fields have been found over the past decade.

Many players in the oil industry now believe the predictions, and agree that oil production might decline from 2010 onwards, or perhaps a bit later.

Production is peaking, demand is increasing

Controversy aside, the International Energy Agency has produced graphs of oil production and demand since 1974, and it is clear that production will one day be overtaken by demand.

It is easy to see why - as it comes out of the ground, oil is not immediately suitable for all kinds of use. For many purposes, it first needs to be ‘refined’. Building a refinery is hard work, and typically takes up to a decade. Additionally, environmental rules mean that it is easily possible to spend a similar amount of time just getting permission to build.

No major refineries have been built over the past years, and no major refineries are nearing completion. The existing refineries are running at or near peak production.

On the demand side, the world economy is growing at an unprecedented clip.

Will demand exceed supply?

The few graphs that plot oil production and demand in one plot (readers, if you know of any, please comment!) typically show an ‘and then a miracle occurs’ event when demand is about to overtake supply.

This reflects the usual market behaviour that once oil becomes scarce enough, prices will rise, and oil that was hitherto uneconomical to produce becomes economically viable. In other words, exploding prices make more oil available.

But as remarked previously, refineries are already running flat out. This means that no miracle will occur in the immediate future, and oil might very well run out temporarily.

To reiterate, this does not mean the oil is gone, just that it isn’t available at the rate we need it.

And then what?

This is the scary bit, and the main reason I worry. Already we see posturing by the big oil suppliers and consumers. China is pouring money into Africa, and has even deployed part of its army in certain countries to make oil production possible.

Russia is throwing its weight around in a frightening way as well, and making it clear not all of its customers are equal. It plays geopolitics both with hydrocarbon availability and pricing.

The various armies in the Middle East speak for themselves. A peaceful Middle East produces more oil, and it might very well sell it preferentially to its occupiers or sponsors.

Here in Europe, we appear to believe oil might become mighty expensive, but that we’ll weather it.

But if oil becomes truly scarce, will market prices influence who will get access to it? Or will it be supplied to those countries with the ability to project power, and back up their monetary offers with military encouragement?

Or might suppliers become king-makers, with the power to determine which economy lives or dies?

Our European belief that our ability to pay steep prices will allow us to continue as normal might be seen as exceedingly silly by then, possibly comparable to Neville Chamberlain’s appeasement policy in the 1930s.

So when will all this happen?

It is happening already, but crunch time is not yet upon us. Some countries have already had problems getting access to enough energy, mostly those who (like Europe) depend on Russian oil and gas.

The crunch might be postponed if the economy stops growing at this rate, it might be advanced if any of the major refineries is damaged by terrorism, weather or bad luck.

At any rate, the issue should start making more headlines in the near future.

What about coal, nuclear energy, wind and solar energy? Tar sands?

Some countries have already accepted that we should start building more nuclear power plants because the energy is running out. However, building such installations also takes decades, and it has been argued we’d need to open a new power plant each month or so to make up lost ground.

Coal is currently environmentally harmful or expensive, but might save part of the industry for some time.

Wind and solar, although interesting, struggle to generate an appreciable fraction of our world energy need.

Tar sands, sands that contain oil, are interesting but not for the near future. They might make Canada extremely rich though.

Further reading

Google on ‘Hubbert Peak’, and head on from there. ‘Peak oil’ is also a nice phrase to search on. The International Energy Agency has long published honest and truthful graphs that presaged the issue, but up till recently the IEA did not put this into words. Recently they’ve begun to describe the near future oil situation as ‘extremely tight’.


Steorn updates, things are cooling down...

Posted by bert hubert Wed, 04 Jul 2007 22:29:00 GMT

Well, it might’ve been too good to be true.

First the announcement that internet streaming of the ‘Steorn’ device over at Kinetica Museum would start at 6PM, which was later “clarified” to mean 6PM US eastern time.

And when that time passed, nothing happened. After a while, a notice appeared that due to technical difficulties, streaming would start on July 5th.

Perhaps this is the beginning of the end for Steorn.

Update: Steorn has confirmed the device is not operating as it should, but they say they are working on it, and intend to turn on the streams tomorrow, even if the device is still not working, so we can see “stressed engineers” trying to fix it.

In one of my first posts on this enigmatic company, I mentioned the possibility of them deluding themselves, and I’m afraid the things that have happened over the past few days point in that direction.

I’d still be very happy if Steorn turned out to be on to something, but the signs are not good..

The websites of Steorn and Kinetica still promise a demo, so perhaps they simply are having problems streaming. Will keep you posted.


Steorn updates, things are heating up!

Posted by bert hubert Tue, 03 Jul 2007 06:33:00 GMT

Life is quickly getting silly on the Steorn front (for more details, see my previous post). Reliable sources have now confirmed a demo *IS* being setup, and although most sources are bound by NDA, it has become clear the demo is this week, and most likely at the Kinetica Museum.

Update: webcam images are appearing here!

Update: the Kinetica website now contains an announcement that it is hosting a new exhibition starting Thursday, and that details will be announced on Wednesday.

Update 2: An article on RTÉ news reports that Sean McCarthy says the device will be demonstrated tonight from 6PM London time, and that it will be lifting a weight to prove it is generating energy.

Additionally, a short movie surfaced last night showing someone who looks like Sean McCarthy (the Steorn chief executive) smoking a cigarette across from the Kinetica museum. Sean is wearing a t-shirt that says ‘CEO versus CoE’, where CoE stands for Conservation of Energy - the basic law of physics their device is claimed to break.

This movie is classic viral marketing material, which in itself tells us something: Steorn is being REAL serious about generating hype. They’ve previously availed themselves of the services of Citigate Dewe Rogerson, a high-end public relations firm. This viral clip is a strong indication they are again taking PR seriously.

If they are as serious as this about getting massive media attention, this will only be the beginning of the onslaught. By itself this movie will not turn heads, but it might be a good start.

We’ll only know more if and when the demo arrives, and many are sceptical about the chances of being convinced by a demo, but for now, expect the hype to increase, with an expected peak on Saturday the 7th coinciding with the Live Earth concert.

For more information, see the Steorn homepage, the Free Energy Tracker Blog, the Dispatches from the Future blog, the Fizzx forum and the relevant Wikipedia page. Another interesting place to look is the Steorn Forum, but be aware this place is populated by all and sundry, and heavily edited by moderators.


Steorn Demo: our own iPhone moment

Posted by bert hubert Sun, 01 Jul 2007 18:28:00 GMT

Quick plug of my new laptop, a spiffy Dell XPS M1210. Kudos to both Dell and Ubuntu, everything just works, and it just works very well. If you need a small but powerful laptop, and want to run Linux, you should consider this one!

Ok, on to the hype.

Update: earlier entries on Steorn are here, here, and here.

Steorn

This week saw the launch of the iPhone, perhaps one of the most anticipated technology events in living memory.

On the physics side of things, we are sort of going through the same thing, except a thousand times smaller. Perhaps a million.

To very briefly recap, Steorn hit the scene in August 2006 with an (expensive) advertisement in the Economist newspaper, claiming to have developed technology that generates energy without consuming fuel.

Their story was that nobody believed them, and that they were challenging the scientific community to join a ‘jury’ to prove or disprove their claims.

Since then, they’ve said the jury is now in place, and will report their results when they feel like it. Additionally, in April 2007, it was stated that there would be a publicly accessible demonstration of the technology in London, early July.

And early July is upon us now.

Why the hype?

Well, if what they claim is true, Venezuela, Russia, and the Middle East are seriously out of business. Oil would then remain useful mainly as lubrication, and as an ingredient of many substances.

If their technology works, stock markets will collapse, whole countries will default on their obligations, and the nascent ‘new cold war’ between Russia and NATO will be over.

The demo

Since ‘energy out of thin air’ violates most of our understanding of physics, the scientific community is rightfully sceptical. It has been said that extraordinary claims require extraordinary evidence (which I dispute, btw - most new physics started out as small, hard to see effects), so the demo had better be pretty impressive.

There is rampant speculation going on about the nature of the demo, and a few things are known.

The demo will happen ‘early July’, and the public will be able to see the machine, talk to Steorn employees, and view the whole thing over the internet 24/7. Additionally, several people have been invited by Steorn to bring their screwdrivers to open up the device. Lots of people on the various forums have already announced they are bringing heat-sensitive cameras, magnetometers, RF spectrometers and other instruments to verify if any demonstrated device is truly operating without consuming power.

The exact date and location of the demo remain uncertain. Steorn has repeatedly stated they are not in it for the maximum amount of money, but that they want their technology to benefit developing countries, as well as being good for the environment.

On Saturday the 7th, London hosts the Live Earth concert, which is all about the environment. This has prompted many to believe the demo will coincide with this concert.

Information about Steorn comes from various sources: their own homepage, the Free Energy Tracker Blog, the Dispatches from the Future blog, the Fizzx forum and the relevant Wikipedia page.

On these various pages, several anonymous or pseudonymous sources have stated contradictory things. It has been reported by a guy called ‘MikeDuke’ that the device will be unveiled to the press on Monday the second of July and Tuesday the third, and might be available for viewing on Wednesday, but surely on Thursday. This is then all supposed to take place at the Kinetica Museum.

Another forum comment reports calling the Kinetica museum and being told they are hosting a private exhibition starting Thursday, about which they can’t say a lot.

Then there is a second poster who says the demo will be held at the Science Museum, starting Friday, and that it will only be announced on Thursday.

Finally, and this one is furthest out, there is an anonymous post stating the demo will be of the ‘rev4.c’ device, and that it will produce 720kW! Not only will it do this, it will emanate a ‘distortion field’!

So what will happen?

As said, speculation is rampant. Many are predicting there will be no demo, or that it will be unconvincing. I’m personally on the edge of my seat, having nearly booked a flight to London to see for myself, but at the last minute decided not to go because it is unclear when the demo will be available.

If there is more clarity, I’ll go over and report here.


Small update on "cold fusion", Steorn

Posted by bert hubert Sun, 17 Jun 2007 21:07:00 GMT

Ok, people have been harassing me that I should update my blog more often. This strikes me as somewhat odd; blogging is not mandatory - sometimes I feel the need to share some thoughts with my readers (it appears there are 3000 of you!), and sometimes I don’t.

I still don’t have a lot to say, but perhaps this might interest you.

“The trouble with physics”

I’ve been following cold fusion, and other ‘alternative’ physics subjects for a long time now, and I keep tabs on quite a number of interesting investigators. Over time it has become clear to me that physics is dangerously locked into the ‘mainstream’.

Careers are built on getting grants; grants are disbursed by risk-averse boards; journals are very worried about their reputation and rely on vested scientists to review papers. The upshot of this is that it is very dangerous for a physicist to do ‘interesting’ research.

I felt like this for a long time, but as I’m not part of the physics community, and in fact never got past half of my physics degree, what I feel is not very interesting.

However, when Lee Smolin feels something, it is. He’s written a captivating book called ‘The trouble with physics’. Its main point is that physics has become stuck in a rut called String Theory, which is a complicated set of ideas that has for decades been hailed as the next big thing.

Dr Smolin describes the current state of physics very well, and he appears to confirm the feelings I describe above.

I heartily recommend this book, it is one of the few books that continue where an earlier generation of ‘books for laymen’ stopped.

There are some indications the physics community is more open to ‘interesting’ results again, which should be very good. I’ve made a small list of things I find interesting, and keep track of.

“Cold Fusion”, or as it is often called these days “Low Energy Nuclear Reactions”

I’ve blogged about this before, but this field appears to be heating up again in a big way. Basically, hot nuclear fusion (which powers the sun, as well as hydrogen bombs) would solve most of our energy problems. However, it turns out to be very hard to make a hot nuclear fusion reactor that survives its own operation AND generates energy.

Cold fusion started out with the claim of Messrs Pons and Fleischmann to have found proof of hydrogen fusing under ‘kitchen table’ conditions. It quickly turned out nobody could (reliably) reproduce their results, and controversy ensued. Additionally, our current understanding of physics appears to prohibit ‘cold fusion’.

However, over the following 18 years, it never went away entirely. There is a slow but steady trickle of results that appear to form the smoke to a possible fire. Dr Dieter Britz keeps track of all cold fusion related papers and reports, his database now contains over 1200 items.

Some of the die-hards in researching cold fusion have been a group of employees of a US Naval laboratory, called SPAWAR. Recently, they’ve developed a very simple experiment that reproducibly shows signs of “low energy nuclear reactions”. There has now been at least one replication of their simple experiment, which appears to show the same signs.

The experiment is simple enough that it can be performed at home, and I am sometimes tempted! I’ve since found that quite a number of replications are already going on, so no need to try to build a laboratory at home :-)

More information can be found here

Strange gravity effects

Much of the same goes for experiments with rotating superconductors affecting gravity. It should be realised that gravity is truly unstoppable; as far as we know, there is nothing that could ever ‘shield’ us from this universal force.

If one could do that, spaceflight would become a lot easier. It would also put a rather large dent in our understanding of physics - although gravity is poorly understood anyhow.

The Russian metallurgist Evgeny Podkletnov grabbed attention around 1992 and 1996 with papers describing gravity shielding above a rapidly rotating superconducting disk. The problem was that his disk was very hard to make, so the experiment was not easy to reproduce.

His reports were interesting enough to get NASA to try, but they never really managed to replicate his conditions. Interestingly, one of the theorists (Ning Li) involved with the experiment appears to have vanished!

This is the stuff of conspiracy theories, but it has been reported that Boeing was at one stage involved in making devices based on this theory, though this has been widely denied. Meanwhile, Podkletnov has withdrawn some of his papers. All very messy.

However, some time ago a scientist working for the European Space Agency made similar claims, which are very different in detail. Interestingly, Tajmar and his colleagues also have theories on why their spinning superconductors produce gravity effects.

It appears their work is being taken seriously. I’ve been in contact with them, and although they didn’t want to reveal a lot, they did say they expected to report new results.

Less controversial, but no less strange, is the current state of our understanding of gravity, which includes such incredible things as invisible objects which do have gravity (dark matter), as well as invisible things that offer ‘negative gravity’ (dark energy). We currently only know that we need these dark things to explain the universe - we just don’t know what lies behind these science fiction-like names!

Unlike other things mentioned in this post, dark energy and dark matter are 100% part of mainstream physics - even though we have only faint ideas on the physical nature of these forms of ‘matter’.

Steorn

I’ve blogged about this fascinating company before, so I’ll only post an update here.

It is hard to figure out their strategy. They claim to have discovered a device which generates free energy, and that they are trying to make some money from this invention, while also making it generally available. They’ve assembled a jury of 22 scientists which is supposed to validate their technology, but this is expected to take a long time.

In the meantime, their CEO has been posting quite a lot on their forum, dropping hints on how their device works, while otherwise retaining a high level of secrecy.

One of the forum members, Mike Rosing (known as ‘drmike’) heard enough to design an experiment to test at least part of what Steorn intimates lies behind their technology.

This revolves around ‘magnetic viscosity’, which is one of the darker areas of how permanent magnets work. Drmike now has data, but no results yet, as he still has to extract these from his heaps of raw measurements.

I’ve been in contact with Mike, and we’ve worked out something I might try to program to extract results from his data, but I didn’t yet find time to work on it.

Steorn is said to be demonstrating their device in London in July, and Mike and others are going to see this demonstration, and I’m again sorely tempted to join in :-)

More information can be found on the two blogs that follow Steorn, called Free Energy Tracker and Dispatches from the Future.


DNS & Crypto Power Lunch

Posted by bert hubert Wed, 21 Feb 2007 22:13:00 GMT

Enjoyed a fun and stimulating “DNS & Crypto Power Lunch” with Dan Bernstein (left) and Tanja Lange (not in picture). As was to be expected, the intersection of cryptography and (secure) DNS was discussed, and some evil plans might ensue! If implemented in djbdns and PowerDNS, we might actually achieve something..


ISOC presentation on "The Future of VoIP2"

Posted by bert hubert Thu, 08 Feb 2007 21:39:00 GMT

Just a quick note that I’ll be presenting at The future of VoIP 2 event as organised by the Internet Society of The Netherlands, part of the (global) “Internet Society”.

The event takes place on the 15th of March, in The Hague. For more details, see the links above.

As always, I love to meet PowerDNS users, or in fact, anybody interested in doing interesting things with DNS. So should you be there, it would be good to talk.


(a)synchronous programming

Posted by bert hubert Sun, 04 Feb 2007 12:14:00 GMT

Ok, I’m going to lecture a bit, a bad habit of mine. The summary is that an important enhancement of the Linux kernel has been proposed, but in order to understand the significance of this enhancement, you need a lot of theory, which follows below.

I use the word “computer” sometimes when I properly mean “the operating system”. This exposes a problem with this post: I’m trying to explain something deeply theoretical to a general audience. Perhaps it didn’t work. See for yourself.

Doing many things at once

People generally tend not to be very good at doing many things at once, and surprisingly, computers are not much different in this respect.

First about human beings. We can do one thing at a time, reasonably well. There are people that claim they can multi-task, but if you look into it, that generally means doing one thing that is really simple, while simultaneously talking on the phone.

This is exemplified by how we answer a second phone call, ie, by saying “The other line is ringing, I’ll call you back”, or conversely, telling the other line they’ll have to wait.

We emphatically don’t try to have two conversations at once, and even if we had two mouths, we still wouldn’t attempt it.

Let’s take a look at a web server, the program that makes web pages available to internet browsers. The basic steps are:

  1. Wait for new connections from the internet
  2. Once a new connection is in, read from it which page it wants to see (for example, ‘GET http://blog.netherlabs.nl/ HTTP/1.1’).
  3. Find that page in the computer
  4. Send it to the web browser that connected to us
  5. Go to 1.
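
For the programmers in the audience, those five steps translate almost literally into code. A minimal sketch, with all error handling omitted and a canned string standing in for a real page:

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <cstring>

  int main()
  {
    int server = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                    // listen on port 8080
    bind(server, (sockaddr*)&addr, sizeof(addr));
    listen(server, 10);

    const char page[] = "HTTP/1.0 200 OK\r\n\r\nhello\n";  // canned page
    char request[4096];

    for (;;) {
      int client = accept(server, nullptr, nullptr); // 1. wait for a new connection
      read(client, request, sizeof(request));        // 2. read which page it wants
                                                      // 3. finding the page is skipped here
      write(client, page, strlen(page));              // 4. send it to the browser
      close(client);                                  // 5. go back to step 1
    }
  }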

Compare this to answering a phone call: step 1 is the part where you wait for the phone to ring and answer it when it does. Step 2 is hearing what the caller wants, step 3 is figuring out the answer to the query, and step 4 is sharing that answer.

This all seems natural to us, as it is the way we think. And programmers, contrary to what people think, are human beings, too.

Where this simple process breaks down is that, much like a regular phone call, we can only serve a new web page once the old one is done sending.

And here is where things get interesting - although we people have a hard time doing multiple things at once, we can give the problem to the computer.

What is the easiest way of doing so? Well, if we want to increase the capacity of a telephone service we do so.. by adding people. So on the programming side of things, we do the same thing, only virtually: we order the computer (or more exactly, the operating system) to split itself in two!

The new list of steps now becomes:

  1. Wait for new connections from the internet
  2. Once a new connection is in, split the computer in two.
  3. One half of the computer goes back to step 1, the other half continues this list
  4. (2) Read from it which page it wants to see (for example, ‘GET http://blog.netherlabs.nl/ HTTP/1.1’).
  5. (2) Find that page
  6. (2) Send it to the web browser
  7. (2) Done - remove this “half” of the computer

I’ve prefixed the things the second computer does with “(2)”. This looks like the best of both worlds. We can “serve” many web pages at the same time, and we didn’t need to do complicated things. In other words, we could continue thinking like human beings, and use our intuition, by thinking of the analogies with answering phone calls.
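
The ‘split the computer in two’ magic is a single system call, fork(). Here is the earlier sketch with that change applied, the steps above marked in the comments (again with all error handling omitted):

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <csignal>
  #include <cstring>

  int main()
  {
    signal(SIGCHLD, SIG_IGN);                         // don't leave finished "halves" around
    int server = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    bind(server, (sockaddr*)&addr, sizeof(addr));
    listen(server, 10);

    const char page[] = "HTTP/1.0 200 OK\r\n\r\nhello\n";
    char request[4096];

    for (;;) {
      int client = accept(server, nullptr, nullptr);  // 1. wait for a new connection
      if (fork() == 0) {                              // 2. split the computer in two
        read(client, request, sizeof(request));       // 4. (2) read which page it wants
        write(client, page, strlen(page));            // 5/6. (2) find and send the page
        close(client);
        _exit(0);                                     // 7. (2) remove this half again
      }
      close(client);                                  // 3. one half goes back to step 1
    }
  }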

So, are we done now? Sadly no. What basically has happened is that we have invoked a piece of magic: let’s split the computer in two. That is all fine, but somebody has to do the splitting. This job is farmed out to the CPU (the processor) and the operating system (Windows, Linux etc), and they have to deal with making sure it appears the computer can do two things at the same time.

Because the truth is.. people can’t do it, and neither can computers. They fake it.

This faking comes at a cost, incurred both while splitting the computer (“forking”), and by making the computer juggle all its separate parts. Finally, it turns out that practically speaking, you can divide a computer up into only a limited number of parts before the charade falls down.

Busy websites have tens of millions of visitors, so we’d need to be able to split the computer into at least that many parts, while in practice the limit lies at perhaps 100,000 slices, if not less.

Now what

Several solutions to this problem have been invented. Some involve not quite splitting up the entire computer and making split parts share more of the resources (like for example, memory). This is called ‘threading’. Perhaps this could be compared with not hiring more people to answer the telephone, but instead giving the people you have more heads, so as to save money.

In the end, all these solutions run into a brick wall: it is hard to maintain the illusion that the computer can do multiple things at the same time, AND have it actually do a million things at the same time.

So in the end, we have to bite the bullet, and just make sure the program itself can handle many many things at once, without needing the magic of pretending the computer can do it for us.

“Asynchronous programming”

This is where things get hard, and this is to be expected, as it was our basic premise that people can’t do multiple things at the same time, and what’s worse, they have a hard time even thinking about what it would be like.

The new algorithm looks like this:

  1. Instruct the computer to tell us when “something has happened”
  2. Figure out what happened:
    • If there is a new connection, instruct the computer that from now on, it should tell us if new data arrived on that connection
    • If something has happened to one of those connections we’ve told the computer about, read the data sent to us on that connection. Then find the information requested on that connection, and instruct the computer to tell us when there is “room” to send that data
    • If the computer told us there was “room”, send the data that was previously requested on that connection. If we are done sending all the data, tell the computer to disconnect, and no longer inform us of the state of the connection.
  3. Go back to 1.
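
Here is a bare-bones sketch of such an event loop, built around the poll() system call (select() or Linux’s epoll would do equally well). To keep it short it replies as soon as a request arrives, rather than first asking for ‘room’ to send as the third bullet describes, and it tracks at most 64 connections:

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <poll.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <cstring>

  int main()
  {
    int server = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    bind(server, (sockaddr*)&addr, sizeof(addr));
    listen(server, 10);

    pollfd fds[64];                            // slot 0 is the listening socket
    fds[0].fd = server;
    fds[0].events = POLLIN;
    int nfds = 1;

    const char page[] = "HTTP/1.0 200 OK\r\n\r\nhello\n";
    char request[4096];

    for (;;) {
      poll(fds, nfds, -1);                     // 1. tell us when "something has happened"
      for (int i = nfds - 1; i >= 0; --i) {    // 2. figure out what happened
        if (!(fds[i].revents & POLLIN))
          continue;
        if (fds[i].fd == server) {             // a new connection: watch it from now on
          int client = accept(server, nullptr, nullptr);
          if (nfds < 64) {
            fds[nfds].fd = client;
            fds[nfds].events = POLLIN;
            nfds++;
          }
        } else {                               // data arrived on an existing connection
          read(fds[i].fd, request, sizeof(request));
          write(fds[i].fd, page, strlen(page));
          close(fds[i].fd);
          fds[i] = fds[--nfds];                // done: stop watching this connection
        }
      }                                        // 3. go back to step 1 and wait again
    }
  }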

If this feels complicated, you’d be right. However, this is how all very high performance computer applications work, because the “faking” described above doesn’t really “scale” to tens of thousands of connections.

How does this translate to the telephone situation? It would be like we have lots of small answering machines, that lots of callers can talk to at the same time. Whenever someone has finished a question, the operator would listen to that answering machine, and leave the answer on the machine, and go on to the next machine that has a finished message.

From this description, it is clear it would not work faster that way if you’d try it for real. However, in many countries, if you call a directory service to find a telephone number, you’ll get half of this. Your call is answered by a real human being, who asks you questions to figure out which phone number you are looking for. But once it has been found, the operator presses a button, and the result of your query is sent to a computer, which then reads it to you, allowing the operator to already start answering a new call. Rather smart.

Something in between

If the previous bit was hard to understand, I make no apologies, this is just how complicated things are in the world of computing. However, we programmers also hate to deal with complicated things, so we try to avoid stuff like this.

People have invented many ways of allowing programmers to think ‘linearly’, as if only a single thing is happening at the same time, without having to split the entire computer.

One way of doing this is having a facade that makes things go linearly, until the program has to wait for something (a new connection, “room” to send data etc), and then switch over to processing another connection. Once that connection has to wait for something, chances are that what our earlier ‘wait’ was waiting for has happened, and that program can continue.

This truly offers us the best of both worlds: we can program as if only a single thing is happening at the same time, something we are used to, but the moment the computer has to wait for something, we are switched automatically to another part of the program, that is also written as if it is the only thing happening at the same time.

Actually making this happen is pretty hard however, because traditional computer programming environments don’t clearly separate actions that could lead to “waiting” from actions that should happen instantly.

A prime example of the first kind of action is “waiting for a new connection” - this might in theory take forever, especially if your website is really unpopular.

Things that should happen instantly include for example asking the computer what time it thinks it is.

Traditional operating systems can be instructed to be mindful of new incoming connections, and not keep the program waiting for them. This is what we described in the complicated “if X happened, if Y happened” scenario above.

They can also do the same for reading from the network and writing to the network, both things that might take time. This means you can ask the operating system ‘let me know when I can read so I don’t have to wait for it, and I can process other connections in the meantime’.

Furthermore, there are some limited tricks to do the same for reading a file. The problem is that back in the 1970s when most operating system theory was being invented, disks were considered so fast, nobody thought it possible you’d ever need to meaningfully wait for one. Of course disks weren’t faster back then, but computers were slower, and massively so. So by comparison, disks were really fast.

The upshot is that in most operating systems, disk reads are grouped with “stuff that should happen instantly”, whereas every computer user by now has experienced that this is emphatically not the case.

Modern operating systems offer only a limited solution to this problem, called ‘asynchronous input/output’, which allows one to more or less tell the computer to notify us when it has read a certain piece of data from disk.
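
On Linux and other UNIXes that limited facility is the POSIX AIO interface. A small sketch of asking for a read and later checking whether it has completed - the file name is arbitrary, and a real program would use aio_suspend() or a completion signal instead of spinning:

  #include <aio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <cerrno>
  #include <cstdio>
  #include <cstring>

  int main()
  {
    int fd = open("/etc/hostname", O_RDONLY);  // any readable file will do
    char buf[4096];

    aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    aio_read(&cb);                      // ask for the read; this call returns immediately

    while (aio_error(&cb) == EINPROGRESS) {
      // in a real program we would be serving other connections here
    }

    ssize_t got = aio_return(&cb);      // how many bytes actually arrived
    printf("read %zd bytes without ever blocking on the disk\n", got);
    close(fd);
    return 0;
  }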

However, it doesn’t offer the same facility for doing a lot of other things that might take time, like finding the file in the first place, or opening it. Things that in the real world take a lot of time.

So, we can’t truly enjoy the best of both worlds as sketched above, which would mean the programmer could write simple programs, which would be switched every time his program has to wait for something.

Enter ‘Generic AIO’

Zach Brown, who is employed by Oracle to work on Linux, has now dreamed up something that appears to never have been done before: everything can now be considered something that “might take time”.

This means that you can ask Linux to find a certain file for you, and immediately go on to process other connections that need attention. Once the operating system has found the file for you, it is available without waiting.

Although almost every advance in operating system design has at one point been researched already, this approach appears to be rather revolutionary.

It has ignited vigorous discussion within the Linux community about the feasibility of this approach, and whether it truly is the dreamt-of “best of both worlds”, but to this author, it surely looks like a breakthrough.

Especially since it unites the worlds of “waiting on a read/write from the network” with “waiting for a file to be read from disk”.

Time will tell if “Generic AIO” will become part of Linux. In the meantime, you can read more about it on LWN.


This draft is a work item of the DNS Extensions Working Group of the IETF!

Posted by bert hubert Fri, 12 Jan 2007 21:16:00 GMT

The workings of the Internet are described, or even prescribed, by the so-called ‘Requests For Comments’, or RFCs. These are the laws of the internet.

Today the IETF DNS Extensions working group accepted an “Internet-Draft” Remco van Mook and I have been working on. And the cool bit is that over time, many such accepted “Internet-Drafts” turn into RFCs!

Read about what our draft does here and here.

The actual Internet-Draft can be found over at the IETF, or over here as pretty HTML.

In short, this RFC documents and standardises some of the stuff DJBDNS and PowerDNS have been doing to make the DNS a safer place.

Besides the fact that it is important to update the DNS standards to reflect this practice, it is also rather a cool thought to actually be writing an RFC, especially one that has the magic stanzas “Standards Track” and “Updates 1035” in it.

So we are well pleased! Over the coming months we’ll have to tune the draft so it conforms to the consensus of the DNSEXT working group, and hopefully somewhere around March, it will head towards the IESG, after which an actual RFC should be issued.

Exciting!

