I never understood why some people count lines of software code. What is the goal behind it? Some people think you can measure the “size” or the “complexity” (whatever is meant by either of these terms) of a piece of code by counting the number of lines it consists of, or that you can use this metric to measure the efficiency of a developer writing code.

In my opinion, this approach does not work for code written in SQL in particular, nor for code in general.

The best example that “counting lines of code” is worth nothing is the following (which – by the way – has nothing to do with SQL):

In 1995, David Bailey, Peter Borwein and Simon Plouffe discovered the following formula:

pi = SUM(k = 0 .. infinity) 1/16^k * ( 4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6) )

(the so-called “miraculous Bailey-Borwein-Plouffe Pi algorithm” – see for example A New Formula for Pi).

What’s so miraculous about it?

It allows you to calculate the d-th hexadecimal digit of pi without being forced to calculate all the preceding d−1 digits.

Before that, no one had even conjectured that such a digit-extraction algorithm for pi was possible. Instead, it was commonly believed that you **have to** calculate all preceding d−1 digits before you can calculate the d-th digit. (The authors applied the algorithm to show that the ten billionth hexadecimal digit of Pi is 9.)
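A minimal sketch of such a digit extraction (my own illustration, not the authors’ code; `pi_hex_digit` is a hypothetical helper name): the key trick is that 16^(d−1−k) mod (8k+j) can be computed cheaply with modular exponentiation, so only the fractional part of the scaled series is ever kept – no earlier digits are needed.

```python
def pi_hex_digit(d):
    """Return the d-th hexadecimal digit of pi after the point (d >= 1),
    without computing any of the earlier digits.
    Illustrative sketch of the BBP digit-extraction idea."""

    def series(j):
        # fractional part of sum over k of 16^(d-1-k) / (8k + j)
        s = 0.0
        for k in range(d):  # terms with non-negative exponent: use modular power
            s = (s + pow(16, d - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = d               # tail terms with negative exponent shrink geometrically
        while True:
            t = 16.0 ** (d - 1 - k) / (8 * k + j)
            if t < 1e-17:
                break
            s = (s + t) % 1.0
            k += 1
        return s

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)

# pi = 3.243F6A88... in hexadecimal
print(pi_hex_digit(1))  # prints 2
print(pi_hex_digit(4))  # prints 15 (hex digit F)
```

Floating-point precision limits this plain sketch to modest values of d; the published implementations use more careful arithmetic for positions in the billions.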

Note that it took mankind some millennia to discover this algorithm:

Algorithms to calculate pi had been implemented and applied long before – see for example A history of Pi.

Note further that implementing this algorithm does not require many lines of code – you can find a very compact implementation in Python, for example, at Pi with the BBP formula (Python).
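For comparison, even summing the plain BBP series fits in a handful of lines. A sketch in Python (`bbp_pi` is my own illustrative name), using exact rational arithmetic so no floating-point error creeps in:

```python
from fractions import Fraction

def bbp_pi(terms):
    """Sum the first `terms` terms of the BBP series for pi exactly."""
    total = Fraction(0)
    for k in range(terms):
        total += Fraction(1, 16 ** k) * (
            Fraction(4, 8 * k + 1)
            - Fraction(2, 8 * k + 4)
            - Fraction(1, 8 * k + 5)
            - Fraction(1, 8 * k + 6)
        )
    return total

print(float(bbp_pi(10)))  # each term adds roughly one more hex digit of accuracy
```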

So, it is an excellent example of what you can measure by counting lines of code:

Not more than nothing.

Update 12/07/2011:

A very nice SQL example that “lines of code” measures nothing can be found in the asktom “sorting by number” thread.

Thanks to Laurent Schneider and a reader called Brendan!

They show how to solve a quite hard SQL problem with a two-liner.