Even if the languages I wrote in are dead, the knowledge of how to write good software is timeless.
There are some problems with this, though. Many of the techniques for good coding that I was taught, I no longer agree with. Sometimes I completely disagree with them. At university I was taught that there should be as many lines of comment as there are of code, which led to absurdities like:
C     Increment counter
      I = I + 1

In my code nowadays, comments are the technique of last resort for explaining what's going on. I much prefer to use plentiful, sensibly named functions to express my intent.
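The contrast might be sketched like this (a minimal Python illustration with invented names, not code from the post):

```python
# Comment-driven style: the comment carries the meaning.
# IS_LEAP below is an invented example, not from the original post.

def days_in_february_commented(year):
    # Check whether the year is a leap year
    if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
        return 29
    return 28


# Name-driven style: a sensibly named function replaces the comment.

def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


def days_in_february(year):
    return 29 if is_leap_year(year) else 28
```

The second version reads aloud ("29 if it is a leap year, else 28"), so the comment becomes redundant rather than forbidden.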
Nor do I any longer believe that every function should have only one return statement. Maybe that made sense once, but modern compilers are quite capable of coping with multiple return points. The emphasis on speed of execution and compactness that I learned at university also looks quaint. For most of my work, the most important quality that software can have, after correctness of course, is readability, so that the poor git who has to debug it three months later (quite often me) has a sporting chance of understanding it. Optimising for speed and size almost always reduces comprehensibility.
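To make the single-return point concrete, here is a hedged sketch (Python, with a made-up classification task) of the two styles:

```python
# Old-school single-exit style: one return, with the answer threaded
# through a result variable.

def sign_single_return(n):
    result = "zero"
    if n > 0:
        result = "positive"
    elif n < 0:
        result = "negative"
    return result


# Multiple-return style: return as soon as the answer is known,
# which keeps each branch short and self-contained.

def sign_early_return(n):
    if n > 0:
        return "positive"
    if n < 0:
        return "negative"
    return "zero"
```

Both behave identically; the difference is purely one of readability, and on longer functions the early-return version avoids tracking mutable state down to a single exit point.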
Another point that occurs to me is that I'd been programming for 16 years before I came upon Object-Oriented Design, and 25 before I had a chance to try Test Driven Development. I don't have any of my code from 16 years ago (what with it being the intellectual property of my former employers and all), but I rather suspect that most of the good practices I was using then haven't proved portable to the current day.
This isn't looking too good for the usefulness of my 30 years of experience. Let's look at my second argument instead.
I have acquired a sense of perspective, useful in a field like software engineering where the landscape changes at exponential rates.
As many others have already noted, IT is a field that undergoes exponential change. Processor speeds, memory size, disc capacity, etc. are all doubling every year or two. We all experience this; however, you have to be a certain age for the full magnitude of what's going on to sink in.
From this side of 50, I have a sense of perspective that younger (more energetic, faster learning, possibly more talented, not that I feel in any way threatened) programmers still lack. A good example of this involves the prefixes we use for size and speed. G for Giga is typical nowadays. When I started my professional career in 1980 though, memory size was typically measured in Kilobytes, and disc space in Megabytes. The HP1000 I worked on in 1980 had just 128K of RAM (and you could only address 32K of that at once). Three or four years later when my department was buying a Comart Communicator, my boss had to weigh up the pros and cons of getting one with a 10 or 20 Megabyte hard drive (in those days referred to as a Winchester drive).
My favourite example: in the mid-eighties Cambridge University was upgrading its mainframe system. I forget what computer exactly they were buying (though I remember it was replacing an IBM 3081), but I listened in stunned silence when a friend told me that it would have 50 Gigabytes of online hard drive. 'Giga' was new then, so I had to take a second to work out that that meant an astounding fifty thousand Megabytes! And that was to service a whole university. I have three times that much space in my Western Digital Passport external hard drive now, and I've nearly filled it up. Also, I've just noticed that, for the price I paid for it less than two years ago, Amazon are now selling the 500GB version. And there's also a 1 terabyte version! Tera is coming to personal computing. That's a million-fold increase in disc space in a couple of decades. Also, what used to need its own air-conditioned room now fits comfortably in your pocket.
All this is starting to make me sound like some boring old f**t, going on and on about what things were like when he was your age. I will therefore stop now, and try to salvage my argument in part 3.