All developers suck
Not only the top 20%, but every single one. Maybe they do not suck at programming, but at something else - documentation, communication skills (I remember some mails from my coworkers) or driving their car. So everybody sucks at most of the things he or she does. Unfortunately, most of us also suck at what we do at work.
I know that most of the code I have written so far sucks, and there is only a handful of programs I’m proud of (and I started some 15 years ago…). According to Jeff, I should count myself among the alpha programmers. But that doesn’t mean I’m good at what I do - just that I’m trying to get better. That’s why I’m sitting at my computer on a Sunday night writing a blog article.
But not everyone sees his profession that way. Many of my colleagues never had a formal education in computer science; some of them had other jobs and got downsized, others never got a job in their field. But I think they all have some areas of interest where they really excel - it just happens not to be their job. Maybe it’s something you cannot earn money with, maybe they just don’t want to turn their hobby into their main job.
For companies trying to get their programming jobs done, the best advice could be to try to find these 20% programmers, and to encourage all the others to think about their work. Educating them is one way; finding other incentives for them to get better is another. The wrong way is to aim low and try to write your software in ways you think even the worst 10% of your developers can comprehend. Do that, and you will get the worst 10% of software ever written.
(It’s interesting - in programming, we accept it as normal that there are bad programmers, and we think we need to live with them. When my car needs fixing, I would not give it to a carpenter, but require someone who knows his trade.)
Code generating code
At work, we are currently experimenting with model driven software development. In the past, we used the UML2 editor on Eclipse to create models of our persistent objects and later generated code directly from the model. The step from creating only Java code to creating other artifacts as well (like SQL scripts or code to import the persistent data from XML files) was small; the step to a more abstract level was a little bit larger. Eclipse makes this somewhat simple because it delivers an infrastructure for modeling, transformation and code generation. To me, the most interesting part is the decision of what the higher-level model should look like.
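The basic idea - one model rendered into several artifacts - can be sketched in plain Java. This is only an illustration of the principle; it stands in for the Eclipse/UML2 tooling, and all names in it are made up:

```java
import java.util.List;

// Sketch of "one model, many artifacts": a tiny hand-rolled model of a
// persistent object, rendered both as a Java class and as a SQL script.
// These classes are illustrative only, not part of any real framework.
public class ModelToArtifacts {
    record Property(String name, String javaType, String sqlType) {}
    record Entity(String name, List<Property> properties) {}

    // Artifact 1: a Java class with one field per property.
    static String toJava(Entity e) {
        StringBuilder b = new StringBuilder("public class " + e.name() + " {\n");
        for (Property p : e.properties())
            b.append("    private ").append(p.javaType()).append(" ").append(p.name()).append(";\n");
        return b.append("}\n").toString();
    }

    // Artifact 2: a SQL DDL script generated from the very same model.
    static String toSql(Entity e) {
        StringBuilder b = new StringBuilder("CREATE TABLE " + e.name().toUpperCase() + " (\n");
        for (Property p : e.properties())
            b.append("    ").append(p.name().toUpperCase()).append(" ").append(p.sqlType()).append(",\n");
        b.setLength(b.length() - 2); // drop the trailing comma
        return b.append("\n);\n").toString();
    }
}
```

The point is that adding a third artifact (say, the XML importer) only means adding a third render method against the same model.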
To me, this decision illustrates the problem with MDSD: it forces you to commit to a model. You then need to do everything in this model, and hope it fits your needs. If not, you need to create another, higher-level model, and transform it into the first one. There is no concept of ad-hoc code generation. But it is exactly this that makes generating code so useful - the ability to replace every repetition of code by generating it from a higher-level description.
MDSD only seems to be useful if the higher-level model is used multiple times, because its definition and all the work around model transformation are too much effort for a single use (or a single project). But code generation techniques are designed to be simple, so they are much more economic to use. In a recent example, I used grep and a small shell script to create stubs from about 100 XML files (we have a dynamic lookup mechanism in our software, and these files would otherwise have been visible to the end user). This is code generation - even though I executed it only once. With the proper infrastructure in place, I would have written a small program to determine all the files to be used, and would have created all the stubs during every build.
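That kind of ad-hoc stub generation could look roughly like this as a small Java program (the directory layout and naming scheme are hypothetical - the real thing was a grep one-liner):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.*;
import java.util.stream.Stream;

// One-off stub generation: for every XML file found, emit a minimal Java
// stub class named after the file. All paths and names are hypothetical.
public class StubGenerator {
    public static void main(String[] args) throws IOException {
        Path source = Paths.get(args.length > 0 ? args[0] : "descriptors");
        Path target = Paths.get(args.length > 1 ? args[1] : "generated");
        Files.createDirectories(target);
        try (Stream<Path> files = Files.list(source)) {
            files.filter(p -> p.toString().endsWith(".xml"))
                 .forEach(p -> writeStub(p, target));
        }
    }

    // Derive a class name from the file name and write an empty stub class.
    static void writeStub(Path xml, Path target) {
        String name = xml.getFileName().toString().replace(".xml", "");
        String className = Character.toUpperCase(name.charAt(0)) + name.substring(1) + "Stub";
        String code = "public class " + className + " {\n"
                    + "    // generated from " + xml.getFileName() + "\n"
                    + "}\n";
        try {
            Files.writeString(target.resolve(className + ".java"), code);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Hooked into the build, this would regenerate the stubs on every run instead of once.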
So, that’s why I’m currently trying to build a small code-generation infrastructure for Java. It uses BeanShell scripts for filling an (internal) model, which is then used by Freemarker templates to create one or more artifacts. These artifacts can be files, or just other BeanShell scripts, which are then fed back into the generation engine, and the process starts over. I would prefer to eliminate the internal model (so I could have BeanShell scripts directly creating other scripts), but since Java code cannot be handled as data, this is not really feasible. Even more important is the ability to generate the final files incrementally. Often I need to add some methods to the generated Java classes, and I don’t want to lose them every time I regenerate.
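One common way to get that incremental behaviour is protected regions: hand-written code placed between marker comments survives regeneration. A minimal sketch - the marker syntax here is my own invention, not part of any tool:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Protected-region merge: regions in the previously generated file are
// carried over into the freshly generated text, so manual additions survive.
public class ProtectedRegions {
    private static final Pattern REGION = Pattern.compile(
        "// PROTECTED (\\w+)\\n(.*?)// END \\1\\n", Pattern.DOTALL);

    public static String merge(String generated, String previous) {
        // Remember the hand-written body of each region in the old file.
        Map<String, String> saved = new HashMap<>();
        Matcher old = REGION.matcher(previous);
        while (old.find()) saved.put(old.group(1), old.group(2));

        // Re-emit the fresh output, substituting saved bodies where they exist.
        Matcher fresh = REGION.matcher(generated);
        StringBuilder out = new StringBuilder();
        while (fresh.find()) {
            String name = fresh.group(1);
            String body = saved.getOrDefault(name, fresh.group(2));
            fresh.appendReplacement(out, Matcher.quoteReplacement(
                "// PROTECTED " + name + "\n" + body + "// END " + name + "\n"));
        }
        fresh.appendTail(out);
        return out.toString();
    }
}
```

The generator always emits empty regions; the merge step then splices the developer’s methods back in.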
So, there is still some work to be done, but I hope that it will prove useful (and I think I learned something new from LISP).
In computer science, all great ideas have already been thought
Cruizer said that .NET 3.5 does things which Smalltalk already did nearly 40 years ago. A while ago, while learning more about Smalltalk history (and about the great things therein), I came to the conclusion that all great ideas in computer science were conceived before 1975:
- all programming paradigms: LISP (1958), Smalltalk (around 1970), FORTRAN (1953), PROLOG (about 1970), FORTH (1970), COBOL (1960)
- the relational database (Codd, 1970), OO-Databases in the 70s
- Garbage Collection and virtual machines: LISP & Smalltalk
- the laser printer: 1970
- graphical displays and CAD systems: 1963 (Sketchpad)
- the mouse: 1963 (Doug Engelbart)
- Hypertext: 1945 (Vannevar Bush’s memex) and 1965 (Ted Nelson’s Xanadu)
- computer collaboration: 1968 (Engelbart’s oN-Line System)
- the computer demo: 1968 (NLS)
- the GUI: 1970 (for the Dynabook, in Smalltalk)
- Networking: 1969 (ARPANET)
- Laptops: 1970 (Alan Kay’s Dynabook)
- WYSIWYG: 1974 (the Bravo editor for the Alto computer)
- and many more
What’s most depressing for me: many of the great ideas behind these inventions have been forgotten by the majority of developers, architects and computer engineers (or maybe they just ignore them). And so we are doomed to invent all these ideas again - in most cases in a much weaker version.
Asynchronous messages as OOP
Originally I wanted to write about cargo cult programming, but then I read Michael Feathers’ latest post, and the idea was too good to let it pass by.
Since I’m currently struggling with creating a component system which should serve as the base for a somewhat large domain model, I’m playing with the thought of seeing each component as its own small server, which can be located on some arbitrary system. This leads to a distributed system (like the Amoeba operating system), with all its advantages and drawbacks. My biggest problem so far is the communication between the components:
- either I have RPC-styled services, which results in a SOA-styled architecture (which would be really awful)
- or I have real messaging between objects, and must deal with all the problems that can result from a missing answer to a message
Michael solves this by removing the need for answers to messages - messages just generate more messages, and somewhere in the end the right thing happens. Surely this is the essence of OOP - Alan Kay not only said that OOP for him is just passing messages between objects, he also said that on the Internet (as the prime model of a distributed system) each object should have its own IP address (or at least its own URL).
But since my domain model system needs to react to requests from users (it should serve as the base for large web applications), for now I need some kind of answer to my messages. For that, I can always block and wait for the answer to flow back to the original requester.
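A minimal sketch of that compromise in Java, assuming components with mailboxes: every message carries a reply future, a component that needs an answer blocks on it, and everyone else can simply ignore it. All class and method names here are illustrative, not from my actual system:

```java
import java.util.concurrent.*;

// Message passing with an optional blocking reply: the requester blocks on
// a future until the answer flows back; fire-and-forget senders never wait.
public class MessageDemo {
    record Message(String body, CompletableFuture<String> reply) {}

    static class Component implements Runnable {
        final BlockingQueue<Message> inbox = new LinkedBlockingQueue<>();

        public void run() {
            try {
                while (true) {
                    Message m = inbox.take();
                    // Handling a message may just send more messages; here we
                    // complete the reply so a waiting caller can continue.
                    m.reply().complete("handled: " + m.body());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // Synchronous facade over the asynchronous mailbox.
    public static String ask(Component c, String body) throws Exception {
        CompletableFuture<String> reply = new CompletableFuture<>();
        c.inbox.put(new Message(body, reply));
        return reply.get(1, TimeUnit.SECONDS); // block until the answer arrives
    }
}
```

The timeout on `get` is the price for the missing-answer problem: a lost message surfaces as an exception instead of a hang.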
I will need to think more about this - the advantages of this model are way too large to pass up, and I hopefully can deal with all the consequences.
Fewer people are faster
Smoothspan writes: To Build Better Software, You Need Fewer People. He talks about the problem that larger teams tend to be slower because of all the communication issues between their members. Ten developers seem to be a reasonable limit for a software team, but I think there are tasks where the limit is even lower. I’m currently working on a team trying to develop a new architecture for a whole software system. We started with about 8 engineers, and it was impossible even to reach a conclusion about the requirements for the new architecture. When the team shrank to 4 people, with 2 sharing the main work, it was a matter of a few days to get the ideas flowing. We still have the larger team in the background, and we will need to defend our results in a presentation, but the main work is no longer constrained by all the goals these people are trying to reach. With 2 people sharing their ideas on a daily basis (while working on different layers of the architecture), we were able not only to define the goals in a nice way, but also to define the architecture in a comprehensive and understandable way - something a committee will never be able to do (think about all the WS-Deathstar mess). It helps that we are standing on the shoulders of giants; reading papers from Richard P. Gabriel and Alan Kay was a huge inspiration.
But back to team size - the limiting factors are communication and diverging goals. There is no simple way to have all the ideas of 8 or more people flowing smoothly between all of them, discussed and criticized. This is problematic even with 4 people, as in our team. The second limiting factor could be solved by a (maybe benevolent) dictator who just decides what the goals of the project are. But if you want a committee to reach a coherent set of goals, you will need a really long time.
So, for development, 10 people may be the upper bound for a working team, but for creative work like designing a system architecture, I think 3 is the best size. One person can run off in all the wrong directions, and 2 people can get stuck because they cannot resolve their differences. But 3 people can communicate effectively, and should be able to resolve all problems in a short time.
FORTH programs are also DSLs
I explained in the last entry how I see Lisp programs as domain specific languages (or rather, how Lisp as a language encourages writing all programs as DSLs). When reading Richard W.M. Jones’ explanation of his minimal FORTH compiler, I was reminded that FORTH plays in the same league. You write your program by building new words based on the already existing words, and try to capture the problem domain with them. In the end, you should have a minimal set of words describing your program. This is just another description of a domain specific language.
When talking about programming languages, I have no real idea how to look at Smalltalk. On the one hand, it uses a very small set of keywords (just 5) and concepts (namely message passing and assignment), and builds everything from there. But I cannot see how it encourages writing programs as ever more specific problem descriptions the way Lisp and FORTH do.
Thinking about model driven software development
I hate buzzwords, and MDSD is one of them. Not so much because it is a bad idea, but because it is so often misunderstood. When someone talks about model-driven development, he usually means two things:
- drawing some pictures (or diagrams, if a tool is used)
- believing that these pictures can solve all development problems, and that there is no need for further abstractions
I have come to see these pictures as a special kind of domain specific language. It is graphical, and will therefore require special tool support (as long as you don’t want to generate code you can draw the diagrams on a flip chart, but generating code is what MDSD is all about). This also means that it is always difficult to create these diagrams from another tool (that is, from a higher-level language).
The last point is the main one - DSLs are created to solve some problems more easily than a general programming language can. The DSL abstracts some things away, and makes other things easier along the way. But this doesn’t mean that it is the end of the road - for some even more specific problems it can be a good idea to create an even more abstract DSL which generates code in the former DSL. If doing so is easy, developers are encouraged to do it. This can be seen in the Ruby & Rails community - as Jamis Buck stated: /most well-written Ruby programs are already a DSL/. For me that’s one reason development in Ruby is so fast, and so much fun.
Of course, Ruby did not invent this. Lisp with its /defmacro/ did it some decades earlier, and took it to a much more extreme level: since data is code, and code is data, every DSL specified in Lisp is also Lisp code - which means that you need no special facilities for code generation, and you get all the power of Lisp on every level of abstraction you want to create. To adapt Jamis’ quote: in Lisp it is not possible to write a program which is not a domain specific language.
For my journey with Common Lisp, this means that I now need to write Lisp programs :)
Learning LISP
I have wanted to learn Lisp for some years now, but I never got around to doing it. Maybe it’s just that Lisp is too different from all the languages I have used so far (which include COBOL and Forth, but were mainly Pascal, C++ and Java for the last 8 years). Maybe it’s the bad memories of my time at university - we had a course on artificial intelligence there. We spent the 3 months assigned to learning Lisp just fiddling around with lists and CAR and CDR (the other 3 months were more useful - we used PROLOG to model relationships (mother, father, child etc.)).
But I knew that at some point I would need to learn it - there are just too many good concepts in there, and one should know them. So when I stumbled across Practical Common Lisp it was the right time. I was already learning Scala (which is the nearest thing for any Java developer trying to get his head around functional programming), so the step to Lisp seemed not so big this time.
Installing LispBox was a small adventure - on Windows it wants to create some directories on drive C, which is not allowed if you have no administrative rights. On Ubuntu you can download everything via apt, but you need to activate SLIME manually in Emacs (which is explained nicely in the manual). After I figured everything out, it worked like a charm, though.
So I’m currently working through chapter 5, “functions”, and have already learned my first lesson: code generation, when done right(TM), can be a really powerful tool for better code.