Right now, there is no good macro preprocessor for Java. Annotations come close, but they don't really fit the bill: in the Java VM, annotations are a runtime feature, and annotation processing can only create new classes, not enhance existing ones. This means that you cannot add setters and getters to a class.
When you look at OR mappers, they even do this at runtime, so there is no way to see what is actually happening: when an error occurs, the code being executed can be completely different from what you see in the source. Even decompiling the class file won't help, because the information isn't there yet. It is only added when the classloader reads the file.
So from a certain point of view, Sun's solution is the worst of all worlds: your code is changed at a point in time when you can no longer see it, and you cannot move the modification earlier into the compile cycle because the API simply doesn't allow it.
To know where you want to go, you must have a goal. The goal here is to reduce the amount of code to write for a certain feature. Specifically, the idea is to be able to move common, repeated code into a single place and be able to reference it easily.
The code must be more flexible than a method call and easier to manage than cut&paste.
A bound property in a Java bean is a field which sends notifications to listeners when it is changed. This means it is made up of these parts:
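The boilerplate those parts add up to looks roughly like this. This is a minimal sketch using the standard `java.beans.PropertyChangeSupport`; the class and field names are invented for illustration:

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

class Person {
    // listener management, repeated verbatim in every bean
    private final PropertyChangeSupport support = new PropertyChangeSupport(this);

    // the actual field
    private String name;

    public void addPropertyChangeListener(PropertyChangeListener l) {
        support.addPropertyChangeListener(l);
    }

    public void removePropertyChangeListener(PropertyChangeListener l) {
        support.removePropertyChangeListener(l);
    }

    public String getName() {
        return name;
    }

    // the setter must remember the old value and fire the event by hand
    public void setName(String name) {
        String old = this.name;
        this.name = name;
        support.firePropertyChange("name", old, name);
    }
}
```

Almost none of this code is specific to `Person` or `name`; it is exactly the kind of repeated pattern a macro should generate from a single annotated field.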
OR mappers will only get you so far. While they solve many of these problems, they also introduce new ones:
So what do we expect from AST Macros in this case?
Some simple examples:
Before we look at solutions, let's look at what the code ought to do in the end.
This leads to a couple of demands which an AST Macro Processor (AMP) must meet:
In a perfect world, an AMP should be able to modify the code on a source level and pass it back to an IDE, for example, so that I can see (and debug) what is actually compiled (instead of only seeing the Annotation).
SQL enhanced code is pretty similar to bound properties but more code is generated. The first step is to define the class which maps a database table to a Java object:
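Sketched in Java, such a mapping class might look like the following. The `@SqlEntity` annotation and its `table` attribute are invented names for illustration, not a defined API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// hypothetical macro annotation; the real name would be defined by the AMP
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface SqlEntity {
    String table();
}

// maps the database table "foo" to the Java class Foo
@SqlEntity(table = "foo")
class Foo {
    Long id;
    String name;
}
```

The class itself stays a plain value holder; everything database-specific would be generated by the macro from the annotation.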
After this is compiled, I want to see a special field "SQL" which I can use to build database queries like so:
This gets converted by the compiler into:
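The original code samples are not reproduced here, but the idea can be sketched in plain Java with a hand-written stand-in for the generated `SQL` helper. The method name and the emitted SQL string are only indicative:

```java
// hypothetical stand-in for the macro-generated "SQL" helper on Foo;
// a real AMP would generate this class from the annotated mapping class
class FooQueries {
    // returns the SQL the compiler would emit for a query expression
    String whereName(String placeholder) {
        return "SELECT id, name FROM foo WHERE name = " + placeholder;
    }
}

class Foo {
    // the generated static entry point referred to in the text
    static final FooQueries SQL = new FooQueries();
}
```

So a call like `Foo.SQL.whereName("?")` stands for a query expression that the compiler has already turned into the SQL string `SELECT id, name FROM foo WHERE name = ?`.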
The SQL object in Foo also gives access to the standard DAO methods like loading an object by its primary key:
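The call shape for such a generated DAO method could look like this sketch, with an in-memory map standing in for the database; the class and method names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// stand-in for the DAO part of the generated SQL helper; a real AMP
// would emit JDBC access code here instead of a map lookup
class FooDao {
    private final Map<Long, String> rows = new HashMap<>();

    void insert(Long id, String name) {
        rows.put(id, name);
    }

    // the standard "load by primary key" method mentioned in the text
    String byId(Long id) {
        return rows.get(id);
    }
}
```

With this, `Foo.SQL.byId(1L)` would load the object whose primary key is 1, without the user writing any lookup code.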
In addition to the simple bound property example, the AMP must also be able to note the usage of an annotated object, so it can convert the Groovy code into SQL at compile time (and possibly check it for mistakes).
Groovy 1.x must run on Java 1.4. We must decide what to do with non-macro annotations, and whether we want to support a switch to generate Java 5 class files (so Groovy can generate code for third-party APTs like Hibernate).
It seems that it is possible to write annotations into Java 1.4 class files (see Commons Attributes). But the question is: is this futile? Only a few tools support annotations on Java 1.4.
In this light, it makes more sense to add a switch that allows Groovy to write Java 5 class files, so users stuck on 1.4 can still use it and Java 5 users can upgrade when they want to.
The compiler needs a way to decide what to do with an official Java 5 annotation like javax.persistence.Entity, which is defined in EJB3: expand it as a macro, or pass it on into the class file so a third-party library/tool can process it later.
Here, the user might want to decide differently per class (e.g. handle most of these cases with Hibernate and some corner cases with her own AST macro).
For Groovy-specific macros, the solution is to add a marker interface to the macro annotation.
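In plain Java, annotation types cannot literally implement an interface, so the closest Java sketch of this marker idea is a meta-annotation on the macro annotation; all names here are invented:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// marker that tags an annotation type as a Groovy AST macro
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.ANNOTATION_TYPE)
@interface AstMacro {}

// a macro annotation carrying the marker; the compiler can check for
// the marker to decide whether to expand the annotation at compile time
@AstMacro
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Bound {}
```

The compiler would then expand any annotation carrying the marker and pass all other annotations through into the class file untouched.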