Category Archives: Java

Hacking JCIFS to support setting share permissions

For a recent project I was working on, I needed to set permissions on a remote Windows share. All roads seemed to point to JCIFS as the library to do this. Unfortunately, JCIFS did not support the operations I required, so I set about seeing what it would take to add them. This is the story of my JCIFS journey.

JCIFS is not what you might expect from a typical modern open source project. What source control system do they use? Git? SVN? Surely not CVS? I was surprised to find that the answer was none: no source control system manages the official JCIFS releases. This stems from the fact that the codebase has a single developer/maintainer. The next thing I looked for was a bug tracking system. Same story: there is no bug tracking system for JCIFS either. The one thing JCIFS did have going for it was an active mailing list. Michael B. Allen, the developer/maintainer of the project, was very helpful in answering my questions and getting me going.

What I Needed

What I was looking for was the ability to set access control on file shares of a Windows server. I found a promising patch on the JCIFS mailing list that I thought was my answer: http://comments.gmane.org/gmane.network.samba.java/9045. It turned out not to be exactly what I needed. That patch can be used to set file permissions (as returned from JCIFS SmbFile.getSecurity()), whereas I needed to set permissions on the share itself (as returned from SmbFile.getShareSecurity()). The patch was a starting point, but it would need some work.

If you have done any coding in Java that requires interoperability with Windows systems, you have probably come across JCIFS. JCIFS is an “Open Source client library that implements the CIFS/SMB networking protocol in 100% Java.” Many other Java projects, such as J-Interop, use JCIFS internally. The reason is that JCIFS has implemented a Java version of Microsoft’s flavor of DCE/RPC. Leveraging this protocol, you can invoke pretty much any remote procedure call Microsoft has implemented. A great resource on what Microsoft offers in this area is the MSDN documentation on the Microsoft Communication Protocols (MCPP).

Microsoft has two protocols that I needed to add operations for:

  • [MS-SRVS]: Server Service Remote Protocol Specification
  • [MS-SAMR]: Security Account Manager (SAM) Remote Protocol Specification (Client-to-Server)

For SRVS, I needed to implement the NetrShareSetInfo call to set the permissions I was after. While working through this, I realized I also needed a way to look up a user SID by name, so I implemented the SAMR call SamrLookupNamesInDomain.

Implementing My Changes

Implementing changes to the DCE/RPC calls in JCIFS was not trivial to figure out. There was generated code (srvsvc.java and samr.java) produced from srvsvc.idl and samr.idl. I assumed CORBA at first, but quickly realized this was not regular IDL. It was not even the Microsoft IDL described in the Windows documentation; it had been massaged into a format that JCIFS could work with. I spent a long time trying to find out how this IDL was compiled until I got a reply on the mailing list pointing to a blog post by Christofer Dutz. He described a tool I had missed called midlc that is part of JCIFS. Unfortunately, it is not referenced on the main JCIFS website at all, other than appearing in the download listing. Following his instructions, I was able to get midlc compiled and running.

The IDL compiler can be downloaded from http://jcifs.samba.org/src/midlc-0.6.1.tar.gz. It was originally built for Linux but compiles and runs fine on my Mac. In a nutshell, to compile it:


$ cd midlc-0.6.1/libmba-0.9.1
$ make ar
$ cd ..
$ make

Running the compiler was pretty simple as well:


./midlc -v -t jcifs -o [pathto]/srvsvc.java [pathto]/srvsvc.idl

Writing code against these generated classes was fun. There were a lot of good samples, so it was not hard to get going.

Available for the future

I have made all of the work I did on JCIFS available on GitHub. Hopefully others will find it useful.

https://github.com/chrisdail/jcifs

Edit (March 30, 2012): Updated to include original setSecurity.patch I based my work on. This has since been removed from nabble.

JAXB Without an XML Schema

Have you ever received an XML sample without a schema that you wanted to use with JAXB or some other XML binding? It happens more often than I would like. The common response is simply not to use any XML binding at all. This is not ideal, since JAXB is so much easier to use than DOM. Yes, there are better Java (and Groovy) libraries out there for dealing with XML, but that is not the topic of discussion here.

You can always try to write your own schema from the sample provided, but that can take some time; often, by the time you are done, you could have just used something else. Another option is to have a tool generate the schema for you.

Let us consider the following scenario. You have been provided the following XML sample.


<?xml version="1.0" encoding="UTF-8"?>
<people>
    <Person id="123">
        <name>Chris Dail</name>
        <phone>555-1111</phone>
    </Person>
</people>

The first thing you need in order to use JAXB is a schema for this XML. There is a free software tool called the Trang Converter that can be used to convert between schema types. It has a cool feature that generates a schema from an XML sample file, which is what I am going to do here.

The XML editor I use is called Oxygen XML. It actually has the Trang Converter built in as an option. Going to Tools->Schema Converter gives you a UI on top of the Trang Converter.

Using this, you take the sample and generate a schema from it. After the conversion, you end up with a schema looking something like this:


<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
  <xs:element name="people">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="Person"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="Person">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="name"/>
        <xs:element ref="phone"/>
      </xs:sequence>
      <xs:attribute name="id" use="required" type="xs:integer"/>
    </xs:complexType>
  </xs:element>
  <xs:element name="name" type="xs:string"/>
  <xs:element name="phone" type="xs:NMTOKEN"/>
</xs:schema>

You can then use JAXB to generate your object model from this using the following command:


"%JAVA_HOME%\bin\xjc" -p com.chrisdail.jaxb.sample *.xsd

The result looks like this:


parsing a schema...
compiling a schema...
com\chrisdail\jaxb\sample\ObjectFactory.java
com\chrisdail\jaxb\sample\People.java
com\chrisdail\jaxb\sample\Person.java

Now you have a JAXB generated object model from just an XML sample file.

Here is an example of the generated Person.java class:


//
// This file was generated by the JavaTM Architecture for XML Binding(JAXB) Reference Implementation, vJAXB 2.1.10 in JDK 6 
// See <a href="http://java.sun.com/xml/jaxb">http://java.sun.com/xml/jaxb</a> 
// Any modifications to this file will be lost upon recompilation of the source schema. 
// Generated on: 2010.08.30 at 12:31:56 PM ADT 
//


package com.chrisdail.jaxb.sample;

import java.math.BigInteger;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlSchemaType;
import javax.xml.bind.annotation.XmlType;
import javax.xml.bind.annotation.adapters.CollapsedStringAdapter;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;


/**
 * <p>Java class for anonymous complex type.
 * 
 * <p>The following schema fragment specifies the expected content contained within this class.
 * 
 * <pre>
 * &lt;complexType>
 *   &lt;complexContent>
 *     &lt;restriction base="{http://www.w3.org/2001/XMLSchema}anyType">
 *       &lt;sequence>
 *         &lt;element ref="{}name"/>
 *         &lt;element ref="{}phone"/>
 *       &lt;/sequence>
 *       &lt;attribute name="id" use="required" type="{http://www.w3.org/2001/XMLSchema}integer" />
 *     &lt;/restriction>
 *   &lt;/complexContent>
 * &lt;/complexType>
 * </pre>
 * 
 * 
 */
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "", propOrder = {
    "name",
    "phone"
})
@XmlRootElement(name = "Person")
public class Person {

    @XmlElement(required = true)
    protected String name;
    @XmlElement(required = true)
    @XmlJavaTypeAdapter(CollapsedStringAdapter.class)
    @XmlSchemaType(name = "NMTOKEN")
    protected String phone;
    @XmlAttribute(required = true)
    protected BigInteger id;

    /**
     * Gets the value of the name property.
     * 
     * @return
     *     possible object is
     *     {@link String }
     *     
     */
    public String getName() {
        return name;
    }

    /**
     * Sets the value of the name property.
     * 
     * @param value
     *     allowed object is
     *     {@link String }
     *     
     */
    public void setName(String value) {
        this.name = value;
    }

    /**
     * Gets the value of the phone property.
     * 
     * @return
     *     possible object is
     *     {@link String }
     *     
     */
    public String getPhone() {
        return phone;
    }

    /**
     * Sets the value of the phone property.
     * 
     * @param value
     *     allowed object is
     *     {@link String }
     *     
     */
    public void setPhone(String value) {
        this.phone = value;
    }

    /**
     * Gets the value of the id property.
     * 
     * @return
     *     possible object is
     *     {@link BigInteger }
     *     
     */
    public BigInteger getId() {
        return id;
    }

    /**
     * Sets the value of the id property.
     * 
     * @param value
     *     allowed object is
     *     {@link BigInteger }
     *     
     */
    public void setId(BigInteger value) {
        this.id = value;
    }

}

Writing Java Performance Tests in Groovy

In my previous post, I mentioned writing performance tests any time you need to optimize slow areas of code. Writing effective performance tests can be tedious in Java: every test needs the same timing logic before and after the test body. Groovy's closures make it easy to separate the timing code from the actual test implementation. I write all my performance tests in Groovy because it simplifies the test logic and lets me focus on what I am trying to test.
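To make the contrast concrete, here is a sketch of the boilerplate a plain-Java test needs (the class name and message are mine); every additional test repeats the same timing scaffolding:

```java
public class TimingBoilerplate {
    public static void main(String[] args) {
        // Timing logic that must be repeated around every single test
        long startTime = System.currentTimeMillis();

        // The code under test
        Math.pow(2, 6);

        long deltaTime = System.currentTimeMillis() - startTime;
        System.out.println("Test 1: \ttime: " + deltaTime);
    }
}
```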

The basis of a performance test is simple: get the current time before the test, run the test, and get the time after. In Groovy, we can use a closure to express this:


def timeit = {String message, Closure cl->
    def startTime = System.currentTimeMillis()
    cl()
    def deltaTime = System.currentTimeMillis() - startTime
    println "$message: \ttime: $deltaTime" 
}

This allows you to call a test like this:


timeit("Test 1") {
    // This would be the code you want to test
    Math.pow(2, 6)
}

If you are going to write many tests, this format is much shorter than repeating the currentTimeMillis() calls as you would in Java. Also, no heavyweight testing framework is required. The ‘message’ parameter is a convenience so that the output of each test can be distinguished. The results look like this:


Test 1: 	time: 0

Right away you will notice that a time of 0 milliseconds is not that useful: the code simply ran too fast to measure. Yes, we could use nanoseconds and might get better resolution. What I prefer to do is run the test many, many times and take the average. This gives an average of how fast the code is and provides more repeatable numbers.

Updating the Groovy closure, we end up with the following, which runs the test 500 times:


def timeit = {String message, int count=500,  Closure cl->
    def startTime = System.currentTimeMillis()
    count.times { cl() }
    def deltaTime = System.currentTimeMillis() - startTime
    def average = deltaTime / count
    println "$message:\tcount: $count \ttime: $deltaTime \taverage: $average" 
}

The output of this looks like this:


Test 2:	count: 500 	time: 18 	average: 0.036

Another thing to consider on the JVM is discounting the first few runs. The first time Java executes a particular class, things are always slower: the VM has to load all of the classes involved for the first time, and subsequent invocations of the same code get faster. To account for this, I include a warming period in the tests: I run the code under test a number of times before recording the time, which discards the slower initial runs. The closure for this looks like this:


def timeit = {String message, int count=500, Closure cl->
    // Warming period
    20.times { cl() }
    def startTime = System.currentTimeMillis()
    count.times { cl() }
    def deltaTime = System.currentTimeMillis() - startTime
    def average = deltaTime / count
    println "$message:\tcount: $count \ttime: $deltaTime \taverage: $average" 
}

The output of this looks like this:


Test 3:	count: 500 	time: 6 	average: 0.012

Another thing you might want to do is run a multi-threaded test. In Java, this would require quite a few extra classes. In Groovy, a simple modification to the closure allows it to run in multiple threads. Here is the new closure, along with a test invocation using 5 separate threads:


def timeit = {String message, int numThreads=1, int count=500, Closure cl->
    // Warming period
    20.times { cl() }
    def startTime = System.currentTimeMillis()
    count.times {
        def threads = []
        numThreads.times { threads << new Thread(cl as Runnable) }
        threads*.start()
        threads*.join()
    }
    def deltaTime = System.currentTimeMillis() - startTime
    def average = deltaTime / count
    println "$message:\tcount: $count \ttime: $deltaTime \taverage: $average" 
}

timeit("Test 4", 5) {
    Math.pow(2, 6)
}

An extra parameter to the timeit closure lets you specify the number of threads to execute concurrently. The results are the following:


Test 4:	count: 500 	time: 465 	average: 0.93

As you can see, Groovy makes it much easier to write performance tests for Java code. The closure listed above can be reused in all sorts of different projects. Hopefully this snippet will make your life easier when it comes to writing your own performance tests.

Performance Tuning

Performance tuning is one of those black arts of programming. It takes skill to do it properly, and people often end up optimizing the wrong things. As the great computer science wizard Donald Knuth put it: “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil”.

I think of it in these terms: readability comes first and foremost, because readability leads to maintainability. If you then have a performance issue, worry about tuning performance. I am by no means saying you should completely ignore performance and brute-force everything; you need to be aware of performance and do things in an optimal way. You should simply not go out of your way to make something faster at the cost of readability.

Occasionally you will be tasked with performance tuning. Each of the last three major projects I worked on required it at some point, and on each of them I used the same basic tools to look for areas to optimize.

The Hunt

The first thing you must do when looking to boost performance is go on the hunt. It is important to know what is slow before you can make it fast. You will often be surprised: the thing you think is slow may not be, while something that seemed trivial may be the cause of many performance issues.

Before going on the hunt, you first must have the proper tools. Here are some essential tools for tracking down performance issues:

  • Profiler – A code profiler lets you see how much time your application spends on various tasks. At my company, we use Eclipse for our Java development, which includes a profiler as part of its testing and performance toolkit. There are plenty of commercial profilers out there that are likely much better. Pay attention to the ‘hot spots’ in your code that are executed more often than others: even if each iteration does not seem to take long, a small boost there can add up to a lot.
  • Poor Man’s Profiler – Sometimes you might not have a profiler, or you only want to look at a small section of code. In these cases, a few System.currentTimeMillis() calls will get you some timings. The project I most recently optimized already made extensive use of the Java Monitoring API (http://jamonapi.sourceforge.net/), which has the same effect as currentTimeMillis() but offers a more refined API, and it can also help in seeing how fast certain calls are.
  • Performance Test Suites – I like to write unit tests for the specific functionality I am trying to optimize. This makes it easier to profile and measure a specific part of the code without starting the whole application.
  • Process Viewer – Task Manager on Windows and top on Unix are invaluable as well. They let you watch CPU usage while running performance tests. A sure sign of a synchronization bottleneck in a multi-threaded application is a single CPU maxed out while the rest sit idle. Always develop on a multi-core machine if you are writing multi-threaded applications so you can spot these issues.

Approaching your Target

After you have found a performance issue, it is time to attack it. You know where the issue lies, but not yet its cause. There are a few things to look for:

  • Synchronization – A big performance issue I alluded to earlier is synchronization. In multi-threaded development, you sometimes need to work with shared objects, and the easy way to do this in Java is the synchronized keyword. Be careful about the scope in which it is used and keep that scope as narrow as possible. CPU usage is a good indicator of this problem. If this is your problem, look at a modern concurrency library such as java.util.concurrent: ConcurrentHashMap solves many of the issues around synchronized maps and is much better than Collections.synchronizedMap(). Many synchronization issues are difficult to track down because a debugger cannot show them to you.
  • Serialization – Serialization is another big performance hit. Anywhere you convert data objects to XML, JSON, or binary, on disk or in memory, you pay a cost. These operations are notoriously slow but often necessary. Make sure they are not done more often than they need to be; a cache in front of deserialization can greatly improve performance here.
  • Nickels and Dimes – Often there is no single performance issue causing all of the problems; more likely, a few small things add up over time. If you shave 1 millisecond off a call that is made 100,000 times, you have saved 100 seconds of processing time. That can be far better than shaving 50 milliseconds off a call that is made only once. This is where your profiler and performance tests help you know where the problem is.
  • Databases and Performance – If you are using a database and notice performance issues, check a few things. Make sure you are using database queries: most of the time the database can manipulate data faster than you can in code. Also make sure your tables have proper indexes so queries run fast. Occasionally things can be done faster manually in code, so run performance tests before and after to compare any changes.
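To make the synchronization point concrete, here is a minimal sketch (my own example, not from any project mentioned above) of ConcurrentHashMap letting two threads update shared state without an explicit synchronized block:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentCounter {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                // merge() is atomic on ConcurrentHashMap: no external lock needed
                counts.merge("hits", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counts.get("hits")); // 2000: no updates lost
    }
}
```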

The Cleanup

After you finish your performance tuning, it is very important to re-run your performance tests. You need to prove that the improvements you made had a positive effect on performance; if they did not, they weren't needed and are more likely to introduce bugs than anything else. If the performance did not improve, throw out the change and return to the hunt.

Along the same lines, it is important to hunt only one issue at a time. If you make two changes at once, you cannot tell which one produced the performance gain. Each change must be made in isolation so you can be sure it is required.

Maritime Dev Con 2010 Followup

The Maritime Dev Con was a huge success, with about 95 people attending the event. It was a great day for developers in the Maritimes. I had a great time and met a bunch of really cool people.

The presentations I gave went over well with the attendees. I'm putting the slides from the presentations up here in case you want to review them.

I'm also including the sample code from the presentations. The samples implement the modern hello world example in Java and Groovy: each uses the Twitter API to search Twitter for MaritimeDevCon and find my 'Hello World' tweet.

Modern Java Development

Slides: MaritimeDevCon2010 – Java Jumpstart


package com.chrisdail.monctondevcon;

import java.util.List;
import twitter4j.*;

public class ModernHelloWorld {
    public static void main(String[] args) throws TwitterException {
        Twitter twitter = new TwitterFactory().getInstance();
        Query query = new Query("MaritimeDevCon");
        List<Tweet> tweets = twitter.search(query).getTweets();
        for (Tweet tweet : tweets) {
            System.out.println(tweet.getFromUser() + ": " + tweet.getText());
        }
    }
}

Groovy Primer

Slides: MaritimeDevCon2010 – Groovy Primer


@Grab("org.twitter4j:twitter4j-core:2.1.0")
import twitter4j.*

def twitter = new TwitterFactory().instance
twitter.search(new Query("MaritimeDevCon")).tweets.each {
    println "$it.fromUser: $it.text"
}

Maritime DevCon 2010

There is a Maritime developers conference coming up on June 18th in Moncton. It will be a great opportunity for developers from Moncton and other areas of the Maritimes to get together and learn a bit about languages and technologies they might not have been exposed to. All of the presentations are limited to 45 minutes and will mostly serve as introductions to a language or technology.

Information about the DevCon can be found here: http://careertown.ca/devcon/.

I will be giving two presentations at this conference.

  • Modern Java Development – In this presentation I'm going to give an introduction to Java for non-Java developers, covering the basics of where and how to get started.
  • Groovy Primer – This is essentially going to be a rehash of the Groovy talk I gave at the Maritime Java User’s Group a month or so ago. This will focus on showing what Groovy has to offer (particularly to Java developers) and how to get started with Groovy.

Hope to see you there!

Introduction to Groovy Talk

Last night I gave a talk at the Maritimes Java User Group in Moncton: an introduction to Groovy for Java developers. I had initially created a three-hour internal course for iWave Software to bring them up to speed on what Groovy is and why we should use it, and I cut it down to about 1h 15min plus questions for last night's presentation.

It was a great third event for the Maritimes JUG in Moncton. We ended up with fewer people than we hoped, but everyone learned something and had a good time. In case you missed the talk or want a review, I have made the slides from the presentation available.

Version Control and Bug tracking Integration (with Subversion and Bugzilla)

Two of the most useful tools to a developer outside of their development environment are version control and bug tracking systems. Version control allows tracking of changes to the product and allows for branching and merging. Bug tracking systems allow for tracking issues with the product whether they be bugs or enhancements.

Even though these tools are often separate products, they share a major commonality: the code you are working with. Often you want to see, for any given bug number, what code was changed for that bug; likewise, for a change in version control, you want to see whether it was associated with a particular issue in the bug tracking software.

At the company I work for we use Subversion for version control and Bugzilla for bug tracking. We have some best practices around these tools to make things easier.

Version Control and Bug Tracking Best Practices

When resolving issues in the bug tracking database, our team always records the number of the build that contains the fix. That way, a person looking at the bug knows whether the build they have contains the fix. Any time our team fixes a bug, we add a comment that looks like this:


Build Fixed: 1.0.1.12354

The last number is the revision number in Subversion.

When we commit code changes to Subversion, we also include the bug number for the bug being fixed. Our commit messages always appear in this format:


Bug 1234: Fixed this bug

Subversion Tooling

Recently I came across a neat Subversion feature that links it to a bug tracking system: clicking a bug number in the Subversion history view takes you directly to that bug in the bug tracking software.

Enabling this feature is fairly simple and involves setting two properties in the Subversion repository. These properties need to be set on the root folder that you check your project out from. The feature is then automatically available for everything under that tree, but you must check out from that root for it to work. These are the two properties that need to be set:

  • bugtraq:logregex – This defines a regular expression used to match bug numbers in Subversion comments. For the pattern I listed above, we are using: [Bb][Uu][Gg] (\d+)
  • bugtraq:url – This defines the URL to open when the user clicks on a bug number. The browser is launched with this URL, with the bug number substituted for the %BUGID% parameter. For our Bugzilla repository we are using: https://some.server.somewhere.localhost/show_bug.cgi?id=%BUGID%
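The logregex pattern can be sanity-checked against our commit message format with a throwaway snippet (the class name is mine); group 1 is the bug id that gets substituted into the bugtraq:url:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BugRegexCheck {
    public static void main(String[] args) {
        // Same pattern as the bugtraq:logregex property
        Pattern p = Pattern.compile("[Bb][Uu][Gg] (\\d+)");
        Matcher m = p.matcher("Bug 1234: Fixed this bug");
        if (m.find()) {
            // Prints the captured bug number
            System.out.println(m.group(1));
        }
    }
}
```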

The following steps walk through setting this up using TortoiseSVN:

  • On the root folder of your subversion working copy, right click on the folder and click TortoiseSVN -> Properties.

  • Add each property listed above as new properties to the list.
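If you prefer the command-line client to TortoiseSVN, the same two properties can be set directly. This is a sketch, assuming the svn client and your working copy root; adjust the URL for your own Bugzilla server:

```shell
# Set the bug-number pattern and URL on the working copy root
svn propset bugtraq:logregex '[Bb][Uu][Gg] (\d+)' .
svn propset bugtraq:url 'https://some.server.somewhere.localhost/show_bug.cgi?id=%BUGID%' .

# Property changes are local until committed
svn commit -m "Enable bug tracker integration" .
```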

Groovier Integration with Third Party Systems

A lot of my work deals with integration with third-party systems. Each of these systems typically has its own API and mechanisms for handling errors. Often, writing some code to 'wrap' the API is required to make it easier to use from an application; this usually ends up being an 'adapter' from the third-party API's format to something more manageable and familiar to the application you are writing.

If you write this sort of thing in Java, you usually end up with a lot of duplicated code, particularly around error handling, that is hard to reuse without a lot of extra scaffolding. Lately, I have been using Groovy to solve these sorts of problems. The rest of this post shows a few ways I have learned to use Groovy here; the result is simpler, more readable code.

Groovy Closures for Error Handling

Have you ever written this around a few methods?


try {
    // Some code in here
}
catch (APIException e) {
    log.error(e.getMessage(), e);
    throw new WrappedException(e);
}
finally {
    // cleanup here
}

Using Groovy's closures, this can be written as a single closure that performs the error handling. Handling errors this way is much simpler and allows the code to be reused easily.


// Definition of error handling closure
def errorHandling = { Closure c ->
    try {
        c()
    }
    catch (Exception e) {
        log.error(e.getMessage(), e);
        throw new WrappedException(e)
    }
    finally {
        // cleanup here
    }
}

// Code with error handling
errorHandling {
    // Some Code in here
}

For those not familiar with Groovy's closures, let me explain what is happening here. The errorHandling variable is defined as a closure and is invoked around the code. This closure in turn takes another closure (a function or code block in other languages) as a parameter. It provides the stock error handling through the standard try/catch syntax, and inside the try block it invokes the closure the caller passed in. This allows the same error handling logic to be reused, with the code to run provided dynamically by the caller.
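The same pattern can be approximated in Java with a functional interface, though with more ceremony. This is a sketch under my own names: the WrappedException stand-in, the method name, and the returned string are all mine, and the cleanup is elided:

```java
import java.util.function.Supplier;

public class ErrorHandlingDemo {
    // Stand-in for the WrappedException used in the Groovy example
    static class WrappedException extends RuntimeException {
        WrappedException(Throwable cause) { super(cause); }
    }

    // Reusable wrapper: runs the supplied block inside stock error handling
    static <T> T errorHandling(Supplier<T> block) {
        try {
            return block.get();
        } catch (RuntimeException e) {
            throw new WrappedException(e);
        } finally {
            // cleanup here
        }
    }

    public static void main(String[] args) {
        // The lambda plays the role of the closure passed to errorHandling
        String result = errorHandling(() -> "some code in here");
        System.out.println(result);
    }
}
```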

Groovy Categories for Mapping

Another problem is adapting between types in your application and types in the third-party system. This is further complicated when dealing with collections of objects. Consider the following Java code to map from one system to the other:


// Assumes some object instance 'util' is provided to map between object types.

// Given an input of InternalObject internal
List<InternalObject> list = new ArrayList<InternalObject>();
List<ExternalObject> results = api.operation(util.mapToExternal(internal));
for (ExternalObject o : results) {
    list.add(util.mapToInternal(o));
}
return list;

Instead of defining mappings as methods in some utility class, Groovy categories can be used to provide a more readable syntax.


// Category definition
class MappingCategory {
    static List<InternalObject> toInternalList(List<ExternalObject> o) {
        o.collect { toInternal(it) }
    }
    
    static InternalObject toInternal(ExternalObject from) {
         // Map from External to Internal
    }

    static ExternalObject toExternal(InternalObject from) {
         // Map from Internal to External
    }
}

// Code using the category. Input object is 'input'
use (MappingCategory) {
    api.operation(input.toExternal()).toInternalList()
}

The Groovy category essentially adds dynamic methods to both InternalObject and ExternalObject. The category definition contains static methods that take one or more parameters; the first parameter is the class the method should be added to. When the new method is invoked, the object it is invoked on is passed as that first parameter.

Also in use here is the list 'collect' method. It replaces what Java required: creating a new list, looping over each item, converting it, and adding it to the new list. The collect method does all of that in a single step, using a closure that converts each object to the new format. The result is a new list of objects in the new format, produced by a single line of code.
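For readers on Java 8 or later: the streams API now gives Java an analogue of Groovy's collect. This self-contained sketch (my own toy data, not the mapping types above) shows the same transform-a-list-in-one-expression idea:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CollectDemo {
    public static void main(String[] args) {
        // Groovy's list.collect { it.length() } corresponds to map() over a stream
        List<Integer> lengths = Arrays.asList("one", "two", "three").stream()
                .map(String::length)
                .collect(Collectors.toList());
        System.out.println(lengths); // [3, 3, 5]
    }
}
```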

Putting it all Together

Using this category can also be combined with the error handling closure that was previously introduced. Consider the following updated version of the error handling closure.


// Definition of error handling closure
def errorHandling = { Closure c ->
    try {
        use (MappingCategory) {
            c()
        }
    }
    catch (Exception e) {
        log.error(e.getMessage(), e);
        throw new WrappedException(e)
    }
    finally {
        // cleanup here
    }
}

With the previously defined error handling closure and the category to handle the mappings, we can finally create a very minimal and readable adapter between our system and the third party system. Consider the following operation that uses the third party API. Both the input and the output use objects in the internal format.


public List<InternalObject> operation(InternalObject internal) throws WrappedException {
    errorHandling {
        return api.execute(internal.toExternal()).toInternalList()
    }
}

Groovy’s closures allow error handling to be defined more easily and centralized in a single location where the code can be reused. Groovy’s categories allow data transformations to be done in a more readable fashion. The result is code that is easier to read and use.

SpringOne2gx 2009 Post Mortem

I had the privilege of attending the SpringOne2gx conference this year in New Orleans. This is the first tech conference I have attended. Where I live, in Moncton, is a bit off the beaten path and is rather far to travel from to these types of conferences regularly. Past employers of mine have not always invested in their developers as much as they should. Thankfully, iWave Software was able to send me this year.

I had an excellent time at the conference and met a lot of smart and like-minded developers. There was plenty of great content at the conference and the Roosevelt hotel in New Orleans was a great place to host it. The food was amazing and the service top notch. On the whole the conference was a great experience.

For me, I made a Smörgåsbord of the conference, taking a bit from all of the tracks that were provided. In the rest of this post I would like to cover some details on the plethora of technologies on display. Most of them were from SpringSource, the commercial company that backs the Spring technologies. There was a lot of great stuff, though about some of it I am a bit more skeptical. The following are purely my opinions on what I saw. I am by no means an expert on any of these technologies, but I do want to give you my first impressions of them from the perspective of a Software Developer and Architect working in the Enterprise IT space.

Spring Framework 3.0
One of the main focuses of the conference was the upcoming Spring Framework 3.0 release. Beyond just the framework itself, many of the other technologies leveraged some of the new 3.0 features. RC2 of the release should be available shortly. I would like to touch a bit on the feature set and particularly what excites me.

  • Spring Expression Language (SpEL) – This is one of the coolest new parts of the framework. A new expression language can be used to evaluate expressions related to other beans. In simple cases you can use something like #{refBean.value} in either the value attribute on a property or in the annotation directly. More complex expressions are also supported. This will be able to simplify configuration a great deal.
  • @Configuration – This is a rework of the Spring JavaConfig project that allows you to write your spring configurations in Java code directly. This project is now merged into the Spring Framework 3.0. This feature did not excite me a whole lot. It seemed like something ‘cool’ but not something I would likely use. The main benefits of using this over XML were explained to be refactoring, strong typing and losing the verbosity of XML. The refactoring and strong typing are less of an issue if you are using the Spring IDE (or STS), since it does all of that for you and even shows you where errors are. As for the syntax, I found @Configuration to be clunky and still pretty verbose. If I wanted a more readable way to express beans in code, I would just use the Grails SpringBeanBuilder.
  • Java 5+ – Spring Framework 3.0 will be Java 5 and higher only. This is a great thing because it means that the core framework will fully support generics. All of the core libraries have been updated for this. In the past I stayed away from the JpaTemplate and other Spring libraries because the lack of generics made them more verbose than what I could write in wrappers of my own.
  • Task Executors – All of the scheduling stuff and executors were refactored. The Java 5 util.concurrent package is now used instead of the Spring wrappers that they had. Also, scheduling is supported as part of Spring now. So for simple scheduling jobs (including cron), only Spring will be required. This is nice since it is a common thing that is needed and Quartz is pretty heavy for this simple feature.
  • REST Support – Spring Framework MVC now supports REST. If you are a Spring MVC developer, this is very nice since you will be able to do REST directly from a @Controller method. If you are not using Spring MVC, there is no reason to look into this. The JAX-RS (JSR-311) spec is the way all of the other Java REST frameworks are going. I don’t see any reason to use this custom syntax over the standard for REST unless you are using Spring MVC.
  • OXM – The Spring WS project created a bunch of stuff for handling Object to XML Mapping (OXM). Basically this is ORM for XML instead of a database. JAXB is one of the most popular frameworks for this and it is built into the JDK. The OXM from Spring WS is now part of the core framework and allows anyone to have a single binding interface that can delegate to any one of the many OXM frameworks out there, including JAXB, XmlBeans, JiBX and Castor. This would be very nice for someone who wanted to support many different binding types but didn’t want to have to write all of that from scratch.

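The Task Executors item above notes that Spring 3.0 now delegates to Java 5's util.concurrent package rather than its own wrappers. As a rough sketch of the underlying JDK API that Spring builds on (plain java.util.concurrent code, not Spring's actual scheduling interface; the class and method names here are illustrative), a simple fixed-delay job looks like this:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class FixedDelayJob {

    // Schedule a task at a fixed delay and block until it has fired n times.
    static int runTimes(int n) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        final CountDownLatch remaining = new CountDownLatch(n);

        // Fire immediately, then again 50 ms after each previous run completes
        ScheduledFuture<?> handle = scheduler.scheduleWithFixedDelay(new Runnable() {
            public void run() {
                remaining.countDown();
            }
        }, 0, 50, TimeUnit.MILLISECONDS);

        remaining.await();        // returns once the task has fired n times
        handle.cancel(false);     // stop further runs
        scheduler.shutdown();
        return n - (int) remaining.getCount();  // latch is at zero here
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("task fired " + runTimes(3) + " times");
    }
}
```

Spring's contribution on top of this is declarative configuration and cron expressions, so the executor plumbing above stays out of application code.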
On the whole the Spring Framework 3.0 release looks solid. I am going to have to grab the RC and check it out.

Spring Roo
Spring Roo was one of the technologies being shown at the conference. I had heard of it before but had not really done much digging into what it was. In a nutshell, it is a coding-by-convention framework for building applications (mostly web) in Java. Similar to Rails or Grails, it provides a console based utility to create entity beans (model), controllers and views. Instead of taking the dynamic language approach, AspectJ is used to merge common tools and scaffolding into the Java beans you create. It is a very cool product, especially if you don’t have the luxury of using dynamic languages.

One of the more impressive aspects of this was the console they created. The console was intelligent and had full auto-complete support. I would like to see other console based applications use this type of thing in the future.

I played around with this a bit. It worked well for what it did. It will likely be used mostly for rapid prototyping. I think if I had to do a new web project, I would probably just do it in Grails since I am already comfortable with Groovy. This would be great for people learning Java or with a need to rapidly prototype something in Java using Spring. Also, it saves the hassle of setting up things like JPA and MVC. Once the scaffolding is in place, you can customize it how you need.

SpringSource Tool Suite 2.2 (STS for short)
The SpringSource Tool Suite is an Eclipse-based development environment for all things Spring. It includes the old Spring IDE capabilities for editing Spring XML files, plus support for Spring Roo, Groovy, Grails and Maven. The whole package works very well (except that the 2.2 release did not include the Groovy plugin).

Not much is really new here. I had been using the Spring IDE and Groovy plugins already. Most of the components are available standalone. This will be very useful for people who don’t have any of these plugins, and it provides a single download.

The main focus of the IDE was centered around deploying to dm Server and tc Server, both of which are included. The new Groovy plugin is great and has greatly improved over the last little while. It is available standalone, which is probably how I will use it for now as I don’t have a need for many of the other features at this time.

Spring Integration
Spring Integration is like an ESB inside a single JVM. It seemed like a good project if you have a need for this type of thing. For me, I see crossing JVM boundaries (clustering) as one of the most important aspects of an ESB, and this does nothing to tackle that problem. The samples that were presented showed how the integration could be done with OSGi. The capabilities were there, but the configurations were very verbose and required writing some code in places for filters and other such things. I kept thinking how much easier it would be to just do these things in code directly, and how much simpler they would be. Thread pools are so much easier now with Java’s util.concurrent, and dynamic languages do routing really well. Both of those I am comfortable with. I think this project still has a long way to go before it can truly compete with an ESB type product.
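To make the "just do it in code" alternative concrete, here is a minimal sketch of the kind of hand-rolled pipeline the paragraph above has in mind: a BlockingQueue standing in for a message channel and a pooled consumer as the endpoint. The class and method names are my own illustration, not anything from Spring Integration.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;

public class InJvmPipeline {

    // Send one message through a queue-backed "channel" to a pooled consumer
    static String process(String message) throws Exception {
        final BlockingQueue<String> channel = new LinkedBlockingQueue<String>();
        ExecutorService pool = Executors.newFixedThreadPool(1);

        // Consumer: a hand-rolled "service activator" that transforms the message
        Future<String> result = pool.submit(new Callable<String>() {
            public String call() throws InterruptedException {
                return channel.take().toUpperCase();
            }
        });

        channel.put(message);        // producer side: drop the message on the channel
        String out = result.get();   // wait for the consumer's reply
        pool.shutdown();
        return out;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(process("hello"));   // prints HELLO
    }
}
```

For a single JVM with a handful of channels, this is the sort of direct code that can be easier to follow than the equivalent XML configuration.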

Spring Batch
Spring Batch was shown alongside Spring Integration for handling large batch operations. The framework looked good and provided a lot of monitoring capabilities around the status of the batches.

SpringSource tc Server
tc Server is SpringSource’s enterprise version of Tomcat. Essentially they took standard Tomcat and wrapped it with better monitoring and clustered deployment capabilities. This looks like a great solution for services type companies who want to deploy to a more enterprise version of Tomcat. For a products company, though, it may be more difficult to manage since it is a commercial product. Most customers I have encountered either want to use their multi-million-dollar JEE environment or expect you to provide your own.

SpringSource tc Server Developer Edition
The Developer Edition of tc Server was announced at this conference. The main feature included here was called Spring Insight. This is a very cool tool targeted at application developers. It provides real time monitoring of an application that gives developers information about what is going on in the application. It includes health information, performance and the ability to drill down on web requests to see each call. Even SQL queries being run are shown from this console.

Everything is done through AOP runtime weaving, so no code changes are required to run in this environment. Also, because it is free for developers, they will have a great environment for analyzing the performance of their applications. This is definitely something to check out for anyone who has a .war application.

SpringSource dm Server
SpringSource dm Server is Spring’s OSGi server. Similar to tc Server, this is an enterprise grade OSGi server. They provide a lot of features to make OSGi easier, including being able to deploy a standard .war file to the server and have it run.

OSGi is ready for prime time and has proven itself. It is definitely not one-size-fits-all, though. It is not something you use just because you can or because it is cool. The demos of dm Server and OSGi with Spring were littered with caveats and workarounds. OSGi is a great platform but still has a lot of pitfalls for application developers. Yes, it has some cool things you can do with dependencies and versioning, but they are not free.

Grails
I did not get into any of the Grails sessions. There were too many sessions I wanted to attend, and I have already had some exposure to Grails, so I stayed away from these for the most part. On the whole, it seems like Grails has come a long way, from a competitive web framework to one of the best ones out there. If I had a new web application project to do, I would be most likely to choose Grails to do it in.

Griffon
Griffon is one of the newer Groovy frameworks to come out. It provides a coding-by-convention framework for Swing development and is targeted at developing Rich Internet Applications. The core is based on Grails and will likely be familiar to those who already have Grails projects. The framework is built around making MVC easy through Swing and the use of data binding. I am really looking forward to digging into this more.

The biggest hurdle I see is being able to leverage this technology in an existing application. One of our main products is a very large Swing based IDE. The code has been around for a long time and could benefit from newer patterns for GUI development. I would love to be able to integrate Griffon into it, but at this time it may not be easy. I intend to look into this more to see if it is possible.

Groovy
The Groovy language has also come a long way. I started using it when 1.0 was released. Since then it has really grown in capabilities. I learned a few tricks this time that I was not aware of. Most of them had to do with the annotations that were added in 1.6.

The biggest thing for Groovy, though, was the IDE support. I love the Groovy language but have found that I could write Java faster in many circumstances, even though I write more than twice the amount of code. The reason is the IDE support: code completion, import completion and refactoring are huge. The new Groovy plugin for Eclipse is fantastic and includes all of these. Finally, I feel I can switch to using Groovy as my primary development language on the JVM. This is also a big step toward convincing others of the benefits of Groovy.