Sweet Simplicity of a REST App with Spring Boot

Did you know that you can build a fully featured REST app, right down to the database, with only three Java classes?  Yeah, you can, and it’s pretty sweet – here’s the skinny.

What you’re going to need

The ingredients list is pretty straightforward – you’ll be working with the following packages:

  • Spring Boot (you probably figured that out from the title)
  • The Spring Data REST package
  • All of the dependencies that these pull in – but you probably won’t care too much about these

That’s pretty much it – aside from Java, Gradle, and your favorite IDE.

What we’re going to build

We’re going to start off with something simple – say we need a micro-service to keep track of registered users in our system.  We need to be able to store user data, allow it to be retrieved, updated and deleted.  In other words, a typical CRUD interface.

The interface, of course, will be exposed as a REST service – in addition to the regular old GET, POST, PUT and DELETE methods, though, we’ll want our API to be discoverable, so we’ll also be exposing a HATEOAS interface, allowing clients to discover, and dynamically adapt to, our service as we expand it in the future.

We’re already talking about a fair amount of functionality here, but trust me, we won’t be breaking the three class rule.

The Model

Our model starts with a simple User class:

public class User {
    private String firstName;
    private String lastName;
    private String email;

    //Getters, Setters, equals and hashCode methods removed
}

Simple enough place to start – I did remove some boilerplate code from that sample, but I literally used IntelliJ to generate it all, so it really wasn’t that interesting.

The Repository

Our model, of course, is pretty much useless on its own – this is where the Spring Data project comes in.  Spring Data is a set of components that help make it easier to manage data – the range of functionality provided is huge, and best discovered on your own here.  For this tutorial, the short version is that we’ll be taking advantage of Spring Data’s ability to make it really easy to work with JPA objects.

JPA?  I’ve been duped!

Yeah, I know, I didn’t mention anything about JPA earlier, but the fact of the matter is that for the very basic functionality we’re talking about here, we just don’t need to stress out about it too much.  Here’s all you really need to know:

  • JPA will be storing our Model object in a single table named after the entity
  • The columns will be named after the field names, and will be typed as varchars
  • You won’t be writing any SQL
  • Hibernate will be doing the actual work for us
  • You can customize pretty much everything above, if you want
  • Oh, and in case anyone cares, JPA stands for Java Persistence API, and you can read about it here.

Carrying on

Now that we’re all more comfortable, the first thing we need to do is update our model a bit – we’re going to give it a db-generated primary key, and we’re going to annotate it as an Entity, so the system knows that it should care.

@Entity
public class User {
    @Id @GeneratedValue
    private long id;
    private String firstName;
    private String lastName;
    private String email;

    //Getters, Setters, equals and hashCode methods removed
}

Now that you’ve added a field, don’t forget to regenerate your equals and hashCode methods, and give it its own getter and setter.
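
In case it helps, here’s one plausible shape for those regenerated methods, with the new id field included – your IDE’s output will differ in the details, so treat this as a sketch rather than the tutorial’s actual code:

```java
import java.util.Objects;

// One possible shape for the IDE-generated methods (details vary by IDE),
// now including the new id field:
public class User {
    private long id;
    private String firstName;
    private String lastName;
    private String email;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    // ...remaining getters and setters omitted, as before...

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof User)) return false;
        User u = (User) o;
        return id == u.id
                && Objects.equals(firstName, u.firstName)
                && Objects.equals(lastName, u.lastName)
                && Objects.equals(email, u.email);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, firstName, lastName, email);
    }
}
```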

We’re still on just a single class, of course – here comes our second.  Well, nearly – it’s an interface, actually:

@Repository
public interface UserRepository extends PagingAndSortingRepository<User, Long> {
    public User findByEmail(@Param("email") String email);
}

So what have we done here?  We’ve created the interface for our repository, extending the PagingAndSortingRepository that Spring Data provides – as you can probably guess, Spring Data assumes a lot for us here, since we didn’t need to declare any ‘save’, ‘update’, ‘delete’, or similar methods.  By extending this interface, we actually get a whole bunch of good stuff:

  • Full CRUD functionality, including the ability to load all entities, load a single entity by primary key, and of course save, update and delete entities.
  • The additional ability to page and sort our result sets, because nobody wants to load our entire database all at once.
  • A standard set of query extensions – as you can see here, we have a ‘findByEmail’ method that allows us to define queries in a really simple manner, based on the field names on the Entity.
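
To make that naming convention concrete, here’s a toy sketch of the idea – this is NOT Spring Data’s real query parser, just an illustration of how a method name like findByEmail can be mechanically translated into a query based on the entity’s field names:

```java
// Toy illustration of the derived-query naming convention -- not Spring
// Data's actual parser.  The idea: strip the "findBy" prefix, lower-case
// the first letter to recover the property name, and build a query.
public class DerivedQueryDemo {
    public static String toQuery(String methodName, String entity) {
        String property = methodName.substring("findBy".length());  // "Email"
        String field = Character.toLowerCase(property.charAt(0))
                + property.substring(1);                            // "email"
        return "SELECT e FROM " + entity + " e WHERE e." + field + " = :" + field;
    }
}
```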

And Finally – the Main Class

Of course, we need something to run – so here is our main class.  As you can see, we’ve annotated it with @SpringBootApplication, and our main class is calling SpringApplication.run:

@SpringBootApplication
public class Application {
    public static void main(String... args) {
        SpringApplication.run(Application.class, args);
    }
}

All we need now is a build script:

buildscript {
    ext {
        springBootVersion = '1.4.1.RELEASE'
    }
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
    }
}

apply plugin: 'java'
apply plugin: 'org.springframework.boot'

jar {
    baseName = 'rest-in-three-classes'
    version =  '0.0.1'
}

repositories {
    mavenCentral()
}

sourceCompatibility = 1.8
targetCompatibility = 1.8

dependencies {
    compile('org.springframework.boot:spring-boot-devtools')
    compile('org.springframework.boot:spring-boot-starter-data-jpa')
    compile('org.springframework.boot:spring-boot-starter-data-rest')
    compile('com.h2database:h2')

    testCompile('org.springframework.boot:spring-boot-starter-test')
}

And away we go!

Shenanigans!  I Call Shenanigans!

Yes, really, that’s it.  Don’t believe me?  Build and run the thing with ‘gradle bootRun’, and then open http://localhost:8080/users in your favorite browser.  This is what you’ll see:

{
  "_embedded": {
    "users": []
  },
  "_links": {
    "self": {
      "href": "http://localhost:8080/users"
    },
    "profile": {
      "href": "http://localhost:8080/profile/users"
    },
    "search": {
      "href": "http://localhost:8080/users/search"
    }
  },
  "page": {
    "size": 20,
    "totalElements": 0,
    "totalPages": 0,
    "number": 0
  }
}

Instant REST service!  Go ahead, play around — send a POST to the same URL with this JSON to create a new user:

{
  "firstName": "Bing",
  "lastName": "Crosby",
  "email": "white@christmas.com"
}

List them, PUT them, DELETE them – it all works!

But Where’s Everything Else?

I know, you really want a super complicated Spring configuration – sorry, not here.  You were really excited to implement that repository interface – nope, not today.  You even wanted to download and install your favorite Servlet engine, and configure it just so – sorry to disappoint!

There’s no sorcery here – this is Spring Boot in action, a set of libraries that make it very easy to build small, nimble, easy to extend and configure micro-services.  It’s no longer a hassle to set up a new project – you can literally do it in five minutes. But before we congratulate ourselves, let’s take a closer look at what’s going on here.

Simplified Configuration

I’ve been a fan of Spring for years, ever since I realized I could use it to make my code easier to read and more testable, and to get the transactionality of EJB without miles and miles of boilerplate code (anyone remember EJB 2.0?).  But Spring’s Achilles heel has always been the configuration – when it’s working, it’s magic, but when it’s not, it’s maddening.

Spring Boot (and, in fact, Spring 4 in general) addresses this with some simple and rather clever auto configuration.  With Spring Boot, all you need to enable a certain feature is to include a ‘starter library’ on the classpath.  There are a slew of these available, both from Spring, and from third parties – Spring has even provided what looks like a pretty comprehensive list of both.

So in our case, we’ve included ‘spring-boot-starter-data-rest’ on our classpath, which gives us a whole lot:

  1. It adds the Spring Data libraries to our class path, obviously
  2. It includes the Tomcat Servlet engine on our classpath, and embeds it into the jar file.  Yes, this makes the jar file larger than it otherwise would be, but it tremendously simplifies the deployment of our app
  3. It includes a bootstrap library that ties everything together when executing the jar file
  4. Our Repository interface has a full REST web service defined and implemented automatically, complete with paging support, and a full HATEOAS design, all based on the definition of the Repository and the Entity class.  Even the findByEmail method is exposed as a search resource.

Repository Implementation

The ‘spring-boot-starter-data-jpa’ library is what ties Spring Data and Hibernate together – it takes our Repository interface and provides a default implementation, meaning that we are free to focus our JPA efforts on the mapping, and we don’t need to touch the API.  While this doesn’t mean we can get by without understanding what’s going on behind the scenes, it allows us to simply eliminate an entire class of code that tends to include a lot of boilerplate, and can be error prone.

In addition, it tremendously simplifies the configuration – simply add a JDBC driver to the classpath and the connection info in an application.properties file, and you’re all set to use that database.  Heck, if you include the H2 in-memory JDBC driver, as we do above, it will start and stop the database for you with no further configuration at all – sweet for testing.
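
For example, pointing the same app at an external database would look something like the following application.properties – a hypothetical sketch, with made-up host, schema, and credentials, and assuming you’ve swapped the H2 dependency for the MySQL JDBC driver:

```properties
# Hypothetical example -- use your own host, schema, and credentials.
# Requires the MySQL JDBC driver on the classpath instead of H2.
spring.datasource.url=jdbc:mysql://localhost:3306/userdb
spring.datasource.username=appuser
spring.datasource.password=secret
```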

JPA is not my favorite library – it gives us Annotation overload at times, it’s tricky to work with complex object relationships, and it provides us with a query language that’s close enough to SQL to look familiar, but different enough to not work the way I usually think it should – but with a library like Spring Data, it’s hard to argue that this isn’t a great option.

The End

And that’s it – really.  Download the code from my GitHub repo, and please, poke around and find whatever else is interesting.

This was obviously only a taste of what you can do, but it shows off that with the current state of tools, you can motivate your team to build small, independent micro-services without a lot of overhead.  This isn’t all there is to it, of course – good testing practices, simple deployment mechanisms, and solid discipline are still required for working with micro-services, but the bar is being lowered every day!

Hmm, this place is sort of familiar…

Wow, it’s been a while – I’ve got a lot of cleaning up to do, but you might just see me start to put some thoughts here again.  It’s been seven or eight years since I’ve done any writing, so I’ll call this all experimental, but I’ve got a few thoughts wrapped up in my head that I might be able to yank out.  What will it look like?  Who knows – one thing I will say is that it likely won’t be about any upcoming Java standard like my older posts – I had actually forgotten that I used to pay that much attention to that crap :).

Stick around, see what happens, and make sure to leave me a note – it’s always good to know what my audience looks like (I.e. – is there one!? 🙂 )

M

Java EE 6 – Who’s In?

Been a while since I’ve written anything, so I’ll ease into the waters with this one – it’s been over a year since Java EE 6 was released with some very cool updates that I’ve discussed here and here and here and here and here and here and here and here and here and here and here (dang, I was busy!). So I’m interested in hearing what kind of adoption it’s gotten so far. Anybody?

Now, I know that there still aren’t a lot of servers that support it — let’s see, there’s Glassfish, and then there’s… hmmm… well, I think Resin 4 has been released… JBoss 6 isn’t quite there yet, nor are any of the more expensive products, at least not to my knowledge (I’ll be perfectly honest – I don’t pay much attention to them!)

One that interests me is SIwpas – it’s a Web Profile implementation based on Tomcat, and apparently several other open source products, although I fear it suffers from AAS (Awful Acronym Syndrome!). But the question is, is anyone using it, or the other products? I’d love to know!

M

BTW – the last time I blogged about JBoss not having a server released after an extended period of time, they released it the very next day – if I were a bettin’ man, I’d put money on JBoss 6 going final tomorrow, but since I’m not, and since no one releases software on a Saturday, I’ll have to go with a firm guess that’ll be out soon!


Organize Your Logs With a Cool Java EE 6 Trick

Picture this — it’s 9:00 Friday night, and you’ve just gotten a phone call asking why the hell a key part of your system is down… after verifying that something’s definitely busted, you open up the only resource you have — your system logs… it doesn’t take you long to find some exceptions, but they don’t tell you much of the story… pretty soon, you realize there are 5 or 6 different errors being thrown, plus messages from areas of the system that appear to be working fine… to boot, it’s the middle of your busiest time of the year, which means that you may have a few thousand users on the system at this very moment… yikes — how the heck do you make heads or tails of this mess?

Logging — no longer an afterthought

Ok, so four or five days later, when you finally sort out your issue, it’s time to make things better before that happens again… it’s time to actually put some thought behind your logging practices… first stop — learn how to log, and put some standards in place! I’m not going to elaborate on the details of that article, because I think the author does a fine job… frankly, I was hooked when he defined the logs as a ‘secondary interface’ of your system — your support staff (i.e. — you) can’t see what your customers are looking at in their browsers, so you need to make damn sure that you’re providing enough information in your logs for you to understand what’s going on!

Let’s be real, though — the traffic on your system hasn’t gone down any since that fateful Friday (luckily), and you don’t have the time to rework all of the logging in your system… there has to be a way to put some incremental improvements in here that will make your life easier the next time things catch fire, even if that’s tomorrow…

Adding the Context without the Pain — or a single change to your core code

Ultimately, you were able to make some sense out of that catastrophe by realizing your logging framework was providing you with a subtle piece of context — the thread name… seems innocuous, but in most Servlet containers, it’s enough to identify that each line in the log belonged to a particular thread — or request… It’s not perfect though — you didn’t have any messages in your logs that stated “Starting GET request for /shoppingcart/buySomething.html”, so you couldn’t tell exactly where each request started and ended… luckily, with Java EE 6 and a good logging framework, it’s not hard to get there…

Before I dig in, though, let’s get acquainted with the Mapped Diagnostic Context, or MDC — hopefully, your logging system supports it (log4j does, so most folks will be covered)… MDC provides the ability to attach pieces of context to the thread of execution you’re in, and allows you to include this info in every log message…
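
If the concept is new, the core mechanism is tiny — conceptually, an MDC is just a per-thread map of context values that the logging layout can read when formatting each message. A toy sketch of the idea (the real implementations live in SLF4J/Log4j and do considerably more):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the idea behind an MDC: a per-thread map of context
// values, looked up by the logging layout when each line is rendered.
// Illustration only -- not the SLF4J/Log4j implementation.
public class ToyMDC {
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { CONTEXT.get().put(key, value); }
    public static String get(String key) { return CONTEXT.get().get(key); }
    public static void remove(String key) { CONTEXT.get().remove(key); }
}
```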

The following example shows a piece of code that uses the MDC in SLF4J — a logging facade, much like Apache Commons Logging, that provides a single interface to multiple logging runtimes — excellent for building libraries when you don’t want to impose a logging system on your users… Anyway, on to the show:

public class RequestLoggingContext implements Filter {
    private static final String SESSION_CONTEXT = "session-context";

    ...

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain) throws IOException, ServletException {
        HttpSession session = null;
        if (req instanceof HttpServletRequest) {
            HttpServletRequest httpRequest = (HttpServletRequest) req;
            session = httpRequest.getSession(false);

            if (session != null)
                MDC.put(SESSION_CONTEXT, session.getId());
        }

        chain.doFilter(req, resp);

        if (session != null) {
            MDC.remove(SESSION_CONTEXT);
        }
    }
}

Pretty simple — two static methods on the ‘MDC’ class — ‘put’ and ‘remove’… while I’m not a particular fan of the static API, this is about as simple as it gets (incidentally, this is the only ‘unfortunate’ use of static methods that I have seen in SLF4J — elsewhere they use the standard static factory pattern, which at least makes sense, and has precedent)… so what the heck did this do? Well, we now have the ability to refer to that “session-context” as a part of our logging ‘Pattern’, using the “%X{session-context}” flag — like so:

%d{HH:mm:ss.SSS} [session-context=%X{session-context}][%thread] %-5level %logger{36} - %msg%n

BTW, that is not a log4j config file — it’s a Logback config… Logback is the ‘native’ implementation of the SLF4J library that’s written by the same folks who brought you Log4J — kind of a ‘take two’, if you will… anyway, it should be obvious that it’s driven heavily from Log4J’s configuration 🙂
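
What the %X{key} conversion does is easy to picture — at format time, the layout substitutes values from the MDC map into the pattern. A rough sketch of the concept (nothing like Logback’s actual PatternLayout code):

```java
import java.util.Map;

// Rough sketch of the %X{key} conversion: when a line is rendered, the
// layout replaces each %X{key} token with that key's value from the MDC.
// (Logback's real PatternLayout is far more capable; this is the concept.)
public class PatternDemo {
    public static String render(String pattern, Map<String, String> mdc, String msg) {
        String out = pattern.replace("%msg", msg);
        for (Map.Entry<String, String> e : mdc.entrySet()) {
            out = out.replace("%X{" + e.getKey() + "}", e.getValue());
        }
        return out;
    }
}
```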

So we have now added context to our logging system — and all without disturbing a single line of code in our existing system… but wait, there’s more!

The Trick

One of the interesting additions to Java EE 6 is the combination of Servlet Annotations, and web fragments — this allows library authors to self configure the use of their library, where previously the end user would need to make additions to the web.xml… a great use of Convention Over Configuration, and very powerful, indeed!

So let’s take the above code sample and expand it to include a randomly generated context id for each HttpRequest, and some basic log messages to delineate the start and end of every request:

@WebFilter("/*")
@WebListener
public class RequestLoggingContext implements Filter, HttpSessionListener {
    private static final String REQUEST_CONTEXT = "request-context";
    private static final String SESSION_CONTEXT = "session-context";

    private Logger log = LoggerFactory.getLogger(RequestLoggingContext.class);

    @Inject
    private ContextGenerator contextGenerator;

    @Override
    public void init(FilterConfig fc) throws ServletException {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain) throws IOException, ServletException {
        MDC.put(REQUEST_CONTEXT, contextGenerator.generateContextId());

        StringBuilder msg = new StringBuilder();
        if (req instanceof HttpServletRequest) {
            HttpServletRequest httpRequest = (HttpServletRequest) req;
            HttpSession session = httpRequest.getSession(false);

            if (session != null)
                MDC.put(SESSION_CONTEXT, session.getId());

            //Build Detailed Message
            msg.append("Starting ");
            msg.append(httpRequest.getMethod());
            msg.append(" request for URL '");
            msg.append(httpRequest.getRequestURL());
            if (httpRequest.getMethod().equalsIgnoreCase("get") && httpRequest.getQueryString() != null) {
                msg.append('?');
                msg.append(httpRequest.getQueryString());
            }
            msg.append("'.");
        }

        if (msg.length() == 0) {
            msg.append("Starting new request for Server '");
            msg.append(req.getScheme());
            msg.append("://");
            msg.append(req.getServerName());
            msg.append(':');
            msg.append(req.getServerPort());
            msg.append("/'.");
        }

        log.info(msg.toString());
        long startTime = System.currentTimeMillis();

        chain.doFilter(req, resp);

        msg.setLength(0);
        msg.append("Request processing complete. Time Elapsed -- ");
        msg.append(System.currentTimeMillis() - startTime);
        msg.append(" ms.");
        log.info(msg.toString());

        if (((HttpServletRequest) req).getSession(false) != null) {
            MDC.remove(SESSION_CONTEXT);
        }
        MDC.remove(REQUEST_CONTEXT);
    }

    @Override
    public void destroy() {
    }

    @Override
    public void sessionCreated(HttpSessionEvent hse) {
        MDC.put(SESSION_CONTEXT, hse.getSession().getId());
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent hse) {
    }
}

All that’s left is to literally throw that in its own .jar file, put it in your WEB-INF/lib folder, and add either or both of the ‘context’ keys to your logging config and presto — you have logging context! (I have omitted the definition of the ContextGenerator class for brevity — it just generates a random string.) Now your logs will look something like this:

INFO: 00:02:11.140 [request-context=sonqc52zbqia][http-thread-pool-8080-(1)] INFO  c.m.l.support.RequestLoggingContext - Starting GET request for URL 'http://localhost:8080/Test/'.
INFO: 00:02:12.156 [request-context=sonqc52zbqia][http-thread-pool-8080-(1)] INFO c.m.l.i.TimingLogInterceptor - Executing com.test.facade.LoadHomeFacade.loadData
INFO: 00:02:12.156 [request-context=sonqc52zbqia][http-thread-pool-8080-(1)] INFO c.m.l.i.TimingLogInterceptor - Doing something interesting.
INFO: 00:10:36.250 [request-context=sonqc52zbqia][http-thread-pool-8080-(1)] INFO c.m.l.support.RequestLoggingContext - Request processing complete. Time Elapsed -- 719 ms.

So now, without touching a single line of existing code or modifying a single class, we can clearly associate any logging message in our system with other messages generated on that request, and we have clear delineation of where each request begins and ends, and how long it took to execute… pretty damn sweet! So when your system blows up next Friday night, you’ll be a bit more prepared to sort things out before the weekend is over! (just don’t throw out those scripts that sort based on ‘request-context’!)

Final Word

Final word? I guess that means there’s more — three things, actually… first — there is absolutely nothing preventing you from putting the above in place if you’re on an earlier version of the Java EE spec (and let’s face it — that’s pretty much all of us!)… The only thing you lose is the self configuration, so you’ll need to add the appropriate <filter>, <filter-mapping>, and <listener> elements to your web.xml…

Second, if you’re on Java EE 6 (wow, that was fast!), and your application already makes use of Servlet Filters, whether they’re ‘self configured’ or not, you may need to do some configuration in your web.xml to provide an explicit ordering — note that this is not strictly required, although it is probably a good idea :)…

And finally, I mentioned above that Log4J users were in luck when it came to supporting MDC… unfortunately, the JDK Logging API doesn’t support MDC (come on! Why not! Am I the only one who seems to think they haven’t advanced this API in the last five years!?) — those users aren’t entirely out of luck, though… there is a way to ‘subclass’ the JDK Logger and add logging info to the front or end of any logging message, although it’s tricky — unfortunately, I don’t have this code handy anymore, but perhaps I’ll sit down and figure it out again if I’m so inclined one day (of course, if I get feedback to do this, it might make me more inclined 🙂 )

Now don’t forget to get back and add better logging messages to your code!

M


@DataSourceDefinition — A Hidden Gem from Java EE 6

In the old days, DataSources were configured — well, they were configured in lots of different ways… That’s because there was no ‘one way’ to do it — in JBoss, you created an XML file that ended in ‘-ds.xml’ and dumped it in the deploy folder… in Glassfish, you either use the admin console or muck with the domain.xml file… in WebLogic you used the web console… and this was all well and good — until I worked with an IT guy who told me just how much of a pain in the ass it was…

Up until then, it wasn’t such a big deal to me — I set it up once, and that was that… then I ran into this guy a few jobs ago who liked to bitch and complain about how much harder it was to deploy our application than the .NET or Ruby apps he was used to… he had to deploy our data source, then he had to deploy our JMS configurations — only then would our application work… in the other platforms, that was all built into the app (I’ll have to take his word for it, since I haven’t actually deployed anything in either platform)… I was a bit surprised at first, and then I realized that maybe he had a point… nah, it couldn’t be, he must just be having a bad day (lots of us were having bad days back then 🙂 )…

Then I ran into Grails, which is dead simple — you have a Groovy configuration file that has your db info in it… you even have the ability to specify different ‘environments’, which can change depending on how you create your archives or run your app… pretty slick…

The Gem

Well, lo and behold, we now have something that’s nearly equivalent in Java EE 6 — the @DataSourceDefinition annotation… it’s a new annotation that you can put on a class to provide a standard mechanism for configuring a JDBC DataSource into JNDI, and as expected, it can work with local JNDI scopes or the new global scope, meaning you can have an Environment Configuration that uses this annotation, making it shareable across your server… it works like this:


import javax.annotation.sql.DataSourceDefinition;

@DataSourceDefinition(
    className = "org.apache.derby.jdbc.ClientDataSource",
    name = "java:global/jdbc/AppDB",
    serverName = "localhost",
    portNumber = 1527,
    user = "user",
    password = "password",
    databaseName = "dev-db"
)
public class Config {
    ...
}

As you would expect, that annotation will create a DataSource that will point to a local Derby db, and stick it into JNDI at the global address ‘java:global/jdbc/AppDB’, which your application, or other applications can refer to as needed… no separate deployment and no custom server-based implementation — this code should be portable across any Java EE 6 server (including the Web Profile!)…

It’s almost perfect!

In typical Java EE style, there’s one thing that just doesn’t appear to be working the way I’d like it — it doesn’t appear to honor JCDI Alternatives (at least not in Glassfish)… Here’s what I’m thinking — we should be able to have a different Config class for each of our different environments… in other words, we’d have a QAConfig that pointed to a different Derby db, a StagingConfig that pointed to a MySQL db somewhere on another server, and a ProductionConfig that pointed to a kick ass, clustered MySQL db… we could then use Alternatives to turn on the ones that we want in certain environments with a simple XML change, and not have to muck with code… unfortunately, it doesn’t appear to work — Glassfish seems to process them in a nondeterministic order, with (presumably) the class that is processed last overwriting the others that came before it…

There is a solution, though, and it is on the lookup side of the equation — using JCDI Alternatives, we can selectively lookup the DataSource that we’re interested in, and then enable that Managed Bean in the beans.xml file… it’s definitely not ideal, since we need to actually inject all of our DataSources into JNDI in all scenarios, but it works, it’s something I can live with, and is probably easily fixed in a later Java EE release… Update: Looks like it’s in the plan, according to this link — thanks, Gavin 🙂

Here’s how it works — first the ‘common’ case, probably for a Development environment:


@RequestScoped
public class DSProvider {
    @Resource(lookup = "java:global/jdbc/AppDB")
    private DataSource normal;

    public DataSource getDataSource() {
        return normal;
    }
}

Simple enough — has a field that looks up ‘jdbc/AppDB’ from JNDI, and provides a getter… now for QA:


@RequestScoped @Alternative
public class QADSProvider extends DSProvider {
    @Resource(lookup = "java:global/jdbc/AppQADB")
    private DataSource normal;

    public DataSource getDataSource() {
        return normal;
    }
}

Pretty much the same, except this does the lookup from ‘jdbc/AppQADB’, and it is annotated with @Alternative… so how do these things work together? Take a look:


@Named
public class Test {
    @Inject
    private DSProvider dsProvider;

    ...
}

Again, simple — we’re injecting a DSProvider instance here, and presumably running a few fancy queries… Nothing Dev-ish or QA-ish here at all, which is the beauty of Alternatives… finally, when building the .war file for QA, we turn on our Alternative in the beans.xml, like so:

<beans>
    <alternatives>
        <class>com.mcorey.alternativedatasource.QADSProvider</class>
    </alternatives>
</beans>

You’ll notice that this solution requires us to rebuild our .war file for QA, which I obviously don’t like — not to worry, there will be support for this in the Seam 3 Environment Configuration Module, which will effectively create a binding by mapping from one JNDI key to another… I have no idea what the syntax will look like at this point, but it should be pretty straightforward, and will allow us to — you guessed it — build our .war once, and copy it from place to place without modification…

M


Say hello to the Seam 3 Environment Configuration module

A funny thing happened after my last post — I got an email from Dan Allen, from RedHat, with some interest in making my last JCDI Portable Extension — EnvironmentBindingExtension — into a Seam 3 Module… pretty cool for a fairly modest effort at finding a new way to solve a problem I’ve faced in the past… it will be my first official foray into open source (not counting that one line NetBeans patch I submitted in, like, 2000), so it will be interesting to see how this will actually work from the authoring side, as opposed to the user side, especially in a relatively well organized project like Seam…

What it’s about

The idea behind the Environment Configuration module is to inject fairly static configuration information into any JEE 6 environment… it’s typically done outside of your application, in a deployment that isn’t regularly deployed or updated, so you can configure each of your environments separately, including Development, Testing, QA, Staging and Production — once this is done, you can build your application once (or better yet — have a Continuous Integration server build it!), and copy the same binary from server to server without having to reconfigure it, ensuring that the archive that you deploy to production is the same exact archive that you tested in QA… this allows you to streamline your deployment processes, removing any possible human error involved in building your code over, and over, and over again (and in some cases, it’ll save a lot of time if you have a particularly slow build!)

How’s it work? It takes advantage of JNDI — one of the resources that all JEE servers provide… say, for example, that you have a system that needs to access a database, a filesystem, and has a batch process that runs at a specific frequency — in development, you’ll want to point to a personal Derby database, use a local folder on your Windows box for your filesystem, and run the batch process very frequently for testing… QA is similar, although it has a different database, but say Staging and Production run on a cluster of Linux boxes that access a MySQL database, use a mounted shared drive for their filesystem, and have their batch processes run once an hour…

With the Seam 3 Environment Configuration module, you can create a simple .ear file for each of these environments that contains all of this data — create them once, deploy them once, and you’re good to go… take a look at the following example of a configuration that you could use in development:


/**
 * An Environment Configuration for Development
 * @author Matt
 */
@EnvironmentBinding
@DataSourceDefinition(
    className = "org.apache.derby.jdbc.ClientDataSource",
    name = "java:global/jdbc/AppDB",
    serverName = "localhost",
    portNumber = 1527,
    user = "user",
    password = "password",
    properties = {"create=true"},
    databaseName = "dev-db"
)
public class Config {
    @Bind("myApp/fs-root")
    String rootFolder = "C:\\fs-root";

    @Bind("myApp/batch-frequency")
    long batchFrequencyInMs = 60 * 1000;
}

Pretty simple — toss this class into its own .war file, and it will define three global JNDI entries, one for each of the items mentioned above… your other applications are now free to read these resources in whatever way they need to, even using the standard @Resource(lookup="java:global/myApp/fs-root") notation… a similar configuration file would be created for QA, but perhaps the @DataSourceDefinition annotation will use a MySQL datasource, and likewise for Staging and Production…
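Besides @Resource injection, a plain programmatic lookup works just as well — here’s a minimal consumer-side sketch (the EnvReader class and its read method are hypothetical names for illustration, not part of the module):

```java
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Hypothetical consumer-side helper for reading the entries bound above
class EnvReader {

    // Looks up a configured value under the java:global prefix from the given context
    static Object read(Context ctx, String name) throws NamingException {
        return ctx.lookup("java:global/" + name);
    }

    // Inside a container, the default InitialContext is all you need
    static Object read(String name) throws NamingException {
        return read(new InitialContext(), name);
    }
}
```

So EnvReader.read("myApp/fs-root") would return the same value that the @Resource notation above injects.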

What next?

Well, there are a few things on my list of features here, including, but not limited to:

  • Test, Test, Test!
  • Using the @Bind attribute on methods, including @Produces methods
  • Support ‘unbinding’, if needed
  • Create a Maven Archetype that could be used to quickly and easily setup an Environment Configuration deployment
  • Create an interface of some kind to be able to review the available bindings — either web app or simply JAX-RS based

I am, of course, interested in any ideas or feedback anyone would have, but one goal I would have here is to keep it simple and portable — what this module is intended to do isn’t exactly brain surgery, so I don’t think it’s necessary to throw in too many ‘extras’…

M


External CDI Configuration with Portable Extensions

A common requirement for web and enterprise applications is the capability to configure themselves for each environment without modifying the archive itself — most commonly this is used for environment-specific attributes such as a test vs. production data store, a String describing a file or directory on the file system that differs between a developer’s box and a clustered production server, or a URL that points to your test payment gateway vs. your production gateway… This is the sort of thing that might easily be done with ‘Alternatives’ in CDI, but many shops put a premium on the ability to package the application once (on a Continuous Integration server, for example) and copy that file from development to integration to QA to staging to production, all of which may run on very (very!) different platforms, use different databases and file systems, and must integrate with different third party environments — configuring this stuff externally means you don’t have to deal with the error prone and possibly time consuming process of building for each environment… unfortunately, this doesn’t appear to be a scenario that Alternatives can help us with…

One resource that works really well for this sort of configuration is JNDI… configure these items in your server’s JNDI registry independently from your application, and then have your application read the environment configuration settings from there — and CDI makes it very easy to manage both sides of this scenario!

Reading from JNDI

The easier side of this is reading the data from JNDI, so let’s start there… actually, you don’t need CDI at all to do this — the easiest way is to use the ‘@Resource’ annotation introduced in Java EE 5 (its ‘lookup’ attribute is new in Java EE 6), like so:


@ApplicationScoped
public class FolderConfig {
    @Resource(lookup = "java:global/folderToPoll")
    private String folderToPoll;

    public String folderToPoll() {
        return folderToPoll;
    }
}

Not much to this — we have an ApplicationScoped Managed Bean which does a lookup from JNDI, and provides a getter for the result… in this case we’re pulling from the new “java:global” context that is provided with Java EE 6 — there’s no reason we couldn’t map this to the local context, but frankly, I wanted to fiddle with the global context 🙂

Ok, now on to something more interesting…

Writing to JNDI

Writing to JNDI is pretty easy — get an InitialContext and call ‘bind’… it’s basically an overblown HashMap… for some reason, though, configuring JNDI outside of an application always seems to be more difficult than it should be — several years ago, I actually had to write a JBoss plugin to do it, even though they had quite an advanced configuration mechanism for the time… all I wanted to do was put String ‘A’ at Key ‘B’, but no — not supported out of the box!

That solution was configured by an XML file, which left me dealing with Strings… this solution is better on two accounts: 1) It can bind any Object into JNDI, and 2) it’s a Portable Extension, and should therefore work on any platform… whew!
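In code, the ‘overblown HashMap’ write side boils down to something like this (a sketch — the EnvWriter helper and its method name are hypothetical, and it assumes the container hands you a writable Context, e.g. via new InitialContext()):

```java
import javax.naming.Context;
import javax.naming.NamingException;

// Hypothetical helper showing the core of the write side
class EnvWriter {

    // rebind (rather than bind) so that redeploying the configuration archive
    // quietly overwrites any entry left behind by a previous deployment
    static void bindValue(Context ctx, String name, Object value) throws NamingException {
        ctx.rebind("java:global/" + name, value);
    }
}
```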

So here’s how it works — this extension would likely be packaged into a .jar library, and deployed with a simple webapp or ear archive that is packaged separately from the main application… the piece that provides the configuration is actually a class or a set of classes that are annotated to bind certain fields and/or methods into JNDI, like this:


@EnvironmentBinding
public class Env {
    @Inject @Bind(jndiAddress = "adminUser")
    private User admin;

    @Bind(jndiAddress = "test")
    private String test = "This is a test";
}

Pretty straightforward — what’s going on here? Well, first you’ll notice that the class is annotated with @EnvironmentBinding — this is a Stereotype annotation that declares @ApplicationScoped as its default scope, and acts as a marker for the class to be processed later on… further down, we have two fields that are annotated with @Bind and provided with a jndiAddress… this pretty much works as you would expect — the value of each field is bound into JNDI, with the ‘java:global/’ prefix added to the front…

You’ll also notice that one of the fields has its value injected — this means that the Objects that are bound into JNDI can be derived from a more complex application if need be, so the support that we have here goes well above and beyond the simple XML file configuration that I dealt with way back when…

So how does this thing work? Well, one implementation that I put together has a two part infrastructure to do the job… remember, the end user should never be exposed to the following two items — the extent of their exposure to this library will be the two annotations shown above…
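For reference, the @Bind annotation itself could be defined about as simply as this (a sketch — the module’s actual definition may differ, and @EnvironmentBinding would additionally carry CDI’s @Stereotype and @ApplicationScoped meta-annotations, omitted here):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Runtime retention so the processor can discover the annotation reflectively
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.FIELD, ElementType.METHOD })
@interface Bind {
    // The address, relative to java:global/, where the value should be bound
    String jndiAddress();
}
```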

First, our Portable Extension class:


import java.lang.annotation.Annotation;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.logging.Logger;

import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.BeanManager;
import javax.enterprise.inject.spi.Extension;
import javax.enterprise.inject.spi.ProcessBean;

public class EnvironmentBindingExtension implements Extension {
    private static final Logger log = Logger.getLogger(EnvironmentBindingExtension.class.getName());

    private final Set<Bean<?>> envBeans = new HashSet<Bean<?>>();
    private BeanManager beanManager;

    // Observes every Bean discovered during startup, remembering those marked @EnvironmentBinding
    public void discoverEnvironmentBindingClasses(@Observes ProcessBean<?> pb, BeanManager bm) {
        this.beanManager = bm;

        Bean<?> bean = pb.getBean();

        for (Class<? extends Annotation> st : bean.getStereotypes()) {
            if (st.equals(EnvironmentBinding.class)) {
                log.info("Found class annotated with EnvironmentBinding: " + bean.getBeanClass().getName());

                envBeans.add(bean);
            }
        }
    }

    public Set<Bean<?>> getEnvBeans() {
        return Collections.unmodifiableSet(envBeans);
    }

    public BeanManager getBeanManager() {
        return beanManager;
    }
}

This Extension class is pretty straightforward — as with all Portable Extensions, it starts by implementing the ‘Extension’ interface… in this case, we’re also creating an Observer method for the ‘ProcessBean’ event… this event is fired during the application startup lifecycle for every ‘Bean’ that is discovered in a bean archive… it will fire for Managed Beans, EJBs, Interceptors, etc., but here, we’re specifically looking for beans that have the EnvironmentBinding Stereotype on them — that is the trigger to further process the class… in this case, our processing simply consists of adding the Bean to our ‘envBeans’ Set for later use… in addition, we provide accessor methods for the BeanManager (which is injected into our Observer method) and the envBeans Set… now let’s have a look at what we do with these Beans…

The next class is the one that does most of the heavy lifting — it is a Singleton EJB which is marked as a Startup bean, meaning it will be instantiated upon application startup, after the CDI discovery phases are complete… in this case, we have created a PostConstruct method to do our work for us:


import java.lang.reflect.Field;
import java.util.Set;
import java.util.logging.Logger;

import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.spi.Context;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.BeanManager;
import javax.inject.Inject;

@Singleton
@Startup
@ApplicationScoped
public class BindingsProcessor {
    private static final Logger log = Logger.getLogger(BindingsProcessor.class.getName());

    @Inject
    private EnvironmentBindingExtension bindingExtension;

    @PostConstruct
    public void processBindings() throws Exception {
        Set<Bean<?>> envBeans = bindingExtension.getEnvBeans();

        log.info("Processing EnvironmentBinding classes: " + envBeans);

        BeanManager bm = bindingExtension.getBeanManager();
        Context appContext = bm.getContext(ApplicationScoped.class);
        for (Bean<?> bean : envBeans) {
            Object beanInstance = resolve(appContext, bm, bean);

            for (Field field : bean.getBeanClass().getDeclaredFields()) {
                if (field.isAnnotationPresent(Bind.class)) {
                    field.setAccessible(true);

                    String jndi = field.getAnnotation(Bind.class).jndiAddress();
                    Object val = field.get(beanInstance);

                    bindValue(jndi, val);
                }
            }
        }
    }

    // a generic helper keeps the wildcard capture on Bean<?> compiling cleanly
    private <T> T resolve(Context ctx, BeanManager bm, Bean<T> bean) {
        return ctx.get(bean, bm.createCreationalContext(bean));
    }

    private void bindValue(String jndi, Object val) {
        // the JNDI api work has been removed here, as it's not the interesting part
    }
}

Hey, wait a minute — this is pretty simple, too! Iterate over the set of Beans that we’ve collected, use reflection to find all of the fields that are annotated with @Bind, and bind the value into the appropriate JNDI location… I’ve even removed the JNDI api work here, because it’s not interesting at all…

This could be expanded in a couple of ways, most obviously to allow methods to act as Binders as well… I do want to discuss my choice here of using the Singleton EJB, since I’ve had a few posts recently which talk about doing away with EJBs altogether — well, initially I was attempting to use the ‘AfterBeanDiscovery’ or ‘AfterDeploymentValidation’ events to trigger this loading, but I was having trouble getting an instance of ‘Env’ that was capable of having its injection points… er… injected…

The Singleton EJB is somewhat of a last-ditch sanity effort, but after considering it for a few days, I’m actually alright with it… Startup Singleton EJBs are something that has interested me for a while, and they prove their usefulness here, but what’s more, I’m still able to keep the EJB interface out of the end-user’s experience… they simply need to make use of the EnvironmentBinding annotation and be on their merry way, as long as they are deployed in a container which supports Singletons (which all Java EE 6 containers do)… that being said, I’m hoping that Gavin will show me what the heck I was doing wrong 🙂

One other thing — using an @Inject method on an ApplicationScoped bean doesn’t appear to do the trick… reading the spec, it appears to be caused by the fact that ApplicationScoped beans are ‘active’ during Servlet calls, EJB calls, etc — meaning the application scope doesn’t have its own ‘startup’ lifecycle, but depends on the lifecycle of other Java EE component models… interesting, to be sure — adding a more generic Startup capability would be a cinch if done similar to how I’ve done this…

Wow, that was a lot of words

So what does this all mean? Basically, it just shows another way of skinning that old, damn cat that is environment configuration — but it also shows that it’s pretty darn easy to put together some CDI extensions, and when working with the surrounding Java EE specs and resources, that it can be done in a minimal amount of code… in this case, I was looking at a requirement that I often have to support external configuration — one that CDI doesn’t accommodate out of the box… with a few lines of code, it turned out to be possible to break that box open and stuff some more toys inside 🙂

Finally, the more complete code samples can be found here — the EnvironmentBinding project has the core code, the TestEnvironmentConfig project shows a test web application that could be used to create the binding configuration, and the EnvTest project is an application which makes use of the JNDI entries… have fun!

M
