Leveraging Spring Container Events

Another thing you don’t have to invent yourself when building a Spring application is in-process eventing.  There is a lot of literature out there about the architectural advantages of eventing and messaging using a broker such as JMS or AMQP.  Spring provides lightweight templates for publishing and subscribing with these brokers, but that’s not what I am going to talk about.  The architectural principles you benefit from when using a message broker are decoupling (you never talk directly to the listener of your message) and asynchronous processing (you don’t block waiting for a response).  Without a message broker, it’s still possible to benefit from these principles by simply plugging in to the Spring container’s own publish and subscribe mechanism.

Why would you want to do container messaging versus broker messaging?  First of all, it’s never an either/or situation: you can leverage both.  It’s simply a matter of what you can do locally versus distributed, and it’s simple enough to mix and match.   Let’s take a common example use case: a user signs up for your web application.  This triggers all sorts of system activity, from creating an entry in a User table, to starting a trial or subscription, to sending a welcome email, or perhaps initiating an email confirmation process, or, if you are a “social” application, pulling data from the connected social networks.  Imagine a UserController kicking this off by accepting a registration request.  A “coupled” approach would have this controller invoke each and every one of these processes in sequence, in a blocking or non-blocking fashion, OR it could publish an event that a new user was created, and services dedicated to each function would pick up the message and act appropriately.  The difference between the two approaches is that the former is a God Object anti-pattern, and the latter is a nicely decoupled Pub/Sub pattern with a sprinkle of responsibility-driven design.  An added bonus is the ability to test all of these functions in isolation – the pub/sub mechanism simply binds them together at runtime.

Let’s look a bit more in detail at how this works.  First you need to decide what events you want to publish.  In the example above, it makes sense to have a UserCreationEvent with an attribute of the User, or just the unique ID of the user, whichever you feel more comfortable with.  This object would extend Spring’s ApplicationEvent, which itself has an event timestamp attribute.  So the class might look like this:

public class UserCreationEvent extends ApplicationEvent {

    private final User user;

    public UserCreationEvent(Object source, User user) {
        super(source);
        this.user = user;
    }

    public User getUser() {
        return user;
    }
}
To publish this event, in our case from the UserController or better yet a UserService that takes care of creating the user in the system, we can just wire up the ApplicationEventPublisher:

@Autowired
private ApplicationEventPublisher publisher;

Then when we’ve finished creating the user in the system, we can call

UserCreationEvent userCreationEvent = new UserCreationEvent(this, user);
publisher.publishEvent(userCreationEvent);

Once the event is fired, the Spring container will invoke a callback on all registered listeners of that type of event.  So let’s see how that part is set up.

A listener is simply a bean that implements ApplicationListener<YourEventType>, so in our case ApplicationListener<UserCreationEvent>.  Our listeners will each implement their own callback logic in the onApplicationEvent(UserCreationEvent event) method.  If you are using Spring’s component scan capability, the listener will be registered automatically.

@Component
public class UserWelcomeEmailListener implements ApplicationListener<UserCreationEvent> {

    @Autowired
    private EmailService emailService;

    @Override
    public void onApplicationEvent(UserCreationEvent event) {
        // send the welcome email to event.getUser()
    }
}

It’s important to note a few things about the default event multicasting mechanism:

  • Listeners are all invoked synchronously, on the publishing thread
  • One listener can terminate the broadcast by throwing an exception, so there is no guaranteed delivery contract

You can address these limitations by making sure all long-running operations are executed on another thread (using @Async, for example), or by plugging in your own (or a supplied) implementation of the multicaster by registering a bean that implements the ApplicationEventMulticaster interface under the name “applicationEventMulticaster”.  One easy way is to extend SimpleApplicationEventMulticaster and supply a threaded Executor.
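A minimal sketch of that override, assuming Spring’s out-of-the-box SimpleAsyncTaskExecutor is acceptable (a pooled ThreadPoolTaskExecutor would be the more production-minded choice), might look like:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.ApplicationEventMulticaster;
import org.springframework.context.event.SimpleApplicationEventMulticaster;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

@Configuration
public class AsyncEventConfig {

    // The bean name "applicationEventMulticaster" is the one the container
    // looks for; registering under that name replaces the default multicaster.
    @Bean(name = "applicationEventMulticaster")
    public ApplicationEventMulticaster applicationEventMulticaster() {
        SimpleApplicationEventMulticaster multicaster = new SimpleApplicationEventMulticaster();
        // Listeners now run on a separate thread instead of the publisher's thread
        multicaster.setTaskExecutor(new SimpleAsyncTaskExecutor());
        return multicaster;
    }
}
```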
To avoid one listener spoiling the party, you can either wrap all logic within each listener in a try/catch/finally block, or, in your custom multicaster, wrap the calls to the handlers themselves in try/catch/finally blocks.
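The guarding idea itself is independent of Spring.  As a plain-Java sketch (the SafeDispatch class and its method are made up purely for illustration), a dispatcher that records failures instead of letting the first exception end the broadcast could look like this:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustration only: invoke every listener even if one of them throws,
// collecting the failures instead of aborting the broadcast.
public class SafeDispatch {

    public static <E> List<Throwable> dispatch(E event, List<Consumer<E>> listeners) {
        List<Throwable> failures = new ArrayList<>();
        for (Consumer<E> listener : listeners) {
            try {
                listener.accept(event);
            } catch (RuntimeException ex) {
                failures.add(ex); // record and move on to the next listener
            }
        }
        return failures;
    }
}
```

A custom multicaster would wrap each listener invocation in exactly this kind of try/catch.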

Another thing to be aware of as you think about this: if a failure in any of the processing that occurs after the event is published causes an inconsistency in the data or state of the User (in our case), then this pattern doesn’t fit.  Each operation has to be able to deal with its own failures and recovery process.  In other words, don’t move anything that needs to be part of the whole “create user” transaction into this type of pattern; in that case you don’t have a decoupled process, so it’s better not to pretend you do.

As I mentioned before, there is no reason you can’t also leverage a distributed message broker in concert with application events.  Simply have an application event listener publish the event to the broker (as a sort of relay).  In this way you get the benefit of both local and distributed messaging.  For example, imagine your billing service is another system that receives messages through RabbitMQ.  Create an ApplicationListener, and post the appropriate message.  You’ve achieved decoupling within and between applications, and leveraged two types of messaging technologies for great justice.

@Component
public class SubscriptionSignupListener implements ApplicationListener<UserCreationEvent> {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @Override
    public void onApplicationEvent(UserCreationEvent event) {
        SubscriptionMessage newSubMsg = new SubscriptionMessage(new Date(), SubscriptionMessage.TRIAL_START, event.getUser());
        rabbitTemplate.convertAndSend(MqConstants.NEW_SUB_KEY, newSubMsg);
    }
}

So what about using something like Reactor instead?  Again, it’s not an either/or situation.  As Jon Brisbin notes in this article, Reactor is designed for “high throughput when performing reasonably small chunks of stateless, asynchronous processing”.  If your application or service has such processing, then by all means use it instead of, or in addition to, ApplicationEvents.  Reactor in fact includes a few Executor implementations you can leverage, so you can have your ApplicationEvent cake and eat it too!


Spring Boot ConfigurationProperties and Profile Management Using YAML

From Properties to YAML

There are dozens of ways to handle externalized configuration in an application or service. Over the years Spring has provided quite a few, and recently the @Value and @Profile annotations have started to bring some sanity to the situation by attempting to minimize the developer’s interaction with the filesystem to read what should be readily available. With the advent of Spring Boot there are a couple new interesting twists – YAML files and @ConfigurationProperties.

Defining Configuration

First let’s look at YAML. The non-Java world has been using YAML format for quite some time, while the Java world appears to have been stuck on the .properties file and format. Properties files can now be a relic of the past if you so choose, as Spring Boot gives us the option to configure an application for all profiles within one file – application.yml. With the YAML file format you define sections that represent the different profiles. For example, consider this application.yml:

spring:
  profiles.active: default
---
spring:
  profiles: default
spring.datasource:
  driverClassName: com.mysql.jdbc.Driver
  url: jdbc:mysql://localhost:3306/myappdb?autoReconnect=true
  username: devuser
  password: devpassword
management:
  security:
    enabled: true
    role: ADMIN
doge:
  wow: 10
  such: so
  very: true
---
spring:
  profiles: qa
spring.datasource:
  url: jdbc:mysql://qa.myapp.com:3306/myappdb?autoReconnect=true
  username: qauser
  password: qapassword

Note a few things about this format – there’s a default active profile defined in the first section, followed by the default profile itself, followed by the qa profile (the sections are separated by “---”). There is no longer a namespacing prefix on each property; instead, indentation delineates the hierarchy (YAML is whitespace-sensitive). Also note that in the qa profile we don’t re-define the management configuration – it is inherited – whereas we do want to override the datasource url, password and username, but not the driver.

If we did nothing more than this, the application configuration would default to the default profile, which would be fine in dev mode, but when deployed to the qa environment we would pass -Dspring.profiles.active=qa on the command line for that profile to take effect. You can have multiple profiles active at the same time too, otherwise it would have been called spring.profile.active 🙂 So who wins if there are multiple overrides? The lower you are in the yml file, the more override power you have, so that’s one way to think about how you organize your configuration.

This is all well and good, however what happens when the default profile is NOT a great profile for some of the development or QA team, and they want their own overrides? This is where you do need to resurrect the properties file, but this time for great justice. Your application.yml should be checked in, however you don’t want each and every team member checking in their own little overrides, or the file will get unwieldy and nobody will be happy. The trick is to create an application-<developername>.properties locally and exclude it (or all properties files) from your source control (.svnignore, .gitignore). With this file in the classpath you can reference the profile in your startup just as we did for qa, EG: -Dspring.profiles.active=default,dilbert.
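As an illustration, a hypothetical application-dilbert.properties might contain nothing but that developer’s personal overrides – the flattened property names mirror the YAML hierarchy, assuming the datasource keys live under spring.datasource as in a typical Boot app:

```properties
spring.datasource.username=dilbert
spring.datasource.password=dilbertsecret
```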

But this trick is not just for developers. It’s never a good idea to check in production keys/tokens/secrets to your source control for all to see (unless you have absolutely no controls in place or don’t care), so this is a great mechanism for operations and/or SCM staff to have their own properties override file that contains all the sensitive content and lets them control who has access to it.

Consuming Configuration

So how do we get access to all this fancy stuff from code? Sticking to the principle of keeping developers out of the business of managing what files to load and when, Spring Boot comes with even simpler support for referencing the configuration file values: you can create strongly typed beans that represent sections of the configuration. Let’s look at an example and then examine it.

@Configuration
@EnableConfigurationProperties
@ConfigurationProperties(prefix = "doge")
public class DogeSettings {

    private int wow;
    private String such;
    private boolean very;

    // ...getters and setters
}

@Configuration tells Spring to treat this as a configuration class and register it as a Bean
@EnableConfigurationProperties tells Spring to treat this class as a consumer of application.yml/properties values
@ConfigurationProperties tells Spring what section this class represents.

Note that there is implicit type conversion for primitives and that this class is a Bean. That means you can @Autowire it into other Beans that require its values. EG:

@Autowired
private DogeSettings dogeSettings;

public boolean requiresDogeness() {
    return dogeSettings.getWow() > 5 && dogeSettings.isVery();
}

Pretty easy and straightforward. It would be even more straightforward if @ConfigurationProperties was all you had to annotate, with the other two being implicit, but I leave that to the Spring team.


Using this scheme in a unit test environment is quite easy as well. You can of course define a unit-test profile with any overrides from the defaults you would use in such scenarios, and your @ConfigurationProperties classes will respect those, as long as you have the profile set. One easy way to do that is to create a TestConfig class and use it to set up all the test configuration overrides (such as beans, mocks, etc).

@Configuration
@Profile("unit-test")
public class TestConfig {

    @Bean
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2).build();
    }
}
Note the @Profile annotation to set the context for this configuration. Without it we could accidentally have this configuration in our production configuration… The rest is just boilerplate setup that is optional depending on what you want to test. From the test itself you can then leverage this:

@ActiveProfiles("unit-test")
@SpringApplicationConfiguration(classes = {TestConfig.class})
public class DogenessTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private DogeSettings dogeSettings;

    @Test
    public void testRequiresDogeness() {
        // exercise the bean under the unit-test profile
    }
}

With @ActiveProfiles we can now isolate the configuration for both the application properties as well as any bean config. The @SpringApplicationConfiguration gives us an annotation-based spring context to work with for the execution of the tests.


If you have the luxury of starting a project from scratch, it’s worth giving this approach a try.   There’s very little code required, no static access to config utilities, and you get strongly typed configuration objects that you can inject anywhere you need them.  To see some of this code in action, check out my project on GitHub.

