Avoiding Sh*tty Interfaces And Building A Better Taco Service

Software evolves organically, regardless of how much so-called up-front design you put into it. This is especially true with Agile development, where you take requirements one step at a time. Who hasn’t seen this happen while iterating on an interface:

Day 1: TacoService initial design

/**
* Returns a Taco based on the type of meat passed in.
**/
Taco getTaco(String meat);

Day 2: TacoService refactored to support vegetarian options

/**
* Returns a Taco based on the type of meat passed in.
* If vegetarian flag is true, meat may be null.
**/
Taco getTaco(String meat, boolean isVegetarian)

Day 3: TacoService gets a vegan intervention

/**
* Returns a Taco based on the type of meat passed in.
* If vegetarian flag is true, meat may be null.
* If vegan flag is true it supersedes vegetarian and all other options.
**/
Taco getTaco(String meat, boolean isVegetarian, boolean isVegan)

Day 4: It’s not a party until the Gluten-free needs are introduced

/**
* Returns a Taco based on the type of meat passed in.
* If vegetarian flag is true, meat may be null.
* If vegan flag is true it supersedes vegetarian and all other options.
* If glutenfree flag is true, meat is allowed but vegan
* and vegetarian flags are honored.
**/
Taco getTaco(String meat, boolean isVegetarian, boolean isVegan, boolean isGlutenFree)

Day 5: QA brings up the scenario of the picky eater.

/**
* Returns a Taco based on the type of meat passed in.
* If vegetarian flag is true, meat may be null.
* If vegan flag is true it supersedes vegetarian and all other options.
* If glutenfree flag is true, meat is allowed but vegan
* and vegetarian flags are honored. A list of acceptable toppings can
* be optionally provided, otherwise all toppings are included.
**/
Taco getTaco(String meat, boolean isVegetarian, boolean isVegan, boolean isGlutenFree, List<Topping> toppings)

What we have here is called “Parameter Creep”. Rest assured, it will never end: some other dietary fad or innovation in Mexican cuisine will provide a reason to add even more options. Let’s recap why this interface is getting shitty:

  1. it’s always changing as requirements change (breaking the interface)
  2. the method signature is going off the page (party foul)
  3. it is probable most arguments are not even needed by the majority of callers (solving for the 20%)
  4. you might as well make the taco yourself with the effort it takes to make the call

There are a few ways we can mitigate some of this. Obviously we could keep each and every one of these signatures and have a TacoService that looks like this:

Taco getTaco(String meat)
Taco getTaco(String meat, boolean isVegetarian)
Taco getTaco(String meat, boolean isVegetarian, boolean isVegan)
Taco getTaco(String meat, boolean isVegetarian, boolean isVegan, boolean isGlutenFree)
Taco getTaco(String meat, boolean isVegetarian, boolean isVegan, boolean isGlutenFree, List<Topping> toppings)

You probably see this a lot, especially in framework code that is trying to solve all things for all scenarios without breaking existing contracts (i.e., the method overloads keep expanding or changing the arguments to provide new functionality without breaking existing code). The issue with this is not knowing which API to call. If I want a vegetarian taco, what do I provide for the meat parameter? And what if I want toppings but don’t care about the intermediate options? This is a shitty interface.

Speaking of contracts, the point of an interface is to provide a contract to calling code. The contract should be stable and not break existing clients if possible, yet it should be flexible enough to adapt to change.

So on to the first way to improve this progressively shitty interface: the Messenger Object, or Data Transfer Object (DTO). Simply put, the Messenger or DTO encapsulates the required parameters for the method. In the example above we could replace all method signatures with the following:

Taco getTaco(TacoRequest request);

You can of course imagine that a TacoRequest has properties of:

String meat;
boolean isVegetarian;
boolean isVegan;
boolean isGlutenFree;
List<Topping> toppings;

The astute will begin the counter-argument that while you may have just cleaned up the TacoService (great), you moved the problem to the DTO. How? Imagine all the constructors you could add to make DTO construction more “convenient” – they are essentially the same as the method parameters we tried to simplify. The astute would be absolutely correct. We did move the problem, but is it a problem? Let’s look at the context of the code that would call this interface: a client request from some kind of user interface, no doubt. The UI collects the user’s order and maps it to a DTO, using either convenience constructors or simply setting the properties that are relevant. That part is non-negotiable; it has to be done regardless. Now, is there any doubt what method to call on the TacoService? Is there any doubt about the contract the service provides? Can we add additional properties to the request without breaking the interface? Like it or not, we are in a better place, but it could be better.
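As a sketch of how that DTO might avoid its own constructor creep, a builder lets callers populate only the options they care about (the names here are illustrative, not part of the original service):

public class TacoRequest {
    private String meat;
    private boolean vegetarian;
    private boolean vegan;
    private boolean glutenFree;
    private List<Topping> toppings;

    public static class Builder {
        // Builder mutates a single instance; new dietary options become
        // new builder methods instead of new method signatures.
        private final TacoRequest request = new TacoRequest();

        public Builder meat(String meat) { request.meat = meat; return this; }
        public Builder vegetarian() { request.vegetarian = true; return this; }
        public Builder vegan() { request.vegan = true; return this; }
        public Builder glutenFree() { request.glutenFree = true; return this; }
        public Builder toppings(List<Topping> toppings) { request.toppings = toppings; return this; }
        public TacoRequest build() { return request; }
    }
    // getters omitted
}

A caller who just wants a vegan taco writes new TacoRequest.Builder().vegan().build() and never touches the other options.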

There are various ways to return results for method calls – from error codes, to objects, to callbacks, to nothing at all in a publish and subscribe scenario. Each and every context merits considering which pattern makes the most sense; however, I have found that in general a Service should be obligated to indicate the status of the request it processed on behalf of the caller, as well as return any data the caller asked for as part of the result. If you don’t do this, you end up with awkward contracts like we have with our TacoService.getTaco() that returns a Taco – but what if there’s a problem (ingredients missing, chef on strike, kitchen on fire)? Do you get an empty Taco, null, an exception thrown? There’s a school of thought espousing throwing business exceptions when something bad happens; if you are of that school, you’re going to be sorely disappointed with what I am about to advocate. In any case, let’s first rule out returning null or just error codes.

Returning either the object or null works in the happy case when Tacos are readily available, but when one of many error conditions occurs all the service can do is return null. It is now up to the client to determine what that means, which of course it can’t because the other side of the line essentially hung up on them. Was it bad inputs? Was there a temporary failure or is the service down hard? Were they out of cilantro? This is still a shitty interface.

We can make this better by returning error codes. Error codes can provide a means to communicate more details about failure conditions to the client. But how do you return an error code from a method that is intended to return something (a Taco) to the client? In/out parameters? Another method to fetch results? It’s tempting at this point to add an errorCode property to Taco itself. But are error codes part of the Taco “Domain”? Nope, they aren’t; they are cross-cutting, so don’t even think about it.

Let’s look at the publish and subscribe and callback scenarios – both return a response asynchronously, which can have its advantages; however, in the end we face the same problem – if it works we get a Taco, if not, we get nothing…

Step away from Object-oriented programming for a second, move up a few layers to the Web, and you will find a relatively elegant solution staring right at us. The web has standardized on HTTP status codes, headers, and bodies. This provides the flexibility to return just a status code, a status code with headers, or a status code, headers and a body with additional information (either a machine-readable payload or user-readable HTML).

In the Object-oriented world we can do something similar – create a Response object that can convey a status code and message, along with an (optional) payload if the call was successful. For example:

public class ServiceResponse<T> {
    int statusCode = 200;      // HTTP-style status; 200 means success
    String errorMessage = "";  // human-readable detail when something goes wrong
    T payload;                 // the requested object (e.g. a Taco) on success
}

This allows any service method to provide additional details as well as the desired payload. So now our interface can look like this:

ServiceResponse<Taco> getTaco(TacoRequest request);

The caller can now assume that a statusCode of 200 will yield a payload (yum) and anything else will indicate something bad happened (actually tragic). Depending on the statusCode, the client may prompt the user to try again, for example when an ingredient is no longer available. If the kitchen is on fire, maybe a 500 comes back.
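On the calling side, handling the response might look roughly like this (a sketch that assumes the usual getters on ServiceResponse; the helper methods and the non-200 code are hypothetical):

ServiceResponse<Taco> response = tacoService.getTaco(request);
if (response.getStatusCode() == 200) {
    serve(response.getPayload());                       // happy path: taco acquired
} else if (response.getStatusCode() == 409) {
    promptForSubstitution(response.getErrorMessage());  // e.g. an ingredient ran out, let the user adjust
} else {
    apologizeProfusely(response.getErrorMessage());     // kitchen on fire, try again later
}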

So now we have a request, a response, and all is well except we forgot to include any context for the call. How you get access to the calling context will vary based on your runtime and framework. One option is to include the context in the request object, or play a similar trick where you have a ServiceRequest object that takes in a request object (our taco parameters) as well as standard context info (like userId, etc).
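A minimal sketch of that second option – a generic ServiceRequest that carries both the domain payload and the standard context (field names are illustrative):

public class ServiceRequest<T> {
    private String userId;         // who is making the request
    private String correlationId;  // ties logs and downstream calls together
    private T payload;             // e.g. our TacoRequest

    // getters and setters omitted
}

The TacoService signature then becomes ServiceResponse<Taco> getTaco(ServiceRequest<TacoRequest> request), and every other service in the system can follow the same shape.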

Voilà, a beautiful interface for Tacos, a clear contract, error handling, and a re-usable pattern for other services. And Tacos.

On The Virtues Of Platforms – Spring Boot and Rapid Application Development in Java

From a presentation I did at Silicon Valley Code Camp 2014

Rapid Application Development Defined

Rapid Application Development (RAD) is a loaded term, meaning many things to many people.  Let me start off by clarifying what I mean by RAD — RAD does not mean coding like hell, nor does it mean employing a tool that codes for you.  When I say RAD, I am referring to the practice of leveraging existing services, components, patterns and practices to maximize the code that relates to your domain, and minimize boilerplate or commodity feature development.  For example, code in your application that marshals XML or JSON has no customer value, nor does it have a developer benefit.  It’s tantamount to creating TCP packets in your code to send a request to a service.  Sounds ridiculous, yet I have seen this practice over and over.  Not only is it a waste of time, it’s error prone and unmaintainable.  As an application developer, you are not being paid to build commodity services…

Commodity Services Abound

There are a plethora of frameworks and platforms in every technology stack that attempt to address this and other commodity needs for the application developer.  Over time, the things we feel we need to code ourselves today become as preposterous as the TCP packet example – more and more moves down the stack into framework or platform components and services.  It’s the natural evolution of software, especially in the new world order of open source and massive developer contribution and collaboration (thanks Github!).

While it is wonderful that your current and future needs are being addressed by others, much of the developer effort starts to shift to finding the project that has a solution for you and figuring out how you can use it, if at all. Making sense of it all can be daunting – this is both the beauty and the chaos of the world of open source.

Frameworks Abound

Frameworks play a large role in helping tame this chaos.  They typically provide a collection of curated components that work seamlessly together.  As well, these frameworks provide implied (or explicit) patterns and practices to help maintain good architecture and maintainability.  Some will go so far as to endorse other combinations of components for known use cases.

The Era of The Platform

Platforms go beyond software components and frameworks.  Platforms typically add more commodity services beyond pure software components, including

  • Development Tooling
  • Deployment Automation and Services
  • Systems Management
  • Testing Infrastructure
  • Maintenance of Framework Components
  • Versioning and Release Management

Platforms are trying to create cohesive technology experiences for developers and businesses so that you choose them as your one-stop-shop.  This used to be restricted to the likes of IBM and Microsoft, but now it’s a much more diverse and heterogeneous world where cherry-picking best-of-breed software and patterns is the norm.

Less for you to worry about, more time to focus on your domain!

Making Good Choices

As a developer or (gasp!) software architect embarking on technology choices, you have a huge responsibility.  You don’t want to paint yourself into a corner with a framework that is simple but potentially limited in functionality, yet you don’t want to choose something feature-rich but overwhelmingly complex and gnarly.  Questions to ask yourself:

  • What do I need from the framework in the short term?
  • What might I need from the framework in the future?
  • How has the framework evolved, will it continue?
  • Are there enough people skilled in the use of this framework?
  • Is the documentation and support good?
  • Is it an ordeal or a joy to work with?
  • Is there a platform behind this framework?

Spring?  For Real?

Everyone I tell that I’m giving a talk on Spring with the word “Rapid” in the same sentence thinks a few things, some of which are not always uttered out loud:

  1. you are mad
  2. you are old
  3. you have so many other options, why suffer?

The short answer is, Spring has everything I know I will need down the road as the apps and services I work on evolve.  It has the patterns in place that provide sensible architecture and make extending it straightforward.  It gives me the flexibility to do anything that any other framework does.

Yes, it used to be DEATH BY A THOUSAND CONFIGURATION FILES but now there’s a way to make things more cohesive and productive – Spring Boot.

So what does Spring Boot do to accelerate your development?  The Spring Platform consists of components and frameworks that are useful in and of themselves, but are brought together through Spring Boot in a way that lends itself to true RAD. The framework is road-tested, and continues to be actively developed, keeping up with the latest technologies and patterns (Big Data, Reactive, HATEOAS, Microservices, Cloud-deployed) and actively partnering with other OSS projects.

What is Boot?

Spring Boot has been explained many times, and probably better than what I am about to offer, so please take Josh Long‘s word over mine, but I’ll give it a shot.  In essence Spring Boot provides:

  1. Dependency Management “Simplification”
  2. Automatic Component Configuration
  3. Cross-cutting infrastructure services
  4. Deployment and Packaging alternatives

How does it do it?  The simple answers are (respectively),

  1. “Starter Packs” of dependencies
  2. Convention-based configuration
  3. Production-ready services such as metrics, health, app lifecycle.
  4. Maven and Gradle plugins

Starter Packs

Starter Packs bring together Spring and Third-Party dependencies needed to do something functionally.  Examples of a starter pack include

  • Web
  • Data
  • Security
  • Test

A starter pack is essentially a manifest (in the form of a maven POM) that defines what each pack requires in and of itself. You simply include the starter pack you need in your app’s POM (or Gradle file).

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-security</artifactId>
</dependency>

The Magic of @EnableAutoConfiguration

The key to ending the suffering of the DEATH BY A THOUSAND CONFIGURATION FILES is to use the @EnableAutoConfiguration annotation.  This is the big hint that Boot uses to start performing various incantations and magic upon your application such that you should not, ever, never, seriously never need an XML configuration file.  Beyond not needing an XML configuration file, you don’t need that much configuration even outside of files.  Here’s how it works: Boot scans your classpath to see what kind of goodies you’ve included, and attempts to auto-configure for those.  It also scans your application to see what @Beans are in place and which aren’t (but perhaps should be?) and backs off (respecting your config) or auto-configures according to convention.  Finally, you can help Boot with your application-specific settings (like DataSource config, ports, etc.) via your application.yml or application.properties.
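For reference, a minimal Boot application class ends up looking something like this (a sketch – the class name is arbitrary, and the starter dependencies still need to be on your classpath):

@Configuration
@ComponentScan
@EnableAutoConfiguration
public class Application {

    public static void main(String[] args) {
        // Boots the embedded container, auto-configures beans based on what is
        // on the classpath, and applies your application.yml/properties settings.
        SpringApplication.run(Application.class, args);
    }
}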

To understand why this is significant, it helps to remember how things were before this.

Traditional Spring Way

  1. Configure Web Container
  2. Create Web Project
  3. Configure POM/Gradle/Ant dependencies (with versions) build and and packaging (150 lines +)
  4. Configure web.xml with Spring context config files, dispatcher servlet, servlet mapping, Spring security Filter, etc..
  5. Write Controller, Service, Repository Beans and Domain Objects
  6. Configure *.xml files (Or annotated @Configuration Beans) with
    1. Bean definitions,
    2. View resolvers,
    3. content-negotiation,
    4. controller mapping,
    5. resource handling,
    6. error handlers,
    7. db pooling,
    8. connections,
    9. profiles,
    10. transactionmanager,
    11. jdbc templates or JPA config.
  7. Write code to create/update schema from script
  8. Package WAR file
  9. Deploy to running Web Container.

Spring Boot Way

  1. Use/Include Spring Boot Starter Parent in pom/Gradle, add Spring Boot Starter Web and JPA
  2. Write Controller, Service, Repository Beans and Domain Objects
  3. Write Application class with @EnableAutoConfiguration
  4. Run

I’m not kidding, you actually get to just write code for your app.

Infrastructure Services

Infrastructure services are things you don’t think about until after you deploy your app into the world, and realize you don’t know what the heck is going on out there.  They are something you can now throw in right away, leverage what is provided, and then enhance according to your own needs.  Including the Spring Actuator Starter gives you (among other things):

  • Audit
  • Metrics
  • Gauges
  • Tracing
  • Configuration Views
  • Remote management via SSH
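As with the other starter packs, you pull all of this in with a single dependency (the version is managed by the Boot parent POM):

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>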

Packaging Alternatives

As noted before, Spring Boot provides Maven and Gradle plugins that give you a couple of packaging alternatives for your application.

Package as a JAR

  • Choose embedded container (Jetty, Tomcat)
  • Configure container programmatically and/or via application config
  • Run using java -jar

Package as a WAR

  • No web.xml required
  • Spring Dispatcher Servlet is auto-configured
  • Deploy to any Servlet container

Enough Talk

In the end, developers don’t like reading manuals (or long blog posts); they want code!  I created a project to demonstrate using Spring Boot to make a halfway non-trivial application that goes beyond “Hello World”.

To-Boot

Just clone it locally to give it a spin, or fork it if you want to hack on it.

git clone git@github.com:hoserdude/to-boot.git

The app was cobbled together in about 2 days, which goes to show how productive Boot can be, or if you are faster than me, how slow I am.

To-Boot was meant to demonstrate the following

  • Bootstrapping (POM, Application.class)
  • Controllers (Routes, Marshalling, Resource Handling)
  • Views (Thymeleaf, AngularJS)
  • APIs (Swagger, DTO definition, Content Negotiation)
  • Security
  • Persistence (Config, Repositories, Domain Beans)
  • Application Structure, DI
  • Monitoring and Metrics
  • Deployment Options

I welcome contributions to this example if anyone feels something can be improved or added to it.

Enjoy.

Leveraging Spring Container Events

Another thing you don’t have to invent yourself when building a Spring application is in-process eventing.  There is a lot of literature out there about the architectural advantages of eventing and messaging using a broker such as JMS or AMQP.  Spring provides lightweight templates for publishing and subscribing with these brokers, but that’s not what I am going to talk about.  The architectural principles that you benefit from with a message broker are decoupling (you never talk directly to the listener of your message) and asynchronous processing (you don’t block waiting for a response).  Without a message broker, it’s still possible to benefit from these principles by simply plugging in to the Spring Container’s own publish and subscribe mechanism.

Why would you want to do container messaging versus broker messaging?  First of all, it’s never an either/or situation – you can leverage both; it’s simply a matter of what you can do locally versus distributed, and it’s simple enough to mix and match.  Let’s take a common example use case for this: a user signs up for your web application.  This triggers all sorts of system activity, from creating an entry in a User table, to starting a trial or subscription, to sending a welcome email, or perhaps initiating an email confirmation process, or, if you are a “social” application, pulling data from the connected social networks.  Imagine a UserController kicking this off by accepting a registration request.  A “coupled” approach would have this controller invoke each and every one of these processes in sequence, in a blocking or non-blocking fashion, OR it could publish an event that a new user was created, and services dedicated to each function would pick up the message and act appropriately.  The difference between the two approaches is that the former is a God Object anti-pattern, and the latter is a nicely decoupled Pub/Sub pattern with a sprinkle of Responsibility-driven design.  An added bonus is the ability to test all of these functions in isolation – the pub/sub mechanism simply binds them together at runtime.

Let’s look a bit more in detail at how this works.  First you need to decide what events you want to publish.  In the example above, it makes sense to have a UserCreationEvent with an attribute of the User, or the unique ID of the user, whichever you feel more comfortable with.  This object would extend Spring’s ApplicationEvent, which itself has an event timestamp attribute.  So the class might look like this:

public class UserCreationEvent extends ApplicationEvent {

    private User user;

    public UserCreationEvent(Object source, User user) {
        super(source);
        this.user = user;
    }

    public User getUser() {
        return user;
    }
}

To publish this event, in our case from the UserController or better yet a UserService that takes care of creating the user in the system, we can just wire up the ApplicationEventPublisher:

@Autowired
private ApplicationEventPublisher publisher;

Then when we’ve finished creating the user in the system, we can call

UserCreationEvent userCreationEvent = new UserCreationEvent(this, user);
publisher.publishEvent(userCreationEvent);

Once the event is fired, the Spring container will invoke a callback on all registered listeners of that type of event.  So let’s see how that part is set up.

A listener is simply a bean that implements ApplicationListener<YourEventType>, so in our case ApplicationListener<UserCreationEvent>.  Our listeners will all implement their own callback logic for the onApplicationEvent(UserCreationEvent event) method.  If you are using Spring’s component scan capability this will be registered automatically.

@Component
public class UserWelcomeEmailListener implements ApplicationListener<UserCreationEvent> {

    @Autowired
    private EmailService emailService;

    @Override
    public void onApplicationEvent(UserCreationEvent event) {
        emailService.sendWelcomeEmail(event.getUser());
    }
}

It’s important to note a few things about the default event multicasting mechanism:

  • It is all invoked on the same thread
  • One listener can terminate broadcasting by throwing an exception, so there is no guaranteed delivery contract

You can address these limitations by making sure all long-running operations are executed on another thread (using @Async for example) or plugging in your own (or supplied) implementation of the Multicaster by registering a bean that implements the interface with the name “applicationEventMulticaster”.  One easy way is to extend SimpleApplicationEventMulticaster and supply a threaded Executor.
To avoid one listener spoiling the party, you can either wrap all logic within each listener in a try/catch/finally block or in your custom Multicaster, wrap the calls to the handlers themselves in try/catch/finally blocks.
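As a sketch of the asynchronous option (the pool size here is arbitrary), register a multicaster bean under the name Spring looks for and give it an Executor:

@Configuration
public class AsyncEventConfig {

    // Spring checks for a bean with this exact name when dispatching events;
    // supplying a task executor moves listener execution off the publishing thread.
    @Bean(name = "applicationEventMulticaster")
    public ApplicationEventMulticaster applicationEventMulticaster() {
        SimpleApplicationEventMulticaster multicaster = new SimpleApplicationEventMulticaster();
        multicaster.setTaskExecutor(Executors.newFixedThreadPool(4));
        return multicaster;
    }
}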

Another thing to be aware of as you think about this: if a failure in any of the processing that occurs after the event is published causes inconsistency in the data or state (of the User, in our case), then you can’t do all this.  Each operation has to be able to deal with its own failures and recovery process.  In other words, don’t push anything that needs to be part of the whole “create user” transaction into this type of pattern – in that case you don’t have a decoupled process, so it’s better not to pretend you do.

As I mentioned before, there is no reason you can’t also leverage a distributed message broker in concert with Application events.  Simply have an application event listener publish the event to the broker (as sort of a relay).  In this way you get the benefit of both local and distributed messaging.  For example, imagine your billing service is another system that requires messages through RabbitMQ.  Create an ApplicationListener, and post the appropriate message.  You’ve achieved decoupling within and between applications, and leveraged two types of messaging technologies for great justice.

@Component
public class SubscriptionSignupListener implements ApplicationListener<UserCreationEvent> {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @Override
    public void onApplicationEvent(UserCreationEvent event) {
        SubscriptionMessage newSubMsg = new SubscriptionMessage(new Date(), SubscriptionMessage.TRIAL_START, event.getUser());
        rabbitTemplate.convertAndSend(MqConstants.NEW_SUB_KEY, newSubMsg);
    }
}

So what about using something like Reactor instead?  Again, it’s not an either/or situation.  As Jon Brisbin notes in this article, Reactor is designed for “high throughput when performing reasonably small chunks of stateless, asynchronous processing”.  If your application or service has such processing, then by all means use that instead of, or in addition to, ApplicationEvents.  Reactor in fact includes a few Executor implementations you can leverage, so you can have your ApplicationEvent cake and eat it too!

Spring Boot ConfigurationProperties and Profile Management Using YAML

From Properties to YAML

There are dozens of ways to handle externalized configuration in an application or service. Over the years Spring has provided quite a few, and recently the @Value and @Profile annotations have started to bring some sanity to the situation by attempting to minimize the developer’s interaction with the filesystem to read what should be readily available. With the advent of Spring Boot there are a couple new interesting twists – YAML files and @ConfigurationProperties.

Defining Configuration

First let’s look at YAML. The non-Java world has been using YAML format for quite some time, while the Java world appears to have been stuck on the .properties file and format. Properties files can now be a relic of the past if you so choose, as Spring Boot gives us the option to configure an application for all profiles within one file – application.yml. With the YAML file format you define sections that represent the different profiles. For example, consider this application.yml:

spring:
  profiles.active: default
---
spring:
  profiles: default
spring.datasource:
  driverClassName: com.mysql.jdbc.Driver
  url: jdbc:mysql://localhost:3306/myappdb?autoReconnect=true
  username: devuser
  password: devpassword
management:
  security:
    enabled: true
    role: ADMIN
doge:
  wow: 10
  such: so
  very: true
---
spring:
  profiles: qa
spring.datasource:
  url: jdbc:mysql://qa.myapp.com:3306/myappdb?autoReconnect=true
  username: qauser
  password: qapassword

Note a few things about this format – there’s a default active profile defined in the first section, followed by the default profile itself, followed by the qa profile (sections are separated by “---”). There is no longer the namespacing format for each property, but an indentation-based markup to delineate hierarchy (YAML is whitespace-sensitive). Also note that in the qa profile we don’t re-define the management configuration – it is inherited – whereas we do want to override the datasource url, password and user, but not the driver.

If we did nothing more than this, the application configuration would default to the default profile, which would be fine in dev mode, but when deployed to the qa environment we would pass -Dspring.profiles.active=qa on the command line for that profile to take effect. You can have multiple profiles active at the same time too, otherwise it would have been called spring.profile.active 🙂 So who wins if there are multiple overrides? The lower you are in the yml file, the more override power you have, so that’s one way to think about how you organize your configuration.

This is all well and good, however what happens when the default profile is NOT a great profile for some of the development or QA team, and they want their own overrides? This is where you do need to resurrect the properties file, but this time for great justice. Your application.yml should be checked in, however you don’t want each and every team member checking in their own little overrides, or the file will get unwieldy and nobody will be happy. The trick is to create an application-<developername>.properties locally and exclude it (or all properties files) from your source control (.svnignore, .gitignore). With this file on the classpath you can reference the profile at startup just as we did for qa, e.g. -Dspring.profiles.active=default,dilbert.
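For example, a hypothetical application-dilbert.properties (never checked in) might contain nothing but the values that developer needs to differ from the defaults:

spring.datasource.url=jdbc:mysql://localhost:3306/dilbert_sandbox?autoReconnect=true
spring.datasource.username=dilbert
spring.datasource.password=localonly

Everything else still comes from the default profile in application.yml.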

But this trick is not just for developers. It’s never a good idea to check in production keys/tokens/secrets to your source control for all to see (unless you have absolutely no controls in place or don’t care), so this is a great mechanism for operations and/or SCM staff to have their own properties override file that contains all the sensitive content and lets them control who has access to it.

Consuming Configuration

So how do we get access to all this fancy stuff from code? Sticking to the principle of keeping developers out of the business of managing what files to load and when, Spring Boot comes with even simpler support for referencing configuration file values, where you can create strongly typed beans to represent sections of the configuration. Let’s look at an example and then examine it.

@Configuration
@EnableConfigurationProperties
@ConfigurationProperties(prefix="doge")
public class DogeSettings {
    private int wow;
    private String such;
    private boolean very;
    // ...getters and setters
}

  • @Configuration tells Spring to treat this as a configuration class and register it as a Bean
  • @EnableConfigurationProperties tells Spring to treat this class as a consumer of application.yml/properties values
  • @ConfigurationProperties tells Spring which section of the configuration this class represents.

Note that there is implicit type conversion for primitives and that this class is a Bean. That means you can @Autowire it into other Beans that require its values. E.g.:


@Autowired
private DogeSettings dogeSettings;

public boolean requiresDogeness() {
    return dogeSettings.getWow() > 5 && dogeSettings.isVery();
}

Pretty easy and straightforward. It would be even more straightforward if @ConfigurationProperties was all you had to annotate, with the other two being implicit, but I leave that to the Spring team.

Testing

Using this scheme in a unit test environment is quite easy as well. You can of course define a unit-test profile with any overrides from the defaults you would use in such scenarios, and your @ConfigurationProperties classes will respect those, as long as you have the profile set. One easy way to do that is to create a TestConfig class and use it to set up all the test configuration overrides (such as beans, mocks, etc).


@Profile("unit-test")
@Configuration
@EnableJpaRepositories("com.myapp.repository")
@ComponentScan({"com.myapp"})
@EnableAutoConfiguration
public class TestConfig {
  @Bean
  public DataSource dataSource() {
    return new EmbeddedDatabaseBuilder().setType(H2).build();
  }
}

Note the @Profile annotation to set the context for this configuration. Without it we could accidentally have this configuration in our production configuration… The rest is just boilerplate setup that is optional depending on what you want to test. From the test itself you can then leverage this:


@ActiveProfiles("unit-test")
@SpringApplicationConfiguration(classes = {TestConfig.class})
public class DogenessTest extends AbstractTestNGSpringContextTests {

  @Autowired
  private DogeSettings dogeSettings;

  @Test
  public void testRequiresDogeness() {
    ...
  }
}

With @ActiveProfiles we can now isolate the configuration for both the application properties as well as any bean config. The @SpringApplicationConfiguration gives us an annotation-based spring context to work with for the execution of the tests.

Conclusion

If you have the luxury of starting a project from scratch, it’s worth giving this approach a try.  There’s very little code required, no static access to config utilities, and you get strongly typed configuration objects that you can inject anywhere you need them.  To see some of this code in action, check out my project in GitHub.

Out with the Old…

I just got back from a crazy but inspiring 40 hr hackathon for college students in Boston (www.hackbeanpot.com) that my employer (Intuit) co-sponsored. After talking to most of the teams over the course of the event, and seeing the output at the end, it’s clear that there are fewer and fewer barriers between ideas and some concrete manifestation of them. I say that because when I first heard about some of the project ideas, I thought most of them were unlikely to be viable, especially in the time allotted. I was wrong. The students had a willingness to use whatever tool they could, and pushed through barriers, fatigue and lack of experience. Nothing seemed to trip them up or demoralize them. Several teams hit project-ending barriers, and simply “pivoted” and kept going. I saw teams collaborating using GitHub, deploying Ruby and Python applications to Heroku and Google App Engine, integrating with social networks, getting working Android and iOS apps, some connecting to the back-ends they had deployed. I walked away wondering how they could pull this all off so quickly. In another 40 hrs almost all of those projects would be commercially or academically viable. Some did go to market (Chrome Extensions). Some even had their own domains registered. The web apps all looked polished (thanks Bootstrap, D3), the mobile apps were crafted with design in mind, and one even introduced some new visual paradigms.  My mouth is still agape.

It took me much longer to get my head around all the fancy new toys that were in use – months, in fact. Collaborative source control? Cloud deployments by pushing to a git repo? Whatever happened to building out your server, configuring commercial OSes and routers, then finding a co-lo to host your stuff and a wicked “fast” T1 line, then FTP’ing a bunch of stuff there to “deploy” it? These are now the ways of the caveman, equivalent to hitting rocks together to make fire. To this generation, the new tools and processes are simply the way things get done; there’s no nostalgia, no need to understand what’s under the hood. And that’s the way it should be, obviously. As we always dreamed, the hard stuff has been abstracted away, and we can focus on what we came to do – build something useful, something fun, something that serves a purpose. All this junk between us and our ideas is getting out of the way. This means that the next generation of developers has a completely different focus – they can stand on the shoulders of giants, and build. My generation got caught up so much in the “how” of building that we spent most of our time and energy there. In some cases this has been paralyzing, where (I am not kidding) years have been lost in the melee.

Putting this in contrast with the process, politics, vested interests and technical religion I see with my generation of developers, I realize it’s time we either join them or be made obsolete. We need to forget the battles of yore, of technology vs. technology, of what would be better if it was hugely successful and had to “scale”.  Pretty much everything you need is there, and it’s all doing the same thing. Choose what is productive for you and your team if you have one and move along.

It’s not just us seasoned folk who need to change, it’s the environment we created around our old-school processes. In many cases we adopted traditional corporate organizational structures (useful for banks, insurance companies, manufacturing even). Every software company that adopts this structure always finds itself stagnating, lacking innovation and engagement. It’s no surprise that the startups and “rogue” companies that adopt flatter, more team-driven structures do more, change the status quo and create engaged, innovative people.  The company I work for is moving in that direction which is both encouraging and necessary.

Beyond seeing all this inspiring development it warmed my heart to see that the gender ratio was quite balanced, with several of the winning teams “all girl”.  The future is bright for the tech sector, there’s talent, motivation, diversity and balance coming our way. My only fear for them is that they will be discouraged by what we have made of this industry in some cases. However, after seeing what I did this weekend, I reassure myself with the thought that they will create something better, something more naturally adapted to this new world. Old age and treachery can only hold back youth and talent for so long…

Application Instrumentation Patterns

There are many different ways to instrument your application – and many tools available to help with the effort.  I’m going to show how you can get a lot of “free” instrumentation out of any size system with just a little bit of code and a little bit of discipline.  Let me start out by saying that none of this is news to many people – the patterns and practices I’ll discuss have been around for a long time, but it amazes me how, with every system I have met (and even heard of), instrumentation and logging is for the most part essentially random, due to its organic evolution.  It’s understandable given the time constraints and short-sightedness we typically operate under, but it doesn’t have to be that way.

This post is based on a talk I have given at a few venues entitled Creating Highly Instrumented Applications with Minimal Effort.  I hoped to appeal to both the lazy AND the curious developer with this.  The lazy part is obvious, but you have to be curious (or care?) to want to know this level of detail about how your system works.

Let’s start with a scenario so that you can get an idea about the punchline here:  A customer calls in and says that they have been getting errors every time they try to update their account in your application.  With one element of context (the error ID on the screen they see OR their email address, OR some other identifying element) I can answer the following questions in very little time:

  1. The last time they had that error
  2. The first time they had that error
  3. The component that caused the error
  4. (Potentially) The root cause of the error
  5. The context in which they were working
  6. What they were doing before that error
  7. How many times they tried
  8. How many other customers had that error today
  9. How many other customers have ever had that error
  10. The first time any customer had that error

This all sounds pretty useful – root cause analysis, ability to reproduce the scenario, customer impact over time, and even tying it back to a release which may have introduced a bug in the system.

So how do you get this level of insight from your application?  You instrument it.

Types of System Output

There are a lot of different types of output from both the system (operating system and infrastructure services) and your application.  This is how I distinguish between them.

System Instrumentation

This is a class of data that is very low level – threads, memory, disk I/O that relates to the health and performance of the OS and Infrastructure.

  • JMX
  • WMI
  • SNMP

System Logging

This is a class of data that doesn’t plug into operational monitoring tooling as well, but gives the system and infrastructure some place to dump stuff that we typically ignore unless something goes wrong.

  • system.log (*nix)
  • Event.log (Windows)
  • apache.log
  • nginx.log
  • iis.log

Application Instrumentation

Application Instrumentation is the star of the show for us here – it’s useful when things go wrong, but it’s just as useful when things are going just fine.  We’ll see why a bit later, but here are some of the characteristics of Application Instrumentation:

  • Cross-cutting (free) and non-intrusive
  • Provides app activity metrics
  • Can passively trigger alerts (based on rules)
  • Source of performance data

Application Logging

Application Logging is

  • Intentional, in other words a developer has to go to the trouble to do it
  • Typically relates to Business Transactions and how well they worked out
  • Overtly triggers alerts (the database is down!)
  • Source of business metrics ($20 subscription processed)
  • Aids in troubleshooting failures, bugs

Quality Data

Let’s look at some examples of log output that I’ll randomly sample from my own laptop.  You can follow along if you’re on a Mac by opening the Console application and browsing around.  On Windows you can open up the Event Viewer and do the same.  The goal here is to determine how useful the log output is, and grade it accordingly. We’ll find what quality means as we intuitively see utility in certain characteristics of the data that is provided.

2012-09-28 11:14:26:783|SyncServer|38272|0x10ad16f50|Logging|Info| Logging initialized, engine version 673.6 : log level 3
2012-09-28 11:14:26:927|Mail|38269|0x104e23700|ISyncManager|Info| register client com.apple.Mail.Notes from /System/Library/Frameworks/Message.framework/Versions/B/Resources/Syncer.syncschema/Contents/Resources/MailNotesClient.plist (bundleId = (null), bundleRelativePath = (null))
2012-09-28 11:14:26:961|Mail|38269|0x7fee75613020|Stats|Stats| com.apple.Mail: slow sync com.apple.mail.Account, com.apple.mail.AccountsOwner
2012-09-28 11:14:26:964|Mail|38269|0x7fee75613020|ISyncSession|Info| com.apple.Mail: prepare to slow sync com.apple.mail.Account,com.apple.mail.AccountsOwner
2012-09-28 11:16:26:993|SyncServer|38272|0x10ad16f50|Server|Info| Cancelling all sync plans.
2012-09-28 11:16:27:029|SyncServer|38272|0x10ad16f50|Server|Info| Goodnight, Gracie.

Here’s what I find useful:

  1. Timestamp
  2. Who created the log  (SyncServer)
  3. What part of the SyncServer made the log statement (eg: Logging, Server)
  4. Some kind of session or process id to tie things together
  5. The Log Level (Info)
  6. Pipe (|) separated values

What I don’t find useful

  1. Non-semantic logging (commas, colons, periods)
  2. It said stats but I don’t see any numbers

Let’s do another.

2013-10-07 08:18:41.600 GoogleSoftwareUpdateAgent[27318/0xb0207000] [lvl=1] -[KSCheckAction performAction] KSCheckAction found installed product for ticket: <KSTicket:12345
 productID=com.google.GoogleDrive
 version=1.11.4989.8546
 xc=<KSPathExistenceChecker:0x1d259a0 path=/Applications/Google Drive.app>
 serverType=Omaha
 url=https://tools.google.com/service/update2
 creationDate=2012-09-14 22:20:29
>

Here’s what I find useful:

  1. Timestamp
  2. Who created the log  (GoogleSoftwareUpdateAgent)
  3. The Action that was being performed
  4. Some kind of session or process id to tie things together
  5. The Log Level (1)
  6. Some semantic (name = value) values

What I don’t find useful

  1. Some Non-semantic logging (colons, < >, LF delimiters)

One more, this time Tomcat starting up


Oct 09, 2013 9:35:14 PM com.springsource.tcserver.security.PropertyDecoder <init>
INFO: tc Runtime property decoder using memory-based key
Oct 09, 2013 9:35:14 PM com.springsource.tcserver.security.PropertyDecoder <init>
INFO: tcServer Runtime property decoder has been initialized in 225 ms
Oct 09, 2013 9:35:15 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-bio-8080"]
Oct 09, 2013 9:35:15 PM com.springsource.tcserver.serviceability.rmi.JmxSocketListener init
INFO: Started up JMX registry on 127.0.0.1:6969 in 123 ms
Oct 09, 2013 9:35:15 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 884 ms
Oct 09, 2013 9:35:15 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Catalina
Oct 09, 2013 9:35:15 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: VMware vFabric tc Runtime 2.7.2.RELEASE/7.0.30.A.RELEASE
Oct 09, 2013 9:35:15 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory /deploy/webapps/manager
Oct 09, 2013 9:35:15 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory /deploy/webapps/ROOT
Oct 09, 2013 9:35:15 PM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-bio-8080"]
Oct 09, 2013 9:35:15 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 470 ms

Here’s what I find useful:

  1. Timestamp (on alternate lines?)
  2. Who created the log  (org.apache.catalina…)
  3. The Action that was being performed (init?)
  4. The Log Level (INFO)
  5. Some metrics (startup times)

What I don’t find useful

  1. Mostly non-semantic logging (no delimiters or name=value pairs)
  2. Split lines

From this we can see that basically knowing the classic who, what, when, where and how in some structured and consistent manner is what we find useful.  Also that it’s human readable AND machine readable.  And guess what, when you have all that, you have some potentially “quality” data.

Theory and Best Practices

There are two principles that I follow that have enabled this type of scenario to be a reality, rather than a fantasy.

  1. Create consistent, semantic data that can be analyzed with a variety of tools
  2. Get in the middle of everything in your system, time it, log it, and maintain contextual information along the way

Good Data

  • Timestamp everything – be sure to use Global Timestamps (UTC – 2013-08-21 22:43:31,990)
  • Use key value pairs (name=value) and delimit your data properly (use pipes, semicolons, whatever works).  When you go to analyze this data, trust me it will be easier.  Microsoft has gone so far as to create an “Application Block”  called the Semantic Logging Application Block that formalizes this type of approach.
  • Set and log the context data when you have it  (who/what/where/when/how) because it’s interesting (as we have seen) and very useful for correlation downstream.
  • Leverage log categories to identify what tier, and better yet what component emitted the log
  • Time everything.  It’s easy, free and you will thank yourself later when you have that aha moment when something related to the timings yields some insight that makes your system better.
  • Be conscious of security; never log sensitive data that could put your or your customers’ privacy and data at risk.
  • Be consistent in naming (action=purchase; sale=oct13; productId=123123) vs (action=buy; promo=oct13; sku=123123).
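Put together, a single log line that follows these guidelines might look something like this (the field names and values are illustrative):

2013-08-21 22:43:31,990|INFO|web|OrderController|requestId=8f3a2c1d|userId=12345|action=purchase|promo=oct13|productId=123123|durationMs=142|status=success

It reads fine to a human, and any tool (or grep) can split it on pipes and equals signs.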

Get In the Middle Of Everything

Getting in the middle of everything in your system is the source of all your good data. Without doing this, you can follow all the above best practices against very little or random data.  There are many ways to meddle in your own business and most of them are incredibly simple, cheap and non-invasive.  The secret is to leverage existing patterns and frameworks to do the dirty work.  In the context of web applications life is good, because there aren’t really any viable frameworks that don’t have a pipeline of some sort.  Whether you are in Java, .Net, Node.js or Ruby, you have some way to intercept incoming calls and outbound responses (and in some cases MANY ways).  This is commonly used for audit, security, logging and performance tracking – and I have combined everything but security into one, since all that information is more useful together: together it forms the basis of the context that you will want to propagate downstream so that subsequent logging statements are enriched with this contextual information.
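In a Java web application, for example, one minimal sketch of this is a servlet filter that times every request and stashes contextual data in SLF4J’s MDC, so that downstream log statements pick it up automatically (assuming your log pattern includes the MDC keys; the key names here are illustrative):

public class InstrumentationFilter implements Filter {

    private static final Logger log = LoggerFactory.getLogger(InstrumentationFilter.class);

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        long start = System.currentTimeMillis();
        // Context that every subsequent log statement on this thread will carry.
        MDC.put("requestId", UUID.randomUUID().toString());
        MDC.put("path", httpRequest.getRequestURI());
        try {
            chain.doFilter(request, response);
        } finally {
            log.info("event=request|path={}|durationMs={}",
                    httpRequest.getRequestURI(), System.currentTimeMillis() - start);
            MDC.clear(); // don't leak context onto the next request handled by this thread
        }
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}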

Another way to get in the middle of everything that is not “web” related is to leverage Aspects.  Just as filters or interceptors in the web framework let you get into the middle of the request/response flow, various aspect “enabling” technologies and patterns exist to let you get into the middle of the Object-to-Object request and response and apply the same audit, security, logging and performance tracking.  In Java there is AspectJ, which is heavily used by the Spring Framework itself, and Spring provides a nice abstraction to the developer to simplify common use cases.  In .Net you can get similar functionality by leveraging a dependency injection framework like Unity that can support dynamic proxies with internal interceptor pipelines.  In Node.js you can leverage event handlers, which can be chained together easily with frameworks like Express.  I have created examples of all three implementations in GitHub and will continue to tend to them based on interest and feedback.
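As a sketch of the aspect approach in Spring (the pointcut expression and package name are placeholders, and @EnableAspectJAutoProxy or the XML equivalent needs to be in place):

@Aspect
@Component
public class TimingAspect {

    private static final Logger log = LoggerFactory.getLogger(TimingAspect.class);

    // Wrap every method in the service layer, time it, and log the result
    // in the same key=value style as the rest of the instrumentation.
    @Around("execution(* com.myapp.service..*(..))")
    public Object timeServiceCalls(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return joinPoint.proceed();
        } finally {
            log.info("component={}|operation={}|durationMs={}",
                    joinPoint.getSignature().getDeclaringType().getSimpleName(),
                    joinPoint.getSignature().getName(),
                    System.currentTimeMillis() - start);
        }
    }
}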

Once you are in the middle of everything, creating consistent and contextual data, you then start to create a very rich dataset which can be parsed and indexed by various products or even just grep.

I’ve seen this leveraged for operations dashboards, business dashboards, SLA measurement, QA, customer support, performance testing, and much more.   For a developer, it really provides a level of transparency about what the system is doing that no debug session can replicate.

Log On!

Fixed Ship Date Agility

Reality Sucks

Many software development teams work in an environment where they are beholden to date-driven releases.  And many of these same teams aspire to be “agile”.  Agile means a lot of things to a lot of people, but at its core it’s about only releasing things that are DONE.  What drove the whole Agile movement in software was that the waterfall process of days of yore could drag projects on for years without anything being released, because the standard of DONE was essentially unachievable.  Either the feature functionality was not complete, the testing was not complete (or do-able), or late-breaking integration or release issues were discovered that impacted the scope.

So what’s a team to do when they are told “You shall ship on this day, with this product“?

I have had to deal with this situation for most of my stint as a developer.  This post is intended to share some of the best practices I’ve garnered from the projects I’ve worked on that are of this ilk.

Lose the Religion

Step 1 is to come to terms with the fact that no process in a book is going to suit your circumstances.  As a team you need to decide what principles from Agile you can actually apply.  The purists will cry foul (fair warning), and as long as they are outside your team you have nothing to worry about.  If, however, there are purists on the team, they have to decide if they really want to be there, because you can’t ship fixed-date stuff according to doctrine.  The doctrine should be interpreted as a set of principles, not something to be followed to the letter.
 

Develop Tactics

Driving on ice simply requires the right training and equipment.  It’s a calculated and deliberate process but can be safe.  And if the road is made of ice, it’s all you can do.

Figure out your time constraints – map out your start and end dates.  Working backwards from the drop-dead ship date, create a two week buffer at the end.  Everything in between is available for iterations.  The first iteration is called Iteration 0 – this is to tackle high-risk items and discover the lay of the land.  You should come out of this with your technology and architecture choices.  Try to keep it to a week.  The product owner needs to be very alert during this phase as the team members report back every day on how they are doing in the discovery and design phase.  This is where you get your first warning signals if something is going to be harder than it looked.  Your team may have to do proofs of concept and compare options.  Make it clear they are going to be making critical design choices that have to solve the customer problem, but also be feasible within the constraints you have.

Plan the first 2 iterations at the end of this.

Subsequent iterations should be relatively short (2 weeks?), fit as many as you can in between Iteration 0 and your buffer.

Stop to demo, course correct and plan the next iteration as usual within the cadence of the iterations.  Celebrate the little victories, and obstacles overcome.

Don’t Over Track

Keep track of what you need to do, who is doing it, and what got done.  Get things sequenced in logical order (not just priority – some infrastructural stories might come before the top Product priority because they are enabling stories).  Tracking is very important for QA purposes (they need to get ahead of the development if they can) and not important for velocity stats – remember a death march is not worth tracking for metrics – it’s worth tracking for what is being done, that’s all.

Be Inclusive

A Team is not a bunch of “developers” and a manager.  No software product can go out the door without the support of QA, Operations, SCM, Marketing and sometimes Documentation, Privacy and Legal.  Anyone who is required to get the product out the door is on your team. As such, they need to be part of the daily standups, they need to have their work captured in Stories and Tasks and they participate in planning and review.  There’s nothing worse than spending a great deal of effort building something only to find out you can’t deploy it in your datacenter or you’ve violated some laws, copyrights or patents.
 

Embed Your Customers & Product Owners

One of the principles of agile development is transparency.  Some mechanisms are built into most agile processes to promote that, namely the daily standup, iteration demos/reviews, and in some cases pair programming and daily test suite reports.  These all keep the team up to speed on the status of the project, however it’s the product owner who ultimately has to do something with this data.

Embed your product owner, be it a Product Manager, a Customer or a stakeholder in the outcome of the project.  By embed I mean physically embed them with the team, right in the middle of the action, co-located (Yahoo! style).  The product owner benefits by hearing the chatter of the team beyond the scheduled meetings, and the team benefits from the availability to answer questions on requirement refinements, tradeoffs in implementation etc…

The knowledge the product owner has on a day-to-day basis gives them what they need to make course corrections, set expectations with other stakeholders, and also get a sense of what obstacles they can help clear in advance of work in that area.

Ensure Designers are Pigs, Not Chickens

Assuming the designers are embedded as I prescribed, they will quickly see the progress the team makes with their designs, or not.  The critical dynamic here is that there’s a tight feedback loop between the implementation and the design.  It’s ideal if a designer can sling some HTML/CSS in a web project, for example – that lets them get ahead of the team and set up templates and patterns in advance.  When designers are committing code, they have skin in the game and join the implementation team.  Designers should also be ready to offer more than one option to the team when push comes to shove – some designs are better, but sometimes they are harder to pull off – a good designer will appreciate constraints, and come up with something still usable and aesthetic, but easier to build and test.

Empower Your Team Members

The term product owner can sometimes disenfranchise team members by giving them the impression that they don’t have skin in the game.  Make it clear that the “owner” is simply the person appointed by the business to make decisions on scope based on the resources and time.  They should be the expert in the domain your team is working on and know what can be traded off when the time comes.  It’s up to the team members to bring issues to the forefront and present options to the owner.  The only way to encourage ownership of the outcome in some contexts is to put it into practice.  Use the standup to tease out opportunities for team members to contribute their ideas (in the parking lot, of course) and incorporate them into the plan. This cycle creates a series of celebrated contributions to the outcome that keep the team invested.

Ship It

Get it out there, there will be bugs, there will be compromises.  Deliver the value to your customers, celebrate it, be proud of it, and you will feel all the more inspired to go back and make it even better.


I Am On Fire with Testability – Part 1

I’ve been taking some time off from building out features to focus on reducing the unit testing debt we have in our codebase, and along the way building out some core testing infrastructure to make it easier to sustain writing tests in concert with features (imagine that). Some great lessons have been learned, notably about making code testable (see previous post).

Along this journey, the ASP.Net MVC framework that we use has turned out to be one of the more elegant bodies of software I have used in a while, but I didn’t really appreciate that until I tried testing code in its context. The main principle at play here is that in MVC you are always using an abstraction/wrapper over the core ASP.Net classes. You don’t have to use them (apparently, if you are dumb or a sucker for pain), since you can always reference HttpContext.Current and do whatever you want.

Light Painting With Fire

Long story short, whenever we bypassed the framework (and its abstractions), we paid dearly. It turns out the MVC framework was designed with testability in mind, so if you go with the flow (man) you get to inherit that for free. It took a fair amount of refactoring to make sure nobody was touching ASP.Net constructs directly. Let’s just say that I had to check out 100+ files to remedy the situation.

Moral Of The Story: never reference HttpContext.Current. Ever, ever, ever. While this ASP.Net convenience property is handy and reliable, it makes your code inherently untestable. Why? Because it can’t be mocked, and if it can’t be mocked, it can’t be unit tested.

So what is a coder to do?  Every Controller and View has a Context (ControllerContext and ViewContext respectively).  These provide accessors to an HttpContext that derives from HttpContextBase, which is an abstract and thus mockable class.  So, if you want to work with the Request, use ControllerContext.HttpContext.Request (or, even more conveniently, the base Controller has a shortcut – HttpContext.Request).  Similarly, in Views you can reference the Session like so: ViewContext.HttpContext.Session.  Neato, peachy keen, and a boon to testability!
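To make the difference concrete, here is a minimal sketch (the OrderController and its actions are hypothetical illustrations, not code from our project) contrasting the static reference with the mockable abstraction:

using System.Web.Mvc;

public class OrderController : Controller
{
    // Bad: static reference to the live ASP.Net pipeline – impossible to mock in a unit test
    public ActionResult Untestable()
    {
        var userId = System.Web.HttpContext.Current.Session["UserId"];
        return View(userId);
    }

    // Good: goes through the Controller's HttpContext property (an HttpContextBase),
    // which a test can swap out for a mock or a stub
    public ActionResult Testable()
    {
        var userId = HttpContext.Session["UserId"];
        return View(userId);
    }
}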

So what do you need to do to fully mock the MVC application? We found all sorts of snippets of code around the intertubes that did this and that, even some attempts at a full-blown mocking framework for MVC. Your mileage may vary with any of these, but at the core there are a few things you need to know. First off, you need to be able to take control of:

  • HttpSession
  • HttpRequest
  • HttpResponse

Everything pretty much pivots around these main classes. You have some choices to make about the “how” of mocking as well. You can use mock objects or stubs.  Your choice depends on how much control you want over the objects themselves.  We use Moq, which is pretty powerful in that it can mock almost anything, and you can add dynamic behaviors to objects with relative ease.  That said, I like to mix in stub (or pretend) objects that mimic real behavior.  For example, I want to mock the HttpSession, which is a pretty dumb object (not a lot of logic), but it does have a storage mechanism.  By simply extending the abstract base class (HttpSessionStateBase), I can mimic real session-level storage.


using System.Collections.Generic;
using System.Web;

public class MockHttpSession : HttpSessionStateBase
{
    // Backing dictionary that mimics real session storage for the duration of a test
    private readonly Dictionary<string, object> _sessionStorage = new Dictionary<string, object>();

    public override object this[string name]
    {
        get
        {
            if (_sessionStorage.ContainsKey(name))
            {
                return _sessionStorage[name];
            }
            return null;
        }
        set { _sessionStorage[name] = value; }
    }

    public override void Remove(string name)
    {
        if (_sessionStorage.ContainsKey(name))
        {
            _sessionStorage.Remove(name);
        }
    }
}

Then I can use a real mock and have it leverage this stub:

var Session = new MockHttpSession();
var Context = new Mock<HttpContextBase>();
Context.Setup(ctx => ctx.Session).Returns(Session);

At test setup time, I can set variables in the session storage, and the executing code is none the wiser.
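Putting it together, a test can seed the stubbed session and then exercise a controller action through the mocked HttpContextBase. This sketch reuses the hypothetical OrderController from the earlier example and assumes NUnit alongside Moq:

using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using Moq;
using NUnit.Framework;

[TestFixture]
public class OrderControllerTests
{
    [Test]
    public void Testable_Action_Reads_UserId_From_Session()
    {
        // Arrange: seed the stub session and hang it off a mocked HttpContextBase
        var session = new MockHttpSession();
        session["UserId"] = 42;

        var httpContext = new Mock<HttpContextBase>();
        httpContext.Setup(ctx => ctx.Session).Returns(session);

        var controller = new OrderController();
        controller.ControllerContext =
            new ControllerContext(httpContext.Object, new RouteData(), controller);

        // Act: the action reads HttpContext.Session, none the wiser that it is fake
        var result = (ViewResult)controller.Testable();

        // Assert: the model is whatever the test planted in the session
        Assert.AreEqual(42, result.ViewData.Model);
    }
}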

For simple property set/get you can just leverage Moq’s SetupAllProperties functionality. This mimics basic get/set behavior on the object, so you can get and set properties without having to create a stub or define dynamic behavior at setup time. For example:

// Note: the generic type argument was lost in the original formatting;
// HttpCachePolicyBase is shown here as a plausible stand-in.
var Cache = new Mock<HttpCachePolicyBase>();
Cache.SetupAllProperties();
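As an illustration of what SetupAllProperties buys you (using HttpResponseBase purely as an example type, not the one from our codebase), the mock’s virtual properties then behave like plain backing fields:

var response = new Mock<HttpResponseBase>();
response.SetupAllProperties();

response.Object.StatusCode = 404;                   // the set is remembered...
Assert.AreEqual(404, response.Object.StatusCode);   // ...and the get returns it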

So what does it look like to mock everything at once?  More on that in Part 2.


Test-Enabled Development

I’m going to coin a phrase that for me captures a more realistic and achievable form of development than Test-Driven Development (TDD). TDD is nice in theory, and I’m happy for those who are capable of truly executing on it. The reality for me has been that even in “green field” development projects where you are coding fun new things, the feature development quickly moves past the tests for said features. For the purists out there, yes this is an affront to Scrum (which I purport to follow), TDD and perhaps even good engineering discipline. However, many developers don’t code for aesthetic reasons; they code to make products that serve customers and hopefully make money for them.

Fish, chillin'

So if we accept reality and the imperfections in process that tend to occur, how can we as quality-conscious developers still “do the right thing”?  My contention is that we can follow the practice of Test-Enabled Development (TED).

The principle of TED is that regardless of when you test your code, your code must be testable. More on what that entails shortly. The key point is that your code may be tested prior to implementation (if you are a saint), during implementation (if you are pretty good) or after the fact (if you are lazy/under pressure to deliver). In all cases, your code should be amenable to testing.

So what characterizes code that is testable? In the world of strongly-typed languages like Java and C# we can do a few things right off the bat:

  • Interface-based programming – ensure all system components are defined with interfaces.  For one thing it encourages design by responsibility, secondly it provides a looser coupling, and most importantly, it allows for mocks, stubs and other chicanery on the testing front.
  • Don’t use static references if possible, and avoid stateful behavior – let the state be brought to you.   In other words, mind your own business and let either the caller or some persistent store tell you what state the world is in.  That also encourages looser coupling and lets your tests set up the state/context that they need.
  • Factor code into discrete functions – huge methods that do 100s of things are bad for many reasons but from a testability standpoint they are a higher form of sin.

Bonus Points: use a Dependency Injection framework like Spring, Ninject, Unity, or whatever – then the test code can take control of what is real and what is mocked.
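To make those guidelines less abstract, here is a rough C# sketch of what they look like in practice (IDiscountRepository and PriceCalculator are invented names for illustration, not part of any real codebase):

using System.Collections.Generic;
using System.Linq;

// The dependency is an interface, so a test can substitute a mock or stub
public interface IDiscountRepository
{
    IEnumerable<decimal> GetDiscountsFor(string customerId);
}

public class PriceCalculator
{
    private readonly IDiscountRepository _discounts;

    // No statics, no hidden context: state is brought to us via constructor injection
    // (by a DI container in production, or by the test directly)
    public PriceCalculator(IDiscountRepository discounts)
    {
        _discounts = discounts;
    }

    // Small, discrete function that is easy to exercise in isolation
    public decimal FinalPrice(string customerId, decimal listPrice)
    {
        var totalDiscount = _discounts.GetDiscountsFor(customerId).Sum();
        return listPrice - totalDiscount;
    }
}

A test can now hand PriceCalculator a Moq mock or a two-line stub of IDiscountRepository and drive FinalPrice with whatever state it needs.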

Sound like a lot to do?  It is, but once you get the patterns in place, the goodness is self-replicating and yields dividends down the road.  Further, in the worst case you have a very well-architected but untested system, and if there’s a rainy day or a sudden influx of engineering discipline you can actually effectively test the software.

Mavenizing your XMLBeans Code Generation

Automating your Code Generation

XMLBeans provides a nifty code generator (scomp) that creates Java POJOs to represent XML Schema types from XSD documents. You can run scomp from the command line or from an Ant build file, but if you work within the context of Maven projects, I recommend taking advantage of the XMLBeans mojo.

As with most things Maven, if you follow the standard conventions, things just work. The trick to setting up your pom.xml to execute your code generation is to fit into the convention, and in this case it is not that difficult.

Keeping the source around

I happen to like having the generated source code in the context of my projects, and scomp lets you specify where you want it to place the source it generates. Maven has conventions around the directory structure for your source code, so we need to ensure our source goes to the right place – in this case it would be

src/generated/java

Generated code should be “throwaway”; in other words, we should be able to safely delete it and run the code generator again. We can specify in our pom.xml that we would like to have the generated sources removed as part of the clean process. We do this by configuring the maven-clean-plugin.


<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-clean-plugin</artifactId>
    <configuration>
        <filesets>
            <fileset>
                <directory>src/generated/java</directory>
            </fileset>
        </filesets>
    </configuration>
</plugin>
Now the sources will be cleaned out prior to any code generation.

Generating the Java Source

Now we can configure the XMLBeans plugin to do our bidding. First we tell Maven we want it to run at all – the plugin “plugs” itself into the proper lifecycle events in Maven so you don’t have to worry about that. Then it’s a matter of providing scomp with the appropriate options. Here again there are conventions that are good to follow, for example you may note the schema directory location and the .xsdconfig file location.

     
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>xmlbeans-maven-plugin</artifactId>
    <version>2.3.3</version>
    <executions>
        <execution>
            <goals>
                <goal>xmlbeans</goal>
            </goals>
        </execution>
    </executions>
    <inherited>true</inherited>
    <configuration>
        <schemaDirectory>src/main/xsd/MyServiceTypes</schemaDirectory>
        <sourceSchemas>
            <sourceSchema>MyType.xsd</sourceSchema>
        </sourceSchemas>
        <xmlConfigs>
            <xmlConfig implementation="java.io.File">src/main/xsd/xmlbeans.xsdconfig</xmlConfig>
        </xmlConfigs>
        <sourceGenerationDirectory>src/generated/java</sourceGenerationDirectory>
        <classGenerationDirectory>${project.build.directory}/generated-classes</classGenerationDirectory>
        <javaSource>1.5</javaSource>
        <!-- the element names for the two boolean flags were lost in formatting;
             inherited and download are assumed here -->
        <download>true</download>
    </configuration>
</plugin>


Overriding the Package Names

By default, XMLBeans uses the schema namespace as inspiration for creating your Java package names. In some cases this works well, but a schema designer may not take Java developer usability into account when creating XSD documents. For example, you may not appreciate a package name of “com.mylongcompanyname.services.public.purchaseorders.v356”, and would prefer something like “com.mylongcompanyname.po”. Happily, scomp will grant your wish by honouring any overrides you have defined in an xsdconfig file. In this case, we would create a file (xmlbeans.xsdconfig) with the following content:

<xb:config xmlns:xb="http://xml.apache.org/xmlbeans/2004/02/xbean/config">
    <!-- the original namespace uri did not survive formatting; ##any applies the override to all namespaces -->
    <xb:namespace uri="##any">
        <xb:package>com.mylongcompanyname.po</xb:package>
    </xb:namespace>
</xb:config>


Gratification

Now you can run “mvn clean install” on your pom and marvel at the automation you just implemented.
