Serve PDF documents from templates in Express

There are a number of ways to generate PDF documents in Node. You could construct a document manually, step by step, with PDFKit. You could generate it with one of many HTML-to-PDF conversion libraries. If you are already using Pug templates in your app, it would be nice to generate the PDFs from Pug.

This is possible in two steps:

  1. Render the Pug template into HTML
  2. Use PhantomJS to render the HTML into PDF

Inspired by, and leaning on, some excellent npm modules, I created express-template-to-pdf.

Install it, add it to Express, and serving up PDF documents from Pug templates in your Express routes is easy:

const path = require('path')
const pdfRenderer = require('@ministryofjustice/express-template-to-pdf')

app.set('views', path.join(__dirname, 'views'))
app.set('view engine', 'pug')

app.use(pdfRenderer())

app.use('/pdf', (req, res) => {
  res.renderPDF('helloWorld', { message: 'Hello World!' })
})

With options to configure the downloaded file name, page margins, and use CSS, serving up PDFs couldn’t be easier.

My colleague Steven joined in and made a few brilliantly simple tweaks, and now you can use the same module to serve PDFs generated from whatever type of templates you are using in your Express view engine. Not just Pug, but Nunjucks or Mustache or whatever else you fancy.

Posted in Node, Uncategorized

Spock and Clean Test Code

It can’t be emphasized enough that after correctness, comprehensibility is the most important feature of test code.

Rob Fletcher, “Spock: Up and Running”


Posted in Clean Test Code

2015 in review

The stats helper monkeys prepared a 2015 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 24,000 times in 2015. If it were a concert at Sydney Opera House, it would take about 9 sold-out performances for that many people to see it.

Click here to see the complete report.

Posted in Uncategorized

Top Ten Agile Books 2015

Here are the top 10 agile books that all agile software developers should read this year, including some old classics that should be re-read regularly!

Agile Product Management with Scrum

Agile Estimating and Planning

Scrum and Xp from the Trenches

Management 3.0: Leading Agile Developers, Developing Agile Leaders

Continuous Delivery

Fifty Quick Ideas to Improve Your User Stories

The Clean Coder: A Code of Conduct for Professional Programmers

BDD in Action: Behavior-driven development for the whole software lifecycle

User Story Mapping: Discover the Whole Story, Build the Right Product

The Agile Architecture Revolution

Posted in Agile, Uncategorized

2014 in review

The stats helper monkeys prepared a 2014 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 9,000 times in 2014. If it were a concert at Sydney Opera House, it would take about 3 sold-out performances for that many people to see it.

Click here to see the complete report.

Posted in Uncategorized

A Better REST Exception Mapping Technique

The conventional REST Exception Mapper mechanism in JAX-RS means that exception mapping code is disconnected from the service code, so that developers have to hunt for some remote classes both to understand and to define exception mapping behaviour. It is unnecessarily awkward to trace backwards from a service method to discover what exceptions might emerge and how they might be mapped to HTTP responses.

A declarative annotation-driven approach provides simpler implementation, allows for automated documentation, and enables defining and expressing exception mapping behaviour directly at the point of use.




For REST services, exceptions emerging from lower tiers often need to be converted to an HTTP response

  • with an HTTP status code that matches the semantics of the exception in the context of that REST service
  • and commonly other details like error messages and codes

I once worked on a project where we had to expose a large legacy service tier as a REST service. The internal exception model was built for consumption by code in the same environment and to be understood by the same developers. The granularity and information content of the exceptions was not suitable for onward transmission to clients of the REST service, so there was a lot of exception mapping work to do.

The translations involved a lot of detail, but there were only a few simple patterns going on. Given an exception type, choose an HTTP status code. Given an internal error code, choose some external code. Given an exception type, select an error message. Given some combination of type and error code, map to some other status and message and other details.

The usual approach to exception mapping in REST is an ExceptionMapper, e.g.

@Provider
public class BookExceptionMapper implements ExceptionMapper<BookException> {
    public Response toResponse(BookException ex) {
        return Response.status(Response.Status.BAD_REQUEST).build();
    }
}
When an exception occurs, the runtime will look for an Exception Mapper that can handle that type, or the nearest superclass handler.

A big problem with this is that when looking at the REST service code there’s no obvious local expression of the error responses that may emerge. You have to drill down into the code to see what exceptions might be thrown, then you have to go and find the exception mapper for that type and see what happens to it, eventually uncovering the HTTP status code and other details that will form the end result.

As well as being manual and tedious for the developer to trace, it’s also manual and tedious to document the possible errors that may emerge from the REST service for clients to understand.

Wouldn’t it be nice if the developer could look at the REST service code and immediately see what exceptions may emerge and how they will be mapped to HTTP responses, define new mappings right there, and also automatically generate the client-facing documentation that explains the potential errors and their meanings?

We achieved that using a custom annotation to declaratively express the exception mappings, and an interceptor to process and perform the mapping and translation.

By modifying our REST documentation tool (we were using Enunciate) to pass through our custom annotation, we could also automate the client facing documentation.

We defined the ErrorResponse annotation with the details needed to map an internal exception class to a given HTTP response, the supplementary error code we needed to pass in that error response, and an explanatory description that could be used for documentation:

@ErrorResponse(
    cause = KaboomException.class,
    httpStatus = 400,
    errorCode = 639,
    description = "This might happen if you've been really bad")
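
The annotation definition itself isn't shown above. As a rough sketch, assuming standard Java annotation machinery (the @Repeatable container and retention details here are my assumptions, not the original code), it might look like this:

```java
import java.lang.annotation.*;

// Sketch of the custom mapping annotation. Element names match the usage
// examples in this post; the Repeatable container is an assumption.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Repeatable(ErrorResponses.class)
@interface ErrorResponse {
    Class<? extends Exception> cause();
    int httpStatus();
    int errorCode();
    String description();
}

// Container annotation so several mappings can sit on one method.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ErrorResponses {
    ErrorResponse[] value();
}
```

An interceptor can then read the mappings off a resource method with reflection, e.g. `method.getAnnotationsByType(ErrorResponse.class)`.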


Now, when you look at the REST service interface you can see (and define) the exception mapping right there:


@ErrorResponse(
    cause = MyException.class,
    httpStatus = 403,
    errorCode = 789,
    description = "Ooh you did it totally wrong")
Thing getThing(@PathParam("thingId") int thingId) {
    ...
}

In some cases we needed to further sub-divide a given exception type into HTTP responses according to some internal details of the exception. We added an optional element to the annotation to create a more-specific mapping, and modified the interceptor to match according to “specificity”, eg:

@ErrorResponse(
    cause = KaboomException.class,
    causeErrorCode = 123,   // element name illustrative; matches a code inside the exception
    httpStatus = 400,
    errorCode = 639,
    description = "This might happen if you've been really bad")

Note how the first two elements of the ErrorResponse mapping are addressing the cause. When you see this exception containing this value…

The rest of it is specifying the result – use this status code, send an error body containing this error code, and here’s the documentation for what this error means to a client.


The interceptor that processes exceptions by looking for these annotations can also simplify some other things. For example, the interceptor can see which method the exception came from and use a convention to look up predefined error message text to add to the error response body before sending it to the client. Imagine a list of error message text keyed by method name and status code, such as

getThing.400=This is the meaningful description for this error from this method

The interceptor can automatically look up message text according to that convention and so we can separate it from the annotation, leaving the developer to just specify the bits that only change at development time such as the mapping from exception to status code. Message text is now separate and can change at its own pace.
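
A minimal sketch of that lookup convention (the class and method names here are illustrative, not the project's actual interceptor code):

```java
import java.util.Map;

// Illustrative: message text keyed by "methodName.httpStatus", as it might
// be loaded from a properties file like the example above.
class ErrorMessageLookup {
    private final Map<String, String> messages;

    ErrorMessageLookup(Map<String, String> messages) {
        this.messages = messages;
    }

    // Given the resource method name and the mapped HTTP status code,
    // return the client-facing message text, or empty if none is defined.
    String messageFor(String methodName, int httpStatus) {
        return messages.getOrDefault(methodName + "." + httpStatus, "");
    }
}
```

So a lookup for `getThing` and status 400 returns whatever text is keyed by `getThing.400`.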


It’s hard to lay it out well in a blog post, but being able to see and to define the exception mapping declarations at the point where those exceptions emerge from the service is so much nicer than having to root around for ExceptionMapper classes that have no direct connection:


@ErrorResponse(cause = MyException.class, httpStatus = 403, errorCode = 789,
               description = "Incompatible user status")
@ErrorResponse(cause = OtherException.class, httpStatus = 401, errorCode = 157,
               description = "Reading the manual failure")
Thing getThing(@PathParam("thingId") int thingId) {
    ...
}


Being able to automatically process those annotations when generating API documentation is not just a timesaver, it also avoids the inconsistencies that arise from manual updates. Adopting simple conventions for mapping from exception details to things like textual clarification messages also leads to cleaner code and to automation of both execution and documentation.



Posted in Java

Connascence and Transformation Priority Premise

Just heard an interesting talk by Kevin Rutherford at Agile Manchester about connascence and how it can be used as a guide during the refactor part of the TDD cycle (red-green-refactor).

“Connascence of value”, for example, means that the test code and the production code are coupled by shared knowledge of a value. In other words, there is duplication of a value, such as a price or a limit or a quantity, and that duplication needs to be resolved by refactoring, thereby breaking (or at least weakening) the coupling.
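
As a small invented example (mine, not from the talk): a test and the production code both hard-coding the same VAT rate exhibit connascence of value; extracting a shared named constant weakens that coupling.

```java
// Invented example of connascence of value: test and production code both
// know the rate 0.175. Naming the constant gives the tests one source of
// truth instead of a duplicated literal.
class Pricing {
    static final double VAT_RATE = 0.175;

    static double withVat(double net) {
        return net * (1 + VAT_RATE);
    }
}

// Before refactoring a test might assert withVat(100) == 117.5, duplicating
// the rate; afterwards it can derive the expectation from Pricing.VAT_RATE.
```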

There are 9 kinds of connascence, including connascence of value, of algorithm, of meaning, of execution order, and 5 more, and they can be ordered in terms of criticality. For example, connascence of value is more critical than connascence of meaning. Kevin suggests that if you learn to recognise the different types and tackle them in order of criticality, it can help you progress through the refactor stage of TDD without getting stuck or going in circles.

It immediately struck me that there is a similarity here with the transformation priority premise. TPP seems to apply more to the “green” phase of the TDD cycle. When you have added a new test (“red”) you have made the test suite more specific, and now you need to generalise the code to make it pass the more specific tests. Uncle Bob noticed that the transformations you apply to the code tend to occur in similar patterns at similar times, and more importantly in some order. By identifying and ordering those transformations, does the “green” stage become a little easier to do without getting stuck if you can recognise what transformation should come next?

For example, a test that expects the answer “50” can be satisfied by code that always replies “50”. Add another test, thus making your test suite more specific, and you have to generalise the code to pass both tests, perhaps by adding a conditional.

Add another test, increasing specificity of the test suite, and you have to further generalise the code to pass the tests, perhaps by transforming the conditional into a loop.
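
As a concrete sketch (invented for illustration), the three stages above might look like:

```java
// Stage 1 - one test, total(new int[]{50}) == 50, passes with a constant:
//     int total(int[] xs) { return 50; }
//
// Stage 2 - add total(new int[]{}) == 0; generalise with a conditional:
//     int total(int[] xs) { return xs.length == 0 ? 0 : 50; }
//
// Stage 3 - add total(new int[]{20, 30, 50}) == 100; the conditional is
// transformed into a loop, which still passes the earlier tests:
class Totaller {
    static int total(int[] xs) {
        int sum = 0;
        for (int x : xs) {
            sum += x;
        }
        return sum;
    }
}
```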

If you become familiar with the idea that very often a conditional expression may survive a few iterations of the TDD cycle until a more specific test forces some transformation, and that the transformation is usually to a loop, could it make you go faster and more easily through the TDD process? By recognising a pattern and knowing the step that usually follows, do you have to think less? Does it make it easier to build the “instinct” that TDDers gain from years of practice and experience? Could it give a head start to those who haven’t spent years internalising the techniques?

If connascence and TPP capture some common patterns that experienced programmers instinctively use when doing TDD (and just when coding), can they be refined and cleanly expressed in a way that creates a more insightful degree of guidance than can be gained from simply saying “factor out duplication” and “write the code to pass the test”?

Are connascence and TPP alternate expressions of the same concept, or are they distinct? Complementary? Can they be codified, even automated, into static analysis tools or refactoring shortcuts or code completion suggestions?


Further work needed….

Posted in Clean Test Code

Value-driven User Stories

There are two keys to successful agile software development.

Two, because successful agile software development is a union of successful agile business process and successful agile technical practice.

On the technical side, the key to success is the continuous pursuit of technical excellence.

As Uncle Bob puts it, “The only way to go fast is to go well”. He also points out that “the primary value of software is that it is soft”. To succeed in agile where technical practices are concerned means retaining the ability to change the code, to add new features, to evolve the design, to release new functionality, and to be able to do all of those things quickly.

Pursuing best practice in areas like TDD, BDD, continuous delivery, continuous inspection, simple design, evolutionary architecture, and of course good old-fashioned SOLID code is what stops code rotting and prevents the technical efforts from gradually getting slower and slower. Keeping up to date with those things as the software community learns and improves and evolves is vital too – as a software craftsman you’re never done learning.

Exploiting developments in technology, advances in frameworks, and the evolution of languages is also essential. Principles, patterns, practices, and tools, along with training, learning, and engaging with the wider software development community, all matter.

All of those details fall under that one headline – “The Continuous Pursuit of Technical Excellence”.

On the process side, the one headline that summarises the details is this: “Value Driven Development”.

(Those two headlines – value and technical excellence – amount to saying “The right thing, the right way”, which is hard to argue with.)

If you’re guided by value-driven development, you understand that it really means developing a product bit by bit, starting with the most important, most valuable piece, before adjusting and repeating with the next most valuable. You understand that “value” comes from a combination of things: potential revenue, potential penalties, technical risk, technical uncertainty, the actions of competitors, marketability, and so on. With that understanding you will avoid the classic mistakes of spending time and money on things that aren’t important, or of having technical people working on technical things that business analysts or expert users or product owners don’t really understand or care about, thus failing to engage the collaborative power that steers development in the right direction.

So if the value of the next chunk of development is so important, isn’t it odd that the most commonly used format for user stories puts the value last?

The Connextra format (As a… I want… So that) is a widely taught format for user stories, especially for teams new to agile. The idea is to help people avoid writing technical stories from a technical perspective, and to think not just about the required functionality (I want), but also defining and scoping the work better by thinking about the beneficiary (As a) and the value (so that).

The value is the last thing. I so often see teams who know what functionality they want to build next, but who waste time struggling to invent some user for the “As a” part, often coming to a standstill if the “user” is a system actor or some project stakeholder rather than an obvious human, mouse-clicking, screen-reading user. Then they tie themselves in knots trying to phrase the “I want” from that user’s perspective. By the time they get to the “so that” they’ve lost interest, given up; having already invested so much time in the first two lines, the “value” is now locked in, unquestioned. We’ve made all this effort, so we’re obviously going to do it…..

This is why I prefer a value-driven format. State the value first. There’s an equally simple format for this, that can easily be taught to and understood by inexperienced teams. Start with: “In Order To”.

When the first thing is “In order to”, then you can’t go anywhere until you’ve understood why you need to do this thing.

The simple act of discussing “In order to” in a story writing workshop, or in a backlog grooming session, or in a release or sprint planning session, so often causes people to discuss the real reason, the real value, sometimes to question it, sometimes to confidently back it.

That really helps prioritisation, it really helps with deciding what should be in and out of scope for that story, and it really helps rule out things that aren’t yet needed, that aren’t actually being driven into existence to create some bit of “value”. It really helps you avoid doing work that isn’t yet justified, that can be deferred, and it gets you to focus on the things that help you make progress, get the right feedback, answer the right questions, and move you closer to completion.

If it’s so important to identify why you’re doing something, make that the first thing on the story. Continuing logically, identifying the “user” for the story becomes the second thing to tackle, while the easy bit, the bit you already know, the “I want”, comes last:

In order to…

As a…

I want…




Posted in Agile, User Stories

Outlook macro to parse emails into an Excel sheet

Option Explicit

Sub Test()
    Dim myFolder As MAPIFolder
    Dim Item As Variant         ' MailItem
    Dim xlApp As Object         ' Excel.Application
    Dim xlWB As Object          ' Excel.Workbook
    Dim xlSheet As Object       ' Excel.Worksheet
    Dim xlRow As Long
    Dim Lines() As String
    Dim aLine As String
    Dim FileName As String
    Dim I As Long
    FileName = "C:\Data\inbox.xls"

    ' Try to get a running instance of Excel, or start one
    On Error Resume Next
    Set xlApp = GetObject(, "Excel.Application")
    If xlApp Is Nothing Then
        Set xlApp = CreateObject("Excel.Application")
        If xlApp Is Nothing Then
            MsgBox "Excel is not accessible"
            Exit Sub
        End If
    End If
    On Error GoTo 0

    ' Add a new workbook
    Set xlWB = xlApp.Workbooks.Add
    Set xlSheet = xlWB.ActiveSheet

    ' Access the Outlook inbox folder
    Set myFolder = GetNamespace("MAPI").GetDefaultFolder(olFolderInbox)

    ' Visit all mails, copying each matching mail into one row of the sheet
    For Each Item In myFolder.Items
        If TypeOf Item Is MailItem Then
            If Item.Subject Like "*keyword*" Then
                xlRow = xlRow + 1
                Lines = Split(Item.Body, ",")
                For I = 0 To UBound(Lines)
                    aLine = Trim(Lines(I))
                    aLine = Replace(aLine, vbCr, "")
                    aLine = Replace(aLine, vbLf, "")
                    xlSheet.Cells(xlRow, I + 1) = aLine
                Next I
            End If
        End If
    Next Item

    With xlApp
        With xlWB
            .SaveAs FileName:=FileName
        End With
        .Quit       ' Close our copy of Excel
    End With

    Set xlApp = Nothing         ' Clear reference to Excel

End Sub

Posted in Uncategorized

Ascending Post Order for Category Pages in WordPress 2012

Took me a while to get this working. Requires a self-hosted WordPress installation; I don’t think this sort of thing is possible if you’re using a free WordPress.com blog.

  1. Edit the main WordPress query to set ascending order on category pages
    1. Go to Appearance > Editor and open functions.php (“Theme Functions”)
    2. Add the following code after the <?php
    3. Click Update file
  2. View a category page – your posts should now be in ascending order
  3. Check the home page – the posts should still be in descending order
  4. On the category page you probably have an “older posts” button at the bottom of the page, which should really now read “newer posts”. An easy way to avoid this problem is to use infinite scrolling
    1. Go to Settings > Reading
    2. Check “scroll infinitely”

Here’s the code you need


add_action( 'pre_get_posts', 'my_query' );

function my_query( $wp_query ) {
    if ( is_admin() || ! $wp_query->is_main_query() ) {
        return;
    }
    if ( is_category() ) {
        $wp_query->set( 'order', 'ASC' );
    }
}

Posted in Uncategorized