🔗 This article is a cross post from the Scott Logic blog

Last weekend I travelled to Bradford for the second DDDNorth - a community-organised developer event which featured some excellent speakers. DDD events are held all over the country throughout the year and are supported by local user groups. I highly recommend actively seeking out and attending these events; they are a great way of picking up new ideas and insights into technologies you don't normally get a chance to play around with. I'd now like to report on the five talks that I attended through the course of the day.

DDDNorth logo

Lions and tigers and hackers! Oh my! - Phil Winstanley

In my opinion this was a great opening talk from Phil. My only complaint was that this was not a keynote! Phil's talk, bursting with anecdotes from a wide range of domains, demonstrated the urgency of the problem. We need to get better at security. To do that we need to start thinking seriously about security from day one.

We're moving more of our lives into the virtual - our money, our data and even our identities. This is a honey pot of unimaginable size and unknowable consequences, and it is increasingly targeted in preference to modern, more secure operating systems. Not only that, we are connecting disparate networks together and accessing content with mobile devices, which introduces new security threats. The threat comes from hackers, organised criminals, rival companies and even governments.

Phil mentioned Microsoft's Security Development Lifecycle process, designed to be a robust complement to standard models of software development. This is now deployed in all areas of Microsoft, and Phil provided some stories to show how it has benefitted the company. As developers, we should appreciate that security is not a product; it is a process and a policy. Security is an essential, everyday task. We should, at all stages in the software development lifecycle, analyse our products from a security point of view by performing threat modelling, discussing privacy, removing deprecated code and devising incident response plans.

Test all the things or maybe not! - Will Charles

As promised, this was a pragmatic talk about TDD. Typically, code examples demonstrating TDD are so simple and detached from reality that they do not scale well for use in production code. Adding the extra complexity back in - with mock object frameworks, for example - can often hinder test quality. Will's talk focused on the tips and tricks he has found helpful when applying the principles of test-first development to a complex system.

Fundamentally, we add tests to demonstrate the correctness of our code, and that correctness has demonstrable business value. This has led people to advocate 100% code coverage and to treat coverage as a quality metric. Put simply, code coverage is a guide and does not necessarily correlate with value. Focus instead on testing the areas of code which deliver business 'value' - this is going to be code that is complex, likely to change and likely to contain bugs.

Make your tests work for you. Meaningful test names provide an abstract overview of the unit of value you are testing. Building a domain-specific language by encapsulating the arrange, act and assert stages in meaningful method names goes a long way towards making your test code 'production-quality'.
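To make that concrete, here is a minimal sketch of what such a test DSL might look like; the shopping-basket domain and all of the names are my own invention, not Will's:

```typescript
// A minimal test-DSL sketch: arrange and assert steps hidden behind
// method names drawn from the domain. No test framework is assumed.

class Basket {
  private items: { name: string; price: number }[] = [];
  add(name: string, price: number): void {
    this.items.push({ name, price });
  }
  total(): number {
    return this.items.reduce((sum, item) => sum + item.price, 0);
  }
}

// Arrange: setup encapsulated behind a name that reads like the domain.
function aBasketContaining(...prices: number[]): Basket {
  const basket = new Basket();
  prices.forEach((price, i) => basket.add(`item-${i}`, price));
  return basket;
}

// Assert: a named assertion rather than a bare if/expect.
function assertTotalIs(basket: Basket, expected: number): void {
  if (basket.total() !== expected) {
    throw new Error(`expected total ${expected}, got ${basket.total()}`);
  }
}

// The test now reads as a sentence describing a unit of business value.
function totalOfABasketIsTheSumOfItsItemPrices(): void {
  const basket = aBasketContaining(1.5, 2.25); // arrange
  assertTotalIs(basket, 3.75);                 // act + assert
}

totalOfABasketIsTheSumOfItsItemPrices();
console.log("totalOfABasketIsTheSumOfItsItemPrices passed");
```

The pay-off is that when a test fails, the name alone tells you which piece of business value is broken.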

Finally, Will revisited Michael Feathers' concept of sprouting as a means of introducing unit tests into legacy code. He has found that using this technique identifies potential points of change and areas of responsibility in the code. These are areas ripe for refactoring, and once you have tests in place they can be used as a means of evolving the legacy code, as the sketch below shows.
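As a rough sketch of the technique (the legacy function and the tax-calculation example are my own invention, not code from the talk):

```typescript
// Sprouting (after Michael Feathers): rather than editing a tangled legacy
// function directly, the new behaviour is 'sprouted' into a fresh function
// that can be unit tested in isolation.

// The legacy code: long, untested and risky to modify.
function processOrders(orders: { total: number; country: string }[]): number {
  let grandTotal = 0;
  for (const order of orders) {
    // ...imagine many lines of tangled legacy logic here...
    grandTotal += applyTax(order.total, order.country); // the sprout call
  }
  return grandTotal;
}

// The sprout: new behaviour extracted into a small, testable function.
// Only this needs tests; the legacy body is left untouched for now.
function applyTax(total: number, country: string): number {
  return country === "UK" ? total * 1.2 : total;
}

// The sprout is tested in isolation, without standing up the legacy code.
if (applyTax(100, "UK") !== 120) throw new Error("UK orders should add 20% tax");
if (applyTax(100, "FR") !== 100) throw new Error("non-UK orders are unchanged");
console.log("applyTax tests passed:", processOrders([{ total: 100, country: "UK" }]));
```

Overall, this was the most practical talk of the day in my opinion, and it led quite nicely into the next talk...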

BDD - Look ma, no frameworks! - Gemma Cameron

Gemma's talk focused on a very raw form of behaviour-driven development, which rang very true and was widely appreciated by the session attendees. Behaviour-driven development centres on a shared understanding reached through the discussion of examples. It is a marriage of test-driven development and domain-driven design, with the focus not on a particular framework but on the ongoing conversation between developers, testers, business analysts and product owners.

Gherkin syntax and tools such as FitNesse and SpecFlow are often heralded as the pinnacle of requirements capture and developer-business coordination. Very often they're just a starting point to kick off conversations. Gemma argued, like Will, that we should aim to use a domain-specific language when capturing our requirements, in an effort to eradicate ambiguity and capture requirements in a pure, readable and (potentially) executable form.
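In that spirit, here is a minimal sketch of a framework-free, executable scenario; the banking domain and names are my own illustration, not Gemma's:

```typescript
// Framework-free BDD: Given/When/Then captured as plain functions named in
// the ubiquitous language of the domain. No Gherkin, no SpecFlow.

class Account {
  constructor(public balance: number) {}
  withdraw(amount: number): void {
    if (amount > this.balance) throw new Error("insufficient funds");
    this.balance -= amount;
  }
}

function givenAnAccountWithBalance(balance: number): Account {
  return new Account(balance);
}

function whenTheCustomerWithdraws(account: Account, amount: number): void {
  account.withdraw(amount);
}

function thenTheBalanceShouldBe(account: Account, expected: number): void {
  if (account.balance !== expected) {
    throw new Error(`expected balance ${expected}, got ${account.balance}`);
  }
}

// The scenario reads much like the conversation that produced it.
const account = givenAnAccountWithBalance(100);
whenTheCustomerWithdraws(account, 30);
thenTheBalanceShouldBe(account, 70);
console.log("scenario passed: withdrawing reduces the balance");
```

The requirement stays readable to the business side, but it is plain code - there is no framework to become a slave to.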

The take-home message for me was that conversations should be the primary focus. We developers should work at getting better at this - we will be respected more for it. You may choose to use tools and frameworks to capture the requirements and examples which result from these conversations, but we should never become slaves to them. Systems are complex because problems are complex, and there is value in solving them. The nature of this complexity is such that the solution cannot be skimmed from the surface; we need to delve deep to truly understand and manage it. Doing this effectively requires various areas of expertise and is a collective experience. From this process we derive good-quality software which meets the expectations of the product owners.

JavaScript sucks and it doesn't matter - Rob Ashton

JavaScript should come with a warning - it will make you lose hair, and the hair you don't lose will turn grey. Many of the humorous examples demonstrated in the infamous WAT? video are largely to do with JavaScript's approach to scoping and type coercion. Rob argued that this makes JavaScript a fairly easy target for naysayers, but that these issues don't really matter. Tools such as JSLint (and its less authoritarian cousin JSHint) will point out where you have gone wrong. There has been a large developer focus on integrating these tools into development environments and continuous integration builds - as an example, my colleague Luke Page has released a Visual Studio plugin for JSLint. By doing so, developers get immediate feedback on potential errors and can take action to correct them.
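As a small illustration (my own example, not one of Rob's), here is the classic function-scoping trap that such linters are good at flagging:

```typescript
// `var` is function-scoped, not block-scoped, so every callback created in
// the loop below closes over the *same* variable.

function makeGreeters(): Array<() => string> {
  const greeters: Array<() => string> = [];
  for (var i = 0; i < 3; i++) {
    greeters.push(() => `greeter ${i}`);
  }
  return greeters;
}

// Prints "greeter 3" three times: each closure sees the shared `i` as it
// was left *after* the loop finished, not 0, 1 and 2 as probably intended.
makeGreeters().forEach(greet => console.log(greet()));

// A linter flags the `var` inside the block; the classic fix is to capture
// the current value of `i` in a new function scope of its own.
```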

An interesting tangential point was made about TypeScript. As a concept it has been well received - but it has its limits. Greenfield development may benefit more from TypeScript than a project which essentially 'glues' multiple frameworks together. This is certainly worth some detailed research and is great fodder for a blog post or even a future DDD talk! (Hint, hint, etc.)
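For what it's worth, here is a tiny sketch (my own) of the kind of compile-time safety net that makes TypeScript attractive for greenfield code:

```typescript
// Structural type annotations turn silent JavaScript mistakes into
// compile-time errors.

interface Order {
  id: number;
  total: number;
}

function formatOrder(order: Order): string {
  return `Order ${order.id}: £${order.total.toFixed(2)}`;
}

console.log(formatOrder({ id: 42, total: 9.99 })); // Order 42: £9.99

// Each of these is rejected by the compiler rather than failing at runtime:
// formatOrder({ id: "42", total: 9.99 }); // error: id must be a number
// formatOrder({ id: 42 });                // error: total is missing
```

In a 'glue' project, by contrast, most of the interesting types live inside third-party frameworks, so there is less of your own code for the compiler to check.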

Testing is possible in JavaScript without the need for browser automation! Tools such as Zombie and PhantomJS simulate a browser and integrate extremely well into a continuous integration environment. Rob's talks are always entertaining but remain insightful and practical. From this talk I have learned a lot more about the JavaScript landscape, and to move as quickly as possible to the fifth stage of JavaScript denial: Acceptance. JavaScript's here to stay, so let's make the best of it.
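As a rough sketch of what a headless smoke test looks like under PhantomJS - the URL and expected title are invented, though `require('webpage')`, `page.open`, `page.evaluate` and `phantom.exit` are the genuine PhantomJS API:

```typescript
// PhantomJS provides `require('webpage')` and the global `phantom` at
// runtime, so we declare them here for the compiler's benefit.
declare function require(module: string): any;
declare const phantom: { exit(code?: number): void };

const page = require("webpage").create();

// Load the page and fail the build (non-zero exit code) if it looks broken.
page.open("http://localhost:8080/", (status: string) => {
  if (status !== "success") {
    console.log("page failed to load");
    phantom.exit(1);
    return;
  }
  // Runs inside the loaded page, just as it would in a real browser.
  const title = page.evaluate(() => document.title);
  console.log("page title: " + title);
  phantom.exit(title === "My App" ? 0 : 1);
});
```

Because the result is reported through the process exit code, a CI server can treat a failing page exactly like a failing unit test.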

Event-Driven Architectures - Ian Cooper

Finally, Ian Cooper delivered a comprehensive, accessible talk on an entirely new concept to me: event-driven, service-oriented architecture. At an extremely distilled level, this is where an application is sliced into logical parts and a service is aligned with each individual part. These logical divisions correspond to some piece of business capability and are exposed to the outside world via a data contract. This boundary must be explicit and well defined. JSON and XML are commonly used as a means of describing the data contract.
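To make that concrete (the domain, event shape and field names below are my own invention, not Ian's examples), a data contract at a service boundary might look something like this:

```typescript
// The only thing shared between services is this shape; the types and
// storage behind it remain private to the publishing service.

interface OrderAcceptedEvent {
  eventType: "OrderAccepted";
  orderId: string;
  amount: number;     // in pence, to avoid floating-point money errors
  currency: string;   // ISO 4217 code, e.g. "GBP"
  occurredAt: string; // ISO 8601 timestamp
}

// On the wire this is plain JSON; consumers depend on the schema alone,
// never on the publisher's internal types.
const event: OrderAcceptedEvent = {
  eventType: "OrderAccepted",
  orderId: "ord-123",
  amount: 4999,
  currency: "GBP",
  occurredAt: new Date().toISOString(),
};
console.log(JSON.stringify(event));
```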

Inter-service communication is achieved through events, and Ian took us on a whistle-stop tour of the various techniques that you can use to implement event dispatching and processing. He then progressed to more complex aspects, such as how to maintain consistency between services and how to handle erroneous situations through the use of sagas, orchestration services and more event processing. Ian also talked in some depth about how best to deploy caching and catalogue data as a means of improving the overall efficiency of an event-driven system.
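As a much-simplified sketch of the publish/subscribe shape of that communication (an in-process bus with invented names; a real system would put a message broker or queue in the middle):

```typescript
// Services communicate through events: the publisher knows nothing about
// its subscribers, only the event contract.

type Handler<T> = (event: T) => void;

class EventBus {
  private handlers = new Map<string, Handler<any>[]>();

  subscribe<T>(eventType: string, handler: Handler<T>): void {
    const existing = this.handlers.get(eventType) ?? [];
    this.handlers.set(eventType, [...existing, handler]);
  }

  publish<T>(eventType: string, event: T): void {
    (this.handlers.get(eventType) ?? []).forEach(handle => handle(event));
  }
}

// Two 'services' coordinate without ever referencing each other's types.
const bus = new EventBus();
bus.subscribe<{ orderId: string }>("OrderAccepted", e =>
  console.log(`shipping service: preparing dispatch for ${e.orderId}`)
);
bus.publish("OrderAccepted", { orderId: "ord-123" });
```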

For someone new to this kind of architecture, my overall impression is that this is an extreme form of loose coupling. Only schemas and data contracts are exposed at boundaries - not types. Everything inside a service is entirely encapsulated, right down to the data storage mechanisms (only one service has write access to a given logical area of data storage). This naturally gives us the means to perform unit and integration testing. However, there is certainly a large degree of complexity in the interoperation between services, and it takes extensive coordination to manage it. I need to make a point of learning more about this architecture!