These days, when I start writing code for a new project, I feel a sense of urgency to create an end-to-end development workflow as soon as possible. I love the phrase “walking skeleton”, so think of this end-to-end workflow as a walking skeleton of a continuous delivery pipeline: you take the smallest possible version of your application and set up the infrastructure necessary to continuously build it, run analysis, test it and deploy it to a production-like environment.

Please don’t confuse this idea with a waterfall concept. Your build, test and deployment scripts don’t have to be future-proof. They don’t have to cover all the possible scenarios that you might need in the future. Far from it. The idea is to have (just) enough capability in the pipeline to be able to cover a (vertical) slice of your application. It should have enough “meat” to get you going.

Also note that the definition of “vertical” has grown larger with the DevOps way of doing things. You should probably not stop at automated deployments either. Can you also monitor this first small version of your application already, both in terms of usage and errors? Do you already have a central place to track bugs, features and collaborate with other stakeholders? So yes, the idea is to really get your whole cycle up and running as soon as possible. But I digress.

.travis.yml file

YAML files have become a standard way of configuring continuous integration & delivery pipelines. Travis CI is no different.

The YAML file below tells Travis to:

  • (lines 4-5) Install Node 7.7.4.
  • (lines 7-8) Make Docker available.
  • (lines 10-20) Install and configure the latest stable version of Chrome.
  • (line 23) Download all JavaScript packages that the Angular app uses.
  • (line 26) Run all Angular component tests; do not watch files, quit immediately after.
  • (line 27) Run all end-to-end Protractor tests. Think of this like Selenium but a bit more specialised for Angular.

  • (lines 29–30) If the build is successful and the branch is master, prepare for packaging and deployment.
  • (line 31) Build the Angular app using the CLI. This will transpile all .ts files and create .js bundles using webpack. It eventually creates a folder called “dist” which includes all the release artefacts that are ready for deployment.
  • (line 32) Build a Docker image with the specified name, based on the specified Dockerfile. You can see the Dockerfile and the nginx configuration if you’re interested.
  • (line 33) Tag the image that’s just been created with the SHA of the latest git commit. This way any Docker image can be traced back to its exact source version.
  • (lines 34-35) Log in to Docker Hub and push the image (using secret environment variables which I’d set earlier on Travis). You can see the list of images that I’ve already pushed on Docker Hub.
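The .travis.yml itself was embedded in the original post and isn’t reproduced here. As a rough sketch of its shape (the exact line numbers, image name and build commands below are assumptions, not the original file):

```yaml
# Sketch only: structure assumed from the bullets above.
language: node_js
node_js:
  - "7.7.4"

services:
  - docker

before_install:
  # install and configure the latest stable Chrome (details omitted)

install:
  - npm install

script:
  - ng test --watch=false
  - ng e2e

after_success:
  - if [ "$TRAVIS_BRANCH" = "master" ]; then
      ng build --prod &&
      docker build -t roost-angular -f Dockerfile . &&
      docker tag roost-angular roost-angular:$(git rev-parse HEAD) &&
      docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD" &&
      docker push roost-angular;
    fi
```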

At the time of this writing there were 20 builds in the history.

So the last step above pushes the brand-new Docker image to Docker Hub, but how does it get deployed on Azure?

Continuous Deployment of Docker Images to Azure

There’s one more step which is kinda invisible, and that’s the deployment of the new Docker image to an Azure Web App as a container. This is achieved through a webhook that is set up behind the scenes in Docker Hub and the Azure Portal. There are a few commands that need to be executed with the Azure CLI:

Enabling container continuous deployment feature:

az webapp deployment container config -n Roost-Angular -g Roost -e true

Obtaining the WebHook URL:

az webapp deployment container show-cd-url -n roost-angular -g roost

Pasting the WebHook URL on Docker Hub:

Docker Hub WebHook

Once this is done, whenever a new image is pushed to Docker Hub, this webhook is automatically invoked. I’ve noticed that manually making a GET call to this URL also kicks off a new deployment. Straightforward.

And on the Azure Portal there’s a designated “Docker Container” tab under the Azure Web App for this purpose. Notice the name of the image and the webhook URL (which is concealed by default). One Docker convention to keep in mind: when a :tag is not specified, it means :latest.

Docker Container Tab Azure

More information

Thanks for reading.



These days everybody is talking about DevOps and CI pipelines. Unicorn tech companies are building DevOps tools and CI pipelines as a service, bragging about their user-friendly design and how easy it is to set up these pipelines on their branded platforms. This is all nice and useful; however, I can’t help but see another case of silver-bullet syndrome.

Anyone who’s been building non-trivial software systems long enough knows that software development is hard. Especially with sizeable teams that are larger than only a few devs, it’s very easy to turn the system into a big ball of mud and get the codebase into a state that is difficult to understand and to test reliably in an automated fashion.

If you don’t have reliable automated tests with realistic test cases and good coverage, your CI pipeline is just a lie. So please stop treating your CI pipelines (that probably took you 10 minutes to create these days) as silver bullets.

All that said, if you have actually done the REAL work, paid attention to your codebase and your architecture, and treated your tests as highly as your production code, then you can now go ahead and create your CI pipeline in 10 minutes and it will benefit you tremendously.

This is a follow-up to my previous blog posts where I did the real work and made my codebase clean and testable. You can find them here in two parts: Part 1 and Part 2. Having said that, they are not prerequisites for this post by any means. If you’re only interested in the continuous delivery pipeline itself, just keep reading; you should be perfectly fine.

Why CircleCI

I know it won’t sound very exciting, but there’s actually no particular reason. My two biggest requirements were FREE and SaaS. I obviously don’t want to pay a monthly fee for the CI pipeline of an open-source hobby project, and I just want to use it SaaS-style and not own anything. To name a few alternatives to CircleCI which are free for non-commercial open-source projects and offered as SaaS: there are VSTS, AppVeyor and Travis CI. I’ve used the first two; they’re both very good offerings, and I’d like to carve out some time to try Travis in the future if I can.

Configuring the Pipeline with CircleCI

There’s in-depth documentation that describes the ins and outs of CircleCI and how to configure your pipeline for various needs and scenarios. It starts with:

CircleCI automatically infers settings from your code, so it’s possible you won’t need to add any custom configuration.

If you do need to tweak settings, you can create a circle.yml in your project’s root directory. If this file exists, CircleCI will read it each time it runs a build.

The automatic inference of settings does not completely work for my codebase. There are certain build steps that I want to override with my own needs, like transpiling TypeScript and running tests against a DynamoDB emulator, so I need to create a circle.yml file in the project’s root directory.

Primary Steps of a CircleCI Pipeline

When we talk about a CI pipeline, we’re actually talking about a series of steps that are executed one after the other in a deterministic and repeatable manner.

The circle.yml file has seven primary sections. Each section represents a phase of the Build-Test-Deploy process:

machine: adjust the behavior of the virtual machine (VM)

checkout: checkout and clone code from a repository

dependencies: install your project’s language-specific dependencies

database: prepare a database for tests

compile: compile your project

test: run your tests

deployment: deploy your code to your web servers

For each of these sections CircleCI automatically infers commands based on the characteristics of the codebase.

You can specify when to run custom commands relative to CircleCI’s inferred commands using three special keys:

pre: run before inferred commands

override: run instead of inferred commands

post: run after inferred commands
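Putting the sections and special keys together, a circle.yml has this general shape (the contents here are placeholders, not my actual configuration):

```yaml
machine:
  # adjust language versions, environment variables, etc.

dependencies:
  pre:
    - echo "runs before CircleCI's inferred dependency commands"
  override:
    - echo "runs instead of them"
  post:
    - echo "runs after them"

test:
  override:
    - echo "your own test command"

deployment:
  production:
    branch: master
    commands:
      - echo "your deploy command"
```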

CircleCI configuration of Hackathon Planner

I want to create the CI pipeline for Hackathon Planner’s REST API. This API is built with Node.js, ExpressJS, TypeScript, DynamoDB and deployed to AWS Elastic Beanstalk.


My REST API runs on Node 5.11, so I want to use the same version of Node to run my tests.

I also need the Java Development Kit (JDK) to run the DynamoDB emulator. My unit/integration tests will run against this emulated DynamoDB instance.

After Node and the JDK are installed on the machine, I need to run three commands to download, extract and run the DynamoDB emulator (under the “post” section):

machine:
  node:
    version: 5.11
  java:
    version: openjdk7
  post:
    - curl -k -L -o dynamodb-local.tgz
    - tar -xzf dynamodb-local.tgz
    - "java -Xms1024m -Xmx1024m -Djava.library.path=~/DynamoDBLocal_lib -jar ~/DynamoDBLocal.jar --port 8000":
        background: true


In this part of the build step, I’m taking over the inferred behavior of CircleCI using the “override” keyword.

compile:
  override:
    - tsc -p .

“tsc” is the TypeScript compiler and the “-p” flag tells it to compile the project given a valid configuration file (tsconfig.json). In my codebase, this file is located in the root directory.

The “tsconfig.json” file below tells the compiler to transpile all TypeScript files in the folder (recursively) except the contents of the “node_modules” folder. For more info on TypeScript compiler options, check out this documentation.

// tsconfig.json
{
  "compilerOptions": {
    "module": "commonjs",
    "sourceMap": true,
    "target": "es6",
    "moduleResolution": "node",
    "allowJs": false
  },
  "exclude": [
    "node_modules"
  ]
}


Now that Node.js and DynamoDB are installed on the build machine (actually a container) and the application is transpiled into ES6 JavaScript that Node can directly understand, it’s time to run all the tests.

Once again, I’m taking over the inferred behavior of CircleCI using the “override” keyword and call into a predefined npm script.

test:
  override:
    - npm test

When the “npm test” command runs, behind the scenes npm looks into the “scripts” section of the “package.json” file to figure out what to execute.

// package.json
"scripts": {
  "start": "node ./bin/www",
  "test": "node ./tests/runUnitTests.js"
}

Via this lookup, it’s going to find and execute the “runUnitTests.js” file, which loads the jasmine test framework and executes the tests via another configuration file — “jasmine.json”.

// runUnitTests.js
var Jasmine = require('jasmine');
var jas = new Jasmine();
// load jasmine.json and run the suite (config path assumed):
jas.loadConfigFile('./tests/jasmine.json');
jas.execute();

The “jasmine.json” file specifies where the tests and test helpers are and adjusts certain test behaviour. For example, here I specified that I don’t want tests to stop running on failures, because I’d like to get complete feedback. I also specified that tests should run in random order. All tests in this project (unit and integration) are isolated from each other, so I should be able to run them in any order.

// jasmine.json
{
  "spec_dir": "tests/unit",
  "spec_files": [ ... ],
  "helpers": [ ... ],
  "stopSpecOnExpectationFailure": false,
  "random": true
}

Only when all tests pass will the build continue to the next step and deploy the application to production.

From CI to CD — Taking the next step

At this point I’ve created a complete CI pipeline where, at each commit to a branch, a build kicks in, compiles the codebase and runs all the tests. Now all developers can integrate their work as early as possible and get continuous feedback from the build server.

However, if you and your team have invested enough in the CI pipeline that you trust the depth and breadth of the feedback you receive from it, you can take the next step: continuous deployment.

Of course, nothing forces you to deploy directly to production; automatically deploying to a staging environment for a final round of QA is definitely the more common approach.

dependencies & deployment

Below are the build steps where I install AWS Elastic Beanstalk CLI with its dependencies and call the deploy command for the right branch and profile.

dependencies:
  pre:
    - sudo apt-get install python-dev
    - sudo pip install 'awsebcli==3.7.4' --force-reinstall

deployment:
  production:
    branch: master
    commands:
      - eb deploy --profile default

It turns out that the “python-dev” package is a prerequisite for the AWS EB CLI. Notice that I only deploy the master branch to production; none of the other branch builds will run this deployment step.

If you think about what needs to happen behind the scenes for an actual deployment, this configuration seems too little. For example, how does the EB CLI know where in AWS to deploy the application? And how does CircleCI authenticate with AWS on my behalf to make this deployment? Indeed, this is not the full story. There are two more pieces of the puzzle that I want to show you quickly.

By convention, “.elasticbeanstalk/config.yml” is the path the EB CLI looks at during deployment. One of the key things in this file is the first section, where the environment name is defined. “hackathonplanner” is an environment that is uniquely defined under my AWS account, and this is how the EB CLI knows where to deploy the package.

branch-defaults:
  master:
    environment: hackathonplanner
global:
  application_name: Hackathon-Planner
  default_ec2_keyname: null
  default_platform: 64bit Amazon Linux 2016.03 v2.1.3 running Node.js
  default_region: eu-central-1
  profile: eb-cli
  sc: git

The answer to the second question (how does CircleCI authenticate on my behalf to make this deployment happen?) is below:

CircleCI AWS Keys Page

As you can see, CircleCI has built-in, first-class support for AWS. Just as recommended on the configuration page above, I created a unique IAM user in AWS and gave it deployment permissions to the “hackathonplanner” environment. Then I copied the “Access Key ID” and “Secret Access Key” over from AWS to CircleCI. Now CircleCI can deploy to my environment on my behalf. It’s that easy.

Build results

At the time of this writing, 72 builds have been queued and run, both green and red.

If you look at one of the successful builds (let’s take the last one), you can see each and every step that was executed, along with their respective logs.

After expanding the test step and reading through the logs, I noticed that for some reason my tests cannot tear down DynamoDB tables anymore. This doesn’t fail any test, since each test creates new tables with random names, so the build is still green and everything looks fine. I need to dig in and understand what has changed in the meantime, while I wasn’t paying attention.

This reminded me of a recent talk by Ian Cooper where, 17 minutes in, he asks the audience: if they deployed a piece of software and left it alone for a year, what would they expect when they came back? Would it still be up and running correctly or not? The point is, things keep changing around our software (especially in the cloud), and when nothing is done, our software decays by itself.

Anyway, after this little segue into a different subject, I think it’s time to stop here. Thanks for reading, and if you have any comments or questions you can write them below or tweet me at @hakant!



In the first part of this blog post, I shared some fundamental ideas that form the background for what I want to achieve here. So if you haven’t read it yet, I recommend checking it out.

A few months back, I decided to convert my hobby project — Hackathon Planner API from pure Javascript to TypeScript and wrote a blog post about it. This time I sat down to build an Automated Test Suite and a Continuous Delivery (CD) pipeline around it.

Effective automated testing is a natural prerequisite for Continuous Integration & Delivery. How can it not be? If you’re not getting quick and broad feedback from your software, how can you deliver frequently? So having a testable architecture and an effective test strategy is crucial. Let’s dive in.

Right Architecture for Subcutaneous Testing

I’m a huge fan of subcutaneous testing. This type of testing starts right under the UI layer with a large scope (preferably spanning all the way down to the data store) and can have a great return on investment. On Kent Beck’s feedback chart it would score high and to the right: fast and broad feedback.

Kent Beck's Feedback Chart

So now that we want to target our automated tests below the delivery mechanism layer (UI, Network etc.), this is where an important architectural decision comes into question. What is the main boundary of my application? Where does my significant business logic start? Plus, how can I make my application boundary very visible and clear to all developers? If you follow down this path of thinking, one nice place to end up is a combination of command and mediator patterns.

The combination of these patterns is about sending commands through a very narrow facade, where every command has clean input and output “data transfer objects”: POCOs, POJOs, whatever you like to call them depending on your stack of choice. They’re simply objects that carry data and no behavior.

To be able to use this pattern in my REST API, I’ve created a simple module in TypeScript that allows me to execute commands and optionally get results from them. It’s here: TypeScriptCommandPattern.

In the example below, you can see how a test request is sent down to the executor and a result is returned.

And here is what a “Hello World” style command handler looks like. Notice that it has clean request and response definitions. At the end of the implementation, we make sure the handler is registered and mapped to the request. In this structure, handlers are singleton objects; they can later be resolved by the type of the request and then executed.
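The gists are embedded in the original post; as a stand-alone sketch of the idea (all names here are mine, not the project’s), a registry that resolves a singleton handler by the request’s type might look like this in TypeScript:

```typescript
// A minimal command executor: handlers are registered per request type
// and resolved by the request object's constructor.
class CommandExecutor {
  private handlers = new Map<Function, (req: any) => any>();

  register<TReq, TRes>(reqType: new (...args: any[]) => TReq,
                       handler: (req: TReq) => TRes): void {
    this.handlers.set(reqType, handler);
  }

  execute<TReq, TRes>(request: TReq): TRes {
    const handler = this.handlers.get((request as any).constructor);
    if (!handler) throw new Error("No handler registered for this request type");
    return handler(request);
  }
}

// A "Hello World" style command: clean input/output DTOs, data only, no behavior.
class HelloRequest { constructor(public name: string) {} }
class HelloResponse { constructor(public message: string) {} }

// Register a singleton handler against the request type, then execute.
const executor = new CommandExecutor();
executor.register(HelloRequest,
  (req: HelloRequest) => new HelloResponse(`Hello ${req.name}`));

const response = executor.execute<HelloRequest, HelloResponse>(
  new HelloRequest("World"));
console.log(response.message); // prints "Hello World"
```

The design choice worth noting is the narrowness of the facade: callers only ever see `execute(request)`, so the registry can grow without the call sites changing.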

Using this pattern in a real world Node.js REST API

One great architectural benefit of this pattern is that it allows you to nicely separate an application into many independent scenarios — vertical slices. Take a look at these scenarios from Hackathon Planner:

Kent Beck's Feedback Chart

Each of these TypeScript modules is a vertical slice. They’re the full story. Yes, they make use of other external modules when necessary, but when developers read through these scenarios, they get the full picture. And if code sharing between scenarios is done wisely (with the correct abstractions), then a change in one scenario is contained and does not necessarily affect the others.

Let’s take a look at the implementation of one of these scenarios and extract some key properties. Let’s pick the scenario in “GetIdeas.ts”:

  • Line 3–4: Imports the base async command handling structure and also the container to register itself at the end of the file.

  • Line 6–9: External modules that this handler uses.

  • Line 11–12: A module that is used by multiple scenarios. Be careful, I say “used”, not “reused”. Remember the “fallacy of reuse”. This module (IdeaPrePostProcessor) is used to massage and sanitize the Idea entities before they’re sent to the user interface and before they’re written to the database. Multiple scenarios use this module the exact same way and for the same need. There are no slight variations per scenario; that’s why it’s the right abstraction and that’s why it’s “use”, not “reuse”.

  • Line 14–16: Since our handler is an async one (because it accesses the database), it implements AsyncCommandHandler&lt;TRequest, TResponse&gt; and its HandleAsync method, which returns a Promise of a GetIdeasResponse object.

  • Line 17–38: The complete vertical implementation of the handler. It basically scans the database for all the ideas, sorts them based on an algorithm, encapsulates them in the response object and returns it. Notice that at line 25 the “await” keyword is used. It’s a simple and very readable way of expressing asynchronous code, and it exists in recent versions of TypeScript as well as in ES2017 JavaScript.

  • Line 42–47: Request and response objects that are used by this handler are defined and exported. These objects should be available to the rest of the application as well.

  • Line 50–52: A singleton instance of the handler is instantiated and registered against the type of the request object. Based on application needs, this can be made more sophisticated. Statically typed languages have a rich ecosystem of ideas around IoC containers, whereas dynamically typed languages like JavaScript, not so much. Also keep in mind that these TypeScript generic constructs exist only at design time; they don’t exist in the transpiled JavaScript, which makes it impossible to discover or scan them at runtime.
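Putting the walkthrough above into a compact sketch (the real GetIdeas.ts is in the repository; the names AsyncCommandHandler and HandleAsync follow the post, everything else is assumed):

```typescript
// The async handler contract described in the bullets above.
interface AsyncCommandHandler<TRequest, TResponse> {
  HandleAsync(request: TRequest): Promise<TResponse>;
}

// Request and response DTOs, exported so the rest of the app can use them.
class GetIdeasRequest {}
class GetIdeasResponse { constructor(public ideas: string[]) {} }

// The vertical slice: in the real handler this would scan DynamoDB and
// sort the ideas; here an in-memory stand-in keeps the sketch runnable.
class GetIdeasHandler
  implements AsyncCommandHandler<GetIdeasRequest, GetIdeasResponse> {
  async HandleAsync(request: GetIdeasRequest): Promise<GetIdeasResponse> {
    const ideas = await Promise.resolve(["idea-b", "idea-a"]); // fake "scan"
    return new GetIdeasResponse(ideas.sort());                 // sort, wrap, return
  }
}

// Usage: a singleton instance, resolved by request type in the real app.
new GetIdeasHandler()
  .HandleAsync(new GetIdeasRequest())
  .then(res => console.log(res.ideas)); // logs the sorted ideas
```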

Simple & Stupid ExpressJS Routes

If you have experience building MVC-style web applications or REST APIs, you’re probably familiar with the idea of controllers. Routes are the same concept in ExpressJS. Keeping your routes and controllers nice and clean is a good discipline to have. Remember, we want to contain our core application logic and avoid leaking it outside as much as possible: not to frameworks, not to external libraries.

Hackathon Planner’s most significant route is the Ideas route, where ideas are created, read, updated and deleted, and a few further actions are taken against them. The code below shows the routes defined there. Notice how cleanly it reads and how all business logic is kept out of it. The only thing these routes do is pass requests and responses up and down the stream.
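The route code itself is embedded in the original post; as an illustrative sketch of how thin such a route can be (names and shapes here are assumptions, and the express wiring is elided):

```typescript
// Stand-in for the command executor described earlier: in the real app
// this resolves and runs the registered scenario handler.
class GetIdeasRequest {}
async function executeAsync(request: GetIdeasRequest): Promise<{ ideas: string[] }> {
  return { ideas: ["hack the planet"] };
}

// Just the bits of express's Request/Response shape that the route uses.
type Req = {};
type Res = { json: (body: unknown) => void };

// The route itself: no business logic, purely pass-through.
const getIdeas = async (_req: Req, res: Res) => {
  const result = await executeAsync(new GetIdeasRequest());
  res.json(result);
};
```

In the real app this function would be registered with `router.get(...)`; the point is that the route body stays a one-liner no matter how the scenario grows.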

Testing all the Scenarios

The structure of the software created so far lends itself nicely to the subcutaneous testing style discussed earlier. So I take advantage of this by taking the following actions in the tests:

  • Remove the ExpressJS layer that normally sits on top. This is a delivery mechanism and I don’t necessarily need it to test my business logic — since I’ve already separated that logic clearly out of this layer.

  • Load all scenario handlers together with the real data store (or a realistic emulator).

  • Shoot requests & verify responses.

  • In some cases, if verifying the response alone isn’t sufficient, go further and check the data store directly to verify the side effects. In my case it wasn’t necessary; I could both act and verify using my handlers alone.

Let’s take a look at a few of these tests. Below is a part of jasmine spec that tests inserting and fetching ideas by running several scenarios.

Here are a few key properties of these tests:

  • Lines 17–26: Before each test, new NoSQL tables are created, and after each test they’re dropped. This makes every test run in a sandbox, isolated from the others, so they can all run in any random order. In general, this is a characteristic of a unit test, not an integration test. So here we have the best of both worlds!

  • All tests run against a DynamoDB local emulator, so the feedback coming from the database is real. I trust that the Amazon team behind DynamoDB makes sure the emulator behaves exactly like the real one in the cloud.

  • Integration tests are slow, right? No, not if you avoid testing against user interfaces, making many network calls, or using slow data storage. These tests don’t get into the complexity of testing against a UI; they don’t bring up a web server or make an excessive number of network calls; and they use an in-memory database emulator running on the same box. These 15 tests run in ~2 seconds, even though each of them sets up and tears down its own data tables. So you can run quite a lot of these in a few minutes.

  • By their nature, integration tests are broad. They give broad feedback. But this is a trade-off: when they fail, you have to dig a little deeper to understand exactly what failed, in comparison to unit tests, which are tiny and focused. That’s a trade-off I like to make in general.

  • Having said that, I’m by no means trying to trash the value of unit tests here. If I see the need (any piece of code that is significant for the system and does interesting things, especially code that’s heavy on algorithmic work), I’ll go and unit test that part in isolation. Unit tests are valuable when they’re implemented for the right code against the right abstraction.

  • Last but not least, these types of tests (i.e. subcutaneous) interact with the system right outside its significant boundary. This means that as long as feature requirements don’t change, these tests remain valid and useful. They can survive large refactorings because they’re not coupled to the implementation details! This is a huge deal in my opinion.

In the next and last part of this series, I’ll write about my experience setting up a CD pipeline with CircleCI, which transpiles all TypeScript files, installs dependencies, runs tests and deploys to AWS Elastic Beanstalk.

Thanks for reading.

~ Hakan



A few months back, I decided to convert my hobby project – Hackathon Planner API from pure Javascript to TypeScript and wrote a blog post about it. This time I sat down to build an Automated Test Suite and a Continuous Delivery (CD) pipeline around it.

Hackathon Planner API is essentially a REST API written in Node.js and Express. It stores its data in a NoSQL database (DynamoDB) and is hosted on AWS Elastic Beanstalk.

I thought I would best write this up in two blog posts, because most of what I did in this repository is based on fundamental concepts I’ve been reading and thinking about over the last few years.

In this post, I would like to touch on a few of these ideas that guided me in my refactoring decisions for making the codebase cleaner, less coupled and more testable.

In the next post, I’ll share some more details and examples from the codebase.

Enabling Continuous Integration & Delivery

If you are not very familiar with the terms “Continuous Integration” (CI) or “Continuous Delivery” (CD) I suggest checking out ThoughtWorks website and Martin Fowler’s post to learn more about these software development practices.

I’ve intentionally used the word “enabling” in the title above. I’ve seen many brownfield projects where the team wants to start practicing CI; however, depending on the age and size of the project, this may turn out to be very difficult, sometimes even impossible. These are not practices that can be applied to a project only from the outside, and they can’t always be introduced as an afterthought either. They have to be built and enabled inside out, and it takes time and effort to get there!

The way I see it, practicing CI or CD is a kind of “high-performance state” a team reaches after getting A LOT OF THINGS RIGHT, especially around system architecture and automated testing. The skills of the team members also matter, big time! Committing to trunk every day without breaking things and, more importantly, always introducing the right amount of tests with each new check-in is not an easy task for everyone.

Clean Architecture

One of the important lessons I learned from Robert C. Martin’s talk Clean Architecture and Design (more resources here) is that it’s almost always a good idea to keep your business logic, the heart of your system, clean from external concerns like frameworks and delivery mechanisms (web frameworks, native application hosts, operating systems etc.).

This is beneficial in many ways. It keeps your business logic clean and free from the complexities of the environment around it. It also enables great automated testing capabilities, which is probably the most important prerequisite for the “Continuous Integration” way of working and delivering software.

Btw, just because I’m referring to Robert C. Martin’s clean architecture here doesn’t mean I agree with everything he suggests there, such as his claim that “storage is an insignificant detail”. I’d still like to think of storage interaction as an integral part of my system. More on that in a bit.

Vertical Slices instead of Horizontal Layers

We’ve all built n-tier applications where software is structured in horizontal layers, the most typical being Presentation, Business and Data Access. I’ve built many applications this way, and it’s almost certain I’ll come across many more in the future.

What’s the promise of n-tier and how did it become so widespread? I think this sentence from wikipedia sums it up:

N-tier application architecture provides a model by which developers can create flexible and reusable applications.

In recent years, I’ve gotten tired of the recurring problems introduced by organically grown n-tier applications: tight coupling; bloated classes or services with multiple responsibilities; organically introduced premature or wrong abstractions; all leading up to code that has become unreadable and hard to reason about. You know you’re in a “generic, reusable n-tier application” when you have to jump to definitions up and down OVER AND OVER AGAIN just to get the slightest clue about what a specific scenario is trying to achieve.

Luckily, there are alternative ideas in the community. One of my favorites is the direction Jimmy Bogard is taking in his talk SOLID in Slices not Layers, and his library MediatR that I’ve come to learn and love in the last few years.

I’ve built a few applications using MediatR where I implemented all scenarios (think of these as endpoints in a REST API) in vertical slices and kept the shared code between them to a minimum. I really enjoyed the outcome. Readability, cohesion and testability of these applications went really up.

Recently I listened to Scott Allen on a podcast where he mentioned that he’s also a fan of vertical slicing, and he has a blog post on a related idea.

One other lecture I recommend watching is Udi Dahan’s from NDC Oslo 2016, “Business Logic, a different perspective”, where he talks about the fallacy of reuse.

Last but not least, I wrote a tiny MediatR style application/command facade in TypeScript. I do make use of this module in HackathonPlannerAPI and I’ll write more about it in my next post.

Resist the temptation of sharing and reusing code unless you have a good justification and the right abstraction

Before you decide to share code between application scenarios, think twice, even three times. If you really have to do it, make sure you build a very clear interface around that component or module. As Udi Dahan says in his talk (shared above), USE this component from multiple scenarios, do not RE-USE it. If you find yourself tweaking the component for each new scenario that uses it, you probably got the boundary wrong. Find the right boundary, or refactor the component back into the scenarios and stop sharing it.

In one of my favorite medium posts I’ve read recently, the author really nails it:

When Business throws more and more functionality (as expected), we sometimes react like this:

Shared Business Logic-1

Instead, how should we have reacted:

Shared Business Logic-1

Unit or Integration… Let’s call them all TESTING.

For a good chunk of my development career, I was told that the “unit” in unit test is a class. As a result, my programming style evolved such that I always felt the need to design my classes with this type of testing in mind: always being able to isolate a class from its surroundings.

I understand that some call this Test-induced design damage. Somehow in the world of C#, using Dependency Injection and programming against interfaces still feels very natural to me. So for me, this is not a big deal and I still find it useful to be able to swap things in and out as necessary and in every level of my implementation.

However, what I’ve come to learn eventually is this: coupling your tests to your implementation details will kill you, just as Ian Cooper explains in his brilliant talk at NDC Oslo. So if you’re writing tests for each and every one of your classes, it’s very likely you’re doing it wrong, and soon you’ll find that your tests are slowing you down instead of giving you the feedback and agility you hoped for when you started.

Instead, find your significant boundaries: meaningful boundaries that are composed of one or more classes and that represent business needs. The key is this: even if your implementation details change, say after a BIG refactoring, your tests SHOULD NOT need to change.

What better significant boundary is there than the whole application boundary? This is essentially what a MediatR-style pattern gives you: one narrow facade for your whole application. What a great boundary to write your tests against!
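To make this concrete, here is a minimal sketch of such a narrow facade in TypeScript. This is my own illustrative code, not from the Hackathon Planner project: every scenario enters through a single dispatch point, so tests written against it survive internal refactorings.

```typescript
// A minimal "narrow facade" boundary (all names are illustrative).
// Requests enter through one dispatch method, so tests only ever touch
// this surface, never the classes behind it.

interface Request {
  type: string;
}

type Handler = (request: Request) => unknown;

class Mediator {
  private handlers = new Map<string, Handler>();

  register(type: string, handler: Handler): void {
    this.handlers.set(type, handler);
  }

  send(request: Request): unknown {
    const handler = this.handlers.get(request.type);
    if (!handler) throw new Error(`No handler for ${request.type}`);
    return handler(request);
  }
}

// An application scenario registered behind the facade. Its internals can
// be refactored freely without breaking any test that goes through send().
const mediator = new Mediator();
mediator.register("RegisterIdea", (req) => ({ id: 1, ...req }));

// A test would exercise the scenario through the facade only:
const result = mediator.send({ type: "RegisterIdea" });
```

Because the test only knows about `send()`, splitting or merging the classes behind the handler leaves the test untouched.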

Speed of tests matters, but technology is changing too

One big reason (heck, maybe the only reason) why people will tell you to design your system so that you can swap out your database in favor of a test double (e.g. by using a repository pattern) is the speed of tests.

This could be a necessary evil back in the day, when databases and infrastructure were bulky and slow. But is it still true? For the Hackathon Planner API project, I wrote 15 tests that execute all application scenarios against a DynamoDB Local emulator. Nothing is swapped in or out on the application side, but each test sets up and tears down the NoSQL document store, so all tests are completely isolated from each other.
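The per-test setup/teardown idea can be sketched as follows. This is an illustrative model of the isolation pattern only: an in-memory `store` stands in for DynamoDB Local, and against the real emulator you would create and delete the actual table in `setUp`/`tearDown` instead.

```typescript
// Sketch of per-test isolation: every test gets a freshly created table
// and tears it down afterwards, so no state leaks between tests.
// The in-memory `store` is a stand-in for DynamoDB Local.

type Table = Map<string, unknown>;
const store = new Map<string, Table>();

function setUp(tableName: string): void {
  // With the real emulator this would be a createTable call.
  store.set(tableName, new Map());
}

function tearDown(tableName: string): void {
  // With the real emulator this would be a deleteTable call.
  store.delete(tableName);
}

function runIsolatedTest(name: string, body: (table: Table) => void): void {
  setUp(name);
  try {
    body(store.get(name)!);
  } finally {
    tearDown(name); // teardown runs even when the test body throws
  }
}

// Each test starts from an empty table, regardless of what ran before it.
runIsolatedTest("ideas", (table) => {
  table.set("idea-1", { title: "Hackathon Planner" });
});
```

The point is structural: because setup and teardown bracket every test, the tests can hit the real document store and still stay independent of one another.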

The result is amazing: it takes ~2 seconds to run all the tests. Even if my application grows and I someday have to execute 300 tests, it would still take under a minute to run them all!

I know this example won’t represent every project out there in the wild, for all sorts of reasons. But when it does, it definitely blurs the line between “Service” and “Unit” tests in the test pyramid.


Thanks for reading. In Part 2 there will be code, I promise.


Part 2 is now available! You can reach it here.



I started using TypeScript as part of a project I’m working on. If you’re new to TypeScript or aren’t yet sure about the benefits it can bring to your Javascript codebases, I recommend watching this presentation from Anders Hejlsberg (Build 2016).

One thing I really like about TypeScript is that it’s a superset of Javascript, which means all Javascript you wrote until now is already valid TypeScript. This makes it possible for anyone to come into TypeScript and adopt it as much or as little as they want. Anders calls this “turning the knob” experience. If you want to go all the way in with the type system, you can do so. Or if you only want to use it in certain parts of your codebase, you can do that too.

As I learned more about TypeScript, I got convinced that I can benefit from it in most of my Javascript projects, in particular, the ones that involve rich business domains. As an experiment, I started converting the backend of Hackathon Planner, written in Node.js and Express, into TypeScript.

All in all, it’s been a smooth experience, but if you’re a developer yourself, you know that this kind of refactoring never happens in a straight line; there are always bumps and quirks along the way.

Here are a number of things I experienced as I gradually converted the codebase from ES6 Javascript to TypeScript. The repository is open source and on GitHub; the TypeScript version of the code might still be on a separate branch if I haven’t merged it yet.

1. At transition, module loading can get tricky in Node.js

TypeScript shares the same concept of a module as ES6. TypeScript’s module import and export syntax is very similar to its ES6 counterpart but the CommonJS module system that Node.js uses is different.

I wanted to port my Javascript to TypeScript slowly, so that I could continue making releases in between. Obviously, this means that part of the code would be transpiled from TypeScript while the other part remained plain Javascript. On many occasions I had to load a module from a .js file where the target module was transpiled from a .ts (TypeScript) file. This can cause issues.

Look at the code below and see what happens when “ideas.js” has to load a module from “AdminRepository.ts”. All of a sudden you have to use the “default” property to access the exported object. Eventually, when the consumer (“ideas.js”) is also converted to TypeScript, the problem goes away.
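The mechanics of the problem can be sketched like this. The class body below is made up for illustration; only the “default” plumbing matters.

```typescript
// AdminRepository.ts uses a default export:
//
//   export default class AdminRepository { ... }
//
// For CommonJS output, the compiler emits (roughly)
// `exports.default = AdminRepository;`, so a plain-Javascript consumer
// that calls require() gets back the whole exports object:
const transpiledModule = {
  default: class AdminRepository {
    getAll(): string[] {
      return []; // illustrative body, not the project's real repository
    }
  },
};

// ideas.js (plain CommonJS) therefore has to reach for `.default`
// before it can use the class:
const AdminRepository = transpiledModule.default;
const repository = new AdminRepository();

// Once ideas.js is itself converted to TypeScript, a regular
// `import AdminRepository from "./AdminRepository";` hides this detail.
```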

2. Adding Type Definition Files to a Project

Type definition files let you consume third party libraries as if they were written in TypeScript: you get all the benefits of type checking, intellisense, and all the other TypeScript design- and compile-time goodness, even when the third party library is written in pure Javascript.

Before the 2.0 release of TypeScript, finding and using these type definition files could get a bit painful and confusing; the same goes for creating them. TypeScript 2.0 fixed some of these issues by introducing a simplified declaration file acquisition model.

In Node.js, in order to grab the underscore.js package, you’d use:

npm install underscore --save

Now, if you also want to use this package from TypeScript, with strong typing, you’ll have to download the necessary type declaration file as well:

npm install @types/underscore --save

Starting with TypeScript 2.0, these type definition files are regular npm packages.

After downloading the type definition file, VS Code stops complaining. Furthermore, I get all type safety features and intellisense while using this library, which is what I want and why I use TypeScript.

Types Packages

On the other hand, sometimes a library you’re using might not have a type definition file. For example, I’m using a passport extension library called “passport-github2” which doesn’t have a definition file yet (and probably never will). Inside any TypeScript file that uses this library, VS Code complains that it doesn’t understand it, and TypeScript compilation results in the same error. Even though the typing file is missing, though, the actual library is there and works at run-time.

Of course, there is a problem here: the TypeScript compiler will complain about this missing type definition file forever. Being a good developer, you want to avoid having these kinds of error messages lying around and being ignored.

Missing Typings

Having said that, it’s worth mentioning that people have complained about this aggressive behavior of TypeScript, as it eagerly wants all third party packages to have type definition files. But there’s probably a solution coming.

Until then, there are 2 possible solutions:

  • Set the “allowJs” compiler option to true (in your tsconfig.json file). Although I should warn you that this will not only allow third party packages to be consumed without type definitions, it will also allow any developer to put plain Javascript code into your project.
  • Another solution is to declare an ambient module for that library in a separate file, either typing it as “any” or with proper types.

I chose the latter, because I noticed that the “allowJs” flag has far bigger consequences: it silences a bunch of other errors and warnings that I would like to be informed about. Here is what those ambient modules look like:

Ambient Modules
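A minimal ambient module of this kind, typed as “any” for now, might look roughly like this. This is a sketch, not necessarily the exact declaration used in the project:

```typescript
// typings/passport-github2.d.ts (file name is illustrative)
// Declares the module so the compiler stops complaining about the missing
// type definition file. Everything is "any" for now; proper interfaces can
// replace this later without touching the consuming code.
declare module "passport-github2" {
  const strategy: any;
  export = strategy;
}
```

Pointing the compiler at this file (for example via the “typeRoots” or “files” settings in tsconfig.json) is enough to make the error go away while keeping “allowJs” off.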

3. Incorrect / Incomplete Type Definition Files

Sometimes a type definition file (.d.ts file) turns out to be incorrect or incomplete. The screenshot below shows a situation where the package “aws-sdk” has an incomplete type definition: it doesn’t know that the “endpoint” property exists, even though it appears in the AWS documentation.

Incorrect type declarations

I literally went into the type declaration file I had downloaded via npm and added the “endpoint” field there to fix the problem. This probably means I now have to commit that file into source control. Unfortunately, I couldn’t find a best practice that explains how to properly handle these situations, so for now I’ll go with manually fixing the definition file and adding it to source control. Below you can see the patch I made to the declaration file of aws-sdk (line commented out).
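For illustration, the shape of such a patch might look like this. Note that this is a simplified stand-in, not the actual aws-sdk declaration file:

```typescript
// Simplified stand-in for the client configuration interface in the
// downloaded .d.ts file. The "endpoint" property was missing and is
// added by hand so the compiler accepts it:
interface ClientConfig {
  region?: string;
  endpoint?: string; // added manually; exists at run-time per AWS docs
}

// With the patched declaration, pointing the client at a local emulator
// now compiles (the URL below is DynamoDB Local's conventional address):
const config: ClientConfig = {
  region: "eu-west-1",
  endpoint: "http://localhost:8000",
};
```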


4. Dynamic Augmentation of Javascript Objects

A very common pattern in Javascript is to dynamically extend or augment an object with new or complementary behavior. Below, I’m using bluebird.js to convert a callback-oriented API into a Promise-based API. All of a sudden, though, TypeScript doesn’t know about the new “async” methods that bluebird has plugged into the DynamoDB client.


The solution is to either declare this object as a new interface or mark it as “any”, as suggested here. I chose the first, more type-safe approach and declared my new type in the “aws-sdk” type declaration file, which solved the problem in a nicer way, in my opinion. See the DynamoDBAsyncClient interface definition below:
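A rough sketch of what such an interface could look like, using bluebird’s convention of appending “Async” to each promisified method. The real DynamoDBAsyncClient definition in the project may differ:

```typescript
// Sketch: the promisified surface that bluebird's promisifyAll would add
// to a callback-style DynamoDB document client (method set is illustrative).
interface DynamoDBAsyncClient {
  putAsync(params: object): Promise<object>;
  getAsync(params: object): Promise<object>;
  scanAsync(params: object): Promise<object>;
}

// A hand-rolled stand-in with the same shape, so the interface can be
// demonstrated without the aws-sdk and bluebird packages:
const client: DynamoDBAsyncClient = {
  putAsync: (params) => Promise.resolve(params),
  getAsync: (params) => Promise.resolve(params),
  scanAsync: () => Promise.resolve({ Items: [] }),
};
```

Once the augmented client is typed this way, the “async” methods get full type checking and intellisense like any other declared member.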


5. Using import or export in your TypeScript file turns it into a module

Both in TypeScript and ES6, any file containing a top-level import or export is considered a module. So right after you use one of these keywords in your Javascript or TypeScript file, it will start behaving like a module. I stumbled upon this while I was trying to implement an interface that I had created in a separate file.

Let’s first look at a simple case that doesn’t make use of modules. Below is an interface with its implementation right under it. See how AdminRepository.ts refers to the IAdminRepository interface without any ceremony.

Now look at these other two files, in which the interface exposes a third party object and therefore has to import that third party module. Note that index.ts can’t just refer to that interface anymore, without actually importing it.
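A sketch of the situation, with the two file contents inlined as comments (all names are illustrative):

```typescript
// IAdminRepository.ts — the moment this file contains a top-level
// `import` or `export`, it becomes a module, and its declarations stop
// being globally visible:
//
//   import { Profile } from "passport-github2";
//   export interface IAdminRepository {
//     find(id: string): Profile;
//   }
//
// index.ts — the interface can no longer be referenced "for free";
// it has to be imported explicitly first:
//
//   import { IAdminRepository } from "./IAdminRepository";
//   class AdminRepository implements IAdminRepository { /* ... */ }
```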

By the way, I also noticed that the TypeScript coding guidelines do not recommend using “I” as a prefix for interface names, so I need to hold back my C# reflexes.

There are more than a few things one has to do to make the TypeScript compiler happy. For anyone with a “Javascript mindset” this may feel like an unnecessary burden. But once you make this investment, the amount of help and insight you get from the TypeScript compiler and its surrounding tooling is amazing.

Thanks for reading.

~ Hakan

Copyright © 2017 - Hakan Tuncer

Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer’s view in any way.