The Open Sourcerer's Magic Spell Book

Written by Sebastian Golasch and edited by Rodney Rehm

Prelude - The magician's apprentice

My dear apprentice,

Let me ask you: do you want to go on a journey with me? It will take us back 15 years, into the dark past of the internet.

In the year 2003 there was no npm, no GitHub, no nothing. All we had was our sometimes loved, often hated SVN and our tiny hosted servers running some ancient version of PHP. Development meant opening an FTP client, copying over the files we thought had changed, hitting reload and crossing our fingers as hard as we could in the hope that nothing broke.

When we wrote a piece of code we wanted to share with the world, we added an entry to our weblog, uploaded our code snippet, updated our RSS feeds and felt great. Sometimes people approached us via e-mail or IRC and told us it wasn't working for them. We replied and we forgot. Sometimes we sent them a "new version" via mail but never bothered to update that weblog entry. It was a happy little world. Sometimes we would've liked to know if someone actually used our code snippet. Maybe they even solved this weird thing in IE 5.5 on Mac that we never got our heads around because we didn't have a Mac to work on. But overall, it was a happy little flat world. The sun went around us day by day, and those sparkling things in the sky every night were clearly some happy little glowing insects far far up there.

But something changed in the years since. We've seen others performing new tricks, we got our new magic wands and we learned those fascinating new spells from all those modern books other magicians wrote. As we adapted to all that we'd learned, we changed the shape of our little open source world. It wasn't flat anymore; we learned that we run in circles around the sun and not the other way around. But most importantly, we learned that we're not alone: all those sparkling insects up in the sky weren't insects. Those were suns and planets like ours, just waiting for us to learn the trick of teleportation and pay them a visit.

With the magical tools that have been created (GitHub, npm, Bower, Grunt, Webpack, etc.), we evolved from shabby jugglers into Front End Technomagicians!

You see, my apprentice, we live in exciting times, but not all those new skills we learn result in happiness. Like all magic, these new tricks have a price. Using them can burn us out, make us feel guilty, and they can become a burden that makes us question whether all of this is worth our time.

My dear young magician, I never want you to experience any of this, so I've written this spell book for you. It contains all the magic tricks I've learned to prevent those feelings from rising, all the good magic that takes away most of the painful and dark side effects of Open Source trickery, so that you can focus on what this really should be: Learning, personal growth, making friends and sharing the things you love.


The grandmaster of abandoning repos

Chapter 1 - Trapped in a time loop

Every project has a start. Once you find a piece of code releasable, there should be no obstacles to actually releasing it. So we're going to prepare our toolchain once and for all, so that we don't find ourselves trapped in a time loop, repeating the same steps over and over for every project we release.

Our tiny little project will be a CLI application that displays all upcoming JavaScript and Web Conferences that have an open Call for Papers. We're going to use the API from Web Conferences in 2018 as our data source for this. You can find the final code here.

If you ever feel trapped by a big project that never seems to come to an end, try to remember the Unix Philosophy and break it down into tiny parts that go well together. That way it will be much easier to release initially and to maintain in the future.

Creating a GitHub repo

The first thing to do is to create a GitHub repo for our new library. Not only is GitHub the de facto standard for sharing code (there are wonderful alternatives, of course), but it also acts as our "backup" drive right from the start. Just imagine that you've worked on a project for weeks without backing up your code somewhere and then your hard drive crashes. Wooops. All that hard work is gone. If we start with a git repo from the beginning, we'll have a fail-safe for that worst-case scenario.

We'll go ahead and create the repository, enter the metadata and choose a sane default for our .gitignore file (which is "Node" in our case) and the license we want to release our code under. After that, we can just copy/paste the code GitHub shows us to init our local repository.
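The snippet GitHub shows for pushing an existing local project looks roughly like this ({{github_user}} and {{repo}} are placeholders for your own values):

    git init
    git add .
    git commit -m "initial commit"
    git remote add origin git@github.com:{{github_user}}/{{repo}}.git
    git push -u origin master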

NPM setup

Next, we should configure our npm client with sane defaults that are used every time we create a new project locally. There are a ton of parameters that can be set within our package.json, but the most valuable ones for our case of releasing a new library are: the author's name, their email address, a URL to a public profile page of the author, and the default license that gets suggested to us.

Those properties can be easily set from the command line:

    npm set init-author-name="Your Name"
    npm set init-author-email=""
    npm set init-author-url=""
    npm set init-license="MIT"

Those defaults are persisted in the local user's npm config file called .npmrc, which we can find in our home directory. We can verify that everything has been set up correctly by running the npm config list command, which will display all of our defaults.
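After running the commands above, the relevant lines in ~/.npmrc should look something like this (the email and URL values here are placeholders for your own):

    init-author-name=Your Name
    init-author-email=your@email.example
    init-author-url=https://your-site.example
    init-license=MIT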

We can then go ahead and create a fresh npm project using:

    npm init

Using a README template

Having a good Readme file for our project is necessary to make it approachable to others (and to our future selves).

A good readme should contain at least the following topics:

  • What the software is and what it does
  • How it can be used
  • How the release process works
  • Dependencies and additional installation instructions (if needed)
  • Any legal restrictions when using or including the software

What we want our readme to look like is up to us, but if we choose a template like the one Billie Thompson provides here, we can avoid the blank page syndrome.
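Mapped onto headings, those topics could give us a skeleton like the following (a rough sketch, not Billie Thompson's actual template):

    # {{repo}}

    What the software is and what it does.

    ## Installation

    Dependencies and additional installation instructions.

    ## Usage

    How it can be used.

    ## Release process

    How the release process works.

    ## License

    Any legal restrictions when using or including the software.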

An easy way to pull this template into our projects is the following command we can use from our command line:

    node -e "require('https').get('', b => b.on('data', c => process.stdout.write(c+'')))" >

If we plan to release lots and lots of modules in the future, we could also fork Henrik Joreteg's fixreadme project and modify it to our own needs. This way, we only need to run fixreadme for all of our future projects and have a wonderful starting point for one of our software's most important parts.

Contributions & the Code of Conduct

If we'd like to receive contributions to our project, which also includes issues like bug reports or feature requests, we need to enable possible contributors by providing a safe space and by handing out guidelines for what our contribution process looks like.

A Code of Conduct...

... should explicitly state that we will not discriminate against people on any grounds other than their code contributions, and make clear that behavior that would be harmful to other people that may wish to contribute is not acceptable. This gives clear authority to remove people who are causing problems and also sets a clear tone of inclusion ...

There are good templates like the Contributor Covenant out there, but it is important to read and understand the code of conduct to be able to act on it and to enforce it.

Two good npm modules that help you get your favorite CoC into your projects are Conduct by Sindre Sorhus and covenant-generator by Simon Vansintjan.

Secondly, we need to let people know how they can contribute technically. It would be very irritating for any possible contributor to be pushed back because they didn't follow our process when there was no way they could've known about it. This includes our branching strategy, how we deal with issues and so on. We can also give helpful hints and further descriptions of how squashing works, how a new branch should be named, etc.

An outstanding example of how to do this comes from the friendly Neighbourhoodies. Once more Billie Thompson has us covered here as well, by providing a template that we can pull in and modify like we did with the Readme:

    node -e "require('https').get('', b => b.on('data', c => process.stdout.write(c+'')))" >

If we want to give further support hints like an FAQ, we could also add a file to our project as described here.

Adding GitHub PR & Issue templates

Once our project goes live, we expect people to open issues and (hopefully) create pull requests to improve our software. If we leave the style of issues and PRs completely undefined, we will have a hard time sorting everything out. Luckily, GitHub has us covered by offering a neat way to design pull request and issue templates that enable us to introduce a formal process with predefined fields, so that we can gather the information we need right away.

It always depends on the type of software that we release, but an issue template should at least gather information on

  • Expected Behavior
  • Actual Behavior
  • Steps to Reproduce the Problem (if it's a bug)
  • The reporter's system (OS, Platform, Software Version, Browser, etc.)

A template for a pull request should most probably contain the following:

  • A description of the change
  • Motivation and context
  • Type of the change (Bug fix, Feature, Breaking Change)
  • A checklist for the committer (style guide followed, documentation changed, etc.)
  • A reference to the issue this PR is related to

Steve Mao created a bunch of templates we can use as inspiration for our own. Once we've picked one of the templates and adapted it to our project, we should create a directory called .github in our project's folder and place them inside.
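As a starting point, a minimal .github/ISSUE_TEMPLATE.md along those lines could look like this (a sketch built from the list above, not one of Steve Mao's templates verbatim):

    ## Expected Behavior

    ## Actual Behavior

    ## Steps to Reproduce the Problem (if it's a bug)

    1.
    2.
    3.

    ## Your Environment

    - OS / Platform:
    - Software version:
    - Browser (if applicable):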

We have standards

Now that we've standardized our contribution process, it's time to take care of the style guide for the actual code we plan to write.

The first thing we want in this direction is to maintain our basic code style and help possible contributors adopt it as painlessly as possible, even if they use a different editor or IDE than we do. Thanks to the wonderful EditorConfig project, this is super easy.

We just create a .editorconfig file in our project root, describing the basic style we want to have. A good and minimalistic example for JavaScript projects can be fetched from here.

    node -e "require('https').get('', b => b.on('data', c => process.stdout.write(c+'')))" > .editorconfig
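For reference, such a minimal .editorconfig typically contains rules like these (the exact values here are an assumption; adjust them to your taste):

    root = true

    [*]
    charset = utf-8
    indent_style = space
    indent_size = 2
    end_of_line = lf
    insert_final_newline = true
    trim_trailing_whitespace = true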

While EditorConfig maintains our minimal code style and can be used for virtually any file format, we need something more sophisticated for our main programming language. Tools like JSHint and ESLint are well suited to do this, but they only provide the tooling; they don't enforce a particular coding style on us. If we take a look at other programming languages like Python, we see well-defined, globally applicable rulesets like PEP 8 for the entire ecosystem. As we don't have such a global style guide for JavaScript, a few people got behind creating one in userland, delivering the appropriate tooling as well. The most popular one is Standard. By using Standard we can avoid any discussions with committers about coding styles, so we should use it too. It also helps first-time contributors, as they're probably already familiar with it from other projects.

Standard can be installed via npm, and we're installing Snazzy alongside it to beautify the default output.

Standard Linting Error

    npm install standard snazzy --save-dev

Now we just need to add a tiny bit of configuration to our package.json to make those tools easily available to us. This way, we only need to run npm run lint in the future to validate our code.

    "scripts": {
        "lint": "standard --verbose | snazzy"

Wrapping it up

Now that we've laid out all the groundwork to make our project accessible to us and others, we're ready to push it to GitHub for the first time. Time to actually write the first version of our application 🙌

Chapter 2 - Meet the gatekeeper & unleash the dragon

Writing a unit test

Assuming we've written our first working implementation, we should also make sure that it doesn't break when we or others touch it in the future. We can achieve this by writing unit tests. Kent C. Dodds wrote a great introduction to this topic - But really, what is a JavaScript test?.

We'll use his work as a foundation and install Jest:

    npm i jest --save-dev

and add it to our package.json scripts as we did with Standard:

    "scripts": {
        "lint": "standard --verbose | snazzy",
        "test": "jest"

Next we should get our hands dirty and write some tests that ensure the integrity of our little app.

Writing predictable Commit Messages

Before we commit our newly created tests, we should spend a minute thinking about our commit messages. People tend to underestimate the impact of commit messages: they are the main medium that communicates why we changed a piece of code. Was it a bug fix, a new feature or something else? Using a well-defined standard for our commit messages makes them not only easier to write and read, but also enables us to derive semantic information that we can use to automatically generate version numbers, changelogs, etc.

We're going to use the "gold standard" in commit messages called Conventional Commits. It's not only been adopted and battle-tested by a lot of well-known projects, but also enables us to tap into an entire ecosystem of tools that turn committing changes from a pain into a guided and joyful experience.
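In practice, a conventional commit message is just a type, an optional scope and a short subject. A few made-up examples:

    feat(cli): add --json flag for machine-readable output
    fix(api): handle conferences without a CFP end date
    docs(readme): describe the release process
    chore(deps): update jest to the latest version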

We're going to use Commitizen in our application as well as Commitlint to generate and validate our commit messages.

In order to do this, we need to install the relevant packages from npm, init the commitizen adapter and configure our linter:

    npm i --save-dev @commitlint/cli @commitlint/config-conventional @commitlint/prompt commitizen
    ./node_modules/.bin/commitizen init cz-conventional-changelog --save-dev --save-exact
    echo "module.exports = {extends: ['@commitlint/config-conventional']}" > commitlint.config.js

We then go ahead and add a cz and a commitmsg section to our scripts, cz will be used for committing changes, while commitmsg is used to validate the commit messages.

    "scripts": {
        "lint": "standard --verbose | snazzy",
        "test": "jest",
        "cz": "git-cz",
        "commitmsg": "commitlint -e $GIT_PARAMS"

This will generate and activate everything we need to use semantic commit messages with commitizen in the future. When we're going to commit the tests we just wrote, we can do it using the npm run cz command (instead of git commit) and get a super nice interface that will guide us and the committers through every commit we do 😎.

Thanks to adhering to such a universal format, we're also able to use other tools if we want, for example conventional-changelog to automatically generate change logs for us.

Commit Linting Error

Keeping yourself safe with Git Hooks

All of our tests and linting don't help us much if we forget to execute them manually; we have no way to be sure that the code that gets committed passes our valuable checks. As we're using git, we can leverage a super neat built-in feature called Git Hooks to do just that.


Git hooks are scripts that Git executes before or after events such as commit, push, and receive. Git hooks are a built-in feature - no need to download anything. Git hooks are run locally. These hook scripts are only limited by a developer's imagination. Some example hook scripts include:

-> pre-commit: Check the commit message for spelling errors.

-> pre-receive: Enforce project coding standards.

-> post-commit: Email/SMS team members of a new commit.

-> post-receive: Push the code to production.

Luckily we don't have to write any shell scripts ourselves. Thanks to ghooks we only need a few lines of configuration in our package.json to enable them:

    npm install ghooks --save-dev

    "config": {
        "ghooks": {
            "pre-commit": "npm run lint",
            "commit-msg": "npm run commitmsg",
            "pre-push": "npm test",
            "post-merge": "npm update",
            "post-rewrite": "npm update"
        }
    }

With that in place, we can't commit anything that doesn't adhere to our coding style, we make sure that the commit message is compliant with our guideline, we can't push anything with failing tests, and we pull a fresh version of our dependencies every time we merge something or a git rewrite happens.


Preparing to automate our releases using Semantic Release

So far we've pushed our code to GitHub, but haven't released anything to npm, it's about time to change that 🔥

We've managed to automate all the things, and releases, as well as tagging versions, shouldn't be any different. We humans make mistakes all the time, and publishing a faulty version of our application or library can easily break the code of many other projects that depend on ours. The same can happen if we accidentally bump our version the wrong way, as we want to follow Semver to make our versions predictable by adhering to the scheme of BREAKING.FEATURE.FIX (Semver's MAJOR.MINOR.PATCH).

As we already have the semantic information from our conventional commit messages to know whether a commit is a breaking change, a feature or a bug fix, we're also able to determine the next version automatically, and whether a new version should be published at all.
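Under this scheme, the commit types map onto Semver bumps roughly like this (the version numbers are examples only):

    fix: ...                                -> patch release (1.2.3 -> 1.2.4)
    feat: ...                               -> minor release (1.2.3 -> 1.3.0)
    commit with "BREAKING CHANGE: ..." body -> major release (1.2.3 -> 2.0.0)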

Again, we don't have to do this ourselves as there is a wonderful tool available that works like a charm for our use case, it's called semantic-release.

Semantic Release will:

  • Calculate the version number and modify our package.json accordingly
  • Release a new version to npm
  • Create a tag with the version on GitHub
  • Create a release on GitHub

Exactly what we need! So let's install it:

    npm install --save-dev semantic-release semantic-release-cli

and, again, create a shortcut in the scripts section of our package.json:

    "scripts": {
        "lint": "standard --verbose | snazzy",
        "test": "jest",
        "cz": "git-cz",
        "commitmsg": "commitlint -e $GIT_PARAMS",
        "semantic-release": "semantic-release"

We have to run the semantic-release-cli setup command locally one time to generate the configuration for us.

    ./node_modules/.bin/semantic-release-cli setup

All of our releases will be done on our CI server, for which Semantic Release has already generated a configuration file. Fortunately, Semantic Release also created GitHub and npm tokens for us and uploaded them to Travis CI, so we don't even have to bother figuring out how and where to get them.

Activate Travis CI

Executing our code locally is one way to make sure it'll run fine, but if we use an external service to do the checks it enables us to not only test our code on a different environment, but also make GitHub aware of it and enable it to check the integrity of every PR issued.

We're going to use Travis CI for this. There are other excellent options like Circle CI, Jenkins, AppVeyor and so on, but the integration of Travis with GitHub and our toolchain makes it so convenient to use.

As semantic-release already generated a .travis.yml file for us, all we need to do is to extend it to use our npm run lint and npm test commands. When we head over to the Travis site we should see our first build already in progress.

We can also tell Travis to validate our commit messages, which is a nice thing for pull requests, to make sure no one bypassed our git hooks. We need to install the appropriate Commitlint adapter for this and add it to our package.json as well as to our CI configuration file.

    npm install --save-dev @commitlint/travis-cli
    "scripts": {
        "lint": "standard --verbose | snazzy",
        "test": "jest",
        "cz": "git-cz",
        "commitmsg": "commitlint -e $GIT_PARAMS",
        "commitlint-travis": "commitlint-travis",
        "semantic-release": "semantic-release",
        "travis-deploy-once": "travis-deploy-once"

Our .travis.yml should look like this:

    language: node_js
    cache:
      directories:
        - ~/.npm
    notifications:
      email: false
    node_js:
      - '9'
      - '8'
      - '6'
    script:
      - npm run commitlint-travis
      - npm run lint
      - npm test
      - npm run travis-deploy-once "npm run semantic-release"
    branches:
      except:
        - /^v\d+\.\d+\.\d+$/

Travis build

Wrapping it up

With our next successful push we'll have a system running that ensures our code doesn't break any tests. We'll only be able to push linted and tested code with a semantic commit message, and it bumps our software version for us as well as automatically releases it to npm and GitHub. Pure magic!

Chapter 3 - Those fairies are your friends

Now that we've released our library into the wild, we expect others to collaborate with us on it, but that means we give up a lot of control over what gets pushed to our software. Will the code coverage decrease because people are too sloppy to write tests? Will they introduce dependencies that might add security vulnerabilities to our software, or that have a license which doesn't fit the one we chose for our project? And how can we make sure that our dependencies stay up to date, so that we don't end up unable to update our application because one of them introduced ten breaking changes in the meantime?

We're not the first ones to ask those questions. Other people have already created services covering all of that. It's common for these services to be free for Open Source projects and integrate well with GitHub, so let's jump in and see what's out there!

Code coverage & code quality

There are a couple of services that help us to gather some metrics on our code and visualize the code coverage. For JavaScript projects Codecov and Code Climate have a superb toolset and are used by many Open Source projects and companies.

We're going to use Code Climate, but the way these services operate is quite similar:

  • We need to create an account for the service and connect it with GitHub
  • We create our code-coverage information on every Travis CI build
  • We use the service's collector tool, which we install via npm, and a token to upload the data to it

After we created an account and obtained our token from the Code Climate repo settings page, we need to add a secret environment variable called CODECLIMATE_REPO_TOKEN in our Travis CI project settings page.

Travis Tokens

Codeclimate Token

Next, we need to install the Code Climate reporter that will post the code coverage statistics:

    npm install --save-dev codeclimate-test-reporter

Then we need to modify our .travis.yml file to submit our results to the service:

    language: node_js
    cache:
      directories:
        - ~/.npm
    notifications:
      email: false
    node_js:
      - '9'
      - '8'
      - '6'
    script:
      - npm run commitlint-travis
      - npm run lint
      - npm run test-coverage
      - npm run collect-coverage
      - npm run travis-deploy-once "npm run semantic-release"
    branches:
      except:
        - /^v\d+\.\d+\.\d+$/

Additionally, we create a new file called .codeclimate.yml which lets us configure our project's maintainability thresholds and the plugins we want to use, and allows us to exclude our tests from the maintainability checks if we want.

    version: "2"
          threshold: 4
          threshold: 4
          threshold: 250
          threshold: 5
          threshold: 20
          threshold: 25
          threshold: 4
          threshold: 4
        enabled: true
      - "__tests__/"

You can download the file above:

    node -e "require('https').get('', b => b.on('data', c => process.stdout.write(c+'')))" > .codeclimate.yml

Now we only need to introduce a command to our package.json that generates the coverage. We call it test-coverage and add the --coverage parameter to the jest command; that's all we need to do to make it work!

    "scripts": {
        "test": "jest",
        "test-coverage": "jest --coverage --collectCoverageFrom=src/*js",
        "collect-coverage": "codeclimate-test-reporter < ./coverage/",

Codeclimate Dashboard

Keeping your dependencies up to date

Managing dependencies can be time-consuming and requires a lot of effort and discipline to do right. Instead of relying on our time and manual lookups, we're going to use Greenkeeper, which will do it for us.


You could manually track updates of your dependencies and test whether things still work. This takes a lot of time however and it’s rarely ever done. So most of the time, your software is in a Schrödinger state of being potentially broken, and you have no idea until you (or your users) run npm install and try it out.

To enable Greenkeeper, we only have to visit its website, log in with our GitHub user and install the GitHub App by pressing the big green button. After merging the pull request it opened, we will receive pull requests from Greenkeeper any time one of our dependencies gets updated.

Thanks to Travis CI, we will know if that update breaks something for us and thanks to semantic release, we will push out a new version automatically once we merged the PR.

Be aware of the dragons

Speaking of dependencies, it's pretty hard to keep track of known security problems with them, especially if our software has many dependencies.

That is where Snyk comes in. It's a service that finds vulnerabilities in our repos and remediates risks with updates and patches. To include it in our project, we need to visit their website and sign in with our GitHub account and select the repo we want covered.

Snyk Repo

From now on, every PR we get will be checked for introducing insecure dependencies, and we can see right away if something's wrong. The ease of setting this up is astonishing.

Checking our licenses

Lastly, we also want to ensure that additional dependencies do not violate our project's license. Thanks to the brand new Fossa service, this isn't much more work than with the services mentioned before.


Surface raw licenses hidden inside deep dependencies; correctly-identified even if edited and placed within code.

-> Detects embedded GPL, even when not reported by developers

-> Additional parsing for metadata, notice files and webpages referenced in code

-> Differentiates between declared, nested & included licenses (from i.e. copy-pasted modules/files)

-> Intelligently handles dual/multi-licensed code

Again, we only need to sign in with our GitHub account, link Fossa, enable the repos and merge the automatically created PR. It will not only check each PR for license compliance but also add two badges to our README to signal to every potential user that our software adheres to the license we claim it does.

Btw.: More information on choosing the correct license for our software can be found at

Wrapping it up

Thanks to those wonderful services and their free for open source policy, we are now able to continuously check our code coverage and code quality, keep our dependencies up to date with no effort and be sure that any changes are safe to use and adhere to our license.

Github PR Status

Chapter 4 - The lion's brides

Now that we've made our application safe and easy to maintain in every way we can think of, we need to think about managing upcoming releases and issues, and about making our roadmap, as well as the amount of work we put into securing its quality, visible to our users and contributors.

GitHub hands us many useful tools in order to get this right, so let's take a look at those and how we can use them.

Readme Badges for quick visibility of core metrics

Readme Badges are a great way to signal our README visitors that our build is green, our dependencies are up to date, which standards we adhere to, how high our code coverage and maintainability index is, which version was published last, etc.

Here's a list of the badges we use in our README to do exactly this:

Build Status - Signal the state of our build

[![Build Status]({{github_user}}/{{repo}}.svg?branch=master)]({{github_user}}/{{repo}})

npm - Currently released version


dependencies Status - Let the users know that our dependencies are up to date

[![dependencies Status]({{github_user}}/{{repo}}/status.svg)]({{github_user}}/{{repo}})

devDependencies Status - Let our users know that our devDependencies are up to date

[![dependencies Status]({{github_user}}/{{repo}}/dev-status.svg)]({{github_user}}/{{repo}}#info=devDependencies)

Test Coverage - Display the current code coverage percentage

[![Test Coverage]({{codeclimate_id}}/test_coverage)]({{github_user}}/{{repo}}/test_coverage)

Maintainability - Display the code quality measured


License: MIT - Display the license that we use

[![License: MIT](](

FOSSA Status - Signal that all of our dependencies are compatible with our license

[![FOSSA Status]({{github_user}}%2F{{repo}}.svg?type=shield)]({{github_user}}%2F{{repo}}?ref=badge_shield)

semantic-release - Signal that we're using semantic release


js-standard-style - Display the code style used


Commitizen friendly - Let people know that we're using Commitizen for easy commit messages

[![Commitizen friendly](](

Greenkeeper badge - Let people know that Greenkeeper keeps our dependencies up to date

[![Greenkeeper badge](](

Known Vulnerabilities - Let people know that we have no known security vulnerabilities

[![Known Vulnerabilities]({{github_user}}/{{repo}}/badge.svg)]({{github_user}}/{{repo}}.test)

Organize issues

When it comes down to organizing issues, we already did a great job by providing a template, but once issues have been created we need to find a way to group and label them.

Good labels indicate the status of an issue, like it being in progress or on hold. We can use them to signal the priority of an issue (low, high, critical, etc.) or its type (bug, enhancement, question), to signal that we want help with it, or to mark it as a great issue for newcomers to get used to the project and open their first PR.
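Put together, a small starter set along those lines might look like this (bug, enhancement, question, help wanted and good first issue ship as GitHub defaults; the priority and status labels are our own additions):

    priority: low | high | critical
    status: in progress | on hold
    type: bug | enhancement | question
    help wanted
    good first issue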

Depending on how many repos we plan to maintain in the future, it might be a good idea to use a tool to manage those defaults for us. We might want to check out git-labelmaker and git-label-packages, which enable us to manage the labels from the command line and present us with some great defaults that already contain most of what we need.

Dave Lunny did a great write up on Medium about sane labeling of issues and their management.

Use GitHub Projects, Milestones and Roadmaps

GitHub's project management features have evolved in beautiful ways over the past years, enabling us to manage planned features, use agile-ish boards to organize issues and create milestones. Once more developers join as regular contributors, we need ways to manage who works on what and when it is expected to be ready.

GitHub Projects and Milestones are a great way of doing this, as we're in the GitHub userspace already.

Wrapping it up

Managing the "soft side" of open source projects could be a lengthy article on its own, but as we have the tools already at hand, it would be a shame not to use them. We can organize our issues in projects and tag them with labels and milestones and we can signal the state of our project to our visitors at a glance with badges. In the end, it makes managing open source more painless and keeps the fun of doing so.

Epilogue - The magician

My dear magician,

I hope those magic spells helped you become the great wizard of open sourcery you always wanted to be. Pass them on to everyone who wants to follow you to the Nethersphere of the great Technomagicians and Octowizards. This isn't where the journey ends, this is merely the beginning. Go and release your code into the wild and let it grow to help others in need. Don't let misconfigured pull requests or badly worded issues bring you down. Follow your heart and spread the love.

The contributors to JavaScript January are passionate engineers, designers and teachers. Emily Freeman is a developer advocate at Kickbox and curates the articles for JavaScript January.