The Promise of a Better Future

I am beyond excited to have Ernie Turner contributing to JavaScript January. The first time I met Ernie, he opened a conference with his talk on encryption, and only later did I find out he did it without notes because the A/V was a disaster. He and his team are brilliant, and we’re lucky he’s willing to teach us about promises today!

Promises are the standard way to manage asynchronous event chains in JavaScript. They provide much better developer ergonomics compared to callbacks and lend themselves to better management of both parallel and sequential asynchronous operations. Being able to sequence Promises together into a logical chain makes complex asynchronous code easier for developers to understand. While async/await is becoming the more popular way to handle asynchronous programming, it still uses Promises under the hood.
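
As a quick refresher, sequencing and parallelism with Promises look like this (a minimal sketch using already-resolved Promises in place of real asynchronous work):

```javascript
// Sequential: each .then() waits for the previous step's result.
Promise.resolve(1)
    .then((n) => n + 1)
    .then((n) => console.log(n)); // 2

// Parallel: Promise.all starts all operations and gathers their results.
Promise.all([Promise.resolve("a"), Promise.resolve("b")])
    .then((values) => console.log(values)); // [ 'a', 'b' ]
```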

Unfortunately, Promises aren't the perfect solution for asynchronous programming that JavaScript developers need. Their implementation is flawed in a number of ways that could easily have been improved in the original spec. Promises are opinionated which can make them confusing and frustrating to use at times. Workarounds have to be implemented in certain situations when they don't behave as intended. And being able to only use .then() and .catch() as chainable methods does a poor job of conveying the purpose and complexity of an asynchronous operation.


Promises Are Eager

Promises are eager. This means that when a new Promise is constructed, it immediately starts executing, attempting to resolve the asynchronous value. When a new Promise is created, there is no control mechanism to determine when the function that is passed to the constructor is executed. For example:

    new Promise(() => console.log("working away!"));
    console.log("log text");

When this code runs you'll see working away! displayed first, followed by log text. The Promise wasn't assigned to a variable, nor did we attach any .then() or .catch() methods, yet it executed immediately. In some cases this is the behavior that you want. It can be helpful in situations when you want the Promise to calculate its value as soon as possible and then have that cached resolved value available without having to repeatedly perform the operation.
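
For instance, an eager Promise's executor runs exactly once, and every consumer sees the cached result (a sketch; expensiveLookup is a made-up name, and the counter just makes the single execution visible):

```javascript
// The executor runs immediately at construction, exactly once; the resolved
// value is then cached for every later .then() call.
let runs = 0;
const expensiveLookup = new Promise((resolve) => {
    runs += 1;               // runs eagerly, exactly once
    resolve(42);
});

expensiveLookup.then((value) => console.log(value, runs)); // 42 1
expensiveLookup.then((value) => console.log(value, runs)); // 42 1 -- no recomputation
```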

But what if you don't want that behavior and need to control when a Promise executes? That's where the problems start. If the intent is to create a Promise that allows control of when it starts executing, there are no native options. If you want the function passed to the Promise to run multiple times, Promises don't natively allow that either.

If you want to change the default behavior then Promise creation has to be wrapped in another function which can be invoked as many times as needed:

    const controlledPromise = function(){
        return new Promise((resolve) => { console.log("working away!"); resolve(); });
    };
    controlledPromise().then((resolvedValue) => { /*runs once this call resolves*/ });

This construction works for controlling when a Promise starts executing, but now calling controlledPromise() will execute the function passed to the Promise each time. This construction can't be used in places where the Promise value should be calculated once and cached for subsequent calls. Again, in some scenarios this is the desired behavior. But the problem is that you have to go through a checklist of what behavior you want and possibly add extra code depending on what you need. Wrapping Promises in functions isn't hard, but why should you have to do it in the first place?

The eagerness of Promises can also make unit testing them a pain. If you can't control when a Promise starts calculating, it's difficult to set up mocks or spies as part of your unit tests and verify asynchronous results. Since asynchronous code is some of the most complicated code we write, having good unit test coverage that exercises all code paths, including the failure cases, is critical.
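
One common workaround is to hide Promise creation behind a factory that a test can stub out before anything starts executing (a sketch; api.getUser is a hypothetical example, not a real API):

```javascript
// Hypothetical module: Promise creation is wrapped in a factory function so
// nothing executes until the factory is called.
const api = {
    getUser: (id) => fetch(`/users/${id}`).then((res) => res.json())
};

// In a unit test, replace the factory *before* any Promise exists:
const original = api.getUser;
api.getUser = () => Promise.resolve({ id: 1, name: "test user" });

api.getUser(1).then((user) => console.log(user.name)); // "test user"
api.getUser = original; // restore after the test
```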

Implicit Conversion and Method Overloading

Another issue with Promises is their implicit conversion of values returned in both the .then() and .catch() methods. As an example, the following two Promises will log the exact same value:

    const implicit = new Promise((resolve) => resolve("my value"))
        .then((value) => value);
    const explicit = new Promise((resolve) => resolve("my value"))
        .then((value) => Promise.resolve(value));

In the implicit example, the value returned within the .then() callback is automatically wrapped in a Promise.resolve for you. The same is true for .catch(): any value returned from a .catch() function is implicitly wrapped in Promise.resolve as well:

    const implicit = new Promise((resolve, reject) => reject(new Error("my error")))
        .catch((error) => error.message);
    const explicit = new Promise((resolve, reject) => reject(new Error("my error")))
        .catch((error) => Promise.resolve(error.message));

While this might be considered helpful for some developers, I've found that most of the time it leads to more confusion. This conversion also overloads the .then() and .catch() methods to have multiple responsibilities. If you want to synchronously map a prior result to a new value, you use .then(). If you want to use a prior result and perform another asynchronous operation, you use .then(). If you want to handle an error and map it to a different error message, you use .catch(). If you want to handle an error and resolve with a default value to keep processing your Promise chain, you use .catch().
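
All four responsibilities look nearly identical at the call site (a contrived sketch; here the chain succeeds, so both .catch() handlers are skipped):

```javascript
Promise.resolve("value")
    .then((v) => v.toUpperCase())                 //1. synchronous mapping
    .then((v) => Promise.resolve(v + "!"))        //2. chaining another async operation
    .catch((e) => { throw new Error("mapped"); }) //3. mapping an error
    .catch(() => "default value")                 //4. recovering with a fallback
    .then((result) => console.log(result));       // "VALUE!"
```

Nothing about the shape of the code distinguishes a synchronous map from a new asynchronous operation, or an error transformation from an error recovery.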

This combination of responsibilities drastically hurts code readability. Take the following example code:

    fetch("/api/endpoint")
        .then(() => ...)
        .catch(() => ...)
        .then(() => ...)
        .then(() => ...)
        .catch(() => ...)

By quickly glancing at that code, you have no idea how many asynchronous operations are actually happening. We have at least one with the fetch() call at the start, but what about the other three .then() calls? Each of those could be doing synchronous data manipulation or be simple logging functions or they could each be making another request to a service. And what is the responsibility of the .catch() calls? Are they mapping error messages? Are they responsible for recovering from an error state?

There's no way to quickly glance at a Promise chain and fully understand how complicated the operation is.

Enter the Future

Futures are an alternative to Promises with the same end goal: represent a value that will be available at some later point. However, Futures provide a number of benefits over Promises that make them more ergonomic, both to write and to read. There are multiple open source Future libraries available (full disclosure, I'm the author of one of them). Futures are easily converted to and from Promises, so existing Promise-based code can be converted gradually. And if you're developing a library for others that already returns Promises, there is no need to break APIs to start replacing the internals with Futures.

Note: All Future code samples are using the methods present in the Fluture library. Not all libraries use the same method names.


Futures Are Lazy

In contrast to Promises, Futures are lazy and only start evaluating once instructed to do so. For example, the following code doesn't log anything:

    new Future(() => console.log("working away!"));

We're creating a Future, but we're not doing anything with it. With Futures, you're in control of when they start executing. This is done via a .fork() method on the Future which takes both a failure callback and a success callback (In the library I wrote the method used to start execution on the Future is called .engage() because it's pretty great to type Future.engage()). Here's an example of how to start executing a Future:

    const myFuture = new Future((reject, resolve) => ...);
    myFuture.fork(
        (error) => {
            //executed if Future fails with error value
        },
        (result) => {
            //executed if Future succeeds with resolved value
        }
    );
The failure and success functions should look familiar if you've used the multiple argument mode that Promises support:

    new Promise((resolve, reject) => {...})
        .then(
            (result) => {},
            (error) => {}
        );

Being able to control when the Future executes allows developers to have more control over which methods are responsible for starting asynchronous operations. This has the benefit of limiting the number of places where side effects can occur in your code. This, in turn, makes for cleaner code that is easier to maintain for all developers who work in a codebase.
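
To make the laziness concrete, here's a toy Future (a sketch of the idea, not any library's internals):

```javascript
// Toy Future: the computation is stored at construction time and nothing
// runs until fork() is called.
class LazyFuture {
    constructor(computation) {
        this.computation = computation; // (reject, resolve) => void, stored, not run
    }
    fork(onReject, onResolve) {
        this.computation(onReject, onResolve); // side effects begin here, and only here
    }
}

const f = new LazyFuture((reject, resolve) => {
    console.log("working away!"); // does NOT run at construction time
    resolve("done");
});
console.log("log text");            // printed first
f.fork(console.error, console.log); // now "working away!" then "done"
```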

Better Chaining

In addition to giving developers the choice of when to execute, Futures also give developers a more functional way to manage their asynchronous event chains. They lend to better composability since multiple functions that all return Futures can have a single place where all of those operations are combined and run in sequence. It's akin to setting up dominos in order and then deciding when the first one is pushed.

Not only does this let you create smaller, reusable functions, but it also makes unit testing those functions a breeze. The unit test calls the function under test, gets back a Future, and can set up any mocks before calling the .fork() function to verify the result. Testing failure cases is just as easy: mock out the failure before calling .fork() and verify it in the failure callback.
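
As a sketch of that testing flow (loadGreeting and fetchFn are hypothetical names, and the Future here is any value with a fork method):

```javascript
// Function under test: returns a Future-like value. Because nothing executes
// until fork() is called, a test can inject a mock first.
const loadGreeting = (fetchFn) => ({
    fork: (onReject, onResolve) => fetchFn("/greeting").then(onResolve, onReject)
});

// Unit test: inject a mock, then fork and verify the outcome.
const mockFetch = () => Promise.resolve("hello");
loadGreeting(mockFetch).fork(
    (err) => console.error("failed:", err),
    (msg) => console.log(msg) // "hello"
);
```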

Futures also implement a number of chainable functions in addition to .fork() that make them much more ergonomic. Futures forego the single .then() method and expose a much more granular set of functions, each with a specific responsibility. There's a chain() method, which should return another Future, for chaining asynchronous operations together. map() is responsible for synchronous data manipulation, where the return value is the data passed to the next method in the chain. There are also methods responsible for handling specific error workflows. Instead of a single .catch() method, Futures expose multiple methods to handle and recover from failures. The mapRej() method allows you to synchronously modify any error that occurs, mapping it to a new error. chainRej() allows for attempting to recover from a previous error by returning a new Future. And there are many more functions available which let you more easily manage your control flow.

When you see a Future chain it might look like:

    Future.tryP(() => fetch("/api/endpoint"))
        .chain(() => ...)
        .mapRej(() => ...)
        .map(() => ...)
        .map(() => ...)
        .fork(() => ..., () => ...);

At a quick glance, the complexity of this code is much easier to understand. After our initial fetch call (where we wrap a Promise in a Future), we're doing another asynchronous operation in our chain() method, then we're mapping any errors within mapRej(), then we do two synchronous operations in the map calls, and finally we start actually processing by invoking the fork() function. Even though this is pseudocode, with a quick glance you already know that we have two asynchronous and three synchronous operations that are happening.


Conclusion

While Promises are a big improvement over callback-based solutions, we can still do better. All of the issues with Promises can be worked around, but why should we have to? Futures immediately provide a better developer experience when writing and reading code. Additionally, Futures can be incrementally introduced into a codebase and don't require a complete rewrite. In our work at IronCore Labs, we've had to interact with many Promise-heavy APIs such as REST endpoints, WebWorkers, WebAssembly, and the WebCrypto API. Wrapping these APIs with Futures allowed us to write clean, readable, and powerful code that is also fully unit tested.

In programming, incremental improvements in our day-to-day work can add up to enormous improvements over time. Introducing tools that not only give us better control of our complex code but also make it more self-documenting for the next developer is always a goal to strive for.

About Ernie

Ernie Turner is a Senior Developer at IronCore Labs. You can find him on Twitter and GitHub.

The contributors to JavaScript January are passionate engineers, designers and teachers. Emily Freeman is a developer advocate at Kickbox and curates the articles for JavaScript January.