One of the main selling points of Node.js is the fact that it's based on JavaScript and runs on V8, the engine that powers one of the most popular browsers: Chrome. We might think that this is enough to conclude that sharing code between Node.js and the browser is an easy task; however, as we will see, this is not always true. Unless we want to share only small, self-contained, and generic fragments of code, developing for both the client and the server requires a non-negligible level of effort in making sure that the same code can run properly in two environments that are intrinsically different. For example, in Node.js we don't have the DOM or long-living views, while in the browser we surely don't have the filesystem or the ability to start new processes. Most of the effort required when developing for both platforms goes into making sure those differences are reduced to a minimum. This can be done with the help of abstractions and patterns that enable the application to switch, dynamically or at build time, between browser-compatible code and Node.js code.
Luckily, with the rising interest in this new mind-blowing possibility, many libraries and frameworks in the ecosystem have started to support both environments. This evolution is also backed by a growing number of tools supporting this new kind of workflow, which over the years have been refined and perfected. This means that if we are using an npm package on Node.js, there is a good probability that it will work seamlessly in the browser as well. However, this is often not enough to guarantee that our application can run without problems on both the browser and Node.js. As we will see, careful design is always needed when developing cross-platform code.
In this section, we are going to explore the fundamental problems we might encounter when writing code for both Node.js and the browser, and we are going to propose some tools and patterns that can help us tackle this new and exciting challenge.
The first wall we hit when we want to share some code between the browser and the server is the mismatch between the module system used by Node.js and the heterogeneous landscape of module systems used in the browser. Another problem is that in the browser we don't have a require() function or a filesystem from which we can resolve modules. So, if we want to write large portions of code that can work on both platforms while continuing to use the CommonJS module system, we need to take an extra step: we need a tool that bundles all the dependencies together at build time and abstracts the require() mechanism in the browser.
In Node.js, we know perfectly well that CommonJS modules are the default mechanism for establishing dependencies between components. The situation in browser-space is unfortunately way more fragmented, with several competing module systems in use and many scripts still relying on plain global variables.
Luckily, there is a set of patterns called Universal Module Definition (UMD) that can help us abstract our code from the module system used in the environment.
UMD is not quite standardized yet, so there might be many variations that depend on the needs of the component and the module systems it has to support. However, there is one form that probably is the most popular and allows us to support the most common module systems, which are AMD, CommonJS, and browser globals.
Let's see a simple example of what it looks like. In a new project, let's create a new module called umdModule.js:
(function(root, factory) {                          //[1]
  if(typeof define === 'function' && define.amd) {  //[2]
    define(['mustache'], factory);
  } else if(typeof module === 'object' &&           //[3]
      typeof module.exports === 'object') {
    var mustache = require('mustache');
    module.exports = factory(mustache);
  } else {                                          //[4]
    root.UmdModule = factory(root.Mustache);
  }
}(this, function(mustache) {                        //[5]
  var template = '<h1>Hello <i>{{name}}</i></h1>';
  mustache.parse(template);
  return {
    sayHello: function(toWhom) {
      return mustache.render(template, {name: toWhom});
    }
  };
}));
The preceding example defines a simple module with one external dependency: Mustache (http://mustache.github.io), which is a simple template engine. The final product of the preceding UMD module is an object with one method called sayHello() that will render a mustache template and return it to the caller. The goal of UMD is integrating the module with the other module systems available in the environment. This is how it works:
1. All the code is wrapped in an anonymous function. This function receives as its first argument root, the global namespace object (window in the browser). This is needed mainly for registering the dependency as a global variable, as we will see in a moment. The second argument is the factory() of the module, a function returning an instance of the module and accepting its dependencies as input (Dependency Injection).
2. First, we check for the existence of the define function and its amd flag. If found, it means that we have an AMD loader on the system, so we proceed with registering our module using define and requiring the dependency mustache to be injected into factory().
3. We then check for the module and module.exports objects. If that's the case, we load the dependencies of the module using require() and we provide them to the factory(). The return value of the factory is then assigned to module.exports.
4. Finally, if none of the previous checks succeed, we register the module in the global namespace using the root object, which in a browser environment will usually be the window object. Also, you can see how the dependency, Mustache, is expected to be in the global scope as well.
5. At last, the wrapper function is immediately invoked, providing the this object as root (in the browser, it will be the window object) and providing our module factory as the second argument. You can see how the factory accepts its dependencies as arguments.

The UMD pattern is an effective and simple technique for creating a module compatible with the most popular module systems out there. However, we have seen that it requires a lot of boilerplate, which can be difficult to test in each environment and is inevitably error-prone. This means that manually writing the UMD boilerplate can make sense for wrapping a single module, but it is simply unfeasible and impractical as a practice for every module we create in our projects. In these situations, it is better to leave the task to a tool that can automate the process; one of those tools is Browserify, which we will see in a moment.
Also, we should mention that AMD, CommonJS, and browser globals are not the only module systems out there. The pattern we have presented covers most use cases, but it requires adaptations to support any other module system. For example, the upcoming ES6 module specification is something we might want to support as soon as it gets standardized.
You can find a broad list of formalized UMD patterns at https://github.com/umdjs/umd.
When writing a Node.js application, the last thing we want to do is manually add support for a module system different from the one offered by default by the platform. The ideal situation would be to continue writing our modules as we have always done, using require() and module.exports, and then use a tool to transform our code into a bundle that can easily run in the browser. Luckily, this is a problem that has already been solved by many projects, among which Browserify (http://browserify.org) is the most popular and broadly supported.
Browserify allows us to write modules using the Node.js module conventions and then, thanks to a compilation step, it creates a bundle (a single JavaScript file) that contains all the dependencies our modules need to work, including an abstraction of the require() function. This bundle can then easily be included in a web page and executed inside a browser. Browserify recursively scans our sources looking for references to the require() function, resolving them, and then including the referenced modules in the bundle.
Browserify is not the only tool we have for creating browser bundles from Node.js modules. Other popular alternatives are Webmake (https://npmjs.org/package/webmake) and Webpack (https://npmjs.org/package/webpack). Also, require.js allows us to create modules for both the client and Node.js, but it uses AMD in place of CommonJS (http://requirejs.org/docs/node.html).
To quickly demonstrate how this magic works, let's see what the umdModule we created in the previous section looks like if we use Browserify. First, we need to install Browserify itself, which we can do with a simple command:
npm install browserify -g
The -g option will tell npm to install Browserify globally, so that we can access it using a simple command from the console, as we will see in a moment.
Next, let's create a fresh project and try to build a module equivalent to the umdModule we created before. This is how it looks if we implement it in Node.js (file sayHello.js):
var mustache = require('mustache');
var template = '<h1>Hello <i>{{name}}</i></h1>';
mustache.parse(template);

module.exports.sayHello = function(toWhom) {
  return mustache.render(template, {name: toWhom});
};
Definitely simpler than applying the UMD pattern, isn't it? Now, let's create a file called main.js that will be the entry point of our browser code:
window.addEventListener('load', function() {
  var sayHello = require('./sayHello').sayHello;
  var hello = sayHello('World!');
  var body = document.getElementsByTagName("body")[0];
  body.innerHTML = hello;
});
In the preceding code, we require the sayHello module in exactly the same way as we would in Node.js, so no more annoyances managing dependencies or configuring paths: a simple require() does the job.
Next, let's make sure to have mustache installed in the project:
npm install mustache
Now comes the magical step. In a terminal, let's run the following command:
browserify main.js -o bundle.js
The previous command will compile the main module and bundle all its required dependencies into a single file called bundle.js, which is now ready to be used in the browser!
To quickly test this assumption, let's create an HTML page called magic.html that contains the following code:
<html>
  <head>
    <title>Browserify magic</title>
    <script src="bundle.js"></script>
  </head>
  <body>
  </body>
</html>
This is enough to run our code in the browser. Try to open the page and see it with your own eyes. Boom!
During development, we surely don't want to manually run browserify at every change we make to our sources. What we want instead is an automatic mechanism to regenerate the bundle when the sources change. To do that, we can use watchify (https://npmjs.org/package/watchify), a companion tool that we can install by running the following command:
npm install watchify -g
watchify can be used in exactly the same way as browserify; the two tools have a similar purpose and similar command-line options. The difference between the two is that watchify, after compiling the bundle for the first time, will continue to watch the sources of the project for any change and will then rebuild the bundle automatically, processing only the changed files for maximum speed.
The magic of Browserify doesn't stop here. This is an (incomplete) list of features that make sharing code with the browser a simple and seamless experience:

- It automatically provides browser-compatible versions of many Node.js core modules, so that we can use components such as EventEmitter, and many more in the browser!
- If the bundle references modules we don't want to include, we can exclude them from the build (--exclude option), replace them with an empty object (--ignore option), or replace them with another module providing an alternative and browser-compatible implementation (by using the 'browser' section in the package.json). This is a crucial feature and we will have the chance to use it in the example we are going to see in a while.
- It can wrap the generated bundle in a UMD header, so that it can be consumed by other module systems as well (--standalone option).
- It supports transforms, plugins that can process the sources before they are parsed for require(), enabling everything from minification to the compilation and bundling of other assets such as templates and stylesheets.

You can find a list of all the available transforms on the project's wiki page at https://github.com/substack/node-browserify/wiki/list-of-transforms.
The power and flexibility of Browserify are so captivating that many developers started to use it even to manage only client-side code, in place of more popular module systems such as AMD. This is also made possible by the fact that many client-side libraries are starting to support CommonJS and npm by default, opening new and interesting scenarios. For example, we can install jQuery as follows:
npm install jquery
And then load it into our code with a simple line of code:
var $ = require('jquery');
You will be surprised at how many client-side libraries already support CommonJS and Browserify.
A great resource for knowing more about Browserify and its capabilities is its official handbook that you can find on GitHub at https://github.com/substack/browserify-handbook.
When developing for different platforms, the most common problem we have to face is sharing the common parts of a component, while providing different implementations for the details that are platform-specific. We will now explore some of the principles and the patterns to use when facing this challenge.
The simplest and most intuitive technique for providing different implementations based on the host platform is to dynamically branch our code. This requires a mechanism to recognize the host platform at runtime and then dynamically switch the implementation with an if-else statement. If we are using Browserify, this is as simple as checking the variable process.browser, which is automatically set to true by Browserify when bundling our modules:
if(process.browser) {
  //client side code
} else {
  //Node.js code
}
Some more generic approaches involve checking globals that are available only in Node.js or only in the browser. For example, we can check the existence of the window global:
if(typeof window !== 'undefined' && window.document) {
  //client side code
} else {
  //Node.js code
}
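Conversely, we can probe for a global that exists only in Node.js. The following sketch (the process.versions.node probe is our own choice for illustration, not the only option) can be run directly with Node.js:

```javascript
// Runtime branching using a Node.js-only global:
// process.versions.node exists in Node.js but not in browsers.
var platform;
if (typeof process !== 'undefined' && process.versions && process.versions.node) {
  platform = 'node';     // Node.js-specific implementation would go here
} else {
  platform = 'browser';  // browser-specific implementation would go here
}

console.log('Running on: ' + platform);  // prints "Running on: node" under Node.js
```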
Using runtime branching for switching between the Node.js and browser implementations is definitely the most intuitive and simple pattern we can use for the purpose; however, it has some inconveniences:

- The code for both platforms ends up in the same module, and therefore in the final bundle, inflating its size with unreachable code.
- If used too extensively, it can considerably reduce the readability of the code, as the business logic gets mixed with platform-specific logic.
- Using dynamic branching to load a different module depending on the platform will result in both modules being included in the final bundle. For example, considering the next code fragment, both clientModule and serverModule will be included in a bundle generated with Browserify, unless we explicitly exclude one of them from the build:

if(typeof window !== 'undefined' && window.document) {
  require('clientModule');
} else {
  require('serverModule');
}
This last inconvenience is due to the fact that bundlers have no way of knowing the value of a runtime variable at build-time (unless the variable is a constant), so they include any module regardless of whether it's required from reachable or unreachable code.
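To see why both branches contribute their dependencies, consider a toy sketch of the static scan a bundler performs over the source text. This regex-based version is our own illustration and is far simpler than Browserify's real AST-based analysis, but it shows the key point: the surrounding conditions are never evaluated.

```javascript
// Toy static analysis: collect every require('...') that appears in the
// source text, with no attempt to evaluate the surrounding conditions.
function findDependencies(source) {
  var deps = [];
  var re = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
  var match;
  while ((match = re.exec(source)) !== null) {
    deps.push(match[1]);
  }
  return deps;
}

var source =
  "if(typeof window !== 'undefined' && window.document) {\n" +
  "  require('clientModule');\n" +
  "} else {\n" +
  "  require('serverModule');\n" +
  "}";

console.log(findDependencies(source));
// prints [ 'clientModule', 'serverModule' ] - both branches are collected
```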
Most of the time, we already know at build time what code has to be included in the client bundle and what shouldn't. This means that we can take this decision upfront and instruct the bundler to replace the implementation of a module at build time. This often results in a leaner bundle, as we are excluding unnecessary modules, and in more readable code, because we don't have all the if-else statements required by runtime branching.
In Browserify, this module-swapping mechanism can be easily configured in a special section of the package.json. For example, consider the following three modules:
//moduleA.js
var showAlert = require('./alert');

//alert.js
module.exports = console.error;

//clientAlert.js
module.exports = alert;
In Node.js, moduleA uses the default implementation of the alert module, which will log a message to the console; in the browser, though, we want a proper alert popup to show up. To do that, we can instruct Browserify to swap, at build time, the implementation of the alert.js module with clientAlert.js. All we need to do is add a section such as the following to the package.json of the project:
"browser": {
  "./alert.js": "./clientAlert.js"
}
This will result in every reference to the alert.js module being replaced with a reference to the clientAlert.js module. The first module will not even be included in the bundle.
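Conceptually, the swap boils down to a lookup in the 'browser' map during module resolution. The following toy sketch of our own (resolveForBrowser is a hypothetical name, not a Browserify API) shows the idea:

```javascript
// Toy sketch of build-time module swapping: before resolving a path,
// the bundler consults the "browser" map taken from package.json.
var browserMap = {
  './alert.js': './clientAlert.js'
};

function resolveForBrowser(path) {
  // Use the replacement when one is configured, the original path otherwise
  return browserMap[path] || path;
}

console.log(resolveForBrowser('./alert.js'));   // prints "./clientAlert.js"
console.log(resolveForBrowser('./other.js'));   // prints "./other.js"
```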
We can see how build-time branching is much more elegant and powerful than runtime branching. On one hand, it allows us to create modules that are focused on only one platform; on the other, it provides a simple mechanism to exclude Node.js-only modules from the final bundle.
Now that we know how to switch between Node.js and browser code, the remaining piece of the puzzle is how to integrate this within our design and how we can create our components in such a way that some of their parts are interchangeable. These challenges should not sound new to us at all; in fact, throughout the book we have seen, analyzed, and used patterns to achieve this very purpose.
Let's recall some of them and describe how they apply to cross-platform development:

- Adapter: this pattern is useful when an entire component has to be replaced on one of the platforms. For example, in the browser we don't have the fs module; wouldn't it be nice to have an adapter exposing the same fs interface?
- Proxy: when code meant to run on the server runs in the browser, we often need server-side resources to be available on the client as well. This is where a remote Proxy comes into play; imagine, for example, an fs object on the client that proxies every call to the fs module living on the server, using Ajax or WebSockets as a way of exchanging commands and return values.

As we can see, the arsenal of patterns at our disposal is quite powerful, but the most powerful weapon is still the ability of the developer to choose the best approach and adapt it to the specific problem at hand. In the next section, we are going to put what we have learned into action, leveraging some of the concepts and patterns we have seen so far.
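As an illustration of the remote Proxy idea, here is a hypothetical sketch of our own: an fs-like object on the client that forwards every call to the server. The transport is injected as a plain function, so in a real application it could be an Ajax request or a WebSocket message; the names createRemoteFs and fakeServer are our invention, not part of any library.

```javascript
// A client-side proxy exposing an fs-like interface; every call is
// serialized into a command object and shipped through the transport.
function createRemoteFs(transport) {
  return {
    readFile: function(path, callback) {
      transport({cmd: 'readFile', path: path}, callback);
    }
  };
}

// Example usage with an in-memory transport standing in for the server
var fakeServer = {'/etc/motd': 'welcome'};
var remoteFs = createRemoteFs(function(msg, callback) {
  // A real transport would send msg over Ajax/WebSockets and invoke the
  // callback with the server's reply; here we answer from a local map.
  callback(null, fakeServer[msg.path]);
});

remoteFs.readFile('/etc/motd', function(err, data) {
  console.log(data);  // prints "welcome"
});
```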
As a perfect conclusion for this section and chapter, we are now going to work on an application more complex than usual to demonstrate how to share code between Node.js and the browser. We will take as an example a personal contact manager application with very basic functionality.
In particular, we are only interested in some basic CRUD operations such as listing, creating, and removing contacts. But the main feature of our application, the one we are really interested in exploring, is the sharing of the models between the server and the client. This is actually one of the most sought-after capabilities when developing an application that has to validate and process data both on the client and on the server, which is what most modern applications actually need to do.
To give you an idea, this is how our application should look once it's completed:
The plan is to use a familiar stack on the server with express and levelup, Backbone Views (http://backbonejs.org) on the client, and a set of Backbone Models shared between Node.js and the browser to implement persistence and validation. Browserify is our choice for bundling the modules for the browser. If you don't know Backbone, don't worry; the concepts we are going to demonstrate here are generic enough to be understood without any knowledge of this framework.
Let's start from the focal center of our application: the Backbone models we want to share with the browser. In our application, we have two models: Contact, a Backbone Model, and Contacts, a Backbone Collection. Let's see what the Contact module looks like (the models/Contact.js file):
var Backbone = require('backbone');
var validator = require('validator');

module.exports = Backbone.Model.extend({
  defaults: {
    name: '',
    email: '',
    phone: ''
  },
  validate: function(attrs, options) {
    var errors = [];
    if(!validator.isLength(attrs.name, 2)) {
      errors.push('Must specify a name');
    }
    if(attrs.email && !validator.isEmail(attrs.email)) {
      errors.push('Not a valid email');
    }
    if(attrs.phone && !validator.isNumeric(attrs.phone)) {
      errors.push('Not a valid phone');
    }
    if(!attrs.phone && !attrs.email) {
      errors.push('Must specify at least one contact information');
    }
    return errors.length ? errors : undefined;
  },
  collectionName: 'contacts',
  url: function() {
    if (this.isNew()) return this.collectionName;
    return this.collectionName + '/' + this.id;
  },
  sync: require('./modelSync')
});
Most of the preceding code is shared between the browser and the server, namely, the logic for setting the default attribute values and their validation. Both the defaults attribute and the validate() method are part of the Backbone framework and are overridden to provide the custom logic for our model. We also added an extra field to the object, called collectionName, which will be used by the server for persisting the model in the right sublevel (we will see this later) and by the client to calculate the URL of the REST API endpoint (the url field).
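To make the role of collectionName concrete, here is the url() logic extracted into a plain standalone function (a sketch of our own for illustration; in Backbone, isNew() essentially checks whether the model has an id assigned yet):

```javascript
// The url() logic of the Contact model, extracted into a plain function.
// A model without an id is "new" and maps to the collection endpoint;
// a persisted model maps to its own item endpoint.
function contactUrl(model) {
  var collectionName = 'contacts';
  if (!model.id) return collectionName;
  return collectionName + '/' + model.id;
}

console.log(contactUrl({}));          // prints "contacts"
console.log(contactUrl({id: '42'}));  // prints "contacts/42"
```

These are exactly the URLs that the client-side sync() implementation will target with its AJAX calls.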
Now comes the best part: when a Backbone model is saved, deleted, or fetched (using save(), remove(), and fetch() respectively), Backbone internally delegates the task to the sync() method of the model. Sounds familiar? This is actually a Template pattern, and it's perfect for performing a build-time branching of our code. That's in fact where the models must behave differently based on the target environment:
- On the server: when save() is invoked, we want to persist the model in the database; similarly, with fetch(), we want to retrieve the model's data from the database, and with remove(), we want to delete it.
- On the client: we want save(), fetch(), and remove() to trigger an AJAX call to the server, which in turn executes the required operation and returns the result back to the client.

In the code fragment given earlier, the sync attribute is a function loaded from the modelSync module, which represents our server-side implementation of the method. This is how it looks (the models/modelSync.js file):
var db = require('../db');
var Backbone = require('backbone');
var uuid = require('node-uuid');

var self = module.exports = function(method, model, options) {
  switch(method) {
    case 'create':
      return self.saveModel(model, options);
    [...]
  }
};

self.saveModel = function(model, options) {
  var collection = db.sublevel(model.collectionName);
  var results = [];
  if(!model.id) model.set('id', uuid.v4());
  collection.put(model.id, model.toJSON(), function(err) {
    if(err) return options.error();
    options.success(model.toJSON());
  });
}
[...]
When the internals of the Backbone Model invoke the sync() method, three parameters are provided, as follows:

- A method parameter representing the action being performed (which can be one of the following: 'create', 'read', 'update', or 'delete')
- A model parameter, which is the object of the operation
- An options object that contains, among other things, a success callback to be invoked when the operation completes and an error callback to invoke if it fails

In the preceding code, we are showing what happens when we receive a 'create' request. As we can see, the saveModel() function is invoked, which saves the model into the database.
The sync() implementation we have just seen is meant to be executed only on the server, where we want to persist the data. Ideally, it could also work in the browser, because LevelUP has adapters for IndexedDB and LocalStorage, but that's not what we want in this example.
What we want instead is to persist the data on the server, and to do this we have to invoke a web service when an operation is performed on the model. This means that the modelSync module is not suitable for use in the browser, so we need a different implementation. Luckily, Backbone already provides a default implementation of the sync() method that does exactly what we need, so that's what we are going to use as the client-side implementation of the modelSync module (file: models/clientSync.js):
module.exports = require('backbone').sync;
That's it! The next step is to instruct Browserify to use the module we just created in place of modelSync when creating the client-side bundle. As we have seen, this can be done in the package.json file:
[...]
"browser": {
  "./models/modelSync.js": "./models/clientSync.js"
  [...]
}
The preceding few lines create a build-time branching, telling Browserify to replace any reference to the module "./models/modelSync.js" with a reference to "./models/clientSync.js". The modelSync module will not be included in the final bundle.
At this point, the Contact module should be isomorphic, which means that it can run transparently both in the browser and in Node.js. To show what this looks like, let's see how the model is used in the server routes (file routes.js):
var Contact = require('./models/Contact');
[...]
module.exports.createContact = function(req, res, next) {
  var contact = new Contact(req.body);
  contact.once('invalid', function(model, errors) {
    res.status(400).json({error: errors});
  });
  contact.save({}, {success: function(contact) {
    res.status(200).json(contact);
  }});
}
[...]
The createContact() handler builds a new Contact model from the JSON data received in the body of the request (a POST to the '/contacts' URL). Then, we attach to the model a listener for the invalid event, which triggers when its attributes do not pass the validation tests we have defined. Finally, contact.save() persists the data in the database.
As we will see, this is exactly what we do in the browser side of the application as well. This happens in the Backbone View responsible for handling the data submitted in a form (file client/ContactsView.js):
var Backbone = require('backbone');
var Contact = require('../models/Contact');
var $ = require('jquery');
[...]
module.exports = Backbone.View.extend({
  [...]
  createContact: function(evt) {
    evt.preventDefault();
    var contactJson = {
      name: $('#newContactForm input[name=name]').val(),
      email: $('#newContactForm input[name=email]').val(),
      phone: $('#newContactForm input[name=phone]').val()
    };
    var contact = new Contact(contactJson);          //[1]
    $('.error-container', this.$el).empty();
    contact.once('invalid', this.invalid, this);     //[2]
    contact.save({}, {success: function() {          //[3]
      this.contacts.add(contact);
    }.bind(this)});
  }
  [...]
});
As we can see, when the createContact() function is invoked (after the "new contact" form is submitted), we issue the exact same commands we used on the server:

1. We create a new Contact model from the form data.
2. We attach a listener for the invalid event, so that we can immediately display a message to the user if the data does not pass the validation.
3. We save the model, which on the client will trigger a POST request to the /contacts URL.
model is isomorphic and enables us to share its business logic and validation between the browser and the server!
To run the full sample distributed with the book, don't forget to install all the modules with:
npm install
Then, run Browserify on the main module of the client-side application to generate the bundle used on the browser:
browserify client/main.js -o www/bundle.js
Then finally, fire up the server with:
node app
Now, we can open our browser at the following URL to access the application: http://localhost:8080.
We can now verify that the validation is performed identically in the browser and on the server. To check this in the browser, we can simply try to create a contact with a phone number that contains letters, which will fail the validation. Then, to test the server-side validation, we can try to invoke the REST API directly with curl:
curl -X POST http://localhost:8080/contacts --data '{"name":"John","phone":"wrong"}' --header "Content-Type:application/json"
The preceding command should return an error indicating that the data we are trying to save is invalid.
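To see why that request fails, here is the model's validation logic re-expressed as a standalone function, with plain regular expressions standing in for the validator library's checks (a sketch for illustration, not the actual module):

```javascript
// The Contact validation rules, rewritten without the validator library.
function validate(attrs) {
  var errors = [];
  if (!attrs.name || attrs.name.length < 2) {
    errors.push('Must specify a name');
  }
  if (attrs.email && !/^[^@\s]+@[^@\s]+$/.test(attrs.email)) {
    errors.push('Not a valid email');
  }
  if (attrs.phone && !/^[0-9]+$/.test(attrs.phone)) {
    errors.push('Not a valid phone');
  }
  if (!attrs.phone && !attrs.email) {
    errors.push('Must specify at least one contact information');
  }
  return errors.length ? errors : undefined;
}

// The payload sent by the curl command above:
console.log(validate({name: 'John', phone: 'wrong'}));
// prints [ 'Not a valid phone' ]
```

The same function, run in the browser before submitting the form or on the server before persisting, rejects the payload for the same reason; that is the whole point of sharing the model.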
This concludes our exploration of the fundamental principles for sharing code between Node.js and the browser. As we have seen, the challenges are many and the effort to design isomorphic code can be substantial. In this context, it's worth mentioning that one big challenge related to this area is shared rendering, which is the ability to render a view on the server as well as dynamically on the client. This requires a much more complex design effort that easily affects the entire architecture of the application on both the server and the browser. Many frameworks tried to solve this ultimate challenge, which usually is the most complex in the area of cross-platform JavaScript development. Among those projects, we can find Derby (http://derbyjs.com), Meteor (https://www.meteor.com), React (http://facebook.github.io/react), and then Rendr (https://github.com/rendrjs/rendr) and Ezel (http://ezeljs.com), which are based on Backbone, similar to what we did in our example.