
March 18, 2013

Closure Tools in Sublime Text 2

I have been a big fan of the Closure Tools for a long time now. Being typed, they provide benefits that the fashionable young libraries (Backbone, Component, etc.) find hard to match.

On one side, those young libraries are very good at one thing, and should you need exactly that one thing, they are simply great: you get the job done in no time.

However, once you need to tweak a little bit here and a little bit there, or build abstractions on top of those base libraries, it becomes a real pain. I believe this is partially because all those libraries have their own style of code (how inheritance works, how the code is structured, static methods versus prototype methods, run-time type checking or none, etc.). This is all nice and fine; every author has their preferences, and as long as there is no standardization on the matter, everyone is forced to figure out on their own what to pick. This, however, makes it difficult for fellow developers to pick up just any library and start using it with an in-depth understanding of what is going on. More often than not the documentation describes the public methods but lacks a description of the internal architecture and the design behind it all. It is still possible to use the public APIs, but if you want to deviate from the initial design assumptions of the authors, you can get in big trouble. And because every library uses its own style and assumptions, it is hard to understand well all the bits and parts that you are using.

On the other side, we have the "old" monolithic libraries like Dojo and Closure.

Even though we call them monolithic, that is not quite true! The old beasts are actually very well modularized. Yes, you download the full-blown version and use that during development, but once you want to make a release there is a build step, very much like the step you are nowadays forced to have when you use small decoupled libraries. The difference is that with small decoupled libraries you try to make one big thing out of small things (some call it the UNIX way), while with the monolithic libraries the process is reversed: you state what you want to use and the rest is stripped out. The result is practically the same (well, not exactly, more on that later): you end up with exactly the code you need, no more and no less. However, up until the arrival of tools like Yeoman, Bower, etc., you had to hunt down those little bits of code yourself. Even now not every such bit is easily discoverable; most components require additional steps to become readily available in those environments. More often than not, one and the same functionality is provided almost identically but in a very different style and package, so additionally a choice has to be made, and once that choice is made it is hard to divert from it even between projects, let alone in the same project.

So basically, because the end result can be considered the same and because developers tend to reach for what they already know, it is not so much a technical choice as a personal preference. I would argue that the Closure Tools provide some things on top of the rest of the libraries (except maybe TypeScript), but this has been discussed many times on this blog.

What I want to show today is how to use closure tools in sublime text 2.

While tools exist to support development with the Closure Tools (namely the Eclipse Closure plugin, WebStorm and maybe others), those are written in Java (not necessarily a bad thing; it just so happens that they are large and somewhat slow to use, IMO) and require more resources than many web developers are willing to give up just to have a smart editor. On top of that, customization is not as simple a step as with ST2. So we will assume that Sublime is already your favorite editor and you want to use it to develop with the Closure Tools.

The most important part of that is type checking of the source. However, Closure projects are configured to encompass many libraries and folders, which makes it easier if a tool is used to organize them. I will show you how to use make and a little piece of shell scripting to run a compile check on any JavaScript file in your project. More about the project structure used in this example can be found here.

First we need a small script that detects the namespace of the file we are currently editing and then calls the compiler with the needed type checks. Here is the script (it might have suboptimal parts in it; I do not pretend to know bash well enough):


#!/bin/bash
FILE=$1
PROVIDE=`grep goog.provide "$FILE" | sed -e "s/goog.provide('\([^']*\).*/\1/" | head -n 1`
make -C "$2" check NS=$PROVIDE

Save the file somewhere in your PATH (that is, one of the directories searched for executable files) and make sure to set the execution flag on the file. Basically, what it does is scan the currently open file in your editor for goog.provide symbols and run the make program with the check namespace set to the first match.

The next step is to configure the build system in Sublime. From the menus select New Build System and paste the following snippet in it:

{
  //"path":  "/home/pstj/Documents/System/Library/Bin/:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games",
  "cmd": ["GCCCheck.sh", "$file", "$project_path"],
  "file_regex": "^(.*):([0-9]+):() WARNING - (.*)",
  "selector": "source.js"
}

What it does is attach the build system to JavaScript source files and run the named command on them when Ctrl+B / Cmd+B is pressed. The path can additionally be configured to match your system PATH and/or the path in which the bash script file was saved. Note that the cmd variable's first parameter is the name of the file we created earlier.

The last step is to configure make to understand the 'check' target. This is easily done if you already use a make file; if not, create one and add the target as if a real compilation were to be made, but redirect the output to /dev/null. Here is an example:
check:
	python ../../library/closure/bin/build/closurebuilder.py \
		-n $(NS) \
		--root=js \
		-f --js=build/deps.js \
		-f --flagfile=options/compile.ini \
		-o compiled \
		-c $(COMPILER_JAR) > /dev/null
Add roots and flags as needed. Here is my compile.ini file:
--jscomp_warning accessControls
--jscomp_warning ambiguousFunctionDecl
--jscomp_warning checkTypes
--jscomp_warning checkVars
--jscomp_warning visibility
--jscomp_warning checkRegExp
--jscomp_warning invalidCasts
--jscomp_warning strictModuleDepCheck
--jscomp_warning typeInvalidation
--jscomp_warning undefinedVars
--jscomp_warning unknownDefines
--jscomp_warning uselessCode
As stated at the beginning of this section, this setup will only work if a certain project structure is followed; however, I believe it is easy enough to customize to make it applicable to any Closure Tools project.

In addition, if strict styling rules are required for your project, there is the sublime-closure-linter package for Sublime. I prefer to use make files to build GSS files and templates, but there are Sublime packages for that too.

So what we have as a result is the ability to run any namespace through the compiler checks and figure out in time if we are violating a contract or messing up types / methods. One example from today: I attempted to use the redner method on a PopupDatePicker. Sounds all normal and natural, and I was sure it was in the API. However, the compiler was smart enough to tell me that there is no such method (it was mistyped). The other day I had a value set as a string while a number was expected. You very well know what will happen if I try to use it in calculations (NaN). What this setup allows me to do is have checks run from any entry point in my namespaces without actually typing the namespace. A basic workflow would be: add a new file, edit it, save it, run the build process, fix type errors, commit. That should do for now.
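To illustrate the kind of mistake these checks catch (the function and names below are mine, for illustration only): a Closure-style @param {number} annotation lets the compiler flag a string argument at build time, while plain JavaScript silently misbehaves at run time.

```javascript
/**
 * With checkTypes enabled, passing a string here produces a
 * build-time WARNING. (Illustrative example, not project code.)
 * @param {number} lives
 * @return {number}
 */
function decrement(lives) {
    return lives - 1;
}

// At run time nothing complains, the math just silently goes wrong:
var fromServer = '10';                  // a string where a number was expected
var looksFine = decrement(fromServer);  // 9, coercion masks the mistake
var concat = fromServer + 1;            // '101', concatenation instead of addition
var broken = fromServer * undefined;    // NaN once a field is missing
```

The compiler catches the bad call site long before the NaN surfaces somewhere far from its cause.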

One thing missed by and large when it comes to web development with JavaScript is IntelliSense. I can understand that and I sympathize. However, the big IDEs have other drawbacks (Sublime is many times faster, especially on regular Joe machines), and the IDEs are not perfect either. For example, the Eclipse plugin does not correctly understand scoped variables; even though the issue is closed, it still does not work. WebStorm has its issues too. There are new projects trying to address that, but until then, having consistent APIs across a large amount of ready-to-use library code (as in a monolithic library) is something I find more pleasing.

March 07, 2013

Classical versus Prototype Inheritance in JavaScript

A lot has been written lately about the benefits and risks of using prototype inheritance versus classical inheritance pattern in JavaScript.

On multiple occasions it has been shown that the classical pattern works faster and uses less memory than anything else out there. It is also the most often used pattern currently, which sort of guarantees that your code will be compatible with other people's code. A recent blog post even measured the exact memory consumption of the classical pattern versus Object.create (sorry, I could not find the relevant link now, but basically the memory overhead was there, though not as big as in other patterns).
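As a rough sketch of the two patterns being compared (the Animal example is mine, not taken from the benchmarks mentioned):

```javascript
// Classical pattern: constructor function plus prototype methods.
function Animal(name) {
    this.name = name;
}
Animal.prototype.speak = function() {
    return this.name + ' makes a sound';
};

// Prototype pattern: a plain object used directly as the prototype.
var animalProto = {
    speak: function() {
        return this.name + ' makes a sound';
    }
};
function makeAnimal(name) {
    var animal = Object.create(animalProto);
    animal.name = name;
    return animal;
}

var classic = new Animal('Rex');
var protoBased = makeAnimal('Rex');
// Both share methods through the prototype chain; the debate is about
// allocation speed and memory, not about observable behaviour.
```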

However, there are proponents of the prototype inheritance pattern in the community as well. On multiple blogs one can see examples of its usage and encouragement to try it out. In many instances, though, the cited benefits are dubious and are refuted by proponents of the classical pattern.

Coming from the Closure library, the classical pattern was de facto the only possible solution for my code until very recently. However, these days I have the chance to once again write code exclusively for modern browsers. I decided to explore what would be possible if I used all the 'wrong' and 'dangerous' patterns to make my code simpler and easier to use.

For the exercise I will have a fictional data server that returns a list of records I want to work with. The very basic use case of just retrieving the data and having it ready for manipulation will be investigated. So, we have a server call that returns an array of objects, and basically I want to apply behavior on top of it.

Our server data will look like this:
var serverData = [{
    id: 1,
    name: 'Peter'
}, {
    id: 2,
    name: 'Desislava',
    lives: 10
}];

There are two classical approaches to this (mostly utilized in the 'data' frameworks these days).

1. Wrap the data: basically execute the logic as a constructor function and put the data inside it (for example as this.data = serverData), then operate over the data with access methods (either general ones, like get('name'), or named ones, like getSomething/setSomething). When the data needs to be saved on the server, the stored value (this.data) is used.

2. Use a functional approach: operate on the data via functions defined specifically for it and pass the data record to every function call. To save the data back on the server, pass the data directly as it is.
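The two approaches above could be sketched roughly like this (Record, getName, setName are made-up names for illustration):

```javascript
// 1. Wrap the data: a constructor holds the record and exposes accessors.
function Record(data) {
    this.data = data;
}
Record.prototype.get = function(key) {
    return this.data[key];
};
Record.prototype.set = function(key, value) {
    this.data[key] = value;
};
Record.prototype.toServer = function() {
    return this.data; // the stored value is what goes back to the server
};

// 2. Functional approach: free functions receive the record on every call.
function getName(record) {
    return record.name;
}
function setName(record, name) {
    record.name = name;
}

var wrapped = new Record({id: 1, name: 'Peter'});
wrapped.set('name', 'Pesho');

var plain = {id: 2, name: 'Desislava'};
setName(plain, 'Desi');
// wrapped.toServer() and plain can both be sent back to the server as-is.
```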

I was interested in a more 'natural' approach: take the literal objects created by the JSON parsing, slap the behavior on top of them without actually polluting the data, and pass the augmented data back to the server directly (i.e. a combination of the two approaches above).

For this to work we have to be able to:
a) slap the methods on top of a literal object
b) have pure prototype inheritance to tweak the behavior easily

I came up with this:

Object.createPrototype = function(proto, mix) {
    function F() {}
    F.prototype = proto;
    var newProto = new F();
    mix.forEach(function(item) {
        Object.mixin(newProto, item);
    });
    return newProto;
};

Object.mixin = function(obj, mix, restrict) {
    for (var k in mix) {
        if (typeof mix[k] == 'object') continue;
        if (restrict && obj[k] != undefined) continue;
        if (k == 'applyPrototype') continue;
        obj[k] = mix[k];
    }
};

Object.prototype.applyPrototype = function(proto) {
    if (this.__proto__ != Object.prototype && this.__proto__ != Array.prototype) {
        throw new Error('The object is already typed');
    }
    this.__proto__ = proto;
};

Object.createInstance = function(that, proto, def) {
    that.applyPrototype(proto);
    if (typeof def == 'object')
        Object.mixin(that, def, true);
    if (typeof that.initialize == 'function') {
        that.initialize();
    }
    return that;
};
What this allows me to do with the server data looks like this:
// define some defaults (in case the data on the server could have nulls)
var defs = {
    lives: 10
};

// Define basic behaviour
var Base = {
    update: function(data) {
        if (this.uid == data.uid) {
            Object.mixin(this, data);
        }
    },
    get uid() {
        return this.id;
    }
};

// Upgrade the behaviour
var Sub = Object.createPrototype(Base, [{
    kill: function() {
        this.lives--;
    }
}]);

// helper function to process an array
function processData(data) {
    data.forEach(function(item) {
        Object.createInstance(item, Sub, defs);
    });
    return data;
}

// Upgrade to an array that knows how to handle our data types
var MyArray = Object.createPrototype(Array.prototype, [{
    getById: function(id) {
        return this.map[id];
    },
    initialize: function() {
        this.map = {};
        this.indexMap = {};
        this.forEach(function(item, i) {
            this.map[item.uid] = item;
            this.indexMap[item.uid] = i;
        }, this);
    },
    add: function(item) {
        if (typeof item.uid == 'undefined') {
            Object.createInstance(item, Sub, defs);
        }
        Array.prototype.push.call(this, item);
        this.map[item.uid] = item;
        this.indexMap[item.uid] = this.length - 1;
    },
    push: function() {
        throw new Error('Use the "add" method instead');
    },
    pop: function() {
        throw new Error('Use "remove" instead');
    },
    update: function(item) {
        if (typeof item.uid == 'undefined') {
            Object.createInstance(item, Sub, defs);
        }
        this.getById(item.uid).update(item);
    },
    remove: function(item) {
        if (typeof item == 'number') {
            this.splice(item, 1);
        } else {
            if (item.uid == undefined) {
                Object.createInstance(item, Sub, defs);
            }
            this.splice(this.indexMap[item.uid], 1);
        }
    }
}]);

// Make a new type that has next and previous.
var Linked = Object.createPrototype(MyArray, [{
    initialize: function() {
        MyArray.initialize.call(this);
        this.index = 0;
    }
}]);

Object.defineProperty(Linked, 'next', {
    get: function() {
        if (this.length > this.index + 1) {
            return this[++this.index];
        } else return null;
    }
});

Object.defineProperty(Linked, 'previous', {
    get: function() {
        if (this.index - 1 >= 0) {
            return this[--this.index];
        } else return null;
    }
});

// Process the server data to create local data and work with it.
var clientData = Object.createInstance(processData(serverData), Linked);
clientData.add({
    id: 3,
    name: 'Denica'
});
clientData.update({
    id: 3,
    name: 'Ceca'
});
clientData.remove({
    id: 3,
    name: 'Ceca'
});
var myNextObject = clientData.next;
JSON.stringify(clientData); // returns the actual array only, without any of our custom props.
// [{"id":1,"name":"Peps","lives":9},{"id":2,"name":"Des","lives":3}]

This is not much of an improvement over the classical approaches yet, because, for example, I still cannot update an item in the collection directly with values (i.e. clientData[1] = {id: 2, name: 'Another name'}), which could be achieved if I simply hid the array inside a wrapper object. However, I believe that for the objects inside the list it is a great improvement to be able to just have the behavior stamped on top while keeping natural access to all properties (i.e. data.property1.property2). This again is not an ideal situation, because updates are not caught (i.e. bindings will be harder to implement), but that can be resolved by the proposed observer/mutator APIs.
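For completeness, here is a minimal sketch of how such updates could be caught with the Proxy API mentioned in the clarification at the end (assuming an environment that supports it; observable and onChange are made-up names):

```javascript
// Wrap a plain record in a Proxy so every write is reported to a callback,
// while the underlying data object stays plain and serializable.
function observable(record, onChange) {
    return new Proxy(record, {
        set: function(target, key, value) {
            target[key] = value;
            onChange(key, value);
            return true;
        }
    });
}

var changes = [];
var rec = observable({id: 1, name: 'Peter'}, function(key, value) {
    changes.push(key + '=' + value);
});
rec.name = 'Pesho';
// changes now contains 'name=Pesho', and JSON.stringify(rec) is still plain data.
```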

Again, this is not a real-world scenario, just me playing a little bit with literal objects as pure data, but it is an interesting experiment in the sense that I have always wanted to be able to merge data with logic without too much fuss. What is accomplished in this solution is that we have the data (importantly, ready to be submitted back) and the logic is simply an object we can play with and model, including at run time. All this is possible with other approaches as well, but this one is interesting because we do not actually create instances; instead we use the original data instances all the time and simply apply logic on top of them. In conclusion, this is like merging the two classical approaches: keep the data instances but have logic bound to them.

DO NOT use this in your code! __proto__ is not standard, and getters/setters as well as defineProperty are not widely supported!

CLARIFICATION: I am aware that the same effect could be accomplished with the new Proxy API, so thanks for reminding me; I already know it :)