
November 02, 2014

Little red ranting hood... in the JavaScript forest.

This is my rant about all the 'fixing JavaScript' "effort" that has been going on for some time now. The post is pretty long, even for me, and it really is just a rant I do for fun sometimes. In fact, I have used or still use some of these technologies, so don't grab your torches and head to my house yet...


CoffeeScript


First there was CoffeeScript. Oh my God - what a brainfart it is! You take the JavaScript way of doing things, then you slap Python-like syntax on top of it, and then, even better, you slap 'custom passes' on top of that. So in the end you write in a language that no virtual machine understands, that no online editor can help you edit, and in files that no JavaScript developer ever wants to 'figure out', because your syntax is way off from the language you are actually targeting while at the same time being exactly the same language with messed-up syntax. There are no 'CoffeeScript developers'; those are just JavaScript developers with an aneurysm. And you cannot just run your code (oh yeah, I know about 'compile on save' - so let me ask you: which will be faster, 'reload on save' or 'compile on save'? Wrong question, you cannot actually do either; you have to reload after the compile...).
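To illustrate the 'same language with messed-up syntax' point: the CoffeeScript one-liner `square = (x) -> x * x` compiles to roughly the JavaScript below (a hand-written approximation, not actual compiler output):

```javascript
// Roughly what `square = (x) -> x * x` turns into after compilation
// (hand-written approximation, not real CoffeeScript compiler output):
var square = function (x) {
  return x * x;
};

console.log(square(4)); // 16
```

Same semantics, same quirks, different spelling - which is exactly why 'figuring out' someone else's CoffeeScript buys you nothing.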

Now some people who like to call themselves experts in JavaScript, Closure (and, surprisingly, in something that does not really exist - CoffeeScript (OMG, there are people who teach this!!!)) would go even further and add a pass that converts to 'Closure-compiler-compatible' JavaScript. Let's review: you write in something that is fictional, then translate it to something that is halfway there, then you have to compile it (if only to check types), and then you have to test in both development mode (source files that are not actually sources but translation results) and production mode (translation results, compiled). Let's just say that this might get a bit slow, and that it might be hard to set up this workflow in any kind of environment, let alone an online one.

But that is not all, ladies and gentlemen, it gets better - those same sick people are now "helping you" when you have a JavaScript question with code snippets written in CoffeeScript!! Yes, like crazy people, they assume you can see the dragons that exist only in their heads. Well, that's messed up...

Dart

Going further in time - Dart was born. A completely new language that is so much more powerful and faster and better and shinier. If only it could work in browsers... but it cannot. You are still debugging JavaScript in the end. There is (as with Coffee-whatever) virtually no IDE support for it; you are stuck with DartEditor (which you might find nice, unless you want code collaboration, or online editing, or something that does not require a rocket-fuel-powered computer to run at an acceptable speed). Oh, you say there is support for Sublime? Really? Have you tried it? The 'support' is basically a born-again approach from the end of the '70s. Yes, you read that right - 35 years aaaggggooooo! You save your file, then a script runs, then the output of that script is collected and presented to you (in a surprisingly unhelpful way). There is no real intelligence in the completion - or there 'should be', but it is not working. Same goes for another brilliant project - WebStorm.

Assuming you are fine with IDEs written in Java (which most of us are not - after all, I am developing on the web, for the web; Java exited the web like a decade ago and is slowly turning into a corpse on the server as well), the Dart 'support' in WebStorm is somewhere between unsatisfying and completely unreliable. Oh, and the last build is broken as well (mind you, the product costs MONEY!), so you will have to either downgrade or wait for a fix... because we forgot to include complete intellisense for Dart, because... well, because on large projects we just like to crash the whole IDE, "but on the other hand we improved the start-up speed, so you are on the right track", don't worry, keep paying...

Let's say you like the '70s: the music was great and the IDEs were non-existent; real men wrote code with text editors. Like Vim. As if scrolling is not a real thing - right, it is much faster to type gg/goog^M^[jjjVapk:sort:w - right! Your hands do not have to move, like, at all. It would have been sooo great if we as humans had evolved without the lower parts of our bodies (well, maybe the penis should stay) and maybe... hum... little hands (like the ones found on T. rex) with lots and lots of fingers. Then maybe Vim would have been the ultimate and final - best of all, end of the universe - editor. You could sit (well, you would always sit without the lower part of your body, but would it be called sitting?) all day long and use only your fingers; we would not need shoulders because we would not need to move our hands, we would just type and be really 'fast and productive'. I see paradise... but then Dart would have come... Well, the Dart story in Vim is sad, just like with Sublime Text. And that paradise would have been ruined. Just like Dart feels ruined on Vim...

But otherwise Dart is great. It's like 'twice as fast' as JavaScript (well... on benchmarks; in the real world it is twice as slow, but don't be discouraged, we will get there - give it 15 years or so of evolution and I can almost guarantee you it will be as fast on the server as... Java). In the browser... YES, it is twice as fast there as well... oh well, not really, because V8 catches up and now it is only like 20% faster... on benchmarks. But we cannot really use it in the browser, because no browser actually supports it, so we have to compile to JavaScript, and in theory it should run as fast as or faster than hand-written JavaScript, but... it does not. At least not when someone who actually knows what he/she is doing writes the JavaScript.

But it's fine; as the development story goes, it is really fun to write in Dart. You 'save and reload' just like in JavaScript. Well, not really, because you have to use Chromium (which is supposed to be Chrome but is not really, and has bugs that are not found in Chrome), and you have to figure out whether the problem is your code (with CSS, for example) or the browser's quirks. And then you still have to test in all browsers, because, you know... it is not really clear whether the generated JavaScript works everywhere; it is supposed to, but... who knows. I really liked the video where the Dart developers explain how trivial the generated class code is, how it is almost a one-to-one transpilation from Dart to JS. Well, it is not. That is just a lie; you can read the generated file all day and still you won't be sure how the result maps to your original code. At the end of day 3 you will figure it out eventually, but is it worth it? The problem is really simple: Dart does not compile to JavaScript idioms; it compiles to a JavaScript blob that internally emulates how Dart works. Trying to read/fix the compiled code is almost like trying to fix the output of a C compiler. Pointless. You have to go and 'fix up' your code (which works fine in Dart/Dartium) to make the compiler happy(er). If you are still on the Dart bandwagon at this point (and still reading the Dart passage), you might need a dominatrix in your life. There is a lot of internalized pain you need to let out...


TypeScript

Microsoft (I always liked that name, it is so '80s, like Sun Microsystems... omg!) did not want to be left behind in this new era of computing. They wanted to appeal to the new type of developers who like the web and want to stay on it as much as possible, and even develop inside of it (can you imagine that, William?). So how about we take a guy who, at his peak, did do some good stuff, and let him create a 'superset' of JavaScript that looks like ES6 (but is not really, and cannot be run as such even if the browsers support it) and slap a type system on top of it. Well, the type system should really be fun: it should be something that is half useful and half making the JavaScript developers go grey and die off, so we can reignite the Windows era (if possible, please... pretty please...?). So TypeScript was born. Have you tried it? I guess you did!

The problem is that TypeScript tries to add types to functional code (i.e. not object-oriented code) and presents this as 'safety'. There is, however, one simple catch: most functional code that already exists deals with JavaScript's lack of types in a funny way, so you have functions with 10 different signatures. I especially like the ones where the middle arguments are optional; you basically cannot infer the types based on their positions as arguments. This is especially funny in jQuery, where the number of arguments does not really imply the types of those arguments. Have you seen the definitions for jQ? What is really funny is that MS embraced jQuery as the main library (kind of like a standard library in TypeScript), hoping to appeal to a larger audience, and as a result no one really writes OO JS in TypeScript; instead everyone writes in a functional style, impales themselves on the type system, and waits for a slow death. Does the type system in TS do something useful? Well yes, of course: it made MS learn NodeJS. But no, I will not rant about Node.js in this post; here I am only targeting the 'JS augmentations'. Another problem Microsoft did not foresee with TypeScript was that JavaScript developers rarely know anything about types.
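A minimal sketch of the pattern that gives the type system such a hard time (the function name and signature here are made up, merely jQuery-style): the meaning of each argument depends on its runtime type, not its position.

```javascript
// Hypothetical jQuery-style function: the middle argument is optional,
// so the handler may arrive in either the second or third position.
function on(eventName, selector, handler) {
  if (typeof selector === 'function') { // middle argument was omitted
    handler = selector;
    selector = null;
  }
  return { eventName: eventName, selector: selector, handler: handler };
}

// Both calls are legal, and only runtime type checks can tell them apart:
on('click', '.button', function () {});
on('click', function () {});
```

Try writing a type declaration for that and you see the overload explosion the jQuery definition files are full of.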

Even if they know something about OOP, OOP in JavaScript is a different story. There are (really fat) books about OOP in JavaScript, and the fun part is that they present so many patterns and explain so many things that are not actually useful in practice for one reason or another - they are purely academic studies of the language's capabilities and how far you can push it - that I have been asked questions taken directly from the 'check your knowledge' sections of those books in job interviews!

I really enjoy citing the author, book title and page the question was taken from to the 'technical' interviewer, and then telling them that if their company is using any of those absurd patterns, I will not work there. It always gets to them: it distracts the 'knowledgeable' interviewer from you and wakes up some internal fear that they themselves are not good enough, which automatically makes you appear more knowledgeable. What do they say? Play the player, not the game?

Sorry for the digression. Where was I? Oh yeah, TypeScript. The one thing they got right (shorter syntax for classes) they killed with the type system. Every other downside of CoffeeScript applies to TS as well: you still need to transpile just to test-run it. Fortunately the results are much more predictable, but the lack of real support from tools (other than Visual Studio) simply repulses most developers. I know developers who use TypeScript. Can you guess what their specialty is? Hang on - it's .NET! That's right, TS appeals mostly to people who already use VS... ah, the irony...

AtScript

This is the latest dumpling from Google. I like how Google is shooting in all directions these days; it makes them look really desperate. I also like it when they send someone to do a technical talk and that someone says something Google did not expect, but the talk was recorded and ends up on YouTube, only to 'disappear' a week later when they realize what has happened. If you wonder why I will never use Angular? Well, they did a tech talk. Yes, about that 'first large internal project'. First of all, it was done with Dart, which is already a red flag; then the guy said that 'we had to rewrite/do custom versions of pretty much all directives because the built-in ones were too slow for our use case' - that really pinpointed it. It really said: "the idea is great on paper, but not applicable in practice". Well, the idea is so great they decided to iterate on it again, hoping that this time something will be done right. Finally some teams at Google talked to each other and someone said:

Sooo, we have those great things (decorators/annotations) in Dart that no one really knows how to use, but they look nice and give a legitimate use for the '@' sign, which is a pretty cool sign, right? So, you know, how about we slap it inside JavaScript as well. It should be fun.

And the other team said:

Cool, let's make some interns proud of themselves and give them something to live for; let them do a new language. But we will not call it a new language, because of some jokes circulating around the Internet - something about problems and the number of solutions... don't look at me like that, I do not spend my time browsing comics, I DO work... sometimes, soo... okay, where was I? Ah yes, let's do this 'superset' thing that Microsoft did. We will take ES5, then slap ES6 on top of it, then slap TypeScript on top of that, and then we will put the annotations on top of it all.

Oh, oh, oh, wait wait - someone wrote an assertion library, and management said that if we do not use it for something they will fire the guy, and he is like a great and nice guy, so we have to use it for something, right, so he does not get fired... right? Can we do that, pleeease!

At this point everybody is looking suspiciously at the person who said that last line, assuming a work-related romance/bro-mance/gay-mance/whatever, but they decided to go with it anyway, and as a result we have a new 'superset' that has everything the poor old bastards at Microsoft had, but with annotations and runtime checks. Ladies and gentlemen, I present you AtScript. And because it was a talk between the Dart Angular team and the normal Angular team, management would be really happy if this new thingy were 'beneficial' for both projects, so let's make it compilable to both Dart and JavaScript. After all, we can always market it as a good thing...

So now we not only have to write JavaScript and an 'almost HTML, but not really'; we also have to compile it before we run it. Oh, oh, oh, and you know what, the Closure team has been doing this Angular pass for a year now - let's include them in the party, let's run the compiled JavaScript through another compiler, that would be fun!

Yey, Google is such a wonderful place to work at: you can take something, mix it with something else, talk to other teams and make really ridiculous things, and we can always expect people to like it - especially developers, because Google is such a magical word for developers; we can actually get away with anything if we figure out a way to make an internal project with it...

So now we have AtScript. Of course there will be IDE support... somewhere in the future... let's say we will make a bash script that monitors files for changes and automatically compiles them to both Dart and JavaScript - how about that? But that is down the road; for now you can happily use Vim or anything else, because, you see, there is really no such language as AtScript. It is 'a superset', and as such no tool, no IDE, no virtual machine really understands it; it is simply a dumb text file. But you can use it - Google uses it, so it must be great! Oh, and by the way, we are not really compatible with ES6, nor with TypeScript, but we will get there as well, I promise!

Did I miss something?
Let me know in the comments:)

September 02, 2014

Riding the Polymer wave

Comparative analysis of the developer experience for creating a multi-screen, single page application complete with persistence, animations and rich client data management w/ filtering. 

This piece will describe the development process of creating a single page application that harnesses multiple REST resources to compose a cohesive user application with multiple views, data filtering and persistence, using the Polymer library. The process is described as a direct comparison with the process of building a very similar application using the Closure library and tools collection. The analysis is made in an effort to point out the pros and cons of migrating from Closure tools and libraries to the modern Polymer project as a framework/tool set for building such applications.


The post is quite lengthy so if you are looking for a quick summary scroll down to the bottom of it.

I will start with some background information: I have been building large scale single page applications for the last 5 years. I have been using many different tools for that purpose, for one reason or another (some of those reasons being: the project had already been started with a particular framework that does not play really well with anything else, or the project required a really small footprint as it would be served many, many times without caching, and others). Some of the tools I have used extensively include: MooTools (archaic, I know), jQuery (ridiculous, I know!), GWT (Java, I know), Closure library and tools (templates, CSS pre-processing, compiler) (really, really verbose, I know), TypeScript (just a toy, I know), Dart (not really what it is portrayed to be, I know) and others - on occasion.

From all of those, the most reliable and most trustworthy one thus far, at least in my use cases, was Closure. Google really put effort into it. The main problem with Closure is that it was designed with several things in mind that tend to trip up JavaScript developers coming from any other framework/library:


  • designed to look and feel a lot like Java
  • designed to be production useful explicitly only with the Closure compiler 
  • designed to be compatible with old old old browsers
  • designed for internal use primarily and thus lacking the polishing of the other competitors out there


Of those, probably the biggest obstacle one could have with Closure is the fact that it is designed to actually use types, while JavaScript is designed not to. This is also why TypeScript is used mostly as a transpiler these days and much less as a compiler - types are not something JavaScript developers want/need to deal with.

Assuming the developer has the time and willpower to learn the basic ideas inside Closure (the types, the fact that you have to be able to imagine everything the compiler will do to your code in order to write something that will still work after compilation, the fact that you actually need to test both source and compiled versions) - those great ideas and the advantages they bring can become part of your daily coding routine.

However, even if you become a master of imaginative compilation and are able to write code without errors (coming from the compiler), that will never mitigate the fact that to write Closure-compatible code you have to always be very, very verbose. You actually have to write so much boilerplate that even Google attempted to come up with shorthand versions (mostly I am talking about goog.scope). Those efforts are, however, nothing compared to the amount of code one needs to write to get going. This includes the type information (obligatory - the compiler is still not smart enough to infer all types from the code directly, and probably never will be), the namespaces, and the fact that you need to include all used namespaces at the beginning of your file. Also, from my experience, it is very possible to end up with slightly overlapping functionality just because the design of a super class does not allow for some particular alterations (a property being important for your subclass but private in the parent class, etc. - no, you cannot alter the source code, and no, you cannot just access it: the compiler will scream at you, and who knows at which point the compiled code will stop working just because you are accessing something that is marked as private and the compiler can remove it as it sees fit...).
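As a taste of the obligatory type comments (an illustrative snippet, not code from the Closure library itself): several lines of annotation for a one-line function is exactly the kind of verbosity described above.

```javascript
/**
 * Closure-style annotation: the compiler type-checks the call sites
 * against this comment, but you pay for it in boilerplate.
 * @param {number} price The net price.
 * @param {number} ratePercent Tax rate as a percentage, e.g. 20.
 * @return {number} The gross price.
 */
function applyTax(price, ratePercent) {
  return price + (price * ratePercent) / 100;
}
```

Multiply this by every function, namespace declaration, and goog.require line in a large project and the 50% size difference mentioned later starts to make sense.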

Part I: The good

Polymer is a new library from Google that attempts to provide opinionated but easy to use abstractions over the new Web Components standard. It promotes a declarative syntax for creating and using custom elements that should be compatible with all modern browsers (this includes the last two versions of all 'evergreen' browsers).

Polymer consists of a polyfill for the platform features that are missing in some browsers (actually, the only browser that implements all the Web Components specs is Chrome) and then the 'opinion' on top of it (Polymer proper) on how to utilize those features to create developer-friendly, fast to develop, and easy to use custom elements and web applications.

First of all - instead of JavaScript, you mostly write HTML: tags, attributes and properties. What you used to write as a new namespace with instance properties, you now write as an element (a polymer-element tag) with properties on a prototype object.

Here comes the first new thing: in Closure we do not put primitive values (nor object types - like, NEVER) on the prototype, because altering them after instantiation changes the hidden class of the instance and thus can make your code run slower. In Polymer it is encouraged, for readability and to make it easier to understand what is going on. You can still set all properties in the 'ready' method, but Polymer encourages you to define them on the element's prototype object, where they are also used for type conversion (for example, an attribute will always arrive as a string, but you can hint to Polymer that you actually want a number, and the string will be converted automatically). One could also argue that this saves memory, but unless you are creating thousands of elements this is not really important.

Being able not to care about the type (i.e. whether the correct type will be passed to the property as a value, etc.) and instead hint the type and expect the system to convert it for you is really a great developer experience; it removes a burden from the developer while at the same time proving genuinely useful (if one binds an input value to a money conversion element, there is no need for the developer to manually convert the string to a number). So +1 for Polymer for not caring about the types, and another +1 for managing the type conversion for you.
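A rough sketch of what the library does for you here (this is not Polymer's actual implementation, just the idea): the type of the declared default value decides how the string attribute gets deserialized.

```javascript
// Simplified illustration (not Polymer's real code) of attribute
// deserialization driven by the declared default value's type.
function deserialize(attrValue, defaultValue) {
  switch (typeof defaultValue) {
    case 'number':
      return Number(attrValue);
    case 'boolean':
      // presence-style boolean: a missing attribute means false
      return attrValue !== null && attrValue !== 'false';
    default:
      return attrValue; // strings pass through unchanged
  }
}

deserialize('42', 0);       // the string attribute becomes the number 42
deserialize('false', true); // and this one becomes the boolean false
```

The developer only declares `amount: 0` on the prototype, and the string coming in through the attribute arrives as a number.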

After you define your custom element, you need to (for UI components at least) make some templates. In Closure this is done with the Soy template language, which is not so bad per se, but is cumbersome to work with: it is not possible to render your template in the browser without a) a consuming element/component and b) a valid model, if data is used in the template. Moreover, the template cannot be used directly; you always have to compile it before you can see what it looks like. So basically you end up writing HTML-like files, but you cannot preview them and you cannot use them without all the boilerplate code around them. You certainly cannot just visualize a particular template with ease - you need to set up a whole new view with imports and more code just to see what you have done. This forces a certain type of workflow that seems a bit unnatural: first the designers write the view as regular HTML, and then you (the JavaScript developer) take that piece of HTML, strip it down, turn it into a template, and hope that you did not break anything.

In Polymer the template is just regular HTML. Testing it is as simple as including your new element/component in any web page. +1 for Polymer for templates that are easy to create and use.

Once you have a basic element (some template structure and some properties to operate with), you might need some ready to use functionality - you need an import. Unfortunately, here Polymer and Closure, while technically totally different, operate very similarly from a consumer's point of view: you always have to include what you need in each and every one of your custom elements/components. In Closure you do not have to care about file paths, so shifting files and directories around is much safer (in Polymer it was a nightmare to move some elements to a different directory mid-project - we had to go and find all the places they were used and update them), but you cannot automatically retrieve needed libraries and elements. In Polymer the preferred way to manage dependencies is Bower. While that is very nice, the installed elements are very sensitive to path alterations, and once you start, it is hard to update them. In both cases, however, you have a lot to write!


  • In closure - the whole namespace for each used element/utility
  • In Polymer - the full path of every used component (and writing "bower_components" all the time is not fun!)

Because of that no +1 for Polymer in this area.

After all these steps you can potentially have a self-contained piece of code, ready to be used, re-used, and extended/augmented. As a whole, the Closure code will be 50% larger (mostly because of the type comments; if you put comments in your Polymer code it would be about the same size). The main difference is that in Polymer you end up with a single file (or 3 files, depending on how you structure the component - the CSS and script could be separate files, but I tend to keep everything together for a more cohesive experience), while in Closure you end up with at least 2 files (for UI components): one with the scripting and logic and one with the markup. The markup, of course, can live with the markup of other components in a single file under a namespace, which makes it even harder to separate your code and distribute it as a single element.

Polymer also provides utilities for documentation and demos of your work (again, elements that automatically extract the code comments and construct HTML in the browser). To do the same with Closure code you need jsdoc and templates, and basically you are on your own there, because the docs for the Closure library are generated internally by Google, and the ones you can generate yourself look and behave nothing like Google's - they are less usable and uglier.

Because of these not-so-important, but also not exactly unimportant, reasons, Polymer gets one more point.

Part II: The bad

Not all is roses in Polymer land either.

The first thing one notices is that, because of component reuse and the ease of instantiation, you are basically inclined to use lots and lots of elements that provide utilities and no UI. The most notable example of this is core-ajax. In a single view you can have several core-ajax elements. In Closure land you usually have one 'Loader' class that takes care of all the communication with specific REST endpoints, and when an error occurs you decide what happens next. In Polymer each core-ajax element lives on its own, and you might end up with 30+ such elements in your app. You can define error handlers on each of them, or listen for core-error globally (at the application level). The problem is that you end up with ajax requests originating from pretty much anywhere in your application, which makes them harder to manage. It is also pretty hard to limit what happens when, because you have no central control over it. Of course you can write such control yourself, but it looks unnatural. As a side effect, you can successfully load pretty much all your requests except one, and it will take time to notice that that one request failed.

Another catch you have to deal with is Shadow DOM's CSS rules. Once all browsers implement Shadow DOM it might be great (and in theory it will be), but today you end up with styling that is very fragile under the polyfill. The first time you start styling, you are tempted to write rules that are automatically scoped (for example, adding classes like '.padded' in several different elements, with different values inside each). Under native Shadow DOM it works fine, but under the polyfill you will get overlapping rules and definitely not what you expect. The main problem is that you develop in a browser that has native Shadow DOM, so you do not notice there is a problem until much later (and of course you are developing with Chrome, right? If you use anything else for development, this whole post is probably not for you anyway). Once you realize what is going on, you end up writing all rules as if they were un-scoped (so ':host .padded'), which defeats the purpose of Shadow DOM - even with Polymer you still have to think about your styles as global. Not cool!

Next come RAF and timeouts. The Polymer documentation states: 'async is bound to animation frame'. Well, it is - unless you specify a timeout, in which case setTimeout is used and you lose the RAF timing. To gain that back you end up with atrocities like this:

this.async(function() {
  this.async(function() {
    _doSomethingToTheUI();
  });
}, null, 500);
I know, it is not that bad, but it still looks a bit unnatural. Also, guess what happens if you need to cancel this delayed work... you need to handle the case where the first async has already executed but the second one has not... async programming is fun, isn't it!
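A sketch of that cancellation headache (plain setTimeout stands in for both stages of Polymer's async here; this is not Polymer API code): cancel() has to clear whichever of the two chained handles is currently pending.

```javascript
// Two chained timers: the outer one is the delay, the inner one stands in
// for the frame-bound async. The handle is reassigned when the first stage
// fires, so cancel() always clears the stage that is actually live.
function delayedUiWork(work, delayMs) {
  var handle = setTimeout(function () {
    handle = setTimeout(work, 0); // second stage (the "RAF" part)
  }, delayMs);
  return {
    cancel: function () {
      clearTimeout(handle);
    }
  };
}
```

If cancel() naively stored only the first handle, cancelling after the first stage had fired would do nothing and the UI work would still run.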

Another async-related utility - 'job' - works around a common pattern and is designed much better; it actually reflects a real pattern. Its implementation (as of polymer-dev) is a bit different from what the docs express, but it is still a nice enough utility.

Next comes scroll handling: in Chrome, scroll events are synchronized to the RAF, but in all other browsers they are not. Now guess how scroll events are handled in Polymer...
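For reference, this is roughly what 'synchronized to the RAF' means, and what you end up writing yourself for the other browsers (the scheduler is injectable here only to keep the sketch self-contained; in a browser it would default to requestAnimationFrame):

```javascript
// Coalesce bursts of scroll events so the handler runs at most once per
// animation frame. `schedule` defaults to requestAnimationFrame in the
// browser; it is a parameter here purely for illustration.
function rafThrottle(handler, schedule) {
  schedule = schedule || window.requestAnimationFrame.bind(window);
  var scheduled = false;
  return function () {
    if (scheduled) return;
    scheduled = true;
    schedule(function () {
      scheduled = false;
      handler();
    });
  };
}

// Typical use: element.addEventListener('scroll', rafThrottle(updateHeader));
```

Chrome effectively does this for you; everywhere else, every raw scroll event hits your handler.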

The same goes for touch events. I know it is best to think forward, to innovate and blah blah blah, but when you are pressed to deliver working code for those old phones, Polymer is not your friend.

This is especially true when it comes to the animations: the design and the looks are really, really great - for the first time you get a framework that allows you to implement very complex and very impressive transitions and animations with so little effort. They are amazing when demoed, and when they work. Now try to run exactly the same code under Firefox... or under iOS Safari... Hero transitions do not work as expected, sliding transitions break the view completely in Safari... and in Internet Explorer... there is a bug that prevents the platform polyfill from loading... fun, huh?

Basically, the Polymer team says that they support, and want to support, all major browsers, but the reality is that once you go beyond the simplest demos it only works reliably in Chrome. This is not so bad if you are targeting Chrome users, Chrome OS users, and Android users with reasonably new and fancy phones. Unfortunately, most paying users are actually on iOS for mobile. In theory, if your product is successful you should rewrite it as a native iOS app for the best user experience, but in reality complex Polymer applications work ONLY on the most modern phones. For example, apps of equal complexity and code size, when written with Closure, work perfectly on iOS 7 on an iPhone 4S and iPad 2, but when written with Polymer the app is completely unusable on the iPhone 4S and barely runs on the iPad 2. On an iPhone 5 it works on par with desktop Firefox (i.e. there are delays here and there and some things are broken, but most of the app is fine). Now compare this to the Chrome browser on the oldest Nexus 7 (also called the original Nexus 7): it works perfectly smoothly! Nexus 4 - butter smooth; Nexus 5 and Chrome - again a buttery smooth experience; all animations look exactly as they should, all transitions complete nicely and in sync...

I am sure the Polymer team wants to support all major browsers; it is simply a fact that they don't. And neither does your application. And this means you can be in big trouble with your manager...

When you debug your application in Chrome, everything makes sense: you see the shadow root, you see where elements got inserted, and so on. Try the same in the Safari debugger... or in Firefox... what you see is very, very different from what you expect. This, again, is related to the fact that there is no native Shadow DOM support. Apart from that, other things seem to work just fine. The problem I have is that I cannot figure out where the slowness comes from. Initially I believed it was a rough code path - someone somewhere doing something that was not really needed. However, test after test after test proved it to be an 'accumulation' problem: if I isolate a component and nest it deep, deep inside lots of other components, it still works fine. But when I start adding siblings at all levels, the slowness accumulates progressively, and an app with only 3 levels of nested animated pages, with 4-10 siblings at each level, results in an app that runs only in Chrome, crawls in any other browser on desktop, and kills performance on mobile Safari and old PCs... There are some 'offenders' that are worse than others, but one after another they bring their 'slow', and it all results in an intolerable application.

Once again - I am NOT sure what is going on there (mainly because the debug tools in most other browsers are years behind what we have in Chrome). Another 'nice' reason: when you attach an iOS device to debug a web page on a Mac, it auto-magically becomes 20 times slower. I kid you not - the model-observing digest all of a sudden runs for 25-28 milliseconds and touch events take 50 ms to be processed... close the debugged window and MAGIC - you again have something that looks like an app and not like still images... This terrible experience convinces me that Apple is strictly inhibiting any and all web advances it can think of and wants you to use its insane languages (I am sorry, but Objective-C looks like someone killed a dragon and used the body parts to compose a language).

Part III: The ugly


The ugly part is short: the ugly part is the contradiction this whole Polymer thing creates. On one hand, managers want things to come into existence fast - the faster the better - and love to talk about things like 'time to market' etc. On the other hand, they really, really want to be able to tap into the paying market of the Apple App Store.

And indeed, developing with Polymer will make you 10 to 20 times faster (depending on what you were doing before; I was doing Closure, and Polymer gave me a lot of speed). It makes so many things easier by taking care of them for you: type hinting, data binding, resource loading (hello, HTML imports) and ready-to-use components that are beautiful and usable (paper-elements).

However, as the 'bad' part explains, there is a price to be paid, and it is not really the developer's choice to make: once you know what to expect, it is not something you can go and fix even if you are a 'JavaScript ninja' - it is just the way it works; it relies on very new features that are simply not there in most browsers. Even if you are a ninja and could fix this or that, the amount of work you have to do is much, much more than if you had just used something known to work already. Polymer is being worked on all the time, but Apple is not showing any interest in implementing these new standards, and we know iOS is far away from this new goodness.

Conclusion


If you are lucky enough to be developing a Chrome packaged app or an Android-only app - consider yourself really lucky, because you can jump right on top of Polymer and ride it like a champ. You can do crazy things like declarative data processing (yes, you describe the data processing and filtering as nested HTML elements and it works!!!), ultra cool lively animations and transitions, and you can write truly reusable, configurable components that you can carry from project to project regardless of the library and/or framework in use - something we have never been able to do, until now. Even with Closure components you were still bound to the Closure tool-chain. With Polymer you can (one day) include only polymer and not care what anyone else uses to construct the custom elements you might consume.

If you are not one of the lucky ones, you'd better watch out: Polymer is really fast to develop with and very gratifying, up until the moment you start testing compatibility. Forget about IE < 10, forget about complex apps on older phones, forget about really clean, fluid animations on anything other than Chrome... at least for now. Polymer is said to be an early preview, alpha, and not ready for production. If you are like me, if you want to develop faster and more pleasantly, Polymer might lure you in, but then you have to pay the price: days, even weeks after your application is ready and stable you might still be fighting with Mobile Safari and Firefox and IE... and even if you are not actually fighting bugs in your own code, you are still days away from releasing... you have been warned!

July 16, 2014

Polymer vs. Dart polymer (and a bit of Chrome Dev Editor)

It is no secret that I am a fan of Dart. Having used it for more than a few projects already, it has sped up my workflow tremendously. It was only natural to check out the new core and paper elements after the demos and talks from Google I/O 2014.

The problem one faces with that is the complexity of the workflow one suddenly has to manage. It's no fun at all - the opposite of what I was expecting after working with Dart for several months.

I have to include the HTML via import, but then I also have to include the Dart files of those imports in the Dart file of the polymer element. Ahem... I have to use those strange annotations (CustomTag, for example). Hmm... I was a bit confused, and knowing it would all boil down to HTML and JavaScript anyway, I decided to give it a try in JavaScript.

What a surprise it was! All of a sudden I did not have to think about annotations and I could work in a single file for my whole component. I could actually understand what was going on and how it would all work out in the browser.

For testing I also used Chrome Dev Editor (it needs more polishing, by the way, and it needs to be faster: with multiple open tabs I could see a very clear delay when switching between tabs and opening files, creating a new file/directory does not put the focus in the input, pressing Enter does not trigger the creation, and so on - 'lack of polish' describes it very well). On the surface it does not provide much more than any other editor. It uses ACE internally and does not provide alternative color schemes at this time. It has few settings, but it also has features I have not seen in any other editor (especially for ChromeOS). One of those is cloning from GitHub, which is awesome! Another is Deploy to Mobile! It makes it ultra easy to work on your Chromebook and just attach your phone to test. Combined with the debug and dev tools in Chrome itself, it makes for a cool, well-integrated development environment.

I built a whole mobile web app in under a day. It has what a robust, modern development platform should have: well-built universal UI controls (tabs, lists, checkboxes, switches, buttons etc.) that are easy to understand and use. For the first time in a very long period I had what other (read: non-web) developers have had for years - I could use stable and reliable UI components and write only application logic. And that is what I did - I only had to write the parsing logic for the server responses (because the server speaks no JSON). Data bindings are awesome and declarative event handlers are double awesome!

Now back to Dart. I did miss the editor's understanding of my code. I had to rename a variable and it was a pain!!! I am so used to the IDE handling this for me, as well as to omitting the 'this' keyword, that I actually had a few bugs related to not using 'this' where I should have.

But as a whole I do not see any advantage in using Dart over pure HTML/JS when creating simple polymer elements. Maybe it would help if the element's logic were much more complex (actually, I bet it would), but the cumbersome way of making elements in Dart should really improve, IMO.

I showed the workflow to my colleagues, and after some initial resistance (because of the somewhat more complex CSS selectors for the styling) all agreed that working this way is faster and much more pleasant. This means a win for polymer, in my opinion.

Finally, I want to address the ranting from some individuals about the fact that Google pushes polymer so hard that it sounds as if polymer is what web components are. The fact is that polymer provides a declarative way to create web components. This makes the creation of a new component so much easier to understand, and faster! For a lot of developers JavaScript is jQuery; does that mean we should rant against it (as often as I do!)? No, it only means that there will always be those who have a better and deeper understanding of the platform and its features, and those who just want the easy way and want to get there faster using any tool available. I would not go and write a custom component using JavaScript to access the shadow root and set its inner HTML - that is insanity! On the other hand, I could write a template and declare my bindings. So to those people I want to say: whatever floats your boat, but stop attacking Google for being too loud about their accomplishments. All companies do that. Apple released a feature that Linux and Windows had a decade ago and still had the nerve to call it a great innovation. Why not rant about that?

April 14, 2014

My opinion on StackOverflow

StackOverflow is a strange place.


On one hand - it has lots of information you can directly access (via Google search) if you are just stepping into a technology (be it a language, a framework or what not).


On the other hand, the moment you have a real problem (so, not just being lazy to read the docs or test it yourself) you end up a voice in the desert. No one even tries.


And then there is the 'reputation bullshit'.


Let me ask this: I had a question (a very clear one - can I trigger a file download from a custom-made menu in Drive?) and the only person who answered did not really answer, but instead started explaining to me how the other APIs work (like the UI API, and how to put a link on it and wait for my users to click it - mind you, completely re-imagining the workflow, something I can do on my own). My reaction was a downvote. Which apparently provoked several downvotes from him, and probably from some of his fan club. After a few more sharp messages the guy finally gave me an answer (at which point I no longer trusted his answers, btw). But in the meantime I had lost 7 points and he had lost 3.


So to recap: voting for someone does NOT directly correlate with his knowledge or his helpfulness. It is simply an emotional response to a user's desire to a) be noticed and b) always be correct.


I myself have answered lots of stupid questions on SO, mainly related to TypeScript, before I got bored with it. And NONE of my hard questions have been answered. They still sit there, more than a year after I posted them.


If you are wondering: Question 1, Question 2.

Now, I am not saying SO is bad - it is a good place to fast-track on a new problem/technology, but once you get to a certain depth you get nowhere with SO. Which is sad. And it makes SO less and less useful the more competence you gain in the area. Where should one go after that? I have no idea...

dart and performance (a test journey in game land)

I have been playing with Dart and StageXL for 10 days now, and I feel there are thoughts to be shared. Part of this post is also an update to a thread in the StageXL group.

The game is a really simple clone of Flappy Bird. The main idea is that the gameplay should be easy to implement and understand, in order to let me concentrate on the internals of the game and the rendering engine instead of toying too much with the game itself.

At one point I noticed that the way the original game detects the collision of the bird with the trees is naive and often gives false negatives. I went on to read about what is currently possible in Dart and StageXL in this regard. It turns out only a very basic approach is available internally, as StageXL is geared towards being an animation library rather than a game development framework. Nevertheless, this allowed me to play a bit more with the language itself.

For collision detection I used the following approach:

  1. Using DisplayObject.hitTestObject, find the element(s) that are potentially colliding (in the test game I developed it could be only one out of ~10).
  2. Determine the rectangle where the transformed protagonist will fit.
  3. Using a detached canvas element, clear it and draw the transformed image of the protagonist.
  4. Using a second detached canvas element, clear it and draw the potentially colliding element.
  5. The 2 canvases used are just big enough to contain the protagonist image.
  6. Compare the pixels of the protagonist canvas to those of the colliding element canvas; if there is a pixel where the alpha is greater than 0 in both, return true (i.e. there is a collision).
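Step 6 boils down to walking the two RGBA buffers in lockstep. Here is a minimal sketch of that check; `pixelsCollide` is a hypothetical helper name (not StageXL API), and the buffers are assumed to be what `ctx.getImageData(...).data` returns for two same-sized canvases, so the indices line up one-to-one:

```javascript
// Hedged sketch of the pixel-accurate collision test (step 6 above).
// RGBA layout: every 4th byte (offset 3) is the alpha channel.
function pixelsCollide(protagonistPixels, obstaclePixels) {
  for (var i = 3; i < protagonistPixels.length; i += 4) {
    // A pixel that is non-transparent in BOTH canvases means overlap.
    if (protagonistPixels[i] > 0 && obstaclePixels[i] > 0) {
      return true;
    }
  }
  return false;
}
```

Because both canvases are kept just big enough to contain the protagonist (step 5), the loop never scans more pixels than the protagonist's bounding box.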

Here are the results of this little experiment.

On the PC and the DartVM this runs so smooth one wonders why people are not using it all the time. The average time to run the code per frame, with complete pixel-accurate collision, is ~0.35ms (with a time limit of ~16ms this is pretty good, I think).

Compiled to JS it runs about twice as slow, the frame taking about 0.78ms - still pretty good. This is kind of strange, since the code mainly touches WebGL, canvas and arithmetic. Canvas and WebGL are DOM interfaces, so there should be no difference there; the only possible place for the speed difference is the arithmetic (at least that is what I initially thought). I am not an expert in VMs, but having a VM that can run the calculations at least twice as fast is awesome!

However, the problems with the 'compile to JS' approach start to show as early as here: taking a look at the memory debug panel, one notices a very different pattern in the JS world. The game has two different modes: an off-game mode, where a very simple animation is performed, and an on-game mode where, in addition to the very simple sprite animation of the 'protagonist', obstacles are animated and collision detection is performed on each frame.

Here are the images.
DartVM

V8 - Chrome desktop

Notice a difference? Again - the code is only using a single canvas element with a 3D context (WebGL) to draw the animation, and internally StageXL uses detached canvases and lots of calculations and canvas transformations. First of all, before each snapshot is taken the memory is force-cleaned. Then, for the test, I let the off-game animation run for a while, then play a little bit, and then leave it alone again.

On top of using around 4 times more memory (JS VM versus DartVM), JS also shows a significant difference in memory allocation patterns when more code paths are executed per frame. For example, in the first image it is impossible to distinguish when I am playing the game and when not, while in the second image, taken from JS land, it is very clear - the climbs are much steeper when the more complex code path is executed. Notice the change in steepness around second 33 in the second picture - this is where I killed my protagonist and the complex animation path is discarded. Notice also the enormous difference in memory allocation: going from 7.2MB up to 10MB in 40 seconds, while in the DartVM we went from 1.8MB to 2.9MB in 5 minutes. Actually, I had to wait for a garbage collection event to happen in the DartVM just to see if it would ever happen - this is why the DartVM screenshot spans such a long time period. For a moment there I thought it was broken - how come no GC event...

One negative thing about Dartium: when in debug mode (console open and recording) the on-screen performance was worse and noticeably janky. Turn off the dev console and you get perfectly smooth animations.

The performance and memory figures summarized above were all taken on a Core i3 @ 3.3GHz with 4GB of onboard memory and a dedicated Nvidia video card. This is all well and nice, but the target platform is actually phones and tablets. It was time to test on those devices.

I started with the Nexus 7 as the easier one to approach (no need to find a Mac just to debug some web stuff, ha!). The results are pretty sad.

The memory consumption is pretty much identical to what we get in desktop Chrome (not surprising).

V8 - Chrome mobile

You can again easily see when the game is on and when it is not. The memory utilized is also in the same range.

A very significant difference shows up, however, in the frame timing. While on the i3 CPU it takes under a millisecond to complete the code paths in the RAF callback, here it takes about 5ms on average, peaking at ~9ms!! This is much more than what we had on the PC. The compositing time is also very different (although the canvas size is kept the same for measurement purposes - 480px x 640px): ~2.5ms versus 0.5ms on the PC. Strangely, the compositing time in Dartium is reported as 0.2ms... (which doesn't really make sense here...).

Even with those measurements the game should have run pretty well on the Nexus 7. But it does not. Well, remember the memory allocation patterns? This is the worst enemy of any game - producing too much garbage. Guess the time for a garbage collection event on the Nexus 7... ~80ms per GC cycle!!!! This is killing my game (and will possibly kill yours)!

Before following the logical steps and altering my code to lower the memory allocation and collection, I wanted to know where most of the CPU cycles were going. From the above tests I already knew that compositing the scene on the Nexus 7 was taking ~3ms, which leaves ~13ms to handle all the game logic. I also tested with an LG phone that claims to have the same hardware specs as the Nexus 7. It turns out the JS time is almost the same, but the compositing time is 4.2ms. The screen is smaller and I expected the compositing time to be shorter; instead it is longer, leaving the game in a pretty bad state: even in the periods where no GC events occur, the frame rate is below 60.

Back on my mission to understand the performance implications of dart2js code, I used the Nexus 7 to measure where the CPU time is spent.

Surprise, surprise! drawImage was taking 50% of the CPU. This is strange; I would have expected it to be a fast operation, considering that the canvas is detached from the document. Anyway, what surprised me more was that 14.30% was spent in the comparison of the pixel data of my two hidden canvases - but the actual comparison was only 7%; the other 7.3% was spent in 'convertNativeToDart_ImageData', which, guessing from the name, converts the native ImageData object to a Dart list of integers. Transforming the canvas is another 7.14% of the total time, plus the clearing of the canvas. As a whole, the slowest part is the drawing of the image data. I believe I followed the best practices (draw on full pixel values, do not attach the canvas to the DOM), but still the combination of those actions takes ~10ms; with compositing taking ~3ms, the game is on the verge of not being playable.

Clearly, using a different approach for detecting collisions will reduce this time (since we can avoid the most expensive operations - clearing the canvas and drawing on it).

More interestingly, profiling from Dartium was not possible. The window.console.profile() call did nothing there. I am not sure if this is a bug or if it simply does not work this way with the DartVM, but it would have been interesting to see.

Going back to the actual search for a solution, I decided to rework the code and, instead of idiomatic Dart, write what is known to work best in JS land. For more details on how one is supposed to write code in Dart, please refer to this article.

One of the things I liked the most was the lexical scoping; combined with not having to constantly write 'this', it leads to lots of closures, throwaway instances in methods and so on - the code becomes really terse and easy to understand and follow. Now, judging from the benchmarks, the DartVM performs around 2 times better than V8. Well, I have no idea how those benchmarks work and what they compare, but the facts are these: if you write your code the idiomatic Dart way, it works perfectly fine and really fast in the DartVM. This includes creating and trashing lots and lots of instances of small classes in a single stack (i.e. going deeper in one tick), using lots of closures (think List..forEach() and List..forEach((_) => [_.a, _.b].map()) etc.) and dumb objects, no local variables or lots of local variables, deep object nesting (o.o.o.o.o.o.o for example) and so on - things that for years have been condemned and considered a no-no in JavaScript when performance is the number one consideration. And let me tell you - it feels GOOD! Not having to write 'this', creating instances all over the place to make your code more readable and understandable (as opposed to creating cache properties all over the place and accessing them in bizarre manners just to avoid allocations that will later be cleaned up by the GC). Almost like a dream...

But it comes at a price. The dart2js compiler aims to produce code that stays as close as possible to the original Dart code and its idioms, so basically if you use closures and forEach, they end up in your generated code. Of course tree shaking is performed, but I do not see a lot of code rewriting being done, and even less code optimization. It is a grey area (meaning it is not clear who should be optimizing the code in this case, the VM or the compiler). We know that in Google both approaches are employed in different projects: for example, GWT produces code that is highly optimized per browser, while the Closure compiler produces code that is optimized for size and can potentially be less effective when executed in V8 (there are actually several bugs submitted about this - function in-lining leading to calls that are de-optimized or cannot be optimized at all).

When we write in JavaScript it is our responsibility to know all the catches, tweaks and quirks of the underlying VM in order to make the most of the hardware and software capabilities. This is also true for transpiled languages (like TypeScript and CoffeeScript), but how should we handle this in Dart? Dart has its own VM, and from what I have seen it optimizes all those 'Dart idioms' very well; even with them it outperforms V8. But then the code needs to be compiled to JS, and this is where I feel the authors of the compiler fail us: yes, the produced code works in all browsers, but the speed is not what we see in the benchmarks - the performance is 4 or more times worse than that of the DartVM. So what I said before, that Dart is more capable than we expect - I lied. It is what it is: around twice as fast as V8, but the code dart2js produces for V8 is far from excellent in terms of memory usage and raw CPU performance. Seasoned JS developers know how to write code that is both memory efficient and performant (and those often mean the same thing, due to those GC pauses), but then again those same developers are hard to find and even harder to make do dull projects.

Because of the structure of Dart, I assumed it would be much easier to produce more 'static' JavaScript code than to analyse a whole application and try to optimize it (à la Closure compiler). I had been imagining some liberties being taken when rewriting the Dart code to JS, such as turning forEach into for loops, creating bound and cached instances for frequently run closures, etc. - things that we know speed up large applications and lower their memory variance. Instead, the code is preserved as close to the original as possible, and thus it is again the developer's responsibility to write the same ugly but high-performance code if the target platform is known to be JS.

So here is what I did to mitigate things. First: remove all closures (so no forEach, no map etc.). Second: get rid of all in-method object instantiations (mainly Matrix and Rectangle instances used for calculations); instead, create 'cache' instances and tie them to the main object (the one whose methods are executed). So now, instead of creating several matrices, only one instance is used, mutated several times and reused. The same goes for the rectangles. Third: get rid of local variables; instead use a List instance as a cache and put every number needed in there.
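The first two steps can be sketched in JavaScript terms as a before/after pair. This is an illustrative toy (the function names and the tiny `matrix` shape are made up, not StageXL code), but it shows the exact trade: the idiomatic version allocates a closure and a fresh point object per element per frame, while the reworked version mutates a preallocated output list:

```javascript
// Idiomatic, closure-heavy version: new objects on every call.
function transformAllIdiomatic(points, matrix) {
  return points.map(function (p) {
    return { x: matrix.a * p.x + matrix.tx,
             y: matrix.d * p.y + matrix.ty };
  });
}

// Reworked version: no closures, no per-frame allocation.
// Results are written into a preallocated 'out' list of the same length.
function transformAllCached(points, matrix, out) {
  for (var i = 0; i < points.length; i++) {
    out[i].x = matrix.a * points[i].x + matrix.tx;
    out[i].y = matrix.d * points[i].y + matrix.ty;
  }
  return out;
}
```

Both compute the same result; the second one simply produces no garbage per frame, which is what keeps the GC pauses short.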

Result: the code got as ugly as any 'hand-optimized' JavaScript - you can see it in my repository. In the frequently called methods there is no object creation or freeing whatsoever, nor local variables (well, I am not sure about the numbers: in JS those are primitive values, but in Dart they are Objects, so I am not sure if that part of the optimization is really worth it). Memory-wise I got this: 7.8 MB going up to 10.2 and back down. So now the logically identical code executes with much less variance in heap allocation, which means the GC pauses are shorter and less frequent. Indeed, the gameplay experience improved, and GC time went down to ~30ms in the worst cases. This is, however, only my code and not the library code (StageXL). While the library is great (I can honestly say I would not be doing this test if it were not for StageXL, so kudos to Bernhard for all his answers to my stupid questions), there are some (well - a lot of) places where the code is written directly in Dart idioms (so it runs great in the DartVM but not so great in V8). Those I do not intend to try to optimize, and there is no point in it: the point was to see if applying JavaScript idioms for fast code would benefit an application built in Dart and run as JavaScript.

Well the answer is (sadly) - YES.

My assumption is that this would not matter in more static web apps, or in apps using other means to animate, but for games - while providing a great library and excellent tooling as a whole - Dart hides too many underwater rocks. You have to already be a JS ninja to write Dart code that will perform best when compiled to JS, which basically nulls out the benefits of Dart, IMO. Dart promised to rid us of JS, and yet we go back there the moment we need a bit more performance.

I should finish on a positive note: all this will be gone once the DartVM is available built into stable Chrome. At the penetration rate of new stable versions, it would take about a week after that release for everyone to have it. It is another question how we will deploy pure Dart code to users (my project, for example, uses ~500 dart files - imagine downloading those to the client...). But it could happen, and then one would be truly liberated from JS. But what about the other browsers? Mozilla's ex CTO/CEO is directly opposing it, while Microsoft and especially Apple have a financial interest in hindering the adoption of Dart. So it is not a purely technical problem after all. Anyway, with Dart support in Chrome one would have a large user base (Android + ChromeOS devices + all Chrome installs), and in the beginning I think that is enough to incline developers to explore it more as a platform and less as a compilation mid-stage for JS. Just imagine what you could write with those extra CPU cycles...

Ah, the dream..

March 03, 2014

GWT in Dart

I had no idea this even existed, but...!

Behold: http://dartwebtoolkit.com/

Okay, enough pathos. The thing is, I always wanted to use GWT for a project (or several), but my knowledge of Java is about one book long - and one meant for the first year of a CS bachelor degree at that. And the book was a bad one!

It is understandable that I wanted to try GWT; after all, it provided a type system, safety, performance and developer productivity, and it is backed by the best web-app-producing company out there. But this was not incentive enough for me to learn Java. Besides, HTML and Java do not mix that well, despite what you might have heard.

GWT was partially ported to Python, and I have played around with that version of it; however, I never got to a working large-scale application - mostly toy projects, just to get a taste of the tool. As you can imagine, the tooling around Python is not as good as the one around Java.

That and other factors led me to the Closure tools. After all, they were used to build Google's flagship application. It is indeed a great tool, and anyone stating otherwise is simply ignorant. This is not the place to enumerate the benefits and advantages one gets with the project's components. However, they still lack something I find important for a productive developer: good tooling and IDE integration. There are several projects out there, some free and some at a steep price, but none of them solves the problem (and, to be frank, the premium ones are a rip-off and worse than the free alternatives).

And then came Dart. If you have not watched one of the tens of videos introducing the language and the IDE, go and do it now. You have no real excuse, especially now that YouTube supports playing videos at twice the speed without loss of audio quality and without a higher pitch of the voices. The 30 or so minutes are worth it!

Unfortunately, Dart did not come with a sophisticated set of elements ready to be used in a large-scale application. It came with Web UI, which was then deprecated and succeeded by Polymer, which is powerful and new and cool, but far from ready for production, IMO (it lags behind the JS implementation and has many bugs).

Little did I know that someone somewhere was thinking about this and working hard to implement the GWT set of components in Dart! It was just today that I learned about the effort, and the demo applications look great! Go on and take a look.

To sum it up: now you have a powerful web tool-kit coupled with an easy-to-learn yet robust, optionally typed language with excellent IDE support. If you are trying to build an application with lots of widgets, more typical of a desktop application's look, and at the same time learn a new awesome language, look no further!

February 19, 2014

Is Google intentionally making their products only for Chrome Browser?

Here is the deal: Google's web applications are the best of breed - exemplary models of modern, high-efficiency, stable, constantly updated, usable software available today. Without their web apps, Chrome OS would be almost useless.

However, I have noticed a tendency in the recent development of their products that I do not like at all!

The tendency to lock features to their own browser. Many examples exist across many of their products: offline support (no, no, do not tell me it is because APIs are missing in other browsers - packaged apps are indeed possible only in Chrome, but AppCache and IndexedDB are available in all browsers!), features (developing the image editing feature as a NaCl plugin instead of a web application might have been easier, but it is surely NOT impossible the other way around, especially with the wide support for workers) and others.

I can freely assume that this is a business strategy (which is kind of counter-intuitive for a company whose slogan is 'do no evil'), but what is worse for the web platform as a whole is that instead of advancing the open web features, they advance locked-in features that offer ease of use for developers but are unavailable on anything but their own browser. Many of the packaged applications (maybe not all of them, of course) could very happily exist as offline applications built with standard web technologies, but instead the developers of those applications decided to go with Chrome's closed ecosystem.

It is hard to promote standards over ease of use - which says something to the standards bodies, but that is a whole different story. What is more important is: why is Google doing this? It is one thing to provide the best services and best products and to want to profit from that; it is completely different to lock in users, encouraging a mono-culture on the user end.

I love Google's web products; I truly believe they are the best I can use right now, but this is not the case with the Chrome browser. Yet I feel like I am missing something when I use the web apps from another browser - especially features that have been shown to work, and work well, with standard and widely available web technology only (I am talking about the image editing features in G+ in this case). Also, promoting a new "offline" capability in Google Sheets without mentioning that it only works in Chrome is kind of hypocritical. Basically it says: "We are releasing new and exciting features, but only for the users of Chrome; the others can suck it."

It does remind me of another browser and another technological epoch...

January 29, 2014

The Dart language


Introduction

I have been using JavaScript for many years now as my primary language, and most of that time I have been developing rich front-end applications. Not only applications with lots of data, but also ones with robust custom widgets/elements that are not native to the web (like scalable design views, video editor timelines etc.).

When I first started writing those complex apps it was very, very hard to compose such an app and make it run in all browsers. Nowadays, when I have to quickly fix something written years ago, I see how custom objects have been sprinkled into IE-only code, and other terrible things.

Then came the libraries: jQuery, MooTools etc. They made it a bit easier.

And then there were the frameworks. Without them a lot of what is possible today would not have existed at all! I will not iterate over the problems of incompatible implementations etc., but the thing is, even the best of those frameworks and libraries had issues of their own. Most importantly, the developer needed to choose one and stick with it for basically the whole project; changing it requires learning a whole new world of class implementations and strategies for building an app.

The last thing to consider was the lack of comprehensive and useful tooling for the language as a whole, and in particular for those enormous libraries/frameworks.

Alternative solutions

Meanwhile some of the shortcomings of the language (JavaScript) have been addressed by different organizations in different ways for a long time now (think GWT, Closure Tools, TypeScript etc.). Some are targeted at the JavaScript community; some appeal to the developer community as a whole and try to attract interest from developers not traditionally writing for the web.

One of the earliest was GWT. It was targeted at Java developers who want to write web applications but are not versed in HTML and JavaScript. Google used it internally, and many companies outside of Google are also using it to build data-driven apps. However, the built-in widgets and utilities tend to be seen as obsolete and not attractive enough for the new markets and the new generation of users spoiled by Apple-inspired designs. The promise is that you can write in the language you already know, and you get for free the type safety and the rich tooling that typically exist for type-safe languages, and for Java in particular. On top of that, the tools were able to produce optimized code tailored for each target platform. GWT is still an excellent choice today, as it continues to evolve, addresses some of the shortcomings raised by extensive use of the tools, and follows the evolution of the web as a platform. Mobile is also a big word in the newest history of the project, so if you already know Java, and maybe you wish you could target several platforms at once (see the keynotes from the GWT Create conference), this might be the best choice for you.

Closure Tools is also a Google-conceived product, later open-sourced. It consists of an extensive JavaScript library, a compiler written in Java, and a template language that can be compiled to run both on the server and on the client side. The compiler is used to optimize and minify the JavaScript code, supports tree shaking (meaning it eliminates dead code paths, assuming it is provided with the full code of your application), and does some neat tricks like using the comments in your code to infer types for an abstract type system. Assuming all of your code is sprinkled with the needed comments, the whole of your project can be type-safe, bringing the benefits of the previous project (GWT) to the JavaScript world. However, this comes at a cost: all of the JavaScript in your application must obey a stricter subset of the language; mainly, only the classical pattern for inheritance can be used, and no dynamic addressing of constructors should be done. Other than that you still write in JavaScript, and you get type checking, type inference and safety, and because of those, tree shaking and safe advanced rewriting and minification. If you are starting a new project and you expect the code size to be large, or you would like to see what you can do when you have type safety throughout your application, you can count on this project. Check out the wiki page as well, as it links to some helpful third-party widgets and libraries that are compatible and highly efficient.
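To make the comment-driven typing concrete, here is a minimal sketch (my own illustration, not code from the Closure library itself) of the JSDoc annotations the compiler reads:

```javascript
/**
 * Closure Compiler reads JSDoc comments like these to build its type
 * system; at runtime they are plain comments with no effect.
 * @param {number} width
 * @param {number} height
 * @return {number} the area of the rectangle
 */
function area(width, height) {
  return width * height;
}

/**
 * The "classical pattern" for inheritance: an annotated constructor
 * whose prototype chain the compiler can follow statically.
 * @constructor
 * @param {string} name
 */
function Shape(name) {
  /** @type {string} */
  this.name = name;
}

// A call like area('3', 4) would be flagged as a type error at compile
// time, while any JavaScript engine still runs the file unchanged.
console.log(area(3, 4)); // 12
```

With annotations like these throughout the code base, the compiler can also prove which functions are unreachable and strip them during advanced optimization.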

TypeScript came only recently and is produced by Microsoft. It was also open-sourced. The approach there is different: inspired by new features yet to be released for the JavaScript language, it mixes them with some optional type information, basically becoming a superset of JavaScript. It allows reuse of existing JS code (unlike Closure); however, the support for tooling, type safety and inference is only available in Visual Studio on Windows, which limits its usability for the developer community. Unlike Microsoft, Google has always targeted all platforms when providing tools for developers. Nevertheless TypeScript, being young, has yet to be used in very large-scale projects, but in the meantime it is successfully used by several not-so-small ones. If you are looking for ways to reuse your code or a popular library like jQuery, but want good tooling and are okay with using Windows, TypeScript might be just for you.

Dart - batteries included

The latest project trying to address some of the shortcomings of JavaScript is Dart. Unlike the above-mentioned projects, it is targeted at all developers regardless of their background. This is because Dart is an entirely new language, mixing features from several other languages and trying to make sense of the web platform.

The language itself is okay: it is functional (functions are first-class citizens), but it also has classes, single inheritance, mirror support, isolates, mixins and some very nice syntactic sugar for common use cases. As a whole it looks familiar to Java as well as JavaScript developers, and probably to others as well.

The Dart VM is a virtual machine for the Dart language and can run Dart scripts natively. For now it is not available in any stock browser, and this seems likely to remain the case, at least for non-Chrome browsers. However, many projects have already proven that it is possible to compile other languages to highly efficient JavaScript, so I would not worry about whether the Dart VM will ever be adopted in stock browsers, as long as Dart produces efficient JavaScript. And that turns out to be the case as of recently.

The type system in Dart is optional and is really not used at runtime, neither in the Dart VM (the virtual machine that runs Dart programs natively) nor in the compiled JavaScript version. The type system is actually there for the developer and for the tools around the language. First of all it is used by the Dart Editor, which gives you very rich real-time information about your code: intelligent code completion, intelligent refactoring, problem resolution and helpful information. This speeds up my programming at least threefold compared to the Closure tools, where I need to look up type information myself while writing the code, and only later, when I run the compiler, can I see if I have a type error. Because the types are optional, and because of the excellent type inference, you can often omit the type annotations in your code and still get very useful information and help from the editor. As a bonus you also get very efficient JavaScript.

The structure of the language itself is crafted in such a way as to allow high expressive power without sacrificing safety. Because classes are supported in the language itself, the problem of competing class implementations that exists in JavaScript land is basically non-existent here. You can be sure that your code is compatible with the rest of the ecosystem, and you will never again need to choose an inheritance implementation.

Because of the type system in the editor and the tools around the language, tree shaking is possible and performed every time you deploy, which means that you deploy only the code that you are actually using, regardless of the size of the libraries your project utilizes / links to. This puts the constant chase for smaller code size in a library to an end. You can get a very similar effect with GWT and Closure, but not with TypeScript. The latter translates directly to JavaScript and will not exclude anything from the compile, so you still need to think about how to rectify that afterwards. This is not to say that there are no other methods to achieve this (AppCache, server caching, even localStorage and desktop apps (for Chrome only)), but if you are on a team that likes to make updates and improvements often, this might be challenging. On the other hand, you get the minimum possible code size for free from your tools if you decide to go that way.
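As a rough sketch of what tree shaking buys you (the function names here are my own illustration, not from any real library):

```javascript
// Imagine a utility library with two exports. If the application only
// ever reaches formatDate, a tree-shaking compiler (Closure in
// ADVANCED mode, dart2js, the GWT compiler) proves formatCurrency is
// dead code and omits it from the deployed build entirely.
function formatDate(date) {
  return date.toISOString().slice(0, 10); // "YYYY-MM-DD"
}

function formatCurrency(amount) { // never called -> shaken out
  return '$' + amount.toFixed(2);
}

// Application entry point: only formatDate is reachable from here,
// so only formatDate survives in the deployed output.
console.log(formatDate(new Date(Date.UTC(2014, 0, 29)))); // 2014-01-29
```

The catch, as noted above, is that the compiler must be able to see the whole program; dynamic lookups it cannot trace defeat the analysis.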

Another interesting option is the ability to run on the server side, and while the language is targeted mainly at front-end developers, it is not impossible even today to share code between the front end and the back end, just like with JavaScript and Java. This seems to be a mandatory feature as of late.

On top of that Dart has its own package management system, similar to npm for JavaScript. There are hundreds of packages there already, but just like with npm the quality varies. One of the nice things, still, is that the language itself makes the packages compatible with each other, which is a great relief, unlike with npm where each author decides on the structure and exposed API. With time, just like with npm, the bad projects will die out and the good ones will grow. And as mentioned earlier, growth is not a problem with Dart, as everything not used is excluded from the builds.

So who will be appealed by the language?

For one, I can easily see the Closure Tools guys switching to Dart. They are already disciplined enough to use type-safe code and dream of a shorter syntax and more expressive power, and the amazing tooling will be an unexpected bonus for them.

I do not see Java developers running toward Dart right away, but with the right packages they might be tempted as well.

There seem to be two types of JavaScript developers. The ninjas will probably be impressed neither by the language nor by the tools; after all, arguing over the implementation of Promises and over how we should create our objects is possible only in two cases: either you are so passionate about the language that you tend to be obsessed with the details, or you have too much time on your hands. So those "ninjas" will not be impressed at all. They are also what I call "the low-level engineers", the tweakers and the experimenters. We need them in order to move the platform forward, and I would say they are doing a great job at that. However, they can be of little use in a small company or in a start-up, as their obsession with the language itself and with the details is often very time-consuming.

The rest of the JavaScript developers are the guys like me, who have learned the language to a great degree but wish there was compatibility, at least to some extent, in the ecosystem. Those guys write applications, not libraries (though many of them, me included, have written at least one library or a small framework just to get a good grip on the language). For them, application-level support in the tools they use might be something very useful and unexpected, considering the current state of such support, or more precisely the lack of it.

There are some great examples out there demonstrating the power of the language: things only a real JS master can do are easy and natural in Dart! Take a look at this, for example: extending the regular Array object to count the accesses to its items. Try to do the same in JavaScript! Operator overriding is cool!
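For comparison, the closest JavaScript equivalent of that access-counting list I can think of is an ES6 Proxy, which was not available in mainstream browsers at the time of writing. This is my own sketch, not the code from the linked Dart example, and it is clearly less natural than Dart's operator overriding:

```javascript
// A rough JavaScript counterpart to the Dart access-counting array:
// a Proxy intercepts every indexed read and bumps a counter.
function countingArray(items) {
  let reads = 0;
  const proxy = new Proxy(items, {
    get(target, prop, receiver) {
      // Count only numeric index reads, not .length or method lookups.
      if (typeof prop === 'string' && /^\d+$/.test(prop)) {
        reads++;
      }
      return Reflect.get(target, prop, receiver);
    }
  });
  return { proxy, getReads: () => reads };
}

const { proxy: list, getReads } = countingArray(['a', 'b', 'c']);
list[0];
list[2];
list[2];
console.log(getReads()); // 3
```

In Dart the same effect falls out of overriding the `[]` operator in a subclass, with the counting logic reading like ordinary indexing code.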

If you have not yet, please go and check out the one-hour tutorial. You will be surprised how naturally this language flows and how easy it is to understand. If you are coming from Closure Tools, I bet you can finish the exercise in less than 10 minutes and find everything crystal clear! If you are coming from TypeScript, the same should happen! If you are a regular JavaScript user you might wonder why all those types, but fear not: you can add them later!

Of course there is still much to be desired. For example, it is still unclear to me what to replace Closure Templates with. I see the community pushing strongly toward Polymer and/or Angular. Angular for me is just a toy, and it is more of an obstacle when it comes to very complex, highly interactive widgets (image editing, design instrumentation etc., widgets that do not naturally link to a model with a strict JSON representation), and this is indeed what I have been doing a lot lately. Polymer might be better suited, because it can encapsulate model handling that is not necessarily observer-based, but it is poorly supported in non-Chrome browsers. Even the demos on the Dart site are not running in Firefox. Also there are bugs in the Polymer JavaScript implementation (basically styling issues; see my post dedicated to Polymer). If and when Polymer stabilizes it will be a great addition to the Dart ecosystem, but for now I would really like to build complex apps based on secure and fast templates like the ones used in Closure, and I cannot find such a thing.

Regardless of the hiccups and the fact that it is still evolving the goodness it brings is so tempting that I just might use it to build a real app!

You should definitely try it! If nothing else you will at least have a real opinion on it!

A colleague of mine tried it too early (before 1.0) and was very disappointed. As application developers we should know when to try out a new technology. Let the ninjas handle the demo and beta versions! We want security, stability and robustness. I believe Dart is capable of providing all that and more!

Disclaimer: I am a front-end developer and have been doing this for 8 years. I use mainly JavaScript, with HTML/XML/CSS as complementary tools/languages. In the past I have used several other languages: Python, PHP, C; I also have a working knowledge of Bash, Tcl and other tools and instruments from Unix/Linux. By no means are my opinions those of my employer or of any community; they are only mine.

January 24, 2014

12 reasons why I still prefer Firefox for my daily browsing

I am a web developer. It is more precise to say that I am a front-end web developer. This means that I need a developer console each and every day.

For years now Chrome's development tools have been much much better for me as a developer than what is available in Firefox. So I use Chrome (mostly) for development.

However, I also have personal browsing needs as well as professional ones not related to coding. In this area I love what Firefox has to offer.

Here are 12 reasons why I prefer Firefox over Chrome for my browsing:

Unlimited tabs open without noticeable slowdown. This is important for me; especially when I read my news feed, I tend to open all interesting articles and read them after I am done with the list. Often this means 30+ tabs.

Configurable tab opening behavior. Chrome has not been able or willing to implement this for years: I want my tabs to open in the background without me having to use the middle mouse button. Why? Well, because most products (apps) use shortcuts these days, and I love them. Even in Google Plus you can press "v" to open the linked article, and in Firefox it loads in the background, while in Chrome the browser switches to the newly opened tab and I have to wait and stare first at the blank page, then at the half-rendered page, and after that maybe read the article, but usually I switch back to the original opener tab. There are plugins that try to handle this, but none of them works correctly.

Much faster start-up. One thing Firefox has learned to do is load only the pinned tabs and the focused one when restoring a browsing session. This means that I can close my browser at any time and go back to it as it was, without having to wait something like 5 minutes for it to become responsive. On top of that, even without many tabs Chrome starts much more slowly; on a few-years-old computer (Core 2 Duo with 4 GB of RAM) Chrome starts too slowly (mainly because it tries to read so many things from the disk, I think). Once it starts it is okay, but this means I have to keep it preloaded all the time for it to be as responsive on start-up as Firefox. No thanks.

Omnibox completion. This one sucks big time in Chrome. In the beginning Firefox used two inputs: one for the address (address bar) and one for the search (search bar). However, for many years now one has been able to simply remove the search bar and use the address bar to search Google (or any other search engine set as default). On top of that, it uses your bookmarks, your history and your recent searches, and it learns to be much more helpful than Chrome. Chrome, for example, almost never uses my bookmarks to help with my typing; it is limited to only 5 entries, and usually most of them are Google suggestions. On top of that there is no easy way (panel or otherwise) to search your bookmarks quickly. Chrome has those "apps" installed that are just links, but they are not configurable. For example, I use InoReader, but their 'app' icon uses http. I want to use https for the service, but even if I add it as a bookmark Chrome never completes it for me. I have to navigate my bookmarks and find it. Way too frustrating.

Pinned tabs. In Firefox you cannot accidentally close a pinned tab. In Chrome pinned tabs are not really pinned; they are just visually reduced to the size of the favicon.

Exposé-like functionality for sessions. Okay, this one is really interesting. It allows me to have several tab-filled views in the same Firefox window. I had three, and kept them there for months before I had the time to read them all, and the browser remembered them. In Chrome I am forced to bookmark those and mark them as "read it later". But those should not really be bookmarks. Note that this cannot be simulated with several Chrome windows, because I cannot close all windows at once easily.

Often used web sites: Chrome and Firefox can both display thumbnails of your most often visited web sites. What Firefox allows you to do on top of that is pin your favorites, and thus have quick access to the ones you think are most useful rather than the ones you actually visit most often. That is to say, the tool is versatile and you can use it as you like, not as someone has decided for you.

Superior plugins. At this point I have only one in mind, but it works much, much better in Firefox than anything close to it in functionality in Chrome: ScrapBook. In Firefox the extension uses a nice side panel, has search and, most importantly, saves your documents at a configurable location, for example in your Dropbox folder. The Chrome variants use IndexedDB and cannot sync between several computers in any usable way. And this tool has proven so useful that I simply cannot forgo it.

Better support for themes. You can make Firefox look like Chrome. You cannot make Chrome look like Firefox.

You can force sync. In Firefox the sync button is exposed and you can force a sync. In Chrome, if you make a change right before you close the browser, it might not be propagated to the server. This is especially bad given that a bookmark has to be used to mark a web page for later reading (see above).

Last tab closed behavior: In Firefox, when you close a tab and it happens to be the last one, the browser does not exit; instead a new tab replaces it. This is also configurable. The problem is that I am just closing a page; I should not have to pay attention to whether it is the last one in the window. This is very irritating, especially when combined with the slow "fresh" start of Chrome (see above).

AdBlock(+): This almost never works as intended in Chrome. Maybe it is a developer problem, not necessarily a browser problem, but still, as an end user it irritates me.


If I think hard enough I can come up with several other reasons, but I think this suffices to say that I am a bit on the 'advanced' user side. Chrome appeals much more to those who do not rely so much on their browser to comfort their browsing for whole days/nights. I know people that use it very happily, do not have even a single bookmark, and most often do not even know which browser they are using. This is all fine as well; I know Firefox is not for everyone, and neither is Chrome.

What do I miss from Chrome when using Firefox?

As much as I love Firefox, there are aspects of Chrome that I like and miss when using Firefox. A list of the most missed features follows.

Open as window app. Using the 'app' icon you can configure a URL to open in a new window as a separate app. It is especially useful for apps that do not use popup windows and are examples of the so-called fat clients or rich apps: Gmail, InoReader, YouTube (especially now that it does not reload the page) etc. The window manager picks up the favicon as the application icon, and you have something more like a real application on your desktop.

Task manager: Sometimes while browsing I notice that my fan spins up to its highest speed. Chrome allows me to see the CPU/memory usage of each separate tab. Well, this is not entirely true: it shows me tabs grouped by processes, which is not the same as per-tab, but nevertheless it allows me to narrow down the offender. On top of that it can give useful information from a developer's point of view.

Offline apps with escalated privileges. I do not use any one in particular that could not be implemented well with IndexedDB and AppCache (that is, one requiring access to hardware outside of the standard web APIs), but still it is good to know that it is there if you need it.

Chrome makes features available faster than Firefox. Examples include speech synthesis and recognition, the File API etc.: things Firefox users have to wait much longer for.

Chrome has more horsepower when it comes to raw JS performance: For most apps JS runs smoothly in both browsers, but computation-intensive apps tend to be more responsive in Chrome. One such example is Google Spreadsheets. I have spreadsheets with thousands of records, tens of sheets and thousands of formulas, and those run around 20% faster in Chrome. I am not sure if it is because of the browser or because they were fine-tuned for Chrome, but it is a fact.

This is again not everything, but what comes to the top of my head: the features I miss the most in my day-to-day browsing.

I am really pro-Firefox for several ideological reasons as well, but I am not too optimistic about its future. As a company they have dedicated too much effort to Firefox OS, a project that is born dead in my eyes. On the other hand, Google uses Chrome to make it possible to create apps that run fully offline on the desktop and on mobile (packaged apps), and eventually this will make them a leader as a target platform for development, while Firefox will remain a marginal "proof of concept" platform with no adoption outside the geek community, essentially a waste of time and money. I know that Mozilla is non-profit, but making useful things for the users is IMO more productive than making a very limited-use OS.




December 29, 2013

The problem with on-line apps

Okay, I have to admit: on-line web apps are absolutely great. You can use any computer, and if you remember your password you can use the tool from anywhere in the world; you don't have to bring your own laptop, you don't have to worry about backups and disk failures, and you are always using the latest and greatest version of the software. Isn't it great?


Well, it is great for as long as it lasts....


One after another, companies and start-ups fail, or are bought and/or merged into another (mostly commercial) product, and with little to no notice are closed down. And it happens very often, more often than you might imagine.

Let's take Google Reader, for example. What a great service it was. And it was closed down, regardless of the protests, regardless of the on-line groups opposing it. Vuru, a financials service, is also closing now, with only a month's notice. Half of the applications I have ever installed in Chrome no longer exist. Half! And the other half are giants like Facebook, Google Mail etc. And even they can decide to close down at any given moment. As soon as they decide that something is no longer relevant or profitable, there is nothing stopping the company from closing the doors and windows, and there is no way you can argue with that.

So what does that mean for you? Let's imagine this: you have worked for several days, maybe even weeks, creating the ultimate application in Google Spreadsheets. It is NOT portable, because you used all the excellent services provided in the scripting environment, so basically your sheets are useless outside of this company's environment.

But Google is almighty and great; they will not close down. Oh, but they will: sooner or later they will shut down services that are not profitable for them. And then what?

I will tell you what: you are on your own. Currently there is no way to write those scripts portably so that they will work in any other known environment. I have a data collection and analysis tool written for the Google Spreadsheets application, spanning 24 files. It is working very well indeed, and it is hosted for free for me etc. But what happens when Google decides that the new (improved, by the way) spreadsheet application drops support for some of the services? Like they actually did? Can you stay on an older version so all your stuff will continue to work? No! Can you take your application elsewhere? No! Can you do something about it? No...

So next time you agree to fully rely on a free or paid on-line web application, remember this: you have absolutely no control over your data. And if this is not a problem for you, think about this: you have absolutely no control over the software. And if this is also not bothering you: you have no control over the availability of this software. It can disappear in an instant!

I for one am thinking about it. The problem is this: 6-7 years ago, when Google started this "the web is the platform" bullshit, it sounded like the end of an era: everyone can put their application on-line, reach any user on any device and perform outstandingly business-wise. Yeah... and now each and every one of those excellent performers holds you in a grip firmer than any of those before them. Yes, I am talking about Microsoft. The "evil" they were / are, they would NEVER make you lose your application. You had a choice: stay with the older version and protect yourself the ways you can (intranet, firewalls, whatever), or upgrade, pay anew and once again become master of your applications. Now this choice is gone and you are completely at the mercy of those companies. Think about that. You do not own anything: your data, your software or its availability. You control nothing and have no rights.

So congratulations to all you on-line service and app users. You have been schemed out of your digital hold on things. What will happen next? Who knows. But the next time someone starts praising to me the advantages of the cloud and web apps, I might just kick him hard.