
July 27, 2015

Request Animation Frame for high performance applications

Disclaimer: This is a document I recently wrote for the technical blog of a SaaS company operating in the USA. I got permission to post it here as well, and that is what I am doing.
Disclaimer 2: The code is based on the implementation of the same approach found in the open-source Closure Library.



As per a recent publication on optimizing drawing operations on a canvas element, one of the recommendations is to use a single requestAnimationFrame (RAF) loop and register the callbacks that should run right before the browser performs its next paint.

In a less demanding application a simple queue-based solution should be sufficient to bundle the callbacks to run when the browser is ready to draw. This, however, does not solve the main bottleneck of the RAF approach in the first place: the callbacks might be causing a lot of layout requests and layout thrashing without knowing about each other's internals, thus making the frame (the pre-drawing execution) much slower than it needs to be.

Another possible problem with the default approach (i.e. calling

function myCallback() {
  // potentially closes over variables in the enclosing scope
  requestAnimationFrame(myCallback);
}

inside the body of the callback) is the creation of closures on each execution. When multiple such occurrences are live in the application, this might end up creating too many memory allocations every N-th frame and thus trigger garbage collector runs. This is a potential problem because several generations of callbacks have already passed, so generational garbage collection might not be effective enough, and a middle-sized app might end up spending more than 40ms in the GC phase while inside a RAF callback. As expected, this delays the call for the next frame, and the user perceives it as a dropped frame.
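The create-once pattern avoids this: the callback is defined a single time and re-registers the same function reference on every frame. A minimal sketch, with a stubbed `raf` standing in for `requestAnimationFrame` so it can run outside a browser:

```javascript
// `raf` is a stand-in for window.requestAnimationFrame; in a browser
// you would use the real function and delete this stub.
var pending = [];
function raf(cb) { pending.push(cb); }

var frames = 0;
function tick(timestamp) {
  frames++;
  // ... per-frame work goes here ...
  raf(tick); // same function object every time - no closure is created
}
raf(tick);

// Simulate the browser firing three consecutive frames.
for (var i = 0; i < 3; i++) {
  var cbs = pending;
  pending = [];
  cbs.forEach(function (cb) { cb(Date.now()); });
}
```

Because `tick` is a top-level function, re-scheduling it allocates nothing per frame.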

To mitigate those pitfalls we can devise an approach that has the following characteristics:

  • make it easy to schedule work for the next frame
  • make it easy to only do work once per animation frame, even if more than one event fires that triggers the same work
  • make it easy to separate and do work in two phases to avoid repeated style recalculations caused by interleaved reads and writes
  • avoid closure creation per schedule operation

To achieve this we need to be able to:
  • create the callback only once and call it without using closures
  • create the callback in such a way as to allow ordered execution in two contexts: read from DOM and write to DOM for each callback

Because the callback is now separated into two phases, we also need an object to represent the 'read' state and to be passed on to the 'write' phase. For example, we want to read the offsetHeight of an element and perform some calculations with it. After that we might need to update the style/positioning of an element based on those calculations.
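The two-phase execution can be sketched roughly like this (the names `Scheduler`, `schedule` and `runFrame` are assumed for illustration, not taken from any library):

```javascript
// Within one frame, every task's 'measure' (DOM read) runs before any
// task's 'mutate' (DOM write), so reads and writes never interleave.
function Scheduler() {
  this.tasks = [];
}
Scheduler.prototype.schedule = function (task) {
  // Each task is queued at most once per frame.
  if (this.tasks.indexOf(task) === -1) this.tasks.push(task);
};
Scheduler.prototype.runFrame = function (timestamp) {
  var tasks = this.tasks;
  this.tasks = [];
  // Phase 1: all reads record into each task's state object.
  tasks.forEach(function (t) { t.measure(timestamp, t.state); });
  // Phase 2: all writes consume the recorded values.
  tasks.forEach(function (t) { t.mutate(timestamp, t.state); });
};

// Usage: two tasks scheduled in the same frame.
var scheduler = new Scheduler();
var order = [];
var taskA = {
  state: {},
  measure: function () { order.push('readA'); },
  mutate: function () { order.push('writeA'); }
};
var taskB = {
  state: {},
  measure: function () { order.push('readB'); },
  mutate: function () { order.push('writeB'); }
};
scheduler.schedule(taskA);
scheduler.schedule(taskB);
scheduler.runFrame(0);
```

Batching all reads ahead of all writes is what prevents the interleaved read/write pattern that forces repeated style recalculations.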

As already known, the fastest way to perform a task is not to perform it at all. The same principle applies to animating with RAF: if you can avoid a job - avoid it. An extension of this principle is 'pre-calculation': use the time between the triggering of a new animated interaction and the actual start of the animation to pre-calculate any values you might need while the animation is running. An example is pre-calculating dragging thresholds that might trigger an action: instead of calculating the potential next threshold while the dragging is in progress, you can pre-calculate all thresholds in the visible area and use them for comparisons while animating the object the user is dragging. This also avoids memory allocations while the animation is running: creating one array of threshold values up front is better than creating a new value after each threshold. Basically, treat this code path as critical and make the code as static as possible: pre-allocate all memory you might need (e.g. a new array with the exact length you expect to use in any calculation involved) and pre-generate the actual animation code.
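The threshold example can be sketched like this - a minimal illustration of the idea, not production code:

```javascript
// Pre-calculate all drag thresholds in the visible area once, before
// the interaction starts.
function makeThresholds(visibleHeight, step) {
  var count = Math.floor(visibleHeight / step);
  var thresholds = new Array(count); // pre-allocated at exact length
  for (var i = 0; i < count; i++) {
    thresholds[i] = (i + 1) * step;
  }
  return thresholds;
}

// Called from the animation frame while the user drags: no allocations,
// just comparisons against the ready-made values.
function lastCrossed(thresholds, dragY) {
  var index = -1;
  for (var i = 0; i < thresholds.length; i++) {
    if (dragY >= thresholds[i]) index = i;
  }
  return index;
}
```

All the allocation happens before the drag begins; the per-frame code only reads.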

Now let's take a look at one possible implementation of this approach: https://gist.github.com/pstjvn/f0197e09381eb346160b

What we did is define a single function, available in the global scope, that can be used to create callbacks for event handling: callbacks that can be used as regular event handlers and still work in sync with the browser's drawing operations.

Let's see an example:

var el = document.body;

var task = window.createTask({
  measure: function(ts, state) {
    // Record if the document is scrolled and how much.
    state.scrollTop = this.scrollTop;
  },
  mutate: function(ts, state) {
    if (state.scrollTop == 0) {
      // Remove the header shadow.
      el.classList.remove('scrolled');
    } else {
      // Add a header shadow to indicate that there is scrolled content.
      el.classList.add('scrolled');
    }
  }
}, el);

// The handler ignores the parameters passed to it.
el.addEventListener('scroll', task);


While contrived, this example demonstrates the power of this approach: a single function is created once for each RAF task and is reused as a regular event handler. Internally the work is synchronized with the browser's drawing scheduler, and layout thrashing is prevented, assuming you avoid mixing the measure and mutate operations and correctly separate them into the corresponding functions. The existing implementation does check that your calls only measure in the measure phase and only mutate in the mutation phase, but ultimately it is the developer's responsibility to use the tool as designed.

What can be improved in this example? One might add a new property to the state that keeps the last value and only assign classes when there is an actual change. In this case the gain is negligible, since modern browsers avoid re-layout when 'classList' is used and no change is detected, but it might be a real gain in other use cases.
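That improvement might look like this; the `wasScrolled` field and the stub element are assumptions made for the sketch, not part of the original example:

```javascript
// A stub element stands in for a real DOM node so the sketch runs
// anywhere; in the browser `el` would be the actual element.
var writes = 0;
var el = { classList: { toggle: function (cls, on) { writes++; } } };
var state = { scrollTop: 0, wasScrolled: null };

function mutate(ts, state) {
  var isScrolled = state.scrollTop !== 0;
  if (isScrolled === state.wasScrolled) return; // unchanged: skip the DOM write
  state.wasScrolled = isScrolled;
  el.classList.toggle('scrolled', isScrolled);
}

mutate(0, state);   // first frame: writes the class state
mutate(16, state);  // same value: write skipped
state.scrollTop = 120;
mutate(32, state);  // value changed: writes again
```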

An optional improvement to the implementation is to allow task creation to also accept an instance with a certain shape, thus avoiding garbage when reconstructing the state during animations. For example:

function MyState() {
  this.scrollTop = 0;
  this.wasZeroPreviousTime = false;
}

Now one can create the state instance when creating the task and have a completely static task representation, state included.
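A hypothetical sketch of that usage - the third `state` parameter and the simplified `createTask` below are assumptions for illustration, not the gist's actual API:

```javascript
// Simplified stand-in for the real createTask: runs measure, then
// mutate, against the externally supplied state instance.
function createTask(spec, el, state) {
  return function handler(timestamp) {
    spec.measure(timestamp, state);
    spec.mutate(timestamp, state);
  };
}

function MyState() {
  this.scrollTop = 0;
  this.wasZeroPreviousTime = false;
}

// The state is allocated once, up front; nothing is allocated per
// event or per frame afterwards.
var state = new MyState();
var task = createTask({
  measure: function (ts, s) { s.scrollTop = 120; },
  mutate: function (ts, s) { s.wasZeroPreviousTime = s.scrollTop === 0; }
}, null, state);

task(0);
```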


Conclusion


When developing large and highly interactive web applications with JavaScript, one might often be tempted to take the short road and write code in an intuitive way in order to accomplish programming tasks faster. JavaScript is notorious for giving developers enough flexibility to do just that. However, flexibility often comes at a cost, and while the code is valid and runs fast on your desktop computer, one needs to consider the implications on mobile devices.

Finally, if you already have an application and you see performance penalties, do not blindly rewrite your code: always measure, deduce where the potential for improvement is, and only then start refactoring.

The approach presented in this article is a tool, not a complete solution for all your intensive animations, but it is a good one and should be considered when applicable.

January 12, 2015

The confessions of a chromebook user

Here is the deal - I am sick of the Chromebook!

I know I will get flamed even after that first line - not because I have no right to be mad at those machines, and not because they are perfect and I am wrong; it will be because of the new fan-boy bandwagon that has been rolling recently - the Chromebook / ChromeOS core fans.

It is understandable - after so many years in the grip of Microsoft and Windows, after so many years of Linux being the idiot child of the bunch, after so many OS disappearances (even today I read about someone testing Solaris... what?). And then there is OSX as well, the brave saviour of us all that never was - in its latest incarnations OSX is so buggy that I often hear the once Windows-branded joke, "it needs rocket fuel to run at normal speed", repeated over and over again, now that 8GB of RAM is the minimum required if you want to browse the web in more than one tab or actually do some work on your computer.

And there it is - the sunshine, the little gem that Google decided to gift to the world for unknown, or at least unclear, reasons: ChromeOS - a stripped-down version of Linux that runs its own graphics stack and has Chrome on top of it.

I can whine and whine for days about the issues with Linux, and even more - months - about the stupidity collection labelled OSX, but this is a post about ChromeOS.

I knew I was not going to get out of this without whining in the end, but there it was - a cheap (okay, okay - affordable) laptop promising to never bother me with updates, viruses, hardware incompatibilities, etc...

What I did not expect was that even their own (Google) services would become impossibly slow to use on those machines in less than a year! And don't start with 'powerwash' or 'try in incognito mode' - this is bullshit! I want to have AdBlock and still be able to use the damn site, and I would NEVER use YouTube if I had to watch an ad before each fucking video there!

Google+ is now so slow I can actually see the re-rendering when new data is loaded. Spreadsheets - I can take a small nap while the initial load runs. Hangouts - I actually need to kill it from time to time, because I consider half a gigabyte of RAM used by a chat service obscene!


Was this whole thing a plan to 'revive' the dying PC market? Or are we too stupid to remember the rise and fall of the 'netbook'?

Either way, ChromeOS might be gaining 10-20 points in some benchmarks now and then, but those devs at Google are making sure that each week the apps become more and more demanding, even when they present no new features (like, for example, using paper elements in Drive - WTF!). At the end of a full year of ownership you end up with an almost completely unusable device: it is too slow to handle new apps, it is unsellable (unless the buyer is a fanboy who does not care whether the device can actually be used for anything), and it is untweakable. At least old Windows laptops can be sold to Linux nerds who can still use them - but who would buy a device with 16GB of storage? It's not 1999 any more; that was the last time I was able to sell a PC with only 4GB of storage.

So for 1/3 of the price you actually get 1/3 of the usable lifetime and 1/3 of the functionality of a regular laptop.

I guess one gets what one pays for.

So I have only myself to blame for my stupidity. Once I had a thinkpad - the latest business model, ultra portable (it's funny how almost all laptops today are more portable than the then 'ultra portable') - and I paid a premium. I made a mistake.

Now I have a chromebook - I paid a really small amount, and again I made a mistake.

I guess Buddha was right - the true path is in the middle.

And if you are wondering - yes, I have tried crouton. On paper it sounds great, but have you tried running it for more than a day? All the storage is eaten by the swap created for it. ALL OF IT! It is complete crap! Even if I leave it in the background doing nothing, it still eats up all the (oh so large) free space on the SSD.

Also, I am sick of not being able to play AC3 sound. I cannot choose what people send me.

If after all this you still consider buying a Chromebook as a second laptop, ask yourself this: why do you need a second laptop? Did you get bored with the first one and need something else to sit on your lap?

At least this is what I feel right now. Might be the chromebook, might be the pain from the oral surgery I had today. But I will definitely not buy another chromebook in the foreseeable future.

January 1, 2015

Dart (a revision at the end of 2014)

In the past year I have been using Dart on and off, then back on, then back off, and I wrote a lot of articles about it (mostly critical, but hey, this is who I am and this is who most people are - most people tend to emphasize negative impressions, while the positive ones are accepted as 'the normalcy').

As with many other Google-related things, the often missed truth is that a lot of work and a lot of consideration went into the idea long before it became a project, and still before we even heard about it.

Dart is not an exception. If you have been watching the published talks carefully, you will easily notice that even now, long after the language has reached 1.0 and has been publicized as 'stable' and 'ready to use', there are still things that are not clear, different project members want things done in different ways, and at the end of the day not everyone is happy with how things went.

As a side note, I think that Google should not force people to go to conferences, because they (those forced people) tend to express their opinions in an agitated state, and that makes a very, very bad impression on the state of affairs of an open source project - especially if you are putting your neck out advocating for it inside your company (or even worse, when you are a startup or a young company and need to make a technical decision that can cost you a lot in the short and long term). I have seen at least 3 such videos - the first one I have talked about before, and it mostly concerns AngularDart. The last two I have seen only recently. One of them basically starts with 'I have picked the short straw, so here I am to talk to you about Dart...' and finishes with 'So here it was, I talked to you about those things that I know nothing about, do you have any questions?'

The second one I have in mind is given by Gilad Bracha, and in it the negatives are fewer, but they still creep in.

Why do I mention this? Well, when you go to present something at a conference (or wherever), you are expected to be enthusiastic about it; you do not go there to say bad things about what you are presenting, right? Even when it is really a stupid, useless piece of moronic code that you made in your free time and no one will ever use (like some of the items presented at HTML5Conf - omg, I feel like this was the worst conference ever, full of really badly prepared presenters and really badly made software). But still those people talk like their thing is THE thing - it almost cures heart disease and cancer. They are enthusiastic and positive about it, and even when almost everyone leaves the room after 10 minutes of their presentation, and there is this irritating guy in the front row who keeps interrupting and asking real questions that they have no real answers for, they keep going, explaining to everyone (who did not leave) that this is it - the next big thing.

Putting that aside, the reality is as follows: it takes too much effort to write a real-world web application. Way too much. There is no standard toolkit. There is no usable/fast way of creating UI/UX. If you go for a framework, you are basically stuck with it. We have seen time and again (now even Google acknowledges it) that Polymer is way too slow for real-world usage, so it is a no-go for now. The mastodon libraries were great once upon a time, but they are now crushed under their own weight - they are so big and so hard to write for that they simply cannot keep up with the evolving requirements of client-side UI/UX any longer. If you choose to go with one of those, you are basically stuck with the state-of-the-art UI of the year 2000.

But even if you close your eyes (to the mobile-first etc. UX), you still face a ton of issues: incompatibilities between the libraries (have you seen projects where the UI is done with one library, the graphs with another, yet more dynamic things with a third, and so on, and each of them provides its own way of doing things?), then the minification (I know it is not a word; even the spell checker thinks so), and then the bloat (no matter how much you minify, at the end you still push 600KB to the client over a mobile connection, and this is only the script - let alone the images and styles, and then the live performance... oh my...).

So I think most people agree that web development is super complex today.

Where do you even start to learn if you are new to this? Do you start with simple scripts (news flash: that is not really helpful if you are then buried in half a million lines of code of an app)? Or do you start with a coach, who will have to explain exactly how browsers work today: how they fetch resources, what is blocking and what is not, when painting starts, what re-layouting is, what the link between JS objects and DOM objects is, why you should batch your reads and writes to the DOM, and so on and so on. Even a super-intensive course on all the topics (and brace yourself, we have not even touched SVG, canvas, WebGL, audio processing, server side, noSQL DBs, etc. yet!) would take weeks.

How much does it cost to train a new developer (not necessarily one without experience, but one new to the client side/JS)? JS is really not just the language; it's the DOM APIs as well. JS without them is not that bad - you can write simple apps in it and run them in node. On the other hand, if you are classically trained in C++/Java, JavaScript has so many underwater stones that you will be productive in... at least 6 months, I dare say. I mean really knowing what you are doing.

I have worked with people coming from the back-end to the dark side (the front end): it's not the event loop that trips them up first; it's the fact that you have multiple entry points into the same application. The fact that if you change the order of script tags, things tend to break badly. The fact that you are declaring things, but some are immediately executed while others are not. And finally, the fact that you cannot just do your work in a loop - you have to break things into smaller pieces, because otherwise you kill the UX.

All this and much more needs to be learned before a developer can even start (let's say, to port his or her ideas from the mental model used on the back-end to the front-end coding arena).

Now about Dart.

They have gotten some things right, and mind you, these might seem like trivial things to seasoned JS developers, but they are super important to people coming from other languages and from the back-end.

Single point of entry for your application: you cannot believe how much of a help that is! I have seen it with my own eyes, and now I am a believer. Well, this is nothing new - some frameworks/libraries have been doing it for a long time, but most do not provide any guidance about it. Now compare this to multiple 'onload' handlers and multiple immediately invoked functions spread over tens of files. If you already know the internal design of the application, it might not sound so bad, but usually you do not. And it is bad!

Built-in libraries: I do not even know how to compare the Dart (and most other languages') libraries with the way things work in the browser. I have worked with a young designer who has been learning JavaScript for some time now. He still does not get the order of execution of JavaScript libraries - well, because the language does not provide a built-in way to require another library, and most libraries do not even bother. Take a UI plugin for jQuery: it requires jQuery - that is simple - but then the docs say it requires another plugin, and instead of requiring it at use time, it requires it at creation time (i.e. the required plugin needs to be present when the new plugin is registering with jQuery). This is a hard dependency. Now imagine how many times I have been asked to resolve a "problem" that turns out to be incorrectly ordered script tags. You might say 'oh, but this is because the plugin is written in a stupid way', or even worse, 'the guy is stupid, he should read the documentation'. The truth is different: JS simply was not designed to handle this kind of usage - dependency tracking and handling was never part of the language, and everyone does it differently. I, for one, like how Closure does it, but this is just my opinion. I have used AMD as well. I have also used plain script tags... but JS does not provide a standard way of doing this sort of thing, and while we as JS developers may be dazzled by its flexibility (to implement what is missing), for new developers this is off-putting in immense ways. Yes, this problem is solvable, but do we really need to solve THIS problem?
Yet we need to solve it with every new project, and even worse, in the middle of a project we often need to re-solve it, because not all libraries comply with the solution we have chosen - be it browserify, AMD, CommonJS, Closure Library or whatever other solution you can think of - just because all of a sudden we require a new dependency that does not match our dependency solution. Let alone the fact that libraries tend to solve this same problem in their own ways, 99% incompatible with the rest of the world... Well - Dart solved this for us! Once and for all. There are cases where it does not work perfectly (for example, when we need to inter-operate with JS land in our app, we still need to manage those dependencies on our own; this is also why I am advocating for a rewrite where possible and doable within the project, because I want to get out of the JS grip and use modern tools for my projects. Even if I chose TypeScript - which I have done once - the external libraries still need to be loaded. And yes, I know about concatenation; guess what, order still matters!)

Fixed-up DOM APIs: This is somewhat of a double-edged sword. On one hand it does simplify DOM interaction (it makes it look more natural and closer to the language, as opposed to how JS handles it - but of course that is purely the fault of the committees and spec creators). On the other hand, it does not always come at zero price: I have written about that as well; for example, in some (maybe older) version of Dart, the image data arrays were 'manually' converted to regular arrays in the generated JavaScript, which had a really big penalty (mostly GC pauses, but CPU as well). This was of course a lack of consideration in the implementation of the dart2js process; I hope it has been fixed, and if not - it should be. Nevertheless, the benefit of having a consistent API outweighs the edge cases where it comes at a performance cost. Depending on your project, you might need to consider what you are doing more carefully than usual. This is, I think, one of the weak sides of the html package - it makes things so easy that you sometimes forget there are performance costs you need to account for. Again with an example: I have seen people I work with who had to 'translate' what jQuery is doing and then search Google for how to do it. This 'translation' work is terrible, and I think no one should really do it! Consistent APIs for the win. And this is not only my point of view - there have been lots of talks where regrets were shared about how many DOM APIs were designed without really looking at anything else on the list, and how different they look from one another.

Types: Types, in combination with generics, and the fact that they are optional, make the language approachable by a much larger audience, imo. I know that the type information is completely ignored at run time, but if you are a 'typed' developer you don't care; you still consider it essential. Types in Dart are also somewhat non-ideal (at least for me) for one single reason: I am so used to union types from Closure/TypeScript that I basically want to make each and every method accept a union of several very different things. Just an example: how about a union of string, Node and NodeList - this could be, say, the content of an element. Also, enumerations arrived only recently and are still experimental. But still, the advantage you get from using types - and more precisely from the work the analyzer does on them - is simply fantastic! All of a sudden you can use a much larger code surface without actually knowing it or having used it before, thanks to type inference and the inline help/completion you get. This frees your mind to be occupied much more with the aspects of your own code than with the API surface of the supporting libraries. It simply enables a JavaScript developer to 'know and do more with less'. I know it sounds like a cliche, but it is the truth. Want to give it a try? Simply try porting complex code to Dart. First you will notice that a lot of JS code is 'tricks' (like turning a NodeList into a real array so that, while iterating, we can remove some of the items and then make a new iteration over the shortened list - and many other such examples where we as developers spend too much time working around inefficiencies in the language or APIs instead of actually working on our ideas). One other thing missing from Dart's types is the so-called 'non-nullable' types.
In typed variants of JS you can state that a value (especially a function parameter) can never be null, and then you save on type checks. Not so in Dart: because there is no such annotation, the users of your code are allowed to submit null as a valid value for every type, and the type checker will not warn them, so you have to check the values for null... blah...
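For comparison, this is roughly what union and non-nullable annotations look like in Closure-compiler-style JSDoc; the function itself is just an illustration, not from any library:

```javascript
// `string|!Array<string>` is a union type; the `!` prefix marks a type
// as non-nullable, so the compiler rejects null at call sites and the
// function body does not need a null check.

/**
 * @param {string|!Array<string>} content A single value or several.
 * @return {!Array<string>} Always an array, never null.
 */
function normalizeContent(content) {
  return typeof content === 'string' ? [content] : content;
}
```

The annotations are erased at run time, just like Dart's types, but the checker enforces them at compile time.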



Not all is perfect in Dart either. As with many other languages and tools, it has some darker sides. One of them is that it works well enough only in the Dart Editor. While it is supported in Sublime Text, for example, the support is very limited and does not provide the great experience you get with the Dart Editor. Also, the attempt to bring the same productivity to the web does not, for now, deliver what was expected. The implementation (Chrome Dev Editor) is really a crappy experience compared to more robust and seasoned applications. Even Vim performs better with Polymer (because of how it can handle matching tags) than CDE does. CDE implements git, but only partially and only with some predicaments. It is super slow compiling to JS (so if you have a large Dart code base, you can take small naps between pressing the 'run' button and the result showing in the browser). They had some terrible performance issues with Dart/JS integration in the past (I am not 100% sure they are solved now). As a result, one of the largest user-facing projects written in Dart does not look so great and does not put Dart in a nice place, really. There is hope (at least in me) that this will change if the DartVM is bundled with Chrome, but this seems to not be coming soon...

Another pitfall is that pub does not allow un-publishing your packages. While this is done for a good reason, it makes pub pretty much full of dead code. Even worse is the number of 'crap' packages (just like in npm, you can easily get 10 packages that should do the same thing, but all 10 of them are useless or full of bugs, and there is no real review process in place, nor a way to know how many projects use a package in production and at which version). As time goes by, the number of really good projects is not increasing at the rate of the badly constructed ones. Having this in mind can make some of us (adopters) cry out loud for the times of monolithic, carefully curated libraries. One stark example is the Closure Library and related projects (again, usually coming out of Google) - old code is preferably not removed, just deprecated; all tests are run; everything is tested and performance-measured. With pub this means you need to do more work (reviewing a package and its performance on your own), but I guess this can improve once more people start using Dart (if that ever happens).

Another thing to consider is this: when Dart first appeared, it was twice as fast as JS, and the gap between generated code and handwritten code was very small. One year later, things look a bit different:

DeltaBlue: the Dart VM is almost back to the level it was at the beginning of 2014 (~5% improvement). dart2js has not moved at all, but v8 is now much faster (it started the year around the 400 mark and is now above 600).

FluidMotion: the Dart VM improved by 2 points. dart2js lost a bit, but is very close to where v8 is, and v8 is in the same place it was at the start of 2014.

Havlak: it appeared somewhere in the middle of the year, so there is no data prior to that, but v8 substantially beats the dart2js code on this test; as of last month, handwritten JS code is closer to the DartVM than dart2js is to v8. The really bad performance of dart2js on this test is a shame - really! Both v8 and DartVM performance improved this year, while dart2js has not moved at all.

Richards: somewhere at the end of 2013, dart2js and v8 reached parity and stayed that way for the whole year - no change there. The DartVM got better at this test by about 10%.

Tracer: I bet this is the favourite test of all Dart lovers - v8 got just a tiny bit better at it, but dart2js has always been faster, more so at the end of the year than at its start. The DartVM also made a great leap in this test this year.

Looking at those tests, an interesting picture forms: the DartVM is no longer (if it ever was) twice as fast as v8 - it is now about 20 to 25 percent faster. V8 made some great progress in 2014. Note that this does NOT represent advances in the other JS VMs, so if you have some free time on your hands, please do this: compare JS code to Dart-generated JS code in other browsers - please!

dart2js did not move much in 2014 and IMO is still pretty much a work in progress. One thing I cannot get from those tests is the flags used to compile the JS from Dart: for example, is it minified, is it type-optimized, are the boundary checks preserved, and so on. It would be interesting to turn on all possible optimizations and then try again. Why? Because Google has been applying type optimizations, inlining, virtualization and what not to its JS code for almost a decade, and with Dart it is possible to do the same. I wonder if it really makes the code run faster, as they say. It is known that the Closure compiler, for example, sometimes breaks hot functions (code inlining can break compilation to native-speed code in v8, either through type alteration or because the function becomes too large - larger functions are not compiled). This is what perf tests are for. There was a recent blog post describing ~14% increased speed in some code, so I guess it is worth a try. For this to work, however, you need to be really diligent about those type annotations, just like with the Closure compiler - which basically excludes some (actually most) pub packages...

This was all about CPU performance, which is fine and well; we see that the Dart VM is advancing little by little, that v8 is still strong and catching up, and that dart2js leaves much to be desired. But what about the other type of performance - memory?

Oh, Memory! There is a really good explanation of why memory is a much better topic to talk about when it comes to the DartVM vs. V8. Ask Gilad Bracha.

The whole thing is really simple: Dart works differently, and the internal representation of things requires less memory, which on its own is a big win - remember the talk about static JS? For a garbage-collected environment to work optimally (and fast!), you need to have ~8 times the RAM that your application requires at any given time. Then you can be sure that the GC will not interfere with performance while still doing its job. This is not limited to JS, but it is important for JS because apps are getting bigger and bigger; the JS heap sometimes reaches for the gigabyte mark, and yet most people do not have 8GB installed...

In the blog post about game development with Dart and StageXL, I mentioned that the memory profile of the exact same app run as Dart and as JS-generated code is very different, both in the way memory allocations accumulate in the game loop (mostly the fault of the dart2js transformation) and in the memory consumed as a whole. The JS code was 3 to 4 times more memory-hungry, made more and larger allocations, and the GC ran more often as a result. Back then I blamed dart2js for the whole thing, and I was also not sure whether the DartVM was reporting correct values; now that we have better tooling for Dart, it is proven - having a bigger app in Dart is better than having it in JS. This is why I really, really hope that the Dart and Chrome teams can finally integrate, and Chrome starts to ship with the DartVM by default.

Yes, I know that most people are not using Chrome (which is not really true - around 50% are using Chrome, but that is a whole other story), but even just for the sake of Android this would be a big win both for devs and users: first of all Android is usually more memory constrained (as are those Chromebooks!), and being faster means less CPU for the same tasks - even if it is only 20% less, it can still translate to an hour of battery life. But not only that - it would become easier to ship web apps with a native feel (UX). At HML5Conf there was a talk about why app stores should die. Well, they are not dying as far as I can see, but for some apps it is really bothersome that you need the cordova wrap just to be accessible for the sake of the store. Google is making some great progress in that direction with recent changes (separating tabs in A5 and so on). I hope the whole cordova thing can go away in 2015 and capabilities get granted to https-served apps without a real installation (sort of like service workers).

Yes, most apps still need to serve IE/Opera/FF/Safari. I for one need to continue to support those, but still, if things get much better on at least one platform the market will react, especially if great apps come out of it. If the users see the difference and feel the difference they will react. So hopefully 2015 will be the year. If not, next year is fine too, but not more... I am getting older after all:)

Happy new year to everyone and may all your dreams of virtual machines and fast code come true!

November 02, 2014

Little red ranting hood... in the JavaScript forest.

This is my rant about all the 'fixing JavaScript' "effort" that has been going on for some time now. The post is pretty long even for me and it really is a rant I do for fun sometimes; in fact I have used or still use some of those technologies, so don't get your torches and head to my house yet...


CoffeeScript


First there was CoffeeScript. Oh my God - what a brainfart it is! So, you take the JavaScript shitty way of doing things, then you slap Python on top of it, and then, even better, you slap 'custom passes' on top of that. So in the end you write in a language that no virtual machine understands, that no online editor can help you edit, and with files that no JavaScript developer ever wants to 'figure out', because your syntax is way off from the language you are actually targeting while at the same time it is exactly the same language with messed up syntax. There are no 'CoffeeScript' developers, those are just JavaScript developers with an aneurysm. And you cannot run your code (oh yeah, I know about 'compile on save' - so I have to ask you, which will be faster: 'reload on save' or 'compile on save'? Wrong question, you cannot actually do that either, you have to reload on compile...).

Now some people who like to call themselves experts in JavaScript, Closure (and, surprisingly, in something that does not really exist - CoffeeScript (OMG, there are people who teach this!!!)), would even go further and add a pass that converts to 'closure compiler compatible' JavaScript. Let's review - you write in something that is fictional, then translate it to something that is halfway there, then you have to compile it (if only to check types), and then you have to test both the development mode (source files that are actually not sources but translation results) and the production mode (translation results, compiled). Let's just say that this might get a bit slow and that it might be hard to set up this workflow in any kind of environment, let alone an online one..

But that is not all, ladies and gentlemen, it gets better - those same sick people are now "helping you" when you have a JavaScript question with code snippets written in CoffeeScript!! Yes, like crazy people, they think you can see the dragons that exist only in their heads. Well, that's messed up...

Dart

Going further in time - Dart was born. A completely new language that is so much more powerful and faster and better and shinier. If only it could work in browsers... but it cannot. You are still debugging JavaScript in the end. There is (as with Coffee-whatever) virtually no IDE support for it; you are stuck with DartEditor (which you might find nice, unless you want code collaboration or online editing, or something that does not require a rocket-fuel-powered computer to run at acceptable speed). Oh, you say there is support for Sublime? Really? Have you tried it? The 'support' is basically a born-again approach from the end of the 70's. Yes, you read that right - 35 years aaaggggooooo! You save your file, then a script runs, then it collects the output of that script and presents you the result (in a surprisingly unhelpful way). There is no real intelligence in the completion - or there 'should be', but it is not working. Same goes for another brilliant project - WebStorm.

Assuming you are fine with IDEs written in Java (which most of us are not - after all I am developing on the web, for the web - Java exited the web like a decade ago and is slowly turning into a corpse on the server as well), the Dart 'support' in WebStorm is somewhere between unsatisfying and completely unreliable. Oh, and the last build is broken as well (mind you, the product costs MONEY!), so you will have to either downgrade or wait for a fix... because we forgot to include complete intellisense for Dart because... well, because on large projects we just like to crash the whole IDE, "but on the other hand we improved the start-up speed so you are on track", don't worry, keep paying...

Let's say you like the 70s; the music was great and the IDEs were non-existent, and real men wrote code with text editors. Like Vim. Like scrolling is not a real thing - right, it is much faster to type gg/goog^M^[jjjVapk:sort:w - right! Your hands do not have to move, like, at all. It would have been sooo great if we as humans had evolved without the lower parts of our bodies (well, maybe the penis should stay) and maybe... hum... little hands (like the ones found on T. rex) with lots and lots of fingers. Then maybe Vim would have been the ultimate and final - best of all, end of universe - editor. You could sit (well, you would always sit without the lower part of your body, but would it be called sitting?) all day long and use only your fingers; we would not need shoulders because we would not need to move our hands, we would just type and be really 'fast and productive'. I see paradise... but then Dart would have come... Well, the Dart story in Vim is sad, just like with Sublime Text. And that paradise would have been ruined. Just like Dart feels on Vim...

But otherwise Dart is great. It's like 'twice as fast' as JavaScript (well... on benchmarks; in the real world it is twice as slow, but don't be discouraged, we will get there - give it 15 years or so of evolution and I can almost guarantee you it will be as fast on the server as... Java). In the browser... YES, it is twice as fast there as well... oh well, not really, because v8 catches up and now it is only like 20% faster... on benchmarks. But we cannot really use it in the browser because no browser really supports it, and we have to compile to JavaScript, and in theory it should run as fast as or faster than hand-written JavaScript but... it does not. At least not when someone who actually knows what he/she is doing writes the JavaScript.

But it's fine; as the development story goes, it is really fun to write in Dart. You 'save and reload' just like in JavaScript. Well, not really, because you have to use Chromium, which is supposed to be Chrome but is not really, and has bugs that are not found in Chrome, so you have to figure out if the problem is your code (for CSS, for example) or the browser quirks. And then again you have to test in all browsers because, you know... it is not really clear if the generated JavaScript is working everywhere; it is supposed to, but... who knows. I really liked the video where the Dart developers explain how trivial the generated class code is, how it is almost a one-to-one transpilation from Dart to JS. Well, it is not. This is just a lie, and you can read the generated file all day and still you won't be sure how the result maps to your original code. At the end of day 3 you will figure it out eventually, but is it worth it? The problem is really simple - Dart does not compile to JavaScript idioms; it compiles to a JavaScript blob that internally emulates how Dart works. Trying to read/fix the compiled code is almost like trying to fix the output of a C compiler. Pointless. You have to go and 'fix up' your code (even though it works in Dart/ium) to make the compiler happy(er). If you are still on the Dart bandwagon at this point (and you are still reading the Dart passage) you might need a dominatrix in your life. There is a lot of internalized pain you need to let out...


TypeScript

Microsoft (I always liked that name, it is so 80s, like Sun Microsystems... omg!) did not want to be left behind in this new era of computing. They wanted to appeal to the new type of developers who like the web and want to stay on it as much as possible, and even develop inside of it (can you imagine that, William?). So how about we take a guy who at his peak did do some good stuff and let him create a 'superset' of JavaScript that looks like ES6 (but is not really, and cannot be run as such even if the browsers support it) and slap some type system on top of it. Well, the type system should really be fun; it should be something that is half useful, half making the JavaScript developers go grey and die off, so we can reignite the Windows era (if possible, please... pretty please...?). So TypeScript was born. Have you tried it? I guess you did!

The problem is that TypeScript tries to add types to functional code (i.e. not object oriented code) and presents it as 'safety'. There is however one simple catch: most functional code that already exists deals with JavaScript's lack of types in a funny way, so you have functions with 10 different signatures. I especially like the ones where the middle arguments are optional - you basically cannot infer the types based on their positions as arguments. This is especially funny in jQuery, where the number of arguments does not really imply the types of those arguments. Have you seen the definitions for jQ? What is really funny is that MS embraced jQ as the main library (kind of like a standard library in TypeScript) hoping to appeal to a larger audience, and as a result no one really writes OO JS in TypeScript; instead everyone writes functional style and then impales themselves on the type system and waits for a slow death. Does the type system in TS do something useful? Well yes, of course: it made MS learn NodeJS. But no, I will not rant about nodejs in this post; here I am only targeting the 'JS augmentations'. Another problem Microsoft did not foresee with TypeScript is that JavaScript developers rarely know anything about types.
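To illustrate why those signatures are so hard to type statically, here is a plain JavaScript sketch (the names are made up, this is not actual jQuery code) of a function with optional middle arguments that have to be disambiguated at runtime:

```javascript
// Hypothetical jQuery-style signature: on(events[, selector][, data], handler).
// The middle arguments are optional, so an argument's position does not
// determine its type - the function has to sniff types at runtime.
function on(events, selector, data, handler) {
  if (typeof selector === 'function') {
    // Called as on(events, handler).
    handler = selector;
    selector = null;
    data = null;
  } else if (typeof data === 'function') {
    // Called as on(events, selector, handler) or on(events, data, handler).
    handler = data;
    if (typeof selector === 'string') {
      data = null;
    } else {
      data = selector;
      selector = null;
    }
  }
  return { events: events, selector: selector, data: data, handler: handler };
}
```

A static type system has to describe every one of those call shapes as a separate overload, which is exactly why the type definition files for such libraries balloon the way they do.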

Even if they know something about OOP, OOP in JavaScript is a different story. There are (really fat, fat) books about OOP in JavaScript, and the fun part is that they present so many patterns and explain so many things that are not actually useful in practice for one reason or another - such purely academic studies of the language's capabilities and how far you can push them - that I have been asked questions taken directly from the 'check your knowledge' sections of those books on job interviews!

I really enjoy citing the author, book title and page where the question was taken from to the 'technical' interviewer, and then telling them that if their company is using any of those absurd patterns I will not work there. It always gets to them, distracts the 'knowledgeable' interviewer from you and wakes up some internal fear that they themselves are not good enough, which automatically makes you appear more knowledgeable. What do they say? Play the player, not the game?

Sorry for the digression. Where was I? Oh yeah, TypeScript. The one thing they got right (shorter syntax for classes) they killed with the type system. Every other negative side of CoffeeScript applies to TS as well; you still need to transpile just to test-run it. Fortunately the results are much more predictable, but the lack of real support from tools (other than Visual Studio) simply repulses most developers. I know developers who use TypeScript. Can you guess what their specialty is? Hang on - it's .NET! That's right, TS appeals mostly to people who already use VS... ah, the irony...

AtScript

This is the latest dumpling from Google. I like how Google is shooting in all directions in recent times; it makes them look really desperate. I also like it when they send someone to do a technical talk and then that someone says something Google did not expect, but the talk was recorded and ends up on youtube, only to 'disappear' a week later when they realize what has happened. If you wonder why I will never use Angular - well, they did such a tech talk. Yes, about that 'first large internal project'. First of all it was done with Dart, which is already a red flag; then the guy said that 'we had to rewrite/do custom versions of pretty much all directives because the ones that are built in were too slow for our use case' - that really pinpointed it. It really said "the idea is great on paper, but not applicable in practice". Well, the idea is so great they decided to reiterate on it, hoping that this time something will be done right. Finally some teams at Google talked to each other and someone said:

Sooo, we have those great things (decorators/annotations) in Dart that no one really knows how to use, but they look nice and give a legitimate use for the '@' sign, which is a pretty cool sign, right, so, you know, how about we slap it inside JavaScript as well. It should be fun.

And the other team said:

Cool, let's make some interns proud of themselves and give them something to live for, let them do a new language. But we will not call it a new language because of some jokes that are circulating around the Internet, something about problems and the number of solutions - don't look at me like that, I do not spend my time browsing comics, I DO work... sometimes, soo... okay, where was I, ah yes, let's do this 'super set' thing that Microsoft did. We will take es5, then slap es6 on top of it, then slap typescript on top of it, and then we will put the annotations on top of it.

Oh, oh, oh, wait wait, someone wrote an assertion library and management said if we do not use it for something they will fire the guy, and he is like a great and nice guy, so we have to use it for something, right, so he does not get fired... right, can we do that, pleeease!

At this point everybody is looking suspiciously at the person who said the last line, assuming a work related romance/bro-mance/gay-mance/whatever, but they decided to go with it anyway, and as a result we have a new 'super set' that has everything the poor old bastards at Microsoft had, but with annotations and runtime checks. Ladies and gentlemen, I present you AtScript. And because it was a talk between the Dart Angular team and the normal Angular team, management would be really happy if this new thingy were 'beneficial' for both projects, so let's make it compilable to both Dart and JavaScript. After all, we can always market it as a good thing...

So now we have to not only write JavaScript and an 'almost HTML, but not really', now we also have to compile it before we run it. Oh, oh, oh, and you know what, the closure team has been doing this angular pass for a year now, let's include them in the party, let's run the compiled JavaScript through another compiler, that would be fun!

Yey, Google is such a wonderful place to work at: you can get something and then mix it with something else and then talk to other teams and make really ridiculous things, and we can always expect people to like it, especially developers, because Google is such a magical word for developers; we can actually get away with anything if we figure out a way to make an internal project with it...

So now we have AtScript. Of course there will be IDE support... somewhere in the future... let's say we will make a bash script that will monitor files for changes and automatically compile them to both Dart and JavaScript, how about that? But that is down the road; for now you can happily use vim or anything else, because you see, there is really no such language as AtScript - it is 'a super set' and as such no tool nor IDE nor virtual machine really understands it, so it is simply a dumb text file. But you can use it, Google uses it, so it must be great! Oh, and by the way, we are not really compatible with es6, nor TypeScript, but we will get there as well, I promise!

Did I miss something?
Let me know in the comments:)

September 02, 2014

Riding the Polymer wave

A comparative analysis of the developer experience of creating a multi-screen, single page application complete with persistence, animations and rich client data management with filtering.

This piece will describe the development process of a single page application that harnesses multiple REST resources to compose a cohesive user application with multiple views, data filtering and persistence, using the Polymer library. The process is described as a direct comparison with the process of building a very similar application using the closure library and tools collection. The analysis is made as an effort to point out the pros and cons of migrating from closure tools and libraries to the modern Polymer project as a framework/tool set for building such applications.


The post is quite lengthy so if you are looking for a quick summary scroll down to the bottom of it.

I will start with some background information: I have been building large scale single page applications for the last 5 years. I have been using many different tools for that purpose, for one reason or another (some of those reasons being: the project had already been started with a particular framework that does not play really well with anything else, the project required a really small footprint as it would be served many, many times without caching, and others). Some of the tools I have used extensively include: MooTools (archaic, I know), jQuery (ridiculous, I know!), GWT (Java, I know), Closure library and tools (templates, CSS pre-processing, compiler) (really, really verbose, I know), TypeScript (just a toy, I know), Dart (not really what it is portrayed to be, I know) and others - on occasion.

From all of those, the most reliable and most trustworthy one thus far, at least in my use cases, has been Closure. Google really put effort into it. The main problem with Closure is that it was designed with several things in mind that kind of trip up JavaScript developers coming from any other framework/library:


  • designed to look and feel a lot like Java
  • designed to be production useful explicitly only with the Closure compiler 
  • designed to be compatible with old old old browsers
  • designed for internal use primarily and thus lacking the polishing of the other competitors out there


Of those, probably the biggest obstacle one could have with Closure is the fact that it is designed to actually use types, while JavaScript is designed not to use them. This is also why TypeScript is used mostly as a transpiler these days and much less as a compiler - types are not something JavaScript developers want/need to deal with.

Assuming the developer has the time and willpower to learn the basic ideas inside closure (the types, the fact that you actually have to be able to imagine everything the compiler will do to your code in order to write something that will work after compilation, the fact that you actually need to test both the source and the compiled versions) - those great ideas and the advantages they bring could become part of your daily coding routine.

However, even if you become the master of imaginative compilation and you are able to write code without errors (coming from the compiler), that will never mitigate the fact that to write closure compatible code you have to always be very, very verbose. Actually, you have to write so much boilerplate code that even Google attempted to come up with some shorthand versions (mostly I am talking about goog.scope). Those efforts are however nothing compared to the amount of code one needs to write to get going. This includes the type information (obligatory - the compiler is still not smart enough to infer all types from the code directly and probably never will be), the namespaces, and the fact that you need to include all used namespaces at the beginning of your file. Also, from my experience, it is very much possible to often have slightly overlapping functionality just because the design of a super class does not allow for some particular alterations (a property being important for your subclass but private in the parent class, etc. - no, you cannot alter the source code, and no, you cannot just access it - the compiler will scream at you, and who knows at which point the compiled code will stop working just because you are accessing something that is marked as private and the compiler can remove it as it sees fit...).
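To give a taste of that verbosity, here is a rough sketch of a small Closure-style class (the goog.provide/goog.require lines are shown as comments so the snippet stands on its own; the JSDoc annotations are the point - they are what the compiler type-checks):

```javascript
// In a real Closure project these two lines would be live code:
// goog.provide('myapp.math.Range');
// goog.require('goog.asserts');

/**
 * Represents an inclusive numeric range.
 * @param {number} start The inclusive lower bound.
 * @param {number} end The inclusive upper bound.
 * @constructor
 */
function Range(start, end) {
  /** @private {number} */
  this.start_ = start;
  /** @private {number} */
  this.end_ = end;
}

/**
 * @param {number} value The value to test.
 * @return {boolean} Whether value falls inside the range.
 */
Range.prototype.contains = function(value) {
  return value >= this.start_ && value <= this.end_;
};
```

Roughly half the lines are annotations, and that ratio holds for most Closure code.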

Part I: The good

Polymer is a new library from Google that attempts to provide opinionated but easy to use abstractions over the new Web Components standard. It promotes a declarative syntax for creating and using custom elements that should be compatible with all modern browsers (this includes the last two versions of all 'evergreen' browsers).

Polymer itself consists of a polyfill for the platform features that are missing in some browsers (actually the only browser that has an implementation for all web components specs is Chrome) and then the 'opinion' on top of it (polymer itself) on how to utilize those features to create developer friendly, fast to develop and easy to use custom elements and web applications.

First of all - instead of JavaScript you are writing mostly HTML: tags, attributes and properties. What you would write as a new namespace with instance properties you now write as an element (a polymer-element tag) and attributes on a prototype object.

Here comes the first new thing - in Closure we do not put primitive values (nor object types - like NEVER) on the prototype, because altering them after the instantiation phase changes the hidden class of the instance and thus potentially makes your code run slower. In Polymer it is encouraged, for readability reasons and to make it easier to understand what is going on. You can still set all properties in the 'ready' method, but the polymer implementation encourages you to use the prototype object of the element to define them, and they are used for type conversions (for example an attribute will always arrive as a string, but you can hint to polymer that you actually want a number and the string will be automatically converted). One can also argue that this saves memory, but unless you are creating thousands of elements this is not really important.
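The conversion idea can be sketched roughly like this (an illustration of the principle only, not Polymer's actual implementation): attributes always arrive as strings, and the type of the default value on the prototype serves as the hint for how to convert them.

```javascript
// Sketch: coerce a string attribute value using the type of the
// prototype default as the hint, the way Polymer's published
// properties behave conceptually.
function coerceAttribute(stringValue, defaultValue) {
  switch (typeof defaultValue) {
    case 'number':
      return Number(stringValue);
    case 'boolean':
      // Presence-style booleans: only the literal 'false' is falsy here.
      return stringValue !== 'false';
    default:
      // Strings (and anything else) pass through untouched.
      return stringValue;
  }
}
```

So a `count="42"` attribute paired with a prototype default of `count: 0` would reach your code as the number 42 rather than the string '42'.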

Being able to not really care about the type (i.e. whether the correct type will be passed to the property as a value, etc.) and instead hint the type and expect the system to convert it for you is really a great developer experience: it removes a burden from the developer while at the same time proving useful (if one binds an input value to a money conversion element, there is no need for the developer to manually convert the string to a number). So +1 for Polymer for not caring about the types and another +1 for managing type conversion for you.

After you define your custom element you need to (for UI components at least) make some templates. In Closure this is done with the soy template language, which is not so bad per se, but is cumbersome to work with - it is not possible to render your template in the browser without a) a consuming element/component and b) a valid model, if data is used in the template. Moreover, the template cannot be used directly; you always have to compile it before you can see what it looks like. So basically you end up writing HTML-like files, but you cannot preview them and you cannot use them without all the boilerplate code around them. You certainly cannot just visualize a particular template with ease - you need to set up a whole new view with imports and more code just to see what you have done. This forces a certain type of workflow that seems a bit unnatural - first the designers write the view as regular HTML, and then you (the JavaScript developer) take that piece of HTML, strip it down, turn it into a template and hope that you did not break anything.

In polymer the template is just regular HTML. Testing it is as simple as including your new element/component in any web page. +1 for Polymer and its easy to create and use templates.

Once you have a basic element (some template structure and some properties to operate with) you might need some ready to use functionality - you need an import. Unfortunately here polymer and closure, while totally different from a technical perspective, operate very similarly from a consuming point of view - you always have to include what you need in each and every one of your custom elements/components. In closure you do not have to care about file paths, so shifting files and directories around is much safer (in Polymer it was a nightmare to have to move some elements to a different directory mid-project - we needed to go and find all the places they were used and update them), but you cannot automatically retrieve needed libraries and elements. In polymer the preferred way to manage dependencies is bower. While it is very nice, the installed elements are very fragile with respect to path alterations, and once you start it is hard to update them. In both cases, however, you have a lot to write!


  • In closure - the whole namespace for each used element/utility
  • In Polymer - the full path of every used component (and writing "bower_components" all the time is not fun!)

Because of that no +1 for Polymer in this area.

After all these steps you can potentially have a self-contained piece of code, ready to be used, re-used and extended/augmented. As a whole, the closure code will be 50% larger (mostly because of the type comments, but if you put comments on your polymer code it would be the same size). The main difference is that in Polymer you end up with a single file (or 3 files, depending on how you structure the component - the CSS and script could be separate files, but I tend to keep everything together for a more cohesive experience), while in Closure you end up with at least 2 files (for UI components) - one with the scripting and logic and one with the markup. The markup of course can live with the markup of other components in a single file under a namespace, which makes it even harder to separate your code and distribute it as a single element.

Polymer also provides utilities for documentation and demos of your work (again, elements that automatically extract the code comments and construct HTML in the browser). To do the same with closure code you need jsdoc and templates, and basically you are on your own there, because the docs of the closure library are generated internally by Google and the ones you can generate look and behave nothing like Google's - they are less usable and uglier.

Because of these not so important but kind of not unimportant reasons Polymer gets one more point.

Part II: The bad

Not all is roses in Polymer land either.

The first thing one notices is that, because of component reuse and the ease of instantiation, you are basically inclined to use lots and lots of elements that provide utilities and no UI. The most notable example of that is core-ajax. In a single view page you can have several core-ajax elements. In Closure land you usually have one 'Loader' class that takes care of all the communication to specific REST endpoints, and when an error occurs you decide what happens next. In Polymer each core-ajax element lives on its own and you might end up having 30+ such elements in your app. You can define error handlers on each of them or listen for core-error globally (at the application level). The problem is that you end up with ajax requests originating from pretty much anywhere in your application, which makes it harder to manage. It is also pretty hard to limit what is happening when, because you do not have central control over it. Of course you can write such control yourself, but it looks unnatural. As a side effect you can load pretty much all your requests except one, and it will take time to notice that you failed in that one request.
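For comparison, the Closure-land 'Loader' idea can be sketched like this (the names are made up and the transport function is injected so the sketch does not depend on XMLHttpRequest; the point is the single funnel for errors):

```javascript
// Sketch of a central Loader: one object owns all REST traffic, so there
// is exactly one place to handle errors, retries or throttling.
function Loader(transport) {
  this.transport_ = transport; // function(url, onSuccess, onError)
  this.onError = null;         // the one central error handler
}

Loader.prototype.load = function(url, callback) {
  var self = this;
  this.transport_(url, callback, function(err) {
    // Every failed request in the whole app funnels through here.
    if (self.onError) self.onError(url, err);
  });
};
```

With 30+ independent core-ajax elements you have to recreate this funnel by hand (or via a global core-error listener), which is exactly the unnatural part.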

Another catch you have to deal with is Shadow DOM's CSS rules. Once all browsers implement Shadow DOM it might be great (and it is in theory), but today you end up with styling that is very fragile under the polyfill. The first time you start styling you are tempted to write rules as if they were automatically scoped (for example adding classes like '.padded' in several different elements with different values inside each). Under native Shadow DOM it works fine, but under the polyfill you will get overlapping rules and definitely not what you expect. The main problem is that you actually develop in a browser that has native Shadow DOM, so you do not notice that there is a problem until much later (and of course you are developing with Chrome, right? If you use anything else for development this whole post is probably not for you anyway). Once you realise what is going on, you end up writing all rules as if they were un-scoped (so ':host .padded'), which defeats the purpose of Shadow DOM - even with Polymer you still have to think about your styles as global. Not cool!

Next come RAF and timeouts. The documentation for polymer states: 'async is bound to animation frame'. Well, it is, unless you specify a timeout, in which case setTimeout is used and you lose the RAF timing. To gain it back you end up with atrocities like this:

this.async(function() {
  this.async(function() {
    _doSomethingToTheUI();
  });
}, null, 500);
I know, it is not that bad, but it still looks a bit unnatural. Also, guess what happens if you need to cancel this delayed work... you need to handle the case where the first async has already executed but the second one has not... this makes async programming fun, doesn't it!

Another async related utility - 'job' - works around a common pattern, is designed much better and actually reflects a real pattern. The implementation (as in polymer dev) is actually a bit different from what is described in the docs, but it is still a nice enough utility.

Next comes the scroll handling: in Chrome scroll events are synchronized to the RAF, but in all other browsers they are not. However guess how scroll events are handled in Polymer...
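If you have to restore that synchronization by hand, a small utility does it (a sketch only; the raf function is injectable so the snippet can run outside a browser, and the fallback timeout is an assumption):

```javascript
// Sketch: collapse any number of scroll events fired within one frame
// into a single handler call on the next animation frame.
function rafThrottle(fn, raf) {
  raf = raf || (typeof requestAnimationFrame !== 'undefined'
      ? requestAnimationFrame
      : function(cb) { setTimeout(cb, 16); }); // non-browser fallback
  var scheduled = false;
  var lastEvent = null;
  return function(event) {
    lastEvent = event;     // keep only the most recent event
    if (scheduled) return; // at most one callback per frame
    scheduled = true;
    raf(function() {
      scheduled = false;
      fn(lastEvent);
    });
  };
}
```

Usage would be something like `window.addEventListener('scroll', rafThrottle(onScroll));` - which is roughly what Chrome gives you for free by syncing scroll events to RAF.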

The same goes for touch events. I know that it is best to think forward, to innovate and blah blah blah, but when you are pressed to deliver working code for those old phones, Polymer is not your friend.

This is especially true when it comes to the animations: the design and the looks are really, really great - for the first time you get a framework that allows you to implement very complex and very impressive transitions and animations with so little effort. They are amazing when demoed and when they work. Now try to run the exact same code under Firefox... or under iOS Safari... Hero transitions do not work as expected, sliding transitions are breaking the view completely in Safari... and in Internet Explorer... there is a bug that prevents the platform polyfill from loading... fun, huh?

Basically the polymer team says that they support and want to support all major browsers, but the reality is that once you go beyond the simplest demos it only works reliably in Chrome. This is not so bad if you are targeting Chrome users, Chrome OS users and Android users with sort-of-new and fancy phones. Unfortunately, most paying users are actually on iOS for mobile. In theory, if your product is successful you should rewrite it natively for iOS for the best user experience, but in reality complex polymer applications work ONLY on the most modern phones. For example, apps equal in complexity and code size work perfectly on iOS 7 on an iPhone 4S and iPad 2 when written with Closure, but when written with Polymer the app is completely unusable on the iPhone 4S and barely runs on the iPad 2. On the iPhone 5 it works in parity with desktop Firefox, for example (i.e. there are delays here and there and some things are broken, but most of the app is fine). Now compare this to the Chrome browser on the oldest Nexus 7 (also called the original Nexus 7). It works perfectly smooth! Nexus 4 - butter smooth; Nexus 5 and Chrome - again a buttery smooth experience, all animations look exactly as they should, all transitions complete nicely and in sync...

I am sure the Polymer team wants to support all major browsers; it is simply a fact that they don't. And therefore neither does your application. Which means you can be in big trouble with your manager...

When you debug your application in Chrome everything makes sense: you see the shadow root, you see where elements got inserted and so on. Try the same in the Safari debugger... or in Firefox... what you see is very, very different from what you expect. This is again related to the fact that there is no native shadow DOM support. Apart from that, other things seem to work just fine. The problem I have is that I cannot figure out where the slowness comes from. Initially I believed it was a rough code path - someone somewhere doing something that was not really needed. However, test after test proved it to be an 'accumulation' problem: if I isolate a component and nest it deep inside lots of other components, it still works fine. But when I start adding siblings on all levels, the slowness accumulates progressively, and an app with only 3 levels of nested animated pages and 4-10 siblings on each level runs only in Chrome, crawls in any other browser on desktop, and kills performance on mobile Safari and old PCs... There are some 'offenders' that are worse than others, but one after another they bring their 'slow' and it all adds up to an intolerable application.

Once again - I am NOT sure what is going on there (mainly because the debug tools in most other browsers are years behind what we have in Chrome). Another 'nice' detail is that when you attach an iOS device to a Mac to debug a web page, it auto-magically becomes 20 times slower. I kid you not - the model observing digest all of a sudden runs for 25-28 milliseconds and touch events take 50 ms to be processed... close the debugged window and MAGIC - you again have something that looks like an app and not like still images... This terrible experience proves to me that Apple is deliberately inhibiting any and all web advances it can think of and wants you to use its insane languages (I am sorry, but Objective-C looks like someone killed a dragon and used the body parts to compose a language).

Part III: The ugly


The ugly part is short: it is the contradiction this whole Polymer thing creates. On one hand the managers want things to come into existence fast - the faster the better - and love to talk about things like 'time to market' etc. On the other hand they really, really want to be able to tap into the paying market of the Apple App Store.

And indeed, developing with Polymer will make you 10 to 20 times faster (depending on what you have been doing before; I have been doing Closure, and Polymer gave me a lot of speed). It makes so many things easier by taking care of them for you: type hinting, data binding, resource loading (hello, HTML imports) and ready-to-use components that are beautiful and usable (paper-elements).
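As an illustration of the resource loading mentioned above, this is roughly what the HTML-imports workflow looked like in the Polymer 0.x era (a sketch; the `bower_components` paths depend on how you installed the elements):

```html
<!-- Import a ready-made paper element once; imports are de-duplicated
     by the browser (or the platform polyfill) -->
<link rel="import" href="bower_components/paper-button/paper-button.html">

<!-- The element can then be used declaratively, with no manual script wiring -->
<paper-button raised>Save</paper-button>
```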

However, as the 'bad' part explains, there is a price to be paid and it is not really the developer's choice to make: once you know what to expect, it is not really something you can go and fix even if you are a 'JavaScript ninja' - it is just the way it works; it relies on very new things that are simply not there in most browsers. Even if you are a ninja and could fix this or that, the amount of work is much, much more than if you had just used something that is known to work already. Polymer is being worked on all the time, but Apple is not showing any interest in these new standards, and we know iOS is far from this new goodness.

Conclusion


If you are lucky enough to be developing a Chrome packaged app or an Android-only app - consider yourself really lucky, because you can jump right on top of Polymer and ride it like a champ. You can do crazy things like declarative data processing (yes, you describe the data processing and filtering as nested HTML elements and it works!!!), ultra cool lively animations and transitions, and you can write truly reusable, configurable components that you can carry from project to project regardless of the library and/or framework in use - something we have never been able to do until now. Even with Closure components you were still bound to the Closure tool-chain. With Polymer you can (one day) include only Polymer, and not care what anyone else uses to construct the custom elements you consume.

If you are not one of the lucky ones, you had better watch out: Polymer is really fast to develop with and very gratifying, up until the moment you start to test compatibility: forget about IE < 10, forget about complex apps on older phones, forget about really clean, fluid animations on anything other than Chrome... at least for now. Polymer itself is described as an early preview, an alpha, not ready for production. If you are like me, if you want to develop faster and more pleasantly, Polymer might lure you in, but then you have to pay the price: days, even weeks after your application is ready and stable you might still be fighting with Mobile Safari and Firefox and IE... and even if you are not actually fighting bugs in your own code, you are still days away from releasing... you have been warned!

July 16, 2014

Polymer vs. Dart polymer (and a bit of Chrome Dev Editor)

It is no secret that I am a fan of Dart. Having used it for more than a few projects already, it has sped up my workflow tremendously. It was only natural to check out the new core and paper elements after the demos and talks from Google I/O 2014.

The problem one faces is the complexity of the workflow one suddenly has to manage. It is no fun at all - the opposite of what I was expecting after working with Dart for several months.

I have to include the HTML via an import, but then I also have to include the Dart files of those imports in the Dart file of the Polymer element. Ahem... I have to use those strange annotations (@CustomTag for example). Hum... I was a bit confused, and knowing it would all boil down to HTML and JavaScript anyway, I decided to give it a try in JavaScript.

What a surprise it was! All of a sudden I did not have to think about annotations and I could work in a single file for my whole component. I could actually understand what was going on and how it would all work out in the browser.
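To show what that single-file workflow looked like, here is a minimal counter element in the Polymer 0.x JavaScript syntax that was current at the time (a sketch; the element name `my-counter` is made up). Template, bindings, declarative event handler and logic all live in one place:

```html
<link rel="import" href="bower_components/polymer/polymer.html">

<polymer-element name="my-counter">
  <template>
    <!-- {{count}} is a live data binding; on-tap is a declarative event handler -->
    <span>Count: {{count}}</span>
    <button on-tap="{{increment}}">+1</button>
  </template>
  <script>
    Polymer('my-counter', {
      count: 0,
      increment: function() {
        this.count++;
      }
    });
  </script>
</polymer-element>
```

Compare this to the Dart version, where the template sits in the HTML file, the logic in a separate Dart file, and an annotation ties the two together.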

For testing I also used Chrome Dev Editor (it needs more polishing, by the way, and it needs to be faster: with multiple open tabs I could see a very clear delay when switching between tabs and opening files; creating a new file/directory does not put the focus in the input; pressing Enter does not trigger the creation, and so on - 'lack of polish' describes it very well). On the surface it does not provide much more than any other editor. It uses ACE internally and does not provide alternative color schemes at this time. It has few settings, but it has features I have not seen in any other editor (especially on ChromeOS). One of those is clone from GitHub, which is awesome! Another one is Deploy to Mobile! It makes it ultra easy to work on your Chromebook and just attach your phone to test. Combined with the debug and dev tools in Chrome itself, it makes for a cool, well integrated development environment.

I have built a whole mobile web app in under a day. It has what a robust, modern development platform should have: well built, universal UI controls (tabs, lists, checkboxes, switches, buttons etc.) that are easy to understand and use. For the first time in a very long period I had what other (read: non-web) developers have had for years - I could use stable and reliable UI components and write only application logic. And that is what I did - I only had to write the parsing logic for the server responses (because the server speaks no JSON). Data bindings are awesome and declarative event handlers are doubly awesome!

Now back to Dart. I did miss the editor's understanding of my code. I had to rename a variable and it was a pain!!! I am so used to the IDE handling this for me, as well as to omitting the 'this' keyword, that I actually had a few bugs related to not using 'this' where I should have.

But as a whole I do not see any advantage of using Dart over pure HTML/JS when creating simple Polymer elements. Maybe it would be helpful if the element's logic were much more complex (actually, I bet it would help in that case), but the cumbersome way of making elements in Dart should really improve, IMO.

I have shown the workflow to my colleagues, and after some initial resistance (because of the somewhat more complex CSS selectors used for styling) all agreed that working this way is faster and much more pleasant. That counts as a win for Polymer in my opinion.

Finally, I want to address the ranting from some individuals about the fact that Google pushes Polymer so hard that it sounds as if Polymer is what web components are. The fact is that Polymer provides a declarative way to create web components. This makes the creation of a new component so much easier to understand, and faster! For a lot of developers JavaScript is jQuery as well - does that mean we should rant against it (as often as I do!)? No, it only means that there will always be those who have a better and deeper understanding of the platform and its features, and those who just want to take the easy way and get there faster using any tool available. I would not go and write a custom component using JavaScript to access the shadow root and set its inner HTML - that is insanity! On the other hand, I could write a template and declare my bindings. So to those people I want to say: whatever floats your boat, but stop attacking Google for being loud about their accomplishments. All companies do that. Apple released a feature that had been on Linux and Windows for a decade and still had the nerve to call it a great innovation. Why not rant about that?
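For contrast, this is roughly what the imperative route looked like with the v0 DOM APIs of that era - `document.registerElement` and `createShadowRoot`, both long since superseded (a sketch; the element name `x-greeting` is made up):

```html
<script>
  // Build the prototype by hand and stuff markup into the shadow root -
  // everything that a <polymer-element> template expresses declaratively.
  var proto = Object.create(HTMLElement.prototype);
  proto.createdCallback = function() {
    var root = this.createShadowRoot();
    root.innerHTML = '<b>Hello, ' +
        (this.getAttribute('who') || 'world') + '!</b>';
  };
  document.registerElement('x-greeting', { prototype: proto });
</script>

<x-greeting who="reader"></x-greeting>
```

Note that every piece of markup, every binding and every update has to be managed by hand in this style - which is exactly the chore the declarative template removes.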

April 14, 2014

My opinion on StackOverflow

StackOverflow is a strange place.


On one hand - it has lots of information you can directly access (via Google search) if you are just stepping into a technology (be it a language, a framework or what not).


On the other hand, the moment you have a real problem (so not just being too lazy to read the docs or test it yourself) you end up a voice in the desert. No one even tries.


And then there is the 'reputation bullshit'.


Let me ask this: I had a question (a very clear one - can I trigger a file download from a custom-made menu in Drive?) and the only person who answered did not really answer, but instead started to explain to me how the other APIs work (like the UI API, and how to put a link on it and wait for my users to click on it - mind you, completely re-imagining the workflow, something I can do on my own). My reaction was a downvote. Which apparently provoked several downvotes from him, and probably from some of his fan club. After a few more sharp messages the guy finally gave me an answer (at which point I no longer trusted his answers, btw). But in the meantime I had lost 7 points and he had lost 3.


So to recap: voting for someone DOES NOT directly correlate with his knowledge or his helpfulness. It is simply an emotional response to a user's desire to a) be noticed and b) be always correct.


I myself have answered lots of simple questions on SO, mainly related to TypeScript, before I got bored of it. And NONE of my hard questions have been answered. They still sit there, more than a year after I posted them.


If you are wondering: Question 1, Question 2.

Now, I am not saying SO is bad - it is a good place to fast-track on a new problem/technology, but once you get to a certain depth you get nowhere with SO. Which is sad. And it makes SO less and less useful the more competence you gain in the area. Where should one go after that? I have no idea...