December 29, 2013

The problem with on-line apps

Okay, I have to admit: on-line web apps are absolutely great. You can use any computer, and if you remember your password you can use the tool from anywhere in the world. You don't have to bring your own laptop, you don't have to worry about backups and disk failures, and you are always using the latest and greatest version of the software. Isn't it great?

Well, it is great for as long as it lasts....

One after another, companies and start-ups fail, or get bought and merged into another (mostly commercial) product, and are shut down with little to no notice. And it happens very often, more often than you might imagine.

Let's take Google Reader, for example. What a great service it was. And it was closed down, regardless of the protests, regardless of the on-line groups opposing it. Vuru, the financials service, is also closing now with only a month's notice. Half of the applications I have ever installed in Chrome no longer exist. Half! And the other half are giants like Facebook, Google Mail etc. And even they can decide to close down at any given moment. As soon as they decide that it is no longer relevant or profitable, there is nothing stopping the company from closing the doors and windows, and there is no way you can argue with that.

So what does that mean for you? Let's imagine this: you have worked for several days, maybe even weeks, creating the ultimate application in Google Spreadsheets. It is NOT portable, because you used all the excellent services provided in the scripting environment, so basically your sheets are useless outside of this company's environment.

But Google is almighty and great, they will not close down. Oh, but they will; sooner or later they will shut down the services that are not profitable for them. And then what?

I will tell you what: you're on your own. Currently there is no way to write those scripts portably so that they will work in any other known environment. I have a data collection and analysis tool written for the Google spreadsheet application, spanning 24 files. It is working very well indeed, it is hosted for free, etc. But what happens when Google decides that the new (improved, by the way) spreadsheet application should drop support for some of the services? Like they actually did? Can you stay on an older version so all your stuff will continue to work? No! Can you take your application elsewhere? No! Can you do something about it? No...

So the next time you agree to fully rely on a free or paid on-line web application, remember this: you have absolutely no control over your data. And even if this is not a problem for you, consider this: you have absolutely no control over the software. And if this is also not bothering you, you have no control over the availability of this software. It can disappear in an instant!

I for one am thinking about it. The problem is this: 6-7 years ago, when Google started this "the web is the platform" bullshit, it sounded like the end of an era: everyone can put their application on-line, reach any user on any device and perform outstandingly business-wise. Yeah... and now each and every one of those excellent performers holds you in a firmer grip than any of those before them. Yes, I am talking about Microsoft. Evil as they were (are?), they would NEVER make you lose your application. You had a choice: stay with the older version and protect yourself the ways you can (intranet, firewalls, whatever), or upgrade and pay anew, once again becoming the master of your applications. Now this choice is gone and you are completely at the mercy of those companies. Think about that. You do not own anything: not your data, not your software, not its availability. You control nothing and have no rights.

So congratulations to all you on-line service and app users. You have been schemed out of your digital hold on things. What will happen next? Who knows. But the next time someone starts praising the advantages of the cloud and web apps to me, I might just kick him hard.

December 27, 2013

RSS for personal use

After Google Reader was shut down in July, I migrated first to Feedly and then to InoReader.

Ino is very good if you can get used to some limitations.

Lately I have been reading more and more on privacy concerns, and it turns out that all those on-line services have one or more issues when it comes to your privacy. Basically, they can do whatever they want with the statistical data collected, in one form or another, from your usage and interests.

I can understand why for some people this is not a big concern, however there are also people who strongly disagree with this policy.

I tend to be neutral on it, but just for the sake of argument I decided to try the other side and see if I could set up a local solution for RSS reading.

Now, it is clear to me that a large percentage of the younger population prefers to get their news pre-filtered by their peers (via Facebook, Twitter and G+, for example), but I still have several very different interests and no particular person(s) to count on for providing me enough information on all the topics that might interest me, so I use RSS news feeds daily.

A quick look-up of free Linux solutions for RSS reading reveals that most of them are console-based and not very useful if your feeds contain lots of media (pictures, embedded audio and/or video), so I decided to go with Liferea.

Now, one interesting aspect of Liferea is that it can sync with those on-line services you already know (like Feedly and InoReader). However, the objective is to be independent of those.

So what you do is basically export your feed list from the service provider and import it into Liferea.

Liferea keeps all its data in .liferea_1.8 in your home directory, so it is easy to replace that with a symlink and actually use portable media to store your data and take it with you. Note that you should use a fast flash drive, as the low-end devices are too slow and will result in a bad user experience if used.
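A minimal sketch of that relocation. The paths below are stand-ins so the snippet can run anywhere; on a real system you would use $HOME instead of HOME_DIR and the actual mount point of your flash drive instead of USB:

```shell
# Stand-in paths; replace with $HOME and your drive's mount point.
HOME_DIR=/tmp/demo_home
USB=/tmp/demo_usb
mkdir -p "$HOME_DIR/.liferea_1.8" "$USB"

# Move the existing Liferea data onto the drive, then link it back
# so Liferea keeps finding it at the usual location.
mv "$HOME_DIR/.liferea_1.8" "$USB/liferea_data"
ln -s "$USB/liferea_data" "$HOME_DIR/.liferea_1.8"

ls -ld "$HOME_DIR/.liferea_1.8"
```

After this, unmounting the drive takes your feed database with you, and mounting it on another machine with the same symlink restores it.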

I think the same could be done for your .firefox folder. Even though Mozilla says it protects and encrypts your data, Google definitely does look at your usage. Chrome is a very good browser, but I do not feel comfortable using it for my day-to-day browsing, so I use it primarily for development.

As a developer I like the idea of the open web, but with my business-trained mind I can clearly understand that those "free" services have to run on something, and since 99% of the users do not actually pay for the premium features, it is very hard to stay afloat on free-riding users alone, so it is only natural to try to capitalize on the user statistics. This is why I think that most services should be pay-walled. If you pay, you have the right to demand conditions. If you do not want to pay, just use other, personal solutions. Free services should die.

And if this means RSS might die, so be it.

December 11, 2013

Component goodness (Polymer) - (with update on readiness)

After several months of active development, I think Polymer deserves another trial / deeper look.

My initial reaction to it was negative, due to several assumptions I had:
  • there is no way to compile down to one file
  • there is no 'global compile-time checking'
It seems my first objection is now being addressed, and there will be a way to compile all the imports and links down into one single file, which is great. The second one is hard to explain if you are not used to the Dart or Closure ways of doing things, but basically they build a compile-time graph of all your calls, compile an AST of your application and operate on it (checks, asserts, rewrites etc). The thing is that you should not really need that if you use the declarative approach to building your application, which is what Polymer assumes you are trying to do.

So how do we do this?

First, go get the code. I used a c9.io workspace, but if you have a Linux/Mac OS X machine you can do this locally. I tried all the methods: zip archive, git pull and bower. All of them work, but some of the examples need path tweaking to find the files. Also, you might need to add both polymer.js and platform.js if the non-minified version is used (bower and git).

I find that one of the most interesting tests one can do with a new technology is to try to build something complex that you have already built with another technology, then compare the results, the experience and the speed of development between the two approaches.

My intent is to use Longa as a product developed with Closure tools and to try to re-create it with Polymer.

Longa is a large product (~2MB unprocessed JS and templates). Compiled with closure compiler it is boiled down to 115KB (JS plus the templates) and is further compressed by the server to 30KB, which makes a reasonable download size for a mobile web application. 

The HTML cannot be compiled down (names matter; it's not like JavaScript), thus the savings should come from more compact expression forms.

For now I have re-created only a single channel record, and I can say I am already impressed with what the platform is capable of doing without me actually needing to write any code at all. Of course, there is much to be done and even more to be desired.

For example, the styling of two shadowed elements does not work as I was expecting (in the context of a single Polymer element, and the elements are regular ones: a div and an image tag). Maybe it is a bug, maybe I am missing something, but it is still kind of a hurdle for a newcomer; regardless of how many tutorials are out there, pretty much all of them concentrate their efforts on isolating the styles from the outside world and not on how they work internally.

One of the most interesting things I noticed was the fact that you can bind the styling of an element and mix it with an expression, so an element can calculate style values based on properties and arithmetic. For example:

#selector { height: {{height+20}}px; }

is a valid style inside the template of an element!

Another interesting factor is the abstraction of complex routines into elements. For example, Ajax calls are hidden in an element, and you can listen on that element or any of its parents and use it as a regular element (just like the select or change events in native controls).
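As an illustration, here is a sketch of what that looked like at the time. The element and attribute names (polymer-ajax, handleAs, the polymer-response event) are from the Polymer builds of that period and changed in later releases (the element was renamed core-ajax), so treat the exact names as assumptions:

```html
<!-- Hypothetical sketch; element/attribute names may differ per release. -->
<polymer-element name="channel-list">
  <template>
    <!-- The Ajax call is just another element in the markup... -->
    <polymer-ajax auto url="/api/channels" handleAs="json"
                  on-polymer-response="{{gotChannels}}"></polymer-ajax>
    <template repeat="{{channel in channels}}">
      <div>{{channel.name}}</div>
    </template>
  </template>
  <script>
    Polymer('channel-list', {
      channels: [],
      // ...and you listen to its response event like any other DOM event.
      gotChannels: function(e) {
        this.channels = e.detail.response;
      }
    });
  </script>
</polymer-element>
```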

All in all, it is at least an interesting toy. I am not sure how fast the browsers will get to the point where all this works without hacks or shims, but at least for a certain class of applications it will be a breath of fresh air after the JS insanity we have been living with for the last 10 years.

As a conclusion: even if your code is very complex (for example something like a drawing board or a document viewer), you can at least try to ship it as a web component so others can use it in the simplest form possible, by just importing it and using it as a tag in their markup. I for one will try that!

Update (12/16/2013): It turns out most of the things do not work in FF/Mobile Safari, especially the styling. Some style rules do not work even in Chrome, with no apparent reason. For example, a rule like this:

padding-left: 4px;

does not work, but this one does:

padding: 0 0 0 4px;

I guess there are still lots of bugs and missing features, but clearly, if you just want to dip your toes into the power of web components, now is a suitable time. However, if you want to go for full-blown large apps, you should definitely wait at least a few months!

November 9, 2013

HP Chromebook 11 - a developer's review

Ever since the inception of the Chromebook project I have been a huge fan.

I envied the attendees who got the Cr-48. Then I was furious that the Samsung ARM one (or any of the others, for that matter) was not available in my country. I have travelled, but never got the chance to see one or stay long enough for a delivery at any given place.

Finally, a few weeks ago, when the new HP Chromebooks were announced, I decided it was time to purchase one.

A little bit about my love-hate history with the 'new laptop' purchases.

I have an old Thinkpad X61s. Purchased 6 years ago when it was brand new, it cost a lot! Really big money for the time. But I loved it. It has always been running Linux, and even though I had to manually install a script every time I reinstalled (every 18 months or so), it worked great.

A year and a half ago I realised it was maybe time to upgrade. So I went hunting for a new laptop.

This is the place to say that I do web development for a living, mainly front-end development (CSS, HTML, JavaScript). Modern front-end development also requires you to run Node.js, Java, GNU Make, Bash, Python and a series of other, more obscure tools. They are not part of the final product, but they greatly reduce the workload and provide assistance that is invaluable for modern development practices. Linux fits in the middle here. Lots of tools work best on OS X, like Casper etc., but Linux is much better than Windows.

Naturally, I went with a Thinkpad again first: an Edge 340 (or something like that; sorry, I don't really remember the numbering: Core i5, 4GB, 120GB HDD, 11-inch glossy display, built-in camera). Well, the problem with that laptop was that it was way too hot to tolerate on my lap, and the fan never ever stopped, even on Windows, let alone on Linux. So I returned it.

Then the next choice was Apple: the 11-inch MacBook Air. I have to say, it was a very well-built machine. It only gets hot when watching Flash videos in full screen. And then there was OS X. Well, there is the menu bar, always on top, stealing 25 precious pixels from a very short 768. And then there was Chrome not being able to make plugins go full screen (so when you go to Apple trailers you have to watch them as small as they come). And then there was the fact that lots of tools had to be managed/installed manually from different sources; at least in Linux there is one general command to manage all software. I stayed with it for 2 weeks and then returned it. Some say I should have given it more time. Sometimes I think that too, but then I remember what I had to put up with all the time for the smallest things, and I do not intend to go back to OS X any time soon.

To sum up: I am a web developer who is fed up with badly designed hardware and software that does not fit my expectations.

And then there was ChromeOS. I decided to try and live off the Chrome browser (on Linux) for a month and see how it goes.

I uploaded my pictures to G+ and my videos to YouTube, my PDFs and manuals to Drive, and my docs and spreadsheets to Docs. My music did not fit in anywhere, but I was listening more and more off my phone, so it was not such a big deal. I moved my development environment to Cloud9, and what I was not able to move there I accessed 'live' with sftp mounts. (Well, I was not aware that sftp mounts are not allowed in Chrome OS, which sucks.)

The experiment ran for about a month, and I was sure I could live with the Chrome browser. I still have my work PC at the office if I need to test Firefox or IE compatibility.

I thought a lot about which Chromebook to purchase. In the end I just wanted a silent machine, and the one without a fan seemed just the right choice. I was aware that it might get slow with more than 5-6 tabs, but that was okay; I just needed to make some adjustments to my browsing. That's what I told myself.

So I went and purchased the HP Chromebook 11 - ARM.

First I had to wait for availability. Then for the delivery. Then for the delivery from the UK to Bulgaria. It cost me almost as much as a regular netbook (around 300 Euro). But it finally got here.

I had to wait (a lot) for the initial update, but after that it was fine. For viewing my email and G+.

And then I tried watching a movie from my thumb drive. H.264 (480p) plays just fine, excellent actually. Xvid is really bad (read: SLOW). I haven't tried anything else yet, but I think H.264 is okay. For now.

And this morning the shit hit the fan. I like to watch tech videos on YouTube at 1.5x speed, so naturally I went and enabled the HTML5 player. And then it stopped working. Like, at ALL! I was no longer able to rewind videos. If I do, the video stops playing and never comes back. Even after a refresh. Even after a restart. So I went back and opted out of HTML5. But guess what: even if it says you are being served the Flash player, you still get the HTML5 one. Even after clearing all caches. Even after resetting the browser. Even after deleting all the cookies. Even after a powerwash. Even in incognito mode. Even in guest mode. So basically you are stuck with HTML5 playback, which is terrible on this laptop.

After advice from the G+ community I installed an extension that half-resolved the problem. Now the video starts loading, then the extension kicks in and reloads it, this time with the Flash player. Great! So after pushing HTML5 for years, it does not work on the machine that they themselves designed? The big shots have more important things to do, I guess.

So I went about my day and logged in to c9 to do some work. Yeah, not so fast! c9 uses a socket to communicate, and the HP 11 decided to drop the connection every few minutes. Even on the same network as the Thinkpad, which never drops it. Even when left alone on the network with the router. Even after a powerwash. Ha! So basically the terminal restarts every few minutes. And YouTube videos stop every few minutes. And it is not the network, and it is not the router. So it IS the HP 11.

At this point I doubt it could be used for anything more than ranting about it on the Internet. That it does great. It is actually so bad that half of my spreadsheets do not work (actually they do, but after one makes a change it starts auto-saving, and no new changes are reflected in the UI until a successful save, and the saves take 1-2 minutes. Yes, MINUTES!).

This reminds me of a toy currently being sold: an imitation laptop that plays some music and displays some text to teach kids the alphabet. Anyone looking to buy such a toy, please feel free to contact me; I have one for sale, and as a bonus you can also (try to) watch your porn on it.

On a scale from 1 to 10, 10 being the best laptop, the HP 11 is a 2. One point for running completely cool, half a point for being shiny and good-looking, and half for the good display. Everything else about it is grossly unbearable and should never ever be sold to customers.

Important notice: it is NOT the software. I have lived in a Chrome-only environment for the last month and everything was great. But it is great when run on normal hardware. When run on something like the HP 11, where the wifi is unreliable and YouTube seems to be hated, it turns into a nightmare. Developing in the cloud is absolutely impossible (if you want to stay sane). And you can never install anything locally, remember?

So two options are left: sell it, or try to run Ubuntu on it. About that: only 4GB are left free. How come, out of 16GB? What do those people put on this machine? I have Linux installs with only a 10GB root filesystem, and with all possible software (including build tool-chains, Java, GIMP, OpenOffice and lots and lots of other software) it never goes beyond 7GB. How come Chrome OS has already eaten 12GB?! It's a mystery!

When I get back to the city I will try it again. And if it fails me again, I will just sell it. It is too expensive to return.

I have no problem parting with it. The problem is what I will get instead of it. I cannot live with Windows. And I certainly do not want OS X. But Linux is becoming weaker and weaker with every year. Chrome OS seemed like a nice fit, but evidently they still need to figure out their hardware policies, because I am not paying 1300 USD, nor putting up with a half-usable laptop.

September 22, 2013

Frameworkless JavaScript (or how we reinvented the wheel)

So the quality of the articles is doomed to decline once you get enough subscribers; I have noticed that. The latest example is "JavaScript Weekly". How and why, I still cannot comprehend, but it is a fact. In this week's edition a ridiculous post written by Tero Piirainen was shared with the subscribers. In short (if you do not want to waste time reading it all), he claims that they do not use a library (A, B or C) because it introduces:

  1. a large code size (to be downloaded)
  2. abstractions (really? abstractions over WebSocket, which actually transfers only JSON??)
  3. a large code base that is hard to comprehend

So basically he advises us to reinvent the wheel every time we start a new project, so as to have a clean slate and think only about the API. Hah....

Dear Tero, years ago smart people had the same problem you have today. And smart people tend to resolve problems (well) in a smart way. Years ago Google open-sourced their set of tools called Closure Tools. It attempts (and succeeds) to solve all those problems that you are fighting like a real Don Quixote, without ever looking into alternative solutions. Of course, lots and lots of people did not understand how to use the library and the compiler back then, and still do not understand them today. One such example is here (a really stupid article about micro-optimization, showing a lack of understanding of what the compiler does to such "slow" code). For those who are not into masochism, the tools do what you want without you reinventing the wheel each time: tree shaking, minification, method extraction (i.e. shortening/removing the prototype chain lookups), reducing/removing namespace nesting, method in-lining, modules (lazy loads) and more (like pre-calculating values, etc). Yes, it is kind of hard to wrap one's head around, but it does a brilliant job, and in my experience it reduces code from several megabytes to 30-40KB gzipped.
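As a toy illustration of the tree shaking and in-lining mentioned above (the snippet is mine, purely illustrative; the "roughly" in the comment is an approximation of advanced-mode output, not a literal compiler dump):

```javascript
// Input as the developer writes it:
function double(x) { return x * 2; }
function triple(x) { return x * 3; }  // never called anywhere

console.log(double(21));

// With ADVANCED_OPTIMIZATIONS the compiler in-lines the call,
// pre-calculates the constant and drops the dead triple() entirely,
// emitting roughly: console.log(42);
```

Multiply this effect across an entire dependency graph and you get the megabytes-to-kilobytes reductions described above.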

I do not say it is the perfect solution, but it does a very good job. Today it is still actively developed at Google and is actively used in a large number of their user-facing products. Think about that for a second: one of the largest web companies, a company very keen on making the web fast, is using these tools to make smaller, faster-loading, faster-running applications!

And yes, Backbone plus Underscore plus whatever templates is the mainstream way of doing things these days. But no, this is not the way to go when you want large-scale but small applications.

August 14, 2013

After Google Reader

It is somewhat in fashion to name things starting with 'After'. After Earth, After etc.

However, this post is about the now-dead Google Reader and what happened after it.

At first it looked like Feedly would be the winner: they already had integration with Google accounts, it was very easy to use your feeds, and while they lagged well behind on the UX side of things, they provided a working solution for the problem that roughly 1 million users were facing.

After a week of testing I migrated (almost a full month before Reader was closed down). The day it happened Feedly had some issues, but nonetheless it worked well. The problem was that its creators had a UI vision very different from the one the Google engineers had when Reader was last updated. Hovering to activate the side panel, really? No way to go to topics, only a few very basic shortcuts... there was so much missing that it barely scratched the needs of power users. And then it was an extension, not a real cloud app. But they fixed that...

If you are like me, you read several hundred titles a day, mostly skimming for something that might be of interest or importance. This was not supported at all initially; then they added a somewhat handicapped version of it (calling it the "classic Google Reader look"). Well... no.

There was no search. Later they decided that search would be behind a paywall. Great, so now we are supposed to just pay for search results? It ain't gonna happen, considering that the same content is available online and already indexed by Google's search engine. Sorry, fellas, wrong decision.

If that were the end of the story, it would have been a bit tragic: users end up with a half-working (several times I had to contact support because titles I had already read kept coming up as new ones!), half-familiar product, worse than what they were used to.

But then, I don't even remember how, I found InoReader. Guess what: it has the exact same shortcuts as Google Reader, it looks pretty much the same (it's an option) and behaves the same. It contains the statistics for your feeds; also, in a neatly arranged UI box it shows the status of your subscriptions, so you can see if one or more have stopped working; and it loads FAST. I mean really, really fast! I cannot compare it to GR, because that is gone already, but it is much faster than Feedly, that is for sure! It reacts to user input faster and behaves more smoothly.

If you are like me and happen to miss Google Reader, just give it a try! I promise that, at least as UX, it won't disappoint. I am not sure how well the backend is constructed, but front-end-wise it is the best out there, maybe even better than GR!

July 10, 2013

Closure tools development in the cloud

I have been waiting for this for some time now!

Probably this happened some time ago, but I found out just today that it is now possible to use the Cloud9 IDE on a complete shared environment, which means I have a terminal, Python and Java runtimes, and I can finally develop using the complete Closure tools in the cloud! I just had to copy my setup over from a local folder to the hosted environment, and it just worked.

Well, not "just", because they run an older version of Python (2.6), but it is pretty easy to overcome this.

The finding was part of my ongoing investigation for an article on using a Chromebook as a primary laptop for the modern web developer. Of course, my workflow is already skewed enough by my decade and a half of living with GNU/Linux, but even I find it frustrating to constantly have to "update" and "distro-upgrade". The last missing piece will probably be the hardest to figure out, but I have one last trick up my sleeve.

If it all goes well, I will be buying my next laptop from Google. If not, well, I will still be buying a laptop with Chrome OS, but it will be the cheapest one (the Samsung S3) and I will use it only for light reading.

Tip: I know that the Samsung can be used for development as well, because all the compilation and checks run on the server; however, because it has only 2GB of RAM, it would be hard to open many "applications" on it, and I am not sure how usable it would be for devs.

May 30, 2013

How to easily track your mutual fund investments (with a Google spreadsheet)


After the state decided that Bulgarians have more than enough money and that it was time for a new tax, in force since the beginning of 2013 (the interesting thing about it is that it was made retroactive, which burned quite a few saving Bulgarian families by slapping a tax on deposits made long before the idea was even discussed, e.g. 2 or 3 years earlier), it was only natural for people with savings to look for an alternative way to invest their money, so that they could at least fight off inflation. One option was the so-called open-ended (call) deposits; the other (for the slightly more adventurous) was buying shares in mutual funds.

This article aims to help non-specialists track and measure their mutual fund investments more easily.


We assume you have already chosen the mutual fund where you want to buy shares. If you have not chosen one yet, the Bulgarian financial guide is a good start.

The easiest way to follow the price movement, in my opinion, is a Google spreadsheet.

To develop an example, we will use one of the UBBAM funds.

We pick a fund and trace the address from which the price information is rendered in tabular form. In the case of "Patrimonium Zemya", the address is generated automatically by JavaScript and consists of a fixed address and two variables: a start date and an end date.


For this example we need only the most recent record, so we use the following formula to obtain the concrete address:

=CONCATENATE("http://www.ubbam.bg/libs/graphics.php?id=39&type=5&date_to=",TEXT(TODAY(), "yyyy-MM-dd"),"&date_from=", TEXT(TODAY()-1, "yyyy-MM-dd"))

This gives us a valid address for loading the fund's price data.

The next step is to import this data: at the top of the sheet we use the function for importing data from web tables:

=ImportHtml(B19, "table", 0)

where B19 is the cell holding the generated address.

So far we have automatic refreshing of the fund's current prices and number of shares every time we open our document.

The next step is to enter our purchases and the formulas for the indicators we are interested in: the current value of our investment, absolute growth in percent, relative annual growth, and so on. Depending on your particular investment goals the formulas may differ, but the main thing we care about is whether our participation in the fund is profitable and, if so, whether the profit is bigger than the interest rates banks offer on term deposits.
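As a sketch, with hypothetical cell references (say B2 holds the amount invested, B3 the current value of our shares, and B4 the number of days since the purchase):

```
=B3/B2-1
=(B3/B2)^(365/B4)-1
```

The first formula gives the absolute growth, the second the annualized growth, which can be compared directly with a bank's annual deposit rate.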

The whole example can be seen here.


Investing in mutual funds has its advantages and carries its risks; make sure you are well acquainted with them, and just in case, review the materials on the "Moite Pari" ("My Money") site again, where every aspect of this kind of investment is explained in detail in Bulgarian.

With a bit of school-level knowledge and a bit of computer literacy, it is possible to have easy, convenient and meaningful access to the data about our money invested in mutual funds.

May 25, 2013

What's wrong with Linux desktop

This is going to be a rant!

A few weeks back I was surprised to learn that Google has decided to stop supporting older versions of libc on Linux and thus the binaries you can download are incompatible with older Linux distro versions.

I still run Fedora 13 on my personal laptop, and this was a clear signal that I am way past the end of life of the OS. On the other hand, I already know that I hate Unity, mostly because task switching is awfully hard and unintuitive. Basically, they copy the OS X way of doing things: if you use Alt+Tab you end up switching applications, and if an application has multiple windows you have to wait and perform a different set of instructions in order to select a particular window. Well, that's bad if you work with multiple applications and some of them have multiple windows.

Let's take an example:

Google Chrome for browsing and reading documentation/examples on the Internet.
Google Chrome with a different user for testing (without plugins).
Chrome's debug tool as a separate window (or several of them).
Sublime Text 2.

Now, the problem is that I can no longer just use Alt+Tab; I have to construct a special mental model of how the environment handles the windows and then perform some acrobatics with my fingers just to get to the window I need. This is a waste of time, so I installed GNOME 2. That was still possible on Ubuntu 12.04.

Today I installed Fedora 18 (how many years later is that?).

You can imagine my surprise when I realised that Mutter is doing exactly the same stupid window selection thing as Unity!!!

Okay, open source (shithead) developers: I can understand that you LOOOOVVVEEE OS X and your childhood dream is to make a free UI that is better than the Mac's, but my question is this: of all the beautiful and awesome things OS X does (packaged apps, anyone?!), how could you copy the worst feature EVER?

You might find it surprising, but this is the truth: Microsoft got it right! OS X is wrong! Users want to change windows with Alt+Tab and tabs with Alt+1-9; no one wants to change "context" or "application" with ANYTHING. This is the most useless feature EVER!

The other most distasteful thing done in GNOME is the hard binding of the Windows key to whatever the stupid panorama view functionality is called. I want to change my keyboard layouts with the left Windows key, the way I have been doing it forever (I started using Linux in 1999 and ever since then I have been using the freaking Windows key to switch keyboard layouts). Is it so hard to just make a key binding optional? It seems it is, if I need a whole separate application just to tell the freak show that is GNOME 3 to use special keys for layout switching, because, guess what, by default you need a real key to do it, just like an action trigger in an application does; so basically GNOME 3 is swallowing your applications' shortcuts!

Thanks a lot, developers. You have ruined a perfectly well working desktop environment and turned it into a mix of the worst decisions made in each and every OS out there.

On the bright side of things, Xfce works well enough. But there is still no support for remote servers (a la Nautilus), and this makes it impossible to use Thunar if you work in a networked environment and need remote files to look like local files to all possible applications.

Anyway, I think Gnome3 should die and all the developers working on it should be restricted from contributing to any software that has a UI for the rest of their lives.

There, I said it. I feel much better now.

May 21, 2013

Polymer (by Google)

At IO this year a new project to build a toolkit on top of web components was presented: Polymer.

The demonstration was simply brilliant (in contrast to the demos in the 'Web components - tectonic shift' talk) and it showed the 'promised land' to the exhausted web developer, who has to combine the same JavaScript files again and again with each new project (and mind you, a new web application is built in 6 to 8 weeks these days, so this is a lot of repetitive work) just to get the standard components to work and align as they are supposed to. Then he has to battle performance and load times, and then and only then can he start implementing the application logic.

Lots of effort has been spent in recent years on taking pressure off the regular Joe developer as well: lots of frameworks, libraries, utilities, build tools, component architectures and other things were invented to make this an easier and faster task, and yet we are still at a place where it takes days, if not weeks, to gather everything you will need before you can start implementing application logic.

Everybody wants us to believe that the resolution to this chaotic state is called Web Components. So much so that companies have started to implement early prototype frameworks and toolkits on top of the still emerging standards, shimming the lacking support and polyfilling the browser incompatibilities. So today you can go and test drive the "future of the web".

Well, I have, so you don't have to.

And it sucks.

It does not work. Half of the examples do not work as expected, even in the latest stable Chrome or Firefox. Some that failed there worked in Mobile Safari, but mostly what you see, from a developer's perspective, is a repetition of what we have had on our hands for years: lots of files that we have to know where to gather from and how and when to include in our page just to get things to load. From then on we have to figure out this new shiny approach to data management (we just learned to use meaningful data structures and consistency checks on the client side and to combine them with two-way data binding). And what about memory management? What about node count? For years we have been taught not to put too many nodes in the document, and all of a sudden we are putting twice as many nodes in fragments as the old fashioned applications. Did the browser vendors all of a sudden implement much better handling of large DOM node lists?

Anyways, the approach is very interesting to me: lots of good ideas there. From the demo it seems you can have even the most complex interaction models implemented as a tag and use it like any other tag, including nesting the same tag inside itself, which is kind of cool and basically not possible in complex applications unless they are specially designed for it.

However, even the simplest demos do not work or work very poorly: terrible redraws on mobile, terrible response time even when there is only one widget on the page; practically completely useless on mobile for now.

This leaves the developer with a very bad feeling about this bright new web future. And then again, China has 25% of its traffic coming from IE6, and companies simply cannot ignore that. You can ask any international company or a company targeting that market: they do not care about the future, they care about cash flow, and right now IE6 is driving 25% of it in China, even if it is less than 6% globally. China is BIG! On top of the mess with IE there is also the performance case: it is still very unclear how load time is to be solved in the case where you have tens of components loaded from all over the Internet.

I completely agree that we should not look too much in the rear-view mirror, but guess what: you cannot drive forward without it.

March 18, 2013

Closure Tools in Sublime Text 2

I have been a big fan of the closure tools for a long time now. They provide benefits over untyped libraries that the frivolous young libraries (Backbone, component etc.) find hard to match.

On one side those young libraries are very good at one thing and should you need exactly that one thing they are simply great: you get the job done in no time.

However, once you need to tweak a little bit here and a little bit there, or build abstractions on top of those base libraries, it becomes a real pain in the... I believe this is partially because all those libraries have their own style of code (how inheritance works, how structure works, static methods versus prototype methods, run-time type checking or none, etc.). This is all nice and fine; every author has his preferences, and as long as there is no standardization on the matter everyone is forced to figure out on his own what to pick. This, however, makes it difficult for fellow developers to pick up just any library and start using it with an in-depth understanding of what is going on. More often than not the documentation describes the public methods but lacks a description of the internal architecture and the design behind it all. You can still use the public APIs, but if you want to deviate from the authors' initial design assumptions you can get in big trouble. And because every library uses its own style and assumptions, it is hard to understand well all the bits and parts that you are using.
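To make the contrast concrete, here is a minimal sketch (the Widget/Button names are hypothetical) of two of those competing inheritance idioms side by side:

```javascript
// Pseudo-classical style, as used by closure-like libraries:
// constructor functions plus an explicit prototype chain.
function Widget() {}
Widget.prototype.render = function() { return 'widget'; };

function Button() { Widget.call(this); }
Button.prototype = Object.create(Widget.prototype);
Button.prototype.constructor = Button;

// Object-literal / prototypal style favoured by many small libraries:
// no constructors, objects simply delegate to other objects.
var widget = { render: function() { return 'widget'; } };
var button = Object.create(widget);

// Both produce an object that can render:
new Button().render(); // 'widget'
button.render();       // 'widget'
```

Both work, but mixing code written in the two styles within one application is exactly where the pain starts.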

On the other side we have the "old" monolithic libraries like Dojo and closure.

Even though we call them monolithic, this is not true! The old beasts are actually very well modularized. Yes, you download the full-blown version and use that during development, but once you want to make a release there is a build step, very much like the step you are nowadays forced to have when you use small decoupled libraries. The difference is that in the case of small decoupled libraries you try to make one big thing out of small things (some call it the UNIX way), while with the monolithic libraries the process is reversed: you state what you want to use and the rest is stripped out. The result is practically the same (well, not exactly, more on that later): you end up with exactly the code you need, no more and no less. However, up until the arrival of tools like yeoman, bower etc. you had to hunt down those little bits of code yourself. Even now not every such bit is easily discoverable; most components require additional steps to be made readily available in those environments. More often than not one and the same functionality is provided in almost identical form but in very different style and packaging, so an additional choice has to be made, and once that choice is made it is kind of hard to divert from it even between projects, let alone within the same project.

So basically, because the end result could be considered the same and because developers tend to stick to what they already know, it is not so much a technical choice as a personal preference. I would argue that the closure tools provide some things on top of the rest of the libraries (except maybe TypeScript), but this has been discussed many times in this blog.

What I want to show today is how to use closure tools in sublime text 2.

While tools exist to support development with the closure tools (namely the Eclipse closure plugin, WebStorm and maybe others), those are written in Java (not necessarily a bad thing, it just so happens that they are large and somewhat slow to use, IMO) and require more resources than many web developers are willing to give up just to have a smart editor. On top of that, customization is not as simple a step as it is with ST2. So we will assume that Sublime is already your favorite editor and you want to use it to develop with the closure tools.

The most important part of that is type checking of the source. However, projects in closure are configured to encompass many libraries and folders, which makes it easier if a tool is used to organize them. I will show you how to use make and a little piece of shell scripting to run a compile check on any JavaScript file in your project. More about the project structure used in this example can be found here.

First we need a small script that can detect the namespace of the file we are currently editing and then call the compiler with the needed type checks. Here is the script (it might have suboptimal parts in it; I do not pretend to know bash well enough):


#!/bin/bash
# $1 is the file open in the editor, $2 is the project path.
FILE="$1"
PROVIDE=`cat "$FILE" | grep goog.provide | sed -e "s/goog.provide('\([^']*\).*/\1/" | head -n 1`
make -C "$2" check NS=$PROVIDE

Save the file somewhere in your PATH (that is, the directories searched for executable files) and make sure to set the execution flag on the file. Basically, it scans the file currently open in your editor for goog.provide symbols and runs the make program with the check namespace set to the first match.

The next step is to configure the build system in Sublime. From the menus select New Build System and paste the following snippet into it:

  //"path":  "/home/pstj/Documents/System/Library/Bin/:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games",
  "cmd": ["GCCCheck.sh", "$file", "$project_path"],
  "file_regex": "^(.*):([0-9]+):() WARNING - (.*)",
  "selector": "source.js"

What it does is attach the build system to JavaScript source files and run the named command when Ctrl+B / Cmd+B is pressed. The path can additionally be configured to match your system PATH and/or the path where the bash script was saved. Note that the first element of the cmd variable is the name of the file we created earlier.

The last step is to configure make to understand the 'check' target. This is easily done if you already use a make file; if not, create one and add the target as if a real compilation is to be made, but redirect the output to /dev/null. Here is an example:
check:
	python ../../library/closure/bin/build/closurebuilder.py \
		-n $(NS) \
		--root=js \
		-f --js=build/deps.js \
		-f --flagfile=options/compile.ini \
		-o compiled \
		-c $(COMPILER_JAR) > /dev/null
Add roots and flags as needed. Here is my compile.ini file:
--jscomp_warning accessControls
--jscomp_warning ambiguousFunctionDecl
--jscomp_warning checkTypes
--jscomp_warning checkVars
--jscomp_warning visibility
--jscomp_warning checkRegExp
--jscomp_warning invalidCasts
--jscomp_warning strictModuleDepCheck
--jscomp_warning typeInvalidation
--jscomp_warning undefinedVars
--jscomp_warning unknownDefines
--jscomp_warning uselessCode
As stated at the beginning of this section, this setup will only work if a certain project structure is followed; however, I believe it is easy enough to customize to make it applicable to any closure tools project.

In addition, if strict styling rules are required for your project, there is the sublime-closure-linter package for Sublime. I prefer to use make files to build gss files and templates, but there are Sublime packages for that too.

So what we have as a result is the ability to run any namespace through the compiler checks and figure out in time if we are violating a contract or messing up types / methods. One example from today: I attempted to use the redner method on a PopupDatePicker. Sounds all normal and natural, and I was sure it was in the API. However, the compiler was smart enough to tell me that there is no such method (it was mistyped). The other day I had a value set as a string while a number was expected. You know very well what happens if I try to use it in calculations (NaN). What this setup allows me to do is run checks from any entry point in my namespaces without actually typing the namespace. A basic workflow would be: add a new file, edit it, save it, run the build process, fix type errors, commit. That should do for now.
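To sketch why this matters, consider a hypothetical annotated function; the compiler's checkTypes warning reads the JSDoc types at build time and flags the bad call below, while at runtime the mistake would silently become NaN:

```javascript
/**
 * Hypothetical example function; the JSDoc types are what the
 * compiler verifies at build time.
 * @param {number} price
 * @param {number} qty
 * @return {number}
 */
function total(price, qty) {
  return price * qty;
}

total(5, 2);               // fine, returns 10
var bad = total('ten', 2); // compile-time warning; at runtime: NaN
isNaN(bad);                // true
```

Without the check, that NaN would only surface much later, far away from the call that caused it.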

One thing missed by many when it comes to web development with JavaScript is intellisense. I can understand that and I sympathize. However, the big IDEs have other drawbacks (Sublime is many times faster, especially on regular Joe machines) and the IDEs are not perfect either. For example the Eclipse plugin does not correctly understand scoped variables; even though the issue is closed, it still does not work. WebStorm has its issues too. There are new projects trying to address that, but until then I find having consistent APIs across a large amount of ready-to-use library code (as in a monolithic library) more pleasing.

March 07, 2013

Classical versus prototype inheritance in JavaScript

A lot has been written lately about the benefits and risks of using prototype inheritance versus classical inheritance pattern in JavaScript.

On multiple occasions it has been shown that the classical pattern works faster and uses less memory than anything else out there. It is also the most often used pattern currently, which sort of guarantees that your code will be compatible with other people's code. A recent blog post even compared the exact amounts of memory consumption of the classical pattern versus Object.create (sorry, could not find the relevant link now, but basically the memory overhead was there, though not as big as in other patterns).

However, there are proponents of the prototype inheritance pattern in the community as well. On multiple blogs one can see examples of its usage and encouragement to try it out. In many instances the cited benefits are dubious and refuted by proponents of the classical pattern.

Coming from the closure library, the classical pattern was de facto the only possible solution for my code up until very recently. However, these days I have the chance to once again write code exclusively for modern browsers. I decided to explore what would be possible if I used all the 'wrong' and 'dangerous' patterns to make my code simpler and easier to use.

For the exercise I will have a fictional data server that returns a list of records that I want to work with. The very basic use case of just retrieving the data and having it ready for manipulation will be investigated. So, we have a server call that returns an array of objects, and basically I want to apply behavior on top of it.

Our server data will look like this:
var serverData = [{
    id: 1,
    name: 'Peter'
}, {
    id: 2,
    name: 'Desislava',
    lives: 10
}];

There are two classical approaches to this (mostly utilized in the 'data' frameworks these days).

1. Wrap the data: basically execute the logic as a constructor function and put the data inside of it (for example as this.data = serverData), then operate over the data with access methods (either general: get('name'), or named ones: getSomething/setSomething). By the time the data needs to be saved on the server, the stored value (this.data) is used.

2. Use a functional approach: operate on the data via functions defined specifically for the data and pass the data record to every function call. To save the data back on the server, pass the data directly as it is.
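The two approaches can be sketched as follows (Record and getName are hypothetical names, not from any particular framework):

```javascript
// Approach 1: wrap the data in a constructor and go through access methods.
function Record(data) {
  this.data = data;
}
Record.prototype.get = function(key) {
  return this.data[key];
};
Record.prototype.toServer = function() {
  return this.data; // the stored value goes back to the server as-is
};

// Approach 2: plain functions that receive the record on every call.
function getName(record) {
  return record.name;
}

var raw = {id: 1, name: 'Peter'};
new Record(raw).get('name'); // 'Peter'
getName(raw);                // 'Peter'
```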

I was interested in a more 'natural' approach: take the literal objects created by the JSON parsing, slap the behavior on top without actually polluting the data, and pass the augmented data back to the server directly (i.e. a combination of the two approaches above).

For this to work we have to be able to:
a) slap the methods on top of a literal object
b) have pure prototype inheritance to tweak the behavior easily

I came up with this:

Object.createPrototype = function(proto, mix) {
    function F() {}
    F.prototype = proto;
    var newProto = new F();
    mix.forEach(function(item) {
        Object.mixin(newProto, item);
    });
    return newProto;
};

Object.mixin = function(obj, mix, restrict) {
    for (var k in mix) {
        if (typeof mix[k] == 'object') continue;
        if (restrict && obj[k] != undefined) continue;
        if (k == 'applyPrototype') continue;
        obj[k] = mix[k];
    }
};

Object.prototype.applyPrototype = function(proto) {
    if (this.__proto__ != Object.prototype && this.__proto__ != Array.prototype) {
        throw new Error('The object is already typed');
    }
    this.__proto__ = proto;
};

Object.createInstance = function(that, proto, def) {
    that.applyPrototype(proto);
    if (typeof def == 'object')
        Object.mixin(that, def, true);
    if (typeof that.initialize == 'function') {
        that.initialize();
    }
    return that;
};

What this allows me to do with the server data looks like this:
// define some defaults (if the data on server could have null's)
var defs = {
    lives: 10
};

// Define basic behaviour
var Base = {
    update: function(data) {
        if (this.uid != data.uid) {
            return; // not the same record, ignore the update
        } else {
            Object.mixin(this, data);
        }
    },
    get uid() {
        return this.id;
    }
};

// Upgrade the behavior
var Sub = Object.createPrototype(Base, [{
    kill: function() {
        this.lives--;
    }
}]);

// helper function to process an array
function processData(data) {
    data.forEach(function(item) {
        Object.createInstance(item, Sub, defs);
    });
    return data;
}

// Upgrade to an array that knows how to handle our data types
var MyArray = Object.createPrototype(Array.prototype, [{
    getById: function(id) {
        return this.map[id];
    },
    initialize: function() {
        this.map = {};
        this.indexMap = {};
        this.forEach(function(item, i) {
            this.map[item.uid] = item;
            this.indexMap[item.uid] = i;
        }, this);
    },
    add: function(item) {
        if (typeof item.uid == 'undefined') {
            Object.createInstance(item, Sub, defs);
        }
        Array.prototype.push.call(this, item);
        this.map[item.uid] = item;
        this.indexMap[item.uid] = this.length - 1;
    },
    push: function() {
        throw new Error('Use the "add" method instead');
    },
    pop: function() {
        throw new Error('Use "remove" instead');
    },
    update: function(item) {
        if (typeof item.uid == 'undefined') {
            Object.createInstance(item, Sub, defs);
        }
        this.getById(item.uid).update(item);
    },
    remove: function(item) {
        if (typeof item == 'number') {
            this.splice(item, 1);
        } else {
            if (item.uid == undefined) {
                Object.createInstance(item, Sub, defs);
            }
            this.splice(this.indexMap[item.uid], 1);
        }
    }
}]);

// Make new type that has next and previous.
var Linked = Object.createPrototype(MyArray, [{
    initialize: function() {
        this.index = 0;
    }
}]);
Object.defineProperty(Linked, 'next', {
    get: function() {
        if (this.length > this.index + 1) {
            return this[this.index];
        } else return null;
    }
});
Object.defineProperty(Linked, 'previous', {
    get: function() {
        if (this.index - 1 >= 0) {
            return this[this.index];
        } else return null;
    }
});

// Process the server data to create local data and work with it.
var clientData = Object.createInstance(serverData, Linked);
clientData.add({
    id: 3,
    name: 'Denica'
});
clientData.update({
    id: 3,
    name: 'Ceca'
});
clientData.remove({
    id: 3,
    name: 'Ceca'
});
var myNextObject = clientData.next;
JSON.stringify(clientData); // returns the actual array only, without any of our custom props.
// [{"id":1,"name":"Peps","lives":9},{"id":2,"name":"Des","lives":3}]

This is not much of an improvement over the classical approaches yet because, for example, I still cannot update an item in the collection directly with values (i.e. clientData[1] = {id: 2, name: 'Another name'}), which could be achieved if I simply hid the array inside a wrapper object. However, I believe that for the objects inside the list it is a great improvement to have the behavior stamped on top and yet have natural access to all properties (i.e. data.property1.property2). This again is not an ideal situation, because updates are not caught (i.e. bindings will be harder to implement), but that can be resolved by the proposed observer/mutator APIs.
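One way to catch such updates without waiting for the proposed APIs is an accessor property; here is a minimal sketch (the watch helper is hypothetical, not part of the code above):

```javascript
// Replace a plain data property with a getter/setter pair so that
// assignments to it can be observed (a poor man's data binding hook).
function watch(obj, key, onChange) {
  var value = obj[key];
  Object.defineProperty(obj, key, {
    get: function() { return value; },
    set: function(v) { value = v; onChange(key, v); }
  });
}

var data = {name: 'Peter'};
var changes = [];
watch(data, 'name', function(k, v) { changes.push(k + '=' + v); });
data.name = 'Ivan';
changes; // ['name=Ivan']
```

The obvious downside is that every watched property has to be wrapped up front, which is exactly what the observer proposals were meant to avoid.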

Again, this is not a real world scenario, just me playing a little bit with literal objects as pure data, but instead is an interesting experiment in the sense that I have always wanted to be able to merge data with logic without too much fuss. What is accomplished in this solution is that we have the data (importantly - ready to be submitted back) and the logic is simply an object we can play with and model, including in run time. All this is possible with other approaches as well, but this is interesting because we do not actually create instances, but instead use the original data instances all the time and simply apply logic on top of it. In conclusion this is like merging the two classical approaches: keep the data instances but have logic bound to it.

DO NOT use this in your code! __proto__ is not standard and getters/setters as well as defineProperty are not widely supported!

CLARIFICATION: I am aware that the same effect could be accomplished with the new Proxy API, so thanks for reminding me, I already know it :)
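For completeness, here is a minimal sketch of how the same "behavior on top of raw data" idea could look with Proxy (the record/behaviour names are hypothetical, and this only works in engines that ship Proxy):

```javascript
var record = {id: 2, name: 'Desislava', lives: 10};

// Behavior lives in a separate object; the raw data stays untouched.
var behaviour = {
  kill: function() { this.lives--; }
};

var proxied = new Proxy(record, {
  get: function(target, prop) {
    // Delegate to the behavior object first, bound to the raw data.
    if (prop in behaviour) return behaviour[prop].bind(target);
    return target[prop];
  }
});

proxied.kill();
record.lives;           // 9 - the raw record was mutated
JSON.stringify(record); // still serializes to plain data only
```

As with the __proto__ trick, the serialized form is exactly the data the server expects, because the behavior never becomes an own property of the record.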

January 27, 2013

Decoupling components for frontend development.

Introduction: It appears the era of monolithic UI libraries is coming to an end.

It has been irritating for years: you go deep into a project for months just to realize that you need a certain type of UI component that is not available for your set of libraries but is readily available for another (for example, it is written for Mootools but you use jQuery, or it is in Dojo but you are actually using jQuery).

So before you even start, you have to have in mind all the components you might need. Then go and hunt for a library that has them all, or at least most of them. You then have to spend time learning its component idioms (i.e. how a new component is created or composed from existing ones). This is one of the hardest things to grasp in a new library, in addition to its build model.

The build model is another opinion baked into a library that you can't (or can, but only with difficulty) change. Some libraries (smaller ones) do not provide a build model and you are on your own; however, if a build model is selected, you have to stick with it. If the library uses AMD you have to stick to AMD. If the library uses CommonJS you have to stick to browserify or a similar tool. The primary goal being dependency management, it often includes concatenation and minification steps.

A problem often overlooked is that you can achieve greater responsiveness if you separate initialization code from the rest of your application. RequireJS supports building your application in build units, allowing such setups. The closure compiler also allows you to separate your code into loadable modules. However, with browserify you are on your own: you have to manage the dependencies and the build step if you elect to use one.

A new concept recently introduced by TJ Holowaychuk is components: the idea is that you can develop a reusable component, put it in its own repository on GitHub or on a private git service, and then reuse it from within another component or in an application. Components can contain CSS, JavaScript and HTML.

I find the idea great, however the execution and the design are not so great. The following problems stick out pretty fast:
  • while component management is made easy with this project, you are required to use only one final file in your application, so modularization and modular loading are not possible yet.
  • while the builder used in the project provides a hooking mechanism, it is unusable unless you either alter the builder itself locally to attach your own hooks (i.e. ones that are not provided by default) or fork the component npm package and make it require an augmented builder package in which you have extended the base builder. This makes the project really hard to augment in a transparent manner and locks you into constantly chasing the original.
On the other hand, the naming convention used and the plumbing added to make it work make the project a real pearl in the dust: no more strange and ugly names (as in the author's example, you could use 'tip' for your project name; no more tippie, tipsy, tips, tiping etc.) thanks to the repo/project naming convention. The names are disambiguated automatically and this works great.

The even greater thing about this is that dependencies are managed automatically and transparently for you, and that you can concentrate on smaller, testable units of code. This IMO encourages decoupling (similar to AMD) but removes the extra complexity of AMD (the paths, for example, can be a real hell if you try to mix several repositories). Flattening out the directory tree was a great win for this project.

Conclusion: One thing we should get used to in the coming months and years is that the monolithic application model (an application that consists of only one pre-built file) is going away. Shadow DOM, Web Components, JavaScript modules etc. are making steps forward, and the sooner we understand their speed and memory implications, the better we can implement them in our projects. Until then: use decoupled code as much as possible.

Tree shaking and should you care about it.

Introduction: Tree shaking is the process by which unreachable parts of the code in your application are removed.

Description: So now we have at least 4 JavaScript compilers / minifiers out there, and all of them are pretty much feature complete and stable. They support source maps (very important nowadays) and they do a decent job at squeezing those large (sometimes over a megabyte) chunks of JavaScript code into something that you can actually serve to your users without making them wait long enough to lose them.

While in some parts of Europe having an 80MB optical downlink at home for 10 euro/month is pretty common, there are places on this Earth that still consider even a single megabyte per second to be really fast, a pretty much inaccessible luxury. So if your content takes under a second to load on an 80MB link, it might take a whole minute in another location. Not that most of the content is JavaScript; often images take a larger portion of the traffic (and oh my god, stop with those ultra large "retina ready" images!). However, the perceived performance of your application is mostly affected by the JavaScript, and minifying it is only one side of the coin.

Of course you should try to write optimized code. Of course you should avoid unneeded repaints. Of course you should throttle repeating actions. This is mostly what you should look out for as a developer working on your code. What you should not look out for is how many properties you are attaching to your function's prototype. Or how deep your inheritance chain is. Or how to avoid name collisions.

Now, let's talk about tree shaking. This is what developers call the act of stripping out code that is never used in the application, provided the tree shaking procedure is presented with the whole application code. So basically you give it all of your code and it returns the same code, but with the unneeded parts removed. If you mostly write application code, you might argue that all your code is used, and most often that is the truth. However, if you are relying on a library, do you use all of it? If you inherit from a UI class and you want to use only a small subset of its methods, then tree shaking is what you want. It reduces file size and it reduces parse time.

Now, there are two instruments out there that can do tree shaking: the closure compiler and uglifyjs.

I will start with uglifyjs: if you have code like this:
if (false) {
    // do something
}
then uglifyjs is smart enough to remove it. However, uglify is designed to minify your code, stripping unneeded spaces and reducing the length of variable names. It is not designed to do application level tree shaking.
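For example (the Api object here is hypothetical), uglify cannot tell that a method is never referenced anywhere, so it stays in the output:

```javascript
function Api() {}
Api.prototype.used = function() { return 1; };
// Never called anywhere in the program, but uglify keeps it: proving
// it is unused would require whole-program analysis, which a plain
// minifier does not attempt.
Api.prototype.unused = function() { return 2; };

new Api().used(); // 1
```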

The other available tool is the closure compiler with its advanced mode: it is able to determine all the code paths that are not executed, assuming you input all of your code. However, the compiler assumes a subset of JavaScript that most developers find too restrictive. So basically if you use a library that does not conform to the closure style, you cannot use it in a build, which means that you have to either use extern files or use bracket notation (which is more tedious) in your code.
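A sketch of the bracket notation workaround (the thirdParty object is hypothetical): quoted property names are left alone by the compiler, while dot-accessed ones get renamed along with the rest of the program.

```javascript
// An object coming from outside the compiled program:
var thirdParty = {'apiKey': 'abc'};

// Bracket access with a string literal survives advanced optimizations:
var key = thirdParty['apiKey'];

// thirdParty.apiKey, in contrast, would be renamed together with the
// program's own properties and break at runtime, unless the object
// is described in an externs file.
key; // 'abc'
```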

Here is a small truth: the closure compiler is perpetually used in large applications like Gmail, because Gmail introduces an average of 10% code change in a month. This is a product Google is still actively developing and augmenting, and in their case using app cache or other techniques to let the browser skip those requests is not possible. If your application can get away with serving the same version of a file, just go with that.

Conclusion: The whole idea of a 'building world' is designed and introduced for projects that match Google's workflow (iterative augmentation of a product on a regular basis). It is not necessarily the best thing for your project. On its own tree shaking is a good thing, but the code restrictions are too great to ignore. If possible, you should investigate other modern solutions for speeding up code loading.

January 17, 2013

Browser wars?

It is old news but still: Chrome is taking up share from Firefox.

I, having been a Linux user for at least 10 years, think that the Chrome 'speed' and 'performance' hype is way overestimated.

Maybe because I use Linux, or maybe because I do not like producing too much hazardous trash, I change my laptop only every 6 years: every 6 years I spend tons of money on a top of its class laptop. It is usually 'business' class (I have no idea what this means except the extended warranty, but anyway), which basically leaves me with a machine that is not that performance capable, but is more durable and has a little more squeezed-out battery life.

Now about Firefox/Chrome. First of all I notice the following with Chrome:
  1. Chrome is much slower from first start to being capable of showing anything (I have not measured it, but at least 3-4 seconds, and 1 more to colorize the applications (meaning really ready, not just showing the main window)).
  2. It is much more CPU intensive when loading content: it does not really bother me how much CPU it uses, but if I scroll on a long page and click a link with the middle button to open it in a background tab, the scrolling experience greatly degrades until the new tab is loaded. Maybe this is why Chrome has no setting to always open links in the background when opening a new tab? Besides, it is supposed to use all the cores of my CPU, so how can Firefox do it with only one process while Chrome cannot do it with all?
  3. Tab switching is much more prone to "redraws": after I have been on a tab, switch to another and go back to the first one, I see re-flows happening. This is not really a problem, but just wondering: eating like twice the memory and CPU compared to Firefox, why not just do the re-flows in the background whenever they are needed? And no, the pages are not using the visibility API :)
  4. Sync works, but in a very strange manner. Let's say I have to use a PC at work and I sync with my Google account. It takes hours to have everything synced, and some applications never sync, so basically now I have a different set of plugins (for example the Google chat in tabbar for Chrome) and a different set of applications (for example Google Play) on three different computers. And no, those will never sync. The theme syncs after hours too, sometimes days; I have had a case where my theme was different on 3 different computers for 3 days. Eventually it syncs, but... I have also had cases where the sync gives up at some point; it gives no error of any kind, but you can clearly see that new bookmarks never arrive at your other instances. While this is not directly Chrome's fault, being able to sync everything is considered part of the browser nowadays.
  5. I cannot talk only about the wrongs: Chrome has the best developer tools ever! Not only are they feature-rich (much, much more so than those of Firefox), they also work much, much faster than anything else available. It would be foolish not to use them instead of other tools.
 So basically, from a browsing / surfing point of view I find Firefox to be much friendlier than Chrome, not only RAM- and CPU-wise but also perception-wise: Firefox shows up and is ready to be used much faster, is able to retain many more tabs without issues, and is able to store those tabs for a long time. I often use the ability to store tabs for later viewing in another tab group (using Panorama) and only check them out when I have the time; the browser performance is not affected by this at all, nor is the startup time. Try the same with Chrome... On the other hand, I develop large web applications and I would not be as effective without top-class developer tools, so I have to use Chrome.

I think it is worth mentioning that on Mac OS X the picture was slightly different: I briefly had access to a brand new MacBook Air. I initially installed Firefox, but the performance was really bad. I then installed Chrome and the performance was much, much better: the start time was again a bit longer, but once it was running it was much smoother and faster than Firefox. Chrome was also using features of the OS that Firefox ignored, like back/forward with a swipe, or full screen the OS X way. However, I am not such an Apple fan and sold the laptop after a month of trying to get used to it. The point is that the performance is very different on another OS, so it is possible that Chrome is better on Windows as well; on Linux, though, Firefox really is much better.

On one hand, Google could at least try to make Chrome as good on Linux as on the other platforms (but probably will not, because Linux has such a small share of the desktop). Firefox, on the other hand, could try to pick up those OS-specific features a little, but mostly Mozilla should come up with useful debugging tools. I can tell you, working with the current tools really feels like hell; I cannot do anything with them, and when I want to check out how a feature works in a page I encounter, I have to open Chrome and load the page there.

I wonder if anyone else has a similar experience with the browsers on Linux? I also wonder how Chrome compares to Firefox on Windows machines. Not that I am going to use one any time soon, but still.

January 14, 2013

Butter smooth timeline for your video/audio project (canvas drawing)

Conceived as a pet project for editing TV schedules for small IPTV providers, the project proved to be an interesting challenge.

For starters we looked at some other implementations and liked this one: it comes with lots of examples and proved interesting enough to work with. However, we quickly realized that the code does not scale well at all. All the examples feature very short videos, while we needed a visual time representation for a period of at least 24 hours. Basically, the code would draw each second on the canvas no matter how long the period is, and thus worked progressively slower as larger periods of time were used. It also binds directly to a slider / scroller and thus updates on every value change, instead of when the browser has a drawing frame scheduled.

Nevertheless, the code was a good example of a timeline view executed with canvas, and we decided to use its concepts.

The first thing to do was to make it Closure Compiler friendly; this was easy. We also stripped the binding to the scales / scrollers and instead exposed methods to set the values on the class, regardless of the means used for the user interface.

Then we started looking for optimization opportunities. We noticed that if we draw all the seconds in the period on the canvas, the time lines (scale separators) quickly merge into one solid band: all the drawing is always performed, so each second's position is determined and drawn on the canvas. As you can imagine, for a 24-hour period those draws often ran from thousands to tens of thousands. We decided it would be wiser to divide the whole period into evenly divisible steps and use those when jumping from one time position to the next, instead of iterating over every second of the period.
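The step-selection idea can be sketched like this (the function name and the step table are our illustration, not the project's actual code): instead of one tick per second, pick the smallest "nice" time step whose ticks stay at least a few pixels apart.

```javascript
// Candidate steps in seconds: 1s, 5s, 15s, 1min, 5min, 15min, 1h, 3h, 6h.
var STEPS = [1, 5, 15, 60, 300, 900, 3600, 3 * 3600, 6 * 3600];

// Pick the smallest step whose adjacent ticks are >= minGapPx apart.
function pickStep(visibleSeconds, canvasWidthPx, minGapPx) {
  var pxPerSecond = canvasWidthPx / visibleSeconds;
  for (var i = 0; i < STEPS.length; i++) {
    if (STEPS[i] * pxPerSecond >= minGapPx) return STEPS[i];
  }
  return STEPS[STEPS.length - 1];
}

// 24 hours in a 1000px-wide canvas, ticks at least 5px apart:
// ~0.0116 px/s, so the step must be >= ~432s; 900s (15 min) is the first fit.
console.log(pickStep(86400, 1000, 5)); // 900
```

With this, the number of draw calls is bounded by the canvas width, not by the length of the period.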

Our first solution was fine, but iterating over the same span of time with different steps has two side effects. The first is related to how the canvas uses its dimensions internally. For best results you want to set the canvas size to the actual size of the view. Also, when drawing single-pixel-wide lines you want to shift the line by half a pixel (i.e. if you want to draw on pixel one, you need to perform a 1-pixel-wide stroke at position 0.5). If you draw at anything else, anti-aliasing kicks in and what you see is not as crisp as you might have hoped. Because we implemented this half-pixel shifting as the very first thing in our code, the defect introduced by the multiple walk-throughs of the scale was not visible: we were simply drawing on the same pixel multiple times. Still, the code was much faster now, because at this point we did not draw on every second but only whenever the next time point would stand at least a few pixels away from the previous one. Considering the capabilities of modern monitors, this means no more than a few thousand lines are drawn, more often than not less than one thousand, regardless of the time frame to be visualized. The second side effect was managing the time stamps on top: producing meaningful time stamps required much more code.
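The half-pixel rule boils down to one tiny helper (a hypothetical sketch, not the project's code): snap the coordinate to a whole pixel, then add 0.5 so a 1-pixel stroke covers exactly one pixel column.

```javascript
// Snap x to a half-pixel boundary so a 1px vertical stroke stays crisp;
// any other position makes anti-aliasing smear the line over two pixels.
function crispX(x) {
  return Math.round(x) + 0.5;
}

console.log(crispX(10.3)); // 10.5
console.log(crispX(10.7)); // 11.5
```

Usage would look like `ctx.moveTo(crispX(x), 0); ctx.lineTo(crispX(x), height);` before a `ctx.stroke()` with `lineWidth = 1`.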

Using the whole-pixel shifting technique, we also needed to apply this shifting when drawing the separate program items, and that proved not so trivial: while the scale is drawn in steps, precise times do not land neatly on those steps. Instead of putting too much effort into aligning the pixels, we decided to investigate an alternative: let the canvas use fractions when drawing, but keep the reduced number of drawing calls by using time steps larger than 1.

This, however, exposed the defect introduced in the previous step: drawing on top of a fraction of a pixel does make a difference in the visualization, and because we drew the different step sizes at different lengths, the resulting image was very poor. Basically, the drawing is performed at the same fraction (e.g. 0.232536), but visually you get a wider line because you have drawn on it twice.

At this point we were happy with neither the pixel-alignment results (too difficult to shift everything to a whole number, too many calculations) nor the step-cycling results (poor picture quality).

During the weekend I had the idea of removing the cycles of drawing with different steps and instead using only one step, calculating each position's offset relative to the predefined steps. If the modulus division yields 0, we are on a certain step (e.g. a full hour if we divide by 3600). Another thing to notice is that we do not actually need to calculate the X position at every step: because the step is determined only once per redraw, we can also determine the difference between two adjacent steps and use it to increment the X position, without calculating X for each second in time. As a final optimization I also suggested calculating the first visible second and aligning it to the step, so we skip cycling over X positions that are out of view. So basically you calculate X for second 0 of the period, then the first step value; using the offset to 0 (the start of the visible area) you get the initial visible second, and from it you calculate the first drawable X value. From then on you only increment the second to draw by the time step, and its X value by the difference between two adjacent steps.
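The single-pass walk described above can be sketched as follows (names are ours; the real project code differs): align the first visible second to the step, then advance both the second and the X position by constants, using a modulus check to flag major (hour) ticks.

```javascript
// Compute tick positions for the visible window in a single pass.
// startSecond/endSecond bound the visible area; pxPerSecond is the scale.
function tickPositions(startSecond, endSecond, step, pxPerSecond) {
  var ticks = [];
  // First second at or after startSecond that lands on a step boundary,
  // so we never cycle over positions that are out of view.
  var sec = Math.ceil(startSecond / step) * step;
  var x = (sec - startSecond) * pxPerSecond;
  var dx = step * pxPerSecond; // constant distance between adjacent ticks
  while (sec <= endSecond) {
    // Modulus check: a full hour gets a "major" (taller / labeled) tick.
    ticks.push({ x: x, second: sec, major: sec % 3600 === 0 });
    sec += step;
    x += dx; // increment instead of recomputing X from scratch
  }
  return ticks;
}

var t = tickPositions(100, 1000, 300, 0.5);
console.log(t.length); // 3  (seconds 300, 600, 900)
console.log(t[0].x);   // 100  ((300 - 100) * 0.5)
```

Per redraw there is one division for the alignment and then only additions, which keeps the floating-point work to a minimum.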

Now the code behaved perfectly, even on Atom-powered laptops with the Chrome browser. In Firefox, however, things were not as brilliant: there was noticeable lag on updates on a desktop Core processor, let alone a netbook.

The next optimization we had to perform was to decouple the drawing of the canvas from the scale and scroll updates. This is an easy and well-documented technique (just look up "using request animation frame" in Google). Basically, every time we have value updates we set a dirty flag and call RAF, which keeps calling itself until the dirty flag is turned off.
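A minimal sketch of the dirty-flag plus requestAnimationFrame pattern (the class and names are our illustration; the scheduler is injectable so the sketch can run outside a browser):

```javascript
function Renderer(draw, raf) {
  this.draw = draw;
  // Scheduler is injectable for testing; defaults to the browser's rAF.
  this.raf = raf || function (cb) { requestAnimationFrame(cb); };
  this.dirty = false;
  this.scheduled = false;
}

// Value updates never draw directly; they only mark the view dirty
// and make sure exactly one frame callback is pending.
Renderer.prototype.invalidate = function () {
  this.dirty = true;
  if (!this.scheduled) {
    this.scheduled = true;
    var self = this;
    this.raf(function () { self.frame(); });
  }
};

// The expensive canvas redraw happens at most once per animation frame,
// so any number of batched updates collapse into a single draw.
Renderer.prototype.frame = function () {
  this.scheduled = false;
  if (this.dirty) {
    this.dirty = false;
    this.draw();
  }
};
```

Setting the scale or scroll values then just calls `invalidate()`; ten updates landing in the same frame still cost only one canvas redraw.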

Now that was a butter smooth canvas visualization.

What was accomplished: an unlimited time frame to visualize, unlimited items in the schedule, decoupled value updates (i.e. you can set them in batches and it will not hurt performance), smooth animation, and great performance on all modern browsers.

What we learned: drawing on the canvas is expensive, so draw as little as possible; calculations with floating-point numbers are also expensive, so reduce them to the possible minimum; and use RAF to limit the drawing operations to the browser's optimized frame rate.

A demo application will be available soon, and the code related to the timeline will be on GitHub once the project is complete.

Nonsense all around the place...

Following the last post, I would like to explain what (or who) motivated it, and how badly one can be misled by authority.

At this address you can find a long post trying to argue that the Closure Compiler in advanced mode is a bad idea. Posts from the same author have been throwing crap at the fan for some time now, but this last post was really the tipping point for me. Basically, this behavior is all over the Internet now that the JavaScript language has become so important and widespread.

First rule: attack something that does not match your project criteria, or something you have never used, or something you do not even understand. The author states that he has never used the compiler in advanced mode and, judging by his examples of how bad advanced mode is, has also never tried to understand the concepts behind the compiler.

Second rule: make a fool of yourself by talking about optimizations you have never actually applied. The author suggests that you should manually remove the pieces of your code that are never reached in the application. This would probably be the most insane thing to do, but let's assume we want to do it. Let's say we have a calendar widget that can do anything calendar-related, and we only use it as a static calendar in our app. Most of the methods and properties attached to the constructor are never reached; something like 80% of the code is unreachable. Let's say we have a large application that uses at least 20 such widgets, and the code accumulates to 2+ MB. Now imagine you want to make this application accessible to people with slow network connections and old software, like people still living with XP or worse (Windows 2000, for example). Waaaay to go, removing the unreached code by hand. Really!

Third rule: attach importance to things that are really not possible with the technology you are attacking, but make them sound essential. ES6? ES5? Polluting the global scope? Just wrap the compiled code in an immediately invoked function and move on! It is as simple as specifying the wrap text at the command line. Object.defineProperty? Combined with the wrapper, I do not see how one could mess with your code, and access is controlled by the directives anyway; basically these overlap, and which one is used depends on the developer's preferences. Different syntax? Not really: one can either use externs or stop assigning meaning to the names.
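For reference, the wrapping really is a single compiler flag; a typical invocation looks something like this (file names are hypothetical):

```shell
java -jar compiler.jar \
  --compilation_level ADVANCED_OPTIMIZATIONS \
  --js app.js \
  --output_wrapper "(function(){%output%})();" \
  --js_output_file app.min.js
```

The `%output%` placeholder is replaced with the compiled code, so nothing leaks into the global scope.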

Fourth rule: just go plainly mad and barf all over! "Not compatible with RequireJS"??? Really? That is a thing now? So basically we are supposed to believe that AMD is now the only way to scale an application? I have seen only one (ONE) large application that uses AMD: the Cloud9 IDE. That's it. Everything else I have encountered was a regular-sized application with some AMD inside, mostly just to be on the cool side. The compiler has modules too, by the way, but the author wouldn't care, because those are not AMD-compatible.

Bonus rule: talk about security. Just to make sure you will be taken seriously by the masses starving for JavaScript enlightenment, do not miss the opportunity to talk about security, whatever it means in your context. Basically, the point about the compiler deleting your code is applicable to anything, really! Wrapping the code in a closure would solve this one as well, but of course the author is looking for problems, not for solutions. How your code is treated (and removed) by the compiler is well documented, and many posts have been written explaining that advanced optimization is not for everyone. But of course, if security has been compromised, then yes, this compiler mode must be crap. Evidently this is why Google's web products fail pretty much all the time: because of advanced mode. If they just switched to AMD and simple mode, it would be much better. Hopefully someone from Google is reading the author's blog and will take note and start acting on the issue....

January 07, 2013

When it comes to awful JavaScript

The worst thing about JavaScript is not eval, with, or this. The worst thing is that people take a high-ranking development position in a top company and pretend they rule the universe and know it all. But wait, there is something even worse: everybody else in the JavaScript universe starts, almost religiously, to believe every word those people say.

The takeaway? Be careful whom you trust, be careful what a benchmark is actually measuring, and avoid micro-optimizations, especially integrating one just because a 'know-it-all' says so. Basically: be skeptical, and peer review :)