November 09, 2013

HP Chromebook 11 - a developer's review.

Ever since the inception of the Chromebook project I have been a huge fan.

I envied the attendees who got a Cr-48. Then I was furious that the Samsung ARM one (or any of the others, as a matter of fact) was not available in my country. I have travelled, but never got the chance to see one, or to stay long enough in any one place to wait for a delivery.

Finally, a few weeks ago, when the new HP Chromebooks were announced, I decided it was time to purchase one.

A little bit about my love-hate history with the 'new laptop' purchases.

I have an old Thinkpad X61s. Purchased 6 years ago when it was brand new, it cost a lot! Really big money for the time. But I loved it. It has always run Linux, and even though I had to manually install a script every time I reinstalled (every 18 months or so), it worked great.

A year and a half back I realised it was maybe time to upgrade. So I went hunting for a new laptop.

This is the place to say that I do web development for a living. Mainly front-end development (CSS, HTML, JavaScript). Modern front-end development also requires you to run Node.js, Java, GNU Make, Bash, Python and a series of other more obscure tools. They are not part of the final product, but they greatly reduce the workload and provide assistance that is invaluable for modern development practices. Linux fits here in the middle: lots of tools work best on OS X (Casper and the like), but Linux is still much better than Windows.

Naturally I first went with a Thinkpad again - an Edge 340 (or something like that, sorry, I don't really remember the numbering - Core i5, 4GB, 120GB HDD, 11-inch glossy display, built-in camera). Well, the problem with that laptop was that it ran way too hot to tolerate on my lap. And the fan never, ever stopped, even on Windows, let alone on Linux. So I returned it.

Then the next choice was Apple - the 11-inch MacBook Air. I have to say it was a very well built machine. It only gets hot when watching flash videos in full screen. And then there was OS X. Well, there is the menu bar always on top, stealing 25 precious pixels from a very short 768. And then there was Chrome not being able to make plugins go full screen (so when you go to Apple trailers you have to watch them as small as they come). And then there was the fact that lots of tools had to be managed/installed manually from different sources. At least on Linux there is one general command to manage all software. I lived with it for 2 weeks and then I returned it. Some say I should have given it more time. Sometimes I think that too, but then I remember what I had to put up with all the time for the smallest things, and I do not intend to go back to OS X any time soon.

To sum up: I am a web developer who is fed up with badly designed hardware and software that does not fit my expectations.

And then there was ChromeOS. I decided to try and live off the Chrome browser (on Linux) for a month and see how it goes.

I uploaded my pictures to G+ and my videos to YouTube, my PDFs and manuals to Drive, and my docs and spreadsheets to Docs. My music did not fit in anywhere, but I was listening more and more on my phone, so it was not such a big deal. I moved my development environment to Cloud9, and what I was not able to move there I accessed 'live' with sftp mounts. (Well, I was not aware that sftp mounts are not allowed in ChromeOS, which sucks.)

The experiment went on for about a month and I was sure I could live with the Chrome browser. I still have my work PC at the office if I need to test Firefox or IE compatibility.

I thought a lot about which Chromebook to purchase. In the end I just wanted a silent machine, and the one without a fan seemed just the right choice. I was aware that it might get slow with more than 5-6 tabs, but that was okay, I just needed to make some adjustments to my browsing - that's what I told myself.

So I went and purchased the HP Chromebook 11 - ARM.

First I needed to wait for availability. Then I needed to wait for the delivery. Then for the delivery from the UK to Bulgaria. It cost me almost as much as a regular netbook (around 300 Euro). But it finally got here.

I had to wait (a lot) for the initial update, but after that it was fine - for viewing my email and G+, that is.

And then I tried watching a movie from my thumb drive. H264 (480p) plays just fine. Excellent actually. Xvid is really bad (read SLOWWW). I haven't tried anything else yet, but I think h264 is okay. For now.

And this morning the shit hit the fan. I like to watch tech videos on YouTube at 1.5 speed. So naturally I went and enabled the html5 player. And then it stopped working. Like, at ALL! I was no longer able to rewind in videos. If I try, playback stops and it never comes back. Even after a refresh. Even after a restart. So I go back and opt out of html5. But guess what - even if it says you are being served the flash player, you still get the html5 one. Even after clearing all caches. Even after resetting the browser. Even after deleting all the cookies. Even after a powerwash. Even in incognito mode. Even in guest mode. So basically you are stuck with html5 playback, which is terrible on this laptop.

After advice from the G+ community I installed an extension that half resolved the problem. Now the video starts loading, and then the extension kicks in and reloads the video, this time with the flash player. Great! So after pushing html5 for years, it does not work on the machine they designed themselves? The big shots have more important things to do, I guess.

So I went about my day and logged in to c9 to do some work. Yeah, not so fast! c9 uses a socket to communicate. And the HP11 decided to drop the connection every few minutes. Even when on the same network as the Thinkpad, which never drops it. Even when left alone on the network with the router. Even after a powerwash. Ha! So basically the terminal restarts every few minutes. And YouTube videos stop every few minutes. And it is not the network and it is not the router. So it IS the HP11.

At this point I doubt that it can be used for anything more than ranting about it on the Internet. That it does great. It is actually so bad that half of my spreadsheets do not work (actually they do, but after one makes a change it starts auto-saving, and no new changes are reflected in the UI until a successful save - and the saves take 1-2 minutes. Yes, MINUTES!).

This reminds me of a toy that is sold currently - an imitation of a laptop which plays some music and displays some text to teach the kid the alphabet. Anyone looking to buy such a toy please feel free to contact me - I have one for sale and as a bonus you can also (try to) watch your porn on it.

On a scale from 1 to 10, 10 being the best laptop, the HP11 is a 2. One point for running completely cool. Half a point for being shiny and good looking, and half for the good display. Everything else about it is grossly unbearable and should never, ever be sold to customers.

Important notice: it is NOT the software. I have lived in a Chrome-only environment for the last month and everything was great. But it is great when run on normal hardware. When run on something like the HP11, where the wifi is unreliable and YouTube seems to be hated, it turns into a nightmare. Developing in the cloud is absolutely impossible (if you want to stay sane). And you can never install anything locally, remember?

So two options are left: sell it, or try to run Ubuntu on it. About that - only 4GB are left free. How come, out of 16GB? What do those people put on this machine? I have Linux installs with only a 10GB root filesystem and all possible software (including build tool-chains, Java, GIMP, OpenOffice and lots and lots of other software), and it never goes beyond 7GB. How come ChromeOS has already eaten 12GB?!?! It's a mystery!!

When I get back to the city I will try it again. And if it fails me again I will just sell it. Too expensive to return it.

I have no problem parting with it. The problem is what I will get instead. I cannot live with Windows. And I certainly do not want OS X. But Linux is becoming weaker and weaker every year. ChromeOS seemed like a nice fit, but evidently they still need to figure out their hardware policies. Because I am neither paying 1300 USD nor putting up with a half-usable laptop.

September 22, 2013

Frameworkless JavaScript (or how we reinvented the wheel)

So the quality of articles is doomed to decline once you get enough subscribers - I have noticed that. The latest example is "JavaScript Weekly". How and why, I still cannot comprehend, but it is a fact. In this week's edition a ridiculous post written by Tero Piirainen was shared with the subscribers. In short (if you do not want to waste time reading it all), he claims that they do not use a library (A, B or C) because it introduces:

  1. large code size (to be downloaded)
  2. abstractions (really? abstractions over a websocket that only transfers JSON??)
  3. a large code base that is hard to comprehend

So basically he advises us to reinvent the wheel every time we start a new project, so as to have a clean slate and think only about the API. Hah....

Dear Tero, smart people years ago had the same problem you have today. And smart people tend to resolve problems (well) in a smart way. Years ago Google open-sourced their set of tools called Closure Tools. It attempts (and succeeds) to solve all those problems you are fighting like a real Don Quixote, without ever looking into alternative solutions. Of course lots and lots of people did not understand how to use the library and the compiler back then, and still do not understand them today. One such example is here (a really silly article about micro-optimization that shows a lack of understanding of what the compiler does to such "slow" code). For those who are not into masochism, the tools do what you want without you reinventing the wheel each time: tree shaking, minification, method extraction (i.e. shortening/removing prototype chain lookups), reducing/removing namespace nesting, method in-lining, modules (lazy loads) and more (like pre-calculating values etc). Yes, it is kind of hard to wrap one's head around, but it does a brilliant job, and in my experience it reduces code from several megabytes to 30-40KB gzipped.
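To make the tree-shaking and renaming claims concrete, here is a minimal sketch (my own plain-JavaScript example, not from Tero's post) of the kind of annotated source the Closure Compiler consumes. Under ADVANCED_OPTIMIZATIONS the compiler can inline square, rename cube, and remove unused entirely, because nothing references it:

```javascript
// A small namespace in the Closure style. With ADVANCED_OPTIMIZATIONS the
// compiler can inline `square`, rename `math.cube`, and drop `math.unused`
// from the output entirely, since nothing calls it (tree shaking).

/** @const */
var math = {};

/**
 * @param {number} x
 * @return {number}
 */
math.square = function(x) { return x * x; };

/**
 * @param {number} x
 * @return {number}
 */
math.cube = function(x) { return x * math.square(x); };

/** Dead code: a candidate for removal by the compiler. */
math.unused = function() { return 'never called'; };

console.log(math.cube(3)); // → 27
```

The type annotations are what enable the aggressive renaming and inlining: the compiler can prove what each property is and who uses it.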

I do not say it is the perfect solution, but it does a very good job. Today it is still actively developed at Google and is actively used in a large number of their user-facing products. Think about that for a second: one of the largest web companies, a company very keen on making the web fast, is using those tools to make smaller, faster-loading, faster-running applications!

And yes, backbone plus underscore plus whatever templates is the mainstream way of doing things these days. But no, this is not the way to go when you want large-scale yet small applications.

August 14, 2013

After Google Reader

It is somewhat in fashion to name things starting with 'After'. After Earth, and so on.

However this post is about the now dead Google reader and what happened after that.

At first it looked like feedly would be the winner - they already had integration with Google accounts, it was very easy to bring over your feeds, and while it lagged behind a lot on the UX side of things, they provided a working solution to the problem that roughly a million users were facing.

After a week of testing I migrated (almost a full month before Reader was closed down). The day it happened feedly had some issues, but nonetheless it worked well. The problem was that its creators had a UI vision very different from what the Google engineers had when Reader was last updated. Hovering to activate the side panel, really? No way to go to topics, only a few very basic shortcuts... there was so much missing that it barely scratched the needs of power users. And then it was an extension, not a real cloud app. But they fixed that...

If you are like me, you read several hundred titles a day, mostly skimming for something that might be of interest or importance. This was not supported at all initially; then they added a somewhat handicapped version of it (calling it the "classic Google Reader look"). Well... no.

There was no search. Later they decided that search would go behind a paywall. Great, so now we are supposed to just pay for search results? It ain't gonna happen, considering that the same content is available online and already indexed by Google's search engine. Sorry fellas, wrong decision.

If that was the end of the story it would have been a bit tragic: users ending up with a half-working (several times I had to contact support because titles I had already read kept coming up as new!), half-familiar product, worse than what they were used to.

But then - I don't even remember how - I found InoReader. Guess what: it has the exact same shortcuts as Google Reader, it looks pretty much the same (it's an option) and behaves the same. It shows statistics for your feeds, and in a neatly arranged UI box it shows the status of your subscriptions, so you can see if one or more have stopped working. And it loads FAST. I mean really, really fast! I cannot compare it to GR, because GR is gone already, but it is much faster than feedly, that is for sure! It reacts to user input faster and behaves more smoothly.

If you are like me and happen to miss Google Reader, just give it a try! I promise that at least UX-wise it won't disappoint. I am not sure how well the backend is constructed, but front-end-wise it is the best out there, maybe even better than GR!

July 10, 2013

Closure tools development in the cloud

I have been waiting for this for some time now!

Probably this happened some time ago, but I found out just today that it is now possible to use the cloud9 IDE in a fully shared environment, which means I have a terminal, Python and Java runtimes, and I can finally develop with the complete Closure Tools in the cloud! I just had to copy my setup over from a local folder to the hosted environment and it just worked.

Well not "just", because they run an older version of python (2.6), but it is pretty easy to overcome this.

The finding was part of my ongoing investigation for an article on using a Chromebook as the primary laptop for the modern web developer. Of course my workflow is already skewed enough by my decade and a half of living with GNU/Linux, but even I find it frustrating to constantly have to "update" and "dist-upgrade". The last missing piece will probably be the hardest to figure out, but I have one last trick up my sleeve.

If it all goes well, I will be buying my next laptop from Google. If not, well, I will still be buying a laptop with Chrome OS, but it will be the cheapest one (the Samsung S3), and I will use it only for light reading.

Tip: I know that the Samsung can be used for development as well, because all the compilation and checks run on the server; however, because it has only 2GB of RAM it would be hard to keep many "applications" open on it, and I am not sure how usable it would be for devs.


May 30, 2013

How to easily track your mutual fund investments (with a Google spreadsheet)

Introduction

After the state decided that Bulgarians have more than enough money and that it was time for a new tax, in force since the beginning of 2013 (the interesting part is that it turned out to be retroactive, which burned quite a few saving Bulgarian families by taxing deposits made long before the tax was even discussed, for example 2 or 3 years earlier), it was only natural for people with savings to look for an alternative way to invest their money, so that they could at least beat inflation. One option was the so-called non-term deposits; the other (for the slightly more adventurous) was buying shares in mutual funds.

This article aims to help non-specialists track and measure their mutual fund investments more easily.

Walkthrough

We assume you have already picked the mutual fund you want to buy shares in. If you have not picked a fund yet, the Bulgarian financial guide is a good starting point.

In my opinion, the easiest way to track the price movement is a Google spreadsheet.

To build an example we will use one of the UBBAM funds.

Pick a fund and trace the URL from which the price information is rendered as a table. In the case of "Patrimonium Zemya" the address is generated automatically by JavaScript and consists of a fixed base and two variables: a start date and an end date.

http://www.ubbam.bg/libs/graphics.php?id=39&type=5&date_to=
&date_from=

For this example we only need the most recent record, so we use the following formula to build the concrete URL:

=CONCATENATE("http://www.ubbam.bg/libs/graphics.php?id=39&type=5&date_to=",TEXT(TODAY(), "yyyy-MM-dd"),"&date_from=", TEXT(TODAY()-1, "yyyy-MM-dd"))


This gives us a valid URL for loading the fund's price data.

The next step is to import that data: at the top of the sheet we use the function for importing data from web tables:

=ImportHtml(B19, "table", 0)

where B19 is the cell holding the generated URL.

So far we have automatic refreshing of the fund's current prices and number of shares every time we open the document.

The next step is to enter our purchases and the formulas for the indicators we care about: the current value of our investment, absolute growth in percent, annualized growth, and so on. Depending on your specific investment goals the formulas may differ, but the main thing we care about is whether our stake in the fund is making a profit and, if so, whether that profit beats the interest rates banks offer on term deposits.
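Whatever formulas you end up writing, the underlying arithmetic is simple. As a sanity check, here is a hedged sketch in JavaScript (the function names are mine, not part of the spreadsheet): absolute growth compares the current share price to the purchase price, and annualized growth is the compound annual growth rate over the holding period.

```javascript
// Absolute and annualized growth for a fund position.
// buyPrice and currentPrice are per-share prices; years is the holding period.

function absoluteGrowth(buyPrice, currentPrice) {
  // Fractional growth: 0.1 means +10%.
  return (currentPrice - buyPrice) / buyPrice;
}

function annualizedGrowth(buyPrice, currentPrice, years) {
  // Compound annual growth rate: (end/start)^(1/years) - 1.
  return Math.pow(currentPrice / buyPrice, 1 / years) - 1;
}

// A share bought at 100 that now trades at 110, held for two years:
// 10% absolute growth, roughly 4.88% per year.
console.log(absoluteGrowth(100, 110));
console.log(annualizedGrowth(100, 110, 2));
```

The same two expressions translate directly into spreadsheet formulas against the imported price cells, which is how you can compare the fund's return to a bank's advertised annual interest rate.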

The full example can be seen here.

Conclusion

Investing in mutual funds has its upsides and carries its risks; make sure you have studied them well, and just in case go over the materials on the "moite pari" site again, where every aspect of this kind of investment is explained in detail in Bulgarian.

With a bit of school math and a bit of computer literacy it is possible to have easy, convenient and meaningful access to the data about our money invested in mutual funds.

May 25, 2013

What's wrong with Linux desktop

It is going to be a rant!

A few weeks back I was surprised to learn that Google has decided to stop supporting older versions of libc on Linux and thus the binaries you can download are incompatible with older Linux distro versions.

I still run Fedora 13 on my personal laptop, and this was a clear signal that I am way past the end of life of the OS. On the other hand, I already know that I hate Unity, mostly because task switching is awfully hard and unintuitive. Basically they copied the OS X way of doing things: if you use Alt+Tab you end up switching applications, and if an application has multiple windows you have to wait and perform a different set of actions in order to select a particular window. That is bad if you work with multiple applications and some of them have multiple windows.

Let's make an example:

Google chrome for browsing and reading documentation/examples on the Internet.
Google chrome with different user for testing (without plugins)
Chrome's debug tool as a separate window (or several of them)
Sublime text 2

Now, the problem is that I can no longer just use Alt+Tab; I have to construct a special mental model of how the environment handles the windows and then perform finger acrobatics just to get to the window I need. This is a waste of time, so I installed GNOME 2. That was still possible on Ubuntu 12.04.

Today I installed Fedora 18 (how many years later is that?).

You can imagine my surprise when I realised that Mutter is doing exactly the same stupid window selection thing as Unity!!!

Okay, open source (shithead) developers: I can understand that you LOOOOVVVEEE OS X and your childhood dream is to make a free UI that is better than the Mac's, but my question is this: of all the beautiful and awesome things OS X does (packaged apps, anyone??!?!), how could you copy the worst feature EVER??

You might find it surprising, but this is the truth: Microsoft got it right! OS X is wrong! Users want to change windows with Alt+Tab and tabs with Alt+1-9; no one wants to change "context" or "application" with ANYTHING. This is the most useless feature EVER!

The other most distasteful thing done in GNOME is the hard binding of the Windows key to whatever the stupid panorama view functionality is called. I want to change my keyboard layouts with the left Windows key - the way I have been doing it forever (I started using Linux in 1999 and ever since then I have used the freaking Windows key to switch keyboard layouts). Is it so hard to make a key binding optional? It seems it is, if I need a whole application just to tell the freak show that is GNOME 3 to use special keys for layout switching - because guess what, by default you need a real key to do it, just like an action trigger in an application does, so basically GNOME 3 is swallowing your applications' shortcuts!

Thanks a lot, developers. You have ruined a perfectly well-working desktop environment and turned it into a mix of the worst decisions made in each and every OS out there.

On the bright side of things, Xfce works well enough. But there is still no support for remote servers (a la Nautilus), and this makes Thunar impossible to use if you work in a networked environment and need the remote files to appear to all applications as local files.

Anyway, I think GNOME 3 should die, and all the developers working on it should be barred from contributing to any software that has a UI for the rest of their lives.

There, I said it. I feel much better now.

May 21, 2013

Polymer (by Google)

At IO this year a new project to build a toolkit on top of web components was presented: Polymer.

The demonstration was simply brilliant (in contrast to the demos in the "Web components - tectonic shift" talk), and it showed the 'promised land' to the exhausted web developer, who has to combine the same JavaScript files again and again with each new project (and mind you, a new web application is built in 6 to 8 weeks these days, so this is a lot of repetitive work) just to get those standard components to work and align as they are supposed to. Then he has to battle with performance and load times, and then and only then can he start implementing the application logic.

A lot of effort has been spent in recent years on taking pressure off the regular Joe developer as well: lots of frameworks, libraries, utilities, build tools, component architectures and other things were invented to make this an easier and faster task, and yet we are at a point where it takes days if not weeks to gather everything you will need before you can start implementing application logic.

Everybody wants us to believe that the resolution of this chaotic state is called Web Components. So much so that companies have started to implement early prototype frameworks and toolkits on top of the still-emerging standards, shimming the lacking support and polyfilling the browser incompatibilities. So today you can go and test-drive the "future of the web".

Well, I have so you don't have to.

And it sucks.

It does not work. Half of the examples do not work as expected, even in the latest stable Chrome or Firefox. Some that failed there worked in Mobile Safari, but mostly what you see, from a developer's perspective, is a repetition of what we have had on our hands for years: lots of files that we have to know where to gather from and how and when to include in our page just to get things to load. From then on we have to figure out this new shiny approach to data management (we just learned to use meaningful data structures and consistency checks on the client side and to combine them with two-way data binding). And what about memory management? What about node count? For years we have been taught not to put too many nodes in the document, and all of a sudden we are putting twice as many nodes in fragments as the old-fashioned applications did. Did the browser vendors all of a sudden implement much better handling of large DOM node lists?

Anyway, the approach is very interesting to me - there are lots of good ideas there. From the demo it seems you can have even the most complex interaction models implemented as a tag and use it like any other tag, including nesting the same tag inside itself, which is kind of cool and basically not possible in complex applications unless they were specially designed for it.

However, even the simplest demos do not work, or work very poorly: terrible redraws on mobile, terrible response times even when there is only one widget on the page - practically completely useless on mobile for now.

This leaves the developer with a very bad feeling about this bright new web future. And then again, China has 25% of its traffic coming from IE6, and companies simply cannot ignore that. You can ask any international company, or any company targeting that market: they do not care about the future, they care about cash flow, and right now IE6 drives 25% of it in China, even if it is less than 6% globally. China is BIG! On top of the mess with IE there is also the performance question: it is still very unclear how load time is to be solved when you have tens of components loaded from all over the Internet.

I completely agree that we should not look too much in the rear-view mirror, but guess what - you cannot drive forward without one.

March 18, 2013

Closure Tools in Sublime Text 2

I have been a big fan of the Closure Tools for a long time now. They provide benefits over untyped libraries that the frivolous young ones (backbone, component etc) find hard to match.

On one side those young libraries are very good at one thing and should you need exactly that one thing they are simply great: you get the job done in no time.

However, once you need to tweak a little bit here and a little bit there, or build abstractions on top of those base libraries, it becomes a real pain in the... I believe this is partially because all those libraries have their own style of code (how inheritance works, how structure works, static methods versus prototype methods, run-time type checking or none, etc). This is all nice and fine - every author has their preferences, and as long as there is no standardization on the matter everyone is actually forced to figure out on their own what to pick. This, however, makes it difficult for fellow developers to pick up just any library and start using it with an in-depth understanding of what is going on. More often than not the documentation describes the public methods but lacks a description of the internal architecture and the design behind it all. You can still use the public APIs, but if you want to deviate from the authors' initial design assumptions you can get in big trouble. And because every library uses its own style and assumptions, it is hard to understand well all the bits and pieces you are using.
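To make the "every library has its own style" point concrete, here is a hedged sketch (all names are mine, for illustration only) of the same tiny class hierarchy written in two common styles: hand-wired prototypes versus a Closure-like inherits() helper. Neither is wrong; the problem is that a developer hopping between libraries has to re-learn which convention is in play before touching the internals.

```javascript
// Style 1: hand-wired prototypes, common in small standalone libraries.
function Animal(name) { this.name = name; }
Animal.prototype.speak = function() { return this.name + ' makes a sound'; };

function Dog(name) { Animal.call(this, name); }
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;
Dog.prototype.speak = function() { return this.name + ' barks'; };

// Style 2: a helper in the spirit of goog.inherits (sketch, not the real one).
function inherits(child, parent) {
  child.prototype = Object.create(parent.prototype);
  child.prototype.constructor = child;
  child.superClass_ = parent.prototype; // Closure-style superclass reference.
}

function Cat(name) { Animal.call(this, name); }
inherits(Cat, Animal);
Cat.prototype.speak = function() { return this.name + ' meows'; };

console.log(new Dog('Rex').speak()); // Rex barks
console.log(new Cat('Tom').speak()); // Tom meows
```

Both end up at the same runtime behavior, which is exactly why the choice between them is a matter of each library author's taste rather than a technical necessity.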

On the other side we have the "old" monolithic libraries like Dojo and closure.

Even though we call them monolithic, this is not true! The old beasts are actually very well modularized. Yes, you download the full-blown version and use that during development, but once you want to make a release there is a build step, very much like the step you are nowadays forced to have when you use small decoupled libraries. The difference is that in the case of small decoupled libraries you try to make one big thing out of small things (some call it the UNIX way). With the monolithic libraries the process is reversed: you state what you want to use and the rest is stripped out. The result is practically the same (well, not exactly, more on that later) - you end up with exactly the code you need, no more and no less. However, up until the arrival of tools like yeoman, bower etc you had to hunt down those little bits of code yourself. Even now not every such bit is easily discoverable; most components require additional steps to be readily available in those environments. More often than not one and the same piece of functionality is provided almost identically but in a very different style and package, so an additional choice has to be made, and once that choice is made it is kind of hard to divert from it even between projects, let alone within the same project.

So basically, because the end result can be considered the same, and because developers tend to reach for what they already know, it is not so much a technical choice as a personal preference. I would argue that the Closure Tools provide some things on top of the rest of the libraries (except maybe TypeScript), but this has been discussed many times in this blog.

What I want to show today is how to use closure tools in sublime text 2.

While tools exist to support development with the Closure Tools (namely the Eclipse closure plugin, WebStorm and maybe others), those are written in Java (not necessarily a bad thing, it just so happens that they are large and somewhat slow to use, IMO) and require more resources than many web developers are willing to give up just to have a smart editor. On top of that, customization is not as simple a step as with ST2. So we will assume that Sublime is already your favorite editor and you want to use it to develop with the Closure Tools.

The most important part of that is type checking of the source. However, Closure projects are configured to encompass many libraries and folders, which makes it easier if a tool is used to organize them. I will show you how to use make and a little piece of shell scripting to run a compile check on any JavaScript file in your project. More about the project structure used in this example can be found here.

First we need a small script that can detect the namespace of the current file we are editing and then it should call the compiler with the needed type checks. Here is the script (it might have suboptimal parts in it, I do not pretend to know bash well enough):

#!/bin/bash
# Usage: GCCCheck.sh <file.js> <project-dir>
# Extract the first goog.provide'd namespace from the file and run the
# project's "check" make target against it.
FILE="$1"
PROVIDE=$(grep "goog.provide" "$FILE" | sed -e "s/goog.provide('\([^']*\).*/\1/" | head -n 1)
make -C "$2" check NS="$PROVIDE"

Save the file somewhere in your PATH (that is, the directories searched for executable files) and make sure to set the execution flag on the file. What it does is scan the currently open file in your editor for goog.provide calls and run the make program with the check namespace set to the first match.

Next step is to configure the build system in sublime. From the menus select New Build System and paste the following snippet in it:

{
  //"path":  "/home/pstj/Documents/System/Library/Bin/:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games",
  "cmd": ["GCCCheck.sh", "$file", "$project_path"],
  "file_regex": "^(.*):([0-9]+):() WARNING - (.*)",
  "selector": "source.js"
}

What it does is attach the build system to JavaScript source files and run the named command on them when Ctrl+B / Cmd+B is pressed. The path can additionally be configured to match your system PATH and/or the path where the bash script was saved. Note that the first element of the cmd variable is the name of the script we created earlier.

The last step is to configure make to understand the 'check' target. This is easy if you already use a makefile; if not, create one and add a target that invokes the compiler exactly as a real build would, but redirects the output to /dev/null. Here is an example:

check:
	python ../../library/closure/bin/build/closurebuilder.py \
		-n $(NS) \
		--root=js \
		-f --js=build/deps.js \
		-f --flagfile=options/compile.ini \
		-o compiled \
		-c $(COMPILER_JAR) \
		--output_file=/dev/null
Add roots and flags as needed. Here is my compile.ini file:
--compilation_level=ADVANCED_OPTIMIZATIONS
--warning_level=VERBOSE
--js=../../library/closure/goog/deps.js
--js=../pstj/deps.js
--use_types_for_optimization
--jscomp_warning accessControls
--jscomp_warning ambiguousFunctionDecl
--jscomp_warning checkTypes
--jscomp_warning checkVars
--jscomp_warning visibility
--jscomp_warning checkRegExp
--jscomp_warning invalidCasts
--jscomp_warning strictModuleDepCheck
--jscomp_warning typeInvalidation
--jscomp_warning undefinedVars
--jscomp_warning unknownDefines
--jscomp_warning uselessCode
--externs=../../externs/webkit_console.js
As stated in the beginning of this section, this setup will only work if a certain project structure is followed; however, I believe it is easy enough to customize to be applicable to any Closure Tools project.

In addition, if strict styling rules are required for your project, there is the sublime-closure-linter package for Sublime. I prefer to use make files to build GSS files and templates, but there are Sublime packages for those too.

So what we have as a result is the ability to run any namespace through the compiler checks and find out in time if we are violating a contract or messing up types / methods. One example from today: I attempted to use the render method on a PopupDatePicker. Sounds perfectly normal and I was sure it was in the API. However, the compiler was smart enough to tell me that there is no such method (it was mistyped). The other day I had a value set as a string while a number was expected. You know very well what will happen if I try to use it in calculations (NaN). What this setup allows me to do is run checks from any entry point in my namespaces without actually typing out the namespace. A basic workflow would be: add a new file, edit it, save it, run the build process, fix type errors, commit. That should do for now.
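To illustrate the kind of mistake the checkTypes warning catches, here is a minimal sketch (the function name is made up for the example). The compiler flags the bad call at compile time; at runtime the same mistake just silently produces NaN.

```javascript
/**
 * Doubles the remaining lives.
 * @param {number} lives The current lives count.
 * @return {number}
 */
function doubleLives(lives) {
  return lives * 2;
}

// With --jscomp_warning checkTypes the compiler rejects this call,
// because a string is passed where a number is expected.
// Without the check it slips through and yields NaN at runtime:
var result = doubleLives('ten'); // NaN
```

This is exactly the class of error that never shows up until some calculation downstream turns into NaN.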

One thing largely missed when it comes to web development with JavaScript is intellisense. I understand the complaint and I sympathize. However, the big IDEs have other drawbacks (Sublime is many times faster, especially on a regular Joe's machine), and the IDEs are not perfect either. For example, the Eclipse plugin does not correctly understand scoped variables; even though the issue is closed, it still does not work. WebStorm has its issues too. There are new projects trying to address this, but until then I find having consistent APIs across a large amount of ready-to-use library code (as in a monolithic library) more pleasing.

март 07, 2013

Classical versus Prototype inheritance in JavaScript

A lot has been written lately about the benefits and risks of using prototype inheritance versus classical inheritance pattern in JavaScript.

On multiple occasions it has been shown that the classical pattern works faster and uses less memory than anything else out there. It is also the most often used pattern currently, which sort of guarantees that your code will be compatible with other people's code. A recent blog post even measured the exact memory consumption of the classical pattern versus Object.create (sorry, I could not find the relevant link now, but basically the memory overhead was there, though not as big as in other patterns).

However, there are proponents of the prototype inheritance pattern in the community as well. On multiple blogs one can see usage examples and encouragement to try it out. In many instances, though, the cited benefits are dubious and refuted by proponents of the classical pattern.

Coming from Closure Library, the classical pattern was de facto the only possible solution for my code until very recently. These days, however, I have the chance to once again write code exclusively for modern browsers, so I decided to explore what would be possible if I used all the 'wrong' and 'dangerous' patterns to make my code simpler and easier to use.

For the exercise I will use a fictional data server that returns a list of records I want to work with. Only the very basic use case of retrieving the data and having it ready for manipulation will be investigated. So: we have a server call that returns an array of objects, and I basically want to apply behavior on top of it.

Our server data will look like this:
var serverData = [{
    id: 1,
    name: 'Peter'
}, {
    id: 2,
    name: 'Desislava',
    lives: 10
}]; 

There are two classical approaches to this (mostly utilized in the 'data' frameworks these days).

1. Wrap the data: basically execute the logic as a constructor function and put the data inside of it (for example as this.data = serverData), then operate over the data with access methods (either general ones like get('name') or named ones like getSomething/setSomething). When the data needs to be saved on the server, the stored value (this.data) is used.

2. Use a functional approach: operate on the data via functions defined specifically for it and pass the data record to every function call. To save the data back on the server, pass the data along directly as it is.
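For reference, the two approaches could be sketched roughly like this (the names Record, getName and setName are made up for the illustration):

```javascript
// 1. Wrap the data: a constructor holds the record and exposes accessors.
function Record(data) {
  this.data = data;
}
Record.prototype.get = function(key) {
  return this.data[key];
};
Record.prototype.set = function(key, value) {
  this.data[key] = value;
};

// 2. Functional approach: plain functions receive the record on every call.
function getName(record) {
  return record.name;
}
function setName(record, name) {
  record.name = name;
}

// Usage of both:
var wrapped = new Record({id: 1, name: 'Peter'});
wrapped.set('name', 'Ivan');

var plain = {id: 2, name: 'Desislava'};
setName(plain, 'Mila');
```

With the first approach saving means extracting this.data back out; with the second the record is already in server-ready shape.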

I was interested in a more 'natural' approach: use the literal objects created by the JSON parsing, slap the behavior on top of them without actually polluting the data, and pass the augmented data back to the server directly (i.e. a combination of the two approaches above).

For this to work we have to be able to:
a) slap the methods on top of a literal object
b) have pure prototype inheritance to tweak the behavior easily

I came up with this:

// Create a new prototype object that inherits from 'proto'
// and mixes in the methods from each object in 'mix'.
Object.createPrototype = function(proto, mix) {
    function F() {}
    F.prototype = proto;
    var newProto = new F();
    mix.forEach(function(item) {
        Object.mixin(newProto, item);
    });
    return newProto;
};

// Copy the non-object properties of 'mix' onto 'obj'. If 'restrict'
// is set, properties already defined on 'obj' are left untouched.
Object.mixin = function(obj, mix, restrict) {
    for (var k in mix) {
        if (typeof mix[k] == 'object') continue;
        if (restrict && obj[k] != undefined) continue;
        if (k == 'applyPrototype') continue;
        obj[k] = mix[k];
    }
};

// Swap the prototype of a plain object or array; refuse to re-type
// an object that already has a custom prototype.
Object.prototype.applyPrototype = function(proto) {
    if (this.__proto__ != Object.prototype && this.__proto__ != Array.prototype) {
        throw new Error('The object is already typed');
    }
    this.__proto__ = proto;
};

// Turn a data literal into an 'instance': apply the prototype,
// fill in the defaults and run the optional initialize method.
Object.createInstance = function(that, proto, def) {
    that.applyPrototype(proto);
    if (typeof def == 'object')
        Object.mixin(that, def, true);
    if (typeof that.initialize == 'function') {
        that.initialize();
    }
    return that;
};

Here is what this allows me to do with the server data:
// define some defaults (if the data on the server could have nulls)
var defs = {
    lives: 10
};

// Define basic behaviour
var Base = {
    update: function(data) {
        if (this.uid != data.uid) {
            return;
        } else {
            Object.mixin(this, data);
        }
    },
    get uid() {
        return this.id;
    }
};
//Upgrade the behavior
var Sub = Object.createPrototype(Base, [{
    kill: function() {
        this.lives--;
    }
}]);

// helper function to process an array
function processData(data) {
    data.forEach(function(item) {
        Object.createInstance(item, Sub, defs);
    });
    return data;
}

// Upgrade to an array that knows how to handle our data types
var MyArray = Object.createPrototype(Array.prototype, [{
    getById: function(id) {
        return this.map[id];
    },
    initialize: function() {
        processData(this);
        this.map = {};
        this.indexMap = {};
        this.forEach(function(item, i) {
            this.map[item.uid] =  item;
            this.indexMap[item.uid] = i;
        }, this);
    },
    add: function(item) {
        if (typeof item.uid == 'undefined') {
            Object.createInstance(item, Sub, defs);
        }
        Array.prototype.push.call(this, item);
        this.map[item.uid] = item;
        this.indexMap[item.uid] = this.length - 1;
    },
    push: function() {
        throw new Error('Use the "add" method instead');
    },
    pop: function() {
        throw new Error('Use "remove" instead');
    },
    update: function(item) {
        if (typeof item.uid == 'undefined') {
            Object.createInstance(item, Sub, defs);
        }
        this.map[item.uid].update(item);
    },
    remove: function(item) {
        if (typeof item == 'number') {
            this.splice(item, 1);
        } else {
            if (item.uid == undefined) {
                Object.createInstance(item, Sub, defs);
            }
            this.splice(this.indexMap[item.uid], 1);
        }
    }
}]);

// Make new type that has next and previous.
var Linked = Object.createPrototype(MyArray, [{
    initialize: function() {
        MyArray.initialize.call(this);
        this.index = 0;
    }
}]);
Object.defineProperty(Linked, 'next', {
    get: function() {
        if (this.length > this.index + 1) {
            this.index++;
            return this[this.index];
        }
        return null;
    }
});
Object.defineProperty(Linked, 'previous', {
    get: function() {
        if (this.index - 1 >= 0) {
            this.index--;
            return this[this.index];
        } else return null;
    }
});

// Process the server data to create local data and work with it.
var clientData = Object.createInstance(serverData, Linked);
clientData.getById(1).kill();
clientData.add({
    id: 3,
    name: 'Denica'
});
clientData.update({
    id: 3,
    name: 'Ceca'
});
clientData.remove({
    id: 3,
    name: 'Ceca'
});
var myNextObject = clientData.next;
JSON.stringify(clientData); // returns just the actual array, without any of our custom props.
// [{"id":1,"name":"Peter","lives":9},{"id":2,"name":"Desislava","lives":10}]

This is not much of an improvement over the classical approaches yet, because, for example, I still cannot update an item in the collection directly with values (i.e. clientData[1] = {id: 2, name: 'Another name'}), which could be achieved if I simply hid the array inside a wrapper object. However, I believe that for the objects inside the list it is a great improvement to have the behavior stamped on top and still have natural access to all properties (i.e. data.property1.property2). This again is not an ideal situation, because updates are not caught (i.e. bindings will be harder to implement), but that could be resolved by the proposed observer/mutator APIs.

Again, this is not a real world scenario, just me playing a little with literal objects as pure data. It is still an interesting experiment in the sense that I have always wanted to be able to merge data with logic without too much fuss. What is accomplished in this solution is that we have the data (importantly, ready to be submitted back to the server) and the logic is simply an object we can play with and model, including at run time. All this is possible with other approaches as well, but here we do not actually create instances; instead we use the original data objects all the time and simply apply logic on top of them. In conclusion, this is like merging the two classical approaches: keep the data instances but have logic bound to them.

DO NOT use this in your code! __proto__ is not standard and getters/setters as well as defineProperty are not widely supported!

CLARIFICATION: I am aware that the same effect could be accomplished with the new Proxy API, so thanks for reminding me, I already know it:)
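For completeness, here is a minimal sketch of how a Proxy could catch the updates that the prototype approach misses (the observe helper is made up for the illustration, and Proxy support was far from universal at the time of writing):

```javascript
// Wrap a record in a Proxy so every property write triggers a callback,
// which would make data bindings possible without wrapper objects.
function observe(record, onChange) {
  return new Proxy(record, {
    set: function(target, key, value) {
      target[key] = value;
      onChange(key, value); // notify listeners about the update
      return true;
    }
  });
}

var log = [];
var person = observe({id: 1, name: 'Peter'}, function(key, value) {
  log.push(key + '=' + value);
});
person.name = 'Ivan'; // the callback records: name=Ivan
```

Unlike the applyPrototype trick, the Proxy returns a new object identity, so it trades the "original data instance" property for observability.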

януари 27, 2013

Decoupling components for frontend development.

Introduction: It appears the era of monolithic UI libraries is coming to an end.

It has been irritating for years: you go deep into a project for months just to realize that you need a certain type of UI component that is not available for your set of libraries but is readily available for another (for example, it is written for MooTools or Dojo, but you use jQuery).

So before you even start, you have to have in mind all the components you might need. Then go hunt for a library that has them all, or at least most of them. You then have to spend time learning its component idioms (i.e. how a new component is created or composed from existing ones). This is one of the hardest things to grasp in a new library, in addition to its build model.

The build model is another opinion baked into a library that you can't change (or can, but with difficulty). Some (smaller) libraries do not provide a build model and you are on your own; however, once a build model is selected, you have to stick to it. If the library uses AMD, you have to stick to AMD. If the library uses CommonJS, you have to stick to browserify or something similar. The primary goal is dependency management, but it often includes concatenation and minification steps as well.

The problem often overlooked is that you can achieve greater responsiveness if you separate the initialization code from the rest of your application. RequireJS supports building your application in build units, allowing such setups. The Closure compiler also allows you to separate your code into loadable modules. With browserify, however, you are on your own: you have to manage the dependencies and the build step yourself if you elect to use one.
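As an illustration, a RequireJS (r.js) build profile can split the initialization code into its own unit; the module names and paths here are made up for the example:

```javascript
// build.js - an r.js build profile producing two build units:
// 'init' stays small and loads first; 'app' excludes everything
// already contained in 'init' and is fetched later on demand.
({
  baseUrl: 'js',
  dir: 'build',
  modules: [
    {name: 'init'},
    {name: 'app', exclude: ['init']}
  ]
})
```

The Closure compiler achieves a similar split with its --module flags, but with browserify you would have to wire up equivalent bundles by hand.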

A new concept recently introduced by TJ Holowaychuk is components: the idea is that you develop a reusable component, put it in its own repository on GitHub or on a private git service, and then reuse it from within another component or in an application. A component can contain CSS, JavaScript and HTML.

I find the idea great; however, the execution and the design are not so great. The following problems stick out pretty fast:
  • while component management is made easy with this project, you are required to use only one final file in your application, so modularization and modular loading are not possible yet.
  • while the builder used in the project provides a hooking mechanism, it is unusable unless you either alter the builder itself locally to attach your own hooks (i.e. ones that are not provided by default) or fork the component npm package to make it require an augmented builder package in which you have extended the base builder. This makes the project really hard to augment in a transparent manner and locks you into constantly chasing the original.
On the other hand, the naming convention used and the plumbing added to make it work make the project a real pearl in the dust: no more strange and ugly names (as in the author's example, you could use 'tip' for your project name; no more tippie, tipsy, tips, tiping etc.) thanks to the repo/project naming convention. The names are disambiguated automatically and this works great.

The even greater thing is that dependencies are managed for you automatically and transparently, and that you can concentrate on smaller, testable units of code. IMO this encourages decoupling (similar to AMD) but removes the extra complexity of AMD (the paths, for example, can be a real hell if you try to mix several repositories). Flattening out the directory tree was a great win for this project.

Conclusion: one thing we should get used to in the coming months and years is that the monolithic application model (an application that consists of only one pre-built file) is going away. Shadow DOM, Web Components, JavaScript modules etc. are making steps forward, and the sooner we understand their speed and memory implications, the better we can implement them in our projects. Until then: use decoupled code as much as possible.