09-12-08 / 00:42 : JavaScript & ActionScript : a step back (cjed) | Among the comments on the latest ArsTechnica article (September) about Cappuccino, one could read:
Technologically, JavaScript apps running in a browser are like apps running under MultiFinder back in the Mac OS 6 days: no memory protection, cooperative multi-tasking, etc. One badly programmed web app and it takes down your entire browser and all the other web apps and open web pages along with it. Worse, however, there are about half a dozen software abstraction layers added, and thus, what used to work on a 68k CPU back then now requires a dual or quad-core CPU with GHz clock frequencies and Gigabytes of RAM just to get adequate performance. Can you say "back to the future"?
JavaScript, Java, Flash, Cookies, etc. should be filtered out by the firewall. The web is a publishing platform, if you want to go back to mainframes and terminal based remote processing, then come up with a secure protocol that's designed for remote GUIs over low-bandwidth channels. The web isn't it.
Lacking synchronization features (mutexes), JavaScript and other scripting languages aren't robust. And the performance of Flash applications is really bad (ActionScript is derived from ECMAScript, the same basis as JavaScript): it can trigger the fans on recent dual-core laptops (worse, on a G4 a new banner ad on a site can prevent entering news, as the browser becomes unresponsive due to a huge resource leak).
We can read articles that present tricks to work around synchronization problems in JavaScript. There isn't any CPLock class in Cappuccino, but the source code of evaluate.js (and also CPTimer timeouts) suggests that some such tricks are used to simulate synchronization.
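As a minimal sketch of what such a trick can look like (the names below are illustrative, not taken from Cappuccino's actual source): since JavaScript offers no real mutex, a boolean flag plus a setTimeout retry can at least prevent two timer callbacks from re-entering the same critical section.

```javascript
// Cooperative "lock" simulation for JavaScript timer callbacks.
// This only guards against re-entrancy between asynchronous callbacks;
// it is not a true mutex, since JavaScript has no preemptive threads.
var busy = false;

function withLock(criticalSection) {
    if (busy) {
        // Section in use: retry shortly instead of blocking the browser.
        setTimeout(function () { withLock(criticalSection); }, 10);
        return;
    }
    busy = true;
    try {
        criticalSection();
    } finally {
        busy = false;   // always release, even if the section throws
    }
}
```

A callback scheduled while another holds the "lock" is simply re-queued, which is exactly the kind of workaround a language with real synchronization primitives would not need.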
Then, as I previously said, the competition won't take place only on the JavaScript engine side (SquirrelFish Extreme, Chrome, etc.), but also on the whole container side. Everything is a container: the browser for JavaScript, the Flash plugin, the AIR container for Flex outside the browser, QuickTime (QuickTime X will probably use HTML 5 sockets, and iTunes is still a hybrid application that uses WebKit).
As ECMAScript-derived languages aren't satisfactory, they will probably evolve, or new scripting languages will appear, but then we are back to the existing powerful languages that we all know (Objective-C, etc.). We can't build robust and responsive interfaces using scripting languages; computer science is complex, and thread and synchronization problems are a reality.
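Those synchronization problems are easy to reproduce even in single-threaded JavaScript, as soon as a check and its action are separated by an asynchronous boundary. A hypothetical sketch (all names invented for illustration):

```javascript
// Check-then-act hazard: two withdrawals both pass the balance check
// before either deduction runs, losing an update.
var balance = 100;

function withdraw(amount, done) {
    if (balance >= amount) {            // check now...
        setTimeout(function () {        // ...act later (simulates a server call)
            balance -= amount;
            done(true);
        }, 0);
    } else {
        done(false);
    }
}

// Both checks see balance === 100, so both withdrawals are accepted
// and the balance ends up at -100 instead of one being refused.
withdraw(100, function (ok) {});
withdraw(100, function (ok) {});
```

No preemptive thread is involved, yet the interleaving of logical operations produces exactly the kind of lost-update bug that mutexes exist to prevent.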
Google is working on parallelization and execution-space protection for JavaScript, but that will only delay the occurrence of bugs (as we did with some tricks on System 7 to delay crashes: safety memory blocks at the start of the address stack, and use of addresses starting from the top of the stack).
The performance problems aren't specifically tied to interpreted languages. For example, the latest OpenOffice.org 3 for OS X is still far slower (at least for page scrolling) than the older NeoOffice, although the latter uses the Java bridge for its UI. A 68040 LC475 with 4 MB of memory performed better under Word 5.1. Such software abstraction layers also exist in OS X itself: a Mach task is wrapped by a BSD task, which is in turn wrapped by a Carbon task. We can hope the highest layer will be removed once the Cocoa migration is completed (Adobe is working on Cocoa versions of its software, but this is not yet the case for Microsoft Office).
There are plenty of optimization areas available on OS X however, as we can read throughout the great Cocoa programming book (I've already read 930 pages). Hopefully the App Store's success and the iPhone's limited capabilities will lead developers to think about optimizing, as Apple itself did when building the mobile version of OS X. That is the reason why the Objective-C 2 garbage collector isn't provided in the iPhone SDK. Mac OS X will also finally benefit a lot from Grand Central and OpenCL, further 64-bit optimization, code cleaning and strong Intel-only optimization, and advanced use of SSE4.