Why Radi uses Canvas – comparing CSS-based animation and immediate rendering

The market for HTML5 design apps has heated up lately. The number one question that I now get asked about my Radi application is: How does it compare to Edge and Hype? Isn’t it the same kind of app? On the surface that is the case – all three are animation tools that target the modern web. But there are important differences under the hood. Although the apps look similar, they answer a different need and have a different growth path going forward.

Comparing Radi to Edge and Hype is pretty much “apples to oranges”, as the old saying goes. Fruits are not a very informative analogy, though. Let’s think of these apps as akin to musical instruments that can be used either solo or in a band. If Edge and Hype are the electric guitar, then Radi is perhaps the synthesizer. Unless you’re a genre purist, neither kind of instrument is objectively “better”. Either can be used to create a brain-wrecking cover version of Stairway to Heaven, but that doesn’t mean they are inherently flawed instruments…

For more sophisticated uses, the guitar and the synthesizer are more likely to complement rather than overlap each other, and so there are many individuals and creative groups that will want to use both together for the best effect. In this post, I’ll try to explain the difference between Radi and the other HTML5 apps, and how they can complement each other. I’ve got some simple content examples to illustrate things. (I’m also planning to write a second part that will concentrate on Canvas performance and WebGL, so stay tuned for more.)

First, a brief overview of the apps under discussion. My own Radi is at radiapp.com; check out the details there. Edge is a new application by the world’s most venerable content creation software company, Adobe. It is available as a free preview from Adobe Labs, and is also cross-platform (Mac + Windows). Hype is also a new application, but Mac-only. It’s created by Tumult, a company founded by two ex-Apple software engineers (who clearly know the Mac inside and out).

As mentioned, Edge is free for now, but it stands to reason that it will eventually be included in Adobe’s Creative Suite because it’s clearly meant to complement Adobe’s other products rather than stand on its own. (For example, it seems unlikely that Edge will ever have vector drawing tools, with Adobe preferring instead to leave that task to Illustrator). Meanwhile Hype is available on the Mac App Store for $29. This is a limited-time offer upon its first release, which presumably means that Hype will cost more in the future.

Moving divs – how CSS-based animation works

Both Hype and Edge are CSS-based animation tools. This means that everything that is to be animated must be represented as style properties on actual HTML elements. When you create a layout (called a “stage” in Edge or a “scene” in Hype), the layers that you see in the app are directly equivalent to <div> elements on a web page. When your layout is eventually published and loaded into a browser, a piece of JavaScript code is included which takes care of the actual animation. At each frame, this program modifies the div elements’ style properties to reproduce the animation that you designed on the timeline.
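
As a rough sketch of the principle (not the actual runtime code that Edge or Hype generate; the element id, keyframe values and frame rate here are made up), a bare-bones DOM animation loop looks something like this:

    // Minimal DOM animation sketch: a timer adjusts style properties on a div
    // once per frame. Assumes the div is absolutely positioned on the page.
    var el = document.getElementById('myLayer');   // hypothetical layer element
    var frame = 0, fps = 24, totalFrames = 48;

    var timer = setInterval(function () {
      var t = frame / totalFrames;                 // 0..1 progress along the "timeline"
      el.style.left = Math.round(t * 300) + 'px';  // interpolate the position...
      el.style.opacity = 1 - t * 0.5;              // ...and the opacity
      frame++;
      if (frame > totalFrames) clearInterval(timer);
    }, 1000 / fps);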

This kind of CSS animation represents one extreme of the web animation spectrum: you’re using the full browser engine to move things around, with complete access to the layout and styling capabilities provided by the browser. Hence we could also call this “DOM animation”, as it uses the browser’s Document Object Model (DOM) to represent the objects to be animated – in other words, everything that moves must be accessible through the DOM.

This kind of animation on a web page is not a new idea in itself. DOM animation has been possible since Netscape 4, all the way back in 1997. What’s new right now is that the number of CSS properties supported by browsers in the real world has significantly increased in recent years. Whereas the old browsers were pretty much limited to moving elements around, changing fonts and making stuff visible or completely hidden, modern browsers can apply many kinds of subtler transformations and visual tricks to elements: rotation, alpha blending, gradients, text shadows… Armed with this “CSS3” toolset of properties, it’s finally possible to animate elements freely within a web page.

(In addition to Edge and Hype, there are a few more horses in the CSS-based animation race. Sencha Animator is a competent-looking alternative. It’s a Mac desktop app but with an interface that’s built entirely using web technologies, so it doesn’t really feel native. Sencha has said that the price of Animator will be in the “low hundreds of dollars”, so it will likely be more expensive than Hype but cheaper than Adobe Creative Suite. There are also some web-based options like Edit Room – I haven’t tried it, so I can’t tell you anything more about it. A few other web apps have been announced, but are not yet available.)

Painting primitives – how canvas-based animation works

On the spectrum of web animation, modifying the DOM is one extreme. The other extreme is to use the <canvas> element for animation. This element represents a single drawing space that occupies a fixed rectangular area on the web page. It’s actually a lot like the familiar <img> element (which represents a still picture), but for one crucial difference: you can – and in fact must – draw the contents of the canvas element yourself using JavaScript; by default, it’s merely full of empty pixels.

Although <canvas> has only recently become a de-facto web standard in the sense of being widely supported by all major browsers (Internet Explorer added support in version 9 this year), it’s been available since 2005. The original specification and implementation of <canvas> were created by Apple. The Canvas API was first included in Mac OS X 10.4, where it was offered as the method that Dashboard widgets could use to draw their contents. Subsequently it was incorporated into Apple’s Safari browser – and not without a fair amount of protest from friends of web standards, who saw the addition of <canvas> as a sign that Apple was following Netscape’s and Microsoft’s lead in adding its own incompatible extensions to HTML.

Luckily, Canvas didn’t end up in that <blink>ing graveyard of forgotten misguided proprietary extensions… The main reason is that it’s a really useful element. It provides the precise pixel-manipulation functionality that nothing else in HTML can provide, but which many have tried to approximate by using arrays of 1-pixel divs and other awkward concoctions. Canvas also offers a reasonable set of modern vector graphics capabilities – Bézier curves, antialiased fills, gradients, clipping paths, basic compositing modes. The Canvas JavaScript API is very small and easy to learn, with few gotchas or weird combinations of properties that might lead to unexpected results. Lastly, Canvas maps nicely to the 2D graphics capabilities offered by current operating systems, so it’s easy for browser vendors to implement. (The fact that Canvas maps directly to OS primitives is no surprise considering the origin of Canvas as a feature-enabler technology in Apple’s operating system.)

So, how does one render animation using Canvas? The principle is simple: when something interesting happens on your web page, a piece of JavaScript code must access the canvas element and draw the new stuff in it. This “something interesting” can either be a UI event, for example in the case of a paint program where the canvas is updated every time the user moves the cursor, or it can be a timer event. To create a canvas-based animation that renders at 24 frames per second, we can simply create a regular timer that is called every 1/24th of a second, and perform the update in that timer’s callback.
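
In code, that 24 fps loop boils down to something like this (a sketch only; the canvas id and the moving circle are invented for the example):

    // Minimal canvas animation sketch: a timer redraws the canvas 24 times per second.
    var canvas = document.getElementById('myCanvas');  // hypothetical canvas element
    var ctx = canvas.getContext('2d');
    var frame = 0;

    setInterval(function () {
      ctx.clearRect(0, 0, canvas.width, canvas.height);              // wipe the previous frame
      ctx.fillStyle = '#3a7bd5';
      ctx.beginPath();
      ctx.arc(20 + frame * 2, canvas.height / 2, 15, 0, Math.PI * 2);  // a circle drifting right
      ctx.fill();
      frame++;
    }, 1000 / 24);  // the callback fires roughly every 1/24th of a second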

What Radi does in Canvas

As the above description of canvas-based animation implies, it’s quite low-level. The browser doesn’t really do anything to help your web page get those pixels moving; the JavaScript code on the web page must model everything it needs to update its animation.

This is where Radi comes in. It provides a readymade framework of elements that can be rendered using Canvas and animated from one frame to another. The user interface in Radi allows you to build up the contents of the canvas from layers with graphical manipulation and drawing tools, and animate their properties with keyframes and curves. For a lot of things that you might want to do with Canvas, this is much more fun than having to write out the code in JavaScript. (Ever tried drawing vector shapes in Canvas by writing out coordinates in raw JavaScript code? It’s ridiculously painful for anything more complicated than a triangle.)
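
To give a taste of what that hand-coding looks like, here is a modest blob-like shape written out against a standard 2D context; every coordinate is an invented literal that you would have to tweak by hand:

    // A hand-written vector shape in raw Canvas: every anchor point and Bézier
    // control point is a literal coordinate typed out in the code.
    var ctx = document.getElementById('myCanvas').getContext('2d');  // hypothetical canvas
    ctx.beginPath();
    ctx.moveTo(120, 40);
    ctx.bezierCurveTo(160, 20, 210, 60, 200, 110);   // control 1, control 2, end point
    ctx.bezierCurveTo(190, 160, 120, 170, 90, 130);
    ctx.bezierCurveTo(60, 90, 80, 60, 120, 40);
    ctx.closePath();
    ctx.fillStyle = 'rgba(200, 80, 40, 0.9)';
    ctx.fill();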

However, Radi doesn’t limit you to only using its own framework of layers. You can also create Canvas Script layers in Radi, and these have free rein over the canvas element in which they’re being drawn. Want to draw some particles? Maybe erase a part of a vector shape on a layer below? Do some pixel manipulation? A Script layer in Radi can do all of these things using regular JavaScript code – it’s standard Canvas, there are no special APIs to learn.
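
For instance, a particle burst needs nothing beyond the standard drawing calls. How exactly Radi hands the canvas to a Script layer isn’t shown here, so this sketch simply assumes a function that receives a standard 2D context and a time value:

    // A simple radial particle burst drawn with plain Canvas calls.
    // The function signature is an assumption; the Canvas API usage is standard.
    function drawParticles(ctx, time) {
      var count = 40;
      for (var i = 0; i < count; i++) {
        var angle = (i / count) * Math.PI * 2;
        var dist = 10 + (time % 2) * 60;        // particles drift outward over two seconds
        var x = 160 + Math.cos(angle) * dist;
        var y = 120 + Math.sin(angle) * dist;
        ctx.fillStyle = 'rgba(255, 200, 0, 0.8)';
        ctx.fillRect(x - 2, y - 2, 4, 4);       // each particle is a tiny square
      }
    }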

For this reason, Radi is also a development environment rather than just an animation tool. You can combine both visual manipulation and custom algorithms to get the best of both worlds.

(I’ve only talked about using canvas elements in Radi, so it’s worth mentioning that Radi is not limited to just Canvas. It also has full support for producing videos, and this goes beyond what other web tools can do. The same layers that you use in a canvas element can be placed in a <video> element, and Radi will seamlessly render the content into a video file instead. Additionally, Radi has a very limited form of DOM animation known as ‘element behaviors’. It’s meant for interactive situations rather than timeline animation. Currently you can only use this to animate the opacity of elements, but in the future the feature will hopefully accommodate more of CSS3’s possibilities.)

SVG, the hulking standard that falls between DOM and Canvas

With DOM at one end of the web animation spectrum and Canvas at the other, there remains a third option in the middle. SVG, Scalable Vector Graphics, is a technology that aims to be the HTML for vector graphics – a standard markup language for representing vector images and other types of graphical content that HTML intentionally doesn’t handle.

SVG’s development started back in 1999, and it draws on earlier vector markup efforts such as Microsoft’s VML (which, in fact, is still supported in Internet Explorer). It is visually and conceptually similar to HTML, using similar tags and attributes. Like HTML, SVG doesn’t directly contain a full programming language, but instead relies on declarations for representing dynamic content.

Hence the way to create animation in SVG is very different from Canvas. You can’t just step in when something interesting happens and draw whatever you want into an SVG image. Instead, the SVG standard expects you to declare your animation using the tags and properties it provides. For example, if you have a shape that needs to move across the screen in 2 seconds, you must tell the SVG engine that beforehand. The SVG engine will retain your animation declaration and will perform the display updates “behind the scenes”, without any direct influence from your code.
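
To make the contrast concrete, here’s a minimal declarative SVG animation using the standard <animate> element (the shape and timing are invented for the example):

    <!-- The engine is told up front that the circle's cx attribute should go
         from 20 to 220 over two seconds; it then runs the animation on its own. -->
    <svg xmlns="http://www.w3.org/2000/svg" width="300" height="100">
      <circle cx="20" cy="50" r="15" fill="teal">
        <animate attributeName="cx" from="20" to="220" dur="2s" fill="freeze" />
      </circle>
    </svg>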

A terminology intermission: retained vs. immediate

This example is a convenient opportunity to discuss two words that often pop up when discussing animation frameworks, “retained” and “immediate”. The SVG engine uses a retained model. Once you give it the SVG code that declares the shapes and the animations, the engine is like a bus driver with a route: it knows where it’s going and doesn’t need your input.

On the other hand, the Canvas drawing model described earlier is a perfect example of immediate rendering. The canvas element is just a “dumb” bunch of pixels; it doesn’t do anything on its own (except ensure that the pixels stay visible if the browser window gets redrawn). To draw something, you must actively do it yourself. As your code uses the Canvas API to draw, the results are applied immediately into the pixels in the canvas – hence the name “immediate rendering”.

This split between two approaches is essentially as old as computer graphics itself. Immediate rendering is where it all started. As computers gained more memory capacity and processing power, a retained graphics model became attractive. It seems like a good idea to let the graphics framework handle things automatically as much as possible, instead of being actively prodded by the user’s code.

Despite the theoretical advantages, retained graphics hasn’t exactly been a triumph in the real world. The big problem is that the graphics framework needs to be designed so well that it can accommodate whatever needs the user code may have. As graphics tends to be a field where everyone is looking for a “hot new thing”, anticipating needs hasn’t been easy. If the retained framework doesn’t support the kind of rendering you need to do, it can have a huge performance impact or may even prevent you from rendering your graphics altogether.

A poignant example of a failed retained graphics framework is the original Direct3D. This GPU rendering framework by Microsoft has been a tremendous success, with many top game titles being designed exclusively to take advantage of Direct3D, but it didn’t start out that way. The original version of Direct3D, back in 1996, pushed a retained model (“Retained Mode”) that turned out to be a major impediment for game developers trying to take advantage of new graphics accelerators. Developers gravitated to the lower-level immediate model instead, which gave them more control, and the retained model was eventually deprecated. (The competing GPU framework, OpenGL, had used immediate rendering from the start.) The retained model that Microsoft had perceived as an advantage of Direct3D turned out to be a flawed design because the field evolved so rapidly.

In my opinion, SVG suffers from a similar fundamental design problem. In many ways it’s a kitchen-sink API from an era when XML was thought to be the answer to everything… But because SVG is not the topic at hand (as it’s not used by either Edge or Hype currently), I won’t dwell more on its design in this post. Instead, I’d like to discuss what I perceive as problems with the DOM animation model.

The weakness of DOM animation

It’s easy to see why CSS-based animation is so popular amongst tool developers. It’s conceptually simple. It’s straightforward and clean to implement, because the native engine in the web browser handles all the messy details of how elements interact. It’s also optimized, because browser vendors have put a lot of work into making sure that elements are rendered as quickly as possible.

One could also argue that CSS animation is “native to the web” because it deals with the basic elements of a web page. A bunch of <div> elements is as standard as you can get, right…? But in my opinion, that’s where CSS animation goes wrong. The problem is that div elements already had an established meaning in HTML, and now they are being used for a purpose that has nothing to do with the original intent.

Consider the Adobe Edge demos that look like motion graphics. Each moving element is a div. What happens if you view these demos in a browser that doesn’t support the necessary CSS and JavaScript features? Instead of an animation, you get a pile of disjointed elements, laid out one after the other on a web page. The elements are being displayed by the browser in a way that is in accordance with the original meaning of <div>, yet has nothing to do with the intention of the designer who was making an animation.

When something is expressed in HTML elements, there is a built-in expectation of graceful degradation: the user should be able to access the content even if his/her browser doesn’t support specific non-HTML features (e.g. CSS properties) that the designer of the page had in mind. Using CSS and JavaScript to turn divs into motion graphics fails this test.

“I’m sorry Dave, you can’t do that.”

DOM animation also has all the limitations of retained graphics, as discussed above. This is perhaps even aggravated by the somewhat haphazard way that CSS has evolved towards animation. If something is not directly available as a CSS property, it’s usually not possible. Traditional retained graphics systems try to prepare for this problem by providing generic building blocks that can be repurposed and combined to produce results that the framework’s designers could not have anticipated, but this is not the case in CSS: there are few features available as style properties that could provide this kind of generic functionality. (The CSS3 transform matrix property is probably the closest thing, since it can be used to create transforms beyond the straightforward scale/rotate/translate ones.)
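
For example, an arbitrary affine transform can be squeezed into that one property. A quick sketch, where the element id is hypothetical and the vendor prefix reflects WebKit browsers of the time:

    // matrix(a, b, c, d, tx, ty) packs scale, rotation, skew and translation
    // into a single 2D affine transform, covering cases with no dedicated shorthand.
    var box = document.getElementById('box');           // hypothetical element
    box.style.webkitTransform = 'matrix(0.9, 0.2, -0.4, 0.9, 30, 10)';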

With CSS3 being capable of so much already, is this such a big problem? I feel that it is. The following is an example of an animation created in Radi that is trivial to render in Canvas, yet essentially impossible to duplicate using DOM animation:

(Please excuse the lack of artistry; I produced this animation as a tutorial example, and it acutely reminded me of why I’m a programmer instead of an animator…)

In the above example, one big and obvious feature that can’t be accomplished with divs is masking. The bird consists of several shapes that are being masked by a circle. At the end of the animation, the circle expands to reveal more layers. (In Radi, this animated circle is a clipping layer.)
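​
In raw Canvas terms, this kind of animated mask is just a clipping path whose radius grows from frame to frame. Here’s a sketch of the underlying technique (not the code Radi actually generates; drawBirdLayers is a hypothetical stand-in for the masked content):

    // Clip everything drawn afterwards to a circle that expands with the frame counter.
    function drawMaskedFrame(ctx, frame) {
      ctx.save();
      ctx.beginPath();
      ctx.arc(160, 120, 20 + frame * 3, 0, Math.PI * 2);  // the mask circle grows each frame
      ctx.clip();                                          // only pixels inside the path will show
      drawBirdLayers(ctx);                                 // hypothetical: draws the masked shapes
      ctx.restore();                                       // lift the clip before drawing later layers
    }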

There could be an extension to CSS to allow masking. Indeed, WebKit has acquired an extension that does something along those lines. But will it ever be supported by any other browsers? And if a masking CSS property were available, would it be possible to mask several elements using the same mask?

How about animated vector content? In my example, the mask’s edge stays sharp even as the circle expands. I don’t see any reasonable way to extend CSS so that this could be accomplished with just a styled <div> element.

Another big feature of Canvas that’s not set to be available in CSS any time soon is blending modes. To be fair, Canvas doesn’t really excel in this department: of the dozen modes available, only a few are really useful. (Too many of the “compositing operations” in the Canvas API are theoretically derived with few practical uses and definitions that are much too complex to remember. For example ‘destination-atop’: how often does one need to render a layer’s content outside the transparency of the content already in the canvas, while also clearing the canvas where the layer is transparent…?)

Still, Canvas has the ‘lighter’ compositing operation, which is basically the equivalent of Photoshop’s Screen blending mode. This alone is way better than what the DOM provides, which is basically “use normal blending, or go home”. (For an example of what you can do with this type of blending, check out Silk which uses blending in a Canvas to render some amazing generative shapes.)
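
Using it takes a single line of setup on a standard 2D context; here’s a small sketch with made-up shapes:

    // Additive blending in Canvas: where the shapes overlap, their colour values add up.
    var ctx = document.getElementById('myCanvas').getContext('2d');  // hypothetical canvas
    ctx.globalCompositeOperation = 'lighter';
    ctx.fillStyle = 'rgba(255, 0, 0, 0.6)';
    ctx.beginPath(); ctx.arc(120, 100, 50, 0, Math.PI * 2); ctx.fill();
    ctx.fillStyle = 'rgba(0, 0, 255, 0.6)';
    ctx.beginPath(); ctx.arc(160, 100, 50, 0, Math.PI * 2); ctx.fill();
    ctx.globalCompositeOperation = 'source-over';  // back to normal blending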

Poking pixels

Masks and blending modes could be added to CSS in some way, and SVG also supports them, so these capabilities are not exclusive to Canvas. However there is something in Canvas that the other types of rendering fundamentally cannot support: getting and setting actual pixel values.

Here is an example. It’s the same Radi-produced animation as above, but a script has been added that processes part of the canvas into greyscale:

This was accomplished in Radi by adding a Script layer at the top of the Canvas:

What this script does is read part of the canvas as pixel data, calculate new pixel values, and write them back. This is effectively like an adjustment layer, but written in custom code.

This program is small enough to fit into a screenshot of Radi’s Script Editor:
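
The screenshot isn’t reproduced here, but a minimal sketch of what such a script looks like with the standard pixel calls would be roughly this (the region size and variable names are invented, and the actual code in Radi’s editor may differ):

    // 'ctx' stands for the canvas 2D context available to the script (an assumption).
    // Read a rectangular region of the canvas, convert it to greyscale,
    // and write the result back: an "adjustment layer" written in custom code.
    var region = ctx.getImageData(0, 0, 160, 240);   // e.g. the left half of a 320x240 canvas
    var px = region.data;                            // RGBA bytes, four per pixel
    for (var i = 0; i < px.length; i += 4) {
      var grey = 0.3 * px[i] + 0.59 * px[i + 1] + 0.11 * px[i + 2];  // luma-weighted average
      px[i] = px[i + 1] = px[i + 2] = grey;          // alpha (px[i + 3]) is left untouched
    }
    ctx.putImageData(region, 0, 0);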

Adding this kind of low-level capability to CSS or SVG seems extremely improbable at present. The way to approach it would probably be through a separate programming language for processing pixels; special rendering programs of this type are usually known as “shaders”. (They are common on GPUs, where they get executed in hardware. Adobe Flash also includes a CPU-executed shader language known as Pixel Bender.) This could be a nice feature, but realistically, how soon could we expect browsers to support a common shading language in CSS – maybe by 2025…?

Small API, small worries

Compared to SVG and CSS3 (the latter of which usually needs to be extended with browser-specific additions), Canvas has one more huge advantage: it’s small and orthogonal.

Orthogonal means that there are not dozens of ways to accomplish the same thing. Small means that the API that browser vendors need to implement has as few public entry points as possible. This greatly reduces the possibility of incompatible rendering from one browser to another, which has been the plague of CSS and SVG since day zero.

The risk of the API getting fractured is very real. For a while, it seemed that Microsoft’s IE9 would ship with an incompatible implementation of Canvas (see this MSDN blog post, issues 2 and 3). Happily Microsoft was able to resolve these problems before the release, and IE9 shipped with a <canvas> that renders just like other browsers. This happened largely thanks to the small size of the Canvas API. If it instead were a humongous spec like SVG, there would be dozens of missing or half-broken features in every browser, and Microsoft would not have felt a similar pressure to ship with a 100% implementation. Small is good because it makes the omissions really stand out.

Like Goldilocks’ porridge, Canvas is not “too hot” with complicated features or “too cold” with theoretical purity — it is “just right” for the web. A nice indication of this is that the entire API fits on a single printable sheet that you can put up on the wall: Canvas Cheat Sheet by Jacob Seidelin.

What next

What about Canvas performance — does it need to suck on mobile? How about 3D rendering? I hope to cover these topics in an upcoming post.

Want to learn more about Radi? Check out the website and the documentation (the latter very much a work in progress…)

I’ve been making frequent updates to Radi lately. The best way to keep up with the progress is to sign up for the Radi email list – you can do it here.


5 Responses to Why Radi uses Canvas – comparing CSS-based animation and immediate rendering

  1. canuckinluck says:

    Thank you for the break down – as a designer new to the area of animating using the canvas element I appreciate your effort of making this program available as well as providing here a comparison to the few others also treading this path.

  2. matt o says:

    i am a flash guy trying to get his head around html 5. can you use canvas and css3 together on the same page, ie do parts with edge or hype (or muse which uses divs) in conjunction with radi / canvas? i have need for a non-rectangular mask, and radi seems like the way to go based on the masking capability… but i have built parts of the site already in adobe muse, and there is a fair amount of photographic imagery with transitions which seem better suited for css3… right?

  3. Marcel says:

    Hi,
    Thanks for your clear explanation! Very interesting.

    Marcel

  4. Dennis says:

    Good explanation of the differences. Just a small nitpick….. SVG stands for Scalable Vector Graphics.
