Howard Klein

Musings on software, discrete event simulation and other pseudo-random topics

Through the (Animated) Looking Glass…

…in which I post yet again on a topic about which I know relatively little… which arguably is not a recipe for a long and happy professional life

It seems like every self-respecting simulation package has at least a modicum of animation capability – and many are, in fact, designed around pretty impressive animators.  I figured I would have to supply at least bare-bones 2-D animation sooner or later… but I hoped it would be later.  While I felt like I had a pretty good handle on what a core simulation engine looks like, I didn’t really have a clue when it came to animation. Animation struck me as a large, amorphous black box that would suck up copious amounts of my time while still resulting in an amateurish solution.  I just didn’t want to deal with it.

So I built the rudiments of a simulation engine, created some models using that engine, and started running them. And, of course, I almost immediately ran into the problem of verifying those models, even at the most basic level. How am I supposed to make sure these models are behaving as I intend? Sure, I can model an M/M/1 queue and see if queue length and waiting time conform to the standard analytically-derived results. But if they don’t, how do I figure out what’s going wrong? What about other models, with no analytic solutions?
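For reference, those standard M/M/1 results are easy to compute directly. A minimal sketch – the arrival and service rates here are illustrative values, not from any real model:

```python
# Analytic M/M/1 results to check a simulated queue against.
# lam = arrival rate, mu = service rate (illustrative values only).
lam, mu = 4.0, 5.0

rho = lam / mu              # server utilization; must be < 1 for stability
L = rho / (1 - rho)         # mean number in system
Lq = rho ** 2 / (1 - rho)   # mean queue length
W = 1 / (mu - lam)          # mean time in system
Wq = rho / (mu - lam)       # mean time in queue

# Little's law (L = lam * W) ties the size and time measures together.
print(round(L, 6), round(Lq, 6), round(W, 6), round(Wq, 6))
```

If the simulated averages don’t converge toward numbers like these as the run length grows, something is wrong – and figuring out *what* is exactly the tedious part.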

Yes, I could manually trace the simulation, event-by-event, but I quickly tired of that approach. Like (probably) most software developers, I’m easily bored by repetition, and inherently lazy – isn’t that why we like to automate stuff? Spending hours tracing and debugging the simplest of models was just not going to cut it.

Animation seemed to be the most straightforward solution. In theory, at least, animation should quickly make the most obvious bugs, well, obvious. I really did not want to go there yet, but if the only alternative was carefully poring over reams of trace output, then perhaps it was, in fact, time to go there. So I waded my way into the Alice-in-Wonderland world of animation technology, and dove down the first rabbit hole that presented itself.

I should probably take some of that back. I did spend at least a little bit of time thinking about requirements.  This is basically what I came up with:

  • 2-D animation should be just fine, particularly if my primary goal is to debug models. I realize that the commercial products are virtually all 3-D (and pretty impressive 3-D, for that matter) – but without knowing much about it, I decided that I wasn’t ready to jump into that particular pool. I’ll eventually see whether or not that was a mistake  🙂
  • I do not want to create yet another full-featured drawing or graphic design tool. That’s not my area of interest, and it’s certainly not my area of expertise.  If the modeler needs or wants to create sophisticated graphic elements, they should be able to do so using something off-the-shelf – and preferably, at least one of those somethings should be free.
  • In addition to moving stuff around the screen, the simulation model code should be able to modify graphic elements – e.g. changing the color of all or part of an element, or changing an element’s text component.
  • My overall target delivery platform is, at least initially, a traditional desktop operating system. I am therefore willing to live with an animation technology that is similarly constrained.  That being said, it would be awfully nice if I chose a technology and architecture that ultimately lends itself to a solution that also runs on mobile or web-based platforms.
  • Finally… while I do not want to overly dwell on performance up front, I do need to acknowledge that ultimately, I should expect to have to deal with models containing thousands of independent graphic elements, with many (if not most) of them moving about.

After deciding to go 2-D, I almost immediately started looking at SVG. SVG was initially attractive for several reasons:

  • There are plenty of editors and tools that support it, including free ones.
  • It provides a mechanism for addressing and modifying graphical subcomponents – i.e., modifying text elements or the color of some sub-element of the whole.
  • My UI toolkit of choice (Qt) supports it.

So I began fiddling with the use of SVG in traditional Qt desktop graphics… and didn’t get very far. While I was able to load and render SVG from external files, I couldn’t do much to modify it. After banging my head against that wall for a while, I ran across internet posts suggesting that for SVG support, QtWebKit was the way to go. I promptly dove in.

QtWebKit is a Qt wrapper around the open source WebKit web rendering and browser engine. One could, for example, use it to build a complete browser using Qt APIs. Of particular interest to me, QtWebKit provides mechanisms for integrating outside code (Python, in my case) with Javascript code running within a WebKit-rendered page. Specifically, it:

  • Allows Python code to invoke Javascript functions via the Qt signal/slot mechanism; Javascript functions may be treated as Qt slots and connected to Qt signals emitted by the Python code.
  • Allows a proxy for a Python object to be created in the page’s (DOM) window object, making it accessible to the Javascript code (which can in turn invoke methods on that Python object).
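In rough outline, the two mechanisms look something like this in PyQt4. This is a sketch only – it needs a desktop Qt/PyQt4 environment and a running event loop to do anything, and the bridge object, signal, and method names are all invented for illustration:

```python
import sys
from PyQt4.QtCore import QObject, pyqtSignal, pyqtSlot
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebView

class AnimationBridge(QObject):
    # A Python-side signal; Javascript functions can be connected to it
    entityMoved = pyqtSignal(int, float, float)

    @pyqtSlot(str)
    def log(self, message):
        # Callable from Javascript once the proxy is in the DOM window
        print("JS says:", message)

app = QApplication(sys.argv)
view = QWebView()
frame = view.page().mainFrame()
bridge = AnimationBridge()

# Mechanism 2: expose a proxy for the Python object as window.bridge
frame.addToJavaScriptWindowObject("bridge", bridge)

# Mechanism 1: Javascript connects one of its functions to the Python signal
frame.evaluateJavaScript("""
    bridge.entityMoved.connect(function (id, x, y) {
        /* move the graphic for entity 'id' to (x, y) */
    });
    bridge.log('bridge is up');
""")
# ... view.show() and app.exec_() would follow in a real application
```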

Have I mentioned the fact that up until this point, I had never written a line of Javascript in my life? And that I was (and still am) at best barely conversant when it comes to the web development tool stack?  (There go all those job offers!)  Time for this mostly desktop guy to begin getting a clue. So I began reading, which is, of course,  an inherently dangerous activity.

I started reading about animation and game development in HTML5, and quickly came across warnings about the scalability (pun only partially intended) of SVG.  When running an animated simulation, you are likely to find yourself creating and destroying lots of objects (think simulation entities); if all of these transient objects are added to/removed from the DOM, this reading suggested that things were not going to end well.  In other words, if I did my animation using SVG, performance was going to suck.

Most of this initial research seemed to suggest that for performant animation, HTML Canvas is the way to go. The built-in functionality is rudimentary as compared to SVG, but it should allow for animation of a large number of objects with reasonable performance.  So I started poking around Canvas-based toolkits, settling on KineticJS, which seemed to provide both reasonable support for the kind of animation I wanted and enough documentation and code samples for a Javascript neophyte such as myself.

At this point, one might say I no longer knew which end was up. I had started thinking in terms of SVG, was led to a partially web page/Javascript solution precisely because it had better SVG support, and then found myself looking at a Javascript solution without SVG at all! Nonetheless, I was able to build a rudimentary simulation model editor and animator without a horrendous amount of pain. The model editor’s graphical palette was limited to a small, hardwired collection of basic simulation objects, each represented by a simple graphic (think squares and triangles, along with a T-shaped object representing a queue). Using this, I was able to build and animate an M/M/1 queue model, and it all more or less worked. It was very basic – just a bunch of small green rectangle-shaped entities flying across the screen, queueing up in front of the server (a triangle, of course!), and then flying into my rectangular sink object to die once they were done – but it did allow me to verify my model.

But there were also clearly going to be some issues moving forward, even given my pretty simple and basic requirements:

  • The KineticJS library allowed me to define arbitrarily complex graphical objects – in (Javascript) code. To make this accessible to a user, I’d need a graphical object editor. This didn’t appear to exist for KineticJS, and I was in no mood to roll my own. Other Canvas libraries might have provided some editing tools, but I was not excited by the prospect of a library switch, particularly if it meant tying myself to a so-so graphics editor that persisted graphics in a non-standard (and probably homegrown) format.
  • I could have incorporated PNG-based graphics, which seemed to be reasonably well supported by KineticJS. I’d then be stuck with the scalability ugliness of bit-mapped graphics, and I’d lose the ability to access and modify sub-elements of the graphic (to modify text or color on the fly, for example).

I should mention that before starting this little adventure, I had not given much thought to how I would persist models to disk, and what that might mean in terms of basic modeling library design. My initial (pre-animation) simulations were just programs – import the simulation modules, define a few model-specific Process (and maybe Resource) subclasses, instantiate the objects in the model, and run the simulation.

Now the graphical representations for most of these objects would have to be specified and saved ahead of time. Sources, sinks, resources, queues, locations of various types – all of these would be created, saved, and reloaded via an editor, plopped on to the screen as graphics, and then tied to simulation objects at run time. Some objects – entities, essentially – could be ignored; they would be created and destroyed during the simulation run, and their location on the screen would be determined based on code and the location of the other objects they interacted with.

So essentially I had two types of objects:

  • Static objects, which are enumerated by the model (and by extension the modeler) ahead of time and exist in a more or less fixed location (both graphically and logically) for the entire simulation. (The “more or less” should probably be the subject of a future post.)
  • Transient objects, which are created at various points during a simulation run, typically are destroyed during the run, and move about based on the simulation logic and the locations of the model’s static objects.

I can be a bit slow and dense, but I like to think that I eventually acquire at least a modicum of clue. So at this point, it slowly dawned on me that there’s nothing to prevent me from using both SVG and Canvas in the same animation. I could put both an SVG and Canvas container onto my model editing and animation (HTML) page, one over the other. The transient objects – just entities, at least for now – could be rendered as simple shapes or PNGs on the Canvas container, taking advantage of Canvas’s strengths – the ability to create, destroy and animate lots of simple objects in a reasonably performant way. The static objects – everything else – could be rendered by the SVG container.  All of these objects could be defined and added to the SVG container at the start of a simulation and exist for the entire life of the run, thereby avoiding overhead associated with large scale creation and destruction of elements in SVG. And I’d get the goodies that come along with SVG – compatibility with other tools/editors, vector graphics and the transformation functionality that comes with SVG.
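The layering itself is simple – roughly the following, with the two containers absolutely positioned so they overlap (element ids and dimensions are made up for illustration):

```html
<div style="position: relative; width: 800px; height: 600px;">
  <!-- Static objects: sources, sinks, queues, resources; populated once
       at the start of a run and left alone thereafter -->
  <svg id="staticLayer" width="800" height="600"
       style="position: absolute; left: 0; top: 0;"></svg>
  <!-- Transient entities, drawn and redrawn each animation frame -->
  <canvas id="entityLayer" width="800" height="600"
          style="position: absolute; left: 0; top: 0;"></canvas>
</div>
```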

All in all, this sounded much more appealing, particularly since the static objects were likely to be the ones represented by more complex graphics. The next version of my model editor incorporated a palette of simulation objects defined by SVG graphics, and was soon followed by enhancements allowing those graphics to be customized by (Python) class – if you want a customized resource graphic, define a SimResource subclass and attach a Python decorator that specifies its graphical representation – typically an SVG file and the id of an element within that file. (In other words, a single SVG file can specify graphics for any number of different simulation object types.) This decorator also integrates with the model editor – once your SimResource subclass is sucked into your model, the new resource type appears in the editor tool palette. If the decorator does not specify a graphic, your subclass inherits the graphical representation of its base class. A similar decorator may be used to define entity subclasses and graphics, though in this case the graphical representation must be a PNG or comparable bit-mapped file.
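A sketch of what such a decorator might look like – the name animation_graphic, its arguments, and the _graphic attribute are my inventions here, not the actual API:

```python
# Hypothetical sketch of a class decorator that attaches a graphical
# representation (an SVG file plus an element id within it) to a class.
def animation_graphic(svg_file=None, element_id=None):
    def decorate(cls):
        if svg_file is not None:
            cls._graphic = (svg_file, element_id)
        # If no graphic is specified, _graphic is simply inherited
        # from the base class via normal attribute lookup.
        return cls
    return decorate

class SimResource:
    _graphic = ("defaults.svg", "resource")

@animation_graphic("machines.svg", "lathe")
class Lathe(SimResource):
    pass

@animation_graphic()          # no graphic: inherit from the base class
class GenericResource(SimResource):
    pass

print(Lathe._graphic)            # ('machines.svg', 'lathe')
print(GenericResource._graphic)  # ('defaults.svg', 'resource')
```

The real decorator would also register the class with the editor palette; this sketch only shows the inheritance behavior.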

(As an aside, I’ll note that the phrase “once your subclass is sucked into your model” glosses over a number of sins and pitfalls that I have not fully addressed or solved – namely the potential problems that occur when a Python-based tool dynamically and potentially repeatedly reads and uses other Python source code – any of which could have theoretically been modified by the user between uses. At some point I’ll try to post something about this, but if you’re dying for a sneak preview, Google “Python reimport”.)
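The core pitfall can be demonstrated with a few lines of standard-library Python – a sketch only, with a throwaway module and class name invented for illustration:

```python
# Demonstrates the reload pitfall: instances created before a reload
# still reference the *old* class object.
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True        # read source directly, skip .pyc caching
workdir = tempfile.mkdtemp()
sys.path.insert(0, workdir)
module_path = os.path.join(workdir, "user_model.py")

with open(module_path, "w") as f:
    f.write("class SimResource:\n    color = 'red'\n")

importlib.invalidate_caches()
import user_model
first = user_model.SimResource()

# ... the user edits their model source between uses ...
with open(module_path, "w") as f:
    f.write("class SimResource:\n    color = 'blue'\n")

importlib.invalidate_caches()
importlib.reload(user_model)
second = user_model.SimResource()

print(first.color, second.color)                   # red blue
print(isinstance(first, user_model.SimResource))   # False - stale class object
```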

At this point, I think I’ve found myself with a usable, if basic, animation framework. The 2-D graphics are simple and very cartoonish (dare I say “amateurish”) when compared to the pretty sophisticated stuff you see in high-end commercial solutions.  And even in the context of my simple framework, there’s a lot of work left to do (and problems that have yet to show themselves!). But I’m hopeful that it’s enough to communicate what’s happening during a simulation, which at the end of the day, is really all I need.
