Archive for category 3d

spring in puerto rico

… is apparently going to be 25 degrees Celsius and none too rainy.

I’m about to take off for a few weeks to visit my mother. It’s the first time I’ve ever been to Puerto Rico when it’s not Christmas; I’ve been apprehensive about how hot it’s going to be, and about how few pairs of shorts I have in my possession.

I have a few things I wanted to mention before I leave. The first is Female Body Parts. Now, I know that sounds morbid, but it’s a photo book of close-up images of various parts of the female anatomy. There’s lots of great reference for eyes, mouths, tongues, noses, and things that are not the face. I bought it while I was in Brooklyn earlier this month visiting Luckbat. It came with a CD, which I figured would have 1K-ish resolution images on it. Imagine my surprise when I found every image on the disc to be over 4Kx3K! At the $30 price tag it currently has on Amazon, that book is a steal.

The other thing I wanted to mention is Messiah 5. If you’re like me, you weren’t able to get it loaded and running after the install. I finally figured it out today and I wanted to post about it. Here’re the steps to getting it working. Keep in mind I’d already gotten one USB stick licensed before I ran through these steps.

1) Run the remove demo script.
2) Run the uninstaller script.
3) Unplug all USB devices.
4) Re-copy Messiah from the original download.
5) Run it. Allow it to get to the licensing screen.
6) QUIT out of the licensing screen by hitting the red X on the window, and allow the program to fully exit.

Now if you plug in your USB stick and run Messiah, it loads fine.

It seems that if you’ve got a license file in there on the initial install, for some odd reason the Crossover Chromium folder doesn’t get created in your ~/Library/Application Support/ folder, which (I believe) is what’s screwing things up.

Either way, it’s loaded for me. I don’t know anything about the program yet but at least I can experiment now.

Wish me luck on the flight. And wish my mother luck!

501 for christmas

Modo 501 is looking fantastic. I downloaded it today, and I can’t wait to put it through its paces. I’m especially excited about the revamped painting and sculpting, since it means I can stay in one app as opposed to jumping back and forth between Maya and Mudbox or ZBrush. But more than that, I feel like Modo is exactly where Blender will be once the 2.5 series is put to bed and gives birth to a stable 2.6. Everything is scriptable and it’s extremely simple to make new commands, or commands that properly refire when using interface sliders. My favorite part may be that I’ve yet to find a feature I don’t like whose behavior I couldn’t disable or otherwise change. Even the default space bar behavior (which switches between component editing modes) is changeable; mine is set to pick item mode now. Not to mention, one of the new guys at work, Rowan, is a Modo master. He’s been invaluable in finding out where things are.

It’s a pretty steep learning curve both modeling and scripting-wise, particularly for someone who’s only really done 3D scripting through the Maya and Blender APIs. Also, while every tool I use when modeling seems to exist in Modo, the names and methods for use are so different that it’s taken me all week to find the first quarter of my usual bag of tricks. But I’ve also picked up a few new ones, like Background Constraint with Vector direction. Holy crap, did I not know I wanted that feature so badly.

Blender’s always going to be there for me, but at least until the 2.5 series stabilizes (and the input manager stops getting stuck when I sculpt, making sculpting impossible), I have a new swiss-army knife for work. Oh, and Luxology: thank you for making my ordering process amazing. I’m not going to say why I’m so happy with you on this blog, but if more companies behaved like you I’d be a happier person all around.

It’s funny, though– I’m not finding with Modo that I fight the learning curve as much as I do when I move to, say, Houdini. Modo draws from all the best parts of Blender, Maya, and Lightwave, so it just works for my head. Here’s to being more efficient with modeling tasks in 2011!

By the way, the new Gorillaz album (the one recorded on an iPad) is up for streaming. I like it a lot more than Plastic Beach. In fact, it feels a lot like D-Sides, which is one of my favorite collections of their music. If you’re a fan, definitely check it out. And if anyone knows what iPad software Albarn used to master these songs, please let me know.

fear of flying high (poly)

We’ve had a number of new hires recently at March. Our new Character Lead showed me something the other day that on the surface of it was so simple, but the application of it is likely to change the way I work entirely with regards to character workflow.

I’m not a bad modeler. (I’d prove it with some images, but all my coolest stuff is still under NDA.) I’ve spent the last year or so focusing on topology flow for animation, and until about a week ago I thought I was doing alright.

But yesterday I was watching the Character Lead remodel (or rather, reshape) a character on our show. The mesh is much more dense than I’d expected, and his technique for doing mass changes in large sections of verts is very interesting (similar to how I do retopology in Blender).

While the new application of that modeling technique is going to be very useful to me when I return to modeling, what really got me was when I asked him about workflow and on keeping triangles out of the mesh. His answer? Add edge loops / splits to the model and force the triangles into quads; don’t be afraid to make the mesh higher resolution.

I ended up thinking about that for the rest of the day. It echoes a conversation I had with Mike years ago when I was dithering over the purchase of my current MacBook Pro. He was pushing for me to get it because he thought my work was being limited too much by hardware considerations. At the time I hadn’t considered that I was doing more work than was necessary on my 12″ Powerbook, building scenes with a lot of structure for keeping a limited number of polygons on screen to keep the interaction speed usable. When I moved to the new laptop and loaded the animation I was working on at the time, the difference was night and day: the file in which I was previously hiding so many things and using low-res proxies now ran at full speed with final geometry. I realized Mike had been right all along (as he is with many things), and that simple hardware change had a fundamental and lasting effect on how I work and how my work has developed.

However, that nagging sense that things can always be lighter, more efficient, has never really left. I model defensively for rigging– there are always exactly enough loops and lines for the shape I want, but no more. I try to keep poly count low so things like skin clusters and other deformers don’t work overtime, and so that weight painting isn’t painful. While these are valid concerns, the conversation I had yesterday made me realize that there’s a fine balance between keeping things light and having enough polys in there to make modeling and blend shape building easy.

I guess the point is, the next model I make is going to be significantly higher poly. And I need to always be aware of my old habits and whether or not they hold me back as I continue to hone my skills. When it comes to animation, don’t fear the poly!

Tags: ,

new yaaaawk, baybee!

Dimos and I headed out over the weekend to see Mike in New York. The title of this post is what Dimos screamed, cheerleader-on-prom-night-in-the-back-of-a-volkswagen style, every five minutes. Somehow it never got old.

There are amazing pictures taken by Mike of both Dimos and myself on Facebook; hopefully I’ll find some time this weekend to get the pics off my own camera and Flickrize them.

I might talk a bit more about the trip in a few days, but what I actually wanted to write is this:

To project a worldspace point into camera space, you need to multiply the camera’s view matrix by its projection matrix. The projection matrix is grabbable through the OpenMaya API (there’s a function for it on MFnCamera). The view matrix is simply the inverse of the worldspace transform matrix.

Needed to get that out of my head and into a place I wouldn’t lose it. If that makes no sense to you, just pretend it wasn’t there. 🙂
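And since this is exactly the kind of thing future-me will lose anyway, here’s a minimal pure-Python sketch of that math. The helper names and the perspective function are my own stand-ins, not Maya API calls; Maya uses row vectors, so points multiply on the left.

```python
import math

def mat_mul(a, b):
    # multiply two 4x4 matrices stored as lists of rows
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def vec_mat(p, m):
    # row vector times 4x4 matrix (Maya's convention)
    return [sum(p[k] * m[k][j] for k in range(4)) for j in range(4)]

def perspective(fov_deg, aspect, near, far):
    # a simple perspective projection matrix, row-vector convention
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    m = [[0.0] * 4 for _ in range(4)]
    m[0][0] = f / aspect
    m[1][1] = f
    m[2][2] = (far + near) / (near - far)
    m[2][3] = -1.0
    m[3][2] = (2.0 * far * near) / (near - far)
    return m

# camera sitting at (0, 0, 5) looking down -Z; its view matrix is the
# inverse of its world transform -- here that's just the negated translation
view = [[1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, -5.0, 1.0]]

view_proj = mat_mul(view, perspective(45.0, 1.0, 0.1, 100.0))
clip = vec_mat([0.0, 0.0, 0.0, 1.0], view_proj)  # project the world origin
ndc = [c / clip[3] for c in clip[:3]]            # perspective divide
print(ndc[0], ndc[1])  # a point on the camera axis lands at (0.0, 0.0)
```

In actual Maya code you’d pull the projection matrix from MFnCamera and invert the camera’s worldMatrix rather than hardcoding either one.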

playing in coffee

I’m slowly coming out of the fog of Maya, now that I’m done with using it for Animation Mentor, and I’ve been trying to get back into Cinema 4D. I just wanted to post this short COFFEE script snippet. This looks through all the animation curves on an object and spits out information on them.

var op = doc->GetActiveObject();
var curTime = doc->GetTime();

println("\nKey dump for ", op->GetName(), ":\n");

var t = op->GetFirstCTrack();
if (t == NULL)
	println("No animation tracks found on this object.");

var i = 0;
var j = 0;
var k;
var c;

while (t != NULL) {
	i++;
	c = t->GetCurve();
	println("\tTrack ", i, ": ", t->GetName(), " -- ",
	c->GetKeyCount(), " keys.");

	for (j = 0; j < c->GetKeyCount(); j++) {
		k = c->GetKey(j);
		var bt = k->GetTime();
		println("\t\t", j, ": v", k->GetValue(), ", t:", bt->GetFrame(30));
	}

	t = t->GetNext();
}

println("Number of tracks: ", i);

The sad part is this: you can’t add new keys to curves in COFFEE; you can only read what’s there. (EPIC FAIL, Maxon.) At least now I know how to get keys and how the BaseTime class works.

Weak References

Last year while I was working at Red Rover, I heard the term “weak reference” in reference to a technique for referencing objects in 3DS Max. The Max TD used them for a variety of things. I didn’t quite understand what he was talking about at the time, since the last version of Max I used was 2.5 and I never did rigging or coding for it then.

More recently I’ve come to the same technique on my own both in Maya and in Cinema 4D, and was reminded of the name for it by that same TD over beers a few weeks back.

Essentially, weak references are variables on objects that contain a pointer to an object rather than a reference to its name. In Maya, for example, you may see rigging scripts written by TDs referencing specific objects by name, or saving names of objects as they’re created and using those names to connect attributes or add constraints. In a clean scene this works fine, as long as the rigger is meticulous in their naming scheme and runs the different stages of the rigging script in the proper order.

But what happens when Maya namespaces become involved? As soon as you reference in an asset, every object that makes up that asset gets a namespace prefixed onto its name. If you’ve written scripts that require specific names, they break. If your layout files aren’t perfect and the namespace is different between two or more shots (as Maya is wont to append a number, regardless of what you specify), useful tools like character GUIs and the like break and you’re left doing namespace surgery in a text editor.

Weak references sidestep all this by giving you a permanent connection to an object regardless of name changes or namespace prefixes.
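To make the idea concrete, here’s a tiny standalone Python analogy (the classes are hypothetical stand-ins, not the Maya API): a script that saved the name breaks when a namespace gets prefixed on, while a script that kept a pointer to the object still resolves it.

```python
# a tiny stand-in "scene": names map to node objects
class Node:
    def __init__(self, name):
        self.name = name

scene = {}
cam = Node("shot010_cam")
scene[cam.name] = cam

by_name = "shot010_cam"   # script that saved the name at creation time
weak_ref = cam            # script that kept a pointer to the object itself

# referencing the asset prefixes a namespace onto the object's name
del scene[cam.name]
cam.name = "seq01:shot010_cam"
scene[cam.name] = cam

print(by_name in scene)   # False -- the saved name no longer resolves
print(weak_ref.name)      # the pointer still finds the current name
```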

A good example is how I’m currently handling cameras in scenes. A decision was made early on, on the current project at work, to name cameras in layout files by the name of the shot / sequence. Normally this isn’t a problem, but we’re using a renderer that’s not linked into Maya directly and therefore needs a command line batch exporter written. If all the cameras are named differently, and the camera’s animation has to be baked and exported as well, how do you go about picking the right object?

Using weak references, the problem becomes trivial. You create them as follows:

addAttr -ln "cameraObj" -at "message";

You’ve probably seen attribute connections of type message into IK handles and other things. The message attribute carries no data– that is, it never changes and causes no DAG recalculations. (This is doubly useful because you can connect the message attributes of two objects to message-typed user attributes in a cycle without causing a cycle error– more on that later.) However, the attribute can be used to get the name of the connected object like so:

listConnections object.messageAttribute;

It will return an array of strings. If you rename an object, you can get the object’s current name through the above command.

So where do you store these attributes? For the moment I’m using a trick I saw in the Siggraph ’08 talk by Blue Sky on procedural rigging: I create non-executing script nodes and store connections on them. In the camera example above, every scene has a master script node. On that node are a few attributes, including its “type” and a .message connection to the render camera. It’s then trivial to find the camera’s name:

string $sel[] = `ls -type "script"`;
for ($s in $sel) {
	if (`attributeQuery -node $s -ex "snCamera"`) {
		// this should be the one you need
		// normally I search for the type, but this is an example
		string $conn[] = `listConnections ($s + ".snCamera")`;
		// if it's only one connection incoming, then you're done.
		print("Camera is named " + ($conn[0]) + "\n");
	}
}

This technique can be extended to include all kinds of objects. It can also be very helpful for scripts like character GUIs that need to know what characters are present in a scene, and be able to change the positions of all those controls.

One final note on this for now: In Cinema 4D, every object and tag in a scene can be named the same. Searches for objects or tags by name are often fruitless because of this; if two objects or tags have the same name there’s really no easy way to tell which is which in a COFFEE script. What you can do, however, is create a user data variable that is of the Link type. This allows you to drag and drop an object into that variable’s edit field, and provides a permanent pointer to that object regardless of name. This is very useful in rigging; for example, you can always tell which joints in a leg are control joints, and which are bind joints, by creating links. You can also expose the links in XPresso and use the pointers as if you’d dragged an object onto the XPresso node window.

a quick note on object snapping

A few weeks ago I got stuck while writing a foot snapping tool — the snapping code I used to use didn’t work on the rigs we inherited because there were a lot of pivot changes. I tried a bunch of more elegant ways to fix this but ended up doing the old standby:

// select the target first, then the object to snap
delete `orientConstraint`;
delete `pointConstraint`;

Now, that solution wasn’t a satisfying one to me, so I kept looking for a better one.

Turns out there’s an easy fix. If you use getAttr on a transform node’s .worldMatrix, you get back the proper 4×4 set of float values that represents its final position in world space. Afterwards you can set the matrix of another object using xform. It’s pretty simple in either MEL or Python.


// snaps the first-selected object to the second-selected one using worldMatrix values
string $sel[] = `ls -sl`;
float $m[] = `getAttr ($sel[1] + ".worldMatrix")`;
xform -ws -m $m[0] $m[1] $m[2] $m[3] $m[4] $m[5] $m[6] $m[7] $m[8] $m[9] $m[10] $m[11] $m[12] $m[13] $m[14] $m[15] ($sel[0]);

The MEL code is a bit ugly because of having to specifically reference each item in the array of floats, but it works. Here’s the Python equivalent. I keep it now as a button on my shelf.


# uses matrices to snap one object to another
import maya
# yes, I put all my commands into the main namespace
from maya.cmds import *

sel = ls(sl=True)
mat = getAttr(sel[1] + ".worldMatrix")
xform(sel[0], ws=True, m=mat)

Katt's Mysterious 3D Lectures – Vector Application: A Better Rivet

You have the aim constraint under your belt. You can guess how a pole vector’s motion will change rotations with a look. You’re feeling a new sense of power and a desire to accomplish… things.

Now what?

Let’s start with something simple. Remember that old standby, the Rivet script, that I mentioned in my last post? Every good tutorial DVD I’ve bought over the years had it. Every rigger I’ve ever met uses it. As scripts go, it’s probably the most useful 2k on HighEnd3D.com.

But did you know it’s also broken?

Let me go back for a moment. A year ago I was working under a talented rigger who liked the Fahrenheit Digital system for creating nurbs ribbon rigs. He was saddened by the fact that all licenses of Maya at our company were Maya Complete save two: his, and mine. This meant that the standard way of building ribbons, where you attach a Maya Hair follicle to the surface, wasn’t going to work, as Maya Hair only comes with Unlimited. He mentioned that you could accomplish the same thing with an aim constraint and a pointOnSurfaceInfo node, or through the decomposeMatrix node, although it didn’t work as well. So I was tasked with writing a Python node plugin that accomplished the task. It worked well and quickly enough; 40 or so of them were live in the final rigs.

However I prefer to keep the number of non-standard nodes in rigs to a minimum. At my current place of work we realized a need for a follicle-like setup again, so I started researching.

At one point we’d thought we could solve the problem with the Rivet script. Rivet takes two edges, lofts them into a nurbs surface with history, then attaches a locator to the surface using a pointOnSurfaceInfo and an aim constraint. When the lofted surface is relatively square and doesn’t deform much, this works fine. But when we tried the same pointOnSurfaceInfo and aim constraint setup on a longer nurbs surface that deforms and bends, the locators did not behave properly. Past a certain amount of twisting, they would rotate off in odd directions.

I played with the script and found that the pointOnSurfaceInfo node was feeding the surface normal into the aim constraint as the aim vector, with one of the tangent vectors as Up. Because of this, the aim constraint was causing the locator to flip. The way aim constraints work makes up vectors into suggestions, not rules. It also makes the third axis a product of the other two, as I showed in my last post.

In the end it was a simple fix: instead of using the surface normal (which wasn’t an illogical choice), I fed both surface tangents into the aim constraint and let the third axis, the normal, be the derived one. Since the tangent u and v vectors are always orthogonal regardless of how much you distort the surface, and since they always run in the right directions along the surface, you can be certain that the surface normal — a third orthogonal vector — will still end in the right place. (I bet the surface normal is derived from the cross product of the two tangent vectors anyway, internally.) No need for a custom node or to force the loading of decomposeMatrix; so far I haven’t seen any problems with this setup.

Steps for those who want to try this at home:

1) Create a pointOnSurfaceInfo node attached to your nurbs surface. Set its U and V parameters to get the output to the right place on your surface.

2) Use the createNode command to make an aimConstraint node.

3) Plug the pointOnSurfaceInfo’s tangentU into the aimConstraint’s target[0].targetTranslate, and the tangentV into the constraint’s up vector.
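If you want to convince yourself of the vector math, here’s a quick standalone sketch (pure Python; the names are mine, not Maya’s). The constraint derives its third axis from a cross product, and a cross product is orthogonal to both of its inputs, so the derived normal stays well-behaved no matter how the surface twists.

```python
def cross(a, b):
    # cross product of two 3D vectors
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# surface tangents at a point on a surface twisted 45 degrees around X;
# these run along the surface however it deforms
tangent_u = (1.0, 0.0, 0.0)
tangent_v = (0.0, 0.7071, 0.7071)

normal = cross(tangent_u, tangent_v)  # the derived third axis
print(dot(normal, tangent_u), dot(normal, tangent_v))  # 0.0 0.0
```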

katt's mysterious 3D math lectures: how aim constraints work

More often than one would think, I’ll come across a post on CGTalk or some similar forum where the following question:

“Hi, how do aim constraints work? I’d like to write my own.”

gets the reply:

“That’s stupid. Why would you want to do that? Just use the built-in aim constraint. It’s more efficient.”

I hate that reply. It’s generally true that the internal node or whatever that handles the aim constraint in your software *is*, in fact, much more efficient than rolling your own through scripting or API-level programming, but without knowing *how* the node works internally, without understanding *why* it behaves the way it does, you’ll only ever be able to use that constraint in a limited number of ways.

The examples in this post are going to focus specifically on Maya, but the thought process is the same regardless of your software package. You may not have the same level of control when modifying the aim constraint in your package, however. In that case, knowing how it works so you can build your own and extend upon the behavior that’s there becomes all the more important.

Anyway, on to how aim constraints work. I’ll be using “target” and “constrained” to refer to the two objects connected by the aim constraint.

The first thing that happens when you aim constrain one object to another is that constrained’s position is subtracted from target’s. This gives you a vector in the direction of target that passes through both target and constrained.

Why is this important?

Actually the vector is very important — it’s one of three that are needed to describe the orientation of an object in most 3D packages, due to the fact that matrices are used to hold object transforms. In a 4×4 matrix (where the upper-left corner is referred to as m00, and the bottom right is m33; a two-dimensional array of values), m00–m02 represents the vector along which the object’s X axis lies. m10–m12 and m20–m22 are the Y and Z axes respectively. Each vector must sit at a 90 degree angle to the other two — must be orthogonal — just like their respective axes.

This first axis we’ve gotten can be plugged into one of the matrix axis spots to align the first axis on your object. So if you’re aiming the positive Y axis at target, the vector would go into the second row, m10–m12. If you’re aiming the negative Y axis, you can either flip the vector you’ve gotten from the earlier subtraction or, if you’re smart, save the extra calculation and just subtract target from constrained instead.

Alright, one axis is constrained. How about the other two?

Next up is the pole. It’s just another direction vector, and the pole vector axis is snapped to it in the same manner that the first axis was set to aim at the target object. Pole vectors can be calculated in any number of ways: you could create a second vector through subtraction again, use a world vector (such as <0,0,1> for the world Z axis), or even just plug in the direction vector from another object’s matrix directly. Put the pole vector into the matrix for the axis that’s pole vector constrained and we’re halfway there.

Now, remember I said that the three axes in the matrix need to be at 90 degree angles to each other? I’m betting that if you draw out the two axes you’ve currently got plugged into your matrix, they’re not aligned. This is expected. If you create an aim-constrained object and target in a 3D scene right now and move the target around a bit, you’ll see that the pole vector axis will aim along the pole vector, but it won’t often snap to it. The pole vector is actually only there to get the third, unconstrained axis. If you do a cross product on the aim vector and the pole vector, the third angle you get will be at a 90 degree angle to them both. We’ll call this the “unconstrained” vector. This means that the first and final axes are finished. Afterwards, another cross product is done between the aim vector and the unconstrained vector to make sure the pole vector is orthogonal to the other two. While this makes the pole vector-constrained axis not always point exactly along the pole vector, you do have all three axes accounted for and the system is relatively stable. Pop the vectors into your matrix and voila — constrained aims at target.
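The whole recipe fits in a few lines. Here’s a standalone Python sketch of it (my own helper names, aiming +X at the target with +Y as the pole-constrained axis):

```python
import math

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def norm(v):
    l = math.sqrt(sum(x * x for x in v))
    return [x / l for x in v]

def aim_matrix(constrained_pos, target_pos, pole_vector):
    # 1) subtract positions to get the aim axis
    aim = norm(sub(target_pos, constrained_pos))
    # 2) cross aim with the pole vector to get the unconstrained axis
    third = norm(cross(aim, pole_vector))
    # 3) cross again so the up axis is orthogonal to the other two
    up = cross(third, aim)
    return [aim, up, third]  # rows: the object's X, Y, and Z axes

# constrained at the origin, target straight down -Z, world Y as the pole
m = aim_matrix([0.0, 0.0, 0.0], [0.0, 0.0, -5.0], [0.0, 1.0, 0.0])
print(m[0])  # the X axis aims at the target: [0.0, 0.0, -1.0]
```

Note the order of the second cross product: it’s chosen so the re-derived up axis lands on the same side as the pole vector.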

As far as I know any aim constraint based on Euler angles works like this in most packages.

Now on to what’s interesting.

Take Maya, for example. For years I’ve been using the rivet script by Michael Bazhutkin. In fact, I’m willing to bet there are few riggers out there who haven’t used it from time to time. However, until recently I hadn’t ever stopped to look at how his script works.

If you’ve never used rivet, it basically takes two edges or four vertices and lofts a nurbs surface between them, with construction history on. Then it snaps a locator to the center of that surface. The locator rotates perfectly with the surface even after joint and blendshape deformations, so it’s great for sticking buttons or other decorations to character meshes.

It turns out that the script’s main trick is using Maya’s aim constraint node in a neat way: he gets the surface normal from the nurbs surface at the position he wants through a pointOnSurfaceInfo node and uses it as the aim vector, then uses the same pointOnSurfaceInfo node to get the nurbs surface’s tangent vector at the same point and plugs that into the constraint’s up vector slot. Since the constraint doesn’t care what vectors get plugged into it, and since a nurbs surface at any point can be evaluated to get three orthogonal axes (not unlike the X, Y, and Z axes), this works out great. It also keeps working regardless of how the mesh bends, since all the construction history is kept, forcing updates down the line as a character deforms.

This is a trick I’ve used in the past few weeks on our current project at work, and it’s something I plan to expand upon in the coming months. It’s also something that’s gotten me thinking: just what else could I do if I, instead of using the constraint commands to constrain objects together, just created nodes in Maya and used the connections in ways the developers hadn’t envisioned?

But the better question is: how would someone even know to pull apart a set of node connections to a constraint if they didn’t know, roughly, how that constraint works?

I hope this helps someone. As soon as I have time, the next topic I want to write about is something I did with vectors over the weekend: replicating the smear effect from the Pigeon Impossible blog in Cinema 4D.

catching up

Lots of things going on, but I didn’t want to talk about some of it until the details were more concrete.

The first big news is that I’ll be starting a new job with March Entertainment this Monday, as a CG Animator. Apparently there’s some difference between “CG Animator” and “Character Animator” that I was unaware of, where CG Animators deal with more parts of the pipeline, so I’m expecting to have as much fun on the projects coming up as I did at Rover. Bonus points: I get to work with Dimos again. Nothing like doing a project with people you already meld with. Anyway I’m animating on the first project, which excites me a lot.

Ever since Ollie finished I’ve basically been on an enforced vacation. I say enforced because I generally don’t do the relaxing thing — traditionally I use downtime to learn or catch up on personal projects. But instead of doing that this time around, I’ve been doing absolutely nothing. For a few days I sat on the couch and caught up on DVR’d TV shows, especially Heroes and Chuck. I bought and played through most of Prince of Persia on PS3, which surprised me — unlike the last three, the fighting is great and doesn’t detract from the game at all. In fact, I’d say the balance is perfect. Story’s not bad either. I’ve also been going through Final Fantasy XII. I never had a chance to finish it after I bought it a year or two ago and it’s been waiting on me all this time. It’s also quite good. It’s the first FF I’ve enjoyed this much since FF7, and that’s saying something. The voice acting is superb, the plot is great, the writing / translation is well above the normal level, and even for a PS2 game the graphics are extremely well done.

Not that I haven’t been studying up on things. I’ve been doing some rig tests in Maya, working out some issues in Blender, and learning waaaay more than anyone should about using cloth sims as rigid body generators in Cinema 4D.

I’ve also been framing through the Bolt Blu-Ray, as I have time. In the disc extras they mention how they simplified the paint on the backgrounds to keep the focus of each shot prominent, like the old 2D cartoons used to do. You don’t notice it unless you look for it, but it’s everywhere throughout the movie. Certain things are rendered in high quality. Others are so simplified that when you pause the movie you can barely tell what they are. Paint strokes are visible everywhere. If you’re an animation geek and are interested in seeing this, check out the distant skyscrapers in the city. Also check out the scene where Mittens is looking for something to bash Bolt over the head with, when they’re both in the moving truck after leaving NYC. That particular scene really shows what’s going on. It’s amazing I watched through the movie and never noticed it, not once.

What else… A lot of Papervision playing. I’ve had a look at all the 3D engines for Flash and out of all of them I like the way Papervision renders triangles the best. A lot of the other engines have issues with gaps between triangles and quads, and none seem to have any serious benefits over PV3D. There are a few limitations that I’m still trying to get over, especially the limit on the number of triangles. It’s a bit like the DS, which can’t draw more than 2000 triangles per frame. (Or faces; I’m not sure which.) Even on my dual-core laptop the number of triangles you can use is pretty limited. Note that I don’t say limiting — a ceiling on the number of triangles you can use just means you have to be creative in your use of them. There are also other ways to speed up the system — anything I do in PV3D will use baked lighting, for example. I also plan on being very aggressive about keeping poly counts down. And I’ve been reading up on BSP tree creation (I finally understand how portals work).

That is, if I use PV3D.

I’ve had enough time to think about games I want to make, and what I would need to know in order to get them made. I know that everything has to be done in stages, and that to make the game I really want to make, I have to build a better foundation. I need to build something small and simple to see the extent of what something small and simple takes, in order to use it as a lens to look at a larger project. Kind of like how doing a short film is like a microcosm for a feature, or even a TV show.

Every so often when I’m really bored or full of insomnia I’ll hit up The Video Game Name Generator and write down some of the best ones, like Go Go Basketball Gladiator or Nuclear Transvestite Experience. I write down the ones I love for possible future games (or domain names). One in particular would be good for a short game. We’ll see how that goes. Either way, I’ll probably end up using Unity. 2.5 just hit, and I could make web versions of the game just as easily with it as I could do something up in Flash.

Oh, and I rewrote the first chapter in my book. I hardly talk about the book on here at all, but it’s something that’s always going in the background despite everything else I’m doing.

Heh… I guess I didn’t do “nothing” per se, but I do feel a hell of a lot more relaxed. Oh, and did I mention that I did a quick redesign of my main site?