Archive for April, 2010


I had to make a difficult decision today: Looks like I’ll be returning my new Core i7 MacBook Pro.

I bought the in-store antiglare model because the antiglare screen was the only non-standard build option I wanted, and I thought it was amazing to finally be able to pick it up in-store after the one-month wait on my last laptop. But the current generation of laptops only offers the antiglare option on the higher-resolution screen.

At first I thought it wouldn’t be an issue, that I’d appreciate the extra screen real estate. But a week of using the machine has led me to the conclusion that the higher-resolution screen at 15″ is too much of a strain on the eyes. At least, on my eyes.

I watched video tutorials all day today and it got to the point where I couldn’t focus on the windows on-screen. Three years on my previous laptop and I never had that problem; one week with the new one and I’m punch-drunk.

I argue all the time with people about why I get Mac laptops. I honestly do believe they’re of a higher calibre than their equivalently-priced PC counterparts, but between this issue, a bevy of problems I’ve had since iPhone OS update 3.1, and the fact that Apple still hasn’t gotten their shit together with regards to OpenGL standards on the Mac, I’m seriously considering a Windows laptop for my next work machine.

I wiped this one and I’m taking it back. I’ll replace it with a glossy-screen model, and hopefully that helps my headaches. But I can’t help thinking that this could have been avoided had Apple offered the antiglare option on the regular-resolution screens, an option I’d have gotten had I bought a Dell.

Funnily enough I have a copy of Windows 7 Professional arriving in the mail in a week or two, so I’ll have plenty of time to get re-acclimated with that side of things if I do decide to make the switch back (after my 2001 switch to Mac). Really going to miss Quicksilver, though.

new paradigms

Part of working at anything, of being a craftsperson, is the constant search for new techniques that either aid your existing process or add something new to it, making you better in the process.

Over the last few weeks I’ve heard about a number of rigging techniques that sounded counter-intuitive at first, but the more I think about them the more interesting they become.

The one I’m most interested in trying out will be arriving soon as a very expensive DVD: Mastering Maya: Developing Modular Rigging Systems with Python. The autorig itself is almost identical to something I’ve worked on in my spare time, but what’s crazy about it is that the autorigger works by layering controls on top of referenced FK skeletons in shot files.

I’ve never built a rig that was feature-limited, or where an animator asked for something that I thought wouldn’t benefit everyone. (I do only IK limbs in my own work, but that’s a different story.) Until now, the rig would be modified and the change would move downstream as part of the referencing system. The idea that you wouldn’t reference characters as a whole, and would instead allow animators to pick and choose their favorite controls, seemed ludicrous on first listen.

The pros are very compelling. You can always strip out the control rigs and put the keys on the FK rig; pure FK rigs are very compatible across all programs. Not to mention, feature-level control schemes could be applied to game characters as well (and moving forward, I fully expect more and more projects in this industry to target all “three screens”). There’s also the ease of fixing issues on a per-animator basis: once a fix is in the autorig, they can bake their keys down to the FK rig, remove the old controller, then reapply the rig in the scene with no need for the TD to come over and swap things around.

But what of multiple scenes? Does a script run on scene load and alert animators to updated controls as they become available? How are major changes propagated to all shots throughout the pipeline?

Right now the answer I’m coming up with is: the animators apply changes on their own. If they want the new rig with fixes, they opt in by baking their keys to the base FK rig and blowing away the broken control rig, replacing it with the fixed version. I can’t wait to see if that’s how the 3DBuzz tutorial solves this problem.

It also gets around a nasty issue: because each broken rig lives in its own scene file, you don’t need multiple copies of fixed rigs traveling downstream for the shots that used the respective broken rig iterations.
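The opt-in workflow above boils down to “sample the old rig’s output every frame, store flat FK keys, throw the old rig away.” Here’s a toy sketch of that shape in plain Python — these are hypothetical names, not Maya API calls, just the idea of baking a driven channel down to per-frame keys:

```python
# Toy stand-in for the bake-and-swap workflow (hypothetical names, not
# Maya API): evaluate whatever the old control rig outputs at each frame,
# then keep only those flat samples on the FK joint. After that, the
# broken control rig is disposable and a fixed one can be layered back on.

def bake_to_fk(evaluate_rig, frames):
    """Return flat per-frame keys by sampling a rig's output channel."""
    return {frame: evaluate_rig(frame) for frame in frames}

# stand-in for a broken control rig driving a joint's rotation
broken_rig = lambda frame: 10.0 * frame  # say, degrees at each frame

fk_keys = bake_to_fk(broken_rig, range(1, 25))

# playback now reads fk_keys directly; the control rig can be deleted
print(fk_keys[12])  # 120.0
```

In Maya terms this would be a `bakeResults`-style pass over the FK joints followed by deleting and re-applying the control rig, but the point is just that the FK keys become the single source of truth.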

Then there’s the idea that this makes character referencing less important– in software that doesn’t support animated references, like Lightwave and Cinema4D, you get the benefits of a tool that gets around the issues of rig updates. You still need to force tool updates on all artist machines, but that’s less of a problem for me.

Anyway, it’s been over a week and I’m still waiting on my purchase, so all I can do is speculate and look forward to what’s in store.

On the music front, I finally got something out of Live that did not suck. In fact, I just might like the drum beat. The weird part is that while nothing I have in my head comes out when I sit down to write music, what does come– however different it may be– still makes me happy.

fear of flying high (poly)

We’ve had a number of new hires recently at March. Our new Character Lead showed me something the other day that on the surface was so simple, but its application is likely to entirely change the way I work with regards to character workflow.

I’m not a bad modeler. (I’d prove it with some images, but all my coolest stuff is still under NDA.) I’ve spent the last year or so focusing on topology flow for animation, and until about a week ago I thought I was doing alright.

But yesterday I was watching the Character Lead remodel (or rather, reshape) a character on our show. The mesh is much more dense than I’d expected, and his technique for doing mass changes in large sections of verts is very interesting (similar to how I do retopology in Blender).

While the new application of that modeling technique is going to be very useful to me when I return to modeling, what really got me was when I asked him about workflow and on keeping triangles out of the mesh. His answer? Add edge loops / splits to the model and force the triangles into quads; don’t be afraid to make the mesh higher resolution.

I ended up thinking about that for the rest of the day. It echoes a conversation I had with Mike years ago when I was dithering over the purchase of my current MacBook Pro. He was pushing for me to get it because he thought my work was being limited too much by hardware considerations. At the time I hadn’t considered that I was doing more work than was necessary on my 12″ Powerbook, building scenes with a lot of structure for keeping a limited number of polygons on screen so the interaction speed stayed usable. When I moved to the new laptop and loaded the animation I was working on at the time, the difference was night and day: the file in which I was previously hiding so many things and using low-res proxies now ran at full speed with final geometry. I realized Mike had been right all along (as he is with many things), and that simple hardware change had a fundamental and lasting effect on how I work and how my work has developed.

However, that nagging sense that things can always be lighter, more efficient, has never really left. I model defensively for rigging– there are always exactly enough loops and lines for the shape I want, but no more. I try to keep poly count low so things like skin clusters and other deformers don’t work overtime, and so that weight painting isn’t painful. While these are valid concerns, the conversation I had yesterday made me realize that there’s a fine balance between keeping things light and having enough polys in there to make modeling and blend shape building easy.

I guess the point is, the next model I make is going to be significantly higher poly. And I need to always be aware of my old habits and whether or not they hold me back as I continue to hone my skills. When it comes to animation, don’t fear the poly!


new yaaaawk, baybee!

Dimos and I headed out over the weekend to see Mike in New York. The title of this post is what Dimos screamed, cheerleader-on-prom-night-in-the-back-of-a-volkswagen style, every five minutes. Somehow it never got old.

There are amazing pictures taken by Mike of both Dimos and myself on Facebook; hopefully I’ll find some time this weekend to get the pics off my own camera and Flickrize them.

I might talk a bit more about the trip in a few days, but what I actually wanted to write is this:

To project a worldspace point into camera space, you multiply the camera’s view matrix by its projection matrix. The projection matrix is grabbable through the OpenMaya API (there’s a function for it on MFnCamera). The view matrix is simply the inverse of the camera’s worldspace transform matrix.
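A minimal numeric sketch of that, in plain Python rather than the OpenMaya API. The matrix layout and camera setup here are my assumptions: row-major 4×4 matrices with Maya’s row-vector convention (point × matrix), a camera translated to (0, 0, 10) looking down −Z, and a standard perspective matrix standing in for whatever MFnCamera hands back:

```python
import math

def mat_mul(a, b):
    """Row-major 4x4 matrix product a * b."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def project(m, p):
    """Transform row-vector point p by m, then do the perspective divide."""
    v = [p[0], p[1], p[2], 1.0]
    out = [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]
    return [out[0] / out[3], out[1] / out[3], out[2] / out[3]]

def perspective(fov_deg, aspect, near, far):
    """OpenGL-style perspective matrix, transposed for row vectors."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), -1.0],
            [0.0, 0.0, 2.0 * far * near / (near - far), 0.0]]

# Camera worldspace transform: a translation to (0, 0, 10).
cam_world = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 10, 1]]
# View matrix = inverse of the camera's worldspace transform
# (for a pure translation, just negate the last row).
view = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, -10, 1]]

# view * projection, then push the point through.
view_proj = mat_mul(view, perspective(45.0, 1.0, 0.1, 100.0))

# A worldspace point at the origin lands dead center in front of the camera.
ndc = project(view_proj, [0.0, 0.0, 0.0])
print(ndc)  # x and y are 0; z falls inside the [-1, 1] clip range
```

Same idea in Maya, just with the matrices coming from worldInverseMatrix and MFnCamera instead of being built by hand.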

Needed to get that out of my head and into a place I wouldn’t lose it. If that makes no sense to you, just pretend it wasn’t there. 🙂