Last week we were sitting around the office wondering if it would be possible to place ourselves in a game world with Autodesk’s Project Photofly. How cool would that be? We thought we might be able to scan one of us in a T-pose and then use Mixamo’s Auto-Rigging tool to create a rigged avatar. Then we could be running around a level as ourselves, right in front of our Kinect.

Sadly, it never went past stage one. It’s harder than you might think to hold a T-pose for 3 minutes while someone circles you twice snapping pictures at 10-degree intervals (72 shots in all).

I don’t have any pictures of the results; I came out looking like the Elephant Man. We’ll probably try again at some point, but in the meantime I made another scan of a pair of static objects that was turning out pretty well until I got to the back of the monkey.

All in all, Autodesk’s Photofly software is pretty cool, but it’s still lacking when it comes to iteration and debugging. You can manually tag matching points between photos to give it a better idea of how the images fit together, but it takes a while for the data to be processed in the cloud. It’s also unclear where some of the data comes from, or why portions of the background become part of the foreground mesh. If it gave better feedback about how that data ended up in the mesh, cleaning up the results would be a lot easier.

Also, if you own a camera with a sports video mode that captures at 60 FPS, you can just slowly circle the subject and then dump all the frames using ImageGrab, which is way easier than snapping individual pictures.
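
If you’d rather script the frame dump than use ImageGrab, here’s a minimal sketch with OpenCV that does the same job. The filename `scan.mp4` and the keep-every-sixth-frame step are placeholder assumptions, not values from my actual workflow:

```python
import cv2  # OpenCV; pip install opencv-python

# Dump every Nth frame of a video clip to numbered JPEGs.
# "scan.mp4" and step=6 are hypothetical values; adjust for your clip.
cap = cv2.VideoCapture("scan.mp4")
step = 6  # keep ~10 stills per second from 60 FPS footage
index = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    if index % step == 0:
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
        saved += 1
    index += 1
cap.release()
print(f"Wrote {saved} frames")
```

Keeping every sixth frame of 60 FPS footage works out to roughly ten stills per second, so a slow walk around the subject should still give plenty of overlapping views for the photogrammetry to chew on.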

I wonder if I could generate 3D art for a game jam…