Thursday, 20 June 2013


Get ready for one crazy night of partying on our beautiful rooftop!

Monday, 13 May 2013

IGA Hair Salon: How we do it.

  The IGA campaign features anywhere from 3 to 16 characters per spot.  All these CG actors need to drop by the virtual hair salon before they are allowed on set.  Here's what happened to Oceane Rabais and Bella Marinada at this stage.

1 - We always start with the character design made here at SHED as a reference.

  For the whole hair process we use a collection of in-house compounds derived from Kristinka ( ) and Melena ( ).  We also recently added a few nodes from Triggerfish Animation Studios ( ).

2 - We then search the internet for a real-life reference of what the hairdo could look like.  This is only used as a reference to capture certain real-life details.  Since we are going for a cartoonish look, we are not aiming to reproduce the reference exactly.  Of course, a picture of a duckface girl is always a plus.

3 - We proceed to create an emitter fitted to the head, from which we emit guide strands with ICE.  The guides get their shape from NURBS surfaces.  They are low in number (from 200 to 400), so they are easy to groom and, later, to simulate and cache to disk.  The idea is to capture the shape and length of the hairstyle.  The bright colors are there to help see what's going on.

4 - Next, we clone these strands, add an offset to their positions and apply a few ICE nodes to further the styling.  These nodes generally include randomizing and clumping, among others.  At this point we have around 90,000 strands, and the count can go up to 200,000.
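Conceptually, this clone-and-clump step works like the plain-Python sketch below. This is only an illustration of the idea, not our actual ICE graph; the clone count, jitter and clump strength are made-up parameters.

```python
import random

def clone_strands(guides, clones_per_guide=300, jitter=0.02, clump=0.6, seed=1):
    """Duplicate each low-count guide strand into many render strands.

    Each clone starts as the guide offset by a small random vector,
    then is pulled back toward the guide ("clumping") by `clump`.
    A guide is a list of (x, y, z) points."""
    rng = random.Random(seed)
    render_strands = []
    for guide in guides:
        for _ in range(clones_per_guide):
            # one random offset per clone, shared by all of its points
            off = [rng.uniform(-jitter, jitter) for _ in range(3)]
            strand = []
            for px, py, pz in guide:
                x, y, z = px + off[0], py + off[1], pz + off[2]
                # clumping: blend the clone back toward the guide point
                strand.append((x + (px - x) * clump,
                               y + (py - y) * clump,
                               z + (pz - z) * clump))
            render_strands.append(strand)
    return render_strands

# 300 guides x 300 clones each = 90,000 render strands
guides = [[(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)] for _ in range(300)]
print(len(clone_strands(guides)))  # 90000
```

In the real setup these operations run in parallel on the point cloud inside ICE, which is what makes strand counts in the 90,000 to 200,000 range practical to iterate on.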

5 - Then we repeat the process for the eyelashes and the eyebrows.  Throughout, the look is tweaked in a fast-rendering scene.

6 - Once happy with the results, we copy the point clouds and emitters to the "render model", where the point clouds await an ICE cache for the corresponding shot.  We use Alembic to transfer animation from the rig to the render model, and the ICE emitters are "cage deformed" to the Alembic geometries, because the hair styling happens too late in the process to include these emitters in the Alembic export.

7 - Back in the hair model, we convert the guide strands to mesh geometries.  We apply Syflex cloth simulation operators to these geometries to prepare for shot simulation, and we link the guide strands to the Syflex mesh so they inherit the simulation.

8 - Next comes shot-by-shot simulation and ICE caching of the guide strands (hair, lashes, eyebrows and beard if necessary).

9 - Before we pass the simulation caches down to the rendering department, we do a test render to make sure every frame works and there are no glitches or pops.  With final beauty renders sometimes taking close to 2 hours per frame, it is not a good thing to have to re-render a shot because a hair strand is out of place!  The scene we use for this renders quickly, with no complex shaders and only direct lighting.
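One way to automate part of this check, sketched here as plain Python rather than anything in our pipeline, is to scan the cached guide positions for sudden frame-to-frame jumps. The threshold is an arbitrary illustrative value:

```python
def find_pops(frames, max_step=0.5):
    """Flag frames where any cached point jumps farther than `max_step`
    between consecutive frames -- a cheap proxy for a visible glitch/pop.

    `frames` is a list of frames; each frame is a list of (x, y, z)
    point positions read from the simulation cache."""
    popped = []
    for f in range(1, len(frames)):
        for (ax, ay, az), (bx, by, bz) in zip(frames[f - 1], frames[f]):
            step = ((bx - ax) ** 2 + (by - ay) ** 2 + (bz - az) ** 2) ** 0.5
            if step > max_step:
                popped.append(f)
                break  # one bad point is enough to flag the frame
    return popped

# the point sits still, then suddenly jumps: frame 2 gets flagged
frames = [[(0.0, 0.0, 0.0)], [(0.1, 0.0, 0.0)], [(2.0, 0.0, 0.0)]]
print(find_pops(frames))  # [2]
```

A scan like this only narrows down where to look; the fast direct-lighting test render is still what confirms the frame visually.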

10 - Once we are happy with the look of the hair and the movement of the simulation, and most of all once we've resolved all the problems, we give the signal to the rendering department.  The hair point clouds are always automatically linked to the appropriate simulation cache for the current shot, so all they have to do is unhide the corresponding object in their scene, and voila!
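That automatic linking boils down to deriving the cache path from the current shot by convention. Here is a minimal sketch; the directory layout and naming convention are invented for illustration, not our actual ones:

```python
def cache_path_for(character, groom_part, shot,
                   root="//server/project/caches"):
    """Build the simulation-cache path for one hair point cloud.

    Because the path is a pure function of (shot, character, groom part),
    the render scene can resolve it without anyone linking caches by hand.
    The convention below is hypothetical."""
    return f"{root}/{shot}/{character}_{groom_part}_guides.icecache"

print(cache_path_for("oceane", "hair", "IGA_sp02_sh010"))
# //server/project/caches/IGA_sp02_sh010/oceane_hair_guides.icecache
```

With a scheme like this, pointing the render scene at a new shot is just a matter of changing the shot token.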

Luc Girard, our hair artist, was interviewed by the guys at TD Survival on Facebook. You can watch the video here:

Tuesday, 7 May 2013

IGA - Aide Gourmet

Here is the second spot of the 2013 IGA campaign. "Aide Gourmet" was about showing how much knowledge an IGA employee carries in his head. For this, we had to create a huge library filled with characters that suggest different ideas to the client. Here's some behind-the-scenes!

View the final spot in English here:

IGA - Aide Gourmet - EN from SHED on Vimeo.


This was a great opportunity to revisit our favorite section of the IGA store (Fruits & Vegetables). We went back to the previous campaign and retrieved all our assets. The biggest part of the job was updating all the shaders, since the assets were getting old and dusty!

We created this section so the camera could film in basically any direction without us having to rethink the assets. That way, we could reuse the same layout in both IGA commercials.

The library section was definitely trickier. In the end, with all the bookshelves, it contained more than 30,000 objects. We split it into different floors so we could easily manage each "character setup" of the sequence. As for the books, we used an ICE setup to populate the bookshelves.
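The bookshelf population can be sketched in plain Python like this. It is only a conceptual stand-in for the ICE setup; the book widths, gap and variant names are made up:

```python
import random

def fill_shelf(shelf_width, variants, gap=0.005, seed=0):
    """Line up randomly chosen book variants along one shelf until full.

    `variants` maps a book-model name to its width (in meters).
    Returns a list of (model_name, x_position) placements."""
    rng = random.Random(seed)
    placements, x = [], 0.0
    names = list(variants)
    while True:
        name = rng.choice(names)
        width = variants[name]
        if x + width > shelf_width:
            break  # no room left for this book
        placements.append((name, x))
        x += width + gap
    return placements

# three hypothetical book models on a 1 m shelf
books = {"book_a": 0.030, "book_b": 0.045, "book_c": 0.025}
shelf = fill_shelf(1.0, books)
print(len(shelf))  # number of book instances placed on this shelf
```

Repeating a procedure like this over every shelf is how a library can reach 30,000+ objects without anyone placing books by hand; in ICE the placements become instanced geometry on a point cloud.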

When the layout was done, it was exported as an Arnold stand-in (.ass) so the render scene wouldn't get heavy. That way, our lighting artist could position the lights based on a low-res mesh of the scene, and the high-res objects would be loaded only at render time.

You can see here the result of some of our internal dailies comments after a basic lighting setup was built to put the modelling in context.

Then of course came the shading of the environment; here's some quick shading WIP:


As usual, here are all our animation steps:

IGA Aide Gourmet :: Animation from SHED on Vimeo.


Our lighting pipeline goes like this: instead of beginning with the characters, we light the environment first and take it through the whole pipeline (lighting, render, comp). There are several good reasons for this. To name a few: our environment frames take much longer to render than our characters, so it's good to start them as early as possible in the production, and once we know the look of the background, we can light the characters more easily.

Here's a small breakdown of the first shot of the commercial:

IGA Aide Gourmet :: Behind-The-Scenes from SHED on Vimeo.

Some Work In Progress of the environments Lighting:

Arnold didn't support IES light files, so we created our own gobo filters to achieve a similar look

Some work in progress and final frames of the characters lighting

As you can see, we put the environment look dev in rotoscopy while we light the characters, so we get a quick visual representation of the final look of the shot

And of course, some fun technical failures

This spot was also tricky because of the number of characters. We had to create 14 of them. Of course, some are only seen in the background and don't need to be as polished as the main ones, but even setting those aside, we had 6 characters seen in close-up. That means all of those characters also needed proper hair grooming and simulation. In a couple of days, we will have a post about our hair creation pipeline, so stay tuned.

Some final full-resolution (2538x1080) frames (click on a thumbnail, then open the full-size image in a new tab to see the real resolution):

I hope you liked this behind-the-scenes! We definitely loved working on this commercial! Again, keep checking the blog, because in a few days we'll do a post about our hair pipeline!