SAUCE (Smart Assets for re-Use in Creative Environments) is a three-year EU Research and Innovation project bringing together 8 companies and research institutions to create a step change in how creative-industry companies re-use existing digital assets for future productions.
SAUCE will research, develop, pilot and demonstrate a set of professional tools and techniques for making content ‘smarter’, so that it is fully adaptive in a broad, unprecedented manner: adaptive to context (which facilitates re-use), to purpose (among or within industries), to the user (improving the viewing experience), and to the production environment (so that it is ‘future-proof’). The approach is based on research into the technologies presented below, including lightfield capture, semantic animation and semantic scene understanding.
These new technologies and tools will show that substantial cost reductions and efficiency gains are possible, facilitating the production of more content of higher quality and creativity, to the benefit of the competitiveness of the European creative industries.
This project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No 780470.
Throughout the SAUCE project, multiple shoots with the lightfield camera rig were realised. A first experimental production was shot with the Saarland University lightfield rig at the Filmakademie Baden-Württemberg studios in October 2018. The exemplary lightfield production UNFOLDING was shot with the same rig at a studio of Saarländischer Rundfunk in January 2019.
The SAUCE Lightfield Shoot example data, captured in October 2018, contains walk cycles and fire/smoke elements, among other material.
Semantic Animation describes the use of high-level descriptors (such as verbal commands) to direct the performances of special character assets that are capable of determining their own detailed low-level behaviours and scene interactions. The required semantic information can be created in a manual authoring process or through machine learning. It covers our work in both virtual production and physically based animation.
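As a minimal illustration of the idea, the following Python sketch maps a verbal command and an emotional qualifier to low-level behaviour parameters. The command vocabulary, the Behaviour structure and the resolve() lookup are hypothetical and stand in for the much richer authored or learned semantic information used in SAUCE.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Behaviour:
    clip: str      # low-level motion clip to play
    speed: float   # playback speed multiplier
    emotion: str   # emotional colouring of the performance

# Hand-authored semantic table; in SAUCE this information can also be
# produced by machine learning.
SEMANTIC_TABLE = {
    ("walk", "sad"):   Behaviour(clip="walk_cycle", speed=0.7, emotion="sad"),
    ("walk", "happy"): Behaviour(clip="walk_cycle", speed=1.2, emotion="happy"),
    ("run", None):     Behaviour(clip="run_cycle",  speed=1.0, emotion="neutral"),
}

def resolve(verb: str, mood: Optional[str] = None) -> Behaviour:
    """Resolve a verbal command such as 'walk sadly' into concrete
    low-level behaviour parameters."""
    behaviour = SEMANTIC_TABLE.get((verb, mood))
    if behaviour is None:  # fall back to a neutral default for unknown moods
        behaviour = SEMANTIC_TABLE.get(
            (verb, None), Behaviour(clip="idle", speed=1.0, emotion="neutral"))
    return behaviour

print(resolve("walk", "sad"))  # Behaviour(clip='walk_cycle', speed=0.7, emotion='sad')
```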
Objectives
In this context, Filmakademie has created and released the PHS Motion Library. It contains over an hour of bipedal reference motion-capture data and videos. The dataset was created with a particular focus on emotional variations of walk cycles.
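The sketch below shows one hypothetical way to index such a library of captures by emotion label. The directory layout and the file-naming convention assumed here are purely illustrative; consult the PHS Motion Library documentation for the actual structure.

```python
from collections import defaultdict
from pathlib import Path

def index_by_emotion(root: str) -> dict:
    """Group capture files below root by an emotion tag encoded in the
    file name, e.g. walk_sad_01.bvh -> 'sad' (hypothetical convention)."""
    index = defaultdict(list)
    for path in Path(root).rglob("*.bvh"):
        parts = path.stem.split("_")
        emotion = parts[1] if len(parts) > 1 else "neutral"
        index[emotion].append(path)
    return index

# Example: list all clips tagged as sad.
for clip in index_by_emotion("phs_motion_library")["sad"]:
    print(clip)
```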
Semantic Character Animation is realized with a dedicated focus on virtual production scenarios. Filmakademie developed an open character-streaming protocol for its VPET toolset: the entire character (including weights, skeleton, etc.) can be transferred to the tablets at runtime. The newly developed API then allows arbitrary external animation-solving engines to animate the character through streamed bone animations.
This interface extends the open architecture of VPET. Although the client is based on Unity, any host (providing the scene to the VPET clients) can be connected; we have demonstrated this with Foundry Katana in the past. The latest addition makes it possible to connect an arbitrary animation engine.
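To make the streaming idea concrete, here is a minimal Python sketch of an external engine publishing per-bone transforms to a host. The binary message layout (frame number followed by bone id, position and rotation packed as floats) and the ZeroMQ PUB transport are assumptions chosen for illustration; the actual VPET wire format and API are defined by the toolset itself.

```python
import struct
import zmq

def stream_pose(socket, frame, bones):
    """Send one animation frame; each bone is (id, (x, y, z), (qx, qy, qz, qw))."""
    payload = struct.pack("<i", frame)
    for bone_id, position, rotation in bones:
        payload += struct.pack("<i3f4f", bone_id, *position, *rotation)
    socket.send(payload)

context = zmq.Context()
publisher = context.socket(zmq.PUB)
publisher.bind("tcp://*:5557")  # port chosen arbitrarily for this sketch

# Stream a single identity pose for bone 0.
stream_pose(publisher, frame=0,
            bones=[(0, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0))])
```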
Semantic Scene Understanding allows labels to be extracted for the (smart) assets of a scene by means of machine learning. These labels can be used to prepare the assets for their dedicated production scenario or to dynamically create character animations.
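As a purely illustrative example of such labelling, the following Python sketch classifies a rendered asset thumbnail with an off-the-shelf pretrained network. The SAUCE models and label sets are production-specific; ImageNet classes via torchvision's ResNet-18 stand in here only to show the labelling step.

```python
import torch
from PIL import Image
from torchvision import models

# Pretrained classifier and its matching preprocessing pipeline.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

def label_asset(image_path: str, top_k: int = 3) -> list:
    """Return the top-k predicted labels for a rendered asset thumbnail."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        scores = model(image).softmax(dim=1)[0]
    top = scores.topk(top_k).indices
    return [weights.meta["categories"][int(i)] for i in top]

# Hypothetical thumbnail path, for illustration only.
print(label_asset("miniature_house_thumb.png"))
```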
Filmakademie has released the LOVE & 50 MEGATONS - Miniature City Assets. This 2020 VES-nominated production made heavy use of a virtual production scenario and of scanned miniature assets.