The challenge is to keep the app's media files small so that they can be served on demand from a content delivery network (CDN). This ensures that only content relevant to the current user is downloaded and that the main app size stays small. Instead of using an MP4 or another movie file format to convey the town 360 scene, I am using Google's WebP format, which I have previously validated with the 360 Flutter component. My aim is to keep this file as small as possible so it can be served quickly from a remote server. Other content, such as voice and 3D characters, which is part of the experience but not specific to a town, may be compiled into the build as assets, since it will not need to update as often as other interactions. There is also the option of streaming the non-town-specific video over the top of the underlying animated 360 WebP image.
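As a minimal sketch of that split, with an illustrative CDN URL and asset path rather than the real ones:

```dart
import 'package:flutter/widgets.dart';

/// A minimal sketch of the split described above: town-specific media
/// is fetched on demand from a CDN, while media common to all users
/// ships inside the app bundle. The URL and asset path are
/// illustrative placeholders, not the real ones.
ImageProvider mediaFor(String townId, {required bool townSpecific}) {
  if (townSpecific) {
    // Served on demand, keeping the installed app small.
    return NetworkImage('https://cdn.example.com/towns/$townId/scene.webp');
  }
  // Compiled into the build as an asset; updated only with app releases.
  return const AssetImage('assets/media/global_effect.webp');
}
```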
Using an Adobe Media Encoder plugin, I was able to export a short section of my 360 movie to a WebM file, the video counterpart of the WebP format. However, this format turned out not to be supported by the 360 component I am using. So, I am looking to convert to WebP instead, as I will not need embedded audio, which can be played from a separate file.
I found that Google provides decent documentation for WebP, as well as a number of command-line programs to help with conversion.
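For the record, here is a rough sketch of one conversion route, going via an image sequence. I'm assuming ffmpeg (not one of Google's tools) to dump the frames and Google's img2webp to reassemble them, with illustrative file names and settings rather than my exact ones:

```dart
import 'dart:io';

/// Rough sketch of one conversion route: WebM -> PNG frames -> animated
/// WebP. Assumes ffmpeg (not a Google tool) and Google's img2webp are
/// on the PATH; file names and settings are illustrative.
Future<void> main() async {
  Directory('frames').createSync();

  // 1. Dump the WebM movie to a numbered PNG sequence.
  await Process.run(
      'ffmpeg', ['-i', 'town_rain.webm', 'frames/frame_%04d.png']);

  // 2. Reassemble the frames as an animated WebP.
  //    -loop 0 requests infinite looping; -d 33 is ~30 fps per frame.
  final frames = Directory('frames')
      .listSync()
      .map((e) => e.path)
      .where((p) => p.endsWith('.png'))
      .toList()
    ..sort();
  final result = await Process.run('img2webp',
      ['-loop', '0', '-d', '33', ...frames, '-o', 'town_rain.webp']);
  stdout.write(result.stdout);
  stderr.write(result.stderr);
}
```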
The conversion worked nicely. I am now able to open up my town scene with movement; in this case, rainfall. However (there is always a however…), while the 360 plugin supports the standard Flutter Image widget, which in turn supports animated WebP images, I am so far unable to loop the animation: it stops after the final frame.
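One thing worth checking, as a hunch rather than a confirmed fix: animated WebP stores a loop count in the file itself, and a file saved with a count of 1 would play once and stop in any compliant viewer. Google's webpmux tool can rewrite the count (0 means loop forever); a minimal sketch, again shelling out from Dart:

```dart
import 'dart:io';

/// Sketch: rewrite the loop count of an existing animated WebP with
/// Google's webpmux (0 means loop forever). File names are illustrative.
Future<void> main() async {
  await Process.run('webpmux',
      ['-loop', '0', 'town_rain.webp', '-o', 'town_rain_looped.webp']);
}
```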
As the 360 visual content of the spatial narrative is town-based, the relevant media file is served from the CDN. It's important that the image files are as optimized as possible so they transfer quickly. Sound effects will be embedded within the app, as they are common to all users. Narrative speech, like the town-based visual content, depends on the user's language, but also on the path they take through the adventure based on their decision-making, so it also makes sense to serve this content from the CDN. Speech files do not need to load instantly, as they are usually triggered through an interaction. So, the final recipe involves playing embedded sound effects and an embedded visual effect while the main 360 media loads. The media is then cached, so the slight delay is only noticeable on the first playback. The narrative sound is then played on top of the 360 visual content at the appropriate moments.
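As a Flutter-level sketch of that recipe, assuming the third-party cached_network_image package and a hypothetical CDN URL (neither confirmed as my final choice):

```dart
import 'package:flutter/material.dart';
import 'package:cached_network_image/cached_network_image.dart';

/// A rough sketch of the loading recipe, assuming the
/// cached_network_image package and a hypothetical CDN URL.
class TownScene extends StatelessWidget {
  const TownScene({super.key});

  @override
  Widget build(BuildContext context) {
    return CachedNetworkImage(
      // Hypothetical CDN path; the real town media lives elsewhere.
      imageUrl: 'https://cdn.example.com/towns/bourg-en-bresse/rain.webp',
      // Bundled visual effect shown while the 360 media downloads;
      // the cache makes the delay a first-playback-only cost.
      placeholder: (context, url) =>
          Image.asset('assets/effects/static_flicker.webp'),
      errorWidget: (context, url, error) => const Icon(Icons.error),
    );
  }
}
```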
For the sound I have opted for an AI-generated voice, albeit with intonation and a Hollywood accent.
Noirscape experiments with a cross-reality approach to narrative. Participants search for and find fictional objects in their own physical home through the AR (augmented reality) feature. One of these objects is a door, its keyhole a doorway between an augmented view of one world and the entirely fictional world of another. Furthermore, Noirscape binds the narrative to physical spaces in the nearby town. In my case, and for the purpose of the pilot version of Noirscape, this is the French town of Bourg-en-Bresse.
I previously carried out fieldwork collecting 360 panoramic photographic content in and around the town, at over twenty locations selected not necessarily for their prominence but for their intrigue; whether that be a curious street name, a bygone brewery turned warehouse, or the site of a house where a celebrated artist and philosopher once lived. The Noirscape experience will take participants into their town, where segments of narrative are revealed through what I call flashbacks. These are a staple component of film noir, in which the protagonist, who is usually since deceased or condemned to life in prison, recounts events from the past which played an important role in their current whereabouts (or absence).
My challenge is to take my 360 content from the town and combine it with fictional noir narrative to give an augmented, combined immersive experience whereby the content is triggered only by visiting the place in the physical world, at which point a flashback from a fictional past occurs. To achieve this I decided to work with digital 3D character creation and animation. I had previously arranged to work with a friend who is also an actor, but with the pandemic it's complicated right now to meet up and spend enough quality time to get something filmed; I was planning to use my green screens and then take the content into the 360 editor using Adobe After Effects and Premiere Pro. One thing led to another and I opted for digital characters. I initially hoped I'd be able to use Adobe software, but they have discontinued Fuse, a character designer app that could be used with Mixamo, their recently acquired character animation service. I decided to use Reallusion's Character Creator instead, due to the vast amount of resources available. I used Headshot, their AI face generator, to base the character on my own face (although I've reworked it somewhat since!), and I imported custom objects like a fedora hat and set up the character in a black coat.
Next I took the character into iClone, Reallusion's 3D animation suite. The challenge with iClone was to bring in my 360 photo and create my own scene within the panorama. However, I ran into problems with this at first. While export to 360 panorama format is supported in iClone, I couldn't achieve this using photography without problems with the way the image was being wrapped, due to distortion at the poles of the sphere of the Skybox object. The Skybox object, in iClone and more generally in 3D design, is the imagery used to define the farthest visible details; this would normally be the sky, hence the name, but may also be a distant mountain range. Usually this would only be thought of as a backdrop, with far more focus on the foreground and midground detail. In my case the Skybox would be represented by a complete 360 photo, in which I would place 3D assets like a person, a vehicle, and so on.
I discussed the issue in the Reallusion support forum, and one solution put forward was to create my own 3D sphere object and set my 360 image as its texture. This did produce a slightly better outcome, but not satisfactory enough for what I need. The Reallusion community is fantastic nonetheless; what I am seeking to do is certainly not a typical use case by any means. One really good feature of iClone, and one of the key reasons for setting a photo as the Skybox, is calculating light within a scene. The software will identify from the image (in my case the 360 photo) which direction light is coming from, and therefore where to cast light and shade on the 3D assets added to the scene. So, although I chose not to use iClone with the 360 photo visible, I still used it for the lighting work.
Within iClone I applied some subtle animation to my character: his tie blows in the wind, and he blinks and moves a little while he waits for his rendez-vous. I applied rain effects with splashes and flickering light effects. In order to export my animation without the Skybox image, so that I could bring it into Adobe After Effects, I needed to export it as an image sequence to ensure a transparent background. The sequence is 30 seconds long at 30 frames per second, so the software rendered 900 images in total, which I then imported into After Effects.
Within After Effects, the first challenge was to align the two-dimensional representation of my sequence within a 360 environment. If I place it as-is, it will be forcibly bent into a banana shape as it is interpreted through a 360 viewer. So, to avoid this, it's important to alter the curvature of the 2D assets to align with the 360 image in equirectangular panoramic format.
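To make the geometry concrete: an equirectangular panorama maps longitude and latitude linearly to x and y, so a straight edge on a flat layer crosses changing latitudes and comes out curved. A small illustration of that mapping (my own sketch, not anything from the After Effects pipeline):

```dart
import 'dart:math' as math;

/// Where a view direction lands in an equirectangular panorama.
/// yaw is in [-pi, pi], pitch in [-pi/2, pi/2]; both in radians.
/// Longitude and latitude map linearly to x and y, which is why a
/// straight edge on a flat layer comes out curved once projected.
math.Point<double> equirectangular(
    double yaw, double pitch, int width, int height) {
  final x = (yaw + math.pi) / (2 * math.pi) * width; // longitude -> x
  final y = (math.pi / 2 - pitch) / math.pi * height; // latitude -> y
  return math.Point(x, y);
}
```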
I'm generally pleased with the outcome, and although it took quite a bit of time to get what I wanted, I now have a documented workflow for the process; I have a character ready to deploy to new scenarios and the know-how to create others much more quickly. A small issue I have with the end result is that the animation is too subtle to see properly on a mobile device, but this is easily tweaked. For now, I'm going to settle for what I have for the purpose of integrating with the app. The next step is to create a looping image-based version of the scene in WebP format, as I have shown in a previous post. I will then play the audio channel, with the voice narration and sound effects, via the app/device rather than the media file itself. This will keep the size of the media file down and allow me to serve the localised element (the view using footage from a specific town) separately from the global content: the spoken narrative.
I am currently working on a conceptual design document inspired by BJ Fogg, founder and director of the Stanford Behavior Design Lab.
I am looking to emulate the graphical feel and mood of classic film noir as much as possible, studying posters and film titles from the era (1930s–1950s) and within this genre.
The title for my product is Noirscape. This is a term often used to describe a typical cityscape, townscape or even interiorscape that carries the mood and feel of Film Noir. My concept involves spatialised narrative and is also inspired by my own enjoyment of 'escape' games. Thus Noirscape is a title that can be interpreted in the sense described above, a Noirscape (a type of place), or as Noir[e]scape (Noir Escape).
The concept is primarily an app-based experience. However, it will be sold as a boxed product. The box will include a number of elements which form part of the experience: a mysterious bundle including an old newspaper clipping; an ID card which is used to activate the app as well as provide hints during the game via its NFC (Near-Field Communication) functionality; and several other items, such as a pencil, a notebook and a set of abstract puzzle pieces. I have used 3D software to create an early concept design of what this might look like.
Early Concept Design for Noirscape
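On the NFC side, here is a rough sketch of how the app might listen for the boxed ID card. I'm assuming the community nfc_manager plugin, which I have not yet committed to, with a placeholder in place of the card's real payload:

```dart
import 'package:nfc_manager/nfc_manager.dart';

/// Sketch: listen for the boxed ID card and unlock the app when a
/// tag is read. Assumes the nfc_manager plugin; the payload handling
/// is a placeholder for whatever the real card will carry.
Future<void> listenForIdCard() async {
  await NfcManager.instance.startSession(
    onDiscovered: (NfcTag tag) async {
      final ndef = Ndef.from(tag);
      if (ndef != null) {
        // Activate the app / reveal a hint based on the tag contents.
        final message = await ndef.read();
        print('Read ${message.records.length} NDEF record(s)');
      }
      await NfcManager.instance.stopSession();
    },
  );
}
```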
I created a simple visual to convey what form this concept will take from a user's perspective. Although this is a boxed product, the main experience will be via a smartphone's innovative features, including GPS anchors, immersive 360-degree video with special effects, as well as the NFC feature described earlier.
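And a similar sketch of a GPS anchor, assuming the geolocator plugin and an illustrative coordinate rather than a real story location:

```dart
import 'package:geolocator/geolocator.dart';

/// Sketch: trigger a flashback when the player comes within ~25 m of
/// a story location. Assumes the geolocator plugin; the coordinates
/// are a placeholder, not a real Noirscape anchor.
void watchForFlashback(void Function() onArrive) {
  const anchorLat = 46.2056; // Bourg-en-Bresse, illustrative only
  const anchorLng = 5.2289;

  Geolocator.getPositionStream().listen((Position p) {
    final metres = Geolocator.distanceBetween(
        p.latitude, p.longitude, anchorLat, anchorLng);
    if (metres < 25) {
      onArrive(); // start the 360 flashback scene
    }
  });
}
```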
“Getting this visual into people’s heads early helps them start thinking about your concept in concrete ways”
BJ Fogg, p. 203
I have begun building a user storyboard to illustrate the user experience in a step-by-step manner. Rather than use doodles, I decided to make use of my Photoshop skills, combined with 3D object design and renderings, to produce a photo-montage-style story.
During my recent experiments I have been working with Adobe After Effects to apply special effects and animations to 360-degree video footage. My first experiments were quite successful: I was able to do what I set out to do, with a couple of areas needing further attention. One such area was the export format from After Effects, which was producing an unusual effect whereby the 360 image was not wrapping correctly.
In the image below, it can be seen how a 360 image looks when flattened out. There are a multitude of settings and formats within After Effects related to 360 formatting, some of which depend on the destination platform; for example, whether the video will be used in VR or as a Facebook or YouTube video. After many, many rounds of trial and error with different formats (I was unable to find sufficient documentation for my specific case within the Adobe support material), I was able to output a satisfactory video, complete with the colour filters and rain effects.
I took about 20 pictures at various locations around my town. I had previously used 360 video as the source format. However, I have since opted to work with 360 photos, to which I then apply animated effects and further layers of superimposed footage within the Adobe suite of software. This approach means I can opt for a higher definition of the underlying image source (a photo is higher resolution than a video frame), and it also means the final video size is much smaller. A further advantage is that it is more straightforward to edit out any unwanted elements. For example, I took one of my photographs into Adobe Photoshop and used its 3D tools to remove the camera tripod and the shadow it was casting.
Below, the image can be seen within Photoshop (software usually associated with flat images) with filters applied and the ability to move and edit in 360 mode.
The following short clip demonstrates how what was originally a static colour 360 image can be transformed into a more lively and ambient scene with just a few special effects.
Using this approach, I will be able to create some interesting scenes, adding further animated scenarios and providing clues to the player about how to proceed within the gameplay.