Spatial Narrative Content

Noirscape experiments with a cross-reality approach to narrative. Participants search for and find fictional objects in their own physical home through the AR (augmented reality) feature. One of these objects is a door, and its keyhole a doorway between an augmented view of one world and the entirely fictional world of another. Furthermore, Noirscape binds the narrative to physical spaces in the nearby town. In my case, and for the purposes of the pilot version of Noirscape, this is the French town of Bourg-en-Bresse.

I previously carried out fieldwork collecting 360 panoramic photographic content in and around the town, at over twenty locations selected not necessarily for their prominence but for their intrigue; whether that be a curious street name, a bygone brewery turned warehouse or the site of a house where a celebrated artist and philosopher once lived. The Noirscape experience will take participants into their town, where segments of narrative are revealed through what I call flashbacks. These are a staple component of film noir, where the protagonist, who is usually since deceased or condemned to a life in prison, recounts events from the past which played an important role in their current whereabouts (or absence).

Opening Sequence to Sunset Boulevard, Paramount, 1950

My challenge is to take my 360 content from the town and combine it with fictional noir narrative to give an augmented or combined immersive experience, whereby the content is triggered only by visiting the place in the physical world, at which point a flashback from a fictional past occurs. To achieve this I decided to work with digital 3D character creation and animation. I had previously arranged to work with a friend who is also an actor, but with the pandemic it's complicated right now to meet up and spend enough quality time to get something filmed; I was planning to use my green screens and then take the content into the 360 editor using Adobe After Effects and Premiere Pro. One thing led to another and I opted for digital characters. I initially hoped I'd be able to use Adobe software, but they have discontinued their Fuse product, a character designer app that could be used with Mixamo, their recently acquired character animation service. I decided to use Reallusion's Character Creator instead, due to the vast amount of resources available. I used Headshot, their AI face generator, to base the character on my own face (although I've reworked it somewhat since!), and I imported custom objects like a fedora hat and set up the character in a black coat.

A base character in Reallusion Character Creator software with an AI interpretation of my face projected onto it.
My clothed and hatted character in a T-pose
Closer shot

Experimenting with different predefined pose templates

Next I took the character into iClone, Reallusion's 3D animation suite. The challenge with iClone was to bring in my 360 photo and create my own scene within the panorama. However, I ran into problems with this at first. While export to 360 panorama format is supported in iClone, I couldn't achieve this using photography without experiencing problems with the way the image was being wrapped, due to distortion at the poles of the sphere of the Skybox object. The Skybox object, in iClone and more generally in 3D design, is the imagery used to define the farthest visible details; this would normally be the sky, hence the name, but may also be a distant mountain range. Usually this would only be thought of as a backdrop, with far more focus on the foreground and midground detail. In my case the Skybox would be represented by a complete 360 photo, in which I would place 3D assets like a person, a vehicle, etc.

Example of 0 degrees (ground) when the 360 photo is wrapped within Photoshop

Ground shot taken in iClone with the same 360 photo set as the Skybox image

I discussed the issue in the Reallusion support forum, and one solution put forward was to create my own 3D sphere object and set my 360 image as its texture. This did produce a slightly better outcome, but not satisfactory enough for what I need. The Reallusion support is fantastic nonetheless; what I am seeking to do is certainly not a typical use case by any means. One really good feature of iClone, and one of the key reasons for setting a photo as the Skybox, is for calculating light within a scene. The iClone software will identify from the image, in my case the 360 photo, which direction light is coming from, and therefore where to cast light and shade on the 3D assets added to the scene. So, although I chose not to use iClone with the 360 photo visible, I still used it for the lighting work.

Scene from within iClone with my 3D character and other assets placed within my photo.

Within iClone I applied some subtle animation to my character; his tie blows in the wind and he blinks and moves a little while he waits for his rendezvous. I applied rain effects with splashes and flickering light effects. In order to export my animation without the Skybox image, so that I could bring it into Adobe After Effects, I needed to export it as an image sequence to ensure a transparent background. The sequence is 30 seconds long at 30 frames per second, so the software rendered 900 images in total, which I then imported into After Effects.

Within After Effects the first challenge was to align the two-dimensional representation of my sequence within a 360 environment. If I place it as-is then it will be forcibly bent into a banana shape as it is interpreted through a 360 viewer. So, to avoid this, it's important to alter the curvature of the 2D assets to align with the 360 image in equirectangular panoramic format.
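
As a side note of my own (a general property of the format, not anything specific to After Effects): an equirectangular image maps viewing direction to pixels linearly – yaw θ to x = W(θ + π)/2π and pitch φ to y = H(π/2 − φ)/π – so a layer pasted flat onto the image bends once the image is wrapped back onto the sphere. The VR Plane to Sphere warp pre-distorts the layer so that it reads as flat from the viewer's position.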

The 2D animation curvature is altered to match that of the 360 scene so that when wrapped into a sphere it looks correct.
My Animation positioned within the 360 photo with field of view warping to match 360 sphere position.
Adobe After Effects settings, using the VR Plane to Sphere effect to warp the field of view.

I'm generally pleased with the outcome, and although it took quite a bit of time to get what I wanted, I now have a documented workflow for the process; I have a character ready to deploy to new scenarios and the know-how to create others much more quickly. A small issue I have with the end result is that the animation is too subtle to really see properly on a mobile device, but this is easily tweaked. For now, I'm going to settle with what I have for the purpose of integrating with the app. The next step is to create a looping image-based version of the scene in WebP format, as I have shown in a previous post. I will then play the audio channel, with the voice narration and sound effects, via the app/device rather than the media file itself. This will keep the size of the media file down and allow me to serve the localised element (the view using footage from a specific town) separately from the global content – the spoken narrative.

Mobile phone view of interactive scene
Interactive YouTube Version
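
To make the media/audio split concrete, here is a minimal sketch of how the app side could pair the looping WebP with the separately served narration. It assumes the audioplayers package (any comparable audio plugin would do) and Flutter's built-in support for animated WebP in the Image widget; the widget and field names are placeholders of my own, not final app code.

import 'package:flutter/material.dart';
import 'package:audioplayers/audioplayers.dart';

// Sketch only: loops the town-specific WebP scene while the shared
// narration/sound-effects track is streamed separately by the app.
class FlashbackScene extends StatefulWidget {
  final String sceneUrl;     // localised looping WebP (per-town footage)
  final String narrationUrl; // global audio: narration and sound effects

  const FlashbackScene(
      {super.key, required this.sceneUrl, required this.narrationUrl});

  @override
  State<FlashbackScene> createState() => _FlashbackSceneState();
}

class _FlashbackSceneState extends State<FlashbackScene> {
  final _player = AudioPlayer();

  @override
  void initState() {
    super.initState();
    _player.play(UrlSource(widget.narrationUrl)); // audio served by the app
  }

  @override
  void dispose() {
    _player.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    // Flutter's Image widget decodes and loops animated WebP natively.
    return Image.network(widget.sceneUrl, fit: BoxFit.cover);
  }
}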

Narrative Props & Backstory

While the narrative of Noirscape is, in itself, to be interpreted at will by its participants, the structure of the environment in which the experience takes place will of course provide the underlying props and backstory, along with the opportunity for interactivity with these props, people and different types of spaces – fictional, semi-fictional and real-world.

To help anchor my own understanding of the interactive aspect, I have put together a flowchart to represent a snapshot of the app's experience flow. This diagram illustrates how the narrative interactivity moves between levels of fiction, semi-fiction and real-world spaces while providing the participant with the props, events and actions that make the app 'playable'.

Flow chart to illustrate process of moving between ‘realities’ within a single narrative.

While this diagram only illustrates a section of the app's world, it would represent a satisfactory achievement if I am able to build the functionality over the next four to six weeks. From then on I would be able to focus on adding new interactive items, 360 events and so on. The functionality to power the above snapshot is to be fed by cloud-based data; hence I am able to add new elements on the fly, without having to bake them into the app itself and push out many releases. A sketch of what that could look like follows below.
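
Here is a rough sketch of the cloud-fed side (the endpoint URL and payload shape are placeholders of my own, not the actual backend), assuming the standard http package:

import 'dart:convert';
import 'package:http/http.dart' as http;

// Pull the current set of narrative elements from the cloud so that new
// props, 360 events and actions can go live without a new app release.
Future<List<Map<String, dynamic>>> fetchNarrativeElements() async {
  final response =
      await http.get(Uri.parse('https://example.com/noirscape/elements.json'));
  if (response.statusCode != 200) {
    throw Exception('Failed to load narrative elements');
  }
  return List<Map<String, dynamic>>.from(jsonDecode(response.body));
}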

Found Objects

A common theme throughout the experience is the acquisition of Noirscape objects. These objects can be found in the Augmented Reality (AR) part of the app, in the immersive 360 world, as well as outside in the local town.

The objects will each have their own unique properties and actions. Actions represent what a participant can do with each item. For example, as per the flow chart, the participant will see an old-fashioned, life-sized doorway appear in their own home within the AR part of the app. They will learn that the door is locked, as each item has a description. A common action in choose-your-own-adventure games is 'examine'. The participant may examine the door. The door object will be pre-configured to possess this 'examinability' quality, along with a consequence of carrying out the action. In the case of the door, the participant will find a keyhole as a consequence of the examine action. At some point they will surely find a key with which to 'open' the door. Until they find the key, though, it remains shut. The keyhole is also treated as an interactive object which the participant has found, albeit one that cannot be removed from the door, its parent object. However, the participant can 'look through' the keyhole, and this will reveal a keyhole view of a fictional space on the other side of the door.

An object (door) with available actions
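
To illustrate the mechanic, here is a rough sketch of how such an object might be modelled, with actions attached as data and the keyhole as a child of the door; all names are illustrative rather than the app's actual code.

// A found object with a description, available actions and child objects.
class NoirObject {
  final String name;
  final String description;
  final Map<String, String Function()> actions; // action name -> consequence
  final List<NoirObject> children; // e.g. the keyhole belongs to the door
  bool locked;

  NoirObject(this.name, this.description,
      {this.actions = const {}, this.children = const [], this.locked = false});
}

NoirObject buildDoor() {
  final keyhole = NoirObject('keyhole', 'A worn brass keyhole.', actions: {
    'look through': () => 'A keyhole view of the fictional space beyond.',
  });
  late final NoirObject door;
  door = NoirObject('door', 'An old-fashioned, life-sized wooden door.',
      locked: true,
      children: [keyhole],
      actions: {
        'examine': () => 'You notice a keyhole.', // reveals the child object
        'open': () => door.locked
            ? 'The door is locked. Perhaps there is a key somewhere…'
            : 'The door creaks open…',
      });
  return door;
}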

This transitioning through nuances of fiction and reality is an important aspect of Noirscape, very much inspired by my research into the success of Bandersnatch, but also by the creative influence of other filmmakers, and indeed videogame makers and thinkers, who are keen to explore and exploit these blurred boundaries.

Refs/Resources

Text Adventure Game Design in 2020, Chris Ainsley
https://medium.com/@model_train/text-adventure-game-design-in-2020-608528ac8bda

ARCore with Flutter Tests

In order to get to the bottom of the issues I have been experiencing with ARCore for Flutter, I decided to test and document different combinations of factors. Given that there is a multitude of possible test cases, with different types of 3D object file, different versions of Sceneform and plenty of other variables, I wanted to test each case one by one and record my findings, as I'm convinced this is doable with Flutter and ARCore.

V1.15 Gradle

implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.15.0'
implementation 'com.google.ar.sceneform:core:1.15.0'
implementation 'com.google.ar:core:1.15.0'
implementation 'com.android.support:multidex:1.0.3'

Tests with KhronosGroup Duck model
https://github.com/KhronosGroup/glTF-Sample-Models/raw/master/2.0/Duck/
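
Each of the tests below loads the model the same way, with only the URL changing. A minimal sketch, assuming the arcore_flutter_plugin API (node name, position and scale are arbitrary values of mine):

import 'package:arcore_flutter_plugin/arcore_flutter_plugin.dart';
import 'package:vector_math/vector_math_64.dart' as vector;

late ArCoreController arCoreController;

// Wired up as the onArCoreViewCreated callback of an ArCoreView widget;
// each test simply swaps objectUrl for the file under test.
void onArCoreViewCreated(ArCoreController controller) {
  arCoreController = controller;
  final node = ArCoreReferenceNode(
    name: 'test-model',
    objectUrl:
        'https://raw.githubusercontent.com/KhronosGroup/glTF-Sample-Models/master/2.0/Duck/glTF/Duck.gltf',
    position: vector.Vector3(0.0, 0.0, -1.0),
    scale: vector.Vector3(0.2, 0.2, 0.2),
  );
  arCoreController.addArCoreNode(node);
}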

1. Test GLTF with Draco compression

https://raw.githubusercontent.com/KhronosGroup/glTF-Sample-Models/master/2.0/Duck/glTF-Draco/Duck.gltf

Remote gltf (GitHub): no (causes the app to crash)

signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x4

2. Test Binary GLB

https://github.com/KhronosGroup/glTF-Sample-Models/blob/master/2.0/Duck/glTF-Binary/Duck.glb?raw=true

Remote glb (GitHub): no, not renderable

E/ModelRenderable( 8021): Unable to load Renderable registryId='https://github.com/KhronosGroup/glTF-Sample-Models/blob/master/2.0/Duck/glTF-Binary/Duck.glb?raw=true'

3. Test Embedded gltf

https://raw.githubusercontent.com/KhronosGroup/glTF-Sample-Models/master/2.0/Duck/glTF-Embedded/Duck.gltf

Remote gltf (GitHub): no, can't load embedded URIs

Error: FileNotFoundException
Possible solution: a PR for embedded base64 extraction?


4. Test Quantized gltf

https://raw.githubusercontent.com/KhronosGroup/glTF-Sample-Models/master/2.0/Duck/glTF-Quantized/Duck.gltf

Remote gltf (GitHub): no, fails silently

5. Test Original Format gltf

https://raw.githubusercontent.com/KhronosGroup/glTF-Sample-Models/master/2.0/Duck/glTF/Duck.gltf
Remote gltf (GitHub): yes, WORKS!

6. Test Original Format gltf (alternative model)

https://raw.githubusercontent.com/KhronosGroup/glTF-Sample-Models/master/2.0/AntiqueCamera/glTF/AntiqueCamera.gltf

Remote gltf (GitHub): no, fails silently

Info from library: I/com.difrancescogianmarco.arcore_flutter_plugin.ArCoreView( 8021):  addArCoreNode

ACTION

Step through the source code of the Flutter ARCore plugin for silent exceptions, to understand why test 6 fails when it is the same format as test 5, which passes. I turned on debugging in the Flutter library (it is a property of the controller, but marked as final, so not manageable from the app level) and… the 3D object appeared! Without explanation at this stage. I was expecting a fail and some useful debugging info, but it suddenly works, and it's quite a relief to see a decent, detailed 3D object in AR in Flutter (one that isn't a rubber duck!).

The only additional debug during this test:

I/com.difrancescogianmarco.arcore_flutter_plugin.ArCoreView( 8021):  addArCoreNode
I/com.difrancescogianmarco.arcore_flutter_plugin.ArCoreView( 8021): addNodeWithAnchor inserted test
I/com.difrancescogianmarco.arcore_flutter_plugin.ArCoreView( 8021): addNodeToSceneWithGeometry: NOT PARENT_NODE_NAME

7. Test another alternative remote gltf in debug mode

https://github.com/KhronosGroup/glTF-Sample-Models/raw/master/2.0/Buggy/glTF/Buggy.gltf

Fails silently.

Disappointing, because the previous alternative with debug mode on loaded OK.

https://raw.githubusercontent.com/KhronosGroup/glTF-Sample-Models/master/2.0/Lantern/glTF/Lantern.gltf

TEST: OK – WORKS!

I've tested some custom objects as well, including one which I converted myself from OBJ format, and on the whole they work. Some are slow due to download speeds, but this is potentially resolved if I can serve them from localhost.

However, with one example there are persistent cache-related errors that I may need to look into.
Caused by:

java.io.FileNotFoundException: /data/user/0/dev.matwright.noirscape/cache/1612617961633-0/textures/Material.001_baseColor.jpeg: open failed: ENOENT (No such file or directory)

No textures are found within the cache directory. I suspect this is related to my dev environment, though, and I'll come back to this particular model to retest that theory.

In terms of download speed, I can also reduce the sizes of textures. There is plenty of room for reduction for use on a mobile screen.

Conceptual Design

I am currently working on a conceptual design document inspired by BJ Fogg, founder and director of the Stanford Behavior Design Lab.

I am looking to emulate the graphical feel and mood of classic film noir as much as possible, studying posters and film titles from the era (1930s–1950s) and within this genre.

Product Title

The title for my product is Noirscape. This is a term often used to describe a typical cityscape, townscape or even interiorscape that carries the mood and feel of film noir. My concept involves spatialised narrative and is also inspired by my own enjoyment of 'escape' games. Thus Noirscape is a title that can be interpreted in the sense described above – a noirscape (a type of place) – or as Noir[e]scape (Noir Escape).

Branding – boxset label

Product Format

The concept is primarily an app-based experience. However, it will be sold as a boxed product. The box will include a number of elements which form part of this experience – a mysterious bundle including an old newspaper clipping; an ID card, which is used to activate the app as well as provide hints during the game via its NFC (Near-Field Communication) functionality; and several other items: a pencil, a notebook and a set of abstract puzzle pieces. I have used 3D software to create an early concept design of what this might look like.

Early Concept Design for Noirscape
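
For the NFC element, a rough sketch of how the ID card could activate the app, assuming the nfc_manager Flutter package (the token handling and function names are placeholders, not the final design):

import 'dart:convert';
import 'package:nfc_manager/nfc_manager.dart';

// Start an NFC session and wait for the boxed ID card to be tapped.
Future<void> activateWithIdCard(void Function(String token) onActivated) async {
  if (!await NfcManager.instance.isAvailable()) return;
  NfcManager.instance.startSession(onDiscovered: (NfcTag tag) async {
    final ndef = Ndef.from(tag); // the card is assumed to carry an NDEF message
    if (ndef == null) return;
    final message = await ndef.read();
    final payload = message.records.first.payload;
    // NDEF text records begin with a status byte and a language code;
    // skip past those to reach the activation token itself.
    final token = utf8.decode(payload.sublist(1 + (payload[0] & 0x3F)));
    onActivated(token);
    NfcManager.instance.stopSession();
  });
}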

Industrial Design

I created a simple visual to convey what form this concept will take from a user's perspective. Although this is a boxed product, the main experience will be via a smartphone's innovative features, including GPS anchors and immersive 360-degree video with special effects, as well as the NFC feature described earlier.

“Getting this visual into people’s heads early helps them start thinking about your concept in concrete ways”

BJ Fogg, p. 203

Storyboarding

I have begun building a user storyboard to illustrate the user experience in a step-by-step manner. Rather than use doodles, I decided to make use of my Photoshop skills, combined with 3D object design and renderings, to produce a photo-montage-style story.