Noirscape Swipe Card Development

Noirscape uses NFC (Near Field Communication) technology to add some fun to the sign-in process. A detective ID card is among the items that will be included in the boxed version of the game, but the cards will also be available via the future website and online merchandise store.

An NFC tag is a tiny, low-cost, batteryless radio transmitter which allows app developers to embed the featherweight, wafer-thin device into clothing, toys, posters and just about anything else. In my case I am using a business card format: the actual NFC component sits inside the plastic and is only about an inch in diameter. For the purpose of my testing I have created my own version using card and a stick-on NFC tag.

Prototype ID cards used for the gamified sign-in process for the Noirscape experience.

Near Field Communication (NFC) is a standards-based short-range wireless connectivity technology that makes life easier and more convenient for consumers around the world by making it simpler to make transactions, exchange digital content, and connect electronic devices with a touch. NFC is compatible with hundreds of millions of contactless cards and readers already deployed worldwide.

https://nfc-forum.org/what-is-nfc/ [01/04/21]

The ID card for Noirscape contains a unique reference code that may only be used once and is used in conjunction with anonymous sign-in through Google Firebase integration. It is a gamification feature rather than a security element, although there are plenty of possibilities for future feature development, including ‘friending’ other participants in the real world by allowing them to scan one another’s cards.
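To give a rough idea of how this hangs together, here is a minimal sketch of the redeem step. It assumes the firebase_auth and cloud_firestore packages, and the collection and field names (‘card_codes’, ‘claimedBy’) are placeholders rather than the app’s real schema:

import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:firebase_auth/firebase_auth.dart';

// Signs the user in anonymously, then redeems the one-time code read from
// the NFC card. Collection and field names are illustrative only.
Future<bool> redeemCardCode(String cardCode) async {
  final credential = await FirebaseAuth.instance.signInAnonymously();
  final uid = credential.user!.uid;

  final codeRef =
      FirebaseFirestore.instance.collection('card_codes').doc(cardCode);

  // Claim the code inside a transaction so it can only ever be used once.
  return FirebaseFirestore.instance.runTransaction((tx) async {
    final snapshot = await tx.get(codeRef);
    if (!snapshot.exists || snapshot.data()!['claimedBy'] != null) {
      return false; // unknown or already-used code
    }
    tx.update(codeRef, {
      'claimedBy': uid,
      'claimedAt': FieldValue.serverTimestamp(),
    });
    return true;
  });
}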

Video walkthrough of the sign-in sequence upon first installation of the app.

AR Gamification: Snaps

Finding hidden objects in an augmented reality (AR) experience is fun, especially when the items have interactive qualities which trigger new content, clues and narrative. Furthermore, given the quality of 3D assets, AR lighting, shadow and reflection prediction and generation, coupled with the Noirscape black & white filter, it’s a great little feature to be able to capture an augmented view within the home and share it with others on social media or messaging.

The user may take a picture of their AR scene and/or reposition the object.
The user then reframes the image to their liking.

The Noirscape snap can be shared across social media or installed apps using the device’s native Share API.
The image is then stored in the app’s inventory for the current user and may be shared to social media.

This functionality allows the participant to take an AR photo snap of their newly found fictional objects within the real-world space of their home. The image can then be shared on social media or sent to a recipient using the device’s sharing API. This provides content for the user to share, a memory from the experience and marketing value for the app brand.
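The sharing step itself is only a few lines; a minimal sketch, assuming a recent version of the share_plus package (the file path and caption are placeholders):

import 'package:share_plus/share_plus.dart';

// Shares a captured AR snap via the device's native share sheet.
// snapPath is assumed to point at a PNG saved by the capture step.
Future<void> shareSnap(String snapPath) async {
  await Share.shareXFiles(
    [XFile(snapPath)],
    text: 'Found in Noirscape #Noirscape',
  );
}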

Noirscape AR shot shared to Twitter from within the App

In the above image you can see the dots which represent the AR Plane object (the horizontal plane detected in the real environment). Using the Flutter ArCore package there was no way to hide these once the AR object had been placed, and they spoil the photograph somewhat. Fortunately I was able to fork the main package, add the functionality and send a pull request to the package maintainer. So I am now able to hide these dots just prior to taking the capture.
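In outline, the capture sequence now looks something like the sketch below. Both method names are hypothetical stand-ins: the plane-renderer toggle represents the functionality added in my fork, and takeSnapshot represents whatever capture routine the app uses:

import 'package:arcore_flutter_plugin/arcore_flutter_plugin.dart';

// Hypothetical sketch: togglePlaneRenderer() stands in for the fork's
// plane-dot toggle, takeSnapshot() for the app's own capture step.
Future<void> captureCleanSnap(
    ArCoreController controller, Future<void> Function() takeSnapshot) async {
  await controller.togglePlaneRenderer(); // hide the detection dots
  await takeSnapshot();                   // capture the AR view
  await controller.togglePlaneRenderer(); // restore plane rendering
}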

One of the advantages of Flutter is the great community and the opportunity for contributing to packages.

360 Flashback Interaction

The challenge is to keep the app’s media files small so that they can be served on demand from a content delivery network (CDN). This ensures that only content relevant to the current user is downloaded and that the main app size is kept small. Instead of using an MP4 or other type of movie file to convey the town 360 scene, I am using Google’s WebP format, which I have previously validated within the 360 Flutter component. My aim is to keep this file as small as possible so it can be quickly served from a remote server. Other content, such as voice and 3D characters, which is part of the experience but not specific to a town, may be compiled as assets with the build, as it will not need to update as often as other interactions. There is also the option to stream the non-town-specific video over the underlying 360 animated WebP image file.

Using an Adobe Media Encoder plugin I was able to export a short section of my 360 movie as a WebM file, the video counterpart of the WebP format. However, this format turned out not to be supported by the 360 component I am using, so I am looking to convert to animated WebP instead; I will not need embedded audio, which can be played from a separate file.

I found that Google provides decent documentation for WebP, as well as a number of command-line programs to help with the conversion.

https://developers.google.com/speed/webp/docs/using

The conversion worked nicely, and I am now able to open up my town scene with movement, in this case rainfall. However (there is always a however…), while the 360 plugin supports the standard Flutter Image widget, which in turn supports animated WebP images, I have so far been unable to loop the animation, so it stops after the final frame.

The tools I am using can be downloaded here:

https://storage.googleapis.com/downloads.webmproject.org/releases/webp/index.html

Instructions for configuring the lightweight image sequence with Google’s tools, followed by deployment to the CDN

Adding Sound and Interaction

As the 360 visual content of the spatial narrative is town based, the relevant media file is served from the CDN. It’s important that the image files are as optimized as possible so they transfer quickly. Sound effects will be embedded within the app, as they are common to all users, whereas narrative speech, like the town-based visual content, depends on the user’s language and also on the path they take through the adventure as a result of their decision making. So it also makes sense to serve this content from the CDN. Speech files do not need to load instantly, as they are usually triggered through an interaction. The final recipe therefore involves playing embedded sound effects and an embedded visual effect while the main 360 media loads. The media is then cached, so the slight delay is only noticeable on the first playback. The narrative sound is then played on top of the 360 visual content at the appropriate moments.
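A minimal sketch of that load-and-cache step, assuming the flutter_cache_manager package for the CDN fetch and just_audio for the embedded effect (the URL pattern and asset names are placeholders):

import 'dart:io';

import 'package:flutter_cache_manager/flutter_cache_manager.dart';
import 'package:just_audio/just_audio.dart';

// Fetches the town-specific 360 WebP from the CDN (cached after the first
// download) while an embedded sound effect covers the wait.
Future<File> loadTownScene(String townId) async {
  final sfx = AudioPlayer();
  await sfx.setAsset('assets/audio/rain_loop.mp3'); // bundled effect
  sfx.play(); // deliberately not awaited: plays while the download continues

  final url = 'https://cdn.example.com/noirscape/towns/$townId/scene.webp';
  final file = await DefaultCacheManager().getSingleFile(url);

  await sfx.stop();
  return file; // handed on to the 360 viewer widget
}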

For the sound I have opted for an AI-generated voice, albeit one with intonation and a Hollywood accent.

Interactive 360 Scene Demo

Interactive Rotary Phone Demo

I added some final sparkle to my vintage rotary dialler. Using various calculations I am able to determine which number has been dialled in a way that quite closely emulates the real thing. For example, a digit is only registered when the dial is turned fully to the catch.

I also added sound effects. Each digit has its own sound file to correspond with the length of the rotation, which of course varies for each number. The sound effect also has to respond when the dial is released too early and therefore has an irregular return rotation timespan.

Lastly, I added some texture to the background and some text using a typewriter-style font, which shows the participant’s name and their own telephone number within the experience.

I had great fun putting this into the hands of my two children, aged 9 and 12; they had no idea how to work a rotary dial phone. Who needs to invent enigmas and puzzles when we can just emulate the trials and tribulations of analog technology!

I also shared a demo with friends and colleagues, as well as on social media, and I was quite surprised by the number of people suggesting this could become an app in its own right. I haven’t researched what’s already in the app stores, but an interesting idea would certainly be to make a standalone rotary dialler app which interacts with the phone’s call API to initiate real calls. I am already thinking about the fun that could be had creating different versions: the 1970s, the 1890s, the 1950s.

Anyway, here is a video I put together to demonstrate the feature in action.

https://www.youtube.com/watch?v=Kr54QQu3U4Q

Custom Animated Interactivity #Dialler

Problem: Calculating the button which has been pressed within a Painted Canvas

I created a telephone dial using Flutter’s CustomPainter API. This permitted me to draw the various elements onto the screen relative to the size of the screen. The problem now is: how to detect which number has been dialled? There are no workable gesture-detection strategies within the custom painter that allow me to detect touches on individual elements, such as the numbers. I can, however, detect when the canvas is touched using the hitTest method of the CustomPainter class. This provides me with the screen coordinates (an Offset) of the x and y position of the ‘hit’, the position where the user has touched the screen.
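The exact wiring isn’t shown here, but one way of capturing that offset (an assumption on my part: this sketch uses a GestureDetector wrapped around the CustomPaint rather than the painter’s hitTest directly) looks roughly like this:

import 'package:flutter/material.dart';

// Wraps the dial's CustomPaint in a GestureDetector and hands the local
// touch offset, plus the canvas size, to whatever resolves the digit.
Widget buildDialCanvas(CustomPainter dialPainter,
    void Function(Offset position, Size canvasSize) onTouch) {
  return LayoutBuilder(builder: (context, constraints) {
    final size = Size(constraints.maxWidth, constraints.maxWidth); // square canvas
    return GestureDetector(
      onPanDown: (details) => onTouch(details.localPosition, size),
      child: CustomPaint(size: size, painter: dialPainter),
    );
  });
}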

Now, to be able to work out how that position corresponds to a number on the dialler, I need to do some maths. Given the size of the canvas, I can easily calculate the center by dividing the x/y values by two. To illustrate this I used the CustomPainter to draw two red circles on my canvas: one at the center and one where a hit occurs.

Initial Dial Drawing with center point and touched point highlighted

Now, if I can calculate the angle from the calculated center to the position of the hit, I will be able to map the value to the known position of a number on the dial.

In the above example, the tapped area shown on the left, at the number seven, is 50 (x) and 240 (y). It’s important to note that the values used by Flutter are logical pixels and not physical ones, so these values don’t correspond to the actual screen resolution. In this case, the red spot at dial seven is 50 logical pixels along the x axis and 240 logical pixels along the y axis, both relative to the top-left of the canvas.

The center, in my case, working on a physical Pixel 3 XL, is at (205, 205); the canvas is therefore a square with sides of 410 logical pixels.

I can now calculate the angle from the center of the canvas to the hit point using Dart’s built-in atan2 function:

double rads = atan2(position.dy - (canvasSize.height / 2),
    position.dx - (canvasSize.width / 2));

ref : https://api.flutter.dev/flutter/dart-math/atan2.html

Using the x and y coordinates above, this equates to the following:

rads = atan2(240 - 205, 50 - 205);

Which returns 2.92 radians (rounded to 2 decimal places)
To convert the angle to degrees, the vector_math package provides a degrees() helper. Otherwise the formula is:

(2.92 * 180.0) / pi

Which returns 167 degrees and is confirmed by Google:

Google Calculator to validate Radians and Degrees conversion
The Dial with a protractor image from Mathsisfun.com superimposed to show the angles look right

Dart’s atan2 function returns a value in the range -pi (3.14159…) to +pi, and pi radians equals 180 degrees, so the final value in degrees will always be between -180 and +180. The sign (+/-) therefore determines whether the hit was in the upper or lower half of the dial.
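Putting the angle maths together, the hit-to-digit step might look something like the sketch below. It is not the app’s actual code: the digit-to-angle table is passed in, because it depends entirely on how the dial face is drawn, and the tolerance value is a guess:

import 'dart:math';
import 'dart:ui';

// Converts a touch offset into the nearest dial digit, or null if the touch
// is too far from any finger hole. digitAngles maps each digit to its angle
// in degrees (0-360, measured with atan2 in Flutter's y-down coordinates).
int? digitAtOffset(Offset position, Size canvasSize,
    Map<int, double> digitAngles, {double tolerance = 15}) {
  final rads = atan2(position.dy - canvasSize.height / 2,
      position.dx - canvasSize.width / 2);
  final degrees = (rads * 180 / pi + 360) % 360; // normalise -pi..pi to 0..360

  int? closest;
  var best = double.infinity;
  digitAngles.forEach((digit, angle) {
    final d = (degrees - angle).abs();
    final dist = min(d, 360 - d); // shortest distance across the 0/360 wrap
    if (dist < best) {
      best = dist;
      closest = digit;
    }
  });
  return best <= tolerance ? closest : null;
}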

I can now calculate which number is being dialled based on the initial hit. The next step is to prevent the dial from turning beyond the stopper. The stopper was used as a catch on rotary phones to indicate the desired number before the dial automatically returned to its original position. The caller would place a finger inside the circular opening above the desired number and rotate the dial clockwise until they reached the stopper. The numbers remained stationary, and the angle of rotation was used to calculate each digit of the telephone number to call.

Interactive Components

The Noirscape participant collects fictional objects in the augmented and immersive reality parts of the experience. Beyond the fun of finding these items and seeing them appear in situ on your living room table, they also provide important props and gameplay. The objects are associated with actions: sometimes an object will have multiple actions, and sometimes objects will depend on one another to work together. Here are a couple of examples of interactive fictional objects I’m currently developing.

Interactive rotary dial built using Flutter Custom Painter

A phone is discovered within the AR space and now appears in the ‘found objects’ screen. The phone has a ‘call someone’ action. A participant will need to find the right number; maybe it’s on a scrap of paper, hidden in a desk drawer or in the pocket of a recently murdered man. Either way, the participant will be able to have some fun with the interactive object.

The vintage phone found in AR space and now listed in the user’s found-objects inventory

My aim with this object is to allow the participant to operate an old-fashioned rotary-dial phone. I have used Flutter’s built-in painting library and decorators to build the interface, which is made up of shapes and lines calculated with elementary maths; for example, the numbers are positioned dynamically. I referenced a previous project of mine, an art-deco style clock, as the phone dial takes a similar approach. However, the numbers are not in the same order, and they are not evenly spaced around the dial face as they would be on a clock face. But the previous project helped me get started.

Example snippet of code to position the dial numbers. Among the challenges is making the digits (text) sit upright by default.
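As the actual snippet is shown as an image, here is a simplified reconstruction of the idea: a CustomPainter that walks around the circle and paints each digit upright with a TextPainter. The spacing, radius and styling are placeholders rather than the values used on the real dial face:

import 'dart:math';

import 'package:flutter/material.dart';

// Positions the dial digits around a circle. Spacing and radius are
// placeholder values; the real face is not evenly spaced like a clock.
class DialFacePainter extends CustomPainter {
  static const digits = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0];

  @override
  void paint(Canvas canvas, Size size) {
    final center = size.center(Offset.zero);
    final radius = size.width * 0.38; // distance of the digits from the centre

    for (var i = 0; i < digits.length; i++) {
      final angle = (-60 - i * 30) * pi / 180; // placeholder 30-degree steps
      final pos = center + Offset(cos(angle), sin(angle)) * radius;

      final tp = TextPainter(
        text: TextSpan(
          text: '${digits[i]}',
          style: const TextStyle(color: Colors.white, fontSize: 24),
        ),
        textDirection: TextDirection.ltr,
      )..layout();

      // Offset by half the text size so each digit stays upright and centred.
      tp.paint(canvas, pos - Offset(tp.width / 2, tp.height / 2));
    }
  }

  @override
  bool shouldRepaint(covariant DialFacePainter oldDelegate) => false;
}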

I took a few screenshots as the painting progressed. The end result looks quite satisfactory. The next stage was to work on making it interactive. I am currently able to rotate the dial to specific angles, and I am also able to detect a user’s drag interaction with the screen. My next task is to combine these so that a user can turn the dial and the dial then returns automatically to its starting position. When a correct number (one found within the gameplay) is dialled, the user learns new narrative or clues. When they play around with random numbers, there’ll be some fun too.

stage one
stage two

stage three
stage four
stage five

Interactive Door and Keyhole

When the participant finds an old fictional wooden door in their living room, they notice it has a keyhole. Clearly there will be a key to find somewhere. But until they do find the key, they can still interact with the door by peering through the keyhole.

Interactive door found in AR

The keyhole idea came about by accident while working on the AR object scanner. As I implemented a vignette to give the AR mode a noir feel, I mistakenly used the wrong settings and created a pin-hole type effect. Although I will improve the shape and look of the keyhole view, I find it works well. I have restricted the viewing angles to represent the constraints of looking through a keyhole; clearly it wouldn’t be right if you could see a full 360-degree view! The 360 image viewer also works well with animated WebP files, Google’s image format which allows GIF-style looping animations.
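A minimal sketch of the effect, assuming the keyhole is approximated by a radial gradient stacked over the 360 viewer; the hole size and fall-off values are guesses, and the child widget stands in for whichever panorama viewer is used:

import 'package:flutter/material.dart';

// Keyhole effect: a radial gradient over the 360 view, transparent in the
// middle and fading to black at the edges. IgnorePointer lets drag gestures
// pass through to the viewer underneath.
class KeyholeOverlay extends StatelessWidget {
  const KeyholeOverlay({super.key, required this.child});

  final Widget child; // e.g. the 360 panorama viewer

  @override
  Widget build(BuildContext context) {
    return Stack(
      fit: StackFit.expand,
      children: [
        child,
        const IgnorePointer(
          child: DecoratedBox(
            decoration: BoxDecoration(
              gradient: RadialGradient(
                radius: 0.6,
                colors: [Colors.transparent, Colors.transparent, Colors.black],
                stops: [0.0, 0.35, 0.55],
              ),
            ),
          ),
        ),
      ],
    );
  }
}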

ARCore Flutter Further Tests

The main issue I’m experiencing currently is the inability to set the scale of an object. I’ve now forked the main repository and I’m going to look deeper into it and make any changes I may need.

The problem I have is that when a 3D object is rendered it has a colossal scale; I assumed this was constrained by some kind of Vector3 limit on the camera. For example, I am rendering a vintage telephone that I’d like to place on a table in the camera view; however, it appears the size of a gigantic spaceship.

There is a scale parameter for the Node (which represents the 3D model of the telephone in AR-speak), but this does not appear to have any effect. The parameter takes a Vector3 object (x, y, z) which is a measurement relative to the parent object. However, given that the Node’s parent object is not something I have access to, I can’t set this. Either way, I’ve tried setting the scale to tiny values but it makes no difference. I’ve also tried wrapping the Node in other nodes, but this hasn’t helped either.
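For reference, this is roughly how I am supplying the scale (a sketch from memory of the arcore_flutter_plugin API rather than verified code, and the model URL is a placeholder). The Vector3 is the kind of value I would expect to shrink the model, but in practice it makes no visible difference:

import 'package:arcore_flutter_plugin/arcore_flutter_plugin.dart';
import 'package:vector_math/vector_math_64.dart' as vector;

// Adds the telephone model as a reference node with an (apparently ignored)
// scale. The object URL is a placeholder for the hosted glTF bundle.
void placeTelephone(ArCoreController controller) {
  final node = ArCoreReferenceNode(
    name: 'vintage_phone',
    objectUrl: 'https://example.com/models/telephone.gltf',
    scale: vector.Vector3(0.05, 0.05, 0.05),
  );
  controller.addArCoreNode(node);
}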

https://github.com/KhronosGroup/glTF/tree/master/specification/2.0

I have checked out the underlying ARCore Java library and understand that the scale ought to be relative to the estimated size of the detected Plane (the horizontal plane of my desktop, for example). This size is taken from the estimated real-world coordinates and should be at least accurate to a metre. The attributes are ExtentX and ExtentZ. From these values it should be possible to scale the Node relatively. I’m going to check out the Java source code and see if I can spot anything.

https://developers.google.com/ar/reference/java/com/google/ar/core/Plane#getExtentX()

Reformatting the Object File
I couldn’t find anything wrong in the code at first glance; the object scale should be relative to the plane upon which it’s placed. So I turned to my object files again. I noticed that while the earlier tests using the KhronosGroup models were big (oversized yellow duck!), they were not spaceship-sized. So my attention turned to the glTF encoding of my own models. I went through the specification again and cross-checked the duck file with my telephone-cum-spaceship one. It’s not easy to spot anything amiss this way, as it’s all transformations and rotations (numbers which are all relative to one another).

But I did think about the origins of these 3D objects. I got them from Sketchfab, where you can download them directly in GLTF format. Great! Maybe not. I noticed that even Windows 3D Viewer couldn’t open my telephone. I went back to Sketchfab and downloaded the telephone again, this time in USDZ format, a format created by Pixar that’s becoming more and more associated with AR design. It’s a single file with the textures etc. incorporated. I imported this into Adobe Dimensions, and the first thing I noticed was a spaceship-sized telephone. I panned out of the ‘scene’ to see the telephone at its more earthly scale. My hypothesis is that Sketchfab auto-converts the source objects into GLTF as scenes rather than just objects, which could explain the scale issues. I hope this is the case, anyway. I’ll export the telephone from Dimensions in GLTF format and test it in AR again.

Telephone in Dimensions, waiting to be exported.

Once exported, I moved the files into the web project from which I’m serving these objects.

GLTF files

And deploy to firebase hosting:

deploy to firebase hosting

The result was certainly a step in the right direction. It’s no longer the size of the USS Enterprise, but it seems to be fixed to the size of the detected plane, which I suspect is estimated at one square metre, and it’s just floating about in the air like a drone. I shall work on the scaling further and try to understand why it’s not anchoring to the plane correctly.

Giant vintage ARCore Phone in Flutter

Reference Points

This is a good place to copy a few reference points from Google’s Java API docs for ARCore, as they are written succinctly and help to keep in mind the different concepts of AR development.

PLANE

“Describes the current best knowledge of a real-world planar surface.”

https://developers.google.com/ar/reference/java/com/google/ar/core/Plane

POSE

“Represents an immutable rigid transformation from one coordinate space to another. As provided from all ARCore APIs, Poses always describe the transformation from object’s local coordinate space to the world coordinate space.”

https://developers.google.com/ar/reference/java/com/google/ar/core/Pose


WORLD COORDINATE SPACE

“As ARCore’s understanding of the environment changes, it adjusts its model of the world to keep things consistent. When this happens, the numerical location (coordinates) of the camera and Anchors can change significantly to maintain appropriate relative positions of the physical locations they represent. These changes mean that every frame should be considered to be in a completely unique world coordinate space.”

https://developers.google.com/ar/reference/java/com/google/ar/core/Pose#world-coordinate-space

Conclusion

I was eventually able to scale my 3D object correctly using a combination of GLTF settings and ARCore config.

With some shader work within Flutter I’ve created a Noiresque look in which the vintage 1940s 3D telephone I got from Sketchfab (see link in the video description) is positioned consistently in the AR or ‘Mixed Reality’ world, based on the detected horizontal Plane, the Pose of the object and, of course, the World Coordinate Space.

Dev: 3D Object into AR with Flutter

I encountered a number of issues getting complex 3D objects into a Flutter AR app. The Flutter AR component takes either a remote glTF file, a format designed for loading 3D objects in a web browser, or an SFB file, a format specific to Sceneform, a 3D rendering component for smartphones. The web format is fine for simple objects that need to be loaded on the fly, but anything with complex, detailed textures can quickly run into big file sizes. Furthermore, I am hosting my files on Firebase Storage, and a glTF object is actually a bundle of files. I will need to handle this at some point, but my initial task is to implement a complex and detailed 3D asset in the AR view.

To do this, given the file size and the fact that there will only be a few of them, I wanted to embed the complex items with the app’s assets. This requires the SFB format. The trouble here is that there are different versions of the Sceneform plugin. The original version, by Google, was frozen in time, archived as version 1.17.1 and open-sourced; this was a copy of the final closed-source version by Google (1.15.0). A further version, 1.16.0, was built as a Gradle module and supports the glTF format instead of SFA and SFB. glTF is supported within the Flutter ARCore module as a URL-based resource and not as an asset; however, I could get around this by serving assets for the complex static objects through an integrated HTTP server within my app.

https://developers.google.com/sceneform/develop
https://github.com/google-ar/sceneform-android-sdk

Yet despite these notices, upon testing the 1.15.0 version I WAS able to embed a remote glTF file using reference 3D objects from the KhronosGroup GitHub repository. The Khronos Group is an American not-for-profit organisation which focuses on the creation of open-standard, royalty-free APIs.

BUT, only some gltf models would show in the AR space.

A duck model works fine:

But models more akin to the type of thing I want to use, such as a lantern, would not show.

Both the duck and the lantern are part of the same Khronos 2.0 library; each has non-embedded textures (separate 2D PNG files and a bin file). But something is preventing certain glTF files from rendering in ARCore and not others. I feel that getting to the bottom of this is going to be a drain on time, but I do need a solution. Unless I can understand the constraints and limitations of using different types of 3D objects with different implementations of the Sceneform 3D engine, I will be feeling my way in the dark, and that is not how I like to proceed. I will carry out further tests before reaching a conclusion.

References/Resources

https://developers.google.com/sceneform/develop

https://github.com/google-ar/sceneform-android-sdk

https://en.wikipedia.org/wiki/Khronos_Group


Delivering 360 Video from Device Assets

In my previous post I wrote about the 360 video viewer plugin I am experimenting with in Flutter, a cross-platform development framework, together with a practice clip I edited in After Effects. While there is plenty of support for 360 images within Flutter, I was surprised to find only a single plugin supporting 360 video. The plugin uses a Google SDK for Cardboard VR. Although Google has since released a newer SDK for Cardboard, I only need limited functionality, namely viewing a 360 video, so I am hoping this plugin will suffice.

The plugin uses Google VR SDK

In my previous experiment, there was a fairly serious limitation, in that I had to call the 360 clip via a URL.

The plugin only supports 360 video via a URL

360 video files can be quite big; the short ones I have been experimenting with are around 10 MB, and even on a fast Wi-Fi connection there was a significant delay while the video downloaded in its entirety before the 360 video player started playing. This would make for a poor user experience in terms of the wait. I could potentially add some gameplay functionality into this waiting time, but it’s still a big ask to download large files, especially if the user is outdoors and using mobile data, which may be limited or even expensive for them.

I spent some time analysing the plugin code, as well as the underlying Google VR SDK, to see how much work would be involved should I want to implement a feature myself to allow local, on-device files to load rather than via a URL. I found that this would not be a trivial project, so I sought an alternative approach.

I could not expect my users to access large video files via their mobile data, so I had to think up a new solution. Rather than develop new functionality for the plugin or build my own implementation of a VR SDK plugin, I decided to experiment with setting up a mini HTTP server within my app and serving the 360 video asset from a local server URL.

Dart, the programming language which underpins the Flutter framework, includes built-in HTTP server functionality via its HttpServer class.

Dart’s HttpServer class

Using this approach I would be able to create a lightweight HTTP server within my app from which to serve the 360 videos and satisfy the 360 video viewer’s play-from-URL requirement. I even found an existing Flutter plugin which uses this technique and maps HTTP requests directly to a specific assets folder.
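The core idea, sketched under my own assumptions (asset paths, port and content type are placeholders, and a real implementation would want proper path checks and range-request support):

import 'dart:io';

import 'package:flutter/services.dart' show rootBundle;

// Serves bundled video assets on localhost so the 360 viewer's
// play-from-URL requirement can be satisfied with local content.
Future<HttpServer> startAssetServer({int port = 8080}) async {
  final server = await HttpServer.bind(InternetAddress.loopbackIPv4, port);

  server.listen((HttpRequest request) async {
    try {
      // Map e.g. /videos/town_intro.mp4 -> assets/videos/town_intro.mp4
      final data = await rootBundle.load('assets${request.uri.path}');
      request.response
        ..headers.contentType = ContentType('video', 'mp4')
        ..add(data.buffer.asUint8List(data.offsetInBytes, data.lengthInBytes));
    } catch (_) {
      request.response.statusCode = HttpStatus.notFound;
    }
    await request.response.close();
  });

  return server;
}

// The viewer is then pointed at e.g. http://127.0.0.1:8080/videos/town_intro.mp4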

Flutter plugin implementing the http server class for a Flutter app scenario

I built a test app combining the local assets server and the 360 viewer, along with my test 360 clip, and the video now loads much more quickly using the locally served file. This means I can now consider either bundling all of the app’s 360 content into the build itself, or downloading it to local file space as and when the user has Wi-Fi access, thus overcoming the problems of slowness and of using the user’s mobile data to access these clips from the web.

The 360 clip now loads instantly when the button is clicked.

Testing 360 Video with Flutter App Development

Following on from my 360 video experimentation in Adobe After Effects, I wanted to test out options for embedding my 360 videos within a mobile app using Flutter.

I found a number of existing packages for 360 images; however, support for 360 video is less common. I found one package called video_player_360 which acts as a plugin for the Google VR SDK on both iOS and Android, allowing videos to be played via a remote URL.

I have a couple of reservations about this package. For a start, it uses a Google SDK which is now archived, although for just the 360 video functionality this should not be a problem. Even so, it will be worth looking at the new SDK which Google has released as a replacement to see whether it has similar video player functionality; perhaps I could build a new Flutter plugin using this updated Google VR SDK.

New Google VR SDK for Android and iOS

The second issue I have with this plugin is that it only works with remote URLs. This means that a user must be connected to the internet, and given the nature of 360 video these can be quite big files, even for a short clip. In my test I was using a 12 MB file hosted on GitHub. My phone was connected to my office Wi-Fi and it took a good 5 seconds to load. Once it did load, the video worked nicely on my mobile device (Google Pixel 3); I was able to move the phone about and experience the immersive effect of the video.

Testing 360 Video within a Flutter mobile app