This was a prototype app to explore AR. Although this project did not result in a full realisation of the concept, I improved my knowledge of AR platforms and design. Prototyping allowed me to “fail fast” and also reminded me that some projects do not need 3D/AR to succeed. This article records the process I went through and tools I used.
Having never worked with AR I was keen to experiment with it. There is a wide variety of tools and production methods so I wanted a project that would allow me to understand the range of options for future projects, rather than focus on a specific platform or AR feature. I also wanted to understand how to prototype AR projects.
I didn’t spend a lot of time thinking about the concept, but I liked how nutritional advice in a running magazine I read was tailored to runners and their goals. I wondered if a similar approach could work for the wider population and their individual health/wellness goals, and if AR could be used to achieve the concept.
We all know we should eat at least five portions of fruit and vegetables a day, but only around half the UK population manages to meet this target. How food choices affect us can be hard to understand: what does choosing an apple over crisps do for me? If we could make the health and wellness benefits of individual foods clearer and more relevant to individuals, would that make people more likely to buy and eat them?
Inspiration for the concept comes from Candide Labels, a now defunct iOS app in which users identified plants via a computer vision machine learning model. The user then placed AR plant labels in their space.
Candide Labels. Image credit: AR/VR tips
Prototyping can be used to develop and test the concept, indicating the efficacy of the solution and identifying the next steps to test.
Left: User is faced with the question of what to buy at a supermarket, with an internal monologue showing how non-personalised nutritional information can lead to bad choices. Right: Sketch of the app UI that would enable personalised nutritional information.
Lo-fi prototyping gives fast feedback, so I started with paper prototyping to sketch out the user journey.
Using physical objects was a fast method of prototyping the 3D space. The test proved the basic user flow could work, as it was understood by the user. It also highlighted some issues.
Figma is great for designing UI; shown here are the AR food overlays. Combining this with Photoshop allows placing the UI in a faux 3D space to test how it would render on a phone.
I enjoyed designing a food store environment in Gravity Sketch to test the concept. After learning the controls I found it more effective than sketching out a 3D scene in Unity.
XR experiences usually rely on one or more key mechanics, so it’s worth identifying and testing these early. Some parts of the concept were already familiar to me (the app UI for selecting the user’s goals), while the AR interaction of identifying a food and displaying an AR overlay above it was not, so that was the part I needed to prototype. I thought I’d try this with something that would be quick to test.
Niantic 8th Wall is a platform for building Web AR experiences, so it avoids the need to build a mobile app. It’s aimed at creative, brand and marketing projects. Features include:
The first part of the mechanic relies on the user scanning the food object with their phone. Once that object has been recognised correctly, its position needs to be tracked via world (SLAM) tracking so that the marker stays positioned above the object. To recognise something in the user’s viewport, 8th Wall offers a few options; the one I used was image targets, as sketched below.
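To make the tracking half of the mechanic concrete, here is a rough sketch of an A-Frame component that floats a label above a tracked target. It assumes 8th Wall’s image-target events (`xrimagefound`, `xrimageupdated`, `xrimagelost`) and their detail fields (name, position, rotation) as I remember them from the docs; the target name “banana” and the 0.5 m offset are my placeholders:

```js
// Sketch: keep a label entity hovering above a recognised image target.
// Event names and detail fields are assumptions to verify against the
// current 8th Wall documentation; "banana" is whatever the target was
// named in the web dashboard.
AFRAME.registerComponent('food-label', {
  init() {
    const follow = ({detail}) => {
      if (detail.name !== 'banana') return
      this.el.object3D.position.copy(detail.position)
      this.el.object3D.position.y += 0.5  // hover the marker above the food
      this.el.object3D.quaternion.copy(detail.rotation)
      this.el.setAttribute('visible', true)
    }
    this.el.sceneEl.addEventListener('xrimagefound', follow)
    this.el.sceneEl.addEventListener('xrimageupdated', follow)
    this.el.sceneEl.addEventListener('xrimagelost', () => {
      this.el.setAttribute('visible', false)
    })
  },
})
```

The component would sit on an entity such as `<a-entity food-label visible="false">` containing the overlay text or model.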
The process I used for 8th Wall was to work within their web dashboard: I set up some image targets, modified some example HTML/JS/A-Frame code, and then published.
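For reference, the published page was roughly this shape. This is a minimal sketch assembled from memory of 8th Wall’s A-Frame samples: the script URLs, versions and app key are placeholders, and `xrextras-named-image-target` is their declarative wrapper around the image events used above:

```html
<!-- Minimal 8th Wall Web AR page (sketch; URLs and app key are placeholders) -->
<html>
  <head>
    <script src="//cdn.8thwall.com/web/aframe/8frame-0.8.2.min.js"></script>
    <script src="//cdn.8thwall.com/web/xrextras/xrextras.js"></script>
    <!-- Loads the camera pipeline; the appKey comes from the web dashboard -->
    <script src="//apps.8thwall.com/xrweb?appKey=YOUR_APP_KEY"></script>
  </head>
  <body>
    <!-- disableWorldTracking keeps the session to pure image tracking -->
    <a-scene xrextras-loading xrextras-runtime-error
             xrweb="disableWorldTracking: true">
      <a-camera position="0 2 2"></a-camera>
      <!-- Children are shown while the target named "banana" is tracked -->
      <xrextras-named-image-target name="banana">
        <a-text value="Banana: slow-release energy before a run"
                position="0 1 0" align="center"></a-text>
      </xrextras-named-image-target>
    </a-scene>
  </body>
</html>
```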
Due to the issues with 8th Wall’s image targets, I looked at other AR platforms to see if they could achieve the semantic identification of bananas I was looking for; I was also interested in comparing another framework in general. Vuforia is a popular AR platform aimed at large enterprise applications across a variety of industries. There are a lot of Vuforia products and it can be hard to figure out which is relevant; I went with Vuforia Engine, marketed as the “most widely used platform for AR development, with support for the majority of phones, tablets, and eyewear”. Vuforia Engine supports AR targets including images and objects.
Using Vuforia’s sample assets is relatively straightforward if you have some Unity experience: it involves signing up, adding the Vuforia asset to a Unity Android/iOS project, and then adding a couple of Vuforia game objects.
When using image targets, Vuforia’s computer vision finds some images much easier to recognise than others. It helpfully provides feedback in the web dashboard on uploaded images, rating them 0–5 stars and showing the “features” it identifies in each image. The more features in an image, the easier it is to recognise. The best images are rich in detail, high in contrast, and free of repetitive patterns. My fruit images proved problematic and scored 0 stars.
Good images:
I really struggled to get Vuforia to recognise anything other than its example images, even after spending time optimising my target images to 5 stars. I tried lots of things for many hours… until I realised that, in Unity, I was building the Vuforia sample scene to my phone instead of the scene I was editing. Following that I adopted a more methodical Unity approach:
Although this project did not result in a full realisation of the concept, it improved my knowledge of AR platforms and design, and prototyping let me “fail fast” while reminding me that some projects do not need 3D/AR to succeed. I would use both 8th Wall and Vuforia Engine again if they suited the project, and I’m also keen to try Unity’s AR Foundation in the future.
