An honest look into the world of AR/VR with LA startup Safari Riot
Jerry Yeh and Grayson Sanders are a long way from home. After meeting as freshmen at NYU and bonding over music and art, the two now find themselves in North Hollywood, at the helm of a cutting-edge Augmented and Virtual Reality studio.
With resumes that include scoring major television programs and blockbuster film trailers, the men behind Safari Riot are no strangers to creating vast, rich experiences. A shared interest in technology eventually grew from these artistic endeavors and led the pair into the fast-growing realm of AR/VR.
After an inspiring visit to their studio (and a few hours of battling robots and flipping virtual burgers), the team at SOUTH started to see similarities in our respective agencies’ approach to solving complex problems. Despite the difference in our day-to-day tasks, an alignment in primary objectives led us to ask the guys: What’s really going on in the AR/VR space? And where do they see our respective fields (and coasts) meeting in the future?
Here’s what they had to say …
What inspired your leap into AR/VR?
Jerry and I were at the Sundance Film Festival in 2014 supporting a film we had worked on. We’d heard rumblings about the New Frontier exhibition, a recent addition to the festival, which was showcasing exciting new storytelling experiments in VR. We made our way over and checked out what was on display. The exhibits admittedly left something to be desired, but our first reaction was … holy ***, this is going to change everything and we need to get involved.
When we got back to LA, we hit the ground running to find out anything and everything we could about what was already happening in the local community, and how we could be a part of it. Coming from the sound and music industries, we were immediately drawn to what we felt was a weak point in the content stack: the audio pipeline.
In film, sound and music occupy an enormous amount of the emotional and believability spectrums, but in VR, they make or break an experience.
Good sound allows a user to leap over the uncanny valley with ease. Bad sound makes even a good visual seem fake. We wanted to learn as much as we could about algorithmic audio spatialization and ambisonics so we could be part of the early pioneers who would create the lexicon for what VR will sound like, and how that sound will be authored. We quickly discovered we would need to completely revamp our toolset to build effective workflows, and in the process started collaborating with more and more diverse professionals in the software and game development industries. This inspired us to grow into our holistic approach for full stack development in VR/AR/MR.
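The first-order ambisonic encoding they allude to can be illustrated with a small sketch. This is a generic textbook B-format encoder, not Safari Riot’s actual pipeline or toolset: a mono source’s direction is baked into four channels (W, X, Y, Z), which a decoder can later render to any speaker layout or binaurally for headphones.

```python
import math

def encode_first_order_ambisonics(sample, azimuth_deg, elevation_deg):
    """Encode a mono sample into traditional first-order B-format (W, X, Y, Z).

    Uses the classic convention where W is attenuated by 1/sqrt(2).
    Azimuth is measured counterclockwise from straight ahead.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample * (1.0 / math.sqrt(2.0))        # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)   # front-back axis
    y = sample * math.sin(az) * math.cos(el)   # left-right axis
    z = sample * math.sin(el)                  # up-down axis
    return w, x, y, z

# A source directly ahead (azimuth 0, elevation 0) lands entirely in W and X,
# with nothing in the lateral (Y) or height (Z) channels.
print(encode_first_order_ambisonics(1.0, 0.0, 0.0))
```

In practice this runs per-sample (or per-block) over an audio stream, and the hard part is exactly what they describe: building authoring workflows so that sound designers can place and move sources without hand-writing math.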
A few classes and three years later, we’ve fully embraced all stages of the VR/AR/MR development process, from ideation, research, concept development, and prototyping to deployment. Canvas to code, soup to nuts, and our stuff sounds good too!
What have been some key projects for you in growing to where you are now?
When we were first getting our feet wet in immersive work, a project came along that forced us to take the plunge straight into the deep end. We were approached by a startup in North Carolina to consult on a medical product for a major biotech company that was developing a revolutionary AR system for surgeons. It aimed to simplify a complex (previously near-impossible) cancer procedure with a novel approach to holographic visualization. What at first seemed like a simple sonic interaction design project actually became a full research endeavor: learning both the rules and regulations of the OR, and the types of conditioning OR personnel have been subjected to after years of doing something the same way.
What began as a grand creative vision ended up getting whittled down to its simplest form. Coming from Hollywood, where anything is possible with enough budget, we took two major lessons from this. First: creative development, when subject to the pressures of a multi-billion-dollar corporate bureaucracy, can be challenging, and requires user-research-backed decisions at every turn. And second: creative development, when subject to the pressures of a multi-billion-dollar biotech corporate bureaucracy, can be extra challenging, because the legal apparatus ensuring a doctor’s acuity won’t be negatively impacted by hasty design choices is immense, and often stifles innovation. All in all, though, we were amazed with how the project turned out, and we can’t wait to work on more healthcare products.
VR faces a noticeable amount of resistance and skepticism, and also faces an uphill climb on accessibility. How do these issues alter your approach to building your business or the experiences that you develop?
We totally get why VR has caught flak. Take a technology that has tried and failed multiple times because the tech was too primitive, then create a prototype of a newer, less-primitive version of it and sell it for billions to a major social network. Incite an investment frenzy and rush dozens of other primitive products to market before key UX issues are ironed out. Then do a global marketing push for a solution looking for a problem, and people start to actually vomit using it. And then turn it into a meme. It got off to a really great start!
These issues affect our confidence about as much as people saying they’d never buy a Palm Pilot. Who wants to touch a screen when you could have tangible buttons?!
VR is still in its infancy, but it’s already changing paradigms across numerous verticals; it’s just missing the consumer-friendly accessibility and desirability variables.
Moore’s law persists. Hardware (lenses, light field capture and projection, HMDs, graphics cards, mobile processors, higher pixel density screens…) is and will keep getting faster and smaller. And software (PBR, path traced reflection modeling, ORBX and MPEG-H compression codecs, game engine timelines…) will make the hardware sing.
As the stuff behind the scenes gets better (and lucky for us, it’s happening astoundingly quickly), the content living on top of these stacks will become more enticing to consumers. We believe a decent amount of consumer uptake will happen before VR reaches human-eye resolution and a functional online metaverse becomes a possibility. But when VR does become fast, lightweight, and pretty enough for people to put on an HMD and, for all intents and purposes, go somewhere else, things will get interesting. If you want to follow an amazing, inspiring person who’s hellbent on bringing this to a head, follow Jules Urbach.
For us, we’re focused on using these technologies to add value where value can be added. VR isn’t always the answer. AR isn’t always the answer. We’re most passionate about coming up with creative ways to hide the technology, letting people see through it and into a world of new perspectives and associations they hadn’t previously considered or felt.
Tell us a little about your process, how do you create the core ideas for experiences? And who/what does it take to bring them to life?
Our process is pretty straightforward. We’ve pulled a lot of technique from traditional design research disciplines, particularly the Cooper Method. VR and AR project planning is still a tricky process because deployment platforms are still relatively disparate. But with YouTube, Facebook, Instagram, and Snapchat all making significant AR integrations this year, and Apple and Google opening computer-vision frameworks to their developer communities, things are about to get a lot more accessible. This will accelerate our project planning steps, with less time spent figuring out who will be able to access something, and more time spent defining ‘why will they want to?’
The majority of our core ideas come from an idea bank we’ve amassed. It feels like we have new “what if we could…!”, “omg, wait, we should…” moments every day, and we’re just waiting for the right clients to come along and let us bring them to life.
Our core team is Jerry (Producer), Grayson (CD/PM), and Christian (Lead Dev). Our extended team can scale up and down as needed and will usually feature a modeler, animator, VisD, IxD, and additional coding support. We do all sound mixing and game engine integration in-house.
There have been some significant strides into the eCommerce space, but we aren’t seeing it on a consumer level yet. From your perspective, what do you think we will see next in this space?
Oh do we have some ideas for this! We are actually sitting on what could be the best, or worst, eCommerce product idea ever, but we’re waiting for a couple more pieces of tech to catch up to try it. Did that pitch sell you on the idea? haha!
Alibaba is making some interesting eCommerce moves, and we expect some of the other majors to follow suit shortly. It will be an interesting thing to watch. I think our meeting could happen today, but it would be a matter of scope. The way we see it, eCommerce in VR/AR could follow a few trajectories. There are the obvious conclusions like “Oh! I could browse a store in VR and see photorealistic 3D objects at world scale!” or “Oh! I can 3D scan my body and try different digital clothes out on myself in the mirror.” These are cool, and people are already working on them, but they will require significant consumer adoption of the medium and higher-fidelity experiences before they’re truly in demand and delightful. We are most excited about the nascent potential to analyze navigation and shopping behaviors both in brick-and-mortar stores (cellphone AR) and in head-mounted displays (VR/MR). We are actively developing some in-house tools that will aid in this process, and we are excited to test them further.
In the nearer term, Marker and GPS-based AR can provide another level of brand engagement and open up new calls to action for consumers, and with SLAM-based computer vision coming to millions of devices this year, the quality bump in these experiences will be significant.
Web VR seems to be the next logical step towards collaboration between current web-based design and development efforts and new VR/AR experiences. How do you see Safari Riot fitting in (or breaking the mold) on this trend?
Well, the cool thing is, I wouldn’t say there is a mold to be broken yet. WebVR is vastly uncharted territory. The fact is, right now it’s very difficult to create a good online VR experience in a browser. WebGL is amazing for what it has accomplished, and OpenVR has allowed many different pieces of hardware to speak similar languages, but it’s still very limited.
Anything GPU-accelerated in a browser becomes tricky business if you’re talking about having experiences that play well on mobile too. What’s happening with WebVR is incredibly exciting, and will undoubtedly play a part in unifying the immersive mediums, but getting everything to play nice at an acceptable FPS, and getting people to actually use something that appears simplistic to the layman’s eye, is going to be a challenge.
In the short term, when the next-gen high-end headsets start to trickle into the market at the end of this year and early next, I think WebVR might become an amazing collaborative prototyping environment. We’re really excited about WebAR and have been doing some hacking ourselves. Tossing them in the idea bank.
What are you working on now that we can tell the world about?
Right now, we have a few things running. We are working on an installation at a hotel in Downtown LA. It’s an awesome visual and sound experience that will come to life between AR-enabled smartphones and physical artwork. At our sister company Cloak (a music-industry-first co-venture with Pusher/HTGR), we’re finishing up a fully immersive VR experience for a major recording artist that will premiere in 2018. In-house, we continue our multi-year product development project and will move into our next round of fundraising in Q1 2018.