Introducing: Holodog!

My newest infatuation is Unity, a game engine that within a few years has become a prime breeding ground for VR/AR development and experimentation.

Since November,  I’ve been collaborating on a Hololens application called “Holodog,” a Tamagotchi-esque virtual pet that you can interact with and actually see in the space you are in.

My Holodog team (Estella Tse and Katie Hughes) and I had little experience with creating a full-blown application in Unity.  And like the brave warriors we are, we decided to take on the challenge of learning it in a hackathon environment over the course of a single weekend.  Forcing myself to learn so much in a short amount of time actually made me realize that developing for AR/VR isn’t as out of reach as it had seemed.  Special thanks to our mentor, Livi Erickson, for helping us troubleshoot and guiding us when we needed it.

Here’s an example of our application in action, demoed by the wonderful Estella Tse.  On the left is our holographic dog, Buster.  Estella is able to see Buster when she puts on the headset.


A little overview of the Hololens –

If you’re not familiar with the Hololens, it’s an Augmented Reality headset released by Microsoft in March 2016.  It utilizes light refractions and amazing physics magic to create the illusion that virtual 3D objects are in the same space as you.  Currently, only a developer edition is available; the consumer version has yet to arrive.

The main inputs that you can use to interact with the ‘holo-world’ are:

  1. Gaze
    This is essentially a circular cursor, fixed in the center of your view in the Hololens.
  2. Gesture
    A large part of interacting with your surroundings is essentially signing with your hands.  Some of the main gestures:
    • Pinching / grasping (what I call it)
    • Scrolling
    • Adjusting / moving things around
    • Getting a menu
  3. Voice
    The Hololens has a built-in “speech to text” voice command feature.
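To make gaze and voice a bit more concrete, here’s a minimal Unity sketch (not our actual Holodog code; the “sit” keyword and class name are made up for illustration).  Gaze is really just a raycast from the camera, and voice uses Unity’s built-in KeywordRecognizer:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech; // KeywordRecognizer lives here in Unity 5.5+

// Sketch of two Hololens inputs: gaze (a raycast from the camera)
// and voice (a keyword-based speech recognizer).
public class InputSketch : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        // Voice: listen for a single hypothetical command, "sit".
        recognizer = new KeywordRecognizer(new[] { "sit" });
        recognizer.OnPhraseRecognized += args =>
            Debug.Log("Heard: " + args.text);
        recognizer.Start();
    }

    void Update()
    {
        // Gaze: whatever the camera's forward ray hits is what you're "looking at".
        RaycastHit hit;
        if (Physics.Raycast(Camera.main.transform.position,
                            Camera.main.transform.forward, out hit))
        {
            Debug.Log("Gazing at: " + hit.collider.name);
        }
    }
}
```

In practice you’d swap the Debug.Log calls for something like moving a cursor object to hit.point or triggering a behavior on the focused object.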

Other features include:

  • Camera
    • There’s a built-in camera in the Hololens, which is pretty useful.  You can livestream what you’re seeing to others and take pictures.  You can also integrate computer vision APIs to do complex things like image or object recognition using plugins like Vuforia (tutorial for this coming soon!)
  • Spatial mapping
    • This creates a low-poly map of your surroundings, making sure that holograms do not show up in the middle of a couch, or in a place that’s not visible to you.
    • There also is a coordinate system that the Hololens uses, so you can use…
  • …Spatial anchors!
    • You can store the placement of holograms within the local storage of your Hololens.  So if you’re in the same space and restart your device, your holographic dinosaur would be staring at you from the same location where it was set.
  • Spatial sound
    • You can create the effect that a certain sound is coming from a certain location.
  • There are many more features that I don’t have much experience using yet, but feel free to check out the details here!
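As a rough illustration of spatial anchors, here’s a Unity-5.5-era sketch using WorldAnchorStore (the anchor id “buster-spot” and class name are made up; this is a sketch, not our exact Holodog code):

```csharp
using UnityEngine;
using UnityEngine.VR.WSA;              // WorldAnchor (Unity 5.5-era namespace)
using UnityEngine.VR.WSA.Persistence;  // WorldAnchorStore

// Sketch: persist a hologram's real-world position across app restarts.
public class AnchorSketch : MonoBehaviour
{
    private WorldAnchorStore store;

    void Start()
    {
        // The store is handed to you asynchronously.
        WorldAnchorStore.GetAsync(s => store = s);
    }

    public void SaveHere(GameObject hologram)
    {
        // Attaching a WorldAnchor locks the object to a spot in the real world.
        var anchor = hologram.AddComponent<WorldAnchor>();
        store.Save("buster-spot", anchor); // "buster-spot" is a made-up id
    }

    public void RestoreAfterRestart(GameObject hologram)
    {
        // Re-attach the saved anchor so the hologram reappears in the same place.
        store.Load("buster-spot", hologram);
    }
}
```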

How you can get started –

To be honest, I began writing a full-blown tutorial on building and deploying a first application to the Hololens, but I realized that a lot of what I learned came from the Microsoft developer tutorials, which are actually pretty straightforward for the most part.  Here are the two tutorials I would definitely urge you to do if you’re new to developing for this device:

  1. Build and deploy a basic application
  2. Add scripts to objects and use the Hololens emulator

A few notes on these tutorials:

  • Since documentation tends to be updated less frequently than the technology itself, if you find yourself questioning what the “Hololens SDK” or “Hololens Toolkit” is for Unity, know that you will NOT need it.  Unity now has built-in Hololens support, so you no longer need an additional SDK (yay!).
  • Once you begin creating Event Managers, know that you do not have to write and drag separate EventManagers for each input script onto your “OrigamiCollection”.  You can create an empty GameObject, add all your Gaze, Gesture, and Speech Managers to it (the scripts should already be available if you search for them, no need to copy/paste from the tutorial), and make it the parent element of everything.  That will allow you to access those functions from all parts of your application without needing all of your objects to be children of OrigamiCollection.
  • This tutorial will be useful if you want to do things like screen capture what you are seeing within the Hololens, take pictures, or allow someone to view a demo without putting on the headset themselves (there is a latency issue with this method, but so far I haven’t found a better one – please let me know if there is!)
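The “one parent object for all your managers” idea above can be sketched like this (class names here are placeholders, not the tutorial’s exact scripts): each manager lives once on a shared “Managers” GameObject and exposes a static instance, so any hologram in the scene can reach it without duplicating managers per object.

```csharp
using UnityEngine;

// Lives once on an empty "Managers" GameObject along with your
// Gesture and Speech managers. "GazeManager" stands in for whichever
// manager script the tutorial has you use.
public class GazeManager : MonoBehaviour
{
    public static GazeManager Instance { get; private set; }

    void Awake()
    {
        Instance = this; // the single shared copy
    }

    // Whatever object the gaze raycast currently hits.
    public GameObject FocusedObject { get; set; }
}

// Any hologram can then ask the shared manager whether it's being looked at:
public class DogBehaviour : MonoBehaviour
{
    void Update()
    {
        if (GazeManager.Instance != null &&
            GazeManager.Instance.FocusedObject == gameObject)
        {
            // react to being gazed at, e.g. wag tail
        }
    }
}
```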

Troubleshooting –

You will most likely run into bugs; that is the nature of development, especially when working with any emerging tech.

  • I resolved some of them using my mentor Livi Erickson’s guide to troubleshooting Hololens errors (Thanks again!).
  • When I was first deploying my app in Visual Studio, I received this fun error:
    Unsafe code requires the `unsafe' command line option to be specified

    I ended up creating an smcs.rsp file in my Assets folder and adding the line -unsafe within the file.  It didn’t make me feel great, since this is most definitely a hack, but this was a hackathon and so I persisted.  Here’s where I got that information.  You will probably have to restart Unity and Visual Studio after doing this.

  • You might also see this error in Visual Studio if you restart your device after debugging:
    DEP0001 : Unexpected Error: -2145615869

    The majority of the time, what fixed this was restarting Visual Studio entirely.  You should also pause the debugging process whenever you aren’t using it, since you might accidentally rebuild and cause this error again.  And remember: when in doubt, restart Visual Studio, because that is probably your issue 🙂

Please feel free to reach out via Twitter @nerdyreddy if you’re curious how we created Holodog!

An attempt to organize my learnin’

Hi, future readers.  I’m Nidhi, and I’m learning to code (You’re now supposed to say, “Hi, Nidhi” and tell me that I’m in a safe space to share my feelings.)

The “TL;DR” version of the mess below: This blog is to help me track my coding progress, help me problem-solve, & share some nuggets of knowledge. I’m also going to be an Opportunity Fund Fellow at General Assembly for Jan 2016, so I’ll probably be talking about that soon!



I’m starting this blog for a few reasons:

  1. I want a way to discuss the problems I’m facing as I’m learning.
  2. I am one of those people who needs to write things down/read them to remember them.  Which is why I don’t know people’s names unless they wear name tags. (I still call my boyfriend ‘you’)
  3. Hopefully I can harness good ol’ societal pressure to persuade me to make cooler things. Like a web app that will automatically make your friends purchase everything in your “Save for Later” section on Amazon. Such a brilliant idea, amirite?
  4. There’s a slim chance that in the not-so-distant future, my ramblings will help someone as they learn.  If it’s in the distant future, then these words will mean nothing unless translated to Newspeak.
  5. I also want to (at some point!) have a solid list of free resources that can help self-learners out there with the basics of web development.  There’s a plethora out there, and it’s hard to tell which are the best.  I remember the first hurdle I faced was “Well, I get the basics of coding….but WHERE DO I PUT MY CODE?!”  It’s amazing how many great coding resources don’t go over this.

A little about me (besides the fact that I use A LOT OF PARENTHESES WHEN I WRITE):

I majored in Math & Film in college, and specialized in digital media. I took a few non-major CS courses in college, but tended to feel discouraged and confused at times (without realizing then that ~*everyone*~ feels this way when learning to code).  Also, coding seemed dull at the time, and anytime someone said ‘database’ I would tune out, so I didn’t go out of my way to become a CS genius.  After I moved to the Bay Area for a fellowship at Khan Academy, I became more interested in the possibilities of technology and design.  I’m especially fascinated by using code for creative + design endeavors, e.g. interactive storytelling, education (see: Explorable Explanations), and interaction design.

In mid-January, I’ll start as an Opportunity Fund Fellow at General Assembly’s Web Development Bootcamp, where I’ll be learning to code full-time for three months.  I’m pretty excited about this, though I know I have a lot to learn beforehand to make good use of my time.

Where I am now in my learnin’:

  • I’ve gotten pretty familiar with HTML/CSS, though I’m realizing lately that there is a lot more to know about CSS (and how terrible it can be when you just want it to do a simple thing)
  • I’ve become more comfortable with JavaScript in terms of syntax and basic logic (using for/while loops for iteration, using collections, etc.).   I’m also getting familiar with jQuery, though it’s far from rolling off the tips of my fingers.
  • I’m now using Terminal for things (big step!) and have gotten over my fear of committing things to GitHub. I also know how to host my site locally now. Things are looking up. I still haven’t gotten used to putting a project up for the world to see yet.
  • I taught myself ~some~ Ruby (from Aug–October), but have now forgotten most of it. I also learned more about specs and tests.
  • I’m still learning about how computers work.  It seems important to know considering I spend the majority of my time on one….though I think I enjoy tinkering more so than I enjoy knowing about the reason why it works.
  • Currently, I’m working on publishing a simple “Rock, Paper, Scissors” app using HTML/CSS + JavaScript/jQuery.

Okay, this ended up being significantly longer than I intended.