Quick Start – Setting up Webpack for a static app

Happy Sunday folks!

When I was in elementary school, we had a worksheet game in one of my classes that secretly taught us algebra.  Today I am attempting to recreate it in a digital, playable format using React, Webpack, ES6, and yarn (which I used for the first time today rather than npm).

I personally found it a little tricky to set up Webpack 1.0 (I plan on replacing this with 2.0 soon; I'm just more familiar with 1.0 at the moment).  All I wanted was to compile my files and have them hot reload for my static application, but it was surprisingly hard to find a simple example of this.

Here’s what I did.


A screenshot of my project file structure.

My goal

All I wanted to do was compile my root ./game.js into ./dist/game.js, and have the page reload on localhost anytime I made a change and saved my file.

Solution

Here is what my package.json and webpack.config.js looked like.  With these, whenever I ran the command yarn start, Webpack would compile my files and serve my index.html file on my local server.

// package.json

{
 "name": "algebra-game",
 "version": "1.0.0",
 "main": "dist/game.js",
 "author": "Nidhi Reddy",
 "license": "MIT",
 "scripts": {
    "watch": "webpack --progress --colors --watch --config ./webpack.config.js",
    "start": "webpack-dev-server --progress --inline --hot",
    "build": "webpack --config ./webpack.config.js",
    "deploy": "gh-pages -d dist"
 },
 "devDependencies": {
    "babel-core": "^6.24.1",
    "babel-loader": "^7.0.0",
    "babel-preset-es2015": "^6.24.1",
    "babel-preset-react": "^6.24.1",
    "gh-pages": "^1.0.0",
    "path": "^0.12.7",
    "webpack": "^2.5.1",
    "webpack-dev-server": "^2.4.5"
 },
 "dependencies": {
    "react": "^15.5.4",
    "react-dom": "^15.5.4"
 }
}

// webpack.config.js

var path = require('path');
var webpack = require('webpack');
var WebpackDevServer = require("webpack-dev-server");

const config = {
  context: __dirname, //current folder as the reference to the other paths
  entry: './game.js',
  output: {
     path: path.resolve(__dirname, 'dist'), //where the compiled JavaScript file should be saved
     filename: './game.js' //name of the compiled JavaScript file
  },
  module: {
    loaders: [
      {
        test: /\.js?$/, //translate and compile ES6 with JSX into ES5
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: { //query configuration passed to the loader
          presets: ['react', 'es2015']
        }
      }
    ]
  },
  devServer: {
    hot: true,
    contentBase: './dist'
  }
};

module.exports = config;

Depending on your file structure, you might have to update the contentBase, entry, and output fields accordingly.

Serve and watch these files!

If you’re using yarn, run the command yarn start; if you’re using good ol’ npm, you can similarly run the command npm start.  You should see your application running on localhost:8080.

Hope this helps!  If you have any questions / grievances, get at me on Twitter @nerdyreddy.



Plant AR experiment for iOS

Patrick (my boyfriend who does 3D modeling) and I recently decided to do a weekend “hackathon” AKA a hackathon where you don’t meet anybody new and everything is inside your own house.  I would argue that such a hackathon is THE BEST type of hackathon since the end goal is only learning 🙂

We wanted to make something simple: it was the first time we had ever collaborated on an AR/VR project, we didn’t want it to take longer than a day’s worth of work, and we wanted to be able to create and deploy it on our own devices.

I used Vuforia for the image recognition (anytime it sees that particular image printed on the paper, our script and scene will execute), Unity, the iOS SDK for Unity, and Xcode to compile the code. Patrick used Cinema 4D to create the plant and animation and imported his assets into Unity.

Here’s the “final” product on our iPad –

It was my first time developing something that would be deployed to an Apple device, so I had to go through signing up for a developer account, getting verified, and using Xcode to actually deploy my code (rather than only using it for its iPhone simulator feature like I usually do at work).  Attempting to troubleshoot all the “Code Signing” errors in Xcode was probably the hardest part of this process.

If you’re curious, here is the tiny piece of code I used for the touch event –

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class PlantGrow : MonoBehaviour {
  Animation anim;

  // Use this for initialization
  void Start () {
    anim = GetComponent<Animation>();
    Debug.Log("Start");
  }

  // Update is called once per frame
  void Update () {
    if (Input.GetMouseButtonDown(0)) {
      anim.Play("grow");
      Debug.Log("Your plant is growing!");
    }
  }
}

You might notice that Input.GetMouseButtonDown(0) refers to a PC mouse rather than a touch device, but it still works!  I’m assuming it’s similar to making a website mobile friendly: click events typically still work on touch devices.

I am curious how Unity deals with touch events (like swipes / drags / etc.), however, and that is something I am going to look up…now 🙂
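
Here’s a minimal sketch (not code from this project) of what explicit touch handling looks like with Unity’s Input.GetTouch API – the 50-pixel threshold for telling a tap from a swipe is just a made-up number:

using UnityEngine;

// A rough sketch of handling raw touches in Unity: GetTouch(0) gives you the
// first finger, and TouchPhase tells you whether it just landed or lifted.
// The 50-pixel swipe threshold is a placeholder value.
public class TouchInput : MonoBehaviour {
  Vector2 touchStart;

  void Update () {
    if (Input.touchCount == 0) return;

    Touch touch = Input.GetTouch(0);

    if (touch.phase == TouchPhase.Began) {
      touchStart = touch.position;
    } else if (touch.phase == TouchPhase.Ended) {
      float dx = touch.position.x - touchStart.x;
      if (Mathf.Abs(dx) > 50f) {
        Debug.Log(dx > 0 ? "Swiped right" : "Swiped left");
      } else {
        Debug.Log("Just a tap"); // this is where anim.Play("grow") would go
      }
    }
  }
}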

Introducing: Holodog!

My newest infatuation is Unity, a game engine that in a matter of years has become a prime breeding ground for VR/AR development and experimentation.

Since November,  I’ve been collaborating on a Hololens application called “Holodog,” a Tamagotchi-esque virtual pet that you can interact with and actually see in the space you are in.

My Holodog team (Estella Tse and Katie Hughes) and I had little experience with creating a full-blown application in Unity.  And like the brave warriors we are, we decided to take on the challenge of learning it in a hackathon environment over the course of a single weekend.  Forcing myself to learn so much in a short amount of time actually made me realize that developing for AR/VR isn’t as out of reach as it had seemed.  Special thanks to our mentor, Livi Erickson, for helping us troubleshoot and guiding us when we needed it.

Here’s an example of our application in action, demoed by the wonderful Estella Tse.  On the left is our holographic dog, Buster.  Estella is able to see Buster when she puts on the headset.

 


A little overview of the Hololens –

If you’re not familiar with the Hololens, it’s an Augmented Reality headset released by Microsoft in March 2016.  It utilizes light refractions and amazing physics magic to create the illusion that virtual 3D objects are in the same space as you. Currently there is only a developer edition available, and the consumer version has yet to arrive.  

The main inputs that you can use to interact with the ‘holo-world’ are:

  1. Gaze
    This is essentially a circular cursor, except it’s in the center of your view in the Hololens (there’s a rough sketch of how a gaze cursor works right after this list).
  2. Gesture
    A large part of interacting with your surroundings is essentially signing with your hands.  Some of the main gestures:
  • Pinching / Grasping (what I call it)
  • Scrolling
  • Adjusting / Moving things around
  • Getting a menu
  3. Voice
    Hololens has a built-in “Speech to text” voice command feature.
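
If you’re wondering what’s actually behind that gaze cursor, here’s a rough, hypothetical sketch of the usual approach: raycast from the camera (i.e. your head) along its forward direction and park a cursor object wherever the ray lands.  This isn’t our Holodog code, just the general idea:

using UnityEngine;

// Hypothetical gaze cursor: the Hololens camera is effectively your head, so a
// ray from the camera's position along its forward vector is "where you're
// looking". "cursor" is a placeholder object you'd assign in the Inspector.
public class GazeCursor : MonoBehaviour {
  public GameObject cursor;

  void Update () {
    Transform head = Camera.main.transform;
    RaycastHit hit;

    if (Physics.Raycast(head.position, head.forward, out hit)) {
      cursor.SetActive(true);
      cursor.transform.position = hit.point;                           // sit on whatever you hit
      cursor.transform.rotation = Quaternion.LookRotation(hit.normal); // align with the surface
    } else {
      cursor.SetActive(false); // nothing to land on
    }
  }
}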

Other features include:

  • Camera
    • There’s a built-in camera in the Hololens, which is pretty useful.  You can livestream what you’re seeing to others and take pictures.  You can also integrate computer vision APIs to do complex things like image or object recognition using plugins like Vuforia (tutorial for this coming soon!)
  • Spatial mapping
    • This creates a low-poly map of your surroundings, making sure that holograms do not show up in the middle of a couch, or in a place that’s not visible to you.
    • There also is a coordinate system that the Hololens uses, so you can use…
  • …Spatial anchors!
    • You can store the placement of holograms within the local storage of your Hololens.  So if you’re in the same space and restart your device, your holographic dinosaur would be staring at you from the same location where it was set (there’s a little code sketch of this right after this list).
  • Spatial sound
    • You can create the effect that a certain sound is coming from a certain location.
  • There are many more features that I don’t have much experience using yet, but feel free to check out the details here!
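
Back to spatial anchors for a second – here’s a hedged little sketch of what saving and restoring an anchor could look like with Unity’s WorldAnchorStore.  The "dog" id is made up, and depending on your Unity version the namespaces are UnityEngine.XR.WSA / UnityEngine.XR.WSA.Persistence (older versions used UnityEngine.VR.WSA):

using UnityEngine;
using UnityEngine.XR.WSA;             // WorldAnchor (UnityEngine.VR.WSA on older Unity versions)
using UnityEngine.XR.WSA.Persistence; // WorldAnchorStore

// Hypothetical sketch: keep a hologram in the same real-world spot across
// restarts. "dog" is a made-up anchor id, not something from Holodog.
public class AnchorDog : MonoBehaviour {
  void Start () {
    WorldAnchorStore.GetAsync(store => {
      // Try to restore a previously saved anchor onto this object...
      WorldAnchor existing = store.Load("dog", gameObject);
      if (existing == null) {
        // ...otherwise anchor the object where it currently sits and save that.
        WorldAnchor anchor = gameObject.AddComponent<WorldAnchor>();
        store.Save("dog", anchor);
      }
    });
  }
}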

How you can get started –

To be honest, I began writing a full-blown tutorial on building and deploying a first application to the Hololens, but I realized that a lot of the knowledge I got was from the Microsoft Developers tutorials, which are actually pretty straightforward for the most part.   Here are the two tutorials I would definitely urge you to do if you’re new to developing for this device:

  1. Build and deploy a basic application
  2. Add scripts to objects and use the Hololens emulator

A few notes on these tutorials:

  • Since documentation tends to be updated less frequently than the technology itself, if you find yourself questioning what the “Hololens SDK” or “Hololens Toolkit” is for Unity, know that you will NOT need it.  Unity now supports the Hololens out of the box, so you do not need an additional SDK when developing for it (yay!).
  • Once you begin creating Event Managers, know that you do not have to write and drag a separate EventManager for each input script onto your “OrigamiCollection”.  You can create an empty GameObject, add all of your Gaze, Gesture, and Speech Managers onto it (they should already be available if you search for them, no need to copy/paste from the tutorial), and make it the parent element of everything.  That will allow you to access those functions from all parts of your application without needing all of your objects to be children of OrigamiCollection.
  • This tutorial will be useful if you want to do things like screen capture what you are seeing within the Hololens, take pictures, or allow someone to view a demo without putting on the headset themselves (there is a latency issue with this method, but so far I haven’t found a better one – please let me know if there is!)

Troubleshooting.  You will most likely run into bugs; that is the nature of development, especially with any emerging tech.

  • I resolved some of them using my mentor Livi Erickson’s guide to troubleshooting Hololens errors (Thanks again!).
  • When I was initially deploying my app in Visual Studio, I received this fun error:
    Unsafe code requires the `unsafe' command line option to be specified

    I ended up creating an smcs.rsp file in my Assets folder and adding the line -unsafe within the file.  It didn’t make me feel great, since this is most definitely a hack, but this was a hackathon and so I persisted.  Here’s where I got that information.  You will probably have to restart Unity and Visual Studio after doing this.

  • You might also see this error in Visual Studio if you restart your device after debugging:
    DEP0001 : Unexpected Error: -2145615869

    The majority of the time what helped fix this was restarting Visual Studio entirely.  You should also pause the debugging process whenever you aren’t using it, since you might accidentally rebuild and cause this error again.  And remember: When in doubt, restart Visual Studio because that is probably your issue 🙂

Please feel free to reach out via Twitter @nerdyreddy if you’re curious how we created Holodog!

October Update: I got a new job, y’all

It’s been a while since my last post, so I wanted to post an update since a lot has changed in a few months.

After my time learning at General Assembly was over, I decided to do an apprenticeship program called Future Academy at AKQA, a global digital creative agency that works with clients like Audi, Nike, Activision, and some more that I think I’m not allowed to say.

In this 3-month program, artists/designers/techies come together and work on short sprints with AKQA employees to create new products or opportunities for clients.  One of my favorite projects was one where we were given an open-ended prompt to create a product that we could concept, prototype, and pitch in a few days.  My team and I created a product that helps you never lose a shower thought again.  Here’s more-or-less how our pitch went:

Do you ever have brilliant ideas in the shower, but forget them as soon as you step on the bath mat? Introducing…STREAM, a Nest-type device that allows you to record a note and transcribes it to Evernote.

I was actually able to prototype this application using a Raspberry Pi (my first time using one!), the Google Cloud API, the Evernote API, a Flic Bluetooth button, and Python as the language.  I have a presentation on my journey in creating it that I might put up.

Here are pictures of my cohort from my fellow future academist Yeoj’s camera.


At the end of the program, I had an offer to stay on.  So since September, I’ve been working as an Associate Software Engineer!  Currently I’m working on the front-end of the Audi website. I work mostly with JavaScript and SASS at this point (omg, maybe too much SASS for me to handle *z-snap*).


Other happenings that aren’t job-related:

  • I live in Oakland now. Patrick and I have an apartment with a patio filled with succulents and flowers, and I’m about a five minute walk from my favorite bakery. Life ain’t too bad.
  • I joined a rock climbing gym. Here’s to getting ripped / ripped off soon since I am scared of my regular gym commitment.
  • Like most people here, I have the upcoming election in mind more often than I wish I did. I probably tweet about it more than I should. I actually think lately I have *only* been tweeting about it & it has consumed my brainspace.  I also have been watching more and more videos of baby animals lately in order to cope, so there is only nonsense in my mind right now.

TPFMIL (The Past Few Months I’ve Learned)

This blog has gotten dusty since I have been too busy finishing up my time at General Assembly.  I already miss going on the train to SF, drinking a variety of teas, and attempting to figure out how to use a new framework with my classmates (*nostalgic sigh*).

In true listicle fashion, here are things I’ve learned in the past few months, both technically and emotionally:

  1.  Learning to code web applications is *so* much more than just mastering the syntax and writing algorithms.  It’s also about the framework / other tech you’re using and understanding its documentation; it’s about data structures and the underlying computer science that helps you determine how things work; it’s about breaking down your problem into questions that you can Google/StackOverflow.  And much more than that, to be honest.  It’s a never-ending, but exciting rabbit hole to fall into.
  2. Growth mindset, y’all.  As an ex-Khan Academy employee who preached the growth mindset in classrooms (and even created a lesson plan for teachers on the topic),  I felt a deep empathy for my middle-school math students when I was going through this course. I kept having to tell myself:  Learning is hard, and if it’s not, you’re probably not doing it right.  My biggest stumbling block was with the implementation of the Model-View-Controller concept.  I realized that I had to be intensely resourceful and do additional homework to really understand it. It’s silly that even as an adult it’s easy to be discouraged by basic struggles in learning a new concept. However, being raised in an environment where I was programmed (pun intended) to think that natural intelligence was more valuable than effort, I have to regularly remind myself that learning is supposed to be difficult.
  3. Specificity is key.  I constantly think about a talk that one of my former coworkers  (Laura Savino) gave about the importance of specificity in both programming and life.  Once you narrow down your problem to its most detailed form, it becomes more manageable to tackle (both emotionally and mentally).
  4. Code with people and find communities that support you.  I’ve found that especially with coding, learning with others is so beneficial.  Even as an introvert, simply talking my problem out loud with others can move me forward.  I also realized that when two people are learning the same new technology, they each find out totally different things about it, and both of them learn twice as much.  As a new programmer, getting involved in a community with both experienced and junior level developers can open your eyes to new technology and/or different ways to do X (note: not the drug).  There is an endless amount of information online about programming, but it doesn’t necessarily help you in the same way that talking to actual humans can.
  5. Pseudocode.  Draft.  Refactor.  Repeat.  (Isn’t it cute that this is a recursive function?)  This is true with any art, and just as true with coding.  Cowboy coding works for a little while, but it sucks in the long run.  It’s better to know what direction you’re going, systematically find a way to get there, and then polish it up at the end.

Next blog post to come…Probably something about MAPS! Be excited.

After two weeks

I naïvely thought that I would be able to keep up a blog during my time at General Assembly’s web dev bootcamp.

Oh, ignorant me.

This will be a quick update/re-cap, as I’m currently frustrated with CSS and JavaScript.

First week:

  • I met everyone! There’s a class of about thirty people and three instructors, and everyone has been very supportive and helpful.  All of us come from very different backgrounds, so it’s been exciting to see us all changing our careers at once.
  • I learned a little more about how the Internet works, though I could always learn more. We quickly went over HTML, CSS, and basic JavaScript DOM manipulation.
  • On the third day, I created this Resume/Portfolio site. Much of the information is false right now, but it’s more about the creation and less about the content….though I do wish I had 60 years of programming experience 🙂
  • Over the weekend, our project was to create a tic-tac-toe game, which felt somewhat advanced for just the first week. I have had practice with DOM manipulation and JavaScript in the past, so the learning curve was less steep, though it was still a struggle.  Let’s just say I did not get much sleep the night before.

Second week:

  • We learned several different things this week.  We started out with more practice using GitHub, which was especially useful since it’s still not immediately intuitive to me.  We also learned more about functional programming, callbacks, forms, jQuery, Bootstrap, Prototypes, OOP, and Ajax. It’s such a large variety of things, yet I see how they can all be used together.  Again, I’ve had some experience with some of this before, but I definitely needed the refresher.
  • I created a website where you input a photo of a sloth and a caption, and it posts the two onto the blog.  It’s kind of great.

Okay, really need to go and do my homework. Will post links to my projects later if I get any time.

Onwards!

Class starts tomorrow!

In the past few months, I have been teaching myself how to web-develop part-time (Note: “to code” seemed like only a part of the puzzle, the verb “web-develop” felt like a more accurate description).  This has involved sitting alone in my apartment either obsessively watching videos of how the Internet and data structures and sorting algorithms work, successfully and unsuccessfully pushing things onto Github, trying to arbitrarily hack at CSS hoping that it will center my content, attempting to solve ridiculous Ruby problems, and pulling my hair out after realizing that the only error in my JavaScript was a missing semi-colon.  Also, it admittedly has involved a lot of getting side-tracked and watching YouTube videos of unlikely interspecies animal friendships and wondering where two hours of my day went.

It’s been exciting, and at times I have even surprised myself about how much I was addicted to fixing bugs or figuring out an algorithm that I was initially convinced I wasn’t capable of solving.  I sometimes visualize certain levels of my brain being unlocked whenever this happens (most likely a side-effect of downloading Monument Valley recently and the gamified design of many educational CS applications).

When making my Rock, Paper, Scissors app last month, I felt a compulsion that I hadn’t felt in a while.  I was enjoying this work and felt confident in my ability to get better in the future.  I also realized how fun it could be to both design & develop your own application.

Tomorrow, I will finally be able to do all of these things with the guidance of mentors and the camaraderie of about twenty-five of my classmates.  I’m pretty psyched, for lack of a term that doesn’t make me sound like a SoCal surfer dude.  If it’s not apparent already, I am one of the annoying adults who always loved school, and thrive in a learning environment (Yeah, I always got invited to parties too….).

And of course, perhaps above feeling excited, I am scared. Anxious. Fearful.  Changing or developing a new career path is not the easiest thing to do, especially when you know that you’ll eventually have to compete for positions with people who are younger and more experienced than you.

However, I’m starting to see this as a positive, and I’m not only telling myself this repeatedly to quell any irrational anxiety of failure.

From afar, the tech industry seems intimidating and filled with precocious twenty year-old boys with glasses who build ‘disruptive,’ ‘innovative’ apps.  But from my experience dipping my toes into the water, my toes and sole/soul are still intact, for lack of a better metaphor.  I’ve noticed that for the most part, experienced techies are friendly and happy to share their knowledge with aspiring techies.  I’ve realized that the tech landscape is changing daily, which makes learning a new constant for the industry as a whole.  As long as you know the basics and are willing to work hard and learn,  it can be a friendly environment for newcomers (obviously some MAJOR caveats included depending on who these newcomers are).

Okay, I should go to sleep now rather than continuing this digression.  I’ll try and update tomorrow.

P.S. Happy MLK Day to everyone!  Without his leadership, I would not be the person I am today.  I will end with a quote that is mildly ironic to attach to this blog post, but is a good reminder that communicating with people is more important than communicating with computers.

“We must rapidly begin the shift from a “thing-oriented” society to a “person-oriented” society. When machines and computers, profit motives and property rights are considered more important than people, the giant triplets of racism, materialism, and militarism are incapable of being conquered.”
― Martin Luther King Jr.