[English] Making the Donald Trump Simulator in 30 days: Part 1

 

As I mentioned in my last post, I’m currently focused on making 3D content.

So, in order to get good quickly, I decided to publicly challenge myself to make and ship a game in 30 days. Sure, there are less stressful ways of getting better at something. I did take a few tutorials on unity3d.com to understand the basics, and I could continue to take them all, as well as enroll in a few online classes. But that’s not an efficient way of learning.

The best way to learn, in the 21st century, is to start making things right off the bat, and Google anything you don’t know along the way. The great thing about the Internet is that somebody else has already encountered the problem you’re having, and has written the answer for you.

So, despite it being the 3rd month since I first downloaded Unity (the 3D game engine) and having never made a game before, I decided to make the Donald Trump Simulator. It’s a game where Trump chases you while you throw ‘Facts’ at him. That was about all I had prepared before starting this project. Planning more than this would be inefficient, because I don’t have any clue about what is easy and what is hard in making a game.

Also, I’m a huge believer of ontological design:

…we design, that is to say, we deliberate, plan and scheme in ways which prefigure our actions and makings — in turn we are designed by our designing and by that which we have designed. – Anne-Marie Willis

Ontological design, in practice, simply means that, as I make this game, what I learn to be possible (or impossible) will mold the game’s ultimate direction. It’s important to embrace the limitations you face, because while you’re still a novice at something, nothing will go exactly as planned. Some people even say constraints are what make us more creative.

So here was my process of creating the Donald Trump Simulator in 30 days:

Environment

The first thing I did was set up the environment (the map). A really creepy one. One problem, though: I knew that a game’s environment takes a huge amount of time to create. If I started from scratch, the entire month could be spent just making the world, and I only had 30 days to finish the game.

Luckily, Unity provides a marketplace called the Unity Asset Store where you can find 3D objects, maps, sounds, code, or just about anything you need to create a game. If you don’t have a big developer team to make everything you need from scratch, it’s the perfect store to get what you need.

assetstore

After searching around, I found an environment that could serve as a good foundation for my horror game. Best of all, it was free!

cityblock

As you can see from the picture, the downloaded environment was inherently creepy. But I wanted to make it even creepier, and get rid of the cowboy-ish vibe. So I took away the mountains, dimmed the environment lights, changed the skybox (sky material) to a darker pattern and also added thick fog to the whole scene.

Screen Shot 2016-09-01 at 10.31.46 PM
 

BEFORE

Screen Shot 2016-09-01 at 10.31.33 PM

AFTER

Screen Shot 2016-09-01 at 10.32.28 PM

Fog + minimal lighting = creepy.
Literally any object that comes out of the fog towards the camera will terrify the player.

First Person Camera

Next, I added a first-person camera and a camera controller (so I can move the camera/player). Unity is actually nice enough to provide a standard first-person camera rig, with the keyboard controls (WASD & arrow keys) already assigned to moving the camera. Just go to Assets -> Import Package -> Characters. Once imported, drag the First Person Character into the scene and it just works.

Screen Shot 2016-09-01 at 10.50.16 PM
 

AI


AI was easy. Especially since all I needed was the enemy (Trump) to chase the player. So far, Unity has made everything that I thought would be hard into a piece of cake. 🍰

The enemy AI simply needs to chase me while navigating around buildings and walls. That means all I need to add is a Nav Mesh Agent to the enemy object.

screen-shot-2016-10-11-at-10-12-12-am

Then add a script to tell the enemy object to chase the player object.


screen-shot-2016-10-11-at-10-21-38-am
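In case the screenshot is hard to read, the core of a chase script like this is tiny: in Unity C# it usually amounts to calling the NavMeshAgent’s SetDestination with the player’s position every frame, and the agent handles pathing around obstacles. Here’s an engine-agnostic Python sketch of the chase idea itself (the names and numbers are illustrative, not from my actual project):

```python
# Engine-agnostic sketch of "chase the player": every tick, move the
# enemy a fixed step toward the player's current position. In Unity,
# the NavMeshAgent does the pathfinding; the script only supplies
# the destination each frame.

def step_toward(pos, target, speed):
    """Move `pos` toward `target` by at most `speed` units."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:
        return target  # close enough: arrive exactly
    return (pos[0] + dx / dist * speed, pos[1] + dy / dist * speed)

# Simulate a few frames of the chase.
enemy, player = (0.0, 0.0), (3.0, 4.0)
for _ in range(10):
    enemy = step_toward(enemy, player, speed=1.0)
print(enemy)  # the enemy has reached the player: (3.0, 4.0)
```

Because the player’s position is re-read every tick, the enemy keeps adjusting even when you run away, which is all the “AI” this game needs.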

After that, I bake (render) the nav mesh so that the nav mesh agent understands the map it’s being placed on. (This screenshot is from after the game was complete.)

screen-shot-2016-10-11-at-10-07-38-am

There’s a great tutorial on nav mesh agents here.

Making Trump

Initially, I thought making Trump would be the hardest task of this project. I’d never created a complex 3D object before.

Again, I didn’t want to spend two weeks on this. I tried to think of the simplest way to get the best result. I considered ordering an actual Halloween Donald Trump mask, 3D scanning it with the Autodesk 123D app, then sticking it onto a body asset I’d find on the Unity Asset Store. But that wouldn’t be a solution for future projects.

I found a web app that lets you upload 2D images and make 3D models out of them. The result was subpar and a bit too loony. It also wouldn’t let me configure the hair, which is a huge problem if I’m trying to recreate Trump’s beautiful hair.

trump face
Then I found Adobe Fuse. Fuse is the Photoshop of 3D character building, with customizable body parts and clothing. Best of all, it’s (currently) free. It’s like an in-depth version of the character customization you find in games like Skyrim or Tony Hawk’s Underground.

create-3d-character-adobe-fuse-step-1

Here was my first rendition of Trump:

screen-shot-2016-08-19-at-12-42-25-am

It was awful. It looked nothing like Trump. At best, it looked like Trump’s imaginary uncle who was a former bully in high school. The hair was also nothing like Trump’s. But there’s only so much you can do with a pre-made selection of body parts.

But then I discovered a tool in Fuse that let me directly alter the meshes (shapes) by dragging them! Now we’re talking.

tweet4

Things were looking up. Trump was a bit frog-faced, but with enough time, it looked like I could make a good replica of him.

Everything was looking good.
Until I hit Ctrl+Z

tweet5
Huh? What?!

*25 minutes of denial and searching for fixes on forums*

Noooooo!!!!!

The Ctrl+Z function was bugged! If you messed with the mesh directly, it couldn’t correctly undo your actions. Now I knew why Adobe released Fuse for free.

With the face ruined and no way to undo it, I had to remake Trump. But he ended up looking a lot better:

screen-shot-2016-09-05-at-8-35-09-pm

I gave him manga hair too! I never used this guy, though; the change was too subtle for anyone to notice in-game.

screen-shot-2016-09-05-at-8-48-53-pm

I was satisfied with how Trump was looking. It was time to start animating him and bring him to life.😈

But before we get into animation

Before we get into how I animated Trump, it’s important to mention that making custom 3D animations is hard. Really hard. I see many of my favorite indie developers make games that don’t have a single character in them, because of how hard animations are. Think of all the muscle, joint, and rotation movements that go into something as simple as walking. That’s why 3D animators can make six-figure salaries these days.

Making this game made it clear why it’s usually impossible for a single person to make a high quality game built from scratch and compete with big companies.

We, as a global culture, are obsessed with startups, because the story of a few people getting together to compete against multi-million-dollar companies is nothing short of extraordinary, and yet this disruption story is happening all around us. Kodak, with 17,000 employees, filed for bankruptcy the same year Instagram, with 16 employees, was bought for 1 billion USD.

But disruption in gaming is a different story. The more people you have doing architecture, characters, animation, prop modeling, UV mapping, material design, particles, and code, the more beautiful and robust your game will be. A Grand Theft Auto 5 (GTA5) built by 1,000 people will certainly be better than a GTA5 attempted by one person in the same time frame.

But I don’t think that’ll be true for long.

With every day that passes, it’s getting easier for solo creators like me to build beautiful and robust games, thanks to free assets, new tools, and the ease of contracting work. Even three years ago, making this Donald Trump game would’ve been impossible, since Adobe Fuse, the free environment I used, and the web application for animation I’m going to show you in the next post didn’t exist. And that’s the important part.

With the newest tools, hacks, and an inventive perspective, we have the ability to make our imaginations into reality, all on our own.

That’s what I hope to convey to you in these posts on how I made the Donald Trump Simulator in 30 days.

That’s what I hope to convey in almost all of my blog posts.

In part 2, I’m going to tell you how ridiculously easy the animation process was. (Post coming soon)

[English] Why I’m Making VR and 3D

Two years ago, I quit my day job as a venture capitalist. Meeting entrepreneurs on a day-to-day basis ignited my drive to become a maker myself. Since then, I’ve started an Airbnb empire, designed an app, built a few websites, and given a TEDx talk about the process.

When I put it on paper, it sounds like I’ve done a lot. The truth is, I just have no focus. Great, I’m a maker now, but what do I truly want to make? I believe I’ve started to figure it out.

I’ve decided to focus all of my energy into VR and 3D.

Why?

Reason 1: The ‘true calling’ factor.

In the past I’ve had trouble committing to any project. My half-finished iOS apps, chatbots, and DJ-able Nintendo Power Glove can vouch for that. 3D and VR have been different. I’m absolutely fascinated with the ability to click and code-up an entire universe from my bedroom. It’s addicting, beautiful, and I can’t get enough of it.

It’s exhilarating to see your own code come to life:
(better w/ 🎧)

 

Reason 2: It’s the future, not a fad.

Every year, information technology evolves into increasingly immersive experiences. Look at Facebook. At first, we shared blocks of text. Then, we began sharing pictures and emojis. Soon videos began to fill up everybody’s feeds. Now, draggable 3D images are shared, and Facebook even bought Oculus, the company behind the Rift, in hopes of letting users share experiences of even higher complexity.

And this trend is bigger than the evolution of your SNS feed. This is the entirety of how we are beginning to interface with information technology. Think of the last century’s most beloved pieces of technology: the telephone and the television. Over time, phones and TVs have come physically closer and closer to us: first the house, then the car, then the hand, the pocket, and soon, with VR goggles, the head. We clearly can’t wait to shove our devices into our faces. This trend will continue, as there are already plans to create contact lens displays. And at every step of this future, it’s hard to see HTML and CSS taking part in it. Hence, I’m better off becoming an expert in 3D coding and 3D front-end rather than riding the late train to generic webpage creation. I’d rather be part of that transformation than merely stand on the sidelines.

Reason 3: ‘We are as gods now’

“We are as gods, we might as well get good at it.” – Stewart Brand

Stewart Brand said that decades ago, referring to the effect humanity has on the planet. Now it’s coming true in a literal sense, as we create entire worlds made of bits and bytes.

We’re increasingly living in a hybrid reality of the real and virtual.

Right now we access the digital world through screens, but with virtual reality, augmented reality, mixed reality, and projection mapping, the line between what is ‘real’ and what isn’t is blurring. It’s simply getting harder to differentiate between digital material and physical material, and most importantly, it’s starting to not matter what matter is composed of.

But what happens then? Well as Terence McKenna would say, “we are entering universes of our own construction.”

We actually already live in constructs of our personal reality. That’s what earphones do. They disengage you from the existing aural data (sound) and replace it with data you want to hear. You’ve customized the reality of what you hear. In this sense, VR is nothing new; it’s just customizing visual data. Just as we use earphones to construct our own aural reality, we will soon step into constructions of our own visual realities. How could I possibly not take part in this?

Reason 4: It’s an artistic form of entrepreneurship

I have a weird hang-up about startups.

If the product is a copy of what someone else has done in another part of the world, it’s distasteful to me. Most of the startups here in Tokyo are Japanese clones of Silicon Valley products. Ugh. 

I also believe that a product is only truly yours if you (mostly) made it with your own hands. 

I know, it’s a weird hang-up. It’s a personal bias. While I was a VC, I saw how useless the business guy always was in the product-making process. How early they’d go home compared to the designers and engineers. When I’d participate in hackathons, I saw the hipsters and hackers work-work-work through the weekend, all for the hustler to present it to the judges in the end as if it were their own creation. I didn’t want to be like that, so I learned how to design, code, and analyze data. That’s when I realized that in this era of abundant educational resources, there is no reason you can’t learn to code and design, other than laziness.

So in my eyes, if you want to make a startup, you can’t plagiarize and you have to make it on your own. I guess my mind treats startups similar to an art form.

3D and VR are nothing if not art. They’re a creative endeavor that fits my ridiculous startup-ethics Venn diagram.

Reason 5: Direct access to goosebumps

I skied in VR the other day. I put on the device, and rode a machine that simulated the way I’d turn on normal skis. I felt like I was actually sliding down a mountain. Within two minutes of playing, I actually got into flow the same way you do when you ski.

 

When you control the data feed for someone’s eyes, ears, and movement, you can trick their brain into believing that the experience they are having is real and personal. I would argue that, in VR, the willing suspension of disbelief is almost not a choice.

That means any emotion that you want the audience to have is amplified compared to other traditional mediums.

Soon, Shoin.wf might be a domain you experience, rather than read. You might come to my VR blog to experience sheer cognitive ecstasy, or traumatizing fear. 

As a creator, what better way is there to get a message across? 

Reason 6: Discipline

I’ve come to terms with the fact that you can’t do everything. Everybody wants to do everything, because everybody is good at a lot of things. Those who truly master their craft are the ones who said “no” not only to opportunities that would distract them, but to their own wandering minds and ambitions as well.

Goodbye chatbots and random dev work from friends, I’m locking down and focusing on VR and 3D. I’ll still run my Airbnb business, though, as it’s highly automated and it keeps me fed.

Conclusion

VR and 3D suits me well, and I believe it’s the future.
At first I’ll be using Unity3D to teach myself the basics of 3D game design. VR will come after that, as if I can make a 3D game I can just export it into VR. But more importantly,  I don’t have a VR rig,  as there’s currently no VR device compatible to the low specs of the Mac.

[English] How to automate Airbnb cleanings using Zapier

Last year, I started a mini Airbnb real estate investment trust called REAT. I find sub-leasable properties, lease them with investor capital, rent them out on Airbnb, and split the profit with the investors. It now has 10 houses under management and generates five figures a month in revenue, and I’m still the only employee. One of my biggest headaches was organizing cleanings via house cleaning services. Today, I’ll share how I fixed all of that.

The problem with manual entry

Manually ordering cleanings cost me a fortune. It cost me time, since I had to do a repetitive task almost every week. It cost me reviews, because there were occasional times when I forgot to reserve a cleaning. It cost me emotional drain from worrying whether I had entered any data wrong. Worst of all, it cost me hundreds of dollars from ordering cleanings on the wrong day, no matter how carefully I felt I was booking them.

That’s more than enough reason to make a robot do it.

Time to automate

NOTE: The quickest solution for me was using the Tokyo Airbnb cleaning service, HouseCare (yes, that’s me in a quote bubble on their website; I was their first customer). They let users send them the Airbnb-generated iCal link, and they automatically book cleanings from that day on. Super simple. I love them.

But HouseCare charges a lot for big houses. To be cost-effective, it made sense for me to order cleanings for my big houses through a different cleaning service called Yadokari.

However, Yadokari requires users to book by creating Google Calendar events in a specific calendar. I used to save up reservation confirmations in my Gmail and create the entries by hand once a week.

Screen Shot 2016-04-22 at 3.04.20 PM

If Airbnb had an open API, we’d be able to connect everything directly, but they don’t.
Lucky for us, Airbnb’s reservation confirmation email for hosts always looks the same, so it’s easy to create a system around it.

Let’s dive in.

Gmail forwarding

First, we need to create a Gmail filter that automatically labels anything that matches the filter conditions, and forwards anything with that label.

Screen Shot 2016-04-22 at 3.46.52 PM

The filter in the photo above means: if there is an email whose subject contains “Reservation Confirmed” and that has the words “4BR” or “6BR” or “Cozy Loft near Asakusa” (which are all parts of the names of my Airbnb listings), the filter will activate. Now we click “Continue”.
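To make the filter’s boolean logic concrete, here it is expressed as a small Python predicate. (Gmail evaluates this server-side, of course; this is just an illustration, using the listing keywords from my filter.)

```python
# The Gmail filter's condition as code: the subject must contain the
# phrase "Reservation Confirmed", and the message must mention one of
# the listing-name fragments.
LISTING_KEYWORDS = ["4BR", "6BR", "Cozy Loft near Asakusa"]

def matches_filter(subject, body=""):
    has_listing = any(kw in subject or kw in body for kw in LISTING_KEYWORDS)
    return "Reservation Confirmed" in subject and has_listing

print(matches_filter("Reservation Confirmed - 6BR Home near Asakusa"))  # True
```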

Screen Shot 2016-04-22 at 3.41.06 PM

We then specify the actions for the filter. The filter will apply the label “Airbnb Zapier”, then forward the email to an email generated by Zapier’s amazing Email Parser. Now let’s go get the @robot.zapier.com email address.

Parsing the email

Zapier’s Email Parser lets you define patterns in an email and automatically extract the data from any email you send it that matches that pattern. The catch is, the emails you send it have to be in a similar format. But for our purposes, it’s perfect.

Generate a @robot.zapier.com address and send the parser a few template emails from your inbox. The more you feed it, the smarter it gets.

Define any data you want to extract, but for our goal, we only need the title of the listing and the checkout date.

(I’m also collecting the link that goes directly to the chat between me and the guest. I might use it to automatically scrape the email of the user, which I use to send them the directions in a PDF file. That’s a task for another day.)

Connecting the parser to Google Spreadsheets with Zapier

Screen Shot 2016-04-23 at 5.51.28 PM

Zapier is a service that lets you connect APIs together without coding. That means you can set up your favorite services to talk to each other with a few clicks.

First, we set up the Email Parser as the trigger. It should fire when it receives new emails.

Screen Shot 2016-04-23 at 4.57.07 PM

Next, we set up Google Spreadsheets as the action. You could skip this step, and connect it directly to Google Cal, but then you’ll never know if there was an error. I also like using Spreadsheets as a database to keep track of all events fired. 

Set the action as the creation of a spreadsheet row.

Screen Shot 2016-04-23 at 5.09.37 PM

Make sure you’ve already created a spreadsheet with dummy data. Write the desired category of each column in the first row, too.

Screen Shot 2016-04-23 at 5.21.22 PM

From the drop down, connect the appropriate column category to the parsed data. Zapier makes it magically easy.

Screen Shot 2016-04-23 at 5.22.45 PM

For me, I needed to change the title of the listing into something my cleaning company understands (E.g. “6BR Home near Asakusa /6mins to JR” -> “両国”). So, I wrote a spreadsheet function.

Screen Shot 2016-04-23 at 5.30.42 PM

Code:
=IF(REGEXMATCH(INDIRECT(ADDRESS(ROW(),COLUMN()-4)),"6BR"),"両国",IF(REGEXMATCH(INDIRECT(ADDRESS(ROW(),COLUMN()-4)),"Cozy Loft"),"東上野",IF(REGEXMATCH(INDIRECT(ADDRESS(ROW(),COLUMN()-4)),"4BR"),"池袋","error")))

To explain:

  • REGEXMATCH(INDIRECT(ADDRESS(ROW(),COLUMN()-4)), …) checks the value of the cell 4 columns to the left of the current cell; that’s what the COLUMN()-4 is for.
  • IF that cell contains “6BR”, the formula outputs “両国”. If not, it falls through to the next IF statement and checks the next pattern.

If it doesn’t match any of them, it writes “error” into the cell. This is crucial, because the parser isn’t perfect. If it outputs the wrong data, you need to know immediately. More on that in the next step.
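If the spreadsheet syntax is hard to follow, here is the same lookup logic in Python, using the patterns and short names from my formula:

```python
import re

# Same logic as the spreadsheet formula: try each pattern in order,
# return the cleaning company's short name on the first match, and
# fall back to "error" so bad parses surface immediately.
LISTING_MAP = [
    ("6BR", "両国"),
    ("Cozy Loft", "東上野"),
    ("4BR", "池袋"),
]

def cleaning_name(listing_title):
    for pattern, name in LISTING_MAP:
        if re.search(pattern, listing_title):
            return name
    return "error"

print(cleaning_name("6BR Home near Asakusa /6mins to JR"))  # 両国
```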

Error notification zap

Screen Shot 2016-04-23 at 5.50.13 PM

Automation is great, but if we don’t know when the system has failed, we’ll always have to check if the cleaning orders have been made. So we need an error checker.

This time, make the newly created spreadsheet row from the last step the trigger. (If you have a premium Zapier account, you could just make one long Zap, but I’m still clinging onto the free account for as long as I can.)

Screen Shot 2016-04-23 at 5.50.31 PM

Add a filter that checks whether “error” is contained in the output of the formula cell from the last step.

Screen Shot 2016-04-23 at 5.50.49 PM

Make the action Zapier’s Outbound Email. We’ll send out an email if there’s ever an error, with the subject “Error with Airbnb Zapier.” Now if there’s ever a parsing problem, we’ll know! The less we have to use our brains to worry about things in daily life, the better 🙂

Google Spreadsheet to Google Calendar

Screen Shot 2016-04-23 at 6.02.28 PM

Finally, we’ll make a Zap that turns every new spreadsheet row into a Google Calendar event.

Screen Shot 2016-04-23 at 6.05.18 PM

Once you’ve connected the correct spreadsheet and calendar, what you put in is up to you. Google Calendar can understand the date string you parsed (“May, 7”) as a date input. Pretty neat, right?
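And if Google Calendar didn’t handle the parsing for you, turning that string into a date is still only a couple of lines. A Python sketch, assuming the “Month, day” format stays consistent (the year has to come from context, since the parsed string doesn’t include one):

```python
from datetime import date, datetime

# Parse a checkout string like "May, 7" into a real date.
# strptime fills in year 1900 by default, so we substitute our own.
def checkout_date(text, year=2016):
    parsed = datetime.strptime(text, "%B, %d")
    return date(year, parsed.month, parsed.day)

print(checkout_date("May, 7"))  # 2016-05-07
```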

No error check here, although I should add one. Honestly, the parser is the only section that worries me for errors.

Conclusion

That’s it! All that repetition, frequent worrying, and those mistakes are now automated to run faster and more accurately than I ever could.

And more importantly, the guests will be happier.