AR Projects
The era of VR and AR is coming, and I can see more people designing and modeling in virtual space, the way Gravity Sketch enables. We will see more AR games like Pokémon GO pop up on the App Store, and more amateurs like me learning 3D software and contributing to the metaverse someday.
Duke Start-Up Challenge
What does 3D have to do with the Duke Start-Up Challenge? It started with a course we took called Image & Video Processing. We were grouped into teams of 3-4 people and asked to brainstorm what the future would look like.
I shared my 3D drawings with my team and pitched them the future of Augmented Reality: could we find a way to use AR to solve some real-world problems?
After a couple of days of discussion, we decided to use AR to deliver instructions. So we started our first company, called "ARIS" (Augmented Reality Instruction & Solutions).
We started by testing on toys, delivering AR instructions to help people solve puzzles. Here are a couple of challenges we faced:
- How could the program differentiate puzzle pieces?
- How to render animation on top of the screen?
- What device and technology to use?
Here are some of the solutions we tried:
1. Depth camera
With the help of a depth camera like the Intel RealSense or the Xbox Kinect, we could scan the scene and generate a 3D point cloud. We then loaded it into the computer and used OpenGL to recognize objects within the point cloud. Once recognized, we needed to render feedback on two screens, one for each eye. The problems with this design were:
- The point cloud generated by the depth camera contains a lot of noise. It does not work for small furniture parts or puzzle pieces.
- Anything that touches the target will confuse the recognition program. This happens when a user holds a piece in hand or puts two pieces too close to each other.
- The required devices are too cumbersome to wear and too expensive.
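As a rough illustration of the noise problem above (not our actual pipeline), a common cleanup step is statistical outlier removal: drop any point whose average distance to its nearest neighbors is far above the cloud-wide average. The function name and parameters here are my own for the sketch:

```python
import math
import random

def remove_outliers(points, k=8, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbors is far above the cloud-wide average."""
    mean_dists = []
    for p in points:
        # Distances to the k nearest other points.
        neighbors = sorted(math.dist(p, q) for q in points if q is not p)[:k]
        mean_dists.append(sum(neighbors) / len(neighbors))

    mu = sum(mean_dists) / len(mean_dists)
    sigma = (sum((d - mu) ** 2 for d in mean_dists) / len(mean_dists)) ** 0.5
    threshold = mu + std_ratio * sigma
    return [p for p, d in zip(points, mean_dists) if d <= threshold]

random.seed(0)
# A tight cluster of 50 points inside a unit cube, plus one far-away noise point.
cloud = [(random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1))
         for _ in range(50)] + [(100.0, 100.0, 100.0)]
cleaned = remove_outliers(cloud)
```

The catch, as we found, is that this only removes isolated speckle; a hand gripping the piece produces *connected* noise that no distance threshold can separate from the object.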
2. Two phones in a VR headset
Compared to the solution above, this one is much cheaper. It is like an Oculus Quest 2, but in 2015. An average smartphone already has a camera, a processor, and a screen. To create a sense of 3D, we put two phones into a VR headset. To make a 2D camera recognize 3D objects accurately, we used Vuforia image targets, which work like QR codes. As long as the image target is placed at a precise location on the object, the phones can recognize the object's scale, location, and rotation. However, the problem with this design is:
The perspective from the camera is different from your eyes' view. When users put on the headset, the latency makes them feel dizzy. Besides that, it is tough to grab the target object because of the perspective shift.
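The reason a flat image target of known printed size can give a 2D camera real 3D depth is the pinhole camera model: the farther the target, the smaller it appears. This is a simplified sketch of the idea, not Vuforia's actual algorithm; the function name and numbers are made up for illustration:

```python
def target_distance_mm(real_width_mm, focal_px, pixel_width_px):
    """Pinhole camera model: distance = f * W / w, where
    real_width_mm is the printed width of the image target,
    focal_px is the camera focal length in pixels, and
    pixel_width_px is the detected width of the target in the image."""
    return focal_px * real_width_mm / pixel_width_px

# A 100 mm wide target seen 200 px wide by a camera with f = 800 px.
d = target_distance_mm(100, 800, 200)
```

Combined with the target's detected corner positions and orientation, this is enough to place a virtual 3D model at the right scale, location, and rotation on top of the real object.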
3. One phone held in hand
This is the most practical solution. We simplified it to just one phone and an app. The user scans the image target to get animated AR instructions, then puts down the phone and starts the assembly work. This way, they don't have to worry about expensive equipment, VR motion sickness, or perspective shift.
To test our solution, we purchased a set of ready-to-assemble furniture, built a 3D model of each furniture piece, and precisely placed image targets on all of them. Here is the showcase video we put on our Indiegogo crowdfunding campaign.
We made it to the second round of the Duke Start-Up Challenge; the final round would have required us to raise $3,000 through the Indiegogo campaign. However, what we made could only be helpful if furniture manufacturers collaborated with us.
We brought our demo to High Point Furniture Market and talked to one of the manufacturing companies. What I learned from the visit:
- Ready-to-assemble furniture has a very thin margin to stay competitive in the market. A paper instruction sheet costs only 2 cents, while AR instructions would cost much more than that.
- People who have a hard time understanding paper instructions can watch videos to learn how to assemble. Making a video is cheaper than developing AR.
- Training people to understand how image targets work is tricky, and the targets need to be scanned in a bright place.
- Putting image targets on each piece of furniture adds a lot of cost for the manufacturer.
- Customers would need to peel off the image targets themselves.
- The AR would be more beneficial if we could do error checking. (I built that feature in my test environment using a hit test. However, the conditions are stricter for the user: multiple image targets must stay in the camera view.)
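The error-checking idea above boils down to a simple geometric test: with an anchor target and a piece target both in view, compare the piece's measured offset from the anchor against the offset the 3D model expects. This is a hypothetical sketch of that check, not the Unity hit-test code I actually wrote; names, units, and tolerance are invented for illustration:

```python
import math

def assembly_ok(anchor_pos, piece_pos, expected_offset, tol=5.0):
    """With both image targets in the camera view, compare the piece's
    measured offset from the anchor against the expected offset taken
    from the 3D model (all units in mm)."""
    measured = tuple(p - a for p, a in zip(piece_pos, anchor_pos))
    error = math.dist(measured, expected_offset)
    return error <= tol

# Anchor target at the origin; this shelf piece should sit 300 mm to its right.
ok = assembly_ok((0, 0, 0), (302, 1, 0), (300, 0, 0))      # within 5 mm
wrong = assembly_ok((0, 0, 0), (250, 0, 0), (300, 0, 0))   # misassembled
```

This also makes the usability cost visible: the check only works while every involved target stays detected, which is exactly the stricter condition users struggled with.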
Overall, using AR to help assemble RTA furniture may not be the best use case for AR. I still learned a lot from the start-up challenge:
- How to make 3D objects and animate them in Unity 3D.
- How to write C# in Unity 3D and build an Android app.
- How to start a business by writing a business plan, making demo videos, and running a crowdfunding campaign.
ConnectWise View
ConnectWise Control is remote desktop software. As long as the user has a machine that supports remote desktop, CW Control can get the problem solved. However, to fix something like a printer or a Wi-Fi router, we have to send a technician onsite, which is expensive and time-consuming.
Video chat solved some of the issues. But for things that are hard to describe in words, like the reset button on a router, the VIN on a car, or one switch among a bank of controls on a server, accurately guiding the person who needs help to the right location and the right item is challenging.
After brainstorming with the team, the CTO of Control, a senior developer, and I formed a team to create a web app that provides AR instructions remotely.
We put video calling and barcode scanning into View. The following image shows our CPO demoing ConnectWise View (then called Perspective) at the IT Nation Conference to thousands of partners. In the demo, he looks through his camera, scans the barcode on a laptop, and views more data about the computer.
After that, we also added AR navigation and AR pins to CW View. When the host's PC connects to a guest's mobile phone through CW View, the program takes many frames from the video and stitches them into a panorama view of the room. The host can zoom in and out on the PC to locate a point of interest. We then render a green AR rectangle on the mobile screen to indicate where the PC's view is. The AR rectangle helps navigate the guest to the point of interest or to press a button at a certain spot.
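The core of that navigation step is mapping the host's point of interest, which lives at some heading in the panorama, onto the guest's live camera view. This is my own simplified reconstruction of the idea, purely horizontal and with invented names, not the shipped CW View code:

```python
def poi_screen_x(poi_heading_deg, camera_heading_deg, fov_deg, screen_w_px):
    """Project a panorama point of interest onto the phone screen.
    Returns the horizontal pixel position if the POI is inside the
    camera's field of view, else None (guest should keep turning)."""
    # Signed angle from the camera's center line, wrapped to [-180, 180).
    delta = (poi_heading_deg - camera_heading_deg + 180) % 360 - 180
    half_fov = fov_deg / 2
    if abs(delta) > half_fov:
        return None
    # Linear mapping: -half_fov -> left edge, +half_fov -> right edge.
    return (delta + half_fov) / fov_deg * screen_w_px

# POI 15 degrees right of center for a 60-degree camera, 1080 px wide screen.
x = poi_screen_x(105, 90, 60, 1080)
off = poi_screen_x(200, 90, 60, 1080)   # behind the guest: not visible
```

When the POI falls outside the field of view, the app can instead draw an edge arrow telling the guest which way to turn, and switch to the rectangle once the POI comes into frame.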
Create an AR video using Blender
Rendering 3D is cool, but moving the 3D into the real world is cooler. During the ConnectWise Strategy Coloring Competition, we were asked to color a piece of paper by hand or with digital painting, and I thought I could do more than that. So I asked my two-year-old daughter Rosie to be my actress and took a short video. Then I modeled Mars, put our strategy pyramid in, redrew the strategy UI in a futuristic style, and added a little AR animation. Here is the final result:
Code an AR app using ARKit
As a Harry Potter fan, I have always wanted to create something like the Daily Prophet newspaper, and with the help of AR, I was sure I could. Since I had made an Android app before, I wanted to create an iOS app this time. So I started learning how to write code in Xcode and how to use ARKit. After a couple of days of learning, I made an app and a video to demo it.