Week 8 Homework

This sort of functionality is what I believe makes the AR platform truly great. By constantly taking in the world around us and giving us enhanced information, much of our internal processing could be offloaded to the computers around us, freeing us to function in the ways humans are actually good at. One potential problem with something like this is clearly shown in the given video: a constant stream of scanned information about the world is extremely annoying and would only create more work. We would have to constantly parse everything being shown to us instead of the information simply being helpful.

To start, we would have to filter the constant barrage of information. For an eyewear-based AR system, one idea is a gesture system that lets a person specify exactly what they want to know more about. An initial idea: if a person points toward something in their field of view and the AR device can see what they are pointing at, the glasses would show the name of that object. If the person would like more information, they could keep pointing at the item for a few extra seconds, at which point the glasses would present expanded information on that specific item. For an auditory example, the user could tap twice on the frame of their glasses, and the glasses would visually display information based on what they are hearing at that moment. I would hope the glasses would also be trained to filter out background noise; if the information shown doesn't match what the user is looking to learn more about, they could simply tap the glasses again to cycle through all the different audio sources present at that moment.
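
To make the idea concrete, here is a minimal sketch of how that kind of gesture dispatch might look in Unity-style C#. Everything here is hypothetical – GestureDetector, GazeTarget, InfoDatabase, AudioScanner, and Hud are stand-ins for whatever a real glasses SDK would actually provide:

```csharp
using UnityEngine;

// Hypothetical gesture-driven info filter for AR glasses.
// GestureDetector, GazeTarget, InfoDatabase, AudioScanner, and Hud are
// assumed stand-ins for whatever a real glasses SDK would expose.
public class InfoFilter : MonoBehaviour
{
    const float HoldThreshold = 2f; // seconds of pointing before expanding
    float pointTime;                // how long the current point has lasted
    int audioIndex;                 // which detected audio source is shown

    void Update()
    {
        // Pointing: a quick point shows just the name; a sustained
        // point expands into the full information panel.
        GazeTarget target = GestureDetector.GetPointedObject();
        if (target != null)
        {
            pointTime += Time.deltaTime;
            Hud.Show(pointTime < HoldThreshold
                ? target.Name
                : InfoDatabase.ExpandedInfo(target));
        }
        else
        {
            pointTime = 0f;
        }

        // Double-tap on the frame: display info for what is being heard,
        // cycling through the audio sources (background noise filtered out).
        if (GestureDetector.FrameDoubleTapped())
        {
            var sources = AudioScanner.CurrentSources();
            if (sources.Count > 0)
            {
                audioIndex = (audioIndex + 1) % sources.Count;
                Hud.Show(sources[audioIndex].Description);
            }
        }
    }
}
```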


To build on the content of the information itself, we could allow different sorts of information to be shown depending on what gesture the user gives the glasses. For example, pointing at an object and then forming a circle with your hands could produce an approximate measurement of that object. Another option: if a user has the glasses locked onto some auditory information and then touches the bridge of the glasses, the glasses would display roughly how far away the audio source is and in what direction.

Giving a user so much information all at once is a recipe for disaster; no one would ever find use in such a device. If we can properly filter the information a user might want and allow them to control what they see at a given moment, their lives will only be improved.

Week 7 Homework

[Screenshots of the IKEA Place app]

My initial thought on applications like this is to extend the functionality they already include. Apps like IKEA Place show whether a piece of furniture fits in with your home's decor, but more importantly, whether it will actually fit where you plan to place it. What would also be extremely useful is a home renovation application, which would let you see the effects of different renovations: tearing down walls, changing your flooring, adding an island to your kitchen, etc. This tool would be useful in deciding exactly how you want to renovate your home before ever calling a contractor.

Another, slightly related possibility would be a tool placement application. Many products today are made with proprietary tools, which are hard to find and even harder to replace. With an application like this, you could show it a specific part that you want to repair. The application would then walk you through the repair, showing you the correct tool for each step of the process. If any step requires a proprietary tool, the application would give you a list of non-proprietary alternatives, and if multiple tools could work for a given step, it would let you choose among them. This would allow someone who may not have a lot of tools to still be able to repair something.

Fashion is another field that could be improved by an application like this. You could choose any article of clothing and show your body in the camera, and the application would show you how you would look wearing it. If you give the application your measurements, it could also determine what size would fit your body best. It could even help create outfits for you: put on some pants and have it generate the rest of the outfit based on what would look good with them. A further extension would be to input all of the clothing you currently own and have the application generate outfits from your wardrobe. This would be beneficial in two ways: first, the user saves time deciding what to wear; second, more of the wardrobe gets worn more often. Personally, I know I tend to stick to certain outfits because I like specific articles of clothing and only a few things match them well. By having my outfit decided for me, I would feel more confident branching out, wearing more of my own clothing, and trying different styles.

To go off of the fashion angle, an AR hairstyle application would also be very useful. Show your head in the camera and the application would show you how best to work with your hair texture to style it well. It could show simple things like how straightening or curling would look, how different gels, creams, pomades, etc. would look with your current hair, and even how different cuts and colors would look on you.

Homework 5 – Project 1 Thoughts

Fahrenheit 451 – Ray Bradbury 

Made by Jake TerHark

There are two pieces to this project that I love: the front cover’s animation and the UI.

The front cover has an extremely striking animation that immediately shows what you're getting into with this book. The firemen shooting flamethrowers make for a cool visual effect, enhanced by the fact that the fire follows the movements of the book to a degree. A simple use of particle effects turned an otherwise stale scene into something eye-catching.


The other piece of this project that I enjoyed was the UI. It has a clean aesthetic that is kept consistent throughout the entire project. By giving everything the same font, color scheme, and layout, Jake made a very good-looking UI. Also, by utilizing Unity's UI system instead of the 3D text the rest of the class used, the words look a lot cleaner and are more legible against the grey panels behind them.


The best part of this project was the simple addition of having the UI elements follow the direction of the user. By having the UI constantly face the user, the usefulness of this application goes up incredibly. While few people would consciously appreciate not having to manipulate the book in different ways to actually read the UI, they will definitely feel less frustrated using an application that is intended to make their lives easier. I took a look at the code and was surprised at how simple it was to implement: a single line in the update loop that sets the rotation of the UI to the rotation of the camera. A small addition that greatly improves the application as a whole.
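
For reference, a minimal sketch of what that line probably looks like, assuming standard Unity (the actual names in Jake's project may differ):

```csharp
using UnityEngine;

// Attach to the UI root: every frame, copy the camera's rotation so the
// panels always face the user no matter how the book is turned.
public class FaceCamera : MonoBehaviour
{
    void Update()
    {
        transform.rotation = Camera.main.transform.rotation;
    }
}
```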


Charlie and the Chocolate Factory – Roald Dahl

By Krunal Bhatt, Suhan Nath, Shiva Reddy

This project impressed me with its visual design alone. The front cover shows off the chocolate river from Charlie and the Chocolate Factory extremely well, and the rolling hills, candy, and other little elements are cute additions that round out the scene.

The raining candy is also an extremely fun addition to this project. Constantly instantiating new candy models and letting them fall makes the scene look very cool.
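
As a rough sketch of how an effect like this is usually built in Unity – a spawner that instantiates a candy prefab at a random position above the scene on a timer (the prefab and spawn values here are placeholders, not the team's actual ones):

```csharp
using UnityEngine;

// Spawns falling candy above the scene at a fixed interval.
// The candy prefab needs a Rigidbody so gravity pulls each piece down.
public class CandyRain : MonoBehaviour
{
    public GameObject candyPrefab;      // assigned in the Inspector
    public float spawnInterval = 0.2f;  // seconds between pieces
    public float spawnHeight = 3f;
    public float spawnRadius = 1.5f;

    float timer;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer < spawnInterval) return;
        timer = 0f;

        // Pick a random point on a disc above the spawner.
        Vector2 offset = Random.insideUnitCircle * spawnRadius;
        Vector3 pos = transform.position + new Vector3(offset.x, spawnHeight, offset.y);

        // Destroy each piece after a few seconds so candy doesn't pile up forever.
        Destroy(Instantiate(candyPrefab, pos, Random.rotation), 5f);
    }
}
```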


Also, I love how they made the chocolate river scene appear alongside the front cover of the book, as if it were the left side of an open book with the actual cover as the right. It adds a lot of extra real estate to the scene without making the extra space feel cumbersome.

Stargazing Application Thoughts

Since this is a fairly niche app, I don't foresee a lot of specific uses beyond a person stargazing and wanting to know exactly which constellation, planet, or other celestial object they are currently looking at. A way to add interest to this particular app (for reference, I'm using Startracker) would be to tap on a celestial object – a constellation, a planet, a star, etc. – and get information on said object. This could be a short summary of its significance, how it got its name, and maybe a link to its Wikipedia page.
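
If a feature like this were built in Unity, the tap handling could be as simple as a raycast from the touch point. This is only a sketch – CelestialInfo and InfoPanel are hypothetical components I'm assuming for illustration:

```csharp
using UnityEngine;

// On tap, raycast from the touch point into the scene; if a celestial
// object is hit, show its summary. CelestialInfo and InfoPanel are
// hypothetical components assumed for illustration.
public class TapForInfo : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            var info = hit.collider.GetComponent<CelestialInfo>();
            if (info != null)
                InfoPanel.Show(info.Summary, info.WikipediaUrl);
        }
    }
}
```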


Expanding this idea to a glasses-type solution could grow to encompass any possible information about the sky in front of you. My first thought is a glasses app dedicated to bird watching. Birdwatching can be an extremely difficult hobby at times due to the many minutiae that distinguish bird species; birdwatchers will sometimes record what they think a specific bird is and then have to go research what they actually saw. For example, did you know that there are over 330 species of hummingbird? By using an on-board camera, an AR-glasses application could scan for whatever birds are in front of it and give the species name immediately.
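
A sketch of what that scanning loop might look like – GlassesCamera, BirdClassifier, and Hud are hypothetical stand-ins for a real camera API and an on-device image-classification model:

```csharp
using UnityEngine;

// Periodically grab a camera frame, run it through a classifier, and
// overlay the species name. GlassesCamera, BirdClassifier, and Hud are
// hypothetical stand-ins for a real camera API and on-device model.
public class BirdSpotter : MonoBehaviour
{
    public float scanInterval = 1f; // classify roughly once per second
    float timer;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer < scanInterval) return;
        timer = 0f;

        Texture2D frame = GlassesCamera.CaptureFrame();
        var result = BirdClassifier.Identify(frame);

        // Only label sightings the model is reasonably confident about.
        if (result.Confidence > 0.8f)
            Hud.ShowLabel(result.SpeciesName, result.ScreenPosition);
    }
}
```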


In a roundabout way, an information-giving AR application could also send information back to help the population at large. An example could be a weather application. When a person looks outside, the application would scan current conditions – cloud formation, whether rain is falling, whether lightning is visible, etc. – check them against the forecast for the person's area, and give them a more precise report. It would also send the scanned observations back, allowing weather-tracking organizations to better analyze and predict what the weather is and will be like.

Vuforia Tutorial + Words on AR


Astronaut model created by NightSoundGames.

My initial thoughts on the uses for this technology on the phone are similar to my thoughts from playing with it last week. The greatest use case is going to be applications that are somewhat passive in enhancing the world for a person: giving information, or letting you see the world through a different lens that reveals other kinds of information. Entertainment applications in AR usually feel limited in comparison to information-enhancing ones.

The greatest use case for AR applications is going to be on smart-glasses platforms. Having to hold up a phone to look at something in AR is cumbersome and detracts from the uses the platform could have. By incorporating AR applications into glasses, the passive use cases discussed above and last week get the best of both worlds in terms of platform and possibility space.

AR/VR Platform Comparison

As a whole, VR allows for a more immersive experience, whereas AR is more useful in day-to-day activities. When set up in a VR environment, a person has a more emotionally and physically active response to whatever experience they are engaged in. In a game context this means more genuine interaction with the medium; in an educational or training context it means higher retention of the material. By contrast, AR generally improves a person's ability to get information from the world around them and act on that information. This offloads cognitive resources so a person can interact with the world more efficiently.

The phone as an AR platform holds the greatest current use case of any VR/AR platform we interacted with. Even though we looked at some examples of AR as an artistic/educational tool, the greater possibilities of AR on smartphones can already be seen in something like the Google Translate camera. The camera function in Google Translate takes written text in one language and instantly translates it into the language you want – oftentimes in a typeface similar to the one it was originally written in. This enhances a person's base ability to interact with the world at large, and by placing applications like this on a portable device that many people already own, we can improve how people communicate with the world. The main detriment I see with putting AR applications like Google Translate on the phone is that using them isn't as instant as some would like: the process of turn on display -> unlock phone -> navigate to application -> place phone over words still feels cumbersome. Glasses-based MR hardware attempts to fix this.

Glasses-based MR hardware is the platform with the most important future use case of any we interacted with; however, the technology has a long way to go. If we could take applications like the previously mentioned Google Translate and put them on these platforms, we would have the perfect merger of ease of use and ability to enhance everyday life. The main problem is that using one of these devices always feels like a letdown. Personally, I've used both the Microsoft HoloLens and the Magic Leap One, and both have problems. Both cost $2000+, much more than the average consumer would want to spend. The Magic Leap One feels responsive, works well with both the remote and hand gestures, and generally feels good in use; despite that, the extra processing pack and the glasses themselves don't feel good on your person. The HoloLens fixes how cumbersome MR glasses can be, but is not nearly as responsive as the Magic Leap One. Both platforms further devalue themselves by spending development time on applications that operate on spectacle instead of ones that are actually useful. The applications I'm shown are always closer to VR applications with some interaction with the world around you, rather than applications that give a person more information about their surroundings. The main problems – price and ease of use – could be fixed with further technology improvements, but these platforms need to be fine-tuned for actual AR applications instead of trying to make something closer to VR if they want to find a good market.

The greatest advantage of the Vive is the enhanced emotional and physical response of people engaged in VR experiences. By fully immersing people in a VR environment, it makes them more reactive to, and better at retaining, whatever experience they engage with. The main disadvantages are portability and pricing. The Vive is not standalone and requires an expensive computer to run, which means many people would not purchase it unless they are already into playing games on a high-end PC. It also has to be tethered to the PC in use, meaning they cannot take the Vive wherever they wish. These two issues are fixed by a device like the Oculus Quest, a $400 device that doesn't require tethering to a PC. The caveats are graphical limitations and battery life (~3 hours).

The CAVE2 feels like an odd beast. Where I've seen it have the most success is in things like military training simulations, since custom hardware can be built to work with it because of how it's run. The main disadvantages are portability and cost. The CAVE2 is less portable than tethered VR, requiring a large amount of processing power and many projector bulbs, and it's incredibly costly – I've seen a figure of $500,000 thrown around, plus maintenance and software development. That makes it less portable both in the grab-and-go sense and in the sense that only large organizations can really afford it.