This sort of functionality is what I believe makes the AR platform truly great. By constantly taking in the world around us and returning enhanced information, the computers around us could offload much of our internal processing, freeing us to focus on the things humans do best. One potential problem with something like this is clearly shown in the given video: a constant stream of scanned information about the world is extremely annoying and would only give us more work. We would have to continuously parse everything being shown to us instead of the information simply being helpful.
To start, we would have to filter this constant barrage of information. For an eyewear-based AR system, one idea is a gesture system that lets a person specify exactly what they want to know more about. For example, if a person points at something in their field of view and the AR device can see what they are pointing at, the glasses would show the name of that object. If the person wants more information, they could hold the point for a few extra seconds, at which point the glasses would present expanded information on that specific item. For an auditory example, the user could tap twice on the frame of their glasses; the glasses would then visually display information based on what they are hearing at that moment. I would hope the glasses would also be trained to filter out background noise, and if the displayed information does not match what the user wants to know more about, they could simply tap their glasses again to cycle through the different audio sources present at that moment.
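The point-versus-hold and tap-to-cycle behavior above could be sketched as a small event handler. Everything here is an assumption for illustration: the event names, the hold threshold, and the `GestureHandler` class are hypothetical, not any real AR API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the gesture filtering described above.
# All names and thresholds are assumptions, not a real AR glasses API.

POINT_HOLD_SECONDS = 2.0  # how long a point must be held before expanding


@dataclass
class GestureHandler:
    audio_sources: list = field(default_factory=list)  # sounds heard right now
    _audio_index: int = -1

    def on_point(self, target: str, held_seconds: float) -> str:
        """A brief point shows the target's name; a held point expands it."""
        if held_seconds >= POINT_HOLD_SECONDS:
            return f"expanded info: {target}"
        return f"name: {target}"

    def on_frame_double_tap(self) -> str:
        """Each double tap on the frame cycles to the next audio source."""
        if not self.audio_sources:
            return "no audio detected"
        self._audio_index = (self._audio_index + 1) % len(self.audio_sources)
        return f"transcribing: {self.audio_sources[self._audio_index]}"
```

For instance, `GestureHandler(audio_sources=["speaker", "music"])` would show `"transcribing: speaker"` on the first double tap and `"transcribing: music"` on the second, matching the tap-to-cycle idea.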
To enrich the information itself, different kinds of detail could be shown depending on which gesture the user gives the glasses. For example, pointing at an object and then forming a circle with the hands could produce an approximate measurement of that object. As another option, if the glasses are locked onto some auditory information and the user touches the bridge of the glasses, the glasses could display roughly how far away the audio source is and in what direction.
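The bridge-touch overlay above amounts to turning a located audio source into a rough distance and direction. As a minimal sketch, assuming the glasses already localize the source to 2D coordinates relative to the user (a big assumption, and the function below is hypothetical):

```python
import math

# Hypothetical sketch: turn a localized audio source into the rough
# "how far and which way" overlay described above. Assumes the glasses
# provide (x, y) positions in meters, with +y as north and +x as east.

def describe_audio_source(user_xy, source_xy):
    """Approximate distance and compass direction from user to source."""
    dx = source_xy[0] - user_xy[0]
    dy = source_xy[1] - user_xy[1]
    distance = math.hypot(dx, dy)
    # atan2(dx, dy): 0 degrees is north, increasing clockwise.
    angle = math.degrees(math.atan2(dx, dy)) % 360
    directions = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    compass = directions[round(angle / 45) % 8]
    return f"~{distance:.0f} m {compass}"
```

So a source 10 m due east of the user would be summarized as `~10 m E`; the circle-measurement gesture could feed a similar helper that reports an object's approximate size instead.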
Giving a user all of this information at once is a recipe for disaster; no one would find a device like that useful. But if we can properly filter the information a user might want and let them control what they see at any given moment, their lives will only be improved.