By Yiming Wang

Final Project Journal

This blog will document the process of my Interactive Design MA final project.



Project Proposal

Week 1 Research

Outline and Schedule

Risk Assessment


Week 2 Research


Key takeaways from the NDC (National Deaf Center) resource "Communicating With Deaf Individuals":

  • There is no "one-size-fits-all" approach to communication, nor is there a "typical" deaf person.

  • Broadly defined, communication for deaf individuals occurs through visual, auditory, or tactile modes (for individuals who are deafblind). Common visual communication modes include American Sign Language, cued speech, speech reading (lip reading), and gestures. Auditory communication includes using residual hearing and spoken English received through the ear, often augmented with a hearing aid or cochlear implant to enhance the ability to interpret sound. Tactile communication translates visual and auditory communication into the hand and other parts of the body.

  • When first meeting a deaf person, do not make assumptions about the individual’s communication. Rather, inquire directly about the individual’s communication needs.

  • Get the attention of the deaf individual before speaking. If the individual does not respond to their spoken name, a tap on the shoulder or another visual signal is appropriate.

  • Only about 30% of English speech sounds are visible on the mouth under the best of conditions. Factors that can affect speechreading include residual hearing, body language and facial expressions, distance from the speaker, and awareness of the topic under discussion. Communication or conversations may be easier one-on-one in a quiet setting but more difficult in a group or in a noisy environment.

Key takeaways from Quora discussions:

  • Most of the time English is not the first language of deaf people in America. It is usually ASL or some form of visual communication.

  • Deaf education in America has usually produced about a fourth-grade reading level (that's the commonly cited statistic). Every Deaf person's English-language background/education/exposure is different.

  • Firstly, subtitles and closed captions are two completely different things. With closed captions, you are told exactly what is happening in the background, with *car door slams* and *eerie breathing in the background* rather than just dialogue that is spoken.

  • Through American Sign Language (or another Sign Language system) deaf people are able to understand the movie on a similar scale as a hearing person. American Sign Language does a phenomenal job expressing feeling, emotion, suspense, and storytelling. Possibly even on a better scale than spoken English.

  • At this point in time, I think CC is the best solution for movies. I've been an interpreter in a movie theater and found it not a good experience. There's no perfect solution other than making a true deaf film such as Peter Wolf's “Think Me Nothing” or “Deafula”.


Reflections:
  • Preferred communication modes vary: visual, auditory, or tactile.

  • How can one solution address this variety, or should the design focus on the majority?

  • A cue is needed to get a deaf person's attention before speaking.

  • Add visual or tactile cues to start communication.

  • The best way to interpret movies/TV is to translate them directly into ASL, which requires an ASL interpreter.

  • For many deaf/HoH individuals, sign language is their first language and English their second, so translation is always involved. English education levels also vary across the community.

  • Is it possible to use ASL, rather than English, as the medium of communication through technology? Current trending technology mostly uses English as the medium.


Key takeaways from "Deaf and Hard-of-hearing Individuals’ Preferences for Wearable and Mobile Sound Awareness Technologies"

Findlater, L., Chinh, B., Jain, D., Froehlich, J., Kushalnagar, R., & Lin, A. (2019). Deaf and Hard-of-hearing Individuals' Preferences for Wearable and Mobile Sound Awareness Technologies. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), 1–13. https://doi.org/10.1145/3290605.3300276

A primary outcome of this work is a set of recommendations for the design of mobile and wearable sound awareness technologies.
  • Form factor and feedback modalities. As a single device, the smartwatch is a promising form factor. However, there is a strong desire for both visual and haptic feedback (92% of participants) and even for this feedback to be provided on two separate devices (75% of participants); the most promising combinations are haptic feedback on a smartwatch and visual feedback on an HMD (head-mounted display) or smartphone. For oral conversation support, HMDs offer the most promise.

  • Sound types. Urgent, safety-related sounds and voices directed at the user are of highest priority, followed by non-urgent alerts, people’s presence, and nature background noises. Outdoor background noises, voices not directed at the user, and indoor mechanical noises are of lower interest and could be excluded or at least automatically filtered out. This recommendation reflects past work [3, 27, 39] but with a more detailed prioritization of sound categories.

  • Sound characteristics. For information about the characteristics of sound, the source and location are of high interest (also reflecting past work [27]).

  • Captions. When comparing full vs. summary captions, full captions were of the highest interest, though keyword captions may be useful to some users who prefer oral communication. Captions should be provided on an HMD or smartphone.

  • Notification and filtering. Notifications should typically be shown immediately, but with the option to have lower priority sounds displayed only on demand. Users may want to specify which sounds should be filtered/shown and this may differ based on context (e.g., at home, at the store).

  • “Airplane” mode. Notifications should be able to be turned off easily, to reduce distraction and to accommodate the user’s desire to act appropriately in varying social contexts.

  • Cultural and social considerations. Social context affects perceived usefulness and comfort with using a mobile or wearable sound awareness device, likely reflecting a variety of factors such as social norms around polite/rude technology use, changing social perceptions of wearable technology [21], and the stigma that can be associated with assistive technologies [38]. Several participants reported feeling the need to explain what the technology does so that other people are accepting of it, which reflects past work showing that wearable devices are more likely to be socially acceptable if they are perceived as being used for assistive purposes [35]. An important tension to address in future work is that users may be less socially comfortable using these devices around strangers, yet that is also the context in which need or utility may be highest. An additional critical aspect is how a device like this may be used (or not) in a Deaf cultural context. The significant differences between individuals preferring sign language vs. oral communication vs. both likely reflect this consideration. Deaf people often get frustrated with research devices for communication or notification support that do not capture usable information at all (e.g., hearing or signing gloves), have too much friction for usable communication, or are too cognitively taxing.



Key takeaways from “The Deaf Community and How You Can Get Involved”


Map of the Deaf Experience


Key takeaways from “Understanding Assistive Technology for Deaf and Hard of Hearing”

  • According to the National Association of the Deaf (NAD):

  • Deaf: Those who identify as Deaf (with a capital D) communicate with sign language. These are often those who have been deaf for most of their lives.

  • deaf: The lowercase term is for those who do not identify as part of the Deaf culture. These can include those who became deaf later in life.

  • Hard of hearing (HoH): This describes those who have some hearing loss, but not complete hearing loss.

Assistive technology for Deaf/Hard of Hearing users
  • Alerting devices: An alerting device converts an audio alert (e.g., doorbell, fire alarm, alarm clock) into a visual or physical alert that the person can perceive.

  • Telecommunications: Many different options are available for those who are d/Deaf or HoH, including amplified telephones, TTY / TDD (software and hardware), real-time text (RTT), captioned telephones, Text-to-911, video chat, and text and video relay services.

  • Enhanced/Assistive listening: Systems can be used to overcome background noise and provide a more direct audio feed for someone who uses assistive listening devices.

  • For example:

  • In a classroom, a teacher could wear a small microphone that uses an FM radio system to transmit audio to a student’s hearing aid.

  • In a theater, an infrared or audio induction loop system can be used so that audience members with hearing impairments can hear the play through their hearing aids or cochlear implants.

  • At work, an employee can couple their cochlear implant or hearing aids with their computer via Bluetooth and hear their computer’s audio without needing headphones.

Accessibility for deaf and Hard of Hearing users

Here are some accessibility issues that restrict access to people who are d/Deaf/HoH:

  • Inaccurate captions

  • Captions that are not synchronized properly

  • No transcripts

  • Phone-only customer support

  • Low-quality audio

What an Accessible Site Looks Like
  • Accurate Captions: Captions allow viewers who are deaf or hard of hearing to follow the dialogue and the action of a program simultaneously.

  • It’s critical that captions are accurate and also include non-speech elements; otherwise the content may be incorrect, incomprehensible, or incomplete.

  • Transcripts: Providing a transcript is another great way for deaf or hard of hearing users to follow along and have another means of consuming the content. (However, transcripts should not be used as a replacement for captions!)

  • Multiple methods of contact and communication: Deaf and hard of hearing visitors may have a difficult time communicating over the phone. Providing an email address, or alternative means of contact will help.

  • High-quality, clear audio with minimal background noise: Quality audio will make it easier to ensure accurate captions. Poor audio quality makes it harder for transcribers to capture all the words spoken, leading to transcripts with many [inaudible] or flagged spots.

  • Clear, high-quality audio will also be easier for hard of hearing users to understand.

  • Use of clear and simple language: American Sign Language (ASL) is a different language than English, and it has its own grammar structure. Individuals who use ASL as their primary language may not be fluent in English, so making written content clear and simple to understand is important, and can be done in the following ways:

  • Avoid slang and confusing jargon

  • Use headings and subheadings to properly structure your content

  • Include bulleted lists

  • Employ an active, rather than passive, voice

  • Provide definitions in simple terms

  • Use consistent language throughout the content


"People don't want to be rude so they make us invisible."


Cochlear implants: with one, many d/Deaf and HoH people can communicate directly with hearing people. However:
  • Cochlear implants only work for certain groups of people (wearing hearing aids first is a necessary step in the evaluation process for a cochlear implant).

  • Surgery is required.

  • Expensive: costs between $20,000 and $40,000.


Reflections

These can serve as great guidelines when designing the product. Also, it is important to consider how one can approach the deaf community. Besides considering the functional aspects of the product, social and cultural considerations are also very important.

How can the product be functional without reinforcing a disability identity for d/Deaf and HoH users?


 


Week 3


Primary Research


Contacts for resources and potential primary research community:

Maybe it's better to conduct only surveys, not in-person interviews? (Communication difficulties are exactly what we need to design for.)


Based on what I learned from secondary research, I decided to design a survey and spread it to the deaf community.

Designing the survey

At the same time, I need to start the ideation phase and justify the ideas with research insights. The concepts will later be adjusted further according to primary research findings.


Ideation


Competitor - Intelligent Gloves


Below is a pair of intelligent gloves that can turn sign language into audible speech. It's very inspiring, but judging from last week's research, it still lacks consideration of Deaf culture:
  • how a device like this may be used (or not) in a Deaf cultural context. The significant differences between individuals preferring sign language vs. oral communication vs. both likely reflect this consideration. Deaf people often get frustrated with research devices for communication or notification support that do not capture usable information at all (e.g., hearing or signing gloves), have too much friction for usable communication, or are too cognitively taxing.



The thing to learn from this project is its main advantage: convenience for the deaf community.

Below are the things it inspired me to think about when designing:

  • It does not capture any usable information other than the sign language itself.

  • When designing, the difference between the preferred ways of communication should be considered.

  • How to reduce the cognitive load?


Competitor or Tech Supplier - SignAll



The goal behind this product is very similar to my project's, but the actual system is not the same as my concept. Also, my goal is to help the deaf and the hearing communicate more easily without being limited to a certain space. The company also has technology that translates sign language to English, which supports the feasibility of my proposal.


Based on week 2 research and available technology, I proposed my concepts below.





Based on the key insights, this concept has advantages compared to other projects:

  • Easy to carry along

  • Two-way communication

  • Easier for both the hearing and the deaf

  • Applicable to most situations

  • Adjustable for different communication preferences

The other projects:

  • Fixed station

  • One-way communication

  • Easier for the hearing but not for the deaf

  • Applicable to limited situations

  • Only work for deaf users who know English


 


Week 4


Before starting the design, I need to understand the constraints.


Technical Constraints


Design Constraints

Optical/Visual Guidelines and Recommendations

The following optical/visual parameters and issues are addressed:

  • Ocularity (monocular, biocular or binocular)

  • Field-of-view (FOV)

  • Resolution

  • Pupil-forming versus non-pupil-forming optics

  • Exit pupil and eye relief

  • Optical distortion

  • Luminance and contrast

  • See-through versus non-see-through considerations

  • Considerations for helmet-mounted sensors

  • Google Glass design guidelines

  • Microsoft's MR design guidelines

  • Mobile device interface guidelines


Reflections


Which device should the system use? Regular transparent-screen glasses, or AR/MR glasses?

I would say that, for now, MR glasses are the best option: the user needs to see the physical world, and on simple smart glasses the display area for the app is limited and sits off to the side, which splits the user's attention because their focal point has to switch between the side display and the person talking to them.

If using AR/MR glasses, Microsoft provides very detailed design guidance, and I should start studying it now, as this is a whole new area for me.


 


Week 5


Seeing the bright future!!!

One of the concerns I had with the Microsoft HoloLens 2 is that the device is very expensive and bulky. But seeing this commercial-level product on the market makes the future of my proposed system very promising. This device alone already has the ability to provide the service that my system is offering. And designing an app on top of the current mobile phone systems is much easier.

nreal

I am excited to see that nreal comes with hand tracking, which makes my proposed system possible.


Which platform should I use?

Microsoft provides very detailed design guidelines, with tools and kits, and the system can be used standalone on its MR glasses. nreal, in contrast, runs on top of the current Android or iOS systems, which means I would need to design the system as an Android or iOS app. This is an advantage for regular users, but nreal's design guidelines are not very detailed, and I do not know how to start designing for their system without more developer knowledge.


I need to learn more about how it works or dig into the guideline Microsoft provided to decide which platform I should use. I also need to consider the time remaining for me to complete the design.

While learning how to start designing AR apps, I finished the user flow.


User Flow

Sketch of the ASL settings, temporarily using the MR glasses viewport

 


Week 6

Using AR/MR for this project: the justification.


User Interface Option
  • AR environment can use either GUI or NUI depending on the devices.

  • GUI when using AR apps through phones or tablets.

  • NUI when using some AR glasses, below is a demonstration of nreal interface.


  • User interfaces that you interact with using modalities such as touch, gestures, or voice are often referred to as Natural User Interfaces (NUI).

NUI is very intuitive but GUI also works for this project.


Situated analytics
  • Information can be docked or anchored in physical space.

  • This is a new way of organizing information: associating it with a place in the physical world according to its relevance.

  • The interpreter's translation can be docked next to the hearing speaker (see the sketch after this list).

  • Organize cognitive load by spatial prioritization.

  • The concept of Situated Analytics (SA) is the use of data representations organized in relation to relevant objects, places, and persons in the physical world for the purpose of understanding, sensemaking, and decision-making.
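As a concrete illustration, below is a minimal Unity C# sketch (Unity being this project's eventual platform) of docking a translation caption beside the person speaking. The speaker transform is a hypothetical stand-in for whatever person tracking the final system would use; the point is only the world-space docking plus billboarding.

```csharp
using UnityEngine;

// Hypothetical sketch: dock a translation caption beside a tracked speaker,
// so the deaf user can watch the captions and the speaker at the same time.
// "speaker" stands in for real person-tracking output (an assumption).
public class SituatedCaption : MonoBehaviour
{
    public Transform speaker;                            // tracked speaker (assumed)
    public Vector3 offset = new Vector3(0.35f, 0f, 0f);  // dock ~35 cm beside them

    void LateUpdate()
    {
        if (speaker == null) return;

        // Anchor the caption in world space, relative to the speaker.
        transform.position = speaker.position + offset;

        // Billboard: match the camera's rotation so the text stays readable.
        transform.rotation = Camera.main.transform.rotation;
    }
}
```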


Telepresence
  • Collaborate with an interpreter or other relevant actors separated by distance.

  • So not only a virtual interpreter but also a real interpreter can be present for translation.

  • Information can be collectively shared and looked at organically.

The above images show possible ways of providing translation through an AR/MR environment. The image on the left is an example of telepresence. The image on the right is an extended version of video chat that gives a better presentation of the remote person.


For my project, I think telepresence is one of the best solutions for deaf users, since an interpreter can be present whenever needed. The other solution is a virtual interpreter, which is good enough for daily situations.

Seeing the need for interpreter participation, I updated my user flow to give users the option of a real interpreter session for important occasions.
Updated User Flow

 


Week 7


I was scared when I heard that HoloLens could only be used indoors. But after investigating the HoloLens 2 and its use cases, I found that it's usable outside, standalone. So the original plan is still feasible.

Look into the design of Mixed Reality

Fundamental understanding of the design guidelines for Mixed Reality


Mixed reality blends both physical and digital worlds. These two realities mark the polar ends of a spectrum known as the virtuality continuum. We refer to this spectrum of realities as the mixed reality spectrum. On one end of the spectrum, we have the physical reality that we humans exist in. On the other end of the spectrum, we have the corresponding digital reality.


A Taxonomy of Mixed Reality Visual Displays

The application of mixed reality has gone beyond displays to include:

  • Environmental understanding: spatial mapping and anchors.

  • Human understanding: hand-tracking, eye-tracking, and speech input.

  • Spatial sound.

  • Locations and positioning in both physical and virtual spaces.

  • Collaboration on 3D assets in mixed reality spaces.

Human input can now include keyboards, mice, touch, ink, voice, and Kinect skeletal tracking.


The Windows APIs that reveal environmental information are called the perception APIs. Environmental inputs can capture:

  • a person's body position in the physical world (head tracking)

  • objects, surfaces, and boundaries (spatial mapping and scene understanding)

  • ambient lighting and sound

  • object recognition

  • physical locations

The interactions between computers, humans, and environments.

A combination of three essential elements sets the stage for creating true mixed reality experiences:

  • Computer processing powered by the cloud

  • Advanced input methods

  • Environmental perceptions


The experiences that overlay graphics, video streams, or holograms in the physical world are called augmented reality. The experiences that occlude your view to present a fully immersive digital experience are virtual reality. In the future, new devices with a more expansive range are expected: holographic devices will be more immersive, and immersive devices will be more holographic.

  • Towards the left (near physical reality). Users remain present in their physical reality and aren't made to believe they have left that reality.

  • In the middle (fully mixed reality). These experiences blend the real world and the digital world. For example, in the movie Jumanji, the physical structure of the house where the story took place was blended with a jungle environment.

  • Towards the right (near digital reality). Users experience a digital reality and are unaware of the physical reality around them.


Whether a device is tethered to a separate PC (via USB cable or Wi-Fi) or untethered doesn't reflect whether a device is holographic or immersive. Features that improve mobility often provide better experiences. Holographic and immersive devices can be either tethered or untethered.


To conclude, this project sits on the left-to-middle part of the spectrum: AR and MR work best, as users need to remain present in their physical reality with some digital-world components added.

What is a hologram?

Holograms can respond to your gaze, gestures, and voice commands. They can even interact with real-world surfaces around you. Holograms are digital objects that are part of your world.


A hologram is made of light and sound

Holograms add light to your world, which means that you see both the light from the display and the light from your surrounding world. Since HoloLens uses an additive display that adds light, the black color will be rendered transparent.

Holograms can have different appearances and behaviors. Some are realistic and solid, and others are cartoonish and ethereal. You can use holograms to highlight features in your environment or use them as elements in your app's user interface.



A hologram can be placed in the world or tag along with you

When you have a fixed location for a hologram, you can place it precisely at that point in the world. As you walk around, the hologram appears stationary based on the world around you, just like a physical object. If you use a spatial anchor to pin the object, the system can even remember where you left it when you come back later.

Some holograms follow the user instead. They position themselves based on the user. You can choose to bring a hologram with you, and then place it on the wall once you get to another room.


Best practices
  • Some scenarios demand that holograms remain easily discoverable or visible throughout the experience. There are two high-level approaches to this kind of positioning. Let's call them display-locked and body-locked.

  • Display-locked content is locked to the display device. This type of content is tricky for several reasons, including an unnatural feeling of "clinginess" that makes many users frustrated and wanting to "shake it off." In general, designers have found it better to avoid display-locking content.

  • Body-locked content can be far more forgiving. Body-locking is when you tether a hologram to the user's body or gaze vector in 3D space. Many experiences have adopted a body-locking behavior where the hologram follows the user's gaze, which allows the user to rotate their body and move through space without losing the hologram. Incorporating a delay helps the hologram movements to feel more natural. For example, some core UI of the Windows Holographic OS uses a variation on body-locking that follows the user's gaze with a gentle, elastic-like delay while the user turns their head. (See the sketch after this list.)

  • Place the hologram at a comfortable viewing distance typically about 1-2 meters away from the head.

  • Allow elements to drift if they must be continually in the holographic frame, or consider moving your content to one side of the display when the user changes their point of view. For more information, see the billboarding and tag-along article.
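Here is a minimal Unity C# sketch of that body-locked, elastic-delay behavior, assuming the main camera represents the user's head. MRTK's solvers (e.g., RadialView) provide this behavior out of the box; this hand-rolled version only illustrates the idea.

```csharp
using UnityEngine;

// Minimal sketch of body-locked content with an elastic-like delay.
public class BodyLockedFollow : MonoBehaviour
{
    public float distance = 1.5f;  // stay in the comfortable 1-2 m band
    public float smoothing = 3f;   // lower = lazier, more elastic follow

    void LateUpdate()
    {
        Transform head = Camera.main.transform;

        // Target point: in front of the user's gaze at a fixed distance.
        Vector3 target = head.position + head.forward * distance;

        // Ease toward the target so the hologram lags gently behind head
        // motion instead of feeling rigidly display-locked.
        transform.position = Vector3.Lerp(
            transform.position, target, smoothing * Time.deltaTime);

        // Keep the content oriented toward the user (rotate 180 degrees
        // if your content's readable face points the other way).
        transform.rotation = Quaternion.LookRotation(
            transform.position - head.position);
    }
}
```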


Place holograms in the optimal zone: between 1.25 m and 5 m

Two meters is the optimal viewing distance. The experience will start to degrade as you get closer than one meter. At distances less than one meter, holograms that regularly move in depth are more likely to be problematic than stationary holograms. Consider gracefully clipping or fading out your content when it gets too close so you don't jar the user into an unpleasant viewing experience.
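A Unity C# sketch of that graceful fade: the thresholds below (fade starting at 1 m, fully transparent by 40 cm) follow the comfort numbers in these notes, and the hologram's material is assumed to support alpha blending.

```csharp
using UnityEngine;

// Sketch: fade a hologram out as it gets too close, rather than letting it
// jar the user. Assumes a material whose rendering mode supports alpha.
public class NearFade : MonoBehaviour
{
    public float fadeStart = 1.0f;  // fully opaque at or beyond 1 m
    public float fadeEnd = 0.4f;    // fully transparent by 40 cm

    Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        float d = Vector3.Distance(
            Camera.main.transform.position, transform.position);

        // Alpha is 0 at fadeEnd, 1 at fadeStart, clamped in between.
        Color c = rend.material.color;
        c.a = Mathf.InverseLerp(fadeEnd, fadeStart, d);
        rend.material.color = c;
    }
}
```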


A hologram interacts with you and your world

Holograms enable personal interactions that aren't possible elsewhere. Because the HoloLens knows where it is in the world, a holographic character can look at you directly in the eyes and start a conversation with you.

A hologram can also interact with your surroundings. For example, you can place a holographic bouncing ball above a table. Then, with an air tap, watch the ball bounce, and make sound as it hits the table.

Holograms can also be occluded by real-world objects. For example, a holographic character might walk through a door and behind a wall, out of your sight.


Tips for integrating holograms and the real world

  • Aligning to gravitational rules makes holograms easier to relate to and more believable. For example: Place a holographic dog on the ground & a vase on the table rather than have them floating in space.

  • Many designers have found that they can integrate more believable holograms by creating a "negative shadow" on the surface that the hologram is sitting on. They do this by creating a soft glow on the ground around the hologram and then subtracting the "shadow" from the glow. The soft glow integrates with the light from the real world. The shadow is used to ground the hologram in the environment.


Types of mixed reality apps
  • Enhanced environment apps (HoloLens only)

  • Blended environment apps - for this project

  • Immersive environment apps


 

Week 8


Design Considerations


Designing for Content
  • Let users adjust to the experience.

  • Large Objects: objects that can't normally fit within the holographic frame should be shrunk to fit when they're first introduced (either at a smaller scale or at a distance). The key is to let users see the full size of the object before the scale overwhelms the frame.

  • Many objects: experiences with many objects or components should consider using the full space around the user to avoid cluttering the holographic frame directly in front of the user. Let users understand the content layout in the experience.

  • One technique to achieve this is to provide persistent points (also known as landmarks) in the experience that anchor content to the real world.

  • Objects can also be placed in the periphery of the holographic frame to encourage the user to look toward key content.

Coordinate systems
  • For this project, I'm building a world-scale experience (users wander beyond 5 meters)

  • The user walks around to different places

  • The scenarios can be either indoor or outdoor

  • The interface is docked relative to the user's head position, but the virtual interpreter is docked around the person who is speaking to the user

  • Avoid head-locked content

  • We strongly discourage rendering head-locked content, which stays at a fixed spot in the display (such as a HUD). In general, head-locked content is uncomfortable for users and doesn't feel like a natural part of their world.

  • Head-locked content should usually be replaced with holograms that are attached to the user or placed in the world itself. For example, cursors should generally be pushed out into the world, scaling naturally to reflect the position and distance of the object under the user's gaze (a minimal sketch follows this list).
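A Unity C# sketch of that cursor guidance, assuming the main camera stands in for head gaze: raycast along the gaze, land the cursor on whatever it hits, and scale it with distance so its apparent size stays roughly constant.

```csharp
using UnityEngine;

// Sketch of a cursor pushed out into the world: raycast along the gaze,
// land on whatever surface it hits (spatial mesh, UI colliders, ...), and
// scale with distance to keep a constant apparent size.
public class WorldCursor : MonoBehaviour
{
    public float defaultDistance = 2f;   // used when the gaze hits nothing
    public float scalePerMeter = 0.02f;  // ~constant angular size

    void Update()
    {
        Transform head = Camera.main.transform;
        Ray gaze = new Ray(head.position, head.forward);

        float d = Physics.Raycast(gaze, out RaycastHit hit)
            ? hit.distance
            : defaultDistance;

        transform.position = gaze.GetPoint(d);
        transform.localScale = Vector3.one * (scalePerMeter * d);
    }
}
```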

Comfort
  • For maximum comfort, the optimal zone for hologram placement is between 1.25 m and 5 m. In every case, designers should attempt to structure content scenes to encourage users to interact 1 m or farther away from the content (for example, adjust content size and default placement parameters).

  • Although content may occasionally need to be displayed closer than 1 m, we recommend against ever presenting holograms closer than 40 cm. Thus, we recommend starting to fade out content at 40 cm and placing a rendering clipping plane at 30 cm to avoid any nearer objects.

  • Objects that move in depth are more likely than stationary objects to produce discomfort because of the vergence-accommodation conflict. Similarly, requiring users to rapidly switch between near-focus and far-focus (for example, because of a pop-up hologram requiring direct interaction) can cause visual discomfort and fatigue. Extra care should be taken to minimize how often users are: viewing content that is moving in depth; or rapidly switching focus between near and far holograms.

  • When designing content for direct (near) interaction in HoloLens 2, or in any applications where content must be placed closer than 1 m, extra care should be taken to ensure user comfort.

  • We recommend creating a “depth budget” for apps based on the amount of time a user is expected to view content that is near (less than 1.0 m) and moving in depth. An example is to avoid placing the user in those situations more than 25% of the time. If the depth budget is exceeded, we recommend careful user testing to ensure it remains a comfortable experience.
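Both the 30 cm clipping plane and the ~25% depth budget can be checked in code during playtests. Below is a hedged Unity C# sketch; the monitor component and its one-time warning are my own illustration, not an official tool.

```csharp
using UnityEngine;

// Illustration only: enforce the 30 cm clipping plane and flag when this
// hologram exceeds the ~25% "depth budget" of time viewed from closer
// than 1 m, per the comfort notes above.
public class ComfortMonitor : MonoBehaviour
{
    float nearTime, totalTime;
    bool warned;

    void Start()
    {
        // Clip anything nearer than 30 cm, as recommended above.
        Camera.main.nearClipPlane = 0.3f;
    }

    void Update()
    {
        totalTime += Time.deltaTime;

        float d = Vector3.Distance(
            Camera.main.transform.position, transform.position);
        if (d < 1.0f) nearTime += Time.deltaTime;

        // One-time warning for playtesting sessions.
        if (!warned && totalTime > 0f && nearTime / totalTime > 0.25f)
        {
            warned = true;
            Debug.LogWarning(name + " exceeded the 25% depth budget");
        }
    }
}
```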


Interaction Models
  • To avoid gaps in the user interaction experience, it's best to follow the guidance for a single model from beginning to end.

  • For this project, the interaction system that will be used is hands-free.

  • Because the user needs both hands for sign language, their hands are occupied most of the time. To decrease the learning load, I will use only one interaction model for now. But once the user is familiar with the MR environment, it is possible to add the direct manipulation with hands model for when the user is not communicating.

  • Because not all deaf people can speak, the interaction model is, more specifically, hands-free with gaze and dwell.

  • This will also decrease the social awkwardness of making large hand gestures, something the deaf community really cares about.

  • For similar reasons, I will use eye-gaze and dwell instead of head-gaze and dwell.

  • The choice of dwell time can be tricky. Novice users are ok with longer dwell times, while expert users want to quickly and efficiently navigate through their experiences. This leads to the challenge of how to adjust the dwell time to the specific needs of a user. If the dwell time is too short: The user may feel overwhelmed by having holograms react to their eye-gaze all the time. If the dwell time is too long: The experience may feel too slow and interruptive as the user has to keep looking at targets for a long time.


User experience elements
  • Rendering on holographic devices

  • Holographic devices have additive displays – holograms are created by adding light to the light from the real world. White will appear bright, while black will appear transparent.

  • Color impact varies with the user's environment – there are many diverse lighting conditions in a user's room. Create content with appropriate levels of contrast to help with clarity.

  • Avoid dynamic lighting – holograms that are uniformly lit in holographic experiences are the most efficient. Using advanced, dynamic lighting will likely exceed the capabilities of mobile devices. When dynamic lighting is required, it's recommended to use the Mixed Reality Toolkit Standard shader.

  • Color

  • Rendering light colors - White appears bright and should be used sparingly. For most cases, consider a white value around R 235 G 235 B 235. Large bright areas may cause user discomfort. For the UI window's backplate, it's recommended to use dark colors.

  • Rendering dark colors - Because of the nature of additive displays, dark colors appear transparent. A solid black object will appear no different from the real world. See Alpha channel below. To give the appearance of “black”, try a very dark grey RGB value such as 16,16,16.

  • Color uniformity - Typically holograms are rendered brightly enough so that they maintain color uniformity, whatever the background. Large areas may become blotchy. Avoid large regions of bright, solid color.

  • Gamut - HoloLens benefits from a "wide gamut" of color, conceptually similar to Adobe RGB. As a result, some colors can show different qualities and representation in the device.

  • Gamma - The brightness and contrast of the rendered image will vary between immersive and holographic devices. These device differences often appear to make dark areas of color and shadows more or less bright.

  • Color separation - Also called "color breakup" or "color fringing", color separation most commonly occurs with moving holograms (including cursor) when a user tracks objects with their eyes.

  • Storytelling with light and color

  • Vignetting - A 'vignette' effect to darken materials can help focus the user's attention on the center of the field of view. This effect darkens the hologram's material at some radius from the user's gaze vector. This is also effective when the user views holograms from an oblique or glancing angle.

  • Emphasis - Draw attention to objects or points of interaction by contrasting colors, brightness, and lighting. For a more detailed look at lighting methods in storytelling, see Pixel Cinematography - A Lighting Approach for Computer Graphics.

  • Materials

  • Scale

  • The scale of an object is one of the most important visual cues because it gives the viewer a sense of the object's size and cues to its location. Further, viewing objects at real scale is one of the key experience differentiators for mixed reality in general – something that hasn't been possible with previous screen-based viewing.

  • Typography

  • Create clear hierarchy

  • Limit fonts

  • Avoid using more than two different font families in a single context. Too many fonts will break the harmony and consistency of your experience and make it harder to consume information.

  • Avoid thin font weights

  • Avoid using light or semilight font weights for type sizes under 42 pt because thin vertical strokes will vibrate and degrade legibility. Modern fonts with enough stroke thickness work well. For example, Helvetica and Arial are legible in HoloLens using regular or bold weights.

  • Color

  • In HoloLens, since the holograms are constructed with an additive light system, white text is highly legible. Even though white text works well without a back plate on HoloLens, a complex physical background could make the type difficult to read. We recommend using white text on a dark or colored back plate to improve the user's focus and minimize the distraction from a physical background.

  • To use dark text, you should use a bright back plate to make it readable. In additive color systems, black is displayed as transparent. This means you won't see the black text without a colored back plate.

  • Recommended font size

  • As you can expect, type sizes that we use on a PC or a tablet device (typically between 12–32 pt) look small at a distance of 2 meters. It depends on the characteristics of each font, but in general the recommended minimum viewing angle and font height for legibility are around 0.35°–0.4° / 12.21–13.97 mm, based on our user research studies. That's about 35–40 pt with the scaling factor introduced in the Text in Unity page.
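To keep these values consistent across the hi-fi screens and the Unity prototype, the recommendations above can be collected into one constants class. The values below are quoted from these notes, not my own measurements.

```csharp
using UnityEngine;

// Rendering/typography recommendations from the notes above, gathered as
// constants so UI code stays consistent.
public static class HoloStyle
{
    // Bright-but-comfortable "white" (R235 G235 B235).
    public static readonly Color White = new Color32(235, 235, 235, 255);

    // Additive displays render true black as transparent; use near-black.
    public static readonly Color NearBlack = new Color32(16, 16, 16, 255);

    // Comfortable hologram placement band, in meters.
    public const float MinDepth = 1.25f;
    public const float OptimalDepth = 2.0f;
    public const float MaxDepth = 5.0f;

    // Minimum legible type size at ~2 m (about 0.35°-0.4° visual angle).
    public const int MinFontPt = 35;
}
```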



Design


Based on the above design considerations, I did sketches for the primary flow.

Style Guide
Using MRTK Figma Kit for Hifi Design

I did an inception sheet to figure out the direction of the style.

Inception Sheet

I created a moodboard to ideate style around the keywords from the inception sheet.

moodboard

From the Inception Sheet, the color range I am considering is blue or purple with a splash of yellow. This aligns perfectly with my moodboard: there is a trend that starts from blue and, as more and more red is added, ends up at yellow or even peach.

Blue sparks a feeling of professionalism and, when paired with dark tones, inspires a high-tech feeling. Yellow gives people a feeling of warmth and happiness.

Therefore, I plan to use blue as the primary color with yellow as an accent color, to give users a feeling of trustworthiness and professionalism while still feeling welcoming and helpful.

Most of the time when I see HoH people using sign language, they have big smiles on their faces. I hope this app can help bring them more smiles when communicating with everyone, even those who do not know sign language.


 

Week 9


Design for Hi-fidelity


Style Guide

Figma Components Examples


Hifi screens and flow

Figma Interactive Prototype




 

Week 10


Prototype in Unity for HoloLens2

I encountered a lot of problems with Visual Studio. Whether I connected the HoloLens 2 to the PC through USB or Wi-Fi, there were always errors when I tried to deploy.

I tried several solutions found online, but none of them worked, so I had to pause the development in Unity.

But I managed to simulate the MR environment: the near menu can move with the simulated head, and the simulated user can interact with their hands.




The downside is that the hand-manipulation simulator is not very smooth and often points in the wrong direction, so it's better to try the prototype on a HoloLens.

I will have to source external help for this issue. Due to the timeframe, development in Unity will continue after this quarter ends.


Moving Forward


I need to finish the Unity prototype and integrate the SignAll SDK so the prototype really works and supports better user testing.

Preferably the testing can be with HoloLens2 so that real data can be collected.


Even though the HoloLens 2 is currently not super smooth, after doing this project I am confident that I am on the right path toward a future solution for more efficient communication between the deaf and the hearing.


Thank you!

