UXD Jobs News & Events

Publish your UX related news & events. Mail to publish@uxdjobs.com.



Eartracking: A New UX-Research Method

Eyetracking has long been a common way to enhance usability studies with insights that are more detailed than those gleaned from users’ thinking-aloud comments. Since 2005, Nielsen Norman Group has run many eyetracking research studies, some documented in the book Eyetracking Web Usability.

Eyetracking is particularly useful for understanding details in users’ reading behaviors and how they deal with advertisements. But new UX teams shouldn’t employ eyetracking for their initial usability studies. Only at the highest UX-maturity levels should a team start using eyetracking because eyetracking has several downsides, including:

  • high cost of the specialized equipment
  • the challenge of designing and moderating a methodologically valid eyetracking study
  • difficulty of tracking the eyes of people with thick-rim eyeglasses or heavy eye makeup

Luckily there is now an alternative: instead of tracking users’ eyes, we can track their ears.

Eartracking in Its Infancy

We first became aware of the benefits of eartracking during our research with cats and dogs. Many animals have ears that visibly turn in the direction of their attention. Turning the ears is clearly an evolutionary adaptation that allows predators to keenly follow their prey; it also enables potential prey to notice a stalking predator before it comes into visible range.

(Some dogs have floppy ears that don’t turn, but even these ears are adaptive: floppy-eared dogs are usually so cute that humans will feed them, and they won’t need to hunt in the first place. Also, floppy-eared dogs have high job security: the United States Transportation Security Administration has decided that the vast majority of bomb-sniffing dogs in airports should be floppy-eared breeds because of better passenger acceptance caused by extra cuteness.)

Eartracking Measures

Humans don’t have floppy ears, and our ears don’t visibly turn in the direction of potential dangers or potential food. However, evolution has preserved vestiges of ear-turning muscles, as is clearly demonstrated by anybody who wiggles their ears. Humans have some ability for small ear movements under conscious control. What is less commonly known is that we also exhibit subconscious micromovements of the ears. When analyzing ear movements as an indicator of human reaction, we look at two new biometrics: 1) the distance the ear moves, and 2) how many times the ear moves (known as microwiggles) per second.

These micromovements are not observable by the naked eye, because the ear moves less than 0.1 mm (0.004 inches), and most people don’t know that the human ear can microwiggle up to six times in one second. In fact, because microwiggles of the ear are so unobtrusive, they have not been the subject of serious research until now.

Ear–Mind Hypothesis Is Real

The main finding from our research is that the ear–mind hypothesis is as valid as the eye–mind hypothesis that underlies the use of eyetracking in user research. The eye–mind hypothesis states that people look at what they are interested in, which is why we can use measures of gaze direction to estimate what the user is attending to. Similarly, the (now confirmed) ear–mind hypothesis states that micromovements of the ear are directed toward things that are startling or surprising to the user.

Note the difference between the two sense–mind hypotheses: the eyes might look at anything that is merely interesting, while the ears react only to unexpected stimuli that are potentially of high interest or importance. This difference is obviously caused by the evolutionary background for micromovements of the ear. Fossils more than 160,000 years old have indicated that our ancestors had ear macromovements, about a thousand times more pronounced than our ear movements today. These movements supported survival in eat-or-be-eaten scenarios, where noticing surprising or startling things was of utmost importance.

Eartracking-Technology Advancements

Though small, ear micromovements can be picked up by an 8K video camera placed close enough to the ear that’s being studied. (8K cameras are not yet common, but NHK in Japan has been experimenting with this next-generation video technology since December 2018 and was kind enough to lend us one of their cameras.)

A second technology advance now allows us to turn micromovement video streams into true eartracking and tell where the user’s attention is directed. A machine-learning algorithm has been trained with 10,000 hours of video recordings from our most recent eartracking study, during which we tracked users’ ears as they attempted standard tasks on a wide variety of websites. Unfortunately, running the resulting AI software in real time (which is obviously required for practical use of eartracking in a usability study, since, in order to ask followup questions after a task, the facilitator needs to know what the user attended to, as well as how far and how many times her ears microwiggled) currently requires a supercomputer rated at 50 petaflops.

Currently, only 4 computers in the world are fast enough to do eartracking: 2 in the U.S. and 2 in China. Luckily, this distribution allowed us to continue our tradition of testing with both American and Chinese users. After all, we’ve previously found that Chinese users and Western users differ in their approach to the visual complexity of website layouts. So it was a plausible hypothesis that results from eartracking studies might differ across cultures as well. However, we didn’t find any differences, so the rest of this article will refer to the combined data from the two studies.

Eartracking vs. Eyetracking

Eartracking is not a highly sensitive technique: it can only capture environmental stimuli that receive a high level of interest. A phenomenon like banner blindness, which we have documented through our eyetracking studies, would therefore not be something we’d expect to capture with eartracking, because we do not expect users to be surprised by advertising banners (unless they come with loud noises that play automatically). Banners would thus not register in eartracking even if users did pay attention to these ads (which we know from eyetracking that they don’t).

Another interesting contrast between eyetracking and eartracking relates to gender differences. First, let me point out that we almost never observe any substantial differences between male and female users in usability studies. In terms of the user interaction, users of both genders are equally annoyed by, for example, zigzag layouts that impede scanning.

However, there are certainly some differences in the content that different genders find interesting. For instance, men in our eyetracking research looked much more at certain body parts in photos. (For the sake of remaining a family website, we won’t go into further details, but the heatmaps are in our book.)

Eartracking found another interesting difference between male and female users: Male ears twitched quite substantially when the user interface included pictures of wooly mammoths or stereo equipment. (In fact, these are the only instances in our entire research study where the micromovements reached their maximum extent of 0.1 mm, and up to five wiggles in one second. Nonmammoth and nonstereo UI elements rarely registered more than 0.05 mm, though photos of elephants scored 0.08 mm — possibly because of the resemblance between mammoths and elephants during the initial 100 ms of exposure.) In contrast, female ears didn’t twitch any more on pages showing wooly mammoths or stereo equipment than on webpages with other pictures.

Why do men’s ears react more strongly than women’s ears to webpages with pictures of wooly mammoths? We can only speculate, but it’s likely that during the era of the cave people, it was mainly the men of the tribe who were assigned to hunt the wooly mammoth, and since such a kill would be a major win, the hunters got highly attuned to recognizing this animal. As for the stereo equipment, your guess is as good as mine.

The finding of gender differences in mammoth webpages was due to a lucky coincidence: we employed the new eartracking technology with a few of the users during our recent study of UX design for children, and happened to include the wooly-mammoth page on National Geographic Kids’ website. (Subsequently, we repeated this test with adult users and confirmed the finding.)

Eartracking Strengths over Eyetracking

It’s still early days in applying eartracking to UX research, but it seems a promising new methodology. Compared to eyetracking, eartracking has the advantage of being able to measure surprises, which will be particularly valuable for game user research. There’s also an obvious accessibility advantage when testing with people who are blind and can’t participate in eyetracking studies.

Eartracking Weaknesses

On the other hand, eartracking has some weaknesses that parallel the disadvantages of eyetracking. As we mentioned before, certain user characteristics are difficult for current eyetracking equipment. Similarly, eartracking equipment can’t track people with:

  • hair that covers the ears
  • diamond stud earrings that reflect light into the camera, or any ear cuffs
  • very small ears
  • hearing aids that go over the ear in any way
  • earmuffs
  • earbuds

Also, while an eyetracker is fairly expensive, the supercomputer required for an eartracking study runs close to $100 million. (Luckily we were granted free supercomputer time due to the revolutionary nature of our research.) A bigger supercomputer currently under construction is named after a senior eartracking researcher, showcasing its promise for wider use of this exciting technology.

Summary

If your user research budget has an extra $100 million that you don’t know how to spend, and you have a highly advanced UX team and mature product team, consider allocating the money to an eartracking study.

 

Source : www.nngroup.com

Author : Jakob Nielsen


What is Human-Centered Content?

“Human-centered design is a creative approach to problem solving […] It’s a process that starts with the people you’re designing for and ends with new solutions that are tailor made to suit their needs.”

Human-Centered Design (HCD) has been a buzzword in the design community for years. In short, HCD is a mindset anyone can adopt to solve problems and find solutions that meet human needs. In practice, it often starts with a hypothesis about a problem or challenge, and is followed by research to investigate that hypothesis — similar to the scientific method taught in elementary schools.

Today, Human-Centered Design is commonly applied to solve design challenges for mobile applications, websites, and services. I like to think of it as a “people first” design process. The opposite would be a “solutions first” process.

Human-Centered Design example:

“We hypothesize that low-income families struggle to find healthy, affordable food available in their local grocery store. How can we help them?”

Solutions-first design example:

“Let’s create a grocery shopping app.”

The human-centered approach is far more likely to lead to a successful solution, for what I hope are obvious reasons. When you start with solutions, you miss out on crucial insights that could help you decide what features to create in a product, what the visual look and feel should be, and how to shape the voice of the brand so that it all appeals to the target audience.


Content and the Human-Centered Design process

When I first left my corporate job to start a consulting business, I found myself having to explain exactly what I meant by terms like “content strategy” and “content design.”

After some guidance from a brilliant business strategist, I started to use my process as a way to explain my work.

“I start with user research,” I’d say, “to uncover what needs, goals, and questions your audience has. Then, I use those insights to create a master plan for all your content and define things like your messaging strategy and website architecture. This process aligns content with real user needs so you can easily achieve your business goals.”

After many of these conversations, I started hearing the same feedback from potential clients and peers.

Applying HCD to content: a process to follow

Since those early days, I’ve leaned in to the idea that my work as a content strategist or content designer is really just applying HCD to content.

Thinking in terms of “human-centered content” has solved a lot of challenges for me when it comes to defining my work — or at least, my version of content strategy and content design.

I realized that yes, it’s basically a blend of the Design Thinking process and the Human-Centered Design process with a content edge. I empathize with users first, make sense of the research, create a plan, prototype the content, then test and improve if I can.

What I love about framing my work in terms of a design process is that it overcomes a lot of misconceptions. Content strategy is not a deliverable. It’s not a “phase” in a project. It’s not a document. It’s a flexible process to solve content challenges that puts human needs at the forefront.

The solutions-first approach to content is designing a wireframe for a website, then “dropping in” content later. The human-centered approach starts by doing research about what content people need and want, then designs a wireframe around that content.

If you have content challenges to solve, here are a few principles I’ve defined in the last few years as central components to human-centered content.


Three Principles of Human-Centered Content

So, what really makes content human-centered? Aside from following a human-centered process, I like to evaluate the success (or “human-centered-ness”) of content in three key ways. I think of these more as guiding principles than as a checklist.

1. Human-centered content supports user questions and tasks

Human-centered content prioritizes user questions and tasks, because those are the main reasons why someone will use an interface.

Whether it’s a website, an app, or an internal web portal, people mainly care about themselves — not your business. You can be human-centered by prioritizing content that answers their questions or supports their tasks.

Examples of user questions:

  • What does your business do?
  • Is your business close to my house?
  • What are your business hours?
  • What is on your menu?
  • How can you help me?

Examples of user tasks:

  • I need to update my mailing address at my bank
  • I need to schedule an appointment
  • I need to create a new profile for my employee
  • I need to send someone a document to e-sign

2. Human-centered content is relevant to the user

Outside of questions and tasks, all content should still be created through the lens of, “how can we make this relevant to our user?”

Consider a company About Page, for example. Lots of businesses love to talk about their company’s origin story, the awards they have achieved, their certifications, etc. Chances are, if someone is looking for a new insurance company, they don’t care much about industry certifications they have never heard of.

If you must include content on your website that is not directly answering a question or supporting a user task, find a way to put a user-centered spin on it. Don’t just tell the user you have a certification. Tell them why they should care and how it helps them.

Relevant content is also:

  • In a voice and tone the user will enjoy
  • In a format the user will expect and understand

3. Human-centered content is accessible and inclusive

Humans are not all the same. As much as we all love personas, they will never represent every single person who sees your content.

Some humans won’t be able to hear the audio on your videos. Other humans might not be able to see the screen, and use things like screen readers to help them. There’s a great wide spectrum of human capabilities out there, and they are all equally wonderful.

To be truly human-centered, your content needs to be, at minimum, accessible to all humans. In other words, they should be able to access and understand the information in some way regardless of their physical or cognitive abilities. At best, human-centered content is inclusive and welcoming.

To make your content more accessible:

  • Use alternate text for images
  • Use captions and transcripts for videos
  • Prioritize important information in headings and subheads
  • Avoid jargon, acronyms, and complex sentences
  • Write descriptive and specific Call-to-Action copy (example: instead of “read more” you can say “read about accessibility”)

For more on content accessibility, check out the Readability Guidelines by Content Design London. For more on web accessibility, read Introduction to Web Accessibility. Also, see 7 Guidelines for Writing Accessible Microcopy.

To make your content more inclusive:

  • Avoid gendered language when it’s not needed
  • Asking for gender or race in a form? Explain why and include write-in options
  • Avoid phrases and idioms that won’t translate well if your audience speaks English as a second language

Problems with this terminology and other final thoughts

To wrap things up, those are my thoughts now on how we can apply HCD to content and think about content in human-centered terms. But like anything else, it’s not perfect.

For one, I’ve kind of meshed and merged “Design Thinking” and “HCD” together in this article and in my own brain. Some people might not like that, and that’s fair. I also know that the term “user-centered” might make more sense to some people.

To zoom out a bit, we’ve also got lots of confusing terms in the content industry already. Introducing “human-centered content” as a new term could just mix things up even more. That’s also fair.

All that said, I’ve personally found it to be a nice way to explain my work and think about my own process. Perhaps some of you out there will find it useful too.

Source : uxplanet.org

Author : Veronica Cámara


Dressing Up Your UI with Colors That Fit

As designers, we have a powerful ally in color. It lets us work towards a number of different goals: you can use it to reinforce or highlight an idea, to provoke an emotional response from the user, or to draw attention to a specific part of your website. This is, of course, in addition to making your website aesthetically pleasing.

In many cases, the color scheme chosen for a website will also reflect the company’s branding and values.

Color and Branding

The color scheme for a website can contribute to the overall brand perception of products or services. Based on research by CCICOLOR, the Institute of Color Research, users judge products online within the first 90 seconds of their initial view of the product. Between 62% and 90% of this judgment will be based upon the color scheme. Their findings showed that color can reflect the personality of the brand:

  • Red is said to reflect power, passion, and energy. It can be used to alert the user or attract the user’s attention in a design or brand. Red colors are found on the websites of CNN, McDonald’s, KFC, YouTube, and Adobe.
  • Orange can mean friendship, unification, and youth. One example of using orange is Fanta, which you might expect to be in concert with their core branding (orange Fanta was once the only Fanta – the brand has come a long way in recent years and dramatically expanded its offerings).
  • Yellow is said to reflect happiness and enthusiasm. This isn’t surprising; it’s the primary color that we associate with the sun and with the brightness coming from a light bulb. An example of using yellow in a logo is DHL, the international carrier.
  • Green is said to reflect growth and the environment. The Inhabitat website for sustainable development makes use of green, chiming in with the color’s environmental connotations.
  • Blue is said to reflect calm, safety, and reliability. It’s a wise color to use; customers tend to feel more at ease with it. Many business sectors widely use blue, and you can find it on the websites and branding of AOL, Facebook, HP, PayPal, EA Games, DELL, and many others.

Color and User Experience

Color certainly plays its part in delivering a better user experience on websites. In particular, the right choice of color will ensure the usability and legibility (readability) of information displayed on screen.

The right contrast between text and background is an essential part of the user experience; if your customers can’t read your content easily, they’re going to go elsewhere. Think for a moment about red text on a blue background. That color combination is hard to read; our eyes can’t focus on these shades at the same time. The same goes for blue text on a red background. The colors vibrate, making us strain our eyes.

We have another tool. You can also “deprioritize” text by reducing the level of contrast compared to the background, helping the reader skip through non-essential text when skimming or speed-reading.
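If you want to go beyond eyeballing it, contrast can be checked numerically: WCAG 2.x derives a contrast ratio from the relative luminance of the text and background colours, and asks for at least 4.5:1 for normal body text. A minimal TypeScript sketch of that calculation (the example colours are just an illustration):

```typescript
// WCAG 2.x contrast ratio between two sRGB colours (e.g. text vs. background).
// WCAG AA asks for at least 4.5:1 for normal body text and 3:1 for large text.
type RGB = [number, number, number]; // channel values 0–255

function relativeLuminance([r, g, b]: RGB): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(a: RGB, b: RGB): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Red text on a blue background comes out around 2.15:1 — well below 4.5:1,
// which matches how hard that combination is to read.
console.log(contrastRatio([255, 0, 0], [0, 0, 255]).toFixed(2));
```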

The vibrancy of a color can help instill an emotional experience. Bright colors give energy (which is why so many calls to action are in bright red or orange) and a sense of immediacy. News websites often use red text to call attention to breaking or important news. Softer, less vibrant colors can help a user be more relaxed when approaching navigation.

How Do Colors Complement Each Other?

To deliver a harmonious color scheme, it’s important to focus on the details of the colors chosen. There are several things to consider during this process:

Tints, Shades, and Tones

You can generate many variations of a single “hue” on the color wheel. Make a tint by adding white to the hue, a shade by adding black, and a tone by adding gray.

The easiest scheme in which to achieve balance through tints, shades and tones is the monochromatic (single color) scheme.
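In code, a tint, shade or tone is simply the base hue mixed with white, black or gray. The sketch below is illustrative (the mixing amounts and the base colour are arbitrary choices, not a standard):

```typescript
// Tints, shades, and tones as mixes of a hue with white, black, or gray.
type RGB = [number, number, number]; // channel values 0–255

// Linearly mix a colour toward a target: t = 0 leaves it unchanged, t = 1 is the target.
const mix = (c: RGB, target: RGB, t: number): RGB =>
  c.map((ch, i) => Math.round(ch + (target[i] - ch) * t)) as RGB;

const tint  = (c: RGB, t: number): RGB => mix(c, [255, 255, 255], t); // add white
const shade = (c: RGB, t: number): RGB => mix(c, [0, 0, 0], t);       // add black
const tone  = (c: RGB, t: number): RGB => mix(c, [128, 128, 128], t); // add gray

// A small monochromatic palette built from a single base hue.
const base: RGB = [41, 128, 185];
console.log([tint(base, 0.4), base, shade(base, 0.3), tone(base, 0.3)]);
```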

Contrast

Contrast is simply a measure of the variation between two colors. Colors on opposite sides of the color wheel offer the greatest level of contrast, as do black and white. Contrast can be used to achieve balance or to draw a user’s attention to a certain feature or area of text.

It’s important to keep a careful eye on the use of contrast; overdo it and you’re more likely to confuse users than to help them.

Vibrancy

We can use the vibrancy (or brightness) of a color to add emotional content: brighter shades generally reflect increased energy (and thus positive emotions, such as happiness), while darker shades offer reduced energy and thus calmer, quieter spaces.

Additive vs. Subtractive Colors

We choose modern color schemes based on the systems used to display or print designs. The two most common systems are CMYK (Cyan, Magenta, Yellow and Key – Black) and RGB (Red, Green and Blue).

CMYK is a subtractive color system: with none of the four inks applied, the output is the white of the paper. Colors (including black) are specified as a percentage of each of the four inks. CMYK is typically used for print.

RGB, on the other hand, is an additive system. It begins with black, and light is added to achieve hues up to and including white. Each of the three channels takes a value from 0 to 255 (8 bits), which offers more than 16 million combinations (256³) for the palette. RGB is the system used on digital screens.

It’s worth noting that, from a human eye’s perspective, there’s little difference between the two palettes. Our eyes can, perhaps, distinguish all 10 million colors created by the CMYK scale, but the 16 million of the RGB scale often differ too subtly for the eye to tell the difference. Neither palette is “better” than the other from a visual perspective.

The difference is important from a design perspective because the two systems are used to produce different outputs – print and screen. Conversion between the two systems can be imprecise and yield varying results when viewed.
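Part of that imprecision comes from the fact that a simple formula ignores the colour profiles of real screens, inks and paper. The naive RGB-to-CMYK conversion below is only a sketch of the arithmetic involved; production print workflows rely on ICC colour profiles instead:

```typescript
// Naive, profile-free RGB → CMYK conversion (each CMYK channel in the range 0–1).
type CMYK = [number, number, number, number];

function rgbToCmyk(r: number, g: number, b: number): CMYK {
  const [rn, gn, bn] = [r, g, b].map((c) => c / 255);
  const k = 1 - Math.max(rn, gn, bn); // black component
  if (k === 1) return [0, 0, 0, 1];   // pure black: only the K ink
  return [
    (1 - rn - k) / (1 - k),
    (1 - gn - k) / (1 - k),
    (1 - bn - k) / (1 - k),
    k,
  ];
}

console.log(rgbToCmyk(255, 153, 0)); // an orange → [0, 0.4, 1, 0]
```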

Online Color Scheme Applications

The good news is that if you’re stuck choosing a color scheme for your users, there are plenty of online tools to help with the job. Many of them let you download color palettes, and in some cases export settings for other programs.

Don’t forget to seek feedback on your color schemes from your users before moving ahead with them.

The Take Away

Color is a powerful tool for you. Choosing a color scheme for a website is especially important for branding, as research has shown. Take a company like DHL, which uses yellow (believed to reflect happiness and enthusiasm). Now, consider DHL’s business – carrying goods and documents that we value. The color choice helps to instil happiness in the customer.

As a designer, you can optimize user experiences by choosing the right colors. This will help to ensure:

  • Usability and legibility (readability) for the user. We can choose the best color combination to make sure that customers keep reading what we’ve written.
  • An emotional experience in the user. This involves the vibrancy of the color chosen. Bright colors give energy and a sense of immediacy or urgency. We can use them to call attention to our products, services or important messages. Softer, less vibrant colors can help a user feel more at ease – especially useful for industries such as banking.

Using colors to your advantage means knowing what goes into them. Looking at the color wheel (from which we can make any color), we have:

  • Tints – We add white to a hue (the part of a color that makes it discernible as red, green, etc.) to make a tint.
  • Shades – We add black to a hue to make a shade.
  • Tones – We add gray to a hue to make a tone.

To get a balance through tints, shades and tones, it’s easiest to use the monochromatic (single color) scheme.

You should also consider the following for your color schemes:

  • Contrast – We can draw a user’s eye to a certain feature or achieve balance in our design by using the measure of variation between two colors, or black and white. Be careful with contrast: overdoing it will confuse readers. Focus it on what’s important.
  • Vibrancy – Vital for provoking an emotional user response. Feelings are important, so tap into them with your color scheme choice. Use brighter shades to reflect more energy and positive emotions like happiness; use darker shades to calm your user.

Choose your color scheme based on whether your design is for screen display or print. CMYK (Cyan, Magenta, Yellow, and Key – Black) is the subtractive color system used for print, capable of 10 million color combinations. RGB (Red, Green, and Blue) is the additive color system used for screen display, allowing over 16 million color combinations.

Many online tools can help you find the right color palette. Above all, check with your users that the color schemes you like work for them. That feedback will save you time and expense before you move ahead.

 

SOURCE : www.interaction-design.org

AUTHOR : Mads Soegaard


Effective Mobile App UX Enhancement

The quality of a mobile app’s user experience is crucial for holding users’ attention and earning the largest possible number of repeated launches. The Interaction Design Foundation claims that mobile UX must bring joy to users, and this is something I agree with.

The point is that even such a clear idea is, in practice, a challenge for all mobile app designers. Fortunately, the smartphone market has existed long enough to supply us with both good and bad practices. In this article, I survey the best mobile UX implementations to provide a holistic overview of logical and effective app UX.

It is important to consider that best practices are not always successful ones. Even such giants as Google make mistakes. At Android Dev Summit 2018, Google admitted that bright white material design canvases consume too much energy. An app’s effect on battery life is as significant for UX as are visual features, accessibility and other factors I cover in this article. User experience is a true reflection of the developer’s understanding of essential app design rules as well as knowledge of psychology and statistics.

The app market is extremely competitive, so developers are pressed for time when launching apps – often resorting to testing and fixing them on the go.

How an App’s Performance Affects UX

The performance of an application is a sum of two basic measurements. The first one is the app’s average response time to commands during conditions of peak CPU load. The response time must be as fast as possible since slow response time is the top reason why users reject apps. An application must launch without an irritating downloading process.

This is where the second measurement comes in. A developer should calculate the amount of computational power needed for an app’s launch and ongoing functioning. In session 407 of WWDC 2018, Apple Xcode engineer John Hess made this very point. He also insisted that measurements be done at all stages of application development.

According to Hess, performance enhancement consists of several stages. The first is debugging. Apple does this using profilers (measurement software) and the re-coding of applications. As a result, it deletes massive parts of redundant code from its apps. This operation must be repeated until profilers detect the desired performance growth.

Likely, users will not notice any dramatic changes but smartphone hardware will. The less capacity needed for the peak load, the higher the number of simultaneous tasks that may be run without throttling and freezes.
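Response time — the first measurement above — can be sampled with nothing more sophisticated than a timing loop. The sketch below is a generic illustration rather than any platform’s profiling tooling, and `loadHomeScreen` is a hypothetical app command used only for the example:

```typescript
// Run an async command repeatedly and report its average response time in ms.
async function averageResponseTime(
  command: () => Promise<void>,
  runs = 50,
): Promise<number> {
  let total = 0;
  for (let i = 0; i < runs; i++) {
    const start = performance.now(); // high-resolution timer (browser and modern Node)
    await command();
    total += performance.now() - start;
  }
  return total / runs;
}

// Usage (loadHomeScreen is hypothetical):
// averageResponseTime(() => loadHomeScreen()).then((ms) => console.log(`${ms.toFixed(1)} ms`));
```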

Effective Onboarding Flow Principles

This stage is crucial because it determines how quickly users become accustomed to a new application. Of course, all apps require specific onboarding steps, but their purpose is unified: they must let you in, collect your data and introduce capabilities as fast as possible. Let us get straight to the examples.

  • LinkedIn: This app gives new users a brief descriptive introduction to the network. It was a wise decision to allow users to skip the intro if they already know what the network is about. During the next stage, users are asked to enter their personal information. This fills in all the necessary data to provide instant suggestions for interest groups, opinion leaders, professionals in adjacent fields and familiar personalities from Facebook and email contacts. This way, LinkedIn completes an otherwise time-consuming research process in seconds, without users’ participation.
  • Flipboard: This is an example of a unique onboarding flow. Flipboard is designed to astonish users from the very first launch. After six years, I still remember my first launch! This app has a unique navigation mechanism that is based on flipping pages in a manner similar to flipping through a paper magazine. It would be impossible to understand the logic without instructions. That is why the introduction teaches new users how to use the app. It also collects information about the users’ interests. The mild culture shock that the introduction provides makes users willing to complete the rather dull registration process that follows.
  • Duolingo: This app is designed to help multicultural audiences learn new languages. Onboarding flow in Duolingo is multistage. Before users get to the sign-in screen, they must choose a language to study, pass a short test, choose goals and a studying path, and even complete the first lesson. This way, Duolingo figures out a newbie’s level and purpose. At the same time, it shows everything a new user should know about the app’s mission. To be fair, I tested many apps for language learning, and Duolingo is not the most effective one. However, it is the most popular one. The quality of its UX has made this app go viral. Moreover, it persuades users that learning a new language is more straightforward with Duolingo.

As you know, these apps provide entirely different services, but they do adhere to similar rules. An app must introduce itself rapidly and descriptively, teach users how to use it properly and provide a feeling of joy from the use of the application.

Proper UX Personalization Cases

Of course, it is much easier to let users search for and choose what they want, but they do not need an app that is unable to help. At the same time, users hate the feeling of being spied on. Given these conditions, UX personalisation should be both subtle and effective. I suggest reading these personalisation tips to get more out of the practices below.

  • Netflix & YouTube: Both video streaming services provide automated feed personalisation. They use previously viewed and rated videos to suggest relevant content. The same algorithm is used in YouTube to show related videos directly under currently viewed videos. Besides, it is not obligatory to create a YouTube account to use both features.
  • Facebook App: Each post in this social networking app has a button that allows users to express their wish to see (or not see) something similar to the post at a later date. Users are also allowed to set a notification for specific contacts and groups. At the same time, this app has automated suggestions for new people and interest communities.

So, the outcomes are clear. It is wise to strike a balance between automated personalisation based on user activity and manual control.

The Market Search Strategy

There might be two similar applications on the mobile app market, but the more successful of them will have more promotion – its visibility is simply higher in the App Store and Google Play charts. Sometimes, it is much more expensive to sell an app than to develop it. A good strategy includes the following steps to attract the attention of potential users.

  • Research: At this stage, a developer must figure out everything about target audiences and competitors to determine what people need and to deliver an app in a fresh way.
  • Good Old Landing Page: This is a field for creativity. It is impossible to suggest a perfect formula for this promotional tool. I can list only truisms. A landing page must be descriptive, exciting and catchy and it must arouse desire.
  • ASO (App Store Optimization): This task looks simple, but it requires some magic. The app needs a distinctive icon and title to remain recognisable over the years, along with sharp explanatory screenshots and a clear description.
  • Viral Video Content: Video marketing stats show that video content is one of the most effective means of advertising. People naturally like watching videos more than they like reading or listening. This method of perception has the highest efficiency.
  • Social Networking: Most smartphone users have social network accounts. It is not necessary to be present on every network; developers should choose those networks that hold most of their target audience.

Endless UX Optimization

I have already said that app developers do not have time for mistakes. That is why they must do their best to launch new apps with reusable code and to lighten all components. The fact is, UX optimisation is a perpetual, iterative undertaking. All apps require constant updates to improve their UX. Sometimes they may be rejected, but they still must be frequent enough to keep pace. Users see only the wrapper, but their subconscious does not miss any details. This means there are no limits to code and design improvements.

 

Source : usabilitygeek.com

Author : 


Designing a Recruitment Tool to help HR Managers

Summary

The aim was to make the process of recruitment easier for HR Managers by building a product that helps keep track of all the information about the hiring process. In order to achieve this, it was important that the product allows HR Managers to keep track of the events performed during the hiring process for each individual opening in the company.

For example, if the manager has Y days to recruit a Product Designer, they should know that interview round 1 for all the shortlisted candidates should be done within X days (where X < Y).

The Problem

HR Managers rely on different types of software provided by their company, or just use spreadsheets to manage and recruit for different positions.

Handling multiple parts of different job openings simultaneously is complicated and cumbersome, since different job openings may have completely different hiring processes.


Design Process

User Research

My team was incubated in an HR Consultancy, so we had the opportunity to closely interview and empathize with HR Managers who were recruiting for different companies. Some of the key insights we gained through the process were:

  • The most common problem was not being able to see all the job openings on a single screen; managers disliked navigating through sheets or screens to find information about different job positions.
  • Keeping track of events performed and yet to be performed for a job opening was a hassle.
  • If the manager knows how many days have been spent and how many tasks have been completed for each job opening individually, it’ll help the manager decide which job opening needs more effort.

Ideation and Wireframing

After conducting interviews and understanding the problems, we had the basic idea of what the HR Managers needed and how our product should help them.

  • The product should be a single-screen product, as simplicity is key and most HR Managers will have a hard time using complicated or advanced software.
  • There should be a way to keep track of all the events involved in the hiring process for any job.
  • A clear indication should be present that shows whether the manager is completing the events of a particular position before time, on time, or with some delay.
  • A signal which shows whether the overall hiring process of a particular job opening is before time, on time or delayed.
  • Also, pop-ups of any sort had to be avoided, as they generally make for a bad user experience.

What are SLAs and SLA events in the Recruitment process?

An SLA duration (or simply SLA) is the time given for hiring for a particular position. SLA events are the different events that must be completed to hire for that position. For example, conducting interview round 1 for all the shortlisted candidates is an SLA event.
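To make the idea concrete, here is a minimal sketch of how SLA tracking could be modelled. The interfaces and the status rules are illustrative assumptions for this article, not the product’s actual data model:

```typescript
// A job opening with its SLA duration and SLA events, plus a derived status
// signal: before time, on time, or delayed.
interface SlaEvent {
  name: string;          // e.g. "Interview round 1"
  dueDay: number;        // the day (X) by which the event should be completed
  completedDay?: number; // the day it was actually completed, if it was
}

interface JobOpening {
  title: string;
  slaDurationDays: number; // the Y days available to fill the position
  events: SlaEvent[];
}

type Status = "before time" | "on time" | "delayed";

function openingStatus(opening: JobOpening, today: number): Status {
  const done = opening.events.filter((e) => e.completedDay !== undefined);
  const late =
    done.some((e) => (e.completedDay as number) > e.dueDay) ||
    opening.events.some((e) => e.completedDay === undefined && today > e.dueDay);
  if (late) return "delayed";
  const early = done.length > 0 && done.every((e) => (e.completedDay as number) < e.dueDay);
  return early ? "before time" : "on time";
}
```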


Solution

As we had to release our product for beta testers as soon as possible, we did work on the UX but we knew we could do much better on the UI part.

After changing our style guide we kept on iterating and finally came up with a better design.

Source : uxplanet.org

Author : Pankaj Kumar


Ideation for Design – Preparing for the Design Race

Ideation is easy to define. It’s the process by which you generate, develop and then communicate new ideas. Ideas can take many forms such as verbal, visual, concrete or abstract. The principle is simple: create a process by which you can innovate, develop and actualize new products. Ideation is critical to both UX designers and learning experience designers.

As Pablo Picasso, the artist, said about his creations: “I begin with an idea, and then it becomes something else.”

Ideation does not need to be beautiful to be effective; creating ideas is the main point, rather than graphic design.

There are many types of new ideas, and they are commonly found in the following patterns:

  • Problem to solution. Find a problem, find a solution – this is, perhaps, the most common form of ideation.
  • Derivation – where you take an existing idea and then change it (hopefully for the better)
  • Symbiotic – where you take a group of ideas and combine them to form a single coherent idea
  • Revolutionary – where you take an existing principle and smash it and derive a totally new perspective
  • Serendipitous discovery (or accidental discovery) – when an idea turns up when you are in pursuit of something else (penicillin would be a good example of serendipitous discovery)
  • Targeted innovation – an iterative process where the solution is theorized but the path to it is poorly understood. Repeated attempts are used to create the pathway.
  • Artistic innovation – a form of ideation which completely disregards “what is practical” and innovates without constraint
  • Computer aided innovation – where computers are used to probe for solutions and to conduct research

All of these processes can be used by the designer in search of ideas for a project. However, in many cases they are not practical (revolutionary ideation, for example, is generally a once- or twice-in-a-lifetime Eureka! moment rather than a repeatable process) or are ruled out by budget or time constraints (such as targeted innovation or computer aided innovation).

Thus the designer will seek more practical and prosaic approaches when it comes to ideation including brainstorming, mind mapping, etc.

Ideation on Paper

Almost all ideation techniques can be deployed on paper. Brainstorming and mind mapping, for example, are simply the same process but visualized in different ways.

Thus, in this article, we will examine brainstorming as the key tool for ideation but other tools may be considered on projects to bring about similar results.

Ideation on paper: this is for a blog’s content, but the same principles apply to any kind of ideation. Get ideas down on sticky notes and then organize ruthlessly.

Rules for Initial Ideation

When you are at the start of the ideation process you want to generate ideas in their multitudes. The idea is to follow a few simple rules, as a team, to deliver lots of ideas. These ideas, once the exercise is complete, can then be examined for practical considerations. The rules are as follows:

  • Prepare the space. Put up posters with user personas, the problem in hand, and any design models or processes that will be used on the project. The more context provided, the easier it should be to come up with ideas.
  • While initial ideation takes place – there are NO BAD IDEAS – the exercise is to create not judge ideas.
  • Unrelated ideas can be parked for another discussion. They should, however, be written down.
  • Volume is important – don’t waste time examining any particular idea in depth; just write it down and move on.
  • Don’t be afraid to use lots of space. Write ideas on sticky notes and then plaster them on everything in the room. This can help participants connect seemingly unrelated ideas and enhance them.
  • No distractions. Turn off phones, laptops, etc. Lock the door or put a sign outside saying “Do not disturb.” You can’t create ideas when you’re constantly interrupted.
  • Where possible be specific. Draw ideas if you can’t articulate them in writing. Make sure you include as much data as possible to make an idea useful.

Once the rules are understood, grab your team and get creative. It can help to do a 10-minute warm-up on an unrelated topic to get people thinking before you tackle the problem in hand. Don’t take more than 2 hours for initial ideation.

Laying down rules at the start of an ideation session will help keep things on track throughout. Don’t be afraid to call people’s attention to the rules if they begin to bend or break.

Structuring Your Ideas

Once you’ve got some ideas coming it’s a good idea to group them around specific areas. Some common idea areas include:

  • Pain Points
  • Opportunities
  • Process Steps
  • Personas
  • Metaphors

When You Get Stuck

There are also some simple techniques to get the creative juices flowing when the ideation process gets stuck.

  • Breaking the law. List all the known project constraints and see if you can break them.
  • Comparisons. Taking a single phrase that encapsulates the problem and see if you can find real world examples of this.
  • Be poetic. Try to turn the problem into a poem or haiku. Thinking about the word structures can deliver new ideas.
  • Keep asking “how and why?” – These words make us think and create.
  • Use laddering. Move problems from the abstract to the concrete or vice-versa to consider them from another perspective.
  • Steal ideas. If you get stuck on a particular concept – look to other industries and see how they’ve handled something similar. Of course, in the end you should be emulating in design not copying.
  • Invert the problem. Act like you want to do the exact opposite of what you’ve set out to do – how would you do that instead? Even simple inversions can make us think very differently.

Review and Filter

Once you have a large number of ideas, you then need to review and filter these down to something more manageable. It is at this stage that ideas can be discarded as “bad”, kept as “good” or modified into something more useful. It’s best to carry out this exercise a little while after the initial ideation phase so that people have a chance to reflect on the ideation as well as become less personally attached to the original ideas.

The Take Away

Creating ideas is often best done in groups – though all the techniques above can be carried out by an individual too. The trick is to just create and keep doing so for an extended period of time. You can worry about what works and what doesn’t later. Ideation is one of the most fun things a designer can do, but it can also be frustrating if you try to do it by yourself, sat in front of a piece of paper.

Source : interaction-design.org


Safe & simple: Can UX design protect us from hackers?

Digitalisation has made some things in life simpler, but not security.

By Andy Eva-Dale & Lucy Valentinova

“Currently, verifying your identity online… places a huge burden on individuals, who have to successfully remember hundreds of passwords for various identities and are increasingly being subjected to more complexity in proving their identity and managing their data.”

Beyond being inconvenient, this complexity is proving positively dangerous — and yet, neither brands nor consumers seem capable of tackling the problem. Today’s consumer has little patience for the faff of multi-device, multi-interface, multi-page verification processes, thumbing codes into cumbersome mobile keyboards and endless password resets. Businesses, meanwhile, are still figuring out how to profit from customer data, whilst the scale and costs of data hacks are escalating. The Marriott Starwood breach this year was the second-largest in history, with 500m customers affected. The largest-ever attack — on all 3bn Yahoo accounts — ended up costing $47m in litigation expenses.

Easier said than done

Consumers are cybersecurity-conscious, but uninterested in managing the risks. PwC’s 2017 Protect.me survey found that “87% of consumers say they will take their business elsewhere if they don’t trust a company is handling their data responsibly”. And yet…

“Almost one in five people has faced an account hacking attempt but … only a third create new passwords for different online accounts and a worrying one-in-10 people use the same password for all their online accounts”

A lack of understanding seems to be an issue. Two Indiana University academics surveyed 500 American adults to understand why two-factor authentication — theoretically, a fairly effective security protocol — is not more popular. Most consumers, apparently, simply didn’t see the urgency. One of the researchers said of the participants, “We got a lot of, ‘My password is great. My password is plenty long enough.’” In an interview with The Economist, Adam Cooper, Lead Technical Architect at GOV.UK Verify, confessed: “I am baffled most of the time” by most ID login processes.

This suggests that if we are to work towards a more cyber-secure world, consumers are unlikely to be much help. Indeed, the same PwC study found that “72% of consumers believe businesses, not government, are best equipped to protect them.” This may be a misguided expectation; conducting business online appears to depend on hoarding a mouth-watering jackpot of juicy data.

‘A key source of stolen card credentials remains persistent compromises of merchants who store card numbers along with customer details on their own systems as illustrated by hacks against Target, Equifax, Heartland and most recently Marriott.’ — IT News

Failing to find a protocol simple enough for everyone will be costly. Marriott’s massive data breach earlier this year led to embarrassing levels of attention from the authorities. According to CNBC, “Attorney generals in Connecticut, Illinois, Massachusetts, New York and Pennsylvania said they would investigate the attack, as did the UK’s Information Commissioner’s Office”.

The NY Times reports Target’s heavily-publicised data breach in 2013 cost the retailer $202m. There is also a risk to profits. Whilst the vast majority of trade still takes place offline, digitally engaged consumers are more measurable, marketable and profitable. Mobile payment users spend twice as much online as illustrated by this Retail Dive report.

UX Proofed

UX improvements are one piece of the puzzle, making it easier for consumers to protect themselves. Indeed, simple security has become a selling point. Monzo Bank shouts about how easy it is to block/unblock a lost credit card…and brands in finance and beyond are now working to introduce this same ethos into their digital UX. There’s a host of good examples. Most obviously, brands should reduce steps and alleviate the load on consumers. Biometrics, for instance, can now be used to log into online services provided by Bank of America, Capital One and Wells Fargo. Users of the Target app can also use a thumbprint to log in.

Similar gains can be made by simple UX planning. Lloyds Bank, for instance, allows customers to bypass repetitive phone security questions by calling via the app. Where additional security protocols are required, they must be effortless. Two-factor authentication (2FA) is fairly quick and easy, and may even be triggered on demand. Barclays Bank customers who receive a customer service call can request a verification message via the app to confirm it’s not a scam. You might also consider making security more fun. Players of Fortnite, an online video game, can “unlock a skin” (I’m told this is a good thing) by enabling 2FA on their accounts.
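For readers curious how those one-time codes are produced, the sketch below generates a TOTP-style code (the scheme behind many authenticator apps, per RFC 6238) using only Node’s built-in crypto module. It is a simplified illustration, not any bank’s actual implementation:

```typescript
import { createHmac } from "crypto";

// TOTP-style code: HMAC-SHA1 over a 30-second time counter, then dynamic
// truncation down to a 6-digit code (simplified from RFC 6238 / RFC 4226).
function totp(secret: Buffer, now: number = Date.now()): string {
  const counter = Math.floor(now / 1000 / 30); // 30-second time step
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));       // 8-byte big-endian counter
  const hmac = createHmac("sha1", secret).update(msg).digest();
  const offset = hmac[hmac.length - 1] & 0x0f; // dynamic truncation offset
  const code =
    ((hmac[offset] & 0x7f) << 24) |
    (hmac[offset + 1] << 16) |
    (hmac[offset + 2] << 8) |
    hmac[offset + 3];
  return String(code % 1_000_000).padStart(6, "0");
}

// Server-side check; a real system would also accept adjacent time windows
// to absorb clock drift, and rate-limit attempts.
function verifyCode(secret: Buffer, submitted: string): boolean {
  return totp(secret) === submitted;
}
```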

As important as these developments are, however, they cannot be the full solution, as UX improvements do not improve the way brands store and protect data. This brings the debate under the bigger umbrella question of how we — or how interested parties — manage our identities online.

Got ID?

“Any individual’s identity is contingent on the recognition of others… anything like a modern life is rendered all but impossible when that recognition is not forthcoming, or is suborned.”

The Economist, December 2018

If you think about it, cash is exceptionally secure and simple: merchant and consumer confirm and verify the transaction on the spot, face-to-face. Short of violent crime, not much can go wrong. Digital payments present the challenge of how to compensate for this innate security. There are two possible approaches. One of them — currently commonplace — is to compensate for the lack of the person. This entails recording secure information (e.g., password-and-email combos), and/or interacting with a device or card assumed to be in the customer’s possession.

The secure information is, however, a new layer of complexity, and a weak point which can be exploited by criminals focusing on card-not-present fraud…

“using stolen identification to open credit lines… creating new, digital-only identities by knitting together real and fictitious information.”

An Accenture study suggested that global annual losses of this kind of fraud may already run into the tens of billions. Dependency on ‘secure’ devices raises similar problems. The device may be stolen, and even biometrics can be hacked. Granted, brands could raise their defences. Monzo — again, leading the charge — used data analysis to help tackle a Ticketmaster breach this year. The challenger bank also operates a machine learning-powered fraud detection system.

But developments like this are really patches on a fundamentally flawed model, where a consumer brand is expected to police transactions and guard stored data. A second approach seems rather more futureproof. The thinking is that, rather than compensating for the physical person, you supplant them, relying on a verifiable digital identity instead.

Take the example of Estonia, hailed as a world leader in digital identification. ‘All residents have electronic ID cards, which are used in health care, electronic banking and shopping, to sign contracts and encrypt e-mail, as tram tickets, and much more besides — even to vote…

Estonia’s system uses suitably hefty encryption. Only a minimum of private data are kept on the ID card itself… Also issued are two PIN codes, one for authentication (proving who the holder is) and one for authorisation (signing documents or making payments). Asked to authenticate a user, the service concerned queries a central database to check that the card and relevant code match. It also asks for only the minimum information needed: to check a customer’s age, for example, it does not ask, “How old is this person?” but merely, “Is this person over 18?”’

Though rigorous and secure at a technological level, at the customer level it’s nothing more than PIN verification: an exceptionally simple protocol. The system has yet to be hacked, and its success has not gone unnoticed. Banks, card issuers, technology companies and governments are now all proactively troubleshooting how to manage our digital identities online. Here is a podcast that talks about the issues of security, 2-step verification and SIM swapping.
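The data-minimisation pattern in the Estonian example — answer a yes/no question rather than hand over the raw attribute — can be sketched as a tiny interface. Everything below is illustrative, not Estonia’s actual API:

```typescript
// An identity provider that discloses the minimum needed: a boolean answer,
// never the underlying date of birth.
interface IdentityProvider {
  isOver(cardId: string, ageThreshold: number): Promise<boolean>;
}

async function ageRestrictedCheckout(idp: IdentityProvider, cardId: string): Promise<void> {
  const allowed = await idp.isOver(cardId, 18); // "Is this person over 18?"
  if (!allowed) {
    throw new Error("Age-restricted purchase not permitted");
  }
  // The merchant completes the sale without ever learning the customer's age.
}
```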

More money more problems

Silicon Valley companies own the devices and operating systems on which digital transactions take place, as well as analogues of our identities in the forms of email accounts and social profiles.

This makes them useful partners. “Sign in with Google/Facebook” really is very simple. True enough, nine out of ten companies which rely on a third-party identity supplier use either Google, Facebook, or both. But there are problems. For all their reach, neither Google nor Facebook is all-pervasive. Social channels phase in and out of fashion, and a majority of retail takes place offline. Nor is there a consumer appetite for putting that level of trust in big tech.

A survey of 133k consumers by consultancy Bain & Company put these companies at the bottom of the pile.

Equally, many governments may shy away from wading in. The UK’s first attempt to introduce national ID cards was a £4.5bn failure, partly because it conjured unsavoury connotations. And in countries much larger than Estonia, governmental ID requirements may hinder the relatively simple task of managing simple, secure transactions. Aadhaar, India’s foray into mandatory digital identity, has been plagued by problems, with people being refused basic services ranging from posting a letter to receiving healthcare.

Joining forces

Card issuers have seized on the natural bridge between payment and identity, and they seem to offer the greatest promise of salvation. In December 2018, Mastercard announced a strategic collaboration with Microsoft over an as-yet vague service “that would allow individuals to enter, control and share their identity data their way — on the devices they use every day”. VISA is also plugging away. Card issuers have two reasons to feel confident of their success: engrained trust, and unparalleled reach. Ninety-six percent of the UK population has a debit card, and even market stallholders now often accept card payments. Tellingly, the majority of forays into payments to date by the FAANGs have linked back to a major card issuer. Technology companies, meanwhile, may manage matters such as the device, the interface, and necessary intelligent back-end tech required to create a secure, “universally-recognised digital identity”.

The importance of an agile technology partner cannot be overstated. Exceptionally simple, minimalist user interfaces have been part and parcel of the success of Monzo and other challenger banks, whilst traditional-model banks languish with underwhelming digital customer service provisions and outdated UX. Card issuers and big tech firms must ensure they do not also fall victim to corporate inertia. Whilst they may not face equivalent competitive threats (payment processing is a tougher nut to crack than banking), consumer adoption is far from guaranteed.

‘Pindependent’

Consumers’ own inclination to defend themselves online may only weaken with time. Start-ups are sniffing around the payments and identity spaces. Yoti, for instance, seeks to become the “world’s trusted identity platform”, storing customers’ government documents for purposes such as buying age-restricted products. Though only a piecemeal solution, advancements such as this will further raise customers’ demands for ease and simplicity. There is no panacea for meeting these demands. Reducing the burden on consumers’ time and attention is vital, and this can be achieved through UX improvements at every stage of the customer journey. Brands, equally, cannot realistically be expected to provide a robust defence against cybercrime. Intelligent technologies may support them, but as long as they need to store customer data in order to do business, they will remain irresistible targets for fraudsters. To connect the dots, a new security protocol must be established that alleviates the burden on both parties.

Something like the card PIN code holds promise. It’s very secure, very simple, and it already enjoys mass adoption; indeed, mobile payment apps already generally rely on a PIN login. The step towards using it to authorise online transactions seems relatively small. The prize for tackling this goes beyond preventing crime. In-store mobile payments are still only popular with 3–7% of western consumers, and only a quarter appear willing to try them. If digital payments came to be viewed as being as safe as, or safer than, card transactions, this should also open up further marketing opportunities via the customer’s device: location-based targeting, beacons, push notifications, personalised offers, and so on.

That, in turn, should unlock a simpler, safer and more profitable future for all.

Words by

Andy Eva-Dale is a process-driven Technical Director with a passion for anything technical. He has over 15 years of application development experience, working with organisations such as the London Stock Exchange, BAE Systems and WPP. During this time he has worked across the full stack on projects for clients such as Grant Thornton, Aegon and East Midlands Trains, scoping, designing, documenting and delivering award-winning, large enterprise-standard products on a global scale. Andy holds certifications in multiple technologies, has delivered talks on emerging technologies and is an active member of various communities.

Contributing writer, Lucy Valentinova is a User Experience Consultant who is focused on delivering innovative websites and digital services that meet user, business, and development goals. UX design, strategy, user research and information architecture are central to her work and enable her to make informed decisions when it comes to proposing digital solutions.


Source: uxplanet.org

Posted in Knowledge sharing | Leave a comment

Designing User Experience for Virtual Reality applications

Virtual Reality (VR) implies a complete immersion experience that shuts out the physical world. Using VR devices such as the HTC Vive, Oculus Rift or Google Cardboard, users can be transported into a variety of real-world and imagined environments, such as the middle of a squawking penguin colony or even the back of a dragon.

Other related experiences exist, such as Augmented Reality, Mixed Reality, and Extended Reality, each of which provides the user with a different kind of experience.

Augmented reality (AR) adds digital elements to a live view often by using the camera on a smartphone. Examples of augmented reality experiences include Snapchat lenses and the game Pokemon Go.

In a mixed reality (MR) experience, which combines elements of both AR and VR, real-world and digital objects interact. Mixed reality technology is just now starting to take off, with Microsoft’s HoloLens one of the most notable early mixed reality devices.

Relating Conventional Design to 3D experience

The market has furnished designers with plenty of reliable 2D work over the past few decades, but it is now moving towards a new paradigm of immersive 3D content. Sound, touch, depth, and feeling will all be fundamental to the VR experience, making even the most novel 2D screen experiences feel flat and dated.

VR provides many of the same benefits as training in a physical environment — but without the accompanying safety risks. If a subject becomes overwhelmed, they can easily take off the headset or adjust the experience to be less intense. This simple fact means that specific industries, such as healthcare, the military, and the police, should prioritize finding ways to use VR for training.

Think Skype for Business on steroids. VR has the potential to bring digital workers together in digital meetings and conferences. There will be real-time event coverage, something like Facebook Live with VR. Rather than merely seeing the other person on a screen, you’ll be able to feel as if you are in the same room with them, despite being miles away.

Think about how you interact with a touchscreen today. There are various patterns that we have all come to understand, for example swiping, pinching to zoom, and long-pressing to bring up more options. These are all considerations that ought to be made in VR as well. I’m sure that as more designers come into the VR field, there will be more minds to create and vet new UI patterns, helping the industry to move forward.

Interactivity in virtual reality is composed of three elements: speed, range, and mapping. Speed is the response time of the virtual world. If the virtual world responds to user actions as quickly as possible, it is considered an interactive simulation, since the immediacy of responses affects the vividness of the environment. Many researchers have tried to characterize the components of interactivity in virtual reality in different ways. To do this well, however, designers have to acquire a thorough real-world understanding: they need to visualize the typical physical space surrounding the user and then build on the elements that they’ve observed. This matters because at no point do you want your users to feel uncomfortable, as if the newly introduced elements are invading their personal space.
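
To make the “speed” element concrete, here is a minimal sketch of a frame-latency monitor for a browser-based VR scene; the 11 ms budget assumes a 90 Hz headset, and the function and callback names are illustrative:

    // Flags frames where the virtual world takes longer than the latency budget to respond.
    function monitorFrameLatency(onSlowFrame: (ms: number) => void, budgetMs = 11): void {
      let last = performance.now();
      const tick = (now: number): void => {
        const frameTime = now - last; // time the world took to respond and redraw
        if (frameTime > budgetMs) onSlowFrame(frameTime);
        last = now;
        requestAnimationFrame(tick);
      };
      requestAnimationFrame(tick);
    }

    // Example: warn about any frame slower than ~11 ms.
    monitorFrameLatency((ms) => console.warn(`Slow frame: ${ms.toFixed(1)} ms`));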

So, what kinds of apps are we going to design?

Generally speaking from a designer’s perspective, VR applications are made up of two types of components: environments and interfaces.

You can think of an environment as the world that you enter when you put on a VR headset — the virtual planet you find yourself on, or the view from the roller-coaster that you’re riding.

An interface is the set of elements that users interact with to navigate an environment and control their experience. All VR apps can be positioned along two axes according to the complexity of these two components.

  • In the top-left quadrant are things like simulators, such as the roller-coaster experience mentioned above. These have a fully formed environment but no interface at all. You’re simply locked in for the ride.
  • In the opposite quadrant are apps that have a developed interface but little or no environment. Samsung’s Gear VR home screen is a good example.

How to start designing the user experience for virtual reality

Before you start designing for your VR app, considering some of these fundamental questions may help you:

  • How do people get started?
  • What affordances are provided to guide people without overwhelming them?
  • Do you want to err on the side of providing too much guidance or create a minimalist environment that doesn’t overload the user with too many choices?

Don’t expect people to know what to do and where to go. Slow, progressive familiarization, visual cues, and guidance from the software should all be used to help the user. When you’re designing for VR, you’re designing for the capabilities of people as much as for the capabilities of the system, so it’s essential that you understand your users and the issues that may come up while they experience VR.

Designing a VR experience isn’t too different from designing a web or mobile product. You will need user personas, conceptual flows, wireframes, and an interaction model.

The Process for Designing User Experience for Virtual Reality

Before you even begin structuring for VR, you need to consider what sort of experience you want to create. There is certainly no one-size-fits-all answer. Most ethnographic research methods are fully applicable within VR, including:

Client Interviews, Fly-on-the-Wall, Usability Testing, Touchstone Tours, Simulation Exercises, Shadowing, Participant Observation, Heuristic Evaluation, Focus Groups, Eye Tracking, Exploratory Research, and Diary Studies.

WIRE-FRAMES:

Generally, as designers do, we’ll go through rapid iterations, defining the interactions and general layout.

VISUAL DESIGN

At this stage, after the features and interactions have been approved, brand guidelines are applied to the wireframes and a beautiful interface is crafted.

The design process for VR apps does not change dramatically from our normal design process, apart from a few additional usability considerations.

Setting up the environment for designing

Canvas size

To apply a mobile-app workflow to VR UIs, you first have to figure out a canvas size that makes sense.

A 360-degree environment, when flattened, forms what is called an equirectangular projection. In a 3D virtual environment, these projections are wrapped around a sphere to mimic the real world.

The projection spans 360 degrees horizontally and 180 degrees vertically. We can use this to define the pixel size of the canvas: 3600 × 1800, or 10 pixels per degree. Working with such a big size can be a challenge, but because we’re primarily interested in the interface aspect of VR apps, we can concentrate on a segment of this canvas.

Building on Mike Alger’s early research on comfortable viewing areas, we can isolate a portion where it makes sense to present the interface.

The area of interest represents one-ninth of the 360-degree environment. It is positioned right at the center of the equirectangular image and is 1200 × 600 pixels in size.
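
The canvas arithmetic above is easy to express in a few lines. A minimal sketch (the helper name is an assumption, not part of any VR SDK):

    // Full equirectangular canvas: 360° × 180° at 10 px per degree.
    const CANVAS = { width: 3600, height: 1800 };

    // The comfortable "UI view" is a 1200 × 600 px region centred on the canvas,
    // i.e. (1200 × 600) / (3600 × 1800) = one-ninth of the full environment.
    function uiViewRect(uiWidth = 1200, uiHeight = 600) {
      return {
        x: (CANVAS.width - uiWidth) / 2,   // 1200
        y: (CANVAS.height - uiHeight) / 2, // 600
        width: uiWidth,
        height: uiHeight,
      };
    }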

Pencil & Paper

Before getting into any software, it’s crucial to get your ideas out on paper. It’s fast, cheap, and helps you express ideas that may take hours in software. This is especially important because moving from sketches to hi-fidelity can cost much more in 3D than in 2D.

Software

Some designers start with tools they already know, like Sketch; others use it as an opportunity to learn new tools. It really depends on what engine you are going to use to build your app. If you are building a 3D game, you’ll want to use Unity or Unreal Engine. Cinema 4D and Maya are also widely used, but mostly for complex animations and renderings.

PRINCIPLES TO CONSIDER WHILE DESIGNING

TEXT READABILITY

Because of the display’s limited resolution, all of your beautifully crisp UI elements will look pixelated. This means, first, that text will be difficult to read and, second, that there will be a high level of aliasing on straight lines. Try to avoid big text blocks and highly detailed UI elements.

Intended viewing distance: how far away a screen is designed to be viewed from. The optimal viewing distance informs the size of the screen, as well as the size and density of the content on it.

A distance-independent millimeter, or dmm, can be described as one millimeter viewed from a meter away. It is an angular unit that simply follows a millimeter as it scales off into the distance. As a concrete example, take a screen-space layout measured entirely in dmms (say 400 × 480 dmm) and apply it in world space to three separate virtual screens.

Each of these virtual screens has a different intended viewing distance, yet from the vantage point each screen is intended to be viewed from, they all look the same to the user: they have the same angular size, text is just as readable, buttons are just as clickable, and motion appears to move in the same way.
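
Because a dmm is defined as one millimeter at one meter, converting a dmm layout into world-space size at a given viewing distance is a single multiplication. A minimal sketch with a hypothetical helper:

    // 1 dmm = 1 mm viewed from 1 m, so world size scales linearly with distance.
    function dmmToMeters(sizeDmm: number, viewingDistanceMeters: number): number {
      return (sizeDmm / 1000) * viewingDistanceMeters;
    }

    // The same 400 × 480 dmm layout placed at three intended viewing distances:
    for (const distance of [1, 2, 3]) {
      const w = dmmToMeters(400, distance); // 0.40 m, 0.80 m, 1.20 m
      const h = dmmToMeters(480, distance); // 0.48 m, 0.96 m, 1.44 m
      console.log(`${distance} m away: ${w.toFixed(2)} × ${h.toFixed(2)} m`);
    }
    // Each panel subtends the same angle from its intended vantage point.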

Ergonomics

When first designing for VR, it’s exciting to think about creating futuristic interfaces like those we’ve seen in Hollywood blockbusters such as Iron Man or Minority Report, but the reality is that those UIs would be exhausting if used for more than a few minutes. Keeping to comfortable range-of-motion zones for the head and neck helps avoid this.

We’ve all been affected by some form of “text neck” at some point (the soreness felt from looking down at our smartphones for extended periods). Depending on how far you lean over, poor posture can put up to 60 pounds of pressure on your spine, which can lead to permanent nerve damage in the spine and neck.

AVOIDING SIMULATOR SICKNESS

Virtual reality introduces a new set of physiological considerations for design. Like the flight simulators used by pilots in training, virtual reality can present mismatches between physical and visual motion cues. This mismatch can produce nausea known as “simulator sickness”: your eyes think you’re moving, but your body doesn’t feel it.

Understanding the physiological effects of virtual reality design, and following these guidelines, is critical to making your app a success and to ensuring that users avoid simulator sickness.

BRIGHTNESS CHANGES

Be mindful of sudden changes in brightness. Given the proximity of the screen to the user’s eyes, transitioning the user from a dark scene to a bright scene may cause discomfort as they acclimate to the new level of brightness. It is similar to stepping out of a dark room into the sun.
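
One way to soften such a transition is to ease the scene’s brightness over a second or two instead of switching it instantly. A minimal sketch, assuming a scene object that exposes a normalized exposure value (the property name is an assumption):

    // Gradually eases exposure towards the target to avoid a jarring brightness jump.
    function fadeExposure(scene: { exposure: number }, target: number, durationMs = 1500): void {
      const start = scene.exposure;
      const t0 = performance.now();
      const step = (now: number): void => {
        const t = Math.min((now - t0) / durationMs, 1);
        const eased = t * t * (3 - 2 * t); // smoothstep: gentle start and finish
        scene.exposure = start + (target - start) * eased;
        if (t < 1) requestAnimationFrame(step);
      };
      requestAnimationFrame(step);
    }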

BUTTON PLACEMENT

Avoid placing fuse buttons in close proximity to each other. Fuse buttons work best if they are large targets that are sufficiently far apart from each other.

If multiple smaller fuse buttons are placed near each other, the user could accidentally click on the wrong button. Smaller buttons that are close to each other should require a direct click to activate.
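
Since fuse (gaze-and-dwell) buttons are really angular targets, one rough way to apply this rule is to compare their angular size and separation from the user’s point of view. A sketch with illustrative thresholds; the 5-degree minimum gap is an assumption, not a published guideline:

    interface FuseButton {
      x: number; y: number; z: number; // world position in metres, user at the origin
      radius: number;                  // button radius in metres
    }

    // Approximate angular diameter of a button as seen by the user.
    function angularSizeDeg(b: FuseButton): number {
      return 2 * Math.atan(b.radius / Math.hypot(b.x, b.y, b.z)) * (180 / Math.PI);
    }

    // Angle between the centres of two buttons.
    function angularSeparationDeg(a: FuseButton, b: FuseButton): number {
      const dot = a.x * b.x + a.y * b.y + a.z * b.z;
      const cos = dot / (Math.hypot(a.x, a.y, a.z) * Math.hypot(b.x, b.y, b.z));
      return Math.acos(Math.min(1, Math.max(-1, cos))) * (180 / Math.PI);
    }

    // True when the visible gap between two buttons is too small for reliable gazing.
    function tooCloseForFuse(a: FuseButton, b: FuseButton, minGapDeg = 5): boolean {
      const gap = angularSeparationDeg(a, b) - (angularSizeDeg(a) + angularSizeDeg(b)) / 2;
      return gap < minGapDeg;
    }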

“Instead of trying to adapt ourselves to fit the limited interactions supported by our existing technologies, our interactions with VR platforms will need to be as natural and intuitive as possible.”

Tools for Designing VR Experience

Sketch

Sketch to VR is a Sketch plugin that uses another tool called A-Frame. The plugin automatically creates an A-Frame website, so all we need to worry about is creating our design in Sketch.
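
A typical A-Frame 360 preview boils down to a sky sphere with the design wrapped around it. Here is a minimal sketch in plain browser code, assuming the A-Frame script is already loaded on the page and that the exported design lives at ./360-view.png:

    // Builds an A-Frame scene showing an equirectangular design as a 360° preview.
    function mountPreview(imageUrl = "./360-view.png"): void {
      const scene = document.createElement("a-scene");
      const sky = document.createElement("a-sky");
      sky.setAttribute("src", imageUrl); // a 2:1 equirectangular image wraps the sphere
      scene.appendChild(sky);
      document.body.appendChild(scene);
    }

    mountPreview();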

Google Blocks

Use simple 3D geometry to simulate a sense of scale and depth. If you have a Rift or Vive, you can use Google Blocks to prototype your ideas. This isn’t something you’d put in front of a user, but you can begin to see how your 3D environment might look and feel.

Photoshop

Photoshop lets us use core image-editing tools, like the pen and brush tools, to draw elements that appear to be in 3D space.

Designing VR apps on Sketch

SET UP “360 VIEW”

First things first: let’s create a canvas that will represent the 360-degree view. Open a new document in Sketch and create an artboard of 3600 × 1800 pixels. Import your equirectangular background image and place it in the middle of the canvas. If you’re using your own equirectangular background, make sure its proportions are 2:1, and resize it to 3600 × 1800 pixels.

SET UP ARTBOARD

As mentioned above, the “UI View” is a cropped version of the “360 View” and focuses on the VR interface only. Create a new artboard next to the previous one: 1200 × 600 pixels. Then, copy the background that we just added to our “360 View,” and place it in the middle of our new artboard. Don’t resize it! We want to keep a cropped version of the background here.

DESIGN THE INTERFACE

We’re going to design our interface on the “UI View” artboard. For the sake of this exercise, we’ll keep things simple: add a tile, then duplicate it to create a row of three tiles.

MERGE ARTBOARDS AND EXPORT

Drag the “UI View” artboard to the middle of the “360 View” artboard. Export the “360 View” artboard as a PNG; the “UI View” will be on top of it.

TEST IT IN VR

Open the GoPro VR Player and drag the “360 View” PNG that you just exported into the window. Drag the image with your mouse to preview your 360-degree environment.

PROTOTYPING

Here, we’ll organize screens into flows, drawing links between screens and describing the interactions for each screen. We call this the app’s blueprint, and it will be used as the main reference for developers working on the project.

Conclusion

As a designer, I think the time is ripe to start enhancing our skills to shape the future of the design industry and the important part it plays in improving the day-to-day lives of application users. The best part is that the concepts and ideation methods used in design thinking and UX methodology remain the same, with a focus on a few new principles of interaction, as mentioned above.

Posted in Knowledge sharing | Leave a comment

UX Will Happen Anyway: Tactics vs. Strategy

Some time ago, as I solidified my role as a UX designer and focused most of my efforts on the user-centered design deliverables that role entailed, I still found myself with a healthy helping of UI design tasks. So I continued producing visual and interaction design on a small-scale, reactive basis, according to the needs of the development team. It turns out this wasn’t a bad thing: the crafting of the user experience doesn’t end with flow charts and wireframes, and unplanned feature requests inevitably happen. I began to refer to these activities as “tactical UX design”: not a thing apart from, but part of, the overall UXD workflow.

Tactical UXD

What It Is

Tactical UXD is the day-to-day UI design that is necessary before and during product development. It presents itself in the form of small feature requests and unplanned enhancements, such as bug fixes or critical missed requirements. “Tactical UXD” is largely analogous to “UI design,” except that the “UX” component of the term implies a greater role in a larger strategic picture; it’s a discipline that doesn’t get enough love in terms of its importance relative to other user experience design components. (The fact that UI design is often mistaken for UX design doesn’t close that gap; rather, it’s a misunderstanding with tomes of analysis and frustration already dedicated to it.)

Because my job responsibilities as a UX designer at a small organization still included UI design, when I wore my “UI designer” hat, the user-centric gears still turned. When I collaborated with developers on active tickets, I was mindful to do so in a way that ensured that those new feature designs took into account the context of the user we were building them for. Even for these small features, I still wanted to understand “for whom am I designing?” and “what problem does it solve?”

Strictly speaking, someone in a UI design role may sometimes have the luxury (or misfortune, depending on the individual) of focusing only on the necessary design and interaction that is requested of her, through requirement documentation or development tickets. However, even with these smaller enhancements, context is a big benefit to designer and end-user alike, and should not be ignored.

If more upstream user research and data isn’t readily available, it’s still helpful to question the requirements in order to understand their value to the end-user. Hopefully, these requests can be understood in the broader context of a product for which more user information is available. If a full UXD process is in place, the UI designer can refer to the personas for that product and apply them to one-off requests.

Since a UI designer plays an important role in the tactical execution of a UXD strategy and should strive to integrate into it as much as possible, where should she draw the line with involvement? On the ubiquitous “UX Iceberg” visualization (which, in turn, is just a friendlier visual metaphor for the Jesse James Garrett diagram), UI design is often equated with the “Surface” component. However, in my experience, I have met few UI designers whose responsibilities, in practice, were limited to visual design (VizD). There’s usually more than a little interaction design (IxD), sometimes information architecture (IA), and, if she knows what’s good for her (and she can get away with it), a little user research — if she is not also the front-end developer, that is! The line is already blurred, so let’s call it something else: tactical UXD.

What It Isn’t

While tactical UX is not simply visual design done in a vacuum, it is not a substitute for a full user experience design process either, even if for many years (and still in many companies) “UX” has been equated with “UI.” In these environments, deliberate user experience design is neglected until so late in the process that it often falls to the UI designer — or the front-end developer, as often no distinction is even made between those two roles. It’s no surprise that development teams and recruiters began to combine these responsibilities into one and the same position, a contributing factor in why so many “converted” UX designers actually cut their teeth in programming. Many picked up on the fact that being handed a directive to design from already-completed requirements, with no customer context, on top of the already immense responsibility of coding, resulted in a lot of missed marks, shelfware, and just generally bad UIs. (I’ve dedicated a separate article to the topic.)

Strategic UXD

UX Will Happen Anyway

Strategic UXD is the user experience design process when executed in a deliberate and user-centric manner, from ideation to launch. Just as tactical UXD is somewhat analogous to UI design, “strategic user experience design” seems synonymous with “user experience design” itself — nearly a circular definition in fact — until one pauses to consider that user experience “designs” occur naturally as a by-product of any product design process, regardless of whether anyone is guiding that process. Users are sure to have some sort of experience, and chances are very high that a thoughtless or ill-fashioned experience will be a bad one. Thus, there is “user experience design,” and there is “Strategic User Experience Design.”

Tactics as Strategic Component

Strategic UXD is equivalent to deliberate UXD, and it also incorporates tactical UXD. This is because a successful strategy relies upon a successful execution. Think of it like a river flowing down a mountain: the whole body of water is named “UXD Strategy.” (I know, weird name for a river, but bear with me.) Some tributaries lie upstream, while others feed in downstream, though all of them are contributing to this river. Similarly, there may be a division of contributing roles upstream (product vision, user research) vs. downstream (interaction and visual design), but each player is conscientiously contributing to the overall UX strategy.

Product Management: Strategic UXD’s Older Supportive Brother

Another consideration is the role that product management plays in the success of a UXD strategy. This topic is gaining considerable momentum as professionals in both domains recognize the interconnectedness. Many of the upstream UXD activities are either dependent on product management processes, or are one-and-the-same. For example, product managers are well positioned to develop personas. Then of course, there is the simple fact that a positive user experience ultimately depends on whether the business goals are aligned with user needs, something only product managers and business owners can ensure at a product’s early stages (though the UX designer can and should offer input and research data).

Potential Points of Failure

1. Obviously, if there IS no UXD strategy in place, the user experience will evolve in a happenstance fashion, either by non-UX individuals upstream, or an ill-equipped UI designer/front-end developer at the end of the process.

2. There are designated individuals responsible for each phase of the UXD process, but for whatever reason they are not communicating (different departments, bureaucracy, etc.). For example, the personas and scenarios developed by a product manager never flow downstream into the UI designer’s hands.

Conclusion: Strategic UXD = Good UXD

Strategic UXD is simply conscientious, deliberate UX design, which includes curious, competent UI designers, their UX counterparts, and anyone else who is contributing to the Strategic UX stream. Failure to recognize this results in unpredictable experience design results, usually at the cost of the end-user.

Source : uxplanet.org

Posted in Knowledge sharing | Leave a comment

Design Can’t Live Alone

What is the difference between a designer and an artist? What is the primary goal of design? How can you recognize outstanding design?

I keep turning these questions over in my mind. I am passionate about design in all its forms: graphic, industrial, interior, and product design all excite my imagination. Art and design are close terms, so close that people tend to mix them up. At this point, I would like to add some clarity. I am broadly sympathetic to the words of Matias Duarte, Vice President of Design at Google, on the primary function of design:

Design is all about finding solutions within constraints. If there were no constraints, it’s not design — it’s art.

My job requires tight communication with Dashdevs’ design team. I get requirements from clients and pass them on to designers, and I have fair expectations of their results. I want to see an application design with a UI/UX solution, not an oeuvre or an over-creative masterpiece that can’t be implemented in a reasonable time. My personal belief is that design must be comprehensive, functional, and transparent for the user. In my ideal world, I can take any screen from the application and show it to a passer-by. The stranger should be able to name the goal of the screen and recognize all of its apparent functionality. For me, that is a seal of approval that the design is great.

Application design constraints

Let’s get back to the words of Matias Duarte. Design must have constraints. For the last few years, I have been working on the development of web and mobile applications, and their constraints are really similar:

  • The persona — we create the application for a particular type of user. A typical persona has a pain point that the application should solve. Don’t forget about the gains and additional value either. Design for people and they will love it; the designer must feel empathy for the user.
  • Programming language possibilities — not every creative idea can be implemented appropriately. Let me make this clear: we can build almost any visual effect in a mobile application, but it may kill the device battery in minutes or slow down the performance of the application. Do you really need that?
  • Technical limitations — the design may be excellent, but almost every application has its guts: it connects to a backend via an API (application programming interface), and that backend makes a real contribution to what the design can do. If the application is built entirely from scratch, it is much easier to negotiate technical restrictions with the design in mind. In some cases, however, you have to work with a badly architected API, and your perfect design may be spoiled by it.
  • Time and budget — almost anything can be done somehow, but it may take too long or cost too much. These two constraints always go together.

I have noticed that neglecting these constraints can ruin even the most promising project. First of all, your target audience can reject your application because they don’t get the idea. The second issue arises when the client (or decision maker) falls in love with a magnificent prototype. A prototype looks dynamic and is built on platforms like InVision, Marvel, or Principle, so the client expects to see the same implementation in the mobile application. At this point, the technical side enters with its red lines of limitations. Consequently, all of this affects the time/budget ratio.

Design processes

I believe that everyone can nurture design skills; you don’t have to be a natural-born designer. For some people it comes easily, for others it takes a lot of time and effort. That is why an appropriate design workflow is so important: it not only helps to get better business results, it also helps a designer improve as a professional.

I have come across and taken part in different design workflows, and every approach has advantages and disadvantages. Thanks to the variety of our projects, we have had plenty of chances to experiment. So, the best design workflow for us looks like this:

  1. Research. It consists of two parts. The first is market research: define the main players, do a benchmark analysis, and look for substitute products. The second part is dedicated to users. Some products are specific, and the designer needs to dig into the psychology and behavior of the typical user. At this stage, we create a persona.
  2. Design creation. Here we try different approaches — atomic design, co-creation, design thinking, you name it. The approach depends on the team, the project, and the timeframe. I’ve written about the process improvements that save us time in a separate article — please check it out.
  3. Design cross-review. This step is vital to the design-creation process. The goal is to review the design with a fresh eye and improve it where possible. An unbiased designer may see more options and introduce more new ideas than the designer who is fully wrapped up in the work. This step also helps to share knowledge and experience within the team.
  4. Technical review. This step can protect you from a number of mistakes. We provide the developers with the latest design and business requirements, and they check it for API compatibility and the feasibility of the technology. Usually we get a lot of good suggestions about the flow, animations, and UX, since most developers have an analytical mindset. The technical review not only improves design skills but also educates designers technically.
  5. Demo for the client. A designer and a client engager usually present the demo to the client. As good practice, we prepare a dynamic prototype in Marvel or InVision. During the demo we gather feedback and discuss possible options for improvement. If the design is not good enough for the client, we go back to the second step.
  6. Estimate. Only once the client approves the design can we provide a realistic estimate for development.

In our experience, this is the workflow that works best. However, the design workflow is not the only thing that matters. We have started a series of technical lectures for designers and managers, because they need to understand how the application works from the inside. It is impossible to create a great product if you don’t understand its internal structure.

I believe in educated intuition, which you gain through profound experience. My great inspiration is Raymond Loewy. He worked on totally different products: copy machines, refrigerators, cars, and locomotives. No matter what the product was, he always thought about the user first. He changed many things that now seem ordinary to us. I think he was the best UI/UX designer of the century.

“The main goal is not to complicate the already difficult life of the consumer.”

Raymond Loewy

At Dashdevs, we believe that the design of mobile and web applications is a combination of aesthetics and functionality, and every workflow and process in our company must improve these two components. We keep experimenting with different approaches; the one described here is the optimal choice for us at the moment, but we keep trying to take it up a notch.

How do you improve your design processes?

P.S. If you don’t see mistakes in something you designed a year ago, I have a piece of sad news for you: you have made no progress since.

Written by Irina Bulygina

Source: uxplanet.org

Posted in Knowledge sharing | Leave a comment