Section 5: Implement

Unit 3, Section 5, Overview

Next Up…
Before moving on to the rest of the content, head over to the Unit 3, Section 5 assessment page to answer a few quiz questions.


Video Transcript
Hi. This is Dr. Anthony Chow and welcome to Unit 3, Section 5: Implementation and Evaluation.

In Section 4 we built the preliminary design using WordPress. The intent was for you to emulate what we did with your own preliminary design, so you get hands-on practice designing and developing your own website. In this section we’re going to go over the I, or “implement,” phase and start with some of the E, or “evaluate,” phase of the A-ADDIE model. Let’s begin.

Remember that evaluation occurs in some fashion at every stage of A-ADDIE. During the development stage (the second D in A-ADDIE) we went through a few rapid design, test, and refinement cycles. We are ready for implementation, but there are a few more things we must do before going live or rolling out our site.

First, make sure you perform basic quality assurance to ensure the site is working like it’s supposed to, especially in different browsers.
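
If you want to automate part of that browser check, the sketch below uses Selenium, a third-party browser-automation library that is not covered in this course, to confirm the home page loads in a couple of browsers. The URL and expected title are placeholders for your own site, and the browsers must be installed locally.

```python
# A minimal cross-browser smoke test sketch using Selenium (assumes Selenium 4+
# and locally installed browsers). It only checks that the home page loads and
# that the title contains an expected phrase.
from selenium import webdriver

SITE_URL = "https://example.org"        # placeholder: your site's address
EXPECTED_TITLE_FRAGMENT = "Example"     # placeholder: part of your page title

for make_driver in (webdriver.Chrome, webdriver.Firefox):
    driver = make_driver()
    try:
        driver.get(SITE_URL)
        assert EXPECTED_TITLE_FRAGMENT in driver.title, driver.title
        print(f"{driver.name}: home page loaded with expected title")
    finally:
        driver.quit()
```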

Second, it is highly recommended that you do a quiet rollout, because things often do not work the way you expect them to in the live environment. A quiet rollout lets a few users test your site and provide feedback before you announce and publicize it widely. It’s always important to provide some kind of feedback mechanism, such as a quick survey, so users can let you know if there are any issues that need to be resolved.

Okay. After a brief period of gathering preliminary feedback, which can be a few weeks or even a few months, you’re ready to go ahead and roll the site out live.

Now three to six months later, it’s time to do some kind of formal usability testing. How do you really know how it’s doing? Well, let’s find out.

The E in A-ADDIE stands for “evaluation.” Pervasive usability means that usability testing and evaluation occur at each step in the design cycle, as we have demonstrated throughout our sections.

Evaluation in usability can be both non-empirical (without users) and empirical (with users). It’s recommended that you always do both, but non-empirical methods are often quicker and more cost effective because they do not require users.

Let’s take a closer look. The five non-empirical tests or analyses recommended are site analytics, demographics, performance metrics, cognitive walk-throughs, and heuristic testing. Site analytics gives us a real-time and longitudinal view of how your site is being used. It’s quite fascinating to watch who’s using what part of your site. For example, on a project I was working on last year, we had an online survey that was not getting a lot of responses, so an email was sent out over a listserv in the morning.

By 1:00 p.m. there was one response; by 3:00 p.m., several. By 8:00 p.m. the responses were pouring in by the hundreds, they peaked around 11:00 p.m., and by 3:00 a.m. we had 2,000 responses. This tells us that a good time to send emails, if you want them answered, is when people get home from work.

For websites, site analytics gives you a demographic view of who’s using your site, what types of technology they’re using, and what types of information they’re consuming. Most website software comes with analytics, and Google Analytics is extremely robust in the data it collects and gives to you.

Demographics tell us the “who” and help us infer the “what,” based on demographic trends. Research is pretty clear that consumer trends differ by gender and certainly by age group. So it’s important to know who your predominant users are, what they would want, and whether this is the target group your site is after. Even if you’re not sure, you should set goals for a target audience and work from there, based on actual usage.

Performance metrics take the site analytics data and use it to inform your site goals. Is that where you wanted the traffic to go? What does a successful website look like? How many hits, how many transactions, and so on. You want to have a spreadsheet of your goals, what your measures of success look like, and then what analytics and other data you need to determine where you are in achieving those goals.
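
As a minimal sketch of what that goals spreadsheet can look like in practice, the snippet below compares target numbers against actuals. The metric names and all of the figures are made-up placeholders; real actuals would come from your analytics exports.

```python
# A minimal sketch of tracking site goals against actual analytics numbers.
# The goals and the monthly figures are invented placeholders; in practice
# the actuals would come from your analytics tool.
goals = {
    "monthly_visits": 5000,
    "tutorial_page_views": 1200,
    "contact_form_submissions": 50,
}

actuals = {
    "monthly_visits": 4310,
    "tutorial_page_views": 1480,
    "contact_form_submissions": 22,
}

for metric, target in goals.items():
    actual = actuals.get(metric, 0)
    status = "met" if actual >= target else "not yet met"
    print(f"{metric}: {actual} of {target} ({actual / target:.0%}) -- {status}")
```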

A cognitive walk-through is really just a logical, intuitive test centered around your high-priority user tasks. Can users achieve them as easily as possible? You walk through the tasks yourself and have some others do the same. Remember, though, you and your associates will be biased, and this is just a low-level test that needs to be vetted with actual users.

Heuristic testing compares your site with some of the most accepted and used usability standards to make sure you are in compliance with them. We will discuss these in depth in the next module.

Last, but certainly not least, is compliance with the Americans with Disabilities Act (ADA). The W3C created web accessibility standards for ADA compliance on the web, called the Web Content Accessibility Guidelines (WCAG), and they are now on Version 2. Again, we’ll go in depth on these guidelines in the next module.
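
As one small illustration of an automated accessibility spot check, the sketch below flags images with no alt text in a page’s HTML. It covers only a single WCAG concern and is in no way a substitute for a full audit; the sample markup is invented for the example.

```python
# A minimal sketch of one WCAG-style spot check: flag <img> tags with no alt text.
# This is only an illustration, not a full accessibility audit.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # (line, column) positions of offending <img> tags

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if not alt or not alt.strip():
                self.missing.append(self.getpos())

# Hypothetical usage with a small inline sample; swap in your own saved page.
sample = '<p><img src="logo.png" alt="Library logo"><img src="banner.png"></p>'
checker = MissingAltChecker()
checker.feed(sample)
print("Images missing alt text at (line, column):", checker.missing)
```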

I also must note that all of these, for the most part, should be done prior to actual implementation as well, but for the organizational purposes of our MOOC, we’ll be discussing them in depth here.

So how does one collect this information? Well, evaluation in usability draws on both empirical (with users) and non-empirical (without users) methods. “With users” includes such data collection methods as interviews, which are one-on-one discussions with individuals; focus groups, which are small-group discussions; and surveys, which involve paper-based or online questions that people can fill out anonymously. Interviews give you in-depth, one-on-one information, where you can engage a user in a dialogue and ask questions for clarification and elaboration. An interview usually takes 30 minutes to an hour, however, so you need to select your informants carefully, as you’re investing a lot of time and money in meeting and talking with them.

Focus groups offer a higher return on investment because you can talk with more than one person at the same time. A focus group also usually lasts an hour or so and gives you a broader set of perspectives, though not as in-depth or focused as an interview.

Natural observation involves watching people use your system without necessarily interacting with them; you just watch how they use it. This also has a lot of merit because you can see the natural interaction between the user and your website. What do they look at? What do they click on? How long do they stay on certain pages?

Finally, usability testing, which we’ll do a lot of soon, is a systematic investigation into whether people can actually use your system in an effective, efficient, and satisfying fashion. It lets you measure the usability of your site in an operational way, both quantitatively and qualitatively, and allows you to compare performance data longitudinally over time. Most importantly, you get to see how users actually perform on your site, as opposed to just collecting opinions and feedback. Can they do what they want to do, or not?
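
As a minimal sketch of how those three dimensions are often summarized, the snippet below computes completion rate (effectiveness), time on task (efficiency), and average satisfaction from invented test data; the numbers are placeholders, not course data.

```python
# A minimal sketch of summarizing usability-test results. One record per
# participant attempt at a single task; all values are invented.
results = [
    {"completed": True,  "seconds": 95,  "satisfaction": 4},  # satisfaction on a 1-5 scale
    {"completed": True,  "seconds": 140, "satisfaction": 3},
    {"completed": False, "seconds": 300, "satisfaction": 2},
    {"completed": True,  "seconds": 80,  "satisfaction": 5},
    {"completed": True,  "seconds": 110, "satisfaction": 4},
]

success_rate = sum(r["completed"] for r in results) / len(results)      # effectiveness
avg_seconds = sum(r["seconds"] for r in results) / len(results)         # efficiency
avg_satisfaction = sum(r["satisfaction"] for r in results) / len(results)

print(f"Effectiveness: {success_rate:.0%} task completion")
print(f"Efficiency:    {avg_seconds:.0f} seconds on task (average)")
print(f"Satisfaction:  {avg_satisfaction:.1f} out of 5 (average rating)")
```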

There’s no better test of the system than having representative users do representative tasks. Nielsen believes that only five testers will discover, on average, 85% of your usability problems. This is excellent ROI.
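
The 85% figure is usually traced to Nielsen and Landauer’s problem-discovery model. Here is a small sketch of that curve, assuming the commonly cited average detection rate of about 31% per tester; that rate is an assumption, and your own projects may differ.

```python
# Nielsen/Landauer problem-discovery curve: proportion found = 1 - (1 - L)**n,
# where L is the chance that a single tester uncovers any given problem.
# L = 0.31 is the commonly cited average, used here as an assumption.
L = 0.31
for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} testers -> about {found:.0%} of problems found")
# With L = 0.31, five testers land at roughly 84%, which is where the
# often-quoted figure of about 85% comes from.
```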

Remember to complete the review session, discuss the section in the discussion area, look at any readings, and do the hands-on activity. Take care, and I hope to see you again soon. Cheers.