WAI (the Web Accessibility Initiative) is a positive part of a larger trend to standardize certain aspects of web content and interface design. WAI is international in scope, spanning both government and the private sector (see W3: http://www.w3.org/WAI/, IBM: http://www-3.ibm.com/able/ and Microsoft: http://www.microsoft.com/enable/). Its impact on interface design, and consequently on the design and development of online educational content, cannot be overestimated. WAI is a set of recommendations intended to standardize certain key aspects of interface design so that physically and cognitively challenged users can be on an equal footing with fully-abled users when they access information and interact with others across the Internet. WAI is not a law, nor is it intended to compel designers to be homogeneous in their thinking. It is a way of encouraging us to think more carefully and systematically about how we present our online content. The standard has been very carefully considered and collaboratively shaped.
WAI has been steadily gaining momentum over the last five years, and it is only just now taking root in the educational sector. In other words, the impact of this trend toward standardization in general, and accessibility in particular, has not yet been felt by the educational sector, but that day is fast approaching and there will be many issues with which to contend. This paper invites discussion on the kinds of preparation the educational sector will need in order to meet these new standards, and perhaps even more importantly, on the rational principles that might help us to decide how and where to spend resources. Of particular interest here will be the legacy content that has already been designed and mounted on the Internet and is currently non-compliant.
The WAI standard has three levels of conformance, each more stringent than the preceding one. Priority one is what absolutely "must" be done, priority two is what "should" be done, and priority three is what "may" be done. There is plenty of flexibility as well: designs that cannot be made to conform need only be offered in an alternative way that does conform.
There are fourteen specific guidelines (http://www.w3.org/TR/WCAG10/) which are like first principles. These include things like the use of natural language and a preference for not relying on color alone as a way to communicate information. In order to determine how to meet these guidelines, the W3 has established checkpoints. Each guideline therefore has one or more checkpoints. These checkpoints (http://www.w3.org/TR/WCAG10/full-checklist.html) are like annotations to the guidelines and they explain more particularly how the guidelines can be achieved. Checkpoints in turn can be linked to specific techniques which usually contain the precise code that is needed to achieve the checklist item. The checklist items have even been extended to cover specific ways to evaluate and repair legacy interfaces that do not yet conform to standards (http://www.w3.org/TR/AERT) and there is a wonderful online tool known as Bobby (http://www.cast.org/bobby) that can also be deployed to this end.
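Checkpoints of this kind lend themselves to automated scanning, which is exactly what evaluation tools like Bobby do. As a purely illustrative sketch (not a reconstruction of Bobby's actual implementation), the following Python fragment flags images that lack the text equivalents required by checkpoint 1.1 ("provide a text equivalent for every non-text element"); the function and class names are my own inventions:

```python
# Minimal sketch of an automated accessibility check in the spirit of
# WCAG 1.0 checkpoint 1.1: every <img> needs a non-empty alt attribute.
# Real evaluation/repair tools cover many more checkpoints than this.
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            # Missing or empty alt text fails the checkpoint.
            if not attr_dict.get("alt"):
                self.violations.append(attr_dict.get("src", "(no src)"))


def check_alt_text(html):
    """Return the src of each image with no usable text equivalent."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.violations


# The first image passes the checkpoint; the second fails it.
page = '<img src="logo.gif" alt="University logo"><img src="chart.gif">'
print(check_alt_text(page))  # → ['chart.gif']
```

A scan like this catches only the mechanical part of the checkpoint; whether the alt text is actually a meaningful equivalent still requires human judgment, which is why the W3's repair guidelines pair automated detection with manual review.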
The 508 standard is an American alternative to the WAI (http://www.webaim.org/standards/508/checklist), although it may not have been intended as such. 508 applies specifically to American federal agency sites (http://www.cio.noaa.gov/itmanagement/508law.htm) and to all federally funded programs and services. While it is an emerging standard, it nevertheless differs from the WAI because it is also a federal law. The emergence of this new standard, while welcome in one sense, also establishes a potentially confusing alternative, one that has necessitated the W3's production of a document mapping the relationship between the two evolving standards and showing how and where they are similar and where they differ (http://www.w3.org/WAI/GL/508/508-UAAG).
Very few educationally-related websites (such as institutional homepages) meet even the minimum standards of priority one, and much current online educational course content fails even more miserably. Legacy content that was designed in the past with any of the packaged proprietary platforms (such as WebCT, Cold Fusion, Dreamweaver, Front Page, Flash, Domino, Quick Place, Learning Space, etc.) does not even meet priority one. The task of re-working the design so that the interface and its code are compliant is tedious, time-consuming, and often means the loss of much formatting and even some content. In most cases it is too tedious to be done manually, so designers must wait for the vendor to come up with newer versions of these platforms before they can convert the legacy content into compliance. The process of conversion can be expensive and in some cases is not worth it. Even more importantly, content developed in one proprietary environment is not readily transferable to another. In other words, let's suppose you built online content in Dreamweaver, then decided to make your next version of the course WAI compliant, but you wanted to convert your legacy material into WebCT because that compliance feature was cheaper than the Dreamweaver one. In this scenario, you could not extract your proprietary content easily if it was designed using the native code built into the platform. HTML files, .doc and .pdf files and media will extract fine, but things like tests, discussion forum archives and grades are so difficult to extract that it's often better simply to re-invent that content and then make it compliant.
The costs and inconvenience of these vendor-forced transformations all work against the successful implementation of new standards, and they will prohibit institutions from switching platforms to one which is cheaper and/or one which also complies with standards. Put another way, vendor-created dependency on a proprietary platform will inevitably interfere with institutional and designer freedom to migrate to other platforms that are more compliant or less expensive to adopt. Thus even new, accessibility-compliant proprietary platforms lock designers into a platform-specific dependency that is always in the interest of the vendor, but not necessarily in the interest of the institution, the designers, the instructors, or the students. And while it may be "natural" for vendors to want to make their particular design tools absolutely essential for design, the owners of the content are also forced into a dependency on the accessibility conformance of that proprietary platform, and into a dependency on that company's commitment to -- and planning of -- future compliance standards.
Of course, as educators who develop online content we are morally, ethically, and pedagogically bound to take as many cognitive and physical disabilities into account as possible when designing and mounting content online, or any course for that matter. HCI (Human Computer Interaction) specialists also tell us that designing for disabilities has many other measurable benefits as well. For instance, an interface that is user-friendly for disabled people is almost always more user-friendly for fully-abled users too. This is a win for everyone! There are other advantages as well. Inclusive design welcomes cognitive difference and thereby helps create an online culture of acceptance and risk-taking, both of which are essential for learning. This kind of flexibility allows all kinds of nuanced differences in learning styles to thrive. This can only be a good thing.
So here's the nub of the struggle in a nutshell. Will the moral, ethical, and pedagogical reasons in favour of accessibility standards inevitably be outweighed by succumbing to the platform dependence that vendors want, and by the prohibitive cost of re-vamping legacy content? In the next few pages I shall unpack some of these issues in more detail.
I taught my first university class in 1978 and I first logged on to the Internet in 1986, about four years before the WWW made its public appearance. Although it took another few years before I was able to integrate, even in a crude way, the power of these two distinct experiences, the interval between 1986 and 1993 was fraught with the stress and anxiety that accompany dramatic growth. Among the negative factors, I found my self-confidence rapidly eroding as my ability to learn and comprehend seemed to stall, and I noticed that I was all too frequently drawn away from issues in my own field of expertise (English Literature) and pulled into technical and technological problems and paradoxes. On the other hand, I was able to communicate more frequently and conveniently with students and colleagues, and the more work I did online, the more ways I found to enrich the quality of the learning experience, as long as there remained a continuing and strong face-to-face component in the instruction. It was truly a sublime experience: an unholy combination of the horrifying and the exhilarating.
In the early stages of adoption and use, I naively relied on the Internet for only two purposes: social communication with colleagues, and locating useful information. It never actually occurred to me to use the Internet as a teaching tool. It became a way to communicate with colleagues in my own university at first, and in other places in the world later on, so in this sense the Internet initially enriched my life in a private way. Most of my discoveries were accidental and serendipitous, and they occurred in the most haphazard and desultory of ways. I also contend that online educational content itself has developed in much the same manner, leaving a patchwork of different files, directories and resources, each with a different set of coding and formatting conventions, and each with a different pedagogical intention. It is the haphazard nature of this evolution that is at once exhilarating and troublesome, because very little conscious attention was paid to standards, to usability, and to accessibility. At a time when almost all new buildings in North American architecture recognized the need for wheelchair and washroom accessibility, my own online educational content seriously lagged behind in sensitivity to the needs of challenged users, and the presentation of my own content was the worse for it.
Over the course of my own development I also had to overcome many formidable obstacles, such as having to learn many different technical protocols, technologies and the jargon of the Net, but I was also driven forward by my enthusiasm to initiate "free" interaction with other human beings and to access some of the wealth of free information that seemed to be available out there. I learned the technical material in a "just in time" manner, just as I needed it. Eventually this just-in-time manner gave way to a full-time obsession, and I discovered I was spending more hours each day studying the Internet and its web-based languages than I was in my own area of expertise. The price for this indulgence was heavy: my university, like most others in North America, did not recognize the value of that kind of work during the 1990's, and even now still does not recognize it unless the post-secondary degrees of the researcher are actually in that particular technological field. And so it is that many university faculty who once developed online content in the 1990's have now withdrawn from the frenzy and, in their fatigue and disillusionment, are content to watch rather than do. In many instances, the expertise learned by this older generation of educators has also disappeared off the radar screen. The mistakes made and lessons learned by earlier designers are undoubtedly very valuable for new designs, but there has been an absence of genuine cooperation in the educational sector, at every level, with each faculty member preferring instead to create their own (ostensibly) distinctive online content from scratch or from a vendor's proprietary template, rather than beginning by cooperating with others. As educators we have been very ineffective in minimizing the cost of designing online content and very good at creating redundancies that duplicate sub-standard code and sub-standard features in the interfaces.
The current world of online education that I know is also surprisingly naive about student expertise. In spite of frequent claims that today's student is technologically hip ("my eleven year old has more technical aptitude and expertise than my wife," etc.), I note with dismay that many students do not know the difference between a file and a directory, nor the difference between a binary and an ASCII file; they do not know what a tree structure is, and even when the software platform has been idiot-proofed so that uploading a file is a matter of browsing and clicking, they do not know how to navigate their computer in order to find a file that they have saved. I am still surprised when I see that my own university offers dozens and dozens of online courses and no one – student or faculty member – ever has to demonstrate any technical expertise.
One of the (many) undesirable consequences of this increased division of labour and job deskilling is that very few online courses meet any of the technical or interface standards that have recently been developed by the World Wide Web Consortium. The educational sector may be ahead of the trend in theory and in sensitivity, but we lag behind in policy, budget, and ability to re-program current online resources in ways that conform to these new standards.
Ian Webb has noted that the cost of accessibility-compliant design is not prohibitive if it is taken into account from the outset (http://www.techdis.ac.uk/resources/webb01.html), and HCI specialists will also tell you that the sooner a design project commits to one particular solution (or kind of solution), the more likely it is that the design will be unsuccessful. Thus one of our biggest challenges will be to find ways to include and re-design legacy content, much of which is perfectly good material and flawed only because of its skin. Webb, however, does not seem to be aware of the trap of vendor-specific solutions.
A standard is a set of rules or practices to which one adheres. A protocol is a procedure, a set way of sequencing steps and processes. A red light means stop and a green one means go (standards). In some countries cars must drive on the right side of the road, and in others, on the left side (protocols). Failure to conform to rules and protocols on the roads will almost certainly have fatal consequences. The language that we speak is also based on rules (vocabulary) and protocols (grammar). The Internet too is based on standards and protocols. TCP/IP (Transmission Control Protocol/Internet Protocol) is a set of rules and protocols for how information moves across the Internet and how unique addressing is assigned.
The Internet is a conundrum when it comes to rules and protocols: it is an environment where many technical standards exist by necessity, but it is also a place where there is great resistance to cognitive and design standards. This resistance occurs for several reasons. First, standardizing a browser's interpretive ability to render HTML code in a single way seems inimical to the vendor's need for product differentiation, an essential part of marketing in the private sector. Every browser vendor wants its browser to do at least something different from the others. Thus accessibility is blocked at the design and development level by vendor-specific (proprietary) code, and it is blocked at the client (consumption) end by the same problem with browsers. Please let me be clear about this. I am not blaming vendors for being self-interested. What I am saying is that this inevitable and unavoidable self-interest presents an apparently insurmountable obstacle to standardization of any kind, especially when vendor self-interest makes cross-vendor migration so difficult and convoluted. As consumers, we would not tolerate a different size and thickness of CD for every recording label, each requiring a physically different CD player to play it, so why would we tolerate the equivalent in our courseware?
The World Wide Web Consortium has historically been as interested in process as in product, and in the early days when "browser wars" were flaring up between Netscape, Mosaic and Microsoft, the W3 consistently urged the commercial sector to design their browsers so that they would all interpret the same HTML code the same way. The W3 has strived for open standards. The "nub" of the problem is that open standards are (I'm tempted to say always) in the interest of the consumer, whereas standards developed commercially (called proprietary standards) are always in the interest of the commercial designers, since they try very hard to differentiate their products from others on the market. This tension is a critical part of the problem.
We know that designing interfaces (and code) for accessibility is a good thing in practice and in principle, and therefore is desirable. By improving the quality of access for one, we improve it for all. Yet the effort involved in the design process occurs at a time in the Internet's evolution when the work of online development is highly specialized, and (ironically) when the culture also favours dumbed-down software for untrained programmers and users. All of this contributes to carelessness about, and ignorance of, standardization. Furthermore, we have also inherited legacy content that does not reflect accessibility issues, and even newly developed accessibility-compliant content is trapped in the prison of proprietary platforms that will not allow designers to easily migrate content from one platform to another, thus restricting freedom of choice in adopting other proprietary platforms that may suddenly become more fully compliant or cheaper. Finally, educational institutions are also trapped in fiscal constraints that provide seed money or one-time-only funding for online content instead of ongoing baseline budgets that allow for re-design and upgrading.
There are many challenges to be overcome. The lure of achieving very sophisticated (non-compliant) interfaces with little or no programming expertise is at once liberating (democratizing) and dangerous. Funding is limited. Human resources are limited. Constant re-training of technical support people is essential, but expensive and has been in steady decline since 9/11. Under what conditions are private sector partnerships a viable solution? Can different institutions in the educational sector actually cooperate to the extent that they share open code and particular designs that can be re-used, and if so, what barriers are in the way?
Here are a few more specific questions for discussion.