Accessibility, I’m Still Wrong!

A while ago, I talked about the legal requirements for web development in academia. I pointed to Section 508, reasoning that it was “just another section” of the Rehabilitation Act of 1973 and that Section 504 was DEFINITELY required.

I was wrong.

Upon further inspection, we are ONLY under Section 504, because Section 504 is a matter of civil rights while 508 is “just a guideline.” Section 504 (or Title III of the ADA, the Americans with Disabilities Act of 1990) is what people reference when filing suit. 508 is not directly enforceable outside of government agencies, and even there it can be trumped by 504.

So when doing things in academia, following 508 is useful in that it will get you 90% of the way there, but that remaining 10% can still get you in trouble under 504. The trick is that 504 isn’t clearly defined; it’s left very ambiguous. That’s probably a good thing from an accessibility standpoint, because it can remain technology agnostic.

So stick to WCAG 2.0 AA.

Using jQuery in Node with jsdom

After having watched a ton of Node.js tutorials (and TAing for a JS class), I decided a while ago “for my next script, I’m totally going to use Node.”

So I finally got the opportunity this last week to write a script. I was tasked with a menial job, and making a script to accomplish it brightened my day.

The first script dealt with an XML API feed, so I immediately found xml2js, a nice converter, and set about looping through some API URLs, collecting the data I needed and totaling it up. It was a mess, and looked like this:

var https = require("https");
var parseString = require('xml2js').parseString;

var totalEntries = 0;
var something = 0;

https.get("https://someplace/someapi", function(response){

    var body = '';
    response.on("data", function(chunk) {
        body += chunk;
    });

    response.on("end", function(){
        //console.log(body);
        parseString(body, function (err, result) {
            totalEntries += result.feed.entry.length;
            for(var i = 0; i < result.feed.entry.length; i++){
                something += parseInt(result.feed.entry[i]['something'][0]['somethingelse'][0].$.thingiwant, 10);
            }
            console.log("Total stuff: " + something);
        });
    });
});

This one made it easy to get what I needed, but it’s clearly not the right way to do it. Because the functions happen asynchronously, blah blah blah; that’s not what I’m writing about.

The next one was very similar, but I had to scrape a webpage, not just XML data. So I found a nice lib called jsdom, which created a DOM for me to use jQuery on.

var jsdom = require("jsdom");

// url and host are defined earlier in the script
jsdom.env(url, function(errors, window){
    var $ = require("jquery")(window);
    var total = 0;

    $(".some_class").each(function(key, value){
        // the value is buried in the onclick, so I'll have to use a regex regardless...
        var result = value.innerHTML.match(/newWindow\('([^']*)'/)[1]; // get first grouping
        jsdom.env(host + result, function(errors, window){
            var $ = require("jquery")(window);
            // use a regex to get the xxxxxxx because I'm lazy
            var result = $('head').html().match(/someRegex/g);
            if(result !== null){
                for(var i = 0; i < result.length; i++){
                    var thing = result[i].match(/"([^"]*)"/)[1]; // get first grouping
                    total += parseInt(thing, 10); // parse it so we add numbers instead of concatenating strings
                }
            }
        });
    });
});

It was super easy, and super powerful, to use something I’m already so familiar with to accomplish a task it’s well suited to. The scripts themselves took minutes to write, if you don’t count the time I spent finding where to get what I needed.

Shame on Me: the missing code review (or is it unit testing?)

So a bug was reported not long ago. Let’s say “sensitive data” was available where it shouldn’t have been. I had an API view, a view that was returning JSON only, or should have been. Early on in the project, I had added a comment to that view to make sure it was returning the appropriate data. So I had added an HTML comment to a view that was supposed to be returning JSON, and I had it outputting everything under the sun it could output.

<!--
"secure data" => "stuff I don't want the user to see",
... 30-40 lines of this ...
-->
{"success": true}

So there are two problems with that. The most important is that with DOM inspection tools, like Chrome Developer Tools or Firebug, you can inspect the return value of that request and see the data I don’t really want you to see. The other problem is that I’ve created invalid JSON, and whatever is checking that success value is clearly not actually parsing it.
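
To illustrate the second problem: any client that actually parsed the response would choke on it. A quick sanity check (the response string here is just a stand-in for what the view was returning):

var body = '<!-- "secure data" => "stuff I do not want the user to see" -->\n{"success": true}';

try {
    var data = JSON.parse(body); // throws: an HTML comment is not valid JSON
    console.log(data.success);
} catch (e) {
    console.log("Invalid JSON: " + e.message);
}

So whatever client code appeared to work against that response clearly wasn’t treating it as JSON.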

Why did I do this?

This was early on in development and I was debugging the easiest way I could.

What could have prevented this?

A code review of any kind. One look at that file would have been a red flag to anyone. But a view that was only supposed to output true or false never seemed like something I needed to look at again once I got it working.

Custom Slides and Github Hosting

So I was putting together a presentation on accessibility for my department. I knew very little about accessibility, so I read as much as I could and watched every YouTube presentation I could find on the subject. A lot of them were total crap, but a few from some of the Google I/O conferences were really great. They had working examples and code rendered inline in the slides.

I thought this was great, so I went looking for how they did it and found that (for at least the 2011 and 2012 I/O conferences) they provide a slide template geared for exactly that.

The 2012 one is reasonably nice: https://code.google.com/p/io-2012-slides…

So I altered this, “forked” it, and dumped it into my GitHub: https://github.com/jazahn/axs-slides

That in itself is pretty cool, but then I thought, hey, I want to put these somewhere people can get at them. Originally I had them on my public web space for work, but it was sort of annoying to git commit, git push, log in to the server, git pull. Logging in is an extra annoying step if I’m not at work and have to VPN in.

Artie had a cool idea of using GitHub Pages (http://pages.github.com/). Because these slides are all static, I don’t even have to worry about what server they’re running on. All you have to do is create a gh-pages branch, and anything in that branch will automatically be hosted. So what I did was create that branch, set it as the default branch, and remove the master branch (to avoid confusion and simplify). After altering my remotes, now I just have to git push from my dev environment and it’s automatically put on the server:
 http://jazahn.github.io/axs-slides
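
For reference, the branch setup went roughly like this (“origin” here is just whatever your GitHub remote is called):

git checkout -b gh-pages    # create the branch that GitHub Pages serves
git push origin gh-pages    # publish it; GitHub starts hosting its contents
git push origin :master     # delete the old master branch on GitHub
                            # (after setting gh-pages as the default branch on github.com)

After that, any ordinary git push to gh-pages updates the hosted slides.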

Very cool.

The Responsibility of Creativity

This article was interesting. Not altogether surprising, but interesting.
 http://www.slate.com/articles/health_and…

Most of it is pompous fooey designed to make everyone think they’re the creative person in question.

The problem I have with this is here:

A close friend of mine works for a tech startup. She is an intensely creative and intelligent person who falls on the risk-taker side of the spectrum. Though her company initially hired her for her problem-solving skills, she is regularly unable to fix actual problems because nobody will listen to her ideas. “I even say, ‘I’ll do the work. Just give me the go ahead and I’ll do it myself,’ ” she says. “But they won’t, and so the system stays less efficient.”

“I’ll do the work, give me the go ahead and I’ll do it myself.” But they won’t, so she doesn’t. This person is not a risk taker; she is asking the company to be the risk taker, and then absolving herself of responsibility when her idea doesn’t get traction. This doesn’t work.

If you want to be a risk taker, if you want to do things outside of the box to make things better, you have to DO them. Don’t ask for permission, just do it. Show your management some small success if you want traction within your company. Don’t cry because they don’t like your ideas. If you’re not willing to go above and beyond, just get back in line and stop complaining.

People DO like creative people; they just don’t want to take the risk you want them to. If you want to be creative or take risks for what you see as a good idea, you have to do it yourself, often on your own time. You have to be the one to make the sacrifice for your own ideas; asking others to do that for you is where we fail.


The Newbie: How to Set Up SSHFS on Mac OS X

Recently, I wanted to find a simple way of mounting a remote Linux file system from my Macintosh laptop. And by “simple,” I wanted the procedure to consist mostly of downloading and installing a tool and running a command, without having to delve too deeply into editing configuration files. Fortunately, I was able to figure this out without too much trouble, and thought I would record my experience here. The procedure involves two applications, FUSE for OS X and SSHFS, both of which can be found on the FUSE for OS X web site. FUSE for OS X is a library/framework that lets Mac OS X work with third-party file systems; SSHFS is a file system built on the FUSE framework that mounts remote directories over an SSH connection.

First, let’s establish some terminology. We’ll refer to the remote server that I wanted to connect to as the “Linux server” (at the domain “remoteserver”) and to my local machine as simply “my laptop.” We’ll call the directory that I wanted to access on the Linux server “/webapps”. In essence, I wanted to be able to access the folder “/webapps” on the Linux server as if it were a folder sitting on my laptop.

I’ll also note that I had already set up my SSH keys on my laptop and the Linux server. That needs to be accomplished before anything else. If you need guidance on that, here’s a simple tutorial.

After SSH had been set up:

  1. I downloaded the latest version of FUSE for OS X at the FUSE for OS X web site.
  2. I installed FUSE for OS X on my laptop by double-clicking the disk image, then double-clicking on the installation package. This is pretty standard Mac OS X stuff; it went without a hitch.
  3. I downloaded the latest version of SSHFS for OS X at the FUSE for OS X web site.
  4. I installed SSHFS by double-clicking on the downloaded file. I ran into an issue here where Mac OS X refused to install the package because SSHFS comes from an “unidentified developer.” To get around this, you need to override the Gatekeeper in Mac OS X, which can be as simple as right-clicking on the package and selecting “Open” from the context menu.
  5. Both FUSE for OS X and SSHFS were now installed.
  6. Next, I needed to create a new folder on my laptop which would serve as the mount point. Let’s call that folder “~/mountpoint.”
  7. Now, it was a matter of learning how to invoke the appropriate command to have my laptop mount the Linux server. The command I used was:

sshfs -p 22 username@remoteserver:/webapps/ ~/mountpoint -oauto_cache,reconnect,defer_permissions,noappledouble,negative_vncache,volname=myVolName

Using the above steps, I was able to successfully mount the Linux server. Unmounting is a piece of cake:


umount ~/mountpoint

 

Additional notes:

The SSHFS command used to mount the remote server is lengthy; indeed, it’s filled to the brim with arguments that I cut and pasted. If you would like to know what each argument does, there is a helpful guide that describes them.
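
Roughly, those options do the following (as I understand them; see the guide above for the authoritative descriptions):

-p 22               connect over SSH port 22
auto_cache          cache file contents, invalidating the cache when the remote file changes
reconnect           re-establish the connection if it drops
defer_permissions   defer permission checks to the remote server instead of the local kernel
noappledouble       don't litter the remote file system with ._ AppleDouble files
negative_vncache    cache "file not found" lookups so they aren't repeated
volname=myVolName   the volume name that shows up in the Finder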

Using SVN with Git

I’ve talked about this before, but I made a pretty picture for it recently to help explain it.

SVN is a centralized repository that we use for controlling deployments, for sensitive data, and for storing environment-specific configs. The majority of the code lives in Git (GitHub), along with the development and feature-branching workflows.

This works well because SVN is good at being central and having a linear history. Git is good at branching and going nuts with workflow.
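
As a rough sketch of how the split plays out day to day (the repository URL and names here are made up):

# day-to-day development: branch, commit, and collaborate in Git/GitHub
git checkout -b feature/my-change
git commit -am "Work on my change"
git push origin feature/my-change

# deployments and environment-specific configs live in the central SVN repo
svn checkout https://svn.example.edu/project/deploy deploy
cd deploy
# ...edit an environment config...
svn commit -m "Update the prod config for this release"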

Modern JS: Tools of the Trade

Whether you like or dislike javascript, the good parts are quite good and javascript patterns can make it easier to write sophisticated apps that are also readable and maintainable. One of the more recent trends in JS is a movement toward micro libraries. A good resource for that is microjs.com. Alas, there’s no such thing as CPAN for browser-based javascript, but I think this is a good start at organizing the useful libraries that exist on github and elsewhere.

On one of my recent projects, I wanted to implement the pub/sub pattern on some objects, but instead of writing the code myself or reaching for a heavyweight library just for event functionality, I found microevent.js via microjs, a mixin that adds the ability to make any object an event emitter. In total, the library is just 20 lines of plain-old-javascript that anyone can understand.
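
As a sketch of the kind of usage that enables (I’m writing the method names from memory, so treat the exact API as an approximation):

// assumes microevent.js has been included, providing the MicroEvent global
function Downloader() {}

// mix bind/unbind/trigger into the constructor's prototype
MicroEvent.mixin(Downloader);

var dl = new Downloader();

// subscribe...
dl.bind('done', function (bytes) {
    console.log('finished, got ' + bytes + ' bytes');
});

// ...and publish
dl.trigger('done', 1024);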

These are some other tools and services that I’ve found very useful:

  • JSHint (successor to JSLint): lints code and is also useful for enforcing a common style on a project with multiple developers.
  • RequireJS: a library that aims to make dependency management more sane in javascript. It is based on AMD (the Asynchronous Module Definition format).
  • Jasmine: a library for doing unit testing (BDD).
  • JSDoc: for documenting javascript, kind of like Javadoc. Docco is also gaining in popularity, although I haven’t used it myself.
  • Lodash/Underscore: Lodash is a fork of Underscore.js that I’ve been using, although they both serve the same role: they facilitate a more functional style and provide common utilities that every JS dev needs, whether they know it or not, although as Brian Lonsdorf points out, there’s more to functional programming.
  • JSPerf: this is a popular benchmarking service for JS and helps answer performance-related questions. Very useful.
  • JSFiddle: a great service for testing out javascript and sharing it with others.

I’m sure there are some others that I’ve missed, and I’d like to hear about them!


Git vs SVN again

My previous Git vs SVN post made some errors based on old information. I’m hoping this will be a more accurate account.

Git Pros:

  • DVCS (Distributed Version Control System). What this means is that every user clones the entire repository and works with their own copy locally. This has several benefits.
    1. Speed. This makes development and working with the repository much faster, since almost every operation can be done without contacting the “primary” repository.
    2. Tiny Commits. This allows / encourages tiny commits, so changes are tracked more closely and can be more easily separated out.
    3. Control over who merges. A merge can be proposed by anyone with access, but this generates a pull request, which can then be reviewed and approved.
    4. No network access required.
  • Lightweight Branching. While feature branching in SVN is technically possible, it’s also a pain in the ass. Git made branching a first-class feature. Easy branching means more people will probably make use of more advanced workflows (there’s a quick sketch after this list).
  • Much smaller repositories. Due to better compression.
  • Repo file formats are simpler. So repair is easy and corruption is rare.
  • Clones act as full repository backups. That’s potentially useful I guess.
  • Creating a repository is trivial. So people who want to use repositories to track tiny personal projects can just do so without any setup.
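
To make the branching and repository-creation points concrete, here’s the kind of thing that takes seconds in Git (a sketch; the names are made up, and the default branch is assumed to be master):

# a brand new repository: no server, no setup
git init tiny-project
cd tiny-project
echo "notes" > README
git add README
git commit -m "First commit"       # committed locally, no network needed

# a throwaway feature branch, created and merged entirely locally
git checkout -b feature/experiment
echo "an idea" >> README
git commit -am "Try something"
git checkout master
git merge feature/experiment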

SVN Pros:

  • Narrow Clones. You can make a checkout of just one subdirectory deep in a hierarchy, download only the files related to that directory, and still be able to make commits (there’s a quick sketch after this list). This seems useful in massively large repositories.
  • Authz files are easier (more standard) to write than Git commit hooks.
  • Comfort. People are already used to the linear structure and commands of SVN and that makes sense to them. Switching to DVCS is a cost. Most developers find the switch painful because it goes against what they’ve done for so long.
  • Deals with binary files better. If you’re tracking massive amounts of non-textual data, SVN handles it better, as Git’s compression doesn’t work as well on binaries and every clone has to copy all of that history.
  • Better classical/hierarchical model (as opposed to patch/change-based revisions). It keeps in place a simplified model, good for keeping release history, and good if you only make large commits and don’t want or need to record small dev changes. From a Git perspective, think of fast-forwarding every pull request.
  • SVN Lock. This provides more top-down control over the repository.
  • Supports empty directories. This is probably important to someone.
  • Shorter and (mostly) predictable revision numbers.
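
A quick sketch of the narrow-clone and lock points (the repository URL and file names are made up):

# check out just one subdirectory of a large repository
svn checkout https://svn.example.edu/bigrepo/trunk/docs/manuals manuals
cd manuals

# lock a binary file so nobody else can commit to it until the lock is released
svn lock diagram.psd -m "Editing the diagram"
# ...edit the file...
svn commit -m "Update diagram" diagram.psd   # committing releases the lock by default
# (svn unlock diagram.psd would release it without committing)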

SVN encourages large commits, Git encourages smaller commits. Simplicity or granularity. We all started out used to the former, but the latter is catching on quickly.

What seems to be the bottom line for most people is that SVN is the old and Git is the new. This is the direction things are going, and it’s a new way of thinking about workflows and about how version control can actually do things OTHER than just store data.

The only issue that really matters to us is the pain of the switch. The problem is that if we don’t understand the benefits we’d get from the switch, there will never be enough of an impetus to make it.

The Newbie: Learning Tools Interoperability

We’re educational technologists, which generally means two things:

  1. We like to develop tools for teaching and learning;
  2. We have an on-campus Learning Management System (LMS) for which we have often developed.

The above has now been complicated by the inevitable: Our LMS will be changing in the future, and that LMS is, er, unknown at the moment. There are a wide variety of candidates, to be sure: Blackboard, Moodle, Sakai, and Canvas, to name a few. So which one to develop for? Or do we simply stop development, take some time off, and head out on vacation? The latter, alas, isn’t an alternative. And since we don’t know, exactly, what we are writing for, we’re implementing stand-alone web applications at the moment. It’s nice to be doing so, but it would also be nice to easily integrate these applications into whatever LMS the University ultimately decides upon.

Enter Learning Tools Interoperability (LTI), a specification by the IMS Global Learning Consortium. The specification attempts to establish a standard way for rich learning applications to be integrated with other platforms, such as, say, an LMS. In LTI lingo, the “rich learning applications” are called Tools (delivered by a Tool Provider) and the LMS is called the Tool Consumer. The goal is that users of the LMS can connect to your external, web-based application without disrupting their experience by having to travel outside the LMS. For developers, it means “write once, use anywhere.” That seems ideal. But we know how well “write once, use anywhere” often goes.

Nevertheless, we’re starting to explore LTI, and, fortunately, Instructure, the makers of Canvas, have an entire course for developers, and several of the assignments are devoted to learning LTI (just click on the “modules” link of their course site). Additionally, the MSDLT Blog has a good article on writing a basic LTI tool provider, which lists several links that all developers should be aware of, and shares their early thoughts on LTI. And we’ll (hopefully) continue to share our own thoughts on LTI as we delve into it.

 

 
