Learning Analytics

We learn in different ways. Some of us skim read, jump in and out of books, and extract bits and pieces that are useful. Some of us read carefully and sequentially, building our understanding of one text on another. Prior experience, good and bad, will affect how we tackle a new task. Some of us find numbers easy; others dread them. There are solitary learners and gregarious communicators. Good teachers understand these differences of style, often intuitively, through empathy. Think hard enough, and we can always remember a particular teacher, hopefully for the good they did, but all too often for the bad.

Thoughtful teachers have always thought about how their students learn. When Mrs. Berry was thinking about my problems with spelling rather more than fifty years ago, and frowning about my progress in comparison with the rest of her primary school class, she was conducting learner analytics. Mrs. Berry’s tools were a red pen and a ledger; today’s are a keyboard and a spreadsheet. The objectives are the same.

The last few years’ leaps and bounds in digital technologies have revolutionized the potential of learner analytics. Education has always generated huge amounts of data. A university student registering for a semester-long module brings a sack of personal information through the door. There could then be ten to twenty assessment events: class tests, essays, projects and anything else that deserves a comment or a mark. Multiply this by the number of students in the class, and then by the enrollment for the college or university as a whole, and that’s a lot of data. Up until now, only a small fraction of this information has been used, almost always for formal assessment purposes. Most has remained passive and on-record, in case of some procedural imperative, rather like the requirement to keep personal tax documentation for a minimum of five years. But new digital technologies allow it all to be analyzed in real time, almost instantaneously and in any combination.

The real issue is not about getting access to learners’ data (although there is still some work to be done in this area); it’s about what to do with it. As Christine Borgman has shown in her new book, “little data” are far more interesting and significant than “big data”. There would be little value – other than setting a world record – in having the digital learning profile of every student in the world on a common database. Of much greater value would be general agreement about key proxies for student learning success and the use of these to improve the quality of teaching.

Currently, the most widespread use of learning analytics is for helping prospective students make choices about programmes of study and universities, and for enabling funding agencies to keep track of public money. But many universities – and probably a majority – are beginning to use their own students’ data far more prospectively, and particularly for identifying students who may be at risk. Students leave a digital trace every time they log into their university’s Virtual Learning Environment (Blackboard, Moodle and their equivalents), access a library resource, connect with an on-line course or, increasingly, come onto campus and use their student smart card to get into a room or buy something at the campus store. Failure to do any of these things for a period of time can put up a flag for a tutor to e-mail, text or call to find out if assistance is needed. These interventions can result in sharp improvements in student retention, particularly for those in their first year of study who may be finding adjustment difficult.
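The flag-raising described above amounts to a very simple rule over an activity log. The sketch below is illustrative only: the event sources, student identifiers and the fourteen-day quiet threshold are invented assumptions, not any university’s actual policy.

```python
from datetime import datetime, timedelta

# Hypothetical event log: last-seen timestamps per student, pooled from VLE
# logins, library access and smart-card swipes. All values are invented.
events = {
    "s001": [datetime(2015, 3, 1), datetime(2015, 3, 20)],
    "s002": [datetime(2015, 3, 1)],
}

def at_risk(event_log, now, quiet_days=14):
    """Flag students whose most recent recorded activity is older than quiet_days."""
    flagged = []
    for student, stamps in event_log.items():
        if not stamps or now - max(stamps) > timedelta(days=quiet_days):
            flagged.append(student)
    return sorted(flagged)

# s002 has been silent for 24 days and would trigger a tutor contact.
print(at_risk(events, now=datetime(2015, 3, 25)))
```

A real system would of course weigh different event types differently and tune the threshold per cohort; the point is only that the trigger for human intervention can be this mundane.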

This, though, is really only a beginning, a toe dipped at the edge of the digital lake. As electronic resources become more sophisticated – more than a scanned copy dumped on-line as a PDF – readers will leave digital fingerprints as they page through texts and data sets. These will augment Internet browsing histories (and we all know that Google already knows more than we know they know). This will allow teaching styles to be adapted to students’ differing cognitive patterns. Knewton’s Jose Ferreira makes the point that most current applications are “just decision trees using if-then rules to herd students in pre-determined directions … lump students together in large buckets”. Future approaches will instead see each student’s path emerge in real time.

In Knewton’s own words: “In the seconds before recommending a learning activity, Knewton takes into account – using real data – all of your proficiencies, learning patterns and past performance. We then recalibrate our understanding of exactly what you know, how well you know it, and how you learn it best. Then we examine the content and strategies that worked best for other similar students – using the combined data power of our entire network to find your best possible learning path for every concept you study.”
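The idea Ferreira describes – weighting what worked for similar students – can be caricatured in a few lines. This is emphatically not Knewton’s algorithm; the proficiency profiles, the similarity measure and the activity names below are all invented for illustration.

```python
# Toy version of "what worked best for other similar students": each peer is a
# (proficiency_profile, activity, succeeded) triple; we recommend the activity
# whose successes came from the peers most similar to the current student.

def similarity(a, b):
    """Crude profile similarity: 1 / (1 + sum of absolute proficiency gaps)."""
    return 1.0 / (1.0 + sum(abs(a[k] - b[k]) for k in a))

def recommend(me, peers):
    """Score each activity by the similarity-weighted successes of peers."""
    scores = {}
    for profile, activity, succeeded in peers:
        if succeeded:
            scores[activity] = scores.get(activity, 0.0) + similarity(me, profile)
    return max(scores, key=scores.get)

me = {"algebra": 0.4, "reading": 0.8}
peers = [
    ({"algebra": 0.5, "reading": 0.7}, "worked-examples", True),
    ({"algebra": 0.9, "reading": 0.2}, "open-problems", True),
    ({"algebra": 0.4, "reading": 0.8}, "worked-examples", True),
]
print(recommend(me, peers))
```

Even in this caricature, the recommendation emerges from the data rather than from a pre-determined decision tree, which is precisely the contrast Ferreira draws.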

To be really useful, an individual’s learning progress has to be benchmarked against her or his student cohort, however that is defined. As is now well recognized from digital data issues more generally, this raises key questions of public interest and trust. Many people are queasy about their personal health records being pooled, despite the compelling argument that large medical databases are critical for epidemiological breakthroughs that will significantly benefit everyone. There has been less public debate about educational records despite the fact that, in many countries, there is a strong correlation between the place where you live and your prospects at and beyond university.

Used inappropriately, educational profiling could be as damaging to individual rights as health profiling. Given this, should we worry about a future in which large educational data sets are owned and used by for-profit companies? Some large publishers are diversifying their business models so that they can provide, often at significant fees, layers of educational services that may include examination and student assessment and learning analytics. Is it appropriate that such companies own learner datasets, or should they continue to be owned by independent, public interest organizations such as Britain’s Universities and College Admissions Service (UCAS) or the Higher Education Statistics Agency (HESA)?

The use of learning analytics for prospective purposes, and also the depositing of individual student records into databases for benchmarking, raises ethical issues at the level of the individual. Should students be asked to give informed consent for such use of their personal data? Should their participation in analytical exercises be on an opt-in or an opt-out basis? Should the surrender of personal data simply be a requirement for taking up a university place (as, in effect, it is at present), or does the potential for pooling student data outside the control systems and custodianship of the student’s university add a new ethical and legal dimension? While these may turn out to be non-issues, they are still questions that should be asked, and answered.

There is also a lot of work ahead in agreeing on metadata standards for learning analytics. Again, Borgman is fascinating – almost philosophical – about the conceptual implications of metadata. Terminology and classification are the battleground of the disciplines, the markers of differing theoretical positions. But without broad agreement on metadata, there can be little interoperability between data sets, limiting both the retrospective and predictive value of benchmarking. Here, universities may speak with a forked tongue. While they will espouse the principles of open data and the public interest, they are all committed to the statistical challenge of being in the top 10% and inherently suspicious of comparisons unless they are sure that they will come out on top. Consequently, guarantees of anonymity as a watertight condition of data sharing will be as important for institutions as for individuals.

There’s little point in data for its own sake, and the bureaucratization of universities has been relentless. Consequently, the motivation for learning analytics should be clear: to help students achieve their objectives more effectively and to help teachers guide learners with improved insight. There is a vast amount of grassroots innovation at course level, and significant new opportunities for universities as a whole and in trans-institutional co-operation. Getting this right should be well worth the effort.


Christine Borgman, 2015. Big Data, Little Data, No Data: Scholarship in the Networked World. Cambridge, MA: MIT Press.

Jose Ferreira, 2015. ‘Predetermined Adaptivity’: A Contradiction in Terms. The Knewton Blog, 1 April 2015. http://www.knewton.com

2 thoughts on “Learning Analytics”

  1. Very interesting Martin. Another aspect that occurs to me is that students could be more than just gatekeepers to others’ use of their data - even if for their own good. It seems wrong that data is collected about them without it being shared with them to help them reflect on their own behaviours and actions.
    I have been looking at Facebook’s collection of data in order to sell services - currently advertising, but who knows what else in future. Students make extensive use of Facebook - sometimes with their lecturers’ involvement. Facebook don’t share all of the data, and they don’t share the algorithms they use to infer attributes and target consumers for ads. So who knows? Some universities may already be paying to target ads at students who are faltering at another university. This trend of separating data from its source (to ‘add value’) is not as new as it might seem. One could regard it as a natural outcome of standardisation and functional specialisation, first within organisations and then beyond, with outsourcing and the hoovering-up and connection of data that behemoths like Google and Facebook can do. This always seemed to me to be a good subject for education of all students, and of course staff.

  2. Interesting. The GetSmarter folk (www.getsmarter.co.za) presented to us a few weeks back, and the one thing that really impressed me was the extremely rich learning data they gather and the extent to which they analyse it to a) test new learning designs (e.g. the design of a UI or the use of a particular technology as part of the learning environment for a course) or b) track actual learning activity at the user level - like the example in the article as to whether a student had engaged with certain material or not. They are able to see with what and with whom the student engages, what material or parts of the learning environment they access, the order, the amount of time spent in various activities, etc. What impressed me the most, however, was that this not only fed into their striving to create better learning experiences but that it fed into the assistance and mentoring they provide learners on a personal, one-to-one basis.
