Academic Citizenship? Or a Question of Survival?

Are academics being forced to give up altruism, driven instead by a narrow individualism and performance targets? And, if they are, are they abandoning a form of citizenship, an innate tendency to prioritize the public good? Or is it rather university managers who are at fault in prioritizing apparent short-term gains rather than appreciating the essential nature of academic work?

These questions arise from an excellent review of “academic citizenship” by John Morgan and Chris Havergal, just published in Times Higher Education. They talked to a wide range of academics across different kinds of universities, asking them about almost all the things outside the narrow scope of formal teaching and research: external examining; curricula and qualification design; student mentorship, support and reference writing; peer review of publications; editorial work; evaluations for funding bodies; working for scholarly societies; public engagement and outreach. Together, these tasks are seen as reciprocal, sometimes voluntary; contributions to a form of social capital that gives universities a collective and distinctive identity as a type of institution.

Responses to the THE survey were mixed but thoughtful. Everyone identified with the pressures of competing demands and mounting workloads. Given this, some questioned the value of this “extra” work, while others recognized its benefits and regretted that these could be lost. The overall direction of John Morgan and Chris Havergal’s take is that the combined demands of formal teaching and recognized research outputs are squeezing out “academic citizenship”, and impoverishing universities as a result.

This much is common cause. But there are further questions that follow on from this consensus and, I believe, bear thinking about a bit. For while there will be a number of “organizational altruists” in any workplace, the implication here is that academics are inherently altruistic as a class, and different. After all, it would be odd to make a case for, say, “retail citizenship”, which leads the staff of Walmart to volunteer for additional work to build the corporate brand. And in my experience the range of attitudes across an academic staffroom has much the same variation as any group of people working together in a common institution; some are mean and self-interested, some are professional and detached and a few have an inherent generosity and commitment that can make all the difference. Seen this way, “academic citizenship” is much the same as the social capital that gives any organization a distinctive identity.

From this angle, the question is a little different. Rather than asking whether academics are a different kind of person, now forced to be more ordinary and self-serving by a cold and calculating managerialism, the issue looks like this: does the social capital built and remade by these kinds of “extra” work have an inherent value for the university of the future, or is it rather a left-over from a disappearing past that can be safely trimmed away?

A fair number of John Morgan and Chris Havergal’s interviewees referred to the tyrannies of their universities’ workload management systems; the various ways of counting and allocating hours and proportions of effort against notional prototypes that, in Britain, feed into TRAC, the mandatory Transparent Approach to Costing.

I have no doubt that many universities’ workload management systems – or the ways in which they are applied – are a work of the devil. Teaching, research, publication and administrative tasks merge into one another as a rich and mutually reinforcing continuity. Requiring that this continuum is disaggregated into discrete categories, each allocated a number of hours a week, is to create a model of academic work that, often, bears little relationship to what a person actually does. And, because the outcomes of workload allocation can have a significant effect on a person’s professional life, these exercises can create a dangerous office politics, with the “reward” of more research time, and the “punishment” of a greater teaching load. So while aggregated workload information helps Finance Managers mitigate risk and balance the books, and HR Directors describe the characteristics of their workforce, it has little to do with quality; with inspiring students to realize their potential, or with making and applying new knowledge.

But if universities’ social capital – academic citizenship – is being decimated by the application of workload management systems that have little place for this “extra” work, will this damage the core mission of teaching and research? Yes, it will. This is because the kinds of tasks reviewed in John Morgan and Chris Havergal’s survey are neither the products of a quirky academic altruism nor now-redundant vestiges of what universities used to be. They are, rather, essential to the ways in which our knowledge systems work, whether in teaching or in research.

It’s long been established that academics’ primary loyalties are to their disciplines, fields of research or professional identity. Archaeologists are most comfortable at conferences with other archaeologists; physicists for the most part make big research breakthroughs as part of large consortia that cut across the boundaries of many institutions. These disciplinary identities are underpinned by common sets of theories and methodologies that are built up through the teaching curriculum and kept alive through the knowledge shared in the external examination system. Interdisciplinary teaching and research isn’t a grey melding of these ways of working; it’s rather the innovation that comes from, for example, juxtaposing the coding of DNA with the binary logic of computer science, or from putting an economist in the same room as a historian. All the “extra” work identified in the THE’s overview is the effort that keeps these systems of intellectual reciprocity alive; none of it can be effectively reduced to a workload allocation system and managed as an inventory.

Taken together, these “invisible colleges” are a massive set of global networks and a huge asset. With the rise of the “knowledge economy”, accountants and investors are finding increasingly significant ways to estimate the asset value of knowledge networks. Ironically – because they invented these systems of sharing – universities are way behind in valuing this aspect of their work; here, TRAC is about as opaque as it’s possible to get.

Most academics, in contrast, are driven by the voracious needs of their specific networks. Employment, tenure and promotion depend on recognition through citation and conferences; peer review and editorial work enable an understanding of how the system works. Today’s postgraduate students are tomorrow’s professors; taking the time to mentor and support now will prompt reciprocity down the line. This may be altruism, but it’s also sensible, shrewd and self-interested. It’s how the vast machine of university teaching and research works.

Given this, universities that let their attention to short-term, narrow and mechanistic measurement get out of hand risk everything. If academics are engulfed by the kinds of workload systems that make it impossible for them to build the social capital of their networks – to be “citizens” – they will become more and more marginalized in their “invisible colleges”. In turn, of course, their universities will be marginalized.

“Invisible colleges” have been around since universities were invented. Arabic and Latin were common languages of communication for itinerant scholars moving between libraries with their collections of rare and irreplaceable manuscripts. Our digital world now connects academics everywhere at any time, making the “extra” work of keeping these networks live and relevant central to the quality of teaching and to emerging research agendas.

This is not just a question of citizenship; it’s an issue of survival.

John Morgan and Chris Havergal. “Is Academic Citizenship Under Strain?” Times Higher Education, 29 January 2015.

34 degrees south: “black life is cheap all across the world”

Malaika wa Azania is a smart young writer born the year after Nelson Mandela walked free from prison. Her recent opinion piece for the Johannesburg Sunday Independent, “I Am Not Charlie”, drew a swarm of hostile comment, mostly offensive. But, in insisting that history matters, wa Azania has more in common with Charlie Hebdo than she cares to admit.

Wa Azania’s point of departure, in common with some other thoughtful commentators, has been to criticize the disproportionality of global responses to the terrorist attacks in Paris on January 7 in comparison with the many more massacred by Boko Haram in northern Nigeria a few days later. “Black life”, she writes, “is cheap across the world”.

What makes wa Azania’s response less usual is her insistence on the relationship between contemporary political trauma and colonialism; in other words, on history and its consequences.

Time magazine, for example, seeks to explain this disproportionality in terms of group psychology: “We tend to empathize more with people that we feel are more ‘like us’,” says Marco Iacoboni, a psychiatry professor at UCLA. “I think in this case, cultural, anthropological differences can play a big role in how much we empathize with others. I jokingly call this the ‘dark side’ of empathy.”

For wa Azania, in contrast, it’s about amnesia. And it’s not something to joke about.

Wa Azania makes the connection between France today and its colonial legacy in Africa. She revisits the criminal and persistent behaviour of international drugs companies and their trials, sometimes under duress, that have had devastating consequences. She could also have added the determination of the present British government not to allow redress for the survivors of British colonial rule in Kenya; or the failure of the United Nations to bring justice to the victims of the Gatumba massacre; or many other examples of continuing double standards.

The point of her argument is that the West cannot slough off its colonial legacy by choosing to forget the past. Redress can only come from remembering and acknowledging, such that every Nigerian life lost to a fundamentalist bullet is condemned with the same vehemence as every life lost elsewhere, wherever this may be. It’s not the “dark side” of empathy that matters; it is rather the shadows of history.

Predictably, but sadly, wa Azania’s piece attracted a torrent of online comments. Many of the primary comments have been sufficiently abusive to require their removal by the newspaper’s moderator, resulting in the strange effect of visible reactions to empty speech bubbles; like listening to a broken stereo. The online chatter, though, is not about wa Azania’s argument; it’s about her assertion that the West’s colonial legacy in Africa has been “cloaked in white superiority”. This, her detractors assume, makes wa Azania a despicable racist.

That Europe’s colonial settlement and occupation of Africa was justified by racial superiority is beyond doubt. The archives heave with weighty self-justifications of racial hierarchies, the obligation to civilize and the justification for native subservience. In this sense, colonialism was the perfect crime, in which the perpetrators provided detailed evidence of their own actions in flagrante delicto.

There is nothing new in wa Azania’s reminder of this slice of basic history. What is revealed, though, are the pernicious effects of historical amnesia. These result in the affronted reader assuming that a ton of guilt is being loaded onto their backs solely because they see themselves as racially similar to the past’s colonial adventurers. In their rage, they react with racial assertions and slurs, transforming their mistaken assumption into a reality. By hinging their abuse on this point, wa Azania’s online trolls make her point for her many times over. And because they are enraged by the mirror of history, and seek to shatter it into a thousand splinters, they remain ensnared by its reflection and unable to attain the dignity of proportionality.

Wa Azania is a self-styled “born free” – the first liberated generation in South Africa, with a book of reflections on her condition published last year. Her writing is about her right to offend by insisting on the continuing significance of history, just as the Paris responses to the murder of journalists have been about the right to offend through satire. In insisting that past and present are ineluctably intertwined, Malaika wa Azania has more in common with Charlie than she cares to acknowledge.

Charlotte Alter. “Why Charlie Hebdo Gets More Attention Than Boko Haram”. Time, 15 January 2015.

Malaika wa Azania. “I Am Not Charlie”. Johannesburg Sunday Independent, 18 January 2015.

Malaika wa Azania. Memoirs of a Born Free: Reflections on the Rainbow Nation. Jacana Media, 2014.

Net Neutrality?

What do Reykjavik and Washington have in common that potentially affects us all? Answer: “net neutrality”, the principle that governments and Internet service providers should treat all data equally, neither speeding nor slowing its transmission nor blocking access without legitimate justification. And while, at first sight, net neutrality may seem straightforward, the closer one looks the more complex and contradictory the options appear. Next month, Washington’s Federal Communications Commission (FCC) will vote on new regulations for the Internet that could have a significant effect on the shape of the digital landscape over the coming years. Across the Atlantic, Iceland is making a pitch as an icy anchorage for digital cargoes. Dealing with the difficult choices that regulatory bodies such as the FCC face may make places like Iceland singularly important. At the core of this is the paradox that, in order to ensure an open Internet, some data may need to be blocked.

There are five issues here. First is whether the FCC votes to stay with current light-touch regulations, known as “Title I”, or whether the commission concedes to pressure to adopt tighter, “Title II”, rules. This choice is tied up with a second issue, the increasing consolidation of broadband providers that is creating near-monopolies in some parts of the world. Third is the way in which net neutrality is interpreted in practice and, in particular, what a principle known as “no blocking” means. Fourth is the significant implications these issues have for universities – for both research and teaching. And fifth, looking to the future, is the question of how a balance between net neutrality and data protection may be achieved, which is to return to the example of Reykjavik.

Up until now, the Internet has been lightly regulated, following Title I provisions set out in the US’s Communications Act. Anticipating that this could change, Verizon, one of the largest broadband providers, challenged the extent of the FCC’s regulatory powers in a complex legal case that was decided in Verizon’s favour early last year. This ruling has raised extensive concerns, with over four million representations to the Commission, including a strong policy steer from the White House.

In essence, big broadband providers such as Verizon want to stay with light-touch Title I provisions because these allow for additional revenues, particularly from speeding up or slowing down access to individual web sites according to differential payment tariffs. Their case is supported by the Republican Party, now dominant in both houses of Congress, in the interests of commercial competitiveness. The counter-case, which has grown to an avalanche of anxiety over the past year, is that the FCC should impose some of the Title II provisions of the Communications Act that have long been used for telephone services. This would prevent practices such as charging for differential Internet access, preserving a healthy ecosystem for innovation and the myriad small-office start-ups that have given us a host of public-benefit services and useful apps.

One petitioner to the FCC is the White House, with a rousing statement from President Obama:

An open Internet is essential to the American economy, and increasingly to our very way of life. By lowering the cost of launching a new idea, igniting new political movements, and bringing communities closer together, it has been one of the most significant democratizing influences the world has ever known.

‘Net neutrality’ has been built into the fabric of the Internet since its creation — but it is also a principle that we cannot take for granted. We cannot allow Internet service providers (ISPs) to restrict the best access or to pick winners and losers in the online marketplace for services and ideas. That is why today, I am asking the Federal Communications Commission (FCC) to answer the call of almost 4 million public comments, and implement the strongest possible rules to protect net neutrality.

The White House position goes on to list four “bright-line rules” to ensure the continuation of net neutrality: no blocking of legal content; no “throttling” (or access speed controls); net neutrality across all points of interconnection between ISPs and the rest of the Internet; and no paid prioritization.

This issue is inseparable from the increasing consolidation of large broadband providers and the near monopolies that this is creating. Comcast, the US’s largest cable company, is currently seeking approval to take over Time Warner, the second largest; if this merger is approved, the new conglomerate will be the only established cable company available to almost two-thirds of the US population. Similar moves are happening elsewhere and are a consequence of network effects; the exponential-like advantage that comes every time a new subscriber, with their web of contacts, is enrolled.
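The “exponential-like advantage” of network effects can be sketched in a few lines. The quadratic growth shown here is the classic Metcalfe-style approximation – my gloss on the mechanism, not a claim made by any of the sources cited in this essay:

```python
# Sketch of network effects: the number of possible pairwise connections
# grows roughly with the square of the subscriber count, so each new
# subscriber adds more potential value than the last.

def possible_connections(subscribers: int) -> int:
    # every pair of subscribers can, in principle, connect once
    return subscribers * (subscribers - 1) // 2

# A network of 10 subscribers has 45 possible connections; one of 100
# has 4,950; one of 1,000 has 499,500.
for n in (10, 100, 1000):
    print(n, possible_connections(n))
```

Adding the 1,001st subscriber creates 1,000 new potential connections – which is why each acquisition, and each merger, is worth more than the last.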

Given historic suspicions of monopolies in general, as well as more specific criticisms of companies such as Microsoft and Amazon, there is a prevalent, visceral suspicion of the motives of big beasts such as Verizon, Comcast, AT&T and Time Warner. But it’s not that clear that virtual monopolies are the same as the late nineteenth-century industrial giants for which most current legislation was designed. A recent, and thoughtful, essay in the Economist sets out some of the counter-arguments. These include the point that public-interest use of the Internet depends on the seamless and global reach of fibre optics provided by big trans-nationals, the role of social media monopolies such as Twitter in enabling freedom of speech in the face of repressive political regimes, and the cross-subsidization effects that can follow from differential pricing by global platforms.

The question of how net neutrality is interpreted in practice is also more complex than it might at first seem. In particular, the “no blocking” principle goes to the core of the issues in question. Here is the first of President Obama’s bright-line rules: “No blocking. If a consumer requests access to a website or service, and the content is legal, your ISP should not be permitted to block it. That way, every player — not just those commercially affiliated with an ISP — gets a fair shot at your business”.

But by specifying that the “no blocking” principle is about commerce and enterprise, the White House is ducking the far more complicated issue of digital sovereignty. In essence, digital sovereignty is the right of a state to use its authority and legislation to control, resist or deny digital traffic across what it understands as its borders. Self-evidently, digital sovereignty is another, and significant, form of blocking.

Exercising claims to digital sovereignty is most commonly represented as actions by repressive political regimes that are intended to limit basic rights of access to information and of freedom of speech, notably by countries such as Egypt, Turkey, China and, more recently, Russia. But digital sovereignty also embraces situations where states may want to act to protect rights of privacy and to prevent the misuse of data, in the interests of freedom of expression. For example, Brazil proposed exercising digital sovereignty to protect its citizens in response to the scandal of US intelligence gathering revealed by Edward Snowden. And any organization or individual that subscribes to one of the many available Virtual Private Networks exercises a similar act of sovereignty in order to block access to information about their identity and location.

The White House’s bright-line rule for “no blocking” is, then, pretty selective. This partiality is consistent with the US judiciary’s current interpretation of the reach of America’s jurisdiction over digital data.

As net-critic Evgeny Morozov has recently pointed out, the US government is currently defending a landmark court case. Last year, the courts upheld a demand from the police that Microsoft hand over e-mails stored on a server in Dublin as part of a drugs case. As Morozov puts it, imagine the inverse, with the Chinese government demanding data from a server in Washington as evidence for a court sitting in Beijing. In upholding the Dublin demand, the courts are asserting that the USA has national sovereignty over any digital data that originates in the US, wherever that data is now located.

The court ruling is vigorously contested by Microsoft. Here is an extract from the current court documents: “the power to embark on unilateral law enforcement incursions into a foreign sovereign country – directly or indirectly – has profound foreign policy consequences. Worse still, it threatens the privacy of US citizens”. The appeal case is backed by a wide range of organizations, including the Guardian, Apple and – Verizon.

Consequently, although (as President Obama puts it), the Internet may be “one of the greatest gifts our economy — and our society — has ever known”, it does not necessarily follow that this gift can only be preserved by an absolute principle of “no blocking”. While preventing blocking may be in the clear interests of start-ups, innovation and enterprise, it is clearly more complicated when intelligence and law enforcement agencies use the principle to batter down virtual doors anywhere in the world.

What implications do these net neutrality issues have for information technologies and their use in universities? First, we have a profound interest in openness, both for teaching and research. For teaching, places such as MIT and the Open University pioneered the concept of making curricula and their associated resources freely available on the Web. Open access to publicly funded research results is a widely supported principle, irrespective of the complications in achieving it. Open data, and the ability to trawl massive, globally distributed data sets without coming up against pay-walls, is already fundamental to key areas of research. Given this, it must be in all universities’ interests to support the Title II case when the FCC votes on Internet regulation next month.

But second, there will be a matching concern with an interpretation of net neutrality that allows either a domestic or a foreign agency unrestricted access to data, wherever this data is, and without clear, precise and open international protocols of legality.

The Dublin case that Microsoft and others are contesting is a narcotics investigation that would probably be upheld if such international protocols existed. But remember the far more complicated case of Boston College’s Belfast Project. Here, dozens of frank interviews had been taped with former IRA and loyalist paramilitaries under the university’s assurances of strict confidentiality; the kind of assurances that researchers routinely give in the pursuit of knowledge, and which are part of the essential scaffolding of academic freedom. Some of these tapes were obtained by the Northern Ireland Police Service through the US courts, using similar principles to those applied by the US courts to the e-mails on the Dublin server. Boston College’s Belfast archive is analogue, but today it would of course be digital. If net neutrality is to be interpreted as an unrestricted right to obtain information under any circumstances, key research across a wide range of fields could be compromised.

Finally, then, how can a balance be achieved between net neutrality, data protection and the rights of privacy? Back to Iceland.

Current and future use of the Internet depends as much on the Cloud as it does on the massive infrastructure of cables owned and operated by big companies such as Comcast and Verizon. And the Cloud is, in turn, a family of massive data centres, often in remote locations, voracious in their consumption of energy and furnace-like in their generation of heat. If net neutrality is to be reconciled with ethical data preservation and security, then appropriate combinations of a cold climate, unlimited cheap energy and fierce political defence of data protection and freedom of expression should win out. Iceland meets these criteria.

Iceland’s unique combination of long, cold winters and abundant geothermal power is attracting international investment in data centres. Currently, in a development that may signal a trend, the former US Navy base at Keflavik is being converted to this purpose with equity funding from Britain’s Wellcome Trust. This economic shift, following from the spectacular collapse of Iceland’s earlier role as a banking centre, has broad political support. And because Iceland is not a member of the European Union, it is not party to the EU Data Retention Directive, which requires member states to retain records of all citizens’ telecommunications for up to two years and to permit access by police and security services.

If we are to reconcile the founding Internet principles of openness and data neutrality with the predatory and quasi-legal actions that have come to characterise governments’ digital behaviour, we may well need both the principles of net neutrality encompassed in the US’s Title II provisions and the political protection and environmental conditions of places like Iceland.


The Economist. “Internet Monopolies”. 29 November 2014.

Fung, Brian. “The next big turning point in the net neutrality debate”. Washington Post, 30 December 2014.

Fung, Brian. “Get ready: The FCC says it will vote on net neutrality in February”. Washington Post, 2 January 2015.

Gaedtke, Felix. “Can Iceland become the ‘Switzerland of data’?”. 28 December 2014.

Goel, Vindu and Andrew E. Kramer. “Web Freedom Is Seen as a Growing Global Issue”. New York Times, 1 January 2015.

Guardian. “Privacy is not dead: Microsoft lawyer prepares to take on US government”. 14 December 2014.

Karr, Timothy. “Four Pivotal Internet Issues as the Year Turns 2015”. Huffington Post, 31 December 2014.

Nagesh, Gautham. “Republicans Lay Plans to Fight FCC’s Net-Neutrality Rules”. Wall Street Journal, 4 January 2015.

White House. “Net Neutrality: President Obama’s Plan for a Free and Open Internet”.



Do Universities Add Value?

There’s a lot of interest at the moment in the efficacy of funding for “non-traditional” students at universities: those who are first in their family to get in, students from low-income households, under-represented minorities, vulnerable groups. While everyone supports “social mobility”, there’s a wide range of understanding of what this means. And just off-stage, watching intently, are the Treasury’s accountants, asking patiently how the return on investment for special funding can be measured.

A recent study for the Institute for Fiscal Studies offers an illuminating perspective on this issue, as well as an intriguing counterintuitive finding; one of the surprises that big data analysis can bring.

In this report Claire Crawford, who is at the University of Warwick, looked at all students with home addresses in England who began at any UK university at the ages of 18 or 19 between 2004 and 2009, with each of these six cohorts comprising between 180 000 and 235 000 students. This is about 80% of all students attending British universities. She tracked their individual attainment from their test results for English, Mathematics and Science from the age of 11, through their GCSE results at age 16, their A-level or equivalent results at age 18, and on to the quality of their university degree. She also took on board a rich mix of socioeconomic indicators (proxies for household income, neighbourhood indicators of relative deprivation and property values, and the educational levels and occupations of people living nearby) and the characteristics of the secondary school that each student had attended: state or private; selectivity; the availability of a Sixth Form College; and school performance based on the government benchmark of the percentage of students that achieve at least 5 GCSEs at grades A*-C at age 16. Data sets for a national-level study of education do not get much bigger than this.

There were three major outcomes from this research, each of which has significant implications.

First, Crawford’s work extends what we already know about the effects of a person’s prior and present life circumstances on their prospects of getting into university and getting a well-paid graduate job. Study after study has shown that where you were born and live, combined with your family’s circumstances, will determine the probability of your getting into university as well as the kind of university that will admit you. This new analysis follows students past admission to first year undergraduate study and through to degree attainment, plotting the likelihood of getting a “good degree” (a first or an upper second). It shows that individuals from different socioeconomic backgrounds face significant differences in their prospects of making it through to the final year of study and attaining the class of degree that they will need for a well-paid graduate job.

For state schools in England, fewer than 10% of students from the most advantaged backgrounds leave university within two years without completing their qualifications, 80% complete their degree within five years and nearly 70% graduate with a first or an upper second class degree. In contrast, more than 20% of those from the most deprived backgrounds will drop out, fewer than 60% will complete within five years and fewer than 40% will attain the coveted “good degree”.

Second, up to 80% of these differences in university-level attainment can be explained by what Crawford calls “human capital”. This is a little confusing, because a good deal of attention has already been given to “social capital”, broadly understood in this context as the attributes of a graduate household, where material and experiential support is more readily available to students. Crawford’s definition of “human capital” is far narrower, and primarily comprises a person’s prior test and examination results: national English and Mathematics testing at Key Stage 2 (approximately age 11, and including some additional testing in Science subjects), Key Stage 4 (GCSEs at age 16) and Key Stage 5 (A-levels or their vocational equivalent). Because the attainment of students at state primary and secondary schools is powerfully shaped by the socioeconomic circumstances of families in their catchment areas, this finding again extends what we already know: the school that a person attended prior to admission to university will have an enduring influence on prospects for degree completion, for the class of degree attained and for subsequent employment opportunities.

Third – and this is the counterintuitive finding from this massive data set – when all else is held constant, students from poor performing schools do better than those from high performing schools: “when comparing pupils from similar backgrounds with the same prior attainment, those from the best-performing state schools are now 2 percentage points more likely to drop-out, 2 percentage points less likely to complete their degree and 5.2 percentage points less likely to graduate with a first or 2:1”.
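The logic of “holding all else constant” behind this reversal can be illustrated with a toy example. The numbers below are hypothetical, not Crawford’s data: they simply show how a raw comparison can favour high-performing schools even when, within every band of prior attainment, their students do slightly worse – the familiar pattern of Simpson’s paradox.

```python
# Hypothetical cohort: (school type, prior-attainment band, students, completions).
# High-performing schools send mostly high-attainment students, which
# flatters their raw completion rate.
cohort = [
    ("high-performing", "high", 800, 680),  # 85% complete
    ("high-performing", "low",  200, 100),  # 50% complete
    ("low-performing",  "high", 200, 180),  # 90% complete
    ("low-performing",  "low",  800, 440),  # 55% complete
]

def completion_rate(rows):
    return sum(r[3] for r in rows) / sum(r[2] for r in rows)

def rate(school, band=None):
    # completion rate for one school type, optionally within one band
    return completion_rate(
        [r for r in cohort if r[0] == school and (band is None or r[1] == band)]
    )

# Raw comparison: 78% vs 62%, in favour of high-performing schools ...
print(rate("high-performing"), rate("low-performing"))

# ... but conditioning on prior attainment reverses the ordering: within
# each band, students from low-performing schools complete more often.
for band in ("high", "low"):
    print(band, rate("high-performing", band), rate("low-performing", band))
```

Crawford’s analysis does this conditioning with real prior-attainment measures across hundreds of thousands of students; the toy version only shows why the raw and conditioned comparisons can point in opposite directions.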

Why is this? Here, we are at the limits of what the statistics are able to tell us, however large the overall data set. But reasonable inferences can be made. Remember that this category of higher-achieving students has already overcome significant barriers to get into university in the first place. Given the wide differences in access to high-ranking secondary schools and to all universities by different socioeconomic groups, this will be a relatively small sub-group of the study as a whole. They are also likely to be special; people who will, for example, have been consistently ranked at the top of their class in school. It is certainly worth finding out more about them; their stories will provide valuable lessons about what universities can do to close the attainment gap that is so evident in terms of overall socio-economic background.

Claire Crawford is primarily interested in her counter-intuitive finding as a way for universities to refine their existing admission policies. This, though, is to assume that universities should seek to narrow the attainment gap between different socioeconomic groups by taking fewer students from backgrounds for which the statistical data predict higher drop out rates and lower degree outcomes.

It’s useful to test this reasoning with a simple thought experiment. Say that all universities were able to use this sort of massive, longitudinal data to select, at admission, students who would all complete their degrees, and all achieve firsts or upper seconds. Say also that all state secondary schools providing universities with their students accomplished a similar feat, as did all primary schools from which these secondary schools recruit. The overall outcome would be a clear and exclusive pathway into university from Key Stage 2 at the age of 11, and a tight and almost impenetrable relationship between the circumstances of the household in which a person is born, their subsequent education, and their prospects in life.

That, though, is not what we profess to want from universities. Every major political party now embraces social mobility as a good thing and social mobility is, by definition, the inter-generational transformation in prospects that would be impossible if this thought experiment were to become a reality. Given this, there is a more important implication from this piece of work. Here’s how Crawford summarizes this:

“The relatively crude measures of university experience at our disposal do not help to reduce the remaining difference in degree outcomes by socio-economic background very much further. Even the addition of course fixed effects – effectively comparing students on the same courses – plus an indicator for whether the student lived and studied in the same region (and, in the case of drop-out, indicators for type of qualification and mode of study), does not reduce the difference in degree outcomes between young people from different socio-economic backgrounds by very much”.

Specifically, when the attainment of students after admission to university is controlled for the academic programme of study, those with similar Key Stage 5 results but from the least privileged backgrounds are still 3.4 percentage points more likely to drop out, 5.3 percentage points less likely to graduate and 3.7 percentage points less likely to attain a “good degree”.

Given that this mirrors the disparities that characterize educational attainment throughout a person’s life, it raises an obvious question: does a university education do anything to counter prior disadvantage? Does attending a university add value?

We all, of course, claim that it does, and the cause of Widening Participation (WP) is an article of faith for the sector. The problem up to now has been that we cannot measure the benefits of WP in any systematic way; a matter of some interest to the number-crunchers in the Treasury. And the problem now is that new research such as this suggests that, seen in terms of degree attainment, there is little apparent return on the investment in WP funding.

And this is where Claire Crawford’s methodology could have wide application. In comparing groups of students within the same cohort of study but from different backgrounds, every university has a way of measuring the value that it adds, and the sector as a whole a way of demonstrating empirically the returns for public investment in Widening Participation. This would be far more revealing than the inanities of the existing league tables, or the superficialities of the National Student Survey. Given the cuts that are rumored for this year’s Comprehensive Spending Review, this could be a timely priority to adopt.


Claire Crawford, “Socio-economic differences in university outcomes in the UK: drop-out, degree completion and degree class”. IFS Working Paper W14/31.




Thought Police? Bad Idea

January 8 2015

Imagine this. You’re teaching a course on current affairs and decide to have your class debate the merits and demerits of fracking. The debate is passionate and gets out of hand, with students on both sides getting personal. You calm them down and the session ends. But you’ve noticed that one of your students, who you know to be a passionate environmentalist, is sullen and withdrawn, not engaging with others in the class, and obviously anxious. You are under a standing instruction from your Dean to report all such symptoms to the Faculty Administrator. Next week, the student is absent. You find out that, based on your report, she is now under the supervision of your university’s local authority, with a support plan to help correct her radical tendencies.

Now consider this. The Counter-Terrorism and Security Bill, under consideration by Britain’s parliament, proposes that all university governing bodies have a statutory duty to implement measures that prevent radicalization that could lead to acts of terrorism. In addition to preventing radical advocates from speaking on campuses, the proposed Act of Parliament will require every local authority to set up a panel to which the police can refer “identified individuals” who are considered to be vulnerable to radicalization. All universities are identified as “partners” with their local authorities in this process of referral.

The government’s focus is, of course, on the present and real threat posed by the desperate conflicts in Syria, Iraq, Pakistan and Afghanistan. No one can disagree with this. But, writing four years ago as the first beheadings were starting, W. J. T. Mitchell anticipated that one of the objectives of extreme and unpredictable violence is to create a syndrome of responses that, in themselves, promote ever more violent reactions. Will this new Act achieve its immediate aim of preventing Islamic radicalization? Or will these new statutory duties of referral push those who are singled out down a path they may otherwise not have followed?

Further, emergency measures that are perpetuated through legislation always run the risk of unintended consequences. Appropriately, the Counter-Terrorism and Security Bill is directed not just at Muslims, but at anyone with radical views, including views that are non-violent but that might open up a road to violence. Could these new statutory obligations on universities be used against opponents of fracking, or animal rights activists, or anti-nuclear movements, or any radical opposition to the status quo? And where would that leave the principles of academic freedom and freedom of speech in universities, and elsewhere?

Parliamentary Bills are dry and difficult documents to read. Here, though, a clear and often eloquent introduction to the key issues is the verbatim evidence given to Parliament’s Human Rights Committee by the Minister for Security and Immigration, James Brokenshire. The first part of this interrogation concerns the powers of police and immigration officials to seize the travel documents of people suspected of travelling in order to take part in terrorist activities, and to detain them. The Committee was particularly concerned with “gisting” – the process by which an official establishes the essence of the case for detention. Since the Bill proposes to deny or limit the rights of people so detained, the process of “gisting” is pretty significant. After this, the Committee turned its attention to universities and their proposed statutory obligations. The Committee’s transcript is worth reading in full. Here, I’ve “gisted” the essence of the debate, retaining the original words.

Two parts of this committee session are particularly interesting. The first is Helena Kennedy’s eloquent defence of the role of universities, her identification of the central issue, and her question to the Minister:

“The nature of the university is to develop the mind. It is about the whole business of freedom of speech. Freedom of exchange of ideas is at the heart of the university. By challenging orthodoxies people grow in ideas. Inevitably, some of those ideas will be bad ones, but the best way to deal with them is in debate and by challenging them in the process of learning. No university has created a fundamentalist who has gone to Syria to take part in what is going on there. Yes, people may have been influenced, probably more by other students. That can happen in a café in Birmingham as much as in any university. You are introducing a chilling effect on the whole thing that universities are about, which you and I benefited from, as did most people who went to university—and 40% of our young now go to university. You are doing this when we know that universities up and down the land are already considering these issues and thinking about how they might deal with them and how they might create the debate, without having a statutory duty to do so. That is what concerns people: the statutory duty with a power to give directions from the state. The state will be able to tell universities what they ought to do, and they will be punished in some way if they do not fulfil the requirement set by the state and Government. I want you to explain to us why it needs to be a statutory duty”.

The second is James Brokenshire’s confirmation, under persistent and insistent questioning, that academic and administrative staff who refuse to follow the new statutory duty may be imprisoned for contempt of court:

“Baroness Kennedy of The Shaws: And if the institution says no?

James Brokenshire: Then the Secretary of State would have to issue a direction. If the institution then failed to comply with that direction, the Secretary of State would have to go to court in those circumstances to effectively seek a mechanism that would make the institution comply with that order.

Baroness Kennedy of The Shaws: And what is the sanction?

James Brokenshire: Ultimately, it will be a contempt of court sanction.”

As both the Minister made clear in his evidence to the Human Rights Committee, and as the Government’s current consultation document on the proposed Act confirms, most (and probably all) universities have in place carefully considered policies for dealing with situations where a speaker advocates, or may advocate, terrorism or other illegal actions. Further, universities work extensively with the police in the context of the existing Home Office policy for countering radicalization, known as “Prevent”. This requires briefing sessions for students considered at risk of radicalization.

There is also a strong counter-argument that must be taken seriously: that Prevent itself angers and radicalizes students, because of the implication that, simply by virtue of holding Muslim beliefs, a person is more likely to become a terrorist. The same assumption is not made about, for example, Catholics. Given that the 2011 census recorded 2.8 million Muslims living in Britain and that the Home Office is currently concerned about approximately 500 individuals (a small fraction of one percent), there is clearly a question of effectiveness and proportionality for the Prevent strategy as it is, let alone for the draconian expansion of powers contemplated for the new Act.

In addition to questions of freedom of speech and the positive value of open debate – the case made by Helena Kennedy – there are also the processes and consequences of referral that will become a statutory responsibility for all who teach. As the Minister’s testimony to the Human Rights Committee and the Government’s current consultation document make clear, universities will be required to train all staff who have contact with students to recognize what James Brokenshire called being “withdrawn and reserved, and perhaps showing other personality traits”. Where these traits are identified, the university must refer the student to a panel set up by the police and the local authority for the area in which the university is located. This panel will oversee and administer a safeguarding programme, which may include referral to the health services. There is no right for universities to be represented on the panel. The Government proposes to make HEFCE (and presumably the university funding councils for Wales, Scotland and Northern Ireland as well) responsible for overseeing and reporting universities’ compliance with their statutory duties under the Act.

This aspect of the Bill has already alarmed Sir Peter Fahy, Chief Constable for Greater Manchester and the national lead for Prevent. Sir Peter has said: “if these issues [defining extremism] are left to securocrats then there is a danger of a drift to a police state. I am a securocrat, it’s people like me, in the security services, people with a narrow responsibility for counter-terrorism. It is better for that to be defined by wider society and not securocrats. There is a danger of us being turned into a thought police. This securocrat says we do not want to be in the space of policing thought or police defining what is extremism” (Guardian 5 December 2014).

These concerns, then, are that these new measures will not address – and may make worse – the immediate and pressing problem of recruitment to terrorist activities in Syria, Afghanistan and other current flashpoints. There are also wider implications that must be of concern to many who understand what a university is, and should be. Both the Bill and the current Government consultation make it clear that these measures will apply to any form of violent extremism and, in the words of the consultation document, to “non-violent extremism, which can create an atmosphere conducive to terrorism and can popularise views which terrorists exploit”. This means that the statutory responsibilities to be introduced in the Act could be used by police and local authorities in circumstances recently faced by Canterbury Christ Church University, which was asked for a list of those attending a debate and discussion on fracking. The Kent Police justified this request in terms of the need to assess “the threat and risk for significant public events in the county to allow it to maintain public safety”. If the proposed Counter-Terrorism and Security Act had been in force, the police could have used its statutory measures to enforce their request, and could have charged staff at the university with contempt of court had they refused.

The proposed measures are still under consideration in Parliament and are open for public comment. There are serious and long-term principles that are worth fighting for, of the kind recently re-affirmed by Thomas Docherty, who has more reason than many to understand what the loss of the freedoms central to a university can mean:

“The concept of academic freedom is a product of the modern era. Its exercise is usually considered in terms of the questioning of received wisdom within a discipline; and most nonacademics might wonder why we get so concerned about it, thinking that we arrogantly consider ourselves deserving of special attention or privilege. However, the exercise of academic freedom is instrumental in determining political authority in societies. Through reasoned dialogue in which views are freely and honestly expressed, societies can establish informed democratic legitimacy. The scope of academic freedom reaches well beyond seminar rooms and laboratories. In that sense, it extends beyond discipline; and its value is diminished if it is circumscribed as merely a matter of academic procedures or protocols. It should be extended as widely as possible; yet, today, it is “managed” – managed, in fact, almost to death. The power of unconstrained knowledgeable dialogue is marginalised; and, potentially, democracy itself – based on authority given by free and open debate – is thereby weakened”.


W. J. T. Mitchell, Cloning Terror: The War of Images, 9/11 to the Present. Chicago: University of Chicago Press, 2011.

Times Higher Education: “Academics label proposed Counter Terrorism and Security Bill ‘censorship’”. 4 December 2014

Guardian: “Chief constable warns against ‘drift towards police state’”. 5 December 2014

Guardian: “Police asked university for list of attendees at fracking debate”. 15 December 2014

Counter-Terrorism and Security Bill, 2014-15.

Joint Committee on Human Rights. Uncorrected transcript of oral evidence (to be published as HC859). Legislative scrutiny, Counter-terrorism and Security Bill, 3 December 2014.

UK Government: Open Consultation: Prevent Duty.

Times Higher Education: “Thomas Docherty on Academic Freedom”. 4 December 2014

Why Migration Matters

Remember Victor Spirescu?

A year ago today, Victor got off Wizz Air 63701 from Târgu Mures and into a media storm at Luton. He was one of the few Romanians and Bulgarians to take advantage of the end of the immigration controls that had been in place for their countries since 2007. A flood had been predicted, with some anticipating up to 5,000 Romanians and Bulgarians a week for the foreseeable future. There had been reports of “Olympic-style” security preparations and impossible burdens on the health services.

Victor Spirescu was polite but bemused. He’d been offered a job at a car wash outfit and came because “I love to work”. He hoped to make good money so that he could “renovate my home and make a good life in Romania because it’s much easier to live in Romania, because it’s not expensive”. He didn’t know much about the NHS.

In a report published this week, Oxford’s Migration Observatory shows how wrong these predictions of an invasion from Eastern Europe had been. It debunks claims that the increase in the allocation of National Insurance Numbers to Romanian and Bulgarian-born people in 2014 is evidence for a sharp increase in migration from Eastern Europe over the past twelve months; many of those allocated NINs over the last year were here before the European Union’s transitional labour market controls ended on 31 December 2013. The Migration Observatory instead uses Labour Force Survey data for Bulgaria and Romania (together known as the A2 countries). This provides a more reliable proxy for migration, and shows a steady pattern of increase over the years rather than the dramatic changes anticipated for 2014. While rates of migration have been increasing steadily, there is no evidence here that controls on movement within the European Union have made any significant difference. Which is another way of explaining why a quarter of the seats on Wizz Air 63701 were empty on 1 January 2014 and why Victor Spirescu was not one of many.

[Chart: migration from the A2 countries]

This said, the Labour Force Survey data does show that there are about 150 000 more Bulgarians and Romanians working in Britain today than in 2006. This is a small part of a broader trend. Census data shows that the numbers of foreign-born people living in the UK increased from about 4 million in 1995 to about 7 million in 2011. Most live and work in London but all major British cities are diverse; some 200 different languages are spoken in Manchester. This is a hot issue for all political parties and there is a general assumption that migration is a negative. A smart postgraduate student from Sofia asked me recently why British people hate Bulgarians (she’s making a documentary film about it); in late 2013 the President of Bulgaria made much the same point.

But is migration a negative? Leaving aside issues of rights, values, creativity and cosmopolitanism, the numbers again belie the assumption. A second recent report, this time from University College London’s Centre for Research and Analysis of Migration, shows that between 1995 and 2011, during which period the foreign-born population of the UK grew by about 3 million, migrants from within the European Union made a positive net contribution to the British economy of more than £4 billion. In contrast, the overall net contribution by native Britons was a whopping negative £591 billion. And between 2001 and 2011 the net contribution of migrants from eastern European countries that have joined the EU since 2004, including Bulgaria and Romania, is estimated at almost £5 billion. Why? Because, overall, migrants consume a far smaller proportion of government spending than the British-born population, making their tax contributions particularly useful when attempting to balance the national books. As the Economist puts it, migration is “the quintessential supply-side policy … It expands the labour force, encourages investment and provides taxpayers”.

This is the context in which to evaluate Theresa May’s support for further restrictions on the contributions that international graduates can make to the British economy. In contrast with other countries that are major destinations for international students, the Government has already raised visa requirements and removed the post-study work visa. The most recent proposal, trailed for the Conservative Party’s manifesto for this year’s general election, is to require all international students to return home immediately after graduation, and to apply from there for new visas to work in Britain. In the context of the University College London study this is particularly short-sighted. International graduates, essential in key economic areas, will pay higher taxes than European Union migrants as a whole while drawing far less on state-provided services than British nationals.

And Victor Spirescu? By May, when he was interviewed by Channel 5 News, he had had enough: “I speak with a lot of guys who want to come here and I tell them it’s not so easy to come here and to work here. I don’t want to stay here. I’ll go back to my country because I love my country, I love the place where I live.”



Banksy, 2013: “Migrants not welcome”. BBC, 1 October 2014, “Banksy anti-immigration birds mural in Clacton-on-Sea destroyed”.

BBC, 21 December 2014: “Theresa May backs student visa crackdown”.

Channel 5 News, 14 May 2014: “I want to go home: First Romanian to immigrate after rule changes regrets choice”.

Dustmann, Christian and Tommaso Frattini, 2014: “The Fiscal Effects of Immigration to the UK”. The Economic Journal. Available from the Centre for Research and Analysis of Migration, University College London.

Economist, 8 November 2014: “What have the immigrants ever done for us?”

Guardian, 1 January 2014: “Welcome to Luton: Romanian arrival greeted by two MPs and a media scrum”.

Guardian, 17 January 2014: “Romanian immigrant: ‘I just came to work, earn money and go home’”.

Guardian, 29 December 2014: “No surge of Romanian and Bulgarian migrants after controls lifted”.

Migration Observatory, 30 December 2014: “Bulgarians and Romanians coming to the UK in 2014: influx or exaggeration?”

Observer, 21 December 2013: “Bulgaria issues fierce rebuke to David Cameron over migrants”.

The Case for Diversity

It shouldn’t be necessary to have to make the case for diversity. There shouldn’t be the need for an Equality Challenge Unit, or for a kite mark for gender equity. There shouldn’t be an attainment gap for BME students. A postcode should not affect the probability of your attending university, or shape the kind of university that will let you in. In an equitable society, a person realizes their capabilities through their own choices – by the objectives they set themselves and the efforts they make.

We are, though, not an equitable society and Higher Education is not an equitable sector. This is abundantly clear in the Equality Challenge Unit’s two-volume Statistical Report for 2014, which I was obliged to carry on my back on the walk up the hill from Liverpool’s dockside to Lime Street Station at the end of last week’s ECU conference.

Baroness Onora O’Neill, Chair of the Equality and Human Rights Commission, set out the ethical and legal bases for equality with philosophical precision at the conference. Universities, despite the privatization of student fees, are still public bodies and as such have “positive duties” to address unfair discrimination and to minimize disadvantages. This, though, does not include a duty to achieve defined outcomes – quotas. Indeed, positive discrimination is unlawful in Britain, which is why a university cannot require an all-female short-list for a job, or set aside student places for which only ethnic minorities may apply.

What to do, though, when the cumulative, historical effects of unfair discrimination are so extreme that they threaten significant dysfunctionality? The week before the Equality Challenge Unit’s Liverpool conference I was in Johannesburg for a workshop hosted by the global energy company Chevron and True Blue Inclusion, a Washington-based think tank. This was about the dilemma faced by South African-based organizations twenty years after the inauguration of an era dedicated to non-racialism, when there are still extreme disparities by race and gender in middle and senior management positions. Here, the law is clear. South Africa’s constitution requires positive discrimination in order to address historically-based disadvantage; the problem is that these constitutional requirements have yet to be met.

Britain’s problem, then, is both less acute and more complex than South Africa’s. Clearly, the extent of inequality in British universities is nothing like the levels of persistent inequality across South African Higher Education. But at the same time, British universities are required to exercise a “positive duty” to address imbalances, but without using measures that are “positively discriminatory”. As Baroness O’Neill put it, how far can we go to achieve proportionality?

Here, a clear priority is ensuring equity in our processes, whether for staff recruitment and promotion or for student admissions, evaluation and assessment. Evidently, key Higher Education processes are not equitable. If they were, then 79.9% of Vice-Chancellors would not be men, and the unemployment rate six months after qualifying for BME graduates would not be twice that of white graduates.

There will always be some cases of overt discrimination in any organization; these should be comparatively easy to deal with. The more insidious determinants of continuing inequality are the subconscious biases – those aspects of institutional culture that we rarely put into words, but which define the character of an institution, of what’s in the air, in gestures, language, customs. And because an institution’s culture is not a person, protected in law against positive discrimination, we are free to identify its characteristics, make them spoken rather than assumed and, through doing this, change our habits.

Chi Onwurah, MP for Newcastle Central, set out an agenda for an attack on institutional culture at the Liverpool conference. An engineer-turned-politician, she based the case for equity across the Engineering disciplines on economic necessity: on the need to draw on the full spectrum of people’s experience and perspectives to drive forward innovation, “from sharpened stones to mobile phones”. And the same argument was made, with compelling force, in Johannesburg, where current cultural preferences in organizations result in companies depending on less than 10% of South Africa’s potential workforce for their recruitment of senior managers. No company, or country, can remain competitive if it self-restricts its talent pool to this extent.

Commitment to the case for diversity is a leadership issue, both at the top of an organization and also distributed across all areas of work. In preparation for this year’s conference, the Equality Challenge Unit commissioned a study of diversity leadership in ten British universities, including the University of Salford. As Chris Brink, Vice-Chancellor of Newcastle University, puts it in this report, equity is about excellence. And it’s hard to see how anything else could be more important.

Will we all be cyborgs?

The convergence of information technologies and bioscience is changing our lives. As bandwidth speeds of a gigabit per second become available and affordable for some, how will this affect who we are as we are able to reconstruct our own bodies and shape the bodies of our children? Will we be cyborgs, part flesh and part machine? What advantages will this bring? And what should we worry about?
The Pew Research Center recently canvassed over a thousand practitioners and experts in information technology. They were asked what they believed were the implications of a gigabit world. Will there be distinctive killer apps, disruptive innovations that will result in significant changes in the ways we live? The Pew survey identified health as one of the areas that will be most affected. Here is Hal Varian, chief economist for Google: “the big story here is continuous health monitoring… It will be much cheaper and more convenient to have that monitoring take place outside the hospital. You will be able to purchase health-monitoring systems just like you purchase home-security systems. Indeed, the home-security system will include health monitoring as a matter of course. Robotic and remote surgery will become commonplace”.
A gigabit connection provides 1,000 megabits of information per second (Mbps). At the beginning of this year, the average connection speed across the world was just under 4 Mbps, across the United States 10.5 Mbps, and in South Korea – the country with the highest average connection speed – 23.6 Mbps. A gigabit world, then, would see a roughly forty-fold increase in Internet speed in the best-performing country. This may seem unattainable in the near future. But the technology is already with us. Some scientific communities have had access to very fast networks for several years. Four years ago, Google ran a competition for the first community network running at 1 gigabit per second, a hundred times faster than the average speed for the US as a whole. Kansas City won, and residents are now signing up for the service.
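The comparison above is simple ratio arithmetic, and it can be sketched in a few lines of Python using only the average speeds quoted in this paragraph:

```python
# Back-of-the-envelope check of the speed-up a 1 Gbit/s connection
# would represent over the average connection speeds quoted above.
GIGABIT_MBPS = 1000  # 1 gigabit per second = 1,000 megabits per second

averages_mbps = {
    "world": 4.0,           # global average, early 2014
    "United States": 10.5,
    "South Korea": 23.6,    # highest national average
}

for place, speed in averages_mbps.items():
    factor = GIGABIT_MBPS / speed
    print(f"{place}: about {factor:.0f}-fold faster")
```

Run as written, this gives roughly a 250-fold increase over the world average and about 42-fold over South Korea’s – consistent with the “forty-fold” figure for the best-performing country.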
The convergence of bioscience and information technology is best represented in the history and triumphs of the Human Genome Project. Launched in 1990 and completed in 2003 with the sequencing of the chemical base pairs that make up our DNA, the results of this extensive collaboration are now transforming medicine and health care. Genome sequencing would not have been possible without high-speed computing, and future developments will depend on almost instant access to massive data sets.
But in addition to new drug development and predictive diagnosis of genetic conditions there are more complicated innovations. There is widening enthusiasm for personal DNA profiles that establish genealogies; the confirmation that the skeleton found under a car park in Leicester was once Richard III is a famous example. But for others, there is a deep suspicion of what this could bring. For example, indigenous communities with hard-won rights to land and resources fear that the misuse of DNA sequencing may strip away these rights. And the extensive and continuing revelations about the misuse of information technologies by state agencies have encouraged a significant backlash against the pooling and use of personal health records. The union of the biological and digital sciences is a complicated marriage that will bring unanticipated consequences.
One area to watch for such surprises is digital implants; the surgical insertion of microprocessors that make us part flesh, part machine.
The specter of the cyborg has been with us since Mary Shelley’s 1818 novel Frankenstein. The early Internet and popularity of personal computers brought flesh, organs, digital processors and information technology together in fiction and theory. Milestones, and still great reading today, were William Gibson’s 1984 classic Neuromancer and Donna Haraway’s prescient essay, “A Cyborg Manifesto”, published the following year.
But cyborgs were already on the street thirty years ago. The first surgical implantation had been in 1958 (the recipient lived until 2001); today’s microprocessor-controlled pacemakers sense the physical activity of their host and respond by increasing or decreasing their rate. And just as Gibson foresaw a future in which the body could be rebuilt at will – although for nefarious purposes in the dark world of the matrix – remarkable new medical technologies can now transform the quality of life. Surgical cochlear implants pick up signals from a speech processor and send them to the auditory nerve. Robotic prosthetic limbs receive signals from the nervous and muscular systems and transmit these to an artificial arm or leg. In the near future, implantable artificial kidneys with microelectromechanical membranes will filter blood and excrete toxins while reabsorbing water and salt.
Widely available gigabit connections will enable intelligent, implanted devices such as these to become part of the “internet of things”, much as Gibson imagined in Neuromancer. The Pew Center study predicts personalized digital health within the next ten years. Here is Judith Donath, at Harvard’s Berkman Center for Internet and Society: “telemedicine will be an enormous change in how we think of healthcare. Some will be from home—chronically ill or elderly patients will be released from hospitals with a kit of sensors that a home nurse can use. For others, drugstores (or private clinic chains—fast meds, analogous to fast foods) will have booths that function as remote examining, treatment, and simple surgery rooms. The next big food fad, after hipster locavores, will be individualized scientific diets, based on the theory that each person’s unique genetics, locations, and activities mean that she requires a specific diet, specially formulated each day”.
But medical applications are likely to develop more rapidly than this. Chip implants that yield personal information to a scanner have been around – controversially – for a decade, promoted for monitoring prisoners and hospital patients. And a person offered a surgical implant that could save their life by responding to real-time information is unlikely to decline out of deference to Edward Snowden.
All these developments, whether for lifestyle choices, medical care or lifesaving technologies, will require a significant trade-off between privacy and the sharing of personal digital information. Will this happen? Revelations about the extensive misuse of surveillance by state agencies across the West have resulted in a backlash against sharing. We are becoming aware that our digital traces are everywhere we go, and we don’t like it very much. But despite this, we surrender our personal data every day in return for the conveniences this brings.
Anyone who uses any free Google service pays with the surrender of some personal data, and usually a lot. Google knows where its users are, and what they are interested in, by collecting information on Internet searches, the contents of e-mails sent and received and from geospatial information transmitted from smart phones and tablets. The payback for the loss of privacy is easier shopping, finding places anywhere and a fast, free and capacious e-mail service. We all want safe cities, with protection from mugging to terrorism and everything in between. Today’s cities are impossible to police effectively without constant, digital surveillance. Every person in Britain is now photographed on average 300 times each day, often without knowing it. In London, more than 16 000 sensors automatically record the location of anyone carrying an Oyster card. Digital number plate recognition systems record the movement of every car across motorway systems, linking back to the identity of the registered owner. We are, to go back to William Gibson’s prescient novel, already in the Matrix, and this is a messy and complicated place to be.
And, finally, the engine of most contemporary change – consumerism. From the earliest Apple Mac to the latest iPhone, markets have directed and accelerated the advance towards a gigabit world. This Christmas’s best seller will be the wearable band, which offers a range of digital functions from paying for coffee to monitoring health patterns; market analysts predict that 43 million of these devices will be sold across the world, 28 million of them smartbands that connect to tablets, iPhones and other digital devices. And once this market is saturated, as it surely will be, what next? With 1000 megabits of information available every second, what could be more natural than tucking the microchip away beneath a fold of skin, perhaps along with a tattoo or body piercing?
But not for everyone. Respondents to the Pew Center survey also saw in this future an entrenched digital divide. Rex Troumbley, from the University of Hawaii, commented that “we should not expect these bandwidth increases to be evenly distributed, and many who cannot afford access to increased bandwidth will be left with low-bandwidth options. We may see a new class divergence between those able to access immersive media, online telepathy, human consciousness uploads, and remote computing while the poor will be left with the low-bandwidth experiences we typically use today.”
And so again we are in William Gibson’s dystopian world, or right back to Mary Shelley’s horror of the “miserable monster”, and his reproach to his creator: “I ought to be thy Adam; but I am rather the fallen angel.”
Pew Research Center, September 2014, “Killer Apps in the Gigabit Age”
Available at:

First published in University Business, 29 October 2014:

Minding the Gap

Britain is now one of the most unequal countries in the developed world and the gap between rich and poor continues to increase. In his recent Salford Lecture Alan Milburn, Chair of the Social Mobility and Child Poverty Commission, set out five policy levers to reverse this trend. Several of his proposals require action by universities. What, then, should we be doing?

Firstly, it’s important to understand the key issues, stripped of Westminster rhetoric. The Social Mobility Commission has made a significant contribution to synthesising this evidence. For universities, the key indicators are inter-generational; we are the gatekeepers for the professions and a wide range of high-value jobs, and who we let in – and keep out – has a significant effect on access to opportunity. In Britain, only one in eight children from low income families achieves a high income as an adult; a much higher inter-generational replication of circumstances than in countries such as Finland, Australia and Canada.

Looked at from another angle, the Commission found that more than 50% of those in elite professional positions – judges, senior military officers, senior public servants and diplomats, members of the House of Lords – attended independent schools, in comparison with 7% of the population as a whole. As Milburn put it in his presentation, “few people believe that the sum total of talent in Britain resides in just seven per cent of pupils in our country’s schools – or for that matter less than one per cent of students in our universities. The institutions that matter appear to be a cosy club. The data is so stark, the story so consistent, that it is hard to avoid the conclusion that Britain is deeply elitist”.

This is why fair access – widening participation – continues to be so important. At our university 43% of our entering students come from working-class backgrounds and 22% live in disadvantaged areas. One in ten of our students was eligible for free school meals at the time of taking GCSE exams. Enabling social mobility in this way is a core part of our mission and our planning for the future, and must remain so.

Secondly – and this is a key insight from Alan Milburn’s work – the nature of inequality is changing.

All current policies for widening participation are based on the notion of a “poverty line”. There’s a significant literature on whether this should be an absolute or a relative measure and on how it should be set. But whether it is the $2-a-day benchmark for absolute poverty applied across Asia and Africa, or the eligibility criteria for free school meals that are extensively used as a proxy for relative poverty in Britain, these are thresholds, and if families are above the threshold then they are not considered poor.

Of course, providing the opportunity of breaking through the threshold of absolute or relative poverty remains essential. But the Commission’s work has shown that, now, the nature and extent of inequality in Britain is significantly affecting families in work and with incomes above the poverty line:

“Although entrenched poverty has to be a priority – and requires a specific approach – transient poverty, growing insecurity and stalling mobility are far more widespread than politicians, employers and educators have so far recognised.  Too often – in political discourse and media coverage – these issues are treated as marginal when in fact they are mainstream. Poverty touches almost half of Britain’s citizens at some point over a nine-year period and one third over four years. The nature of poverty has changed. When Labour came to office in 1997, less than half of the children growing up in poverty lived in a household where one adult worked. Today by contrast child poverty is overwhelmingly a problem facing working families, not the workless or the work-shy. Two-thirds of Britain’s poor children are now in households where an adult works. In almost three-quarters of those households someone already works full-time. The principal problem seems to be that those working parents simply do not earn enough to escape poverty”.

Again, there are specific implications for universities. One of the consequences of the new student fee regime that was introduced in 2011 has been a sharp decline in access by part-time students, who are invariably older and who need either to re-qualify because they have been cut out of the labour market by shifts in the structure of the economy, or because they did not have the opportunity for further study after leaving school. Today, it is close to impossible for many older students to fund their continuing education.

This autumn, more students may be starting at British universities than ever before. But this is a world for 18-year-olds. Many talented but older people will remain trapped in low-value employment, at or below the minimum wage, or in zero hours contracts and part-time positions, particularly in the new and expanding service industries. It is a significant shortcoming in current student funding policies that older, part-time students are now largely excluded from educational opportunities. In turn, this omission will depress Britain’s economic competitiveness in developing a skilled workforce.

What should we be doing? We have little direct influence over national higher education policy, which is why it’s important for us to continue to make the argument in partnership with other universities, and to support work by organizations such as the Social Mobility and Child Poverty Commission. But we are able to work with others across Manchester and the Northwest in ensuring that pathways into education are aligned with the imperatives of social mobility. This is why our developing partnerships with Further Education Colleges are so important, ensuring that people can get access to flexible opportunities that recognize their aptitude and potential and counter the pervasive elitism of British society.

Beyond this, we need to look far more closely at our curricula, at what we recognize as knowledge and why, at how we enhance learning, and at how we enable part-time and “interrupted” learning. I’ve found that, across our university, people are full of ideas about these things; hopefully, policy advocacy by Alan Milburn and others will help put in place the policy and funding frameworks that will make the realization of these ideas a practical possibility.

Social Mobility and Child Poverty Commission reports are available here.

Why 2014 Feels Like 2009

2014 feels a bit like 2009.

Back in August 2009, everyone knew that the dream of limitless economic growth was over. But it wasn’t at all clear what this would mean for universities, and for students (although, of course, the LibDems had pledged not to increase fees). Clarity emerged a year or so later, after the May 2010 election, with the abolition of teaching subsidies and the shift of the full cost of teaching to students, with all that has followed.

Now, five Augusts later, another dream is over; rather than resulting in a free market in Higher Education qualifications, in which varying quality produces differential pricing and lower costs, the student loan book is unaffordable and the steely-eyed gatekeepers in the Treasury are demanding that something is done. But we’ll have to wait until after next year’s election to find out just what.

2015, on the other hand, will be different to 2010.

Ahead of the last election, there was a tacit agreement between the major political parties that neither wanted student fees as a doorstep issue (the LibDems never imagined they’d be in power, so making promises to first-time voters seemed like a safe bet). And so they created the Browne Review, deferring action until after the election. This time round, though, there is no Browne. This means that positions on post-16 education in general, including universities, will be in party manifestos for next year.

There’s still a way to go before manifesto positions become clear and, faced with the possibility of implementing policies, there will be a degree of movement towards the safe middle ground. But the options that are being trailed at the moment reveal very different visions for universities’ futures.

For Labour, the lynchpin is a maximum fee of £6000. This is an interesting number because, with some amount of variability, it’s the sort of fee that a good number of Further Education colleges are charging for university-validated degrees. Labour is also pointing to the needs of “the other 50%” – those who do not go to university – and has pledged to remove international students from estimations of net immigration, following practice in other countries.

For the Coalition, whispers are for either uncapped fees, or a new ceiling of £16000, with conditions. The conditions are directed at solving the unaffordability of the student loan book. Parts of this, the Chancellor announced in his autumn statement last year, were to be sold off to fund expanded student enrolments through to 2020. Once David Willetts had cleared his desk, Vince Cable announced that he wasn’t going to sell the loan book after all (shades of the Post Office?). And once free to speak from the back benches, David Willetts argued for universities to be given the right to buy their own student debt as an investment, as an incentive to get their graduates into highly paid jobs and as a condition for charging unrestricted fees. These positions are not as confused as they may seem; in this line of thinking, selling students’ debt to their own universities is preferable to selling the loan book on the general market because, it is assumed, this will provide incentives to recruit students who are more able to repay.

Evidently, these are very different visions for universities’ futures. And so 2014 feels like 2009 because we are in the diminished atmosphere ahead of a storm, quietly waiting for something big, and probably unpleasant. But 2015 will not be like 2010 because there is no tacit agreement to take student funding off the agenda. This could make the political outcomes of next year’s election very significant for universities, and for students.

Published on the Guardian Higher Education Network, 25 August 2014.



Gatumba is a small town about ten miles from Bujumbura, the capital of Burundi, and close to the border with the Democratic Republic of the Congo. It is an unexceptional place in a vast reach of grasslands and marshes at the northern end of Lake Tanganyika. There are military and police camps and, ten years ago, a refugee camp housing about 500 Burundians, recently repatriated after a period of exile in Congo, and some 825 Congolese refugees, almost all from the southern Lake Kivu area, on the border with Rwanda.

On the night of August 13th 2004 – ten years ago this week – an armed militia group crossed the marshes from the direction of the border with Congo and attacked the Gatumba refugee camp. 164 people were killed, almost all of whom were Banyamulenge, a Kinyarwanda-speaking people who live largely in the high plateaus of South Kivu. A further 106 were wounded. Most victims were women and children.

Human Rights Watch carried out an immediate investigation and later in 2004 published a report on the massacre, based on the accounts of witnesses and survivors. Here are extracts from the report:

The refugee camp is situated less than a mile from the military and police camps, just beyond the edge of town on the main road towards the border. Located next to the road, the camp included one cluster of large tents, fourteen of them green, one white, on one side of a field. Some one hundred yards away and facing them, was another group of twenty-four large tents, all white. Each tent was a dormitory housing several families. Between the tents, on the fourth side of the square and facing the road, was a row of latrines and showers. Beyond them was a football field and a marsh stretching to Congo, dry enough to cross easily at this time of year.

The attackers came across the marsh from the direction of the border. At least one witness actually saw some of them cross the border; other attackers apparently joined the group on the Burundi side of the frontier. One of the attackers fired an initial shot at a distance, perhaps as a signal to others in their group. Then they moved towards the refugee camp, playing drums, ringing bells, blowing whistles, and singing religious songs in Kirundi.

Most of the attackers wore military uniforms, either camouflage or solid green, but a few were in civilian dress. Most carried individual firearms but they also had at least one heavy weapon. A number of the combatants were child soldiers. According to a survivor of the massacre, some attackers were so small that the butts of the weapons they were carrying dragged on the ground. There were women in the group, encouraging the others by their songs and shouts, and ready to assist in carrying away the loot.

The attackers, usually only two or three at a time, ripped open the tent flaps and slit the sides of the tent. Often they stayed at the entrance to the tent and either ordered people to come out or just began shooting into the tent. They then threw or shot incendiary grenades that caused the tents to catch fire. Most victims died by weapons fire or by being burned to death. Fifty-one of the corpses of adults and fifteen of children had been burned. One survivor reported seeing an attacker stab a woman to death, probably with a bayonet, and several of the dead had received blows from machetes.

About an hour after their arrival the attackers left, heading back in the direction from which they had come. They carried away loot from the camp, particularly valuable items like money, radios, and clothing, but they did not stop to take cattle from the nearby enclosures. As they made their way across the plain in the direction of the border, they again sang and made music, a sound local residents followed until it died away in the distance.

Remembering Gatumba is vital to survivors, families and community here in Manchester and at other places where the Banyamulenge diaspora is found. Similar commemorations keep memories of other atrocities alive, whether in Latin America, Bosnia, Sri Lanka or at the centenary of the slaughter of the First World War. And in joining with the Salford Forum for Refugees and the Banyamulenge community in our city in commemorating the tenth anniversary of the Gatumba massacre, I want to make two observations that apply here but also more generally.

The first observation is on what can be called “the burden of tribalism”: the assumption that traditions are inherent – hard-wired – pre-determining a range of behaviour, from decorating pots to marriage customs to a propensity to kill members of other communities. This was long an assumption in the colonial anthropology of Africa, but it is found elsewhere as well; the belief that some groups of people are driven by inevitable “ancient hatreds” caused the United States and Western European countries to hold back from intervening in the Bosnian crisis that erupted in 1992.

The burden of tribalism took a particularly sinister turn in 1994 with the genocide in Rwanda. In a compelling reassessment, Bartholomäus Grill, writing for Der Spiegel on the twentieth anniversary of the Rwanda genocide, recalls his own reading of the emerging news from Rwanda in April 1994:

The first reports from Rwanda, 4 000km away, were confusing: accounts of military showdowns, bloody unrest, ethnic squabbles and fraternal strife. Der Spiegel published a story that spoke of “anarchy that feeds on itself”. Rwanda was dismissed as a typically African conflict.

“Rwanda?” a British colleague said. “Oh, it’s just the Tutsi and the Hutu smashing each other’s heads in. It’s never-ending tribal warfare.”

The murderous excesses had nothing at all to do with “tribal warfare”. The Hutu and the Tutsi have shared language, customs and culture for centuries. There were mixed marriages, and many Rwandans were unable to tell whether someone was Hutu or Tutsi. The causes of the tragedy were different: the pressures of overpopulation in a small agricultural country, the struggle over scarce resources, colonial segregation policies that had fuelled latent racism between the ethnic groups and the ruling elite’s thirst for power.

Today, we know that the genocide was not the work of archaic, chaotic powers, but of an educated, modern elite that availed itself of all the tools of a highly organised state: the military and the police, the intelligence services and militias, the government bureaucracy and the mass media. The Hutu killers were no demons but the henchmen of a criminal system. They pursued a simple logic of extermination: if we don’t wipe them out, they will destroy us.

The genocide in Rwanda was to continue for one hundred days, with at least 800 000 dead.

Similarly, the massacre at Gatumba ten years later was not the ethnic inevitability of Hutu rebels killing Banyamulenge Tutsis. This was a politically motivated attack, in the context of conflict that had been going on for fifty years. The breakdown between Congolese factions and the Banyamulenge originated in the 1964 rebellion led by Pierre Mulele after the assassination of Patrice Lumumba three years earlier. The Banyamulenge were accused of not supporting the rebellion, resulting in violent reprisals in February 1966, when all the men in the villages of Kirumba and Kahwela were decapitated. Sporadic violence continued, significantly exacerbated by the Rwandan war of 1994 and the complexities of the subsequent peace process that was still being negotiated in 2004. These conflicts, and the massacre at Gatumba, could have been prevented by regional and international political action, and the deaths avoided by appropriate policing, as the Human Rights Watch report makes clear.

Given the Rwandan genocide of 1994, and Gatumba and other atrocities a decade and more later, it should not be necessary to insist that violence is not the inevitable consequence of social identity and tradition. Whether in Bosnia, Rwanda, Burundi or Congo, we have convincing evidence of the ways in which particular and contingent sets of interests drive the political processes that can lead either to extreme violence or, if understood early enough and well enough, can be dealt with through negotiation and mediation. And yet in Gaza, Belfast, Ukraine, Sri Lanka and other contemporary hot spots, we still hear the assertion that it is inevitable that certain people will want to kill one another because it is in their blood, and has always been in the blood of their forebears to do so. And so one reason for coming together to commemorate what happened at Gatumba ten years ago is to re-assert that this massacre need not have occurred; that there were ways that the lives of 164 people could have been saved on the night of 13 August 2004, and should have been saved; that future massacres like this should not happen.

My second observation is that, in our world today, commemoration is becoming more and more difficult. This is not because the record of Gatumba – or of similar atrocities elsewhere – is disappearing. It is rather the opposite; that the traces of what happened on August 13 2004 are everywhere, and will persist for ever.

Commemoration is a form of mediation; between the past and the present, with a view to the future. Those who commemorate do so with respect; for lives lived and lost in the past, for those living more closely with the consequences of history than themselves. When commemoration is for something extreme – here for the 164 people killed at Gatumba just ten years ago – the need for respect is all the more urgent. And where respect is won for commemoration – and awareness widened through this process of mediation – then the risk of similar atrocities in the future diminishes, even if by just a little.

There is no respect in the traces of Gatumba that circulate, and multiply, today. We rightly celebrate our digital world, while thinking far too little about its dark side. Google Images brings an instant mausoleum to any screen, a morgue of images of bodies twisted in death, burned corpses, bloodstained clothes. Faces are never concealed, bringing unimaginable pain to survivors and families. At the same time, there is no forensic compensation – no utility in these jpeg files and text fragments as evidence against perpetrators. All these circulating and multiplying digital fragments are open to manipulation, to be photoshopped, edited or falsified to align with the political claims that, in the case of Gatumba, were being made within days of the massacre.

These persistent, circulating digital traces also enable easy disavowal, the abrogation of responsibility. John Berger long ago showed, in a brilliant analysis of photojournalism in the Vietnam war, that shocking images of violence can encourage superficial responses rather than a political determination to address the causes of continuing conflict. Outrage, he pointed out, might prompt a donation to a charity or a letter to a newspaper; meanwhile, the war continues. Today, there are so many digital images of violence circulating and multiplying across the Internet that atrocities merge into one another as the sordid aftermath of killing in Burundi and Sri Lanka, Gaza and Ukraine, of torture and degradation in Abu Ghraib or Guantánamo. We are outraged, but helpless under the weight and prevalence of this dark, virtual world.

This – to paraphrase Hannah Arendt, the banality of digital images of evil – devalues the power of evidence. For while the grainy images of death at Gatumba cause obvious distress to survivors at memorials such as today’s, for those with no direct connection with the specific event they merge into a visual morass of twisted bodies, severed limbs and cheap and torn clothing that could be from one of a dozen killing fields in different continents. And this, as John Berger noted, leads to forgetting in a swirling sea of evidence.

This is why commemoration – an event such as this – is so important. To mark the tenth anniversary of the atrocity at Gatumba refugee camp is to insist on the political specificity of the event: that it should and could have been prevented, that everyone in the camp on August 13th should have been equally protected; that governments and international organizations should have acted decisively to bring justice to the dead and closure to those left behind. In particular, to commemorate Gatumba is to insist on the continuing urgency of ensuring human rights for refugees, whether in Congo and Burundi, or here in Britain.

To come together with the Banyamulenge community is to insist on respect through the humanity of face-to-face engagement. In doing this, Gatumba ceases to be an unexceptional place in the centre of Africa, and becomes a marker for the circumstances of refugees everywhere, and for a future in which such atrocities cannot be ignored, or allowed.

Keynote, Gatumba Massacre 10th Anniversary Memorial, Banyamulenge Community Association, 9 August 2014.


Banyamulenge Community Association:

Gatumba Refugee Survivors Foundation:

John Berger, Understanding a Photograph. Edited and introduced by Geoff Dyer. London, Penguin, 2013

Bartholomäus Grill, “Silence of Rwanda’s dead haunts those who ignored their cries”. Mail & Guardian (Johannesburg). 11 April 2014.

Martin Hall, “Black birds and black butterflies”. First published in Carolyn Hamilton, Verne Harris, Jane Taylor, Michele Pickover, Graeme Reid and Razia Saleh (eds.) Refiguring the Archive. Cape Town, David Philip, 2002, pages 333-361. Republished 2013. Available at

Human Rights Watch, “Burundi: The Gatumba Massacre. War Crimes and Political Agendas”. September 2004.

Brasil, Maravilhosa

The Rio Times’ mission is to provide English speakers with local information, to “improve their understanding of the Cidade Maravilhosa and Brazil”. Its front page has three routes to follow: the World Cup (that ended with the final at the Maracanã Stadium on July 13), the Olympic Games (which will open in Rio on August 5 2016) and, between them, current news. And foregrounding the news is an editorial on education.

I was in Brazil to build on our university partnerships and the recent increases in our enrolment of Brazilian students at Salford. And in this restless country of contradictions the demand for education, at all levels, is everywhere. I was last here just over twenty years ago, mostly in the north and looking for traces of the period of Dutch colonization at Mauritsstad, now Recife. Between 1630 and 1654, the Dutch had revelled in the exoticism of Brazil until they were displaced by the Portuguese. This complex interplay of colonialism, the legacy of slavery and rich and varied indigenous culture has resulted in a famously complex and diverse contemporary country that covers half the landmass of South America with a population growing past 200 million.

Back in 1994, Brazil was a different country. Civilian governments had been in power for less than a decade and Fernando Henrique Cardoso had only just launched his Plano Real, the basis for economic reform. Recife and other cities were decrepit, the evidence of poverty everywhere. Since then, and through the long period of economic growth led by the Lula government after 2002, there have been significant reductions in poverty and Brazil’s middle classes have expanded by some 40 million people, pushing in turn the services sector to some 60% of overall GDP.

It’s not surprising, then, that education is an obsession in Brazil today. Marta, a journalist from a magazine focusing on the economy, tells me that, apart from football, all middle class Brazilians talk about is education and healthcare. And the two are closely related, because education is key to better paying jobs that can ensure a reasonable quality of life.

A year ago, Brazil’s growing economic confidence was severely shaken by waves of protest that started with objections to increases in public transport costs but which widened into opposition over spending for the World Cup in the face of other priorities. Stone Korshal, Editor of The Rio Times, writes: “Brazilians learned they can spend billions on stadiums soon to be forgotten, many of which opened in incomplete conditions, and reinforced a culture of overspending, under-delivering, and distracting the masses with the circus. Well, now it’s over, it’s winter in Rio, and life is moving on. Most are licking their wounds and trying not to think too much about the 2016 Olympics, not yet.”

Marta sees younger Brazilians like her as anxious about this future. While there are jobs, they don’t pay well without good educational qualifications. Property is expensive and becoming more so, so it’s difficult to buy an apartment. Increasingly, employers want good English language skills and international experience. Public schools are seen as low quality and private education is expensive. There is widespread disillusionment with government policies.

Stone Korshal sees the lack of education opportunities as a major factor in the June 2013 protests – Brazil’s equivalent of the Arab Spring and the Occupy movement of Europe and North America. So too does President Dilma Rousseff’s PT (Workers’ Party) government, which has launched a new and comprehensive education policy that stresses access to basic education and the quality of the country’s public schools. It is intended that, by 2024, this National Education Plan will see 10% of Brazil’s GDP committed to education. According to World Bank figures, this would give Brazil the highest proportional education expenditure in the world; the UK currently spends 6.2% of GDP on education, and the US 5.4%. And last year, Brazil’s Congress passed legislation that earmarks 75% of petroleum royalties for education.

Continuing to interview my interviewers, I ask Fabrico, a journalist with a business newspaper, what he thinks about access to Brazil’s elite federal and state universities. Fabrico, who I guess is still under 30, has had the opportunity to study in Paris and this has clearly shaped his view of the world and of Brazil.

Public federal and state universities have high entrance standards and do not charge tuition fees, and so every place is fiercely contested. Given the extent of inequality in Brazil, and the direct link between access to quality education and the country’s history of colonialism and slavery, public universities are expected to attain significant and challenging affirmative action quotas. But because the state-run schools are so bad, success in entrance examinations requires high-cost private schooling that is inaccessible to the poor. Not surprisingly, Fabrico explains, there is widespread cynicism and disillusionment about commitment to change, which young Brazilians see as cosmetic, doing little to give them the opportunities that they crave.

As with the Arab Spring two years earlier, Brazil’s 2013 protests revealed the power and potential of social media. Brazil is one of the most urbanized countries in the world, with more than 80% of the population living in massive cities such as São Paulo (19.7 million), Rio de Janeiro (11.8 million), Belo Horizonte (5.4 million) and Porto Alegre (3.9 million). This, combined with the fact that half the population is under the age of 30, is fertile ground for social media. Brazil is a Facebook nation, with over 29 million subscribers and the highest current annual growth in sign-ups in the world, at 9.2%.

Manuel Castells, writing about the Occupy movement, sees social media as enabling “counterpower”, “the capacity of social actors to challenge the power embedded in the institutions of society for the purpose of claiming representation for their own values and interests”.

As Adalberto Müller, Professor of Film Studies at the Universidade Federal Fluminense in Rio, explains to me, the 2013 protests and social media have crystallized the alternative to traditional party politics in post-dictatorship Brazil. The conundrum has been that alternatives to Dilma Rousseff’s leftist Workers’ Party are unpalatable, but her government has been tainted by political scandals and the insensitivity of prestige spending on the World Cup in the face of pressing needs for basic facilities and infrastructure. As across the Middle East in 2011, social media provide the means for alternative forms of political association – what Castells calls “rhizomatic”. Adalberto gives the example of Mídia Ninja, crowd-sourced journalism that is now an alternative to established media in Brazil, widely seen as censored by the state. 2014 is an election year in Brazil and, while the Workers’ Party is currently seen as on track to retain power, many young Brazilians are expected to spoil their ballot papers in protest (voting is compulsory).

With inadequate access to federal and state universities – together, they take only about 20% of Brazil’s students – demand is driving a vibrant, varied and expensive private Higher Education sector across Brazil. The country is now the fifth largest education market in the world and by next year there will be more than 10 million enrolled students. Currently, 2378 institutions are recognized by the Ministry of Education, but only 190 have the title “universidade” (101 are publicly funded and 89 are private and fee paying). The remaining 2188 institutions are private, many for-profit and part of a vigorous market in mergers and acquisitions, with increasing foreign investment. Quality and fees are variable (up to the equivalent of £1000 per month) and each offers its own diploma in its particular area of specialization. With families prioritizing investment in education, the risk of de-registration by the Ministry and the confusion that private sector qualifications bring to the job market, it is no wonder that young Brazilians are anxious about their futures, and sometimes angry.

In this world, nothing is taken for granted; Brazil’s new generations of students cannot afford the listless ennui of aging democracies, where access to education is seen as an entitlement. This is evident when we visit the Universidade do Estado do Rio de Janeiro (UERJ), a brutalist concrete construction of stairways and precipitous facades alongside the iconic Maracanã Stadium (the site of Germany’s bitter triumph just ten days earlier). Although it’s after seven and a wet winter night in the vacation, students are everywhere and books are laid out for sale on the street, under protective umbrellas. Across the wall of the cramped campus bookshop, which doubles as an espresso bar, is a quotation from Paulo Freire: “um professor que não exerce a curiosidade está equivocado” (“a teacher who does not exercise curiosity is mistaken”). UERJ has more than 26 000 students, some 2 300 academic staff and about 5 000 graduate students. As Rio’s state university, UERJ leads in research fields from Energy, Engineering and the Environmental Sciences to Politics, Media and the Social Sciences.

Our host is Erick Felinto, Professor of Media Studies. After a seminar on Media Archaeology – part of our collaboration with UERJ in the area of media and music – Erick takes us to Nova Capella, one of the oldest restaurants in Rio’s Lapa district and an eclectic mix of traditional wall tiles, a pink studded ceiling, contemporary art and religious iconography. Some of Erick’s graduate students join us. One is mid-way through her Masters course. She holds down two jobs and takes classes at UERJ in the evenings. Another is wrestling with German media theory, central to his doctoral dissertation. Erick is working on a translation of Friedrich Kittler’s work on media technologies from the original German directly into Portuguese. This, he explains, will avoid some of the errors that have been made in the prevalent English translation. The book will be published by UERJ’s own academic press. Overall, this feels like a university should, with the political edginess, urgency and intellectual passion of the traditional campus.

Despite the polarization of the 2013 protests and the shift of the established electorate to the right, making Dilma Rousseff’s re-election later this year far from a certainty, the current government seems determined to address Brazil’s continuing hunger for education. The new National Education Plan will tackle state funded basic education; there are still an estimated three million school-aged children in Brazil who never attend a class. At the other end of the pipeline, the state-funded Science Without Borders programme is paying for 101 000 young Brazilian students to study abroad (to date, 8657 are studying at universities in Britain, including ours); in June, Science Without Borders was extended with an additional 10 000 scholarships.

It seems clear that young Brazilians’ determination to gain qualifications and engage with the world continues to grow as a determining force; with India, China and African countries, Brazil will shape a “southern century” of economic growth, politics and new cultural and intellectual syncretisms. The contradictions this will continue to bring were well expressed in the ambiguities of the World Cup: the protests about investment at the expense of other priorities, and then the dismay of defeat; the passion for the game in the face of extreme inequalities.

Brazil’s Paulo Freire again: “Knowledge emerges only through invention and re-invention, through the restless, impatient, continuing, hopeful inquiry human beings pursue in the world, with the world, and with each other.”


Stone Korshal, Editorial: “The Learning Process”, The Rio Times, 22 July 2014.

Manuel Castells, Networks of Outrage and Hope: Social Movements in the Internet Age. Cambridge: Polity Press, 2012.

Paulo Freire, Pedagogy of the Oppressed, 1968.

Jonathan Watts, “Brazil’s ninja reporters spread stories from the streets. Band of volunteer citizen journalists are setting the news agenda with their ‘no cuts, no censorship’…”

Back to the Curriculum

Opening Address, Forum for Access and Continuing Education (FACE) Conference, July 2 2014, Salford, Manchester

How do we defend access and continuing education in the emerging, “post-crash” settlement?  Before things went wrong in 2008, Higher Education in Britain was awash with cash, steering towards an overall participation target of 50%.  Five years later, as the economic recovery gains momentum,  there is little talk of going back to these objectives.  Instead, there is inexorable motion towards a new binary divide, whether from the left or the right of the political centre.  And binary divides tend to squeeze access to appropriate opportunity, whether to correct economic inequality or to enable continuing education.

Manchester – Salford – has the history and consequences of binary squeezes written across the cityscape.  This was the heart of Britain’s industrial revolution in the eighteenth century, creating immense economic value.  It’s also the city where Friedrich Engels described the conditions of the working class – basements recently excavated by our Centre for Applied Archaeology.  Salford Quays, connected to the Atlantic trade by the Manchester Ship Canal, was one of Britain’s busiest ports a century ago.  By the 1990s it was a wasteland, a graveyard for employment.  Today, there’s  a billion pounds of investment in one of the largest digital media centres in the world.  This is also a city formed and defined by immigration, from Ireland in the eighteenth and nineteenth centuries, and from everywhere else.  Today, more than two hundred languages are spoken across Manchester and it’s a fair guess that everywhere in the world is represented.  Manchester’s universities have the largest number of international students in Britain; at our university, we have students from well over one hundred countries.

Today – 2014  – feels like 2009.  Then, we knew that cuts were coming, but the general election had to come first.  The Coalition was elected in May 2010 and there were massive protests against the new fee regime in December.  Now, we know that this fee regime is unsustainable, and far more expensive than the system it replaced.  We are waiting for the general election of mid-2015 to find out what the next policy will be.

One extreme, which Labour is working through as the party shapes its manifesto, is the reduction of the fee cap from £9000 to £6000.  The other extreme, which could follow logically from current Coalition policy and the Browne Review, would be the removal of the fee cap, further encouragement of private for-profit providers and the break-up and sale of the student loan book to banks.  Whatever the eventual mix, the outcome is likely to be binary.  On the one side will be a small number of elite universities, either charging high fees or else reducing their undergraduate numbers to offset cross-subsidization.  On the other side will be large, low-cost universities, competing on price to offset fixed costs against scale, either because fees are capped or in competition with for-profits that offer the transactional, minimal requirements to achieve a qualification.

One set of responses to this developing scenario will be – and already is – anger.  But anger has not had a great success rate in recent years.  The student protests of December 2010 were the biggest in years, but the fees went up and the NUS conceded defeat.  The Council for the Defence of British Universities was formed as a broad coalition to defend the idea of the university, but is off the radar (I think I may be the only serving Vice-Chancellor who is a member).  Universities in general come in for criticism from all sides: for failing to adapt quickly enough to disruptive digital technologies; for not developing employer-required skills; for graduate under- and unemployment; for offering funny courses.  As a consequence, anger does not stand up well in comparison to other demands on public spending, such as local government services, public transport and, particularly, the NHS.

More specifically, with regard to access and continuing education, we still find it difficult to show how we use resources – funding – to bring about a demonstrable effect.  Contrary to expectations, while anticipation of the new £9000 fee level led to a surge in applications and admissions in September 2011 and a matching and sharp dip for September 2012 when the new regime was first applied, applications have recovered and are stronger than ever for September 2014.  And, if anything, the proportion of new admissions from low-income households has increased.  In contrast to secondary education, we do not use measures of added value, which would compare statistically predicted graduate outcomes with actual results and would allow a comparison of benefits against socio-economic circumstances.  There do not seem to be evident, compelling arguments against funding changes, such as the recent removal of support for students with disabilities, or the continuing threat to HEFCE’s student opportunity funding.  This fuzziness does not make for good public policy arguments.

An alternative to anger is to get better at demonstrating success and, through this, to become more successful in enabling access.  One approach here is to return to the curriculum, in its broadest sense.

By curriculum – as I don’t have to tell a group such as FACE – I don’t mean what’s in the prospectus or the rules and regulations that the QAA looks to for assurance.  I rather mean the whole structure of the student experience, ranging from technical mastery of new knowledge through to the excitement of discovery, the empowerment in learning and the contexts in which new understanding is applied.  And I want to emphasise two generic aspects of curriculum that, I believe, can contribute to improving access:  recognition, and extended progression.

Recognition.  One of the tacit assumptions behind a cluster of policies that, together, shape the dominant view of access is that of correcting deficit.  Whether talking about widening participation in terms of low income households or socioeconomic category, advancing equality, or embracing diversity, there is often an implication that “marginalized groups” need assistance to “catch up”.  Rarely, though, is the objective specified or how attainment is to be measured; catch up with whom, and against what criterion for success? And we know we must be in trouble when everyone from UKIP to Labour is committed to “social mobility”.  In essence, overuse has made these concepts empty ciphers, of little practical help in shaping either policy or practice.

An alternative way of looking at this, and the focus of a thoughtful and productive line of research and writing over recent years, is rather to think and act in terms of “recognition”.  I tend to think of this in terms of the continuing value in the concept of experiential learning, the insight that stepwise changes in our acquisition and use of knowledge come from moving through cycles of immersion in experience – whether in the workplace, the laboratory or the richness of sources – and reflection, when we step aside and apart, usually with others, to generalize, share, reflect and theorize.  When we do this, we find that diversity becomes a significant asset, bringing experiences and perspectives into unexpected and productive juxtaposition.  Rather than seeing some members of a class as marginalized or lacking, this gives status and dignity to their knowledge through recognizing the value that they bring into the university. Anyone who has been thoughtful about their experience of teaching at a university will have also experienced this.

Recognition, though, also has a deeper and more profound value.  To recognize the legitimacy of the experiences, perceptions and perspectives that a far wider range of people bring into the university is to broaden the epistemic framework of knowledge itself.  We are quite good at recognizing this retrospectively.  For example, in the nineteenth century novels were seen as too trivial to warrant any place in the curriculum and were widely associated with the frivolity of women.  Today, this literature is the canon, staunchly defended as a bulwark of standards of learning and scholarship.  It’s a fair bet that, in fifty years’ time, new ways of knowing that are considered marginal today will be similarly defended as mainstream, and that our successors will look back at us with the same mild and condescending derision that we apply to aspects of the Victorian curriculum.  To recognize the value in the new and the marginal is to get a glimpse of the future, and to benefit from this.

Extended progression. Here, I mean a priority on improving the articulation between all aspects of post-compulsory education, whether this is Further Education, Higher Education, apprenticeships, or work-based learning programmes such as Employer Owned Skills.

It is surprising how many people believe that there is a simple relationship between A-levels and university study; in fact only about half of all British university students have A-levels and, in many universities, students with A-levels are in the minority.  As far as access is concerned, the A-level route is comparatively uncomplicated, with a single major point of contention: whether or not “top” universities should allow “contextual admissions”.  In other words, should the undeniable relationship between socioeconomic circumstances and attainment levels at A-level be taken into account by those universities with the most competition for places?  While this is certainly important, it is conceptually simple.

Improving the articulation between vocational courses and the university curriculum is more complex and, I would suggest, far more significant in potential for improving both access and continuing education. There is clearly a need for far more coherent pathways through the plethora of post-compulsory qualifications other than A-levels and their assessment systems.  While A-levels are widely understood, and their standards a matter of recurring national angst, there is almost no public debate about BTECs, about how they are moderated or that they are offered by a for-profit company.  Most people who teach in universities, or their children, are unlikely to have studied for them and, I suspect, comparatively few people know what the qualification’s acronym stands for.

Work across the divide between BTECs offered at Level 3 in Further Education and the first year of undergraduate study at university yields rich rewards in access and progression.  Extending this further, by designing curricula in which learners can continue from a BTEC and into a Higher Education diploma at Level 4 in their college before having the opportunity to enter university at Level 5 will widen access, with subsequent retention, further.  We are one of a number of universities currently working with partner colleges to explore the opportunities in new pathways such as these.

Getting back to the curriculum by engaging with issues such as these will, I believe, demonstrate more convincingly how important access and continuing education continue to be, both in terms of public goods and in personal benefits.  For public good, we surely need clear and diverse pathways that equip more people for the continually emerging needs of the economy.  To recognize ever-new ways of looking at the world is to anticipate emerging, new knowledge.  For personal benefits, we need to provide prospective and present students with clear ways of imagining their future selves, so that they can make the most appropriate choices to realize their capabilities.

Thank you for joining us here, in Salford, and for the richness of discussion and inspiration that always distinguishes FACE’s annual conferences.


Forum for Access and Continuing Education (FACE) Conference, University of Salford, 2-4 July 2014.  “FACE as an organisation facilitates the exchange and dissemination of information and practices in lifelong learning and continuing education. By promoting collaboration and innovation between providers and practitioners FACE aims to support and encourage a socially inclusive framework for lifelong learning, challenging exclusion and fostering full participation”.

Angel Meadow

Eddie is dead.  It’s not quite clear how or why, but the gangsters who hang about the street corners in this part of Ancoats seem implicated and ready to do it again if they’re crossed.  The chic woman who met the eight of us in the Square for a bargain renovation opportunity has gone.  And I’m being questioned by a hyped-up thug with his stubbly face six inches from mine. We duck in through a door and up some narrow stairs.  Now I’m left in a disheveled bedroom with a girl in a red dress who believes she has seen the devil, with the head of a pig and drinking bleach.  I don’t know what to say so she grabs my wrist and I’m downstairs in the pub with someone determined to teach me snooker.  There are some others from our group who were duped in the Square here, and one has decided to stare out our unanticipated companions, expressionless.  This seems like a good tactic to me, but Eddie’s girl pulls me into the gents and opens up her handbag.  She wants me to touch up her lipstick.  Why am I doing this?  Then we’re back in the pub with the others and the devil really is here, with a formidable pig’s head and a bottle of bleach.  He stares us out, one by one, and we are transfixed.  Then someone leaps onto the bar and douses himself with paraffin, lighter ready.  The bar keeper hurries us out into the street and slams the door closed after us.  The eight of us wander off, dazed after 55 minutes in a world we did not anticipate, and still don’t understand.

I don’t do interactive theatre, and try to avoid the front row in case an actor sits on me, or needs to pull a reluctant member of the audience onto the stage for some entertaining humiliation.  But Angel Meadow is so immersive and overwhelming that it makes the very concept of “audience” redundant.  This is way beyond “site-specific”, more a demolition job on any sense of order or expectation.  The eight of us are like the survivors  of an air crash, forced to be together in a ghoulish reality show.  Within a few minutes of being dragged into this world we are beyond humiliation and embarrassment.  I’m glad I went alone, and don’t know the other seven.

This loss of dimension is accentuated by the confusion of time and place: instead of the perspective and positioning of theatre, this is a kind of free fall.  Time veers between the nineteenth century scuttle gangs of this part of Manchester, a juke box and taunts about Facebook.  The rooms around the seedy bar are a confusion of narrow stairways and passages, domestic debris and dirty windows.  Disorientation is comprehensive.  There is no clear narrative and nor should there be.  Disorientation is the point.

Participants will take away from Angel Meadow what they want.  I understand cities – their creativity and their terrors – as fault-lines, as sets of contradictions that run through the fabric of buildings and spaces and through social relations.  Engels was one of the first to write about Manchester in this way and the quality of his prose rings true more than one and half centuries later.  Today, life expectancy varies by ten years across the city and there is substantial poverty.  Inner city gentrification – the promise of apartment living offered us as we arrived at Angel Meadow – must always work to hide this.  Our fifty-five minutes of free fall into the stairwells and corners behind the façade was a sensory experience of a fault line: of its sounds and smells, materiality, and snatches of words and bits and pieces of people’s stories.  There are suggestive possibilities here, for performance as active intervention, for provoking an awareness of issues in ways that conventional politics can no longer touch.


Angel Meadow.  Directed by Louise Lowe, ANU Productions, Dublin; 10-29 June 2014, the inaugural production by Theatre HOME, Manchester.

Democratic innovation and the university

Northern Ireland today, says Stephen Farry, is a case study of “peace as pursuit of war by other means”. Farry, who is Minister for Employment and Learning in the Northern Ireland Executive, sees a central role for universities in moving beyond a society divided by religious sectarianism and towards a new form of responsible politics. Majoritarianism, he says, must be balanced against minority interests and human rights if there is to be reconciliation.

Farry was the opening speaker at the Higher Education for Democratic Innovation Global Forum at Queen’s University Belfast. This brought together universities and associated organizations from North America and Europe to take a fresh look at community engagement in an ever more divided world.

Tony Gallagher, who co-convened the forum, describes how Northern Ireland’s universities had stood apart during the worst years of violence. Today, he says, there is an urgency for engagement, a need to work for new forms of democracy that balance single-issue politics with practical and workable forms of social inclusion. One project at Queen’s that works to this end is the Prison to Peace programme. Jacki McDonald, one-time political prisoner and now a Community Development Worker with Prison to Peace, described passing the Queen’s campus on the bus during the Troubles, never seeing the university as a real part of Belfast and the sectarianism that was tearing the city apart. Now, he sees access to education both as a key cause of Northern Ireland’s divisions and as essential to eventual reconciliation.

The quiet intellectual force behind the Belfast meeting, and its co-convenor with Queens, was Ira Harkavy, Director of the University of Pennsylvania’s Netter Center for Community Partnerships and, for the past twenty years, one of the most influential thinkers about universities and the communities they serve. I first worked with Ira more than ten years ago, as part of a joint US/ South African programme to rethink the relationship between the university and the city. By this time, Penn had had a remarkable effect on West Philadelphia, using its economic muscle to invest back into the city and to counter the hollowing out of the urban core. As part of our work with the Center, we went to East Palo Alto, a low-income, working class city that services Stanford University and Silicon Valley. I well remember the sharp anger of a school principal who had banned researchers and dissertation students from her classrooms. She had had enough, she said, of her school being studied as a social problem. Instead, she wanted engagement and commitment to the issues on her priority list. Ira Harkavy’s work is all about this sort of approach to social innovation, building on John Dewey’s formative insight that democracy requires careful and deliberate attention to the living relations of person to person in all social forms and institutions.

Then, as now, South Africa so often serves to calibrate these sorts of transnational discussions because its institutions serve as exemplars of the ends of the spectrum. Then, we went to the US looking for ways in which the university in South Africa could engage with the still-prevalent traces of the apartheid city, sharply divided by race and economic circumstances. Speaking at the Belfast conference, Ahmed Bawa, Vice-Chancellor of Durban University of Technology (DUT), outlined subtle but significant changes in the politics of South African reconciliation over the intervening years. Today, he says, 75% of his students are the first in their families to go to university; DUT is a cauldron of social mobility. But at the same time the urgency for engagement has been lost and the culture of “together” has been replaced by a “slide towards individualism”. There is a dwindling sense of education as a public good and, in contrast, an emerging preference for the sort of economic instrumentalism so characteristic of public policy in Europe and North America today. Ahmed wants a reconsideration of the intellectual project of education, a re-engineering in partnership with students.

There are new ideas emerging here, a new concept of the university as an “anchor institution” in the city. Stephen Farry makes the point that, despite a rightly-celebrated peace process in Northern Ireland, most young people’s experience of education before coming to university is of religious segregation; university may be the first time that Catholics and Protestants live and work together. Durban University of Technology cuts across the divided legacy of its city, bringing together young adults whose heritage is the Zulu Kingdom with descendants of the South Asian indentured labourers who worked colonial sugar plantations. The Belfast forum heard from James Harris and Marcine Pickron-Davis from Widener University in Chester, Pennsylvania, the most violent city in America. Here, $8m intended for a security fence to keep the city’s communities off the Widener campus was redirected to founding a Charter School to provide an alternative to a dysfunctional public school system. Across Manchester’s City Region, all our universities bring together young people from the most diverse backgrounds imaginable, from more than two hundred countries; the largest concentration of international students in Britain. John Dewey’s challenge – that we navigate the living relations of person to person in all social forms and institutions – is still pertinent today.

Higher Education for Democratic Innovation Global Forum, Belfast, 25-27 June 2014. Organized by the Council of Europe, International Consortium for Higher Education, Civic Responsibility and Democracy, European Students’ Union and Queen’s University Belfast.