Obama, Netanyahu, Iran, Congress and the Republican Party

Open Democracy News Analysis - 4 hours 36 minutes ago

An intense political battle is going on over Iran on Capitol Hill. Insular Republicans underestimate at their peril international pressures driven by global security concerns.

A State of the Union speech by an American president can normally be compared to a Speech from the Throne by the Queen of England. In each case the executive takes advantage of an opportunity, in a ritualised public context, to outline its legislative and political programme for the coming year. The opposition responds as predictably, the public barely notices and political life rolls on.

Last week’s State of the Union by Barack Obama blew this tradition out of the water, advocating a range of domestic proposals anathema to the Republican establishment. Obama inflamed the Republican right by reaffirming the historic importance of opening up diplomatic relations with Cuba and by declaring his willingness to use his veto to block any move by the Republican-majority Senate to torpedo the nuclear negotiations with Iran, now in a decisive final phase.

Obama, initially elected on a wave of international hope that he would bid farewell to the grim US power politics of intimidation and the threat of military strikes, sadly morphed into an advocate of extra-legal drone strikes, black operations, mass surveillance and US-led military interventions in Libya, Iraq and Syria. He ceased to be a symbol of hope and became a creature of the political establishment. In this State of the Union, almost for the first time since his election in November 2008, he unexpectedly re-emerged as the fighter he once was, refreshingly advocating change.

Before Obama had completed his speech Republican senators were texting their rejections. He had touched on a raw nerve. The visceral hatred which has led many Republicans to strenuously oppose everything Obama has tried to achieve since he became America’s first black president burst out into the open.

It is no longer a closely-guarded secret that, on the night of Obama’s first inauguration, Republicans met to discuss how they could stymie his congressional initiatives. If Hillary Clinton had been elected, the Republican response would undoubtedly have been different.

After this State of the Union Republican leaders appeared on TV, declaring with great self-satisfaction that almost all Obama’s domestic initiatives were “dead”. One US report described John Boehner, speaker of the House of Representatives, as throwing a fit and a tantrum. Emboldened by their majority in Congress, Republicans had expected Obama to dance to their tune.

On the same wavelength: Boehner with Netanyahu when he visited Washington just four months into Obama's first term. Flickr /Talk Radio News Service. Some rights reserved.

Republicans have lost sight of the fact that US public opinion has already shifted significantly since their crushing mid-term victory in November 2014. They have also failed to acknowledge the turnout in that election—at 34% the lowest since 1942.

Obama’s initiatives on immigration and Cuba in particular have been welcomed by America at large. According to the latest Rasmussen opinion poll, his approval rating since the State of the Union is at its highest since mid-April 2013.

The Republicans and their Fox News media managers, who perceive themselves as opinion-shapers par excellence, are already out of step with public opinion. Their bellicose fulminations reflect their increasing isolation from the mainstream. Instead of enhancing Republicans’ 2016 election prospects, they may run the risk of undercutting them.

‘Doing the right thing’

The core issue at the heart of Obama’s contretemps with the Republicans is the possible deal over Iran’s nuclear programme. Whereas until now Republican spokespersons have hidden their anti-Iran agenda by pretending to keep an open mind about the outcome of the negotiations, their leader, Boehner, has declared: “No White House threat will stop us from doing the right thing to protect the US and its allies.” 

Boehner proceeded to say—in the immediate aftermath of Charlie Hebdo—that Islam and Iran posed “grave threats to our security and very way of life”. Republican support for any deal endorsed by Obama’s negotiators is all but ruled out. Because their campaign is co-led by a leading Democrat, the Republicans falsely claim it is bipartisan. It is not supported by the Democratic Party, whose leadership strenuously supports the negotiations.

Boehner’s statement came with his announcement that he had invited Israel’s increasingly controversial prime minister, Binyamin Netanyahu, to critique the ‘P5+1’ Iran negotiations (involving the permanent members of the UN Security Council and the European Union) before Congress and an international audience. Indeed Netanyahu accepted Boehner’s invitation—issued following consultation with the Republican caucus but not the White House—before Obama delivered his speech.

The White House press secretary diplomatically noted that the Republicans had departed from protocol. But an unattributed White House source spoke volumes: “He spat in our face publicly … Netanyahu ought to remember that President Obama has a year and a half left to his presidency, and that there will be a price.” According to the liberal Israeli newspaper Haaretz, Obama has warned Netanyahu to stop urging Congress to back laws imposing new sanctions on Iran.

Nancy Pelosi, the House minority leader, aptly described Boehner’s actions as evidence of hubris. The same could be said of Netanyahu, who also neglected to consult Obama at any stage and could possibly be defeated in the forthcoming Israeli elections—being far less popular in Israel than in Congress. 

His obsessive concern to grandstand was never more evident than during the recent Charlie Hebdo demonstration in Paris. Although the French president, François Hollande, had requested him not to attend, Netanyahu did, forcing himself into the front row—almost next to Hollande—where his security guard apparently manhandled a French cabinet minister.

France may now be tempted not to support Israel in forthcoming votes on the UN Security Council and elsewhere. Two members of Netanyahu’s cabinet have warned that, if he does address Congress, he may be compromising Israel’s ties with the US for the sake of his campaign.

‘Throwing a grenade’

With the EU’s new foreign policy chief, Federica Mogherini, at his side, the US secretary of state, John Kerry, entered the fray by saying that “top intelligence personnel” in Israel had advised a visiting congressional delegation that if additional sanctions were announced “It would be like throwing a grenade into the process”. Although the Israeli government made an unconvincing attempt at damage control, it is generally understood that the Mossad chief, Tamir Pardo, has added his name to those of successive heads of the intelligence agency who have publicly challenged Netanyahu’s policies on Iran. Mossad can see that, if the negotiations were derailed, an already dangerously unstable region would be further destabilised.

Kerry also stressed that the US position on the Iran negotiations reflected the view of key EU allies: France, Germany and the UK. If Congress were to pull the plug on arduous negotiations backed by the US president, the international political fallout would be far-reaching. The US would lose not just face but international credibility.

Meantime, it went almost unnoticed on Capitol Hill that, in its most recent Joint Plan of Action report, the International Atomic Energy Agency (IAEA) found Iran had complied with its obligations under the Non-Proliferation Treaty and honoured its commitment not to expand its nuclear activities. Indeed, successive National Intelligence Estimates by US agencies have found that, since 1994, Iran has not aimed to develop nuclear weapons. Iran’s foreign minister, Mohammad Javad Zarif, has also weighed in, saying new sanctions would “kill” a nuclear deal, with Iran’s Majlis (parliament) taking counter-action.

Before his State of the Union, Obama unsettled some leading Republicans by encouraging the UK prime minister, David Cameron, to approach key congressional players in support of the P5+1. Now that Republicans have invited Netanyahu to speak to both houses of Congress, they cannot object to further intense lobbying by governments supporting the negotiations. Mogherini recently circulated to all members of Congress a letter on behalf of EU foreign ministers: “We have a real chance to resolve one of the world’s long-standing security threats—and the chance to do it peacefully … We have a historic opportunity that may not come again.”

Narcissistic microcosm

If the bubble of US exceptionalism can be pricked through unaccustomed exposure to the views of the international community, this will undoubtedly stir some rethinking in Republican and Democratic ranks. Congress is a narcissistic microcosm of an insular society startlingly ignorant of the outside world. Most American adults have never left the United States. Their elected representatives can only benefit from discovering that, whether they like it or not, they are also part of a global community—and now lack the capacity to ride roughshod over its interests and wishes. 

The political heat is on and Congress has become a crucible, with all kinds of resolutions being crafted by Republicans and their few Democratic allies to achieve the 67 Senate votes required to override Obama’s presidential veto. Although Netanyahu’s speech to Congress was originally set down for early February, it is now scheduled to take place on 3 March, which happens to be the last day of the 2015 American Israel Public Affairs Committee (AIPAC) policy conference.

Mossad can see that, if the negotiations were derailed, an already dangerously unstable region would be further destabilised.

By addressing two very different audiences within three days, amid heightened international media attention, Netanyahu can pose as a polished diplomat before Congress and an articulate street-fighter before AIPAC, which attracts about 14,000 delegates—including as many as two-thirds of members of Congress. At least some may however absent themselves from the conference this time. They can give Netanyahu a hearing on their own turf and if they brave TV cameras and an international audience for a second bite of his poisoned cherry it could be to their political disadvantage. 

Netanyahu could still decide to address only AIPAC. He could get his message across to his key target audience without infuriating almost everyone on whom Israel depends for support. If not, his decision to postpone his US visit for the publicity value of two headline-grabbing speeches, just two weeks before the Israeli elections, could rebound on him and his congressional supporters.

Obama’s team has a golden opportunity to pull out all the stops in lobbying members of Congress. Any inhibitions key overseas supporters of a deal might otherwise have had about interfering in US domestic processes will be swept aside. Members will be hit from all sides by intense lobbying, including from abroad. This is likely to encourage Democratic waverers to play it safe and may encourage some Republicans to vote with their conscience rather than their party. In recent days two key Fox News pundits have sharply criticised Netanyahu’s decision to address Congress, revealing splits at the heart of the Republican camp.

The Senate Committee on Foreign Relations, now controlled by the Republicans, will in the interim hear submissions from two conservative think tanks. But it will also hear from the Republican godfather Henry Kissinger, who has in advanced old age upset Republican apple-carts over Ukraine. Quite unexpectedly, Kissinger will once again find himself in the eye of a major political storm, with nothing to lose.

By inviting Netanyahu to address Congress, the Republicans are encouraging an internationally unpopular leader to undermine a key foreign-policy objective of the US government, humiliate an elected president and undo two years of hard political labour by the P5+1. This may well anger Americans on both sides of the political fence. AIPAC’s teflon façade will be indelibly scratched.

Beginning of the end

If the Republicans are unable to achieve their aim of torpedoing the nuclear negotiations, this will be a massive defeat and a huge loss of political credibility, domestically and internationally. It could even mark the beginning of the end of their presidential campaign for 2016.

But what if they were to succeed in torpedoing the nuclear negotiations? Until now the Israeli/US mantra has been that if Iran does not arrive at a negotiated settlement all options, especially war, are on the table. When the negotiations began, war may have appeared a feasible option, at least to hardliners in Israel and the Pentagon. Mossad has however consistently been opposed.

But now, in the Islamic State (IS) environment, a conventional or nuclear attack on Iran would trigger unpredictable eruptions throughout the Middle East, in the West and elsewhere in the Muslim world. Iran’s hardliners would seize the initiative and its political leadership would be neutralised or thrown out. The Iraqi and Syrian governments would condemn any such intervention. The Saudi and Bahraini governments would be as fearful of public opinion at home as they would be of the US and would try to straddle barbed-wire fences. As for Libya, Yemen, Jordan, Lebanon, Turkey and Egypt, who knows? If the US were associated with an attack on Iran, any Middle Eastern government which allowed US or Israeli aircraft to overfly its airspace or use airbases on its territory would be risking its neck.

IS and al-Qaeda would be immeasurably strengthened and would seize a golden opportunity to profile themselves and their work. In the absence of a friendly understanding between Iran and the US, the fragile anti-IS coalition would disintegrate. Public rage at Israel and Netanyahu would trigger a wave of anti-Israeli activity, including from within Israel and the occupied territories. Hizbullah would have nothing to lose by attacking Israel with its new generation of long-range rockets. 

War with Iran would thus appear to be off the table for the foreseeable future. The lesser option of a diplomatic rupture would almost ensure that international moves to put the genie of Islamist militancy back into the bottle would be stillborn. Given a choice between a negotiated agreement with Iran and a variation on the above scenarios, the international community is likely to favour keeping the lid on the Middle East, at least for the time being.

Related stories: Will America's political discord torpedo the Iran talks? | Endgame: the United States and Iran. Country or region: United States. City: Washington DC. Topics: Conflict, International politics.

TV-2 goes off air

Open Democracy News Analysis - 5 hours 30 minutes ago

The closure of Tomsk’s TV-2 is a reminder of what has happened to regional media in Russia. 

The Russian media industry is currently going through hard times. Federal channels have long ago moved away from fact-based journalism towards open propaganda in support of the government. The Russian authorities understand the power and potential of media all too well; they aim to consolidate all significant resources under their control, and close down independent media’s attempts to present an alternative view of the country and the world.

But it is not only federal channels that suffer; the regions are also under threat. Tomsk’s private regional television company TV-2 turned 24 last year, and is now facing permanent closure. TV-2 is not alone.

Under attack

From legislative measures to indirect pressure via shareholders and founders, there are different ways to influence media, forcing some journalists to leave the profession, and, on occasion, to go ‘underground’.

In spring 2014, Galina Timchenko, the editor-in-chief of Lenta.ru (the most popular news portal in Russia at the time), was sacked at the request of the owner. Most of the Lenta.ru team followed Timchenko out the door; and their new project, the news aggregator Meduza, was launched in October 2014 in Riga, Latvia. Although Meduza now operates from abroad, it remains a Russia-focused news service.

In January 2014, after a scandal broke out in connection with an online survey by independent TV station Dozhd, many cable and satellite operators removed the channel from their packages. As a result, the very existence of the only critical TV channel in Russia came under threat. Now Dozhd broadcasts on a subscription basis via the internet.

Siberia’s media rush

The residents of Siberia’s university town are right to consider their city special, and, for a long time, the local press was another source of pride. At the start of the 2000s, Igor Yakovenko, then head of the Russian Federation Union of Journalists, described Tomsk’s situation as a ‘media anomaly’. But what is so special about Tomsk?

At the start of the 2000s, Tomsk experienced a real media boom. The quality and variety of print, radio, and television media was the envy of neighbouring regions. Ten years ago, a resident of Tomsk – a city of 500,000 – could choose between five or six local news bulletins, competing with one another on different channels. Newspapers, radio stations, and TV channels received professional awards, and several of them were recognised as the best regional media organisations in the country. And, finally, the relationship between Tomsk journalists and the local authorities turned out to be surprisingly open and transparent – something sorely lacking in other regions.

Ten years ago, a resident of Tomsk – a city of 500,000 – could choose between five or six local news bulletins

But all good things come to an end, and Tomsk’s media boom began to falter at the end of the 2000s. Print media began to close for obvious reasons – the expansion of the internet. The television market also changed substantially between 2006 and 2011. Federal network channels – TNT, REN TV, STS – re-established relations with former partners in Tomsk, and re-organised them into local branches.

The federal networks’ intervention cut into local channels’ market share, in some cases wiping it out completely. Moreover, during the late 2000s, Russian media discourse underwent a significant change, and a new generation of journalists with different priorities entered the profession.

Independent success

TV-2 remained, for the most part, the only reminder of the industry’s former glory. Set up in 1990, the channel became part and parcel of everyday life in Tomsk.

Judging by viewing figures, TV-2 regularly found itself among the seven leading channels, breathing down the necks of federal networks. TV-2’s own news programme (Chas pik) and morning channel frequently outpaced federal channels in viewing figures. More broadly, TV-2’s share of the 18+ age band, in 2012-2014, came to 3.82%. In comparison, REN TV – a federal channel – took 4.33% in the same age band.

Over its 24 years of service, TV-2 has received an unprecedented (for a regional channel) 22 TEFI awards from the Russian Academy of Television – the highest award available.

But apart from recognition by viewers, TV-2 has also been successful commercially. Total earnings from television advert sales in Tomsk reached approximately 250m roubles (£2.5m) in 2014, of which 40m roubles (£400,000) went to TV-2 – far higher than any individual federal channel.

In addition to its commercial success, TV-2 was also a public broadcaster. Under difficult circumstances, one could ‘ring into TV-2’ in search of justice or action from the authorities or local services. Aside from informative programmes, the channel also produced high-quality entertainment.

‘TV-2 held out so long in an environment that is increasingly hostile to a free press for several reasons,’ explains the editor-in-chief Viktor Muchnik. ‘In the first place, our economic model was market-oriented. Our main source of income was always advertising revenues. We had a very strong sales record, and our contracts with the state did not interfere with our independence. This situation allowed us a certain amount of freedom when it came to editorial policy. Secondly, we wanted this kind of editorial policy, and the people who worked here shared those moral values. Thirdly, we made a name for ourselves, and this made it harder to put pressure on us. Fourthly, we had a certain amount of understanding with the local authorities. Yes, the former governor Viktor Kress would get angry with us for certain stories, but he still understood that Tomsk needed this kind of TV station. And lastly – and this is the most important – we had our own audience; loyal, large, and varied. This audience gave us our commercial success and our political independence.’

Off the air

On 19 April 2014, TV-2 suddenly went off the air. The cause was technical: the feeder unit, which transfers electromagnetic waves from the source to the antenna, broke down. This malfunction was foreseeable – the unit was installed back in 1968.

On 21 April 2014, Tomsk’s regional radio-television broadcast centre informed TV-2 about the causes of the malfunction as well as its intention to replace the broken unit before 15 July.

‘Our offer to buy and deliver a feeder ourselves was refused. This could have been done in a couple of days,’ says Muchnik. ‘They didn’t present us with a repair schedule. The company broadcasts only via cable, and we were losing advertising contracts. We began to suspect that the sticking point wasn’t the malfunction. Gradually these suspicions were confirmed as our sources got in touch, but for the time being we kept shtum.’

A month after the feeder broke down, TV-2 received a warning from Roskomnadzor, Russia’s federal press watchdog. The break in broadcasting was a violation of licence conditions, and so grounds for TV-2’s licence to be revoked. Roskomnadzor ordered that broadcasting be resumed before 20 May 2014; as a result, the TV station found itself in a bureaucratic trap.

‘It turned out that one state body, RTRS [Russian Television and Radio Network – the state monopoly on communications provision], was dragging its heels on restoring us to the air, while another, Roskomnadzor, was threatening to revoke a licence – not that of the local broadcasting service, but of the TV station. We decided to go public with the conflict, and went to court,’ Muchnik comments.

‘Pickets and demonstrations in support of TV-2 began in Tomsk. The scandal broke all over Russia. Colleagues and a few political bodies came out in our support. And after all of that, the feeder suddenly started working a few days later.’

Bureaucratic pressure

The story with TV-2 took an unexpected turn at the very end of 2014. In the autumn, the TV station applied to extend its broadcast licence, which was due to end on 8 February 2015, and received confirmation of a ten-year extension from Roskomnadzor.

However, at the end of November, TV-2 received another letter from RTRS. This letter stated that the current contract for broadcast services was cancelled, with effect from 1 January 2015.

‘Our behaviour in the spring figured as a cause. According to RTRS, by going public with the conflict, we “offended” them,’ says Muchnik. ‘Roskomnadzor then proceeded to inform us that the order to extend TV-2’s licence was, so to speak, a “computer malfunction”’.

Indeed, the official press release published on the website of the Tomsk branch of RTRS states that ‘providing services in broadcasting television and radio in all the Russian regions, RTRS respects and values each of its partner broadcasters […] But in this case, and we are confident in this, the management of TV-2 has exceeded the boundaries of what is permissible between business partners.’

Cancelling the contract with RTRS signals an end to operations at TV-2. Without access to analogue networks, it could continue to broadcast on cable networks, but when the licence agreement runs out, it will lose the ability to broadcast. And so once again the Tomsk channel found itself between a rock and a hard place.

A rearguard action

On the verge of shutting down, TV-2 began an intensive support campaign in Tomsk. Complaints about RTRS’ actions were sent to official bodies – the Federal Anti-Monopoly Service, General Prosecutor’s Office, the Union of Journalists, as well as human rights and business organisations. The Mayor of Tomsk and the regional governor also received requests to intervene in the situation.

In December, Tomsk saw numerous demonstrations in support of TV-2. Around 4,000 people came out to defend TV-2 in temperatures of -16°C. Moscow also saw a protest in support of the TV station, and more than 20,000 signatures were collected in support of the channel.

However, neither appeals to official bodies, nor protests had any effect. Overnight, on 31 December, the Tomsk regional broadcast centre took TV-2 off air.

In December 2014, Tomsk residents came out in freezing temperatures to defend TV-2. (c) Anna Fofanova.

For Viktor Muchnik, ‘the actions of RTRS are a direct violation of anti-monopoly legislation. But we will have to prove that in the courts. It takes a long time. Meanwhile, analogue broadcasting was cut off on 1 January, and on 8 February, TV-2 will stop broadcasting on cable when its licence runs out. As a result, the channel will be destroyed. Our team has already been informed of the cuts. A minimum of 100 people will lose their jobs. Of course, TV-2 will not disappear from the airwaves completely. We have a website, and we intend to develop it further. We will try to work with crowd-funding, grants, and subscriptions. We will develop our online sales. But we can’t keep the team as it was with these resources. With digital media in the regions, you can maintain a team of 10-15 people. Most people are going to have to find a new job.’

The future of regional television networks is murky, at best. Independent editorial policy is going to be a thing of the past. If even a strong regional channel such as TV-2 can be forced to close so quickly under bureaucratic pressure, then what will happen to others? Especially under the current economic conditions, with the Russian television market down 5-7% since 2013.

Preserving TV-2, keeping its workforce, and paying wages is a priority. And, of course, the ability to ‘negotiate’ with the authorities is also a necessary part of doing business. In these pressurised conditions, however, the regional press is likely to become yet another tool of the authorities.

Standfirst image via tv2.tomsk.ru. 

Related stories: The rouble crisis in Siberia | Russia's new media law | Samara: ripples from the Ukrainian storm. City: Tomsk. Rights: CC BY-NC 3.0.

Campaigning at home is the route to tackling poverty abroad

Open Democracy News Analysis - 6 hours 56 minutes ago

Tax avoidance costs developing countries billions every year. So this week 16 domestic and internationally focused organisations have joined forces to launch a campaign for a Tax Dodging Bill.

When I mention in the course of a conversation that I work for an international development charity, I often get an enthusiastic response, saying how rewarding it must be to help dig wells, paint schools and hand out mosquito nets.

The truth is that I do find it rewarding – but the way my work contributes is probably somewhat different from the popular perception. Most of my work is office-based, researching the causes of poverty and inequality, and working with others to identify solutions and present those in a way that can lead to change.

Aid can change lives – there is no doubt of that. Debt relief, where it has happened, has led to increased social spending. But the most sustainable and effective form of long-term finance for tackling poverty abroad (or for that matter in the UK) is progressively collected, accountably distributed tax.

The trouble is that—at home and abroad—billions of pounds that could be spent on infrastructure to provide a better life for all is instead lining the pockets of the rich. Nor is this going unnoticed. In a recent poll, a huge majority—85% of people—said that tax avoidance by large companies is morally wrong even if it is legal, figures that hold across the political spectrum. Conversely, only one in five people believe political parties have gone far enough in their promises to tackle tax avoidance.

The shift in public mood, coupled with campaigning so far, has prompted some significant but nevertheless incremental progress; last year, for example, the OECD proposed that companies should collect data on their taxes and profits in each country in which they operate. Unfortunately it did not go so far as to oblige them to make this information public, thereby maintaining the shroud of secrecy. 

The Government has also introduced legislation which would make it publicly known which individuals own which companies, helping to identify the kind of ‘shell companies’ that can be used for corruption and tax dodging. Unfortunately this has not yet been extended to the UK’s Overseas Territories.  

Then most recently the UK Government announced a Diverted Profits Tax (AKA “Google Tax”) in December 2014, but once again, it is still weakened by what we’re calling the ‘Luxembourg Loophole’ for loan arrangements, and it seems unlikely that it would stop the kinds of tax arrangements exposed by last year’s “Lux Leaks”.

So there is a long way to go, and to ratchet up the pressure, 16 domestic and internationally focused organisations have this week joined forces to launch a campaign for a Tax Dodging Bill, packaging together a series of measures to help tackle tax dodging in its various manifestations and to challenge the provision of unjustifiable tax breaks for large companies.

As we have been doing for many years, we’re still calling for public disclosure of country-by-country reporting, as well as a review of corporate tax breaks, a tightening up of anti-tax haven rules and a heightening of penalties for those who don’t play by the rules. We estimate that billions could be returned to developing countries as a result, and £3.6 billion for the UK - the equivalent of £600 for every household living below the poverty line.

And the great thing is that you don’t have to fly to the other side of the world to help dig those wells, paint those schools or hand out those mosquito nets. In fact you don’t even need to have any skills in those things, because outside of humanitarian emergencies, providing health and education infrastructure is best led by the governments of the countries concerned rather than international visitors.

What we can do is help those countries recoup the tax they are due, by acting on the knowledge that the best type of charity is justice.

Catégories: les flux rss

Campaigning at home is the route to tackling poverty abroad

Open Democracy News Analysis - il y a 6 heures 56 minutes

Tax avoidance costs developing countries billions every year. So this week 16 domestic and internationally focused organisations have joined forces to launch a campaign for a Tax Dodging Bill.

When I mention in the course of a conversation that I work for an international development charity, I often get an enthusiastic response, saying how rewarding it must be to help dig wells, paint schools and hand out mosquito nets.

The truth is that I do find it rewarding – but the way my work contributes is probably somewhat different than the popular perception. Most of my work is office-based, researching the causes of poverty and inequality, and working with others to identify solutions and present those in a way that can lead to change.

Aid can change lives – there is no doubt of that. Debt relief, where it has happened, has led to increased social spending. But the most sustainable and effective form of long-term finance for tackling poverty abroad (or for that matter in the UK) is tax: progressively collected and accountably distributed.

The trouble is that, at home and abroad, billions of pounds that could be spent on infrastructure to provide a better life for all are instead lining the pockets of the rich. Nor is this going unnoticed. In a recent poll, a huge majority – 85% of people – said that tax avoidance by large companies is morally wrong even if it is legal, figures that hold across the political spectrum. Conversely, only one in five people believe political parties have gone far enough in their promises to tackle tax avoidance.

The shift in public mood, coupled with campaigning so far, has prompted some significant though incremental progress: last year, for example, the OECD proposed that companies should collect data on their taxes and profits in each country in which they operate. Unfortunately it did not go so far as to oblige them to make this information public, thereby maintaining the shroud of secrecy.

The Government has also introduced legislation which would make it publicly known which individuals own which companies, helping to identify the kind of ‘shell companies’ that can be used for corruption and tax dodging. Unfortunately this has not yet been extended to the UK’s Overseas Territories.

Then, most recently, the UK Government announced a Diverted Profits Tax (the so-called “Google Tax”) in December 2014, but once again, it is weakened by what we’re calling the ‘Luxembourg Loophole’ for loan arrangements, and it seems unlikely that it would stop the kinds of tax arrangements exposed by last year’s “Lux Leaks”.

So there is a long way to go, and to ratchet up the pressure this week 16 domestic and internationally focused organisations have joined forces to launch a campaign for a Tax Dodging Bill, packaging together a series of measures to help tackle tax dodging in its various manifestations and to challenge the provision of unjustifiable tax breaks for large companies.

As we have been doing for many years, we’re still calling for public disclosure of country-by-country reporting, as well as a review of corporate tax breaks, a tightening of anti-tax haven rules and heightened penalties for those who don’t play by the rules. We estimate that billions could be returned to developing countries as a result, and £3.6 billion for the UK – the equivalent of £600 for every household living below the poverty line.

And the great thing is that you don’t have to fly to the other side of the world to help dig those wells, paint those schools or hand out those mosquito nets. In fact you don’t even need to have any skills in those things, because outside of humanitarian emergencies, providing health and education infrastructure is best led by the governments of the countries concerned rather than international visitors.

What we can do is help those countries recoup the tax they are due, by acting on the knowledge that the best type of charity is justice.


Crime and punishment in Armenia

Open Democracy News Analysis - 8 hours 44 minutes ago

After a Russian soldier emerged as the prime suspect in the murder of an Armenian family in Gyumri, Armenia, last week, relations between the two countries have become strained.

Early on the morning of 12 January 2015, conscript Valery Permyakov is thought to have deserted the 102nd Russian military base in Gyumri, Armenia, and made his way to the local home of the Avetisyan family. Here, Permyakov allegedly murdered Sergei Avetisyan (53), his wife Asmik (55), their daughter Aida (35), their son Armen (34), their daughter-in-law Araksiya (24), granddaughter Asmik (2), and grandson Sergey (six months). Although there were initial hopes that the grandson would survive following a major operation, the child died on 19 January.

The sheer brutality of the killings and the lack of legal clarity led to protests in the northern city, including demands for the removal of the 102nd military base from Armenian territory. The situation, however, is rather more complicated than these demands can account for, because the 102nd Russian military base at Gyumri is a key element in the regional balance of power, particularly in the unresolved Nagorno-Karabakh conflict between Armenia and Azerbaijan.

Unclear motives

The motives behind the killing of the Avetisyan family are still unclear. While investigators try to establish what exactly led to the murders, local observers look to the suspect’s religious background in search of answers: Permyakov’s father is a pastor in the Church of Faith, Hope, Love, in Balei, Chita. The head of the church in Chita, Andrei Subbotin, confirmed that Permyakov’s father is a pastor there, but would not confirm whether Permyakov himself was a member.

Local observers look to the suspect’s religious background in search of answers

After discovering that Permyakov had left sentry duty in the early hours of the morning, Russian military police started searching for the soldier. They had yet to hear of the murder of the Avetisyan family.

Meanwhile, the Gyumri police were first notified of the murder at midday on 12 January. After a preliminary search of the scene, the Armenian police formed a special team to investigate the crime. With the identity of the main suspect established by a uniform left at the scene, the Armenian police began searching for Permyakov roughly two hours later.

The Russian military had begun searching for Permyakov eight to nine hours prior to their Armenian counterparts, and at midnight on 13 January, Russian border guards arrested Permyakov as he tried to cross the Turkish border. He was then transferred to the custody of the 102nd Russian military base in Gyumri.

At 11am that same day, reports emerged that Permyakov had begun to give evidence, and two hours later, he allegedly confessed to the crime. Later that day, the General Prosecutor’s Office of Armenia stated that, given Permyakov was being held within the jurisdiction of the Russian Federation, there was no question of transferring Permyakov to the Armenian authorities: according to the Russian constitution, the suspect could not be handed over to another state.

Protests

Digital media, however, were reporting the fourth article of the agreement, signed in 1997, regulating the presence of the 102nd military base in Armenia, which states clearly that Armenian laws and institutions take precedence when it comes to offences committed by soldiers serving at the Russian military base. According to this agreement, Permyakov should have been transferred to the Armenian authorities.

A similar demand was eventually put forward by participants in the protests that took place in both Gyumri and Yerevan, which peaked on 14 and 15 January. The demonstrations took place outside the Russian military base, and Russian consulate in Gyumri, as well as the Russian embassy in Yerevan.

On 15 January, protesters clashed with the Armenian police, with more than two dozen casualties on both sides. At the height of the demonstrations in Gyumri, more than 1,500 people marched in protest, and more than 200 in Yerevan. Among the protesters, certain people even went so far as to demand the removal of the Russian military base from Armenia. However, such a radical move was far from the principal demand of the protesters, who were more concerned with the hand-over of Permyakov to Armenian law enforcement.

Guided only by press reports, the protesters were not aware that, in addition to the fourth article of the military base agreement mentioned above, there is also a fifth, which sets out exceptions to the previous article whereby the soldier should be handed over to Russian Federation jurisdiction – in particular, when a military offence is involved. The suspect’s desertion with a weapon, which preceded the alleged murder of the Avetisyan family, is, of course, an offence under military law. Permyakov’s arrest by the Russian military also plays a role in the situation.

Massive protests flare over massacre by Russian soldier. Image by PAN Photo via Demotix. CC

Armenian-Russian relations have been tested before. In 1999, two Russian soldiers opened fire on a Gyumri market, killing two people and wounding many others. In that instance, in accordance with the fourth article of the military base agreement, the arrest and court proceedings were carried out by the Armenian authorities.

A statement by Armenia’s General Prosecutor Gevorg Konstanyan, late on 15 January, only fuelled the fire. Konstanyan stated that he would apply for Permyakov’s transfer to Armenian law enforcement, yet only two days earlier, the General Prosecutor’s office had already stated that they were not even considering applying for the suspect’s transfer.

Later, on 22 January, Konstanyan suggested that, according to current agreements on joint border security, Russian border guards should have handed over Permyakov to the Armenian authorities. Naturally, these contradictory statements did not help defuse the situation.

Contradictory statements did not help defuse the situation.

Confusion and obfuscation

Meanwhile, people involved in the protests – partially under the influence of the press – came up with the following view of events:

1) The Russian military should hand over Permyakov to Armenian law enforcement, but is not doing so.
2) If the Russian military fails to hand over Permyakov, then most likely it is preparing to remove him from the country (or has already done so) and conceal the trial. As a result, Permyakov might avoid serious punishment.

Hence the strange demands of the protesters in Gyumri that they be allowed to visit Permyakov in jail, record that visit, and broadcast it on television.

The above interpretation of the agreement, based only on the fourth article, emerged in press coverage of the situation outside Armenia, starting from the US and Europe, and ending with the post-Soviet space and the Armenian diaspora.

In the end, it was representatives of the opposition and the church that managed to come between the protesters and the police in Gyumri – the regional head of the Armenian Apostolic Church Mikael Adzhapakhyan, and Martun Grigoryan, a member of parliament for the opposition party Prosperous Armenia, who played a decisive role in restraining aggression on the ground. Former chief military prosecutor and member of the Armenian National Congress Gagik Dzhangiryan also took part in stabilising the situation. On 15 January, Dzhangiryan outlined the main points of the Armenian-Russian military base agreement, and gave a clear legal interpretation of the events concerning the killing of the Avetisyan family.

While the parliamentary opposition condemned the crime and expressed its sympathies to the relatives of the deceased, it refused to politicise the situation, and requested that the Armenian law enforcement agencies co-operate with Russian Federation representatives. With the exception of Zharangutyun (Heritage) party, the opposition did not make any demands concerning the hand-over of Permyakov.

At the same time, the Armenian authorities revealed their lack of understanding of the situation. The General Prosecutor arrived in Gyumri four days after the crime was reported. President Serzh Sargsyan and Prime Minister Hovik Abramyan also distanced themselves from events, and started to react only on 19-20 January. 

An olive branch

The situation took a new turn following a telephone call on 19 January between the Russian and Armenian presidents, which took place on the initiative of Vladimir Putin, who requested that Sargsyan communicate his deep regrets to the people of Armenia in connection with the murder of the Avetisyan family.

Armenian and Russian law enforcement bodies have now begun to work more closely together in investigating the crime. After the head of the Russian Federation Investigative Committee, Aleksandr Bastrykin, visited Armenia, issues concerning the future of the investigation and trial were worked out as follows:

1) The investigation will be carried out by both sides.
2) The prosecution’s final statement will be a joint one.
3) The military tribunal will be held publicly in Gyumri by Russian representatives, under Russian law (Permyakov faces life imprisonment).
4) The accused will serve his sentence in the Russian prison system.

The continuing line of conflict

The legal clarity achieved on 20-21 January, as well as the burial of the Avetisyan family, has led to a sense of relative stability concerning the tragedy in Gyumri, although a certain tension remains.

Alongside the tragedy in Gyumri, however, the situation on the line of conflict in Nagorno-Karabakh and on the north-eastern border between Armenia and Azerbaijan has severely deteriorated. On the night of 19 January, for example, Armenian military units intercepted eight attempts to cross the border – far more than the previous record of two attempts in 24 hours.

Baku senses a deterioration in Armenian-Russian relations and is seeking to gain political concessions from Yerevan.

While several Armenian soldiers died during this period, official sources in Azerbaijan have made no comment. One can assume that Baku senses a possible deterioration in Armenian-Russian relations and is seeking to gain political concessions from Yerevan by force.

Apart from the Nagorno-Karabakh conflict, though, Azerbaijan has a far longer land border than Armenia, and is vulnerable to Iran on its southern border and to other countries across the Caspian Sea. The second and fifth corps of the Azerbaijani army patrol the southern border with Iran.

Meanwhile, to the west, Armenia faces Turkey (a de facto ally of Baku), which Yerevan restrains via its military alliance with Russia. While Azerbaijan, with its considerable military budget, is forced to protect its borders, Armenia and the unrecognised Nagorno-Karabakh Republic can concentrate – financially and militarily – on their eastern border precisely because of the 102nd Russian military base in Gyumri. As a result, popular demands for the removal of the 102nd military base from Armenian territory are unlikely to be heeded.

 

Standfirst image: Gyumri massacre victim Seryozha Avetisyan laid to rest. Image PHOTOLURE News Agency via Demotix. CC

Sideboxes Related stories:  Armenia's unhappy New Year Where now for Armenia’s opposition? Rights:  CC by NC 3.0


Churchillism

Open Democracy News Analysis - 10 hours 51 minutes ago

The 50th anniversary of Winston Churchill's death brought forth a spasm of second-rate musings and hagiographical blather about the great man by the London media, but little understanding of the transformation of Britain which, despite himself, he personified. Here, by contrast, is the 1982 analysis of Anthony Barnett.

Flickr/MsSaraKelly. Some rights reserved.

In an attempt to understand the Falklands War, in 1982 I wrote Iron Britannia: Why Parliament Waged its Falklands War (now republished by Faber Finds). It led me to name and then analyse what I called 'Churchillism', in the chapter reproduced below. Today I would thicken the description to include science policy and liberty, as I discuss in the foreword to the new edition, but the central argument holds. Thatcherism was an attempt to replace Churchillism with a Gaullist modernisation, based on the global market and North Sea Oil rather than an interventionist state. It seems to have run its course. Labour may be able to shake off its Blair-Brown embrace of Thatcherism, but it cannot go back to its formative, Churchillist, moment, as it now seems to wish, and it is doomed unless it forges a new grand strategy. But this is another story. Here, it is simply worth noting that half a century after he died, the entire Anglo-British political class and its media seems incapable of understanding the formative history of its political-cultural system, which took place under the sign of Churchill's cigar.

 TO LISTEN to the House of Commons debate on 3 April 1982 was like tuning in to a Wagnerian opera. Counterpoint and fugue rolled into an all-enveloping cacophony of sound and emotion. Britannia emerged once more, fully armed and to hallelujahs of assent (accompanied by fearful warnings should She be again betrayed). A thunderous 'hear, hear' greeted every audacious demand for revenge wrapped thinly in the call for self-determination. Dissent was no more than a stifled cough during a crescendo of percussion: it simply confirmed the overwhelming force of the music.

Later, opposition would make itself heard above the storm. But it was drowned out at the crucial moment. In part this was arranged. As we have seen, scheming took place to ensure a 'united House'. MPs took six days to debate entry into the Common Market in 1973. They went to war for the Falklands in three hours. The result was to preempt public discussion with a fabricated consensus. In the immediate aftermath of Argentina's take-over of the islands, most people could hardly believe it was more important than a newspaper headline about some forgotten spot. Suddenly they were presented with the unanimous view of all the party leaders that this was a grave national crisis which imperilled Britain's profound interests and traditional values. The decisive unity of the Commons was thuggish as well as inspired. The few who feared the headlong rush were mostly daunted and chose the better part of valour. Innocent islanders in 'fascist' hands, the nation's sovereignty raped: it seemed better to wait and let things calm down. The war party seized the occasion with the complicity of the overwhelming majority of MPs from all corners of Parliament. On 3 April there was scarcely an opposition to be outmanoeuvred. The result was that even if one continued to regard the Falklands as insignificant, there clearly was a Great Crisis. Within what is called 'national opinion' there was no room to disagree about that: one had either to concur or suffocate. The Commons united placed British sovereign pride upon the line; and sovereignty is not a far away matter, people feel it here at home just as they identify with their national team in a World Cup competition, however distant. With a huge endorsement from the press, Parliament had ensured that the nation—so we were told—spoke with one voice, had acted with purpose and solidarity and had thus gambled its reputation on a first-class military hazard.

Many trends were at work—consciously or blindly—to prepare for such a moment. But much more important, and what gave the militants the 'unity' essential to their cause, was the general condition that allowed them to succeed so handsomely. It held the Commons in the palm of its hand. It orchestrated the one-nation sentiments of the three geniuses of the occasion—Enoch Powell, Michael Foot and David Owen—who bound Thatcher so willingly to Hermes. To analyse this general condition properly would take a thick book, for it has many symptoms. Moreover the condition is so deeply and pervasively a part of England, so natural to its political culture, that it is difficult to see, impossible to smell as something distinct. Like the oxygen in the air we breathe, and which allows flames to burn, it is ordinarily intangible. Perhaps the Falklands crisis will at last bring the mystery into sight.

To provoke and assist this discussion of the pathology of modern British politics, I will be bold and assertive. Yet it should be borne in mind that I am only suggesting a possible description; one which will certainly need correction and elaboration. First, we need a name for the condition as a whole, for the fever that inflames Parliamentary rhetoric, deliberation and decision. I will call this structure of feeling shared by the leaders of the nation's political life, 'Churchillism'. Churchillism is like the warp of British political culture through which all the main tendencies weave their different colours. Although drawn from the symbol of the wartime persona, Churchillism is quite distinct from the man himself. Indeed, the real Churchill was reluctantly and uneasily conscripted to the compact of policies and parties which he seemed to embody. Yet the fact that the ideology is so much more than the emanation of the man is part of the secret of its power and durability.

Churchillism was born in May 1940, which was the formative moment for an entire generation in British politics. Its parliamentary expression was a two-day debate which ended on 8 May with a crucial division on the Government's conduct of the war. Churchill himself had already entered the cabinet, which remained under Chamberlain's direction. After the hiatus of the 'phony war', an attempt by the British to secure control of Norway had ended in disaster. Although Churchill also bore responsibility for the misadventure, it was Chamberlain who was felt to be out of step with the time. Attlee asked for different people at the helm. From the Conservative back-benches Leo Amery repeated a testy remark of Cromwell's, 'In the name of God, go!'. The Government's potential majority of 240 crashed to 80. In the aftermath Churchill emerged as Prime Minister with, as I will discuss in a moment, the crucial support of Labour to create a new National Coalition. Within days, the war took on a dramatically different form, and then a catastrophic one, as the Germans advanced across Holland and into France. The British army was encircled and the order to evacuate given on 27 May. Through good fortune some 300,000 were pulled back across the Channel and Dunkirk became a symbol not only of survival but also of 'national reconciliation' and ultimate resurgence as it coincided with the emergence of Churchill's coalition.(1)

At that moment Churchill himself was a splendid if desperate enemy of European fascism, while Churchillism was the national unity and coalition politics of the time. Among those who participated most enthusiastically, there were some who wanted to save Britain in order to ensure the role of the Empire, and others who wanted to save Britain in order to create a new and better order at home. But Churchillism was more than a mere alliance of these attitudes. It incorporated imperialists and social democrats, liberals and reformers.(2) From the aristocrats of finance capital to the autodidacts of the trade unions, the war created a social and political amalgam which was not a fusion—each component retained its individuality—but which nonetheless transformed them all internally, inducing in each its own variety of Churchillism and making each feel essential for the whole.

Today Churchillism has degenerated into a chronic deformation, the sad history of contemporary Britain. It was Churchillism that dominated the House of Commons on 3 April 1982. All the essential symbols were there: an island people, the cruel seas, a British defeat, Anglo-Saxon democracy challenged by a dictator, and finally the quintessentially Churchillian posture—we were down but we were not out. The parliamentarians of right, left and centre looked through the mists of time to the Falklands and imagined themselves to be the Grand Old Man. They were, after all, his political children and they too would put the 'Great' back into Britain.

To see how the Falklands crisis brought the politicians at Westminster together and revealed their shared universe of Churchillism, it will help to note the separate strands which constituted it historically: Tory belligerents, Labour reformists, socialist anti-fascists, the liberal intelligentsia, an entente with the USA (which I will examine at greater length, as its legacy is crucial) and a matey relationship with the media.

1. Tory Imperialists

In 1939 only a minority of the Conservative Party supported Churchill in his opposition to appeasement. Their motives for doing so were mixed. The group included back-bench imperialists like Leo Amery—the father of Sir Julian Amery, who spoke in the Falklands debate—and 'one nation' reformers like the young Macmillan. A combination of overseas expansionism and social concessions had characterized Conservatism since Disraeli: a nationalism that displaced attention abroad plus an internal policy of gradualist, paternalistic reform.

Churchill, however, stood on the intransigent wing of the Party. (He had left the Conservative front bench over India in 1931 when he opposed granting it dominion status.) Unlike Baldwin, Churchill had ferociously resisted the rise of Labour, and his militancy in the General Strike made him an enemy of the trade unions until he finally took office in May 1940. Three years previously Baldwin had retired and been replaced by Chamberlain, who was efficient but also aloof and stubborn. He proved incapable of assimilating Labour politicians into his confidence, while he saw the imperative need for peace if British business interests were to prosper. By continuing to exclude the restless Churchill from office, Chamberlain perhaps ensured that he would see the opposite; and indeed Churchill gave priority to military belligerency. Thus Churchill, who had initially welcomed Mussolini as an ally in the class war, became the most outspoken opponent of Nazism, because it was a threat to British power. There was no contradiction in this, but rather the consistency of a Toryism that in the last instance placed the Empire before the immediate interests of trade and industry.

2. Labour and Reformism

As emphasized earlier, it is essential that Churchill and Churchillism be rigorously distinguished. While the man had been among Labour's most notorious enemies, the 'ism' contains Labour sentiment as one of its two major pillars. In terms of Churchill's own career, the transformation can be seen in 1943, when he sought the continuation into the postwar period of the coalition government with Labour. Conversely, the Labour Party's support was crucial in Churchill's accession to power in May 1940. Chamberlain had actually maintained a technical majority in the vote over the failure of the Norwegian expedition; but the backlash was so great that his survival came to depend on Labour's willingness to join his government. It refused, asserting that it would only join a coalition 'as a full partner in a new government under a new Prime Minister which would command the confidence of the nation'. Within an hour of receiving this message, Chamberlain resigned.(3)

It is important to recall that Chamberlain's regime was itself a form of coalition government. At the height of the depression in 1931, Ramsay MacDonald had decapitated the Labour Movement by joining a predominantly Conservative alliance. This incorporation of part of the Labour leadership into a basically Tory government was a triumph for Baldwin, vindicating his strategy of deradicalizing the Labour movement through the cooptation of its parliamentary representatives. By the same token, the creation of the 1931 National Government was a defeat for the hardline approach of Churchill. The great irony of 1940, then, was that Labour attained its revenge by imposing the leadership of its former arch-enemy on the Tory Party. The alliance which resulted was also quite different from the National Government of 1931: that first coalition broke the Labour Party while in 1941 it was the Conservatives who were 'shipwrecked'.(4)

Churchill dominated grand strategy but Labour transformed the domestic landscape. Ernest Bevin, head of the Transport and General Workers Union, became Minister of Labour and a major figure in the War Cabinet. Employment rose swiftly as the economy was put on a total war footing and for the first, and so far only, time in the history of British capitalism, a significant redistribution of wealth took place in favour of the disadvantaged. While adamant in his attitude towards strikes and obtaining a more complete war mobilization than in Germany, Bevin ensured the extension of unionism and improvements in factory conditions. Both physically massive men, Churchill and Bevin in their collaboration personified the contrast with the earlier pact between Baldwin and MacDonald. The 1931 National Government was a formation of the centre based on compromise at home and abroad. The two prime actors in 1941 were men of deeds, determined to pursue their chosen course. Once enemies, they now worked together: an imperialist and a trade unionist, each depending upon the other.

Within the alliance, the centre worked away. To compound the ironies involved, some of the Conservatives who most readily accepted the domestic reforms were from the appeasement wing of the party. Butler, for example, who disdained Churchill even after the war began, put his name to the 1944 Education Act that modernized British education (though it preserved the public school system). But the administrative reformists of the two main parties never captured the positions of ideological prominence. Bevin was more a trade union than a Parliamentary figure, Attlee led from behind, and Labour in particular suffered from its inability to transform its 'moral equality' into an equivalent ideological hegemony over the national war effort.

3. Anti-Fascism

Overarching the centre was an extraordinary alliance of left and right in the war against fascism. Those most outspoken on the left were deeply committed to the war effort (even when their leading advocate in the Commons, Aneurin Bevan, remained in opposition). The patriotic anti-fascists of both Left and Right had different motives, but both had a global perspective which made destruction of Nazism their first imperative. When the Falklands war party congratulated Michael Foot—the moral anti-fascist without equal on the Labour benches—for his stand, it was like a risible spoof of that historic, formative moment in World War Two when the flanks overwhelmed the centre to determine the execution of the war.

Yet it was not a hoax, it was the real thing; though it related to 1940 as damp tea-leaves to a full mug. The Falklands debate was genuinely Churchillian, only the participants in their ardour failed to realize that they were the dregs. This is not said to denigrate either the revolutionaries or the imperialists of the World War. Their struggle against fascism was made a mockery of in Parliament on 3 April: for example, when Sir Julian Amery implicitly, and Douglas Jay explicitly, condemned the Foreign Office for its 'appeasement', just because it wanted a peaceful settlement with Buenos Aires; or when Patrick Cormack said from the Tory benches that Michael Foot truly 'spoke for Britain'.(5)

Above all, it was a histrionic moment for Foot. Although frequently denounced by the Right as a pacifist, he was in fact one of the original architects of bellicose Labour patriotism. Working on Beaverbrook's Daily Express he had exhorted the Labour movement to war against the Axis. In particular, in 1940 when he was 26, he inspired a pseudonymous denunciation of the appeasers called Guilty Men, published by Gollancz. Foot demanded the expulsion of the Munichites—listed in the booklet's frontispiece—from the government, where Churchill had allowed them to remain. Guilty Men instantly sold out and went through more than a dozen editions. It contains no socialist arguments at all, but instead is a dramatized 'accounting' of the guilt of those who left Britain unprepared for war and the soldiers at Dunkirk unprotected. It points the finger at Baldwin and MacDonald for initiating the policy of betrayal. On its jacket it flags a quote from Churchill himself, 'The use of recriminating about the past is to enforce effective action at the present'. Thus while the booklet attacks both the Conservative leadership of the previous decade and the Labour men who sold out in 1931, it impeaches them all alike on patriotic grounds: they betrayed their country. Churchill's foresight and resolve, by contrast, qualify him for national leadership—for the sake of the war effort, the remaining 'guilty men' had to go.

It was precisely this rhetoric—the language of Daily Express socialism—that was pitched against the Thatcher government in the 3 April debate by the Labour front-bench. Foot denounced its leaders for failing to be prepared and for failing to protect British people against a threat from dictatorship. The 'Government must now prove by deeds ... that they are not responsible for the betrayal and cannot be faced with that charge. That is the charge, I believe, that lies against them.' (my emphasis) Winding up, John Silkin elaborated the same theme, only as he was concluding the debate for the opposition he was able to bring the 'prosecution' to its finale, in the full theatre of Parliament. Thatcher, Carrington and Nott 'are on trial today', as 'the three most guilty people'.

4. Liberalism

The political alliance of Churchillism extended much further than the relationship between Labour and Conservatives. The Liberals were also a key component, and this helps to explain why an important element of the English intelligentsia was predominantly, if painfully, silent at the outbreak of the Falklands crisis. In 1940 the Liberals played a more important role in the debate that brought down Chamberlain than did Labour spokesmen, with Lloyd George in particular making a devastating intervention. Later, individual Liberals provided the intellectual direction for the administrative transformation of the war and its aftermath.

Keynes was its economic architect, Beveridge the draughtsman of the plans for social security that were to ensure 'no return' to the 1930s. Liberalism produced the 'civilized' and 'fair-minded' critique of fascism, which made anti-fascism acceptable to Conservatives and attractive to aristocrats. Liberalism, with its grasp of detail and its ability to finesse issues of contention, was the guiding spirit of the new administrators. Because of its insignificant party presence, its wartime role is often overlooked, but liberalism with a small 'l' was the mortar of the Churchillian consensus. One of Beveridge's young assistants, a Liberal at the time, saw the way the wind was blowing and joined the Labour Party to win a seat in 1945. His name was Harold Wilson.(6)

5. The American Alliance and 'Self-Determination'

Churchillism was thus an alliance in depth between forces that were all active and influential. Nor was it limited to the domestic arena; one of its most important constituents has been its attachment to the Anglo-American alliance, and this was Churchill's own particular achievement. Between the wars the two great anglophone powers were still as much competitors as allies. During the 1920s their respective general staffs even reviewed war plans against one another, although they had been allies in the First World War. The tensions of the Anglo-American relationship four decades ago and more may seem irrelevant to a discussion of the Falklands affair; yet they made a decisive contribution to the ideological heritage which was rolled out to justify the dispatch of the Armada.

When Churchill took office in 1940 Britain was virtually isolated in Europe, where fascist domination stretched from Warsaw to Madrid, while the USSR had just signed a 'friendship' treaty with Germany and the United States was still neutral. Joseph Kennedy, the American ambassador in London (and father of the future President), was an old intimate of the Cliveden set and a non-interventionist. He had advised Secretary of State, Cordell Hull, that the English 'have not demonstrated one thing that could justify us in assuming any kind of partnership with them'.(7) But Roosevelt, eminently more pragmatic, saw that genuine neutrality would allow Hitler to win; it would lead to the creation of a massive pan-European empire, hegemonic in the Middle East and allied to Japan in the Pacific. On the other hand, by backing the weaker European country—the United Kingdom—the US could watch the tigers fight. Continental Europe would be weakened and Britain—especially its Middle East positions—would become dependent on Washington's good will. In other words, it was not fortuitous that America emerged as the world's greatest economic power in 1945, it simply took advantage of the opportunity that was offered. But this opportunity also provided Britain with its only possible chance of emerging amongst the victors. At issue were the terms of the alliance.

On May 15, immediately after he became Prime Minister and just before Dunkirk, Churchill wrote his first letter to Roosevelt in his new capacity. He asked for fifty old American destroyers and tried to lure the President away from neutrality. The Americans in turn suggested a swap arrangement that would give them military bases in the Caribbean, Newfoundland and Guyana. The trade of bases for old hulks was hardly an equal exchange, but by deepening American involvement it achieved Churchill's overriding purpose, and allowed the President to sell his policy to Congress. Later, as Britain ran out of foreign reserves, Lend-Lease was conceived. The United States produced the material of war while the British fought, and in the meantime relinquished their once commanding economic position in Latin America to Uncle Sam.(8) (So when Peron—whose country had been a British dominion in all but name for half a century—challenged the hegemony of the Anglo-Saxon bankers in 1946 by resurrecting the irredentist question of the Malvinas, it was a demagogic symbol of already fading subordination that he singled out. The real economic power along the Plata now resided in Wall Street rather than the City.)

Four months before Pearl Harbor, the 'Atlantic Charter' (August 1941) consolidated the Anglo-American alliance and prepared US opinion for entry into war. The Prime Minister and the President met off Newfoundland and agreed to publicize a joint declaration. The main argument between them was over its fourth clause. Roosevelt wanted to assert as a principle that economic relations should be developed 'without discrimination and on equal terms'. This was aimed against the system of 'imperial preferences' which acted as a protectionist barrier around the British Empire. Churchill moderated the American position by inserting a qualifying phrase before the clause. Behind the fine words of the Atlantic Charter there was a skirmish and test of wills between the two imperialisms. Although we can now see that the Charter was determined by self-interest, its function was to enunciate democratic principles that would ensure popular and special-interest support in both countries for a joint Anglo-Saxon war. Both governments announced that they sought no territorial aggrandizement or revision that did 'not accord with the freely expressed wishes of the peoples concerned'. Churchill later denied that this in any way related to the British colonies. He was to declare in 1942 that he had not become Prime Minister to oversee the liquidation of the British Empire. Nonetheless he also claimed to have drafted the phrase in the Charter which states that the UK and the US would 'respect the right of all people to choose the form of government under which they will live'.(9) There is a direct lineage between this declaration and Parliament's reaction to the Falklands.

By the end of the year America had entered the war as a full belligerent. On New Year's Day 1942, twenty-six allied countries signed a joint declaration drafted in Washington which pledged support to the principles of the Atlantic Charter. Henceforward the alliance called itself the 'United Nations', and three years later a world organization of that name assembled for the first time. In its turn it enshrined the principles of 'self-determination' codified by Roosevelt and Churchill.

In his memoirs Churchill is quite shameless about the greatness of the empires, British and American, that collaborated against the 'Hun'. But he cannot hide the constant tussle for supremacy that took place between them, within their 'Anglo-Saxon' culture, in which each measured its own qualities against the other. From their alliance, forced on the British by extreme adversity, came their declaration of democratic aims. Its objective was to secure support from a suspicious Congress that saw no profit in bankrolling an Empire which was a traditional opponent, and which was detested by millions of Irish and German-American voters. It had, therefore, to be assuaged with the democratic credentials of the emerging trans-Atlantic compact. Thus, in order to preserve the Empire within an alliance of 'the English speaking nations', Churchill—imperialist in bone and marrow—composed a declaration of the rights of nations to determine their own form of government. In international terms, this ambiguity is the nodal point of Churchillism. By tracing, however sketchily, its outline, we can begin to decode the extraordinary scenes in the House of Commons on 3 April this year. Above all, it clarifies the ease with which those like Thatcher utilized the resources of the language of 'self-determination'. When she and Foot invoked the UN Charter to justify the 'liberation' of the Falklands because its inhabitants desire government by the Crown, they reproduced the sophistry of the Atlantic Charter. What particular resonance can such terms have for the British Right, when in other much more important circumstances like Zimbabwe they are regarded as the thin end of the wedge of Communist penetration? The answer is to be found in Churchillism, which defended and preserved 'Great' Britain and its imperial order by retreating slowly, backwards, never once taking flight, while it elevated aspirations for freedom into a smoke-screen to cover its manoeuvre.

In 1940 what was at stake was Britain's own self-determination. Invasion was imminent and an embattled leadership had to draw upon more than national resources to ensure even survival. Together with the invocation of specifically British values and tradition, Churchill revived the Wilsonian imagery of 'the great democratic crusade' (a rhetoric that had been improvised in 1917, in response to the Russian Revolution). Such ideals were crucial not only for the North American public but also for anti-fascist militants in the UK and for liberals, who loathed warfare—the experience of 1914-18 was still fresh—and who distrusted Churchill, especially for his evident pleasure in conflict. They were uplifted by the rallying cry that gave both a moral and political purpose to the war as it coupled the UK to its greatest possible ally. While Churchill saved Great Britain, preserved its institutions and brought its long colonial history to bear through his personification of its military strengths, he did so with a language that in fact opened the way for the Empire's dissolution. The peculiarity of this explains how Britain could shed—if often reluctantly and with numerous military actions—so many peoples and territories from its power after 1945 without undergoing an immediate convulsion, or any sort of outspoken political crisis commensurate with its collapse. Instead a long drawn-out anaemia and an extraordinary collective self-deception was set in train by Churchillism.

Perhaps the Falklands crisis will come to be seen as a final spasm in this process of masked decline. Many have seen it as a knee-jerk colonialist reaction. Foreigners especially interpret the expedition to 'liberate the Kelpers' as a parody of Palmerstonian gunboat diplomacy, out of place in the modern world. It may be out of place, but in British terms its impetus is modern rather than Victorian.

The stubborn, militaristic determination evinced by the Thatcher government, her instant creation of a 'War Cabinet' that met daily, was a simulacrum of Churchilliana. So too was the language Britain used to defend its actions. Both rhetoric and policy were rooted in the formative moment of contemporary Britain, the time when its politics were reconstituted to preserve the country as it 'stood alone' in May 1940.(10) A majority of the population are today too young to remember the event, but most members of Parliament are not. The mythical spirit of that anxious hour lives on as a well-spring in England's political psyche.

6. The Incorporation of the Mass Media

There is one final aspect of Churchillism that needs to be mentioned: the relationship Churchill forged with the media. He brought Beaverbrook into the Cabinet, attracted by the energy of the Canadian newspaper proprietor. He himself wrote in the popular press and took great care of his relations with the newspapers, in sharp contrast to Chamberlain who disdained such matters. Then, from 1940 onwards, Churchill's broadcasts rallied the nation: he skilfully crafted together images of individual heroism with the demand for general sacrifice. No subsequent politician in Britain has been able to forge such a bond between leader and populace.

The policies of the modern State are literally 'mediated' to the public via the political and geographical centralization of the national press. London dominates through its disproportionate size, its financial strength and the spider-web of rail and road of which it is the centre. Its daily press has long provided the morning papers for almost all of England, and they are taken by many in Scotland and Wales. A journalistic strike force has been developed, which strangely illuminates the way British political life is exposed to extra-national factors through its peculiar inheritance of capitalist aristocrats and overseas finance. Astor, an American, bought The Times in 1922; Thomson, a Canadian, acquired it in 1966; Rupert Murdoch, an Australian, took it over in 1981. But Astor, educated at Oxford, became anglicized and conserved the paper's character. The hegemonic organ of the nation may have been in the hands of a foreigner financially, but it was edited by Old England all the more because of it. Thomson pretended only to business rather than political influence, but he too made the transition across the Atlantic to become a Lord.

Thomson's son, however, shifted himself and the company back to North America, allowing a Catholic monetarist to lead the paper into the abyss of British labour relations and a year-long, futile closure. Now losing money heavily, The Times was sold to Murdoch, who already controlled the News of the World and the Sun. But he sojourns in New York rather than London. His papers endorsed the Falklands expedition with such a ludicrous enthusiasm that they managed to blemish vulgarity itself. But there remains a sense in which the relationship Churchill established with Beaverbrook came to be faintly echoed in Thatcher's reliance on Murdoch. The bombastic irrelevance of 'down under' helped Thatcher to storm the enfeebled ranks of gentry Conservatism, and gave her a major working-class daily—the Sun. Yet the Sun's very lack of seriousness was a signal that the militarism of the Falklands War was bursting out of the carapace of Churchillism. The cardinal world issues adjudicated by Britain in the past could hardly be applied to taking on Argentina over 1,800 people in 1982. 'UP YOUR JUNTA', was one headline in the paper as it welcomed an initial British success. Was this the way to fight the scourge of fascism?

In 1940 Churchill was willing to do anything and everything for victory. Yet, as we have seen, the meaning of 'victory' became increasingly ambiguous in the course of the war. Churchill fought tooth and nail to defend the Empire, but in the end—to save British sovereignty itself—he formed, and was a prisoner of, a politics which accepted the liquidation of the Empire (except for a few residual outposts like the Falklands ...). The 'regeneration' was sufficiently radical to concede decolonization and the emergence of new states, yet it was not radical enough to adapt the British State itself to its reduced stature. This, indeed, was its fatal success. Galvanized by total war, but, unlike continental Europe, spared the ultimate traumas of occupation and defeat, Britain survived the 1940s with its edifice intact. This fact has often been alluded to as a principal cause of the 'British disease'—the country's baffling postwar economic decline; moreover, it distinguished Churchillism from Gaullism.

The contrast is illuminating. Gaullism was born of defeat at the same moment as Churchillism (May 1940), and was also personified by a right-wing militaristic figure of equivalent self-regard and confidence. But in the long run Gaullism has inspired a far more successful national 'renewal' and adaptation to the increasingly competitive environment. Was this not partially due to the paradoxical fact that the fall of France, by reducing the Third Republic to rubble, ultimately provided a convenient building site for institutional modernization? In Britain, by contrast, the institutions held firm—like St Paul's defying the blitz—with corresponding penalties for this very durability. The most ingenious of Britain's defences against destructive change and forced modernization was the conserving collaboration between labour and capital. The relationship was the very core of Churchillism.

If Churchillism was born in May 1940, it had at least a twenty-year gestation. Keith Middlemas has shown that state, capital and labour sought to harmonize relations in a protean, tripartite affair after the crisis of the First World War. In his view, 'crisis-avoidance' became the priority after 1916 and has dominated British politics ever since. A significant degree of collaboration was achieved between the wars, often covertly, sometimes called 'Mondism' (after the man who headed the cartel that became ICI). One of the key figures on the Labour side was Citrine who led the TUC; another was Bevin, whose direction of manpower was, as we have seen, the backbone of Labour's contribution to Churchillism. Thus wartime corporatism radically intensified and made explicit an already established relationship. In Middlemas's words, 1940 instituted a 'political contract' where previously there had been an unwritten economic one.(11)

It is not my purpose here to try and add further to the list of elements involved. In academic terms it can be said—and it is important to say—that the picture is incomplete. Yet even when the skeleton is fully delineated we might still miss the unifying tissues. For Churchillism was essentially the political flesh of national life: its skin, muscle tonality and arthritis. Churchillism combined the contradictions of capital and the workforce, as well as the desires for political freedom with those of imperial grandeur. Furthermore, it wedded these two distinct sets of opposites into a single enveloping universe of demagogy.

To help show that 'Churchillism' was not a momentary thing, born complete and fully armed from the jaws of defeat in 1940, but was itself a historical process, we can glance at the events of late 1942. Churchill's role was contested to some degree from both left and right after May 1940, in the House of Commons and outside, especially as military defeats continued. It was only in November 1942 that the protests against his leadership ebbed away. That month was in fact the turning point of the war in Europe. It saw the Red Army turn the scales at Stalingrad and begin the destruction of Hitler's forces. It was also the month that the Americans landed in North Africa. This opened a small 'second front' as far away as possible from the main theatre, and signalled the arrival of the United States from across the Atlantic. The huge pincer movement that was to divide Europe between Moscow and Washington was underway, and it meant 'victory' for Britain as well.

Coincidentally, the Beveridge Report was published to massive acclaim at home. It held out the promise of full employment, a health service, adequate pensions and social benefits, at the end of the war. Not only was victory forthcoming, however hard the battles ahead, but the peace would be worth fighting for. Within two weeks of its publication in December 1942, a Gallup survey in the UK discovered that 19 out of 20 had heard of the Report and that 9 out of 10 thought that it should be accepted.(12)

Yet it was none of these things that ensured the supremacy of Churchill. The combination of American power and Beveridge could reassure the liberals, the coincidence of Stalingrad and the Report seemed to confirm hopes on the left. But what mattered most, pathetically so, was the victory at El Alamein. Finally, after months of bungling and defeats in Egypt and Libya, a huge concerted effort by the Empire swung the battle against Rommel, who was massively outgunned. In comparison with the Russian front, the adventures in the North African desert were a small sideshow (even then the British had at one point begun to evacuate Cairo). Yet for Churchill it was El Alamein that was the 'Hinge of Fate'. 'Before Alamein we never had a victory. After Alamein we never had a defeat', he suggested as his conclusion to the campaign.(13) In so far as 'we' meant the Allies, it was not only wrong (Midway had given the Americans control over the Pacific six months before); it was also fortuitous, as it preceded the far greater Russian breakthrough at Stalingrad by only a fortnight. But of course, the 'we' also meant the British, as if the entire course of the conflagration had been determined by the UK and its Empire. As the war was being won, it seemed that Churchill's Britain was winning the war; El Alamein secured his position at home politically. The battle also received disproportionate coverage in the UK, and has continued to do so across four decades of war books. The number of pages dedicated to North Africa has been an index of the desert war's ideological role in preserving British face, not its actual contribution to the world conflict. In this respect the current Falklands fanfare is its descendant.

The contrast in the aspirations represented by the conjuncture of El Alamein and the Beveridge Report was never reconciled by Churchill. His passion for Grand Imperial Strategy blinded him to the upsurge of hope amongst millions of his fellow countrymen, who longed simply for health and security. He took 'strong exception' to the Report and refused to commit the Coalition to its implementation after the war, pointing out that the financial demands it might make could conflict with the costs of occupying enemy countries.(14) When a Commons' debate on the Report was finally held, the Cabinet's prevarication and crassness left it remarkably isolated. All Labour members (bar one) who were not actually in government jobs voted against the Coalition's social paralysis. This firmly associated the Labour Party with the prospects for a new future; one historian considers that its Commons' vote then was probably responsible for winning the 1945 election.(15) The debate over Beveridge also led to the formation of a Tory Reform Group that sought to reconcile the Conservatives to social change.

Which brings us to the party aspect of Churchillism and its legacy: the alternating two-party system, once heralded as proof of Britain's attachment to democracy and now under attack from the SDP as the cause of its decline. Not without reason, for each blames the other for the cocoon the two spun together after 1940. The reformers gained the ascendancy within the Conservative Party as Churchill remained aloof. The result was that despite his dominating national role, it was really Baldwin who was 'the architect of mid-century Conservatism' in attitude and spirit.(16) Yet Churchill's presence as leader of the opposition until 1951, and as Prime Minister again until 1955, prevented the overt expression of reformed Toryism from obtaining a positive, modern profile.

After his disastrous handling of the Beveridge Report, Churchill sensed the public swing away from him. In March 1943 he broadcast his own partial conversion to its principles and proposed a national coalition to continue into the postwar period. The Labour Party was unable to tolerate permanent institutionalization into a subordinate place, at least in such a naked form; it smacked too much of 1931. Rank-and-file militancy stiffened the resolve of the leaders to fight an election after the war. This opened the way for those merely sensible measures of nationalization undertaken by Labour after 1945 to be assailed as the most dreadful socialism by the Tory press. It has long been recognized that Labour's formative moment was not so much 1945 as 1940—Attlee was continuously in the Cabinet (first as Deputy Premier, then as Prime Minister) for over a decade. Labour, rather than the Tories, built the postwar consensus which was then utilized by the Conservatives.(17) To preserve this creative tension, with its invariable centrist bias, violent parliamentary attack was modulated with bipartisan understanding: Churchillism intensified and legitimized the operatics of pseudo-debate. And this was the price for so panoramic an incorporation.

Labour also inherited the full costs of Churchillism internationally. No sooner had Germany been defeated than the United States summarily severed Lend-Lease, making the abolition of the imperial preference system the precondition of any further financial aid. 'The American Loan' became the terrain of a major domestic and international battle over the financial and monetary autonomy of Labour reformism. With the installation of the coalition in May 1940, the old omnipotence of the Treasury over the national economy had been temporarily eclipsed—'in total war it was essential to plan resources first, leaving the financial side to be adjusted accordingly.'(18) In 1945 stringent American conditions helped clear the path for the restoration of the Treasury's authority. Moreover, the immediate financial crisis in war-exhausted Britain—fueled by the continuing foreign exchange shortage and gigantic debts to the dominions—was exacerbated by commitments to a high rate of military expenditure. One year later, for example, Britain still retained a garrison of 100,000 troops in both Egypt and Palestine. Despite Attlee's flirtation with a withdrawal from the Middle East, Bevin and the Chiefs of Staff persuaded him otherwise.(19) Soon the relative costs of Britain's military budget would become a major factor in the slippage of its economic power. Internalizing the Churchillian delusion of the country's destiny in the 'Grand Scheme', the Attlee government and subsequent Labour governments paid on the instalment plan the double costs of Churchillism: economic subordination to America and the projection of an independent world military role.

To sum up: Churchillism condemned to a slow death that which it saved from catastrophe. Its impulse was to preserve the Empire but Churchill was pragmatic enough to pay the costs of commitment to democracy—to 'self-determination' abroad and social reforms at home—that were anathema to the bedrock of his views. His militancy against Nazism made him welcome to the left, and Labour was crucial in putting him into office: it sustained the war effort that he spoke for. Thus Churchillism opened the way for the Labour victory in 1945, the creation of the welfare state, the legislated independence of India, and American domination. So too British socialism made its compromise with the capitalist nation under the benediction of Churchill's cigar and 'V' sign, which in turn crippled the modernizing, radical impulse of the social democrats and liberals who provided the brain power of the Labour Party in office. At the same time, Labour's international independence was clipped by the Cold War, itself dramatically announced by Churchill's famous 'Iron Curtain' speech of March 1946, where, in front of Truman, he called for Anglo-American military co-operation to be formalized into an anti-Soviet alliance.

At this point it may be pertinent to return to the analogy with Gaullism. Churchillism, as I have tried to show, is not a coherent ideology. Rather, it is an ideological matrix within which contending classes are caught, none of them being the 'true' exemplar since each is in some way equally constitutive. (Michael Foot was probably flabbergasted and bitter when Margaret Thatcher donned Churchill's mantle.) Gaullism, on the other hand, developed as an ideologically specific class force. It combatted Communist domination of the resistance movement and was not structurally penetrated by, or indebted to, the organized working class. This allowed the Gaullists a far greater confidence in their exercise of state power. Dirigism and extensive nationalization were essential for the modernization of French capital, and under Gaullist colours the national could comfortably dominate over the social. In contrast, the legacy of Churchillism has been twofold: not only did it prevent the emergence of a nationally hegemonic Brandt/Schmidt type of social democracy, but it also blocked the Right from creating a dynamic party of national capital.

Andrew Gamble has distinguished three main schools of explanation for Britain's decline since 1945, and notes that there are Marxist as well as bourgeois variants of each. Respectively, these are: (1) the UK's over-extended international involvement and military expenditure; (2) archaic institutions of government including the party system; (3) the 'overloading' of the state by welfare expenditures, compounded by the entrenched position of the unions.(20) Each is partially true, but instead of arguing about which is the root cause of decline, we can note here that Churchillism fathered them all. Churchillism ensured that all parties were committed to a British military and financial role that was spun worldwide; it conserved the Westminster system when it should have been transformed; it brought the unions into the system and initiated a welfare state never efficiently dominated by social democracy. In short, Churchillism ensured the preservation of the Parliamentary Nation and thus Westminster's allegiance to a moment of world greatness that was actually the moment when the greatness ceased. Churchill's National Coalition ensured an astonishing recuperation, one that left the patient structurally disabled for the future and obsessed with magical resurrection from the dead.

For the footnotes, please see the Faber Finds edition of Iron Britannia


Churchillism

Open Democracy News Analysis - 10 hours 51 minutes ago

The 50th anniversary of Winston Churchill's death brought forth a spasm of second-rate musings and hagiographical blather about the great man by the London media, but little understanding of the transformation of Britain which, despite himself, he personified. Here, by contrast, is the 1982 analysis of Anthony Barnett.

Flickr/MsSaraKelly. Some rights reserved.

In an attempt to understand the Falklands War, I wrote Iron Britannia, Why Parliament Waged its Falklands War in 1982 (now republished by Faber Finds). It led me to name and then analyse what I called Churchillism. Today I would thicken the description to include science policy and liberty, as I discuss in the foreword to the new edition, but the central argument holds. Thatcherism was an attempt to replace Churchillism with a Gaullist modernisation, based on the global market and North Sea Oil rather than an interventionist state. It seems to have run its course. Labour may be able to shake off its Blair-Brown embrace of Thatcherism but it cannot go back to its formative, Churchillist, moment, as it now seems to wish, and it is doomed unless it forges a new grand strategy. But this is another story. Here, it is simply worth noting that half a century after he died, the entire Anglo-British political class and its media seems incapable of understanding the formative history of its political-cultural system, that took place under the sign of Churchill's cigar.

 TO LISTEN to the House of Commons debate on 3 April 1982 was like tuning in to a Wagnerian opera. Counterpoint and fugue rolled into an all-enveloping cacophony of sound and emotion. Britannia emerged once more, fully armed and to hallelujahs of assent (accompanied by fearful warnings should She be again betrayed). A thunderous 'hear, hear' greeted every audacious demand for revenge wrapped thinly in the call for self-determination. Dissent was no more than a stifled cough during a crescendo of percussion: it simply confirmed the overwhelming force of the music.

Later, opposition would make itself heard above the storm. But it was drowned out at the crucial moment. In part this was arranged. As we have seen, scheming took place to ensure a 'united House'. MPs took six days to debate entry into the Common Market in 1973. They went to war for the Falklands in three hours. The result was to preempt public discussion with a fabricated consensus. In the immediate aftermath of Argentina's take-over of the islands, most people could hardly believe it was more important than a newspaper headline about some forgotten spot. Suddenly they were presented with the unanimous view of all the party leaders that this was a grave national crisis which imperilled Britain's profound interests and traditional values. The decisive unity of the Commons was thuggish as well as inspired. The few who feared the headlong rush were mostly daunted and chose the better part of valour. Innocent islanders in 'fascist' hands, the nation's sovereignty raped: it seemed better to wait and let things calm down. The war party seized the occasion with the complicity of the overwhelming majority of MPs from all corners of Parliament. On 3 April there was scarcely an opposition to be outmanoeuvred. The result was that even if one continued to regard the Falklands as insignificant, there clearly was a Great Crisis. Within what is called 'national opinion' there was no room to disagree about that: one had either to concur or suffocate. The Commons united placed British sovereign pride upon the line; and sovereignty is not a far away matter, people feel it here at home just as they identify with their national team in a World Cup competition, however distant. With a huge endorsement from the press, Parliament had ensured that the nation—so we were told—spoke with one voice, had acted with purpose and solidarity and had thus gambled its reputation on a first-class military hazard.

Many trends were at work—consciously or blindly—to prepare for such a moment. But much more important, and what gave the militants the 'unity' essential to their cause, was the general condition that allowed them to succeed so handsomely. It held the Commons in the palm of its hand. It orchestrated the one-nation sentiments of the three geniuses of the occasion—Enoch Powell, Michael Foot and David Owen—who bound Thatcher so willingly to Hermes. To analyse this general condition properly would take a thick book, for it has many symptoms. Moreover the condition is so deeply and pervasively a part of England, so natural to its political culture, that it is difficult to see, impossible to smell as something distinct. Like the oxygen in the air we breathe, and which allows flames to burn, it is ordinarily intangible. Perhaps the Falklands crisis will at last bring the mystery into sight.

To provoke and assist this discussion of the pathology of modern British politics, I will be bold and assertive. Yet it should be borne in mind that I am only suggesting a possible description; one which will certainly need correction and elaboration. First, we need a name for the condition as a whole, for the fever that inflames Parliamentary rhetoric, deliberation and decision. I will call this structure of feeling shared by the leaders of the nation's political life, 'Churchillism'. Churchillism is like the warp of British political culture through which all the main tendencies weave their different colours. Although drawn from the symbol of the wartime persona, Churchillism is quite distinct from the man himself. Indeed, the real Churchill was reluctantly and uneasily conscripted to the compact of policies and parties which he seemed to embody. Yet the fact that the ideology is so much more than the emanation of the man is part of the secret of its power and durability.

Churchillism was born in May 1940, which was the formative moment for an entire generation in British politics. Its parliamentary expression was a two-day debate which ended on 8 May with a crucial division on the Government's conduct of the war. Churchill himself had already entered the cabinet, which remained under Chamberlain's direction. After the hiatus of the 'phony war', an attempt by the British to secure control of Norway had ended in disaster. Although Churchill also bore responsibility for the misadventure, it was Chamberlain who was felt to be out of step with the time. Attlee asked for different people at the helm. From the Conservative back-benches Leo Amery repeated a testy remark of Cromwell's, 'In the name of God, go!'. The Government's potential majority of 240 crashed to 80. In the aftermath Churchill emerged as Prime Minister with, as I will discuss in a moment, the crucial support of Labour to create a new National Coalition. Within days, the war took on a dramatically different form, and then a catastrophic one, as the Germans advanced across Holland and into France. The British army was encircled and the order to evacuate given on 27 May. Through good fortune some 300,000 were pulled back across the Channel and Dunkirk became a symbol not only of survival but also of 'national reconciliation' and ultimate resurgence as it coincided with the emergence of Churchill's coalition.(1)

At that moment Churchill himself was a splendid if desperate enemy of European fascism, while Churchillism was the national unity and coalition politics of the time. Among those who participated most enthusiastically, there were some who wanted to save Britain in order to ensure the role of the Empire, and others who wanted to save Britain in order to create a new and better order at home. But Churchillism was more than a mere alliance of these attitudes. It incorporated imperialists and social democrats, liberals and reformers.(2) From the aristocrats of finance capital to the autodidacts of the trade unions, the war created a social and political amalgam which was not a fusion—each component retained its individuality—but which nonetheless transformed them all internally, inducing in each its own variety of Churchillism and making each feel essential for the whole.

Today Churchillism has degenerated into a chronic deformation, the sad history of contemporary Britain. It was Churchillism that dominated the House of Commons on 3 April 1982. All the essential symbols were there: an island people, the cruel seas, a British defeat, Anglo-Saxon democracy challenged by a dictator, and finally the quintessentially Churchillian posture—we were down but we were not out. The parliamentarians of right, left and centre looked through the mists of time to the Falklands and imagined themselves to be the Grand Old Man. They were, after all, his political children and they too would put the 'Great' back into Britain.

To see how the Falklands crisis brought the politicians at Westminster together and revealed their shared universe of Churchillism, it will help to note the separate strands which constituted it historically: Tory belligerents, Labour reformists, socialist anti-fascists, the liberal intelligentsia, an entente with the USA (which I will examine at greater length as its legacy is crucial) and a matey relationship with the media.

1. Tory Imperialists

In 1939 only a minority of the Conservative Party supported Churchill in his opposition to appeasement. Their motives for doing so were mixed. The group included back-bench imperialists like Leo Amery—the father of Sir Julian Amery, who spoke in the Falklands debate—and 'one nation' reformers like the young Macmillan. A combination of overseas expansionism and social concessions had characterized Conservatism since Disraeli: a nationalism that displaced attention abroad plus an internal policy of gradualist, paternalistic reform.

Churchill, however, stood on the intransigent wing of the Party. (He had left the Conservative front bench over India in 1931 when he opposed granting it dominion status.) Unlike Baldwin, Churchill had ferociously resisted the rise of Labour, and his militancy in the General Strike made him an enemy of the trade unions until he finally took office in May 1940. Three years previously Baldwin had retired and been replaced by Chamberlain who was efficient but also aloof and stubborn. He proved incapable of assimilating Labour politicians into his confidence, while he saw the imperative need for peace if British business interests were to prosper. By continuing to exclude the restless Churchill from office, Chamberlain perhaps ensured that he would see the opposite and indeed, Churchill gave priority to military belligerency. Thus Churchill, who had initially welcomed Mussolini as an ally in the class war, became the most outspoken opponent of Nazism, because it was a threat to British power. There was no contradiction in this, but rather the consistency of a Toryism that in the last instance placed the Empire before the immediate interests of trade and industry.

2. Labour and Reformism

As emphasized earlier, it is essential that Churchill and Churchillism be rigorously distinguished. While the man had been among Labour's most notorious enemies, the 'ism' contains Labour sentiment as one of its two major pillars. In terms of Churchill's own career, the transformation can be seen in 1943, when he sought the continuation into the postwar period of the coalition government with Labour. Conversely, the Labour Party's support was crucial in Churchill's accession to power in May 1940. Chamberlain had actually maintained a technical majority in the vote over the failure of the Norwegian expedition; but the backlash was so great that his survival came to depend on Labour's willingness to join his government. It refused, asserting that it would only join a coalition 'as a full partner in a new government under a new Prime Minister which would command the confidence of the nation'. Within an hour of receiving this message, Chamberlain resigned.(3)

It is important to recall that Chamberlain's regime was itself a form of coalition government. At the height of the depression in 1931, Ramsay MacDonald had decapitated the Labour Movement by joining a predominantly Conservative alliance. This incorporation of part of the Labour leadership into a basically Tory government was a triumph for Baldwin, vindicating his strategy of deradicalizing the Labour movement through the cooptation of its parliamentary representatives. By the same token, the creation of the 1931 National Government was a defeat for the hardline approach of Churchill. The great irony of 1940, then, was that Labour attained its revenge by imposing the leadership of its former arch-enemy on the Tory Party. The alliance which resulted was also quite different from the National Government of 1931: that first coalition broke the Labour Party while in 1941 it was the Conservatives who were 'shipwrecked'.(4)

Churchill dominated grand strategy but Labour transformed the domestic landscape. Ernest Bevin, head of the Transport and General Workers Union, became Minister of Labour and a major figure in the War Cabinet. Employment rose swiftly as the economy was put on a total war footing and for the first, and so far only, time in the history of British capitalism, a significant redistribution of wealth took place in favour of the disadvantaged. While adamant in his attitude towards strikes and obtaining a more complete war mobilization than in Germany, Bevin ensured the extension of unionism and improvements in factory conditions. Both physically massive men, Churchill and Bevin in their collaboration personified the contrast with the earlier pact between Baldwin and MacDonald. The 1931 National Government was a formation of the centre based on compromise at home and abroad. The two prime actors in 1941 were men of deeds, determined to pursue their chosen course. Once enemies, they now worked together: an imperialist and a trade unionist, each depending upon the other.

Within the alliance, the centre worked away. To compound the ironies involved, some of the Conservatives who most readily accepted the domestic reforms were from the appeasement wing of the party. Butler, for example, who disdained Churchill even after the war began, put his name to the 1944 Education Act that modernized British education (though it preserved the public school system). But the administrative reformists of the two main parties never captured the positions of ideological prominence. Bevin was more a trade union than a Parliamentary figure, Attlee led from behind, and Labour in particular suffered from its inability to transform its 'moral equality' into an equivalent ideological hegemony over the national war effort.

3. Anti-Fascism

Overarching the centre was an extraordinary alliance of left and right in the war against fascism. Those most outspoken on the left were deeply committed to the war effort (even when their leading advocate in the Commons, Aneurin Bevan, remained in opposition). The patriotic anti-fascists of both Left and Right had different motives, but both had a global perspective which made destruction of Nazism their first imperative. When the Falklands war party congratulated Michael Foot—the moral anti-fascist without equal on the Labour benches—for his stand, it was like a risible spoof of that historic, formative moment in World War Two when the flanks overwhelmed the centre to determine the execution of the war.

Yet it was not a hoax, it was the real thing; though it related to 1940 as damp tea-leaves to a full mug. The Falklands debate was genuinely Churchillian, only the participants in their ardour failed to realize that they were the dregs. This is not said to denigrate either the revolutionaries or the imperialists of the World War. Their struggle against fascism was made a mockery of in Parliament on 3 April: for example, when Sir Julian Amery implicitly, and Douglas Jay explicitly, condemned the Foreign Office for its 'appeasement', just because it wanted a peaceful settlement with Buenos Aires; or when Patrick Cormack said from the Tory benches that Michael Foot truly 'spoke for Britain'.(5)

Above all, it was a histrionic moment for Foot. Although frequently denounced by the Right as a pacifist, he was in fact one of the original architects of bellicose Labour patriotism. Working on Beaverbrook's Daily Express he had exhorted the Labour movement to war against the Axis. In particular, in 1940 when he was 26, he inspired a pseudonymous denunciation of the appeasers called Guilty Men, published by Gollancz. Foot demanded the expulsion of the Munichites—listed in the booklet's frontispiece—from the government, where Churchill had allowed them to remain. Guilty Men instantly sold out and went through more than a dozen editions. It contains no socialist arguments at all, but instead is a dramatized accounting of the guilt of those who left Britain unprepared for war and the soldiers at Dunkirk unprotected. It points the finger at Baldwin and MacDonald for initiating the policy of betrayal. On its jacket it flags a quote from Churchill himself, 'The use of recriminating about the past is to enforce effective action at the present'. Thus while the booklet attacks both the Conservative leadership of the previous decade and the Labour men who sold out in 1931, it impeaches them all alike on patriotic grounds: they betrayed their country. Churchill's foresight and resolve, by contrast, qualify him for national leadership—for the sake of the war effort, the remaining 'guilty men' had to go.

It was precisely this rhetoric—the language of Daily Express socialism—that was pitched against the Thatcher government in the 3 April debate by the Labour front-bench. Foot denounced its leaders for failing to be prepared and for failing to protect British people against a threat from dictatorship. The 'Government must now prove by deeds ... that they are not responsible for the betrayal and cannot be faced with that charge. That is the charge, I believe, that lies against them.' (my emphasis) Winding up, John Silkin elaborated the same theme, only as he was concluding the debate for the opposition he was able to bring the 'prosecution' to its finale, in the full theatre of Parliament. Thatcher, Carrington and Nott 'are on trial today', as 'the three most guilty people'.

4. Liberalism

The political alliance of Churchillism extended much further than the relationship between Labour and Conservatives. The Liberals were also a key component, and this helps to explain why an important element of the English intelligentsia was predominantly, if painfully, silent at the outbreak of the Falklands crisis. In 1940 the Liberals played a more important role in the debate that brought down Chamberlain than did Labour spokesmen, with Lloyd George in particular making a devastating intervention. Later, individual Liberals provided the intellectual direction for the administrative transformation of the war and its aftermath.

Keynes was its economic architect, Beveridge the draughtsman of the plans for social security that were to ensure 'no return' to the 1930s. Liberalism produced the 'civilized' and 'fair-minded' critique of fascism, which made anti-fascism acceptable to Conservatives and attractive to aristocrats. Liberalism, with its grasp of detail and its ability to finesse issues of contention, was the guiding spirit of the new administrators. Because of its insignificant party presence, its wartime role is often overlooked, but liberalism with a small 'l' was the mortar of the Churchillian consensus. One of Beveridge's young assistants, a Liberal at the time, saw the way the wind was blowing and joined the Labour Party to win a seat in 1945. His name was Harold Wilson.(6)

5. The American Alliance and 'Self-Determination'

Churchillism was thus an alliance in depth between forces that were all active and influential. Nor was it limited to the domestic arena; one of its most important constituents has been its attachment to the Anglo-American alliance, and this was Churchill's own particular achievement. Between the wars the two great anglophone powers were still as much competitors as allies. During the 1920s their respective general staffs even reviewed war plans against one another, although they had been allies in the First World War. The tensions of the Anglo-American relationship four decades ago and more may seem irrelevant to a discussion of the Falklands affair; yet they made a decisive contribution to the ideological heritage which was rolled out to justify the dispatch of the Armada.

When Churchill took office in 1940 Britain was virtually isolated in Europe, where fascist domination stretched from Warsaw to Madrid, while the USSR had just signed a 'friendship' treaty with Germany and the United States was still neutral. Joseph Kennedy, the American ambassador in London (and father of the future President), was an old intimate of the Cliveden set and a non-interventionist. He had advised Secretary of State, Cordell Hull, that the English 'have not demonstrated one thing that could justify us in assuming any kind of partnership with them'.(7) But Roosevelt, eminently more pragmatic, saw that genuine neutrality would allow Hitler to win; it would lead to the creation of a massive pan-European empire, hegemonic in the Middle East and allied to Japan in the Pacific. On the other hand, by backing the weaker European country—the United Kingdom—the US could watch the tigers fight. Continental Europe would be weakened and Britain—especially its Middle East positions—would become dependent on Washington's good will. In other words, it was not fortuitous that America emerged as the world's greatest economic power in 1945, it simply took advantage of the opportunity that was offered. But this opportunity also provided Britain with its only possible chance of emerging amongst the victors. At issue were the terms of the alliance.

On May 15, immediately after he became Prime Minister and just before Dunkirk, Churchill wrote his first letter to Roosevelt in his new capacity. He asked for fifty old American destroyers and tried to lure the President away from neutrality. The Americans in turn suggested a swap arrangement that would give them military bases in the Caribbean, Newfoundland and Guyana. The trade of bases for old hulks was hardly an equal exchange, but by deepening American involvement it achieved Churchill's overriding purpose, and allowed the President to sell his policy to Congress. Later, as Britain ran out of foreign reserves, Lend-Lease was conceived. The United States produced the material of war while the British fought, and in the meantime relinquished their once commanding economic position in Latin America to Uncle Sam.(8) (So when Peron—whose country had been a British dominion in all but name for half a century—challenged the hegemony of the Anglo-Saxon bankers in 1946 by resurrecting the irredentist question of the Malvinas, it was a demagogic symbol of already fading subordination that he singled out. The real economic power along the Plata now resided in Wall Street rather than the City.)

Four months before Pearl Harbor, the 'Atlantic Charter' (August 1941) consolidated the Anglo-American alliance and prepared US opinion for entry into war. The Prime Minister and the President met off Newfoundland and agreed to publicize a joint declaration. The main argument between them was over its fourth clause. Roosevelt wanted to assert as a principle that economic relations should be developed 'without discrimination and on equal terms'. This was aimed against the system of 'imperial preferences' which acted as a protectionist barrier around the British Empire. Churchill moderated the American position by inserting a qualifying phrase before the clause. Behind the fine words of the Atlantic Charter there was a skirmish and test of wills between the two imperialisms. Although we can now see that the Charter was determined by self-interest, its function was to enunciate democratic principles that would ensure popular and special-interest support in both countries for a joint Anglo-Saxon war. Both governments announced that they sought no territorial aggrandizement or revision that did 'not accord with the freely expressed wishes of the peoples concerned'. Churchill later denied that this in any way related to the British colonies. He was to declare in 1942 that he had not become Prime Minister to oversee the liquidation of the British Empire. Nonetheless he also claimed to have drafted the phrase in the Charter which states that the UK and the US would 'respect the right of all people to choose the form of government under which they will live'.(9) There is a direct lineage between this declaration and Parliament's reaction to the Falklands.

By the end of the year America had entered the war as a full belligerent. On New Year's Day 1942, twenty-six allied countries signed a joint declaration drafted in Washington which pledged support to the principles of the Atlantic Charter. Henceforward the alliance called itself the 'United Nations', and three years later a world organization of that name assembled for the first time. In its turn it enshrined the principles of 'self-determination' codified by Roosevelt and Churchill.

In his memoirs Churchill is quite shameless about the greatness of the empires, British and American, that collaborated together against the 'Hun'. But he cannot hide the constant tussle for supremacy that took place between them, within their 'Anglo-Saxon' culture, in which each measured its own qualities against the other. From their alliance, forced on the British by extreme adversity, came their declaration of democratic aims. Its objective was to secure support from a suspicious Congress that saw no profit in bankrolling an Empire which was a traditional opponent, and which was detested by millions of Irish and German-American voters. It had, therefore, to be assuaged with the democratic credentials of the emerging trans-Atlantic compact. Thus, in order to preserve the Empire within an alliance of 'the English speaking nations', Churchill—imperialist in bone and marrow—composed a declaration of the rights of nations to determine their own form of government. In international terms, this ambiguity is the nodal point of Churchillism. By tracing, however sketchily, its outline, we can begin to decode the extraordinary scenes in the House of Commons on 3 April this year. Above all, it clarifies the ease with which those like Thatcher utilized the resources of the language of 'self-determination'. When she and Foot invoked the UN Charter to justify the 'liberation' of the Falklands because its inhabitants desire government by the Crown, they reproduced the sophistry of the Atlantic Charter. What particular resonance can such terms have for the British Right, when in other much more important circumstances like Zimbabwe they are regarded as the thin wedge of Communist penetration? The answer is to be found in Churchillism, which defended and preserved 'Great' Britain and its imperial order by retreating slowly, backwards, never once taking flight, while it elevated aspirations for freedom into a smoke-screen to cover its manoeuvre.

In 1940 what was at stake was Britain's own self-determination. Invasion was imminent and an embattled leadership had to draw upon more than national resources to ensure even survival. Together with the invocation of specifically British values and tradition, Churchill revived the Wilsonian imagery of 'the great democratic crusade' (a rhetoric that had been improvised in 1917, in response to the Russian Revolution). Such ideals were crucial not only for the North American public but also for anti-fascist militants in the UK and for liberals, who loathed warfare—the experience of 1914-18 was still fresh—and who distrusted Churchill, especially for his evident pleasure in conflict. They were uplifted by the rallying cry that gave both a moral and political purpose to the war as it coupled the UK to its greatest possible ally. While Churchill saved Great Britain, preserved its institutions and brought its long colonial history to bear through his personification of its military strengths, he did so with a language that in fact opened the way for the Empire's dissolution. The peculiarity of this explains how Britain could shed—if often reluctantly and with numerous military actions—so many peoples and territories from its power after 1945 without undergoing an immediate convulsion, or any sort of outspoken political crisis commensurate with its collapse. Instead a long drawn-out anaemia and an extraordinary collective self-deception was set in train by Churchillism.

Perhaps the Falklands crisis will come to be seen as a final spasm to this process of masked decline. Many have seen it as a knee-jerk colonialist reaction. Foreigners especially interpret the expedition to 'liberate the Kelpers' as a parody of Palmerstonian gunboat diplomacy, out of place in the modern world. It may be out of place, but in British terms its impetus is modern rather than Victorian.

The stubborn, militaristic determination evinced by the Thatcher government, her instant creation of a 'War Cabinet' that met daily, was a simulacrum of Churchilliana. So too was the language Britain had used to defend its actions. Both rhetoric and policy were rooted in the formative moment of contemporary Britain, the time when its politics were reconstituted to preserve the country as it 'stood alone' in May 1940.(10) A majority of the population are today too young to remember the event, but most members of Parliament do. The mythical spirit of that anxious hour lives on as a well-spring in England's political psyche.

6. The Incorporation of the Mass Media

There is one final aspect of Churchillism that needs to be mentioned: the relationship he forged with the media. He brought Beaverbrook into the Cabinet, attracted by the energy of the Canadian newspaper proprietor. He himself wrote in the popular press and took great care of his relations with the newspapers, in sharp contrast to Chamberlain who disdained such matters. Then, from 1940 onwards, Churchill's broadcasts rallied the nation: he skilfully crafted together images of individual heroism with the demand for general sacrifice. No subsequent politician in Britain has been able to forge such a bond between leader and populace.

The policies of the modern State are literally 'mediated' to the public via the political and geographical centralization of the national press. London dominates through its disproportionate size, its financial strength and the spider-web of rail and road of which it is the centre. Its daily press has long provided the morning papers for almost all of England, and they are taken by many in Scotland and Wales. A journalistic strike force has been developed, which strangely illuminates the way British political life is exposed to extra-national factors through its peculiar inheritance of capitalist aristocrats and overseas finance. Astor, an American, bought The Times in 1922; Thomson, a Canadian, acquired it in 1966; Rupert Murdoch, an Australian, took it over in 1981. But Astor, educated at Oxford, became anglicized and conserved the paper's character. The hegemonic organ of the nation may have been in the hands of a foreigner financially, but it was edited by Old England all the more because of it. Thomson pretended only to business rather than political influence, but he too made the transition across the Atlantic to become a Lord.

Thomson's son, however, shifted himself and the company back to North America, allowing a Catholic monetarist to lead the paper into the abyss of British labour relations and a year-long, futile closure. Now losing money heavily, The Times was sold to Murdoch, who already controlled the News of the World and the Sun. But he sojourns in New York rather than London. His papers endorsed the Falklands expedition with such a ludicrous enthusiasm that they managed to blemish vulgarity itself. But there remains a sense in which the relationship Churchill established with Beaverbrook came to be faintly echoed in Thatcher's reliance on Murdoch. The bombastic irrelevance of 'down under' helped Thatcher to storm the enfeebled ranks of gentry Conservatism, and gave her a major working-class daily—the Sun. Yet the Sun's very lack of seriousness was a signal that the militarism of the Falklands War was bursting out of the carapace of Churchillism. The cardinal world issues adjudicated by Britain in the past could hardly be applied to taking on Argentina over 1,800 people in 1982. 'UP YOUR JUNTA' was one headline in the paper as it welcomed an initial British success. Was this the way to fight the scourge of fascism?

In 1940 Churchill was willing to do anything and everything for victory. Yet, as we have seen, the meaning of 'victory' became increasingly ambiguous in the course of the war. Churchill fought tooth and nail to defend the Empire, but in the end—to save British sovereignty itself—he formed, and was a prisoner of, a politics which accepted the liquidation of the Empire (except for a few residual outposts like the Falklands ...). The 'regeneration' was sufficiently radical to concede decolonization and the emergence of new states, yet it was not radical enough to adapt the British State itself to its reduced stature. This, indeed, was its fatal success. Galvanized by total war, but, unlike continental Europe, spared the ultimate traumas of occupation and defeat, Britain survived the 1940s with its edifice intact. This fact has often been alluded to as a principal cause of the 'British disease'—the country's baffling postwar economic decline; moreover, it distinguished Churchillism from Gaullism.

The contrast is illuminating. Gaullism was born of defeat at the same moment as Churchillism (May 1940), and was also personified by a right-wing militaristic figure of equivalent self-regard and confidence. But in the long run Gaullism has inspired a far more successful national 'renewal' and adaptation to the increasingly competitive environment. Was this not partially due to the paradoxical fact that the fall of France, by reducing the Third Republic to rubble, ultimately provided a convenient building site for institutional modernization? In Britain, by contrast, the institutions held firm—like St Paul's defying the blitz—with corresponding penalties for this very durability. The most ingenious of Britain's defences against destructive change and forced modernization was the conserving collaboration between labour and capital. The relationship was the very core of Churchillism.

If Churchillism was born in May 1940, it had at least a twenty-year gestation. Keith Middlemas has shown that state, capital and labour sought to harmonize relations in a protean, tripartite affair after the crisis of the First World War. In his view, 'crisis-avoidance' became the priority after 1916 and has dominated British politics ever since. A significant degree of collaboration was achieved between the wars, often covertly, sometimes called 'Mondism' (after the man who headed the cartel that became ICI). One of the key figures on the Labour side was Citrine who led the TUC; another was Bevin, whose direction of manpower was, as we have seen, the backbone of Labour's contribution to Churchillism. Thus wartime corporatism radically intensified and made explicit an already established relationship. In Middlemas's words, 1940 instituted a 'political contract' where previously there had been an unwritten economic one.(11)

It is not my purpose here to try and add further to the list of elements involved. In academic terms it can be said—and it is important to say—that the picture is incomplete. Yet even when the skeleton is fully delineated we might still miss the unifying tissues. For Churchillism was essentially the political flesh of national life: its skin, muscle tonality and arthritis. Churchillism combined the contradictions of capital and the workforce, as well as the desires for political freedom with those of imperial grandeur. Furthermore, it wedded these two distinct sets of opposites into a single enveloping universe of demagogy.

To help show that 'Churchillism' was not a momentary thing, born complete and fully armed from the jaws of defeat in 1940, but was itself a historical process, we can glance at the events of late 1942. Churchill's role was contested to some degree from both left and right after May 1940, in the House of Commons and outside, especially as military defeats continued. It was only in November 1942 that the protests against his leadership ebbed away. That month was in fact the turning point of the war in Europe. It saw the Red Army turn the scales at Stalingrad and begin the destruction of Hitler's forces. It was also the month that the Americans landed in North Africa. This opened a small 'second front' as far away as possible from the main theatre, and signalled the arrival of the United States from across the Atlantic. The huge pincer movement that was to divide Europe between Moscow and Washington was underway, and it meant 'victory' for Britain as well.

Coincidentally, the Beveridge Report was published to massive acclaim at home. It held out the promise of full employment, a health service, adequate pensions and social benefits, at the end of the war. Not only was victory forthcoming, however hard the battles ahead, but the peace would be worth fighting for. Within two weeks of its publication in December 1942, a Gallup survey in the UK discovered that 19 out of 20 had heard of the Report and that 9 out of 10 thought that it should be accepted.(12)

Yet it was none of these things that ensured the supremacy of Churchill. The combination of American power and Beveridge could reassure the liberals, the coincidence of Stalingrad and the Report seemed to confirm hopes on the left. But what mattered most, pathetically so, was the victory at El Alamein. Finally, after months of bungling and defeats in Egypt and Libya, a huge concerted effort by the Empire swung the battle against Rommel, who was massively outgunned. In comparison with the Russian front, the adventures in the North African desert were a small sideshow (even then the British had at one point begun to evacuate Cairo). Yet for Churchill it was El Alamein that was the 'Hinge of Fate'. 'Before Alamein we never had a victory. After Alamein we never had a defeat', he suggested as his conclusion to the campaign.(13) In so far as 'we' meant the Allies, it was not only wrong (Midway had given the Americans control over the Pacific six months before); it was also fortuitous, as it preceded the far greater Russian breakthrough at Stalingrad by only a fortnight. But of course, the 'we' also meant the British, as if the entire course of the conflagration had been determined by the UK and its Empire. As the war was being won, it seemed that Churchill's Britain was winning the war; El Alamein secured his position at home politically. The battle also received disproportionate coverage in the UK, and has continued to do so across four decades of war books. The number of pages dedicated to North Africa has been an index of the desert war's ideological role in preserving British face, not its actual contribution to the world conflict. In this respect the current Falklands fanfare is its descendant.

The contrast in the aspirations represented by the conjuncture of El Alamein and the Beveridge Report was never reconciled by Churchill. His passion for Grand Imperial Strategy blinded him to the upsurge of hope amongst millions of his fellow countrymen, who longed simply for health and security. He took 'strong exception' to the Report and refused to commit the Coalition to its implementation after the war, pointing out that the financial demands it might make could conflict with the costs of occupying enemy countries.(14) When a Commons' debate on the Report was finally held, the Cabinet's prevarication and crassness left it remarkably isolated. All Labour members (bar one) who were not actually in government jobs voted against the Coalition's social paralysis. This firmly associated the Labour Party with the prospects for a new future; one historian considers that its Commons' vote then was probably responsible for winning the 1945 election.(15) The debate over Beveridge also led to the formation of a Tory Reform Group that sought to reconcile the Conservatives to social change.

Which brings us to the party aspect of Churchillism and its legacy: the alternating two-party system, once heralded as proof of Britain's attachment to democracy and now under attack from the SDP as the cause of its decline. Not without reason, for each blames the other for the cocoon the two spun together after 1940. The reformers gained the ascendancy within the Conservative Party as Churchill remained aloof. The result was that despite his dominating national role, it was really Baldwin who was 'the architect of mid-century Conservatism' in attitude and spirit.(16) Yet Churchill's presence as leader of the opposition until 1951, and as Prime Minister again until 1955, prevented the overt expression of reformed Toryism from obtaining a positive, modern profile.

After his disastrous handling of the Beveridge Report, Churchill sensed the public swing away from him. In March 1943 he broadcast his own partial conversion to its principles and proposed a national coalition to continue into the postwar period. The Labour Party was unable to tolerate permanent institutionalization into a subordinate place, at least in such a naked form; it smacked too much of 1931. Rank-and-file militancy stiffened the resolve of the leaders to fight an election after the war. This opened the way for those merely sensible measures of nationalization undertaken by Labour after 1945 to be assailed as the most dreadful socialism by the Tory press. It has long been recognized that Labour's formative moment was not so much 1945 as 1940—Attlee was continuously in the Cabinet (first as Deputy Premier, then as Prime Minister) for over a decade. Labour, rather than the Tories, built the postwar consensus which was then utilized by the Conservatives.(17) To preserve this creative tension, with its invariable centrist bias, violent parliamentary attack was modulated with bipartisan understanding: Churchillism intensified and legitimized the operatics of pseudo-debate. And this was the price for so panoramic an incorporation.

Labour also inherited the full costs of Churchillism internationally. No sooner had Germany been defeated than the United States summarily severed Lend-Lease, making the abolition of the imperial preference system the precondition of any further financial aid. 'The American Loan' became the terrain of a major domestic and international battle over the financial and monetary autonomy of Labour reformism. With the installation of the coalition in May 1940, the old omnipotence of the Treasury over the national economy had been temporarily eclipsed—'in total war it was essential to plan resources first, leaving the financial side to be adjusted accordingly.'(18) In 1945 stringent American conditions helped clear the path for the restoration of the Treasury's authority. Moreover, the immediate financial crisis in war-exhausted Britain—fueled by the continuing foreign exchange shortage and gigantic debts to the dominions—was exacerbated by commitments to a high rate of military expenditure. One year later, for example, Britain still retained a garrison of 100,000 troops in both Egypt and Palestine. Despite Attlee's flirtation with a withdrawal from the Middle East, Bevin and the Chiefs of Staff persuaded him otherwise.(19) Soon the relative costs of Britain's military budget would become a major factor in the slippage of its economic power. Internalizing the Churchillian delusion of the country's destiny in the 'Grand Scheme', the Attlee government and subsequent Labour governments paid on the instalment plan the double costs of Churchillism: economic subordination to America and the projection of an independent world military role.

To sum up: Churchillism condemned to a slow death that which it saved from catastrophe. Its impulse was to preserve the Empire but Churchill was pragmatic enough to pay the costs of commitment to democracy—to 'self-determination' abroad and social reforms at home—that were anathema to the bedrock of his views. His militancy against Nazism made him welcome to the left, and Labour was crucial in putting him into office: it sustained the war effort that he spoke for. Thus Churchillism opened the way for the Labour victory in 1945, the creation of the welfare state, the legislated independence of India, and American domination. So too British socialism made its compromise with the capitalist nation under the benediction of Churchill's cigar and 'V' sign, which in turn crippled the modernizing, radical impulse of the social democrats and liberals who provided the brain power of the Labour Party in office. At the same time, Labour's international independence was clipped by the Cold War, itself dramatically announced by Churchill's famous 'Iron Curtain' speech of March 1946, where, in front of Truman, he called for Anglo-American military co-operation to be formalized into an anti-Soviet alliance.

At this point it may be pertinent to return to the analogy with Gaullism. Churchillism, as I have tried to show, is not a coherent ideology. Rather, it is an ideological matrix within which contending classes are caught, none of them being the 'true' exemplar since each is in some way equally constitutive. (Michael Foot was probably flabbergasted and bitter when Margaret Thatcher donned Churchill's mantle.) Gaullism, on the other hand, developed as an ideologically specific class force. It combatted Communist domination of the resistance movement and was not structurally penetrated by, or indebted to, the organized working class. This allowed the Gaullists a far greater confidence in their exercise of state power. Dirigisme and extensive nationalization were essential for the modernization of French capital, and under Gaullist colours the national could comfortably dominate over the social. In contrast, the legacy of Churchillism has been twofold: not only did it prevent the emergence of a nationally hegemonic Brandt/Schmidt type of social democracy, but it also blocked the Right from creating a dynamic party of national capital.

Andrew Gamble has distinguished three main schools of explanation for Britain's decline since 1945, and notes that there are Marxist as well as bourgeois variants of each. Respectively, these are: (1) the UK's over-extended international involvement and military expenditure; (2) archaic institutions of government including the party system; (3) the 'overloading' of the state by welfare expenditures, compounded by the entrenched position of the unions.(20) Each is partially true, but instead of arguing about which is the root cause of decline, we can note here that Churchillism fathered them all. Churchillism ensured that all parties were committed to a British military and financial role that was spun worldwide; it conserved the Westminster system when it should have been transformed; it brought the unions into the system and initiated a welfare state never efficiently dominated by social democracy. In short, Churchillism ensured the preservation of the Parliamentary Nation and thus Westminster's allegiance to a moment of world greatness that was actually the moment when the greatness ceased. Churchill's National Coalition ensured an astonishing recuperation, one that left the patient structurally disabled for the future and obsessed with magical resurrection from the dead.

For the footnotes, please see the Faber Finds edition of Iron Britannia


Where’s the evidence? Moving from ideology to data in economic and social rights

Open Democracy News Analysis - 10 hours 56 minutes ago

To advance the polarized openGlobalRights debate on economic and social rights, we need more empirical research, and less ideology.

The debate on whether social and economic rights are “true” rights, and whether they should be constitutionalized alongside civil and political rights and equally enforced by courts (i.e., the justiciability issue), has raged for some time. I have followed the discussion with interest since earning a law degree in Sao Paulo, Brazil in 1993. I was in the country’s first cohort of law graduates to study under the post-dictatorship constitution of 1988, which included a long list of progressive social and economic rights.

In that constitution’s first decade, both legal scholars and the Brazilian courts deemed economic and social rights as “non-justiciable” norms (i.e. “programmatic norms”). In 2000, however, a landmark right-to-health case in Brazil’s highest court changed this position.

Ever since, Brazilian lawyers and judges have treated social rights as fully justiciable and immediately claimable entitlements. As a result, the country has witnessed an avalanche of lawsuits, ranging from the right to health to education, housing, social benefits, minimum wage and more.

Staunch supporters of social rights celebrate the Brazilian and similar Latin American cases, including Argentina, Colombia, and Costa Rica. In their view, these countries have made huge strides in the struggle for economic and social rights.

Firm opponents of these rights or of their judicial enforcement, however, see this trend as judicial usurpation of the legitimate role of the democratic process. Problems of economic and social entitlements, the critics say, must be dealt with inside the scope of the national or local legislature, not in the courts.

This polarized and often ideological debate shows no signs of abating, judging by exchanges in these pages, and elsewhere. It has been a sterile debate, however, because proponents tend to make abstract, rather than empirically supported, claims.

There is nothing intrinsically right or wrong with legalizing and judicializing social and economic rights; both can produce good or bad results, depending on context. The debate must therefore focus on the actual empirical evidence, rather than on purely abstract and often ideological, normative arguments.

People of good faith can of course disagree on how to measure the impact of legalization and judicialization. But this is where the debate should be heading among those who support social rights, even if they are sceptical about the legal approach. (With radical libertarians, of course, there is no point in debating).

Fortunately, many academics are beginning to try and measure impact. In these very pages, the contribution of Jacob Mchangama, although polemical and open to several methodological challenges regarding data collection, analysis and conclusions, is an illustration of the kind of work needed. Other interesting efforts in that direction have been recently made in the growing literature. My own research empirically analyses thousands of right-to-health cases adjudicated in Brazilian courts since 2000 and tries to gauge their impact on the actual enjoyment of that right by the Brazilian population. More such research, focusing on particular societies, is required to test sweeping, sceptical claims such as Mchangama’s argument that “the introduction of social and economic rights do not, in general, have robustly positive effects on the population’s long term social development”. This seems especially hyperbolic and implausible in the case of Brazil and, I suspect, in other countries as well.

In the Brazilian case, despite being a fierce critic of what I call the “Brazilian model” of health judicialization—mainly due to the inequitable effects it has had on the public health system—I believe that the 1988 constitutionalisation of social rights has likely contributed to the country’s last 25 years of social improvements.


Demotix/Fabio Teixeira (All rights reserved)

A protest against healthcare privatization in Rio de Janeiro, Brazil.

The record is clear: Brazil's Index of Human Development has climbed from a low 0.5 in 1991, to over 0.7 in 2010—an improvement of over 47%. Life expectancy, similarly, has climbed from 67 in 1990 to 74 in 2012, and infant mortality has decreased from 52 to 13 per 1,000 live births. Illiteracy, moreover, has dropped to less than 10% for the first time in Brazil’s history. And today, most children are enrolled in primary education.

It is of course difficult to prove a direct causal connection between the constitutionalisation of social rights in Brazil in 1988, and the country's improvements over the last 25 years. After all, there are many other important potential influences, and no accepted methodology yet exists that is capable of measuring such impact, although we should continue to develop one. The challenges are many, ranging from availability and reliability of data sources, to lack of consensus on the relevant variables and on how to interpret the available data.

Still, it is implausible to suggest that Brazil’s social rights had nothing to do with the country’s recent significant social progress in the face of strong obstacles from a wave of neoliberal policies that tried hard, and unsuccessfully, to curb the social investments mandated by the constitution. More data is needed at both the macro and micro level. For those believing in social rights as a potentially transformative tool, this is where we must now focus our efforts.

Sideboxes 'Read On' Sidebox: 

Sidebox: 

Related stories:  Open budgets, open politics? Can legal interventions really tackle the root causes of poverty? Poverty and human rights: can courts, lawyers and activists make a difference? Can Brazil promote change without changing itself? Home and abroad: balancing Brazil’s human rights commitments Beyond the courts – protecting economic and social rights Brazil too ‘traditional’ to be a global human rights leader Voicing the right to health
Catégories: les flux rss

Where’s the evidence? Moving from ideology to data in economic and social rights

Open Democracy News Analysis - 10 hours 56 minutes ago

To advance the polarized openGlobalRights debate on economic and social rights, we need more empirical research, and less ideology.

The debate on whether social and economic rights are “true” rights, and whether they should be constitutionalized alongside civil and political rights and equally enforced by courts (i.e., the justiciability issue), has raged for some time. I have followed the discussion with interest since earning a law degree in Sao Paulo, Brazil in 1993. I was in the country’s first cohort of law graduates to study under the post-dictatorship constitution of 1988, which included a long list of progressive social and economic rights.

In that constitution’s first decade, both legal scholars and the Brazilian courts regarded economic and social rights as “non-justiciable” norms (i.e. “programmatic norms”). In 2000, however, a landmark right-to-health case in Brazil’s highest court changed this position.

Ever since, Brazilian lawyers and judges have treated social rights as fully justiciable and immediately claimable entitlements. As a result, the country has witnessed an avalanche of lawsuits, ranging from the right to health to education, housing, social benefits, minimum wage and more.

Staunch supporters of social rights celebrate the Brazilian and similar Latin American cases, including Argentina, Colombia, and Costa Rica. In their view, these countries have made huge strides in the struggle for economic and social rights.

Firm opponents of these rights or of their judicial enforcement, however, see this trend as a judicial usurpation of the legitimate role of the democratic process. Problems of economic and social entitlements, the critics say, must be dealt with within the scope of the national or local legislature, not in the courts.

This polarized and often ideological debate shows no signs of abating, judging by exchanges in these pages, and elsewhere. It has been a sterile debate, however, because proponents tend to make abstract, rather than empirically supported, claims.

There is nothing intrinsically right or wrong with legalizing and judicializing social and economic rights; both can produce good or bad results, depending on context. The debate must therefore also focus on the actual empirical evidence, rather than on purely abstract and often ideological, normative arguments.

People of good faith can of course disagree on how to measure the impact of legalization and judicialization. But this is where the debate should be heading among those who support social rights, even if they are sceptical about the legal approach. (With radical libertarians, of course, there is no point in debating).

Fortunately, many academics are beginning to try to measure impact. In these very pages, the contribution of Jacob Mchangama, although polemical and open to several methodological challenges regarding data collection, analysis and conclusions, is an illustration of the kind of work needed. Other interesting efforts in that direction have recently been made in the growing literature. My own research empirically analyses thousands of right-to-health cases adjudicated in Brazilian courts since 2000 and tries to gauge their impact on the actual enjoyment of that right by the Brazilian population. More such research, focusing on particular societies, is required to test sweeping, sceptical claims such as Mchangama’s argument that “the introduction of social and economic rights do not, in general, have robustly positive effects on the population’s long term social development”. This seems especially hyperbolic and implausible in the case of Brazil and, I suspect, in other countries as well.

In the Brazilian case, despite being a fierce critic of what I call the “Brazilian model” of health judicialization—mainly due to the inequitable effects it has had on the public health system—I believe that the 1988 constitutionalisation of social rights has likely contributed to the country’s last 25 years of social improvements.


Demotix/Fabio Teixeira (All rights reserved)

A protest against healthcare privatization in Rio de Janeiro, Brazil.

The record is clear: Brazil's Human Development Index has climbed from a low of 0.5 in 1991 to over 0.7 in 2010—an improvement of over 47%. Life expectancy, similarly, has climbed from 67 in 1990 to 74 in 2012, and infant mortality has decreased from 52 to 13 per 1,000 live births. Illiteracy, moreover, has dropped below 10% for the first time in Brazil’s history. And today, most children are enrolled in primary education.

It is of course difficult to prove a direct causal connection between the constitutionalisation of social rights in Brazil in 1988, and the country’s improvements over the last 25 years. After all, there are many other important potential influences, and there is no accepted methodology available yet capable of measuring such impact, although we should continue to develop one. The challenges are many, ranging from availability and reliability of data sources, to lack of consensus on the relevant variables and on how to interpret the available data.  

Still, it is implausible to suggest that Brazil’s social rights had nothing to do with the country’s recent significant social progress in the face of strong obstacles from a wave of neoliberal policies that tried hard, and unsuccessfully, to curb the social investments mandated by the constitution. More data is needed at both the macro and micro level. For those believing in social rights as a potentially transformative tool, this is where we must now focus our efforts.


The limits of international criminal justice: lessons from the Ongwen case

Open Democracy News Analysis - 11 hours 48 minutes ago

The arrest of Lord’s Resistance Army commander Dominic Ongwen may provide a much needed boost to the International Criminal Court. But it also highlights the complex challenges faced by international criminal justice.

The ICC, The Hague. Demotix/Alexandre Chevallier. All rights reserved.

The recent capture in the Central African Republic of Dominic Ongwen, a senior commander of the Ugandan rebel group Lord’s Resistance Army (LRA), has been heralded as an important boon for the International Criminal Court (ICC). It represents a much needed reversal of fortune for the court after the ICC prosecutor’s decision in December to abandon charges against Kenyan President Uhuru Kenyatta for his alleged role in the 2007 post-election violence. A few days later the prosecutor also announced that she was suspending the investigation into war crimes in the Darfur region due to a lack of resources and inaction by the United Nations Security Council.

These developments have deepened a simmering crisis of legitimacy facing the ICC. In contrast, Ongwen’s arrest seems to give renewed life to the idea that perpetrators of war crimes and crimes against humanity do not lie beyond the reach of international justice.

Uganda’s agreement to have Ongwen tried by the ICC is in particular seen as a positive sign. Although it self-referred the conflict in northern Uganda to the Court in 2003 and hosted the ICC’s Review Conference in 2010, Kampala has become increasingly hostile towards the Court. Ugandan President Yoweri Museveni strongly opposed the Court’s investigations in Kenya and recently called on African states to quit the ICC which he dubbed “the court of the West” and “a vessel for oppressing Africa”. There was widespread speculation over whether Kampala would comply with its international obligation to hand Ongwen over to The Hague. It was expected that Uganda would instead seek to prosecute Ongwen before a Ugandan court or grant him amnesty, as has been the case with other LRA commanders.

The prospect of some form of accountability for the crimes committed by the LRA is encouraging. The Ongwen case nevertheless highlights a number of complex challenges faced by international criminal justice. This does not necessarily mean that the project of international justice should be abandoned, as some are keen to proclaim. But it does underscore a number of hard truths about international criminal justice that need to be confronted.

Delayed justice

The ICC is often criticised for struggling to provide an immediate response to outbreaks of violence and being powerless to stem human rights violations and conflict. The reasons for this are both practical – the court has limited capacity to get involved in all instances of human rights violations and undertake on-the-ground investigations into ongoing conflicts – and political – it is dependent on state cooperation to undertake investigations and obtain the execution of its arrest warrants. Because the court’s hands are tied on so many levels, it can often only progress at a snail’s pace and in fits and starts.

The Ugandan case proves no exception to this as Ongwen’s arrest occurred nearly ten years after the Court first issued its arrest warrant against him. Additionally, even where the Court is able to gain custody of the accused, trial processes tend to take years. For instance, Congolese rebel leader Thomas Lubanga has been in detention at the ICC since 2006 but his trial process was only finalised in 2014.

Consequently, expectations that international justice will act as a first responder to atrocities are unrealistic and even somewhat dishonest. What the Ongwen case demonstrates is that international justice is a long-term process rather than a momentary event. The inability of international justice to act in the immediate might be disappointing and hard to bear for the victims, but it does not have to signal the failure of justice. Justice might be delayed – most often for regrettable political reasons – but that does not mean it will not happen at some later time, when circumstances on the ground have changed.

Experiences in other countries underscore the fact that providing accountability and redress for mass human rights violations committed by state and non-state forces during war or authoritarian rule is more akin to a marathon than to a sprint. Chile only experienced progress in the prosecution of perpetrators more than a decade after the end of military rule, while Spain is only now starting to address questions of memory and reparations relating to its Civil War and the Francoist repression. The hard truth is that international justice will nearly always offer a delayed response to atrocities. Evaluations of its success should therefore not be based on how it responds to immediate crises, but whether it has advanced the accountability agenda over the long term.

Frontline justice

Closely tied to the above observation is the question of whether it is useful to mobilise the Court in the midst of conflict. This is a hotly debated topic which does not lend itself to easy answers, yet it has become fundamental in framing views about the legitimacy of the Court. Experience so far suggests that frontline justice might not be the best use that can be made of the Court. There is so far little evidence that interventions by international courts are able to exert an immediate and individual deterrence effect (one that incentivises specific perpetrators to end their human rights violations). At most, they will have a general deterrence effect by shifting normative views over the long term. Asking the Court to ‘stop the violence’ thus sets it up for failure and risks fatally undermining confidence in the Court, particularly on the part of victims.

Moreover, intervention in ongoing conflict exposes the Court to excessive politicisation, as it inexorably gets sucked into political wrangling and opens itself up to political manipulation by states. In the Ugandan case, President Museveni mobilised international justice to legitimise his government’s military response to the conflict, divert attention away from the army’s own human rights practices, and to depoliticise the northern conflict. Experiences in Sudan, Kenya and Palestine in turn show how the Court may be used as a bargaining chip in political power plays, whether between states or among domestic elites. This becomes particularly problematic if international justice is used as a substitute for the pursuit of a political or military solution.

While it is impossible for the Court to completely act outside of politics, there is a need to reflect more on circumstances where too much politics may end up immobilising the Court and serving the interest of neither justice nor peace. The hard truth which thus needs to be confronted is that rather than ending conflict, international justice is at growing risk of becoming an additional terrain on which wars are fought out. While it would be unrealistic to simply state that the Court should therefore never intervene in ongoing conflicts, at the minimum a more critical reflection on the conditions under which this happens is needed.

Simplifying responsibility

Defining responsibilities for wartime atrocities might seem straightforward, but is in fact a complex task. Most often, a wide set of actors commit abuses and all sides to a conflict are responsible for atrocities. As an international court that acts in complement to domestic courts, the ICC is not expected to prosecute every perpetrator but instead focuses on ‘those most responsible’ for mass human rights violations. While this might seem the right thing to do from a moral point of view, in practice it has proven problematic. On occasion, the Court’s interpretation of who is most responsible has been subjective and driven more by logistical and institutional considerations than by realities on the ground.

Barlonyo massacre site. Demotix/Samson Opus. All rights reserved.

In Uganda, the ICC has restricted its investigations to LRA crimes on the basis that the crimes committed by the Ugandan army were of a lesser gravity. Such an interpretation of the conflict is highly political and does not necessarily align with who the victims in northern Uganda see as responsible for their suffering. The Court has demonstrated a similar one-sided interpretation of who ‘those most responsible’ are in its investigations in the DRC, the CAR, and most recently Ivory Coast.

The ICC’s reliance on the notion of ‘those most responsible’ does not reflect the realities of modern-day conflicts. The origin of this concept partly lies in widespread beliefs that political instrumentalisation of ethnicity and grievances is the root cause of conflict. While this is sometimes the case, it mostly oversimplifies conflict dynamics, particularly in civil wars. It overlooks the fact that individuals are often driven to engage in armed violence and commit atrocities by a complex mixture of political, economic, ideological, affective and opportunistic reasons. Agency in armed conflicts, and therefore responsibility, is consequently not adequately captured by the notion of ‘those most responsible’. Violence and atrocities are most often the result of a combination of top-down and bottom-up processes and actors.

Armed actors in modern-day civil wars, particularly in Africa, are often characterised by weak internal command and control. Sometimes units of the same armed group are spread out over large areas and operate relatively independently of each other with limited to no direct communication with the overall leader of the group. In such instances, responsibility does not necessarily or exclusively lie with the top leadership but diffuses downward. The hard truth is that international criminal justice’s legal and normative approach to responsibility may need to be rethought in order to better integrate on-the-ground realities of what drives violence and atrocities.

In the case of Ongwen, an added challenge for the Court is that while he is responsible for mass human rights violations, he is also himself a victim of the conflict, as he was abducted by the LRA when he was ten years old. International criminal justice has rarely dealt with situations where the lines between perpetration and victimhood are so blurred. So far, only the prosecutor of the Special Court for Sierra Leone has taken the express decision not to indict child soldiers because of their dual status as victim and perpetrator. Whether the ICC will follow the same line in Ongwen’s case is uncertain, in light of his high profile as a senior LRA commander and the fact that he is being tried for crimes which he committed as an adult. Nevertheless, Ongwen’s particular situation raises further challenges to how responsibility for wartime atrocities is understood and dealt with.


How should we remember Auschwitz?

Open Democracy News Analysis - 13 hours 55 minutes ago

On the 70th anniversary of the liberation of Auschwitz-Birkenau, whose stories are told and whose are marginalized? 

Homosexual Auschwitz prisoner August Pfeiffer, who was murdered in the camp in 1941. Credit: US Holocaust Memorial Museum.

70 years ago today, Red Army soldiers entered Auschwitz-Birkenau and liberated its remaining 6,000 people. Auschwitz was not the largest or the most deadly annihilation camp.

But because it served as a place of both extermination and forced labor, it had a relatively large number of survivors who lived to tell the story, making Auschwitz a synonym for the Holocaust.

How we remember the camp today is very different from the Auschwitz of 70 years ago. The stories we hear fit into neat, sanitized frameworks of melodrama, heroism, monstrosity, and collaboration. The 70th anniversary offers an opportunity to re-examine the way we think about the Holocaust.

Looking at testimonies related to gender and sexuality shows that the history of Auschwitz does not fit into simple, black-and-white categories. Today, we will not be able to unearth the voices of those deemed “unworthy victims,” but thinking about the marginalized can help us to develop a more inclusive past. 

As a Holocaust historian, I encounter a fair share of generalizations. Women raped in the camp brothels are casually described as ‘asocials’, and prisoner functionaries (prisoners working in the camp administration with better access to food and a small measure of power) are seen as 'collaborators with the Germans, who were deservedly hated'.

When teaching, I can dispel these statements, showing how they separate Holocaust history into comfortable, moralizing categories imposed by a postwar society. But what about everyone else?

Dominant survivors’ narratives are constitutive of much of what we have come to accept about the camps and the Holocaust. For many years after the war, the master narrative was shaped by political prisoners; in the 1990s, it was taken over by Jewish survivors. Both groups were large, erudite, and middle class. Of course, not all survivors belonged to the middle class after the war, but those who bore testimony nearly always did. Alternative, conflicting stories have in effect become impossible to tell.

There are extremely few testimonies of those marginalized as 'asocials', homosexuals, or of women forced to work in the camp brothels. These stories are familiar only to a handful of experts. 

A well-known 1946 account by two Czech Jewish political prisoners, Ota Kraus and Erich Schön (later Kulka), titled The Death Factory, shows the emergence of some of these ways of framing stories about Auschwitz. Its discussions of homosexuality and forced prostitution gave birth to gendered, moralizing notions of the camp's society.

Kraus and Kulka spent two years in the Auschwitz locksmith work unit, moved around the entire camp complex, and worked for the resistance movement, smuggling out information about the mass killing. Their book contains statements like: “Pink triangle: worn by persons imprisoned for sexual perversion or homosexuality (Schwule Brüder). In the camps they had a splendid opportunity to corrupt the maximum number of young lads.”

This remark conceals the fact that nearly all men arrested for homosexual conduct (§175 of the German criminal code addressed men only; lesbians were persecuted as 'asocials' or 'criminals') were on the bottom of the camps’ social hierarchy, with a terrifyingly high mortality rate. 

Only a thin social elite, some of the prisoner functionaries, were able to have sexual relationships. These men had sex with men not necessarily because they were gay, but because they were in single sex camps. They picked teenagers because they were coded as feminine, and in exchange for protection, because youngsters could not cope in the brutal camp system alone.

There is much to analyze about this often violent sexual barter and the issue of consent; my point here is to do with the contempt in nearly all survivors’ testimonies. Prisoners engaging in same sex activity were often perceived as equally distressing as other, violent aspects of the camps. This stands alongside similar statements on women in the camp brothels or women arrested for prostitution. Even Jorge Semprún’s achingly graceful What a Beautiful Sunday contains scathing, sexualized remarks on the forced sexual exploitation in Buchenwald.

Most of the homosexuals and 'asocials' from Auschwitz did not survive. When they did, they were not only denied any reparations, but they also often faced renewed persecution. This makes Kulka’s and Kraus’s remarks all the more disturbing. Why would people who witnessed the murder of thousands, the murderous quotidian, who courageously took part in a resistance network, demean their fellow prisoners?

The explanation lies in the logic of our human society. Even in the most extreme of circumstances, people bring with them their moral codes, they differentiate and stratify to feel better about themselves. Kulka and Kraus applied the mores they internalised in the homophobic European society of the early 20th century. Sexuality is one of the salient markers of what society is about and how it expresses its essential values–which is why we need to think about these uncomfortable statements of former prisoners.

Societies often mark those who break crucial behavioral codes as sexually deviant. During the Holocaust, this mechanism applied to violent female guards, whom the victims perceived and later depicted as monsters. People more easily accept that men are violent, but they see women’s brutality as abnormal.

Just as the media depicted Lynndie England, the US soldier who abused prisoners in Abu Ghraib, as a beast, the survivors, including Ota Kraus, viewed the women guards as monsters with an unnatural sexual appetite, or as lesbians.

If you search for the keyword “homosexuality” in the over 52,000 oral histories of Jewish survivors conducted by the Shoah Survivors' Foundation, all you will find eyewitnesses looking back on homophobic encounters with disgust. Of course there were Jewish gay Holocaust victims, and it is likely that they were among the interviewees.

But the heteronormative framework prevented the interviewers from prodding, or the survivors from self-identifying as gay–even though most of the interviews took place between 1994 and 2000. There were only six interviews with people persecuted as gay, who were all gentile, and whose interviews were collected separately.

Indeed, the final staged scene in these interviews–when the survivor is joined by their spouse and grand/children–scripted success as exclusively straight, making it impossible to tell a story of a happy queer life.

The stories that could have been collected are irreparably gone. Even more, the fact that this largest, and excellent, oral history collection of Holocaust survivors exclusively depicts homosexuality as an abhorrent, violent, terrifying aspect of the concentration camps means that the homophobic stories will continue, in one version or other.

This contrasts with depictions of people with disabilities and the Sinti and Roma, whose persecution the scholarship has been addressing in the past thirty years: they may not stand in the center of today's commemorative attention, yet can still be remembered. 

We will not be able to create a canon of the marginalized voices. But what we can, and should, do today is ask questions about the omissions and contradictions in Auschwitz memoirs and histories. Thinking about these gaps enables us to break away from simple, sanitizing narratives and to remember all victims of Auschwitz, different as they were. 

People often ask me, what is the legacy of the Holocaust? I usually explain that there is no moral to the six million slaughtered; we ask for a lesson because we struggle to comprehend such a negation. A redemptive story of the Holocaust, endowing it with meaning, makes us feel better.

The next book Kulka and Kraus wrote, Mass Murder and Profit, discusses the ways in which German industry capitalized on the forced labor of the million of prisoners, under the program named “Annihilation through Labor”. Only in the 2000s did German large business recognize their responsibility and pay reparations to those forced laborers who were still alive. 

Thinking about these questions, it occurred to me that there may be a legacy of the people of Auschwitz after all: developing a more inclusive and less judgmental history; making place for the many different genocide victims; striving for a better, socially just, society; starting with ourselves.

Sideboxes Related stories:  The Holocaust's lessons remain deeply contested Israeli historian Otto Dov Kulka tells Auschwitz story of a Czech family that never existed Rights:  CC by NC 3.0
Catégories: les flux rss

How should we remember Auschwitz?

Open Democracy News Analysis - 13 hours 55 minutes ago

On the 70th anniversary of the liberation of Auschwitz-Birkenau, whose stories are told and whose are marginalized? 

Homosexual Auschwitz prisoner August Pfeiffer, who was murdered in the camp in 1941. Credit: US Holocaust Memorial Museum.

70 years ago today, Red Army soldiers entered Auschwitz-Birkenau and liberated its remaining 6,000 people. Auschwitz was not the largest or the most deadly annihilation camp.

But because it served as a place of both extermination and forced labor, it had a relatively large number of survivors who lived to tell the story, turning Auschwitz into a synonym for the Holocaust.

How we remember the camp today is very different from the Auschwitz of 70 years ago. The stories we hear fit into neat, sanitized frameworks of melodrama, heroism, monstrosity, and collaboration. The 70th anniversary offers an opportunity to re-examine the way we think about the Holocaust.

Looking at testimonies related to gender and sexuality shows that the history of Auschwitz does not fit into simple, black-and-white categories. Today, we will not be able to unearth the voices of those deemed “unworthy victims,” but thinking about the marginalized can help us to develop a more inclusive past. 

As a Holocaust historian, I encounter a fair share of generalizations. Women raped in the camp brothels are casually described as ‘asocials’, and prisoner functionaries (prisoners working in the camp administration with better access to food and a small measure of power) are seen as 'collaborators with the Germans, who were deservedly hated'.

When teaching, I can dispel these statements, showing how they separate Holocaust history into comfortable, moralizing categories imposed by postwar society. But what about everyone else?

Dominant survivors’ narratives constitute much of what we have come to accept about the camps and the Holocaust. For many years after the war, the master narrative was shaped by political prisoners; in the 1990s, it was taken over by Jewish survivors. Both groups were large, erudite, and middle class. Of course, not all survivors belonged to the middle class after the war, but those who bore testimony nearly always did. Alternative, conflicting stories have in effect become impossible to tell.

There are extremely few testimonies from those marginalized as 'asocials' or homosexuals, or from the women forced to work in the camp brothels. These stories are familiar only to a handful of experts.

A well-known 1946 account by two Czech Jewish political prisoners, Ota Kraus and Erich Schön (later Kulka), titled The Death Factory, shows the emergence of some of these ways of framing stories about Auschwitz. Its discussions of homosexuality and forced prostitution gave birth to gendered, moralizing notions of the camp's society.

Kraus and Kulka spent two years in the Auschwitz locksmith work unit, moved around the entire camp complex, and worked for the resistance movement, smuggling out information about the mass killing. Their book contains statements like: “Pink triangle: worn by persons imprisoned for sexual perversion or homosexuality (Schwule Brüder). In the camps they had a splendid opportunity to corrupt the maximum number of young lads.”

This remark conceals the fact that nearly all men arrested for homosexual conduct (§175 of the German criminal code addressed men only; lesbians were persecuted as 'asocials' or 'criminals') were on the bottom of the camps’ social hierarchy, with a terrifyingly high mortality rate. 

Only a thin social elite, some of the prisoner functionaries, were able to have sexual relationships. These men had sex with men not necessarily because they were gay, but because they were in single sex camps. They picked teenagers because they were coded as feminine, and in exchange for protection, because youngsters could not cope in the brutal camp system alone.

There is much to analyze about this often violent sexual barter and the issue of consent; my point here concerns the contempt that runs through nearly all survivors’ testimonies. Prisoners engaging in same-sex activity were often depicted as being as distressing as the camps’ other, violent aspects. This stands alongside similar statements about women in the camp brothels or women arrested for prostitution. Even Jorge Semprún’s achingly graceful What a Beautiful Sunday contains scathing, sexualized remarks on the forced sexual exploitation in Buchenwald.

Most of the homosexuals and 'asocials' from Auschwitz did not survive. When they did, they were not only denied any reparations, but they also often faced renewed persecution. This makes Kulka’s and Kraus’s remarks all the more disturbing. Why would people who witnessed the murder of thousands, the murderous quotidian, who courageously took part in a resistance network, demean their fellow prisoners?

The explanation lies in the logic of human society. Even in the most extreme circumstances, people bring their moral codes with them; they differentiate and stratify to feel better about themselves. Kulka and Kraus applied the mores they had internalised in the homophobic European society of the early 20th century. Sexuality is one of the salient markers of what a society is about and how it expresses its essential values–which is why we need to think about these uncomfortable statements by former prisoners.

Societies often mark those who break crucial behavioral codes as sexually deviant. During the Holocaust, this mechanism applied to violent female guards, whom the victims perceived and later depicted as monsters. People more easily accept that men are violent, but they see women’s brutality as abnormal.

Just as the media depicted Lynndie England, the US soldier who abused prisoners in Abu Ghraib, as a beast, the survivors, including Ota Kraus, viewed the women guards as monsters with an unnatural sexual appetite, or as lesbians.

If you search for the keyword “homosexuality” in the more than 52,000 oral histories of Jewish survivors conducted by the Survivors of the Shoah Visual History Foundation, all you will find is eyewitnesses looking back with disgust on homophobic encounters. Of course there were Jewish gay Holocaust victims, and it is likely that they were among the interviewees.

But the heteronormative framework prevented the interviewers from prodding, or the survivors from self-identifying as gay–even though most of the interviews took place between 1994 and 2000. There were only six interviews with people persecuted as gay, who were all gentile, and whose interviews were collected separately.

Indeed, the final staged scene in these interviews–when the survivor is joined by their spouse, children, and grandchildren–scripted success as exclusively straight, making it impossible to tell the story of a happy queer life.

The stories that could have been collected are irreparably gone. Worse, the fact that this largest, and excellent, oral history collection of Holocaust survivors depicts homosexuality exclusively as an abhorrent, violent, terrifying aspect of the concentration camps means that the homophobic stories will continue, in one version or another.

This contrasts with depictions of people with disabilities and of the Sinti and Roma, whose persecution scholarship has addressed over the past thirty years: they may not stand at the center of today's commemorative attention, yet they can still be remembered.

We will not be able to create a canon of the marginalized voices. But what we can, and should, do today is ask questions about the omissions and contradictions in Auschwitz memoirs and histories. Thinking about these gaps enables us to break away from simple, sanitizing narratives and to remember all victims of Auschwitz, different as they were. 

People often ask me, what is the legacy of the Holocaust? I usually explain that there is no moral to the six million slaughtered; we ask for a lesson because we struggle to comprehend such a negation. A redemptive story of the Holocaust, endowing it with meaning, makes us feel better.

The next book Kulka and Kraus wrote, Mass Murder and Profit, discusses the ways in which German industry capitalized on the forced labor of millions of prisoners under the program named “Annihilation through Labor”. Only in the 2000s did large German businesses recognize their responsibility and pay reparations to those forced laborers who were still alive.

Thinking about these questions, it occurred to me that there may be a legacy of the people of Auschwitz after all: developing a more inclusive and less judgmental history; making place for the many different genocide victims; striving for a better, socially just, society; starting with ourselves.

Related stories: “The Holocaust's lessons remain deeply contested”; “Israeli historian Otto Dov Kulka tells Auschwitz story of a Czech family that never existed”. Rights: CC BY-NC 3.0
Categories: RSS feeds

The Holocaust's lessons remain deeply contested

Open Democracy News Analysis - 13 hours 56 minutes ago

This year's commemoration marks a turning point in Holocaust remembrance. What we want is to honor the dead and empower the living, but often we end up doing neither.

Nazi guards removed camp prisoners' shoes - these images have become iconic of Holocaust remembrance. Credit: Demotix/Amador Guallar.

Today, 70 years after the liberation of Auschwitz-Birkenau, survivors will once again stand outside 'Death Gate', where over a million people were brought to their deaths. 

This year's commemoration marks a turning point in Holocaust remembrance. Ten years ago 1,500 surviving victims made the visit to Auschwitz to remember what happened. This year just 300 will travel. We are rapidly reaching a point where living memory is sliding into historical record and as time moves on, so does the possibility of rethinking how we relate to it. 

For an event so horrifying, the Holocaust can be a difficult thing to comprehend and remember. It occupies a strange, almost contradictory place in modern Jewish life: dwindling in one sense, utterly dominant in another. We feel the need to constantly remember but–with so much political manipulation–also to move on. What we want is to honor the dead and empower the living, but often we end up doing neither.

I remember sitting in a rustic cafe in the old Jewish quarter of Kazimierz last summer. Located in the south-west of Krakow, the district is a weird, idealized pastiche of a Jewish way of life, just 70 km from Oswiecim, where Auschwitz and Birkenau still stand.

It's a place where Poland's interest in Jewry has outpaced the growth of actual Jewish communities. The result is an ugly, sometimes unsettling simulacrum of Jewish culture: a place where cheap figurines of orthodox Jews line the markets, where Jewish-themed cafes sell pork, and where lads on Holocaust tours chug specially-branded 'Kosher' vodka.

I'd come to meet a man called Konstanty Gebert, one of the leading figures in Poland's post-communist Jewish revival. To Gebert, a man dedicated to building a positive image of Polish Jewry, the way the Holocaust is remembered can be deeply frustrating.

Around 70 percent of the world's Ashkenazi Jews trace their ancestry to Poland. For a millennium it was the place where Ashkenazi Jewish life developed and matured, spawning movements as diverse as Hasidism, Bundism and Zionism, and figures as influential as Moses Mendelssohn and the Baal Shem Tov.

But despite this deeply rich past and despite a resurgence of Jewish life in Warsaw and Krakow, there is often a collective refusal to see the country as anything other than a graveyard; little interest in its active communities beyond tours of Nazi extermination camps and visits to Oscar Schindler's factory. 

The worst part of this takes place every year when a stampede of young, ideologically vulnerable Jews hold “The March of the Living”, an event where they learn about the horror of the Holocaust as the ultimate vindication of Zionism.  

On these marches Poland becomes nothing but a place of death, its history, including the Holocaust, reduced to an Israeli redemption narrative. For Polish Jews like Gebert, who have lived openly in the country for decades, the spectacle is both grotesque and hurtful.

“There is a master narrative that views Poland as the epitome of everything that is wrong with the diaspora,” Gebert told me. “And I view it as an insult. The idea is that if you have any doubts about the validity of a strong, patriotic Zionism you come to Poland and look at the alternative. It's an extremely distorted vision of diasporic history where the only thing that matters about Polish Jewry is that the Germans came in and killed us all.”

Poland may be the starkest example of the Holocaust being used to delegitimise the Jewish diaspora. But the same struggle for identity is repeated around the world. Only two and a half weeks ago in Paris, four Jews shopping at a kosher market ahead of Shabbos were gunned down. As the Jews of France mourned, the Israeli President called for their immigration to Israel.

Remembering the Holocaust and Jewish suffering without undermining the legitimacy of living diaspora communities is a crucial task as fears of anti-Semitism surge across Europe. Equally pressing, though, is the need to end the political exploitation of those who died, the abuse of European Jewish history by those who wish to shield Israel from all possible criticism.

Doing this won't be easy. Many Israeli Jews suffer from what the Israeli social psychologist Daniel Bar-Tal has described as a siege mentality: the experience of the Holocaust–the most lachrymose moment in a history of repeated suffering–lies at the centre of that.

As Avraham Burg, former speaker of the Knesset, has argued in his book The Holocaust Is Over; We Must Rise from Its Ashes, its primary consequence is Palestinian suffering: “All is compared to the Shoah” he says, “dwarfed by the Shoah, and therefore all is allowed—be it fences, sieges, crowns, curfews, food and water deprivation, or unexplained killings.”

The Holocaust remains a very genuine trauma for many Jews–both in Israel and the diaspora–and this shouldn't be forgotten. But neither should it enable Jews to hold a monopoly on suffering which blinds them to the pain of others.

Today, first and foremost, we should honor those who were killed 70 years ago. All 11 million. Jews. Roma. Gays. Dissidents. Poles. The mentally and physically disabled. But once the memorial is over we should also remember that however united we are around its unique horror, the Holocaust's lessons remain deeply contested. And as it becomes more and more distant, we need to learn to remember it in ways that respect both the living and the dead.

Related stories: “How should we remember Auschwitz?”; “Collective memory, collective trauma, collective hatred”. Rights: CC BY-NC 3.0
Categories: RSS feeds


Domestic sex trafficking and the punitive side of anti-trafficking protection

Open Democracy News Analysis - 15 hours 56 minutes ago

Despite efforts to automatically label teen and youth sex workers as ‘victims’ of trafficking, and thereby prevent their prosecution, their often extensive interactions with the legal system continue to leave lasting marks.

English sex workers protest being targeted by 'anti-social behaviour' laws in 2013. See Li/Demotix. All Rights Reserved.

Domestic sex trafficking is a decidedly American invention. Legally codified in federal law and in several state laws, sex trafficking in general and domestic minor sex trafficking (DMST) in particular rebrand an old trend—the forced involvement of underage children and teens in commercial sex—and reframe it as a form of modern-day slavery. While prostitution facilitated by pimps or other third-party actors isn’t new per se, what is novel is the viewpoint that sex trafficking, which includes but isn’t limited to American youth and teen girls, is a localised manifestation of a global forced labour problem. Equally recent is the idea that anti-sex trafficking initiatives have the capacity to produce more ‘victim-centred’ results than the criminal justice interventions of the past.

Over the course of the last six years, I have researched the evolution of domestic sex trafficking in the United States and tracked different state and non-state collaborative interventions authorised in its name. What has emerged is that youth deemed ‘at risk’ of domestic sex trafficking may be arrested, charged or placed in detention in order to be protected by law enforcement. Relatedly, many adults are only recognised as victims of sex trafficking after they have been processed as criminal defendants, a problem acknowledged by the existence of special court initiatives to identify “defendants who have been trafficked.” These are initiatives pitched as alternatives to more typical criminal justice responses like arrest, detention, and prosecution, yet they still situate the criminal justice system as the main conduit through which victims of domestic sex trafficking gain access to services, programmes and some modicum of protection.

There is growing recognition among anti-trafficking actors, particularly with respect to youth, that calling kids victims in name but continuing to treat them like juvenile offenders is deeply flawed. One response has been for many states—28 so far—to implement some version of Safe Harbor laws. ‘Safe Harbor’ refers to laws that recognise youth as victims and aim to bring state laws “into line” with the federal Trafficking Victims Protection Act. Another response has to do with language, and one recent effort has sought to change how we talk about sex trafficking situations involving youth. In January 2015, selected advocacy groups in the United States along with members of Congress launched the “No Such Thing” campaign, an effort that seeks to change the treatment of victims of child sex trafficking by calling for the eradication of “the term ‘child prostitute’”. The campaign links a shift in language to changes in how youth are legally treated, implicitly suggesting that referring to girls as ‘trafficked’ rather than ‘child prostitutes’ will set the stage for their treatment as victims rather than offenders.

I welcome a change in how we talk about youths’ experiences with exploitation, no matter its form. I also wholly agree that a departure from the current paradigm, in which youth in some jurisdictions may be subject to some version of a detention-to-protection pipeline, is desperately needed. Yet whether passing more laws or striking ‘child prostitute’ from the vernacular will substantively change how youth are treated remains to be seen, especially if such efforts aren’t accompanied by a critical evaluation of the ‘trafficking’ part of the equation and the interventions it has produced.

Indeed, for all of the recent claims that terms like ‘sex work’ and ‘child prostitute’ mask conditions of exploitation assumed to undergird all commercial, transactional, and survival sexual arrangements, it is striking that a commensurate degree of public outcry has not been lodged against the fraught term ‘trafficking.’ Equally troubling is that collective concerns haven’t been raised about anti-sex trafficking campaigns’ attachment to carceral feminist sensibilities, or about the uneven and sometimes punitive effects that anti-sex trafficking efforts have on migrants, voluntary sex workers, and now domestic youth and adults in the United States.

If there is a language change I’m calling for, it is for students of forced labour and exploitation to become more fluent in the language of collateral consequences. Criminologists and sex workers’ rights groups use the term ‘collateral’ to frame the effects of the carceral state and of anti-sex trafficking initiatives. Many scholars of the U.S. carceral state have focused on the collateral consequences of mass incarceration for individuals and communities, particularly communities of colour. Similarly, sex workers’ rights groups have pointed to the collateral impact of anti-trafficking efforts on migrants who have endured ‘rescue’ raids and shelter-detention, and who have, more generally, borne the punitive brunt of anti-trafficking laws. People now seen as at risk of domestic sex trafficking in the United States must similarly contend with the collateral consequences of the criminal justice system, criminal convictions, and the anti-trafficking interventions designed to help them.

For example, youth may be referred to anti-sex trafficking initiatives through an arrest, which introduces them into the system and opens up access to services or specialised programming. Even though this may not lead to a prosecution per se, it may still produce a criminal record that cannot be expunged, to use a legal postconviction term, without extensive effort. As an August 2014 Congressional Research Report on domestic sex trafficking explains:

These [diversion] programs generally defer prosecution on the condition of successful completion of a treatment program. At that point, charges may be reduced or dismissed. This may or may not involve records being expunged” (emphasis mine).

Though new anti-trafficking programmes appear, on the surface, to depart from more punitive juvenile justice interventions of the past, the devil is in the details. Even when youths are recognised as victims of domestic sex trafficking, their protracted involvement in the justice system may still result in criminal records. Their status as victims may not protect them from the consequences of this, including limits on “future education, employment, housing, financial, and other life opportunities.”

I agree that it is time to move beyond trafficking and to address its structural roots.  In the interim, attention to domestic sex trafficking in the United States presents a timely opportunity to take stock of the collateral consequences the current framing has had on those migrants and domestic populations most directly affected, and to cultivate less punitive ways of interacting with them. This is crucial, as at the end of the day the purpose of anti-trafficking is to ameliorate systems that make people vulnerable to exploitation. This includes challenging the laws, systems, language and state-sponsored interventions that fail to adequately protect people in the first place.

This article draws upon insights from a previously published article “Domestic Sex Trafficking and the Detention-to-Protection Pipeline,” Dialectical Anthropology, 37.2 (2013): 257-276, and a forthcoming book by Jennifer Musto, To Control and Protect, under contract with the University of California Press for release in 2016 

Sideboxes Related stories:  Speaking of “dead prostitutes”: how CATW promotes survivors to silence sex workers Convenient conflations: modern slavery, trafficking, and prostitution
Catégories: les flux rss

Domestic sex trafficking and the punitive side of anti-trafficking protection

Open Democracy News Analysis - 15 hours 56 minutes ago

Despite efforts to automatically label teen and youth sex workers as ‘victims’ of trafficking, and thereby prevent their prosecution, their often extensive interactions with the legal system continue to leave lasting marks.

English sex workers protest being targeted by 'anti-social behaviour' laws in 2013. See Li/Demotix. All Rights Reserved.

Domestic sex trafficking is a decidedly American invention. Legally codified in federal law and in several state laws, sex trafficking in general and domestic minor sex trafficking (DMST) in particular rebrands an old trend—underage children and teens’ forced involvement with commercial sex—and reframes it as a form of modern day slavery. While prostitution facilitated by pimps or other third party actors isn’t new per se, what is novel is the viewpoint that sex trafficking, which includes but isn’t limited to American youth and teen girls, is a localised manifestation of a global forced labour problem. Equally recent is the idea that anti-sex trafficking initiatives have the capacity to produce more ‘victim centred’ results than criminal justice interventions of the past.

Over the course of the last six years, I have researched the evolution of domestic sex trafficking in the United States and tracked different state and non-state collaborative interventions authorised in its name. What has emerged is that youth deemed ‘at risk’ of domestic sex trafficking may be arrested, charged or placed in detention in order to be protected by law enforcement. Relatedly, many adults are only recognised as victims of sex trafficking after they have been processed as criminal defendants, a problem acknowledged by the existence of special court initiatives to identify “defendants who have been trafficked.” These are initiatives pitched as alternatives to more typical criminal justice responses like arrest, detention, and prosecution, yet they still situate the criminal justice system as the main conduit through which victims of domestic sex trafficking gain access to services, programmes and some modicum of protection.

There is growing recognition among anti-trafficking actors, particularly with respect to youth, that calling kids victims in name but continuing to treat them like juvenile offenders is deeply flawed. One response has been for many states—28 so far—to implement some version of Safe Harbor laws. ‘Safe Harbor’ refers to laws that recognise youth as victims and aim to bring state laws “into line” with the federal Trafficking Victims Protection Act. Another response has to do with language, and one recent effort has sought to change how we talk about sex trafficking situations involving youth. In January 2015, selected advocacy groups in the United States along with members of Congress launched the “No Such Thing” campaign, an effort that seeks to change the treatment of victims of child sex trafficking by calling for the eradication of “the term ‘child prostitute’”. The campaign links a shift in language to changes in how youth are legally treated, implicitly suggesting that referring to girls as ‘trafficked’ rather than ‘child prostitutes’ will set the stage for their treatment as victims rather than offenders.

I welcome a change in how we talk about youths’ experiences with exploitation, no matter its form. I also wholly agree that a departure from the current paradigm, in which youth in some jurisdictions may be subject to some version of a detention-to-protection pipeline, is desperately needed. Yet whether passing more laws or striking ‘child prostitute’ from the vernacular will substantively change how youth are treated remains to be seen, especially if such efforts aren’t accompanied by a critical evaluation of the ‘trafficking’ part of the equation and the interventions it has produced.

Indeed, for all of the recent claims that terms like ‘sex work’ and ‘child prostitute’ mask conditions of exploitation assumed to undergird all commercial, transactional, and survival sexual arrangements, it is striking that a commensurate degree of public outcry has not been lodged against the fraught term ‘trafficking.’ Equally troubling is that collective concerns haven’t been raised about anti-sex trafficking campaigns’ attachment to carceral feminist sensibilities, or about the uneven and sometimes punitive effects that anti-sex trafficking efforts have on migrants, voluntary sex workers, and now domestic youth and adults in the United States.

If there is a language change I’m calling for, it is for students of forced labour and exploitation to become more fluent in speaking the language of collateral consequences. Criminologists and sex workers’ rights groups use the term collateral to frame the effects of the carceral state and anti-sex trafficking initiatives. Many scholars of the U.S. carceral state have focused on the collateral consequences of mass incarceration on individuals and communities, particularly communities of colour. Similarly, sex workers’ rights groups have pointed to the collateral impact of anti-trafficking efforts on migrants who have endured ‘rescue’ raids, shelter-detention, and, more generally, borne the punitive brunt of anti-trafficking laws. People now seen as at-risk of domestic sex trafficking in the United States must similarly contend with the collateral consequences of the criminal justice system, criminal convictions, and the anti-trafficking interventions designed to help them.

For example, youth may be referred to anti-sex trafficking initiatives through an arrest, which introduces them into the system and opens up access to services or specialised programming. Even though this may not lead to a prosecution per se, it may still produce a criminal record that cannot be ‘expunged’ (to use the legal post-conviction term) without extensive effort. As an August 2014 Congressional Research Report on domestic sex trafficking explains:

“These [diversion] programs generally defer prosecution on the condition of successful completion of a treatment program. At that point, charges may be reduced or dismissed. This may or may not involve records being expunged” (emphasis mine).

Though new anti-trafficking programmes appear, on the surface, to depart from more punitive juvenile justice interventions of the past, the devil is in the details. Even when youths are recognised as victims of domestic sex trafficking, their protracted involvement in the justice system may still result in criminal records. Their status as victims may not protect them from the consequences of this, including limits on “future education, employment, housing, financial, and other life opportunities.”

I agree that it is time to move beyond trafficking and to address its structural roots.  In the interim, attention to domestic sex trafficking in the United States presents a timely opportunity to take stock of the collateral consequences the current framing has had on those migrants and domestic populations most directly affected, and to cultivate less punitive ways of interacting with them. This is crucial, as at the end of the day the purpose of anti-trafficking is to ameliorate systems that make people vulnerable to exploitation. This includes challenging the laws, systems, language and state-sponsored interventions that fail to adequately protect people in the first place.

This article draws upon insights from a previously published article, “Domestic Sex Trafficking and the Detention-to-Protection Pipeline,” Dialectical Anthropology, 37.2 (2013): 257-276, and a forthcoming book by Jennifer Musto, To Control and Protect, under contract with the University of California Press for release in 2016.

Related stories: “Speaking of ‘dead prostitutes’: how CATW promotes survivors to silence sex workers”; “Convenient conflations: modern slavery, trafficking, and prostitution”
Categories: RSS feeds

The Big Society - the passing of Cameron's dream

Open Democracy News Analysis - 20 hours 45 minutes ago

This was the government's revolutionary idea, its guiding light. Now, the government seems to have given up on it. What can we learn from the final audit of the 'big society'? The audit's primary author explains.

Flickr/The Prime Minister's Office

What happened to the Big Society? David Cameron, launching the initiative in 2010, said 'Today is the start of a deep, serious reform agenda to take power away from politicians and give it to people.' Today, references to the Big Society have been largely erased from the Government’s website. The Prime Minister no longer talks about his big idea, which remains only in the form of specific initiatives: for example, Police and Crime Commissioners, Academy and Free schools and the National Citizen Service.

But people are entitled to know whether the Big Society worked or not, particularly in the run-up to an election. Greater transparency and accountability are Government objectives, and rightly so. Whose Society? The Final Big Society Audit, published today, takes a long hard look.

This isn't just about this Government. The Big Society may be seen as David Cameron's personal project but it can be traced back to Tony Blair’s Third Way, which sought to unlock potential within society beyond the state and the markets. The Prime Minister who promised "to empower communities and citizens and ensure that power is more fairly distributed across the whole of our society" was Gordon Brown in the White Paper, Communities in Control: real people, real power.  Numerous Labour Government initiatives were re-launched by this Government under a different name.

What were the results? The conclusion of Civil Exchange's three-year mapping against this Government’s public commitments is that, despite some genuinely positive initiatives, the Big Society failed to deliver against its original goals. Attempts to create more social action, to empower communities and to open up public services, the core goals of the Big Society, have not worked, with some positive exceptions. The Big Society has not reached those who need it most. We are more divided than before.

That’s not the end of the story: the next Government, whoever forms it, is likely to pursue similar goals, even if the Big Society label sinks without trace. The reasons are clear: people expect more control, governments can only deliver more with less with the help of wider society and a flagging democracy can only be revitalised by sharing more power. Cuts in public spending make the case for a Big Society approach more, not less compelling. Indeed, the Labour Party is on the case with its One Nation project and “people powered services.”

But are politicians really able to deliver a “good Big Society”? It's not just that they struggle with giving up power once they enter office. They also keep repeating the same mistakes. So what are the lessons they need to learn?

First, a future Government must replace the market-based, public sector management model that has dominated the thinking of successive governments.  On past performance, individual choice and competition for contracts will not deliver the radical changes needed to ensure those who most need the support benefit equally from public services or to make public services more effective, at lower cost. It has delivered a ‘race to the bottom’ on contract price and the dominance of large private sector ‘quasi-monopoly’ providers who lack transparency and accountability. It has not closed the educational attainment gaps and health inequalities between rich and poor. A model of collaboration, rather than competition, would mobilise wider social forces to deliver outcomes, such as better health, which the state alone cannot deliver or purchase through a contract.

Second, the next Government must share and devolve more power. There have been positive examples of communities taking more control and redesigning services under the Big Society. But real power has not been transferred on any scale. Greater devolution creates an opportunity for a new kind of government at local level that works in a genuinely collaborative way.

Third, targeting is needed. The Government failed to focus support sufficiently where the need is greatest – the least affluent and least advantaged communities, where cuts in both public services and the voluntary sector have fallen hardest.

Fourth, collaboration with civil society—the voluntary sector, faith groups, trade unions, businesses—is needed. Sadly, the Big Society leaves the voluntary sector—a key source of support for disadvantaged groups and a route to understanding their needs—not strengthened but weakened. Yet it remains a major resource that should be better supported and treated as a key partner.

Finally, we need to see a fundamental change in the role business plays – not just more corporate social responsibility programmes but pursuing a purpose that serves society.

All this can’t be achieved without a different kind of government. Unless we look back at what happened to the Big Society, that is very unlikely to happen.

Categories: RSS feeds

How many judicial review cases are received by UK government departments?

Open Democracy News Analysis - 20 hours 45 minutes ago

Government insists judicial challenges are now so frequent that they must be curbed in law. But the numbers don't seem to back this up.

During the debate in parliament on Monday 1 Dec 2014, Chris Grayling (Lord Chancellor and Secretary of State for Justice) was asked how many Judicial Review cases are brought against government ministers.

Julie Hilling (Bolton West) (Lab): The right hon. Gentleman says “all the time”. Will he give us a notion of how often that is—once a day, once a week, once a month? How many times have such cases happened since April, for instance? He is giving the impression that they happen all the time, but what does that mean?

Chris Grayling: A Minister is confronted by the practical threat of the arrival of a judicial review case virtually every week of the year. It is happening all the time. There are pre-action protocols all the time, and cases are brought regularly. Looking across the majority of a Department’s activities, Ministers face judicial review very regularly indeed. It happens weeks apart rather than months apart.

The minister gave no actual numbers in his answer. So, in this post I’ve looked at how many judicial review (JR) cases were received by central government departments (‘ministers’) over the past few years. This analysis relates to my work with Christopher Hood in the Politics Department at Oxford.

There is a good discussion of the wider issues raised by Chris Grayling’s responses during that debate by Mark Elliot on the Public Law for Everyone blog. In this post I just look at the numbers.

The number of JR cases received by each UK government department can be found in this database of JR cases from 2007 to 2012, which lists over 57,000 JR applications, giving the topic, defendant and outcome of each case. For this analysis, I corrected the data for misclassifications according to this revision note. (More recent JR databases are correctly classified, but do not report the defendants).

The largest group of JR applications was Immigration and Asylum (IA) cases (almost 10,000 in 2012, or 80% of the total). In future, IA cases will be almost entirely considered by tribunals, so I’m excluding them here (and in any case, they do not appear to be the type of case that the Justice Secretary was referring to).

There were just under 2,500 ‘non-immigration’ cases per year, which I show in the graph below split into four broad categories of defendant: Criminal Justice System (e.g. courts, police, prisons), Local Authorities, Central Government Departments, and ‘Other’ (e.g. medical councils, schools, NHS and tribunals).
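The filtering and tallying behind these figures can be sketched in a few lines of Python. This is a minimal illustration only: the field names and sample records below are hypothetical, not the actual schema of the MoJ database.

```python
from collections import Counter

# Illustrative records only: the real MoJ database lists the topic,
# defendant, and outcome of each of the 57,000+ JR applications.
cases = [
    {"defendant": "Home Office", "topic": "Immigration and Asylum"},
    {"defendant": "Ministry of Justice", "topic": "Prisons"},
    {"defendant": "HM Revenue & Customs", "topic": "Tax"},
    {"defendant": "Local Authority", "topic": "Town and Country Planning"},
    {"defendant": "Ministry of Justice", "topic": "Prisons"},
]

# Exclude Immigration and Asylum (IA) cases, as in the analysis above.
non_ia = [c for c in cases if c["topic"] != "Immigration and Asylum"]

# Tally the remaining cases per defendant.
counts = Counter(c["defendant"] for c in non_ia)
print(counts.most_common())
```

Applied to the full database, the same grouping by defendant produces the per-category and per-department counts discussed here.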

Judicial Review Cases (excluding Immigration and Asylum). Source: MoJ database 2013

Central government departments received fewer than 500 JR cases each year, broken down by department as follows:

Only four departments received more than 15 judicial review cases in 2012. By far the most were received by the Ministry of Justice (MoJ, 257), followed by HM Revenue & Customs (HMRC, 51), the Home Office (39) and Department for Work and Pensions (DWP, 31). The other ~20 Whitehall departments received an average of about 5 cases a year (a few received none over the six years and don’t appear in the table).

Non-IA JR cases received by government departments in 2012. Source: MoJ database 2013.

The topic ‘Prisons’ made up 70% of MoJ cases; ‘Tax’ and ‘VAT’ comprised 90% of HMRC cases; and 80% of DWP cases were about benefits and social security. No single topic dominated the Home Office non-IA cases (though this department also received the vast bulk of IA cases).

The information in the database is not detailed enough to identify the ‘politically motivated’ cases that particularly concern Chris Grayling. Planning is sometimes mentioned as an area in which JR is used to delay important infrastructure projects. In 2012 just 10 JR cases regarding ‘Town and Country Planning’ were addressed to government departments (9 to DCLG and 1 to DEFRA) compared with 188 addressed to local authorities.

Considering the vast numbers of government decisions that are made every year, we don’t see an explosion of judicial challenges to central government departments. The Ministry of Justice is a clear outlier, receiving more than half of all such cases (and another large tranche of cases is concerned with the wider criminal justice system). Perhaps the minister should look at the quality of decision-making in his own department and its agencies before seeking to limit judicial review.

 

This post originally appeared on Ruth Dixon’s blog.

Categories: RSS feeds
