On the edge of turbulence: India between uncertainty and promise

Lecture at the Indian Institute of Management Ahmedabad on 21 November 2024

Ladies and Gentlemen:

It gives me great pleasure to be in your midst. A long time has passed since I spent three of my formative years in the late 1960s as a student at Vallabh Vidyanagar. That was an exciting period in the life of a teenager – young, enthusiastic, naive and adventuresome. I was touched by the tolerance of the people I met. The experience triggered a desire in me to go places, to learn about peoples and cultures in distant lands, and to try to understand how history shapes societies and the interaction between peoples. A couple of years after leaving Gujarat, I left India to work for the United States federal government. I saw three presidents in office – Richard Nixon, Gerald Ford and Jimmy Carter. It was a tumultuous period in American politics: the Watergate scandal forced President Nixon out of office; the Vietnam War was at a decisive point, and American forces were about to withdraw; the Cold War, the 1973 Arab-Israeli conflict and the OPEC oil embargo were wreaking havoc on the economy. In India, too, a political crisis was brewing, soon to climax in the state of emergency. In an uncertain world, a youth in his early 20s found excitement and plenty of scope to learn about different societies, peoples and problems. After a hectic period in which I travelled from coast to coast across the American continent, my next destination was Europe. Forty years on, I am here again, and I ask myself: what has changed?

The topic I have chosen today has a dual rationale. India in the twenty-first century is the second largest country by population; it is a democracy in which, after over-heated campaigns, when the governing party loses a general election, the transfer of power happens peacefully. India’s economists, scientists and technicians are among the world’s most renowned. It was the eleventh largest economy by market exchange rates in 2013, according to the International Monetary Fund. The number of students enrolled in tertiary education is around seventeen million. India is a leading emerging economy, inviting comparisons with China. But India has problems, too: bureaucracy, corruption and inertia are often cited. There are disturbed areas inside India along the periphery. There is a history of adversarial relations with neighbours. And a vast region of high turbulence, the Greater Middle East, rife with internal strife and external interventions, lies just to the west. To sum up, India is on the edge of disturbance – unceasing and violent – and, at the same time, on the threshold of bigger and better things which might come. It is a journey between uncertainty and promise. In this context, how the country navigates is crucial.

I want to spend a few moments on the role of ideology or dogma in determining foreign policy. Strict adherence to ideology of whatever kind offers a vision that is fanciful. It is stark, clear, simple. Reality is far more complex: it displays contradictions and often requires skilful navigation in an uncertain world. Dogma may seem to provide a pure vision. Whether that pure image can be attained is questionable, because reality often imposes limitations. Reality informs us not only about what can be changed, but also about things we cannot do much about.

I am sometimes reminded of an observation made by Harold Macmillan, Britain’s Prime Minister, in the late 1950s. A journalist asked him what blows governments off course. Macmillan’s reply was: “Events, dear boy, events.” Macmillan had taken over as prime minister after the failed Anglo-French attempt in 1956 to seize control of the Suez Canal, which had been nationalised by President Nasser of Egypt. The military debacle had forced Prime Minister Anthony Eden to resign. Macmillan, who succeeded Eden in 1957, knew very well the power of events to shape history. His aphorism “Events, dear boy, events” is now part of the lexicon of politics.

In the 1960s, Prime Minister Harold Wilson gave us another famous maxim: “A week is a long time in politics.” What he meant was that things can change within a very short period, and what looked possible only recently may not be achievable now.

I want to make two general points which are essential to understanding a country’s relations with the outside world. On the one hand, foreign policy is a function of domestic needs, since among the most important functions of a state are to defend its territory from external and internal threats, to maintain order and to ensure its people’s welfare. On the other hand, from time to time there are external events over which a country has little or no control, and such events can derail its policy.

Let us therefore look back. India gained dominion status in August 1947 and became a sovereign republic in January 1950. It was a vast but fragile country, wary of Western imperial powers; its challenges were huge – poverty, hunger, disease, lack of development; resources were limited; the task immense; and the choice was stark: development or military build-up. There were policy differences, but idealists prevailed over realists. The 1950s were the decade of Panchsheel, incorporating the five principles of “mutual respect, non-aggression, non-interference in each other’s internal affairs, equality and cooperation, and peaceful coexistence.”

In the years immediately after independence, India was at its most vulnerable, but it was recognized in the growing community of emerging nations for its moral leadership, the way it emerged from the trauma of partition, its commitment to democracy and its resolve to achieve self-sufficiency, which would reinforce its independence. The country seemed willing to walk away from instant gains that could jeopardise its long-term interests.

Then, major events occurred on either side of the year 1960. The Tibetan uprising was followed by a Chinese crackdown and the flight of the Dalai Lama to India in 1959. India’s decision to grant refuge to the Dalai Lama came at a certain cost to India’s relations with China. But to hand over the twenty-four-year-old Tibetan leader to the Chinese was inconceivable.

Three years after the Dalai Lama’s escape to India, there was a fierce border war with China. Other events were also responsible for the China-India breakup, but 1959, the year of the Tibetan crisis, triggered a major deterioration in relations between Beijing and Delhi. The Chinese leadership felt humiliated by the tumultuous reception the Dalai Lama received in India, and the friendship was over. The Chinese leadership linked the Lhasa uprising to India’s expansionist policy, and Prime Minister Nehru’s Tibet policy was fiercely criticized. Addressing the Standing Committee of the Communist Party’s Politburo, Mao Zedong told members not to be afraid of irritating Nehru and causing trouble for him. At the same time, much of the non-Communist world was gripped by what the American diplomat William Bundy, writing in the French magazine Preuves, described as a “fearful view” of China. The humiliation felt in Beijing and the suspicion in Delhi were too great for the relationship to survive. In his review of Neville Maxwell’s book India’s China War, controversial in India but acclaimed abroad, Gregory Clark wrote that “up until 1959, Nehru genuinely favoured Zhou Enlai’s compromise for an Aksai Chin/NEFA exchange.” Nehru had been trying to prepare India’s public opinion. But after the 1959 escalation in Tibet and the raised passions in India, Clark said, “Nehru lost control of the situation.”

The war marked the failure of India’s “Forward Policy” – establishing advance posts that could only be supplied by air and could not be defended at all. But 1962 was a turning point, for a new realist era in Indian foreign policy had begun. Two years on, China carried out a nuclear test – the beginning of a nuclear arms race in Asia.

The 1965 conflict with Pakistan helped India recover its pride. Indian forces made territorial gains, and many Indians felt that the country had shaken off the 1962 defeat by China. However, the Tashkent agreement, concluded under some pressure from the Soviet Union, reversed those gains: under the pact, the Indian army was required to withdraw from the territory it had captured from Pakistan.

Two further events happened in the 1970s. First, the 1971 India-Pakistan war, resulting in the dismemberment of Pakistan and the emergence of Bangladesh in its eastern half. That was when India finally shook off the “China syndrome.” Second, in 1974, ten years after China, India carried out a nuclear test. India’s test made Pakistan’s nuclear weapons programme inevitable, and once that programme became a reality, the advantage India had secured would eventually diminish. Then, in 1975, the leader of Bangladesh, Sheikh Mujibur Rahman, was assassinated. India lost a close ally and some of the strategic gains made in the 1971 war with Pakistan. Looking back, the 1971 victory over Pakistan has been a mixed blessing.

In the late 1980s, Prime Minister Rajiv Gandhi thought it possible to impose peace in Sri Lanka’s ethnic conflict under the India-Sri Lanka accord. A large military force was sent to the island state, but there were unintended consequences. Among neighbours, the image of India behaving like a “big brother” was reinforced.

The Soviet invasion of Afghanistan in December 1979, and the proxy war the United States fought against the Soviets in the 1980s, had profound consequences for India, the region and beyond. Few bystander countries had any control over the long and violent sequence of events of the 1980s. And the consequences of the growth of Islamism and the collapse of Soviet communism were far-reaching. I will explain the emergence of a wholly new context which was unforeseen and unpredictable. For example, by helping the most hard-line armed groups in the war against Soviet and Afghan communism, the United States greatly contributed to the phenomenon of Islamist radicalization. Erstwhile allies turned against the United States. Radicalization, once begun, cannot be switched on and off at will. Militant groups are reborn again and again. They split. And each time, they mutate into more violent splinters.

By the mid-1990s, the backlash could be witnessed across frontiers in India and faraway lands. What happened in the 1980s not only radicalized sections of Indian society. It more or less closed India’s foreign policy alternatives.

In the 1980s, India had reacted with at most muted criticism of the Soviet occupation of Afghanistan. In the 1990s, India had to reorient its foreign policy towards the United States. And following the events of 9/11, India came to support the US-led invasion of Afghanistan in October 2001. As it was then, India’s objective now is to counter Pakistan and China. Also as before, the environment around India is adversarial. So India has built what I view as a diplomatic flyover to Israel, bypassing the Muslim and Arab world. The flyover then goes on from Tel Aviv to Washington. And the spaces in between – meaning the Muslim world and Europe – have not received the attention they perhaps deserve. As India and Pakistan remain locked in a cold war, each side tries to outmanoeuvre the other to get the United States to punish its rival. Each side seeks to demonstrate that it is the true ally in the American-led “war on terrorism.”

There are two unchangeable factors in international politics: one is location; the other, neighbours. In the vast South Asian subcontinent, India has emerged as the dominant country and the strongest economy. At the same time, there is considerable historical baggage which bears heavily on Indian foreign policy. Neighbours are near, yet far. This explains India’s quest to build bridges and avoid the risks that have accumulated over the long run. The impact of events in the Greater Middle East over the centuries has been undeniable. And it continues to be the case.

Much of my academic work on Middle East history and contemporary politics involves an attempt to explore how war and humiliation affect human attitudes, and how cultures evolve. Here Milan Kundera, one of the most recognized Czech writers, is worth citing. Kundera was twice expelled from the Communist Party; forced to leave his homeland for France seven years after the 1968 Soviet invasion of Czechoslovakia; then stripped of his Czech citizenship. He became a French citizen in 1981. In his novel Immortality, Kundera wrote: “The basis of shame is not some personal mistake of ours, but the ignominy, the humiliation we feel that we must be what we are without any choice in the matter, and that this humiliation is seen by everyone.”

Kundera’s words capture the powerful emotion that humiliation is – whether it applies to an individual, a community or a nation. Part of my thesis is that the bigger the group that feels humiliated, the greater the chance that the humiliator’s act will have far-reaching consequences.

I discuss the role of shame in my book Imperial Designs: War, Humiliation and the Making of History, the final volume of a trilogy on the Middle East. Imperial Designs follows Breeding Ground, a study of Afghanistan from the 1978 Communist coup to 2011. Based on Soviet and American archives, Breeding Ground covered the gradual disintegration of the Afghan state – a particularly violent phase in that country’s history, including the Soviet invasion of December 1979; America’s proxy war against the Soviet forces in the 1980s; the collapse of Soviet and Afghan communism around 1990; the rise of the Taliban and the creation of safe havens for groups like al Qaida; the circumstances of America’s return to Afghanistan after the events of September 11, 2001; and the war thereafter. The second book, Overcoming the Bush Legacy in Iraq and Afghanistan, evaluates George W. Bush’s presidency in terms of the “war on terror”; it is about the invasions of both Afghanistan and Iraq, and their aftermath.

I suggested in these books that among the factors contributing to the events of September 11, 2001 was the sense of humiliation felt in the Muslim world, the Middle East in particular. The history of Arabs and Persians is rich and interesting. They have fought many wars over the centuries. The history of external actors meddling in the region – the Ottomans, the British and the Americans – is intriguing. And the consequences have been profound and far-reaching.

The collapse of the Ottoman Empire around the First World War in the early twentieth century and its after-effects; the discovery of oil in the region and the division of Arab lands between Britain and France; the creation of the state of Israel after the Second World War and its meaning for Palestinians and Arabs; and further conflicts. In Iran, the early democracy movement; the 1953 overthrow of the elected government of Prime Minister Mohammad Mosaddegh in an Anglo-American intelligence plot; and subsequent events over a quarter century until the overthrow of the Pahlavi dynasty in the 1979 revolution. Examination of events such as these is relevant to any study of the role of humiliation in the shaping of the contemporary Middle East.

The upheavals of recent decades in the Greater Middle East have their origins in the events around the First World War a century before, when Ottoman rule was replaced by British and French colonial rule using the instrument of Mandate.

Conflict between tribes and wars with external invaders have determined the thinking and behaviour of local peoples through history. Vast sandy deserts, a free spirit and a warrior instinct are fundamental characteristics of Middle Eastern cultures. Repeatedly, wars have put these instincts on display and have reinforced them.

Where desert communities were sparsely located, there was less interaction between them and more within each community or tribe. The emphasis was on cohesion within each tribe. Personal possessions among the general populace were few; the lifestyle was frugal for most members. Wealth tended to accumulate with chiefs. Honour – the loss of which causes humiliation – and promises betrayed became strong drivers of human behaviour. Defending the honour of a person, a clan, a tribe or a nation, and regaining it after humiliation, became of the utmost importance. Past injustices and unsettled disputes still persist. More have been added to the long list in the new century, and we are only living through its second decade.

One of the earliest references to imperial behaviour in literature can be found in Plato’s work The Republic. There is a dialogue between Socrates and Glaucon about rapid development in society. The essence of that dialogue is that an increase in wealth results in war, because an enlarged society wants ever more for consumption. Plato’s explanation is fundamental to understanding the causes of war. This is how empires rise, military and economic power being essential to further their aims. A relevant section in Plato’s Republic reads: “We shall have to enlarge our state again. Our healthy state is no longer big enough; its size must be enlarged to make room for a multitude of occupations none of which is concerned with necessaries.”

Nearly two and a half millennia after Plato, Michael Hardt and Antonio Negri offered a Marxist interpretation of neo-imperialism in the twenty-first century in their book Empire. Their core argument in the book, first published in 2000, was that globalization does not mean the erosion of sovereignty, but rather a new set of power relationships embodied in national and supranational institutions like the United Nations, the European Union and the World Trade Organization. According to Hardt and Negri, unlike European imperialism, which was based on the notions of national sovereignty and territorial cohesion, empire is now a concept in the garb of the globalization of production, trade and communication. It has no definitive political centre and no territorial limits. The concept is all-pervading, so the “enemy” must be someone who poses a threat to the entire system – a “terrorist” entity to be dealt with by force. Written in the mid-1990s, Empire, I think, got it right, as events thereafter would testify.

At an early stage of the “war on terror,” Johan Galtung said something in 2004 which looks like a fitting definition of the term “empire.” Galtung described empire as “a system of unequal exchanges between the centre and the periphery.” The rationale of his thesis is that empire “legitimizes relationships between exploiters and exploited economically, killers and victims militarily, dominators and dominated politically and alienators and alienated culturally.” Galtung observed that the U.S. empire “provides a complete configuration,” articulated in a statement by a Pentagon planner. That planner was Lt. Col. Ralph Peters, who in 1999 published the book Fighting for the Future: Will America Triumph?. Peters wrote: “The de facto role of the United States Armed Forces will be to keep the world safe for our economy and open to our cultural assault. To those ends, we will do a fair amount of killing.”

What did the Pentagon planner mean by keeping the world safe for America’s economy and open to its cultural assault? To appreciate the relationship between economic interest and cultural symmetry, we need to understand culture as a broad concept. The English anthropologist Edward Burnett Tylor (1832–1917) defined culture as “that complex whole which includes knowledge, belief, art, morals, law, customs and many other capabilities and habits acquired by … [members] of society.” Culture is the way of life which people follow in society without consciously thinking about how it came into being. Robert Murphy described culture as “a set of mechanisms for survival” which “also provides us with a definition of reality.” It determines how people live, the tools they use for work, and the entertainment and luxuries of life. Culture is a function of the homes people live in, the appliances, tools and technologies they use – and of ambitions.

I would therefore argue that culture, in economic terms, is about consumption. Culture defines patterns of production and trade, demand and supply, as well as social design. I will give a number of examples. In Moscow, the old Ladas and Volgas of yesteryear began to be replaced by Audi, Mercedes and BMW cars in the late twentieth century; the number of McDonald’s restaurants in Russia rose after the launch of the first in the capital in 1990; in Russia, China and India, luxury goods from cars to small electronic goods and jeans became objects of desire for the growing middle classes, while grinding poverty still affected vast numbers of their fellow citizens. Consumption of luxury goods in China and India rose as their economies grew. Following the U.S.-led invasions of Afghanistan and Iraq, sales of American brands in Kabul and Baghdad increased. Such trends form an essential part of what defines societal transformation and, at the same time, represent a powerful cause for opposition. To comprehend this vast phenomenon, we need familiarity with the nature of hegemony and its effects.

The hegemon flaunts its power, but also reveals its limitations. It invades and occupies distant lands, but cannot end opposition from determined resistors. The economic interests of the hegemon, and the way of life it advocates, are fundamentally interlinked. The hegemon claims the superiority of its own culture and civilization over the adversary’s. Its own economic success depends on the exploitation of the natural and human assets of others. At home, the hegemon grants political and economic freedoms and protections enshrined for the privileged. Abroad, it will frequently buy influence by enlisting local rulers. Rewards for compliance are high, though human labour and life are cheap in the autocracies of distant lands.

The costs of all this accumulate, and their sum total eventually surpasses the advantages. Military adventures are hugely expensive. As well as haemorrhaging the economy, they drain the hegemon’s collective morale as the human cost in war deaths and injuries rises. Foreign expeditions by empires tend to attain a certain momentum. But an imperial power is unlikely to pause to reflect on an important lesson of history – that adventure leads to exhaustion. Only when the burden of liabilities – economic, political, moral – causes the hegemon’s own citizenry to revolt has the moment for change arrived. There is a simple truth about the dynamic of imperialism: internal discontent, turning into outright rebellion, grows as the hegemon’s involvement in foreign conflicts gets deeper and its difficulties mount. On the other hand, the radicalization of, and resistance from, the adversary seem to be in direct proportion to the depth of humiliation felt by the victim. The effects of this phenomenon are durable and unpredictable, such is the desire to avenge national humiliation. For whereas every human possession comes with a price tag, honour is priceless.

The historical development of the Middle East, comprising vast desert lands between the Red Sea and the Persian Gulf, is complex and messy. A careful survey of imperial designs, from the early twentieth century, when the Ottoman Empire collapsed at the end of the First World War, leaving a void, to the present time, is revealing. Historically, the Middle East has had two distinct spheres of cultural influence – Arabian and Persian. The Arab provinces had been under Ottoman control, whereas Iran had been a theatre of rivalries between Imperial Russia, Britain and France. A clash of interests between these major powers was the primary cause of the upheavals of the last century that continue to this day.

The race for hegemony in the contemporary Middle East has its origins in the discovery of oil in Khuzestan, in south-western Iran, in 1908. The technological leap from steam to the more efficient petrol engine gave new urgency to the search for oil. Khuzestan became an autonomous province of great strategic importance, but drilling had already been going on in anticipation of vast oil reserves in what was then part of Mesopotamia and is now Iraq. Nearly twenty years after Khuzestan, oil was found in Iraq in October 1927. And a decade after that, vast oil reserves were discovered in al Hasa, on the coast of the Persian Gulf, in Saudi Arabia, which at the time was among the poorest countries in the Middle East. Imperial designs by great powers in the post-Ottoman Middle East became a certainty.

The demise of the Ottoman Empire and the discovery of oil in the Middle East were two major factors which would determine the course of history for the next century and more. Victory in the First World War was to destroy the existing balance of power, and with it any pretence of equality and fair play, when there were clear victors and vanquished. With the prospect of the war turning in the Allies’ favour, a grand plan began to emerge. In May 1916, Sir Mark Sykes and François Georges-Picot signed what came to be known as the Sykes-Picot agreement, under which Britain and France were to divide up much of the Middle East between themselves, should the Ottoman Empire fall. That is what subsequently happened.

A year later, the British Foreign Secretary Arthur James Balfour gave an undertaking on behalf of the United Kingdom to Baron Walter Rothschild, a leader of the British Jewish community. Balfour wrote in his letter to Rothschild: “I have much pleasure in conveying to you … the following declaration of sympathy with Jewish Zionist aspirations which has been submitted to, and approved by, the Cabinet.” Balfour went on to say: “His Majesty’s Government view with favour the establishment in Palestine of a national home for the Jewish people, and will use their best endeavours to facilitate the achievement of this object.” Despite words of assurance that this would not be at the expense of the Palestinians’ rights, the contrary was the case. Jewish immigration and colonization of Palestine on a large scale were allowed and have continued since. By the time the state of Israel was established in 1948, the United States had become the most powerful nation in the West and the main backer of Israel.

The 1993 Oslo accords, which promised a permanent settlement within five years, barely limped to Oslo 2 in 1995, and finally collapsed. It was bound to happen, for virtually everything that mattered – the question of Jerusalem, the return of refugees, borders, security and Jewish settlements – was left for future negotiations. All those issues still haunt the region. The Israeli-Palestinian conflict remains at the heart of the wider Middle East crisis. And it can be argued that the fundamental nature of the cycle of conflict which started nearly a century ago has not changed.

This is the broad context in which India has to navigate. I said at the beginning that the goal of foreign policy is to meet domestic essentials, namely security, prosperity and a fair distribution of wealth, because a fair distribution of wealth is necessary for peace in society in the long run. True, India has considerable economic vitality – but in the immediate environment there are adversarial circumstances, too. Beyond, there are fierce rival forces, local and distant great powers, which make the Greater Middle East a region of extreme volatility. It is also a region where the rulers and the ruled are dangerously apart; too many in the populace are alienated. So in my concluding remarks, here are some pointers.

One – awareness of the history of difficult relationships, and of the composition of societies around the country and of the country itself, is important. Two – it should not be forgotten that there is a dangerous rift between the ruling elite and the alienated in many of these societies. Authoritarian rule means the unacceptable use of coercion to maintain social order – and the inevitable loss of legitimacy of government. Hence the third pointer – a deliberate emphasis on diplomacy which includes people-to-people contact. Fourth – after nearly seven decades, it is perhaps time for toned-down rhetoric and less of the blame game in dealings with immediate neighbours. Finally, when thinking about foreign policy, think long term – very long term.

[END]


Cosmopolitanism and Human Rights

Notes on a talk I gave at a roundtable during a conference at the University of Roehampton in London on 10-12 July 2013. 

It is a vast topic, and I want to make a few points in the next 10 minutes or so about the meaning of the term cosmopolitanism, how it correlates with the idea of human rights, and the wider debate in the present context. Some of you may find my view on the topic somewhat pessimistic. And I should also add that I am going to make an argument that is essentially economic. Cosmopolitanism derives from the Greek terms cosmos, meaning the universe, and polites, meaning citizen. Hence cosmopolitanism has come to be understood as being a “citizen of the world.” It is not difficult to see that the concept of cosmopolitanism has developed with society. Social development involves two apparently contradictory and, at the same time, complementary processes. A growing society has a greater need to feed its people and ensure their welfare. That, however, is not enough, because human needs grow with development: better homes, roads, transportation, hospitals, education and training, entertainment and comfort – all these require more raw materials and skills. Any society which aims to achieve all of these requires a balance in its workforce. This is perhaps the most fundamental reason for both internal and cross-border migration, which has gone on through much of history.

The next problem concerns the allocation of space and resources, which in turn raises many political, moral and ethical questions. For example, if a society has shortages of certain skills for development, it will require people with those skills from outside until its own citizens are trained, which takes time. Should a country open its doors to the foreign workers it needs? If so, to how many and for how long? Should people with those skills be considered for entry irrespective of their nationality, colour or ethnic origin? Or only those of selected backgrounds? These are some of the most important questions which must be resolved at the start. For they eventually determine the kind of society there is, and how others in the wider world will see it. Of course, such matters must be decided within each society. However, wise rulers will consider the implications of how they settle these issues.

In Britain and other European Union member-states, there is supposed to be free movement of people to work and live, which in turn has meant stricter controls on immigration from the rest of the world. As I have already said, this is a government’s sovereign right. But once a foreign worker arrives in a country, his or her place in society can become a topic of contention. Besides legal migrants, a second topic of contention is that of refugees, often described these days as asylum seekers, fleeing their countries for safety. But I want to limit myself here to times when there are no dramatic and unexpected events straining a society’s will.

Let us suppose a cosmopolitan society develops gradually as planned by government and policy makers. It consists of citizens of different nationalities, cultures, religious faiths – secularists as well as atheists. Suppose one faith or nationality is dominant. On what basis should it be organized? The total wealth in each society, even if vast, has a limit. For the sake of organization, how should the population be divided up into units, groups or categories? And how big a share of the national pie should go to each section? In other words, how much should each be paid? And what rights should they enjoy? That essentially is the debate at this time of increased globalization – not merely for economic reasons, but also because of the movement of people within and across continents due to political turmoil.

The nature of the debate about cosmopolitanism keeps changing. Why? Because there are powerful opposing arguments. In his fourth inaugural address in January 1945, President Franklin D. Roosevelt said: “We have learned that we cannot live alone, at peace; that our own well-being is dependent on the well-being of other nations, far away.” From that idealistic sentiment toward the end of the Second World War arose the idea that all people belong to a single family, and that unless they feel they have a fair share which ensures their security, there cannot be order and peace in society. It remains true today. And so we have the argument founded on global justice and universalism – the right to live, to have a family, property, security, free speech, a say in governance, freedom of thought and organization, and so on, irrespective of gender, ethnic, religious or political background.

Against this is the argument which appears to be in the ascendancy – the argument for fenced societies and fewer and fewer migrants, with minority voices even advocating the repatriation and expulsion of migrants. Disturbingly, such demands have begun to shape the overall debate. In the post-9/11 world, wars, and the inevitable social and economic price being paid, force introversion in individuals and groups alike. So we hear: “We were here first; we are entitled to more of what our society has.” This majoritarian argument has clearly won the day, for now at least, among policy makers.

[END]

Bias Against Understanding Terrorism

Deepak Tripathi

Centre for Research on Nationalism, Ethnicity and Multiculturalism, University of Surrey, 4 October 2010

The events of September 11, 2001 and the “war on terror” have made an undeniable impact on human and international relations. Increasingly, these relationships have come to be seen and interpreted through the prism of counter-terrorism, migration and a selective focus on “religious fundamentalism” of a certain kind, namely Islamic fundamentalism. The result has been a loss of context. The way this has changed media discourse over the last decade is obvious enough; but the nature of scholarship on terrorism and political violence has also come under pressure. The themes of migration and security, democracy and the rule of law have become more salient at the expense of the historical context, which explains imperialism, great power rivalries and other causes of conflict in which the Western world has played a crucial role. Francis Fukuyama’s 1989 declaration of “The End of History” has proved premature, and his prediction that Western liberal democracy would become universal is far from being realized. Over the last twenty years there have been two major wars and numerous minor conflicts around the globe.

With this context in mind, I will offer a personal critique of the debate about terrorism and political violence as it has evolved in recent years. Focusing on Afghanistan since the early 1970s, I will discuss the war in its various stages and the evolution of a “culture of violence”. I will explain the internal, regional and international dimensions of the Afghan conflict and offer an indicative analysis of the failure to learn from the recent past, let alone long-term history.

First of all, I want to thank the Centre for asking me to give this talk, and to thank you for coming. I am delighted to be here. As you know, I have had a career in journalism. I went into journalism at an early age, in my late teens; by my early twenties, I was well established and found myself working for the federal government in Washington. So while I have had a long, eventful and very interesting working life, the sense of fulfillment was tinged with some regret. Occasionally, I have reflected that success in finding a job perhaps came too early. I missed being close to scholarship for long enough. So occasions such as this one have a special meaning for me. I am glad to be here; glad to be talking about a subject that has been close to me for many years.

Journalists and academics have an interesting relationship. Journalism is instant, scholarship reflective. Journalists are sometimes called frivolous, inconvenient, mischievous; academics deep, serious, thinking people. Disparagingly, we journalists are called “hacks.” On the other hand, I recall occasions when a colleague in my own profession would summarily dismiss me by saying: “Deepak is not punchy enough; he is an academic.” We both have our detractors. But on a serious level there exists a common purpose: challenging the status quo and questioning conventional wisdom. Science cannot progress, and the boundaries of knowledge cannot be pushed, unless we question what is now.

Now to the topic of my talk: “Bias Against Understanding Terrorism.” If there were any suggestion of frivolousness or mischievousness about it, I would deny that. I have chosen this topic to challenge the conventional wisdom which has been accumulating rapidly in the last decade, mainly in the West, but also in other parts of the world. “Terrorism” was always a highly contested term, but the ease with which “terrorism” and “freedom” – these two central terms – have entered common usage is remarkable. Remarkable because whereas they were both contested terms before, they are even more poorly defined now in the wake of September 11, 2001. Many of us have bought into the idea that we are all engaged in fighting for “freedom” and against “terrorism” when both terms remain largely undefined.

What is “freedom”? Is it the mere fact of participation in an electoral exercise and putting our vote in the ballot box, or something more? Does taking part in periodic elections, only to see state control over citizens’ lives further tightened, mean freedom? The volatility of public opinion and the “tyranny of the majority” that Alexis de Tocqueville wrote about so eloquently constantly haunt minorities and the freedoms that democracy is supposed to protect. In Europe, we are witnesses to the French government’s expulsion of Romani people and planned legislation to revoke the citizenship of certain immigrants who have acquired French nationality in recent years. Some opinion polls suggest these actions are popular in France.

I want to talk briefly about freedom in a different context, one which does not receive sufficient attention in the West. As many as three million nomads, people of the Kuchi tribes, inhabit Afghanistan and areas to the north in Central Asia, constantly on the move. Waves of Kuchi communities are used to migrating from north to south in Afghanistan, and across the frontier into Pakistan, in the harsh winter to a relatively milder climate, only to move north again when spring arrives. Freedom means something different to them, and they would not barter their freedom for the right to vote once every few years. Their movements have been disrupted, and they are further endangered by war. Ask them what freedom is.

I was in India a few months ago, where, we hear, Maoist terrorists are active. The Indian press is full of stories about them. To describe them as “Maoist terrorists” is plain wrong. These are tribal people who know little, if anything, about Maoism or who Mao was. I heard accounts of what is happening in the remote areas of central India. Suddenly one day, workers hired by the state, or by a private firm, arrive in a remote tribal community. An area is cleared of trees and flattened. To appease the local tribal community, a small building, a school, is erected. The tribal population of the village is told: “Look, we have built a school for you.” Often, within days, the entire little village has disappeared from that spot, moved deep inside the forest. The tribes do not want such rapid change in their lives. Ask them what freedom is to them. The point I am trying to make is this: the “war on terror” is a war fought in the name of two concepts, both undefined despite ceaseless use of the terms “freedom” and “terrorism.” In fact, these terms have become tools to protect the majority against minorities, and the mighty against the weak and vulnerable. The right of self-defense of the powerful has superseded the right of the underdog to resist.

There has never been a universally accepted definition of terrorism, and the United Nations has consistently failed to agree on how to define the phenomenon. Less than three decades ago, Ronald Reagan proclaimed that “one person’s terrorist is another person’s freedom fighter.” Soviet communism has since collapsed, but geopolitical factors still play a critical part in states’ determination of policy, even more so in this post-Cold War era. Two decades after Francis Fukuyama, one of the leading lights of neoconservatism, declared “The End of History” and the “universalization of Western democracy” in his 1989 essay, history has delivered a sharp rebuke to those who forget or ignore it. We are witnesses to two, I would say three, major wars: Afghanistan, Iraq and the wider “war on terror.” “Terrorism” and “terrorist” have become much overused terms of abuse for non-state groups and a handful of states, while friendly states and client regimes can employ extremely repressive measures, and overwhelming force, and justify them in the name of self-defense.

So what is “terrorism” and what are its causes? The next part of my paper deals with these questions in trying to understand the phenomenon of terrorism, casting aside the subjectivity that clouds the debate today. I will attempt to look at “terrorism” and “political violence” (both terms are subsumed here) as part of a “culture of violence.” I will focus on Afghanistan, though parallels can be seen in Iraq, Palestine and other conflicts.

The conflict in Afghanistan can be seen in four separate but overlapping, sometimes simultaneous, stages. These stages are: internal conflict; great power involvement; state disintegration; and foreign indifference and the rise of extremism. These are the four main building blocks of a culture of violence. The question I want to raise here is: How did this dialectic play out in Afghanistan?

The last two decades of the twentieth century were a period of intense struggle between competing ideologies – a struggle which was played out in the Afghan conflict. Afghanistan was caught up in the Cold War between the United States and the Soviet Union as early as the 1950s. The clash of capitalism and communism, both essentially Western ideologies, magnified the internal divisions in that country’s tribal system. Such a society has two essential characteristics – an inner weakness born of social fragmentation, and a defensive instinct to react violently against foreign interference. These very characteristics were reinforced as intervention through massive military-economic aid and secret intelligence operations grew in Afghanistan and the country fell under Soviet domination. Afghan Communists became bolder, and they seized power in a bloody coup in 1978. The rise of communism radicalized Islamic groups in Afghanistan.

The imposition of a Soviet-style system on a deeply religious people was the beginning of a chain of events which shook the Communist regime in Afghanistan. Rebellions in rural areas, mutinies and desertions in the armed forces, and escalating internal warfare in the ruling People’s Democratic Party created a crisis in the country. The deeper the crisis became, the more repressive were the measures used by the first Communist regime of 1978-1979.

The nature of such a chain reaction, or dialectic, is self-perpetuating. A dialectical process acquires a life of its own by virtue of what is described as the power of ‘negativity’. Negativity is what comes into being in opposition to the ‘subject’. The first ‘subject’ is a thesis in the shape of an event or force which, after coming into existence, is gradually stripped of its immediate certainty as it embarks on a “pathway of doubt.”

Simply put, a thesis is what rises in its environment as a distinct entity, its character imposing itself before reaching a point at which that entity begins to come under challenge by the negative force which the original thesis created. In the ensuing struggle between the thesis and its negative, or antithesis, the certainty of the original entity progressively weakens as doubts over its viability are raised. This explanation of the nature of dialectic is based on an acknowledgment that things are multi-faceted and always in the process of becoming something else.

The conflict between a thesis and its negative is a process which slowly strips the former of properties that determined its certainty and lends the latter contradictory properties. What is obtained in such a process is a reconciliation between the two – a synthesis. While the original and its negative were contrary to each other, their synthesis preserves both, and stresses unity once again. It is at this point that the synthesis transforms itself into another thesis, leading to further contradictions and conflict before reaching another stage of resolution. So the dialectical progression goes on. It has no beginning, and no end.

We can now begin to understand in dialectical terms the advent of various external and internal forces that eventually conspired to create a culture of violence in Afghanistan. When a small group of Communist sympathisers in the armed forces, representing an ideology that was foreign and contrary to the basic character of Afghan society, seized power in 1978, it was an event that was bound to lead to profound consequences. Under the Communist regime, there was a short-lived experiment to restructure Afghan society on the Soviet model – an experiment carried out by coercion, including purges, imprisonment, torture and assassination of opponents. The Marxist experiment provoked violent opposition that became progressively more stubborn as measures of the Communist regime acquired greater ruthlessness. There was resistance not only in wider society, but also within the regime. It took many forms – the Parcham (or Banner) faction against the Khalq (the Masses) faction, internal dissidents within Khalq, ethnic Pashtun against non-Pashtun, communist against anti-communist and so on. As the conflict escalated, fear and chaos began to take hold and the outcome was the Soviet invasion of Afghanistan in December 1979.

The scale of violence was altogether different during the years of Soviet occupation. The overwhelming war machine of the Communist superpower was at work and, in the final major confrontation of the Cold War, the United States threw its vast resources in support of the anti-Communist Mujahideen groups to fight that war machine. Weapons of terror were used by all sides and the conflict produced millions of victims. The violence committed by the Soviet occupation army was answered by the Mujahideen opposition on the ground.

The war against the Soviet Union in Afghanistan is often portrayed as one in which the Afghan resistance took on a superpower and won. This is an over-simplification, because such a view ignores the dialectical nature of the conflict, which triggered intervention by other external powers in opposition to the USSR. The Mujahideen victory would not have been possible without the military and financial support of America and its allies, notably Saudi Arabia, Pakistan, Egypt and China. American and Pakistani intelligence services were deeply involved in the planning and execution of the war against the Soviet occupation forces. The role of Pakistan in the recruitment and training of anti-communist guerrillas was critical.

State intervention from outside also brought foreign militants to Afghanistan. The military government of Pakistan allowed thousands of Islamic radicals to train and fight in the conflict, which made them battle-hardened and reinforced their fundamentalist ideology. After the defeat of communism, they were left without a cause and many returned to their own countries to engage in struggle against regimes they regarded as un-Islamic and corrupt.

Islam has been a powerful force in modern Afghanistan. It was the main source of resistance to change imposed from above, whether by imperial powers like Britain and Russia or by internal regimes such as that of Mohammad Daud and, subsequently, the Communists in the 1970s and 1980s. Religion, interwoven with the tribal system, provided the core of this resistance. It was endorsed by local mullahs, who found their position in society threatened. The war against the Soviet Union in Afghanistan went beyond this. Islam was used as a political ideology to bind together the disparate factions and their members, at the insistence of President Zia of Pakistan and with the active support of the CIA-ISI alliance.

The idea of Islam as a political ideology, not merely a religion, to be used to reshape and control society is sometimes described as ‘Islamism’. Afghanistan is a deeply religious country, but Islamism had not taken root in the wider Afghan society before the Communists seized power in 1978. In the early 1970s, religious militancy was primarily concentrated in Kabul, where a relatively small number of educated Afghan fundamentalists fought for influence with left-wing groups in student politics and the armed forces. However, the Islamists became isolated in later years. Almost all prominent activists had fled to Pakistan by 1975, when an attempt to overthrow President Daud failed.

At this stage, the Islamist movement of Afghans underwent internal turmoil as it prepared to oppose the Daud regime. The movement split into two significant groups: the Hizb-i-Islami, dominated by ethnic Pashtuns and led by Gulbuddin Hikmatyar, and the mainly Tajik Jamiat-i-Islami under the leadership of Burhanuddin Rabbani. The Pashtun-Tajik divide was to prove permanent, but both groups had a lot in common with their Middle Eastern counterparts. Both recruited members from the intelligentsia; many of the activists of these Islamist groups had been students in scientific and technical institutions. They were joined by more educated Afghans and foreign militants who eventually fought against the Soviet occupation forces. They were Sunni Muslims with a strong anti-Shi‘a stance, reflecting the wider trend in the Arab world against Iran. Sunni Arab regimes, threatened by the growing Shi‘a militancy following the 1979 Islamic revolution in Iran, wanted to keep Iranian influence in check. Their answer was to support anti-Shi‘a forces, whether that meant the Iraqi leader, Saddam Hussein, in his war with Iran or Sunni militants in Afghanistan.

It has been suggested that the ideology of the Afghan Islamists was ‘borrowed entirely’ from two foreign movements: the Muslim Brotherhood, founded in Egypt, and the Jamaat-i-Islami of Pakistan. Just like these two movements, the Afghan Islamists opposed secular tendencies and rejected Western influence. Within Islam, they opposed Sufi influence, with its emphasis on love and the universality of all religious teachings. Rabbani was among those prominent Afghans who had spent years at al-Azhar University in Cairo and had been active in the Muslim Brotherhood. Hikmatyar, on the other hand, was close to Pakistan’s Jamaat-i-Islami, which was itself influenced by the Brotherhood and its ideologue, Sayyid Qutb. The writings of Qutb were a source of inspiration to a large number of Arabs who fought against the Soviet Union in Afghanistan in the 1980s.

The main appeal of Qutb comes from his assertion that the world is ‘steeped in jahiliyyah’, the Arabic term for ignorance. He argues that this ignorance originates in the rebellion against God’s sovereignty on earth. Qutb attacks communism for denying humans their dignity and capitalism for exploiting individuals and nations. He claims that the denial of human dignity and exploitation are nothing but consequences of the challenge to God’s authority. The solution advanced by Qutb is that Islam should acquire a ‘concrete form’ and attain ‘world leadership’, but this is possible only by initiating a movement for its revival.

Qutb does not openly preach violence, but other ingredients of a revolutionary brand of Islam are present in his writings. He recognises that there is a significant body of educated people who are disillusioned with the existing order. These people represent a constituency for change in a number of Middle Eastern countries, where economic and social problems, corruption and a lack of involvement in political processes have created a wide gulf between governments and the people. Qutb rejects the Communist and capitalist systems alike and asserts that Islam is the only alternative. His vision is idealistic, and its attraction is very strong for the alienated looking for political adventure.

The Muslim Brotherhood was hostile to successive Egyptian governments and firmly aligned itself with the Palestinian cause after the creation of the state of Israel in 1948. When Anwar Sadat became president of Egypt in 1970 following the death of Nasser, he promised to implement Islamic law and released all Brotherhood members from jail in an attempt to pacify the movement. But Sadat’s decision to sign a peace treaty with Israel in 1979 resulted in a new confrontation, which led to his assassination in October 1981. The Muslim Brotherhood went underground and, in subsequent years, developed a complex network of more than seventy branches worldwide.

The disintegration of the Afghan state system between 1992 and 1994 and the subsequent rise of the Taliban turned Afghanistan into a haven to which foreign fighters could return without fear of retribution. Many more new Islamic radicals came from the Middle East, North and East Africa, Central Asia and the Far East to study, train and fight in Afghanistan during the Taliban period. They developed personal contacts with each other, learned about the Islamist movements of other countries and planned cross-border activities.

No other veteran of the Afghan conflict has achieved worldwide notoriety like Osama bin Laden. He had his initiation into radical Islam as a student at King Abdul Aziz University in the Saudi city of Jiddah, where he took a degree in economics and management. It was there that bin Laden developed a deep interest in the study of Islam and used to listen to recorded sermons of the fiery Palestinian academic Abdullah Azzam. In the 1970s, Jiddah was a centre for disaffected Muslim students from all over the world, and Azzam was a leading figure in the Muslim Brotherhood. His influence encouraged bin Laden to join the movement.

After the Soviet invasion of Afghanistan in December 1979, bin Laden moved with several hundred construction workers and heavy equipment to the Afghan-Pakistan border and set out to “liberate the land from the infidel invader.” He saw a desperately poor country taken over by tens of thousands of Soviet troops, and millions of Muslims bearing the brunt of the military machine of a superpower. Afghans had neither the infrastructure nor the manpower to mount effective resistance to the occupation of their country.

Osama bin Laden created an organisation to recruit people to fight the Soviets and began to advertise all over the Arab world to attract young Muslims to Afghanistan. In just over a year, thousands of volunteers, including experts in sabotage and guerrilla warfare, had arrived in his camps. Their presence clearly suited CIA operations in Afghanistan. Bin Laden’s private army became part of the Mujahideen forces based in Pakistan and supported by the United States. Military experts with a close understanding of US policy estimated that a “significant quantity” of high-technology American weapons, including Stinger anti-aircraft missiles, reached bin Laden and were still with him in the late 1990s.

Bin Laden helped build an elaborate network of underground tunnels in the mountains of eastern Afghanistan in the mid-1980s. The complex was funded by the CIA and included a weapons depot, training facilities and a health centre for the Mujahideen. He set up his own training camp for Arab fighters, and his following increased among foreign recruits. But he became increasingly disillusioned by two things: one, the continuing infighting in the Afghan resistance after the Soviets left; the other, America’s disengagement from Afghanistan, which many saw as abandonment. Bin Laden returned to Saudi Arabia to work for his family business.

When Iraq invaded Kuwait in 1990 and it looked as though the security of Saudi Arabia was under threat, he urged the royal family to raise a force from the Afghan war veterans to fight the Iraqis. Instead, the Saudi rulers invited the Americans – a decision that greatly angered bin Laden. As half a million US troops began to arrive in the region, bin Laden openly criticized the Saudi royal family and lobbied Islamic leaders to speak out against the deployment of non-Muslims to defend the country. It led to a direct confrontation between him and the Saudi royal family.

He left for Sudan, which was going through an Islamic revolution. He was warmly welcomed, not least because of his wealth, in a country devastated by years of civil war between the Muslim north and the Christian south. His relationship with Sudan’s de facto leader, Hasan al-Turabi, was close and he was treated as a state guest in the capital, Khartoum. Returning veterans of the Afghan conflict were given jobs and the authorities allowed bin Laden to set up training camps in Sudan. Meanwhile, his criticisms of the Saudi royal family continued. The Saudi authorities finally lost patience and revoked his citizenship in 1994. Osama bin Laden was not to return to his homeland again.

These events had a lasting impact on bin Laden. He had fallen out with the United States and the Saudi ruling establishment and his freedom of movement was severely restricted. In Khartoum, he began to concentrate on building a global network of Islamist groups. His business, Laden International, had a civil engineering company, a foreign exchange dealership and a firm that owned peanut farms and corn fields. Other business ventures failed, but he had enough money to support Islamic movements abroad. Funds were sent to militants in Jordan and Eritrea, and a network was set up in the former Soviet republic of Azerbaijan to smuggle Islamic fighters into Chechnya. He set up more military training camps, where Algerians, Palestinians, Egyptians and Saudis were given instruction in bomb-making and sabotage.

The ideological nucleus of what became al Qaeda also attracted Ayman al-Zawahiri, regarded as Osama bin Laden’s deputy. Al-Zawahiri was born into a leading Egyptian family and fell under the influence of revolutionary Islam at an early age. His grandfather, Rabia‘a al-Zawahiri, was once head of al-Azhar Institute, the highest authority of the Sunni branch of Islam. His great-uncle, Abdul Rahman Azzam, was the first Secretary-General of the Arab League. As a boy of 15, Ayman al-Zawahiri was arrested for being a member of the Muslim Brotherhood. He trained as a surgeon, but his radical activities led to rapid advancement in the Egyptian Islamic Jihad. By the late 1970s, while still in his twenties, he had taken over the leadership of the group.

In October 1981, al-Zawahiri was arrested with hundreds of activists following the assassination of President Sadat by members of his group at a military parade. The authorities could not convict him of direct involvement in the murder, but he was sentenced to three years in prison for possessing weapons. He left Egypt after his release – first going to Saudi Arabia and then to Pakistan’s North-West Frontier Province, from where large numbers of foreign fighters entered Afghanistan during the Soviet occupation.

There is evidence that al-Zawahiri’s association with the Afghan resistance started just before his arrest in Egypt in 1981. He was a temporary doctor in a clinic run by the Muslim Brotherhood in a poor suburb of Cairo, where he was asked whether he would go to Afghanistan to do some relief work. He thought it a ‘golden opportunity’ to get to know a country which had the potential to become a base for struggle in the Arab world, and where the real battle for Islam was to be fought. On his way to Afghanistan several years later, al-Zawahiri briefly worked as a surgeon in a Kuwaiti Red Crescent Hospital in the Pakistani frontier city of Peshawar. He made frequent visits inside Afghanistan to operate on wounded fighters, often with primitive tools and rudimentary medicines. Ayman secured his place in the Afghan resistance as someone who treated the sick and the wounded – just as Osama had secured his by virtue of being a wealthy Arab who spent his money and time helping people in an impoverished country devastated by Soviet forces.

In subsequent years, al-Zawahiri emerged as an intellectual and the main ideological force behind Osama bin Laden. He enunciated clear distinctions between his own and other Islamist groups. Al-Zawahiri saw democracy as a ‘new religion’ which must be destroyed by war. He accused the Muslim Brotherhood of sacrificing God’s ultimate authority by accepting the idea that people are the source of authority. Other Islamist groups were also condemned for accepting constitutional systems in the Arab world. In his view, such organisations exploit the enthusiasm of young Muslims, who are recruited only to be directed towards “conferences and elections (instead of armed struggle).”

The further al-Zawahiri went in his consideration of modern social systems, the more radicalised he became in reaction. He implied that moral and ideological pollution was made worse by material corruption. He complained that the Muslim Brotherhood had amassed enormous wealth. This material prosperity, he said, was achieved because its leaders had turned to international banking and big business to escape the repressive and secular regime of Nasser in Egypt. Joining the Muslim Brotherhood created opportunities for its members to make a living. Their activities were driven by materialistic, rather than spiritual, aims. These views amounted to a complete rejection by al-Zawahiri and his organisation, the Islamic Jihad, of other Islamist groups and brought the Jihad closer to Osama bin Laden and his network.

The influence of the Palestinian-Jordanian academic, Abdullah Azzam, was central in all this. Azzam was a child when Israel was founded in 1948 and had been active in the Palestinian resistance movement from an early age. He had links with Yasir Arafat, but their association ended when he disagreed with the secular philosophy of the Palestine Liberation Organisation, eventually coming to the view that it was far removed from “the real Islam.” Azzam’s logic was that national boundaries had been drawn by infidels as part of a conspiracy to prevent the realisation of a trans-national Islamic state. His goal, he concluded, was to bring together Muslims from all over the world.

Abdullah Azzam saw in the Afghan conflict an opportunity to realise this ambition. Recruitment of volunteers from all over the Muslim world to fight the Soviet occupation forces was to be an important step towards his goal of setting up an Islamic internationale. To achieve this, the volunteers would train, acquire battle experience and establish links with other radical Islamic groups. The Mujahideen resistance in Afghanistan had already established a legendary reputation which would inspire potential followers all over the world. The resistance could eventually become a highly motivated and trained force, ready to destroy the decadent West and export the Islamic revolution to other parts of the world.

In November 1989, Azzam and his two sons were assassinated in a bomb attack as they drove to a mosque in Peshawar to pray. The identity of their murderers remained a mystery, but rumours persisted about a link with bin Laden and al-Zawahiri. It was reported that while they both supported the idea of extending the struggle to overthrow Arab regimes, Azzam wanted the job completed first in Afghanistan by replacing the Communist regime of Najibullah with a Mujahideen government. Other players, including the Soviet and Afghan secret services, also had an interest in removing Azzam. Whoever was responsible for his assassination, its most significant consequence was that bin Laden and al-Zawahiri gained almost total control of the network of foreign fighters linked to the Afghan conflict.

The split between Osama bin Laden and Abdullah Azzam in the late 1980s was the beginning of al Qaeda. Whereas Azzam insisted on maintaining the focus on Afghanistan, bin Laden was determined to take the war to other countries. To this end, bin Laden formed al Qaeda. His main goal was to overthrow corrupt and heretical regimes in Muslim states and replace them with the rule of Shari‘a, or Islamic law. The ideology of al Qaeda was intensely anti-Western and bin Laden saw America as the greatest enemy that had to be destroyed.

To sum up, to understand al Qaeda we need to consider the dialectic I have been describing, which produced the organisation’s ideology. The two main ideologies to emerge after the Second World War were communism and free-market liberalism. Competition between them during the Cold War obscured the challenge they faced from a third force: radical Islam in the Middle East. The first significant manifestation of this force was the Islamic revolution in Iran in the late 1970s. The Soviet occupation of Afghanistan in the 1980s created an environment in which the challenge from radical Islam was directed against communism. America strengthened it by pouring money and weapons into the Afghan conflict, but failed to recognise that the demise of the Soviet empire would leave the United States itself exposed to assaults from groups like al Qaeda. In time, this failure proved to be a historic blunder. And it created a “culture of violence” – a condition, fuelled by war, in which violence permeates all levels of society and becomes part of human nature, thinking and way of life.


The Relevance of Positivism in Social Science

Sussex Paper

Deepak Tripathi  

January 2003     

The philosophy of positivism founded by Auguste Comte (1798-1857) has come under severe criticism in the last 40 years. Criticism in itself of something that is 150 years old is not surprising. A set of theories developed by Comte so long ago is being examined and tested by social scientists now, when we have the benefit of the knowledge gained over more than a century. Society has moved on in this period; there are new perspectives and many more minds ready to challenge the old theories. So the post-positivist social scientists are justified in one respect at least.[1]

The sustained, repeated assaults on positivism over many years are quite another matter. In this respect, its critics seem to have exhausted themselves in their efforts to demolish positivism. This raises new questions. What is post-positivism? Is it an exercise to dismiss again and again something that is old and has encountered difficulties when tested in the modern world? Does post-positivism provide a coherent alternative to positivism? Is there anything still relevant in the scientific approach to social enquiry that Comte first advocated all those years ago?

As Anthony Giddens says, positivism has become a term of abuse. It is not fashionable to suggest that contemporary philosophers have anything to do with it. However, I am going to raise this possibility as I pose the questions mentioned above. But first it is important to recognise that social inquiry cannot serve its purpose if it is not relevant to the conditions in which it takes place. We need to look at positivism in its historical perspective – the social conditions in which it evolved. I will therefore examine its development into logical positivism, which I describe as a more rigorous form of positivism. I will look at some of the criticisms of positivism in today’s context. At the same time, there will be a concurrent investigation into whether there are positivist undercurrents in what the post-positivists propose. I will not hesitate to speculate about the social factors that have influenced post-positivism. I do not claim to have read all, or even most, of the relevant literature before presenting my view. Rather, this is an attempt to understand positivism, to determine whether it is time to accept those elements which remain relevant in social science inquiry, and then to move on, leaving behind those which are not.

Development of positivism: a historical perspective

One of the most significant contributions of Comte, in his early work, was the law of three stages of knowledge. These stages were theological, metaphysical and positive. It is not in dispute that the formulation of this law played a significant role in pushing science to the forefront and relegating theology and metaphysics in the study of society. In this sense, the idea remains as relevant today as it was then. What drove him to this position?

Comte lived in the wake of the French Revolution, which began in 1789. He grew up amid political and social upheaval in the country. It was also a period of great tensions between France and its neighbours – Austria and Britain included. France had declared war on Britain in 1793, having earlier supported the American war of independence against British rule.[2] Britain, on the other hand, had been through the Industrial Revolution by the mid-nineteenth century. The bulk of the working population in the country had shifted from agriculture to industry. Big advances in farming methods were being introduced. Steam power had all but replaced the use of muscle, wind and water. The textile industry was the prime example of industrialisation. Roads, railways and steamships were to radically change the face of society. All this brought profound changes in Britain, leaving France behind. The consequences of the internal chaos and wars with other European countries were corrosive for French society. Emmett Kennedy discusses the impact of these events on the philosophy of Comte:

The absence of any integrated, organic culture after the disorder that followed the Enlightenment and the Revolution indicated to Auguste Comte the deep malaise that beset French society. The organic worldview of medieval Christianity had been disturbed. … He approached the problems of society with reason alone; in that he was a philosopher. But he wrote from … the side that had learned the cost of corrosive criticism.[3]

It is easier to understand Comte’s intervention in this context. His philosophy of positivism was a product of widespread upheaval in his own country, of conflict with its neighbours and of the profound social changes brought by the Industrial Revolution in Britain. The introduction of machinery into the day-to-day running of society in Britain had propelled science and technology to the forefront of human thinking. Theology and metaphysics had been demoted. It is hardly surprising that almost all of Comte’s definitions of positivism have something to do with science. For example:

Positivism is a theory of knowledge according to which the only kind of sound knowledge available to humankind is that of science grounded in observation.

Positivism is a unity of science thesis according to which all sciences can be integrated into a single natural system.[4]  

The impact of scientific advances on society gives further clues about his work. Peter Halfpenny points out that, to Comte, sociology was the ‘queen of the sciences’. Positivism was ‘scientific’ because knowledge had practical value and the growth of science was for the benefit of humankind. It was ‘empiricist’ because sound knowledge could be derived only from experience and observation. It was ‘encyclopaedic’ because all the sciences came under a single system of natural laws. And it was ‘progressivist’ because social stability could be restored by re-establishing a moral order, based on scientific knowledge, not on religion, which made the world mysterious and prevented empirical inquiry, or metaphysical speculations, which had no practical value.

As France was going through political disorder and human suffering, the main concerns for Comte would have been how to re-establish social order and achieve scientific progress for the benefit of society. Hence his assertion that sociology was the ‘queen of the sciences’ and that all sciences came under a single system of natural laws. It is true that Comte placed great stress on hierarchy. This became even more obvious in the latter part of his life, when he began to talk of the need for ‘a strong moral order’, which his critics described as a new kind of theology. In Comte’s view, there were four enemies of the positive philosophy: religion (as a dogma, not as a moral force), metaphysics (in which he included psychology), individualism (which to him was the cause of social disorder) and revolutionary utopianism.[5]

It was at this stage that those who had agreed with his early views came to oppose him. His critics were essentially positivists, but began to articulate their differences with Comte. They also showed a marked reluctance to accept the ‘positivist’ label for themselves. By this time, the assessment of Comte had begun in several centres. The English sociologist and philosopher Herbert Spencer (1820-1903) was, like Comte, a supporter of the theory of evolution, but differed from him in two important respects: unlike Comte, Spencer was a strong advocate of the pre-eminence of the individual over society and of science over religion (Comte’s new theology of a new moral order).

In his ‘Reasons for Dissenting from the Philosophy of M. Comte’, Herbert Spencer devotes the entire first chapter to his criticism of Comte:

M. Comte’s ideal of society is one in which government is developed to the greatest extent – in which class-functions are far more under conscious public regulation than now – in which hierarchical organisation with unquestioned authority shall guide everything – in which the individual life shall be subordinated in the greatest degree to the social life.

Spencer’s response:

That form of society towards which we are progressing, I hold to be one in which government will be reduced to the smallest amount possible, and freedom increased to the greatest amount possible – one in which human nature will have become so moulded by social discipline into fitness for the social state, that it will need little external restraint, but will be self-restrained – one in which the citizen will tolerate no interference with his freedom …[6]

On Comte’s law of three stages, Spencer writes (in the same chapter) that there is one and, in essence, the same method of philosophising. The integration of causal agencies is a process which involves passing through all ‘intermediate steps’ between these extremes. Any appearance of stages, says Spencer, can be but ‘superficial’.

Although Spencer chose to concentrate on his disagreements with Comte, especially over his assertion about the need for subordination of the individual to universal laws, Spencer was a positivist. He may have taken issue with the law of three stages, but what he offered in its place was very similar: ‘intermediate steps’ between the extremes of theology and science. He and his contemporary, the English philosopher John Stuart Mill (1806-1873), applied scientific rigour to the study of society.

Like Spencer, J S Mill was also impressed with Comte’s early work, and was in fact responsible for introducing his ideas to Britain. Mill agreed with Comte that the study of society had been retarded by its failure to employ scientific methods; and he was in agreement with the empirical methods recommended by Comte, including observation, experiment, comparison and historical methods. However, being a strong supporter of individual liberty, Mill dissented from other aspects of Comte’s ideas. If Comte was the pioneer who founded positivism, Spencer and Mill, representing the British school, followed.

The same trend was noticeable in France, too. As Halfpenny records in Positivism and Sociology (p 23), Emile Durkheim (1858-1917) was instrumental in establishing sociology as an academic discipline in French universities at the beginning of the twentieth century. Durkheim adopted Comte’s major themes – empiricism, sociologism, naturalism, scientism and social reformism – and contributed much to the development of sociology as a separate science. But, like Mill, he thought that Comte’s formulation of the law of three stages ‘verged on metaphysical speculation’. Durkheim added a ‘quite independent tradition’ of statistics in his book Suicide (1897), bringing together Comtean social philosophy and the collection and analysis of social facts:

At every moment of its history, each society has a certain tendency towards suicide. The relative intensity of this tendency is measured by taking the relationship between the total of voluntary deaths and the population of all ages and sexes. We shall call this numerical datum the rate of mortality due to suicide, characteristic of the society under consideration. It is generally calculated in proportion to a million or a hundred thousand inhabitants …[7]
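
Expressed as a formula – a restatement of the passage above, not Durkheim’s own notation – the rate he describes is:

\[
\text{suicide rate} = \frac{\text{voluntary deaths}}{\text{total population}} \times 100{,}000
\]

So, to take purely hypothetical figures, 1,200 voluntary deaths in a population of 40 million would give a rate of 3 per 100,000 inhabitants.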

The areas of agreement between Comte and Durkheim were significant. Durkheim said that social facts were ‘no different’ from facts about the physical world and therefore there was no reason why the methods used in the natural sciences could not be used in social studies.[8] He stressed the need for objectivity and rules. And he argued that there were external objects (social factors) that influenced human behaviour, that society was greater than the sum total of its members, and that the properties of society could not be understood by studying only the individuals living in it.

By the 1920s and early 1930s, a group of philosophers and scientists describing itself as the Vienna Circle had begun to discuss the implications of logic for the debate. There was a striking similarity with the social conditions in which Comte founded positivism, for members of the Vienna Circle were debating in the wake of the devastation caused by the First World War. Under the leadership of Moritz Schlick, the Vienna Circle argued for a ‘reduction of human knowledge to scientific and logical foundations’. To separate itself from Comtean positivism, the Vienna Circle adopted the term ‘logical positivism’. One of its most significant characteristics was its rejection of the non-empirical statements made in metaphysics, theology and ethics as meaningless. Ethics and morality, the Circle believed, are a matter of taste and not connected to science. Science, the Circle said, tells only what will happen, not what should happen.

In an essay entitled ‘Logical Positivism’ (Positivism and Sociology, p 47), Peter Halfpenny makes a further distinction between the positivism of Comte and that of the Vienna Circle. Logical positivism, he says, was scientistic but not progressivist or social reformist. The Vienna Circle believed that the growth of science could benefit humankind, but would not necessarily do so.

Positivism and its essence 

It is very difficult to gain a clear understanding of positivism because of the number of ways in which the term has been defined and interpreted by its many supporters and critics. It is, however, safe to say that an important goal of positivism was objectivity. The law of three stages suggests that Comte used the term ‘positive’ to mean ‘scientific’. His assertion was that scientific inquiry must be empirical: it should be based on the observation of facts, not on religion, which created mystery about the world, or metaphysics, which was of no practical value. In 1944, W T Stace wrote a critique of positivism in which he put forward his Positivist Principle, explaining the essence of positivism:

A set of words purporting to express a factual proposition P is significant only if it is possible to deduce or infer from it, in combination if necessary with other premises, some proposition or propositions (Q1, Q2, Q3 … etc), the truth or falsity of which it would be logically possible to verify by direct observation. If no such direct deductions are possible, then the set of words purporting to express P is non-significant, and P is not really a proposition at all.[9]
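
In compact logical notation – my restatement, not Stace’s own symbolism – the principle says:

\[
\mathrm{Significant}(P) \iff \exists\, Q_1, \dots, Q_n \ \big[\, (P \wedge A) \vdash Q_i \ \text{for each } i, \ \text{where each } Q_i \ \text{is verifiable by direct observation} \,\big]
\]

with \(A\) standing for any auxiliary premises that may be needed for the deduction.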

The use of verification by ‘direct observation’ is noticeable, for it helped to free positivism of theological and metaphysical presuppositions. Stace developed from this the Principle of Observable Kinds later in the essay, and explained why (p 218). He recalled that the Vienna Circle had, at one stage, required full and complete verification under the ‘Principle of Verifiability’, but had run into difficulty. It was realised that if direct verification were required, statements about the past would become ‘non-significant’, because it is logically impossible to observe the past. For the same reason, if complete verification were required, all universal statements would be impossible to verify. As a consequence, the Vienna Circle later came to accept indirect and partial verification. Stace said that, in his Principle of Observable Kinds, verification meant the possibility of observing at least some of the effects of a statement for it to be significant.

We require general laws for verification. Laws, in turn, characterise relationships between given objects. And we need data to arrive at general laws. These were precisely the recommendations of Auguste Comte in his System of Positive Philosophy (1830, pp 5-6).[10] In his essay ‘In Defence of Positivism’ (Sociological Theory, 1985, pp 24-30), Jonathan Turner strongly criticises modern sociologists who, he says, have portrayed Comte as an ‘eccentric’ and positivism as ‘negative’ and ‘naïve’. He says that modern sociologists ‘rarely theorise’, which is why our knowledge about the social universe is ‘embarrassingly little’. Turner’s view is that this lack of knowledge is because ‘we have failed to be positivists in Comte’s sense of the term’.

Post-Positivism: some reflections   

Post-positivism is a confusing term. It does not represent one school of thought, but includes philosophers and social scientists who have been strongly critical of Comte and of the ‘logical positivism’ of the Vienna Circle over the last four decades. For example, there are those who reject the positivist view that the aim of scientific investigation should be to find regularities between events, or laws that can be used to make society better; rather, they say, human behaviour cannot be determined by external laws, and the investigation should be into the underlying causes of events (Critical Realism). Then there are advocates of social inquiry by interpretation (Interpretive account). Some say there should be a strict separation between objectivity and all value judgements (Ideal types). Still others regard theories as catalytic agents that will overthrow or replace the established order and create something new (Critical Theory). There are advocates of social inquiry into the actions of individual actors (Methodological Individualism) and of inquiry within a framework (Functionalism). And so on …

Positivism was about understanding the world so that we could predict and control it through the discovery of laws. In a period of chaos in Europe, it stood for order and unity. Post-positivism has renounced unity and represents ‘methodological pluralism’.[11]

Yosef Lapid has described post-positivism as a ‘loosely patched-up umbrella’ of remotely related articulations.[12] I am interested in looking at the context in which the new philosophy of science is seeking to establish itself. This context is radically different from the glory days of ‘logical positivism’ in the 1920s and 1930s. The Second World War (1939-1945) ended in the defeat of fascism and set the stage for the economic and political reconstruction of Western Europe. Since the 1950s, we have seen an intensification of the ideological war, followed by the defeat of Communism in the Soviet Union and Eastern Europe in the 1990s. The United States and West European countries have enjoyed an increasing degree of individual freedom and prosperity in the second half of the twentieth century. Countries of the former Communist bloc are moving rapidly towards that goal. This is reflected in the ‘pluralism’ of the post-positivist era.

Also in the second half of the twentieth century, decolonisation saw the emergence of a large number of new nations – first as a result of the withdrawal of the old colonial powers, such as Britain and France, from Asia and Africa; then in Europe and Central Asia when the Soviet Union disintegrated. The process has been chaotic. Political upheaval still continues in several parts of the world, but there is little doubt that the most important social and political phenomenon to emerge out of all this is democracy. There has been greater pluralism of ideas and political views in societies which are mature democracies: for example, the United States and West European countries. One need not go back more than 40 years to see this diversity in the movements opposed to the American role in Vietnam, nuclear armament, capitalism and free trade, environmental pollution and so forth. The main characteristic of these, and of social phenomena like the Hippie movement in the 1960s, has been opposition to the ‘established order’. Even as problems with the centrally planned economic system in the Soviet bloc were becoming increasingly obvious, and the system was collapsing, Marxist thinking continued to exercise considerable influence on university campuses and on the thinking of many post-positivist philosophers (in Critical Realism and Critical Theory, for example).

Clearly, a ‘more precise formulation’ of the vastly differing post-positivist philosophies is needed to understand them better. Debra Morris has provided an account that distinguishes post-positivism from its predecessor and suggests some common features within its components.[13] According to Morris, post-positivism represents: (1) a determination to free theoretical speculation from strict dependence on confirming data; (2) an approach that gives the theory component ‘a pride of place’ and treats science philosophically; and (3) a direct link to democratic theory.

The simplest and most enduring definition of democracy is that of Abraham Lincoln, who described it as ‘a government of the people, by the people, and for the people’. However, democracy in the second half of the twentieth century, both in aspiration and in reality, has thrown up complications. Different individuals and groups in each society have differing views about its meaning and how it would best serve the interests of citizens. Nationalist aspirations have given rise to an increasing number of conflicts. Spirited debates continue in established democracies about what kind of society there should be. Such debates cannot take place without ‘democratic individuality’[14] and ‘perspectivism’[15]. The former acknowledges each individual’s right to an equal say; the latter allows underlying assumptions in the formulation and application of theory. The need for maintaining neutrality or distance from the objects of social inquiry does not come into it.

What then happened to objectivity? As assumptions have become an accepted part of post-positivism, its supporters may say that objectivity is not really their goal. Those who engage in social inquiry critical of the existing order claim that they want to change the status quo in any case. Others contend that post-positivistic pluralism creates conditions for ‘objective conclusions’ to be reached. This presupposes that all those who wish to reach objective conclusions have the knowledge to do so. Another major problem arises in deciding which of the large number of alternatives to choose, and how to avoid ‘ignorance’ or ‘intolerance’ in the absence of clear ‘criteria’.[16] Indeed, as Thomas Biersteker says, ‘post-positivist scholars have been extremely effective critics but have been generally reluctant to engage in the construction and elaboration of alternative interpretations and understandings’.

Having focused on the many differences, let us finally see what remains common to positivism and post-positivism. Rejection of metaphysical inquiry in favour of science was the most important feature of positivism. It remains among the foundations of modern social inquiry. The role of theory and science was always crucial for positivists.[17] So it is today. To Comte, positivism had practical value and the growth of science was for the benefit of humankind. Most post-positivist scholars would not deny that such reformist tendencies remain among their underlying objectives. Data collection and analysis are still part of social inquiry. The purpose of all these examples is not to deny that the two have significant differences. They do, and their differences are well established. It is, however, time to move on from the debate that focuses on the criticisms of positivism towards a more coherent post-positivistic philosophy in social science.


Bibliography

Biersteker, Thomas. September 1989. “Critical Reflections on Post-Positivism in International Relations”, International Studies Quarterly: Volume 33, Issue 3, pp 265-266.

Durkheim, Emile. 1897. Suicide (ed) Thompson, K & Martell, Luke (1985).

“Why do Durkheim’s theories remain appealing to social scientists?” EssayBank, available from http://www.essaybank.co.uk/free_coursework/335.html (accessed on 20 November 2002).

Giddens, Anthony. 1977. “Positivism and its critics” (pp 29-95) in Studies in Social and Political Theory. London: Hutchinson.

History of Anglo-French relations. 29 October 2002. Guardian, from politics.guardian.co.uk/foreignaffairs/story/0%2C11538%2C821636%2C00.html (accessed on 25 November 2002).

Halfpenny, Peter. 1982. Positivism and Sociology. London: George Allen and Unwin.

Kateb, George. August 1984. “Democratic Individuality and the Claims of Politics”, Political Theory: Volume 12, Issue 3, p 332.

Kennedy, Emmett. 1989. A Cultural History of the French Revolution [online]. New Haven, CT: Yale University Press. From www.tasc.ac.uk/histcourse/frenrev/resource/20a1.htm (accessed on 6 December 2002).

Lapid, Yosef. September 1989. “The Third Debate: On the Prospects of International Theory in a Post-Positivist Era”, International Studies Quarterly: Volume 33, Issue 3, pp 235-254.

McLennan, Gregor. 2000. “The New Positivity” (Ch 1, pp 18-20) in For Sociology: Legacies and Prospects (ed) Eldridge, J, MacInnes, J et al. London: Sociology Press.

Morris, Debra. April 1999. “How Shall We Read What We Call Reality?: John Dewey’s New Science of Democracy”, American Journal of Political Science: Volume 43, Issue 2, pp 611-612.

Spencer, Herbert. 1864. Reasons for Dissenting from the Philosophy of M. Comte [online]. From www.marxists.org/reference/subject/philosophy/works/en/spencer.htm (accessed on 15 November 2002).

Stace, W T. July 1944. “Positivism”, Mind, New Series: Volume 53, Issue 211, pp 215-237.

Turner, Jonathan. Autumn 1985. “In Defence of Positivism”, Sociological Theory: Volume 3, Issue 2, pp 24-30.


[1] Anthony Giddens, ‘Positivism and its critics’ (Studies in Social and Political Theory, 1977, pp 29-95).

[2] History of Anglo-French relations, Guardian, 29 October 2002, from politics.guardian.co.uk/foreignaffairs/story/0%2C11538%2C821636%2C00.html.

[3] Emmett Kennedy, A Cultural History of the French Revolution (1989, pp 374-384), available online at www.tasc.ac.uk/histcourse/frenrev/resource/20a1.htm.

[4] Peter Halfpenny, Positivism and Sociology (1982, p 114).

[5] Peter Halfpenny (p 18).

[6] Herbert Spencer, Reasons for Dissenting from the Philosophy of M. Comte (1864), available from http://www.marxists.org/reference/subject/philosophy/works/en/spencer.htm.

[7] Emile Durkheim, Suicide, Part Four, K Thompson and Luke Martell (ed), 1985, p 95.

[8] For an overview, see ‘Why do Durkheim’s theories remain appealing to social scientists?’, from http://www.essaybank.co.uk/free_coursework/335.html.

[9] W T Stace, ‘Positivism’ (Mind, New Series, July 1944, p 215).

[10] Jonathan Turner, ‘In Defence of Positivism’ (Sociological Theory, Autumn 1985, p 24).

[11] Yosef Lapid, ‘The Third Debate: On the Prospects of International Theory in a Post-Positivist Era’ (International Studies Quarterly, September 1989, p 244).

[12] Yosef Lapid (p 239).

[13] Debra Morris, ‘How Shall We Read What We Call Reality?: John Dewey’s New Science of Democracy’ (American Journal of Political Science, April 1999, pp 611-612).

[14] George Kateb, ‘Democratic Individuality and the Claims of Politics’ (Political Theory, August 1984, p 332).

[15] Yosef Lapid, ‘The Third Debate: On the Prospects of International Theory in a Post-Positivist Era’ (International Studies Quarterly, September 1989, p 241). In the same article, he identifies, in addition to ‘perspectivism’, two more component themes of post-positivism: ‘paradigmatism’ and ‘relativism’.

[16] Thomas Biersteker, ‘Critical Reflections on Post-Positivism in International Relations’ (International Studies Quarterly, September 1989, pp 265-266).

[17] Gregor McLennan, ‘The New Positivity’ (For Sociology: Legacies and Prospects, 2000, pp 18-20).