2 The Fight against Want
  Material Prosperity, Inequality, and Poverty
  with Elisabeth Garratt and Lindsay Richards

Introduction

‘Want is one only of five giants on the road of reconstruction and in some ways the easiest to attack,’ argued Sir William Beveridge in his 1942 report.1 Want is a good place to start a review of social progress. While I have absolutely no desire to take a purely materialistic view of social progress, the alleviation of Want is surely a precondition for achieving many other elements of social progress. Certainly, Want seems to be a major cause of many contemporary problems, driving populations whose food supplies have failed as a result of climate change, famine, or civil war to migrate—many of them to the El Dorado of the European Union, where generous social insurance schemes like those proposed by Beveridge appear to have largely eliminated Want.

So my central question in this chapter is how successful Britain has been in tackling the giant of Want. Was it as easy as Beveridge had imagined? Did his social insurance proposals succeed in providing an effective safety net so that Want was no longer a problem? According to the social research of Seebohm Rowntree and his colleagues in York, poverty fell from 31.1 per cent of the wage-earning population in 1936 to a much more modest 2.8 per cent in 1950.2 They claimed that much of the fall had been due to the reforms initiated by the Beveridge Report. Others were a bit sceptical as to whether Beveridge’s social insurance schemes should take quite so much credit—they argued that post-war full employment and higher wages might have played a bigger part.3 But there seemed to be agreement that ‘social conditions are vastly better now than in pre-war days, although the remaining pockets of poverty, especially among the old, rule out any undue complacency’.4 My question for this chapter is whether the giant of Want has continued to be kept at bay, or whether we have become unduly complacent.

There are several reasons why we might have been lulled into complacency. There was broad agreement between the main political parties throughout the 1950s and 1960s on the nature and scale of British welfare programmes. However, British welfare policy then changed direction after Margaret Thatcher became prime minister in 1979. Britain gradually moved away from the post-war settlement with its universal benefits towards a more US-style market approach in which welfare benefits became less generous relative to average earnings in employment and more restricted in coverage.5 Incomes and earnings also became a great deal more unequal in the 1980s. The full employment of the 1950s was not maintained in the later decades of the twentieth century. Increasing immigration may also have exerted some downward pressure on wages, and more recently government austerity measures following the 2007/8 financial crash and the subsequent recession further curtailed government expenditure on benefits for working-age people.6

There are also indications that the giant of Want may be on the attack once again. There has been considerable publicity given to the rising use of foodbanks since the recession. Commentators have linked this to austerity measures.7 There has also been a suggestion that diseases linked with poverty, such as rickets, which we had long thought to be a thing of the past in a developed country like Britain, have been making an unwelcome reappearance. So it is by no means implausible that Want has reappeared in contemporary Britain.

Debates about recent trends and the effect of austerity measures are, unsurprisingly, highly political. Before turning to these contentious issues, I will first set them in context with an overview of long-term trends in material prosperity since Beveridge’s day. I will compare Britain’s performance with that of other developed democracies—our peer countries. My questions are: how much has material prosperity improved in Britain? Have we fallen behind, or powered ahead of, other major developed countries? And how have the benefits of economic growth been distributed between rich and poor? Have we followed the US model with the rich gaining most of the benefits of growth, or have those at the bottom shared in the general improvement? And crucially, what have the implications been for keeping Want at bay? Has there been a resurgence of poverty since the financial crash and the subsequent recession? Let me warn you in advance, though, that we have more questions than convincing answers.

How Does Britain’s Material Prosperity Compare with That in Peer Countries?

Let me start, then, with a review of the overall changes in Britain’s material prosperity since Beveridge’s day. I will set this in context by looking at the trends in Canada, France, Germany, Italy, Japan, Sweden, and the USA—which I refer to as our peer countries. Perhaps the most instructive comparisons are with France, Germany, and Italy, which are large post-industrial societies of a similar size to Britain (and with some similarities in historical and industrial legacies). Comparison with similar countries can potentially tell us, or at least give us a clue about, whether British reforms like the move away from the post-war welfare state to a more US-style model and the deregulation of the labour market have increased the risks of poverty in Britain. Alternatively, did the reforms increase the rate of economic growth and thus make it easier to eliminate poverty (which was almost certainly Margaret Thatcher’s intention)?

I start in Figure 2.1 with the trends in GDP per head for the UK and the seven peer countries. GDP—gross domestic product—is a standard measure of a country’s total economic output.8 There are several important criticisms of GDP per head as a measure of material prosperity. I will come to these in a moment, but GDP has the advantage that it has been estimated over a long time span for all our peer countries. (This is because it can be derived from administrative data such as tax returns to government and does not rely on representative surveys of the population, which only started to become available in the 1960s.) The main data source I use here goes back to 1950. These figures have been adjusted so that they take account of inflation and also of differences in purchasing power between countries.9 These adjustments are rather tricky issues, and the results obtained by different economists differ somewhat, but the broad picture is fairly clear.

Figure 2.1. Like Britain, all the peer countries became much richer between 1950 and 2014

image

Source: Max Roser, ‘Economic growth: I.4 GDP growth since 1950’, Our World in Data, https://ourworldindata.org/economic-growth10

Figure 2.1 shows that there was impressive improvement in material prosperity in all eight countries. In the case of the UK, GDP per head in 2014 was 4.3 times what it had been in 1950. It is also clear that the USA was well in the lead throughout, about fifteen years ahead of second-placed Germany. That is, in 2014, Germany had reached the same level of GDP per head as the USA had reached fifteen years earlier in 1999. The UK was another four years behind and, at the back of this bunch of large developed countries, Japan brought up the rear twenty-six years behind the USA.

We can also see some pretty big downturns, more or less every decade (for example, mid-1970s, early 1980s, early 1990s) and of course the big dip after the financial crash of 2007/8 and the ensuing recession. It’s been rather a bumpy ride. Different countries seem to have been more or less affected by the recession following the financial crash, but the most recent data from the World Bank suggest that, by 2016, all countries had regained the peak which they had achieved before the crash.11 Britain has just about returned to its previous peak but in effect growth stalled after the crash. In contrast, Germany has performed much more impressively than Britain since the crash—quite possibly because Britain had an unbalanced economy at the time of the crash with too large a dependence on the financial services sector. Nonetheless, the big picture is pretty clear: all these countries were a lot richer in 2014 than they had been in 1950, and so all of them should have found it a much easier task to fight the giant of Want.

It also seems fairly clear that Britain was in the middle of the bunch both at the beginning and at the end of the period, more or less level pegging with France. The only reasonably clear change in ranking over the years is that of Germany, which made greater progress than most other countries. However, I should mention that one’s choice of baseline year can make quite a big difference to one’s conclusion. For example, Germany, Italy, and Japan (the principal defeated countries in the Second World War) took a considerable hit in the war years and by 1950 had not caught up with the position they had been in fifteen years earlier in 1935.12 One could therefore argue that their positions in our baseline year of 1950 were misleadingly low. Sweden on the other hand remained neutral during the war and avoided the human and material costs which the other countries suffered. So perhaps Sweden’s starting point in 1950 was misleadingly high.

What about Britain? Britain slipped behind in the first half of the period, and had fallen to the back of the bunch by 1980. However, Britain then began to make up some of the lost ground in the 1980s and 1990s and grew particularly fast in the decade leading up to the financial crash. Since then, Britain’s progress has been unspectacular, though remaining in the middle of the bunch. Most economists would attribute Britain’s improved progress since the 1980s to Margaret Thatcher’s reforms, which increased competitiveness. On the other hand, the reforms may well have unbalanced the economy, making Britain too reliant on financial services and hence vulnerable to the financial crash of 2007/8.

The other major development after 1980 was the increase in inequality. As I mentioned earlier, there are several major issues with GDP and it is increasingly recognized as having limited usefulness for measuring economic progress.13 Perhaps most important for the purpose of this book is that GDP per head ignores the issue of inequality around the average. If the rich become much richer while the prosperity of the majority stagnates, then the average could increase even though most people are no better off. Hence changes in GDP per head (which is essentially an average) can give a misleading impression of the material progress experienced by the bulk of the population. A better measure than the average is the median, a concept which I will try to explain in the next section.

In Figure 2.2 I show how income inequality changed over time in the eight peer countries. I use the Gini coefficient, named after the Italian statistician Corrado Gini, who first published the measure in 1912.14 The Gini coefficient is the most widely used measure of income inequality, although there are lots of alternative measures. In essence it tells us how far the actual distribution of income (or wealth) in a country departs from perfect equality. Thus in a society where everyone receives the same income, the Gini coefficient would be 0. At the other extreme, if one person had all the income, and everyone else had none, the coefficient would be 1.15
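For readers who would like to see how such a coefficient is actually calculated, here is a minimal sketch in Python. It uses the standard mean-absolute-difference formula on a handful of invented incomes; it is illustrative only and not the procedure of any particular statistical agency.

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula.

    Returns 0 when everyone has the same income and approaches 1 as a
    single person takes all the income. Inputs here are invented figures.
    """
    values = list(incomes)
    n = len(values)
    mean = sum(values) / n
    # Sum of absolute differences over every ordered pair of incomes
    total_diff = sum(abs(x - y) for x in values for y in values)
    return total_diff / (2 * n * n * mean)

# A perfectly equal society versus a fairly unequal one (hypothetical figures)
print(gini([300, 300, 300, 300]))        # 0.0
print(gini([100, 200, 300, 400, 1000]))  # 0.4
```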

Figure 2.2. Income inequality increased particularly rapidly in the UK after 1979

image

Note: for the USA the Gini coefficient is based on gross equivalized household income whereas for the other countries it is based on equivalized household disposable income

Source: Anthony B. Atkinson and Salvatore Morelli, Chartbook of Economic Inequality (2016), www.chartbookofeconomicinequality.com

As Figure 2.2 shows, the general trend in our peer countries, up until the 1970s, had been towards greater equality. There was then a reversal and income inequality began to rise. Of all eight countries, Britain showed the largest increase in inequality after 1980, going from being one of the most equal economies in the 1970s to one of the most unequal by the 1990s. Sweden also showed quite a large increase in inequality after 1990. In contrast, France saw declining inequality (albeit from a very high baseline) while Japan showed a much more modest increase than countries like Britain and the USA and was the most equal of the eight countries more or less throughout the period.

Why inequality increased in almost all of the peer countries, but especially rapidly in Britain, is not well understood even by professional economists. A range of explanations have been put forward, and there is probably some truth in all of them. One argument focusses on globalization, specifically on trade liberalization. Free trade with developing countries such as China, where labour is relatively plentiful, means that goods can be produced more cheaply in the developing countries than in the developed world. Manufacturing jobs thus get exported to China; there is declining demand for less-skilled manufacturing workers in Western countries; and wages of these less-skilled workers fall behind those of more highly skilled workers.

Globalization could also potentially explain the rocketing salaries of top executives and bankers, as the most skilled of these are now competing in a larger global marketplace and not just in the smaller marketplace of their own country. In addition to globalization, technical progress could be another explanation, as technical progress will tend to increase the demand for, and hence salaries of, the most highly qualified workers. In addition, the increasing employment of women in the labour market and their entry into higher-level jobs (which most of our peer countries apart from Japan experienced) means that there will be a new class of dual-career high-earning households with a larger combined household income.

None of these explanations really explains why inequality increased faster in Britain than in the other countries. Political choices are likely to have made a difference. As economist Jonathan Cribb explains:

Changes in government tax and welfare policy may be part of the reason for increased inequality in net incomes (income after tax has been paid and including any welfare benefits). Before the election of the Conservative government in 1979, the top rate of income tax was 83% on earned and 98% on unearned income. Successive cuts to the top rates of tax during the 1980s directly boosted net incomes at the top of the income distribution, and increased the incentives to work for people earning very high salaries, which increased the very highest incomes relative to the rest of the population during the 1980s.16

Whatever the exact explanation, the key point for us is that the gains in material prosperity from economic growth were not shared equally but went disproportionately to the better-off. If we are interested in how prosperity has changed since Beveridge’s day for those people who are most at risk of experiencing Want, then GDP per head may not be a good guide at all. The fourfold increase in GDP tells us that Britain was in a position to tackle Want. But it does not follow that Britain actually did.

How Has the Material Prosperity of Richer and Poorer Households in the UK Evolved?

In order to explore whether Want has indeed been kept at bay in Britain, I look first at the way in which household incomes have increased over time for richer and poorer sections of the population. In Figure 2.3 I therefore shift the focus from GDP per head to household income and how incomes have grown for poorer and for richer households. Unfortunately, unlike GDP, the available data only go back as far as 1961. This is because measures of household income are not available from the kinds of administrative sources that were used to provide estimates of GDP. Instead, they come from representative sample surveys of the population, which were not widely used by government (or academics) until the 1960s and 1970s. For the most part I focus on Britain since details for most of the peer countries are only available in this format for a much shorter period.17

Figure 2.3. Real household income in the UK more than doubled between 1961 and 2015/16

image

Note: expressed in 2015–16 prices, adjusted for inflation using a variant of the CPI that includes owner-occupied housing costs

Source: calculated by Jonathan Cribb of the Institute for Fiscal Studies using the Family Expenditure and Family Resources Surveys

Figure 2.3 shows how incomes (adjusted for inflation and for household size and composition) have changed for households at different points in the income distribution. Let me try to explain what these figures mean. I define a middle-income household as one that lies exactly at the middle, with half of households receiving more income, and half receiving less. This is technically termed the median income. Following the same logic, a household at the tenth percentile is towards the bottom of the distribution, with only 10 per cent of households receiving less income. These are the households who will be at most risk of Want. And it is the other way round for the ninetieth percentile—90 per cent of households receive less income than a household at the ninetieth percentile and only 10 per cent receive more.

To give an idea of what this means in practice: a household with the median income in 2015 might be one containing two children and two adults with a combined income after tax (and including any benefits) of £34,500. A household with two adults and two children at the tenth percentile would have a combined income of £17,500, and a similar household at the ninetieth percentile would have a combined income of £68,900. A household at the ninety-ninth percentile—the top 1 per cent—would have an income of £171,500.18
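To make the idea of percentiles concrete, here is a small sketch in Python. The weekly incomes are invented for illustration and are not the survey data behind Figure 2.3; the point is simply to show how the tenth percentile, the median, and the ninetieth percentile are read off from a distribution.

```python
import statistics

# Ten hypothetical weekly household incomes in pounds (invented for illustration)
incomes = sorted([210, 340, 150, 480, 620, 95, 275, 1050, 400, 560])

# The median: half of households receive more, half receive less
print("median:", statistics.median(incomes))

# quantiles(..., n=100) returns the 99 cut points that divide the distribution
# into 100 equal slices; index 9 is the 10th percentile, index 89 the 90th.
cuts = statistics.quantiles(incomes, n=100)
print("10th percentile:", round(cuts[9]), "90th percentile:", round(cuts[89]))
```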

Similarly to Figure 2.1, Figure 2.3 shows an impressive increase in material prosperity. In the case of a household at the middle of the income distribution—the median household—the increase was from £177 per week (measured in 2015/16 prices) to £482 per week over the fifty-plus years from 1961 to 2015/16. This is a smaller rise than that in GDP per head, which more than trebled over the same period.

Part of the reason for the discrepancy between the GDP and household income figures is the rise in inequality. We can see that, up until 1980, there was steady although unspectacular growth for households at all the different income levels. In contrast, after 1980 the lines fan out, indicating different rates of growth for households at different income levels. As a result, over the period as a whole, the real income of a well-off household at the ninetieth percentile more than trebled (rising from £307 to £950). For households in the middle (the fiftieth percentile), income increased roughly two and two-thirds times (£177 to £482), while for less well-off households at the tenth percentile the increase was just over two and a half times (£96 to £244).

This is of course just another way of saying that inequality increased over the period. Figure 2.2 shows clearly that the big increase in inequality was in the 1980s. Thus there was a rather steep rise in household income for those at the ninetieth percentile (and an even steeper rise for the ninety-fifth or ninety-ninth percentiles) between 1980 and 1990 whereas the gradient for households at the median or below was not all that different from what it had been before the 1980s.19 The benefits of economic growth were thus shared rather unevenly between richer and poorer sections of society after 1980. Nevertheless, there were real gains over this period even for the most disadvantaged sections of society. In the USA, in contrast, there was virtually no progress at all after 1980 for households at the tenth percentile and by 2010 their real household income had slipped behind that of the tenth percentile in Canada, France, Germany, and Sweden and was only just ahead of the UK. (We do not have comparable data for Japan.)20

We should not, however, put all the blame on the growth of inequality. Another reason for the discrepancy between the growth in median household income and that of GDP per head is that average GDP per head is a calculation based on the number of individuals whereas median household income is based on the number of households. Over this period the number of households grew faster than the number of individuals.21 This was due to a range of social changes, such as rising divorce rates, with divorcees more likely to form a separate household (at least until they started living with a new partner). There was also an increase in the number of elderly people living alone. Since it is generally more expensive for two people to live apart than to live together, this shift towards more and smaller households can partly explain why the high growth in GDP per head did not translate into an equally high growth in median income per household.
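A little invented arithmetic shows why the per-head versus per-household distinction matters: if total income doubles while the population stays the same but the number of households grows (because households get smaller), income per household rises by less than income per head. The figures below are made up purely for illustration.

```python
# Invented totals for a stylized economy at two dates
total_income_start, total_income_end = 100_000, 200_000   # total income doubles
people = 1_000                                             # population unchanged
households_start, households_end = 300, 400                # more, smaller households

growth_per_head = (total_income_end / people) / (total_income_start / people)
growth_per_household = (total_income_end / households_end) / (total_income_start / households_start)

print(growth_per_head)       # 2.0  -> income per head doubles
print(growth_per_household)  # 1.5  -> income per household rises by only half
```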

Another point which we need to take into account is the increasing participation of women in the labour market. When I was growing up back in the 1950s, my mother did not go out to work, and it was quite rare for married women with school-age children to go out to work. The following decades, however, saw a huge increase in the proportion of married women with school-age children who went out to work (often part time). This will have increased household income quite considerably for these particular households. But a lot of that extra income will have gone straight out of the household budget in order to pay for childcare. In a sense, the spare cash available for the household to spend will not have increased nearly as much as their total monetary income. So it is not entirely clear that the actual increase in material prosperity was quite as large as these household income figures would imply.

Another way of putting the same point is to say that the value of the housewife’s unwaged contribution to the household’s prosperity is not taken into account by official estimates of household income, whereas the wages received by the paid childcare assistant will be included. In essence some activities such as childcare have become monetized—they have shifted from the non-market economy to the market economy. ‘If you marry your housekeeper, national income falls’ is a catchphrase which captures the essence of this argument (i.e., you used to pay her a wage when she was your housekeeper but no longer pay a wage once she is your wife). Since childcare costs are likely to be a larger proportion of the median household’s expenditure than of the well-off household’s, it is quite likely that the figures for the growth in median household income over the long term are unduly rosy.

There is one other important complication which I should mention—the rise of indebtedness. To be sure, household debt is not always a bad thing. Borrowing can smooth consumption over the life cycle, for example, by allowing young people to borrow against their future income (which is effectively what one does when one takes out a mortgage to buy a house). However, interest payments on the debt mean that the cash which the household has available for maintaining its material prosperity could be considerably less than the nominal household income shown in Figure 2.3. This provides another reason why the actual experience of Want may be somewhat different from the trends in income received before any outgoings like childcare costs and interest payments have been deducted.22 Highly indebted households are also more vulnerable to economic shocks such as unemployment, a drop in income, or an increase in interest rates. Income, therefore, may not always be a good guide to the actual living standards of poorer households. It may well be better to construct measures of poverty based on consumption rather than income.23

Figures for levels of household indebtedness have been estimated by various bodies such as the Office for National Statistics and the Bank of England from 1988 onwards. The standard procedure is to show indebtedness as a multiple of total household income. In effect this measure tells us how long it would take a household to pay off their debt if they were to use the whole of their annual income on the debt repayment. This gives us a sense of the size of the burden on households.
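As a concrete illustration of what this ratio means, the sketch below uses invented figures; it is simply the division described above, not the precise methodology of the Office for National Statistics or the Bank of England.

```python
# Invented figures for a single household, for illustration only
household_debt = 51_000   # outstanding mortgage plus unsecured borrowing (pounds)
annual_income = 30_000    # total annual household income (pounds)

debt_to_income = household_debt / annual_income
print(f"debt-to-income ratio: {debt_to_income:.1f}")  # 1.7

# Read as: it would take this household 1.7 years to clear its debt if every
# pound of income went on repayment (ignoring interest and living costs).
```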

Different sources estimate different levels of this debt to income ratio, but as we can see from Figure 2.4 they all show a similar picture of trends over time. The ratio of debt to income was just over 1 in 1988. It remained fairly steady up until 1999 but then increased sharply from 2000 onwards, peaking at 1.7 in 2007, just before the financial crash. Various reasons such as rising house prices, low interest rates, and increasing availability of credit have been put forward to explain this sharp rise. When the financial crisis hit in 2008, cut-backs in mortgage lending led the debt-to-income ratio to drop but it only fell back to about 1.5, still well above its level in the 1990s.

Figure 2.4. Household debt rose particularly rapidly in Britain after 2000

image

Source: Marii Paskov, ‘Have we become more indebted?’ CSI briefing note no. 16 (based on data from the Office for National Statistics, Eurostat, Organisation for Economic Co-operation and Development, and British Household Panel Study), http://csi.nuff.ox.ac.uk/wp-content/uploads/2015/11/CSI-16-Have-we-become-more-indebted.pdf

As one might expect, the increase in household indebtedness was not evenly spread across the population. Unsurprisingly, the ratio of debt to income tends to be higher among younger people and among those with lower incomes. Furthermore, detailed research by Marii Paskov has shown that low-income groups are particularly likely to have unsecured debt as opposed to secured debt in the form of mortgages (which are backed up by assets such as the house being bought).24 Unsecured debt is a type of loan or credit that is extended without a collateral requirement. It constitutes a smaller proportion of overall debt but it is generally more expensive with higher interest rates. Unlike secured debt, unsecured debt does not seem to have fallen back since the financial crash. Moreover, the growth in unsecured debt after the 1990s was particularly large for the poorest households.25

Unsecured indebtedness may increase living standards in the short term, but is unlikely to be sustainable in the longer run. The increase in indebtedness among poorer households may also lead to a disconnect between nominal income growth and actual experience of Want. Research by the Institute for Fiscal Studies, for example, has shown that families for whom unsecured debt such as credit cards is a heavy burden are much more likely to be materially deprived than others with the same household income.26 There is no one-to-one relationship, therefore, between household income and level of material deprivation.

So What Has Actually Happened to Want and Poverty since Beveridge’s Day?

This is not quite as easy a question to answer as Beveridge might have expected. For a start, it is not straightforward to know what is meant by Want or by terms such as poverty or material deprivation, which are more usual nowadays than Beveridge’s quaint term Want. As Professor Joad, a celebrity philosopher of the 1940s who became a household name in my childhood through his contributions to the BBC radio programme The Brains Trust, used to say, ‘It all depends on what you mean by…’ So what do we mean by Want?

There are basically two approaches to defining and measuring Want or material deprivation (and a huge number of detailed variants). The first approach, which is the one Beveridge seems to have had in mind, interprets Want as the lack of a basic subsistence level of living. We can think of this as a fixed and unchanging measure of destitution.27 The second approach, which contemporary social reformers have emphasized, interprets poverty as a standard of living falling below the minimum socially acceptable standard, a standard which can change over time. This second approach lies behind contemporary notions such as the living wage.

Let’s start with Beveridge’s subsistence-level approach (an approach which does not seem to have been contentious in his day). He robustly declared:

During [the immediate pre-war years] impartial scientific authorities made social surveys of the conditions of life of a number of principal towns in Britain … They determined the proportions of the people in each town whose means were below the standard assumed to be necessary for subsistence, and they analysed the extent and causes of that deficiency. (My italics)28

One of the impartial scientific authorities on whom Beveridge relied was the Quaker businessman and social researcher Seebohm Rowntree. Rowntree’s original approach, described in his book Poverty (published in 1901), defined a family as living in poverty if its total earnings were ‘insufficient to obtain the minimum necessaries for the maintenance of merely physical efficiency’.29 He was deliberately parsimonious in his estimates as he did not want to be accused of exaggerating the problem. He emphasized too that ‘Expenditure needful for the development of the mental, moral, and social sides of human nature will not be taken into account at this stage of the enquiry.’30 Drawing on the research of physiologists—one of whom conducted remarkably unethical experiments on prisoners, checking how their weight changed when they were given smaller rations and how much food would lead to an increase in their weight—Rowntree estimated how much protein, fat, and carbohydrate were needed for a working man, for a woman (eight tenths of the man’s requirements), and for children. He even gave sample menus, which contained plenty of bread, milk, and porridge, some cheese, but very little meat and no fresh fruit. He then worked out the cost of this subsistence-level diet and added on sums for rent, clothing, light, fuel, and soap. He derived the latter estimates from a survey of York, asking working people questions such as ‘What in your opinion is the very lowest sum upon which a man can keep himself in clothing for a year?’

Rowntree then estimated the necessary expenditure each week for different sizes of family—21s 8d for a couple with three children, for example. From his 1899 survey of York he found that 1465 families, comprising 7230 persons, were living in poverty—15 per cent of the wage-earning class in York, and nearly 10 per cent of the whole population of York. When he repeated the exercise fifty years later in 1950 he found that the proportion below subsistence level had fallen to only 2.8 per cent of the wage-earning class in York. He attributed this largely to the success of Beveridge’s reforms.31

The second approach to measuring poverty expands the necessities of life beyond food, clothing, heating, and rent. It adds on a range of items which are believed to be necessary for a socially acceptable standard of living. This approach goes beyond subsistence to include the ‘mental, moral and social elements’ that Rowntree had mentioned but excluded in his 1901 study. Following this kind of approach, more recent researchers such as Jonathan Bradshaw and his colleagues defined a minimum income standard as one which ‘is rooted in social consensus about the goods and services that everyone in modern Britain should be able to afford’.32 They argued that a minimum standard of living in Britain today ‘includes, but is more than just, food, clothes, and shelter. It is about having what you need in order to have the opportunities and choices necessary to participate in society.’33

In this study by Jonathan Bradshaw and his colleagues, the mental, moral, and social aspects over and above subsistence requirements were judged by ordinary members of the public. They included: mobile phones and landlines, internet access (for secondary-school children), childcare (so that mothers could go to work), public transport (but not a car), money for social and cultural participation including meals out, going to the cinema, pubs, and also money for maintaining a healthy lifestyle, such as gym membership. It allowed for only a one-week budget holiday, but also included Christmas and birthday presents. These needs were based on what ordinary people themselves, after deliberation in focus groups, regarded as requisites for participating in society.

In my childhood my family would not have reached this minimum. At that time, we obviously did not have mobile phones (not yet invented) or internet access (ditto), and there would not have been many gyms (though there were tennis, golf, and rugby clubs), but it makes a lot of sense to add mobile phones and internet access today because they are part of the fabric of modern life: more and more essential functions, such as job applications, are being carried out online. Others, such as childcare or gym membership, reflect changing ways of life. When I was growing up, mothers were not expected to work and a minimum income standard for the 1950s would certainly not have included childcare costs. But it is entirely appropriate today, when there is broad social acceptance that women should have the opportunity for paid employment.34 On the other hand, some of the things we did have when I was growing up, such as a telephone landline, would not have been included in an equivalent study of minimum income standards in the 1950s. So my childhood circumstances would have been superior to a 1950s minimum, but below a 2010 minimum. As Britain has got richer, and also as ways of life have changed, so the minimum socially accepted standards have risen, too. The goal posts are continually shifting. The key point is that the socially acceptable standard of living is not fixed—it changes over time as society changes.

The first approach, then, takes a more or less constant yardstick of subsistence-level requirements. (Actually, even this constancy can be questioned—the specimen budgets which Rowntree gave in his 1901 book would certainly not be regarded as reaching minimum subsistence standards today since they did not include fresh fruit or vegetables—vitamins had not then been discovered.) The second approach allows that social needs will change over time and that, as a society becomes richer, what is needed in order to be a participating member of the society will rise too. Families with the kind of living standard my family had in the 1950s would nowadays feel left behind and left out, though at the time we were relatively well-off.

So taking this second approach, what we would really like to know is whether the increase in real household income for the poorest households has actually kept up with the changing socially acceptable standards. Unfortunately, the honest answer to this question is that we really do not know about long-term changes. We do not have the data from the 1950s, 1960s, or even 1970s about what were socially acceptable standards at those times. Possibly we could get older people like myself to reminisce and work out what would have been required in the 1960s, but I would not advise you to trust our memories.

There is one official measure of relative poverty that might conceivably give us a clue, but it should be treated with great caution. It has become standard, for somewhat opaque reasons, for governments and international bodies to use the yardstick of 60 per cent of the median household income to define what is termed ‘relative poverty’. To give a concrete example: in 2014/15 the relative poverty yardstick for a couple with two children under 14 in Britain—60 per cent of the median income for that type of household—amounted to a household income of £398 per week.35 Jonathan Bradshaw and colleagues’ analysis suggested that this 60 per cent yardstick is rather lower than the minimum income standard derived from what members of the public thought was necessary in order to achieve a socially acceptable standard of living. Their minimum income standard varies for different types of household but is generally closer to 70 per cent than 60 per cent of the median.

But let us accept the official definition of 60 per cent of the median, and let us assume that the public’s conception of what is a socially acceptable minimum has more or less tracked the growth in median household incomes. These are pretty heroic assumptions but they do allow us to report—though only from 1961 onwards—what proportion of households fell below the threshold. This is shown in Figure 2.5.
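To show how such a headcount is produced, here is a minimal sketch using invented equivalized weekly incomes. The official statistics are of course based on large household surveys, with equivalization for household size and sampling weights, all of which are omitted here.

```python
import statistics

# Invented equivalized weekly household incomes in pounds, for illustration only
incomes = [150, 220, 260, 310, 350, 398, 420, 480, 520, 610, 700, 950]

median = statistics.median(incomes)   # the middle household
threshold = 0.6 * median              # the official relative poverty line

below = [y for y in incomes if y < threshold]
rate = len(below) / len(incomes)

print(f"median: {median:.0f}, threshold: {threshold:.0f}, relative poverty rate: {rate:.0%}")
```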

Figure 2.5. Relative poverty in Britain was higher in 2015/16 than it had been in 1961

image

Source: proportion of households with less than 60 per cent of median household income, calculated by Jonathan Cribb of the Institute for Fiscal Studies using the Family Expenditure and Family Resources Surveys

Figure 2.5 suggests that the proportion of households in relative poverty was fairly stable at around 13 per cent in the 1970s and early 1980s. It then climbed rather steeply in the latter half of the 1980s (which of course happened to be the era when inequality was increasing fastest), reaching 22 per cent in 1990. Since then it has been gradually declining but has not returned to the levels seen in the 1970s: in 2015/16, relative poverty still stood at 16 per cent. In the long run, then, the proportion in relative poverty has remained higher since the 1980s than it was before. Incidentally, recent OECD data suggest that relative poverty in Britain is considerably higher than in Sweden, Germany, or France but lower than in the USA.36

Again, we need to take these figures with a pinch of salt. One of the oddities of the official measure of 60 per cent of the median income is that the numbers in relative poverty will be driven as much by what is happening to median income as they are by what has happened to the living standards of poorer households. For example, after the 2007/8 financial crash, economic growth initially fell sharply and wages (and median household income) fell. At first the value of state benefits (on which a lot of the poor depend) was maintained, and so the gap closed between people receiving benefits and the median household, for which wages were the major contributor to overall income. So the numbers in relative poverty declined somewhat in the years after 2007/8, even though their actual real incomes hardly changed. To be sure, the public’s standards of what constituted a minimum socially acceptable standard of living may also have fallen after 2007/8 in line with the fall in median incomes, but we do not have much hard evidence on this.37

Has There Been a Resurgence of Want in the Twenty-First Century?

Can we say anything about Want in the sense of falling below the basic subsistence level rather than in the relative, socially acceptable sense? There is no good reason to expect subsistence levels of poverty to follow the same path as relative poverty. Whereas we might expect the proportion experiencing the latter to remain more or less constant over time as socially acceptable standards rise (or fall) in line with general prosperity, we might in contrast expect the proportion who are destitute to fall in the long run as society gets richer. Seebohm Rowntree had claimed that it was already down to 2.8 per cent in his 1950 survey of York. Has it subsequently vanished altogether?

The Joseph Rowntree Foundation (which was established by Seebohm Rowntree’s father Joseph in 1904) has recently carried out a study of the level of destitution in the UK in 2015.38 They defined destitution as occurring if two or more of the following six essentials were missing: (1) shelter (slept rough for one or more nights); (2) food (had fewer than two meals a day for two or more days); (3) heating their home (been unable to do this for five or more days); (4) lighting their home (been unable to do this for five or more days); (5) clothing and footwear (appropriate for the weather); or (6) basic toiletries (soap, shampoo, toothpaste, toothbrush).39 While the details differ, it is probably no accident that these six necessities are the same as the six included in Seebohm Rowntree’s measure of subsistence-level poverty sixty-five years earlier.

The Joseph Rowntree researchers estimated that 668,000 households containing 1,250,000 people (of whom 312,000 were children) were destitute in the UK in 2015. This amounts to about 1.9 per cent of the UK population.40 The Joseph Rowntree figures were based on a survey of people seeking help from voluntary agencies. It will therefore probably undercount destitution because some people will not have sought this help and others will have sought help from statutory agencies.

These 2015 estimates of destitution are only slightly lower than Seebohm Rowntree’s 1950 estimate for York. We cannot legitimately compare the two sets of figures because of the completely different methodologies employed (even though the underlying concepts are pretty similar) and because of the different populations studied. Nevertheless it is pretty clear from the Joseph Rowntree research that destitution has not vanished, despite Britain’s much greater prosperity at the beginning of the twenty-first century.

Although they did not compare their results with Seebohm Rowntree’s, the authors of the Joseph Rowntree Report did attempt to answer the question of whether there had been any tendency for destitution to increase in recent years. They reviewed a range of data sources, such as the proportions on very low incomes, homelessness, and the use of foodbanks, but they were forced to admit that direct evidence on recent trends in destitution was lacking.41 They concluded: ‘The most plausible conclusion is therefore that destitution will have increased in the UK in recent years, but we cannot directly demonstrate this’.42

I think this is a fair conclusion. But let’s have a quick look at the kind of data which is available. It is certainly true that the number of food parcels delivered by the Trussell Trust, the UK’s largest foodbank network, increased hugely between 2008/9 and 2013/14. However, we also know that the Trussell Trust increased its network over this period, so some of the increase may have been due to increased supply rather than to increased demand. Nor do we know for sure whether the increased number of food parcels were consumed by the same people going back more and more often for parcels, or whether a larger number of people in total were making use of foodbanks.

Unfortunately, neither the government nor academics routinely collect data on food insecurity and the use of foodbanks. (The government really ought to do more to check on the scale of problems affecting vulnerable members of our society.) However, Figure 2.6 combines the various data sources which are available in order to provide an overview of trends in food insecurity and emergency food provision in recent years. As we can see, the number of food parcels distributed by the Trussell Trust and meals provided by FareShare increased more steeply than measures of how many people were missing meals or compromising food choices. This is in line with the interpretation that the rise in emergency food provision might in part reflect greater availability of this type of assistance. On the other hand, the survey data also suggest that there was an increase in food insecurity and not just increasing availability.43

Figure 2.6. Food insecurity and the distribution of food parcels and meals increased in England after 2008

image

Source: Elisabeth Garratt, ‘Food insecurity and foodbank use’, CSI briefing note 28 (based on data from the Trussell Trust, FareShare, and English Longitudinal Study of Ageing) http://csi.nuff.ox.ac.uk/wp-content/uploads/2016/11/CSI-28-Food-insecurity-revised.pdf

There is also some circumstantial evidence in line with this interpretation. We know for example that food costs increased between 2002 and 2012 more rapidly than the rate of inflation, and that the increases were far greater for healthier food.44 On average, the nutritional quality of the food people purchased declined after the financial crash.45 Between 2007 and 2012, people spent more on food but bought less (although whether this reflects going without or being less wasteful cannot be determined). These changes were also larger in lower-income groups.46 It therefore does seem plausible that poorer families might have been increasingly stretched to buy good food after the recession. We should certainly not dismiss the increasing-destitution hypothesis out of hand.

A quite different source of data can be found in hospital admissions records on malnutrition. There is some short-run data on hospital admissions with a primary or secondary diagnosis of malnutrition. In England and Wales such admissions increased from 3899 in 2009–10 to 6686 in 2013–14, an increase of 72 per cent, while primary diagnoses of malnutrition increased by 28 per cent from 478 in 2009–10 to 612 in 2013–14.47 Unsurprisingly, primary diagnoses of malnutrition were concentrated in more disadvantaged areas, and in December 2013 a group of doctors wrote to the British Medical Journal, warning of food insecurity as a public health emergency.48 Low birth weight is another condition associated with poverty. In England and Wales, around 3 per cent of full-term live births were low birthweight (that is, less than 2500g) in 2015, more or less unchanged since 2006, although unfortunately we do not have longer-run data.49

We do, however, have much longer-term data on another condition associated with poverty and poor diet—namely, rickets. Rickets is a condition in children in which bones fail to develop properly owing to a deficiency of vitamin D, which the body needs in order to absorb calcium. It was common in the nineteenth century but subsequently declined. Michael Goldacre and his colleagues have looked at hospital admissions records from 1963 to 2011 and found that there was a particularly marked increase after 1999, as Figure 2.7 shows.50

Figure 2.7. The incidence of rickets remained around 1 per 100,000 children up until 1999 but increased sharply in the twenty-first century

image

Source: derived from Michael Goldacre, Nick Hall, and David G. R. Yeates, ‘Hospitalisation for children with rickets in England: A historical perspective’, The Lancet, 383, no. 9917 (2014): 597–8, with permission from Elsevier (Licence number: 4217160155087).51 DOI: http://dx.doi.org/10.1016/S0140-6736(14)60211-7

There is always the possibility with administrative data like hospital admissions that the increase may be due to changes in hospital admissions procedures or diagnoses. However, Michael Goldacre and his colleagues argue that, since rickets is a straightforward diagnosis, there is a good possibility that the true incidence had indeed risen.

The problem with this evidence is that the population was changing over this period, with increasing numbers of ethnic minorities resident in Britain. Children with darker skin tend to have higher rates of rickets in Britain, since they absorb less vitamin D from the available light. In fact, the increase in the size of the ethnic minority population started much earlier than 1999 (and the recent increase in migration to Britain has been driven by migrants from European countries), so this is not a conclusive counter-argument. The increase could also be explained by children spending more time indoors, for example playing computer games. Once again, then, this is suggestive but not conclusive evidence.

Conclusions

We can have no doubt that Britain, like other large Western democracies, became substantially richer in the post-war decades up until the financial crash of 2007/8. Even after the crash, both GDP per head and average household income were still much greater than they had been fifty years earlier. It has been a bit of a bumpy ride. Britain’s growth was slightly slower than that of peer countries in the first couple of decades; it was around average in the 1980s, and it then surged in the 1990s and up until the financial crash of 2007/8. By the end of our period Britain was still somewhere in the middle of the pack of peer countries. We had dropped behind Germany (which had started off well behind) but Britain was still level pegging with France. This was not perhaps a spectacular performance overall but nevertheless it was one which should have made it possible to tackle the giant of Want.

However, the other great economic change which transformed the nature of the British economy was the rapid increase in economic inequality after Margaret Thatcher took office in 1979. Britain had been one of the most equal of our eight peer countries in the late 1970s, but by the 1990s it had become one of the most unequal (although still not as unequal as the USA).

Doubtless the architects of the 1980s reforms hoped that growth rates would increase and that living standards for all would increase as a result. While household incomes did indeed continue to rise throughout the 1980s and up until the 2007/8 crash, it was the higher-income groups who experienced an increased rate of progress. The poorest sections of society saw more modest increases in household income, and their rate of progress was largely unchanged from that of earlier decades. This should still have been sufficient to see a gradual decline in poverty in Britain but may well have been offset by increases in levels of personal debt, which increased faster for poor people than for the better-off.

Unfortunately, we cannot straightforwardly read off the experience of material deprivation from the income data. Measuring poverty and different forms of material deprivation is not at all straightforward, and we do not have good over-time data to discover what the trends have been. This is true whether we look at subsistence levels of poverty—having enough money to eat a healthy diet, to keep warm and clothed—or at access to a socially acceptable standard of living. It would be foolish to pretend that we can reach any definitive conclusion about long-run trends in either of these two conceptions of poverty.

However, the balance of the evidence does suggest that economic progress for the poorest families stalled in the twenty-first century or possibly even went into reverse. Even if we do not place any weight on the data on the rising use of foodbanks, and the rise of rickets and malnutrition, recent research by the Institute for Fiscal Studies indicates that the proportion of children living in material deprivation barely changed between 2004/5 and 2014/15.52

These scattered pieces of evidence are far from providing definitive proof, then, that destitution is increasing, but they do suggest that this is an hypothesis which cannot be dismissed out of hand. It should be taken seriously. One often hears people (especially spokesmen for lobbying groups) argue that there is no proof for a particular empirical claim. In my youth tobacco companies were guilty of this kind of thing, arguing that there was no proof that smoking caused lung cancer. And the companies were right at that time, since strong causal evidence had not yet been established. Nowadays few people who have studied the issue would deny that there is indeed a causal link between smoking and lung cancer. But even before the causal link had been established, there was plenty of evidence suggesting that the possibility of a causal link should be taken very seriously, not dismissed out of hand.

I would argue, then, that there are some cases where we can be sure ‘beyond all reasonable doubt’ that the claim is soundly based. There will be a larger number of cases where the evidence (often because no one has yet collected the relevant data) is not sufficiently strong to lead to a ‘beyond all reasonable doubt’ conclusion, but is still strong enough to say that the claim needs to be taken seriously and should not be dismissed out of hand. I suggest that increasing destitution during the twenty-first century falls into this category—not yet proven, but needs to be taken very seriously.

Given the uncertainty about the data and the trends over time, it would be hazardous to be dogmatic about the reasons why material progress has stalled. One obvious possibility is that the 2007/8 financial crash, and the austerity measures which followed, were partly to blame. Government policy after 2010 protected the real value of state pensions but reduced that of benefits for working-age people. In line with this, the available evidence suggests that it is younger single people who are most at risk of various forms of poverty. This contrasts with the situation in 1950, as shown by Seebohm Rowntree’s survey of York, when it was the elderly who were most at risk.

In my experience of social research one rarely finds a single unique explanation which can answer our central question. I am deeply suspicious of one-liners and sound-bite explanations. We always need to entertain the possibility of an alternative explanation—something that medics term a ‘differential diagnosis’. Potential differential diagnoses are low wages in the lower-skilled labour market, the deteriorating competitive position of poorly qualified workers in the job market, and the difficulty of finding any kind of work for the increasing number of people coming out of prison or leaving care. While austerity measures may eventually be reversed, once Britain’s finances are on a sounder footing, these alternative problems may be harder to reverse. I do not think that we can be entirely optimistic that the (un)steady march of material progress will shortly be resumed—at least as far as the most vulnerable members of British society are concerned.