In 1900, worldwide, there were 6.7 rural dwellers to each urban dweller; now there is less than one rural dweller per urban dweller, and projections suggest close to three urban dwellers to two rural dwellers by 2025. This has been underpinned by the rapid growth in the world economy and in the proportion of gross world product and of the economically active population working in industry and services (since most industrial and service enterprises are in urban areas). Globally, agricultural production has managed to meet the demands from a rapid growth in the proportion of the workforce not producing food and rapid changes in food demands towards more energy- and greenhouse gas emission-intensive food. However, hundreds of millions of urban dwellers face under-nutrition today, although this is far more related to their lack of income than to a lack of capacity to produce food. There is a very large urban population worldwide with incomes so low that their health and nutritional status are at risk from any staple food price rise—as became evident with the rising hunger among urban populations after the food price rises in 2007 and the first half of 2008 (Cohen & Garrett 2009).

Much is made of the fact that in 2008, the world's urban population exceeded its rural population for the first time. Less attention has been given to two other transitions: around 1980, the economically active population employed in industry and services exceeded that employed in the primary sector (agriculture, forestry, mining and fishing); and around 1940, the economic value generated by industry and services exceeded that generated by the primary sector (Satterthwaite 2007). Today, agriculture provides the livelihoods for around one-third of the world's labour force and generates 2–3% of global value added—although this is misleading in that a significant proportion of industry and services are related to the production, processing, distribution and sale of food, and other agricultural products.
In addition, the figure might be higher if the value of food produced by rural and urban dwellers for their own consumption is taken into account. UN projections suggest that the world's urban population will grow by more than a billion people between 2010 and 2025, while the rural population will hardly grow at all (United Nations 2008). It is likely that the proportion of the global population not producing food will continue to grow, as will the number of middle- and upper-income consumers whose dietary choices are more energy- and greenhouse gas emission-intensive (and often more land-intensive); such changes in demand also bring major changes in agriculture and in the supply chain. Two key demographic changes currently under way and likely to continue in the next few decades are the decline in population growth rates and the ageing of the population. An ageing population in wealthier nations may produce more people who want to and can live in ‘rural’ areas, but this is best understood not as de-urbanization but as the urbanization of rural areas; most such people will also cluster around urban centres with advanced medical services and other services that they want and value.

The precise demographic definition of urbanization is the increasing share of a nation's population living in urban areas (and thus a declining share living in rural areas). Most urbanization is the result of net rural-to-urban migration. The level of urbanization is the share itself, and the rate of urbanization is the rate at which that share is changing. This definition makes the implications of urbanization distinct from those of urban population growth or those of the physical expansion of urban areas, both of which are often treated as synonymous with urbanization.
A nation's urban population can grow from natural increase (births minus deaths), net rural-to-urban migration and reclassification (as what was previously a rural settlement becomes classified as urban, or as an urban settlement's boundaries are expanded, bringing into its population people who were previously classified as rural). Nations with rapid economic growth and relatively low rates of natural increase, such as China over the past few decades, have most of their urban population growth from urbanization; nations with little or no economic growth and high rates of natural increase (including many sub-Saharan African nations during the 1990s) have most of their urban population growth from natural increase (see Potts 2009). Differences in rural and urban rates of natural increase (influenced by differences in fertility and mortality rates) also influence urbanization, although generally these act to reduce urbanization.

The term urbanization is also used for the expansion of urban land uses. The conventional definition for urbanization used in this paper entails a shift in settlement patterns from dispersed to more dense settlement. By way of contrast, much of the expansion of urban land use is the result of a shift from dense to more dispersed settlement. In effect, the term urbanization is being used to refer to two opposing spatial shifts in settlement patterns, likely to have opposing effects on, for example, the land available for agriculture.

Many development professionals see urbanization as a problem. Yet, no nation has prospered without urbanization and there is no prosperous nation that is not predominantly urban. Over the past 60 years, there has been a strong association between economic growth and urbanization, and most of the world's poorest nations remain among the least urbanized nations.
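The demographic accounting described above can be sketched numerically. The figures below are purely illustrative (no real census data is used), but the arithmetic shows how the level and rate of urbanization are distinct from urban population growth, and how that growth decomposes into natural increase, net migration and reclassification:

```python
# Hypothetical census figures (millions) for a nation at two census dates.
# All numbers are illustrative assumptions, not data from any real country.
urban_t0, rural_t0 = 20.0, 80.0   # populations at the first census
urban_t1, rural_t1 = 30.0, 85.0   # populations at the second census

# Level of urbanization = urban share of the total population.
level_t0 = urban_t0 / (urban_t0 + rural_t0)   # 20/100 = 0.20
level_t1 = urban_t1 / (urban_t1 + rural_t1)   # 30/115 ~ 0.26

# Rate of urbanization = change in that share over the inter-census period.
rate = level_t1 - level_t0

# Urban population growth decomposes into natural increase,
# net rural-to-urban migration, and reclassification.
natural_increase = 4.0   # urban births minus urban deaths (assumed)
reclassification = 2.0   # settlements redefined as urban (assumed)
net_migration = (urban_t1 - urban_t0) - natural_increase - reclassification

print(f"level of urbanization: {level_t0:.2f} -> {level_t1:.2f} "
      f"(rate {rate:+.3f} over the period)")
print(f"urban growth of {urban_t1 - urban_t0:.0f}m = "
      f"{natural_increase:.0f}m natural increase + "
      f"{net_migration:.0f}m net migration + "
      f"{reclassification:.0f}m reclassification")
```

Note that both populations grow here, yet urbanization still occurs because the urban share rises; conversely, a city can grow rapidly from natural increase alone without any urbanization taking place.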
Urban areas provide many potential advantages for improving living conditions through the economies of scale and proximity they provide for most forms of infrastructure and services. This can be seen in the high life expectancies evident in the best governed European, Asian and North and South American cities. Urbanization over the past two centuries has also been associated with pro-poor social reforms in which collective organization by the urban poor has had important roles (Mitlin 2008). But there are still very serious development problems in many urban areas, including high levels of urban poverty and serious problems of food security and of high infant and child mortality. Many urban areas in sub-Saharan Africa also have very high prevalence rates for HIV/AIDS; where there are large urban populations unable to get required treatments and a lack of programmes to protect those most at risk, these increase urban mortality rates significantly (van Donk 2006). But the cause of such problems is not urbanization itself but the inadequacy of the response by governments and international agencies. In most nations, the pace of economic and urban change has outstripped the pace of needed social and political reform, especially at local government level. The consequences of this are evident in most cities in Asia and Africa and many in Latin America and the Caribbean—the high proportion of the population living in very poor and overcrowded conditions in informal settlements or tenements lacking adequate provision for water, sanitation, drainage, healthcare, schools and the rule of law. This is evident even in cities where there has been very rapid economic growth. The fact that half of Mumbai's or Nairobi's population live in ‘slums and squatter settlements’ has more to do with political choices than a lack of resources.
Little more than a century ago, most ‘slums’ in Europe and North America had living conditions and infant and child mortality rates that were as bad as those in the worst-governed cities in low-income nations today. Here too there were problems of under-nutrition, lack of education and serious problems with exploitation, as well as deeply entrenched discrimination against women in almost all aspects of life. It was social and political reforms that dramatically reduced these. And social and political reforms are addressing these in many middle-income nations today—as in Thailand, Brazil and Tunisia, where housing and living conditions, basic service provision and nutritional standards have improved considerably for large sections of the low-income urban population.

The world's urban population today is around 3.2 billion people—more than the world's total population in 1960. Many aspects of urban change in recent decades are unprecedented, including the world's level of urbanization and the size of its urban population, the number of countries becoming more urbanized and the size and number of very large cities. But these urban statistics tell us nothing about the large economic, social, political and demographic changes that underpinned them. These include the multiplication in the size of the world's economy, the shift in economic activities and employment structures from agriculture to industry and services (and within services to information production and exchange), and the virtual disappearance of colonial empires. Aggregate urban statistics may suggest rapid urban change but many of the world's largest cities had more people moving out than in during their last inter-census period. The increasing number of ‘mega cities’ with 10 million or more inhabitants may seem to be a cause for concern but there are relatively few of them (17 by 2000), they concentrate less than 5 per cent of the world's population and most are in the world's largest economies.
Although rapid urbanization is seen as a problem, generally, the more urbanized a nation, the higher the average life expectancy and the literacy rate and the stronger the democracy, especially at local level. Of course, beyond all these quantitative measures, cities are also centres of culture, of heritage, of social, cultural and political innovation. Some of the world's fastest-growing cities over the past 50 years also have among the best standards of living within their nation. It is also important not to overstate the speed of urban change. Rates of urbanization and of urban population growth slowed in most sub-regions of the world during the 1990s. Mexico City had 18 million people in 2000, not the 31 million predicted 25 years previously. Kolkata (formerly Calcutta), Sao Paulo, Rio de Janeiro, Seoul, Chennai (formerly Madras) and Cairo are among the many other large cities that, by 2000, had several million fewer inhabitants than had been predicted. There are also significant changes in the distribution of the world's urban population between regions (table 1). In 1950, Europe and Northern America had more than half the world's urban population; by 2000, they had little more than a quarter. Asia now has half the world's urban population.
Some caution is needed when comparing urban trends between nations because of deficiencies in the statistical base. Accurate statistics for nations' urban population and urbanization levels depend on accurate censuses. But in some nations, there has been no census for the past 15–20 years. It is also difficult to compare the current population of most of the world's largest cities because each city has at least three different figures for its population, depending on whether it is the city (or built-up area), the metropolitan area or a wider planning (or administrative) region that is being considered—or whether the city population includes the inhabitants of settlements with a high proportion of daily commuters. Also, there are significant differences between nations in how urban centres are defined, which limits the validity of international comparisons for urbanization levels. China's level of urbanization in 1999 could have been 24%, 31% or 73%, depending on which of three official definitions of urban populations was used (Zhang 2004). If India adopted the urban definition used in the UK or Sweden, its urbanization level would increase very considerably as many of its ‘large villages’ would be reclassified as urban centres.

Two aspects of the rapid growth in the world's urban population are the increase in the number of large cities and the historically unprecedented size of the largest cities. In 1800, there were two ‘million-cities’ (cities with one million or more inhabitants)—London and Beijing (then called Peking); by 2000, there were 378. In 2000, the average size of the world's 100 largest cities was 6.3 million inhabitants, compared with 2 million inhabitants in 1950 and 0.7 million in 1900.

De-urbanization is a decrease in the proportion of the population living in urban areas.
During the 1970s, in various high-income nations, there appeared to be a reversal of long-established urbanization trends nationally or within some regions as there was net migration from large to small urban centres or from urban to rural areas. This was labelled counter-urbanization, although much of it is more accurately described as demetropolitanization because it consisted of population shifts from large metropolitan centres to smaller urban centres or from central cities to suburbs or commuter communities. Some of the ‘smaller cities’ that attracted large migration flows grew sufficiently to become metropolitan centres—so this was a shift from old to new metropolitan centres. This was not underpinned by a shift in the workforce back to agriculture but by the growth of the labour force in industry and services that could live in small urban centres or rural areas and commute to work. In addition, with advanced transport and communication facilities, a proportion of new investment in industry and services could locate in rural areas. Telecommuting allows work to be done and incomes earned in rural areas, even if the work is for a city-based enterprise. This is best understood not as de-urbanization but as the urbanization of rural areas. Here, most rural households enjoy levels of provision for infrastructure and services that have been historically associated with urban centres; many are also within (say) 1 h of central-city theatres, cinemas, museums, art galleries, restaurants and shops. This phenomenon is also seen in the fact that many high-income nations have only 1–2% of their labour force in agriculture when 15–30% of their population live in rural areas.

Historically, there are examples of de-urbanization where the proportion of the economically active population working in agriculture increased, especially as nations faced economic or political crises or during wars (Bairoch 1988; Clark 2009).
In the past 50 years, various nations de-urbanized for particular periods driven by central planning and force (for instance in Cambodia, Vietnam and parts of China). In the past two decades, some regions in sub-Saharan Africa de-urbanized or had no urbanization, largely in response to economic crisis and to structural adjustment (Potts 2009). Others that have had wars or long-running conflicts may have de-urbanized, unless those fleeing these conflicts went to urban areas. The term de-urbanization has also been applied to particular cities that lose population. This is confusing in that there are always changes in any nation's urban system as some urban centres are more successful than others at attracting or retaining investment. For instance, China has urbanized rapidly over the past three decades, underpinned by rapid economic growth, and it has many rapidly growing cities but also some that have had declining populations. In the United States and Europe, many of the great nineteenth and early twentieth century ports and steel, textile and mining centres have lost economic importance and population (Pallagst et al. 2009); so too have some of the major manufacturing cities—for instance, Detroit as a centre of motor vehicle production. These are not associated with a shift in the economically active population to agriculture but with locational shifts in where new investments are going.

We need to understand what has underpinned urbanization in the past and how this is changing and might change in the future to be able to consider its implications for agriculture and food production. The history of urbanization and of the cities and towns it encompasses is a history of political strength and economic success. The spatial distribution of towns and cities is in effect the geography of the non-agricultural economy since it is where industrial and service enterprises have chosen to locate.
It is also a map of where people working outside agriculture, forestry or fishing make a living. Changes in this spatial distribution reflect changes not only in the economy but also in how this is organized—for instance, how this is influenced by the growth of multinational corporations and how they are structured, by shifts in goods production to greater use of out-sourcing and by economic changes underpinned by advanced telecommunications including the Internet. The rural-to-urban migration flows that cause urbanization are mostly a response to these economic changes. Some migration flows might be considered exceptions—for instance, growth in places where retired people choose to live, or in tourist resorts; but this also reflects economic change because of the growth in enterprises there to meet the demand for goods and services generated by the retired people and/or tourists. This close association between urbanization and political strength and economic success is unlikely to change in the future, although the countries and regions that enjoy the greatest success will change. Economic success for most cities may depend more today on success in global markets than 50 years ago, although intense inter-city competition for markets beyond national boundaries has been an influence for most cities for many centuries (Bairoch 1988; Clark 2009). Urbanization has also been underpinned by the expansion of the state, although the scale of this depends on economic success. In addition, competent, accountable urban governments have considerable importance for economic success. Today, many of the world's largest cities are large not because they are political capitals but because of their economic success.

How urbanization is understood has large implications for how its likely future influence on food and farming is perceived.
If urbanization is regarded as a process taking place in almost all nations and as a driver of change, then it can be assumed that extrapolating past trends provides us with a likely picture of the world's future urban population. This is backed up by projections for all nations for their urban populations and their levels of urbanization up to 2025 and beyond (United Nations 2008). These suggest that almost all nations will continue to urbanize except for those already classified as 100 per cent urban. Within this assumption of almost universal increases in urbanization, often there are references to urbanization being out of control because it seems to take place regardless of economic conditions. There is also uncertainty as to how to fit examples of de-urbanization into this broad picture of a world with almost all nations becoming increasingly urbanized. But if urbanization is understood as a process that is deeply influenced by the scale and nature of economic, social and political change (see for instance Hasan 2006), then projections up to 2025 and beyond become more uncertain. How does one predict the absolute and relative economic performance of each nation up to 2025? Within this understanding of urbanization, there is an interest in the links between urbanization and economic change (which prove to be robust and multi-faceted). Since the scale and nature of economic change varies so much between nations and within nations, there is an interest here in how differences in economic change are associated with (and often the main cause of) differences in the scale and nature of urban change (including urbanization). De-urbanization is more easily incorporated into this, as a spatial manifestation of economic decline or collapse.
This paper suggests that there is a substantial but often overlooked evidence base for this second interpretation of urbanization—and that this also provides a more reliable basis for considering the current and future influence of urbanization on food and farming.

In low- and middle-income nations, urbanization is overwhelmingly the result of people moving in response to better economic opportunities in urban areas, or to the lack of prospects in their home farms or villages. The scale and direction of people's movements accord well with changes in the spatial location of economic opportunities. Although it is often assumed that most migration is from rural to urban areas, in many nations rural-to-rural, urban-to-rural and urban-to-urban migration flows are also important. That much of the migration over the past 60 years has been from rural to urban areas is hardly surprising in that most of the growth in economic activities over this period has been in urban centres. Today, around 97 per cent of the world's gross domestic product (GDP) is generated by industry and services, and around 65 per cent of the world's economically active population works in industry and services—and a very high proportion of all industry and services are in urban areas. The graphs in figure 1 show how changes in urbanization levels reflect changes in the proportion of GDP generated by industry and services and the proportion of the workforce in industry and services.

Figure 1. Changes in the proportion of GDP from industry and services, of the labour force working in industry and services and of the population in urban areas, 1950–2005. Diamonds, % GDP from industry and services; squares, % labour force in industry and services; dashed lines, level of urbanization. Source: Satterthwaite (2007).
Many cities owe their prosperity to their roles within the increasingly internationalized system of production and distribution. International, national and local tourism have also proved important underpinnings in many cities and smaller urban centres. There is an economic logic underlying the distribution of the world's largest cities. For instance, the world's five largest economies in 2000 had 44 per cent of the world's ‘million cities’ and eight of the world's 17 megacities; most of the other large cities and megacities were within the next 15 largest economies. There is also an obvious association between most of the world's largest cities and globalization. Growing cross-border flows of raw materials, goods, information, income and capital, much of it managed by transnational corporations, have underpinned a network of ‘global cities’ that are the key sites for the management and servicing of the global economy (Sassen 2006). Many of the world's fastest growing cities are also the cities that have had most success in attracting international investment. Large international migration flows, and consequent remittance flows, are also associated with globalization and have profound impacts on many cities—in areas of both origin and destination. Around 175 million people (more than 2% of the world's population) live in a country in which they were not born (Boswell & Crisp 2004). However, the association between globalization and large cities is moderated by two factors. The first is that advanced telecommunications systems and corporate structures allow a separation of the production process from those who manage and finance it. The second factor, linked to the first, is the more decentralized pattern of urban development that is possible within regions with well-developed transport and communications infrastructure. 
Many of the most successful regions have urban forms that are less dominated by a large central city, and have new enterprises developing in a network of smaller cities and greenfield sites (Castells & Hall 1994). This is usually underpinned by a growing capacity among cities outside the large metropolitan areas to attract a significant proportion of new investment, which in turn has been supported by decentralization where local governments' capacities and accountability to citizens were increased. Urbanization brings major changes in demand for agricultural products both from increases in urban populations and from changes in their diets and demands. This has brought and continues to bring major changes in how demands are met and in the farmers, companies, corporations, and local and national economies who benefit (and who lose out). It can also bring major challenges for urban and rural food security. But it is misleading to consider this in general terms for ‘developing countries’ as if current or likely future changes in (say) Argentina and Chile have anything in common with (say) Mauritania and Burkina Faso. To predict changes for each nation is difficult, in large part because of uncertainties as to how much and where urban populations will grow in the future. It is usually assumed that most ‘developing nations’ will continue urbanizing but many low-income nations currently lack any area of comparative advantage within the global economy and so also the basis for the prosperity needed to underpin urbanization (see Satterthwaite 2007; Potts 2009). 
It is often assumed that there are particularly serious problems with serving growing numbers of ‘megacities’ (cities of over 10 million inhabitants) but as noted already, there are relatively few of them, and in many nations a more decentralized pattern of urban growth was evident in the last round of censuses taken in 2000; it will be interesting to see if this is a trend that has been sustained when data from the current round of censuses become available. It is worth considering likely changes at two different ends of the spectrum in terms of nations' economic success. It would be expected that in nations with successful economies and rapid urbanization, there will be rising demands for meat, dairy products, vegetable oils and ‘luxury’ foods, and this implies more energy-intensive production and, for many nations, more imports (de Haen et al. 2003). Urbanization is also associated with dietary shifts towards more processed and pre-prepared foods, in part in response to long working hours and, for a proportion of the urban population, with reduced physical activity (Popkin 2001; de Haen et al. 2003). Of course, food demand will also be influenced by how this economic growth changes the distribution of income. How this will influence agriculture around or close to growing urban centres will also vary; it would be expected that a growing role for supermarkets (and transnational corporations) in food sales would bring changes in all aspects of the food chain. This would include favouring larger (and often non-local) agricultural producers and major changes in the distribution and marketing of food (Kennedy et al. 2004). This also means a shift in employment within the food system, with fewer people working in agriculture and more working in transport, wholesaling, retailing, food processing and vending (Cohen & Garrett 2009).
The high proportion of urban households with electricity in middle-income and some low-income nations also means far more households with refrigeration and this supports shifts in food demand (Reardon et al. 2003). Many low- and middle-income nations are likely to have a growing share of urban food demand met by imported food and by the kinds of shifts in agriculture evident in high-income nations over the past few decades towards more capital- and energy-intensive and less labour-intensive farming. But growing demand from high-income urban dwellers or from tourists may also support the growth of a range of high-value food crops that provide more scope for many local farms (and smaller farmers) and may have valuable multiplier links within the local economy. This includes more scope for urban and peri-urban agriculture (see §4c). It is difficult to predict how this will change—for instance, if there is a sustained increase in the price of oil and natural gas, this might provide local agricultural producers with some advantages in meeting local demands as their production and transport to market is less carbon-intensive, or disadvantage local producers that were serving foreign markets (for instance, high-value crops that are exported by air). At the other end of the spectrum, there is a very large urban population in nations or sub-national regions lacking prosperous economies where demand for agricultural products is likely to change much less. There are many nations where most of the urban population still has no electricity (Legros et al. 2009) and where the profits to be made in food retailing are too small to attract large corporations. In Africa, multinational chains have yet to reach poor urban neighbourhoods and have little presence in poorer countries (Weatherspoon & Reardon 2003). 
In addition, a very large proportion of urban dwellers in both prosperous and unprosperous low- and middle-income nations have incomes so low that they struggle to meet their basic nutritional needs. Given the concentration of economic opportunity in urban areas, it might be expected that urban populations would have much better living standards, levels of nutrition and service provision than rural populations. The concentration of powerful economic interests and wealthier groups in particular urban areas would be expected to produce a bias that favoured them. But it would be misleading to term this urban bias if it favours only a proportion of the urban population. The scale and depth of urban poverty in low- and middle-income nations hardly suggests that everyone benefits from an urban bias. It is common for between one-third and one-half of the population in cities to live in illegal settlements lacking adequate provision for water, sanitation, healthcare and schools. Their homes and livelihoods are at risk from eviction—and tens of millions of urban dwellers are evicted from their homes each year, mostly with no compensation or very inadequate compensation (du Plessis 2005). The large and growing scale of urban poverty in China is a reminder of how very rapid economic growth sustained over 25 years does not automatically translate into less urban poverty (Solinger 2006). The same is true for some of India's most prosperous cities. In addition, the scale and depth of urban poverty is usually underestimated by official statistics because of inadequate allowance made in setting poverty lines for the costs that low-income city dwellers face for non-food necessities, such as rent, water, access to toilets, healthcare, fuel and keeping children at school (Satterthwaite 2004).
Urban expansion inevitably covers some agricultural land while changes in land values and land markets around cities often result in land left vacant as the owners anticipate the gains they will make from selling it or using it for non-agricultural uses. In most urban areas in low- and middle-income nations, the absence of any land-use plan or strategic planning framework to guide land-use changes means that urban areas expand haphazardly. This expansion is determined by where different households, enterprises and public sector activities locate and build, legally or illegally. In most instances, there is little effective control over land-use conversions from agriculture to non-agricultural uses. There may be regulations that are meant to limit this but these are often avoided by politicians and real estate interests (Hardoy et al. 2001). This unregulated physical expansion brings many serious consequences. These include the segregation of low-income groups in illegal settlements on the worst-located and the most hazardous sites (they would not be permitted to settle on better-located and safer sites) and a patchwork of high- and low-density land uses to which it is both expensive and difficult to provide infrastructure and services. Urban centres often expand over their nation's most productive agricultural land since most urban centres grew there precisely because of highly fertile soils. Most of the world's major cities today have been important cities for several hundred years, so they became important cities before the development of motorized transport (and later refrigeration) that reduced cities' dependence on their surroundings for food and other agricultural products. Of course, for prosperous cities, the demand for agricultural commodities has long since gone far beyond what is or could be produced in their surroundings.
They draw on large and complex global supply chains and have large ecological footprints, drawing on ‘distant elsewheres’ for food, fuel and carbon sinks (Rees 1992). The dependence of many very large concentrations of urban populations on long international supply chains for food, fuels and most intermediate and final goods makes them vulnerable to disasters in locations that supply these or buy their products, and also to rising fuel prices. However, the loss of agricultural land to the spatial expansion of urban areas is often exaggerated; one recent study suggested that only West Europe among the world's regions has more than 1 per cent of its land area as urban (Schneider et al. 2009). In addition, a declining proportion of land used for agriculture around a city may be accompanied by more intensive production for land that remains in agriculture (see Bentinck 2000) or intensive urban agriculture on land not classified as agricultural. In most locations, governments could and should restrict the loss of agricultural land to urban expansion. But this can also bring serious social consequences if it pushes up land and house prices and reduces still further the proportion of households that can afford a legal housing plot with infrastructure. Approximately 25 per cent of the world's terrestrial surface is occupied by cultivated land (Cassman et al. 2005). Urban growth is more likely to reduce arable land availability if it takes place in this zone. But an analysis of the percentage of urban and rural population in the cultivated zones in each region found no evidence of urban populations concentrated in cultivated zones (Balk et al. 2008). Of course, the expansion of urban land uses is not just the result of urbanization but also (in most cities) of natural increase and of declining urban densities (Angel et al. 2005). 
Since urbanization entails fewer rural people as well as more urban people, it may reduce rural building and so, in part, counteract the effects of urbanization expanding over cultivated land. Dietary changes can increase pressures on agricultural systems, with increasing meat consumption the most important example of this. Diets differ between rural and urban areas, and meat consumption per capita is higher in urban areas. But a review of the relationship between urbanization and food prices suggests that this may be the result of higher urban incomes and not urbanization or urban living, as higher income rural dwellers have similar levels of increased meat consumption or of luxury goods to higher income urban dwellers (Stage et al. 2010). For instance, in Sri Lanka, there is considerable diversity in the expenditures on meat per household in different parts of the country, but the difference between median rural and median urban households conforms roughly to what might be expected given the differences in average income. In Vietnam, data from 1993 to 2004 show that all parts of the country experienced rapid income growth and increasing consumption of luxury foods, in a pattern that suggests that income, not urban living, is the driving force (Stage et al. 2010). Hundreds of millions of urban dwellers rely on urban agriculture for part of their food consumption or income as they sell high-value crops or non-food crops or raise livestock for sale (Smit et al. 1996; Redwood 2009). A range of studies in urban centres in East Africa during the 1990s showed 17–36% of the population growing crops and/or keeping livestock (Lee-Smith 2010). These studies also showed the diversity among urban farmers—for instance, in Dar es Salaam, they included professionals, teachers, government officials, urban planners, students, casual labourers, the unemployed and part-time workers (Sawio 1994). 
Urban and peri-urban agriculture has a significant role in food and nutrition security in most low-income nations, although in many cities it is more difficult for the urban poor to get access to the land needed for agriculture (Smit et al. 1996; Lee-Smith 2010). Although urbanization is generally associated with economic growth, this does not mean that the number of urban dwellers facing hunger has declined in all nations. A study of 10 nations in sub-Saharan Africa showed that the proportion of the urban population with energy deficiencies was above 40 per cent in all but one nation and above 60 per cent in three (Ruel & Garrett 2004). In 12 of 18 low-income countries, food-energy deficiencies in urban areas were the same or higher than rural areas, even though urban areas have higher average incomes (Ahmed et al. 2007). The rapid increases in food prices during 2007 and early 2008 showed the vulnerability of the urban poor to price rises. Although there has been some decline in prices since mid-2008, most analysts believe that prices will not return to the levels of the early 2000s because of continued strong demand for energy and for cereals for food, feed and fuel, as well as to structural land and water constraints and likely food production impacts of climate change (Cohen & Garrett 2009). Urban food security depends on households being able to afford food within other needs that have to be purchased (Cohen & Garrett 2009)—although as noted above, the contribution of urban agriculture is important for many households. Various studies have shown the extent of food insecurity among low-income households in urban areas and the many coping measures taken, including those that in the longer term compromise health and nutritional status (see Maxwell et al. 1998; Tolossa 2010). 
However, many Latin American and some Asian and African nations that now have predominantly urbanized populations have managed to sustain long-term trends of falling infant and child mortality rates and increasing average life expectancies, and this implies improving nutrition levels too. In some nations, the provision of a regular small cash sum for low-income households (e.g. the bolsa familia in Brazil) or the provision of certain staple foods at subsidized prices has reduced hunger and malnutrition—although with considerable differences in effectiveness and in the possibilities for those who need this entitlement to actually obtain it. Perhaps surprisingly, the possible negative consequences of urbanization for agriculture are often stressed more than its positive consequences. Since urbanization is generally the result of a growth in non-food producers and their average incomes, it often provides growing demands for agricultural products and for higher value products that bring benefits to farmers. Any discussion of the ways in which urbanization may affect food demand and supply needs to take into account the complexity of the linkages between rural and urban people and enterprises, and to recognize the capacity of food producers to adapt to changes in urban demand (Tiffen 2003; Hoang et al. 2005). A high proportion of households have rural and urban components to their incomes and livelihoods—so they are better understood as multilocal, as individual members engage in different activities in different locations while sharing resources and assets. Incomes from non-agricultural activities and remittances have proved important for reducing rural poverty in many places (see Deshingkar 2006). Earnings from non-farm activities are estimated to account for 30–50% of rural household income in Africa, about 60 per cent in Asia (Ellis 1998) and around 40 per cent in Latin America (Reardon et al. 2001). 
Remittances from urban household members and earnings from non-farm activities also have a major role in financing innovation and intensification of farming in Africa (Tiffen 2003) and in Asia (Hoang et al. 2005, 2008). This is best documented in rural areas with relatively good access to urban markets and infrastructure. In many cases, local traders also contribute to the creation of non-farm jobs through the local processing of agricultural produce, and this helps diversify the economic base of large villages and helps in their gradual transformation into small urban centres (Hoang et al. 2008). Around half the world's urban population live in urban centres with less than half a million inhabitants, and this includes a considerable proportion in urban centres with less than 20 000 inhabitants. Small urban centres in agricultural areas can have especially important roles in the livelihoods of the poorest rural groups by providing access to non-farm activities that require limited skills and capital (Hoang et al. 2008). They also have an important role in the provision of basic services such as health and education to their own population and that of the surrounding rural area. Thus, migration and mobility should be seen as a form of income diversification that can support farming innovation and intensification. Small family farms, provided they are well connected to markets, can often compete with large commercial farms, especially in the production of higher-end food, such as fresh fruits and vegetables. The multiple rural–urban linkages noted above mean that climate change impacts on agriculture will affect urban areas (for instance, influencing food availability and price), and climate change impacts on urban areas will affect agriculture (for instance, disruptions in urban demand for agricultural produce and disruptions to the goods and services provided by urban enterprises to agriculture and to rural households). 
Many rural households would also suffer if remittances from family members working in urban areas were disrupted by climate change-related impacts. Hundreds of millions of urban dwellers are at risk from the direct and indirect impacts of current and likely future climate change—for instance, from more severe or frequent storms, floods and heatwaves, constraints on fresh water and food supplies, and higher risks from a range of water-borne, food-borne and vector-borne diseases (Wilbanks et al. 2007). The highest risks in urban areas are concentrated within low-income populations in low- and middle-income nations. In part, this is because most such nations face impacts that are more serious than those faced by high-income nations. But what is more significant for urban risks is very large deficits in the infrastructure and services needed to protect urban inhabitants from climate change impacts. This is underpinned by a lack of capacity in most urban governments—and in many, an unwillingness to provide infrastructure and services in informal settlements, even when these house 30–60% of a city's population (as they often do). Thus, the climate change-related risks facing the population of any urban centre are a function not only of what climate change brings but also of the quality of housing and the quality and extent of provision for infrastructure and services (see Revi (2008) for a discussion of this in relation to India's urban population). Urban populations in wealthy nations take for granted that a web of institutions, infrastructure, services and regulations protects them from extreme weather, and will keep adapting to continue protecting them. This adaptive capacity is underpinned by buildings conforming to building, health and safety regulations. 
In addition, it is assumed that city planning, land-use regulation, and building and infrastructure standards will be adjusted to any new or heightened risk that climate change may bring, encouraged and supported by changes in private-sector investments (over time shifting from high-risk areas) and changes in insurance premiums and coverage. At least for the next few decades, this ‘adaptive capacity’ can deal with likely climate change impacts in high-income countries (Wilbanks et al. 2007). But most of the urban population in low- and middle-income nations face (often very large) deficiencies in all the institutions, infrastructure, services and regulations noted above (Bicknell et al. 2009). This makes them very vulnerable as risks are much higher, and a large and growing urban population are exposed to such risks. This helps explain why most deaths from extreme weather disasters are in low- and middle-income nations, and the rapid growth in the number of deaths and serious injuries from such disasters in their urban areas. The impacts fall most heavily on low-income groups and within such groups on women and children (Enarson & Meyreles 2004; Bartlett 2008). Obviously, disasters disrupt food demand and food supplies—and within urban areas, it is generally low-income groups that suffer most as their income-earning activities are disrupted and what little asset bases they have are rapidly used—or destroyed by the disaster. A high proportion of low-income urban households—especially those reliant on wage labour—are particularly at risk from climate change-induced food shortages or staple food price rises (Ahmed et al. 2009). There is also the issue of climate change-induced migration. There are predictions that by 2050 there could be 200 million ‘environmental refugees’—people forced to move by environmental degradation caused by climate change (Myers 1997; Stern Review Team 2006). 
But land degradation or decreases in rainfall do not inevitably result in migration; where they do, most movement is short term, as in the case of extreme weather disasters, and short-distance, as in the case of drought and land degradation (Henry et al. 2004; Massey et al. 2007). For slow-onset climate change that has negative impacts on agriculture, income diversification and short-distance circular migration are likely to be common responses. Where climate change is causing environmental stress for rural livelihoods, it will be one among a number of factors determining migration duration, direction and composition. Agricultural adaptation initiatives do not necessarily reduce rural–urban migration; indeed, successful rural development often supports rapid urban development locally as it generates demand for goods and services from farmers and rural households (Beauchemin & Bocquier 2004; Henry et al. 2004; Massey et al. 2007; Hoang et al. 2008). A failure to support rural populations to adapt will mean crisis-driven population movements that make those forced to move very vulnerable. Here, migration is no longer a planned movement to an urban centre helped by knowledge and contacts there. A considerable proportion of the urban poor in some African, Latin American and Asian nations are people displaced by conflicts and disasters. Most crisis-driven movements may be unrelated to climate events, but they show how such crises destroy livelihoods and create vulnerable populations. A high proportion of these people move to urban areas, leaving behind homes, social networks and assets, and it can take them a long time to insert themselves into local communities (which may resent them as they compete for income sources). 
Ironically, it will be a failure of governments and international agencies to support the poorer and more vulnerable households to adapt (including the adaptation achieved by migration and mobility) and the failure of high-income nations to agree to needed reductions in greenhouse gas emissions that will produce the crisis-driven migrations that those in high-income nations currently fear. Urbanization is often considered as having negative impacts on agriculture—for instance, from the loss of agricultural land to urban expansion and an urban bias in public funding for infrastructure, services and subsidies. But the scale of urban poverty suggests little evidence of urban bias for much of the urban population—and clearly, urban demand for agricultural products has great importance for rural incomes. Agricultural producers and rural consumers also rely on urban-based enterprises for a wide range of goods and services—including access to markets. So the key issue is whether the growing and changing demands for food (and other agricultural products) that an increasingly urbanized population and economy brings can help underpin agricultural and rural prosperity and sustainability within a global decline in agricultural land area per person and water constraints. To this is now added the need to adapt to the impacts of climate change that have the potential to disrupt agriculture and urban demand, and the urban enterprises that provide producer and consumer services to rural populations. The world's level of urbanization is likely to continue increasing, as long as the long-term trend in most low- and middle-income nations is for economic growth. Among these nations, those with the most economic success will generally urbanize most. Higher income nations may no longer urbanize, but this is largely the result of non-agricultural workers being able to live in rural areas or industrial and service enterprises located in rural areas. 
Low- and middle-income nations with no economic success will have little urbanization. In extreme crisis, they may de-urbanize through an increase in the proportion of the population working in agriculture, forestry and fishing. But this is only likely in nations where parts of the urban poor still have the links in rural areas that allow their reincorporation into rural livelihoods. With regard to climate change, it is difficult to predict likely impacts because these depend so much on whether global agreements rapidly reduce the drivers of greenhouse gas emissions. Climate change mitigation presents many challenges to agriculture to reduce greenhouse gas emissions and to better-off urban dwellers to shift to less carbon-intensive diets and lifestyles. A failure to reduce greenhouse gas emissions is likely to mean increasing numbers of disasters with very serious impacts on rural and urban populations. Many of the largest cities in low-income nations are particularly at risk and at present lack the capacity to adapt.

Footnotes
1. Unless otherwise stated, the statistics for global, regional, national and city populations are drawn or derived from statistics in United Nations (2008).
2. For most nations, this means 1990–1991 to 2000–2001; it will be at least a couple of years before there is enough census data available to show trends for 2000–2010.
3. There may be some exceptions to this for certain high-income nations, drawn from alternative official information sources.

One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050'. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. 
© 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper focuses on the impact of income distribution on food demand, an issue that has been significantly overlooked by a large proportion of studies projecting future food demand. Specifically, the objective of the paper is to survey the theoretical literature in this area, and at the same time identify the main existing gaps in order to build more realistic food demand scenarios. To our knowledge, this is the first survey to focus on this area. The ability to feed the world population in the near future depends critically on the capacity of food supply to meet an increasing demand. As population rises, more people need to be fed, and as income grows more of households' disposable income is available for food consumption. While there is little doubt that the demand scenario for the next decades is one of positive growth, a crucial question is at what rate world food demand is expected to increase. This is particularly critical given the recent additional pressures on the food system arising from the increasing link between food and energy markets via biofuels. One particular element that is essential for projecting food demand growth is income distribution. One of the most robust and strongest relationships in economics is ‘Engel's law', named after Ernst Engel, a German economist who in the nineteenth century studied food consumption of the Belgian working class. This law establishes that as income increases, households' demand for food increases less than proportionally. Hence, as households become richer, their share of expenditure on food decreases until reaching a ‘saturation' point, after which food demand is hardly responsive to any income increases.1 An important implication of Engel's law is, therefore, that income distribution changes are relevant when it comes to predicting future food demand. 
Concretely, the rate of growth of food demand over the next decades should depend on the way in which income growth is distributed among households and countries. Faster income growth among poorer countries and households should result in faster food demand growth in the short and medium term, since poorer households and countries tend to allocate larger shares of their budgets to food consumption. However, if this more egalitarian growth scenario persists, we should expect faster reductions in food demand growth as poor countries converge more rapidly to the threshold of food ‘saturation'. On the other hand, more unequal growth scenarios, with slow income growth in less developed countries (LDCs), would imply slower demand growth in the short and medium term, but growth sustained in the long term, probably exacerbated by larger population growth in poorer countries. The paper is structured as follows. Section 2 explores the theoretical channels through which income growth impacts food demand. Section 3 reviews the empirical evidence regarding estimation and simulation of Engel curves, and enumerates the desirable properties of demand systems in order to account for income distribution dynamics. Section 4 reviews and assesses models to forecast future food demand and the main projection assumptions used. Section 5 highlights, with a simple illustrative example, the potential risks associated with not including income distribution dynamics in demand forecasts. The final section concludes. For the purpose of this review, food demand is household consumption of food that is either purchased in the market or home-produced. Two facts should be kept in mind before starting the analysis. First, no household, however poor, consumes only food. There are a number of essential items, like shelter and medications, that even the poorest households buy. 
Second, food purchases are rarely dictated by nutritional requirements, and people do not normally buy the food that is recommended by health providers as healthier or more nutritious. Food demand is dominated by tastes, which vary across countries and over time. Food consumption can be disaggregated into categories in order to conduct more detailed analysis of changes in consumption patterns. The standard procedure for the aggregation of food items consists of grouping together items that are close substitutes in consumption. For example, tea and coffee are often grouped in the ‘stimulants' category. Similarly, pork, lamb, beef and chicken are grouped in a single ‘meat' category. Table 1 below shows the expenditure shares of seven food categories for a sample of rural Indian households, and is an example of such disaggregation. Notice how the small share of food expenditure on meat reflects both the tastes and the poverty of this sample of households.
As income increases, households' demand for goods, including food, increases. It has been documented by innumerable empirical studies that food demand increases less than proportionally with income. This relationship between food consumption and income is described by Engel curves (Engel's law). The nonlinearity of the food Engel curve has a strong implication for the relationship between income and food demand: food demand is a function not only of income, but also of the income distribution within the economy. To illustrate this, consider the following linear food demand of a consumer, where q is food demand, a can be interpreted as a minimum consumption level of food, y is income and the observations are indexed across households or individuals (i):

q_i = a_i + b y_i.    (2.1)

Adding up all the minimum individual consumption levels (a_i) and incomes (y_i) over all households, we obtain total food consumption (Q) as a linear function of total income in the economy (Y):

Q = Σ_i a_i + b Σ_i y_i = A + bY.    (2.2)

The problem with this formulation is that it is not valid if food Engel curves are nonlinear. Suppose, for example, that we transfer income from a rich household to a poorer one. According to equation (2.2), food demand in the economy does not change because total income has not changed. However, we know this is not true. Because Engel curves are nonlinear, the poor tend to spend a larger share of any additional income on food than the rich, and an income transfer from the rich to the poor, while keeping total income constant, will increase food demand. Some graphical analysis may help to further illustrate this point. Suppose the economy is composed of only two individuals: one (B) is rich while the other (A) is poor. Their average income is Y, while their average food consumption, given the shape of the Engel curve, is F1 (figure 1).

[Figure 1. Average food demand in a two-individual economy.]
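The effect of nonlinearity can be checked numerically. The sketch below assumes a hypothetical concave Engel curve (the logarithmic form and the income figures are purely illustrative, not drawn from the paper) and shows that a rich-to-poor transfer raises aggregate food demand even though total income is unchanged:

```python
import math

def engel(y):
    """Hypothetical concave Engel curve: food demand rises
    less than proportionally with income y (illustrative form)."""
    return 10 * math.log(1 + y)

# Two-individual economy; the income figures are illustrative.
y_poor, y_rich = 100, 900
before = engel(y_poor) + engel(y_rich)

# Transfer 200 from rich to poor: total income is unchanged...
after = engel(y_poor + 200) + engel(y_rich - 200)

# ...but aggregate food demand rises, because the curve is concave.
print(f"before: {before:.2f}, after: {after:.2f}")
assert after > before
```

The same comparison run with any other concave functional form gives the same direction of change; only the magnitude depends on the curvature.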
Consider now a more equitable income distribution between individuals A and B. Some income is taken from the rich (B) and given to the poor (A). While average income Y remains unchanged, average food consumption has now increased to F2 (figure 2). After the transfer, food consumption of the rich has only marginally decreased, while food consumption of the poor has increased dramatically.

[Figure 2. Effect of an income transfer on food demand in a two-individual economy.]
This brief discussion helps to emphasize two main points. First, the aggregate demand for food is a function not only of income but also of income distribution. Using the variance of the income distribution (σ_y) as an index of inequality, equation (2.2) can be re-written in the following way:

Q = F(Y, σ_y).    (2.3)

Second, an increase in inequality has the effect of reducing aggregate food demand, while a decrease in inequality has the effect of increasing aggregate food demand. The difficulty in aggregating individual households' consumption behaviours described above is known in demand theory as the aggregation problem: the transition from the micro- to the macroeconomics of consumer behaviour (Deaton & Muellbauer 1980). The literature on aggregation dates back at least to Antonelli (1886) and has shown that exact aggregation of individual consumer demands is possible only if Engel curves are linear (Blundell & Stoker 2005), which is very unlikely to be the case for most products and certainly not for food. In addition, changes in the demographic structure of the household over time also have implications for aggregation and for changes in the shape of the Engel curve over time.2 As income increases, not only does the quantity of food increase less than proportionally, but the composition of the food basket also changes. In particular, it has been observed that the consumption of starchy staple foods declines with income. This fact has been labelled ‘Bennet's law', after Bennet, who was the first to document the decrease in the amount of calories that people obtain from starchy staples versus other food as income increases (Timmer et al. 1983). Food is an aggregate of a large number of consumption goods. For example, the food expenditure questionnaire used by the National Sample Survey Organization (NSSO) of the Government of India (2006) collects information on nearly 150 food items. 
Clearly, while in the aggregate the consumption of food increases less than proportionally with income, the consumption of some specific items may increase more than proportionally. These items, whose consumption increases more than proportionally with income, are often called ‘luxuries' in the economic literature, in contrast to ‘necessities'. Bennet's law is portrayed in figure 3: while the consumption of starchy staples increases less than proportionally with food expenditure, the aggregate consumption of other food items must increase more than proportionally.

[Figure 3. An illustration of Bennet's law.]
The effect of a change on the distribution of income on consumption of a particular good depends on the shape of the Engel curve. In addition to the concave down form already analysed, two other forms are possible: linear and concave up. A linear Engel curve for a good is distribution-neutral. Changes in the distribution of income, while preserving the existing level of income, have no effect on the consumption of that good. In effect, the linearity of the Engel curve was found to be the condition that allows the exact aggregation of single Engel curves across consumers. If, on the other hand, the Engel curve is concave up, an increase in inequality increases consumption of that good. For example, Engel curves for health expenditure are often exponential, meaning that an increase in income inequality raises aggregate expenditure on health. Few food items display concave up Engel curves. For example, food categories such as ‘beverages and tobacco’, ‘fish’ and ‘food outside home’ may have expenditure elasticities above one (e.g. Seale et al. 2003). More typically, Engel curves for food categories are linear or bending downwards. In the latter case, consumption increases proportionally with income for the poor and the middle classes, but then falls again for the rich. Consumption of meat and fats in particular tend to have an inverted U shape of this type. In this latter case, it is difficult to predict the consumption effect of changes in income distribution. In order for this to be calculated, the shape of the Engel curve, and the form of the change in the distribution of income, need to be known by income quintile. The aggregation problem described above refers mainly to the capacity to represent all households' demand at a point in time. However, when thinking about the impact of income distribution on food demand in the long term, one needs to consider how households and countries move along the Engel curve. 
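The three cases can be illustrated with a small numerical sketch. The functional forms and income figures below are hypothetical, chosen only to exhibit the three curvatures (concave down, linear, concave up); the sign of the effect of a mean-preserving spread is what matters:

```python
import math

def spread_effect(f, incomes, delta):
    """Change in total consumption from a mean-preserving spread:
    take `delta` from the poorer individual, give it to the richer one."""
    lo, hi = sorted(incomes)
    return (f(lo - delta) + f(hi + delta)) - (f(lo) + f(hi))

concave = lambda y: 10 * math.log(1 + y)   # typical food Engel curve
linear  = lambda y: 0.1 * y                # distribution-neutral good
convex  = lambda y: 0.0001 * y ** 2        # e.g. health-type expenditure

incomes, delta = (200, 800), 100
print(spread_effect(concave, incomes, delta))  # negative: more inequality cuts demand
print(spread_effect(linear,  incomes, delta))  # ~zero: aggregation is exact
print(spread_effect(convex,  incomes, delta))  # positive: more inequality raises it
```

An inverted-U Engel curve, by contrast, gives no uniform sign: the outcome depends on where on the curve the affected households sit, which is why the shape must be known by income quintile.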
Historically, diets in industrialized countries have shifted from traditional grains towards meat, dairy products and protein-based foods, and changes towards more balanced and healthy diets have also reduced income elasticities of food demand to near-‘saturation' levels. Similarly, reductions in food budget shares have been observed in other regions and countries. When dynamic effects are considered, the impact on food demand may result in significantly different outcomes in the short, medium and long term. Paradoxically, in the short and medium term, a scenario of pro-poor growth that reduces inequality through faster increases in per capita income in developing countries translates into higher demand growth, since income grows more for those households with higher food demand elasticities. However, in the long term, if equalizing growth is persistent, developing countries will hit the ‘saturation' point faster, implying demand that continues to grow but at decreasing rates. This is particularly important for very populous countries such as China, where success in lifting people out of poverty and expanding the middle class will translate into a deceleration in food demand growth. A major concern when estimating and modelling demand systems is the need for some degree of Engel flexibility. Demand systems have often been used in empirical and modelling studies on the basis of goodness of fit or tractability, rather than consistency with observed consumer behaviour. Specifically, consistent demand systems should have the following properties:
Although still not widely implemented, these more Engel-flexible demand systems, such as an implicitly additive demand system (AIDADS) and the quadratic almost ideal demand system (QUAIDS) (see the electronic supplementary material, appendix, for a more detailed description of these systems), are starting to be used in modelling exercises. The integration of rank-three demand systems in empirical and modelling work is essential if one wants to account for the observed dynamic effects of falling marginal food budget shares. While more flexible Engel demand systems correct unrealistic dynamic predictions that do not account for movements along the Engel curve, these systems do not correct the problem of aggregation described above. Most demand systems are based on a representative consumer. Therefore, if the aggregation bias is significant, this would trigger further biases when estimating future food budget shares. Several authors have analysed empirically the importance of the aggregation problem. Lewbel (1991) shows that aggregation biases are small under certain demand systems (i.e. PIGLOG) and that forecasts based on a representative consumer will have low aggregation error if income distribution dynamics remain stable. Moreover, these differences can be minimized when considering the distribution of expenditures, although exact aggregation models will fit the data better. In addition, and more importantly, demographic changes, and changes in the age composition of the household, are potentially a larger source of aggregation bias than nonlinear income effects. Denton & Mountain (2001) show that aggregation errors are small when using an almost ideal demand system (AIDS) and assuming that income is lognormally distributed. Denton & Mountain (2007) estimate the aggregation biases under different income distribution configurations using the AIDS system and its quadratic version (QUAIDS). 
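As a sketch of what ‘Engel flexibility' means in practice, the following fits a QUAIDS-style quadratic Engel curve — the budget share as a quadratic in log total expenditure — to synthetic data. The coefficients and the sample are invented for illustration; the point is that a significant quadratic term is exactly what a rank-two, PIGLOG-type system such as AIDS cannot capture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic household data: food budget share as a quadratic in log total
# expenditure (QUAIDS-style rank-three Engel curve; coefficients invented).
log_x = rng.uniform(4.0, 9.0, size=2_000)
alpha, beta1, beta2 = 1.2, -0.10, -0.004
share = alpha + beta1 * log_x + beta2 * log_x**2
share += rng.normal(0.0, 0.01, size=log_x.size)  # measurement noise

# Least-squares fit of the quadratic Engel curve.
X = np.column_stack([np.ones_like(log_x), log_x, log_x**2])
coef, *_ = np.linalg.lstsq(X, share, rcond=None)
print(coef)  # close to the invented coefficients

# A non-zero quadratic term is what distinguishes rank-three (QUAIDS-type)
# systems from rank-two systems whose Engel curves are linear in log x.
```

With real survey data the same regression, run category by category, gives a quick diagnostic of whether a rank-two system is adequate for that category.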
Overall, the authors find that the differences are small; only for a few configurations of large elasticities and high inequality do the biases become very significant. The evidence suggests, therefore, that aggregation biases in the short term tend to be small under most demand systems. Nevertheless, the size of these biases depends on the stability of the income distribution. As a result, biases can increase in longer term predictions, where departures from distribution-neutral income growth may be significant.

Ray (1998) points out that changes in income distribution that generate changes in demand composition may in turn generate changes in factor demands, which feed back into further changes in the distribution of incomes. For example, an increase in inequality may increase the demand for luxury goods, which in turn increases the demand for capital; if the production of the good whose demand has increased is capital-intensive, inequality will increase further. Theoretical models along these lines have been formulated by de Janvry & Sadoulet (1983), Bourguignon (1990) and Baland & Ray (1991). Figure 4 illustrates how the distribution of income has implications for the aggregate level of food expenditure and vice versa: there is a feedback process via factor demands that affects the distribution of income. Depending on the values of the parameters of the general model, the initial shock to the income distribution may dampen or accelerate the initial change in food demand.

Figure 4. Changes in income distribution and food demand.
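For reference, since the text does not spell out the specification, the QUAIDS budget-share equation (due to Banks, Blundell & Lewbel 1997) takes the form

$$ w_i = \alpha_i + \sum_j \gamma_{ij}\ln p_j + \beta_i \ln\!\left(\frac{x}{a(p)}\right) + \frac{\lambda_i}{b(p)}\left[\ln\!\left(\frac{x}{a(p)}\right)\right]^{2}, $$

where $w_i$ is the budget share of good $i$, $x$ is total expenditure, $a(p)$ is a translog price index and $b(p)=\prod_i p_i^{\beta_i}$. The quadratic term in $\ln x$ is what gives the system rank three and allows budget shares to first rise and then fall as expenditure grows.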
In order for a demand system to represent food demand in a way that reflects changes in the distribution of income, both within and across countries and over time, the following conditions must be met:
Even if one could build the 'perfect' model of food demand, embedding all of the desired properties described above, the accuracy of its forecasts depends on long-term assumptions regarding three key variables: population growth, income growth and income distribution. Before analysing existing models for predicting food demand, it is important to review the existing projections for these three variables.

The main source of population forecasts is the United Nations World population prospects. The projections indicate a slowdown of population growth in all regions. World population is projected at around 9.1 billion in 2050, converging to growth rates of around 0.3 per cent per annum. This slowdown will be particularly important in China, which is projected to reach negative population growth from 2030, driving negative population growth in East Asia. Developed countries will continue to grow very slowly, with growth expected to turn negative around 2040. Population growth will also slow in South Asia and Sub-Saharan Africa, although the latter region will experience larger population increases, around 1.3 per cent per annum. Higher population growth will thus occur in the regions with higher poverty and larger food demand elasticities; income per capita growth in Sub-Saharan Africa will therefore be key to determining the extent of food demand growth, although slower population growth will imply slower food demand growth.

While population projections are relatively reliable owing to persistent demographic dynamics, income growth dynamics are more uncertain. The main source for these projections is the World Bank Global economic prospects (2009). The projections are of substantial per capita income growth, although this is expected to be lower in all regions during the period 2015–2030 than in the previous decade.
This result is mainly driven by the expectation that income growth in the large developing countries will decelerate slightly. Despite this slowdown, per capita incomes in developing countries are expected to triple from $1550 to $4650 between 2004 and 2030 (World Bank 2009). Incidentally, World Bank (2009) estimates the income level at which cereals demand reaches 'saturation' at around $5000 at purchasing power parity (PPP). This implies very low growth in cereals demand for human use after 2030, and a potentially larger increase in the demand for cereals for energy use. Overall, these growth projections are very optimistic, as they imply income convergence, with low- and middle-income countries catching up with developed countries much as they did in the last decade. Similarly, FAO (2006) projections use an even more optimistic scenario for per capita income growth for the period 2030–2050, largely driven by the slowdown in population growth.

Estimates of income inequality and its patterns over time are available for single countries and for the entire world (World Bank 2006; Anand & Segal 2008). Nevertheless, there is a lack of agreement regarding the direction of change in income inequality experienced by the world and by individual countries, with some authors arguing that globalization is increasing inequality (e.g. Milanovic 2005) and others arguing exactly the opposite (e.g. Sala-i-Martin 2006). There is, however, a general consensus on two facts with important implications for the analysis and prediction of food demand patterns in the world. First, world income inequality is very high (Anand & Segal 2008). Estimates of the Gini coefficient5 for the world income distribution are at and above 0.7, a level of inequality that cannot be found within any single country of the world, not even the most unequal.
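As a quick check on the tripling of per capita incomes cited above, the implied average annual growth rate can be computed directly (a back-of-the-envelope sketch, not part of the World Bank's methodology):

```python
# Per capita incomes in developing countries are projected to rise
# from $1550 in 2004 to $4650 in 2030 (World Bank 2009): 26 years.
implied_growth = (4650 / 1550) ** (1 / 26) - 1
# roughly 0.043, i.e. about 4.3 per cent per year
```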
Second, when income inequality among the world's citizens is decomposed into between- and within-country inequality, most global income inequality (between 80 and 90%) turns out to be between-country inequality (Anand & Segal 2008). This implies that shifts in the income distribution of single countries have no significant impact on the world income distribution, unless they occur in countries that are very populous or rich, such as China, India and the United States. For this reason, the dynamic patterns of world food demand are likely to be driven by the patterns of convergence or divergence of the world's economies, rather than by changes in the distribution of income within each country. Bussolo et al. (2008a) have produced the latest projections of income distribution using a computable general equilibrium (CGE) model with a household module that simulates income distributions (see the brief model description below). The authors find that while more income convergence across countries is expected and cross-country inequality is expected to decrease, within-country inequality is going to increase in most of the developing world.

Several models are used to forecast the future of food demand. Generally, owing to their main objective of forecasting production, consumption and trade of specific food commodities, these models tend to be partial equilibrium. Table 2 summarizes the main existing models.
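The between/within decomposition referred to above can be illustrated with the Theil index, which splits exactly into the two components. The sketch below uses toy income lists, one per country; the values are purely hypothetical:

```python
import math

def theil(incomes):
    """Theil T index for a list of positive incomes."""
    mu = sum(incomes) / len(incomes)
    return sum((y / mu) * math.log(y / mu) for y in incomes) / len(incomes)

def between_within(countries):
    """Decompose pooled Theil inequality into between- and within-country parts.

    `countries` is a list of income lists, one per country (each citizen
    assigned an income); the two parts sum to the Theil index of the
    pooled population.
    """
    pooled = [y for c in countries for y in c]
    n, mu = len(pooled), sum(pooled) / len(pooled)
    # between part: country income shares weighted by log of relative mean income
    between = sum((len(c) / n) * (sum(c) / len(c) / mu)
                  * math.log(sum(c) / len(c) / mu) for c in countries)
    # within part: each country's own Theil, weighted by its income share
    within = sum((sum(c) / (n * mu)) * theil(c) for c in countries)
    return between, within
```

With two internally equal countries, all inequality is between-country, mirroring the dominance of the between component in the global data.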
Clearly, most of these models fail to incorporate the desired properties listed in the previous section. An initial important limitation is their neglect of general equilibrium interactions with non-agricultural sectors. More important, however, is the fact that most of these models use simple log demand systems with constant income elasticities (see Islam 1995 for a comparison), which imply no Engel flexibility, a problem that is especially relevant for long-term forecasts. Finally, none of the models disaggregates demand by household type, which risks aggregation biases if income growth is not distribution-neutral.

Several studies have modelled the future of food demand and supply in agriculture. A common conclusion is that while further pressure on food demand is likely to emerge in coming years, food demand growth is expected to slow in future decades, aided by low population growth and food 'saturation' in emerging markets. Rosegrant et al. (2001) use the IFPRI–IMPACT model to analyse the future of agriculture in 2020. The authors suggest that important changes related to urbanization, rising incomes and lower population growth, particularly in Asia, are affecting food demand. Demand growth in grains will decline over time. On the other hand, demand for meat will increase in developing countries while remaining constant in developed countries, increasing the pressure on cereals for animal feed. In general, demand growth in developing countries will be higher, increasing the importance of these countries in global food markets.

The main long-term forecasts are produced by the FAO (2006), revised by Alexandratos (2009). According to these forecasts, there is still scope for future demand growth, although almost zero population growth is expected at the global level.
This is explained by the fact that most projected population growth will occur in countries with very low consumption levels, mainly in Sub-Saharan Africa. Despite this expected positive demand growth, growth rates are expected to decrease in the future owing to lower population growth and the attainment of medium–high levels of per capita consumption in some emerging markets, especially China, which has experienced very high demand growth in the past. While declining growth rates of food demand for human consumption are expected, these could be offset by the additional demand arising from biofuels. In addition, meat consumption is expected to decelerate somewhat following the increases in per capita consumption in China and Brazil, while dairy product demand is expected to continue growing, especially in developing countries. Thus, the main message of the FAO projections is a deceleration in food demand growth rates, especially in the long term. A key element of the simulation outcomes is the pace of the transition towards lower income demand elasticities as income grows, and the question becomes how fast this transition will occur. This is especially important if we consider the recent link between energy and food markets through biofuels, and the recent pressure on cereals for animal feed from increasing meat demand. The potential stress arising from these factors implies that the ability to feed humans will depend on whether the growth rates of demand for food for human uses are really decreasing at the rates predicted by these studies.

As we have shown in this study, changes in income distribution are crucial in determining food demand growth. Unfortunately, existing projections use models that depart substantially from the optimal demand properties suggested in the previous section. The demand systems used do not have enough Engel flexibility to predict decreasing marginal budget shares accurately.
In addition, feedback and general equilibrium effects may be large, especially considering the increasing link between food and energy markets and the potential impact of climate change shocks and policies. Given the uncertainty around income distribution projections in the coming decades, existing projections should also consider scenarios with different distribution dynamics. Since most inequality in the world is between-country rather than within-country, large changes in income distribution within countries are unlikely to produce large shifts in food demand; unless the income distributions of populous and rich countries such as China, India and the United States change dramatically, little effect on global food demand can be expected from this source alone. The important policy question for demand projections is therefore how large food demand projection errors would be if the world moved away from convergence in rates of economic growth among countries.

This section illustrates, with a simple example, the size of potential changes in food demand arising from different income distribution scenarios. We perform two simple simulations of the effect of changes in income distribution on food demand, within a country and between countries in the world. The simulations use Engel curves estimated with semi-logarithmic functions and assume a lognormal distribution of income within countries and in the world. These assumptions oversimplify the complex relationship between income distribution and food demand, and other types of simulation are possible; the purpose of the exercise is to provide an initial rough approximation of the size of these effects both within and between countries.

To provide an example of the effect of changes in income distribution on food demand within an economy, we perform a simulation using data from a sample of households in Andhra Pradesh (India) surveyed in 2005 by the NSSO (2006).
We estimate the food Engel curve in both the semilogarithmic and the food share forms. The fitted values are shown in figure 5. The estimated income elasticity is very high (0.72).

Figure 5. Engel curves in Andhra Pradesh. Source: calculated from NSSO (2006) data of 2005.
A simple simulation can be performed by assuming that the distribution of income is lognormal and exploiting the following property of the lognormal distribution (Prais & Houthakker 1971): the arithmetic mean $\bar{y}$ is related to the geometric mean $y^{*}$ by $\bar{y} = y^{*}\,\mathrm{e}^{\sigma_y^2/2}$, where $\sigma_y^2$ is the variance of the distribution of the logarithm of income. Substituting this expression for the mean of log income into the semilogarithmic Engel curve of equation (2.3), average food consumption can be rewritten as
$$\bar{c} = \alpha + \beta\left(\ln \bar{y} - \frac{\sigma_y^2}{2}\right), \qquad (5.1)$$
so that average food consumption is now a function of both average income and the variance of income. Mean-preserving changes in inequality can be simulated by changing the value of the variance. In order to represent changes in the income distribution with an index that is more familiar to readers, and for which data are readily available, we use another property of the lognormal distribution: if income is lognormally distributed, the Gini coefficient can be derived from the variance of log income as (Cowell 2008)
$$G = 2\,\Phi\!\left(\frac{\sigma_y}{\sqrt{2}}\right) - 1, \qquad (5.2)$$
where $\Phi$ is the standard normal cumulative distribution function. The initial value of the Gini coefficient is relatively low (0.28). We simulated changes in the Gini of two decimal points above and below this initial value and calculated the corresponding food consumption. The results are displayed in figure 6.

Figure 6. Food consumption and income inequality in Andhra Pradesh. Source: calculated from NSSO (2006) data of 2005.
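This lognormal simulation amounts to a few lines of code. The sketch below assumes the semilog Engel curve c = α + β ln y, with illustrative parameter values rather than the estimated NSSO coefficients:

```python
import math
from statistics import NormalDist

# Parameters of the semilog Engel curve c = alpha + beta * ln(y);
# illustrative values, not the estimated NSSO coefficients.
ALPHA, BETA = 1.0, 0.72

def sigma_from_gini(gini):
    """Invert equation (5.2): G = 2*Phi(sigma/sqrt(2)) - 1."""
    return math.sqrt(2) * NormalDist().inv_cdf((gini + 1) / 2)

def mean_food_consumption(mean_income, gini, alpha=ALPHA, beta=BETA):
    """Equation (5.1): average food consumption under lognormal income."""
    s = sigma_from_gini(gini)
    return alpha + beta * (math.log(mean_income) - s * s / 2)
```

Holding mean income fixed, raising the Gini raises the log-income variance and lowers average food consumption, which is the mean-preserving inequality experiment described above.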
The elasticity of food consumption with respect to the Gini coefficient estimated from this curve is approximately −0.2, indicating that a 10 per cent increase in the Gini, a change not far from those normally observed within one or a few years, would reduce per capita food consumption by about 2 per cent.

This simulation is valid under the assumption that the shape of the Engel curve does not vary over time, which is equivalent to assuming that changes in income over time for a given household mirror the income differences across households at a single point in time: the food consumption of households with different income levels is used to predict the food consumption of the same household when its income increases. There are a number of reasons, however, to suspect that the shape of the curve might vary over time. First, households may stick to their diets for some time after income changes, rather than quickly adopting the diets of richer households; in this case, the change in food consumption over time would be smaller than the change across the cross section of households, and the Engel curve would be steeper. Second, consumer preferences, which together with income and prices determine consumption, may change, altering the shape of the Engel curve. One obvious case is a change in family composition or size over time, because people of different ages have different consumption needs; children in particular tend to consume proportionally more food than adults in developing countries. In this case, the change in food consumption over time, following, for example, a decrease in average family size, would be larger than the change across the cross section of households, and the Engel curve would be flatter.
Finally, new goods can be introduced over time, and food luxury items that attract new consumers may appear on the market. The introduction of such items might shift the right-hand section of the Engel curve upwards, putting the existence of a 'saturation' point into question. All these possible effects suggest that the invariance of the shape of the Engel curve over time should be tested with survey data rather than assumed. If Engel curves can be estimated from several cross-sectional surveys taken years apart, parametric and non-parametric tests are available to assess the stability of the curves over time.

We also simulated the pattern of per capita food consumption over the next 40 years. To do so we assumed a rate of per capita income growth of 4.1 per cent per year.6 As per capita income increases over time, per capita food consumption moves along the food Engel curve. However, if the incomes of the poor and the rich grow at different rates and the distribution of income varies, the patterns of food consumption deviate from the Engel curve. We assumed a very large change in the Gini coefficient of ±20 per cent7 over a 40-year period, and calculated the level of per capita food consumption for each year. The results of these simulations are shown in figure 7. A more egalitarian distribution has the effect of increasing per capita food consumption, while a less egalitarian distribution produces the opposite outcome. The chart on the left of figure 7 shows the deviations from the Engel curve, while the chart on the right shows the levels of per capita food consumption over time. Under the assumptions made in the simulation, an increase in inequality of 20 per cent over 40 years produces a reduction in per capita food demand of about 2 per cent; conversely, a reduction in the Gini coefficient of 20 per cent results, over 40 years, in an increase in per capita food consumption of about 1.6 per cent.
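The 40-year simulation can be sketched along the same lines, combining compound income growth with a linear drift in the Gini (illustrative α, β and starting values; the NSSO estimates are not reproduced here):

```python
import math
from statistics import NormalDist

def sigma_from_gini(gini):
    # invert G = 2*Phi(sigma/sqrt(2)) - 1 for a lognormal distribution
    return math.sqrt(2) * NormalDist().inv_cdf((gini + 1) / 2)

def food_path(y0, gini0, growth, gini_shift, alpha=1.0, beta=0.72, years=40):
    """Per capita food consumption path when mean income grows at a constant
    rate while the Gini drifts linearly by `gini_shift` (e.g. +0.2 = +20%)."""
    path = []
    for t in range(years + 1):
        y = y0 * (1 + growth) ** t            # compound income growth
        g = gini0 * (1 + gini_shift * t / years)  # linear Gini drift
        s = sigma_from_gini(g)
        path.append(alpha + beta * (math.log(y) - s * s / 2))
    return path
```

Comparing paths with a rising and a falling Gini reproduces the qualitative result above: rising inequality lowers per capita food demand relative to distribution-neutral growth, while the income-growth effect dominates in levels.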
Clearly, the impact on per capita food demand of even very large changes in the income distribution is small in this case, since inequality in Andhra Pradesh is not high.

Figure 7. Projected food consumption in Andhra Pradesh over the next 40 years. Source: calculated from NSSO (2006) data of 2005.
Regarding food composition, the effects of changes in income distribution on specific food items depend on the shape of the Engel curve for each particular item. Figure 8 displays Engel curves in share form for six broad food categories (cereals, pulses, dairy, fats, meat, and fruit and vegetables) for the sample of households in Andhra Pradesh. Expenditure on each category is expressed as a share of total expenditure, so that a downward-sloping Engel curve implies a falling share, while an upward-sloping curve implies an increasing share. The curves were calculated using semi-parametric methods, letting the data define the shape of the curve rather than imposing any specific functional form (Yatchew 2003).

Figure 8. Engel curves for six food categories in share form. Source: calculated from NSSO (2006) data.
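The idea of letting the data define the curve can be sketched with a binned-means estimator, a crude stand-in for the semi-parametric smoothers of Yatchew (2003); the function and data below are illustrative only:

```python
def engel_share_curve(log_exp, shares, bins=20):
    """Binned-mean estimate of a share-form Engel curve: average the food
    budget share within equal-width bins of log expenditure."""
    lo, hi = min(log_exp), max(log_exp)
    width = (hi - lo) / bins or 1.0
    totals = [[0.0, 0] for _ in range(bins)]
    for x, s in zip(log_exp, shares):
        i = min(int((x - lo) / width), bins - 1)  # clamp the top edge
        totals[i][0] += s
        totals[i][1] += 1
    # return (bin midpoint, mean share) for each non-empty bin
    return [(lo + (i + 0.5) * width, t / k)
            for i, (t, k) in enumerate(totals) if k]
```

Applied to household data, a declining sequence of bin means reproduces Engel's law for a necessity such as cereals, and a rising one the luxury pattern seen for dairy.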
The Engel curve for cereals is concave down for most of the expenditure range, but concave up for very poor households. Only the Engel curves for dairy and meat show a clear pattern: dairy is consumed more than proportionally as expenditure increases, while the share of expenditure on meat consistently decreases as expenditure increases.8 The effects of changes in income distribution on these two items are therefore clearly predictable: an increase in inequality increases the consumption of dairy products and reduces the consumption of meat. The effects on other food items are much less predictable, because their Engel curves in share form have an inverted-U shape, so whether consumption increases or decreases depends on the nature of the transfer taking place. For example, while as a general rule an increase in inequality reduces the consumption of cereals, if the increase in inequality is circumscribed among the very poor (the rising section of the cereals Engel curve in figure 8), cereals consumption increases.

Since data on food consumption and per capita gross domestic product (GDP) are available for most countries of the world, it is tempting to estimate a world food Engel curve and to perform a world-scale simulation of a change in the world income distribution. Figure 9 shows semilogarithmic and share-form food Engel curves estimated for most countries in the world using data from the International Comparison Programme of the World Bank. Circles represent countries, and their size is proportional to each country's population; the large circles of the United States on the right, and of India and China on the left, are clearly visible. The world income elasticity of food consumption estimated from these curves is 0.48.

Figure 9. World food Engel curves. Source: calculated from the International Comparison Programme data.
The simulation of the effect of changes in the world income distribution on world food demand can be performed in the same way as within a single country. Before carrying out the simulations, however, the concept of world income inequality needs to be properly defined. There are three main concepts of income inequality in the world (Milanovic 2005). The first, international inequality, is inequality between states, where each country is represented by a single observation. This concept is clearly inappropriate for the present analysis because it is not representative of consumption behaviour in the world. The second, global inequality, is inequality among the world's citizens, as could be obtained by running a random income survey over the entire world. This concept represents the true income inequality in the world, but the data for its calculation are not readily available. The third, between-country inequality, is inequality between countries where each country is weighted by the number of its citizens. In other words, this concept measures inequality among individuals, but each individual is assigned the average income of her country of residence. This is the concept used in estimating the curves of figure 9. Between-country inequality ignores within-country inequality and, like any form of income averaging by aggregation, it underestimates true global inequality. It is estimated, however, that between-country inequality captures around 80 per cent of global inequality (Anand & Segal 2008), and it therefore represents a good approximation. In our data, the estimated world Gini coefficient is 0.56, which compares relatively well with the estimates of around 0.7 often found for global income inequality. The world income distribution is assumed to be lognormal, as was the case for the within-country distribution.
There is no strong support in the data for this particular parametric form, as the true distribution of income, including within-country inequality, is not known, but it is an approximation often used in studies of world inequality (e.g. Chotikapanich et al. 1997; Dowrick & Akmal 2005). Figure 10 shows the relationship between world average food consumption and the world Gini coefficient. The elasticity of food consumption with respect to the Gini coefficient estimated from this curve is −1, indicating that a 1 per cent reduction in the world Gini would increase per capita food consumption by the same percentage amount. This elasticity is much larger than the one observed within-country, presumably a consequence of the different curvature of the world Engel curve and of the world income distribution compared with the national ones. Year-to-year estimates of the world Gini are not easily available because of difficulties in obtaining the required data. Sala-i-Martin (2006), for example, estimated a reduction in global income inequality of 0.8 per cent between 1992 and 1993. According to our estimates, a 0.8 per cent increase in per capita global food demand should have occurred over the same years solely on account of the change in the world distribution of income.

Figure 10. Food consumption and income inequality in the world. Source: calculated from the International Comparison Programme data.
We also repeated the simulation performed for Andhra Pradesh for the whole world, assuming per capita income growth of 2.6 per cent per year. We predicted trends in the world distribution of income using historical series of the Gini coefficient calculated by Sala-i-Martin (2006) and by Milanovic (2009). The trends are extrapolations over time of the Gini of the global income distribution from an 'optimistic' point of view (Sala-i-Martin) and a 'pessimistic' point of view (Milanovic), respectively; theoretically, the two views are supported by diverging hypotheses regarding the convergence or divergence of world economies. The results are shown in figure 11. The effect of changes in the income distribution is considerable. Under the assumptions made, an increase in the Gini of 8 per cent over 40 years produces a reduction in per capita food demand of about 5.4 per cent, while a reduction in the Gini of 5 per cent results, over 40 years, in an increase in per capita food consumption of about 2.7 per cent compared with the income distribution-neutral growth case. These effects, however, are not large compared with the effects of increases in per capita income. For example, the increase in food consumption following per capita income growth alone, independently of changes in income distribution, over the same 40-year period is nearly 50 per cent. Given the estimated expenditure elasticity of world food consumption (0.47), the increase in food consumption obtained under the most optimistic prediction of a 5 per cent fall in the Gini over 40 years might be obtained in just over 2 years of per capita income growth at current rates.

Figure 11. Projected world per capita food consumption over the next 40 years. Source: calculated from the International Comparison Programme data.
The much larger effect of income distribution on food demand between countries than within countries is explained by the wider differences in demand elasticities between countries than between individuals within countries. This illustration underlines the point that shifts in food demand deriving from changes in the income distribution can be expected to originate from the patterns of convergence or divergence between world economies, rather than from trends in inequality within each country. However, it should also be emphasized that the predictions of future world income distribution were obtained by extrapolating calculated series of Gini coefficients. Without some strong theoretical assumptions regarding the convergence or divergence of world economies, it is difficult to interpret these extrapolations as 'trends'; the predictions are, to some extent, only hypothetical, and an increasing inequality trend might be reversed, and vice versa.

This paper has surveyed the relationship between income distribution and food demand. The cornerstone of this relationship is Engel's law, which establishes that food budget shares decrease as income grows. One implication of Engel's law is that any accurate prediction of future food demand needs to deal with two main issues when considering income distribution: aggregation across households and Engel flexibility. The former refers to the fact that considering only mean income growth to determine aggregate food demand can be misleading when income growth differs across income groups with different income elasticities. The latter refers to the fact that as income grows over time, households and countries move towards decreasing food budget shares.
The review of the literature on demand systems suggests that only rank-three demand systems produce food budget shares that are nonlinear in income, and that accurate aggregation requires moving away from representative consumer models towards aggregation across different types of households. Furthermore, models should integrate general equilibrium effects related to the feedback from agriculture to income distribution and the link between energy and food markets via biofuels. The review of existing models for forecasting food demand found that most of them do not comply with any of the properties suggested above. As a result, the accuracy of existing food demand projections is highly conditional on the assumptions of within-country distribution-neutral growth and a continuous reduction in cross-country inequality. A less plausible scenario of growth divergence would slow food demand growth in the short term, while at the same time making food demand growth more persistent over time. These potentially different demand growth outcomes in the short, medium and long term are important when considering additional stresses on demand arising from high energy prices and potential climate change shocks and policies.

A key question that arises from the paper is how deviations from income distribution projections may affect the accuracy of food demand projections. To answer this question, the paper carries out some simple simulations, from which two main results emerge. First, the largest potential impact on world food demand comes from changes in between-country inequality, rather than within-country inequality. Second, under two different income distribution scenarios, one optimistic and one pessimistic, world food demand in 2050 would be, respectively, 2.7 per cent higher and 5.4 per cent lower than under distribution-neutral growth. Thus, income distribution changes have a measurable impact on food demand projections. Clearly, more research is needed in this area.
To begin with, more empirical work is required to analyse the extent of the demand bias arising from not considering potential income distribution changes. Secondly, more realistic models integrating more sophisticated demand systems with higher Engel flexibility are required to forecast the future of food demand. Thirdly, changes in the demographic structure of countries should be taken into account in both estimation and modelling exercises. Finally, general equilibrium models for disaggregated food commodities that can include different types of households, link food and energy markets, and feed income distribution changes back into the model are desirable.

We would like to thank the Foresight Project on Global Food and Farming Futures for financial support, Marta Moratti for research assistance and Vivienne Benson for editorial support.

Footnotes
1. In economics, the responsiveness of demand to income changes is called the income elasticity of demand. Engel's law implies that different households have different food income elasticities according to their levels of income and that, as income grows within households, income elasticities tend to decrease over time.
2. See the electronic supplementary material, appendix, for a more detailed description of household composition and aggregation.
3. A more detailed review of the main demand systems used in the empirical and modelling literature can be found in the electronic supplementary material, appendix.
4. Cranfield et al. (2003) suggest that aggregation across individuals is possible under the linear expenditure system, the quadratic expenditure system, AIDS and QUAIDS, but not under AIDADS.
5. The Gini coefficient was developed in 1912 by the Italian statistician Corrado Gini. It is a measure of statistical dispersion mainly used to measure inequality of income and wealth.
6. This is the rate predicted by FAO (2006).
7. Within-country Gini coefficients tend to be very stable over time.
As a result a 20 per cent change is a very large and unlikely change in income distribution. 8 Indian households are largely vegetarian and increasingly so, as the poorer section of society adopt the consumption habits of middle and upper classes. One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050'. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. © 2010 The Royal Society This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. References
By 2050 it is predicted that there will be between 8.0 and 10.4 billion people on Earth, with a median value of 9.1 billion (http://esa.un.org/unpp). If all of these people are to be fed sufficiently, total food consumption will have to increase by 50–70% (Smil 2005; FAO 2009). How much of a contribution can increased yield per unit area make? Many studies have addressed this problem, and the largest have done so from the viewpoint of agro-ecology and climate science or of socio-economics (Fisher et al. 2005; Nelson et al. 2009). This review examines the question from the viewpoint of crop physiology and agronomy. By 2050, assuming the A1B world development pathway from the Special Report on Emissions Scenarios (Nakićenović & Swart 2000), it is predicted that the atmospheric carbon dioxide concentration [CO2] will have risen from today's value of approximately 370 ppm to 550 ppm. This, in combination with other changes in the atmosphere, is likely to change the Earth's climate, making it warmer by an average of 1.8°C (Meehl et al. 2007). This warming will increase the evaporation of water from wet surfaces and from plants, leading to increased but more variable precipitation. At present, the amount and seasonality of precipitation in any region can only be predicted with a great deal of uncertainty. The concentration of ozone [O3] will also increase as a result of industrialization, and this will have a negative impact on crop growth and productivity: we assess this by reviewing recent literature. We have selected 11 arable crops and assess the extent to which changes in their yields might contribute towards an increase in the amount of food available. These crops (table 1) represent the principal types of photosynthesis used by plants: C3 (wheat, rice, soya, sunflower, oilseed rape, potato, sugar beet and dry bean) and C4 (maize, sugar cane and sorghum).
Two of the crops (soya and dry bean) are legumes and fix nitrogen, as does much sugar cane in Brazil (Döbereiner 1997). Wheat, rice, maize and sorghum occupy 83 per cent of the world's total cereal area, and together the 11 crops occupy 56 per cent of the world's arable area. We have attempted to assess the extent to which crop yield changes might contribute to feeding the world's population, using the literature to assess the probable yield changes and making an analysis akin to that of Ewert et al. (2005). These authors estimated future changes in the productivity of a range of crops in 17 European countries from trended values of current yields, modified by relative changes owing to climate change, increasing [CO2] and technology development. Future technology effects were estimated from historic trends in relative changes of national yields and projected into the future.
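The projection logic just described can be sketched as a simple multiplicative decomposition: a trended baseline yield scaled by relative changes owing to climate, CO2 and technology. The numbers below are illustrative placeholders (a 2.7 t ha−1 baseline with a linear trend, an assumed 5 per cent climate penalty and an 11 per cent C3 FACE response), not values computed by Ewert et al.

```python
# Sketch of an Ewert et al. (2005)-style projection: a trended baseline
# yield is scaled by relative changes owing to climate change, CO2
# enrichment and technology development. All numbers are illustrative
# placeholders, not values from the paper.
def project_yield(base_t_ha, trend_t_ha_per_yr, years,
                  climate_factor, co2_factor, tech_factor):
    trended = base_t_ha + trend_t_ha_per_yr * years  # linear yield trend
    return trended * climate_factor * co2_factor * tech_factor

# e.g. wheat at 2.7 t/ha, +0.033 t/ha/yr over 40 years, a 5% climate
# penalty, the +11% average C3 FACE response, technology in the trend
y2050 = project_yield(2.7, 0.033, 40, 0.95, 1.11, 1.0)
print(f"{y2050:.2f} t/ha")
```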
The [CO2] of the atmosphere can have a large impact on the rate of photosynthesis, particularly in C3 plants. This effect is used commercially in tomato production, where the air in glasshouses is enriched to increase yield greatly. However, [CO2] also affects water use by plants, because high concentrations cause partial closure of the stomata. The magnitude of its effects on dry matter production depends upon the illumination conditions, water availability, N supply and the transport and storage of the photosynthates. This complexity means that the interpretation of controlled-environment studies (where enriching the air with CO2 is relatively straightforward) is fraught with difficulty. To overcome this, free air carbon dioxide enrichment (FACE) experiments have been made in the last two decades. In these, crops are grown to maturity in the field in either an ambient atmosphere or one enriched with CO2. Most of these studies used an atmosphere close to 550 ppm CO2, and these are considered here. Studies with grain crops were reviewed by Long et al. (2005a), who found that the average yield increase of C3 species was 11 per cent. In FACE experiments in Germany, Manderscheid & Weigel (2006) grew two cycles of a three-year rotation of winter barley, sugar beet and winter wheat using adequate applications of nitrogen fertilizer, and measured yield increases of 13, 15 and 7 per cent, respectively, in response to elevated [CO2]. In Italy, FACE experiments with potato (Miglietta et al. 1998; Magliulo et al. 2003) produced much larger yield increases (29%, 32% and 54%) in response to elevated [CO2]. Is it significant that the two C3 species (potato and tomato) with large responses to increased [CO2] are both members of the Solanaceae? Long et al. (2005a) also reported FACE results for maize and sorghum (C4 species), in which there were no significant responses to enrichment.
All these yield results, except those for potato, are smaller than anticipated from earlier reviews, most of which were based on studies in controlled conditions (Amthor 2001; Kimball et al. 2002). The responses to enriched atmospheres in the FACE experiments are also smaller than those that have been used in most crop-growth models. Possible reasons for the smaller increases are that field-grown crops have canopy architectures that are not optimized for the efficient use of radiant energy, and that feedback repression of photosynthesis occurs because the plants are incapable of transporting or storing sugars at the greater production rate of the enriched atmospheres, i.e. they are sink-limited. The [CO2] also affects the water economy of crop plants. Increased [CO2] increases the rate at which this gas diffuses into leaves through the stomata, relative to the rate at which water vapour diffuses out. Because the extra CO2 increases the rate of dry matter production of C3 plants, this change in relative diffusion rates also increases the water use efficiency (WUE): the amount of dry matter produced per unit of water transpired. An increase in [CO2] also causes a decrease in the aperture of the stomata, which reduces the rate of water consumption. In the FACE experiments with potatoes this effect was large: CO2 enrichment increased tuber yield by 43 per cent, decreased water consumption by 11 per cent and consequently increased WUE by approximately 70 per cent (Magliulo et al. 2003). In sugar beet, the amount of water consumed during the growing season was reduced by 20 per cent while yield increased by 8 per cent (Manderscheid et al. 2010). The impact of this water economy on yield is difficult to determine because, to date, it has not been possible to conduct FACE experiments with both warmed air and CO2 enrichment.
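Because WUE is dry matter per unit of water transpired, its relative change follows directly from the relative changes in yield and water use. A minimal sketch of that identity, using the sugar beet figures quoted above (the calculation is illustrative; it is not how the cited authors computed WUE):

```python
# WUE = dry matter produced per unit water transpired, so the relative
# change in WUE follows from the relative changes in yield and water use:
# WUE_new / WUE_old = (1 + dYield) / (1 + dWater).
def wue_change(yield_change, water_change):
    return (1 + yield_change) / (1 + water_change) - 1

# Sugar beet FACE figures quoted above: +8% yield, -20% water consumed
beet = wue_change(0.08, -0.20)
print(f"WUE change: +{beet:.0%}")
```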
However, it is clear that this effect of CO2 on water consumption can only have a positive impact on yield, because in many situations crop yields are water-limited, and this effect has not been built into the simulations of future food production made so far. In most FACE experiments the plants were supplied with adequate water and nitrogen fertilizer. However, experiments with wheat (Kimball et al. 1999), rice (Kim et al. 2003) and a cereal and beet rotation (Burkart et al. 2009) compared enrichment responses at inadequate and sufficient levels of N supply. In all cases the relative response to enriched [CO2] was either enhanced or unchanged when the N supply was inadequate. Similar responses were measured in clover: plants with plentiful nodules produced smaller responses than plants with few nodules (Hartwig et al. 2002). In future, if N fertilizer use is further restricted for financial or environmental reasons, the enriched CO2 atmosphere should help to limit the negative impact on crop yield. Ozone concentrations [O3] in the industrialized countries of the Northern Hemisphere have been rising at between 1 and 2 per cent per year (Chameides et al. 1994). The surface [O3] has now reached a global mean of approximately 50 ppb (8 h summer seasonal average; Fiscus et al. 2005). Nearly a quarter of the Earth's surface is at risk of experiencing concentrations in excess of 60 ppb during mid-summer. Yield reductions owing to ozone pollution can begin at concentrations as low as 20 ppb (Ashmore 2002). The IPCC Fourth Assessment Report projects an increase in surface [O3] across the globe of 20–25% by 2050 (Meehl et al. 2007). Long et al. (2005b) estimated that a 20 per cent increase would decrease yields relative to today by 5, 4, 9 and 12 per cent for maize, rice, wheat and soya, respectively. For potato, a yield reduction of 5 per cent has been reported (Craigon et al. 2002).
A meta-analysis by Feng & Kobayashi (2009) found that probable yield reductions by 2050 were 8.9, 9 and 17.5 per cent for barley, wheat and rice, but were 19.0 and 7.7 per cent for bean and soya bean. These projections were made on the basis of studies in open-topped chambers in the field. Only two FACE studies (Morgan et al. 2003; Shi et al. 2009) have been reported with ozone enrichment: the first reduced soya yield by 20 per cent, the second produced rice yields that were unaffected (two inbred cultivars) or were reduced by 15 and 17.5 per cent (two hybrid cultivars). Changes in yield as a consequence of rising [O3] have not been built into recent projections of global food production under climate change (Gitay et al. 2001; Parry et al. 2004; Nelson et al. 2009). The predicted yield changes are rather variable even within species, but they are all reductions. By 2050 the impact of rising [O3] is likely to eliminate most of the yield increase owing to increasing [CO2] in C3 crops, and cause a yield decrease of at least 5 per cent in C4 species. However, the studies with rice indicate that there is scope to breed for reduced O3 sensitivity. A consequence of the increase in the [CO2] and the concentration of other gases in the atmosphere is that the world is expected to get warmer, by about 1.8°C as an annual average by 2050, and by rather more over land (Gornall et al. 2010). This will be accompanied by changes in precipitation, more than today in some places, and less in others. We did not have access to observed daily weather data at sufficient international locations to use crop-growth models to simulate the impacts of future climates on yields. Instead, we have relied on published results. There have been three large studies. The first was summarized by Parry et al. (2004): it used climate simulations from general circulation models (GCMs) developed in the 1980s. 
The CERES and SOYGRO models were used to simulate the growth of wheat, rice, maize and soya bean at 118 locations around the world, with and without a CO2 effect on growth. In broad terms for 2050 and in the absence of the CO2 effect on growth, the findings were:
The third and most recent study, by the International Food Policy Research Institute (Nelson et al. 2009), modelled maize, wheat, rice, groundnuts and soya beans at 0.5° intervals, using simulations of current weather around the world based on monthly average values for the period 1950–2000, together with Decision Support System for Agrotechnology Transfer (DSSAT) crop models. The results were applied to other crops: C4 species were assumed to behave like maize, and C3 types like wheat, rice or soya. The climate simulations were generated by models from the National Center for Atmospheric Research (NCAR) and from the Commonwealth Scientific and Industrial Research Organisation (CSIRO), using the A2 global development path; the [CO2] in this pathway is similar to that of A1B by 2050. The NCAR simulations for 2050 predict an extra 10 per cent precipitation on land, whereas the CSIRO simulations predict an extra 2 per cent; the HadCM3 predictions are for an increase of about 4 per cent over cropped land (Gornall et al. 2010). The NCAR simulations also indicate larger temperature increases than the CSIRO or HadCM3 models, especially in the Northern Hemisphere. The estimated yield changes, averaged for the two climate simulators and without CO2 fertilization, are presented in table 2 for wheat, rice and maize. In most cases the yield reductions owing to climate change were more serious, or the yield increases smaller, when the NCAR simulations were used; the average difference was 3 per cent. This happened whether the crops were irrigated or rain-fed, which is surprising since the climate simulations with the most precipitation would be expected to produce the larger rain-fed yields. In almost all cases the yield reductions were more serious in developing countries. Of necessity, almost all of the input data for the yield models were simulated, and this raises questions about the reliability of the output. Unfortunately, Nelson et al.
(2009) give no indication of whether their yield simulations for today's climate are similar to reality or not. Certainly the assumptions made to determine crop sowing date are such that rain-fed crops would seldom be sown in eastern England, Canada, Russia or the western half of the USA.
The variations between predicted outcomes of climate change arising from different climate simulators and different modelling methods are illustrated by comparing the results in table 2 for rice with the results produced by Masutomi et al. (2009). These authors used a crop model, agro-ecological zones for Asia and 18 GCMs to conclude that by 2050, and without a CO2 fertilizing effect, rice yields would decrease by an average of 8 per cent, not the 16 per cent implied for developing countries in table 2. These authors ascribed most of this yield reduction to warmer winters that would increase the impact of weeds, pests and diseases. However, no consideration was given to the likelihood that these impacts would be controlled by farmers. The large differences between the predicted outcomes for these climate simulators and modelling approaches illustrate just how tentative we should be about the predicted outcome of climate change, although most studies agree that yield will be reduced. Extreme weather events are more likely to happen in the changed climate of the future (Gornall et al. 2010). It is obvious that the severity and frequency of drought will affect crop production; the effects of extreme heat are less obvious. Gornall et al. (2010) show that over much of the world's crop land, today's 1-in-20-year hot event is likely to be approximately 3°C hotter by 2050. Increases of this sort will have serious negative impacts if they occur during the flowering stages of many crops (Wheeler et al. 2000); whether the temperature sensitivity of these stages is highly conserved within a species is not clear, so breeding for tolerance may be difficult. Furthermore, plant breeders are unlikely to select for tolerance to an event that is predicted to be rare. Farmers may have to adapt by growing more tolerant species. Unfortunately, the species that seem best adapted to high temperatures have so far received little attention from international plant breeders.
Yield simulations made on the basis of predicted future climate seldom simulate in a realistic way the possible impacts of pests, diseases or weeds (whose impacts might become more or less serious) or take account of many of the possible adaptations that plant breeders and farmers might make in response to climate change. In an attempt to make a qualitative assessment of some adaptations that might occur, we examined the simulated weather data from 16 regions that represent major zones of arable crop production around the world (figure 1). One of the regions is East Anglia, chosen not because it represents a large production area, but because its characteristics are familiar to us. The percentage of world production of major crop types within these countries is presented in table 3. The UK Meteorological Office provided daily weather simulation output from HadCM3 for these locations for 10-year time slices centred on 2000 and 2050. These simulations are for grids where little of the area is sea.
Figure 1. Selected sites for weather data and crop yield assessment in different regions.
The temperature and precipitation at the 16 locations are summarized in figures 2 and 3. Clearly, all locations are anticipated to become warmer. For example, mean spring temperatures in Manitoba are predicted to increase from 3.7°C to 6.4°C; similar increases are predicted for Harbin in northern China and Tambov in Russia. Similarly, during autumn in Harbin, mean air temperatures are predicted to rise from 4.8°C to 8.8°C. These shifts are large enough for the growing seasons of crops like soya, maize, potato and beet to be lengthened considerably, and in turn this should generate large yield increases, provided there is sufficient water for the crops to avoid serious drought. Unfortunately, the summer in Harbin is predicted to become drier, with rainfall decreasing from 422 mm to 338 mm (figure 3). Similarly, in New South Wales, Australia, average spring rainfall is predicted to decrease from 70 mm to 29 mm, and in Germany on the loess soils, summer rain will decrease from 380 mm to 280 mm. In the first case, this will significantly affect the chance that crops can be established successfully, and in the second it will increase the risk of drought for crops that grow throughout the summer, like maize and sugar beet.

Figure 2. Seasonal mean temperatures at selected sites (see figure 1) in decades centred on 2000 (filled black bar) and 2050 (filled grey bar). In the Northern Hemisphere, (a) spring is March, April and May; (b) summer is June, July and August; (c) autumn is September, October and November; and (d) winter is December, January and February. The allocation of the months is reversed in the Southern Hemisphere. The data are the means of 10 years' daily simulations generated by HadCM3.
Figure 3. Seasonal total precipitation at selected sites (see figure 1) in decades centred on 2000 (filled black bar) and 2050 (filled grey bar). In the Northern Hemisphere, (a) spring is March, April and May; (b) summer is June, July and August; (c) autumn is September, October and November; and (d) winter is December, January and February. The allocation of the months is reversed in the Southern Hemisphere. The data are the means of 10 years' daily simulations generated by HadCM3.
The yields of arable crops in the developed and developing countries of the world changed enormously in the last half of the twentieth century. Average yields of wheat in the UK rose from 3 to 8 t ha−1, while the world average rose from 1.08 to 2.7 t ha−1. Reilly & Fuglie (1998) showed that the average yields of 11 crops in the USA had increased by between 1 and 3 per cent per year during the last half-century, and that the trend was linear or exponential, showing no sign that the rate was slowing down. A large study by Hafner (2003) showed that national average yields of wheat, rice and maize in 188 countries were mostly increasing, that the increases had been predominantly linear, and that the biggest producers' yields had increased at more than 33.1 kg ha−1 yr−1. This rate of yield improvement is required if per capita consumption is to remain at current levels by 2050. In both developed and developing countries much of this increase has been due to the use of nitrogen fertilizer, crop-protection chemicals and responsive varieties. The yield increases delivered by optimizing the use of N fertilizer and by controlling pests, diseases and weeds cannot be repeated: once a disease is controlled and yield increases, that gain cannot be won a second time. This raises the question: are yield increases in the past any guide to increases in the future? Silvey (1994) studied UK national cereal yields over recent decades and concluded that the proportion of the yield change attributable to plant breeding was 47 per cent for wheat and 55 per cent for barley. This compares with 58 per cent for maize in Minnesota (Reilly & Fuglie 1998) and 50 per cent for the USA as a whole (Duvick & Cassman 1999).
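Whether a historic trend is read as linear or exponential matters when it is projected decades ahead, as in the Reilly & Fuglie trend-fitting just mentioned. A small illustration: the 2.7 t ha−1 world wheat base and the 1 per cent per year rate are taken from the figures above, while the 40-year horizon is an assumption.

```python
# Reilly & Fuglie report 1-3% per year yield growth, with trends that
# were linear or exponential; projected over decades the two readings
# diverge. Illustrative comparison from a 2.7 t/ha base over 40 years
# at the 1% rate (base and rate from the text; horizon assumed).
base = 2.7
linear = base + 0.01 * base * 40    # 1% of the base added each year
compound = base * 1.01 ** 40        # 1% compounded annually
print(f"linear: {linear:.2f} t/ha, compound: {compound:.2f} t/ha")
```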
Nevertheless, and despite a new world-record wheat yield of 15.6 t ha−1 set in New Zealand in 2010 (http://www.fwi.co.uk/Articles/2010/03/17/120390/Farmer-topples-his-own-wheat-world-record.htm), questions are being asked about whether these historic improvements have continued in the last 10–15 years. Spink et al. (2009) presented data from UK wheat and oilseed rape variety trials from 1997 to 2006 which show almost no upward trend. Similar evidence can be produced for potato (Allen et al. 2005). However, these plateaux in the trends are still of short duration and could result from annual climate variations. For sugar beet in the UK, Jaggard et al. (2007) showed that sugar yield increases since 1976 were mostly the result of warmer springs, and that only about 30 per cent of the improvement was the result of technological advance, including plant breeding. Evidence like this has fuelled the debate about the extent to which plant breeders are reaching a yield ‘ceiling’. To increase yield, plant breeders must increase the capture of solar energy by photosynthetically active parts of the plant, improve the radiation use efficiency (RUE) or shift dry matter distribution in favour of the harvestable part of the plant (the harvest index, or HI). For example, much of the advance made with wheat has been achieved by shifting HI (Austin 1999), and little progress has been made in increasing rates of photosynthesis (Richards 2000) or the ability to tolerate drought. HI is thought to be at about its optimum now, and this has led to the perception that perhaps the ceiling yield has nearly been achieved. Spink et al. (2009) discussed this and referred to blueprints for wheat and oilseed rape that could produce 19 t ha−1 and 9 t ha−1, respectively, in ideal agronomic conditions in the UK. Compared with today's crops, much of the increase would be owing to increased light capture achieved by breeding for delayed senescence.
This will be especially important in future, to counteract the effect of a warmer climate that would make grain crops mature earlier. Enrichment of the atmosphere with CO2 might offer the plant breeder the opportunity to raise the yield ceiling by increasing RUE and WUE, which are difficult breeding targets. The gains in yield made by almost all C3 crop species in the FACE experiments were smaller than anticipated from studies in controlled conditions. It has been speculated that this is mostly caused by the inability of today's crops to transport or store sugars at a rate that keeps pace with the production capacity of leaves operating in the enriched atmosphere: the capacity of the sink limited the yield. There is debate about whether grain crops are sink-limited in today's conditions (Sinclair & Jamieson 2008), and indications that beet crops growing in a CO2-enriched atmosphere will be limited as well (Manderscheid et al. 2010). As the [CO2] increases, plant breeders will gradually, and perhaps inadvertently, select for lines that have less of a sink limitation (Sun et al. 2009). In C3 crops, leaf photosynthesis is saturated at radiant flux densities of between a quarter and half of full sunlight; therefore, any solar energy intercepted above this level is wasted. Another approach that has been postulated to increase RUE is to manipulate canopy architecture so that, while the sun is bright, more of the canopy is illuminated at moderate intensity and less is light-saturated (Long et al. 2005a). This can be done by making the uppermost leaves nearly vertical, so that they are not light-saturated, while the lower leaves are almost horizontal to ensure that almost all the light is intercepted. This approach was a major factor in improving the productivity of rice (Beadle & Long 1985), but it has fallen from fashion.
Nevertheless, it has the potential to increase RUE by as much as 40 per cent at mid-day in full sunlight (Long et al. 2005a). However, like the CO2 effect, in order to benefit from this change, today's crops would need less sink limitation. Targets like changing sink capacity and canopy architecture can be tackled, if necessary, by conventional plant breeding. More exotic approaches such as engineering C4 photosynthesis into C3 species are likely to be much more complex and difficult to deliver (Hibberd et al. 2008)—C4 species not only have different photosynthesis biochemistry, but they also have different leaf anatomy (Kranz anatomy) which is crucial to their efficient functioning. This anatomy is responsible for increasing the [CO2] around the mesophyll cells by several times its ambient concentration. A successful attempt to improve C3 crop yields by engineering them so that they use C4 photosynthesis would also have to engineer a version of the Kranz anatomy (Long et al. 2005a). A more successful strategy would be to extend the environmental range of existing C4 crops. Detailed descriptions of the opportunities and possible problems of breeding wheat with more productive biochemical pathways were reviewed by Reynolds et al. (2009). In addition to concentrating on raising the potential yield, plant breeders will have to continue or even increase the attention they give to breeding crops for resistance to pathogens in order to increase the obtainable yield and its stability. This is especially important in the developed countries, many of which are restricting the types and amounts of pesticide permitted to be applied. Increased effort to breed for resistance to pests and pathogens is likely to divert resource applied to breeding for yield potential, reducing the pace at which yield can be improved. 
Furthermore, it is not uncommon for new sources of genetic resistance to confer a yield penalty when crossed with elite material, and it takes some time to overcome this drag effect (Fisher & Edmeades 2010). Another consideration is the role of minor or under-used crop species. A large proportion of human caloric intake depends on a few graminaceous species (rice, wheat, maize), and this will not change significantly. However, these species represent a small fraction of the biological diversity of edible plants, and some species could become more important in the future. For example, cassava is a staple food for millions in tropical and subtropical regions, yet investment in improving this crop is small compared with that in the major ones. Likewise quinoa, a nutritious grain of South American origin, could make a larger contribution with further development of its genetics and agronomy. Globally, the minor, under-used species are likely to have only a small effect on feeding the billions, but locally the impact of higher yields could be significant. In conclusion, there is little reason to suppose that crops are approaching a yield ceiling, and every reason to expect that yields will increase as new varieties are introduced that are adapted to the changed, CO2-enriched environment. A large proportion of the yield increases that will be required to feed the world's population must be delivered by plant breeders. They will need to make advances as quickly as in the past, if not faster, and they will therefore need all the tools that biotechnology can provide: genomics and bioinformatics are likely to be of paramount importance (Phillips 2010). There is an implicit danger in too great a reliance on potential biotechnological breakthroughs to provide a ‘second green revolution’ (Sinclair et al. 2004).
While it is possible that single transgene events could radically alter plant performance in a positive way under field conditions, diverting resources and focus from conventional breeding could slow the rate of yield increase. Improved crop nutrition, particularly the provision of nitrogen fertilizer, has produced huge yield increases in developed economies. For example, for wheat in the UK, the optimum dose of nitrogen fertilizer, now about 200 kg N ha−1, increases yield roughly two-fold. Between 1950 and 1980, average N dressings for winter wheat increased from 50 to 180 kg ha−1, but have risen only slowly since then. Today it is rare for crops in countries with well-developed arable agriculture to receive sub-optimal doses of N fertilizer, and applications are falling slightly as farmers fine-tune their agronomy. However, table 4 clearly shows that arable land in many regions is either being mined for nutrients (which is not sustainable) or is producing suboptimal yields. Fertilizer use in East Asia seems lavish, but more than one crop per year is common in parts of that region. The transition countries (the former Soviet Union) used far less fertilizer during the period of restructuring and reorganization, but use is increasing again. Farmers in sub-Saharan Africa could increase their production considerably if they had access to fertilizer and the technology to use it appropriately: the limitation is probably poverty. In much of the world there is scope to increase fertilizer application per hectare by 50 per cent, and this would produce significant yield gains and slow or even prevent deterioration of land quality.
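The economics behind an 'optimum dose' can be sketched with a diminishing-returns response curve. The Mitscherlich form, the plateau and the prices below are all assumptions, chosen only so that roughly 200 kg N ha−1 doubles yield as in the UK wheat example above; they are not the authors' data.

```python
import math

# Illustrative Mitscherlich N-response curve, parameterized so that
# ~200 kg N/ha roughly doubles wheat yield as described above. The
# curve, prices and parameters are assumptions, not the authors' data.
A, B, k = 8.5, 4.5, 0.011          # t/ha plateau, response range, curvature
grain_price, n_price = 150.0, 0.8  # GBP/t grain, GBP/kg N (assumed)

def yield_t_ha(n_kg):
    return A - B * math.exp(-k * n_kg)

# Economic optimum: apply N up to the dose where the marginal grain
# value equals the marginal N cost, grain_price * dY/dN = n_price.
n_opt = math.log(grain_price * B * k / n_price) / k
print(f"yield at 0 N: {yield_t_ha(0):.1f} t/ha, "
      f"at optimum ({n_opt:.0f} kg N/ha): {yield_t_ha(n_opt):.1f} t/ha")
```

Beyond the optimum the curve flattens, so extra N buys almost no grain; this is why applications in well-developed systems have stopped rising even as yields have not.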
In future, as yields rise, so will nutrient off-takes, and these nutrients will need to be replaced if the agriculture is to be sustainable. There is scope to increase the proportion of N fertilizer that is taken up by plants. Some crops leave a large proportion of the soil-applied fertilizer in the soil at the end of the growing season, where it represents a waste of resource to the farmer and a pollutant of water and/or air. Crops that are grown for their protein content, like bread-making wheat, will need more N in their grain in future if their yields are to continue to rise. More efficient ways to apply this N will be needed so that it is not left in the soil, where it is prone to leaching and a cause of water pollution. Plant breeders and agronomists have started to search for ways to improve uptake and N-use efficiencies. Crop-protection chemicals (herbicides, insecticides and fungicides) have, like nutrients, played a huge part in increasing and sustaining the yields of arable crops in industrialized countries. Oerke & Dehne (1997) analysed literature and field experiments from around the world and calculated the percentage of potential losses prevented by control measures, i.e. the efficacy of control. In 1991–1993, efficacy reached only 34–38% in rice, wheat and maize, but was 43 per cent in soya and potatoes. Efficacy was 55 per cent for weeds, 31 per cent for pests and 23 per cent for diseases. On a regional basis, efficacy was 61 per cent in western Europe, 56 per cent in North America and Oceania, and 37 per cent in the rest of the world. Both potential and actual losses have increased in both absolute and relative terms since the early 1960s, when yields were smaller and cropping less intensive. In 205 German wheat trials between 1985 and 1990, losses owing to diseases increased from 11 per cent when the attainable yield was 4 t ha−1 to 20 per cent when it was 11 t ha−1.
This makes the important point that, as yields rise in the future, so too will the need for excellent crop protection. The major threats to crop protection in the future are the emergence of new resistance in pathogens and the continued availability of chemical and genetic controls. In the past, resistance has arisen where one mode of pathogen control has been used repeatedly and without recourse to alternatives. It happened with aphids in glasshouses and is happening now with the repeated use of glyphosate in rotations of herbicide-resistant crops. This will continue to occur unless there is effective regulation to prevent it. Resistance need not be a serious worry so long as there is a ready supply of alternative crop-protection products or practices that can be applied when the need arises. There is a serious risk that this may not always be so. Chemical controls are available for most of the major weeds and airborne pests and diseases of the major crops. Farmers should be able to cope with most of the major airborne threats if these chemicals, or their replacements, remain available. However, within Europe at least, there is strong pressure to reduce the use of crop-protection chemicals and to restrict the types that can be marketed, often with little consideration of the real risk that they pose to human or animal health or the wider environment. If this trend continues, it could have serious implications for future crop yields. The crop-protection chemistry available to combat soil-borne pests and diseases is less effective. If applied at all, the chemicals usually have a less-than-perfect toxicology and have to be applied to the soil in large doses. Hence, many soil-borne pathogens are currently held in check by crop rotation alone. These pathogens will become an increasing threat as warmer soils increase their multiplication rates, and control by crop rotation will become less effective.
Plant breeding for resistance or tolerance to these problems has been successful in a few cases in the past, but screening lines to find sources of resistance is expensive and time-consuming. It also slows progress in breeding for yield increases, as was evident while tolerance to rhizomania was introduced into sugar beet cultivars. Transgenic approaches may be the way to solve these problems. Certainly, an approach like this will be needed to prevent nematode-induced disorders and fungal root-rots from getting much worse. Will companies that make crop-protection chemicals continue to invest in research and development as they have done in the past? The market value of these products, in real terms, fell by 18 per cent between 1998 and 2003 (Clough 2005), partly as a result of the use of genetically modified (GM) crops. It seems clear that the large agrochemical companies will continue to invest in GM, where the market is growing, and are unlikely to expand their activity in new crop-protection chemicals, especially as environmental concerns continue to be raised in relation to these products. Eventually, we will have to decide whether we want GM or old chemistry. The situation differs from crop nutrition, where poverty prevents access to fertilizers: the most damaging biotic problems are weeds, and farmers in less developed countries often have access to cheap labour that is every bit as effective at controlling weeds as expensive herbicides. In some of the complex and intensive cropping systems used in many developing countries, the problems caused by carry-over effects of some herbicides would make their use counter-productive. Serious difficulties will arise if labour becomes so scarce or so expensive that manual weed control is no longer possible.
The achievable yield of a crop is defined here as the yield that could be produced by a combination of the best germplasm with the best management, in an environment with the current average radiation, temperature and rainfall. It is assumed that the texture of the soil and the ability to irrigate cannot be changed (the provision of an irrigation system is not a short-term farm-management action). There is usually a large difference between the achievable yield and the commercial yield of a crop. This may be estimated as the difference between a benchmark set by a crop model (which usually simulates an experiment in which agronomy is optimized) and a farm or national average. Alternatively, it can be estimated as the difference between yields from crops grown under near-perfectly managed conditions, as in variety tests, and the yields of farm or national crops grown nearby in the same season. These differences are referred to here as the yield gap. Closing this gap has huge effects on productivity and resource use efficiency. How can real-world farmers today achieve yields closer to the potential of current crop varieties, given that they cannot modify soil texture or increase the water supply? Hochman et al. (2009) describe relationships between yield simulations made with a crop-growth model for 334 crops of wheat in Australia and farmers' observations of the yields of the same crops. Farmers' observations were taken either from yield monitors on the combines or from sales records. On average, farmers recorded yields that were 80 per cent of the benchmark value, and most of the variation in yield could be accounted for by considering evapotranspiration alone, so the effects of the farmer choosing an insufficient N fertilizer dose or an inappropriate sowing date were small. Australian farmers have access to inputs that they consider justified on economic grounds, and in the wheat example there was evidence that, with hindsight, their use of N fertilizer was lavish.
Also, their fields are large, so errors owing to the difference between field and cropped areas were small. Despite this, about 20 per cent of the benchmark value was not being harvested or sold. Only a small portion of this could be owing to losses during the harvest. The yield gap for wheat and sugar beet crops in England and Wales is illustrated in figure 4 as the difference between official variety tests and national average yields. Wheat yields have been rising steadily and the gap has remained at about 2.3 t ha−1. Beet yields have been rising rapidly, but the gap has been widening; this situation is almost exactly mirrored in data from Germany, where the gap in sugar yield in the last two decades has averaged 3.5 t ha−1 (Märlander et al. 2003). Figure 4. Yields of (a) sugar beet and (b) wheat in official variety tests in the UK, and national average yields in the same year. Data sources: http://statistics.defra.gov.uk and www.hgca.com.
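The yield-gap arithmetic described above is simple, and it is usually reported in two forms: an absolute gap and the fraction of the benchmark achieved. The sketch below (a hedged illustration; the function name and numbers are ours, not taken from the paper) shows both:

```python
# Hedged sketch of the yield-gap calculation defined in the text.
# benchmark_t_ha: achievable yield from a crop model or variety test;
# farm_t_ha: the farm or national average yield. Values are illustrative.

def yield_gap(benchmark_t_ha: float, farm_t_ha: float) -> tuple[float, float]:
    """Return the absolute gap (t/ha) and the fraction of the benchmark achieved."""
    gap = benchmark_t_ha - farm_t_ha
    return gap, farm_t_ha / benchmark_t_ha

# Roughly the Australian wheat case of Hochman et al. (2009), where farms
# recorded about 80 per cent of the simulated benchmark:
gap, fraction = yield_gap(benchmark_t_ha=5.0, farm_t_ha=4.0)
print(f"gap = {gap:.1f} t/ha, fraction achieved = {fraction:.0%}")
```

The two forms correspond to the two figures quoted above: the wheat gap of about 2.3 t ha−1 is the absolute form, while the 80 per cent of the benchmark recorded by Australian farmers is the fractional form.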
It is often assumed that these gaps are owing to a lack of resources, or to their inappropriate use, by farmers. In some places this may be true, but the gaps have other causes as well.
Yield gaps of approximately 20 per cent are common in developed countries. For example, the ratios of state average yields to variety-test yields for wheat and maize in Kansas between 2004 and 2007 ranged from 0.65 to 0.91, averaging 0.71 for wheat and 0.81 for maize (data from Kansas State University). Pidgeon et al. (2001) estimated the sizes of the yield gap for sugar beet production across Europe during the 1990s using a crop-growth model. At one extreme, France, Belgium, the Netherlands and the UK delivered approximately 75 per cent of the achievable yield, while Poland delivered only 30 per cent. Polish sugar beet yields have risen by about 60 per cent in the last 15 years. This clearly illustrates the effects that rewards, appropriate trading arrangements, and access to modern varieties and machines can have on productivity. Differences between achievable and actual crop yields are sometimes assessed on the basis of global agro-ecological zones (Bruinsma 2003; Kindred et al. 2008). For example, Bruinsma (2003) shows the difference between actual and agro-ecologically attainable yields of wheat for 15 countries. The UK, France and Denmark all produce more than the amount that is apparently attainable, while the USA appears to produce about half of the attainable yield. These agro-ecological zones are too crude for this type of analysis. For example, France and the UK are in the same zone and should have the same attainable yields, but analyses with crop-growth models show that, for beet, achievable yields are about 15 per cent higher in France than in the UK because the weather is more favourable. Despite the apparent stability of the yield gap, it can be narrowed. This is illustrated by the fact that neighbouring farmers can have very different yields. The distribution of the five-year average yields for all sugar beet growers in England is shown in figure 5.
The highest yielders are performing almost as well as the variety tests, while the yields of the poor performers are less than half. This difference has little connection to differences in soil type or region, although it is loosely correlated with the crop's access to water. Neither is it connected to use of inputs, because poor performers often spend more money on seeds, fertilizers and crop-protection chemicals than their more successful counterparts (Lang 2009). Clearly, anything that can be done to improve the performance of the below-average farm will have a large and inexpensive effect on productivity. Differences in beet yield performance between near neighbours were studied in Sweden (Blomquist et al. 2003), where a useful indicator of the farmer's expertise was the penetration resistance of the subsoil: large penetration resistances were indicative of operations that took place at inappropriate times and had deleterious effects on soil structure. Similar situations may be found commonly in mechanized agriculture anywhere. Figure 5. Frequency distribution of five-year average (2004–2008) adjusted root yields of sugar beet contracts, classified as percentages of all growers or as percentages of all tonnage delivered to British Sugar factories. Data from British Sugar plc.
Smil (2005) estimated that by 2050 an enlarged world population with changed dietary requirements would need about 50 per cent more food for people and farm animals. By 2009, the FAO's World Expert Forum had raised this estimate to 70 per cent. We have attempted to analyse what these increased demands will mean for arable agriculture if they are to be realized solely by changing yield per hectare. We have done this by taking an approach analogous to that of Ewert et al. (2005) and Rounsevell et al. (2005). Using the yield statistics from FAO for 1961 to 2007 for a selection of major crops, we calculated linear trends in yield over time for each country containing a zone illustrated in figure 1, except that the whole of the EU was used instead of England and Germany. The linear trends were converted to relative yield changes: the future change in yield was calculated from the relative change at the end of the observation period, i.e. between 2006 and 2007. Where yield has declined (sugar cane in South Africa, Brazil and Australia; wheat in Ukraine and Nigeria), we assumed that the decline could be stopped, and the relative yield change was set at zero. We assumed that each crop would react to [CO2] changes as described in §2: where there are no crop-specific data, we used the average value for the C3 crops or the C4 crops, as appropriate. The assumed values are shown in table 5. All crops were assumed to suffer the ozone-induced yield reduction described in §3: where a crop had no specific value, we assumed the mean reduction of 7.5 per cent (this may be optimistic). Future climate change impacts in the absence of a CO2 effect (table 6) were either taken from crop-specific publications or were means of values (table 2) taken from Nelson et al. (2009). We then calculated the possible yield of various crops assuming three future yield improvement scenarios (fT,P, table 5). The first assumed that potential yield continues to improve at 1 per cent each year (i.e.
current yield trends are maintained). The second scenario assumed that yield trends are modified to 70 per cent of the recent annual gain (Ewert et al. 2005), while the third assumed that present trends are increased to 2 per cent per year, in line with expert opinions cited by Kindred et al. (2008). The current fraction of achievable yield (fTG) was assumed to be 55–80 per cent, and to increase by 10 per cent with the stimulus that is likely to be provided by extra demand for food. The proportions used in individual cases were decided according to Ewert et al. (2005) for wheat, Masutomi et al. (2009) for rice and Jaggard et al. (2007) for sugar beet, with the remaining crops equal to sugar beet. A summary of the most conservative projections to 2050 is shown in table 7; the others are available as the electronic supplementary material.
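The projection logic just described amounts to compound growth scaled by climate-related factors. The following is a simplified sketch of that logic under stated assumptions: the function, the choice of an 8 t/ha example crop and the combination of effects by simple multiplication are ours, not the authors' exact model; the default factors are the values cited in the text.

```python
# Simplified sketch of the yield projection described in the text.
# A yield trend is compounded from 2007 to 2050 and then scaled by the
# assumed [CO2], ozone and climate effects. Multiplicative combination
# of the factors is our assumption, not the paper's published method.

def project_yield_2050(yield_2007_t_ha: float,
                       annual_gain: float = 0.01,   # scenario 1: 1% per year
                       co2_factor: float = 1.13,    # ~+13% for C3 crops at 550 ppm
                       ozone_factor: float = 0.925, # mean 7.5% ozone-induced loss
                       climate_factor: float = 1.0  # crop/region specific in the paper
                       ) -> float:
    years = 2050 - 2007
    trend = (1 + annual_gain) ** years
    return yield_2007_t_ha * trend * co2_factor * ozone_factor * climate_factor

# A hypothetical 8 t/ha C3 crop under scenario 1: the trend alone gives
# about a 53% rise over 43 years, before the CO2 and ozone adjustments.
print(round(project_yield_2050(8.0), 1))
```

Setting `annual_gain=0.02` (the third scenario) makes the trend factor alone exceed 2.3 over the 43 years, which is why the faster-growth scenarios in the text roughly double yields per unit area.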
In the conservative scenario (table 7), the assumptions for most crop–country combinations provide 50 per cent more yield per unit area in 2050 than in 2007. The exceptions tend to be in Russia and Ukraine, where recent changes to the political system and to rural society have caused upheaval. However, European farmers investing there expect that the productivity of arable agriculture will improve rapidly. Surprisingly, another problem is sugar cane, where yields do not seem to be improving, even in Australia, where the knowledge transfer schemes within the sugar industry are second to none. The most serious cause for concern is in Africa, where we have assumed that the yield decline in Nigeria can be halted (and this is far from certain), where drought is likely to get worse if for no other reason than that the growing season will be hotter, and where farming is so unprofitable that the resources needed to make improvements cannot be afforded. The scenarios with faster future growth assumptions suggest that yields per unit area will increase by about 75 per cent or will double, producing more than enough food on a global scale, although not in every region. Although, on our assumptions, yields might improve enough to feed the mean estimates of world population by 2050, there is very little room for complacency or for alternative uses for high-quality land. This review has not considered the production of bio-fuel or natural fibres like cotton, but there could be serious competition for the land resource if they are planned to occupy more land in major food-producing areas. In many areas where bio-fuel already supplies much of the energy (parts of rural China, India, large parts of Africa), there is the risk that insufficient organic matter (OM) is already returned to the soil to prevent soil degradation. This food-versus-bio-fuel question needs to be a subject for research, especially because some bio-fuel production systems are long-term investments.
This review has not addressed four important questions. The first is the extent to which degradation of the soil resource, and of its ability to be productive, is continuing around the world. The principal causes are soil erosion (by wind and water) and salinization (the build-up of salts in the surface layers of the soil to toxic concentrations, owing to inappropriate irrigation and fertilizing practices). Both these problems have the potential to rapidly degrade what would otherwise be very productive sites. These problems tend to occur when the weather in the locality is extreme, and such conditions could become more frequent in the future climate. Methods to greatly reduce the risks of erosion and salinization are well known, but their acceptance by farmers is usually poor because the costs of putting them into practice are continuous while the need for protection is usually sporadic. The uncertainties surrounding the current extent of these problems and their future impact are large and were reviewed by Bruinsma (2003). The second and third problems are more insidious. In many underdeveloped and developing countries, agricultural products have been exported for decades, often without the soil's nutrients being replaced. For example, the large quantities of material produced in Southeast Asia and exported for animal feed have led to the phosphate surplus in parts of western Europe. Whether the mining of soils for nutrients is causing reductions in productivity has not been considered here, but it is a practice that is not sustainable. Agriculture may expand onto fresh land, sometimes because the climate changes and the land becomes suitable for crop growth. This can be an important avenue for increased food production in some parts of the world (Fisher et al. 2005). When land is first cultivated, some of the OM is oxidized to produce CO2. Cultivation speeds up this process, and recently reclaimed land loses OM quickly.
Eventually (after more than 50 years), the soils reach a stable OM state, but in this condition they are usually more difficult to manage productively. We have not attempted to assess the possible impacts of these changes. The fourth problem is irrigation. Crop-growth models often make the assumption that irrigated crops do not suffer water shortages. This is seldom true, because irrigation is usually far from perfect. The extent to which the area of irrigated cropping will be adequately supplied with water in future has not been considered. The area that receives the precipitation is seldom the area that is irrigated, and the lag time between precipitation and use of the water may be years, not months. Some of these issues, as they relate to the timing of flows in major rivers like the Ganges and the Danube, have been considered by Gornall et al. (2010). This review contains many assumptions that represent our best estimates, and some unanswered questions. Some of these assumptions should be placed on firmer footings by research and reviews aimed at the issues that are set out below.
By 2050 the [CO2] is likely to be approximately 550 ppm and FACE experiments show that this will increase yields of C3 crops by about 13 per cent but will not increase the yields of C4 species. It will also decrease water consumption, making rain-fed crops less prone to water stress. However, by then most places will be hotter by 1–3°C. This will speed up the development of existing crops, increasing the yields of indeterminate species that do not flower before harvest (such as sugar beet) and potentially decreasing the yields of determinate types like wheat and rice. The temperature rise will also increase the rate of evapotranspiration, tending to counteract the beneficial effect of CO2 on water consumption. This will be especially serious in those places that are already short of water. However, the changed temperature regime will also present opportunities for agronomists and plant breeders to modify cropping systems to deliver yield improvements by matching varieties to lengthened growing seasons or adopting new crop types, and this is seldom factored into yield projections. Along with changes to [CO2], the [O3] is likely to increase, especially where there is intense industrialization. This will reduce yields by at least 5 per cent. These changes are small in comparison to the challenge ahead and in comparison to increases in crop productivity achieved in the last 50 years. To increase yield by the required amounts farmers will need improved varieties of crop plants with larger potential yields, better tolerance or resistance to pests and diseases, and more efficient extraction and use of water and nutrients. Our assumptions about future possibilities are based on past performance and they are therefore rather uncertain, but no more so than the output of some of the large climate change impact studies that rely almost entirely on multiple simulations. 
There is some evidence that plant breeders are approaching a yield ceiling with the world's major crops, but the smaller-than-expected yield increases of C3 species measured in response to extra CO2 indicate that there are many improvements still to be made. At the same time, farmers in the developed world have good access to fertilizers and crop-protection chemicals and should be in a position to prevent serious degradation of their soil and to control weeds, pests and diseases. However, in a warmer world, soil-borne pests and diseases are likely to become more damaging: chemical control of these problems has been unsuccessful in the past. Transgenic approaches to plant breeding are likely to be needed if robust control of these problems is to be provided. There is almost always a gap between the achievable yield of an agronomic system and the yield that is actually delivered. Part of this gap is inevitable; it relates to the way land areas are reported, to minor inefficiencies at harvest, and to losses during crop storage and transport. Extreme events, like savage storms and floods, also cause part of the yield gap, and these are predicted to become more frequent in the future climate. Nevertheless, even with the best agricultural extension services in developed countries, there are still large differences in performance between neighbouring farmers. It should be possible to close this gap, and there is an even larger opportunity to close the gap in those places where it is more difficult for farmers to make use of the technology that is, in theory at least, available to them. Our assumptions and calculations indicate that it will be possible to increase food production by 50 per cent by 2050. However, this relies heavily on improved technology. Huge increases in crop yields have been made in recent decades, and the same advances cannot be repeated without major changes in crop genetics, introducing novel or foreign genes with large effects on yield.
Therefore in future we will be very reliant on the maintenance of soil fertility and on control mechanisms for pests, diseases and weeds, but we will be especially reliant on successful plant breeding. So long as plant breeding efforts are not hampered and modern agricultural technology continues to be available to farmers, it should be possible to produce yield increases that are large enough to meet some of the predictions of world food needs, even without having to devote more land to arable agriculture. Whether that food will be available to and affordable by all those who need it is another question. The Hadley Centre of the UK Meteorological Office provided the climate simulation data from HadCM3 as part of this project. Rothamsted Research is sponsored by the Biotechnology and Biological Sciences Research Council. Footnotes: One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050'. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. © 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Livestock systems occupy about 30 per cent of the planet's ice-free terrestrial surface area (Steinfeld et al. 2006) and are a significant global asset with a value of at least $1.4 trillion. The livestock sector is increasingly organized in long market chains that employ at least 1.3 billion people globally and directly support the livelihoods of 600 million poor smallholder farmers in the developing world (Thornton et al. 2006). Keeping livestock is an important risk reduction strategy for vulnerable communities, and livestock are important providers of nutrients and traction for growing crops in smallholder systems. Livestock products contribute 17 per cent to kilocalorie consumption and 33 per cent to protein consumption globally, but there are large differences between rich and poor countries (Rosegrant et al. 2009). Livestock systems have both positive and negative effects on the natural resource base, public health, social equity and economic growth (World Bank 2009). Currently, livestock is one of the fastest growing agricultural subsectors in developing countries. Its share of agricultural GDP is already 33 per cent and is quickly increasing. This growth is driven by the rapidly increasing demand for livestock products, this demand being driven by population growth, urbanization and increasing incomes in developing countries (Delgado 2005). The global livestock sector is characterized by a dichotomy between developing and developed countries. Total meat production in the developing world tripled between 1980 and 2002, from 45 to 134 million tons (World Bank 2009). Much of this growth was concentrated in countries that experienced rapid economic growth, particularly in East Asia, and revolved around poultry and pigs. In developed countries, on the other hand, production and consumption of livestock products are now growing only slowly or stagnating, although at high levels.
Even so, livestock production and merchandizing in industrialized countries account for 53 per cent of agricultural GDP (World Bank 2009). This combination of growing demand in the developing world and stagnant demand in industrialized countries represents a major opportunity for livestock keepers in developing countries, where most demand is met by local production, and this is likely to continue well into the foreseeable future. At the same time, the expansion of agricultural production needs to take place in a way that allows the less well-off to benefit from increased demand and that moderates its impact on the environment. This paper attempts a rapid summary of the present-day state of livestock production systems globally in relation to recent trends, coupled with a brief assessment of whether these trends are likely to continue into the future. In §2, the key drivers underpinning past increases in livestock production are outlined, and the status of both intensive and extensive production systems in the developed and developing world is described. Section 3 summarizes the advances in science and technology that have contributed to historical increases in livestock production, and indicates where potential remains, in relation to livestock genetics and breeding, livestock nutrition and livestock disease management. Section 4 contains sketches of a number of factors that may modify both the production and the consumption of livestock products in the future: competition for land and water, climate change, the role of socio-cultural drivers and ethical concerns. (Competition for resources and climate change are treated very briefly: other reviews address these issues comprehensively.) 
The section concludes with a brief discussion of three ‘wildcards’, chosen somewhat arbitrarily, that could cause considerable upheaval to future livestock production and consumption trends: artificial meat, nanotechnology and deepening social concern over new technology. The paper concludes (§5) with a summary outlook on the evolution of livestock production systems over the coming decades and some of the key uncertainties. Human population in 2050 is estimated to be 9.15 billion, with a range of 7.96–10.46 billion (UNPD 2008). Most of the increase is projected to take place in developing countries. East Asia will have shifted to negative population growth by the late 2040s (FAO 2010). In contrast, population in sub-Saharan Africa (SSA) will still be growing at 1.2 per cent per year. Rapid population growth could continue to be an important impediment to achieving improvements in food security in some countries, even when world population as a whole ceases growing sometime during the present century. Another important factor determining demand for food is urbanization. As of the end of 2008, more people live in urban settings than in rural areas (UNFPA 2008), with urbanization rates varying from less than 30 per cent in South Asia to near 80 per cent in developed countries and Latin America. The next few decades will see unprecedented urban growth, particularly in Africa and Asia. Urbanization has considerable impact on patterns of food consumption in general and on demand for livestock products in particular: urbanization often stimulates improvements in infrastructure, including cold chains, and this allows perishable goods to be traded more widely (Delgado 2005). A third driver leading to increased demand for livestock products is income growth. Between 1950 and 2000, there was an annual global per capita income growth rate of 2.1 per cent (Maddison 2003). As income grows, so does expenditure on livestock products (Steinfeld et al. 2006).
Economic growth is expected to continue into the future, typically at rates ranging between 1.0 and 3.1 per cent (van Vuuren et al. 2009). Growth in industrialized countries is projected to be slower than that in developing economies (Rosegrant et al. 2009). The resultant trends in meat and milk consumption in developing and developed countries are shown in table 1, together with estimates for 2015–2050 (FAO 2006; Steinfeld et al. 2006). Differences in the consumption of animal products are much greater than in total food availability, particularly between regions. Food demand for livestock products will nearly double in sub-Saharan Africa and South Asia, from some 200 kcal per person per day in 2000 to around 400 kcal per person per day in 2050. On the other hand, in most OECD countries that already have high calorie intakes of animal products (1000 kcal per person per day or more), consumption levels will barely change, while levels in South America and countries of the former Soviet Union will increase to OECD levels (van Vuuren et al. 2009).
The agricultural production sector is catering increasingly to globalized diets. Retailing through supermarkets is growing at 20 per cent per annum in countries such as China, India and Vietnam, and this will continue over the next few decades as urban consumers demand more processed foods, thus increasing the role of agribusiness (Rosegrant et al. 2009). Global livestock production has increased substantially since the 1960s. Beef production has more than doubled, while over the same time chicken meat production has increased by a factor of nearly 10, made up of increases in both the number of animals and productivity (figure 1). Carcass weights increased by about 30 per cent for both chicken and beef cattle from the early 1960s to the mid-2000s, and by about 20 per cent for pigs (FAO 2010). Carcass weight increases per head for camels and sheep are much smaller, only about 5 per cent over this time period. Increases in milk production per animal have amounted to about 30 per cent for cows' milk, about the same as the increase in egg production per chicken over the same time period (FAO 2010). Figure 1. Global numbers of animals, carcass weights and per-animal production from 1961 to 2008 for (a) chickens (with egg production per bird), (b) bovines (cattle and buffaloes, with milk production per cow), (c) pigs, (d) sheep and goats, and (e) camels. Data from FAO (2010).
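Because total output is the product of animal numbers and per-animal productivity, the relative contributions of the two can be separated. A small hedged sketch (the decomposition is ours, using the approximate factors quoted above):

```python
# Hedged illustration: total meat output = number of animals x carcass
# weight per animal, so the output growth factor is the product of the two
# growth factors. Figures below are the approximate values cited in the text.

def implied_numbers_growth(output_factor: float, weight_factor: float) -> float:
    """Growth in animal numbers implied by output growth and per-head weight growth."""
    return output_factor / weight_factor

# Chicken meat rose ~10-fold while carcass weights rose ~30%, implying
# flock numbers grew roughly 7.7-fold over the same period.
print(round(implied_numbers_growth(10.0, 1.3), 1))  # prints 7.7
```

The same decomposition shows why beef output growth has depended more on herd expansion: a 30 per cent carcass-weight gain accounts for only a small part of a more-than-doubling of production.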
These changes have been accompanied by substantial shifts in the area of arable land, pastures and forest. Arable and pasture lands have expanded considerably since the early 1960s, although the rates of change have started to slow (Steinfeld et al. 2006). Over the last 20 years, large forest conversions have occurred in the Amazon Basin, Southeast Asia and Central and West Africa, while forest area has increased owing to agricultural land abandonment in the Eurasian boreal forest and parts of Asia, North America, and Latin America and the Caribbean (LAC) (GEO4 2007). Considerable expansion of crop land planted to soybean (as a protein source in animal feed) has occurred in Latin America over the last 30 years. Developing countries' share of global use of cereals for animal feed nearly doubled (to 36 per cent) from the early 1980s to the late 1990s (Delgado 2005). Some cropland has been converted to other uses, including urban development around many major cities. Land-use intensity has increased in some places: cereal yields have trebled in East Asia over this time, for example, while yields in sub-Saharan Africa have hardly increased at all. Land-use change is complex and driven by a range of drivers that are regionally specific, although it is possible to see some strong historical associations between land abundance, application of science and technology and land-use change in some regions (Rosegrant et al. 2009). In Latin America, for instance, land abundance has slowed the introduction of new technologies that can raise productivity. Historically, production response has been characterized by differences between systems as well as between regions. Confined livestock production systems in industrialized countries are the source of much of the world's poultry and pig meat production, and such systems are being established in developing countries, particularly in Asia, to meet increasing demand.
Bruinsma (2003) estimates that at least 75 per cent of total production growth to 2030 will be in confined systems, but there will be much less growth of these systems in Africa. While crop production growth will come mostly from yield increases rather than from area expansion, the increases in livestock production will come about more as a result of expansion in livestock numbers in developing countries, particularly ruminants. In the intensive mixed systems, food-feed crops are vital ruminant livestock feed resources. The prices of food-feed crops are likely to increase at faster rates than the prices of livestock products (Rosegrant et al. 2009). Changes in stover production will vary widely from region to region out to 2030 (Herrero et al. 2009). Large increases may occur in Africa mostly as a result of productivity increases in maize, sorghum and millet. Yet stover production may stagnate in areas such as the ruminant-dense mixed systems of South Asia, and stover will need to be replaced by other feeds in the diet to avoid significant feed deficits. The production of alternative feeds for ruminants in the more intensive mixed systems, however, may be constrained by both land and water availability, particularly in the irrigated systems (Herrero et al. 2009). Meeting the substantial increases in demand for food will have profound implications for livestock production systems over the coming decades. In developed countries, carcass weight growth will contribute an increasing share of livestock production growth as expansion of numbers is expected to slow; numbers may contract in some regions. Globally, however, between 2000 and 2050, the global cattle population may increase from 1.5 billion to 2.6 billion, and the global goat and sheep population from 1.7 billion to 2.7 billion (figure 2; Rosegrant et al. 2009). 
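The projected herd expansion implies only a modest compound annual growth rate. The sketch below (our arithmetic, applied to the figures quoted above; the CAGR framing is an assumption, not how Rosegrant et al. present the projection) converts the 2000–2050 totals into an implied annual rate:

```python
# Hedged arithmetic: convert a start/end population into the implied
# compound annual growth rate (CAGR). Cattle figures are those quoted in
# the text from Rosegrant et al. (2009); the CAGR framing is ours.

def implied_annual_growth(start_billion: float, end_billion: float, years: int) -> float:
    """Compound annual growth rate implied by start and end totals."""
    return (end_billion / start_billion) ** (1 / years) - 1

# Global cattle: 1.5 billion (2000) to 2.6 billion (2050), i.e. ~1.1% per year.
print(f"{implied_annual_growth(1.5, 2.6, 50):.2%}")
```

The same calculation for sheep and goats (1.7 to 2.7 billion) gives a little under 1 per cent per year, so the projected expansion, while large in absolute terms, is slower than the historical growth in demand for livestock products.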
Ruminant grazing intensity in the rangelands is projected to increase, resulting in considerable intensification of livestock production in the humid and subhumid grazing systems of the world, particularly in LAC.
Figure 2. Projected numbers to 2050 in the ‘reference world’ of (a)(i) bovines and (a)(ii) sheep and goats, and (b)(i) pigs and (b)(ii) poultry. CWANA, Central and West Asia and North Africa; ESAP, East and South Asia and the Pacific; LAC, Latin America and the Caribbean; NAE, North America and Europe; SSA, sub-Saharan Africa. Data from Rosegrant et al. (2009).
The prices of meats, milk and cereals are likely to increase in the coming decades, dramatically reversing past trends. Rapid growth in meat and milk demand may increase prices for maize and other coarse grains and meals. Bioenergy production is projected to compete for land and water resources, and this will exacerbate the competition for land arising from increasing demands for feed resources. Growing scarcities of water and land will require substantially increased resource use efficiencies in livestock production to avoid adverse impacts on food security and human wellbeing goals. Higher prices can benefit surplus agricultural producers, but can reduce access to food for a larger number of poor consumers, including farmers who do not produce a net surplus for the market. As a result, progress in reducing malnutrition is projected to be slow (Rosegrant et al. 2009). Livestock system evolution in the coming decades is inevitably going to involve trade-offs between food security, poverty, equity, environmental sustainability and economic development. Historically, domestication and the use of conventional livestock breeding techniques have been largely responsible for the increases in yield of livestock products observed over recent decades (Leakey et al. 2009). At the same time, considerable changes in the composition of livestock products have occurred. While past changes in demand for livestock products have been met by a combination of conventional techniques, such as breed substitution, cross-breeding and within-breed selection, future changes are likely to be met increasingly by new techniques. Of the conventional techniques, selection among breeds or crosses is a one-off process, in which the most appropriate breed or breed cross can be chosen, but further improvement can be made only by selection within the population (Simm et al. 2004).
Cross-breeding, widespread in commercial production, exploits the complementarity of different breeds or strains and makes use of heterosis or hybrid vigour (Simm 1998). Selection within breeds of farm livestock produces genetic changes typically in the range of 1–3% per year, relative to the mean of the trait or traits of interest (Smith 1984). Such rates of change have been achieved in practice over the last few decades in poultry and pig breeding schemes in several countries and in dairy cattle breeding programmes in countries such as the USA, Canada and New Zealand (Simm 1998), mostly because of the activities of breeding companies. Rates of genetic change achieved in national beef cattle and sheep populations are often substantially lower than what is theoretically possible. Ruminant breeding in most countries is highly dispersed, and sector-wide improvement is challenging. Rates of genetic change have increased in recent decades in most species in developed countries for several reasons, including more efficient statistical methods for estimating the genetic merit of animals, the wider use of technologies such as artificial insemination and more focused selection on objective traits such as milk yield (Simm et al. 2004). The greatest gains have been made in poultry and pigs, with smaller gains in dairy cattle, particularly in developed countries and in the more industrialized production systems of some developing countries. Some of this has been achieved through the widespread use of breed substitution, which tends to lead to the predominance of a few highly specialized breeds, within which the genetic selection goals may be narrowly focused. While most of the gains have occurred in developed countries, there are considerable opportunities to increase productivity in developing countries.
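To put the 1–3% annual rates of within-breed genetic change into perspective, such rates compound considerably over a couple of decades. The following is an illustrative calculation only (the time horizon of 20 years is our assumption, not a figure from the source):

```python
# Cumulative effect of sustained within-breed selection at the 1-3% per year
# rates cited in the text (Smith 1984). Illustrative arithmetic only.

def cumulative_gain(annual_rate_pct, years):
    """Total percentage change in the trait mean after compounding."""
    return ((1.0 + annual_rate_pct / 100.0) ** years - 1.0) * 100.0

for rate in (1.0, 3.0):
    total = cumulative_gain(rate, 20)
    print(f"{rate:.0f}% per year over 20 years -> {total:.0f}% total gain")
# 1% per year compounds to roughly 22% over 20 years; 3% to roughly 81%.
```

This compounding is why seemingly small annual rates of genetic change, sustained by breeding companies over decades, account for much of the observed yield growth in poultry, pigs and dairy cattle.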
Within-breed selection has not been widely practised, in part because the appropriate infrastructure (such as performance recording and genetic evaluation schemes) is often lacking. Breed substitution or crossing can result in rapid improvements in productivity, but new breeds and crosses need to be appropriate for the environment and to fit within production systems that may be characterized by limited resources and other constraints. High-performing temperate breeds of dairy cow may not be appropriate for some developing-country situations: for example, heat stress and energy deficits make the use of Friesians in smallholdings on the Kenyan coast unsustainable, partly because of low cow replacement rates (King et al. 2006a). There is much more potential in the use of crosses of European breeds with local Zebus that are well-adapted to local conditions. In the future, many developed countries will see a continuing trend in which livestock breeding focuses on attributes in addition to production and productivity, such as product quality, animal welfare, disease resistance and reduced environmental impact. The tools of molecular genetics are likely to have considerable impact in the future. For example, DNA-based tests for genes or markers affecting traits that are currently difficult to measure, such as meat quality and disease resistance, will be particularly useful (Leakey et al. 2009). Another example is transgenic livestock for food production; these are technically feasible, although the technologies associated with livestock are at an earlier stage of development than the equivalent technologies in plants. In combination with new dissemination methods such as cloning, such techniques could dramatically change livestock production. Complete genome maps for poultry and cattle now exist, and these open up the way to possible advances in evolutionary biology, animal breeding and animal models for human diseases (Lewin 2009).
Genomic selection should be able to at least double the rate of genetic gain in the dairy industry (Hayes et al. 2009), as it enables selection decisions to be based on genomic breeding values, which can ultimately be calculated from genetic marker information alone, rather than from pedigree and phenotypic information. Genomic selection is not without its challenges, but it is likely to revolutionize animal breeding. As the tools and techniques of breeding are changing, so are the objectives of many breeding programmes. Although there is little evidence of direct genetic limits to selection for yield, if selection is too narrowly focused there may be undesirable correlated responses (Simm et al. 2004). In dairy cattle, for example, alongside genetic gains in some production traits, there is now considerable evidence of undesirable genetic changes in fertility, disease incidence and overall stress sensitivity, despite improved nutrition and general management (Hare et al. 2006). Trade-offs, such as those between breeding for increased efficiency of resource use, knock-on impacts on fertility and other traits, and environmental impacts such as methane production, are likely to become increasingly important. Whole-system and life-cycle analyses (‘cradle-to-grave’ analyses that assess the full range of relevant costs and benefits) will become increasingly important in disentangling these complexities. New tools of molecular genetics may have far-reaching impacts on livestock and livestock production in the coming decades. But ultimately, whether the tools used are novel or traditional, all depend on preserving access to animal genetic resources. In developing countries, if livestock are to continue to contribute to improving livelihoods and meeting market demands, the preservation of farm animal genetic resources will be critical in helping livestock adapt to climate change and the changes that may occur in these systems, such as shifts in disease prevalence and severity.
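The scope for genomic selection to double rates of gain can be seen in the standard quantitative-genetics expression for annual genetic gain (the notation here is the conventional one from the breeding literature, not taken from the source):

```latex
\Delta G = \frac{i \, r \, \sigma_A}{L}
```

where ΔG is the annual rate of genetic gain, i the selection intensity, r the accuracy of the estimated breeding values, σ_A the additive genetic standard deviation and L the generation interval in years. Genomic selection raises r for young animals that have no phenotypic records of their own and, because selection decisions can be made early in life, sharply reduces L; both effects push ΔG upwards.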
In developed countries, the narrowing animal genetic resource base in many of the intensive livestock production systems demonstrates a need to maintain as broad a range of genetic resources as possible, to provide genetic insurance against future challenges and shocks. Institutional and policy frameworks that encourage the sustainable use of traditional breeds and in situ conservation need to be implemented, and more understanding is needed of how well livestock populations, breeds and genes are matched with the physical, biological and economic landscape (FAO 2007). The nutritional needs of farm animals with respect to energy, protein, minerals and vitamins have long been known, and these have been refined in recent decades. Various requirement determination systems exist in different countries for ruminants and non-ruminants, which were originally designed to assess the nutritional and productive consequences of different feeds for the animal once intake was known. In addition, a considerable body of work exists on the dynamics of digestion, and feed intake and animal performance can now be predicted in many livestock species with high accuracy. A large agenda of work still remains concerning the robust prediction of animal growth, body composition, feed requirements, the outputs of waste products from the animal and production costs. Such work could go a long way towards improving the efficiency of livestock production and meeting the expectations of consumers and the demands of regulatory authorities. Advances in genomics, transcriptomics, proteomics and metabolomics will continue to contribute to the field of animal nutrition and predictions relating to growth and development (Dumas et al. 2008).
Better understanding of the processes involved in animal nutrition could also contribute to improved management of some of the trade-offs that operate at high levels of animal performance, such as those associated with lower reproductive performance (Butler 2000). While understanding of the science of animal nutrition continues to expand and develop, most of the world's livestock, particularly ruminants in pastoral and extensive mixed systems in many developing countries, suffer from permanent or seasonal nutritional stress (Bruinsma 2003). Poor nutrition is one of the major production constraints in smallholder systems, particularly in Africa. Much research has been carried out to improve the quality and availability of feed resources, including work on sown forages, forage conservation, the use of multi-purpose trees, fibrous crop residues and strategic supplementation. There are also prospects for using novel feeds from various sources to provide alternative sources of protein and energy, such as plantation crops and various industrial (including ethanol) by-products. The potential of such feeds is largely unknown. Given the prevalence of mixed crop–livestock systems in many parts of the world, closer integration of crops and livestock in such systems can give rise to increased productivity and increased soil fertility (McIntire et al. 1992). In such systems, smallholders use crops for multiple purposes (food and feed, for example), and well-established crop breeding programmes now target stover quality as well as grain yield in crops such as maize, sorghum, millet and groundnut. Considerable work is under way to address some of the issues associated with various antinutritional factors. These include methods to reduce the tannin content of tree and shrub material, the addition of essential oils that may be beneficial in ruminant nutrition and the use of other additives such as enzymes that can lead to beneficial effects on livestock performance.
Enzymes are widely added to feeds for pigs and poultry, and these have contributed (with breeding) to the substantial gains in feed conversion efficiency that have been achieved. What are the prospects for the future? For the mixed crop–livestock smallholder systems in developing countries, there may be places where these systems will intensify using the inputs and tools of the high-input systems of the developed world. Where intensification of this nature is not possible, there are many ways in which nutritional constraints could be addressed, based on what is locally acceptable and available. One area of high priority for additional exploration, which could potentially have broad implications for tropical ruminant nutrition, is microbial genomics of the rumen, building on current research into the breakdown of lignocellulose for biofuels (NRC 2009). Addressing the nutritional constraints faced by pastoralists in extensive rangeland systems in the developing world is extremely difficult. While there is potential to improve livestock productivity in semi-arid and arid areas, probably the most feasible solutions require integrated application of what is already known, rather than new technology. This could involve dissemination of information from early warning systems and drought prediction, for example, so that herders can better manage the complex interactions between herd size, feed availability and rainfall (NRC 2009). For the developed world, various drivers will shape the future of livestock nutrition. First, there is the continuing search for increased efficiency in livestock production. Margins for livestock farmers are likely to remain volatile and may be affected heavily by changes in energy prices, and increased feed conversion efficiency is one way to try to keep livestock production profitable.
Public health issues will become increasingly important, such as concerns associated with the use of antibiotics in animal production, including microbiological hazards and residues in food (Vallat et al. 2005). In 1997, the World Health Organization recommended that all subtherapeutic medical antibiotic use in livestock production be stopped, and proposed strict regulation and the phasing-out of other subtherapeutic treatments such as growth promotants; but appropriate surveillance and control programmes do not exist in many countries (Leakey et al. 2009). The use of all antibiotics as growth promoters was banned in the European Union (EU) in 2006, but not all countries have made the same choice as the EU. Similarly, certain hormones can increase feed conversion efficiencies, particularly in cattle and pigs, and these are used in many parts of the world. The EU has also banned the use of hormones in livestock production. The globalization of the food supply chain will continue to raise consumer concerns for food safety and quality. Another key driver that will affect livestock nutrition is the need (or in countries such as the UK, the legal obligation) to mitigate greenhouse gas emissions. Improved feeding practices (such as increased amounts of concentrates or improved pasture quality) can reduce methane emissions per kilogram of feed intake or per kilogram of product, although the magnitude of the latter reduction decreases as production increases. Many specific agents and dietary additives have been proposed to reduce methane emissions, including certain antibiotics, compounds that inhibit methanogenic bacteria, probiotics such as yeast culture and propionate precursors such as fumarate or malate that can reduce methane formation (Smith et al. 2007). Whether these various agents and additives are viable for practical use or not, and what their ultimate impacts could be on greenhouse gas mitigation, are areas that need further research.
Animal diseases generate a wide range of biophysical and socio-economic impacts that may be both direct and indirect, and may vary from localized to global (Perry & Sones 2009). The economic impacts of diseases are increasingly difficult to quantify, largely because of the complexity of the effects that they may have, but they may be enormous: the total costs of foot-and-mouth disease in the UK may have amounted to $18–25 billion between 1999 and 2002 (Bio-Era 2008). The last few decades have seen a general reduction in the burden of livestock diseases, as a result of more effective drugs and vaccines and improvements in diagnostic technologies and services (Perry & Sones 2009). At the same time, new diseases have emerged, such as avian influenza H5N1, which have caused considerable global concern about the potential for a change in host species from poultry to man and an emerging global pandemic of human influenza. In the developing world, there have been relatively few changes in the distribution, prevalence and impact of many epidemic and endemic diseases of livestock over the last two decades, particularly in Africa (Perry & Sones 2009), with a few exceptions such as the global eradication of rinderpest. Over this time, there has also been a general decline in the quality of veterinary services. A difficulty in assessing the changing disease status in much of the developing world is the lack of data, a critical area where progress needs to be made if disease diagnostics, monitoring and impact assessment are to be made effective and sustainable. Globally, the direct impacts of livestock diseases are decreasing, but the total impacts may actually be increasing, because in a globalized and highly interconnected world, the effects of disease extend far beyond animal sickness and mortality (Perry & Sones 2009). 
For the future, the infectious disease threat will remain diverse and dynamic, and combating the emergence of completely unexpected diseases will require detection systems that are flexible and adaptable in the face of change (King et al. 2006b). Travel, migration and trade will all continue to promote the spread of infections into new populations. Trade in exotic species and in bush meat is likely to be an increasing cause of concern, along with large-scale industrial production systems, in which conditions may be highly suitable for enabling disease transmission between animals and over large distances (Otte et al. 2007). Over the long term, future disease trends could be heavily modified by climate change. For some vector-borne diseases such as malaria, trypanosomiasis and bluetongue, climate change may shift the geographical areas where the climate is suitable for the vector, but these shifts are not generally anticipated to be major over the next 20 years: other factors may have much more impact on shifting vector distributions in the short term (Woolhouse 2006). Even so, Van Dijk et al. (2010) have found evidence that climate change, especially elevated temperature, has already changed the overall abundance, seasonality and spatial spread of endemic helminths in the UK. This has obvious implications for policy-makers and the sheep and cattle industries, and raises the need for improved diagnosis and early detection of livestock parasitic disease, along with greatly increased awareness and preparedness to deal with disease patterns that are manifestly changing. Climate change may have impacts not only on the distribution of disease vectors. Some diseases are associated with water, and these may be exacerbated by flooding and complicated by inadequate access to water. Droughts may force people and their livestock to move, potentially exposing them to environments with health risks to which they have not previously been exposed.
While the direct impacts of climate change on livestock disease over the next two to three decades may be relatively muted (King et al. 2006b), there are considerable gaps in knowledge concerning many existing diseases of livestock and their relation to environmental factors, including climate. Future disease trends are likely to be heavily modified by disease surveillance and control technologies. Potentially effective control measures already exist for many infectious diseases, and whether these are implemented appropriately could have considerable impacts on future disease trends. Recent years have seen considerable advances in the technology that can be brought to bear against disease, including DNA fingerprinting for surveillance, polymerase chain reaction tests for diagnostics and understanding resistance, genome sequencing and antiviral drugs (Perry & Sones 2009). There are also options associated with the manipulation of animal genetic resources, such as cross-breeding to introduce genes into breeds that are otherwise well-adapted to the required purposes, and the selection via molecular genetic markers of individuals with high levels of disease resistance or tolerance. The future infectious disease situation is going to be different from today's (Woolhouse 2006), and will reflect many changes, including changes in mean climate and climate variability, demographic change and different technologies for combating infectious diseases. The nature of most, if not all, of these changes is uncertain, however. Recent assessments expect little increase in pasture land (Bruinsma 2003; MA 2005). Some intensification in production is likely to occur in the humid–subhumid zones on the most suitable land, where this is feasible, through the use of improved pastures and effective management. 
In the more arid–semiarid areas, livestock are a key mechanism for managing risk, but population increases are fragmenting rangelands in many places, making it increasingly difficult for pastoralists to gain access to the feed and water resources that they have traditionally been able to access. In the future, grazing systems will increasingly provide ecosystem goods and services that are traded, but how future livestock production from these systems may be affected is not clear. The mixed crop–livestock systems will continue to be critical to future food security, as two-thirds of the global population live in these systems. Some of the higher potential mixed systems in Africa and Asia are already facing resource pressures, but there are various responses possible, including efficiency gains and intensification options (Herrero et al. 2010). Increasing competition for land in the future will also come from biofuels, driven by continued concerns about climate change, energy security and alternative income sources for agricultural households. Future scenarios of bioenergy use vary widely (Van Vuuren et al. 2009), and there are large evidence gaps concerning the likely trade-offs between food, feed and fuel in production systems in both developed and developing countries, particularly related to second-generation bioenergy technology. Globally, freshwater resources are relatively scarce, amounting to only 2.5 per cent of all water resources (MA 2005). Groundwater also plays an important role in water supply: between 1.5 and 3 billion people depend on groundwater for drinking, and in some regions water tables are declining unremittingly (Rodell et al. 2009). By 2025, 64 per cent of the world's population will live in water-stressed basins, compared with 38 per cent today (Rosegrant et al. 2002). 
Increasing livestock numbers in the future will clearly add to the demand for water, particularly in the production of livestock feed: one cubic metre of water can produce anything from about 0.5 kg of dry animal feed in North American grasslands to about 5 kg of feed in some tropical systems (Peden et al. 2007). Several entry points for improving global livestock water productivity exist, such as increased use of crop residues and by-products, managing the spatial and temporal distribution of feed resources so as to better match availability with demand and managing systems so as to conserve water resources (Peden et al. 2007). More research is needed related to livestock–water interactions and integrated site-specific interventions, to ensure that livestock production in the future contributes to sustainable and productive use of water resources (Peden et al. 2007). Climate change may have substantial effects on the global livestock sector. Livestock production systems will be affected in various ways (table 2 and see Thornton et al. (2009) for a review), and changes in productivity are inevitable. Increasing climate variability will undoubtedly increase livestock production risks as well as reduce the ability of farmers to manage these risks. At the same time, livestock food chains are major contributors to greenhouse gas emissions, accounting for perhaps 18 per cent of total anthropogenic emissions (Steinfeld et al. 2006). Because it offers relatively few cost-effective options compared with other sectors such as energy, transport and buildings, agriculture has not yet been a major player in the reduction of greenhouse gas emissions. This will change in the future (UNFCCC 2008), although guidance will be needed from rigorous analysis; for example, livestock consumption patterns in one country are often associated with land-use changes in other countries, and these have to be included in national greenhouse gas accounting exercises (Audsley et al. 2009).
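The spread in feed water productivity quoted above (roughly 0.5–5 kg of dry feed per cubic metre of water) translates into an order-of-magnitude difference in the water embedded in a given quantity of feed. A simple sketch, using only the endpoints of the Peden et al. (2007) range cited in the text:

```python
# Water required to grow one tonne of dry animal feed, at the extremes of
# the feed water productivity range quoted in the text (Peden et al. 2007).

def water_for_feed_m3(feed_kg, productivity_kg_per_m3):
    """Cubic metres of water needed to produce `feed_kg` of dry feed."""
    return feed_kg / productivity_kg_per_m3

low_productivity = water_for_feed_m3(1000, 0.5)   # e.g. North American grasslands
high_productivity = water_for_feed_m3(1000, 5.0)  # e.g. some tropical systems

print(f"water per tonne of feed: {low_productivity:.0f} m3 "
      f"vs {high_productivity:.0f} m3")  # a tenfold difference
```

The tenfold gap between systems is what makes the entry points listed above, such as better matching feed availability with demand, potentially so significant for global livestock water productivity.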
Climate change will have severely deleterious impacts in many parts of the tropics and subtropics, even for small increases in the average temperature. This is in contrast to many parts of the temperate zone; at mid- to high latitudes, agricultural productivity is likely to increase slightly for local mean temperature increases of 1–3°C (IPCC 2007). There is a burgeoning literature on adaptation options, including new ways of using weather information to assist rural communities in managing the risks associated with rainfall variability and the design and piloting of livestock insurance schemes that are weather-indexed (Mude 2009). Many factors determine whether specific adaptation options are viable in particular locations. More extensive adaptation than is currently occurring is needed to reduce vulnerability to future climate change, and adaptation has barriers, limits and costs (IPCC 2007). Similarly, there is a burgeoning literature on mitigation in agriculture. There are several options related to livestock, including grazing management and manure management. Global agriculture could offset 5–14% (with a potential maximum of 20%) of total annual CO2 emissions for prices ranging from $20 to $100 per t CO2 eq (Smith et al. 2008). Of this total, the mitigation potential of various strategies for the land-based livestock systems in the tropics amounts to about 4 per cent of the global agricultural mitigation potential to 2030 (Thornton & Herrero submitted), which could still be worth of the order of $1.3 billion per year at a price of $20 per t CO2 eq. Several of these mitigation options also have adaptive benefits, such as growing agroforestry species that can sequester carbon, which can also provide high-quality dietary supplements for cattle. Such carbon payments could represent a relatively large amount of potential income for resource-poor livestock keepers in the tropics.
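The carbon payment figure quoted above can be cross-checked with simple arithmetic: at $20 per t CO2 eq, an income of the order of $1.3 billion per year implies a mitigation potential of roughly 65 Mt CO2 eq per year for tropical land-based livestock systems. A back-of-envelope sketch (both input numbers come from the text above; the calculation is ours and assumes a simple linear price):

```python
# Back-of-envelope check of the carbon payment figure cited in the text:
# annual value = mitigation potential (t CO2-eq/yr) x carbon price ($/t CO2-eq).

price_usd_per_t = 20.0    # $20 per t CO2-eq (lower end of the Smith et al. 2008 range)
annual_value_usd = 1.3e9  # ~$1.3 billion per year (Thornton & Herrero submitted)

implied_potential_mt = annual_value_usd / price_usd_per_t / 1e6  # in Mt CO2-eq/yr
print(f"implied mitigation potential: {implied_potential_mt:.0f} Mt CO2-eq per year")
```

At the upper end of the price range ($100 per t CO2 eq), the same mitigation potential would of course be worth five times as much, which is why carbon prices matter so much for the incomes of resource-poor livestock keepers.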
In the more intensive systems, progress in mitigating greenhouse gas emissions from the livestock sector could be made largely through increases in the efficiency of production using available technology, and this may involve some shifting towards monogastric species. Social and cultural drivers of change are having profound effects on livestock systems in particular places, although it is often unclear how these drivers play out in relation to impacts on livestock and livestock systems. Livestock have multiple roles in human society. They contribute substantially and directly to food security and to human health. For poor and under-nourished people, particularly children, the addition of modest amounts of livestock products to their diets can have substantial benefits for physical and mental health (Neumann et al. 2003). Livestock's contribution to livelihoods, particularly those of the poor in developing countries, is also well recognized. Livestock generate income by providing both food and non-food products that the household can sell in formal or informal markets. Non-food products such as wool, hides and skins are important sources of income in some regions: wool production in the high-altitude tropical regions of Bolivia, Peru or Nepal, for example. Hides and skins from home-slaughtered animals are rarely processed, as the returns may not justify the costs involved (Otte & Upton 2005). Livestock acquisition as a pathway out of poverty has been documented by Kristjanson et al. (2004) in western Kenya, for example. Livestock provide traction mainly in irrigated, densely populated areas, and allow cropping in these places. They provide nutrients in the form of manure, a key resource particularly for the mixed systems of sub-Saharan Africa.
Livestock also serve as financial instruments, by providing households with an alternative for storing savings or accumulated capital, and they can be sold and transformed into cash as needed and so also provide an instrument of liquidity, consumption smoothing and insurance. For some poorer households, livestock can provide a means of income diversification to help deal with times of stress. In addition to their food security, human health, economic and environmental roles, livestock have important social and cultural roles. In many parts of Africa, social relationships are partly defined in relation to livestock, and the size of a household's livestock holding may confer considerable social importance on it. The sharing of livestock with others is often a means to create or strengthen social relationships, through their use as dowry or bride price, as allocations to other family members and as loans (Kitalyi et al. 2005). Social status in livestock-based communities is often associated with leadership and access to (and authority over) natural, physical and financial resources. Livestock may have considerable cultural value in developed countries also. Local breeds have often been the drivers of specific physical landscapes (e.g. extensive pig farming in the Mediterranean oak forests of the Iberian peninsula); as such, local breeds can be seen as critical elements of cultural networks (Gandini & Villa 2003). Compared with the biophysical environment, the social and cultural contexts of livestock and livestock production are probably not that well understood, but these contexts are changing markedly in some places. External pressures are being brought to bear on traditional open-access grazing lands in southern Kenya, for example, such as increasing population density and increasing livestock–wildlife competition for scarce resources. 
At the same time, many Maasai feel that there is no option but to go along with subdivision, a process already well under way in many parts of the region, because they see it as the only way to gain secure tenure of their land and water, even though they are well aware that subdivision is likely to harm their long-term interests and wellbeing (Reid et al. 2008). There are thus considerable pressures on Maasai communities and societies, as many households become more connected to the cash economy, access to key grazing resources becomes increasingly problematic, and the cultural and kinship networks that have supported them in the past increasingly feel the strain. Inevitably, the cultural and social roles of livestock will continue to change, and many of the resultant impacts on livelihoods and food security may not be positive. Social and cultural changes are likewise taking place elsewhere. In European agriculture, there is already heightened emphasis on, and economic support for, the production of ecosystem goods and services, and this will undoubtedly increase in the future (Deuffic & Candau 2006). In the uplands of the UK, recent social changes have seen increasing demand for leisure provision and access to rural areas. At the same time, there are increasing pressures on the social functions and networks associated with the traditional farming systems of these areas, which have high cultural heritage value and considerable potential to supply the public goods that society is likely to demand in the future (Burton et al. 2005). Ethical concerns may play an increasing role in affecting the production and consumption of livestock products.
Recent high-profile calls to flock to the banner of global vegetarianism, backed by exaggerated claims of livestock's role in anthropogenic global greenhouse gas emissions, serve mostly to highlight the need for rigorous analysis and credible numbers that can help inform public debate about these issues: there is much work to do in this area. But science has already had a considerable impact on some ethical issues. Research into animal behaviour has provided evidence of animals' motivations and their mental capacities, which by extension provides strong support for the notion of animal sentience (i.e. animals' capacity to sense and feel), which in turn has provided the basis for EU and UK legislation that enshrines the concept of animal sentience in law (Lawrence 2009). European government strategies have recently tended to move away from legislation as the major mechanism for fostering animal welfare improvements, towards a greater concentration on collective action by all parties with an interest in animal welfare, including consumers (Lawrence 2008). There is conflicting evidence as to the potential for adding value to animal products through higher welfare standards. Common questions remain regarding the robustness of consumers' preferences for welfare-branded, organic and local food, particularly in times of considerable economic uncertainty. While animal welfare legislation differs between countries, animal welfare is an increasingly global concern. 
Part of this probably arises as a result of the forces of globalization and international trade, but in many developing countries the roots of animal welfare may be different and relate more to the value that livestock have for different societies: as the sole or major source of livelihood (in some marginal environments in SSA, for example), as the organizing principle of society and culture (for the Maasai, for instance), and as investment and insurance vehicles and sources of food, traction and manure (Kitalyi et al. 2005). Improving animal welfare need not penalize business returns and indeed may increase profits. For instance (and as noted above), measurements of functional traits indicate that breeding dairy cows for milk yield alone is unfavourably correlated with fertility and health traits (Lawrence et al. 2004). The most profitable bulls are those that produce daughters that yield rather less milk but are healthier and longer lived: the costs of producing less milk can be more than matched by the benefits of decreased health costs and a lower herd replacement rate. Identifying situations where animal welfare can be increased along with profits, and quantifying these trade-offs, requires integrated assessment frameworks that can handle the various and often complex inter-relationships between animal welfare, management and performance (Lawrence & Stott 2009). There is considerable uncertainty related to technological development and to social and cultural change. This section briefly outlines an arbitrary selection of wildcards, developments that could have enormous implications for the livestock sector globally, either negatively (highly disruptive) or positively (highly beneficial). The first of these, in vitro (cultured) meat, may not be a wildcard at all from a technological point of view, as its development is generally held to be perfectly feasible (Cuhls 2008), and indeed research projects on it have been running for a decade already. 
There are likely to be some issues associated with social acceptability, although presumably meat ‘grown in vats’ could be made healthier by changing its composition and made much more hygienic than traditional meat, as it would be cultured in sterile conditions. In vitro meat could potentially bypass many of the public health issues that are currently associated with livestock-based meat. The development and uptake of in vitro meat on a large scale would unquestionably be hugely disruptive to the traditional livestock sector. It would raise critical issues regarding livestock keeping and the livelihoods of the resource-poor in many developing countries, for example. On the other hand, massive reductions in livestock numbers could contribute substantially to the reduction of greenhouse gases, although the net effects would depend on the resources needed to produce in vitro meat. There are many issues that would need to be considered, including the effects on rangelands of substantial decreases in the number of domesticated grazing animals, and some of the environmental and socio-cultural impacts would not be positive. There could also be impacts on the amenity value of landscapes with no livestock in some places. Commercial in vitro meat production is not likely to happen any time soon, however: at least another decade of research is needed, and then there will still be the challenges of scale and cost to be overcome. A second wildcard is nanotechnology, an extremely dynamic field of research and application associated with particles of 1–100 nm in size (the size range of many molecules). Some particles of this size have peculiar physical and chemical properties, and it is such peculiarities that nanotechnology seeks to exploit. Nanotechnology is a highly diverse field, and includes extensions of conventional device physics, completely new approaches based upon molecular self-assembly and the development of new materials with nanoscale dimensions. 
There is even speculation as to whether matter can be directly controlled at the atomic scale. Some food and nutrition products containing nanoscale additives are already commercially available, and nanotechnology is in widespread use in advanced agrichemicals and agrichemical application systems (Brunori et al. 2008). The next few decades may well see nanotechnology applied to various areas in animal management. Nanosized, multipurpose sensors are already being developed that can report on the physiological status of animals, and advances can be expected in drug delivery methods using nanotubes and other nanoparticles that can be precisely targeted. Nanoparticles may be able to affect nutrient uptake and induce more efficient utilization of nutrients for milk production, for example. One possible approach to animal waste management involves adding nanoparticles to manure to enhance biogas production from anaerobic digesters or to reduce odours (Scott 2006). There are, however, considerable uncertainties concerning the possible human health and environmental impacts of nanoparticles, and these risks will have to be addressed by regulation and legislation: at present, for all practical purposes, nanotechnology is unregulated (Speiser 2008). Brunori et al. (2008) see nanotechnology as potentially a highly disruptive driver, and the ongoing debate as to the pros and cons is currently not well informed by objective information on the risks involved: much more information is required on its long-term impacts. Nanotechnology could redefine the entire notion of agriculture and many other human activities (Cuhls 2008). Much evidence points to a serious disconnect between science and public perceptions. Marked distrust of science is a recurring theme in polls of public perceptions of nuclear energy, genetic modification and, spectacularly, anthropogenic global warming. 
One of several key reasons for this distrust is a lack of credible, transparent and well-communicated risk analyses associated with many of the highly technological issues of the day. This lack was noted above in relation to nanotechnology, but it applies in many other areas as well. The tools of science will be critical for bringing about food security and wellbeing for a global population of more than nine billion people in 2050 in the face of enormous technological, climatic and social challenges. Technology is necessary for the radical redirection of global food systems that many believe is inevitable, but technology alone is not sufficient: the context has to be provided whereby technology can build knowledge, networks and capacity (Kiers et al. 2008). One area where there are numerous potential applications to agriculture is the use of transgenic methodology to develop new or altered strains of livestock. These applications include ‘… improved milk production and composition, increased growth rate, improved feed usage, improved carcass composition, increased disease resistance, enhanced reproductive performance, and increased prolificacy’ (Wheeler 2007, p. 204). Social concerns could seriously jeopardize even the judicious application of such new science and technology, and with it the enormous economic, environmental and social benefits it could provide. If this is to be avoided, technology innovation has to take full account of the health and environmental risks to which new technology may give rise. Serious and rapid attention needs to be given to risk analysis and to communications policy. What is the future for livestock systems globally? Several assessments agree that increases in the demand for livestock products, driven largely by human population growth, income growth and urbanization, will continue for the next three decades at least. 
Globally, increases in livestock productivity in the recent past have been driven mostly by animal science and technology, and scientific and technological developments in breeding, nutrition and animal health will continue to contribute to increasing potential production and further efficiency and genetic gains. Demand for livestock products in the future, particularly in developed countries, could be heavily moderated by socio-economic factors such as human health concerns and changing socio-cultural values. In the future, livestock production is likely to be increasingly characterized by differences between developed and developing countries, and between highly intensive production systems on the one hand and smallholder and agropastoral systems on the other. How the various driving forces will play out in different regions of the world in the coming decades is highly uncertain, however. Of the many uncertainties, two seem over-arching. First, can future demand for livestock products be met through sustainable intensification in a carbon-constrained economy? Some indications have been given above: pressures on natural resources such as water and land are increasing; the increasing demand for livestock products will give rise to considerable competition for land between food and feed production; increasing industrialization of livestock production may lead to challenging problems of pollution of air and water; the biggest impacts of climate change are going to be seen in livestock and mixed systems in developing countries where people are already highly vulnerable; the need to adapt to climate change and to mitigate greenhouse gas emissions will undoubtedly add to the costs of production in different places; and the projected growth in biofuels may have substantial additional impacts on competition for land and on food security. A second over-arching uncertainty is whether future livestock production will have poverty alleviation benefits. 
The industrialization of livestock production in many parts of the world, both developed and developing, is either complete or continuing apace. The increasing demand for livestock products continues to be a key opportunity for poverty reduction and economic growth, although the evidence of the last 10 years suggests that only a few countries have taken advantage of this opportunity effectively (Dijkman 2009). Gura (2008) documents many cases where the poor have been disadvantaged by the industrialization of livestock production in developing countries, as well as highlighting the problems and inadequacies of commercial, industrial breeding lines, once all the functions of local breeds are genuinely taken into account. The future role of smallholders in global food production and food security in the coming decades is unclear. Smallholders currently are critical to food security for the vast majority of the poor, and this role is not likely to change significantly in the future, particularly in SSA. But increasing industrialization of livestock production may mean that smallholders continue to miss out on the undoubted opportunities that exist. There is no lack of suggestions as to what is needed to promote the development of sustainable and profitable smallholder livestock production: significant and sustained innovation in national and global livestock systems (Dijkman 2009); increasing regulation to govern contracts along food commodity chains, including acceptance and guarantee of collective rights and community control (Gura 2008); and building social protection and strengthening links to urban areas (Wiggins 2009). Probably all of these things are needed, headed by massive investment, particularly in Africa (World Bank 2009). It is thought that humankind's association with domesticated animals goes back to around 10 000 BC, a history just about as long as our association with domesticated plants. 
What is in store for this association during the coming century is far from clear, although it is suffering stress and upheaval on several fronts. The global livestock sector may well undergo radical change in the future, but the association is still critical to the wellbeing of millions, possibly billions, of people: in many developing countries, at this stage in history, it has no known, viable substitute. I am very grateful to the late Mike Gale and Maggie Gill for initiating this work and for advice, and to Michael Blummel, Phil Garnsworthy, Olivier Hanotte, Alistair Lawrence, Brian Perry, Wolfgang Ritter, Mark Rosegrant, Geoff Simm, Philip Skuce and Bill Thorpe, who all provided key inputs and information. Three anonymous reviewers provided helpful comments and suggestions on an earlier draft. Remaining errors and omissions are my responsibility entirely.

Footnotes: One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050’. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy.

© 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
According to the United Nations Department of Economic and Social Affairs (UN-DESA 2009), the world population is expected to grow from the present 6.8 billion people to about 9 billion by 2050, mostly in developing countries (from 5.6 to 7.9 billion). With a growing world population and recurrent problems of hunger and malnutrition plaguing many communities, e.g. in South Asia and Sub-Saharan Africa, food security is of major societal and international concern. Fishery resources are an important source of proteins, vitamins and micronutrients, particularly for many low-income populations in rural areas, and their sustainable use for future global food security has garnered significant public policy attention. In the context of variable and changing ecosystems, and despite some progress, the challenges of maintaining or restoring fisheries sustainability and stock sizes, reducing environmental impact and degradation, and improving local and global food security remain immense. Marine capture fisheries are a critical component of this picture. Their production is close to the maximum ecosystem productivity (NRC 2006), cannot be increased substantially in the future and could decline if not properly managed, leaving the world to solve a significant new food deficit. The 2002 World Summit on Sustainable Development (WSSD) called on States to ‘maintain or restore stocks to levels that can produce the maximum sustainable yield with the aim of achieving these goals for depleted stocks on an urgent basis and, where possible, not later than 2015’. The world is far from meeting this target, and this paper addresses the underlying issues and considers the future implications. Current global fisheries production has been increasing since records commenced, except during the two World Wars (figure 1).

Figure 1. World capture and aquaculture production. Black, China; grey, world excluding China. Source: FAO (2009).
According to the Food and Agriculture Organization of the United Nations (FAO 2009, p. 3), fisheries produced close to 144 million tonnes of fish (live weight equivalent) in 2006, of which 82 million tonnes were from marine capture fisheries (figure 2), 10 million tonnes from inland capture fisheries, 32 million tonnes from inland aquaculture and 20 million tonnes from marine aquaculture. Aquaculture grew faster than any other food-producing sector and, if this growth is sustained, will continue to augment capture fisheries production in response to global demand, supplying more than 50 per cent of aquatic food consumption by 2015 (Bostock et al. 2010). High seas catches have increased from below 2 million tonnes in 1950 to more than 10 million tonnes (FAO 2009, p. 14), and the taxonomy of the 133 species caught indicates growing deep-water fishing, with reported catches close to 4 million tonnes. Altogether, and despite reporting uncertainties, the world catch of marine capture fisheries may well have reached the upper limit of 100 million tonnes proposed by Gulland (1971). The ceiling for inland capture fisheries is highly uncertain, although there is some indication that additional growth is possible (FAO 2009, p. 8; Welcomme et al. 2010).

Figure 2. World capture fisheries production. Dark grey, China; light grey, world excluding China. Source: FAO (2009).
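As a quick arithmetic check (an illustrative sketch added here, not part of the FAO analysis), the four sub-sector figures quoted above do sum to the 144 million tonne total:

```python
# Illustrative check of the 2006 production breakdown quoted in the text
# (figures in million tonnes, live weight equivalent; FAO 2009).
production_mt = {
    "marine capture": 82,
    "inland capture": 10,
    "inland aquaculture": 32,
    "marine aquaculture": 20,
}
total_mt = sum(production_mt.values())
print(total_mt)  # -> 144, matching the total quoted in the text
```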
In addition, Illegal, Unreported and Unregulated (IUU) fishing is the major source of undocumented catches (FAO 2001). Agnew et al. (2009) estimated present IUU catches at 11–26 million tonnes, worth 10–20 billion USD annually. Information on IUU fishing is increasing as societal concern grows, and as international and national governance mechanisms strengthen. Nonetheless, trends are not known, and the uncertainty in the estimates is substantial. Discarding of unwanted catch in 1990–2000 has been estimated by FAO at 9.5 million tonnes (Kelleher 2005), or about 10 per cent of reported landings. Some studies have indicated that discarding rates may be substantially greater, regionally or globally (Harrington et al. 2005; Davies et al. 2009), but more recent estimates are not available. Discards appear to have decreased from about 27 million tonnes in 1980–1990 (Alverson et al. 1994), owing to bycatch reduction efforts as well as an increasing use of bycatch for local consumption, aquaculture feeds, etc. The fact that the ceiling in marine fisheries production has been reached is illustrated by the state of marine resources. Relative to the level that would support maximum sustainable yield, 20 per cent of targeted fishery resources are moderately exploited, 52 per cent are fully exploited with no further increases anticipated, 19 per cent are overexploited, 8 per cent are depleted and 1 per cent are recovering from previous depletion (FAO 2009, p. 7). Similar figures have been compiled for US and Canadian domestic fisheries, although a recent study of 10 well-studied ecosystems revealed five in which fishing pressure is declining owing to increased management (Worm et al. 2009). However, in European Community waters, more than 80 per cent of stocks are overexploited or depleted (European Commission 2007). 
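Two of the figures above can be cross-checked with simple arithmetic (my own illustrative sketch, using only numbers quoted in the text): the five stock-status shares should sum to 100 per cent, and discards of 9.5 million tonnes at roughly 10 per cent of reported landings imply landings in the region of 95 million tonnes:

```python
# Stock-status shares quoted in the text (FAO 2009, p. 7), in per cent.
status_shares = {
    "moderately exploited": 20,
    "fully exploited": 52,
    "overexploited": 19,
    "depleted": 8,
    "recovering": 1,
}
print(sum(status_shares.values()))  # -> 100

# Discards of 9.5 Mt at ~10% of reported landings imply landings near 95 Mt.
implied_landings_mt = 9.5 / 0.10
print(round(implied_landings_mt))  # -> 95
```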
The first overview study of the state of marine fisheries resources by country (Garcia 2009a,b), using FAO statistics for 1950–2006, confirms that globally, the maximum average level of bottom fish and small pelagic fish production has been reached within the last decade. Catches of crustaceans and cephalopods are still growing, perhaps owing to reduced stocks of their predators but also owing to increased targeting because of their high price. At national or sub-national level, the analysis showed that during the last decade, 30 per cent of fishing areas were still ‘growing’ (increasing production), 30 per cent were ‘mature’ (stagnating production) and 40 per cent were ‘senescent’ (decreasing production, some of which for many decades; figure 3).

Figure 3. Chronology of resource development phases in 169 national fishing areas (1950–2006). Source: Garcia (2009a,b).
It is also important to weigh the state of stocks by their importance in terms of maximum potential. The data are not available to fully explore this relationship, but table 1 and figure 4, covering about 75 per cent of recent landings (average 1998–2002), indicate that 14.1 per cent of world production (about 11 million tonnes) comes from stocks considered underexploited or moderately exploited, 57.3 per cent (about 41 million tonnes) from fully exploited stocks, and 28.6 per cent (about 22 million tonnes) from stocks that are overexploited, depleted or recovering.
Figure 4. Distribution of annual landings (average 1998–2002) by category of resource state in FAO terminology. U, undeveloped; M, moderately developed; F, fully developed; O, overfished; D/R, depleted and recovering. Data source: FAO (2009).
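The shares and rounded tonnages above can also be checked for internal consistency: each (share, tonnage) pair implies a total covered tonnage, and the three implied totals cluster around 72–78 million tonnes (a rough sketch using only the numbers quoted in the text; the exact covered tonnage is not stated):

```python
# (share of world production in %, rounded landings in million tonnes), as quoted.
categories = {
    "underexploited or moderately exploited": (14.1, 11),
    "fully exploited": (57.3, 41),
    "overexploited, depleted or recovering": (28.6, 22),
}
# Each pair implies a total for the landings covered by the analysis.
implied_totals = {name: tonnage / (share / 100.0)
                  for name, (share, tonnage) in categories.items()}
for name, total in implied_totals.items():
    print(f"{name}: implied covered landings ~ {total:.0f} Mt")
```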
While producing food, employment, livelihood and wealth, fisheries can also generate a significant level of environmental impact on target and non-target resources as well as on sensitive habitats (NRC 2002; Lokkeborg 2005; FAO 2008). Marine debris from lost fishing gear can continue to fish and entangle a wide variety of wildlife. Destructive fishing and IUU fishing aggravate impacts of fishing on the food web and can alter ecosystem structure and function, and ultimately productivity and resilience to the impacts of other drivers such as climate change. This large and crucial subject cannot be dealt with here in any detail, but it is clear that without serious efforts to define and reduce such impacts, marine ecosystems will risk much greater negative pressure, and policy conflicts between conservation and fisheries could reduce the scope to develop sustainable and productive fisheries. The structural and functional diversity of the sector needs to be borne in mind when considering its trends and future scenarios. Relevant typological dimensions include:
There are no complete or consistent time series, but according to FAO (2009, SOFIA 1990–2008), the global fleet, all vessel sizes included, doubled from about two million vessels in the 1970s to some four million in the 2000s. The largest number operates from Asia. According to FAO (2009), the size of the Chinese fleet of vessels over 100 tonnes in 1996 was approximately 15 000. Adding these to the vessels registered by the Lloyds Maritime Information Services (LMIS; FAO 1999, p. 73) leads to an estimate of the world fleet of 43 000–45 000 vessels over 100 tonnes in 1996. No data have been found about its evolution since then, but FAO (2009, fig. 18) indicates that the world fleet as now registered in the Lloyds database has remained practically identical in number and tonnage. About 500 new industrial vessels were built every year in the 1950s, growing to about 2000 per year in the mid-1970s, then decreasing rapidly to about 300 per year in the early 2000s (Garcia & Grainger 2005) and to 50 vessels per year in 2007 (FAO 2009, fig. 19). Recent data seem to confirm that the period of large investment in large-size vessels, which peaked around the mid-1980s (Garcia & Grainger 2005), is largely over. However, the global fleet capacity index (fishing power) appears to have increased by a factor of six between 1970 and 2005, a period during which global harvesting productivity decreased by the same factor (World Bank 2009). Food security is achieved when ‘all people, at all times, have physical, social and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life’ (FSN Forum 2007). Fish have always been an important component of human food, particularly around lakes, rivers, deltas, floodplains and coastal areas, and particularly on small islands. This importance has spread globally with the development of trade. 
Fisheries may contribute to food security in two ways: (i) directly, as a source of essential nutrients; and (ii) indirectly, as a source of income to buy food. Because of their contribution to total global output, and to the numbers of people involved in fishing, marine capture fisheries play a substantial role in both respects. Fish is highly nutritious, rich in essential micronutrients, minerals, essential fatty acids and proteins, and represents an excellent supplement to nutritionally deficient cereal-based diets. It provides more than 1.5 billion people, particularly in low-income food-deficit countries, with almost 20 per cent of their average per capita intake of animal protein (FAO 2009). According to Worldfish, 400 million poor people depend critically on fish for their food, particularly in small island states, Bangladesh, Ghana and the lower Mekong basin (FAO 2007; Hortle 2007; Laurenti 2007). Of the 144 million tonnes produced in 2006 by capture fisheries (53%) and aquaculture (47%), about 110 million tonnes were used for food directly and 33 million tonnes indirectly, through fish meal used for aquaculture, cattle, pig and poultry farming. This represented a record level of per capita supply of 16.7 kg (13.6 kg excluding China and 13.8 kg in low-income food-deficit countries). Outside China, per capita supply has shown a modest growth rate of about 0.5 per cent per year since 1992. Since 1950, increases in fishery production have managed to offset demographic growth, gradually improving food supply from aquatic resources (figure 5).

Figure 5. World fish utilization and supply. Dark grey bars, food; light grey bars, non-food uses; light grey line, population; dark grey line, food supply. Source: FAO (2009).
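As a rough plausibility check on these supply figures (my own arithmetic, not FAO's), dividing the 110 million tonnes used directly for food by the 16.7 kg per capita supply implies a population of about 6.6 billion, consistent with the world population in 2006:

```python
# Figures quoted in the text (FAO 2009).
food_fish_kg = 110e6 * 1000.0    # 110 million tonnes, converted to kilograms
per_capita_kg = 16.7             # record per capita supply for 2006
implied_population = food_fish_kg / per_capita_kg
print(round(implied_population / 1e9, 1))  # -> 6.6 (billion)
```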
The fisheries and aquaculture sector's contribution to gross domestic product (GDP) typically ranges from around 0.5 to 2.5 per cent, but may exceed 7 per cent in some countries, a level similar to agricultural sector GDP. Growth in sector employment, particularly in the developing world, has largely outpaced that of agriculture and has occurred mainly in small-scale fisheries and in aquaculture. Capture fisheries provide employment and income directly and indirectly, e.g. through boat building, equipment and maintenance, vessel supplies, fish processing and trade. Some 42 million people work directly in the sector, the great majority in developing countries. Adding the related activities, the sector is estimated to support more than 500 million livelihoods (Worldfish 2009), much of which is associated with marine capture fisheries. Moreover, fishery trade (including the sale of fishing agreements) is particularly important as a source of foreign currency for many developing countries. The sector also has particular significance for small island states. However, fisheries can also incur substantial costs to society, in lost resource rent, estimated at around 50 billion USD by the World Bank (2009), and/or in direct subventions, including capital support and fuel subsidies, amounting to tens of billions of USD per year. Poverty is one of the sources of fishery resource degradation in many rural areas of the developing world and is an obvious constraint to achieving food security. However, healthy fisheries may contribute to poverty reduction through the generation of revenues and wealth creation, operating as a socio-economic ‘lift’ at community level and contributing to economic growth at national level. If well managed, fisheries can maintain a sustainable stream of economic benefits in the community, and in some cases can operate as a safety net when needed, e.g. for people displaced from their area by serious drought (e.g. 
a collapsing agriculture sector) or by civil wars. However, though regionally important, particularly in Asia, fisheries are for the most part a small socio-economic sector, and cannot alone counteract poverty processes. Thus, impacts on poverty will be complementary to other sectors' contributions in nationwide poverty-reduction programmes (Béné et al. 2007). The dynamics of the fishery sector reflect the complex interaction of a number of internal and external drivers, the most significant of which are examined below. World population is a key driver of seafood demand and fisheries development. The projected increases in global population also suggest continued migration to coastal areas with accompanying development pressures, and increasing gaps between wealthy and poorer nations and peoples. Half of the world population lives within 60 km of the ocean and three-quarters of the large cities are located by the coast. By 2020, it is projected that some 60 per cent of the world population (about 6 billion people) will live in coastal areas (Kennish 2002, in UNEP 2007). By 2050, the world population is expected to reach 9 billion (UN-DESA 2009) and, according to UN-Habitat (2009), 70 per cent of this population will live in urban centres. Most of the megacities (over 20 million inhabitants) will be in coastal zones, with populations looking for food and livelihoods. Demand for fish as food is particularly high in the wealthier parts of society, and demand increases with the level of economic development and living standards. This demand has been rising in both the developed and developing world at more than 2.5 per cent per year (Peterson & Fronc 2007), and as wealth increases in highly populated countries such as China and India, demand is likely to rise more strongly. The issue of globalization and its application to fisheries can be controversial and politically sensitive. 
According to Held & McGrew (2000, cited in Rood & Schechter 2007): ‘… globalisation denotes the expanding scale, growing magnitude, speeding up and deepening impact of interregional flows and patterns of social interaction. It refers to a shift or transformation in the scale of human social organisation that links regions and continents’. The geographical expansion of fisheries has progressively globalized the sector's structure, operations, trade flows, science and governance, at an increasing pace. The expansion of fleets onto the high seas has had a significant international impact on policy (e.g. the 1995 UN Fish Stocks Agreement; the 2009 FAO Port State Measures Agreement) and on its scientific support. Globalizing markets have increased demand and enhanced competition, affecting the evolution of the production and consumption patterns of the sector as well as wealth distribution within the sector. The strengthening and harmonization of food safety regulations and norms have changed seafood processing standards globally and can represent significant additional costs for exporters, with particular impacts in developing countries. The global marketplace effect on scarce, high-value resources has also shifted seafood products away from poorer consumers to those with greater ability to pay, with potentially significant local food security consequences. Environmental awareness among consumers, stimulated by public and environmental group campaigns, has increased demand for seafood products that meet expectations of both quality and environmental sensitivity (Peterson & Fronc 2007). Ecolabelling is slowly spreading (Phillips et al. 2003; Seafood Choices Alliance 2008), largely through non-governmental efforts (e.g. the Marine Stewardship Council, MSC), and is likely to continue, better linking the role of governments, responsible for establishing management systems and norms, with independent third-party certification mechanisms. 
Public sentiment for sustainably produced food, and retailers responding to that demand, particularly in Europe and North America, have contributed to improving management frameworks for capture fisheries, as shown by a decade of experience in developed nations. Fisheries governance is an intricate web of public, private and hybrid institutions interacting in a complex manner to administer and regulate the sector (Garcia 2009a,b), and its weakness is considered to be the main factor behind the problems of overfishing and stock decline (Beddington et al. 2007; Garcia 2009a,b; Mora et al. 2009). Fishery sector governance and the systems within which it is nested are key drivers of fisheries performance. The governance frameworks adopted at national, regional and global levels interact with each other in a continuous but asynchronous manner (i.e. developing at different speeds in different places). The most crucial aspects of fisheries governance relate inter alia to: (i) connecting the fishery policy framework to a supporting national policy framework; (ii) the capability of fishery administrations; (iii) the nature of entitlements to resource access, including possible co-management systems; (iv) the level of participation of stakeholders, non-governmental organizations (NGOs) and civil society groups; (v) the availability and enforcement of deterrence measures; (vi) the level and extent of inter-ministerial coordination; and (vii) the quality of international collaboration. The central international law and policy framework, the 1982 United Nations Convention on the Law of the Sea (UNCLOS), came into force only in November 1994. In the wake of UNCED (the UN Conference on Environment and Development), the implementation framework of UNCLOS has started to improve in a number of directions, with the adoption of the 1993 FAO Compliance Agreement, the 1995 United Nations Fish Stocks Agreement and the 1995 FAO Code of Conduct for Responsible Fisheries (CCRF). 
The Precautionary Approach to Fisheries (PAF) and the Ecosystem Approach to Fisheries (EAF) were adopted in 1995 and 2001, respectively. The Sustainable Livelihood Approach to Fisheries has also been successfully tested, particularly on small-scale fisheries (Allison & Horemans 2006). New instruments have been developed to combat IUU fishing, such as the 2001 FAO International Plan of Action to Prevent, Deter and Eliminate Illegal, Unreported and Unregulated Fishing (IPOA-IUU) and, very recently, the legally binding 2009 Agreement on Port State Measures to Prevent, Deter and Eliminate Illegal, Unreported and Unregulated (IUU) Fishing. On the high seas, which produce around 10 per cent of the world catch, weak governance resulting from incomplete jurisdiction by Coastal States has been a major problem. This area is plagued by the insufficient exercise of their international responsibilities by Flag States, Coastal States and Port States. As a result, the Regional Fisheries Management Organisations (RFMOs) are still unable to fully control member states' fishing activities and are confronted with IUU fishing. In some RFMOs, the parties themselves are setting catches well above scientific advice and failing to implement strong enough conservation measures. The rapid development of new fisheries in particularly vulnerable areas such as the deep sea is also testing the RFMOs' capability. For example, seamount fisheries or new fisheries in the Arctic (as sea ice retreats) are not clearly in the purview of existing RFMOs—though they potentially could be. In the EEZs, jurisdiction is either purely national, shared (for transboundary stocks5) or harmonized (for straddling stocks6). 
In addition to the dearth of bilateral sharing agreements, the weak-governance problems encountered are connected mainly to the lack of clear and defendable entitlements (whether communal or individual), the widespread reluctance to limit access to resources, and the difficulty of eliminating excess fishing capacity, with the hard socio-economic and political consequences that this entails. The ongoing shift to participative and adaptive methods such as the EAF has the potential for broadening the range and role of stakeholders. However, a major problem worldwide lies in the efficient and effective management of small-scale fisheries, with their special prescriptions including management subsidiarity, active participation and devolution, communal rights, self-management capacity-building and the use of sustainable livelihood approaches. While advances such as those noted have addressed loopholes in the UNCLOS regarding stocks located wholly or partly on the high seas,7 decisive progress has also been seen in EEZs (e.g. in the US, Canada, Iceland, Norway, Australia, Namibia, Chile, New Zealand), adopting a progressive consensus on rebuilding stocks by reducing capacity, limiting catch or effort and using various forms of fishing rights to strengthen conservation incentives in medium- and large-scale commercial fisheries. Initial progress is also being made in implementing the EAF in many national fisheries, testing tools and approaches. Demonstrable progress in some fishery management systems, and recovery of depleted resources (Rosenberg et al. 2006; Beddington et al. 2007; Garcia 2009a,b; Worm et al. 2009), provide signs of hope even though achievement of the 2015 World Summit on Sustainable Development goal is still distant. The management of small-scale fisheries, with its fundamental components of demography, poverty and food security, remains particularly problematic. 
The performance of the governance system is reflected in the state of the resource base, the economy of the sector and the contribution to food security. Regarding conservation of the resource base, with a few notable exceptions, performance has been poor. Since its entry into force in 1994, the UNCLOS has been an important improving factor, though partly counteracted by IUU fishing. Regarding the economy, Graham's (1935) Great Law of Fishing, according to which all unlimited fisheries were to decline, has been amply verified (e.g. Garcia & Newton 1997; World Bank 2009). The latter study confirmed that 75 per cent of the world's fishery resources were economically underperforming assets, leading to a loss of potential net economic benefits from marine fisheries of about 50 billion USD annually. This confirms that despite substantial improvements in policy and management frameworks, implementation remains sluggish, slowed by the delayed response of stocks (because of their inherent dynamics or climate conditions), lack of political will and implementation capacity, unclear or non-existent users' rights, poor incentive structures (including corruption), etc. In both the high seas and EEZs, the highly dynamic nature of fisheries stocks and activities can make it difficult for governance systems to adapt quickly enough, unless a protective precautionary approach is applied. However, this can also result in increased inefficiencies, loss of benefit and increased compliance problems. Various approaches to adaptive management are being promoted to improve dynamic response, but these are yet to be widely applied. Regarding food security, the sector has performed well globally, improving per capita seafood supply despite large population increases. However, there is clear evidence that global capture fisheries reached their production limit in the late 1980s and that, on average, the quality of supply has decreased (smaller individuals and species). 
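The rent dissipation behind these figures can be illustrated with the classic Gordon-Schaefer bioeconomic model; this is a standard textbook sketch, not the World Bank study's method, and all parameter values below are illustrative assumptions.

```python
# Hedged sketch of why unlimited fisheries underperform economically, using the
# Gordon-Schaefer surplus-production model. Parameters are illustrative only.

def sustainable_revenue(effort, r=1.0, K=100.0, q=0.01, price=10.0):
    """Long-run yield at a given effort, Y = qEK(1 - qE/r), valued at `price`."""
    return price * q * effort * K * (1.0 - q * effort / r)

def rent(effort, cost_per_unit=5.0, **kw):
    """Resource rent = sustainable revenue minus fishing costs."""
    return sustainable_revenue(effort, **kw) - cost_per_unit * effort

efforts = range(0, 101)
# Maximum economic yield (MEY): the effort level that maximizes rent.
e_mey = max(efforts, key=rent)
# Open-access equilibrium: entry continues until rent is fully dissipated.
e_open = max(e for e in efforts if rent(e) >= 0)
assert rent(e_mey) > rent(e_open)   # open access dissipates the rent
```

In this toy parameterization, open-access effort is double the MEY effort while the rent falls to zero, which is the sense in which resources fished without limits become economically underperforming assets.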
Collaboration has improved between international institutions in charge of fisheries (FAO, International Council for the Exploration of the Sea—ICES, RFMOs) and those dealing with the environment (such as the United Nations Environment Programme (UNEP), Convention on Biological Diversity (CBD), International Union for Conservation of Nature (IUCN), Convention on International Trade in Endangered Species (CITES), OSPAR, etc.), and the role of NGOs has been increasing significantly. However, though environmental and fishery governance are co-evolving, better collaboration and a more explicit allocation of responsibilities are needed. The arenas for testing such collaboration are in area-based integrated management, such as ecosystem-based fishery management (EBFM) or the EAF, using inter alia marine protected areas, refugia and marine spatial planning (Ehler & Douvere 2009). In both governance systems, the role of civil society has grown with participative governance, raising public attention to fisheries and to broader environmental problems and changing the political and economic forces at play. This change, already occurring in many regions, better reflects societal concerns and aims than the more insular sectoral focus of the past. Finally, the uncertainty resulting from the complexity of international and national governance conditions may call for the application of well-considered precautionary approaches (FAO 1996) and other environmental management strategies. In the context of governance, the description and status definition of fisheries systems and the science underpinning their management have long been a practical, theoretical and organizational challenge. Fishery management science has also been affected by globalization in many contexts. 
To a great extent, the scientific approach in the developed world and for large-scale fisheries has moved towards ever more complex data and modelling approaches, incorporating information from highly developed monitoring programmes including research surveys, sophisticated statistical modelling approaches and projections of future states of the resource. In the developing world and in small-scale fisheries (most of which remain practically unmanaged), scientific support is slowly moving towards integrated multi-disciplinary and participative assessment, with strong social sciences input, for example in the framework of the Sustainable Livelihoods Approach (Allison & Horemans 2006). The development of methodologies to advise management in data-limited situations is a priority. Many conventional methods are data-intensive and difficult to use in less developed countries and in the face of climate change. Simpler, compelling advice is needed that can be developed rapidly as changes are observed, coupled with adaptive management processes that can react effectively when better information becomes available. As discussed in §2c, the large vessel fleet has stabilized in size if not in fishing power (FAO 2009; World Bank 2009). In EEZs, however, the total number and power of smaller boats have increased substantially during the same period. As a consequence, global fishing capacity is still very high, probably at its highest point ever and, with some notable exceptions, the required global adjustment to reduced stock productivity has not yet happened. However, with fishery resources severely depleted, oil prices increasing, and subsidies decreasing, further massive investments are much less likely. Under these conditions, and considering the low rate of recruitment of new vessels, the fleet should decrease in numbers in the future to about half the present size (Garcia & Grainger 2005). 
With the slow but accelerating adoption of fishing rights, the fleet reduction might accelerate. A risk exists, however, that ageing vessels, trying to reduce operating costs to remain profitable, may reflag and move into IUU fishing. Furthermore, there is some evidence that fleet size is still increasing in some developing countries (e.g. Vietnam) despite the challenge of rising fuel costs and declining fishery resources. Technological progress has been both a source of beneficial expansion and wellbeing for fishing communities and a constant challenge for managers. Fishing power and efficiency have increased dramatically because of larger or more powerful engines capable of propelling larger vessels and a greater amount of gear over a greater range. Other innovation areas include hydraulic power applications; stronger materials for fishing gears, increasing size and efficiency; better electronic aids for navigation, bottom mapping, fish finding, gear deployment and communication; and improved fish preservation technology. Many of these technologies have also become inexpensive and compact enough to be available to almost any size of vessel. Technology has improved fishing capacity and efficiency as well as safety on board, and in some cases improved fishing selectivity and product quality, but it has also greatly increased fishing mortality, spreading overfishing worldwide (Garcia & Newton 1997). Its unbridled use will continue to direct fisheries on a trajectory of progressive automation and reduction of labour, with negative implications for coastal communities. In addition, the drive for processing-based value addition can keep fleets in operation in otherwise unviable conditions, even though resources are driven down to dangerously low levels. The reduction of discards in the 1990s (Kelleher 2005), essentially through improved transformation of the bycatch into edible products and fish feeds (as opposed to improved selectivity), is a case in point. 
The impact of progress in information and communication technology includes: (i) improved information on vessel distribution (through satellite vessel monitoring systems, VMS); (ii) accelerated submission of catch data (e.g. through the VMS or the Internet); (iii) facilitation of global or regional information systems (e.g. on resources or IUU) and comparable research programmes on similar ecosystems; and (iv) improved understanding of underwater habitats (e.g. with autonomous underwater vehicles and improved scanning instruments), species distribution and migration and related environmental conditions (through archival tagging). However, it has also increased the communication, foresight, evasive capacity and efficiency of pirate fleets. Fuel efficiency has also improved, and fishing is globally more fuel-efficient than any terrestrial meat production system (Tyedmers et al. 2005), but more efforts will be needed in the face of rising fuel costs. It should also be noted that improved technology, even when available, may not necessarily be applied unless both fishermen and government officials are willing to adopt it. This may require much greater incentives, particularly for technologies that improve reporting, monitoring and management capacity. Natural climatic oscillations, particularly those at medium (decadal) scale, have always affected fisheries as well as their management performance.8 Clearly, therefore, the impact of global climate change on ocean capture fisheries will be important for the availability, distribution and resilience of resources as well as for the sector structure and performance. Climate impacts are already evident, with warmer water species moving towards the poles, changes in coastal conditions that may affect habitat, impacts both positive and negative on productivity at all levels, and the effects of ocean acidification. 
Climate change impacts will likely be as varied as the changes themselves and will be felt through changes in fishing opportunities (resources available and entitlements), operational costs (in production and marketing) and sales prices, with increased risks of damage or loss of infrastructure and housing. Fishery-dependent communities may also face increased vulnerability in terms of less stable livelihoods and loss of already insecure entitlements. Some changes may also be positive, opening new opportunities as new species become accessible. So far, most fishery sector literature concerns potential negative impacts and positive options are not well defined. A community's ability to limit losses and benefit from other opportunities will depend on its adaptive capacity. In terms of food security, climate change may potentially act across four interconnected dimensions: availability, stability, access and utilization of food supplies.
The vulnerability of fishers and fishing systems to climate change would be determined by three factors: their exposure to a specific change; their sensitivity to that change; and their ability to respond to impacts or take advantage of opportunities. Fisheries presently located in the high latitudes or at the interface between two neighbouring ecosystems (e.g. Senegal, Angola) or else in very shallow areas (estuaries, deltas, coral reefs) will be among the most exposed. Coastal communities in low-lying areas and small island states will be at high risk of floods and extreme weather conditions, requiring protective infrastructures, early warning systems, education and perhaps relocation. In these circumstances, priority assistance, including disaster relief, would have to be given to poor coastal fishing communities, so often neglected and disenfranchised. The capacity to change is a real issue, particularly in highly vulnerable areas and fisheries. The status quo not being an option, adapting to climate change is a necessity, requiring preparation and means. If the change were slow, adaptation would be easier. Thus, if the rate of change were lower than the rate of depreciation of investments, the industry would adapt much more easily than if it were not, in which case high costs and economic collapse would be more likely, and special funds might be required for emergency intervention. However, the most imperative adaptations might be required in means and infrastructures (e.g. roads, electricity networks, early warning systems and other general infrastructures) that are beyond the fishery system itself but would influence its capacity to adapt. In the next 40 years, the marine capture fishery sector will face its most critical challenges. In the past, fishers successfully overcame their fear of the unknown, risking their lives in one of the world's deadliest activities, reaching farther and deeper to bring food and expand their livelihoods. 
They now need to control and often reduce their harvesting capacity or, unless subsidized, face directly the consequences of not doing so. The potential for sustaining catches, food output and value at or near current levels, and supporting the nutrition and livelihoods of many hundreds of millions of dependent people, will rest critically on managing fisheries more responsibly. The sector's future, whether in the high seas or EEZs, will be significantly conditioned by the capacity to address key inter-connecting elements of global and/or local relevance, including: (i) its present state and characteristics; (ii) its intrinsic capacity to adapt to multiple internal drivers, i.e. its resilience; (iii) external drivers affecting natural and human sub-systems of the ecosystems; and (iv) the constraints that may limit or jeopardize governance efforts. In the near and medium term, the sector will continue to face four main, possibly conflicting, challenges: (i) reducing excessive harvest to rebuild overexploited stocks and improve sectoral performance; (ii) reducing fisheries and aquaculture environmental impacts; (iii) matching the growing demand from an increasing world population; and (iv) adapting management and communities to the effects of climate change. For the long term, perspectives on these major global resource systems are constrained by the capacity to predict the evolution of political and economic systems: access to and cost of energy, the control of land-based degradation and contamination, and climate variability and change. First and foremost, the future for marine fisheries will be conditioned by sectoral and national social, economic and environmental governance (Beddington et al. 2007; Garcia 2009a,b; World Bank 2009; Worm et al. 2009). The challenge is a dual one. 
On the one hand, a large part of society no longer accepts damage to natural, public trust resources such as the world's oceans and is calling for a change in use and consumption patterns. On the other hand, the potential impacts of climate change stand to shake all acquired positions and certainties. Resources will need to adapt their distribution and productivity at an unpredictable rate. Fisheries will need to adapt to weather, resource and market changes, and avoid undermining the adaptive capacity of the natural system on which they depend. Drivers and constraints (including governance) will shape the external envelope of all the acceptable trajectories of the fishery system (i.e. its domain of ‘viability’ sensu Aubin, in Cury et al. 2005). Under such conditions, prediction is hazardous. In national comparisons, apparently similar situations may hide local differences in drivers, mechanisms or capacity to change. Similarly, climate change, international policy drivers (e.g. United Nations Convention on the Law of the Sea—UNCLOS, World Trade Organization—WTO, CITES) and consumer preferences are global but their impacts will vary regionally and locally. While some similarities with other food production systems are to be expected (particularly for aquaculture), capture fisheries are fundamentally different in terms of their linkages and responses to change and in food security outcomes. Unlike most terrestrial animals, aquatic animal species are poikilothermic (cold-blooded) and changes in habitat temperatures will more rapidly and significantly influence metabolism, growth, reproduction and distribution, with stronger impact on fishing and aquaculture distribution and productivity. However, the interconnectedness of aquatic systems allows species to change distribution more easily as ecosystems shift, to remain in their zones of preference. 
Finally, the greater genetic diversity of marine animals compared with farmed animals also favours adaptation to new conditions. Therefore, the fishery sector requires special consideration to ensure that policy and management responses to climate change are effective. To the extent that present trends in fishery ecosystem parameters (e.g. fleets, resources, environment, governance) and external drivers (e.g. demography, economic development and environmental policies, climate change) may provide some indications about the future, the following sections offer some reflections. Particularly for systems with low fuel demands for catching effort, capture fisheries, together with some forms of extensive aquaculture, have among the lowest ecological footprint for animal proteins. Replacing fisheries supplies with equivalent terrestrial sources or with intensive forms of aquaculture would significantly add to global resource demands, and would be a substantial ecological burden. However, without a substantial reduction of fishing capacity and explicit stock rebuilding plans, the prospect for building and sustaining resources is not good. It could be expected that underexploited stocks could produce more if fully developed and that overexploited, depleted and recovering stocks could produce more if properly managed. However, given complex ecosystem dynamics, this theoretical output cannot simply be added to the production of stocks that are presently fully exploited. Trade-offs will need to be made between more or less productive stocks as a matter of societal choice, as some may have to be accepted in a sub-optimal (but sustainable) state in order to optimize the production of others. The existence of predator–prey relations across the network of resources means that dynamic adjustments will naturally take place. A recovery of predator stocks will lead to increased predation on species that are also exploited for fishing. 
The combined sustainable yield may not be much higher than the current yield. The productivity of different ecosystems has changed and will do so further owing to changing environmental conditions such as habitat loss or gain, climate change and non-native species introductions. Analysis of historical data indicates that many fisheries systems had much higher productivity in the past (e.g. Rosenberg et al. 2005), which may well not be recoverable owing to fishery depletion as well as land-use and ecosystem-level changes. There are, most likely, no major new resources to develop, except perhaps krill and oceanic squid, and there may be ecological reasons for not overexploiting these stocks, which are key foods needed by large marine predators to recover from overfishing and adapt to climate change. A major unknown, subject so far to very limited consideration, is the impact of massive coastal degradation and global contamination of the ocean, the ultimate sink of human pollution. Finally, climate change may improve the conditions for some resources and worsen them for others. The past has shown that unfavourable climatic conditions combined with excessive fishing pressure led to collapse. Inter-tropical resources may be heading towards particularly unfavourable combinations of conditions, with high human demands, emigration of resources towards more temperate areas, and weak governance. Demographic trends point to population increase, though economic trends are much less certain, as are the social implications and political consequences. Uncertainties in these domains tend to shape future scenarios around three broad options: (i) status quo, with present trends maintained within their envelope; (ii) significant improvement in democracy and governance; and (iii) significant collapse in democracy and governance. The probability of each is unknown but the status quo is often considered the most likely. 
The future of marine fisheries in these three contexts is likely to be: continued decline of the sector in scenario (i); substantial improvement in scenario (ii); and rapid collapse in scenario (iii) (Garcia & Grainger 2005). The world population is increasing and, notwithstanding the present financial crisis, economic growth is still expected in many countries, so an increase in the demand for high-quality seafood can be expected. Though aquaculture may fill the gap to some extent, its ability to overcome its own constraints is not fully definable. Potentially increasing prices will provide additional incentive for fisheries and aquaculture investments, and in the absence of effective management, this would lead to stock collapse, reducing supplies at high societal costs and with a potentially severe backlash for the image of fisheries. Demography-driven demands for employment may also make full- or part-time fishing more attractive or even one of the few options available. In developed countries particularly, demand may increase on coastal resources for tourism and recreation, including recreational fishing—the impact of which can be substantial (Coleman et al. 2004; Taylor et al. 2007). The increase in demand for products and employment may lead to: (i) political pressure to slow rebuilding plans; (ii) greater incentives for IUU fishing; and (iii) increasing pressure on near-shore coastal resources by subsistence or low-income fisheries as well as on high-value products from more mechanized and industrialized fishing. It can be expected that developing countries will continue past trends of directing a large and growing part of their primary resources to export trade, in search of hard currency. Parallels may be drawn with the rapid increase in the use of land in Africa and Latin America by sovereign funds and agro-industrial groups from countries with high food demand, such as China and India. 
In fisheries, the equivalent is the granting of fishing access agreements and the reflagging of fishing vessels under the national flag of developing countries owning large fishery resources. Without a major modification of the socio-economic perspectives in these countries and, for instance, the development of alternative sources of livelihood, the risk is that their fishery resources will remain under very high pressure and the contribution of fish, particularly to local food security, may decrease. An important driving factor will be the World Trade Organization and other agents, with, for example, the EU rules preventing importation of fish from IUU operations, connecting trade and the environment. Together with progress in ecolabelling and sustainable seafood campaigns, these have the potential, at least in wealthier countries, to reduce demand and hence economic drivers for poorly managed resources. This should stimulate better management. However, it is limited by the potential for catches that are rejected by wealthier, more ‘ethically’ driven markets to be redirected to countries and populations with much lower buying power, e.g. through increased intra-regional trade. The form under which fish is consumed may not be very relevant for future food security and safety, except perhaps with regard to contamination. The important unknown, globally, is the destination of the lower-value fish now going to reduction into fishmeal and fish oil. As global demand increases, particularly for poorer communities and developing countries, these resources will be under tension between three main destinations: (i) the present usage, for animal feeding and increasingly for aquaculture; (ii) direct food for humans; and (iii) food for the stocks of predatory fish species (e.g. tuna, cod) that it is commonly intended to rebuild, an often overlooked demand. The policy objectives for the sector cannot be to produce much more than it produces today from wild stocks. 
The aim can only be to maintain and optimize production and profitability, in terms of catch composition (species, age and size), nutritional quality, fuel consumption and the ecological footprint. This implies maintaining, or recovering when relevant, resources and productive ecosystems and facilitating their adaptation to climate change. Governance frameworks have substantially improved, including those on the high seas, and good examples exist to demonstrate the effectiveness of the instruments at hand. The global political will of governments to implement them effectively and eliminate loopholes must still be demonstrated, however, and developing countries will continue to require assistance in that regard. The growing concern regarding environmental degradation generally will add pressure to better conserve fishery resources and their environment. The adoption of fishing rights in commercial and large-scale fisheries bears the risk of concentrating resources in fewer hands, disenfranchising coastal communities. Their application to small-scale fisheries (in the form of communal rights or territorial use rights) will continue to be tested, and the long-term outcome is at this stage unclear. Many of the approaches to improve fisheries management (e.g. fishing rights, participative/adaptive management) and sustain adequate levels of social equity require a democratic environment that may be yet to emerge in some countries. Ministries in charge of the environment, and the civil society groups active in this domain, are also gaining influence and societal support, and the role of environmental agencies in fisheries management (and exploited ecosystems) will increase, with consequences that will tend towards reduced rates of use, with impacts on food security that are yet to be assessed. At the same time, these agencies need to deal more effectively with the often irreversible environmental degradation and contamination from other human activities that affect fishery resources. 
When attempting to represent the functioning of productive ecosystems in the next 50 years, scientists face numerous sources of uncertainty affecting the quality of advice. The use of methodologies such as the ecosystem approach has increased the amount of uncertainty to be addressed. In the future, some uncertainties will be reduced and, with the closer association of the social sciences, the quality of advice under uncertainty has the potential to improve. Uncertainty will not be eliminated, however (Mangel 2000), and ultimate management performance is likely to depend on the trade-off between precautionary protection and responsive adaptation to emerging limitations and opportunities. The actions required for maintaining the contribution of capture fisheries to food security in the face of climate change are similar to those already applied, with two aggravating factors: (i) overfishing, which reduces resilience to environmental change, so that climate change adds urgency to the classical rebuilding/recovery issues; and (ii) transition through a progressively changing context, perhaps with periods of acceleration, adding a destabilizing factor to an already complex governance equation. Facing environmental change and the broad range of its impacts will require concerted and determined action by all main stakeholders, linking private sector, community and public sector agents, at national and regional levels. A wide range of measures can be considered for anticipation, mitigation or adaptation to climate change.
Footnotes
1. A Senegalese canoe used to land 8 tonnes of Sardinella per day in the late 1970s, while an Icelandic high-tech small fishing boat with only three men on board caught 240 tonnes of cod in one month in 2008 (G. Valdimarsson, FAO, personal communication 2009).
2. These conclusions are based on a demographic analysis of the fleet age structure in the Lloyds database before the registration of the Chinese fleet and need to be revised.
3. Probably underestimated in view of the under-recorded contribution of small-scale and subsistence fisheries.
4. See http://www.worldfishcenter.org/wfcms/HQ/article.aspx?ID=684.
5. Stocks that occur or migrate across national EEZ boundaries.
6. Stocks of fish that migrate between, or occur in both, the EEZ of one or more states and the high seas.
7. In IUU, illegal fishing also includes poaching inside EEZs.
8. This section draws extensively from FAO 2008.

One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050’. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. © 2010 The Royal Society.
Inland capture fisheries comprise activities that extract fish and other living organisms from surface waters inland of the coastline. In 2008, inland capture fisheries produced an estimated 10 million tonnes of fish and crustaceans (FAO Fishstat 2010; see http://www.fao.org/fishery/statistics/software/fishstat/en). As a valuable source of protein-rich food and employment, inland fisheries deliver nutritional security and income to hundreds of millions of rural households. Nevertheless, there are serious misperceptions about the magnitude, benefits and sustainability of inland fisheries resources, which limit the effectiveness of national and international policies for their management and undermine their future. Inland fisheries are dynamic. As economies evolve, the nature of inland fisheries changes (Arlinghaus et al. 2002): the importance of high-value inland recreational fisheries grows and reliance on fisheries for food declines as local economies develop. Inland fisheries are distinct from marine fisheries in their nature and in the range of drivers that influence them. Although commercially intensive fisheries exist, inland fisheries are generally characterized by small-scale, household-based activities. Participation is high and the bulk of the catch is consumed locally; by-catch is insignificant, as practically all fish caught are used. This means that their benefits are widely spread. Inland fisheries are also very diverse, being based on a range of ecosystems whose fish communities respond very differently to internal (fisheries-based) and external (natural- and human-ecosystem-based) drivers. One conceptual driver is the widespread vision that inland fisheries face inevitable demise under escalating human impacts, a view reflected in studies from all continents (Friend et al. 2009). Catches are allegedly falling, species are disappearing and many other symptoms of chronic overfishing are reported. 
There is an assumption that overfishing is to blame, which is influenced by perceptions derived from marine fisheries. This instils a sense of hopelessness, fuelling neglect and subordination to agricultural, industrial and domestic sectors, particularly with respect to competing resources. The contribution of wild-caught, inland fish to food security has been largely ignored, and priorities switched to other sectors. Aquaculture is promoted as the means to maintain production in the face of this perceived decline, a view prominent throughout the tropics and widely held by aid agencies. The result is a lack of resources assigned to inland fisheries, a lack of information and apparent failure to incorporate inland fisheries' interests adequately into administrative structures. In addition, governments and resource developers see inland fisheries as an impediment to their desires to expropriate the wealth of the rivers—the transfer of generalized wealth (nutritional security, livelihoods) from powerless people into focused income streams that benefit powerful people (Osborne 2010). Nevertheless, reported catches from inland fisheries are still rising at a linear rate of about 3 per cent per year globally (figure 1). There is widespread evidence that much of the catch from inland fisheries is unrecorded, partly because of the diffuse and small-scale nature of individual fisheries, the lack of easily definable landings, and because much of the catch goes directly to domestic consumption (e.g. Welcomme 1976, for rivers; Coates 2002, for Asia; Hortle et al. 2008, for rice fields; Braimah (in Béné 2007), for Volta Lake; Lymer et al. 2008, for Thailand). The Food and Agriculture Organization of the United Nations (FAO) itself posts caveats about the quality of the inland fisheries statistics in its SOFIA (The State of World Fisheries and Aquaculture) reviews (FAO 2002, 2004, 2007, 2009). 
Global trends in inland fish catch, 1950–2008, including fish, Crustacea and Mollusca but excluding reptiles and mammals (y = 12 311x + 2 × 10⁶; r² = 0.960). From the FAO Fishstat database.
These contrasting views, of rising recorded production and under-reporting on the one hand, and of declining catches, loss of diversity and lack of potential in individual fisheries on the other, are difficult to reconcile, and the lack of reliable indicators hinders the formulation of management policy. This review attempts to clarify some of the issues surrounding the various types of inland fishery and to define their role in food security. In doing so it identifies the various drivers operating on inland fisheries, where drivers are defined as factors influencing yield, change and sustainability in inland fish resources and fisheries. Demand is the primary driver of almost any human activity, including inland fisheries, aquaculture and marine fisheries; it also regulates water management, power supply, mining, forestry, agriculture and the other influences on inland waters. Demand operates through a series of more immediate drivers, as described in table 1, which summarizes the principal drivers regulating inland fisheries, the mechanisms through which they operate, their effects and some solutions. Further details on some drivers are discussed in the sections listed.
Most countries report their inland fish catch statistics to FAO, where they are accessible through Fishstat (http://www.fao.org/fishery/statistics/software/fishstat/en). Several weaknesses are apparent in the existing statistics including:
FAO nominal fish catch statistics reported a total catch of 10 220 499 tonnes in 2008 for the inland waters of the world. Catches have risen steadily at about 3.05 per cent per year since the beginning of FAO statistical records in 1950 (figure 1). Trends in catch by continent suggest the main increases are associated with Asia and Africa, and to a lesser extent the Americas (figure 2). Table 2 shows percentage contribution by continent and the growth rate in catch over the last 10 years by continent. The declines in catch noted in Europe and North America can be attributed to the progressively greater use of inland fish resources for recreational fisheries.
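As a rough cross-check of these figures, the reported 2008 catch and the quoted 3.05 per cent annual growth rate can be used to back-calculate the implied catch at the start of the FAO series. The snippet below is an illustrative calculation only, assuming compound (geometric) growth; it is not FAO methodology:

```python
# Illustrative back-calculation from the figures quoted above: the FAO
# nominal catch for 2008 and the reported ~3.05% per year growth rate.
# Assumes compound growth throughout, which is an approximation.
catch_2008 = 10_220_499              # tonnes (FAO nominal catch, 2008)
annual_growth = 0.0305               # reported average growth per year
years = 2008 - 1950                  # span of the FAO statistical record

implied_1950 = catch_2008 / (1 + annual_growth) ** years
print(f"Implied 1950 catch: {implied_1950 / 1e6:.1f} million tonnes")
# → Implied 1950 catch: 1.8 million tonnes
```

The implied starting point of roughly 1.8 million tonnes is broadly consistent with the early years of the series shown in figure 1.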
Trends in catch by continent 1950–2008 (dark blue, Asia; brown, Africa; green, Americas; violet, Europe; light blue, Oceania; yellow, ex USSR territories). From FAO Fishstat database. N.B. The FAO dataset is discontinuous for the old USSR countries which were reported as a group (other) until 1987. After that date they were split into individual reports. Here the catches from the old USSR including Russia were combined with those of Europe for a continuous dataset—inland water catches from the former Asian USSR republics are now generally negligible.
Most inland fisheries are multi-species, multi-gear in nature, so standard assessment models and concepts of overfishing are inappropriate and can be applied only in a few lakes where a limited number of species are exploited by a homogeneous fishery. Instead, the fishing-down process that operates in many inland waters suggests that the main indicator of heavy fishing is a reduction in the mean size (and age) of the fish landed. In many areas of the tropics, the mean size and age of the catch have declined progressively over the years, until in some cases the major part of the catch consists of fish in their first year of life (see, for example, Lae 1995; Halls et al. 1999). In addition, fisher numbers have increased throughout Asia and Africa (see FAO database on fishermen numbers—http://www.fao.org/fishery/statistics/programme/3,1,1/en). These factors indicate that most inland fisheries in these continents are heavily fished to a degree that substantially alters the species composition, abundance and ecology of the fish communities, and that there is probably little room for any substantial increases in catch. Fishing pressures in South America do not appear to have reached these levels, as catches still include large species, and there is probably some room for increase. In other areas, catches appear to be maintained by stocking programmes. In the temperate zone, inland fisheries resources seem to be increasingly oriented towards recreation and conservation (Arlinghaus et al. 2002; Cowx et al. 2010), although there is growing evidence that recreational fisheries are having significant impacts on stocks, both through fishing pressure and through effects on stock dynamics (Cooke & Cowx 2004, 2006). Fish from all sources form the single largest source of animal protein worldwide, accounting for over one-third (36.58%) of global production in 2007 (table 3). Based on current statistical information, inland fisheries account for 2.36 per cent of animal protein sources. 
This figure is very likely an underestimate. Inland fisheries also contribute to food supply disproportionately to their 6.8 per cent share of world total fish production, because about 90 per cent of the inland capture goes to direct human consumption, whereas a substantial proportion of the marine catch goes to fishmeal.
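The shares quoted above can be given a naive consistency check: multiplying fish's share of animal protein by inland capture's share of fish production gives a first approximation to inland capture's share of animal protein. The small gap from the reported 2.36 per cent is expected, since production shares by weight do not translate exactly into protein shares. Illustrative only:

```python
# Naive cross-check of the protein shares quoted above; the inputs are
# the figures from the text, and the multiplication ignores differences
# in end use (fishmeal versus direct consumption) and protein content.
fish_share_of_protein = 0.3658   # fish as share of animal protein, 2007
inland_share_of_fish = 0.068     # inland capture as share of fish production

naive_inland_share = fish_share_of_protein * inland_share_of_fish
print(f"Naive inland share of animal protein: {naive_inland_share:.2%}")
# → Naive inland share of animal protein: 2.49%
```

The result, about 2.5 per cent, is of the same order as the reported 2.36 per cent.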
Fish from inland waters can be extremely important to local food security as compared with other sources of animal protein. For example, the Lower Mekong basin has a population of more than 60 million people and an inland capture fisheries yield of about 2 million tonnes per year; in protein terms, 1 million tonnes of fish is equivalent to 1 200 000 large buffaloes or 16–17 million pigs. In Laos, inland fish contributes 29 kg per person per year (48% of animal protein) and in Cambodia 37 kg per person per year (79% of animal protein; Hortle 2007). Fish inhabit most inland water ecosystems. The ecology of the many species, and to a large extent the methods by which they are exploited, are determined by ecosystem and habitat characteristics; the drivers operate in different ways and diverse approaches need to be taken to their management. The main types of inland waters are as follows. Rivers are open, linear systems with numerous small headwater streams that depend mainly on external nutrient inputs. In headwaters, food webs are based on organic matter that is progressively degraded by invertebrate and micro-organism activity along the course of the channel (the river continuum concept; Vannote et al. 1980). Significant drivers are the degree of deforestation and agricultural practice in the vicinity of the river. In lowland rivers, nutrient dynamics involve material deposited on the floodplain, and there is a seasonal shift in ecology associated with seasonal flooding (the flood pulse concept; Junk et al. 1989). Floodplains are of particular importance to the breeding, feeding and growth of many species of fish, and catches from any particular system are closely correlated with the degree to which the floodplains were flooded in preceding seasons. Lakes are closed systems consisting of a defined body of water. Lake ecology is stable relative to that of rivers, although some lakes may become severely reduced in area or even dry out when inflows are reduced, as, for example, Lake Chad or the Aral Sea. 
Lakes are classified according to their nutrient richness—oligotrophic lakes being the lowest in nutrients and the least productive, and eutrophic lakes being high in nutrients and highly productive. Changes in water quality are the major driver of lake ecology and shifts in water transparency, dissolved oxygen regimes and resident organisms occur with nutrient enrichment (eutrophication). Oligotrophication, reversion to lower nutrient status, may occur if nutrient inputs are reduced. Pollution from other sources, and sedimentation, are additional pressures. Reservoirs, especially those with short retention times, are sensitive to changes in flow regime in inflowing rivers and may become severely reduced in area at times when the dam is opened for electricity generation or water abstraction. Rapid fluctuations in water level (daily due to hydropeaking) are a particular problem in reservoirs, so one of the main drivers of reservoir ecology is the nature of the dam operation. Wetlands are primarily extensive shallow swampy areas often associated with river or lake systems as riparian floodlands. They often vary in area seasonally and depend on local rainfall, discharge from inflowing rivers, groundwater or on rising lake levels. They are usually very productive and support populations of fish that are highly adapted to the generally difficult environmental conditions of wetland habitats. Wetlands are one of the most threatened of environments. Rice fields constitute man-made, temporary wetlands and account for over half of total wetland area in Asia. They are colonized by fish during the wet season and support high levels of fisheries production (Nguyen Khoa et al. 2005; Hortle et al. 2008). Transitional waters include river estuaries, coastal deltas, coastal lagoons and inland mangrove systems. They are often integrated into complexes of floodable coastal wetland, and permanent lagoons and channels. 
The ecology of the fishes depends on salinity, and the main direct drivers of fisheries production are changes in flow regimes (freshwater input) leading to ingress of saline marine waters, pollution, and land reclamation and associated loss of wetlands, all leading to reduced fishery production. The relative magnitude of the main categories of fresh waters (excluding transitional waters) in the various continents is shown in table 4. Globally, there are 304 million natural lakes that cover 4.2 million km² (Downing & Duarte 2009). The land area covered by constructed lakes and impoundments is 335 000 km² (77 million impoundments); 76 830 km² of this area are farm ponds. The figures for wetland area are considered underestimates.
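Dividing the quoted global areas by the counts gives the mean size of these water bodies, underlining that most of them are very small. This is a simple illustrative calculation over highly skewed size distributions, derived here from the figures in the text rather than taken from Downing & Duarte (2009):

```python
# Mean water-body sizes implied by the global counts and areas quoted
# above (1 km^2 = 100 ha). Means over very skewed distributions, so
# these are indicative only.
natural_lakes = 304e6            # number of natural lakes
natural_area_km2 = 4.2e6         # their combined area, km^2
impoundments = 77e6              # constructed lakes and impoundments
impound_area_km2 = 335_000       # their combined area, km^2

mean_lake_ha = natural_area_km2 / natural_lakes * 100
mean_impound_ha = impound_area_km2 / impoundments * 100
print(f"mean natural lake: {mean_lake_ha:.1f} ha")   # → 1.4 ha
print(f"mean impoundment:  {mean_impound_ha:.2f} ha")  # → 0.44 ha
```

On these figures, the average natural lake is only about 1.4 ha and the average impoundment under half a hectare, which is consistent with the dominance of small-scale fisheries described throughout this review.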
Fish assemblages in inland and coastal waters tend to be highly complex. In rivers, the number of species is strongly correlated with basin area (Oberdorff et al. 1995), ranging from tens of species in small basins to over 1000 in large systems such as the Amazon or Mekong. In lakes, the number of species is also broadly correlated with basin area (Amarasinghe & Welcomme 2002). Multi-species, multi-gear fish assemblages and fisheries in inland waters respond to drivers such as heavy fishing or the use of illegal methods according to a model known as the fishing-down process (Welcomme 1999). This predicts that, as fishing pressure (effort) increases, the larger individuals and species will be successively reduced and even lost from the fishery (overfishing of species) until only the smaller species remain to form the basis of the fishery. Because smaller species are generally more biologically productive, and many of the larger species are fish-eating predators, production of the fish assemblage as a whole is maintained, so the level of catch remains roughly constant over a considerable range of fishing pressure. Excessive fishing may, however, impair the functioning of the fish community (community overfishing). Fishers respond to the diversity of habitats, the large number of species, life stages and behaviours, and the seasonality of the systems by developing a range of gears adapted to the capture of all species and life stages throughout the year; up to 150 different gears are described by Deap et al. (2003) for a large river such as the Mekong. Inland fisheries differ fundamentally from their coastal counterparts in the very diversified and complex forms that inland fishing can take within the livelihood of the fisher households. Indeed, for many local populations, inland fishing is only one economic element within the diversified matrix of activities that constitutes their livelihood strategy. 
The socio-economic importance of inland fisheries and their role in rural economies in developing countries are often underestimated. Inland fisheries have been perceived as ‘backward, informal and marginal’ economic activities (Platteau 1989) and are poorly integrated into national or local decision-making processes (Dugan 2005; Sneddon & Fox 2007; Sugunan et al. 2007). Recent studies show that the true situation may be very different. It was estimated that more than 56 million people were directly involved in inland fisheries in the developing world in 2009 (BNP 2009). This number is larger than the estimated 50 million people who depend on the same activities in coastal areas. The great majority of inland fisherfolk are engaged in the ‘small-scale’ sector, which ranges from family-based artisanal units operating with or without dugout canoes in small ponds or lakes, or along tributaries or larger river channels, to commercial enterprises with motorized and well-equipped boats fishing in larger lakes and reservoirs. Furthermore, the vast majority of the households that depend on inland fisheries are farmers or fisher–farmers who have traditionally engaged in seasonal farming and fishing. The inland fisheries post-harvest sector generates particularly important economic opportunities for women and it is estimated that 54 per cent of the people involved in small-scale fisheries are women. Fish play a particularly important role in improving the nutrition of millions of people in the world (table 3). Not only are they a source of protein but they also provide vitamins, minerals, fatty acids and other micronutrients essential to a healthy diet (Roos et al. 2007a,b). Small-scale fisheries play a critical role in the food security of producers and their families, but also provide for other consumers. Inland fish is traded far afield from local ‘inland’ markets, and a substantial part of the catch may be consumed by coastal urban dwellers. 
One of the most important contributions to the livelihood of millions of people is the role of inland fisheries as a source of cash for households, not only for families of full-time fishers but also for an unexpectedly large number of rural households that live close to water bodies and engage in fishing for only a few weeks or months each year (e.g. table 5).
Fishing in floodplains or along rivers or lakes can be practised all year round and offers households the possibility of generating revenue on an almost daily basis. Fishing plays a critical role as a 'bank in the water' (Béné et al. 2009) for local populations that rely on the activity for quick access to cash. The most critical contribution of inland fisheries is their role in the provision of labour for unskilled workers, who often rely heavily on fishing and related activities such as fish processing for their livelihood. The common-pool nature of small-scale fisheries allows poor people to engage more heavily in the activity to sustain their lives. Small-scale fisheries also act as a 'safety net', in that fishing can provide alternative or additional sources of income, employment and food for poor and near-poor households whose livelihoods have been temporarily reduced by unexpected shocks or in periods of individual or collective economic crisis. The capacity of inland fisheries to generate rent and foreign exchange earnings (Valdimarsson 2003) is limited to very few fisheries, the best example being the Lake Victoria Nile perch fishery, which generates more than US$250 million annually for the three riparian countries (Cowx 2005). Inland waters have suffered the most intense human-induced impacts of all ecosystems over the past 100 years. As a consequence, freshwater fishes have become threatened by a wide array of factors that seem to underlie the decline of many fisheries. These issues can be broken down into fishery-related and environment- or watershed-related problems. Exploitation is one of the key drivers affecting inland fisheries. In developed countries, inland fisheries are exploited mainly for recreation; in developing countries, exploitation is largely for food (Welcomme 2001), although recreational fishing is developing as part of the tourism sector (Cooke & Cowx 2004). 
The general effects of heavy fishing pressure are to reduce the abundance of desired species (reducing the value of the catch) and to alter fish population and community structures (size and species). While overall production from the fishery is generally not compromised, the quality and value of the catch shift towards lower-value products that are consumed locally. An important aspect of many inland fisheries is therefore not the sustainability of total catches but determining the kind of fishery that management aims to achieve. The trade-offs between sustaining catches of larger, higher-value species and supplying cheaper fish to the generally more numerous underprivileged (Cowx 1998a) are discussed in §8a. Direct conflicts often exist between commercial and recreational fishing because they exploit the same resource base, although many studies indicate that the two can coexist (see Hickley & Tompkins 1998). When commercial and recreational fisheries compete, the allocation of the harvest generally favours recreational fishing in industrialized nations; the opposite is true in developing countries. The greatest threats to inland fisheries come from outside the fisheries sector. Aquatic resources are subject to numerous anthropogenic perturbations (Cowx 1994; Cowx & Welcomme 1998), which have caused shifts in the status of the fisheries and a general decline in yield. Fisheries are not generally considered of sufficiently high priority or value relative to competing uses, and thus suffer in the face of economically and socially higher priorities, such as agriculture, hydroelectric power production or water sports. The major drivers external to the fishery are listed in table 1 and include:
There is a wide range of access regimes and fishing right systems in inland fisheries. In most cases they remain public resources but responsibilities for management are increasingly being devolved to private individuals or groups/local communities. The claim that small-scale fisheries in the developing world are ‘open access’ resources (e.g. Panayotou 1982; Bailey & Jentoft 1990; Machena & Kwaramba 1997) does not reflect reality. Very few inland fisheries are de facto open access. Most are linked to some form of management system at the local/community level (Fay 1989; Thomas 1996; Béné et al. 2003). The diversity of inland fisheries is to be found in their ecology as well as the social and institutional settings under which they operate. There is considerable uncertainty in the processes that govern their dynamics. Because small-scale fisheries are affected mainly by external processes, unpredictable institutional and policy environments are sources of constant uncertainty and threat. Water allocation policy and investments, water flows, pollution and climatic variability are dominant drivers of many inland fishery systems. Faced with such challenges, conventional fisheries management has generally been irrelevant as a basis for sustainable development. Inland fisheries tend to evolve along a cline from initial emphasis on food production, through recreation, to aesthetic and nature conservation (Arlinghaus et al. 2002; Cowx et al. 2010). The position of any fishery along this trajectory varies most markedly between developed and developing countries (table 6). Fisheries management in industrialized countries focuses almost exclusively on recreation and conservation, whereas developing countries still focus on food security, although the emphases on recreational fisheries (Cowx 2002) and conservation (Collares-Pereira et al. 2002) are increasing as a result of globalization (Cowx et al. 2010).
Fisheries management can be broken down into three major domains: management of the fish assemblages; management of the fishery; and management of the environment. Which of these domains predominates depends on the type and location of the fishery. Natural lake fisheries, for example, tend to be regulated mainly by management of the fishery; enhanced fisheries in dams and reservoirs tend to concentrate more on management of the fish; and fisheries in rivers and estuaries are predominantly managed through control of the environment. A variety of techniques are used to improve production of fish species favoured by commercial or recreation interests, to make up for shortfalls in production arising from overfishing or environmental change, to enhance the potential yield from a particular water body or for conservation initiatives (Cowx 1994, 1998b; Welcomme & Bartley 1998). They include:
In addition to direct intervention on the fish populations/communities, fisheries are usually controlled by enforcement of various regulatory constraints to prevent the overexploitation of the resources and maintain a suitable stock structure (table 7). Irrespective of the regulation measures, the fundamental problem usually lies with intense fishing pressure brought about by open access to the fishery resources. Restricting access is, however, not a simple solution because many fisheries are multi-gear, multi-species and complicated by social issues, such as traditional use rights and family obligations.
In many fisheries in the world, management is wholly under the control of a centralized authority that regulates effort, through access or catch regulations. This can lead to social inequity by denying access to some. Centralized authorities have also proved largely ineffective because they cannot respond to the fluctuating nature of inland fishery resources and enforce regulations in highly dispersed, multi-species, multi-gear fisheries across huge areas. There is a growing tendency worldwide to charge fishing communities with the management and improvement of their resource (Welcomme 2000; §7a). Major challenges for inland fisheries managers and stakeholders relative to the environment are: (i) to defend the interests of the fisheries stakeholders by interacting and making alliances with other interested parties; (ii) to seek to limit damage to aquatic ecosystems; and (iii) to promote rehabilitation activities. A number of key strategies are promoted, usually to address one or several problems, which may be grouped under five main actions:
Many river and lake basins lie within the territories of more than one country. Fish often migrate from one country to another for breeding, feeding or refuge. Human activities in one country can also affect those of others. More seriously, impacts of pollution, water abstraction and damming for power generation and irrigation are transmitted downstream in river basins, potentially damaging fish stocks or, in the latter case, blocking migratory routes for fish. Common approaches need to be adopted for their management using the ecosystem (river or lake basin) approach. Many international mechanisms for such collaboration exist in the form of river and lake basin commissions, but these usually address developmental issues such as water supply, power generation or navigation, and rarely consider fisheries. A number of models have been developed to assist in the assessment and management of inland fish resources. Many of these were derived from models designed for marine fisheries on unit stocks. Some are adequate for the management of single-species fisheries in large lakes, such as the Nile perch fishery of Lake Victoria, but on the whole they do not perform satisfactorily in the more diffuse multi-species, multi-gear fisheries of rivers and floodplains. As a consequence, a series of models have been derived to describe the performance of exploited fish assemblages. These are needed not only for the assessment and management of the fishery itself but must also provide information on the impacts of any environmental changes on the fishery, especially riparian wetland drainage and damming. In view of the continuing demands on water for uses other than fisheries, models that guide the setting of discharges for environmental flows (see §7c) are especially urgently needed. The problem is that, although such models are appropriate and useful, it is difficult to act on the management advice they generate because of poor management and enforcement capacity. 
It is widely acknowledged that in most parts of Africa and Latin America, and to a lesser extent in Asia, it is extremely difficult to make any accurate and up-to-date assessment of the economic value of small-scale fisheries activities. A large number of recent works underline the high potential of small-scale fishing activities for economic development (e.g. Cowx et al. 2004; Neiland & Béné 2006; Sugunan et al. 2007). Neiland & Béné (2006) attempted to address the lack of valuation for inland fisheries, and a number of studies confirm their substantial value. For instance, various attempts to value the Mekong fisheries have been reviewed by Hortle (2009), and Baran et al. (2007) estimated the commercial value of the Lower Mekong fisheries at between US$550 120 and US$1 796 560 per year at first landing. One of the major limitations of these studies is that they often account only for the monetary value of the catch on local markets. In fact, the actual value of these small-scale fisheries goes far beyond this market value, in particular through the critical role that the sector plays in food security, cash income and employment for resource-poor local communities in remote rural areas (e.g. Béné et al. 2009). Recreational fisheries are the dominant use of fish resources in inland waters in the North and South temperate zones, particularly Europe, North America and Australia. The sector is also experiencing explosive development in many transitional economies in Asia and Latin America and in a few countries in Southern Africa (Angola, South Africa, Zambia). The economic potential of recreational fisheries is very high. Direct income is generated from the sale of fishing licences, which may have to be paid to the owner of the fishing rights, whether this is a public or private entity. 
The sector also has a considerable secondary income-generating effect through producers and sellers of fishing equipment, bait providers, boat renters, guides, lodge owners, travel agencies, restaurants, boat constructors, producers of books, magazines, documentaries and digital information on sports fishing, and producers of stocking material. A number of ecosystem services are associated with inland water fisheries, as defined by Holmlund & Hammer (1999) and the United Nations 2004 Millennium Ecosystem Assessment. Fisheries management strategies should aim to conserve the full range of services if possible, although in many circumstances some will be awarded higher priority than others. Inland fisheries are characterized by a relatively low dependence on fossil fuels, so the carbon footprint of the sector is remarkably low compared with other food production systems. Fisheries use energy in three main ways: the manufacture of gear; movement to and from the fishing site; and preservation and post-harvest transport. Manufacture of gear: many of the gears used are made of locally derived materials, although the growing and widespread use of gill-nets and other gears made from artificial fibres does have some carbon cost. Movement to and from the fishing site: many fishers operate from the bank or in shallow waters and so do not need fishing craft; where craft are used, they are usually small hand-propelled canoes, or sometimes use sail. Post-harvest preservation and transport: fish products are conserved by a variety of means. Where electrical power is available, lake and river fishers use ice to conserve the catch on their journeys to market. Where power is not available, most of the artisanal post-harvest sector still uses traditional techniques such as sun-drying, salting and smoking for round fish, and fermented pastes and sauces for smaller fish. 
Capture fisheries harvest wild aquatic animals held in some form of common ownership, while aquaculture involves the active rearing of aquatic animals held in private ownership. There is a continuum of inland fishery systems using varying degrees of enhancement and management that fall between true wild capture fisheries and true aquaculture (figure 3).
Releasing fish spawned and bred in aquaculture systems into natural populations can add to total production and population abundance (Lorenzen 2008). However, such measures may impact negatively on the wild population through density-dependent responses and through introgression from hatchery stocks, which are often characterized by reduced genetic diversity and fitness (Lorenzen 2005). There are also issues relating to the possibility of disease transmission, although in many cases aquaculture stocks may be healthier than wild stocks. Strategies for stocking also vary according to the water body and the manner of stocking or enhancement. In some cases fish are stocked for almost complete recapture, as in seasonal irrigation reservoirs and water bodies that are considered culture-based fisheries. Elsewhere, in permanent water bodies and large reservoirs, stocking would have a minimal impact on overall fishery recruitment, and a strategy of stocking species that will breed in the water body and contribute to recruitment is favoured. The enhancement of fisheries usually involves some form of ownership over what were previously open-access fisheries. As a result there are often social problems with enhanced and culture-based fisheries in developing countries, relating to fishing rights and access. Furthermore, management of the water body may not prioritize fisheries, and so fishery or culture-based fishery production may not be optimal, or may even be severely impacted by such externally imposed factors as the draining down of irrigation water bodies. Aquaculture concessions granted to a user or user group may resolve access issues, but in some cases the concession may marginalize traditional users and the benefit may be limited to a few individuals. Capture-based and self-recruiting aquaculture are culture systems based on the use of broodstock, fingerlings or fry captured from the wild or recruiting naturally into the culture facility (i.e. 
there is no system of captive breeding). All aquaculture was originally based on wild stocks and was liberated from this dependence only by the development of artificial breeding techniques in the 1950s. Capture-based and self-recruiting aquaculture remain strongly dependent on the productivity of wild fish stocks and are viable in the longer term only where fishing pressure on the fry remains within the limits imposed by the ability of wild populations to compensate for the removal of early life stages through density-dependent processes. Aquaculture development is often promoted to mitigate real or perceived declines in inland fisheries and their contribution to rural livelihoods. However, fisheries and aquaculture are very different activities, and it is not usually possible simply to replace one with the other.
Climate change is likely to affect inland fisheries through several mechanisms. Higher temperatures reduce oxygen solubility in water while raising the oxygen and food demands of fish as their metabolic rates increase. Associated rises in gill ventilation rates can lead to increased uptake of aquatic pollutants, potentially rendering the flesh unfit for human consumption. Higher water temperatures can also favour the survival of parasites and bacteria. All these responses combine potentially to reduce fish survival, growth and reproductive success in both wild populations and aquaculture systems (Ficke et al. 2007). In addition, many species in temperate regions have characteristic temperature ranges in which they live and breed, and rises in temperature may displace them to higher latitudes, to be replaced by species preferring warmer waters. In rivers, increasing flows during the flood season will translate into more extensive and prolonged floodplain inundation, potentially increasing overall system productivity, including the fish component (Welcomme 1985; Junk et al. 1989). Longer, more extensive floods are likely to provide greater and more prolonged feeding opportunities for fish, and improved growth can favour survival and reproductive potential (fecundity). Changes to the timing of flows, however, have the potential to disrupt spawning behaviour (Welcomme & Halls 2001). The dry season is a period of great stress for many river fish species, arising from diminished feeding opportunities, poorer water quality and elevated risk of predation or capture. Fish survival during this period is therefore likely to be density-dependent. Increased precipitation and water availability during this period might favour fish survival and ultimately exploitable biomass, while drier conditions would have the converse effect (Halls & Welcomme 2004). 
The combination of reductions in river flow and sea level rise may change salinity profiles in river deltas and lead to greater upstream salinity intrusion. These changes may displace stenohaline (narrow salinity tolerance) species further upstream and increase the upstream range and biomass of euryhaline (wide salinity tolerance) species, including those that depend upon brackish-water environments to complete their life cycles. Perhaps the greatest impact will be the conversion of snow- and glacier-fed rivers to rain-fed rivers as the permanent ice in many mountain regions is eroded. This will change the hydrological characteristics of such rivers fundamentally, altering their seasonality and the evenness of their flow regimes. Careful consideration will have to be given to both planned and autonomous adaptive coping strategies pursued by the agricultural sector. Less predictable flooding patterns and reductions in dry season flows may force small-scale farmers to build makeshift levees to protect their crops from flood damage and to rely increasingly on surface water bodies to meet their irrigation needs. Planned adaptation may favour the construction of large-scale storage reservoirs, flood control embankments and irrigation schemes, with an associated increase in the withdrawal of water from aquatic ecosystems; these works impact negatively on the fisheries sector by obstructing fish migrations and diminishing dry season habitat availability and quality (Halls et al. 1998, 1999). Fisheries in Asia are very heavily exploited and have very little apparent room for expansion through better management. In Africa fishing pressure, although increasing, is still below the level experienced in Asia, so there may still be some potential for expansion. The economic value of small-scale fisheries in Africa could be doubled or tripled simply by improving post-harvest processing techniques. 
In Latin America, fisheries appear relatively less heavily exploited than in Asia, with few signs of fishing down at the community level, although some individual stocks are under pressure. Inland fish resources in Europe, North America and Australia are exploited more for recreational than consumptive purposes, and are often managed to meet conservation objectives (Cowx et al. 2010). As a result, production for food is declining. The significance of current reported catches is difficult to assess. It is assumed that actual catches have been at a maximum level for some time, although real increases are still occurring in some fisheries. Increases in reported catch are mainly because of improved reporting of hitherto unrecorded sources of inland fish, such as small-scale artisanal and subsistence yields, or yields from rice fields. It is impossible to predict at what level reported and actual catches will merge, if ever, although it is clear that present actual production exceeds the 10 million tonnes estimate by a large margin. Better understanding of the significance of inland fisheries resources may influence the direction of general development policies for aquatic systems, in particular in relation to further hydropower and irrigation investments. The greatest risk, particularly in rivers, coastal lagoons and estuaries and river-driven lakes, is modification of flow regimes by water abstraction and power generation, principally through damming. Climate change is likely to exacerbate the situation through adaptive strategies such as flood control and increasing demand for water for irrigated agriculture. The risks of losing catch are also increased by other forms of environmental damage, such as the draining of seasonal riparian wetlands and river channelization. 
The assumption that better identification of the role of inland fisheries in livelihoods and food security would result in the sector's needs being considered when planning new civil works on rivers has so far proved unjustified. As a result, losses of inland fishery production can be anticipated in many rivers, lakes and wetlands. One way to mitigate this loss is to develop improved fishery enhancements in the inland waters that remain after the present wave of modifications. Fishery enhancement was popular in the 1980s–1990s and achieved notable successes in increasing inland fish production in many countries. Unfortunately, current trends suggest that the use of public funds to support large-scale stocking is not acceptable in the existing financial climate, so the practice has declined in several countries (de Silva & Funge Smith 2005). Nevertheless, knowledge of the technique is still available and may well re-emerge as an option if food security becomes an issue. In summary, inland fisheries are an important source of cash and protein food, particularly in poorer countries where their products are readily available to the population. Yields at present are probably well in excess of 10 million tonnes per year, but the prognosis for the future is far from good, with many of the external drivers reducing the amount being caught from many wild fisheries. This will almost certainly result in issues of changing supply and availability to some rural areas which remain dependent upon inland fisheries as a food source.

One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050'. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. 
© 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Global aquaculture (figure 1) has grown dramatically over the past 50 years to around 52.5 million tonnes (68.3 million tonnes including aquatic plants) in 2008, worth US$98.5 billion (US$106 billion including aquatic plants) and accounting for around 50 per cent of the world's fish food supply. Asia dominates this production, accounting for 89 per cent by volume and 79 per cent by value, with China by far the largest producer (32.7 million tonnes in 2008). The rapid growth in this region has been driven by a variety of factors, including pre-existing aquaculture practices, population and economic growth, a relaxed regulatory framework and expanding export opportunities.

Global aquaculture production by region, 2008: (a) by quantity and (b) by value (excluding aquatic plants). Source: FAO (2010).
Aquaculture development in Europe and North America was rapid during the 1980s–1990s but has since stagnated, probably owing to regulatory restrictions on sites and other competitive factors, although as markets for fish and seafood these regions have continued to grow. The growth rate of aquaculture between 1970 and 2006 was 6.9 per cent per annum (FAO 2009a), although it appears to be slowing (averaging 5.8% between 2004 and 2008). This reflects the typical pattern, seen at the national level, of adoption followed by rapid growth, which then slows with increasing competition and other constraints. The highest relative growth rates between 2006 and 2007 were in countries with relatively low production, such as Lesotho (6450%), Rwanda (909.5%) and Ukraine (590.8%). Although these can be a useful indicator of new initiatives, smaller percentage growth in countries with already substantial production has a greater impact. For instance, 5.2 per cent growth in China represented 52.3 per cent of the total increase in global aquaculture supply for 2007. The second most important country in this respect was Vietnam, which contributed 16.7 per cent of the additional aquaculture production with a growth rate of 30.1 per cent (figures 2 and 3).

Average annual growth rate of all aquaculture production in terms of quantity over a 5-year period, calculated using the difference between mean values for the periods 2000–2002 and 2005–2007. Source: FAO (2009b).
Average annual growth rate of all aquaculture production in terms of value over a 5-year period, calculated using the difference between mean values for the periods 2000–2002 and 2005–2007. Source: FAO (2009b).
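The averaging procedure described in the captions above (comparing mean production in 2000–2002 with mean production in 2005–2007, five years apart) can be sketched as follows; the sample production series is invented purely for illustration:

```python
from statistics import mean

def avg_annual_growth(early: list[float], late: list[float], years_apart: int = 5) -> float:
    """Average annual growth rate implied by the change between two period means."""
    m0, m1 = mean(early), mean(late)
    return (m1 / m0) ** (1 / years_apart) - 1

# Invented production figures (thousand tonnes) for 2000-2002 and 2005-2007.
rate = avg_annual_growth([95, 100, 105], [145, 150, 155])
print(round(100 * rate, 1))  # → 8.4 (% per year)
```

Averaging each three-year window first, rather than comparing single years, damps out one-off events (disease outbreaks, market shocks) that can distort single-year growth figures.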
A small number of countries with substantive production experienced contraction in 2007, most notably Thailand, Spain and Canada. Reasons for this were mainly market and competitiveness related, although disease and one-off environmental events can also play a role in single-year figures. Overall, these reductions amounted to the equivalent of 1.6 per cent of global supplies (i.e. more than compensated by growth elsewhere). Excluding aquatic plants, 310 species were recorded by FAO as cultured in 2008. However, the top five species accounted for around 33 per cent of the output (19% by value), the top 10 for 53 per cent (45% by value) and the top 20 species for 74 per cent of production by volume (63% by value). Freshwater fish production is dominated by various species of carp, although tilapia and later pangasius catfish have become more significant (table 1). Coastal aquaculture primarily comprises whiteleg and, to a lesser extent, tiger shrimp, oyster, scallop and mussels, with Atlantic salmon as the leading intensively farmed marine fish.
Freshwaters were the source of 60 per cent of world aquaculture production in 2008 (56% by value), despite freshwater constituting only 3 per cent of the planet's water, of which only 0.3 per cent is surface water (figure 4). Of this production, 65.9 per cent was carp and other cyprinids, which are mostly cultured in ponds using semi-intensive methods (fertilization of the water with inorganic and organic fertilizers and supplementary feeding with low-protein materials). Salmonid farming (mainly rainbow trout in freshwater) constituted only 1.5 per cent, typically using ponds, concrete raceways and other types of tank that require higher throughputs of water to maintain good water quality. Stocking densities are typically two to five times as high as in semi-intensive ponds and fully formulated diets are fed. Species such as tilapia (7.6% of freshwater production) are cultured in a mix of systems, from extensive to highly intensive.

Aquaculture production by (a) output and (b) value for major species groups in 2008 (excluding aquatic plants). Source: FAO (2010).
Cage-based aquaculture in both freshwater lakes and rivers has flourished in many countries, although some are now regulating its use owing to concerns over environmental impacts. In Egypt, over 10 per cent of freshwater aquaculture production in 2005 came from cages in the River Nile; however, by 2006 almost 80 per cent of these had been removed (down from 12 495 to 2702). Rapid expansion of cage-based catfish farming in the Mekong is giving similar cause for concern but has not led to such a drastic regulatory response, although the expansion of pond farms is now apparent. In unregulated conditions, eutrophication from cage farms can impact on farms downstream, on other water uses and on ecosystems in general. Globally, Asia, and especially China, has the greatest freshwater aquaculture production in relation to land area, although some European and African countries are also significant (figure 5). The Americas in particular are notable for relatively low freshwater aquaculture production per unit area.

Mean aquaculture production from freshwater systems as a function of land area (kg km−2 yr−1) for the period 2005–2007. Source: FAO (2009b).
Coastal ponds and lagoons have been exploited in simple ways for fish, mollusc, crustacean and seaweed production for centuries. However, production has expanded and intensified over the past 30 years. In warmer countries, penaeid shrimps have tended to dominate brackish-water culture owing to their high value, short production cycles and accessible technologies. Production has increased almost exponentially since the mid-1970s (figure 6) and now accounts for about 58 per cent of aquaculture production from brackish water (72% by value).1 In more temperate climates, brackish-water fish species are the main crop, with varying degrees of intensification.

Total world production from culture of brackish-water species and of penaeid shrimp. Source: FAO (2010).
Further expansion of brackish-water aquaculture is possible, especially in relatively unexploited regions of Africa and Latin America. However, a strengthening environmental lobby, as well as competition for land resources in some areas, is likely to limit developments of the kind seen in some Asian countries. Coastal aquaculture using onshore tanks has developed in some areas (e.g. South Korea, Spain, Iceland), usually where other types of aquaculture would not be possible. Most use pumped water that passes through the tanks once before being discharged to the environment. However, an increasing number treat and reuse the water flow, providing greater isolation from the environment and hence biosecurity. For marine fish species with mid- to high-value, floating cages have proved the most cost-effective production system across a range of farm sizes and environments (as determined by conventional financial appraisal; Halwart et al. 2007). The open exchange of water through the nets replenishes oxygen and removes dissolved and solid wastes. Most rely on feeding either complete diets or, for some species, trash fish. Cage units can be sized and arranged flexibly to meet the needs of the farm. Expansion is straightforward by increasing cage volume or number of units. Larger cages, especially in more exposed locations, become difficult and costly to manage with manual labour, so a range of specialist service vessels and equipment has been developed, especially in the salmon sector, to overcome such constraints (figure 7). Economies of scale supported by mechanization have helped to reduce production costs substantially.

Development of production volume (red) of Atlantic salmon and rainbow trout in Norway and number of employees (blue), illustrating trends in the industrialization of production systems. Source: Fiskeridirektoratet (2008).
The cultivation of marine molluscs (mainly bivalves) and seaweed using simple methods has a long history in many countries and has become widely established as a coastal livelihood activity involving high labour inputs. Since the 1990s, however, there has been significant upscaling of production and the introduction of specialized equipment allowing larger sites and greater labour efficiencies. Total output of molluscs from coastal waters in 2008 was 12.8 million tonnes valued at US$12.8 billion. A further 15.7 million tonnes of seaweeds were cultivated in coastal waters in 2008 valued at US$7.4 billion. The development of aquaculture depends on the interaction of a wide variety of factors as summarized in table 2.
It is instructive to study individual aquaculture industries in relation to these factors. The primary factors are market demand (and competition), the availability of environmental resources, the development or transfer of appropriate technology and a favourable business environment that allows entrepreneurs to profit from their investment in the sector. However, there are many examples of failed development, especially in Africa and parts of Latin America, owing to the lack of well-developed markets or the inability to reach them because of infrastructure issues, including the lack of adequate quality controls for export. Weak institutional systems and lack of investment have also been important constraints in many countries. The aquaculture sector overall is highly diverse and fragmented, ranging from smallholder ponds in Africa providing a few kilos of fish per year to international companies with annual turnover in excess of US$1 billion. An estimated 9 million people were engaged in fish farming in 2006 (FAO 2009a), around 94 per cent of them in Asia. Average output per person per year was 5.96 tonnes, but this varied from 0.57 tonnes in Indonesia, where aquaculture systems tend to be labour intensive, to 161.22 tonnes in Norway, which is highly industrialized (table 3).
For many participants, aquaculture is one of a more limited range of economic activities available in the specific coastal or rural location and is particularly important in countries such as Bangladesh, India and Vietnam, as both subsistence and cash crop. The number of small–medium enterprises and sole traders in Europe is also high, with 13 139 companies with an average of 2.6 full-time employees and turnover of around €270 000 (Framian 2009). However, trends towards industrialization and consolidation are strong for some species, especially commodity products that are internationally marketed. For instance, four companies now account for 70 per cent of Scottish salmon production and two for over 50 per cent of industry value. There are critical linkages between market chain structure and viable company size. In Europe, the smallest companies tend to market directly to consumers and local hotels and restaurants, gaining a valuable premium on normal wholesale market prices. This is not an option for slightly larger producers who would saturate local markets. Scale economies become more important when producers are competing in larger markets and when there are minimum purchase quantities imposed by much larger buyers. The formation of producer cooperatives has sometimes enabled smaller companies to work with more consolidated market chains, most frequently when consolidation of sites is not physically possible. International market chains are also impacting on previously small-scale producers in Asia. For instance, consolidation is apparent in the Vietnamese catfish industry, mainly driven by the implementation of western quality standards, initially in processing, but increasingly stretching into production. Elsewhere in Asia, complex chains involving many small companies still exist. Efficiency comes through specialization and competition on flexibility and quality of service. 
A key example of this is the production of live marine fish for restaurants and specialist markets in Korea, Hong Kong, China and other parts of Southeast Asia where values are very different from those of the Western markets. While beneficial in many ways, the growth of aquaculture is increasing pressure on natural resource inputs, notably water, energy and feed, although sites in a broader sense are also an issue. There is also the question of the use of, and impact on, environmental services, particularly for the dispersion and treatment of farm effluents. Aquaculture systems are very diverse with respect to their dependence on these resources (table 4).
Freshwater farming uses a range of systems, from static water ponds through to high flow-through tanks. Most involve intake of water from the environment and a post-production effluent stream, so that water consumption does not equate to water intake. However, the quality of discharge water is usually diminished and water can be lost through evaporation and seepage. As a worst case, pond systems in tropical countries can lose 20 per cent of their volume per day (Beveridge 2004). However, pond aquaculture can also contribute to water management as it acts to catch and store surface water (rain and run-off) that might otherwise be lost from local agroecosystems or which might cause damaging floods (e.g. in the Czech Republic). Implementation of water reuse and recirculation systems can reduce consumption substantially, although usually at the cost of higher energy inputs. The majority of freshwater aquaculture is pond based, using semi-intensive methods that rely on controlled eutrophication for their productivity, with a wide variety of organic and inorganic fertilizers as well as supplementary feedstuffs. The production of feed materials for aquaculture, particularly grain and similar crops, incurs additional freshwater use (up to 3000 m3 tonne−1 according to Verdegem & Bosma 2009). Solid wastes produced from such systems often have a use as fertilizers for other crops. Dissolved nutrients can often be lost through necessary water replacement regimes and sometimes cause problems in areas with extensive aquaculture production or with otherwise oligotrophic or mesotrophic environments. Better optimization of freshwater production systems with respect to water and feed management could triple production without increasing freshwater usage, according to Verdegem & Bosma (2009). Given the presently increasing pressures on freshwater supplies, future aquaculture development might be expected to utilize more abundant brackish and sea water resources. 
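The scale of the worst-case losses cited above can be made concrete with a simple water-budget sketch; the pond dimensions are hypothetical, and only the 20 per cent daily loss figure comes from the text:

```python
# Illustrative water budget for a static tropical pond.
# The 20%/day worst-case loss to evaporation and seepage is from the text;
# the pond dimensions are hypothetical examples.

def daily_makeup_water(area_m2: float, mean_depth_m: float,
                       loss_fraction: float = 0.20) -> float:
    """Return the make-up water (m^3 per day) needed to hold pond volume constant."""
    volume_m3 = area_m2 * mean_depth_m
    return volume_m3 * loss_fraction

# A 1 ha pond (10 000 m^2) with 1 m mean depth holds 10 000 m^3,
# so a 20% daily loss implies 2000 m^3 of make-up water per day.
print(daily_makeup_water(10_000, 1.0))  # → 2000.0
```

Figures of this magnitude illustrate why reuse and recirculation systems, despite their higher energy cost, can cut consumption so substantially.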
However, environmental issues are no less complex. The energy costs, and the linked implications for carbon emissions, of aquaculture activities are receiving greater attention. A distinction needs to be drawn between direct energy use (e.g. fuel and electricity consumed directly in the production process) and more comprehensive approaches to auditing energy inputs. For instance, these may include consideration of industrial energy (energy used in the manufacture and supply of equipment, feeds and other inputs) or embodied energy, which also takes into account photosynthesis and sunlight energy or calorific values. Another consideration is whether the energy sources are renewable. Life cycle analysis (LCA) carried out by Tyedmers & Pelletier (2007) found that energy dependence correlated with production intensity, mainly because of the energy input in the production and delivery of feed (Grönroos et al. 2006). More variable is the energy required for other on-farm activities, which can range from virtually zero up to about 3 kWh kg−1. For land-based farms, most of the power is likely to be provided by electricity from the central grid, while cage-based farms rely mainly on diesel or other fossil fuels. Table 5 shows typical embodied energy levels and ratios for different production systems, with seaweed and mussel culture requiring much more modest input levels.
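The distinction between direct and industrial energy can be made concrete with a toy audit; every figure below is a hypothetical placeholder, not a measurement from the sources cited above:

```python
# Toy energy audit per kg of farmed fish, separating direct energy
# (fuel/electricity used on-farm) from industrial energy (embodied in
# feed and equipment manufacture). All numbers are hypothetical.

direct_kwh_per_kg = {"pumping": 1.0, "aeration": 0.8, "vehicles": 0.4}
industrial_kwh_per_kg = {"feed production and delivery": 4.5, "equipment": 0.6}

direct_total = sum(direct_kwh_per_kg.values())            # on-farm use only
audit_total = direct_total + sum(industrial_kwh_per_kg.values())

print(round(direct_total, 1))  # → 2.2 (within the ~0-3 kWh/kg on-farm range cited)
print(round(audit_total, 1))   # → 7.3
```

Note how the audit total is dominated by the feed term, consistent with the LCA finding that energy dependence tracks production intensity chiefly through feed production and delivery.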
Aquaculture, especially in coastal zones, is frequently in competition with other uses of the resource that can often take precedence (e.g. tourism and port developments; figure 8). However, there are also cases where aquaculture has outcompeted other users, such as shrimp farming, which has come under scrutiny owing to over-exploitation and destruction of mangrove resources, as well as other environmental impacts and serious disease problems. The wider ecosystem value of these environments is now recognized and suitable protection given in most regions, although much remains to be done with respect to rebuilding lost area. More recent moves by the shrimp industry inland have also caused problems with saline intrusion into agricultural soils.

Mean production quantities from coastal aquaculture systems as a function of coastline length (kg km−1 yr−1) for the period 2005–2007. Source: FAO (2009b).
The development of marine fish farming in cages has also raised concerns over wider environmental, ecosystem and biodiversity impacts. At modest scales of development, these are hard to detect apart from localized changes to sediments beneath the cages. Larger scale development has the potential for wider impacts owing to the release of nutrient or chemical wastes directly into the environment, or the effects of escaped fish or disease transfer on wild populations. The most immediate problem is often conflicts between cage-based farming and other interests, such as boating and navigation, recreation, preservation of seascape scenery and protection of wildlife. In Europe, these issues are considered during the licensing process or, increasingly, through the development of coastal zone plans. Similar issues apply to coastal pond and pump-ashore tank systems. Recirculated water systems overcome a number of these constraints, but except for more specialist applications have so far been unable to compete financially.

Most mollusc culture requires no feed inputs, and the majority of freshwater fish production utilizes low-protein, grain-based supplementary diets and organic fertilizers. Much of the crustacean farming, most marine species and other intensive fish aquaculture require a higher quality diet, usually containing fish meal and often fish oil. Some aquaculture, notably tuna fattening and much of the marine cage culture in Asia, relies directly on wild-caught small pelagic fish with relatively low market price. The process transforms fish protein from low to high value for human consumption. However, the efficiency of this is both an ecological issue and one of social justice (e.g. consumers of farmed salmon and shrimp may effectively outcompete the rural poor for this fish resource; Tacon & Metian 2009). Fish meal has also traditionally been used in intensive livestock rearing, especially pork and poultry, so the issues are not unique to aquaculture. 
However, it is aquaculture that is taking a growing and majority share of this resource, as substitutes are more easily found for livestock and poultry. Wild-caught supplies of fish meal and oil have varied at around 5–6 million and 1 million tonnes annually for at least the past 20 years, suggesting that such levels are likely to be sustained in the future. However, in 2008, approximately 90 per cent of the fish oil available worldwide, and 71 per cent of the fish meal, was consumed in aquaculture practices (Tacon & Metian in preparation). Unless alternative higher value markets develop, aquaculture will continue to consume the majority of fish meal and oil produced, but this will not be sufficient to meet ever-increasing demands for aquafeed ingredients (figures 9–12).

Estimated global compound aquafeed production in 2008 for major farmed species (as a percentage of total aquafeed production, dry feed basis). Source: Tacon & Metian (in preparation).
Estimated global use of fish meal and oil by the salmon farming industry projected to 2020. Blue, total feeds used; red, mean % fish meal; green, mean % fish oil. Source: Tacon & Metian (in preparation).
Feeds for herbivorous and omnivorous species (carps and tilapias) often contain fish meal and sometimes fish oil, although this is not essential on purely nutritional grounds. However, rapidly expanding culture of carnivorous species such as cobia and pangasius catfish could increase the pressure on fish meal and oil supplies. An overarching factor that has significantly moderated demand for fish meal and oil is the improvement in food conversion efficiency as feeds and feeding technologies improve. Up to 25 per cent of fish meal is now obtained from fish processing waste, and ingredient substitution is also increasing the efficiency of fish meal and oil utilization. In the wild, the conversion efficiency between one trophic level and the next (e.g. carnivorous fish eating plankton-feeding fish) is commonly taken as 10 : 1; for farmed fish, the equivalent measure is the fish-in fish-out (FIFO) ratio. Between 1995 and 2006, input : output ratios improved for salmon from 7.5 to 4.9, for trout from 6.0 to 3.4, for marine fish from 3.0 to 2.2 and for shrimp from 1.9 to 1.4. Herbivorous and omnivorous finfish and some crustacean species showed net gains in output, with ratios in 2006 of 0.2 for non-filter-feeding Chinese carp and milkfish, 0.4 for tilapia, 0.5 for catfish and 0.6 for freshwater crustaceans (Tacon & Metian 2008). Calculations of FIFO for the global aquaculture industry include 0.7 (Tacon & Metian 2008), 0.63 (Naylor et al. 2009) and 0.52 (Jackson 2009). Overall, the finite supply of fish meal and oil is not expected to be a major constraint, but demand for alternative feed materials will increase, in turn placing greater pressure on the wider agro-feed system. Looking forward, there is a strong focus on improving the efficiency of resource utilization through management and integration, or through more technological solutions available from advances in engineering and bio-science. 
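FIFO-style ratios of the kind quoted above can be computed from feed composition and the feed conversion ratio. The sketch below follows the general form of the Jackson (2009) approach; the diet figures are illustrative inventions, and the meal/oil yields from wild fish are values commonly used in the literature rather than numbers taken from this review:

```python
def fifo(fcr: float, meal_incl: float, oil_incl: float,
         meal_yield: float = 0.225, oil_yield: float = 0.05) -> float:
    """Fish-in fish-out ratio: wild fish used per unit of farmed fish produced.

    fcr        -- feed conversion ratio (kg feed per kg fish produced)
    meal_incl  -- fishmeal fraction of the diet
    oil_incl   -- fish oil fraction of the diet
    meal_yield, oil_yield -- fractions of a wild fish recovered as meal and oil
                             (commonly cited values, assumed here)
    """
    return fcr * (meal_incl + oil_incl) / (meal_yield + oil_yield)

# Illustrative salmon-like diet: FCR 1.25, 25% fishmeal, 15% fish oil.
print(round(fifo(1.25, 0.25, 0.15), 2))  # → 1.82
```

Ratios below 1, as reported for carps, tilapia and catfish, simply mean the diet contains so little marine ingredient that each kilogram of wild fish reduced to meal and oil supports more than a kilogram of farmed output.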
Both approaches will be important and influenced by wider social and economic factors including globalization, urbanization, factor prices (especially energy) and consumer demand. The integration of aquaculture, fisheries, agriculture and other productive or ecosystem management activities has an integral role to play in the future of the aquaculture industry. Techniques include ranching, integrated agriculture–aquaculture (IAA), integrated multi-trophic aquaculture (IMTA) and links with renewable energy projects. Integration is a key element of the ‘ecosystem approach to aquaculture (EAA)’, which ‘is a strategy for the integration of the activity within the wider ecosystem in such a way that it promotes sustainable development, equity, and resilience of interlinked social and ecological systems’ (Soto et al. 2008). Although aquaculture and capture fisheries are often seen as separate activities linked only in their market destinations, a number of important system linkages exist between these forms of aquatic production. These include interdependence for supplying fish products in aquaculture feeds, the role of aquaculture stocks in supporting and enhancing capture fisheries and the development of managed ecosystem approaches connecting aquaculture and fisheries in single spatial units: typically lakes and floodplain systems, peri-urban zones, coastal margins and fjords or sea lochs. Aquaculture-based fisheries enhancements comprise a diverse set of resource systems that combine attributes of aquaculture and fisheries. Most commonly, enhancements involve releases of cultured fish into open waters with the aim of enhancing fisheries catches directly or helping to rebuild depleted fish stocks. Examples include large-scale culture-based fisheries for major carps in Asian reservoirs, Pacific salmon ranching, scallop enhancements in Japan and New Zealand, and many systems that operate at smaller scales. 
Enhancements may also involve habitat and environmental modifications with the dual aim of increasing the productivity of wild or released cultured stocks and extending private ownership over such resources. Examples include traditional systems of culturing animals that recruit into privately owned and managed coastal ponds or rice fields and recent innovations such as ‘free fish farms at sea’ where fish are habituated to feeding stations.2 Major advances in the understanding of aquaculture-based enhancement fisheries systems and in underlying science areas have been made over the past decade. Integrative frameworks have been developed that allow a rapid assessment of enhancement potential based on the consideration of ecological, genetic, technological, economic, stakeholder and institutional attributes (Lorenzen 2008). Quantitative assessment tools can be used to evaluate the likely fisheries benefits of enhancements prior to and during the development of enhancement technologies (Lorenzen 2005; Medley & Lorenzen 2006). Robust genetic management principles have been defined for different types of aquaculture-based enhancements (Utter & Epifanio 2002). Understanding of domestication effects on fish behaviour has been applied to developing increasingly effective ways of conditioning cultured fish to improve their post-release survival and recapture (Olla et al. 1998). The economics of fisheries enhancements and, in particular, the institutional arrangements that can facilitate the emergence of such systems and sustain them over extended periods of time are now well understood (Arnason 2001; Lorenzen 2008). Aquaculture-based fisheries enhancements can pose substantial ecological and genetic risks to wild fish stocks. In production-oriented enhancements, such risks can be minimized but not fully avoided by separating the released cultured and wild stocks ecologically (e.g. by release and habituation in habitats not used by interacting wild fish) and genetically (e.g. 
by maintaining captive brood stock and releasing sterile fish). Selective harvesting of released cultured fish may further reduce impacts on wild stocks where this is technically possible (i.e. where fishing can distinguish cultured from wild fish). Environmental modifications and feeding could lead to further impacts on wild stocks and the natural ecosystem. In initiatives aimed at rebuilding wild stocks, the aim is for cultured fish to interact with wild fish, and particular care must be taken in stock and genetic management to avoid detrimental impacts on the depleted or even endangered wild stock. Captive breeding and supplementation programmes can aid conservation and restoration of such stocks, but the management strategies in this case are very different from those employed in production-oriented enhancements. Aquaculture-based fisheries enhancements are now successfully implemented in over 27 countries worldwide, involving over 80 species and yielding an estimated 2 million tonnes of fisheries products. It is therefore likely that interest in enhancements and demand for research and technology development in this area will increase. IMTA systems can be described as culture systems that use species from different trophic levels grown in combination within the same water body or through some other water-based linkage (for land-based systems). The scale need not be large, provided the layout and quantities of the species being grown are compatible. In all cases water is the nutrient transport vector for dissolved and particulate wastes, the releases from one species acting as food for other species at a lower trophic level. The combination of species from different trophic groups creates a synergistic relationship which, in turn, acts as a bioremediation measure. In a perfect IMTA system the processing of biological and chemical wastes by other species would make the whole production cycle environmentally neutral. 
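The bioremediation idea behind IMTA can be illustrated with a toy nutrient budget: the fed species retains part of the nitrogen in its feed, and co-cultured extractive species capture part of what is released. All fractions below are illustrative assumptions, not measured values for any real system.

```python
def imta_nitrogen_budget(feed_n_kg, fish_retention=0.35, extractive_capture=0.40):
    """Toy nitrogen budget for a simple IMTA layout (all fractions assumed).

    feed_n_kg          -- nitrogen added via feed (kg)
    fish_retention     -- fraction of feed N retained by the fed species
    extractive_capture -- fraction of released N captured by extractive
                          species (e.g. shellfish, seaweeds)
    Returns (released_n, captured_n, net_discharge_n) in kg.
    """
    released = feed_n_kg * (1 - fish_retention)
    captured = released * extractive_capture
    return released, captured, released - captured
```

Under these assumed fractions, 100 kg of feed nitrogen results in 39 kg reaching the wider environment rather than 65 kg, which is why the approach is described as a bioremediation measure; a ‘perfect’ system would drive the net discharge towards zero.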
There are IMTA systems at or near commercial scale in China, Chile, Canada, Ireland, South Africa and the UK, and ongoing research in many other countries. Such systems face a number of challenges, particularly in selecting species that integrate well but that also have sufficient economic value to attract investment. The internalization of environmental costs within the systems (environmental economics) could substantially alter this (Soto 2009), as could the development of new products from marine species (Barrington et al. 2009). Other constraints include existing regulations restricting further aquaculture development and the potential for unintended interactions between systems. However, the potential of the approach in addressing sustainability objectives is clear. IAA is most common in developing countries, where it provides a means for rural systems to diversify and maximize output. The approach differs from monoculture, which is often too risky for resource-poor farmers. Integrated systems benefit from the synergies among the different components, and their diversity in production results in relative environmental soundness (Prein 2002). IAA systems range from simple integration to multi-component integrated systems using commercial fertilizers and feeds. Examples of IAA include the culture of fish in rice fields or the use of livestock manure from terrestrial farming for both feed and fertilizer in fish ponds. Integration can be categorized into: (i) polyculture (multiple species co-cultured); (ii) sequential (waste flows directed sequentially between culture units); (iii) temporal (replacement of species within the same holding site to benefit from waste generated by preceding species); and (iv) mangrove integration (using mangroves as biofilters) (Troell 2009). Dey et al. 
(2010) evaluated the impact of a WorldFish-supported programme that introduced IAA to smallholders in Malawi and found that adopters of the technology realized an 11 per cent rise in total factor productivity (TFP), 35 per cent higher technical efficiency scores, 134 per cent higher farm income per hectare and 60 per cent higher income overall compared with non-adopters. Non-adopters had higher income from off-farm activities, but adopters had higher overall returns to family labour and thus higher household incomes (almost 1.5 times higher). The authors suggest this illustrates the potential for IAA to contribute to poverty reduction and livelihood improvements in Malawi and probably in other countries with similar conditions. Where IAA is practised on a larger scale and with commercial products, further challenges have emerged. For instance, quality can be variable, with concerns about contamination (e.g. with pesticides where irrigation water is used) or off-flavour taints, which inhibit acceptance and certification, particularly for international markets (Little & Edwards 2003). To date, the benefits of IAA have centred mainly on food production, but more efficient use of freshwater and energy may become equally important. In developed countries, there is growing interest in small-scale aquaponic systems, which combine freshwater aquaculture in a recirculated system with hydroponic horticulture, usually herbs and salad vegetables. The horticultural crop is mostly fertilized by the nitrogen waste from fish culture. Owing to scaling issues, these systems have not proved attractive commercially, but they are suitable for ‘back-yard’ food production, which is emerging as a candidate strategy for increasing sustainable food production. Substitution of the protein (essential amino acids) and other nutrients derived from fish meal is nutritionally straightforward, and considerable advances in this field have been made over the past 30 years. 
For protein supply, the issue is largely one of economics and formulation, as well as continual assessment of potential novel sources of protein (such as the biomass derived from bioethanol production, cereal glutens, microbial proteins and improved oilseed and legume meals). Even for carnivorous species (high dietary protein levels, sensitivity to the palatability of the feed), up to 75 per cent of the fish meal in a standard feed can readily be replaced (Bell & Waagbo 2008). For omnivorous and herbivorous species, fish meal is unnecessary and is presently used only because it is economically viable to do so. There is a general question of whether feeding carnivorous species on ‘vegetarian’ diets is ethical or affects fish welfare. In addition, there is evidence that soya bean induces enteritis in Atlantic salmon, and it is possible that plant proteins in general (which contain wide ranges of nutrient and non-nutrient fractions to which fish are not normally exposed) may have impacts on fish welfare. Substitution of fish oil is considerably more problematic, as the n − 3 HUFA (highly unsaturated fatty acids; EPA and DHA) supplied by fish oil, and essential in the diets of truly marine species, are not commercially available from any other source at present. Neither is it desirable to reduce the n − 3 content of farmed species, given the human health benefits. Considerable progress has therefore been made towards substitution of most or all of the fish oil during the growth phase before introducing a finishing diet, rich in fish oil, that ‘washes out’ the n − 6 fatty acids accumulated during growth. This results in a high n − 3 HUFA final product that resembles wild individuals of the same species. 
For future supply of HUFA that can be incorporated in aquafeeds, some microorganisms (bacteria and algae particularly) have shown promise, and HUFA yields will undoubtedly be increased through conventional selection, improved culture techniques and/or the use of genetically modified organisms. It may even be possible to combine production of useful protein biomass and HUFA in this way (Olsen et al. 2008). One further potential source of feed protein and oil is krill (a collective name for a group of approximately 80 species of small, pelagic, shoaling crustaceans). The nutritional issues of product quality (rapid spoilage) and fluorine content have been successfully addressed and viable methodologies for capture and processing developed. CCAMLR3 estimates a total allowable catch that would provide approximately 1 million tonnes of krill meal and 32 000 tonnes of krill oil per year from Antarctic waters. However, aquaculture faces strong competition for the krill resource from increasing use of high-grade krill for direct human consumption and production of pharmacological-grade krill oils. The potential impact on marine food webs should also be seriously considered. The bulk of aquaculture production still comes from wild or recently domesticated stocks. A lack of genetic management and poor hatchery procedures, particularly but not only in developing countries, have significantly degraded the performance of many farmed species through inbreeding, genetic drift and uncontrolled hybridization. The reduction in performance and viability means that hatchery stocks often need to be routinely replaced by wild fish or better managed stock from other farms. In contrast, properly managed selective breeding programmes have shown continual improvements in performance and quality. 
Atlantic salmon breeding companies have shown more than 100 per cent improvement in growth performance in around six generations, with significant improvements in disease resistance and delays in the onset of sexual maturation. The vast majority of farmed Atlantic salmon eggs and smolts are now sourced from such breeding companies and similar approaches are now being introduced in some other species. Selective breeding can improve the year-on-year performance of farmed fish stocks for a wide range of traits, but it is still often necessary to include some other techniques that enable these fish to achieve their full potential. Sexual maturation in production fish can significantly reduce the final yield, as maturing fish can become aggressive, stop growing, lose condition and become more susceptible to disease. In many species one sex or another is preferred, e.g. because it grows faster or is still immature at harvest size. In salmonids, females usually mature later than males. In rainbow trout being grown to portion size (more than 300 g), all-female production is now almost universally used in Europe as females are still immature at harvest. In tilapia, all-male production is preferred: even though the males mature, the lack of females avoids the unwanted production of fry common in mixed sex on-growing systems. In some species and under certain conditions, any sexual maturation is detrimental. This can be avoided by the production of sterile fish using chromosome set manipulation techniques that produce animals with three sets of chromosomes, known as induced triploidy. This approach is now used in the production of large rainbow trout (more than 3 kg) which continue to grow and remain in prime condition. Triploidy is also widely used for the production of ‘all-year-round’ oysters. Transgenic technology has been applied to a number of fish species in recent years, although mostly for research. 
Recent studies in salmonids show that the spectacular improvements in growth seen when growth hormone gene constructs were incorporated into slow-growing wild strains were not repeated when the same constructs were incorporated into fast-growing domesticated stock (Devlin et al. 2009). This suggests that the same improvement in growth could be achieved using selective breeding techniques, which have the advantage of selecting across a range of commercial traits, raising the overall performance of the strain as well as maintaining its genetic integrity. Transgenic strains are by necessity derived from a small number of individuals, making further improvement in other commercial traits less likely. In the EU, the high level of public concern about GM technology suggests that the widespread adoption of transgenic fish for a single trait such as growth performance, even if it were licensed, would meet with consumer resistance. Disease has proved a major constraint to efficient production in some intensive aquaculture systems. Major improvements in the understanding of the aetiology and epidemiology of fish diseases have been made in recent years, and aquaculture producers in many countries have dramatically improved their husbandry practices, with greater focus now on fish welfare. Control of many serious infectious diseases has been achieved through new medicines and vaccines, and this is especially true for bacterial diseases. However, new disease problems are emerging and previously rare diseases are becoming much more prevalent, so continued vigilance and solution development are required. Vaccines have been very effective for bacterial fish pathogens where there are resources to develop them, but success against viral diseases has been more limited. Nevertheless, fish viral diseases were among the first to be tackled using recombinant DNA technology, specifically for infectious pancreatic necrosis, and subsequently direct DNA vaccination, which appears very promising. 
As this involves a transfer of genes, there are significant issues of safety and consumer acceptance to be addressed. Another approach showing promise is the use of proteomics and epitope mapping for the identification of vaccine antigens and the subsequent development of peptide vaccines. It is hoped that this approach might be suitable against parasites such as salmon lice. Further methods include the use of virus-like particles, which have reportedly been used against grouper nervous necrosis virus, or recombinant viral proteins produced in yeast (Renault 2009). For the moment, new therapies developed using genomic tools appear some way off, but some potential has been demonstrated using dsRNA for disease protection and RNAi-based gene therapies in shrimp (Renault 2009). Antimicrobial peptides are also being studied as potential therapeutants. Aquaculture diets are also under scrutiny with respect to their potential for delivery of immunostimulants and a better understanding of the interactions between gut microflora, pathogens and micronutrients, including probiotic effects (Gatesoupe 2009). With respect to the engineering of culture systems, aquaculture largely takes and adapts technology from other sectors, such as fisheries, water treatment or offshore oil. However, as the sector grows, more specialized equipment develops, such as the well boats now employed by the salmon industry. Of particular interest for reducing pressure on water resources and minimizing impacts on sensitive freshwater or coastal environments are recirculated aquaculture systems (RAS) and offshore cage technology. RAS culture systems are typically land-based, using containment systems such as tanks or raceways for the fish. A percentage of the water is passed from the outflow back through the system following treatment and removal of wastes. 
The level of waste treatment and water reuse depends largely on the requirements of the fish, the environmental parameters and the technology available. Reusing water gives the farmer a greater degree of control over the environment, reduces water consumption and waste discharge and enables production close to markets (Sturrock et al. 2008). Owing to relatively high capital costs, high energy dependencies and more complex technology, RAS is largely restricted in its use to higher value species or life stages (especially hatcheries where control over environmental conditions is more critical and unit values higher). However, it could become a more competitive approach if economic factors change. Moving systems further offshore removes a number of the challenges faced by near shore systems such as visual impacts, local environmental impacts and space constraints. In most cases, predation issues and disease risks could also be substantially reduced. Expansion of the offshore industry would allow increases in the scale of project and could therefore improve efficiency as well. Competition with other interests such as tourism and inshore fisheries might be reduced and waste discharges would be more readily diffused. However, other problems and risks associated with intensive cage-based aquaculture would remain or even increase. There is no internationally agreed definition of offshore cage aquaculture. In Norway, sites are classified according to significant wave height, whereas in the USA offshore aquaculture is defined as operations in the exclusive economic zone from the three mile territorial limit of the coast to 200 miles offshore (James & Slaski 2006). In general, offshore farming can be characterized as more than 2 km from shore, subject to large oceanic swells, variable winds, reduced physical accessibility and requiring mostly remote operations including automated feeding and distance monitoring. 
For these reasons, offshore aquaculture systems need to be robust structures and associated systems that are able to function with minimum intervention in a high-energy environment (Sturrock et al. 2008). There are also substantial issues over staff safety, which increase costs over near-shore systems. The large size required and the amount of new technology mean that offshore cage farms will have large capital requirements, which will restrict use until farms and companies reach a scale of operations where offshore investment becomes feasible. There are signs that this is starting to happen with Marine Harvest, the largest salmon farming company, which has announced an intention to apply for and develop offshore sites. This is for salmon farming, but several species have been promoted as potentially suitable for offshore farms (on the basis of biology and economics), with cobia perhaps receiving the most interest and investment. Advances in information and communications technology are benefiting the aquaculture industry with improved monitoring and control systems and better real-time information for managers. The development of micro-sensors combined with greater sophistication in electronic tags is opening up possibilities for data collection from individual fish within an aquaculture environment. Particularly when combined with genomic tools, this is a potentially powerful research approach and may also play a role in management feedback (Bostock 2009). A notable development in the British trout sector is the linking of data from many farms to provide both a benchmarking tool for farm management and stock performance, and data for real-time epidemiological modelling. This is based on changes in mortality patterns reported by the farms, their geographical location and basic environmental data such as water temperatures. Such tools can potentially provide early warning of disease outbreaks in the industry and allow precautionary actions to be put in place. 
All forward projections anticipate a need for increased supply of fish protein to meet the health needs and general aspirations of societies. Furthermore, this will need to be at affordable levels in relation to income and other proteins. As with terrestrial animal proteins, production of fish protein is more ecologically expensive than production of plant protein owing to the higher trophic level, although some systems (e.g. enriched polyculture ponds) compare very well. Bivalve shellfish should also not be overlooked as an animal protein already well ahead on sustainability criteria. With respect to fisheries and aquaculture, it may be helpful to break the market down into commodity products that are used in a wide range of food presentations and outlets (such as whitefish, salmon, tuna and prawns), and products that are differentiated through distinctive attributes and that have both smaller production and market bases. Bulk supply is most likely to come through growth in the globalized commodity products based on economies of scale, while growth in the more specialist products would come through diversification of products and production systems. Underlying the development of sustainable aquaculture of all types, but especially of commodity products, is the need to improve the basic conversion of feed materials into edible fish flesh and to minimize the utilization and conversion of premium resources. This involves species selection, production systems, animal genetics, good health management and optimized feed and feeding. These are also linked to some extent through the developing understanding of animal welfare, which also reaches into other physiological and environmental interactions. The interactions of aquaculture with the environment, with respect to both goods and services, are also critical and need to be evaluated in a rational way that allows environmental services to be used but not over-exploited or degraded. 
At the policy level, important questions exist about the priority given to conserving the environment versus exploiting natural resources for food production. While richer nations in Europe may be able to offset reduced food production by increasing imports, the environmental impact is transferred to other countries where options or controls are more limited. Imposing high environmental standards on both local production and imports would encourage technology development and uptake, although most likely at the cost of increased food prices. With the market of central importance to the direction of future development, there is growing momentum to educate and influence market demand to play a more responsible role in shaping future production systems. Many campaign groups are active on specific issues, which is at least stimulating debate and further developments. Most notably, there is now a clear trend towards the establishment of various types of standards that can be measured, monitored and certificated by independent bodies to provide producers with clear guidelines, and consumers and market chain participants with confidence in the environmental or social provenance of the product. The development of appropriate standards can, however, be challenging. Within aquaculture, there are now many initiatives, perhaps most significantly GLOBALGAP,4 a private-sector, business-to-business certification scheme focusing on food safety, animal welfare, environmental protection and social risk assessment standards. This now has certification schemes for shrimp, salmon, pangasius and tilapia and is developing a standard for aquaculture feeds. While GLOBALGAP has strong take-up, it does not involve a specific consumer label, such as ‘Friend of the Sea’,5 ‘Freedom Foods’6 or various organic labels. So far, aquaculture products have not had a consumer label with the degree of recognition of the Marine Stewardship Council mark for sustainable capture fisheries. 
This is expected to change with the formation of the Aquaculture Stewardship Council,7 which is taking forward a long programme of stakeholder dialogues organized by the WWF8 on standards for 12 major aquaculture products and implementing a consumer-oriented certification scheme. The WWF aquaculture dialogues have highlighted the problems in developing robust measures of sustainability, particularly as definitions move beyond simple measures of environmental impact to more complex assessments of ecological efficiency. Parallel initiatives by international policy and academic organizations have therefore focused on the development of assessment tools. Life cycle assessment (LCA) is one of the key approaches, measuring parameters such as total energy consumption or carbon emissions throughout the production, distribution, consumption and disposal of individual products. This allows a ready comparison between products and helps to identify stages in the product life cycle where efficiency gains might be realized. While LCA provides a useful headline figure, it is less useful for understanding the dependencies of products on natural resources and services, or their linkages to other production processes. For this reason, FAO and partners are developing assessment frameworks based on the concept of an EAA.9 This uses a number of measures, including ecological footprints, which help assess the dependence of specific activities on ecosystem support. A further tool that may prove useful is the ‘Global Aquaculture Performance Index’10 developed by the University of Victoria, Canada, and based on the Yale and Columbia Universities' Environmental Performance Index.11 This uses a range of weighted metrics and statistical analysis to provide comparative scores for assessing species choices or performance differences between countries or regions. 
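Weighted-metric indices of the kind described above can be sketched as a normalized weighted sum over standardized indicators. The indicator names, scores and weights below are invented for illustration; the published indices use their own metrics and weighting schemes.

```python
def composite_score(metrics, weights):
    """Weighted composite score over normalized indicators (0-100 scale).

    metrics -- dict mapping indicator name to a normalized score
    weights -- dict mapping indicator name to its weight
    """
    total_weight = sum(weights.values())
    return sum(metrics[name] * weights[name] for name in weights) / total_weight

# Hypothetical scores for one production system
scores = {"energy_use": 80, "effluent": 60, "feed_sustainability": 70}
weights = {"energy_use": 2, "effluent": 1, "feed_sustainability": 1}
overall = composite_score(scores, weights)   # (160 + 60 + 70) / 4 = 72.5
```

Because the weights are normalized, such scores are directly comparable across species, countries or regions, which is what makes the approach useful for the comparative assessments described above.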
While the creation and use of international standards may appear an irrelevance to smallholder systems in many countries, there is a risk that such standards could create substantial barriers to development by denying smallholders access to wider markets. The implications of globalizing trade, standards and certification for development and sustainability, and how these interrelate, are being researched by the EC-funded SEAT project,12 which aims to build a broader scoring system encompassing a range of ethical issues. Future policy development will clearly need to move beyond simple objectives of economic development and employment or environmental protection and conservation. The complexity of the seafood market suggests there are many opportunities for segmentation and innovative approaches to sustainable aquaculture that could be exploited with policy support.

Footnotes

While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy.

© 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
‘Any bloke hungry in this country just silly’ (Yarralin man, Northern Territory, Australia: in Bird Rose 1996, p. 99).

‘What do you mean by weeds? There is nothing like a weed in our agriculture’ (woman farmer, Deccan plateau, India: in Mazhar et al. 2007, p. 18).

Globally, an estimated 1.02 billion people are undernourished (FAO 2009). The literature on vulnerability, food security and ecosystem services has tended to emphasize cultivated foods (MEA 2005; Ericksen et al. 2009). However, there is substantial evidence that wild foods are an important part of the global food basket. At regional and national level, food balances guide policies on trade, aid and the declaration of food crises. Notably absent from these is the contribution made by wild edible species. With the routine underestimation of wild foods comes the danger of neglecting the provisioning ecosystems and supportive local knowledge systems that sustain these food chains (Grivetti & Ogle 2000; Mazhar et al. 2007; Pilgrim et al. 2008). We summarize the best available evidence for the importance and values of wild foods (see Scoones et al. 1992; Heywood 1999; Posey 1999; MEA 2005; Kuhnlein et al. 2009). A central assumption about non-agricultural societies has been that they represent an earlier stage of cultural evolution, or the outcome of cultural devolution (Barnard 1999). It was long supposed that cultures progressed from hunter–gatherer to agricultural to industrial. Beginning with Hobbes's 1651 observation that the life of ‘natural man’ was ‘solitary, poore, nasty, brutish and short’, cultural evolutionary views—distinguishing between ‘natural’ and ‘civilized’ peoples—persisted from the eighteenth to the late twentieth centuries (Meggers 1954; Lathrap 1968). Lathrap, for example, uses terms such as devolution, degradation and wreckage of former agricultural societies to describe communities in the Amazon that engage in hunting, gathering and foraging (Barnard 1999). 
Evidence has revealed the limitations of these perspectives (Kent 1989; Kelly 1995). The landmark Man the Hunter conference and book (Lee & DeVore 1968) showed hunter–gatherers to be rich, knowledgeable, sophisticated and above all different from one another. There was no single stage of human development, just different adaptations to ecological and social circumstances. It is now better accepted, though not universally, that cultures are adapted to localities, and thus are configured with a wide variety of land uses and livelihoods. As a result, foraging and farming across the world are actually ‘overlapping, interdependent, contemporaneous, coequal and complementary’ (Sponsel 1989). This suggests that many rural people and their cultures might be better known as variants of cultivator–hunters or farmer–foragers rather than just farmers or hunter–gatherers. Culture and nature are thus bound together (Berkes 1999; Pretty et al. 2010). Another long-standing stereotype suggests that hunter–gatherers are nomadic and cultivators sedentary. Again, the evidence shows a bewildering array of adaptations and cultural choices. Some horticulturalists move, some hunter–gatherers are sedentary (Vickers 1989; Kelly 1995). Some groups maintain gardens for cultivated food as well as to attract antelopes, monkeys and birds for hunting (Posey 1985). Many apparently hunter–gatherer and forager cultures farm; many agricultural communities use large numbers of non-domesticated resources. The Hohokam are well-known as sophisticated canal irrigators and desert farmers of the American southwest, yet they were hunters, gatherers and foragers too. Szuter & Bayham (1989) thus observed that the ‘convenient labels of hunter–gatherer or farmer are of minimal value… The two activities are complementary’. What has also become clear is that farmers, hunters, gatherers, fishers and foragers do not simply take resources from a compliant environment. 
They manage and amend resources in much the same way as is standard practice on farms (table 1). Foragers maintain resources by intentional sowing of wild seeds, irrigation of stands of grasses, burning to stimulate plant growth, selective culling of game animals and fish, replanting of portions of roots, enrichment planting of trees and extraction of only parts of honeycombs so that sites are not deserted by bees (Steward 1938; Lawton et al. 1976; Woodburn 1980; Kelly 1995). All these activities have agricultural equivalents, and are variously designed to increase the productivity and stability of useful plants and animals.
Many cultures and groups directly manage trees on and off the farm. The forest islands of Amazonia were found by Posey (1985) to have emerged as a result of Kayapo directly planting-up mounds. In the lower Amazon, smallholder farmers enrich the forests with desirable fruit, timber and medicinal trees, often broadcasting seeds when cutting timber (Brookfield & Padoch 2007). In dryland Kenya, Acacia tortilis tree recruitment occurs on the sites of abandoned pastoralist corrals that are high in organic matter and nutrients from the penned livestock. Acacia seedpods are a favoured fodder, and some pass through the animals to then germinate in the next season. The result is circular woodlands of dense Acacia (Reid & Ellis 1995; Berkes 1999). In China, there is widespread use of wild trees in integrated systems of land management, and wild plants and animals are gathered from a variety of microenvironments, such as dykes, woods, ponds and irrigation ditches (Li Wenhua 2001). Farmers also widely transplant species from the wild. In northern Nigeria, they plant Hibiscus on field boundaries; in South Africa, wild fruit trees and edible herbs are grown on farms; and in northeastern Thailand, a quarter of all the 159 wild food species gathered from field boundaries, irrigation canals, swamps and roadsides are transplanted and propagated by rice farmers (Price 1997; High & Shackleton 2000; Harris & Mohammed 2003). Home gardens are particularly important for many rural smallholders, and are notably diverse, sometimes containing more than 200 useful species (Eyzaguirre & Linares 2004). In northeast Thailand, 88 per cent of home gardens contain wild species. Home gardens are often a refuge for wild species threatened by deforestation and urbanization, and in periods of drought when the wild relatives suffer, those surviving in the home gardens provide considerable additional value to farm households. Burning is a widespread management practice. 
Australian Aborigines call this ‘firestick farming’; fire was used to make the ‘country happy’ and to keep it ‘clean’ (Bird Rose 1996). Burning allowed people to walk without fear of snakes and the nuisance of grass seeds; it created new food for kangaroos and wallabies; and it made it easy to see animal tracks and burrows. The observation of smoke is still taken to be a sign that the country is healthy. Burning was also common in North America, helping to create the ‘parkland’ type environments of Yosemite and Vancouver Island, and used by plains groups to increase herd size on the prairies (Berkes 1999; Lee & Daly 1999). To many cultures, the ideas of wild, wildlife and wilderness remain problematic. The term wild is commonly used today to refer to ecosystems and situations where people have not interfered, yet we now know that people influence, interfere with and manage most if not all ecosystems and their plants and animals. In Papua New Guinea, wild and domesticated pigs are central to many subsistence strategies (Rosman & Rubel 1989). Wild pigs are hunted and managed: boars and sows are brought together to breed, females are followed to their nests, litters and piglets are removed for raising, and wild pigs are fed with sago and roots. Some groups raise extra gardens of sweet potatoes just for pigs. Forest-dwelling cassowaries are never bred, but their chicks are captured, tamed and raised. A similar merging of the wild and the raised occurs in reindeer (caribou) herding and hunting communities of Siberia (e.g. the Evenki; Anderson 1999). What is common in all cases is that people pay close attention to what the land is telling them. Such knowledge and understanding are then encoded into norms, rules, institutions and stories, and thus form the basis for continued adaptive management over generations (Basso 1996; Pretty 2007; Berkes 2009). This knowledge is an important capital resource. 
The result is a huge variety of subsistence strategies that vary spatially as well as over time (Kelly 1995). In both agricultural and hunter–gatherer systems, there are no easy distinctions between ‘wild’ and ‘cultivated’ foods. While food research and policy tend to consider these separately, the differences are rarely mirrored by local communities. Plant foods can thus be envisioned as ‘existing along a continuum ranging from the entirely wild to the semi-domesticated, or from no noticeable human intervention to selective harvesting, transplanting, and propagation by seed and graft’ (Harris 1989). Moreover, since ‘domestication grew out of food gathering, which almost imperceptibly led to cultivation’, many wild edible species can be considered to be ‘in various stages of domestication as a result of human selection, however slight’ (Heywood 1999). Many farmers continually blur the distinction between the cultivated and the uncultivated (Mazhar et al. 2007). Wild foods have long provided farmers a ‘hidden harvest’, as they have used co-evolved species and other wild biodiversity in and around their farms to supplement their foods and earnings (Harris & Hillman 1989; Scoones et al. 1992; Heywood 1999; Grivetti & Ogle 2000). Many species are found within the fields themselves. The harvesting of wild species from paddy fields is an excellent example; in Thailand, farmers harvest wild herbs, insects, trees and vines (Price 1997; Halwart 2008); in Bangladesh, 102 species of greens and 69 of fish (Mazhar et al. 2007) are collected. In Svay Rieng, Cambodia, wild fish from in and around paddies contribute up to 70 per cent of total protein intake as well as being a source of income. Their relevance as a buffer against hunger is considerable in this area since rice yields here are among the lowest in southeast Asia (Guttman 1999). 
Table 2 summarizes the range of species used by rice-based agricultural communities in four Asian countries, with total use varying from 51 to 102 species (overall mean: 83; plants: 17; animals: 66).
Wild food species are declining in many agricultural landscapes (MEA 2005). The spread of agriculture and the homogenization of agricultural landscapes increasingly limit the availability and use of wild foods of nutritional importance to agricultural communities, but most of all to the landless poor and other vulnerable groups (Scoones et al. 1992; Pretty 2002). Their continued availability depends on the maintenance of synergies between farming and wild biodiversity (Pretty 2007; Royal Society 2009). By FAO estimates, around ‘one billion people use wild foods in their diet’ (Aberoumand 2009). Forests provide livelihoods and food for some 300 million people in the form of non-timber forest products (NTFPs). In general, food security and NTFPs are strongly interlinked in rural communities, especially for the most vulnerable groups (Belcher et al. 2005), even among agricultural communities (Vincetti et al. 2008). Urban communities also rely on wild foods. For instance, affluent urban households are willing to pay 43–157% more for bushmeat in Zambia and Mozambique (Barnett 2000). In Rajasthan, India, wild foods benefit both urban and rural children (Rathore 2009). Titus et al. (2009) explored the importance of wild game in Alaska, where 80 per cent of the population is urban, and found urban households routinely consuming significant amounts of wild game. Food security has come to depend on a small handful of widely cultivated species. 
Over 50 per cent of the world's daily requirement of proteins and calories comes from three crops—wheat, maize and rice (Jaenicke & Höschle-Zeledon 2006); 12 species contribute 80 per cent of total dietary intake. By contrast, wild foods provide a greater dietary diversity to those who rely on them. Ethnobotanical surveys indicate that more than 7000 wild plant species have been used for human food at some stage in human history (Grivetti & Ogle 2000; MEA 2005). Some indigenous communities use over 200 (Kuhnlein et al. 2009); in India, 600 plant species are known to have food value (Rathore 2009); DeFoliart (1992) records 1000 species of edible insects used worldwide. Some 1069 species of wild fungi consumed worldwide are important sources of protein and income (Boa 2004). Bushmeat and fish provide 20 per cent of protein in at least 60 developing countries (Bennet & Robinson 2000). Additionally, wild plants in particular have diverse uses. In Nepal, 80 per cent of 62 wild food plants have multiple uses (Shrestha & Dhillon 2006). Tanzanian Batemi agro-pastoralists use wild species as food (31 species), as thirst quenchers (six species), for chewing (seven species), as flavourants (two species) and for honey beer (one species); a further 35 wild edible plants are cultivated (Johns et al. 1996). In the Mekong Delta and Central Vietnamese Highlands, several wild food species are used as medicine and as livestock feed in addition to being eaten; one-fifth are used in all three ways (Ogle et al. 2003). We summarize evidence on the use of wild species in tables 3–5. Surveys of even small sample sizes yield surprisingly high numbers of species used. Table 3 illustrates the use of wild foods in 12 Asian contexts; table 4 in 10 countries across Africa. From these 36 studies in 22 countries of Asia and Africa, the mean use of wild foods (discounting country- or continent-wide aggregates) is 90–100 species per place and community group. 
Individual country estimates can reach 300–800 species (India, Ethiopia, Kenya). Table 5 illustrates the use of wild foods by 12 indigenous communities (seven agricultural; five hunter–gatherer) across both industrialized and developing countries. The mean use of wild species is 120 per community, rising to 194 for those seven communities formally designated as agricultural.
Wild foods are still used in industrialized countries, though both use and traditional ecological knowledge appear to be declining (Mabey 1996; Pilgrim et al. 2008). In New Zealand, however, more than 60 species are still in common use, largely because of traditions of Maori groups. These include muttonbird (sooty shearwater), seagull, possum, rabbit, deer, wild pig, goat, salmon, trout, eel, watercress, sea lettuce, gorse and many berries (Newman & Moller 2005; NZFSA 2007; Stephenson & Moller 2009). In the Wallis Lake catchment, Australia, 88 species are in general use (Gray et al. 2005). In the swamps of Louisiana, large numbers of people still hunt and fish regularly for their own food (Roland 2006). Malnutrition is a major health burden in developing countries, and the recognition that nutritional security and biodiversity are linked is fundamental for enlisting policy support to secure wild food use and preserve habitats for wild edible species. Comprehensive food composition data is a critical first step (McBurney et al. 2004; Flyman & Afolayan 2006; Frison et al. 2006). This is especially important for communities most vulnerable to malnutrition (Misra et al. 2008; Afolayan & Jimoh 2009). However, understanding of wild foods' micro- and macro-nutritional properties currently lags behind that of cultivated species (Vincetti et al. 2008). Though several studies have found that wild foods are important sources of micronutrients, their energy-density is generally low (with the exception of honey and high-fat organs or in-season fat deposits) (Samson & Pretty 2006; McMichael et al. 2007). In the Sahel, several edible desert plants are sources of essential fatty acids, iron, zinc and calcium (Glew et al. 1997). In the arid Ferlo region of Senegal, some 50 per cent of all plants have edible parts, and those that are commonly consumed are critical suppliers of vitamins A, B2 and C, especially during seasonal lean periods (Becker 1983). Lockett et al. 
(2000) found that among the plants used by the Fulani in Nigeria, those available during the dry season (and thus important for ensuring year-round nutritional security in the face of possible food shortages) were superior in energy and micronutrient content to those from the wet season. Traditional food species have been found to contribute between 30 and 93 per cent of total dietary energy in 12 indigenous communities (Kuhnlein et al. 2009). For many indigenous communities, especially Arctic and sub-Arctic ones, traditional wild foods outweigh modern store-bought items in terms of nutrient content. Their gradual replacement by store-bought produce causes discernible and significantly negative impacts on nutritional security at household and community levels (Samson & Pretty 2006). There is no comprehensive global estimate of the economic value of wild foods. Quantitative analyses face methodological difficulties. First, case studies using different valuation methods and diverse scales are rarely comparable. Second, sale of wild products (particularly bushmeat) is often illegal, and therefore under-reported. Trade is often informal or occurs at local markets and is therefore missed by conventional accounting mechanisms (Jaarsveld et al. 2005). The MEA (2005) cautions that freshwater fish catches might be under-reported by up to a factor of two because of inaccurate measures of informal fisheries. While exact estimation of the economic values or volumes involved is difficult, what is not in dispute is that trade in and use of wild foods provide an important supplement to general incomes and are especially critical during economic hardship. Among the Tsimane' of Bolivia, only 3 per cent of goods consumed in the household comes from the market; a significant proportion comes from freshwater and forest (Reyes-García et al. 2008). 
In DR Congo, almost 90 per cent of harvested bushmeat and fish is sold rather than consumed (de Merode et al. 2003). In table 6, we summarize findings from economic valuations of direct use values for wild foods in selected African countries. From the limited data available, it is clear that wild plants and animals can provide $170–900 worth of value to rural households in South Africa and Tanzania. In Ghana, the bushmeat market is worth $275 million annually.
An important aspect of wild food use is the relative importance of wild foods to poorer households. The conventional understanding holds that poorer households depend more on wild foods. However, detailed analyses do not show simple correlations between wealth and resource use (de Merode et al. 2003; Allebone-Webb 2009). A range of context-specific social and economic factors (e.g. price, individual or cultural preference, and wealth) are also relevant. In some countries, household consumption of wild foods increases with wealth—with the exception of bushmeat in Africa (IIED & Traffic 2002). de Merode et al. (2003) found that the poorest households among those sampled in DR Congo were unable to capitalize on the most valuable food products, and concluded that household use of wild foods depends less on natural abundance than on socio-economic factors. In Honduras, the sale of forest products as an emergency response was restricted to a minority of households and to certain conditions of cash need; most households preferred other short-term measures such as selling stored crops, borrowing cash or doing wage labour (McSweeney 2005). There are a number of important drivers of wild food availability and use. While some clearly increase or decrease the use of wild foods, the impact of others is ambiguous and context-dependent. The importance of understanding current trends for wild foods is underscored by the recognition that food insecurity is a particular problem among indigenous populations (Ford & Berrang-Ford 2009). For instance, in a survey of 35 000 households, of which 1528 were Aboriginal, Willows et al. (2009) found that 33 per cent of Aboriginal households were food-insecure, compared with 9 per cent of non-Aboriginal households, and that Aboriginal households were more prone to socio-demographic risk factors for food insecurity. 
The interaction between drivers also deserves attention. In assessing links between local knowledge and sociocultural continuity, Howard (2010) finds that cultural identity and agrobiodiversity are strongly associated: ‘culture and ecosystems … co-evolve’. Thus, a biophysical driver (e.g. climate change) could have knock-on effects on a cultural parameter (e.g. local knowledge), and the two combined could lead to either an increase or a decrease in wild food use. Forecasting the precise impacts of the changing climate on the availability of wild foods is difficult (MEA 2005; Woodruff et al. 2006). Studying resilience and vulnerability in two communities in Tanzania and Niger, Strauch et al. (2009) concluded that there was insufficient evidence to predict the impacts that climate change would have on human foraging and on the interlinked processes of local ecological knowledge (LEK) transmission, cultural continuity and land-based subsistence livelihoods. At a regional level, White et al.'s (2007) study of the effects of a changing climate on wild food supplies in the Arctic focused on surface water regimes, and found multiple impacts on local communities from changes in hydrology. The stresses brought by a changed Arctic climate are compounded by rapid socio-cultural change in the region (Samson & Pretty 2006; Loring & Gerlach 2009). Wild food species could nonetheless play a critical role in buffering against food stress caused by a changing climate: ‘the innate resilience of wild species to rapid climate change, which is often lacking in exotic species’, means that they could become increasingly important during periods of low agricultural productivity associated with climate events (Fentahun & Hager 2009). Current trends in land use, including expansion of intensive agriculture, limit the capacity of ecosystems to sustain food production and maintain the habitats of wild food species (Foley et al. 2005). 
Changes in land use and agricultural expansion have significant implications for the availability of wild foods. The commercialization of agriculture—an important driver of land use change—potentially implies decreased reliance on wild foods (Treweek et al. 2006). Agricultural and land use policy, infrastructure development and widened access to markets all drive land use change, and are implicated in declines of wild species in Thailand (Schmidt-Vogt 2001; Padoch et al. 2007) and China (Xu et al. 2009). Biodiversity in intensively managed swidden (shifting) fallows has traditionally provided communities with the means to increase incomes, improve diets and increase labour productivity. Most of the wild food species used by swiddeners come from fallows rather than mature forests. With the replacement of swidden farming by annual or perennial crops (Bruun et al. 2009), the wild foods that accompanied fallows are being lost; the resulting decline in diversity brings downgraded nutritional status, health and income, and removes a vital ‘safety net’ for the rural poor (Rerkasem et al. 2009). Somnasang et al. (1998) report that in 20 villages surveyed in Thailand, deforestation had led to a decline in wild food species. Efforts by the local community to stem this loss by domesticating important species were unsuccessful, as many species do not survive outside their natural forested habitat. Overall, unless the challenge of feeding a growing world population is met through sustainable intensification (Royal Society 2009), it will further threaten naturally biodiverse landscapes. Yet ensuring dietary diversity and the associated nutritional security rests on ‘forestalling the imminent extinction of up to one quarter of the world's wild species and the loss of important agro-biodiversity’. This calls for a biodiversity-focused strategy in food, public health and poverty-alleviation policies (Johns & Sthapit 2004). 
Sixteen of the world's biodiversity hotspots coincide with areas of malnutrition and hunger, placing pressure on biodiversity for food provision (Treweek et al. 2006). In these locations, unsustainable harvests have led to declines in wild food species. The illegal use and trade of bushmeat is well documented. In the long term, over-harvesting will have a negative impact on wild food availability, and thus on nutritional security for those communities that rely on bushmeat for protein. In some parts of Africa, unsustainable harvesting is putting added pressure on stocks; an important driver is the widespread availability of firearms (Jaarsveld et al. 2005). Although unsustainable trade in bushmeat is regarded as a threat to wildlife, Cowlishaw et al. (2005) found some evidence of sustainable harvesting after the extinction (through historical hunting) of key species: once vulnerable species have been depleted, robust species (fast reproducers) are then harvested and traded at sustainable levels. Management policies might therefore benefit from according stricter protection to key species while allowing robust ones to continue being traded sustainably. Where species have traditionally been harvested sustainably, the entry of the market and the commercialization of species hitherto used exclusively for local subsistence can also result in over-harvesting (Kala 2009). Unsustainable harvesting is also a concern for wild fisheries. At a global level, increasing average per capita consumption of seafood has led to catch rates that regularly exceed maximum sustainable yields (MEA 2005). Brashares et al. (2004) found links between unsustainable harvesting of bushmeat and fish stocks in Africa: years of poor fish catches coincided with increased hunting over a 30 year period. 
In Africa, climate-induced vulnerabilities combined with HIV/AIDS have produced a decline in food security sufficiently great to have spurred new thinking on the origins of famine (e.g. New Variant Famine Hypothesis: de Waal & Whiteside 2003). Hlanze et al. (2005) state that ‘increasingly it is becoming difficult to separate the food security impact of drought from that of HIV/AIDS. The two work in tandem to cause poor harvests and reduced incomes.’ For households afflicted by HIV/AIDS, wild foods offer nutritious dietary supplements at low labour and financial costs. This is important when considering the negative impact of a household's HIV/AIDS status on income and food security (Kaschula 2008), together with the fact that deficiencies of micronutrients (in which many wild foods are rich) critical to immune-system function are ‘commonly observed in people living with HIV in all settings’ (Piwoz & Bentley 2005). Food stress associated with HIV/AIDS can drive households to intensify wild food use, putting unsustainable pressure on local resources especially when combined with deepening poverty or indeed conflict (Dudley et al. 2002). In South Africa, Kaschula (2008) found that wild food use was significantly more likely in households afflicted by HIV. However, use of wild foods could also decline due to HIV/AIDS. For example, at one site, it was found that ‘households suffering the loss of a head of household were actually less likely to gather from the bush’ (Hunter et al. 2009). Further relevant drivers include the loss of ecological knowledge as adults die (Ansell et al. 2009), declines in household labour (de Waal & Whiteside 2003; Kaschula 2008) and the stigma attached to HIV/AIDS (Kaschula 2008). Armed conflict and associated internal displacement of populations are associated with heavy subsistence use of wild foods by refugees, combatants and resident non-combatants alike, and the sale or barter of wildlife for food (Loucks et al. 
2009), arms or other goods. Conflict—often positively correlated with areas of high biodiversity—is generally associated with landscape degradation (Loucks et al. 2009). It is conceivable that this could lead to a decline in the long-term use of wild food species. Climate change is also predicted to increase armed conflict in some developing countries (Buhaug et al. 2008). LEK is required for the identification, collection and preparation of wild foods (Pilgrim et al. 2008). The distribution of LEK between individuals in a community is usually differentiated by gender, age or social role. Several studies show women score higher on food-related knowledge (Price 1997; Somnasang et al. 1998; Styger et al. 1999). In one Nepalese site, women above 35 years of age were able to describe the uses of 65 per cent of all edible species, while young men could only describe 23 per cent (Shrestha & Dhillon 2006). Men and women might also hold specialized LEK. Somnasang et al. (1998) found that while men had more knowledge of hunting and fishing, women had more knowledge of wild food plants, insects and shrimp. LEK is also differentiated by age: in Ethiopia, children gather fruit for consumption by the whole community, and unsurprisingly those under 30 had the most knowledge of wild fruits (Fentahun & Hager 2009). Research has pointed to declines in LEK (Pilgrim et al. 2008) as communities rely increasingly on store-bought foods and move away from land-based livelihoods. Somnasang et al. (1998) found that young people working outside the village did not have the chance, and in some cases the desire, to acquire food-relevant LEK. It is thus possible that as young adults leave land-based livelihoods, knowledge transmission to younger generations will be diminished. In other cases, individuals' preferences change as they grow and thus, their stock of LEK changes, even if they remain within their community. 
In Ethiopia, Fentahun & Hager (2009) found that ‘… grown-ups succumb to the culture of the society which regards the consumption of wild fruits (commonly consumed by children) as a source of shame’ (insert added). As climate change alters habitats, knock-on effects on LEK are expected (Strauch et al. 2009). The nutrition transition associated with industrialization and the modernization of diets poses challenges to public health worldwide (Popkin 1998). The replacement of wild foods by store-bought products is linked to reduced dietary diversity, rising rates of chronic lifestyle-related conditions such as obesity and type II diabetes, poor intake of micronutrients (Batal & Hunter 2007) and malnutrition (Erikson et al. 2008). Traditional species become undervalued and underused as exotic ones become available, as has been found in India (Rathore 2009) and the Amazon (Byron 2003). Yet the nutritional importance of wild foods means that they cannot simply be replaced by store-bought foods providing the same number of calories. Global trends nevertheless indicate that more people will come to depend solely on store-bought, cultivated foods (Johns & Maundu 2006), marginalizing wild foods. In regions isolated from sweeping transformations, traditional food systems can persist. Pieroni (1999) suggests that the geographical isolation of the upper Serchio valley in northwest Tuscany has ‘permitted a rich popular knowledge to be maintained’. Gastronomic traditions in the valley preserve influences dating from pre-Roman times, and over 120 species form a well-preserved pharmacopoeia of food and medicine. In other regions too, wild food use persists: 123 edible species are still used in Spain (Tardío et al. 2003); and in many Mediterranean countries, wild foods are still prevalent enough to be considered an important part of local diets (Leonti et al. 2006). 
In the Arctic, the nutrition transition is driven by a changing climate as well as by large-scale cultural changes. This transition produces significant negative effects on physical and mental health at community level (Samson & Pretty 2006; Loring & Gerlach 2009). In the Canadian Arctic, children now obtain more than 40 per cent of their total energy from store-bought processed foods (‘sweet’ and ‘fat’ foods). In adults, however, the benefits of consuming traditional wild foods are clear: ‘… even a single portion of local animal or fish food resulted in increased (p < 0.05) levels of energy, protein, vitamin D, vitamin E, riboflavin, vitamin B-6, iron, zinc, copper, magnesium, manganese, phosphorus, and potassium’ (Kuhnlein & Receveur 2007). Though wild foods have traditionally played a critical role in circumpolar communities (Ford 2009; Ford et al. 2009; Titus et al. 2009), public health policy across many countries tends to operate within a model of food security that discounts the traditional food practices of these communities (Power 2008). The MEA (2005) lists 250 mammalian, 262 avian and 79 amphibian species as threatened by overexploitation for food. Mechanisms such as CITES regulate cross-border trade in wild species, but require international cooperation. At national level, however, trade is generally poorly regulated and monitored. Challenges to sustainable harvesting include (i) lack of comprehensive data on species used and sustainable yields; (ii) lack of management regimes and institutions regulating ownership, access and harvesting rights; and (iii) lack of legislation and policy for sustainable harvesting—in many cases a result of lack of information on the use and trade of species (Schippmann et al. 2006). Policy support is central to the conservation of species as well as of LEK. Lack of policy support for relevant programmes has been implicated in the continued over-harvesting of African bushmeat (Scholes & Biggs 2005). 
By contrast, support for agroforestry systems has potentially ensured sustainable harvests from indigenous tree species in areas otherwise prone to deforestation (Sileshi et al. 2007). Management of common forests has recently become successful with the emergence of joint forest management and community-managed forest groups (Ostrom et al. 2002; Pretty 2003; Berkes 2004; Molnar et al. 2007). Worldwide, some 370 million ha of various habitats are estimated to be under community conservation, including 14 million ha managed by 65 000 community groups in India and 900 000 ha managed by 12 000 groups in Nepal. In Italy, Vitalini et al. (2009) linked the continued use of wild food and plants with a site's EU designation of ‘Site of Community Interest’. The preservation of habitats bodes well for species conservation, but there are also concerns that protected area status might exclude local people from access and use. In environments where LEK is being lost, it is important that it be recorded. Local communities might themselves desire to preserve wild food species through, for example, the establishment of community enterprises based on wild food resources in Nepal (Shrestha & Dhillon 2006) or through local women strengthening traditional community sanctions against overuse and enlisting the support of state law in northeast Thailand (Price 1997). Wild food species form a significant portion of the total food basket for households from agricultural, hunter, gatherer and forager systems. However, the focus on the contribution of agriculture to total food security has resulted in the routine undervaluation of wild food species. The continued contribution of wild species to food and nutritional security is threatened by some of the processes that seek to increase agricultural production and enhance economic development. While wild foods cannot entirely bridge the existing gap between food supply and demand, without them that gap would be much wider. 
Edible species provide more than just food and income. In communities with a tradition of wild food use, it is part of a living link with the land, a keystone of culture (Pretty 2007; Pilgrim & Pretty 2010). The decline of traditional ways of life and decreased wild food use are interlinked. Research needs are twofold: (i) standardized, accessible and comparable studies on the nutritional and toxicological properties of currently underused wild species on a broad scale; (ii) the identification of priority areas for conservation of wild food species and the recording of food-relevant LEK. Policies on conservation, food security and agriculture need to be integrated to recognize and preserve the importance of wild foods. Recent initiatives indicate that this may be taking place. For example, traditional food revitalization projects aim to increase the consumption of wild foods, and are being used to provide health and cultural benefits to traditional communities otherwise subject to the nutrition transition (Pilgrim et al. 2009). The FAO recognizes that ‘nutrition and biodiversity converge to a common path leading to food security and sustainable development’ and that ‘wild species and intraspecies biodiversity have key roles in global nutrition security’ (FAO 2009). The evidence shows that wild foods provide substantial health and economic benefits to those who depend on them. It is now clear that efforts to conserve biodiversity and preserve traditional food systems and farming practices need to be combined and enhanced.
Footnotes: One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050’. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. 
© 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Globally, 2600 km3 of water are withdrawn each year to irrigate crops, representing over two-thirds of all human withdrawals (FAO 2004). As water scarcity intensifies and many of the world's river basins approach closure (i.e. all water supplies have been put to use for at least part of the year; Smakhtin 2008), water is increasingly transferred out of agriculture to provide for other demands, such as energy generation or growing urban populations. Pimentel et al. (1997) note that given worldwide hunger, rising populations will increase pressure on already constrained food supplies. Vorosmarty et al. (2000) argue that global water resources are already under stress at current population levels, and that this will only intensify as populations rise further. Perhaps more problematically, rising incomes cause diets to shift to more water-intensive agricultural products and cause levels of water service to increase (e.g. from community standpipes to plumbing systems). Together, these are rapidly increasing per capita water demand in developing nations. Simultaneously, to meet higher food demands for a growing population, agriculture is expanding to new regions and becoming more productive, both of which are rapidly increasing the demand for water. Energy consumption and other industrial activities in many countries continue to increase, causing industrial water consumption to rise. Perhaps most important and most overlooked, environmental flow requirements (EFRs) are increasingly being recognized as a crucial element of a functioning riparian ecosystem and, accordingly, are increasingly being instated as part of environmental management. As EFRs are instated, remaining water for agriculture will be further diminished. In addition to rising demands on water resources, climate change will significantly affect the timing, distribution and magnitude of water availability. 
Where shifts in water availability reduce regional water supplies, agriculture may be further threatened. In Water for agriculture: maintaining food security under growing scarcity, Rosegrant et al. (2009b) review recent work on water for agriculture at the global and regional scale. Water for food, water for life (Molden 2007) provides a comprehensive review of water management issues in agriculture, and considers how increasing demands and environmental flows could threaten water supplies. However, that analysis considers forecasts of municipal and industrial (M&I) water demands at a broad geographical scale rather than at a more disaggregated national level, and does not quantitatively evaluate how climate change affects water supply. Strzepek & McCluskey (2007) look at the effects of climate change on agriculture in Africa with water as a primary constraint. This study does not, however, explicitly address whether growing demands and shifting supplies will leave sufficient water for agriculture. In this paper, we consider the fraction of current agricultural withdrawals that may be threatened given increasing water demands in other sectors, limitations imposed on withdrawals to meet EFRs and the likely effects of climate change. We first briefly review demand- and supply-side factors that will affect water available for agriculture, and then model the possible implications for agricultural water availability through 2050 under climate change. In doing so, we comment on the relative importance of each competing pressure, and identify geographical ‘hotspots’ where water for agriculture could be substantially reduced. Finally, we comment on the most significant sources of uncertainty in our results, and suggest directions for additional research. Three of the most significant demands competing with agriculture for water are rising municipal uses, rising industrial uses (particularly in developing countries) and baseline EFRs. We describe these and others below. 
Municipal water demand, as defined here, encompasses both domestic and commercial uses of water. Increases in municipal water use, which will be driven by both rising populations and per capita incomes, will vary widely across countries. As noted by Cole (2004) and others, a nation's per capita GDP is a strong determinant of its per capita municipal water use. As per capita incomes rise in poorer nations, the level of service moves from systems such as rainwater catchments, truck-supplied water or public standpipes, to plumbing systems where water is delivered directly to households. Gleick (1996) observes that at the lowest levels of service, individuals may only consume an average of 10 litres of water per day, whereas at the highest levels people may consume between 150 and 400 litres per day. The relationship between per capita water use and per capita GDP growth over time depends on the development path of the particular nation; it is probable that countries with more equitable distributions of resources (i.e. those with lower Gini coefficients) will spread advancements in water service more widely, which will lead to more rapid increases in average per capita water use.1 Once the majority of a population has ready access to water (as in most developed nations), household and commercial consumption of water flattens with respect to incomes, and then falls with further increases in income as nations introduce or require water-efficiency measures (e.g. water-saving showerheads and toilets). As a result, over the past few decades, nations such as the US and Switzerland have had constant or falling per capita municipal water use as per capita GDPs have increased (see Kenny et al. 2009). This trend has prompted Cole (2004) to inquire whether municipal water use follows an environmental Kuznets curve, where per capita water use initially rises with incomes and then falls as nations grow wealthier. 
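The shape Cole (2004) hypothesizes can be sketched numerically. The function below is purely illustrative: the inverted-U form and its coefficients are our own invention, not values from Cole (2004) or Gleick (1996); only the qualitative behaviour (use rising with income, peaking, then falling, with a floor near Gleick's 10 litres/day minimum) reflects the text.

```python
import math

def municipal_use_per_capita(gdp_pc: float) -> float:
    """Hypothetical Kuznets-style curve: per capita municipal water use
    (litres/day) as an inverted-U in log per-capita GDP, floored at
    roughly Gleick's 10 l/day minimum service level. Coefficients are
    illustrative only, chosen to produce a rise, peak and decline."""
    x = math.log(gdp_pc)
    return max(10.0, -10.0 * x * x + 200.0 * x - 600.0)

low = municipal_use_per_capita(200)         # low service levels
peak = municipal_use_per_capita(22_000)     # near the hypothetical peak
rich = municipal_use_per_capita(1_000_000)  # efficiency measures bite
# use rises with income, peaks, then falls: low < peak > rich
```

Fitting such a curve to real withdrawal data (e.g. Kenny et al. 2009) would be needed before drawing any quantitative conclusion.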
Indeed, as seen in table 1, European water withdrawals generally increased through the 1970s and declined between 1980 and 1995. Given that GDP and population were generally rising through this period, the trend in per capita use relative to per capita GDP would be considerably lower.
Developing nations where incomes are rising rapidly, such as China or India, will experience dramatic increases in municipal water use as levels of water service become more advanced. In nations where populations are also rising, these effects will be further magnified. World Bank projections of municipal water use over time for OECD and non-OECD countries are included in figure 1. Note that OECD municipal demand is projected to increase only by 10 per cent (from 162 billion m3 to 178 billion m3) through to 2050, as compared with the over 100 per cent increase forecast in non-OECD countries (from 257 billion m3 to 536 billion m3).
Figure 1. Total projected OECD versus non-OECD municipal water use, 2005–2050. Squares with solid lines, non-OECD; diamonds with solid lines, OECD. Source: Hughes et al. (in press).
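The quoted percentage increases follow directly from the projected volumes; a quick arithmetic check (volumes as given above, in billion m3):

```python
def pct_increase(start: float, end: float) -> float:
    """Percentage increase from start to end."""
    return 100.0 * (end - start) / start

oecd = pct_increase(162, 178)      # OECD municipal demand: roughly 10%
non_oecd = pct_increase(257, 536)  # non-OECD: over 100%
```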
Industrial water demand includes water use for manufacturing, energy generation and other industrial activities. Similar to municipal demand, per capita industrial water use tends to rise rapidly as a nation industrializes and then falls as countries move towards more service-based industries. As a result, the most important determinant of future industrial water use is the stage of a country's development. A related factor is whether the country adopts water-conserving technologies. If regulations on water use are imposed that require conservation technologies, or if water prices cause industrial water use to become more costly than conservation, water use will tend to decline. This trend is typified in the construction of new energy generation capacity in developing and developed countries: new power plants in developing countries generally use water for thermoelectric cooling, whereas new facilities in developed nations often use air cooling condensers to avoid excess water use and thermal pollution. In some instances, developed nations transfer lower water use technology to developing nations and thus allow those nations to ‘leapfrog’ past the period during their development paths with highest per capita industrial water use. These patterns can be observed in figure 2, which shows World Bank projections of total OECD and non-OECD industrial water use between 2005 and 2050. Note that total OECD industrial water use declines and non-OECD use increases only slightly after peaking during the 2030s. Industrial water use is dominated by cooling and non-consumptive uses. When faced with pollution controls or high water prices, industrial water use has exhibited major reductions (Kenny et al. 2009). The World Bank projections assume that leapfrogging occurs to facilitate reductions in developing nations' industrial use.
Figure 2. Total projected OECD versus non-OECD industrial water use, 2005–2050. Squares with solid lines, non-OECD; diamonds with solid lines, OECD. Source: Hughes et al. (in press).
EFRs refer to minimum flows allocated for the maintenance of aquatic ecosystem services. EFRs can also be viewed as a demand for floodplain maintenance, fish migration, cycling of organic matter, maintenance of water quality or other ecological services (Smakhtin 2008). Although these demands are increasingly being viewed as crucial, they are often not included in traditional accounting determinations of how close river basins are to closure. In understanding EFRs, Falkenmark & Rockström (2006) differentiate between the ‘blue water’ in lakes, rivers and aquifers that is available for human withdrawal, and the ‘green water’ in soil moisture that is used by terrestrial ecosystems, including agricultural systems (figure 3). They argue that excessive blue water withdrawals can lower water tables and affect the availability of green water, thus potentially impairing terrestrial ecosystem function. Globally, irrigation consumes nearly 1800 km3 of blue water annually, with rainfed crops consuming an additional 5000 km3 of green water (Falkenmark & Rockström 2006).
Figure 3. Blue and green water. Source: Falkenmark & Rockström (2006).
As the focus has shifted from maintaining minimum flows to ensuring that the timing and magnitude of flows are appropriate to assure ecosystem health, quantifying EFRs within individual river basins has grown more complex. Smakhtin et al. (2004a) suggest that Q90 flows (i.e. flows that are exceeded 90% of the time) are sufficient to maintain riparian health in ‘fair’ condition, and are generally a reasonable assessment of EFRs. They contrast these with the much higher Q50 flows (i.e. flows that are exceeded half the time), which maintain the riparian system in ‘natural’ condition (i.e. negligible modification of habitat) and Q75 flows, which maintain the system in ‘good’ condition (i.e. largely intact biodiversity and habitats despite some development). Depending on the shape of a river's hydrograph, Q90 flows may be exceedingly low (e.g. if greater than 10% of flows are zero, Q90 flows will be zero). In these instances, Smakhtin suggests that high-flow requirements be instated, thereby imposing minimum water flow requirements at the high end of the hydrograph. Figure 4 (from Smakhtin et al. (2004b)) compares traditional water stress in the world's river basins to water stress with EFRs included. Note the expansion and intensification of stressed basins, particularly in the Middle East, central Asia and southern Europe.
Figure 4. (a) Traditional water stress and (b) water stress with environmental flows. Source: Smakhtin et al. (2004b).
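Smakhtin's Qx notation refers to exceedance percentiles: Q90 is the flow exceeded 90 per cent of the time, which is the 10th percentile of the flow record. A minimal sketch, with a hypothetical monthly flow record (the values are illustrative only, not from Smakhtin et al. (2004a)):

```python
def exceedance_flow(flows, q):
    """Qq flow: the value exceeded q% of the time, i.e. the
    (100 - q)th percentile of the record (linear interpolation)."""
    s = sorted(flows)
    rank = (100.0 - q) / 100.0 * (len(s) - 1)
    lo = int(rank)
    frac = rank - lo
    return s[lo] if frac == 0 else s[lo] + frac * (s[lo + 1] - s[lo])

# hypothetical monthly flows (m^3/s) for one year
flows = [0, 0, 2, 3, 5, 8, 12, 20, 35, 60, 90, 140]

q90 = exceedance_flow(flows, 90)  # EFR for 'fair' condition
q75 = exceedance_flow(flows, 75)  # 'good' condition
q50 = exceedance_flow(flows, 50)  # 'natural' condition
# Q90 <= Q75 <= Q50; many near-zero months drive Q90 toward zero,
# which is when Smakhtin's high-flow requirements kick in
```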
Increasing agricultural demands. Food production will need to continue to increase to meet the growing demands of larger, wealthier populations (Tilman et al. 2001). At the same time, the increased demand for renewable sources of energy will cause the fraction of land for biofuel production to increase (Fisher & Schrattenholzer 2001; Berndes 2002). To meet these demands, agriculture will move into currently undeveloped lands, which may increase evapotranspiration (ET) if the crops are more water-intensive than the natural vegetation, and will certainly do so if irrigation systems are installed. As incomes and crop prices rise and farmers seek higher yields, sprinkler and flood irrigation systems will be installed in current locations, which will increase both crop water use and evaporation.
Location of withdrawals. The relative location of the various demands within the basin is critically important to water availability for agriculture. If M&I demands (described together henceforth) are concentrated upstream of agriculture, water is more likely to remain available for farming because return flows from M&I uses are generally a large percentage of initial withdrawals (roughly 90%). On the other hand, growing cities and industry near the terminus of river basins may transfer water out of upstream agriculture if supplies are constrained, particularly given that ET from agriculture consumes between 50 and 80 per cent of withdrawals, depending on crops grown, climate and irrigation efficiency (Postel et al. 1996). EFRs also have a spatial dimension because these flows must remain in rivers throughout their course. This may be an issue in cases where M&I uses withdraw large volumes of water upstream and return the majority downstream, creating river segments with flows that are below EFR targets.
Political and institutional issues. Political and institutional issues may also affect the availability of water for agriculture. 
Transboundary competition for water can cause water to be allocated to domestic agricultural production to maximize local output, rather than to wherever regional agricultural productivity (i.e. ‘crop per drop’) is highest. This causes an effective loss of productivity for agriculture. In addition, many countries have national security and economic policies focused on reaching food self-sufficiency. This focus is driving many countries to withdraw water for agriculture in water-stressed basins rather than importing agricultural products. While it may be economically feasible for these nations to import food, the desire not to be held hostage by food exporters can lead to environmentally unsustainable water use. In addition, the presence or absence of water markets can have a significant effect on the availability and distribution of water for agriculture. In regions where broad water markets exist, such as southeastern Australia or certain parts of the western US, water prices are often driven by demands with higher marginal values than agriculture, such as urban uses. Generally, this has the effect of transferring water out of agriculture to these higher value uses. Elsewhere, water markets have been successfully established to transfer water between agricultural products, typically to higher value products (e.g. from alfalfa to fruit trees). The majority of nations currently lack water markets owing to legal or institutional barriers, poor water metering infrastructure and/or exceedingly high transaction costs; however, increasing water scarcity may cause markets to become more prevalent in future years. Next, we discuss the potential effects of climate change and groundwater depletion on the availability of water for agriculture. Climate change affects the water cycle through changes in temperature, the timing and magnitude of precipitation, soil moisture, run-off, the magnitude and frequency of extreme events, and a number of secondary effects. 
Although precipitation is often projected to increase under climate change, research has suggested that a 4°C temperature increase would require at least a 10 per cent increase in precipitation to balance evaporative losses. As a result, in many regions projected increases in precipitation can accompany decreases in run-off (Gleick 2000). Spatial patterns of these changes in run-off will vary widely. For example, models predict that run-off will increase by 10–40% in eastern equatorial Africa and that run-off will decline by 10–30% in southern Africa (Milly et al. 2005). In addition, a warmer climate brings with it increases in the magnitude and frequency of extreme events (Bates et al. 2008). The magnitude and distribution of run-off will also be further affected by reductions in glacial melt. Climate change may also have several secondary effects that impact the water cycle. Increases in the intensity of precipitation events, coupled with extended periods of lower streamflow, may intensify pollution issues (Kundzewicz et al. 2007). Groundwater systems are anticipated to respond more slowly to changes in climate than surface water systems, but increases in evaporation, changes in vegetation, increases in high run-off events and other effects of climate change may reduce the potential for groundwater infiltration. The net effect of these changes may be reduced sustainable levels of groundwater pumping, changes in water availability in surface water systems, or both. Finally, decreases in precipitation coupled with increasing temperatures in certain regions will have a pronounced downward effect on soil moisture (a function of soil type, rainfall patterns and temperature patterns), making less ‘green water’ available for crop use (Cao & Woodward 1998; Falkenmark & Rockström 2006). On the demand side, climate change will directly affect water use across numerous sectors. 
On agricultural or other vegetated lands, increasing temperatures will cause plant growth (and thus water demand) to increase as long as soil moisture is not constraining. Increased temperatures also increase domestic demand for water (Goodchild 2007), which will be driven primarily by increased garden and lawn watering (Arnell 1998). Rising temperatures may also directly increase water withdrawals for thermoelectric cooling, and indirectly increase cooling withdrawals as electricity demand increases for air conditioning (Bates et al. 2008). The regional impact of these supply- and demand-side effects on water available to agriculture varies widely, and the fact that both vulnerability and adaptive capacity to changes in climate also differ across regions will magnify differences in the response to changes in water availability (Adger et al. 2003). Between 1950 and 2000, global groundwater extraction increased sharply to supply municipal, industrial and agricultural uses. As a result, in many regions of the world, groundwater reserves have declined to the point where well yields have fallen dramatically, land has subsided and aquifer salinization has occurred (Konikow & Kendy 2005). In Yemen, for example, groundwater withdrawals exceed recharge by 400 per cent, which prompted the World Bank to express concerns that groundwater mining in the nation threatens the fundamental wellbeing of its citizens (Shah et al. 2000). Because shallow groundwater aquifers and surface water bodies are connected through the same hydrological system, excessive groundwater withdrawals will cause increased groundwater infiltration and thus reduced run-off; for example, in Idaho in the northwestern US, farmers, businesses and cities were ordered to shut down 1300 wells to restore reduced spring discharge (Konikow & Kendy 2005). As a result, groundwater pumping is either from hydrologically disconnected sources that have very low recharge rates (i.e. 
groundwater mining), or from hydrologically connected sources, in which case it directly decreases the mean annual run-off (MAR) of a surface water source (Winter et al. 1998). As the global demand for groundwater continues to increase, groundwater tables and well yields will decline more rapidly, decreasing surface water run-off and forcing those that rely on groundwater resources to seek new sources. Both will have negative effects on water available for agriculture. To assess the impacts of changing water demand and supply on water available for agriculture, we model the potential implications of increased M&I withdrawals (considered together), EFRs and climate change on withdrawals for worldwide agriculture through 2050. Specifically, for a number of geopolitical regions and under three climate change scenarios, we estimate the fraction of current agricultural withdrawals that would be threatened where EFRs and increased M&I demands cause total basin withdrawals to exceed MAR (or total annual withdrawals if they currently exceed MAR because of return flows).2 Following Winter et al. (1998), we assume that regional groundwater withdrawals deplete river basin run-off and therefore implicitly consider subsurface water in our modelling exercise. It must be noted that this analysis may underestimate threats to agriculture, for two reasons: (i) we make these comparisons relative to current agricultural demands rather than the expected higher demands of 2050; and (ii) we do not consider the effects of drought or increased extreme events. On the other hand, the analysis may overestimate threats because we model withdrawals rather than consumptive use and thus do not account for reuse of return flows. We consider a total of three climate change and three demand scenarios. On the demand side, we consider the effects of 2050 M&I demands alone, EFRs alone and 2050 M&I and EFR demands together. M&I demand projections to 2050 are taken from central World Bank projections for 214 countries (Hughes et al. in press). 
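A simplified reading of this accounting can be sketched per basin: agricultural withdrawals are ‘threatened’ to the extent that reserving 2050 M&I demands and EFRs out of MAR leaves a shortfall. The function and the basin numbers below are hypothetical, and the sketch ignores the return-flow subtleties noted above:

```python
def threatened_ag_fraction(mar, ag, mi_2050, efr):
    """Fraction of current agricultural withdrawals threatened once
    2050 M&I demands and EFRs are reserved out of mean annual
    run-off (MAR). Simplified sketch; ignores reuse of return flows."""
    available_for_ag = max(0.0, mar - mi_2050 - efr)
    shortfall = max(0.0, ag - available_for_ag)
    return min(1.0, shortfall / ag)

# hypothetical basin (km^3/yr): MAR 100, agriculture withdraws 40,
# 2050 M&I demand 15, EFR 55 -> only 30 is left for agriculture,
# so a quarter of current agricultural withdrawals are threatened
frac = threatened_ag_fraction(mar=100, ag=40, mi_2050=15, efr=55)
```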
EFRs are assumed to be the Q90 basin flows necessary to maintain riparian ecosystems in ‘fair’ condition, and, following Smakhtin, if Q90 flows are exceedingly low owing to the shape of the basin's hydrograph, we assume minimum high-flow requirements to maintain other key ecosystem services (see Smakhtin et al. (2004a) for details of this approach).3 For the climate change analysis, we evaluate a baseline (i.e. no climate change) scenario, and two climate change scenarios based on the range of available general circulation models (GCMs). Although use of GCM ensemble means—with some acknowledgement of the uncertainty in ensemble outputs—has become standard practice in climate research (Bates et al. 2008), probabilistic analysis using the full suite of 22 IPCC GCMs was beyond the scope of this work. As a result, we follow the World Bank's economics of adaptation to climate change (EACC) analysis (World Bank 2009), and model the two climate change scenarios under the A2 SRES scenario using the NCAR and CSIRO GCMs, which the Bank considers to represent generally wetter and drier climate runs, respectively.4 In total, we consider nine climate-demand scenarios, each compared with the current baseline. Table 2 provides a key for these nine scenarios in a three-by-three grid.
We use the CLIRUN II hydrologic model in this analysis (Strzepek et al. in preparation), which is the latest model in the ‘Kaczmarek school’ of hydrologic models (Yates 1996) developed specifically for the analysis of the impact of climate change on run-off and extreme events at the annual level. CLIRUN II models run-off in 126 world river basins with climate inputs and soil characteristics averaged over each river basin. The model simulates run-off at a gauged location at the mouth of the catchment, and can run on a daily or monthly time step; for this study, climate and run-off data were available on a monthly basis. Because data on 2000 agricultural and M&I withdrawals are available for 116 economic regions of the world, we intersect the 126 river basins with these economic regions to form 281 food production units (FPUs; see Strzepek & McCluskey (2007) and Rosegrant et al. (2009a,b)), which form the geographical unit of our analysis. For each FPU, our baseline data include current MAR values, 2000 agricultural withdrawals and 2000 M&I withdrawals. We generate 2050 M&I values by first developing ratios of 2050 to current M&I demands using World Bank projections for the 214 countries. Next, we assign each of the FPUs a 2050 to current demand ratio by translating data from the 214 countries to the FPU scale, and then multiply these ratios by 2000 baseline M&I demands to develop 2050 M&I demands for each FPU. We generate EFRs based on the existing run-off distributions in each of the FPUs. On the supply side, climate change will directly affect the MAR within each of the river basins. To assess these changes through 2050, we use the CLIRUN II hydrologic model to generate changes in MAR in each FPU based on the NCAR (wet) and CSIRO (dry) GCMs. Below, we first present estimates of the percentage of MAR that is: (i) currently withdrawn for agricultural and M&I purposes; and (ii) needed for EFRs and projected 2050 M&I demands. 
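The country-to-FPU demand scaling described above can be sketched as follows; the country names, FPU labels and volumes are entirely hypothetical, not World Bank or FPU data:

```python
# hypothetical country-level M&I projections (billion m^3)
mi_2000 = {"CountryA": 10.0, "CountryB": 40.0}
mi_2050 = {"CountryA": 25.0, "CountryB": 44.0}

# 2050-to-current demand ratio per country
ratios = {c: mi_2050[c] / mi_2000[c] for c in mi_2000}

# each FPU inherits the ratio of the country it falls in and
# scales its own 2000 baseline M&I demand up to 2050
fpu_country = {"FPU-1": "CountryA", "FPU-2": "CountryB"}
fpu_mi_2000 = {"FPU-1": 3.0, "FPU-2": 12.0}
fpu_mi_2050 = {f: fpu_mi_2000[f] * ratios[fpu_country[f]]
               for f in fpu_mi_2000}
# FPU-1 scales by 2.5, FPU-2 by 1.1
```

In the paper's actual analysis, FPUs straddling basin and country boundaries require an intersection step (126 basins × 116 economic regions → 281 FPUs) rather than the one-to-one mapping assumed here.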
Then, we present the fraction of current agricultural withdrawals in each of the geopolitical regions that may be threatened under the nine scenarios, and conclude this section with a discussion of our findings. Data for the analytical baseline are presented in table 3, which summarizes the MAR in 2000 for the world and each of the geopolitical regions, along with the percentage of 2000 MAR withdrawn for agriculture and M&I.5 In 2000, roughly 10 per cent of worldwide MAR was withdrawn for agriculture and 4.3 per cent was withdrawn for M&I use. Note that in Asia, these figures are 27 per cent and 6.6 per cent, respectively, and in India, agriculture and M&I withdraw 76 per cent and 9.3 per cent, respectively. Figure 5 shows the percentage of MAR withdrawn for agriculture in 2000. Areas where water is used most intensively for agriculture (e.g. the Middle East, central Asia, western US) are most vulnerable to changes in supply and competing demands. In figure 6, we show the percentage of MAR that is currently withdrawn for M&I—although the magnitude of these values is considerably lower than those of agriculture, these are projected to rise sharply by 2050.
Figure 5. 2000 agricultural water withdrawals as a percentage of MAR in 2000.
Figure 6. 2000 M&I water withdrawals as a percentage of MAR in 2000.
To evaluate the effects of changing water withdrawal and availability conditions, we model changes in M&I demands, EFRs and changes in run-off caused by a wet and a dry climate change scenario through to 2050. For each of the geopolitical regions, table 4 presents the EFR and 2050 M&I withdrawals as percentages of MAR in 2000, and presents percentage changes in MAR under the wet (NCAR) and dry (CSIRO) climate scenarios. Note that regionally, EFRs are between approximately 23 and 54 per cent (Nile River Basin and Oceania, respectively), which are substantial shares of annual flow to satisfy minimum ecological requirements. Between 2000 and 2050, M&I is projected to rise globally from 4.3 to 5.9 per cent of MAR, with the highest rise occurring in India (9.3–24% of MAR). Climate change increases global MAR under both the wet and dry scenarios, although at the regional level the NCAR and CSIRO GCM projections diverge, sometimes dramatically (e.g. Nile River Basin).
Figures 7–10 present these water demand and climate change estimates spatially for the globe. Note that in certain FPUs, EFRs can be as high as 52–74% of MAR (figure 7), and that 2050 M&I use tends to be highest in areas with higher incomes (figure 8). As can be observed in figures 9 and 10, under climate change, effects on MAR vary widely between the two scenarios and across space.
Figure 7. Environmental flow requirements as percentages of MAR in 2000.
Figure 8. 2050 M&I withdrawals as percentages of MAR in 2000.
Figure 9. Percentage change in MAR under the wet (NCAR) climate scenario.
Figure 10. Percentage change in MAR under the dry (CSIRO) climate scenario.
As discussed above, demands for additional M&I withdrawals and minimum EFRs may be met through transfers from agriculture. Table 5 displays the fraction of 2000 agricultural water withdrawals that may be threatened in each of the geopolitical regions under the nine scenarios. Under the no climate change scenario, our models indicate that increases in M&I demands, EFRs, and combined M&I demands and EFRs will require 7.3 per cent, 9.4 per cent and 18 per cent, respectively, of worldwide agricultural water in 2000. Agricultural water in Asia accounts for over two-thirds of the global total, and also accounts for the majority of threatened agricultural water by volume, largely because of substantial increases in M&I demands in India. Modelling indicates that EFRs and M&I increases together will threaten nearly 20 per cent of agricultural water in the European Union and the former Soviet Union. In sub-Saharan Africa, rapidly rising M&I demands also threaten water for agriculture.
Under climate change, threats to agricultural water both increase and decrease, depending on the region and scenario. In Europe, less water for agriculture is threatened under the wet scenario, and significantly more is threatened in the dry scenario. We project that threats decline in North America and Asia under both climate scenarios, but increase in Africa and Latin America and the Caribbean. Note that not all areas will be affected; model results indicate that agricultural water in Brazil and the UK, both of which have plentiful supplies relative to demands (see tables 3 and 4), will not be threatened under any of the scenarios. These results are presented spatially for FPUs in figures 11–14. These spatial representations allow us to identify hotspots where agricultural water will be most threatened. Threats to agricultural water availability given 2050 M&I demands, EFRs and the two combined are presented in figures 11–13, respectively. Figure 14 presents the effects of combined 2050 M&I demands and EFRs under the dry (CSIRO) climate scenario.
Figure 11. Per cent of agricultural water threatened under the no climate change scenario, given 2050 M&I withdrawals.
Figure 12. Per cent of agricultural water threatened under the no climate change scenario, given EFRs.
Figure 13. Per cent of agricultural water threatened under the no climate change scenario, given 2050 M&I withdrawals and EFRs.
Figure 14. Per cent of agricultural water threatened under the dry (CSIRO) climate change scenario, given 2050 M&I withdrawals and EFRs.
In the no climate change scenario, increases in M&I demands tend to affect areas with both high water stress and rapidly growing water demands, explaining why these impacts are concentrated in developing countries. Imposing EFRs, on the other hand, would reduce water supplies in basins with high water stress in both developing and developed countries (e.g. the Colorado River Basin in the US, parts of the Nile River Basin, the Murray-Darling Basin in Australia). Taken together, these increases in demand are most significant in parts of Europe, southern Asia, northern Africa and the western US. As observed above, climate change affects the distribution of water availability, increasing threats to agriculture in some areas and lessening them in others. The shifting locations of hotspots under the dry climate change scenario can be observed in figure 14.

The above results indicate that increasing M&I water use and EFRs will pose significant threats to agricultural water availability. Here, we discuss possible solutions to ensure that agriculture and other demands are satisfied, and how to address uncertainties that exist in both climate and water demand projections. Many alternatives are available to extend limited supplies of water resources, generally falling into the categories of demand management or supply augmentation. Demand management approaches involve using mechanisms to reduce demand such that existing supplies can be extended. For example, Postel (1998) finds that improving the water productivity of agriculture will be critical to meeting future food demands. As water productivity (i.e. irrigation efficiency) increases, agricultural water withdrawals decrease, although consumptive use remains constant. Water conservation in cities and the sharing of water-saving technologies with developing countries may be practical approaches to reduce M&I withdrawals and therefore relieve pressure on agriculture.
On the supply augmentation side, desalination may be an increasingly realistic alternative as the technology becomes cheaper, and the import of virtual water (Allan 1998) in the form of food and other water-intensive goods can expand supplies and transfer water from water-rich regions to water-poor nations. Hoekstra & Hung (2005) find that 13 per cent of the water used for crop production globally is used for export rather than domestic consumption. Other frequently proposed solutions to water availability issues are water banks and markets. Research in economics has long demonstrated the efficiency benefits of water trading (e.g. Howe et al. 1986); however, such efficiency gains tend to transfer water away from agriculture to uses with higher marginal economic values. Projections of future water use and availability are highly uncertain owing to underlying uncertainties in their determinants (e.g. GDP projections, variability in climate models). Currently, several studies are developing or have developed probability distributions for these uncertain variables. For example, the International Institute for Applied Systems Analysis (IIASA) has developed population projection fractiles for the world, as described in another Foresight Global Food and Farming Futures Project paper in this volume (Lutz & Samir 2010). These fractiles provide uncertainty bounds on population that are year-dependent. In an ongoing study, the Massachusetts Institute of Technology (MIT) has used Latin hypercube sampling to develop a joint probability density function (PDF) that captures ranges of the determinants of climate change. When this PDF is complete, climate change analysts will be able to sample directly from this distribution to develop probabilistic estimates of economic and physical climate change effects. In the context of this study, such a PDF would enable a statistical treatment of population, GDP and other variables that determine future M&I water use.
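Latin hypercube sampling, mentioned above in connection with MIT's uncertainty work, stratifies each input dimension into equal-probability bins and draws exactly one sample per bin, so even small samples cover each marginal evenly. The sketch below is our own minimal implementation of the generic technique, not MIT's code.

```python
# A minimal Latin hypercube sampler (our own sketch of the generic technique,
# not MIT's implementation). Each of the d dimensions is split into n
# equal-probability strata; one point is drawn per stratum, and the strata
# are shuffled independently per dimension to decorrelate the dimensions.
import numpy as np

def latin_hypercube(n, d, rng=None):
    rng = np.random.default_rng(rng)
    # one uniform draw inside each of the n strata, per dimension
    samples = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    # shuffle the strata independently in each dimension
    for j in range(d):
        rng.shuffle(samples[:, j])
    return samples  # points in [0, 1)^d; map through marginal inverse CDFs

pts = latin_hypercube(10, 3, rng=42)
# every dimension has exactly one point in each interval [k/10, (k+1)/10)
print(np.all(np.sort((pts * 10).astype(int), axis=0) == np.arange(10)[:, None]))  # True
```

To sample actual drivers (population, GDP growth, climate sensitivity), each column would be pushed through the inverse CDF of that variable's marginal distribution.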
In this paper, we review the primary threats to agricultural water availability, and model the potential effects of increases in M&I water demands to 2050, EFRs, and changing water supplies given climate change to 2050. For each FPU, we assume that the MAR is the maximum quantity that can be withdrawn annually (or total current withdrawals if they exceed MAR), and that any withdrawals exceeding this limit may come from agriculture. We find that EFRs and increased M&I water demands together cause an 18 per cent reduction in the availability of worldwide water for agriculture by 2050. Meeting EFRs, which can necessitate more than 50 per cent of the MAR in a basin depending on its hydrograph, presents the single biggest threat to agricultural water availability. Next are increases in M&I demands, which are projected to increase upwards of 200 per cent by 2050 in developing countries with rapidly increasing populations and incomes. The combined effect of these increasing demands can be dramatic in key hotspots, which include northern Africa, India, China, parts of Europe, the western US and eastern Australia, among others. These areas tend to be already water-stressed owing to low water supplies, existing large-scale agricultural or M&I demands, or both. Climate change will affect the spatial and temporal distribution of run-off, and thus change availability from the supply side. Based on wet and dry climate scenarios, we find that water availability for agriculture increases in North America and Asia, and decreases in Africa and Latin America and the Caribbean. In Europe, water availability increases under the wet model and decreases under the dry model. Overall, however, our results indicate that climate change is a much smaller threat to agriculture than growing M&I demands and EFRs. We suggest two avenues for further research. 
First, a more rigorous modelling effort on water availability for agriculture is needed, based on a more detailed quantification of changes in competing water uses and in availability, as well as a range of GCM outputs and SRES scenarios; importantly, this should include a sensitivity analysis on results using the joint PDF of climate drivers from MIT's Latin hypercube sampling. Second, the causes of increased domestic water demand in different economies should be investigated, focusing on the relationship with water availability per capita, urbanization, income distribution and levels of service (e.g. private delivery, community standpipe, etc.). Although rising domestic water use will be one of the main causes of increased global demand for water, existing projections of domestic use have ignored some of these crucial factors.

Footnotes

One contribution of 23 to a Theme Issue 'Food security: feeding the world in 2050'.

1. The Gini coefficient is a measure of income inequity: the higher the coefficient, the less equitable the distribution of incomes in the country.

2. In this paper, our focal 'geopolitical regions' are Europe, Africa, North America, Asia, Latin America and the Caribbean, and Oceania. Within Europe, we also focus on the European Union, northwestern Europe, the UK and the former Soviet Union. Sub-Saharan Africa and the Nile River Basin are reported for Africa, and in Asia, we report findings for India and China. Finally, we identify impacts on Brazil.

3. Note that the analysis assumes that those basins that do not meet Q90 flows today will do so in the future.

4. In the A2 scenario, population growth increases throughout the twenty-first century, but economic growth is regional and occurs more slowly than in the A1B and A1 scenarios. As a result, emissions are lower in 2050 than in the other A storyline scenarios.
Note that the SRES scenarios developed in 2000 assume emissions projections that are far more optimistic than what has been observed in the past decade (for more detail, see IPCC 2009).

5. Note that the 'Europe' Foresight region was listed as containing the former Soviet Union. As a result, we have included all of the former Soviet Union countries in the Europe region, even though many of these are also in Asia.

6. Note that agricultural water availability in North America increases by 0.1 per cent under the 2050 M&I scenarios. This occurs because 2000 M&I and agricultural withdrawals in North America exceed MAR in the Colorado and Rio Grande Basins, but M&I declines in 2050. As a result, additional water is made available to these constrained basins.

While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. © 2010 The Royal Society.
The UK Foresight Global Food and Farming Futures Project is considering how a future global population of 9 billion can all be fed healthily and sustainably (Foresight 2009). The project has identified 19 'drivers' (with subcategories) affecting food and farming in the future, one of which is competition for land. The purpose of this review is to examine competition for land, and to consider the direct and indirect pressures and drivers affecting it. The scope of the review is global and the time scale considered is the past 20 years and the next 40 years (1990–2050). In addition to agriculture, land use for forestry, non-food crops and protected areas for biodiversity is included, as well as land used for bioenergy and land degradation/restoration. The impact of policy in influencing each of these factors is discussed in §2c. We summarize the quantitative information on changes in land use and land quality over the last 20 years, both globally and disaggregated according to the major regions of the world. The most recent synthesis of agricultural land-use change was conducted for the International Assessment of Agricultural Knowledge, Science and Technology for Development (IAASTD), particularly the chapter 'Outlook on agricultural changes and its drivers', dealing with land-use and land-cover change (van Vuuren et al. 2008). That study collated projections from the Land use and cover change synthesis book (Alcamo et al. 2005), the scenarios from the Global Scenarios Group (Raskin et al. 2002), the IPCC Special Report on Emissions Scenarios (SRES) (IPCC 2000), the Millennium Ecosystem Assessment (MEA 2005), UNEP's Global Environment Outlook (UNEP 2002) and some models from the EMF-21 study of the Energy Modelling Forum (e.g. Kurosawa 2006; van Vuuren et al. 2006). We expand on that synthesis by adding more recent studies in §§4 and 5.
In these sections, we present projections of land-use change to 2050 and examine the impact of changes in non-agricultural policy (e.g. forest and protected land policy) on competition for land. We briefly examine the assumptions upon which the projections are based and identify the main areas of uncertainty. We conclude by assessing and ranking the most important external factors that may affect the land available for agriculture between now and 2050, and by discussing future needs to reduce uncertainties in these projections. Although competition for land has been identified as a driver affecting land use, food and farming by the Foresight Global Food and Farming Futures Project, it is actually an emergent property of a range of other drivers and pressures. Figure 1 presents a conceptual framework for analysing drivers and related pressures of competition for land at different geographical scales.
Figure 1. Conceptual analysis framework for competition for land, drivers and pressures. Adapted from Contreras-Hermosilla (2000).
In understanding interrelated causes for competition for land, we distinguish between drivers and pressures. Pressures represent direct causes, the visible motivations for competition for land (right-hand side of figure 1). Drivers (underlying causes) for competition are factors of higher causal order that determine the degree of the actual direct pressures (left-hand side of figure 1), (see Chomitz & Gray 1996; Kaimowitz et al. 1998; Geist & Lambin 2002; Wunder 2003; Niesten et al. 2004; Rudel et al. 2005, on these different drivers and pressures; S. Klappa 1999, unpublished data). We do not attempt to review the drivers and pressures in detail here, since they are covered by the other driver reviews in this issue. In §2a, however, we discuss a few drivers and pressures to demonstrate how they impact upon land use through their impact on competition for land. The growth in the human population from about 3 billion in 1960 to 6.8 billion in 2010, coupled with increased income and changes in diet, has been accompanied by substantial increases in crop and animal production (2.7-fold for cereals, 1.6-fold for roots and tubers and fourfold for meat; Foresight 2009). This increase will need to be maintained if the projected population of 9 billion by 2050 is to be sustained. Past increases in crop production have occurred as a result of both extensification (altering natural ecosystems to produce products) and intensification (producing more of the desired products per unit area of land already used for agriculture or forestry). Of the world's 13.4 billion ha land surface, about 3 billion ha is suitable for crop production (Bruinsma 2003) and about one-half of this is already cultivated (1.4 billion ha in 2008). 
Much of the remaining, potentially cultivable, land is currently beneath tropical forests, so it would be undesirable to convert it to agricultural land because of the effects on biodiversity conservation, greenhouse gas emissions, regional climate and hydrological changes, and because of the high costs of providing the requisite infrastructure. Therefore, increased yield and a higher cropping intensity will need to be the main drivers behind future growth in food production (Bruinsma 2003). Table 1 shows that, according to Bruinsma's projection, extensification will still contribute significantly to crop production in Sub-Saharan Africa (27%) and Latin America and the Caribbean (33%). There is almost no land available for expansion of agriculture in South and East Asia and the Near East/North Africa (and there may be loss of agricultural land to urban development), so intensification is expected to be the main means of increasing production there (Gregory et al. 2002; Bruinsma 2003).
The main means of intensifying crop production will be increased yields per unit area, together with a smaller contribution from an increased number of crops grown in a seasonal cycle. As cereal production (wheat, maize and rice) has increased from 877 million tonnes in 1961 to 2342 million tonnes in 2007, the world average cereal yield has increased from 1.35 t ha−1 in 1961 to 3.35 t ha−1 in 2007. Simultaneously, per capita arable land area has decreased from 0.415 ha in 1961 to 0.214 ha in 2007 (Foresight 2009). Put another way, had the yield increases of the last 40–50 years not been achieved, almost three times more land would have been required to produce the crops that sustain the present population; land that, as indicated above, is not available except where it is unsuitable for cropping. Without changes in productivity, the growing population would have led to an even greater expansion in agricultural area than observed, and competition for land would have been greatly intensified. There have also been substantial changes in human food consumption reflected in dietary and nutritional changes over recent decades (Schmidhuber 2003). There is an increasing demand for livestock products, particularly in developing countries (Smith et al. 2007), and given the lower efficiency of producing livestock products compared with the direct consumption of vegetal matter (Stehfest et al. 2009), an increasing proportion of livestock products in the diet is expected to increase competition for land. While agricultural production for food consumption is one of the predominant land-use activities across the globe, land is also used for the production of timber, fibre, energy and landscape amenities, as well as being consumed by urbanization. Historically, the production of forest products has grown rapidly, and a further increase is expected in the future (up to 2030, by 1.4% per annum for sawnwood and 3% for paper and wood-based panels; FAO 2009a).
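The scale of this land sparing can be checked directly from the cereal figures quoted above. The brief calculation below is our own back-of-envelope arithmetic: the yield change alone implies a ratio of roughly 2.5, broadly in line with the text's "almost three times" once growth in non-cereal crops is allowed for.

```python
# Arithmetic check of the land-sparing claim above, using the cereal figures
# quoted in the text (our own back-of-envelope calculation).
prod_2007 = 2342.0    # Mt of cereals produced in 2007
yield_1961 = 1.35     # t/ha, world average cereal yield in 1961
yield_2007 = 3.35     # t/ha, world average cereal yield in 2007

area_at_1961_yields = prod_2007 / yield_1961  # Mha needed without yield gains
area_at_2007_yields = prod_2007 / yield_2007  # Mha implied at 2007 yields

print(round(area_at_1961_yields))                           # 1735 (Mha)
print(round(area_at_1961_yields / area_at_2007_yields, 2))  # 2.48
```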
But worldwide, the area of forest and woodland has decreased over the past decade (FAO 2009a,b; Foresight 2009), largely as a result of agricultural expansion. However, regional differences in forest areas and timber production are stark, with declines occurring in developing countries, but forest expansion in developed countries (table 2).
The different trends between developed and developing countries arise from a number of factors that reflect competition with other land uses.
The growth of crops for bioenergy has been highlighted as a potential competitor for land with food crops. It is noteworthy, though, that the area occupied by bioenergy crops and their by-products in 2004 was only 14 Mha, compared with 1500 Mha of crops (i.e. about 1% of the total cropped area) and 4500 Mha of pastures worldwide (IEA 2006). While the reasons for growing crops for bioenergy are complex, the use of land for them is likely to increase in the future (FAO 2009b). An important issue for competition for land is the potential clearing of new land for biomass crops. Using biomass for energy is likely to have both positive and negative competitive effects on food production and therefore on land, with national and regional policies beginning to reflect differing components of these inter-linkages. With global oil stocks becoming increasingly threatened (UKERC 2009), fossil fuel prices are likely to continue to rise and alternative sources of energy will be needed, not least to maintain agricultural yields. Bioenergy is likely to fill a significant part of this emerging energy gap for agriculture, which in turn will require more integrated energy/agriculture/land-use policies to circumvent adverse impacts of competition for land. An increasing trend in some parts of the world is the use of land for amenity activities and/or biological conservation. This includes recreational uses such as public parks, golf courses and other sports facilities, as well as the conservation of traditional landscapes for their aesthetic, cultural or natural heritage value. Land competition between amenity and other uses depends strongly on geographical location, with stronger pressures for amenity use occurring on land near to urban centres. However, many cultural landscapes are multi-functional, being used, for example, for food or timber production, as well as offering amenity services.
Setting aside land for amenity or conservation potentially increases competition for land on the remaining area, which we return to in §§4 and 5. Degradation of soil and land through inappropriate use or the addition of pollutants has been a topic of concern for many decades, because of the potential impact on biodiversity, and the availability of land for the human population to feed itself. Degradation of land intensifies competition for land, since it reduces the quantity of land suitable for a range of uses such as food production. ISRIC (1991) produced a world map of human-induced soil degradation based on the knowledge of 250 experts from six continents showing that of the 11.5 billion ha of vegetated land, 15 per cent was degraded. Erosion was the main process of degradation, and about 20 per cent of the agricultural land worldwide was moderately degraded and 6 per cent strongly degraded (Oldeman 1994). A more recent global assessment of land degradation (ISRIC 2008) identifies 24 per cent of land as degrading, mainly in Africa (south of the equator), SE Asia and southern China, North and Central Australia, the Pampas and parts of the boreal forest in Siberia and North America. Although cropland occupies only 12 per cent of land area, almost 20 per cent of the degrading land is cropland, with forests also over-represented (28% of area but 42% of degrading land). Some 16 per cent of the land area is improving, including cropland, rangeland and forests. Overall, the assessment shows the importance of natural catastrophic phenomena and human management in driving degradation, with the latter also instrumental in speeding up rehabilitation. Agriculture almost always results in stresses being applied to land (for example, by reducing organic matter returns to soils or the imposition of a physical stress such as tillage), but the properties of some soils allow them to recover naturally and rapidly, while others may require amendments (e.g. 
inputs of fertilizer) or other physical interventions to regain their productive ability (Greenland & Szabolcs 1994). By reducing degradation rates or increasing rates of land rehabilitation, competition for land in areas containing degraded land could be reduced (Debeljak et al. 2009). Agricultural policy in many developed countries is dominated by protectionism, established through trade tariffs and producer support (subsidies). Subsidies affect land-use decisions by influencing the types of land-use strategies adopted by a land manager: for example, farmers may grow only those crops for which they receive financial support through direct payments. In this sense, subsidies tend to limit competition for land. Subsidies also distort markets on a global scale and influence the competitiveness of agricultural land use in other regions of the world. Conversely, policy liberalization often leads to land-use diversification as seen, for example, in New Zealand following the 1984 agricultural policy reforms (MacLeod & Moller 2006), which removed production subsidies virtually overnight (Smith & Montgomery 2004). In doing so, however, a liberalized land-use policy is likely to increase competition between land uses. Pressure from the World Trade Organization, among other drivers, has in part led the governments of the developed world to move away from production-related support to new policy directions based on rural development or environmental protection. Policies such as the Less Favoured Area scheme in Europe, for example, were designed with the objective of protecting agricultural land use in areas with a competitive disadvantage, usually because of physical limitations such as topography or climate. By preserving the status quo of traditional agricultural landscapes, such policies limit or remove entirely the competition between alternative land uses.
Other policies such as the European agri-environment schemes compensate farmers for managing their land to high environmental protection standards. The common theme in rural development and environmental protection policies, however, is the support of farmer incomes, and this leads to the maintenance of current land-use practices, which limits land competition. Competition for land is associated with deforestation owing to agricultural expansion while, at the same time, expansion of forests is leading to competition with other land uses. Furthermore, permanent forest clearing is associated with the loss of many other ecosystem services. Thus, deforestation is not only a phenomenon of competition for land per se, but is also important in considering the wider concept of competition for ecosystem services. Since 1960, agricultural area has increased from just under 4.5 billion ha to just over 4.9 billion ha in 2007 (FAOSTAT 2010). During the last 20 years, there has been an overall increase in agricultural area from 4.86 billion ha in 1990, but showing some fluctuations, with the greatest area of 4.98 billion ha recorded in 2001. Figure 2 shows the absolute and percentage change in agricultural and forest/woodland area for the world, and for each world region, 1990–2007.
Figure 2. (a) Absolute and (b) percentage changes (of total agricultural and forest/wood area) in forest/wood and agricultural areas, 1990–2007, globally and in different world regions. (a) Green bars, forest and wood (Mha); purple bars, agricultural land (Mha). (b) Blue bars, forest and wood (%); brown bars, agricultural land (%). Adapted from FAOSTAT (2010).
As described in §2, the close to tripling of global food production since 1960 has largely been met through increased food production per unit area. For example, Bruinsma (2003) suggests that 78 per cent of the increase in crop production between 1961 and 1999 was attributable to yield increases, and 22 per cent to expansion of harvested area. Land use has therefore changed, despite smaller changes in land cover. While yield increases have outpaced increases in harvested area in most regions, the proportions vary. For example, 80 per cent of total output growth was derived from yield increases in South Asia, compared with only 34 per cent in Sub-Saharan Africa. In industrial countries, where the amount of cultivated land has been stable or declining, increased output was derived predominantly through the development and adoption of agricultural knowledge, science and technology, which has served to increase yields and cropping intensity (van Vuuren et al. 2008). The role of land-use change and adoption of agricultural knowledge, science and technology have, therefore, varied greatly between regions. In some regions, particularly in Latin America, the abundance of land has slowed the introduction of new technologies (van Vuuren et al. 2008).
Models used for examining land-use change and competition for land in this review.
The previous sections have shown that land-use changes are a result of the interaction of a variety of drivers and pressures. In particular, population growth and a shift towards more meat-intensive diets have in the past contributed to an increasing demand for agricultural land. These factors are expected to continue to be important in the future, although trends will differ in time and across regions. Historically, the demand for more agricultural production has been partly compensated by technological advances, and improving technology will determine whether yields will continue to improve in the future.
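Attributions like the 78/22 split quoted from Bruinsma (2003) can be reproduced in spirit with a logarithmic growth decomposition, since production growth factors exactly into yield growth times harvested-area growth. The method and input numbers below are our own illustration; the paper does not state how Bruinsma's attribution was actually computed.

```python
# Sketch of a yield-vs-area growth decomposition (our own illustration, not
# Bruinsma's method). Production = yield x area, so log production growth
# splits additively into a yield term and an area term, whose shares sum to 1.
import math

def growth_shares(prod_0, prod_1, area_0, area_1):
    yield_0, yield_1 = prod_0 / area_0, prod_1 / area_1
    g_total = math.log(prod_1 / prod_0)
    return (math.log(yield_1 / yield_0) / g_total,   # share from yield increases
            math.log(area_1 / area_0) / g_total)     # share from area expansion

# Illustrative numbers only: production up 2.6x while area grows 1.2x.
y_share, a_share = growth_shares(900.0, 2340.0, 650.0, 780.0)
print(round(y_share, 2), round(a_share, 2))  # 0.81 0.19
```

With these made-up inputs the split lands near 80/20, showing how modest area expansion combined with strong yield growth produces attributions of the magnitude reported in the text.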
The complexity of the interactions between different drivers necessitates the use of scenario studies using models of land resources and land use, to analyse the consequences of particular trends and policies. There is a variety of studies and a range of models for addressing these issues. Box 1 gives an overview of the most commonly used models for such analysis at the global scale. For a review of land-use change scenarios at the regional scale, see Alcamo et al. (2006), Busch (2006) and de Chazal & Rounsevell (2009). Future land-use trends are described as part of studies that look into long-term agricultural trends (such as the projections published by the Food and Agriculture Organization of the United Nations (FAO) and the International Food Policy Research Institute (IFPRI)). In addition, studies focusing on agricultural trade increasingly tend to describe the relationships between trade flows and land use. Finally, integrated assessment models, used for examining global environmental change and climate change, are increasingly applied to investigate how climate policies might interact with land-use change. The types of models used in these different areas vary greatly, ranging from models derived from the economic tradition (general equilibrium models, e.g. GTAP-type models) to partial agricultural-economy models (like IMPACT), and models that focus mostly on the interaction of economic activity and biophysical indicators (e.g. the IMAGE and GLOBIOM models; box 1). General equilibrium models account for the economic linkages of the land-use sector with the rest of the economy and allow for assessment of income generation owing to land-use activities. Another strength of these models is their consistent description of agricultural trade. Partial equilibrium models allow for detailed study of agricultural production of different crops and within different regions. Moreover, some of these models are also able to represent specific land-use-related policies.
Biophysically based models allow the relationship between environmental parameters (production potential based on soils and climate; land cover), land use and agriculture to be studied. Within the scope of this paper, we will not be able to review the complete literature of land-use scenarios; instead we will focus on a few noteworthy projections (table 3), while in table 4 we provide some details on the selected models, emphasizing how these models handle land use. For full details, the reader is advised to consult the references given.
The most widely used agricultural projections are those of FAO and IFPRI. IFPRI uses the IMPACT model as the basis of its projections. The methods underlying the FAO projections are more diverse, utilizing both models and expert consultations. Both studies consider mostly agricultural markets, and thus do not fully cover land-use projections. The scenario projections in the Global Environmental Outlook-4 (UNEP 2007), the Millennium Ecosystem Assessment (MEA 2005) and the IAASTD study (van Vuuren et al. 2008) all focused on the relationship between environmental change and the agriculture sector. In these studies, a combination of the IMAGE model and IFPRI's IMPACT model was used to define the scenarios. The scenarios of the other studies look at more specific cases with regard to climate policy and biofuel potential. The general trends common to the scenarios considered here show an increase in land for bioenergy, crops and livestock, with forest and other lands decreasing. The exceptions are scenarios implementing a carbon tax and a lower meat diet, where more land is converted back to unmanaged forest. The scenarios considered by the IMAGE model, and those used in a wider range of studies, are given in tables 5 and 6, respectively. Table 7 shows the different land categories considered by each of the models we compare in this section.
Global food production is projected to increase, driven by population growth and changes in diet (§2a). The increase in production is somewhat slower than in the past, as a result of a slowdown in population growth. Diets are projected to become more meat-intensive, with annual per capita meat consumption increasing. The growth in production of cereals over the 2000–2050 period, based on a range of assessments, varies between 43 and 60 per cent (figure 3). The differences are relatively small since estimates of consumption growth are mostly driven by the increase in the global population (which shows relatively little variation between the different scenarios in 2050). An increasing share of cereals will be used as animal feed to supply the rapidly growing demand for livestock products. As incomes increase, demand for animal products also increases. This trend, which has been empirically established in all regions, is assumed to continue in the scenarios of the three groups of studies considered here. As a result, meat demand is projected to increase at a greater rate than the global population, and diets are projected to become more meat-intensive. For instance, the IFPRI calculations show annual per capita meat consumption increasing, on average, from 90 kg per person per year to over 100 kg between 2000 and 2050 in high-income countries, and from around 25 kg to nearly 45 kg per person per year in low-income countries during the same period. This trend is relevant for land use, since animal products require much more land than crops. On average, the production of beef protein requires several times more land than the production of vegetable proteins, such as cereals (Stehfest et al. 2009). While meat currently represents only 15 per cent of the total global human diet, approximately 80 per cent of the agricultural land is used for animal grazing or the production of feed and fodder for animals (FAO 2006).
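The 43–60 per cent growth range for cereals over 2000–2050 quoted above corresponds to quite modest compound annual rates; the small calculation below (our own, not from the assessments cited) makes that explicit.

```python
# Convert a total percentage growth over a period into the implied compound
# annual growth rate (our own calculation on the 43-60% range quoted above).
def annual_rate(total_growth_pct, years=50):
    return ((1 + total_growth_pct / 100) ** (1 / years) - 1) * 100

print(round(annual_rate(43), 2))  # 0.72  (% per year at the low end)
print(round(annual_rate(60), 2))  # 0.94  (% per year at the high end)
```

So even the high end of the range implies cereal output growing at under 1 per cent per year, well below historical rates of yield growth.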
It should be noted that this includes extensive grasslands in areas where other forms of agriculture would be extremely challenging. Interestingly, future meat production varies considerably more than future cereal production among the scenarios (figure 3), since the scenarios diverge much more in per capita meat consumption than in per capita cereal consumption. Some studies have looked into the consequences of reducing consumption of livestock products, with proteins substituted by additional consumption of pulses (Stehfest et al. 2009), and shown that far less land would be required for agriculture under such extreme scenarios.
Figure 3. Trend in global production of (a) cereals and (b) meat according to various assessments. MA scenarios are from Carpenter & Pingali (2005); the OECD/FAO study is included with (asterisk) and without biofuels; IFPRI 2009 is reported by Msangi & Rosegrant (2009).
The actual demand for cropland in the future depends on the balance between increases in agricultural demand and improvements in yield. Historically, yield improvements (approx. 80%) have contributed more to increased production than expansion of agricultural land (approx. 20%; see §§2 and 3 for more details); as a result, agricultural areas have expanded by only about 5 per cent since 1970. Scenarios show a very large variation in the expected development of cropland (figures 4–6). The 2050 projections for cropland range from an increase as low as 6 per cent (e.g. the Technogarden scenario of the MEA) to an increase of more than 30 per cent (such as for the SRES A2 scenario, and one of the scenarios of the EPPA model; numbers represent the 60% interval of the literature). The average increase is around 10–20% (see also van Vuuren et al. 2008). In general, models with a stronger focus on physical parameters tend to project somewhat lower growth rates than models with a more macro-economic orientation (figure 6).
Figure 4. Change in crop area in various assessments (IAASTD projection includes land for bioenergy crops). Grey area indicates the 20–80th percentile literature range.
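The historical split between yield improvement and area expansion can be recovered from the multiplicative identity production = yield × area; a minimal sketch, using purely illustrative numbers rather than figures from any of the cited assessments:

```python
import math

def growth_shares(prod_growth: float, area_growth: float) -> tuple[float, float]:
    """Split a relative production increase into yield and area components.

    Production = yield * area, so the growth factors multiply:
    (1 + prod_growth) = (1 + yield_growth) * (1 + area_growth).
    Taking logs turns this into an additive decomposition.
    """
    area_share = math.log1p(area_growth) / math.log1p(prod_growth)
    return 1.0 - area_share, area_share  # (yield share, area share)

# Illustrative numbers only: a 50% production increase achieved with a
# 10% expansion of cropland (mid-range of the scenario projections).
yield_share, area_share = growth_shares(0.50, 0.10)
print(f"yield share: {yield_share:.0%}, area share: {area_share:.0%}")
```

With these assumed inputs, roughly three-quarters of the production increase is attributed to yield, echoing the historical dominance of yield improvement noted above.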
Figure 5. Projected change in grazing area in various assessments. Grey area indicates the 20–80th percentile literature range.
Figure 6. Global land-use change by 2020 and 2050 for different models and scenarios (see tables 5 and 6 for abbreviations). Change is given as absolute change relative to 2000, except for MiniCAM (base year 2005) and GRAPE (base year 2010), where the nearest available year was used. Table 7 details the land categories for the different models. Brown, biofuel; orange, crop; yellow, pasture; light green, managed forest; dark green, unmanaged forest; red, other.
The slightly lower contribution (on average) from the expansion of crop area can be attributed to increasing land scarcity and reduced growth of the global population. The decreasing quality of land brought into production, however, may mean that a greater share of gains in total production will need to come from crop area expansion than has historically been the case (as indicated in MEA 2005). Even in the two scenarios with little global expansion of cropland, a considerable expansion of arable land still occurs in Africa, Latin America and parts of Asia, but this is compensated for by a decrease in arable area in temperate zones. Across the assessments, the area in crop production increases from 1.4 billion ha (about 10% of Earth's land surface) to as much as 2.3 billion ha. As indicated by FAO, this expansion is within the scope of the total land available for crop production (Bruinsma 2003). It is interesting that the assessments considered here agree on a rather flexible, continuous response of the agricultural system to increases in demand, as more sceptical views have also been expressed. An important implication, however, is further loss of the area available for unmanaged ecosystems (figures 4 and 7).
Figure 7. Remaining natural area according to projections from various assessments (deserts and ice areas are not included). Grey area indicates the 20–80th percentile literature range.
Increases in meat production will occur through a number of means, including changes that lead to intensified production systems, such as more efficient conversion of feed into animal products, and via expansion of land used for livestock (figure 6). Previous scenarios indicate that most of the increase in world livestock production will occur in developing countries (Bouwman et al. 2005). For grazing systems, this means that some intensification is likely to occur. Considerable intensification is likely in mixed systems, with further integration of crops and livestock in many places. Strong growth is implied for confined livestock production systems. In the FAO scenario, for instance, at least 75 per cent of the total growth is in confined systems, although there are likely to be strong regional differences (e.g. less growth of these systems in Africa; Bruinsma 2003). This is a continuation of historic trends. The major expansion in industrial systems has been in the production of pigs and poultry, as they have short reproductive cycles and are more efficient than ruminants in converting feed concentrates (cereals) into meat. Industrial enterprises now account for 74 per cent of the world's total poultry production, 50 per cent of pig meat and 68 per cent of eggs (FAO 2006, 2009a,b). At the same time, a trend towards more confined systems for cattle has been observed, with a consequent rapid increase in demand for cereal- and soy-based animal feeds (these trends are included in the projections discussed in the previous section; see Delgado et al. 1999). For grazing land, 2050 scenario projections range from a 5 per cent contraction to a 25 per cent increase (60% interval). Most studies show an increase of 10 per cent or less. The IAASTD baseline, for instance, projects an almost constant grazing area (van Vuuren et al. 2008).
These numbers are lower than those for cropland, reflecting the general view that cropland is expected to grow faster than grazing area, driven by further intensification of livestock production systems (and despite the rapid growth in meat consumption). The vast area of land used for animal husbandry also means that studies looking into alternative pathways for land use often identify a large potential for reduction here, either through low-meat diets (Stehfest et al. 2009) or through intensification (Smeets et al. 2007). Obviously, the total demand for agricultural area arises from trends in cropland and grassland together. Studies show diverging trends (figure 6), but there are also some common characteristics. First, almost all studies show an expansion of the area for cropland and grassland in 2020 and 2050 (as already noted in the previous sections). Second, in most studies, expansion of grassland or cropland is the dominant expansion category in 2020; by 2050, however, bioenergy also becomes important in some studies (especially EPPA, MiniCAM and Quickscan). As indicated earlier, cropland expansion is generally more important than expansion of grassland, but there are some noteworthy exceptions (GEO4, and EPPA in 2020). In nearly all studies, both forest area and other areas (savannah, natural grasslands etc.) decline. The lowest rates of land-use change are reported for the MEA (2005) scenarios, the IAASTD scenario, the IMAGE representation of the FAO baseline and the MiniCAM reference. Some of these scenarios include high levels of technological change (Global Orchestration, Technogarden and high AKST). High rates of land-use change are reported for several of the EPPA and MiniCAM scenarios. It should be noted that figure 6 represents a global picture; much more change may happen at the regional level.
As noted above, a considerable expansion of arable land in Africa, Latin America and parts of Asia is compensated for by a decrease in harvested area in temperate zones, and an important implication is further loss of the area available for unmanaged ecosystems. This is already shown in figure 6; figure 7 shows the remaining natural areas globally—but again it should be noted that these global figures hide underlying regional trends. In general, across the assessments, total natural areas decline by about 0–20%. This includes so-called baseline projections, but also scenarios that focus more on the protection of ecosystem services, such as the MEA's Technogarden scenario or the Sustainability First scenario of GEO4. Only a few studies have looked at incremental switches in management systems, such as switches to semi-natural forest management (e.g. Havlík et al. in press) and changes in grassland management. A great impact on land-use change can also come from carbon incentives, as demonstrated by Wise et al. (2009a). For example, in the scenario examined by Wise et al. (2009a) using MiniCAM, in which greenhouse gas emissions from the energy system are regulated but emissions from land-use change are not, massive land-use change towards bioenergy and crops results. In contrast, a policy that targets all potential greenhouse gas emissions (including those from land use) can lead to the preservation of woodland. A similar trend can be observed in the GRAPE model, which also takes carbon cost into account. In fact, these studies suggest that carbon taxation could also change diets via the induced prices of meat. Ever-increasing competition for land may endanger the integrity of currently protected areas, which are located and classified in the World Database on Protected Areas (UNEP-WCMC 2009).
Most model studies discussed above either assume protected areas to be constant, or ignore them as a separate land category. One major exception is the Sustainability First scenario of UNEP's GEO4. Based on a minimum share of protected land by biome category, this study assumes that protected area would need to increase from 2009 to 2030 by up to approximately 400 Mha worldwide. Many of these areas may not enter into strong competition with other land uses, while some are clearly at the forest frontier. Uncertainties in projecting land use have a range of sources, including the level of understanding of the underlying causal relationships (i.e. ‘what is known about driving forces, their impacts and interdependencies?’), the degree of complexity of the underlying system dynamics (i.e. ‘how do driving forces, impacts and their respective feedbacks lead to emergent nonlinear system dynamics?’), the degree of path dependency (i.e. ‘to what degree do the current system state and past trends determine future developments?’), the level of uncertainty introduced by the time horizon (i.e. ‘how far into the future?’) and even surprises and unpredictable future developments. Some of these phenomena follow known random processes, while others cannot be explored well enough because we lack anticipative capacity. For a more complete discussion of different types of uncertainty and their consequences for methods to explore the future, see van Vuuren (2007). This section illustrates some of the uncertainties inherent in future projections of land use and of competition for land, and how these are critically dependent upon future policies on forest protection and bioenergy supply, and future trends in agricultural product preferences and consumption.
Given the substantial uncertainty about how different drivers will evolve and how they will affect competition for land, here we illustrate the impact of uncertainty by presenting results for eight selected changes in drivers between 2020 and 2030. The analysis was carried out using the GLOBIOM model (table 4 and box 1; Havlík et al. in press) over a short timeframe, to reduce the level of uncertainty introduced by the time period considered. Four uncertainty domains were identified for the quantitative modelling analysis on a global scale: biofuel demand, meat demand, wood demand and infrastructure development. In total, eight alternative scenarios were modelled under these four domains, since the biofuel scenarios included five variants differentiated by the expected biofuel mix (table 8). To assess uncertainty, each policy shock (table 8) was incorporated into the baseline separately, and the model was re-run with the new assumptions.
The scenarios were defined in such a way that any expansion of cropland would occur at the cost of forest land, in order to obtain a ‘pure’ measure of the degree of competition for land. Under this specification, the GLOBIOM model considers only drivers of deforestation coming from agriculture or bioenergy production. The model operates under the constraint of a fixed total land area, and allocates land use according to the economic competitiveness of different land-use activities. Deforestation is used as a measure of the degree of competition for land and is itself costly. The cost of avoiding deforestation is the difference between the income from agricultural production that would occur on that land if it were deforested and used for agriculture, and the cost of the deforestation itself (an opportunity cost). Under avoided deforestation, the degree of competition for land is mitigated at the cost of land-use intensification and reduced consumption (Havlík et al. in press). Figure 8 presents the global deforested area, which serves as a proxy for the degree of competition for land, between 2020 and 2030 in Mha. The red line displays the baseline scenario. The biofuel scenarios 1, 3 and 4, and the meat policy shock scenario, cause more deforestation; these scenarios are associated with agricultural land expansion owing to additional production of commodities. Improvement of infrastructure in emerging and developing economies leads, on the one hand, to higher pressure on natural ecosystems at the frontier and, on the other, to higher global agricultural productivity, which will reduce land expansion in the long term. The infrastructure scenario leads to some 3 Mha more deforestation compared with the baseline. The result for the wood scenario is very close to the baseline: additional wood consumption increases the relative value of forest, causing 0.35 Mha less deforestation.
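The opportunity cost described above reduces to a one-line calculation; a minimal sketch, with per-hectare figures that are hypothetical and for illustration only:

```python
def avoided_deforestation_cost(agri_income: float, clearing_cost: float) -> float:
    """Net opportunity cost (per ha) of keeping forest standing: the
    forgone agricultural income, less the clearing cost that is saved
    by not deforesting (deforestation is itself costly)."""
    return agri_income - clearing_cost

# Hypothetical values in $ per hectare, for illustration only.
cost = avoided_deforestation_cost(agri_income=400.0, clearing_cost=150.0)
print(cost)  # 250.0
```

The sign of the result captures the intuition in the text: the more profitable the agricultural use of cleared land relative to the cost of clearing it, the more costly avoided deforestation becomes.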
The only scenario that leads to less deforestation is the fifth biofuel scenario, in which second-generation biofuels are used. This is associated with afforestation activities using high-yielding short-rotation forests. This policy shock leads to a reduction in deforestation of more than 5 Mha over the period 2020–2030, compared with the baseline.
Figure 8. Global deforested area owing to expansion of agricultural land between 2020 and 2030 (Mha). Red line, baseline.
The scenarios demonstrate the range of impacts that a single biofuel production policy shock can exert on deforestation, depending on the type of biofuel production technology used. Further sources of uncertainty lie in the resolution and quality of the land categories considered. Many studies do not distinguish between managed and unmanaged forest, and do not consider conversion to short-rotation coppice as deforestation; in terms of net deforestation, natural forest can therefore be converted in such models to short-rotation coppice without showing land-use change. In the scenarios presented here, deforestation is defined as the conversion of unmanaged natural forest to cropland. The development of different forest types was tracked separately. For example, short-rotation plantations were only allowed to expand into cropland and grassland, and therefore could only indirectly lead to deforestation through cropland expansion elsewhere into unmanaged forest. Increasing forest management intensity does not count as deforestation. Lower deforestation in the second-generation biofuel and WOOD scenarios is due to the increased value of managed forest, which reduces deforestation as described above. However, the increased value of forest management leads to higher conversion of unmanaged forest to managed forest under semi-natural forest management practices. Another source of uncertainty arises from the models themselves: all models provide an imperfect representation of reality and rely on the availability and quality of input data and on additional assumptions. For example, in GLOBIOM, no explicit link is assumed between increased animal production and grassland demand; consequently, the MEAT scenario will overestimate deforestation owing to these restrictive grassland assumptions. It is important to be aware of these inherent uncertainties when dealing with future projections.
Improved models, data and more sophisticated scenarios will allow this uncertainty to be reduced in the future, but projections of future policy impact will always contain a degree of uncertainty. We have shown that competition for land, in itself, is not a driver affecting food and farming in the future, but is an emergent property of other drivers and pressures. There is considerable uncertainty over projections of intensity of competition for land in the future, and the regional distribution of this competition. Modelling studies show that future policy decisions in the agriculture, forestry, energy and conservation sectors could have profound effects, with different demands for land to supply multiple ecosystem services usually intensifying competition for land in the future. Given the need to feed 9 billion people by the middle of this century, and increasing competition for land to deliver non-food ecosystem services, it is clear that per-area agricultural productivity needs to be maintained where it is already close to optimal, or increased in the large proportion of the world where it is suboptimal. It remains a challenge to deliver these increased levels of production in a way that does not damage the environment and compromise other ecosystem services (Royal Society 2009). In summary, in addition to policies addressing agriculture and food production, further policies addressing the primary drivers of competition for land (population growth, dietary preference, protected areas, forest policy) could have significant impacts in reducing competition for land. Technologies for increasing per-area productivity of agricultural land will also be necessary. 
Key uncertainties in our projections of competition for land in the future relate predominantly to uncertainties in the drivers and pressures within the scenarios, uncertainties in the models and data used in the projections, and the policy interventions assumed to affect the drivers and pressures in the future. Though price has been used as an indicator of land scarcity and, therefore, of competition for land, the development of other indicators to assess the intensity of competition for land is in its infancy, and the development of new metrics will advance our understanding of competition for land in the future. This work was supported in part by the UK Foresight Global Food and Farming Futures Project. The work of P.S., J.W., M.O., J.B. and P.H. contributes to the NERC-QUEST-funded QUATERMASS project, and that of M.O., P.H. and P.S. contributes to the EU FP7 project CC-TAME. P.S. is a Royal Society-Wolfson Research Merit Award holder. The authors also thank the researchers who kindly supplied data for the creation of figure 6 in §4—A. Gurgel for the EPPA data, J. Edmonds and Steve Smith for MiniCAM data and T. Kosugi for the GRAPE data.
Footnotes: One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050’. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. © 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Agriculture is a dominant form of land management globally, and agricultural ecosystems cover nearly 40 per cent of the terrestrial surface of the Earth (FAO 2009). Agroecosystems are both providers and consumers of ecosystem services (figure 1). Humans value these systems chiefly for their provisioning services, and these highly managed ecosystems are designed to provide food, forage, fibre, bioenergy and pharmaceuticals. In turn, agroecosystems depend strongly on a suite of ecosystem services provided by natural, unmanaged ecosystems. Supporting services include genetic biodiversity for use in breeding crops and livestock, soil formation and structure, soil fertility, nutrient cycling and the provision of water. Regulating services may be provided to agriculture by pollinators and natural enemies that move into agroecosystems from natural vegetation. Natural ecosystems may also purify water and regulate its flow into agricultural systems, providing sufficient quantities at the appropriate time for plant growth.
Figure 1. Impacts of farm management and landscape management on the flow of ecosystem services and disservices to and from agroecosystems.
Traditionally, agroecosystems have been considered primarily as sources of provisioning services, but more recently their contributions to other types of ecosystem services have been recognized (MEA 2005). Influenced by human management, ecosystem processes within agricultural systems can provide services that support the provisioning services, including pollination, pest control, genetic diversity for future agricultural use, soil retention, regulation of soil fertility and nutrient cycling. Whether any particular agricultural system provides such services in support of provisioning depends on management, and management is influenced by the balance between short-term and long-term benefits. Management practices also influence the potential for ‘disservices’ from agriculture, including loss of habitat for conserving biodiversity, nutrient runoff, sedimentation of waterways, and pesticide poisoning of humans and non-target species (Zhang et al. 2007). Since agricultural practices can harm biodiversity through multiple pathways, agriculture is often considered anathema to conservation. However, appropriate management can ameliorate many of the negative impacts of agriculture, while largely maintaining provisioning services. Agroecosystems can provide a range of other regulating and cultural services to human communities, in addition to provisioning services and services in support of provisioning. Regulating services from agriculture may include flood control, water quality control, carbon storage and climate regulation through greenhouse gas emissions, disease regulation, and waste treatment (e.g. nutrients, pesticides). Cultural services may include scenic beauty, education, recreation and tourism, as well as traditional use. Agricultural places or products are often used in traditional rituals and customs that bond human communities. 
Conservation of biodiversity may also be considered a cultural ecosystem service influenced by agriculture, since most cultures recognize appreciation of nature as an explicit human value. In return, biodiversity can contribute a variety of supporting services to agroecosystems and surrounding ecosystems (Daily 1997). Around the world, agricultural ecosystems show tremendous variation in structure and function, because they were designed by diverse cultures under diverse socioeconomic conditions in diverse climatic regions. Functioning agroecosystems include, among others, annual crop monocultures, temperate perennial orchards, grazing systems, arid-land pastoral systems, tropical shifting cultivation systems, smallholder mixed cropping systems, paddy rice systems, tropical plantations (e.g. oil palm, coffee, cacao), agroforestry systems and species-rich home gardens. This variety of agricultural systems results in a highly variable assortment and quantity of ecosystem services. Just as the provisioning services and products that derive from these agroecosystems vary, the support services, regulating services and cultural services also differ, resulting in extreme variation in the value these services provide, inside and outside the agroecosystem. In maximizing the value of provisioning services, agricultural activities are likely to modify or diminish the ecological services provided by unmanaged terrestrial ecosystems, but appropriate management of key processes may improve the ability of agroecosystems to provide a broad range of ecosystem services. Globally, most landscapes have been modified by agricultural activities and most natural, unmanaged ecosystems sit in a matrix of agricultural land uses. The conversion of undisturbed natural ecosystems to agriculture can have strong impacts on the system's ability to produce important ecosystem services, but many agricultural systems can also be important sources of services. 
Indeed, agricultural land use can be considered an intermediate stage in a human-impact continuum between wilderness and urban ecosystems (Swinton et al. 2007). Just as conversion from natural ecosystems to agriculture can reduce the flow of certain ecosystem services, the intensification of agriculture (Matson et al. 1997) or the conversion of agroecosystems to urban or suburban development can further degrade the provision of beneficial services. The value of ecosystem services has been estimated in various ways. In general, the framework has three main parts: (i) measuring the provision of ecosystem services; (ii) determining the monetary value of ecosystem services; (iii) designing policy tools for managing ecosystem services (Polasky 2008). Ecologists and other natural scientists have been engaged in enhancing our understanding of how ecosystem services are produced for over a decade (e.g. Costanza et al. 1997; Daily 1997; MEA 2005). Basic knowledge about ecosystem structure and function is increasing at a rapid pace, but we know less about how these factors determine the provision of a complete range of ecosystem services from an individual ecosystem (NRC 2005). In practice, most studies focus on estimating the provision of one or two well understood ecosystem services. Better understanding of the processes that influence ecosystem services could allow us to predict the outputs of a range of ecosystem services, given particular ecosystem characteristics and perturbations to those ecosystems. That is, an ‘ecological production function’ might be generated (Polasky 2008). Despite recent advances, this is an area of research that still needs considerable attention. The second step of valuation of ecosystem services typically includes both market and non-market valuation. Valuing the provisioning services that derive from agriculture is relatively straightforward, since agricultural commodities are traded in local, regional or global markets. 
Some ecosystem services provide an essential input to agricultural production, and their value can be measured by estimating the change in the quantity or quality of agricultural production when the services are removed or degraded. This approach has been used to estimate the value of pollination services and biological control services (e.g. Losey & Vaughan 2006; Gallai et al. 2009). Values for such services can also be estimated by measuring replacement costs, such as pesticides replacing natural pest control and hand-pollination or beehive rental replacing pollination. Non-market valuation methods have been used for many years to measure both the use value and the non-use value of various environmental amenities (Mendelsohn & Olmstead 2009). Non-market valuation can be based on revealed preference (behaviour expressed through consumer choices) or stated preference (e.g. attitudes expressed through surveys). In contingent valuation surveys, for example, consumers are asked what they would be willing to pay for the ecosystem service. Another approach is to ask producers—in this case farmers—what they would be willing to accept to supply the ecosystem service (Swinton et al. 2007). The overarching goal of measuring and valuing ecosystem services is to use that information to shape policies and incentives for better management of ecosystems and natural resources. One of the inherent difficulties of managing ecosystem services is that the individuals who control the supply of such services, such as farmers and other land managers, are not always the beneficiaries of these services. Many ecosystem services are public goods. While farmers do benefit from a variety of ecosystem services, their activities may strongly influence the delivery of services to other individuals who do not control the production of these services. Examples include the impact of farming practices on downstream water supply and purity and regional pest management. 
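The replacement-cost logic mentioned above, valuing a service at the price of the purchased input that would substitute for it, can be sketched as follows; the service names and dollar figures are hypothetical, for illustration only:

```python
# Hypothetical per-farm annual figures (US$), for illustration only.
replacement_costs = {
    "natural pest control -> insecticide programme": 12_000.0,
    "wild pollination -> rented honeybee hives": 4_500.0,
}

# Each service is valued at the cost of the purchased input that would
# replace it if the service were lost; the sum gives a lower-bound
# estimate of the services' combined value to the farm.
total_value = sum(replacement_costs.values())
print(f"replacement-cost estimate: ${total_value:,.0f}")  # $16,500
```

Replacement cost is a lower bound because it ignores service qualities that purchased inputs cannot replicate (e.g. insecticide resistance or off-target harm avoided by natural control).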
The challenge is to use emerging information about ecological production functions and valuation to develop policies and incentives that are easily implemented and adaptable to changing ecological and market conditions. One approach is to provide payments for environmental services, through government programmes or private-sector initiatives (Swinton 2008). Historically, the US has provided support for soil conservation investments and other readily observable practices that maintain or enhance certain ecosystem services. In the US, the Conservation Security Program of the 2002 farm bill established payments for environmental services, and many European countries have also provided governmental support for environmentally sound farming practices that support ecosystem services. Agri-environment schemes are intended to moderate the negative environmental effects of intensive agriculture by providing financial incentives to farmers to adopt environmentally sound agricultural practices. The impacts of these schemes are variable, however, and their success is debated (e.g. Baulcombe et al. 2009). A recent evaluation of over 200 paired fields in five European countries indicated that agri-environment programmes had marginal to moderate positive impacts on biodiversity, but largely failed to benefit rare or endangered species (Kleijn et al. 2006). The Economics of Ecosystems and Biodiversity (TEEB), led by the United Nations Environment Programme (UNEP), is an international effort designed to integrate science, economics and policy around biodiversity and ecosystem services. A recent report for policy-makers highlights the link between poverty and the loss of ecosystems and biodiversity, with the intent of facilitating the development of effective policy in this area (ten Brink 2009).
Another approach is the establishment of markets for pollution credits, including the growing global carbon market operating under various cap and trade initiatives, such as the European Union Emission Trading System. The production of agricultural goods is highly dependent on the services provided by neighbouring natural ecosystems, but only recently have there been attempts to estimate the value of many of those services to agricultural enterprises. Some services are more easily quantified than others, to the extent that they are essential to crop production or they substitute directly for purchased inputs. Biological control of pest insects in agroecosystems is an important ecosystem service that is often supported by natural ecosystems. Non-crop habitats provide the habitat and diverse food resources required for arthropod predators and parasitoids, insectivorous birds and bats, and microbial pathogens that act as natural enemies to agricultural pests and provide biological control services in agroecosystems (Tscharntke et al. 2005). These biological control services can reduce populations of pest insects and weeds in agriculture, thereby reducing the need for pesticides. Because the ecosystem services provided by natural enemies can substitute directly for insecticides and crop losses to pests can often be measured, the economic value of these services is more easily estimated than many other services. For example, an analysis of the value of natural enemy suppression of soya bean aphid in soya bean indicated that this ecosystem service was worth a minimum of US$239 million in four US states in 2007–2008 alone (Landis et al. 2008). Since this is an estimate of the value of suppressing a single pest in one crop, the total value of biological control services is clearly much larger. Natural pest control services have been estimated to save $13.6 billion per year in agricultural crops in the US (Losey & Vaughan 2006). 
This estimate is based on the value of crop losses to insect damage as well as the value of expenditures on insecticides. Studies suggest that insect predators and parasitoids account for approximately 33 per cent of natural pest control (Hawkins et al. 1999); the value of pest control services attributed to insect natural enemies has therefore been estimated at $4.5 billion per year (Losey & Vaughan 2006). Pollination is another important ecosystem service to agriculture that is provided by natural habitats in agricultural landscapes. Approximately 65 per cent of plant species require pollination by animals, and an analysis of data from 200 countries indicated that 75 per cent of crop species of global significance for food production rely on animal pollination, primarily by insects (Klein et al. 2007). Of the most important animal-pollinated crops, over 40 per cent depend on wild pollinators, often in addition to domesticated honeybees. Only 35–40% of the total volume of food crop production comes from animal-pollinated crops, however, since cereal crops typically do not depend on animal pollination. Aizen et al. (2009) used data from the United Nations Food and Agriculture Organization (FAO) on the production of 87 globally important crops during 1961–2006 to estimate that the consequences of a complete loss of pollinators for total global agricultural production would be a reduction of 3–8%. The percentage increase in total cultivated area that would be required to compensate for the decrease in production was much higher, particularly in the developing world where agriculture is more pollinator-dependent. Like biological control, pollination services are more readily quantified than many other services. Early estimates of the value of pollination services were based on the total value of animal-pollinated crops, but recent estimates have been more nuanced. 
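The apparent gap between the species-level figure (75 per cent of crop species rely on animal pollination) and the volume-level figure (only 35–40% of production) reflects weighting by production tonnage, since cereals dominate volume but need no animal pollination. A toy sketch may make this distinction concrete; the crop names and tonnages below are invented for illustration, not data from the studies cited above:

```python
# Toy illustration of why a large share of crop *species* can depend on
# animal pollination while a much smaller share of production *volume*
# does: cereals dominate tonnage. All figures invented for illustration.
crops = {
    # name: (production volume in Mt, depends on animal pollination?)
    "wheat":   (700, False),
    "rice":    (750, False),
    "maize":   (1100, False),
    "apples":  (85,  True),
    "coffee":  (10,  True),
    "almonds": (4,   True),
}

dependent_species = sum(1 for _, dep in crops.values() if dep)
species_share = dependent_species / len(crops)

dependent_volume = sum(v for v, dep in crops.values() if dep)
volume_share = dependent_volume / sum(v for v, _ in crops.values())

print(f"Species share dependent on pollinators: {species_share:.0%}")
print(f"Volume share dependent on pollinators:  {volume_share:.1%}")
```

Half the hypothetical species are pollinator-dependent, yet they account for under 4 per cent of total tonnage.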
Since most crops are only partly dependent on animal pollination, a dependence ratio, a measure of the proportional reduction in production in the absence of pollinators, can provide a better approximation of production losses (Gallai et al. 2009). Clearly, these estimates are also fairly crude and intended to provide a broad-brush assessment of potential economic benefits. Moreover, most estimates do not take into account potential changes in the value of each commodity as demand increases owing to reduced crop production. A recent assessment of agricultural vulnerability to loss of pollination services based on the ratio of the economic value of insect pollination to the economic value of the crop indicated an overall vulnerability of 9.5 per cent, but vulnerability varied significantly among types of commodities as well as by geographical region (Gallai et al. 2009). Stimulant crops (coffee, cacao, and tea), nuts, fruits and edible oil crops were predicted to be particularly vulnerable to the loss of pollination services (table 1). The economic impact of insect pollination on world food production in 2005 in the 162 FAO member countries has been calculated at 153 billion euro, but vulnerability to loss of pollinators varies among geographical regions due, in part, to crop specialization (Gallai et al. 2009). For example, West African countries produce 56 per cent of the world's stimulant crops with a vulnerability to pollinator loss of 90 per cent. The loss of pollination services in these crops could have devastating effects on the economies of such countries in the short term and lead to significant restructuring of global prices in the longer term (Gallai et al. 2009).
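The dependence-ratio valuation described above can be sketched numerically. This is an illustrative example only: the crop names, values and dependence ratios are hypothetical, and only the method (insect pollination value as the sum of value times dependence ratio, and vulnerability as that sum divided by total crop value) follows the approach attributed to Gallai et al. (2009):

```python
# Illustrative sketch of the dependence-ratio valuation described above.
# The dependence ratio is the proportional loss of production that would
# occur without animal pollinators. Crop values/ratios are hypothetical.
crops = {
    # name: (economic value of production, dependence ratio)
    "coffee": (100.0, 0.90),
    "apples": (80.0,  0.65),
    "wheat":  (300.0, 0.00),  # cereals: no animal pollination dependence
}

insect_pollination_value = sum(v * d for v, d in crops.values())
total_value = sum(v for v, _ in crops.values())
vulnerability = insect_pollination_value / total_value

print(f"Economic value of insect pollination: {insect_pollination_value:.1f}")
print(f"Vulnerability ratio: {vulnerability:.1%}")
```

Note how a high-value, highly dependent crop such as the hypothetical coffee entry dominates the vulnerability ratio, mirroring the high vulnerability reported for stimulant-crop economies.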
A crucial question is whether the loss of pollination services could jeopardize world food supply. Gallai et al. (2009) conclude that overall production would keep pace with consumption, but a complete loss of pollinators would cause global deficits in fruits, vegetables and stimulants (table 1). Such declines in production could result in significant market disruptions as well as nutrient deficiencies, even if total caloric intake is still sufficient. The provision of sufficient quantities of clean water is an essential ecological service provided to agroecosystems, and agriculture accounts for about 70 per cent of global water use (FAO 2003). Perennial vegetation in natural ecosystems such as forests can regulate the capture, infiltration, retention and flow of water across the landscape. The plant community plays a central role in regulating water flow by retaining soil, modifying soil structure and producing litter. Forest soils tend to have a higher infiltration rate than other soils, and forests tend to reduce peak flows and floods while maintaining base flows (Maes et al. 2009). Through hydraulic lift and vertical uplifting, deep rooting species can improve the availability of both water and nutrients to other species in the ecosystem. In addition, soil erosion rates are usually low, resulting in good water quality. Fast-growing plantation forests may be an exception to this generalization, however; they can help regulate groundwater recharge, but they may reduce stream flow and salinize or acidify some soils (Jackson et al. 2005). Water availability in agroecosystems depends not only on infiltration and flow, but also on soil moisture retention, another type of ecosystem service. While the supply of surface water and groundwater (‘blue water’) inputs to agriculture through irrigation is indispensable in some parts of the world, 80 per cent of agricultural water use comes from rainfall stored in soil moisture (‘green water’; Molden 2007). 
Water storage in soil is regulated by plant cover, soil organic matter and the soil biotic community (bacteria, fungi, earthworms, etc.). Trapping of sediments and erosion are controlled by the architecture of plants at or below the soil surface, the amount of surface litter and litter decomposition rate. Invertebrates that move between the soil and litter layer influence water movement within soil, as well as the relative amounts of infiltration and runoff (Swift et al. 2004). These soil processes provide essential ecosystem services to agriculture. With climate change, increased variability of rainfall is predicted to lead to greater risk of drought and flood, while higher temperatures will increase water demand (IPCC 2007). Estimates of water availability for agriculture often neglect the contribution of green water, but predictions about water availability in 2050 are highly dependent on the inclusion of green water. Whereas more than six billion people are predicted to experience water shortages in 2050 when only blue water is taken into account, this number drops to about four billion when both blue and green water availability is taken into account (Rockström et al. 2009). Some regions of the world are much more dependent on green water than others (Rockström et al. 2009). On-farm management practices that target green water can significantly alter these predictions of water shortages (Rost et al. 2009). For example, modifying the tillage regime or mulching can reduce soil evaporation by 35–50%. Rainwater harvest and on-farm storage in ponds, dykes or subsurface dams can allow farmers to redirect water to crops during periods of water stress, recovering up to 50 per cent of water normally lost to the system. By incorporating moderate values (25%) for reductions in soil evaporation and water harvesting into a dynamic global vegetation and water balance model, Rost et al. 
(2009) predicted that on-farm green water management practices could increase global crop production by nearly 20 per cent, a value comparable to the current contribution of irrigation. True markets for water are rare (Mendelsohn & Olmstead 2009), and the value of hydrological ecosystem services to agriculture is only partially accounted for in most estimates. Most farmers who withdraw surface waters directly do not pay for these services, except where local water sources are controlled by irrigation districts. Agricultural water demand estimates are often based on production data, where the marginal value of water is estimated by the increase in profits from a unit increase in water inputs. Production data can be highly variable, however, and increases in production can be difficult to assign to water inputs (Mendelsohn & Olmstead 2009). Although market approaches for direct water pricing are available, they tend to focus on blue water in a particular water basin. Many water prices for agricultural use are based on groundwater removal, using the energy costs of pumping as the key input variable. The relatively new approach of payments for environmental services has often focused on supporting watershed protection and water quality enhancements that target the provision of blue water (Wunder et al. 2008). It has been suggested recently that farmers should receive payments or ‘green water credits’ from downstream water users for good management practices that enhance green water retention as well as blue water conservation (ISRIC 2007). Soil structure and fertility provide essential ecosystem services to agroecosystems (Zhang et al. 2007). Well-aerated soils with abundant organic matter are fundamental to nutrient acquisition by crops, as well as water retention. Soil pore structure, soil aggregation and decomposition of organic matter are influenced by the activities of bacteria, fungi and macrofauna, such as earthworms, termites and other invertebrates. 
Micro-organisms mediate nutrient availability through decomposition of detritus and plant residues and through nitrogen fixation. Agricultural management practices that degrade soil structure and soil microbial communities include mechanical ploughing, disking, cultivating and harvesting, but management practices can also protect the soil and reduce erosion and runoff. Conservation tillage and other soil conservation measures can maintain soil fertility by minimizing the loss of nutrients and keeping them available to crops. Cover crops facilitate on-farm retention of soil and nutrients between crop cycles, while hedgerows and riparian vegetation reduce erosion and runoff among fields. Incorporation of crop residues can maintain soil organic matter, which assists in water retention and nutrient provision to crops. Together these practices conserve a suite of ecosystem services to agriculture from the soil. The delivery of ecosystem services to agriculture is highly dependent on the structure of the landscape in which the agroecosystem is embedded (figure 1). Agricultural landscapes span a continuum from structurally simple landscapes dominated by one or two cropping systems to complex mosaics of diverse cropping systems embedded in a natural habitat matrix. Water delivery to agroecosystems depends on flow patterns across the landscape and can be influenced by a variety of biophysical factors. Stream flow is influenced by withdrawals for irrigation, as well as landscape simplification. Water provisioning is also affected by diversion to other uses in the landscape or watershed, such as domestic, industrial or energy consumption. Both natural biological control services and pollination services depend crucially on the movement of organisms across the agricultural landscape, and hence the spatial structure of the landscape strongly influences the magnitude of these ecological services to agricultural ecosystems (Tscharntke et al. 2005; Kremen et al. 2007). 
In complex landscapes, natural enemies and pollinators move among natural and semi-natural habitats that provide them with refugia and resources that may be scarce in crop fields (Coll 2009). Natural enemies with the ability to disperse long distances or that have large home ranges are better able to survive in disturbed agricultural landscapes with fewer or more distant patches of natural habitat (Tscharntke et al. 2005). Agricultural intensification can jeopardize many of the ecosystem services provided by the landscape (Matson et al. 1997). Across large areas of North America and Western Europe, agricultural intensification has resulted in a simplification of landscape structure through the expansion of agricultural land, increase in field size, loss of field margin vegetation and elimination of natural habitat (Robinson & Sutherland 2002). This simplification tends to lead to higher levels of pest damage and lower populations of natural enemies (Brewer et al. 2008; Gardiner et al. 2009; O'Rourke 2010). A meta-analysis of the effects of landscape structure on natural enemies and pests in agriculture showed that landscape complexity enhanced natural enemy populations in 74 per cent of cases, whereas pest pressure was reduced in more complex landscapes in 45 per cent of cases (Bianchi et al. 2006). Natural enemies such as predators and parasitoids appear to respond to landscape structure at smaller spatial scales than herbivorous insects (Brewer et al. 2008; O'Rourke 2010) and may be more susceptible to habitat fragmentation. Based on a review of 16 studies of nine crops on four continents, Klein et al. (2007) concluded that agricultural intensification threatens wild bee communities and hence may degrade their stabilizing effect on pollination services at the landscape level. 
Recent studies have suggested that farm-level diversification is more likely to influence pests and natural enemies if the wider landscape is structurally simple than if it is already very complex (Tscharntke et al. 2005; O'Rourke 2010). In complex landscapes, adding farm-level complexity does not necessarily enhance the benefits of pest control services. Agricultural intensification in the landscape can diminish other ecosystem services as well. Protection of groundwater and surface water quality can be threatened by intensification because of increased nutrients, agrochemicals and dissolved salts (Dale & Polasky 2007). Loss of riparian vegetation that often accompanies intensification can result in significant sedimentation of waterways and dams. Other studies, however, have suggested that initial conversion to agriculture can cause significant reductions in ecosystem services, but subsequent intensification of the system may not have large impacts (Steffan-Dewenter et al. 2007). Since the quantification of intensification can be highly variable among studies and agricultural systems, these results may not be incompatible. The bulk of evidence indicates that increasing agricultural intensification will erode many ecosystem services, and projections indicate that 80 per cent of crop production growth in developing countries through to 2030 will come through intensification (FAO 2006). Not all agricultural landscapes are currently shaped by intensification. Interestingly, changes in agricultural policies that encourage regional specialization have led to intensification in some European landscapes, accompanied by cropland abandonment in others (Stoate et al. 2009). Widespread abandonment of agricultural land without restoration presents its own set of problems, including landscape degradation, increased risk of erosion and fire. 
In some areas, both agricultural intensification and land abandonment coexist in the same landscapes, and both processes may influence the delivery of ecosystem services to agroecosystems (Stoate et al. 2009). Agroecosystems are essential sources of provisioning services, and the value of the products they provide is readily measured using standard market analysis. Depending on their structure and management, they may also contribute a number of other ecosystem services (MEA 2005). Ecosystem processes operating within agricultural systems can provide some of the same supporting services described above, including pollination, pest control, genetic diversity for future agricultural use, soil retention, and regulation of soil fertility, nutrient cycling and water. In addition, agricultural systems can be managed to support biodiversity and enhance carbon sequestration—globally important ecosystem services. Agriculture can contribute to ecosystem services, but can also be a source of disservices, including loss of biodiversity, agrochemical contamination and sedimentation of waterways, pesticide poisoning of non-target organisms, and emissions of greenhouse gases and pollutants (Dale & Polasky 2007; Zhang et al. 2007). These disservices come at a significant cost to humans, but there is often a mismatch between the benefits, which accrue to the agricultural sector, and the costs, which are typically borne by society at various scales, from local communities impacted by pesticides in drinking water to the global commons affected by global warming. Linking these disservices more closely to agricultural activities through incorporating the externalities into the costs of production has the potential to reduce these negative environmental consequences of agricultural practices. From the local scale to the global scale, agriculture has profound effects on biogeochemical cycles and nutrient availability in ecosystems (Vitousek et al. 1997; Galloway et al. 2004). 
The two nutrients that most limit biological production in natural and agricultural ecosystems are nitrogen and phosphorus, and they are also heavily applied in agroecosystems. Nitrogen and phosphorus fertilizers have greatly increased the amount of new nitrogen and phosphorus in the biosphere and have had complex, often harmful, effects on natural ecosystems (Vitousek et al. 1997). These anthropogenically mobilized nutrients have entered both groundwater and surface waters, resulting in many negative consequences for human health and the environment. Approximately 20 per cent of N fertilizer applied in agricultural systems moves into aquatic ecosystems (Galloway et al. 2004). Impacts of nutrient loss from agroecosystems include groundwater pollution and increased nitrate levels in drinking water, eutrophication, increased frequency and severity of algal blooms, hypoxia and fish kills, and ‘dead zones’ in coastal marine ecosystems (Bouwman et al. 2009). Ecosystem services within agroecosystems can be supported by nutrient management strategies that recouple nitrogen, phosphorus and carbon cycling within the agroecosystem. Under conventional practice in developed countries, agroecosystems are often maintained in a state of nutrient saturation and are inherently leaky as a result of chronic surplus additions of nitrogen and phosphorus (Galloway et al. 2004; Drinkwater & Snapp 2007; Vitousek et al. 2009). In developing countries, soils are more likely to be depleted and nutrients may be much more limiting to production, though chronic nutrient surpluses may still occur in some systems (table 2; Vitousek et al. 2009).
To maintain ecosystem services, soil nutrient pools can be intentionally managed to supply crops at the right time, while minimizing nutrient losses by reducing soluble inorganic nitrogen and phosphorus pools (Drinkwater & Snapp 2007). Practices such as cover cropping or intercropping enhance plant and microbial assimilation of nitrogen and reduce standing pools of nitrate, the form of nitrogen that is most susceptible to loss. Other good management practices include diversifying nutrient sources, legume intensification for biological nitrogen fixation and phosphorus-solubilizing properties, and diversifying rotations. Integrated management of biogeochemical processes that regulate the cycling of nutrients and carbon could reduce the need for surplus nutrient additions in agriculture (Drinkwater & Snapp 2007). Recent analyses forecasting human alterations of soil nitrogen and phosphorus cycling under various scenarios to 2050 further emphasize that closing nutrient cycles in agroecosystems can significantly influence soil nutrient balance (Bouwman et al. 2009). Spatially explicit modelling of soil nitrogen and phosphorus balances suggests that soil phosphorus will be depleted in grasslands around the world and rock phosphate reserves will be reduced by 36–64% by 2100. Many scenarios indicate increases in soil nitrogen over this period along with increased leaching and denitrification losses, though nitrogen balances are likely to decline in North America and Europe because of ongoing changes in management practices (Bouwman et al. 2009). Other ecosystem disservices from agriculture include applications of pesticides that result in loss of biodiversity and pesticide residues in surface and groundwater, which degrade the water provisioning services provided by agroecosystems. 
Moreover, agriculture modifies the species identity and root structure of the plant community, the production of litter, the extent and timing of plant cover and the composition of the soil biotic community, all of which influence water infiltration and retention in the soil. The intensity of agricultural production and management practices affect both the quantity and quality of water in an agricultural landscape. Practices that maximize plant cover, such as minimum tillage, polycultures or agroforestry systems, are likely to decrease runoff and increase infiltration. Irrigation practices also influence runoff, sedimentation and groundwater levels in the landscape. Agricultural activities are estimated to be responsible for 12–14% of global anthropogenic emissions of greenhouse gases, not including emissions that arise from land clearing (US-EPA 2006; IPCC 2007). After fossil fuel combustion, land-use change is the second largest global cause of CO2 emissions, and some of this change is driven by conversion to agriculture, largely in developing countries. In developed countries, forest conversion to cropland, pasture and rangeland was common through the middle of the twentieth century, but current conversions are primarily for suburban development. In addition to losses of above-ground carbon due to deforestation or other land clearing, conversion of natural ecosystems to agriculture reduces the soil carbon pool by 30–50% over 50–100 years in temperate regions and 50–75% over 20–50 years in the tropics (Lal 2008a). Although agricultural systems generate very large CO2 fluxes to and from the atmosphere, the net flux appears to be small. However, both the magnitude of emissions and the relative importance of the different sources vary widely among agricultural systems around the world. Agricultural activities contribute to emissions in several ways (table 3). 
Approximately 49 per cent of global anthropogenic emissions of methane (CH4) and 66 per cent of global annual emissions of nitrous oxide (N2O), both greenhouse gases, are attributed to agriculture (FAO 2003), although there is a wide range of uncertainty in the estimates of both the agricultural contribution and the anthropogenic total. N2O emissions occur naturally as a part of the soil nitrogen cycle, but the application of nitrogen to crops can significantly increase the rate of emissions, particularly when more nitrogen is applied than can be taken up by the plants. Nitrogen is added to soils through the use of inorganic fertilizers, application of animal manure, cultivation of nitrogen-fixing plants and retention of crop residues. Globally, approximately 50 per cent of N applied as fertilizer is taken up by the crop, 2–5% is stored as soil N, 25 per cent is lost as N2O emissions and 20 per cent moves to aquatic systems (Galloway et al. 2004). In addition to direct N2O emissions, the production of synthetic nitrogen fertilizers is an energy-intensive process that produces additional greenhouse gases. Flooded rice cultivation contributes to greenhouse gas emissions through anaerobic decomposition of soil organic matter by CH4-emitting soil microbes. The practice of burning crop residues contributes to the production of both CH4 and N2O.
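The fertilizer-nitrogen fates quoted above can be laid out as a simple mass balance. The fractions are those cited from Galloway et al. (2004); the small unallocated remainder, covering other loss pathways such as ammonia volatilization, is an inference of this sketch rather than a cited figure, and the application rate is arbitrary:

```python
# Simple mass balance for applied fertilizer N, using the fractions
# quoted above (Galloway et al. 2004). The unallocated remainder is an
# inference of this sketch, not a cited figure.
applied_n = 100.0  # kg N/ha, an arbitrary illustrative application rate

fates = {
    "crop uptake":        0.50,
    "stored as soil N":   0.035,  # midpoint of the 2-5% range
    "lost as N2O":        0.25,
    "to aquatic systems": 0.20,
}

for fate, frac in fates.items():
    print(f"{fate:>18}: {applied_n * frac:5.1f} kg N/ha")

unallocated = applied_n * (1.0 - sum(fates.values()))
print(f"{'unallocated':>18}: {unallocated:5.1f} kg N/ha")
```

Laying the budget out this way makes clear that only about half of each application reaches the crop, which is the efficiency gap that the recoupling strategies discussed earlier aim to close.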
Livestock production also contributes to CH4 and N2O emissions (Pitesky et al. 2009), and these impacts are likely to increase through to 2050 as the demand for meat increases (FAO 2003). Ruminant livestock such as cattle, sheep, goats and buffalo emit CH4 as a byproduct of their digestive processes (enteric fermentation). Livestock waste can release both CH4, through the biological breakdown of organic compounds, and N2O, through microbial metabolism of nitrogen contained in manure. The magnitude of direct emissions depends strongly on manure management practices, such as the use of lagoons or field spreading, and to some degree on the type of livestock feed. The magnitude of emissions attributed to livestock is controversial, ranging from 3 to 18 per cent of global emissions, depending on whether the effects of land-clearing (i.e. deforestation) for livestock production are included in the estimate (Pitesky et al. 2009). On-farm management practices can significantly enhance the ecosystem services provided by agriculture. Farmers routinely manage for greater provisioning services by using inputs and practices to increase yields, but management practices can also enhance other ecosystem services, such as pollination, biological pest control, soil fertility and structure, water regulation, and support for biodiversity. Habitat management within the agroecosystem can provide the resources necessary for pollinators or natural enemies (Tscharntke et al. 2005). Many studies have identified the important role of perennial vegetation in supporting biodiversity in general and beneficial organisms in particular (e.g. Perfecto & Vandermeer 2008). Evidence suggests that management systems that emphasize crop diversity through the use of polycultures, cover crops, crop rotations and agroforestry can often reduce the abundance of insect pests that specialize on a particular crop, while providing refuge and alternative prey for natural enemies (Andow 1991). 
Similar practices, including minimal use of pesticides, no-till systems and crop rotations with mass-flowering crops, may benefit wild pollinators. Agricultural practices can effectively reduce or offset agricultural greenhouse gas emissions through a variety of processes (Drinkwater & Snapp 2007; Lal 2008a; Smith et al. 2008). Effective manure management can significantly reduce emissions from animal waste. Replacing synthetic nitrogen fertilizers with biological nitrogen fixation by legumes can reduce CO2 emissions from agricultural production by half (Drinkwater & Snapp 2007). The process of perennialization and legume intensification in agroecosystems modifies internal cycling processes and increases N use efficiency within agroecosystems via the recoupling mechanisms discussed above. Chronic surplus additions of inorganic N, which are currently commonplace, can be reduced under these scenarios, leading to reductions in NOx and N2O emissions. Agriculture can offset greenhouse gas emissions by increasing the capacity for carbon uptake and storage in soils, i.e. carbon sequestration (Lal 2008a,b). The net flux of CO2 between the land and the atmosphere is a balance between carbon losses from land-use conversion and land-management practices, and carbon gains from plant growth and sequestration of decomposed plant residues in soils. In particular, soil conservation measures such as conservation tillage and no-till cultivation can conserve soil carbon, and crop rotations and cover crops can reduce the degradation of subsurface carbon. In general, water management and erosion control can aid in maintaining soil organic carbon (Lal 2008a). Soil carbon sequestration thus provides additional ecosystem services to agriculture itself, by conserving soil structure and fertility, improving soil quality, increasing the use efficiency of agronomic inputs, and improving water quality by filtration and denaturing of pollutants (Lal 2008b; Smith et al. 2008). 
The economic benefits of conservation agriculture have been estimated in diverse systems around the world, from smallholder agricultural systems in Latin America and sub-Saharan Africa to large-scale commercial production systems in Brazil and Canada (reviewed in Govaerts et al. 2009). Many farmers have already adopted practices that retain soil C in order to achieve higher productivity and lower costs. However, even the use of soil conservation and restoration practices cannot fully restore soil carbon lost through conversion to agriculture. It is estimated that the soil C pool attainable through best management practices is typically 60–70% of the original soil C pool prior to conversion (Lal 2008a). Finally, agricultural land can also be used to grow crops for bioenergy production. Bioenergy, particularly cellulosic biofuels, has the potential to replace a portion of fossil fuels and to lower greenhouse gas emissions (Smith et al. 2008). While burning fossil fuels adds carbon to the atmosphere, bioenergy crops, if managed correctly, avoid this by recycling carbon. Although carbon is released to the atmosphere when bioenergy feedstocks are burned, carbon is recaptured during plant growth. The replacement of fossil fuel-generated energy with solar energy captured by photosynthesis has the potential to reduce CO2, N2O and NOx emissions. However, calculating net emissions from bioenergy is complex (Searchinger et al. 2008). First, management practices used to grow crops and forages for bioenergy production will influence net emissions. Development of appropriate bioenergy systems based on perennial plant species that do not require intensive inputs such as tillage, fertilizers and other agrochemicals has the potential to help offset fossil fuel use in agriculture. 
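Combining the conversion-loss range given earlier (30–50% of soil carbon lost in temperate regions) with the attainable-restoration figure above (60–70% of the original pool, Lal 2008a) gives a rough picture of the permanent soil carbon deficit. The sketch below uses midpoints of the cited ranges and an illustrative pool size; it is a back-of-envelope budget, not a result from the cited studies:

```python
# Back-of-envelope soil carbon budget from the figures cited above:
# temperate conversion to agriculture loses ~30-50% of the original soil
# C pool, and best management can rebuild only ~60-70% of the original
# (Lal 2008a). Midpoints used; the pool size is illustrative.
original_pool = 100.0  # t C/ha, illustrative

after_conversion = original_pool * (1 - 0.40)  # midpoint of 30-50% loss
attainable = original_pool * 0.65              # midpoint of 60-70% attainable

recoverable = attainable - after_conversion
permanent_deficit = original_pool - attainable

print(f"Pool after conversion:           {after_conversion:.0f} t C/ha")
print(f"Attainable with best management: {attainable:.0f} t C/ha")
print(f"Recoverable by management:       {recoverable:.0f} t C/ha")
print(f"Permanent deficit vs original:   {permanent_deficit:.0f} t C/ha")
```

Under these midpoint assumptions, management recovers only a modest fraction of the converted pool, and roughly a third of the original soil carbon remains unrecoverable.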
Bioenergy systems that rely on annual row crops such as corn are not likely to be as beneficial, and expanding these systems can dramatically reduce the delivery of other ecosystem services like biological pest control (Landis et al. 2008). Second, even with the use of perennial species and few inputs, there is significant potential for higher, rather than lower, emissions attributable to bioenergy crops, resulting from land-use change as farmers respond to higher prices and convert forest and grassland to new cropland (Fargione et al. 2008; Searchinger et al. 2008). The production of bioenergy from waste products, such as crop waste, fall grass harvests from reserve lands, or even municipal waste, could avoid land-use change and result in lower CO2 emissions. Several studies have explicitly analysed possible tradeoffs between the supply of various ecosystem services from agricultural systems. In general, ecosystem services are not independent of one another and the relationships between them are likely to be highly nonlinear. For agriculture, the problem is typically posed as a tradeoff between provisioning services—i.e. production of agricultural goods such as food, fibre or bioenergy—and regulating services such as water purification, soil conservation or carbon sequestration (MEA 2005). Cultural services and biodiversity conservation are also often viewed as tradeoffs with production. Tradeoffs among ecosystem services should be considered in terms of spatial scale, temporal scale and reversibility (Rodriguez et al. 2006). Are the effects of the tradeoff felt locally, for example on-farm, or at a more distant location? How quickly does the tradeoff occur? Are the effects reversible and if so, how quickly can they be reversed? Management decisions often focus on the immediate provision of a commodity or service, at the expense of this same or another ecosystem service at a distant location or in the future. 
As either the temporal or spatial scale increases, tradeoffs become more uncertain and difficult to manage. Management is further complicated by biophysical and socioeconomic variation, since every hectare of a given habitat is not of equal value in generating a given ecosystem service (Nelson et al. 2009). For natural ecosystems, habitat quality, size of unit and spatial configuration are likely to influence the services provided by the ecosystem. For agroecosystems, management practices, along with access to markets and patterns of trade, are likely to be critical to the provision of ecosystem services. Furthermore, the values of both market and non-market goods and services will vary according to various biophysical and socioeconomic factors. Without information on the factors that influence the quantity and value of ecosystem services, it is difficult to design policies, incentives or payment schemes that can optimize the delivery of those services (Nelson et al. 2009). Ecosystem services are provided to agriculture at varying scales, and this can influence a farmer's incentives for protecting the ecosystem service. Farmers have a direct interest in managing ecosystem services such as soil fertility, soil retention, pollination and pest control, because they are provided at the field and farm scale. At larger scales, benefits are likely to accrue to others, including other farmers, in addition to the farmer providing the resource. A farmer who restores on-farm habitat complexity increases pollination and pest control services to her neighbours as well as herself. The neighbours benefit from these services without having to give up land that would otherwise produce crops and generate income. Greater landscape complexity may be considered a common pool resource, and a farmer, acting alone, may lack the incentive to set aside the optimal amount of habitat for both the farmer and the neighbour (Zhang et al. 2007). 
Recent studies suggest that tradeoffs between agricultural production and various ecosystem services are not inevitable and that ‘win–win’ scenarios are possible. An analysis of yields from agroecosystems around the world indicates that, on average, agricultural systems that conserve ecosystem services by using practices like conservation tillage, crop diversification, legume intensification and biological control perform as well as intensive, high-input systems (Badgley et al. 2007). The introduction of these types of practices into resource-poor agroecosystems in 57 developing countries resulted in a mean relative yield increase of 79 per cent (Pretty et al. 2006). In these examples, there was no evidence that the provisioning services provided by agriculture were jeopardized by modifying the system to improve its ability to provide other ecological services. These analyses suggest that it may be possible to manage agroecosystems to support many ecosystem services while still maintaining or enhancing the provisioning services that agroecosystems were designed to produce. Sustainable intensification will depend on management of ecosystem processes rather than fossil fuel inputs (Baulcombe et al. 2009). Futures scenarios are an increasingly common tool used to evaluate tradeoffs between commodity production, ecosystem services and the conservation of biodiversity in various ecosystems, including agroecosystems (MEA 2005). In addition, advances in spatially explicit modelling have greatly improved the ability to estimate the production of ecosystem services from landscapes. Analysis of the provision of agricultural goods and other ecosystem services in an agricultural valley in Oregon, USA, found few tradeoffs between ecosystem services and biodiversity conservation (Nelson et al. 2009). 
The spatially explicit modelling tool InVEST (integrated valuation of ecosystem services and tradeoffs—Tallis & Polasky 2009) was used to evaluate three stakeholder-defined scenarios of land use through to 2050, including current land-use patterns, increased development or increased conservation. The models predicted changes in commodity production, biodiversity conservation and ecosystem services (hydrological services, soil conservation and carbon sequestration) under the three scenarios. In general, scenarios that scored high on delivering ecosystem services also scored high on biodiversity conservation. Scenarios with increased development had higher commodity values and lower levels of conservation and ecosystem services, but this tradeoff disappeared when payments for carbon sequestration were included. Other spatially explicit studies have also found that biodiversity conservation and carbon sequestration can be achieved in agricultural landscapes (Eigenbrod et al. 2009). Clearly, more detailed studies like these are needed to reach a conclusion about the ecological and economic conditions that may lead to tradeoffs between agricultural production and ecosystem services. Current FAO projections suggest that the rate of conversion of forested land to agriculture will continue to slow through to 2050, there will be little change in grazing area, and protected areas will increase (FAO 2003, 2006). Increases in protected areas will assist in maintaining the flow of ecosystem services like water provisioning, pollination and biological control to agriculture. Advances in sustainable agriculture in developed countries should also lead to enhanced ecosystem services in agricultural landscapes. In some regions, however, conversion of land to urbanization is expected to increase dramatically and will put significant stress on the availability of agricultural land and protected areas. 
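The disappearing tradeoff reported for the InVEST scenarios can be illustrated with a toy calculation. All numbers below are invented for illustration, and the scenario names are paraphrases; only the qualitative pattern (development scoring highest on commodity value until a payment for carbon sequestration is added to the ledger) follows the result described above.

```python
# Toy sketch (all values hypothetical) of a scenario comparison in which a
# payment for carbon sequestration removes the apparent tradeoff between
# commodity value and conservation-oriented land use.

scenarios = {
    #                commodity   ecosystem   carbon stored
    #                value ($M)  score        (Mt C)
    "plan_trend":    (100.0,     0.60,       10.0),
    "development":   (130.0,     0.40,        6.0),
    "conservation":  ( 90.0,     0.75,       14.0),
}

CARBON_PRICE = 8.0  # hypothetical payment, $M per Mt C stored


def total_value(name, pay_for_carbon):
    """Monetary value of a scenario, optionally crediting stored carbon."""
    commodity, _score, carbon = scenarios[name]
    return commodity + (CARBON_PRICE * carbon if pay_for_carbon else 0.0)


for pay in (False, True):
    best = max(scenarios, key=lambda s: total_value(s, pay))
    print(f"carbon payments={pay}: highest-value scenario is {best}")
```

Without the carbon payment, the development scenario dominates on monetary value alone; with it, the conservation scenario does, mirroring the qualitative result of the study.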
At the global scale, the growth of demand for all crop and livestock products is projected to be lower than in the past: 1.5 per cent per annum in the period 2000–2030 and 0.9 per cent for 2030–2050, compared with rates of around 2.1–2.3 per cent in the preceding four decades, in part owing to lower population growth (FAO 2006). Despite slowing demand growth, ecosystem disservices are likely to increase as a result of intensification of both crop and animal production, particularly in developing countries where demand for energy-intensive food is expected to grow. The current trend for increasing emissions and water pollution from nitrogen fertilizers with agricultural intensification is forecast to continue through to 2050, despite potential increases in fertilizer-use efficiency (FAO 2003): nitrogen-use efficiency is difficult to improve, and fertilizer prices are likely to remain low, so there is little economic pressure to reduce application rates. The predicted growth of confinement systems for animal production in developing countries will lead to increased methane and N2O emissions from manure, even as improvements in productivity reduce emissions per animal (IPCC 2007). Pesticide use and its non-target effects are likely to increase in some regions through to 2030, while decreasing in others because of increasing regulation and IPM adoption (FAO 2003). Agricultural intensification is likely to interact with climate change in several ways. Increased frequency of flooding and droughts will increase nutrient losses through runoff and emissions, while over-extraction of groundwater in intensified systems may be exacerbated by drought. At mid- to high latitudes, crop productivity is expected to increase slightly, then decline, with rising temperatures (IPCC 2007). At lower latitudes, productivity is likely to decline even with small temperature increases. Some of the most food-insecure regions, including sub-Saharan Africa, are projected to experience severe declines in agricultural production owing to water shortages by 2020. 
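The cumulative effect of the projected annual growth rates quoted above can be made concrete with simple compounding. The rates are those cited from FAO (2006); everything else is back-of-envelope arithmetic, not a figure from the source.

```python
# Illustrative arithmetic only: cumulative demand growth implied by
# compounding the FAO-projected annual growth rates quoted in the text.

def cumulative_growth(annual_rate, years):
    """Total multiplicative growth from compounding an annual rate."""
    return (1.0 + annual_rate) ** years

# 1.5% p.a. over 2000-2030, then 0.9% p.a. over 2030-2050
g_2000_2030 = cumulative_growth(0.015, 30)
g_2030_2050 = cumulative_growth(0.009, 20)
g_total = g_2000_2030 * g_2030_2050

print(f"2000-2030: x{g_2000_2030:.2f}")  # roughly 1.56
print(f"2030-2050: x{g_2030_2050:.2f}")  # roughly 1.20
print(f"2000-2050: x{g_total:.2f}")      # roughly 1.87
```

Even these "slower" rates compound to nearly a doubling of total demand over the half-century, which is why disservices can still grow while demand growth slows.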
Moreover, the ability of natural ecosystems to provide ecosystem services to agriculture is expected to be compromised by the interaction of rising temperatures, flooding, drought, pollution and fragmentation (IPCC 2007). In the face of climate change, resilient agricultural systems with limited fossil fuel inputs will be needed (Lin et al. 2008). Sustainable intensification through the management of ecosystem processes has the potential to increase food production while minimizing some of the negative impacts of agricultural intensification on biodiversity and ecosystem services (Baulcombe et al. 2009). Agricultural systems provide provisioning ecosystem services that are essential to human wellbeing. They also provide and consume a range of other ecosystem services, including regulating services and services that support provisioning. Maximizing provisioning services from agroecosystems can result in tradeoffs with other ecosystem services, but thoughtful management can substantially reduce or even eliminate these tradeoffs. Agricultural management practices are key to realizing the benefits of ecosystem services and reducing disservices from agricultural activities. These challenges will be magnified in the face of climate change, but there have been several recent advances in our ability to estimate the value of various ecosystem services related to agriculture, and to analyse the potential for minimizing tradeoffs and maximizing synergies. Future research will need to tackle these challenges in spatially and temporally explicit frameworks. 
Footnotes: One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050'. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. 
© 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 
Agriculture is strongly influenced by weather and climate. While farmers are often flexible in dealing with weather and year-to-year variability, there is nevertheless a high degree of adaptation to the local climate in the form of established infrastructure, local farming practice and individual experience. Climate change can therefore be expected to impact on agriculture, potentially threatening established aspects of farming systems but also providing opportunities for improvements. This paper reviews recent literature relevant to the impacts of climate change on global agricultural productivity through a wide range of processes. The aim is to provide a global-scale overview of all relevant impacts, rather than focusing on specific regions or processes, as the purpose of this review is to inform a wider assessment of the risks to global food security. Although there are a large number of studies which focus on the impact of a particular aspect of climate change in a specific location, there are relatively few studies which provide a global assessment. Moreover, these studies tend to focus more on the direct effect of changes in the mean climate state on crop growth and do not consider changes in extremes or in indirect effects of climate change such as sea-level rise or pests and diseases. A comprehensive, internally consistent assessment of all potential direct and indirect effects of climate change on agricultural productivity has not yet been carried out. As a step towards such a full-system assessment, we complement each stage of our review of the literature with presentation of projected changes in relevant climate-related quantities from the Met Office Hadley Centre (MOHC) models. This allows a comparison of the different aspects of climate change relevant to agricultural productivity, so that the relative importance of the different potential causes of impacts can be assessed. 
This provides some context to decision making in an area of high uncertainty, and also informs future research directions. Most previous assessments of the impacts of climate change on agriculture (and indeed on other sectors) have focused on time horizons towards the end of the twenty-first century, illustrating the impacts of anthropogenic climate change that could be avoided by reducing greenhouse gas emissions. However, there is also a need to assess the impacts of climate change over the next few decades, which may now be largely unavoidable owing to inertia in the physical climate system and the time scales over which large-scale change in human social, economic and political influences on greenhouse gas emissions could be brought about. Even if greenhouse gas emissions began to be reduced immediately, there would still be some level of ongoing warming for decades and some sea-level rise continuing for centuries, as the climate system is slow to respond fully to imposed changes. There is relatively little information in the literature available on climate change impacts over these time horizons, so we present MOHC climate projections for approximately 2020 and 2050 in order to put the existing literature into context on these time scales. This paper focuses on impacts on crop productivity, but many of the processes and impacts discussed may also apply to livestock. Some discussion of this is provided in the electronic supplementary material. The nature of agriculture and farming practices in any particular location are strongly influenced by the long-term mean climate state—the experience and infrastructure of local farming communities are generally appropriate to particular types of farming and to a particular group of crops which are known to be productive under the current climate. 
Changes in the mean climate away from current states may require adjustments to current practices in order to maintain productivity, and in some cases the optimum type of farming may change. Higher growing season temperatures can significantly impact agricultural productivity, farm incomes and food security (Battisti & Naylor 2009). In mid and high latitudes, the suitability and productivity of crops are projected to increase and extend northwards, especially for cereals and cool season seed crops (Maracchi et al. 2005; Tuck et al. 2006; Olesen et al. 2007). Crops prevalent in southern Europe such as maize, sunflower and soya beans could also become viable further north and at higher altitudes (Hildén et al. 2005; Audsley et al. 2006; Olesen et al. 2007). Here, yields could increase by as much as 30 per cent by the 2050s, depending on the crop (Alexandrov et al. 2002; Ewert et al. 2005; Richter & Semenov 2005; Audsley et al. 2006; Olesen et al. 2007). For the coming century, Fischer et al. (2005) simulated large gains in potential agricultural land for regions such as the Russian Federation, owing to longer planting windows and generally more favourable growing conditions under warming, amounting to a 64 per cent increase over 245 million hectares by the 2080s. However, technological development could outweigh these effects, resulting in combined wheat yield increases of 37–101 per cent by the 2050s (Ewert et al. 2005). Even moderate levels of climate change may not necessarily confer benefits to agriculture without adaptation by producers, as an increase in the mean seasonal temperature can bring forward the harvest time of current varieties of many crops and hence reduce final yield without adaptation to a longer growing season. 
In areas where temperatures are already close to the physiological maxima for crops, such as seasonally arid and tropical regions, higher temperatures may be more immediately detrimental, increasing the heat stress on crops and water loss by evaporation. A 2°C local warming in the mid-latitudes could increase wheat production by nearly 10 per cent, whereas at low latitudes the same amount of warming may decrease yields by nearly the same amount (figure 1). Different crops show different sensitivities to warming. It is important to note the large uncertainties in crop yield changes for a given level of warming (figure 1). By fitting statistical relationships between growing season temperature, precipitation and global average yield for six major crops, Lobell & Field (2007) estimated that warming since 1981 has resulted in annual combined losses of 40 million tonnes or US$5 billion, owing to negative relationships between the yields of wheat, maize and barley and growing season temperature. Sensitivity of cereal ((a,b) maize (mid- to high-latitude and low latitude), (c,d) wheat (mid- to high-latitude and low latitude) and (e,f) rice (mid- to high-latitude)) to climate change as determined from the results of 69 studies, against temperature change. Results with (green), and without (red) adaptation are shown. Reproduced from Easterling et al. (2007), fig. 5.2.
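The statistical approach used by Lobell & Field (2007) can be sketched as follows. They regressed year-on-year changes in yield against year-on-year changes in growing season climate, so that slow technology trends drop out of the relationship. The data below are synthetic and purely illustrative; only the first-difference regression method reflects the study.

```python
# Sketch of a first-difference regression of yield on growing season
# temperature, in the spirit of Lobell & Field (2007). Data are invented.

def first_diff(xs):
    """Year-on-year changes of a series."""
    return [b - a for a, b in zip(xs, xs[1:])]


def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den


# Hypothetical series: yields (t/ha) trend upwards with technology but dip
# in anomalously warm years; temperatures (deg C) warm slowly with noise.
temp = [20.0, 20.3, 20.1, 20.6, 20.4, 20.9, 20.7, 21.2]
yields = [3.00, 2.95, 3.10, 3.00, 3.18, 3.05, 3.25, 3.12]

sens = ols_slope(first_diff(temp), first_diff(yields))
print(f"yield sensitivity: {sens:.2f} t/ha per deg C")  # negative: warming hurts
```

A negative fitted slope is what allows a warming trend to be translated into an aggregate production (and hence monetary) loss, as in the US$5 billion estimate quoted above.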
Figure 2 and table 1 show two scenarios for changes in mean annual temperature at 2020 and 2050 relative to the present day. All areas of cropland are projected to experience some degree of warming, but the largest warming is projected in the high latitudes. However, small increases in temperature in low latitudes may have a greater impact than in high latitudes (figure 1), possibly because agriculture in parts of these regions is already marginal.
Two projections of change in annual mean temperature (°C) over global croplands for 30-year means centred around 2020 and 2050, relative to 1970–2000. The two projections are the members of the ensemble with the greatest and least change in annual mean temperature averaged over all global croplands. See the electronic supplementary material for further details.
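The selection rule described in the caption, taking the ensemble members with the greatest and least change averaged over all global croplands, can be sketched as below. The member names, warming fields and cropland weights are invented; only the cropland-area-weighted averaging and max/min selection follow the caption.

```python
# Sketch of picking the two bracketing ensemble members by cropland-area-
# weighted mean warming. All values are hypothetical.

def cropland_mean(delta_t, crop_area):
    """Cropland-area-weighted mean of a gridded change field (flattened lists)."""
    total = sum(crop_area)
    return sum(d * a for d, a in zip(delta_t, crop_area)) / total


crop_area = [0.0, 2.0, 1.0, 3.0]  # cropland area per grid cell (arbitrary units)
ensemble = {                       # hypothetical warming per cell, deg C
    "member_a": [1.0, 1.5, 1.2, 1.4],
    "member_b": [2.0, 2.4, 2.1, 2.3],
    "member_c": [0.9, 1.2, 1.0, 1.1],
}

means = {m: cropland_mean(field, crop_area) for m, field in ensemble.items()}
greatest = max(means, key=means.get)
least = min(means, key=means.get)
print(greatest, least)  # the two projections shown in the figure
```

Weighting by cropland area, rather than taking a plain global mean, keeps the bracketing members relevant to agriculture: a member that warms strongly only over ocean or desert would not be selected.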
Water is vital to plant growth, so varying precipitation patterns have a significant impact on agriculture. As over 80 per cent of total agricultural land is rain-fed, projections of future precipitation changes often influence the magnitude and direction of climate impacts on crop production (Olesen & Bindi 2002; Tubiello et al. 2002; Reilly et al. 2003). The impact of global warming on regional precipitation is difficult to predict owing to strong dependencies on changes in atmospheric circulation, although there is increasing confidence in projections of a general increase in high-latitude precipitation, especially in winter, and an overall decrease in many parts of the tropics and sub-tropics (IPCC 2007). These uncertainties are reflected in the two scenarios shown in figure 3 and table 1, which project different signs of precipitation change averaged over all croplands, even though there is agreement in the sign of change in some regions. One scenario, which predicts an overall increase in precipitation, shows large increases in the southern USA and India but also significant decreases in the tropics and sub-tropics. The other scenario also shows the decreases in the low latitudes but without significant increases in India. Two projections of change in annual mean precipitation (mm d−1) over global croplands for 30-year means centred around 2020 and 2050, relative to 1970–2000. The two projections are the members of the ensemble with the most positive and negative changes in annual mean precipitation averaged over all global croplands. See the electronic supplementary material for further details.
This reflects the wide range of projections of precipitation change from different climate models (Christensen et al. 2007). The differences in precipitation projections arise for a number of reasons. A key factor is the strong dependence on changes in atmospheric circulation, which itself depends on the relative rates of warming in different regions, but there are often a number of factors influencing precipitation change projections in a given location. For example, the uncertainty in precipitation change over India arises partly from the expected weakening of the dynamical monsoon circulation (decreasing the Indian monsoon precipitation) versus the increase in atmospheric water content associated with warming (increasing the Indian monsoon precipitation; Meehl et al. 2007). However, changes in seasonal precipitation may be more relevant to agriculture than annual mean changes. In India, climate models generally project a decrease in dry season precipitation and an increase during the rest of the year including the monsoon season, but still with a large inter-model spread (Christensen et al. 2007). Precipitation is not the only influence on water availability. Increasing evaporative demand owing to rising temperatures and longer growing seasons could increase crop irrigation requirements globally by between 5 and 20 per cent, or possibly more, by the 2070s or 2080s (Döll 2002; Fischer et al. 2006), but with large regional variations—South-East Asian irrigation requirements could increase by 15 per cent (Döll 2002). Regional studies project increasing irrigation demand in the Middle East and North Africa (Abou-Hadid et al. 2003) and potentially 15 per cent increases in irrigation demand in South-East Asia (Arnell et al. 2004; Fischer et al. 2006). However, decreased requirements are projected in China (Tao et al. 2003). Clearly these projections also depend on uncertain changes in precipitation. 
While change in long-term mean climate will have significance for global food production and may require ongoing adaptation, greater risks to food security may be posed by changes in year-to-year variability and extreme weather events. Historically, many of the largest falls in crop productivity have been attributed to anomalously low precipitation events (Kumar et al. 2004; Sivakumar et al. 2005). However, even small changes in mean annual rainfall can impact on productivity. Lobell & Burke (2008) report that a change in growing season precipitation by one standard deviation can be associated with as much as a 10 per cent change in production (e.g. millet in South Asia). For example, Indian agriculture is highly dependent on the spatial and temporal distribution of monsoon rainfall (Kumar et al. 2004). Asada & Matsumoto (2009) analysed the relationship between district-level crop yield data (rainy season ‘kharif’ rice) and precipitation for 1960–2000. It was shown that different regions were sensitive to precipitation extremes in different ways. Crop yield in the upper Ganges basin is linked to total precipitation during the relatively short growing season and is thus sensitive to drought. Conversely, the lower Ganges basin was sensitive to pluvial flooding and the Brahmaputra basin demonstrated an increasing effect of precipitation variability on crop yield, in particular drought. These relationships were not consistent through time, in part owing to precipitation trends. Variation between districts implied the importance of social factors and the introduction of irrigation techniques. Meteorological records suggest that heatwaves became more frequent over the twentieth century, and while individual events cannot be attributed to climate change, the change in probability of a heatwave can be attributed. 
Europe experienced a particularly extreme climate event during the summer of 2003, with average temperatures 6°C above normal and precipitation deficits of up to 300 mm. A record crop yield loss of 36 per cent occurred in Italy for corn grown in the Po valley where extremely high temperatures prevailed (Ciais et al. 2005). It is estimated that such summer temperatures in Europe are now 50 per cent more likely to occur as a result of anthropogenic climate change (Stott et al. 2004). As current farming systems are highly adapted to local climate, growing suitable crops and varieties, the definition of what constitutes extreme weather depends on geographical location. For example, temperatures considered extreme for grain growers in the UK would be considered normal for cereal growers in central France. In many regions, farming may adapt to increases in extreme temperature events by moving to practices already used in warmer climates, for example by growing more tolerant crops. However, in regions where farming exists at the edge of key thresholds, increases in extreme temperatures or drought may move the local climate into a state outside historical human experience. In these cases it is difficult to assess the extent to which adaptation will be possible. Recent increases in climate variability may have affected crop yields in countries across Europe since around the mid-1980s, causing higher inter-annual variability in wheat yields (Porter & Semenov 2005). This study suggested that such changes in annual yield variability would make wheat a high-risk crop in Spain. Even mid-latitude crops could suffer at very high temperatures in the absence of adaptation. In 1972, extremely high summer average temperatures in the former Soviet Union (USSR) contributed to widespread disruptions in world cereal markets and food security (Battisti & Naylor 2009). Changes in short-term temperature extremes can be critical, especially if they coincide with key stages of development. 
Only a few days of extreme temperature (greater than 32°C) at the flowering stage of many crops can drastically reduce yield (Wheeler et al. 2000). Crop responses to changes in growing conditions can be nonlinear, exhibit threshold responses and are subject to combinations of stress factors that affect their growth, development and eventual yield. Crop physiological processes related to growth, such as photosynthesis and respiration, show continuous and nonlinear responses to temperature, while rates of crop development often show a linear response to temperature up to a certain level. Both growth and developmental processes, however, exhibit temperature optima. In the short term, high temperatures can affect enzyme reactions and gene expression. In the longer term, these will impact on carbon assimilation and thus growth rates and eventual yield. The impact of high temperatures on final yield can depend on the stage of crop development. Wollenweber et al. (2003) found that plants experience warming periods as independent events and that critical temperatures of 35°C for a short period around anthesis had severe yield-reducing effects. However, high temperatures during the vegetative stage did not seem to have significant effects on growth and development. Reviews of the literature (Porter & Gawith 1999; Wheeler et al. 2000) suggest that temperature thresholds are well defined and highly conserved between species, especially for processes such as anthesis and grain filling. Although groundnut is grown in semi-arid regions which regularly experience temperatures of 40°C, if after flowering the plants are exposed to temperatures exceeding 42°C, even for short periods, yield can be drastically reduced (Vara Prasad et al. 2003). Maize exhibits reduced pollen viability for temperatures above 36°C. Rice grain sterility is brought on by temperatures in the mid-30s, and similar temperatures can lead to the reverse of the vernalizing effects of cold temperatures in wheat. 
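The threshold behaviour described above can be sketched as a simple counting rule. The 35°C anthesis threshold for wheat is taken from the text; the size of the penalty per hot day is an invented illustrative parameter, not a published value, and real crop models treat this far more carefully.

```python
# Minimal sketch of a heat-threshold yield response around anthesis.
# CRITICAL_T is the wheat anthesis threshold cited in the text;
# PENALTY_PER_DAY is a hypothetical illustrative parameter.

CRITICAL_T = 35.0       # deg C, critical temperature around anthesis (wheat)
PENALTY_PER_DAY = 0.15  # assumed fractional yield loss per day above threshold


def yield_factor(daily_tmax_during_anthesis):
    """Fraction of potential yield retained after heat episodes at anthesis."""
    hot_days = sum(1 for t in daily_tmax_during_anthesis if t > CRITICAL_T)
    return max(0.0, 1.0 - PENALTY_PER_DAY * hot_days)


cool_window = [28.0, 30.5, 31.0, 29.5, 32.0]
heat_window = [33.0, 36.5, 37.0, 35.5, 32.0]  # three days above 35 deg C

print(yield_factor(cool_window))  # no loss: seasonal mean is irrelevant here
print(yield_factor(heat_window))  # sharp loss from only three hot days
```

The point of the sketch is that the damage depends on the count and timing of days above a threshold, not on the seasonal mean: the two windows have similar average temperatures but very different outcomes.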
Increases in temperature above 29°C for corn, 30°C for soya bean and 32°C for cotton negatively impact on yields in the USA. Figure 4 and table 1 show that in all cases and all regions, the one-in-20-year extreme temperature event is projected to become hotter. Events which today are considered extreme would be less unusual in the future. The impacts of extreme temperature events can be difficult to separate from those of drought. However, key temperature thresholds exist beyond which crop physiology is altered, potentially devastating yields. Two projections of change in the one-in-20-year extreme temperature level (°C) over global croplands for 2020 and 2050, relative to 2000. The two projections are the members of the ensemble with the greatest and least change averaged over all global croplands. See the electronic supplementary material for further details.
There are a number of definitions of drought, which generally reflect different perspectives. Holton et al. (2003) point out that ‘the importance of drought lies in its impacts. Thus definitions should be region-specific and impact- or application-specific in order to be used in an operational mode by decision makers.’ It is common to distinguish between meteorological drought (broadly defined by low precipitation), agricultural drought (deficiency in soil moisture, increased plant water stress), hydrological drought (reduced streamflow) and socio-economic drought (balance of supply and demand of water to society; Holton et al. 2003). Globally, the areas sown for the major crops of barley, maize, rice, sorghum, soya bean and wheat have all seen an increase in the percentage of area affected by drought as defined in terms of the Palmer Drought Severity Index (PDSI; Palmer 1965) since the 1960s, from approximately 5–10% to approximately 15–25% (Li et al. 2009). Global mean PDSI has also increased (IPCC 2007), and a comparison of climate model simulations with observed data suggests that anthropogenic increases in greenhouse gas and aerosol concentrations have made a detectable contribution to the observed drying trend in PDSI (Burke et al. 2006). In climate-modelling studies, Burke et al. (2006) define drought as the 20th percentile of the PDSI distribution over time, for pre-industrial conditions; this definition is therefore regionally specific. Therefore at any given time, approximately 20 per cent of the land surface will be defined as being in drought, but the conditions in a normally wet area under drought may still be less dry than those in another region which is dry under normal conditions. Using this definition, the MOHC climate model simulates the proportion of the land surface under drought to have increased from 20 to 28 per cent over the twentieth century (Burke et al. 2006). Li et al. 
(2009) define a yield reduction rate (YRR) which takes a baseline of the long-term trend in yield (assumed to be owing to technological progress and infrastructure improvement) and compares this with actual annual yields to define a YRR owing to climate variability. Using national-scale data for the four major grains (barley, maize, rice and wheat), Li et al. (2009) suggested that 60–75% of observed YRRs can be explained by a linear relationship between YRR and a drought risk index based on the PDSI. Present-day mean YRR values are diagnosed as ranging from 5.82 per cent (rice) to 11.98 per cent (maize). By assuming the linear relationship between the drought risk index and YRR holds into the future, Li et al. (2009) estimated that drought related yield reductions would increase by more than 50 per cent by 2050 for the major crops. The impacts of drought may offset benefits of increased temperature and season length observed at mid to high latitudes. Using models of global climate, crop production and water resources, Alcamo et al. (2007) suggested that decreased crop production in some Russian regions could be compensated by increased production in others, resulting in relatively small average changes. However, their results indicate that the frequency of food production shortfalls could double in many of the main crop growing areas in the 2020s, and triple in the 2070s (Alcamo et al. 2007). Although water availability in Russia is increasing on average, the water resources model predicted more frequent low run-off events in the already dry crop growing regions in the south, and a significantly increased frequency of high run-off events in much of central Russia (Alcamo et al. 2007). Food production can also be impacted by too much water. Heavy rainfall events leading to flooding can wipe out entire crops over wide areas, and excess water can also lead to other impacts including soil water logging, anaerobicity and reduced plant growth. 
Indirect impacts include delayed farming operations (Falloon & Betts in press). Agricultural machinery may simply not be adapted to wet soil conditions. In a study looking at the impacts of current climate variability, Kettlewell et al. (1999) showed that heavy rainfall in August was linked to lower grain quality, owing to sprouting of the grain in the ear and fungal disease infections of the grain. This was shown to affect the quality of the subsequent products such that it influenced the amount of milling wheat that was exported from the UK. The proportion of total rain falling in heavy rainfall events appears to be increasing, and this trend is expected to continue as the climate continues to warm. A doubling of CO2 is projected to lead to an increase in intense rainfall over much of Europe. In the higher end projections, rainfall intensity increases by over 25 per cent in many areas important for agriculture (figure 5). (a) Lower and (b) upper estimates covering the central 80% range of changes in precipitation intensity on wet days with a 1 year return period for a doubling of CO2.
A tropical cyclone is the generic term for a non-frontal synoptic scale low-pressure system over tropical or sub-tropical waters with organized convection (i.e. thunderstorm activity) and definite cyclonic surface wind circulation (Holland 1993). Severe tropical cyclones, with maximum sustained wind speeds of at least 74 mph, are known as ‘hurricanes’ in the eastern North Pacific and North Atlantic and ‘typhoons’ in the western North Pacific. The strongest tropical cyclones can reach wind speeds as large as 190 mph, as recorded in Typhoon Tip in the western North Pacific in October 1979. Tropical cyclones usually occur during the summer and early autumn: around May–November in the Northern Hemisphere and November–April in the Southern Hemisphere, although tropical cyclones are observed all year round in the western North Pacific. The North Indian Ocean is the only basin to have a two-part tropical cyclone season: before and after the onset of the South Asian monsoon, from April to May and October to November, respectively. Figure 6 shows observed tropical cyclone tracks for all known storms over the period 1945–2008. In this context, the most vulnerable agricultural regions are found, among others, in the USA, China, Vietnam, India, Bangladesh, Myanmar and Madagascar. Observed tropical cyclone tracks and intensity for all known storms over the period 1945–2008. Tracks are produced from the IBTrACS dataset of NOAA/NCDC (Knapp et al. 2010).
The societal and economic impacts of tropical cyclones can be severe, particularly in developing countries with high population growth rates in vulnerable tropical and subtropical regions. This is particularly the case in the North Indian Ocean, where the most vulnerable people live in the river deltas of Myanmar, Bangladesh, India and Pakistan; here, population growth has resulted in increased farming in coastal regions most at risk from flooding (Webster 2008). In 2007, cyclone Sidr hit Bangladesh, claiming 3500 lives (United Nations 2007), and in 2008 cyclone Nargis caused 130 000 deaths in Myanmar. The agricultural impacts of these and other recent cyclones are shown in table 2.
Although many studies focus on the negative impacts, tropical cyclones can also bring benefits. In many arid regions in the tropics, a large portion of the annual rain comes from cyclones. It is estimated that tropical cyclones contribute 15–20% of South Florida's annual rainfall (Walther & Abtew 2006), which can temporarily end severe regional droughts. Examples of such storms are hurricane Gabrielle (2001) and tropical storm Fay (2008), which provided temporary relief from the 2000–2001 and 2006–2009 droughts, respectively. As much as 15 inches of rainfall was recorded in some regions from tropical storm Fay, without which these regions would have faced extreme water shortage, wildfires and potential saltwater intrusion into coastal freshwater aquifers (Abtew et al. 2009). Tropical cyclones can also help replenish water supplies to inland regions: cyclone Eline, which devastated agriculture in Madagascar in February 2000, later made landfall in southern Africa and contributed significantly to the rainfall in the semi-desert region of southern Namibia. There is much debate on the global change in tropical cyclone frequency and intensity under a warming climate. Climate modelling studies contributing to the IPCC's Fourth Assessment Report (AR4) suggest tropical cyclones may become more intense in the future, with stronger winds and heavier precipitation (Meehl et al. 2007). This is in agreement with more recent studies using high-resolution models, which also indicate a possible decrease in future global tropical cyclone frequency (McDonald et al. 2005; Bengtsson et al. 2007; Gualdi et al. 2008). However, there is limited consensus among the models on the regional variations in tropical cyclone frequency. Rising atmospheric CO2 and climate change may also affect crops indirectly through effects on pests and disease. These interactions are complex and the full implications for crop yield are as yet uncertain.
Indications suggest that pests, such as aphids (Newman 2004) and weevil larvae (Staley & Johnson 2008), respond positively to elevated CO2. Increased temperatures also reduce the overwintering mortality of aphids, enabling earlier and potentially more widespread dispersion (Zhou et al. 1995). Evidence suggests that in sub-Saharan Africa migration patterns of locusts may be influenced by rainfall patterns (Cheke & Tratalos 2007), so the potential exists for climate change to shape the impacts of this devastating pest. Pathogens and disease may also be affected by a changing climate, whether through the impacts of warming or drought on the resistance of crops to specific diseases or through the increased pathogenicity of organisms caused by mutation induced by environmental stress (Gregory et al. 2009). Over the next 10–20 years, disease affecting oilseed rape could increase in severity within its existing range and spread to more northern regions where at present it is not observed (Evans et al. 2008). Changes in climate variability may also be significant, affecting the predictability and amplitude of outbreaks. Climate changes remote from production areas may also be critical. Irrigated agricultural land comprises less than one-fifth of all cropped area but produces between 40 and 45 per cent of the world's food (Döll & Siebert 2002), and water for irrigation is often extracted from rivers that depend upon distant climatic conditions. For example, agriculture along the Nile in Egypt depends on rainfall in the upper reaches of the Nile, such as the Ethiopian Highlands. Figure 7 shows the projected changes in monthly river-flow for the 2020s and 2050s for selected key rivers of interest in this context. In some rivers, such as the Nile, climate change increases flow throughout the year, which could confer benefits to agriculture. However, in other catchments, e.g. the Ganges, the increase in run-off comes as an increase in peak flow around the monsoon.
However, dry season river-flow is still very low. Without sufficient storage of peak season flow, water scarcity may affect agricultural productivity despite overall increases in annual water availability. Increases at peak flow may also cause damage to crop lands through flooding. Figure 7. Projected mean monthly river flow (kg s−1) for 30 year means centred on 2000 (black), 2020 (green) and 2050 (blue) for the (a) Nile, (b) Ganges and (c) Volga. Projections are bias corrected ensemble means from the Hadley Centre models. See the electronic supplementary material for further details.
Figure 8 shows areas in the world where river flow is dominated by snow melt. These areas are mostly at mid to high latitudes, where predictions for warming are greatest. Warming in winter means that less precipitation falls as snow and that which accumulates melts earlier in the year. Changing patterns of snow cover fundamentally alter how such systems store and release water. Changes in the amount of precipitation affect the volume of run-off, particularly near the end of the winter at the onset of snow melt. Temperature changes mostly affect the timing of run-off, with earlier peak flow in the spring. Although additional river-flow can be considered beneficial to agriculture, this is only true if there is an ability to store run-off during times of excess for use later in the growing season. Globally, only a few rivers currently have adequate storage to cope with large shifts in the seasonality of run-off (Barnett et al. 2005). Where storage capacities are not sufficient, much of the winter run-off will immediately be lost to the oceans. Figure 7c shows the monthly river-flow from the Volga catchment in Russia: an earlier and increased peak flow around snow melt, with subsequently lower flow later in the year. Figure 8. The fraction of run-off originating as snowfall. The red lines indicate the regions where streamflow is snowmelt-dominated, and where there is not adequate reservoir storage capacity to buffer shifts in the seasonal hydrograph. The black lines indicate additional areas where water availability is predominantly influenced by snowmelt generated upstream (but run-off generated within these areas is not snowmelt-dominated). Reproduced from Barnett et al. (2005) with permission from Macmillan Publishers Ltd: Nature.
Some major rivers, such as the Indus and Ganges, are fed by mountain glaciers, with approximately one-sixth of the world's population currently living in glacier-fed river basins (Stern 2007). Populations are projected to rise significantly in major glacier-fed river basins such as the Indo-Gangetic plain. Changes in remote precipitation and in the magnitude and seasonality of glacial melt waters could therefore impact food production for many people. The majority of observed glaciers around the globe are undergoing shrinkage (Zemp et al. 2008). Formally attributing this retreat to recent warming is not currently possible. However, there is a broad consensus that warming is a primary cause of retreat, although changes in atmospheric moisture, particularly in the tropics, may be contributing (Bates et al. 2008). Melting glaciers will initially increase river-flow, although the seasonality of flow will be enhanced (Juen et al. 2007), bringing with it an increased flood risk. In the long term, glacial retreat is expected to continue, leading to an eventual decline in run-off, although the time scale of this decline is uncertain. The Chinese Glacier Inventory catalogued 46 377 glaciers in western China, with approximately 15 000 glaciers in the Himalayas. In total these glaciers store an estimated 12 000 km3 of fresh water (Ding et al. 2006; Cruz et al. 2007). Analysis of glaciers in the western Himalayas demonstrates evidence of glacial thinning (Berthier et al. 2007), and radioactive bomb-test deposits in one high altitude glacier show no net accumulation since 1950 (Kehrwald et al. 2008). The limited number of direct observations also supports evidence of a glacial retreat in the Himalayas (Zemp et al. 2008). The water from these glaciers feeds large rivers such as the Indus, Ganges and Brahmaputra and is likely to contribute a significant proportion of seasonal river flow, although the exact magnitude is unknown.
Currently nearly 500 million people rely on these rivers for domestic and agricultural water resources. Climate change may mean the Indus and Ganges become increasingly seasonal rivers, ceasing to flow during the dry season (Kehrwald et al. 2008). Combined with a rising population, this means that water scarcity in the region would be expected to increase in the future. Sea-level rise is an inevitable consequence of a warming climate, owing to a combination of thermal expansion of the existing mass of ocean water and the addition of extra water from the melting of land ice. This can be expected to eventually cause inundation of coastal land, especially where the capacity for introduction or modification of sea defences is relatively low or non-existent. Regarding crop productivity, vulnerability is clearly greatest where large sea-level rise occurs in conjunction with low-lying coastal agriculture. Many major river deltas provide important agricultural land owing to the fertility of fluvial soils, and many small island states are also low-lying. Increases in mean sea level threaten to inundate agricultural lands and salinize groundwater in the coming decades to centuries, although the largest impacts may not be seen for many centuries owing to the time required to melt large ice sheets and for warming to penetrate into the deep ocean. The potential sea-level rise associated with complete melting of the main ice sheets would be 5 m for the West Antarctic Ice Sheet (WAIS), 60 m for the East Antarctic Ice Sheet (EAIS) and 7 m for the Greenland Ice Sheet (GIS), with both the GIS and WAIS considered vulnerable. Given the possible rates of discharge of these ice sheets and past maximal rates of sea-level rise under similar climatic conditions, a maximum eustatic sea-level rise of approximately 2 m by 2100 is considered physically plausible, but very unlikely (Pfeffer et al. 2008; Rohling et al. 2008; Lowe et al. 2009).
Short-lived storm surges can also cause great devastation, even if land is not permanently lost. There has been relatively little work assessing the impacts of either mean sea-level rise or storm surges on agriculture. As well as influencing climate through radiative forcing, increasing atmospheric CO2 concentrations can also directly affect the plant physiological processes of photosynthesis and transpiration (Field et al. 1995). Therefore any assessment of the impacts of CO2-induced climate change on crop productivity should account for the modification of the climate impact by the CO2 physiological impact. The CO2 physiological response varies between species; in particular, two different pathways of photosynthesis (termed C3 and C4) have evolved, and these affect the overall response. The difference lies in whether ribulose-1,5-bisphosphate carboxylase–oxygenase (RuBisCO) within the plant cells is saturated by CO2. In C3 plants, RuBisCO is not CO2-saturated under present day atmospheric conditions, so rising CO2 concentrations increase net uptake of carbon and thus growth. The RuBisCO enzyme is highly conserved in plants, so the response of all C3 crops, including wheat and soya beans, is thought to be comparable. Theoretical estimates suggest that increasing atmospheric CO2 concentrations to 550 ppm could increase photosynthesis in such C3 crops by nearly 40 per cent (Long et al. 2004). The physiology of C4 crops, such as maize, millet, sorghum and sugarcane, is different. In these plants CO2 is concentrated to three to six times atmospheric concentrations, so RuBisCO is already saturated (von Caemmerer & Furbank 2003), and rising CO2 concentrations confer no additional physiological benefits. These crops may, however, become more water-use efficient at elevated CO2 concentrations, as stomata do not need to stay open as long for the plant to take up the required CO2; yields may therefore increase marginally as a result (Long et al. 2004).
Many studies suggest that yields rise owing to this CO2-fertilization effect, and these results are consistent across a range of experimental approaches, including controlled environment closed chambers, greenhouses, open- and closed-top field chambers, and free-air carbon dioxide enrichment (FACE) experiments (Tubiello et al. 2007). Experiments under idealized conditions show that a doubling of atmospheric CO2 concentration increases photosynthesis by 30–50% in C3 plant species and 10–25% in C4 species (Ainsworth & Long 2005). The crop yield increase is lower than the photosynthetic response; increasing atmospheric CO2 to 550 ppm would on average increase C3 crop yields by 10–20% and C4 crop yields by 0–10% (Gifford 2004; Long et al. 2004; Ainsworth & Long 2005). Some authors argue that crop response to elevated CO2 may be lower than previously thought, with consequences for crop modelling and projections of food supply (Long et al. 2004, 2009). Plant physiologists and modellers alike recognize that the effects of elevated CO2, as measured in experimental settings and subsequently implemented in models, may overestimate actual field and farm level responses. This is because of many limiting factors, such as pests and weeds, nutrients, competition for resources, soil water and air quality, which are neither well understood at large scales nor well implemented in leading models. Despite the potential positive effects on yield quantity, elevated CO2 may be detrimental to the yield quality of certain crops; for example, elevated CO2 reduces the protein content, and hence flour quality, of wheat (Sinclair et al. 2000). Figure 9 and table 1 show the impact of including CO2 physiological effects in projections of plant productivity in agricultural regions. Without CO2 fertilization, many regions, especially in the low latitudes, suffer a decrease in productivity by 2050.
In contrast, by including CO2 fertilization all but the very driest regions show increases in productivity. Figure 9. Two projections of future change in net primary productivity (kg C m−2 yr−1) over global croplands for 30-year means centred around 2020 and 2050, relative to 1970–2000. The two projections show the impact of including CO2 physiological effects and are the members of the ensemble with the most positive and negative changes in productivity averaged over all global croplands. See the electronic supplementary material for further details.
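The pathway-dependent yield responses quoted above, roughly a 10–20% yield increase for C3 crops and 0–10% for C4 crops at 550 ppm CO2, can be expressed as simple multiplier ranges. A minimal sketch, in which the baseline yields are hypothetical illustrative figures rather than values from the studies cited:

```python
# Yield response ranges at 550 ppm CO2, as quoted in the text:
# C3 crops (e.g. wheat, soya): +10-20%; C4 crops (e.g. maize): +0-10%.
RESPONSE_RANGE = {"C3": (0.10, 0.20), "C4": (0.00, 0.10)}

def yield_range_at_550ppm(baseline_t_per_ha, pathway):
    """Return (low, high) yield estimates under CO2 fertilization."""
    lo, hi = RESPONSE_RANGE[pathway]
    return baseline_t_per_ha * (1 + lo), baseline_t_per_ha * (1 + hi)

# Hypothetical baselines: 8 t/ha for a C3 wheat, 10 t/ha for a C4 maize.
print(yield_range_at_550ppm(8.0, "C3"))
print(yield_range_at_550ppm(10.0, "C4"))
```

The multipliers bracket the experimental uncertainty only; they do not capture the field-scale limiting factors (pests, nutrients, soil water) discussed in the text.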
Global-scale comparisons of the impacts of CO2 fertilization with those of changes in mean climate (Parry et al. 2004; Nelson et al. 2009) show that the strength of CO2 fertilization effects is a critical factor in determining whether global-scale yields are projected to increase or decrease. If CO2 fertilization is strong, North America and Europe may benefit from climate change, at least in the short term (figure 10). However, regions such as Africa and India are nevertheless still projected to experience losses of up to 5 per cent by 2050, even with strong CO2 fertilization; these losses increase to up to 30 per cent if the effects of CO2 fertilization are omitted. In fact, without CO2 fertilization, all regions are projected to experience a loss in productivity owing to climate change by 2050. However, existing global-scale studies (Parry et al. 2004; Nelson et al. 2009) have only used a limited sample of available climate model projections. Figure 10. Potential changes (%) in national cereal yields for the 2020s and 2050s relative to 1990, with climate change projected by the HadCM3 model under the A1FI scenario (a) with and (b) without CO2 fertilization. Reproduced from Parry et al. (2004) with permission from Elsevier.
A reduction in CO2 emissions would be expected to reduce the positive effect of CO2 fertilization on crop yields more rapidly than it would mitigate the negative impacts of climate change. Even if GHG concentrations rose no further, there is a commitment to a certain amount of further global warming (IPCC 2007). Stabilization of CO2 concentrations would therefore halt any increase in the impacts of CO2 fertilization, while the impacts of climate change could still continue to grow; in the short term, the impacts on global food production could therefore be negative. However, estimates suggest that stabilizing CO2 concentrations at 550 ppm would significantly reduce production losses by the end of the century (Arnell et al. 2002; Tubiello & Fisher 2006). For all species, higher water-use efficiencies and greater root densities under elevated CO2 in field systems may, in some cases, alleviate drought pressures, yet their large-scale implications are not well understood (Wullschleger et al. 2002; Norby et al. 2004; Centritto 2005). This could offset some of the expected warming-induced increase in evaporative demand, easing the pressure for more irrigation water. It may also alter the relationship between meteorological drought and agricultural/hydrological drought: an increase in meteorological drought may result in a smaller increase in agricultural or hydrological drought owing to the increased water-use efficiency of plants (Betts et al. 2007). Soil moisture and run-off may be more relevant than precipitation and meteorological drought indices as metrics of water resource availability, as they represent the water actually available for agricultural use. These quantities are routinely simulated by physically based climate models as a necessary component of the hydrological cycle. Figure 11 and table 1 show two scenarios of projected changes in soil moisture as a fraction of that required to prevent plant stress.
The available soil moisture fraction is projected to increase on average across global croplands (table 1), with increases in some regions, particularly the mid-latitudes, but decreases in others, particularly in the tropics. Similarly, run-off increases in some regions and decreases in others (figure 12), but the mean change across global croplands varies in sign between scenarios (table 1). Importantly, the scenarios with an increase in mean run-off and the greatest increase in available soil moisture included the effects of CO2 fertilization in the model, while those with a decrease in mean run-off and the smallest increase in soil moisture availability did not include this effect (Betts et al. 2007). Figure 11. Two projections of future change in soil moisture as a fraction of that required to prevent plant water stress over global croplands for 30-year means centred around 2020 and 2050, relative to 1970–2000. Positive values indicate increased water availability. The two projections are the members of the ensemble with the greatest and least change averaged over all global croplands. See the electronic supplementary material for further details.
Figure 12. Two projections of future change in annual mean run-off (mm d−1) over global croplands for 30-year means centred around 2020 and 2050, relative to 1970–2000. The two projections are the members of the ensemble with the most positive and negative changes in annual mean run-off averaged over all global croplands. See the electronic supplementary material for further details.
However, as discussed in §2b, changes in extremes are also important, and agricultural drought may be more critical than annual mean soil moisture availability. With drought defined as the driest 20th percentile of the distribution of soil moisture over time at any given location, the model ensemble used here consistently projects an increase in the time spent under drought in most regions for the first half of the twenty-first century (figure 13 and table 1). Figure 13. Two projections of percentage change in time spent under drought, defined in terms of soil moisture, in global croplands for 30-year means centred around 2020 and 2050, relative to 2000. The two projections are the members of the ensemble with the greatest and least percentage change averaged over all global croplands. See the electronic supplementary material for further details.
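The percentile-based drought definition used above is straightforward to compute: a location is "under drought" whenever soil moisture falls below the 20th percentile of a baseline distribution. A minimal sketch using synthetic series (not model output):

```python
import numpy as np

def time_under_drought(soil_moisture, baseline, percentile=20.0):
    """Fraction of time spent under drought, defined (as in the text)
    as soil moisture below the driest 20th percentile of a baseline
    distribution at the same location."""
    threshold = np.percentile(baseline, percentile)
    return float(np.mean(soil_moisture < threshold))

# Illustrative synthetic monthly series: a 30-year baseline and a
# future period with slightly drier mean conditions (assumed values).
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.30, scale=0.05, size=360)
future = rng.normal(loc=0.27, scale=0.05, size=360)

# By construction roughly 20% of the baseline lies below the threshold;
# the drier future series spends a larger fraction of time under drought.
print(time_under_drought(baseline, baseline))
print(time_under_drought(future, baseline))
```

A modest shift in the mean of the distribution can produce a large change in time spent below a fixed percentile threshold, which is why changes in extremes can matter more than changes in the mean.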
Ozone is a major secondary air-pollutant which, at current concentrations, has been shown to have significant negative impacts on crop yields (Van Dingenen et al. 2009). Whereas emissions of ozone precursors are decreasing in North America and Europe, in other regions of the world, especially Asia, they are increasing rapidly (Van Dingenen et al. 2009). Ozone reduces agricultural yield through several mechanisms. Firstly, acute and visible injury to products such as horticultural crops reduces market value. Secondly, ozone reduces photosynthetic rates and accelerates leaf senescence, which in turn affect final yield. In Europe and North America many studies have investigated such yield reductions (e.g. Morgan et al. 2003), but in other regions, such as Asia, little evidence currently exists, so our understanding of the impacts there is limited. Anthropogenic greenhouse gas emissions and climate change have a number of implications for agricultural productivity, but their aggregate impact is not yet known; indeed, many such impacts and their interactions have not yet been reliably quantified, especially at the global scale. An increase in mean temperature can be confidently expected, but the impacts on productivity may depend more on the magnitude and timing of extreme temperatures. Mean sea-level rise can also be confidently expected, which could eventually result in the loss of agricultural land through permanent inundation, but the impacts of temporary flooding through storm surges may be large although less predictable. Freshwater availability is critical, but predictability of precipitation is highly uncertain, and there is an added problem of lack of clarity on the relevant metric for drought: some studies, including the IPCC assessments, consider metrics based on local precipitation and temperature, such as the Palmer Drought Severity Index, but these do not include all relevant factors.
Agricultural impacts in some regions may arise from climate changes in other regions, owing to the dependency on rivers fed by precipitation, snowmelt and glaciers some distance away. Drought may also be offset to some extent by an increased efficiency of water use by plants under higher CO2 concentrations, although this impact is again uncertain, especially at large scales. The climate models used here project an increase in annual mean soil moisture availability and run-off in many regions, but nevertheless across most agricultural areas there is a projected increase in the time spent under drought as defined in terms of soil moisture. Moreover, even the sign of crop yield projections is uncertain, as this depends critically on the strength of CO2 fertilization and also on O3 damage. Few studies have assessed the response of crop yields to CO2 fertilization and O3 pollution under actual growing conditions, and consequently model projections are poorly constrained. Indirect effects of climate change through pests and diseases have been studied locally, but a global assessment is not yet available. Overall, it does not appear to be possible at the present time to provide a robust assessment of the impacts of anthropogenic climate change on global-scale agricultural productivity. We are grateful to Simon Brown, Ian Crute, Diogo de Gusmão, Keith Jaggard, Doug McNeall, Erika Palin, Doug Smith and Jonathan Tinker for useful discussions. Footnotes: One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050'. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy.
The future for farming and agriculture holds many challenges, not least the continued efforts to optimize energy inputs and reduce greenhouse gas (GHG) emissions. This needs to be set against the urgent and growing need to improve yields to meet the anticipated requirements to provide food, feed, fuel, chemicals and materials for the growing global population. These challenges are and will increasingly be influenced by the availability and price of oil, natural gas and coal, as well as by policies set to meet carbon emissions targets and other sustainability requirements. This paper aims to investigate the impact of energy inputs on agricultural systems up to the farm gate, for the production of key commodities. It has a strong UK focus but draws conclusions where possible from an international perspective. The paper reviews the impact of current and future agricultural production on climate change and policies associated with reducing GHG emissions, and finally considers options for reducing the dependency of agriculture on energy by considering alternatives, including the optimization and integration of land use for multi-purpose outcomes. The Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC 2001) estimated that by 1995, agriculture accounted for about 3 per cent (9 EJ) of global energy consumption, but more than 20 per cent of global GHG emissions. Figure 1 highlights the trend of increasing energy inputs to agriculture since 1971 and shows the high degree of variability both between regions and over time, for example, the collapse in energy inputs in the former Union of Soviet Socialist Republics (USSR) after the fall of the iron curtain in 1989. Figure 1. Primary energy use in agriculture, 1970–1995. Source: IPCC (2001). Light blue line, total fertilizers per ha cropland; brown line, cereal yield; purple line, total area equipped for irrigation; green line, tractors per ha; dark blue line, agricultural labour per ha cropland.
Substantial areas of agricultural land also came out of production as these (former USSR) farms became exposed to global competition, with governments unable to continue subsidizing production. The links between agricultural energy inputs, yields, economic returns, land requirements and land-use change (LUC) need further research. However, LUC has major implications for GHG emissions and carbon stocks, particularly where forest land is cleared or where previously arable land is allowed to revert to forest. These issues are discussed briefly in the ‘indirect emissions’ section below but are not a major focus in this paper. If energy consumption by agriculture had continued to grow at the annual rate outlined by the IPCC for 1995 (IPCC 2001), total energy inputs into agriculture would have exceeded 10 EJ in 2005, equivalent to a share of about 2 per cent of global primary energy consumption. Therefore, agricultural demand for fossil energy, while growing, represents a relatively insignificant and shrinking share of the overall fossil energy supply market. On the other hand, as yields and the inputs needed to support those yields increase, agriculture is becoming more dependent on fossil fuels, either directly for tillage and crop management or through the application of energy-intensive inputs, e.g. nitrogen fertilizer and pesticides. Furthermore, the embodied energy in tractors, buildings and other infrastructure necessary to support agriculture and food supplies is likely to continue to grow as developing agricultural producers invest in the infrastructure needed to increase yields and become competitive in the global food commodity markets, as outlined in figure 2 (IPCC 2001). Embodied energy is all the energy used in the creation of a product. In the life cycle assessment (LCA) described subsequently, it is assumed that the long-term phosphorus (P) and potassium (K) requirements of all crops must be met.
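The extrapolation above, from 9 EJ in 1995 to more than 10 EJ by 2005, implies a minimum average compound growth rate. The IPCC's actual rate is not quoted here, so the following is just the arithmetic behind the claim:

```python
# 9 EJ in 1995 compounding at annual rate r reaches 10 EJ by 2005 once
# (1 + r)**10 >= 10/9, i.e. r of roughly 1% per year or more.
r_min = (10.0 / 9.0) ** (1.0 / 10.0) - 1.0
energy_2005 = 9.0 * (1.0 + r_min) ** 10  # exactly 10 EJ at the minimum rate

print(round(100 * r_min, 2))  # minimum annual growth rate, per cent
```

Any growth rate above roughly one per cent per year is consistent with the text's 10 EJ figure, which underlines how modest the implied growth is relative to total global energy demand.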
Figure 2. Global trends in the intensification of crop production (index 1961–2002/2005). Source: updated from Hazel & Woods (2008) based on FAOSTAT 2010. Dark blue line, industrialized countries; pink line, economies in transition; green line, developing countries in Asia–Pacific; sky blue line, Africa; yellow line, Latin America; cyan line, Middle East.
Fossil energy inputs into agriculture have generally been outweighed by yield improvements that deliver positive energy ratios (energy out : fossil energy in), i.e. the energy content of the harvested crop is greater than the fossil energy used to produce the crop, as highlighted by Samson et al. (2005) in figure 3. Future technologies that allow the higher value starch, oil and/or protein fractions to be harvested along with the lower value lignocellulosic fractions will improve the energy ratios and apparent nutrient use efficiencies of conventional food crops in comparison with dedicated biomass crops, such as switchgrass, as shown. However, over the full life cycle of a crop, particularly where energy-intensive drying and processing are required, in some cases more fossil energy can be used than is contained in the final product. A detailed assessment of the energy inputs and GHG emissions from UK agriculture in food production systems follows. While much of this assessment is specific to the UK, the heterogeneity in inputs, energy carriers, energy intensities and resulting GHG emissions for different crops is considered a conservative representation of commercial agriculture globally. Figure 3. Solar energy collection in the harvested component of crops and fossil fuel energy requirements of Canadian (Ontario) crop production, in gigajoules (GJ) per hectare. Source: Samson et al. (2005). Grey bars, energy content of crop per hectare less fossil-fuel energy consumption; black bars, fossil energy consumption per hectare production.
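The energy ratio defined above is a simple quotient of harvested energy over fossil energy input. A minimal sketch, where the per-hectare figures are purely illustrative and are not values from Samson et al. (2005):

```python
def energy_ratio(crop_energy_gj_per_ha, fossil_input_gj_per_ha):
    """Energy out : fossil energy in, as defined in the text."""
    return crop_energy_gj_per_ha / fossil_input_gj_per_ha

# Hypothetical per-hectare figures (GJ/ha): (harvested energy, fossil input).
crops = {
    "grain maize": (120.0, 20.0),
    "switchgrass": (180.0, 10.0),
}
for name, (out_gj, in_gj) in crops.items():
    # A ratio above 1 means the crop yields more energy than it consumes.
    print(f"{name}: {energy_ratio(out_gj, in_gj):.1f}")
```

As the text notes, a favourable farm-gate ratio can still turn negative over the full life cycle once energy-intensive drying and processing are added to the denominator.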
This section covers the main commodities produced in the UK from the perspective of LCA, a standard method for assessing the ‘cradle to grave’ environmental impacts of a product or process. The detailed breakdown that follows comes from the work of Cranfield University and is reported in various outputs (Williams et al. 2006, 2009; Audsley et al. 2010). The work was parameterized for England and Wales, although much applies in other parts of the UK. The original study included three field crops (bread wheat, oilseed rape and potatoes), four meats (beef, poultry, pork and lamb), milk and eggs. Tomatoes were included as the main protected crop. Apples and strawberries were analysed in a later study, together with overseas production of apples, potatoes, tomatoes, strawberries, lamb, beef and poultry meat. Primary production up to the farm gate was included in all these studies, although in Williams et al. (2009) the endpoint was the regional distribution centre. Various authors, as reported by Pretty et al. (2005), have analysed transport costs from farm to plate (‘food miles’), showing that substantial gains are possible in energy efficiency and waste reduction beyond the farm gate. However, this paper has focused on reviewing energy inputs for production up to the farm gate. With LCA, all energy use is traced back to resources in the ground, so that the overheads of extraction and distribution are included in reported energy figures. All inputs are considered, so that the embodied energies in fertilizer, machinery, buildings and pesticides are included along with the direct energy of diesel and other fuels (also known as energy carriers). Estimates for the energy inputs into animal production include inputs for the production of all feed crops, e.g. UK feed wheat, UK field beans, American soya and forage (grazed grass and conserved grass or maize), and for feed processing and distribution.
All breeding overheads are also included, so that the final values represent the totality of energy used per commodity. One of the challenges of these analyses is how to allocate burdens when crops are multi-functional. Oilseed rape is grown primarily for oil, but oil extraction also yields a useful meal that can be used as an animal feed. For co-products with disparate properties, it is common practice to allocate burdens by economic value, rather than simply by weight or energy content, and this approach has been used here. Energy inputs to produce the UK's main crops (table 1) range from 1 to 6 GJ t−1. However, each agricultural product has very different properties and uses, making comparisons using a single metric problematic. The farming systems employed to grow crops also influence outcomes for energy input, GHG emissions and potentially yield. Comparisons between conventional and organic farming systems often lead to the general conclusion that organic is the more energy-efficient system, but the reduction in fossil energy inputs has to be balanced against human energy inputs, which are often higher for organic systems (Zeisemer 2007). Comparisons of conventional farming and integrated arable farming systems (IAFS) have been reported by Bailey et al. (2003), suggesting that IAFS has lower energy inputs per hectare, but that this is offset by the reduced yields reported for this set of results.
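Economic allocation of this kind is easy to make concrete. A sketch, where the method follows the text but the oilseed rape masses and prices are hypothetical:

```python
def allocate_by_value(total_energy_gj, products):
    """Split a crop's total energy burden between co-products in proportion
    to economic value (mass * price), rather than weight or energy content."""
    values = {name: mass_t * price for name, (mass_t, price) in products.items()}
    total_value = sum(values.values())
    return {name: total_energy_gj * v / total_value for name, v in values.items()}

# Hypothetical per-hectare oilseed rape outputs: (tonnes, US$ per tonne)
shares = allocate_by_value(18.0, {"oil": (1.5, 700.0), "meal": (2.0, 150.0)})
# Oil carries ~78% of the burden despite being the lighter co-product
```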
Oilseed rape stands out as the highest energy consumer per tonne of product, resulting from relatively low yields and high fertilizer requirements, although the grain is more energy-rich than cereals or legumes. Bread wheat receives more fertilizer than feed wheat, in order to obtain the high protein concentrations required for bread-making, and so takes more energy than feed wheat. Although field beans require no nitrogen (N) fertilizer, they have much lower yields than wheat and more diesel is used per tonne of beans produced. Cereals tend to follow the same pattern in terms of energy inputs, and wheat is used here as a proxy for cereals in general (figure 4). UK wheat also has a similar energy input intensity to US maize production, as shown in table 1. In non-organic bread wheat production, over half of the energy used is in fertilization, and about 90 per cent of that energy is in N, typically ammonium nitrate (AN) and urea. Bread wheat is unusual in that urea is applied relatively late in the growing season, as a foliar feed. Direct field energy is just under a quarter of the input. Post-harvest energy inputs are mainly for grain drying and cooling, which were calculated here on a long-term basis; these clearly vary from year to year according to climatic conditions. Pesticide manufacture accounts for less than 10 per cent of energy input, but a lack of modern data leads to higher degrees of uncertainty about the impacts of pesticide use, the most recent publicly available analysis being that of Green (1987). In contrast, organic production uses more diesel per unit of production, owing to lower yields and the obligation to use the plough, coupled with extra cultivations for weed and pest control.
Figure 4. Breakdown of energy used in major domestic crop production. Source: Williams et al. (2009). Green bars, fertilizer manufacture; red bars, pesticide manufacture; blue bars, post harvest; purple bars, machinery manufacture; black bars, field diesel.
Potato cropping is energy-intensive compared with cereals and legumes. For example, the energy used in storage is much larger than for other crops: potatoes are kept cool and a proportion is maintained over the year. This is in contrast to traditional low-energy clamping systems, in which losses are much higher but the supply season shorter. Early potatoes are generally not stored on farms, so field operations account for a major fraction of total energy inputs, which also include irrigation as well as the high energy costs of planting, cultivating and harvesting. However, because potatoes are high-yielding crops, they have a low energy input requirement per tonne harvested. If calculated per tonne of harvested dry matter, potatoes would have a higher energy intensity factor, because their harvested biomass is 80 per cent water, compared with 15–20% for wheat grain, for example. Sugarcane production under Brazilian conditions and management is also high-yielding, and the crop has a high water content (70% moisture) when harvested. The relatively low energy inputs needed for the production of this semi-perennial crop, and its lower moisture content compared with potatoes, mean that on a dry weight basis sugarcane would have a lower energy intensity than UK wheat. Even when sugarcane is processed to ethanol and/or crystalline sugar, fossil energy inputs are minimized because the residual biomass arising from sugar extraction is used to provide power and heat. The types of energy used vary between crops and production systems (figure 5), and also with location. In the UK, as in most of Europe, nitrogen fertilizer production uses mainly natural gas. However, according to He et al. (2009), in China coal currently provides about 80 per cent of the energy inputs into nitrogen fertilizer production, up from 71 per cent in 2004. Diesel comes from crude oil. Electricity used either directly (e.g.
cooling grain) or indirectly (e.g. in machinery manufacture) also draws on coal, nuclear and some renewables. The dominant energy carrier in non-organic wheat production is thus natural gas, but it is crude oil in organic wheat production, and in China it would be coal. The embodied energy in machinery is an overhead of about 40 per cent of the energy used in diesel, reflecting the high-wear environment of cultivating and harvesting, as well as the continually high power demand on engines compared with road transport.
Figure 5. Distribution of energy carriers used in field crop production. Source: Williams et al. (2009). Green bars, renewable %; red bars, nuclear %; grey bars, coal %; blue bars, natural gas %; black bars, crude oil %.
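The fresh- versus dry-weight comparison made above for potatoes and wheat reduces to a one-line conversion. The moisture contents come from the text; the energy inputs per fresh tonne below are illustrative assumptions:

```python
def dry_matter_intensity(gj_per_fresh_tonne: float, moisture: float) -> float:
    """Convert an energy input per fresh tonne into GJ per tonne of dry matter."""
    return gj_per_fresh_tonne / (1.0 - moisture)

potato = dry_matter_intensity(1.5, 0.80)   # 80% water -> 7.5 GJ per t DM
wheat = dry_matter_intensity(2.5, 0.175)   # 15-20% water -> ~3.0 GJ per t DM
# On a dry-matter basis the ranking reverses relative to fresh weight
```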
Although fertilizer manufacture is energy-intensive, reducing fertilizer use has mixed effects. Energy input per hectare is reduced, but so is yield, thus increasing the relative input of cultivation energy per tonne. Reducing yield also implies a need to displace production elsewhere in order to maintain supply. This could be in areas that are less suitable and/or lead to LUC, e.g. conversion of grassland to arable, with the consequent loss of soil carbon (C). It does appear, however, that some reduction in N supply can reduce energy use per tonne of bread wheat (figure 6). However, a very large reduction in N application can cause sufficient yield loss that cultivation becomes the dominant energy demand and energy use per tonne increases again.
Figure 6. Effects of changing N supply on bread wheat using the Cranfield model. PE, primary energy; GWP, global warming potential. Source: Williams et al. (2006). Black line, PE; red long-dashed line, GWP; green long-dashed line, land use.
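The U-shaped response described above can be reproduced with a toy model. The diminishing-returns yield curve and all parameter values below are assumptions for illustration, not the Cranfield model:

```python
import math

def energy_per_tonne(n_kg_ha, cultivation_gj_ha=8.0, n_gj_per_kg=0.04):
    """Energy input per tonne of grain as N rate varies. Yield follows a
    hypothetical diminishing-returns curve; total energy is fixed
    cultivation energy plus embodied fertilizer energy."""
    yield_t_ha = 3.0 + 5.0 * (1.0 - math.exp(-n_kg_ha / 100.0))
    return (cultivation_gj_ha + n_gj_per_kg * n_kg_ha) / yield_t_ha

# Moderate N minimizes energy per tonne; zero or very high N both raise it
```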
The energies used per tonne of the main outputs of animal production are all substantially higher than those for crops (table 2). This results from the concentration effect: animals are fed on crops and concentrate these into high-quality protein and other nutrients. Feed is the dominant term in energy use (on average about 75%), whether as concentrates, conserved forage or grazed grass. Direct energy use includes managing extensive stock, space heating for young birds and piglets, and ventilation for pigs and poultry. Housing makes up a relatively small fraction of total energy inputs, and is even lower for more extensive systems, such as free-range hens. For egg production, the energy demand of manure management is more than offset by the value of chicken manure as a fertilizer, hence the negative value.
The energy carriers used in animal production vary less than those for crops (table 3). About one-third is from crude oil and another third from natural gas. However, because animal feed production and supply requires 70–90% of the total energy inputs for livestock production, animal husbandry may be more vulnerable to high and volatile energy costs than the direct supply of arable crops. This could lead to increased pressure on extensive grazing, reversing the trend over recent decades of decreasing land area requirements per kilogram of livestock production.
Agriculture occupies more than 50 per cent of the world's vegetated land (Foley et al. 2005) and accounts for around 20 per cent of all anthropogenic GHG emissions, depending on where the boundaries are drawn between agriculture and the other sectors, and on revisions to the global warming factors assigned to each GHG (IPCC 2001, 2006; International Fertilizer Industry Association 2009). However, its contribution to methane and nitrous oxide production is disproportionately large. On a global scale, agricultural processes are estimated to account for 50 per cent of anthropogenic methane production and 80 per cent of anthropogenic nitrous oxide production (Olesen et al. 2006; Crutzen et al. 2008). As in industry, fossil fuel combustion for heat and energy at all production stages represents a direct and major source of agricultural GHG emissions. In addition, anaerobic fermentation and microbial processes in soil and manure lead to releases of methane and nitrous oxide in both livestock and arable systems. Nitrogen fertilizer production alone consumes about 5 per cent of global natural gas supply, and significant amounts of nitrous oxide are emitted during the production of nitrate (Jenssen & Kongshaug 2003; Kindred et al. 2008; International Fertilizer Industry Association 2009). Furthermore, emissions as a result of LUC (mainly as carbon dioxide) can form a significant part of the agricultural impact on the atmosphere. The period between 1965 and 2000 saw a doubling of global agricultural production (Tilman 1999). The total area under cultivation has remained relatively static, and this huge increase in output is primarily the result of massive increases in fertilization and irrigation (figure 2; IPCC 2001), as well as improved crop genetics. Global nitrogen fertilizer applications have increased more than sixfold over the past 40 years (Tilman 1999), although there has been considerable regional variation.
The production of mineral and synthetic fertilizers, especially nitrogen via the Haber–Bosch process, uses large amounts of fossil energy, mainly natural gas, releasing around 465 Tg of carbon dioxide into the atmosphere each year (International Fertilizer Industry Association 2009). It has been estimated that 30 per cent of the total fossil energy used in maize production is accounted for by nitrogen fertilizer production (Tilman 1999) and that fertilizer production is responsible for up to 1.2 per cent of all anthropogenic GHG emissions (Wood & Cowie 2004). Fertilizer application can also lead to further emissions. Nitrification and denitrification of mineral and organic nitrogen fertilizers lead to the release of large amounts of nitrous oxide from soils (Snyder et al. 2009). The IPCC (2006) tier 1 estimate is that 1 per cent of all applied nitrogen is emitted in the form of nitrous oxide, although there is considerable uncertainty over this figure. Loss of nitrous oxide from arable soils accounts for around 1.5 per cent of total anthropogenic GHG emissions (International Fertilizer Industry Association 2009). Modern techniques that reduce soil compaction, such as GPS-guided controlled traffic farming, can reduce nitrous oxide emissions by between 20 and 50 per cent (Vermeulen & Mosquera 2009). Emissions vary according to cultivation technique and crop type. Anaerobic turnover in rice paddies is a major source of methane (Olesen et al. 2006), although the anoxic conditions when paddies are flooded minimize carbon dioxide release. Ploughing soils encourages microbial digestion of soil organic matter (SOM), leading to greater net carbon dioxide emissions. Energy use at all stages of arable production represents another significant source of carbon dioxide.
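The tier 1 arithmetic is straightforward to sketch. The 1 per cent emission factor is from IPCC (2006); the 100-year global warming potential of 298 for nitrous oxide is an assumption here (the factor differs slightly between IPCC reports):

```python
N2O_PER_N = 44.0 / 28.0   # mass ratio: N2O emitted per unit of N2O-N
GWP_N2O = 298.0           # 100-yr GWP for N2O (assumed; varies by IPCC report)

def n2o_co2e_kg_per_ha(n_applied_kg_ha: float, ef: float = 0.01) -> float:
    """IPCC tier 1: a fraction ef (default 1%) of applied N is emitted as
    N2O-N; convert to N2O mass, then to kg CO2-equivalent."""
    return n_applied_kg_ha * ef * N2O_PER_N * GWP_N2O

# A 200 kg N/ha application implies roughly 0.9 t CO2e per hectare
```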
However, differences in farming techniques, levels of mechanization, scales of production and soil and weather conditions in different regions make it difficult to quantify total fossil energy use and to extrapolate data from one agricultural system to another. Meat, egg and milk production are estimated to account for half of all the GHG emissions associated with food production and represent about 18 per cent of global anthropogenic emissions (Garnett 2009). In the UK, livestock farming generates 57.5 Tg carbon dioxide equivalent, which is around 8 per cent of total UK emissions (Garnett 2009). Global demand for meat and dairy products is predicted to increase over the next 50 years owing to human population growth and increased wealth. An important source of GHGs in livestock farming is enteric fermentation in ruminants, such as sheep and cattle, which produces significant quantities of methane (Olesen et al. 2006). Growth of crops to feed livestock is another major source of GHG emissions. Around 37 per cent of global cereal production and 34 per cent of arable land is used to provide animal feed (FAO 2006), and so meat, egg and milk production also contributes to the release of nitrous oxide and other gases as described above. A further consideration is the efficiency with which animal feed is converted to meat. A large proportion of animal feed is respired or accumulates in non-edible parts of the animal. In the case of cattle, up to 10 kg of cereal may be required per kilogram of meat produced and so cattle farming can represent a significant demand for land and resources (Garnett 2009). Substantial differences exist between the different forms of livestock production in terms of net energy and protein feed requirements per kilogram meat produced. 
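The feed conversion figures above translate directly into feed and land demand. A sketch, where the beef ratio of up to 10 kg cereal per kg meat comes from the text, while the poultry ratio and cereal yield are assumptions:

```python
def feed_and_land(meat_t, feed_per_kg_meat, cereal_yield_t_ha=8.0):
    """Cereal feed (t) and arable land (ha) needed for a given meat output.
    kg feed per kg meat equals t feed per t meat, so no unit change needed."""
    feed_t = meat_t * feed_per_kg_meat
    return feed_t, feed_t / cereal_yield_t_ha

beef = feed_and_land(1.0, 10.0)    # up to 10 kg cereal per kg beef (text)
poultry = feed_and_land(1.0, 2.0)  # ~2 for poultry (assumed, far lower)
# One tonne of cereal-fed beef can demand ~10 t of cereal and >1 ha of cropland
```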
Increasing and volatile fossil fuel prices, unless mitigated, could drive both reductions in meat demand owing to increased prices and a switch to lower energy intensity, higher efficiency forms of meat production, possibly favouring mono-gastric rather than ruminant supply chains. On a global scale, 75 per cent of anthropogenic GHG emissions are the result of fossil fuel combustion. The remaining 25 per cent are primarily the result of LUC (Le Quéré 2009; Snyder et al. 2009). However, land also continues to be a net sink for carbon, absorbing about 29 per cent of total emissions, with the oceans taking up a further 26 per cent. The balance, about 45 per cent, accrues to the atmosphere (Le Quéré 2009). Deforestation involves the removal of large above-ground biomass stocks, which represented an important carbon sink during the twentieth century (Bondeau et al. 2007). Below-ground biomass is also lost, as woody root systems are replaced by the smaller, finer roots of grasses and crop plants. Disturbance during cultivation breaks down SOM and accelerates decomposition, leading to further losses of soil carbon and, consequently, carbon dioxide emissions (IPCC 2006). The soil organic carbon contents of temperate arable, grassland and woodland soils are of the order of 80, 100 and 130 t C ha−1, respectively (Bradley et al. 2005). It is thought that between 50 and 100 years are required for soil carbon content to reach a new equilibrium following LUC (Falloon et al. 2004; King et al. 2005), so this form of disturbance constitutes a long-term source of carbon dioxide. It is generally assumed that there is little difference in soil carbon between annual and perennial food crops, including fruit orchards and plantation crops (IPCC 2006). However, detailed information is lacking and further research is needed to determine the real effects of perennial crops on emissions from soils.
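The source and sink shares quoted above each partition the global carbon budget completely; a quick consistency check:

```python
# Shares from the text (Le Quere 2009; Snyder et al. 2009)
SOURCES = {"fossil_fuel": 0.75, "land_use_change": 0.25}
SINKS = {"land": 0.29, "ocean": 0.26, "atmosphere": 0.45}

def fate_of_emissions(total_pg_c: float) -> dict:
    """Apportion total anthropogenic emissions between land, ocean and air."""
    return {dest: total_pg_c * share for dest, share in SINKS.items()}

# Both partitions sum to 1, so e.g. 10 Pg C emitted leaves 4.5 Pg C airborne
```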
Deforestation in the Brazilian Amazon basin to provide land for cattle ranching and soya bean cultivation for animal feed accounts for a loss of 19 400 km2 of rainforest each year. This alone accounts for 2 per cent of global anthropogenic GHG emissions. While complex interlinkages and causality chains exist as drivers of deforestation, much of the soya bean grown in Brazil is exported for use as animal feed in Europe, Asia, the US and Russia. Soya bean expansion is more closely associated with Amazonian deforestation than the expansion of other crops (Volpi 2010). Overall, 7 per cent of anthropogenic emissions, totalling 2.4 Pg of carbon dioxide per year, are estimated to be the result of livestock-induced LUC (Garnett 2009). Consequently, livestock farming is a major cause of LUC. Use of former forest land for cattle ranching represents a direct LUC; use of the land to grow feed for livestock overseas represents a major indirect LUC. Each process results in further GHG emissions. Fossil energy prices directly affect the costs of tillage and fertilizers, and indirectly affect almost all aspects of agricultural production, through to the prices of food seen by the end consumer. The previous sections of this paper have outlined the different energy inputs and GHG emissions (energy and non-energy related) of a range of agricultural production pathways for the major food commodities. The results strongly suggest that the production costs of some agricultural commodities will be more sensitive to changing fossil fuel prices than others, and that the options for mitigating the risks of fossil energy prices will also differ between those chains. This section assesses the trends in the price of oil, natural gas and coal over the last four decades and uses differences between projections for future oil prices to 2030 as a proxy for overall fossil fuel price volatility in this period.
Historic trends in the spot prices of oil, natural gas and coal show that throughout the 1980s and most of the 1990s, spot prices remained below US$4 per GJ, with coal staying below US$2 per GJ until the turn of the millennium (figure 7). In fact, until 1995 fossil fuel prices were converging around US$2 per GJ, making electricity production, in particular, more attractive from natural gas than from coal because of the greater flexibility, decreased capital costs and modularity of natural gas-fired power stations. Since 1995, prices have increased first for oil, then for gas and finally for coal. By 2007, prices for oil and natural gas had more than quadrupled, while for coal they had nearly trebled. Since then, as a result of recession and also of increased investment in new supply and refining capacity, prices have fallen sharply but more recently, since the beginning of 2009, have started increasing again, particularly for oil, although not yet to the levels seen in 2007 (BP 2009; IEA 2009; US EIA 2009).
Figure 7. Trends in global oil, gas and coal spot-market prices, 1961–2009 (US$ per GJ). Source: BP (2009); IEA (2009). Dark blue with diamonds, oil (Dubai): $ GJ−1; pink with squares, gas (EU): $ GJ−1; yellow with triangles, coal (EU): $ GJ−1.
In part, increasing supplies are a result of the deployment of new technologies, allowing hitherto inaccessible fossil fuel resources such as oil shale, tar sands or ‘tight’ gas reserves to be exploited. They are also a result of conventional supplies becoming constrained, with the resulting price increases making previously uneconomic reserves profitable to access. As shown in figure 5, all agricultural commodities in the UK simultaneously use all forms of fossil-derived energy, and some renewables too. A major question remains as to whether increasing overall prices, and increasing volatility in those prices, will drive further diversity in energy supply resources, reductions in overall energy intensity, or even reductions in the total supply of agricultural products. As a result of real and perceived constraints to conventional fossil fuel supplies, in particular oil and natural gas, robust predictions for prices more than a few years ahead are not available, and the uncertainties associated with projections to 2030 are so great that the US Energy Information Administration currently uses three scenarios for oil price projections that range from US$50 to US$200 per barrel (figure 8).
Figure 8. Projected oil and gas price ranges to 2030 (US$ per GJ). Source: US EIA (2009). Dark blue line, reference case ($130 per bbl oil); red line, high price ($200 per bbl oil); green line, low price ($50 per bbl oil); dashed violet line, gas: 2008 US$ GJ−1.
For natural gas, the dominant energy feedstock for nitrogen fertilizer production, the recent development of new drilling techniques has released very substantial quantities of so-called ‘tight’ or ‘shale’ gas, reducing the price of natural gas in the US from around US$13 per MBTU in 2008 to less than US$5 per MBTU in early 2010 (The Economist 2010), i.e. from US$12.7 to US$4.3 per GJ. If tight gas is found elsewhere in substantial volumes, as seems possible, then the historic link between oil and gas prices will be broken, with oil prices likely to increase significantly while gas remains competitive with coal. If bioenergy, particularly biodiesel and biogas, becomes cheaper than the direct fossil fuel inputs into agriculture, primarily diesel, then a rapid switch to on-farm bioenergy is likely to occur where rotary power, transport and thermal processing are required. While the complexity of the interactions between conventional agricultural feedstocks for food and their use for energy, when coupled to global oil markets, makes this price threshold difficult to estimate, it is likely to be around US$70–100 per barrel of oil equivalent, but may be lower for large-scale commercial production facilities. Whether this switch to bioenergy production is competitive or synergistic with food production will mainly depend on: the strength of the linkage between energy and food prices; the rate of increase in demand for bioenergy feedstocks as commodity crops; the impact of increased investment from bioenergy and the resultant increase in yields of both conventional crops (food and fuel) and advanced lignocellulosic crops; and the availability of new land or recovered degraded or abandoned land. The impact of climate change on agricultural production is still uncertain. However, reports of the potential outcomes for agriculture are well documented (AEA 2007).
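Comparing prices quoted per MBTU, per barrel and per GJ needs only fixed conversion factors. A sketch (1 MBTU is about 1.055 GJ; a barrel of oil is taken as roughly 6.1 GJ, an approximation); the slightly different per-GJ figures in the text imply marginally different underlying prices:

```python
GJ_PER_MBTU = 1.055056  # gigajoules per million BTU
GJ_PER_BBL = 6.1        # approximate energy content of a barrel of oil, GJ

def mbtu_price_to_gj(usd_per_mbtu: float) -> float:
    return usd_per_mbtu / GJ_PER_MBTU

def bbl_price_to_gj(usd_per_bbl: float) -> float:
    return usd_per_bbl / GJ_PER_BBL

us_gas_2008 = mbtu_price_to_gj(13.0)   # ~US$12.3 per GJ
oil_ref_case = bbl_price_to_gj(130.0)  # ~US$21 per GJ
```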
Farmers in general face the looming spectre of climate change at two levels: first, by having to adapt existing practices to cope with the outcomes of climate change (i.e. changing weather patterns; water availability; changing patterns of pests, disease and thermal stress in livestock) and, second, by addressing those farming activities that contribute to increased GHG emissions. While it is likely that farmers will readily adopt measures that benefit their productivity and financial outcomes, adopting practices at a cost to farming businesses is more likely to require policy intervention. Developing mechanisms to improve GHG abatement in the agricultural sector is complex, not least because policy mechanisms are often devised through different departmental policy-making regimes. Within the EU Climate and Energy Package (2008), the agricultural industry is not part of one of the main components, the European Emissions Trading Scheme (EU ETS 2009). Agriculture, as a non-EU ETS sector, is charged with reducing emissions to 10 per cent below 2005 levels by 2020, and it is anticipated that this will be through binding national targets. In the policy context, the farming industry faces many challenges before carbon trading as an economic strategy becomes a reality. The UK Government published its low carbon transition plan in 2009 (http://www.theccc.org.uk/carbon-budgets). The Plan's main points for agriculture are to:
Policies to reduce emissions from the fossil energy sector may impact on agriculture in two different ways: first, by promoting crops that can be used as feedstocks for biofuel or bioenergy, different growing regimes and more efficient energy inputs may be adopted; second, the GHG emission reporting requirements being developed for biofuels may affect farming practices, particularly if benefits for improved emissions are transferred down the supply chain to the feedstock producers. Policies in the UK that aim to affect fossil fuel energy use, and which in turn may impact on agriculture, are the renewable transport fuels obligation (RTFO; DfT 2007) and the renewables obligation (RO; DTI 2007). In the EU, the climate and energy package (2008) committed the 27 member states to reduce CO2 emissions by 20 per cent and to target a 20 per cent share of energy supply from renewable energy by 2020, the so-called ‘20–20 in 2020’. Policy instruments in the package that may then indirectly impact on agriculture are the Fuels Quality Directive (EU FQD 2009) and the Renewable Energy Directive (EU RED 2009). The FQD aims to reduce harmful atmospheric emissions, including GHGs, and includes mandatory monitoring of life cycle GHG emissions. The RED aims to promote renewable energies and has a component that addresses the sustainability of biofuels and of the land used to grow biofuel feedstocks. In the United States, the California Environmental Protection Agency Air Resources Board (CARB) has been at the forefront of developing policy to reduce emissions from fossil energy and has developed the low carbon fuel standard (LCFS 2007). This standard is under review by a number of individual states in the US, which are also looking to adopt an emissions approach to the inclusion of biofuels in transport fuels.
Nationwide in the US, the Environmental Protection Agency (EPA) has developed, under the Energy Independence and Security Act of 2007, a renewable fuel standard programme (RFS2 2009) that aims to increase the volume of renewable fuel in gasoline from 9 billion gallons (34 billion litres) in 2008 to 36 billion gallons (136 billion litres) by 2022. In many ways, these policies are leading the development of methodologies that will improve energy efficiency and reduce GHG emissions across supply chains. Improving emissions and ensuring the sustainability of biofuels have led to the development of a variety of policy-specific methodologies. They have also encouraged the formation of global stakeholder initiatives that address environmental, economic and social issues (e.g. the Roundtable on Sustainable Biofuels (RSB) and the Global Bioenergy Partnership (GBEP)) and crop-specific initiatives (e.g. the Roundtable on Sustainable Palm Oil (RSPO), the Round Table on Responsible Soy (RTRS) and the Better Sugar Cane Initiative (BSI)). The UK's RTFO has been devised with GHG emissions monitoring and reduction as a key component, and it has been necessary to stipulate in law the methodology and processes for reporting GHG emissions from the individual biofuel supply chains used by obligated parties (RFA 2009). The RTFO's carbon and sustainability methodologies cover biofuel supply chains by feedstock source, by country and by on-farm production inputs and outputs. In a biofuel supply chain, this may encourage farmers to improve management practices, provided that a share of the value or benefits feeds back to farmers. Currently, carbon and sustainability reporting is not mandatory under the RTFO, and better practices leading to improved carbon and sustainability profiles are not rewarded. Many farmers in the UK have been attracted by the idea of reducing on-farm diesel costs by producing their own biodiesel from oilseed rape.
However, the market value of vegetable oil and the costs of processing oils into biodiesel will always be weighed against fossil diesel costs for farm use (Lewis 2009). Furthermore, farm vehicles will generally be under warranty from the vehicle manufacturer, and it is unlikely that farmers would risk using out-of-spec fuel, to the detriment of these costly machines. As noted by Monbiot (2009), addressing energy needs using on-site, renewable energy options reduces dependence on diesel for on-farm use by only a quarter. Options for farmers to use renewable energies, such as biomass or biogas for electricity and heat production, are often limited to on-farm use only, as the facilities and incentives to connect to the electrical grid do not exist. Allowing access to the national grid would give farmers an option to trade renewable energy under the RO, whereby the mandatory renewable requirement of 15 per cent of electricity by 2015 could potentially be met in part by surplus on-farm energy generation, traded as renewables obligation certificates (ROCs). The UK Government is also reviewing opportunities for a renewable heat incentive (RHI), under the Energy Act (DECC 2008), which would promote investment in biomass boilers and combined heat and power (CHP) facilities. Land preparation has become increasingly mechanized over the years. However, mechanical tillage systems are energy-intensive and expose SOM to decomposition, leading to enhanced GHG emissions, reduced SOM concentration in soil and, potentially, in the short and longer term, to soil erosion and degradation. The potential for reducing the energy intensity of agricultural production by adopting alternative tillage systems may arise from decreased fuel use in mechanical operations or from better long-term soil productivity. Alternative methods of land preparation and crop establishment have been devised to reduce energy requirements and maintain good soil structure.
These include minimum tillage (min-till), conservation tillage (no tillage or min-till) and direct drilling, resulting in increased surface organic matter from previous crop residues (soil coverage of 30%; Van den Bossche et al. 2009). Robertson et al. (2000) compared management techniques in a three-crop rotation over 8 years in Michigan. The net changes in soil C (g m−2 yr−1) were: conventional (plough-based) tillage, 0; organic with legume cover, 8.0; low input with legume, 11; and no-till, 30. The consequences of reduced tillage for soil carbon are not straightforward. Baker et al. (2007) concluded that the widespread view that reduced tillage favours carbon sequestration may be an artefact of sampling methodology, with reduced tillage resulting in a concentration of SOM in the upper soil layer rather than a net increase throughout the soil. They did, however, highlight that there are several good reasons for implementing reduced tillage practices. In contrast to Baker et al. (2007), Dawson & Smith (2007) reviewed the subject area and suggested sequestration rates of 0.2 (0–0.2) and 0.39 (0–0.4) t C ha−1 yr−1 for reduced tillage and no-till farming, respectively. Energy balance calculations involving fertilizer application are more difficult to assess, as interactions with increased SOM become more complex. Studies that focus on energy inputs attributed to soil preparation tend to be regional and crop-specific. Energy used in tillage will depend on crop requirements, soil type, cultivation/climatic conditions, equipment used and engine efficiency. A study comparing conventional and integrated farming in the UK attributed energy savings in integrated farming almost entirely to the reduction in energy required for mechanical operations (Bailey et al. 2003). The study also considered the effects on energy of multi-functional crop rotation, integrated nutrient and crop protection methods, and ecological infrastructure management (i.e.
field/farm boundary maintenance to promote biodiversity and reduce pollution) in integrated systems. A study of wheat grown in Iran provides a more detailed evaluation of five specific tillage regimes (Tabatabaeefar et al. 2009). The study reports the min-till system (‘T5’ in figure 9) as the most energy-efficient, with energy for tillage accounting for 19 per cent of the total energy versus 32.5 per cent for the least energy-efficient (‘T1’). Yield outcomes are also reported, whereby the min-till system gives the second-highest yield of the five systems, but in overall performance ‘T3’ is reported as the most efficient system when both energy input and yield are taken into account.
Figure 9. Energy consumed for 1 kg of wheat production in the Maragheh region of Iran. Source: Tabatabaeefar et al. (2009). T1, mouldboard plough + roller + drill; T2, chisel + roller + drill; T3, cyclo-tiller + drill; T4, sweep + roller + drill; T5, no-till + drill.
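The Robertson et al. soil C changes above are reported in g m−2 yr−1, while Dawson & Smith quote t C ha−1 yr−1; the two scales are related by a simple factor:

```python
def g_m2_to_t_ha(rate_g_m2_yr: float) -> float:
    """Convert a soil C change from g m-2 yr-1 to t C ha-1 yr-1
    (10,000 m2 per hectare; 1,000,000 g per tonne)."""
    return rate_g_m2_yr * 1e4 / 1e6

robertson_no_till = g_m2_to_t_ha(30.0)  # 0.30 t C/ha/yr: the same order as
                                        # Dawson & Smith's 0.39 for no-till
```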
Soil carbon, as a component of SOM, is important in carbon turnover within the carbon cycle, in maintaining soil fertility, water- and nutrient-holding capacity and ecosystem functions, and in preventing soil degradation. Soil carbon and SOM are important in preserving soil in a productive, quality state for long-term crop production (Dawson & Smith 2007). Understanding the processes of carbon interaction in soils is complex at both local and national levels. Carbon losses from the SOM pool, the effect of carbon loss on nutrient availability and crop productivity, and the subsequent outcomes for agricultural management activities are all important variables in calculating the overall carbon stocks and productivity of soils (Dawson & Smith 2007). Other farming options, such as residue mulching and the use of cover crops, aim to conserve and enhance SOM or soil carbon sequestration (Lal 2007). The subsequent effects of nutrient availability on crop productivity vary between cropping systems (e.g. conventional or organic systems), land types, climatic conditions and time, and require further research before being fully integrated into farming systems (Kong et al. 2009). Studies carried out on sites in Belgium have been used to demonstrate nitrogen interactions under various planting regimes, and the action of tillage on organic matter degradation and the subsequent availability of nitrogen in the nutrient pool over time (Van den Bossche et al. 2009). They report higher SOM, microbial biomass and enzymatic activity for conservation tillage, increasing with time. The anticipated effect is slower mineralization or immobilization of nitrogen, leading to enhanced soil fertility as the result of a long-term build-up of the soil's nutrient reserves. Understanding the interaction between soil carbon and nitrogen adds further complexity to determining the benefits of increasing soil carbon through changes in tillage systems.
While increasing fertilizer inputs may increase the soil carbon pool, the poorer GHG balance from the increased use of nitrogen fertilizers may negate the sequestration benefit. The reasons for changing agricultural activities should be clear from the outset. Is the anticipated benefit to reduce energy inputs, reduce GHG emissions, improve soil carbon sequestration or maintain the long-term productivity of soils? Land management choices may then follow, with trade-offs expected and accepted—for example, planting marginal lands with biomass crops to improve carbon sequestration versus maximizing yields on productive lands by increasing fertilizer use, or adopting min-till systems on land areas where mechanical activities are also degrading soil quality or causing soil erosion, such as on sloping sites. In addition to the direct energy inputs for tillage and harvesting, fertilizers can constitute a significant share of total energy inputs to agriculture (figure 4) and food production, particularly for nitrogen-intensive crops such as cereals. Figure 10 shows the different energy requirements for the main constituents of commercial fertilizers, using European average technologies. The main nitrogen components of fertilizers, ammonia (NH3; 32 GJ t−1), urea (22 GJ t−1) and liquid UAN (urea ammonium nitrate; 22 GJ t−1), are the most energy-intensive to produce, while the P and K components all require less than 5 GJ t−1 to produce.
Figure 10. Energy inputs into the main fertilizer building blocks (European average technology). Source: Jenssen & Kongshaug (2003).
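As a rough illustration of how the figure 10 energy contents translate into per-hectare energy inputs, the sketch below uses the quoted 22 GJ t−1 for urea; the 200 kg ha−1 urea and 50 kg ha−1 potash application rates, and the 5 GJ t−1 taken for K, are illustrative assumptions rather than values from the paper:

```python
# Embodied production energy of a fertilizer dressing on one hectare.
# The urea energy content (GJ per tonne) is the figure 10 value quoted in
# the text; the K figure uses the "< 5 GJ t^-1" ceiling, and the
# application rates are purely illustrative assumptions.

energy_gj_per_t = {"urea": 22.0, "potash (K)": 5.0}
application_t_per_ha = {"urea": 0.200, "potash (K)": 0.050}  # 200 and 50 kg

embodied_gj = sum(energy_gj_per_t[f] * application_t_per_ha[f]
                  for f in application_t_per_ha)
print(f"embodied production energy: {embodied_gj:.2f} GJ ha^-1")
# 0.2 t x 22 GJ/t + 0.05 t x 5 GJ/t = 4.4 + 0.25 = 4.65 GJ ha^-1
```

Even at these modest rates, the nitrogen component dominates the embodied energy, which is why the text singles out nitrogen-intensive crops such as cereals.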
The energy inputs needed to produce and supply fertilizers and pesticides substantially outweigh the energy required to apply the products in the field. GHG emission factors for production, supply and use of N, P and K fertilizers, under average UK conditions, are provided in table 5. However, for nitrogen fertilizers, the GHG emissions arise both as a result of the fossil energy inputs needed to capture and process atmospheric nitrogen, and also from complex soil-based processes that result in the production and release to the atmosphere of nitrous oxide (N2O) in-field.
The energy inputs into nitrogen fertilizer production have decreased significantly since the beginning of the last century as a result of continual technological innovation (figure 11). GHGs emitted during its production include carbon dioxide, methane and nitrous oxide as shown in table 6. Carbon dioxide emissions account for 98 per cent of the GHG emissions on a mass basis, but only 33 per cent on a global warming potential (CO2 equivalent) basis. N2O accounts for 0.6 per cent of the mass of the GHG released but 65 per cent on a CO2 equivalent global warming potential basis.
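The contrast between mass shares and warming shares comes from weighting each gas's emitted mass by its global warming potential. The sketch below uses an illustrative mass split (chosen to echo the 98 per cent CO2-by-mass figure) and IPCC AR4 100-year GWPs, not the actual table 6 inventory:

```python
# Why CO2 can dominate GHG emissions by mass while N2O dominates on a
# CO2-equivalent basis: weight each gas by its 100-year global warming
# potential. Mass figures are illustrative, not the table 6 inventory;
# GWPs are IPCC AR4 100-year values.

gwp_100yr = {"CO2": 1, "CH4": 25, "N2O": 298}
mass_kg = {"CO2": 98.0, "CH4": 1.4, "N2O": 0.6}  # per ~100 kg of GHG emitted

co2_eq = {gas: mass_kg[gas] * gwp_100yr[gas] for gas in mass_kg}
total_mass = sum(mass_kg.values())
total_co2_eq = sum(co2_eq.values())

for gas in mass_kg:
    mass_share = 100 * mass_kg[gas] / total_mass
    gwp_share = 100 * co2_eq[gas] / total_co2_eq
    print(f"{gas}: {mass_share:.1f}% of mass, {gwp_share:.1f}% of CO2-eq")
```

With these assumed masses, CO2 is roughly 98 per cent of the mass yet only around a third of the CO2-equivalent total, while N2O, well under 1 per cent by mass, contributes the majority of the warming potential, mirroring the pattern described in the text.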
Figure 11. Historic development of energy requirements in N-fixation for nitrogen fertilizer production. Source: Kongshaug (1998).
However, while ammonia production is the most energy-intensive part of the production of nitrogen fertilizers, nitric acid production causes the release of N2O. Nitric acid is needed to produce AN through a reaction with ammonia. The N2O escapes to the atmosphere from nitric acid plants, but between 70 and 90 per cent of it can be captured and catalytically destroyed. European plants are now being fitted with this nitrous oxide abatement technology, and as a result overall AN GHG emissions could be reduced by 40 per cent, from 6.93 to 4.16 kg CO2 eq kg N−1. The production of woody biomass on land unsuitable for intensive arable farming or extensive grazing is widely seen as a low-energy-input option for producing biomass for material or energy use. Numerous opportunities exist to integrate the production of woody biomass with agricultural crops or livestock, and such ‘farm-forestry’ or ‘agro-forestry’ systems have been widely discussed in the literature and through the work of the Consultative Group on International Agricultural Research's (CGIAR) World Agroforestry Centre,1 much of which is focused on the developing world. A recent geospatial study by Zomer et al. (2009) has shown agro-forestry to be a significant feature of agriculture in all regions of the world (figure 12).
Figure 12. Percentage of world agricultural land that can be regarded as being under agro-forestry systems at varying intensities. Source: after Zomer et al. (2009). Dark green bars, >10%; green bars, >20%; light green bars, >30%.
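The quoted abatement effect on ammonium nitrate production can be checked directly: a fall from 6.93 to 4.16 kg CO2 eq kg N−1 is indeed a reduction of roughly 40 per cent.

```python
# Arithmetic check of the quoted N2O-abatement benefit for AN production:
# emissions fall from 6.93 to 4.16 kg CO2-eq per kg N.

before, after = 6.93, 4.16
reduction = (before - after) / before
print(f"reduction: {100 * reduction:.1f}%")  # ~40%, as stated in the text
```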
Zomer et al. (2009) cautiously estimate that 17 per cent (approx. 3.8 million km2) of global agricultural land involves agro-forestry at greater than 30 per cent tree cover, and that this rises to as much as 46 per cent (just over 10 million km2) at tree cover of greater than 10 per cent. Agro-forestry systems are found in developed as well as less-developed regions. The widespread and significant proportion of agricultural land under agro-forestry management (e.g. in Central and South America) already points to a successful form of integrated land management for both crop production and woody biomass for energy production. This indicates a capacity for agricultural land management to accommodate integrated energy production; currently, in most cases, the woody biomass is used for immediate local needs such as fuelwood for cooking. However, there is also considerable scope for more widespread introduction of tree or coppice material to agricultural land specifically to meet on-farm energy needs and, subject to transportation constraints, as an economic product for off-farm sale. For example, in the UK, a number of estates are currently using wood produced on the estate for biomass heat schemes, which is encouraged under the UK's Bioenergy Capital Grant Scheme. With combinations of increasing prices for conventional energy inputs to farming and incentives for low-carbon forms of renewable energy, farmers may be incentivized to allocate a proportion of their crop land to meeting on-farm energy use, for example for diesel fuel replacement, or potentially to high-value low-carbon certified electricity, either produced on-farm or from farm-derived woody/residual feedstocks. The ability to co-produce woody biomass for heat and/or power generation at farm scale, alongside commodity crops, provides a potentially attractive route to mitigating increased or volatile external energy costs (e.g.
for drying, livestock management or domestic use) and potentially as a saleable commodity in its own right (biomass fuel product(s)). Future incentivization for farmers to minimize agricultural GHG emissions is also likely to favour greater integration of forestry and/or woody biomass cultivation on-farm (e.g. short-rotation coppice or perennial grasses such as Miscanthus in the UK/EU). At the individual farm level, cultivation of perennial biomass crops on a proportion of the land may provide an attractive route to ‘balancing’ more GHG-intensive cultivation activities with carbon ‘credits’ from enhanced C-storage in soils, via avoided emissions from displaced fossil fuel requirements, or as a direct economic benefit from biomass sales at a premium, owing to renewable heat and power incentive value trickling down the supply chain. Recent studies by Hillier et al. (2009) have illustrated the GHG benefits associated with soil carbon storage effects for certain biomass crops and land-use transition scenarios, modelled in an LCA context for England and Wales. Attention is also being given to the use of biochar2 as a potential energy source (during the charring process) and, significantly, as a soil-based carbon sequestration and storage approach that can also offer soil fertility benefits (Collison et al. 2009; Sohi et al. 2009). Biomass supply for biochar production can be drawn from diverse sources, including woody biomass from agro-forestry systems as well as existing UK farm biomass, such as that arising from hedgerow management (A. Gathorne-Hardy 2009, personal communication). This paper has identified significant risks to future farming and yields owing to increasing and increasingly volatile fossil fuel prices. While it has been difficult to obtain robust projections for oil, natural gas and coal prices, it is clear that:
Footnotes
While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy.
© 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
One of the most striking features of economic development is the relative decline of the agricultural sector in growing economies. Also typical for countries with above-average population density is a decline in their agricultural comparative advantage as capital accumulation and industrialization proceed. An export-led boom in another sector, or large prolonged inflows of foreign aid, also weaken the international competitiveness of a country's farm sector. Changes in consumption patterns (the slow growth in consumption of farm products and, in middle-income countries, the move away from grains and other staples and towards livestock and horticultural products) also alter the net trade situation of countries. However, whether that leads to a decline or a rise in the overall food self-sufficiency in and net exports of total agricultural products depends also on productivity growth in farming relative to non-agricultural production (Anderson 1987), and on trends in government assistance to farmers relative to producers of other tradables. In the past, price-distorting policies have gradually changed from disfavouring to favouring agriculture relative to other tradable sectors as per capita incomes have grown (Anderson 2009); globally, productivity growth has been faster in the farm sector than in other sectors (Martin & Mitra 2001). A further influence on agricultural trade has been the acceleration of globalization over the past quarter-century. That has been characterized by a rapid decline in the costs of cross-border trade in farm and other products, driven by declines in the costs of transporting bulky and perishable products long distances, the information and communication technology (ICT) revolution and major reductions in governmental distortions to agricultural trade.
Together, these developments have boosted economic growth and reduced extreme poverty globally, and in the process altered global agricultural production, consumption and hence trade patterns. This paper first examines the key drivers of the above developments over the past four or five decades and then draws on that analysis and recent events to suggest likely drivers of—and uncertainties associated with—global food and other agricultural trade trends over the next four decades. The first part of this section summarizes the structural changes in global agricultural markets and trade since the 1960s. The second part outlines one set of drivers, namely rapid technological changes including those that have lowered trade costs for farm products over the past quarter-century. The third part summarizes reforms to agricultural and trade policies since the 1980s and economy-wide modelling results that suggest those reforms have more than halved the global trade- and welfare-reducing effects of price-distorting policies. One of the most striking features of economic development is the relative decline of the agricultural sector in growing economies. Also typical for countries with a reasonably high population density is a decline in their agricultural comparative advantage as industrialization proceeds (or when another sector such as mining, manufacturing or services enjoys an export-led boom or there is a sustained inflow of foreign aid). There is a wide dispersion across regions of the world in the importance of agriculture in national GDP and employment, in endowments of arable land and fresh water as well as capital per worker, in the availability of modern farm and non-farm technologies that take account of relative factor prices and hence in agricultural comparative advantage. 
Appropriate indicators of agricultural comparative advantage are difficult to assemble, because government policies that distort food markets are so pervasive and because of the range of technologies made available via adaptive research and development (R&D) investments to suit different relative factor scarcities (Hayami & Ruttan 1985; Alston et al. 2009a,b). Thus, the sector's share of national exports relative to the global average, or even net exports as a ratio of exports plus imports of primary agricultural products (both shown in table 1 for the key regions of the world), are rather poor reflections of comparative advantage, and they also conceal much intra-regional diversity.
A key determinant of agricultural comparative advantage differences across countries is relative factor endowments, which can change substantially as economies grow at varying rates. Differing technologies also can have an influence on the supply side of the market, and those differences can persist for long periods if governments under-invest in agricultural R&D. As for differences in tastes on the demand side, international diffusion tends to ensure they are far less important than factor endowment differences over the very long term. Nonetheless, changes in the preferred mix of foods away from starchy staples and towards livestock and horticultural products as consumers move from low-income to high-income status can influence comparative advantages within the farm sector. The simplest model to capture the influence of changes in relative factor endowments in a growing world economy is perhaps that provided by Leamer (1987). His model has just three productive factors: natural resources, labour time and produced capital (human as well as physical, where the human component is defined here to include not only skills but also technologies available in each country). The higher a country's endowment of natural resources relative to the other two factors, when compared with the global average, the stronger its comparative advantage in primary products. The latter can be interpreted as food and agricultural products if the only natural resources are agricultural land and water; but, if a country also has resources that can be depleted through mining (e.g. minerals, energy raw materials or natural forests), then changes in the profitability of such mining also will affect agricultural comparative advantages. Generally, a mining boom, or a sustained inflow of foreign aid, would diminish a country's agricultural comparative advantage (Corden 1984). 
However, if the boom was driven by a surge in the international price of non-farm tradables (rather than supply driven, as with the discovery of a new reserve of minerals or a new mining technology), and the product whose price rose has an agricultural substitute, then producers of that farm product could also benefit—as discussed in §3a with respect to biofuels. Apart from occasional supply-driven mining booms, sustainable economic growth is generally due to growth in produced capital (including available technologies) per worker. Some of any increment in produced capital may be used to expand primary production, but mostly it is used in other sectors. This tendency begins at an earlier stage of development (and thus at a lower national wage rate) the smaller a country's per-worker endowment of land and other exploitable natural resources, and the smaller its investment in new technologies for agriculture relative to non-farm sectors. Thus, the ranking of countries according to their agricultural comparative advantage is correlated with their farmland/labour endowment ratio, while the capital intensity of their agricultural production is correlated with their produced capital/labour endowment ratio. A crude index of the latter is simply per capita GDP, reported for 2005 in table 1 along with arable land and fresh water per capita. Global agricultural trade has grown much more slowly than trade in other products. Prior to the 1960s, farm products accounted for more than 30 per cent of all merchandise trade globally, but since the beginning of this century their share has averaged less than 9 per cent (Sandri et al. 2007). Since agriculture's share of global GDP has also fallen, a more appropriate indicator of the changing extent to which agriculture is globalized is the share of agricultural and food production or consumption that is traded internationally.
Table 2 provides estimates of that for various regions, based on a sample of 75 countries that account for all but one-tenth of the world's population and agricultural GDP. Those numbers suggest that agriculture's tradability has increased considerably since the 1960s, rising from about one-ninth to about one-sixth of global production or consumption. However, a glance at the regional data reveals that most of that change is due to increased intra-European trade behind the EU's common external trade barrier, apart from some growth (from low bases) since the 1970s in agricultural imports by Asia and Latin America.
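Both summary measures used with table 2 are simple ratios. The sketch below defines them with illustrative numbers (not table 2 data): tradability is the share of production traded internationally, and self-sufficiency is production as a percentage of consumption.

```python
# The two summary ratios behind the table 2 discussion: "tradability" (the
# share of production that crosses borders) and "self-sufficiency"
# (production as a percentage of consumption; >100% means a net exporter).
# The numbers in the usage lines are illustrative, not table values.

def tradability(exports, production):
    """Share of agricultural production traded internationally, in per cent."""
    return 100 * exports / production

def self_sufficiency(production, consumption):
    """Production as a percentage of domestic consumption."""
    return 100 * production / consumption

# A region producing 120 units while consuming 100 is 120% self-sufficient;
# if its exports are 20 units, one-sixth of its production is traded.
print(self_sufficiency(120, 100))  # -> 120.0
print(round(tradability(20, 120), 1))
```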
Particularly striking is the decline in the extent to which African agricultural production is exported, bringing down the region's agricultural self-sufficiency from 120 per cent to 105 per cent over the four decades to 2000–2004 (table 2). It needs to be kept in mind, though, that this could be partly owing to the region's changing comparative advantages rather than to trade taxes. Such a change in comparative advantage could be because of a boom in other sectors of African economies, for example due to the local discovery, exploitation and exportation of mining products, or because of the large sums of foreign aid flowing into the region, either of which would strengthen a country's currency and thus make its farmers less competitive in international markets. Another possible explanation is the faster growth of farm relative to non-farm productivity in the rest of the world, which is consistent with the relatively slow growth in Africa's crop yields. Alston et al. (2009a) found that land productivity between 1961 and 2005 grew at only 2.19 per cent per year in Africa compared with 2.72 per cent in all developing countries, and they note that the lag in farm labour productivity growth was even greater (0.76% per year for Africa versus 1.93% for all developing countries). A third possibility is that other regions have reduced their trade costs, or their anti-agricultural and anti-trade policy biases, more than have the countries of sub-Saharan Africa in recent decades. The latter is supported by recently compiled evidence on policy trends reported in Anderson (2009). In addition to governmental barriers to trade, there are natural trade barriers caused by transport, information and communication costs. Farm products are relatively bulky commodities, making them costly to transport over long distances, especially if they are perishable. Some of them are desired in fresh form, a desire that can be satisfied only in season.
Hence, food prices can vary substantially across time and space for these reasons. If we define globalization as a decline in the costs of doing business across space, there has been, and continues to be, great scope for farmers and food consumers to benefit from its acceleration. When the relevant space includes national borders, a key effect of such cost declines is to enhance the international integration of markets. A standard indicator of such integration is the trade-to-GDP ratio. For centuries, merchandise trade has grown faster than output in all periods other than between the two world wars, and the gap was larger in the 1990s than in any earlier period for which reliable data are available. According to Maddison (2001, p. 363), merchandise exports as a share of global GDP were only 1 per cent in 1820, 5 per cent in 1870 and 8 per cent in 1913 at 1990 prices. Between 1975–1979 and 2000–2004, however, exports of all goods and services as a share of global GDP rose from 19 per cent to 26 per cent (Sandri et al. 2007). The impacts of the drivers of globalization are not uniform across countries, which shows up in trade specialization data: between 1980–1984 and 2000–2004, the share of non-food manufactures in merchandise exports rose from just over one-quarter to almost two-thirds for middle-income countries (and from less than half to 90% for China), and the share of processed food products in the value of food and agricultural exports over that period rose from 54 per cent to 69 per cent for high-income countries (HICs) and from 49 per cent to 67 per cent for Asia (Sandri et al. 2007). The lowered cost of moving products and people was dominated, in the middle half of the twentieth century, by the falling cost of motor vehicle and aeroplane transportation, thanks to mass production of such goods and associated services. Ocean freight rates (helped by containerization) and telephone charges also fell massively over this period.
Transport costs can be crudely captured by the extent to which a product's Cost, Insurance and Freight (c.i.f.) import price at its destination port exceeds its Free On Board (f.o.b.) export price at its port of origin. For US merchandise, that markup fell from 10 per cent in the 1950s to 6 per cent in the 1990s (Frankel 2000). An example for agriculture was the change from handling crop products such as grains in bags to handling them in bulk for storage and for land and water transportation, substantially reducing transport and storage costs, including post-harvest losses. The bag-to-bulk transformation began in industrial countries following World War II, gradually permeated middle-income countries such as Argentina and Brazil, and is now becoming more widespread in low-income countries too. Other improvements, which need not show up as a reduction in the f.o.b./c.i.f. price gap, are improved transport services such as faster and more frequent schedules and controlled-atmosphere containers that allow perishables such as meats, milk products and fresh fruit and vegetables to be transported longer distances by sea or air. A more recent phenomenon, beginning near the end of the twentieth century, is digital: namely, the ICT revolution. Aided by deregulation and privatization of telecom markets in many countries, it has been lowering long-distance communication costs enormously, especially the cost of rapidly accessing and processing knowledge, information and ideas from anywhere in the world. Science has been among the beneficiaries of the digital revolution, spawning yet other revolutions, such as in biotechnology and nanotechnology. Foreign direct investment (FDI) liberalization sometimes has been a complement to trade liberalization. Developing countries so far are only minor players as hosts of FDI in processed food, beverages and tobacco, however: in 2007, their inflow was less than $3 billion, compared with an inflow of $46 billion into HICs.
Flows of FDI into the primary agricultural sector were even less, such that FDI accounted for less than 0.3 per cent of capital formation in developing country agriculture compared with 13 per cent for the overall economy of that country group (UNCTAD 2009, ch. 3). Nonetheless, Reardon & Timmer (2007) argued that FDI has facilitated the transformation of food value chains over the past two decades, in particular via the expansion and merger/takeover activity in supermarket retailing. In most HICs now, no more than five firms account for the majority of sales, and in many of those countries, the four top firms have more than two-thirds of sales. Supermarkets have been spreading even faster in developing countries than they did in HICs. This is having dramatic effects further up the value chain. First-stage processors, food and beverage manufacturers, and distributors are also becoming more concentrated so as to better match the bargaining power of supermarkets, although typically in narrowly focused industries rather than across the board as in supermarket retailing. Their actions are constrained too by the supermarkets' capacity to develop their own brands and even their own processing and distribution. In turn, these developments are altering dramatically the way farmers are expected to supply those markets, with the emphasis on timely delivery of uniformly high-quality products with very specific attributes (Reardon & Timmer 2007; Swinnen 2007; Reardon et al. 2009). According to Swinnen & Vandeplas (2009), though, consumers and possibly even farmers in developing countries are benefitting from the trade and investment liberalization and ICT revolution that have stimulated these changes, because of the fierce competition that ensues among middlemen along the food value chain. In addition to agricultural trade being affected by economic growth and declining trade costs, it has been greatly affected by distortionary government policies. 
Since the 1950s, world agriculture has been characterized by the persistence of high agricultural protection in developed countries, by anti-agricultural and anti-trade policies of developing countries and by the tendency for both sets of countries to use trade measures to stabilize their domestic food markets—thereby exacerbating price fluctuations in the international marketplace. This disarray has not only been highly inefficient but has also contributed to global inequality and poverty (since the vast majority of the world's poorest households depend directly or indirectly on farming for their livelihoods; see Anderson et al. 2010a). The situation worsened up to the mid-1980s, with agricultural protection in Europe, North America and Japan peaking and international food prices plummeting in 1986, thanks in large measure to an agricultural export subsidy war between the US and the European Community. Meanwhile, many developing countries had been reducing farm incomes not only by heavily taxing agricultural exports but also, albeit indirectly, by protecting manufacturers from import competition and overvaluing the national currency. This disarray in world agriculture meant that there was over-production of farm products in HICs and under-production in more-needy developing countries. It also meant there was less international trade in farm products than would be the case under free trade, thereby ‘thinning’ the market for these weather-dependent products and thus making their prices more volatile. The extent of that volatility is evident in figure 1. Using a stochastic model of world food markets, one study estimates that the coefficient of variation of international food prices in the 1980s was three times greater than it would have been under free trade and that the volume of international trade in grains, livestock products and sugar was half what it could have been (Tyers & Anderson 1992, tables 6.9 and 6.14).
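The volatility comparison in the Tyers & Anderson study is expressed as a coefficient of variation (standard deviation divided by the mean). A minimal sketch with two synthetic price series (not data from the study) shows how the measure captures wider swings around a similar mean:

```python
# Coefficient of variation (CV) of a price series: population standard
# deviation divided by the mean. The two series are synthetic examples,
# not actual international food price data.

import statistics

def cv(prices):
    return statistics.pstdev(prices) / statistics.mean(prices)

free_trade_prices = [100, 104, 97, 102, 99, 101]   # hypothetical, mild swings
distorted_prices = [100, 113, 88, 109, 92, 104]    # hypothetical, wider swings

print(f"CV under free trade: {cv(free_trade_prices):.3f}")
print(f"CV with insulating policies: {cv(distorted_prices):.3f}")
```

Because both series share a similar mean, the CV difference isolates the greater dispersion of the second series, which is the sense in which trade-insulating policies made international prices more volatile.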
Figure 1. Real international food price index, 1900–2008 (1977–1979 = 100). The deflator used is the price of manufactured exports to developing countries from the five largest HICs (France, Germany, Japan, the UK and the USA). Author's compilation using data from Pfaffenzeller et al. (2007), updated from 2004 with data from www.worldbank.org/prospects. Solid line, real food price index.
During the past quarter-century, numerous developing countries and HICs have begun to reform their agricultural price and trade policies. This has contributed to the rise in the extent to which farm products are traded internationally, noted above. Much of this reform was undertaken unilaterally or as part of regional trading arrangements, but some was also undertaken in response to international pressures such as Uruguay Round stipulations, commitments required for accession to the World Trade Organization (WTO) and structural adjustment loan conditionality by international financial institutions. Meanwhile, reforms in some middle-income economies (most noticeably Korea) have ‘overshot’, going from discouraging their farmers to protecting them from import competition—which raises concerns that other emerging economies may follow suit and pursue the same agricultural protection growth path of more-advanced economies in earlier stages of their economic development. A recent World Bank research project (see Anderson (2009) and www.worldbank.org/agdistortions) developed a series of indicators to measure the impact of those interventions and subsequent policy developments on farmers' incentives. Its most basic measure, the nominal rate of assistance (NRA) is the percentage by which government policies have raised gross returns to farmers above what they would be without the government's intervention (or lowered them, if the NRA is negative). Farmers are affected not just by prices of their own outputs but also (albeit indirectly through changes to factor market prices and the exchange rate) by the incentives offered to non-agricultural producers. That is, it is relative prices and hence relative rates of government assistance that affect producers' incentives, so a relative rate of assistance (RRA) was also calculated. 
The NRAs from the World Bank study, which involves 75 countries (including 20 HICs) that together account for 92 per cent of global agricultural GDP, are summarized in figure 2. They reveal that assistance to farmers in HICs rose steadily from the mid-1950s until the end of the 1980s, apart from a small dip when international food prices (see figure 1) spiked around 1973–1974. After peaking at more than 50 per cent in the mid-1980s, the average NRA for HICs has fallen a little, depending on the extent to which one believes that some new farm programmes are ‘decoupled’ in the sense of no longer influencing production decisions. For developing countries, the average NRA for agriculture has been rising, from around −25 per cent during the period from the mid-1950s to the early 1980s to nearly 10 per cent in the first half of the present decade.
Figure 2. Nominal rates of assistance to agriculture in HICs and European transition economies and in developing countries, 1955–2004 (per cent, weighted averages). The European transition economies are denoted by the World Bank as ECA, its acronym for (Central and Eastern) Europe and Central Asia. From Anderson (2009, ch. 1), based on estimates in Anderson & Valenzuela (2008). Black line, HIC and ECA; dashed line, HIC and ECA, including decoupled payments; grey line, developing countries.
The average NRA for developing countries conceals the fact that the exporting and import-competing subsectors of agriculture have very different NRAs. Figure 3 reveals that while the average NRA for exporters has been negative throughout (going from −20% to −30% before coming back up to almost zero in 2000–2004), the NRA for import-competing farmers in developing countries has fluctuated between 20 per cent and 30 per cent (and even reached 40% in the years of low prices in the mid-1980s). The anti-trade bias within agriculture (the taxing of both exports and imports) has diminished for developing countries since the mid-1980s, but the NRA gap between the import-competing and export subsectors still averages around 20 percentage points (and it has grown to 40 percentage points for HICs, although there even exporters have enjoyed positive NRAs). Figure 3 also reveals that the NRA for import-competing farmers in developing countries has increased at virtually the same pace as that in HICs, suggesting that growth in agricultural protection from import competition is something that tends to begin at modest levels of per capita income rather than being a phenomenon exclusive to HICs.

Figure 3. Nominal rates of assistance to exportable, import-competing and all covered agricultural products (covered products only; the total also includes non-tradables), HICs and developing countries, 1955–2007. (a) Developing countries. (b) HICs plus Europe's transition economies. From Anderson (2009, ch. 1), based on estimates in Anderson & Valenzuela (2008). Black lines, import-competing; grey lines, exportables; dashed lines, total.
The improvement in farmers' incentives in developing countries is understated by the above NRA estimates, because those countries have also reduced their assistance to producers of non-agricultural tradable goods, most notably via cuts in restrictions on imports of manufactures. The decline in the weighted average NRA for the latter, depicted in figure 4, was clearly much greater than the increase in the average NRA for tradable agricultural sectors for the period to the mid-1980s, consistent with the finding of Krueger et al. (1988, 1991) two decades ago. For the period since the mid-1980s, changes in the NRAs of both sectors have contributed almost equally to the improvement in incentives to farmers. The RRA for developing countries as a group went from −46 per cent in the second half of the 1970s to 1 per cent in the first half of the present decade. This increase (from a coefficient of 0.54 to 1.01) is equivalent to an almost doubling in the relative price of farm products, which is a huge change in the fortunes of developing country farmers in just a generation. This is mostly because of the changes in Asia; for Latin America the relative price hike is one-half, while for Africa the indicator improves by only one-eighth. As for HICs, assistance to manufacturing was on average much less than assistance to farmers, even in the 1950s, and its decline since then has had only a minor impact on that group's average RRA (figure 4). The exceptions are Australia and New Zealand, where manufacturing protection had been very high and its decline occurred several decades later than in other HICs (Anderson et al. 2007).

Figure 4. Nominal rates of assistance to agricultural and non-agricultural sectors and relative rate of assistance, developing countries and HICs, 1955–2004 (per cent, production-weighted averages across countries). (a) Developing countries. Dashed line, RRA; black line, NRA non-agricultural tradables; grey line, NRA agricultural tradables. (b) HICs. 
Black line, NRA agriculture; grey line, NRA non-agriculture; dashed line, RRA. The RRA is defined as 100 * [(100 + NRAagt)/(100 + NRAnonagt) − 1], where NRAagt and NRAnonagt are the percentage NRAs for the tradable parts of the agricultural and non-agricultural sectors, respectively. From Anderson (2009, ch. 1), based on the estimates in Anderson & Valenzuela (2008).
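The RRA formula and the reported developing-country numbers can be checked with a short sketch (illustrative values only; `rra` is a hypothetical helper, not part of the World Bank study's code):

```python
def rra(nra_ag: float, nra_nonag: float) -> float:
    """Relative rate of assistance, in per cent, from the two sectoral
    NRAs (also in per cent): RRA = 100*[(100+NRAagt)/(100+NRAnonagt) - 1]."""
    return 100.0 * ((100.0 + nra_ag) / (100.0 + nra_nonag) - 1.0)

# An RRA of -46 corresponds to a relative-price coefficient of 0.54;
# an RRA of +1 corresponds to 1.01 (as reported in the text).
coef_1970s = (100.0 - 46.0) / 100.0   # 0.54
coef_2000s = (100.0 + 1.0) / 100.0    # 1.01
print(coef_2000s / coef_1970s)        # ~1.87, the "almost doubling"
```

Note that when the two sectoral NRAs are equal, the RRA is zero: only the relative rate of assistance matters for incentives.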
The policy influences discussed above concern long-term trends, but policies also shape year-to-year fluctuations around trend prices and quantities, as governments seek to reduce fluctuations in domestic food markets. One way for a country to achieve that objective is by varying the restrictions on its international trade in food according to seasonal conditions domestically and changes in prices internationally. Anderson et al. (2010b) capture this phenomenon by estimating the elasticities of transmission of international product prices to the domestic market, using a geometric lag formulation for each product for all focus countries for the period since 1985. The unweighted average estimate of the short-term elasticity for 12 key products is 0.54, suggesting that within the first year little more than half the movement in international prices of those farm products is transmitted domestically. To assess how far the world had come, and how far it still has to go, in rectifying the disarray in world agriculture, Valenzuela et al. (2009) use the World Bank's global economy-wide model known as Linkage to provide a combined retrospective and prospective analysis. It quantifies the impacts both of past reforms and of current policies by comparing the effects of the recent World Bank project's distortion estimates for the period 1980–1984 with those for 2004. The findings from that economy-wide modelling study suggest that:
With this as background, we are now able to consider the likely drivers of changes in national agricultural comparative advantages, trade costs and pertinent policies over the next four decades and their associated uncertainties and impacts on global farm trade. The list includes the following, each of which is considered in turn in the rest of this section of the paper:
The economic recession in the USA and Europe since 2007 has slowed global economic growth. How long the recovery will take is uncertain because it depends on how quickly risk perceptions abate, which depends in turn on on-going government macroeconomic and trade policy responses (McKibbin & Stoeckel 2009). In that process of readjustment, while long-term growth rates to 2050 may not be greatly affected, currencies may be realigned in ways that have long-term effects on comparative advantages in farm products. However, there is too much uncertainty surrounding such possibilities at this stage to do more than simply note them. One recent set of population and per capita income growth projections to 2050 is summarized in table 3. Clearly, these projections imply significant changes to the economic centres of gravity of consumption in the global economy, given differing income elasticities of demand for various products. They also affect the supply side of each economy: population growth along with demographic changes and labour–leisure choices influence the growth of the workforce, and per capita income growth suggests an expansion in the endowment of capital, whether it be in the form of physical assets, workforce skills or new technologies.
In economy-wide computable general equilibrium model projections, it is common to represent physical capital assets and human skills explicitly, but to incorporate new technologies simply as shocks to total factor productivity (TFP; output per unit of combined inputs). The latter can be determined endogenously if the modeller accepts projections of growth in per capita income and in the various factors of production, but it is then a challenge to allocate that aggregate TFP shock to different sectors and to different industries within those sectors. Typically, the agricultural sector's TFP growth rate is assumed to exceed that for the rest of the economy, based on historical experience (see Martin & Mitra 2001), so as to ensure the relative price of farm products declines over time as in the second half of the twentieth century (see figure 1). With the growth in international food prices over the 2003–2008 period, however, expectations about their future trend are now less certain. Is that rise just due to a rundown of grain stocks globally, or is it also because of the greater neglect of public investment in agricultural R&D in recent decades (Alston et al. 2009b; Royal Society 2009)? The possibilities of technological catch-up by lagging regions through faster international technology transfer also need to be considered (e.g. via the Green Revolution for Africa initiative of the Gates and Rockefeller Foundations, but also bearing in mind the apparent recent surge in inflow of FDI in farming from countries relatively poorly endowed with farm land and water; see von Braun & Meinzen-Dick 2009). This suggests that more than one set of assumptions about productivity growth is needed in developing a family of baselines for projections of agricultural productivity to 2050. Also of more relevance to projections now than in the past are assumptions about food consumption growth. 
Previously, modellers have relied on past econometric evidence, suggesting that price and income elasticities of demand for food decline with per capita income, and earlier for lower-valued foods such as staple grains and tubers than for livestock and horticultural products. The latter switch will be especially important with the rapid income growth in populous emerging economies such as Brazil, China and India. However, consumer concerns for food quality, food safety and the environment also need to be considered, especially for HICs. Environmental concerns affect things such as the disposal of packaging or the carbon footprint associated with the transport of goods, and hence a desire to ‘buy local’ or at least to know the country of origin. Increasing numbers of consumers wish to know how products are produced on-farm and processed, so as to assess whether they are causing environmental damage or reducing animal welfare. The continuing preference of some consumers to avoid foods containing genetically modified organisms (GMOs) is a clear case in point (Qaim 2009). This consumer concern has already led to significant government barriers to trade based on production processes and to constraints on domestic production. If that behaviour persists, models of international trade need to differentiate between products that may contain GMOs and those that do not. Now that traceability information along with other attributes can be stored on barcodes, these and related biosecurity concerns can be reflected in the demands that the large supermarket chains place on their suppliers for information on myriad attributes of products. This is adding to the need to incorporate greater agricultural product differentiation across suppliers in trade models. 
It could be argued that the above concerns of consumers are confined to HICs, especially Western Europe and Japan, where the quantity of food consumed is unlikely to grow rapidly over the next four decades because of relatively low population and income growth and low income elasticities of demand for farm products there. However, that would be to miss the point that high-income consumers are willing to pay substantial premia for foods that are perceived to be safer, of higher quality and produced with minimal damage to the environment and animal welfare. They are thus potentially highly profitable markets to which all farmers seek access, including those in developing countries—notwithstanding the disadvantage due to their higher carbon footprint insofar as more transportation is probably required to get their produce to those northern markets than is the case for local import-competing farmers.

While the real price of crude oil spiked briefly in mid-2008 at nearly three times its previous record, that spike provides no guidance as to the long-term trend price of petroleum and other energy raw materials. Spikes in the spot price can occur whenever there is a sudden change in expectations (including about OPEC cartel actions), given the low short-term price elasticities of demand and supply for crude oil. Long-term trend prices, on the other hand, are affected by government taxes and by developments in known reserves and in demand, which tend to change relatively slowly as economies grow. Technological innovations in exploration and exploitation have caused reserves to expand faster than demand, so the world is apparently not running out of fossil fuels: according to Smith (2009), the ratio of reserves to annual production of crude oil has grown from a multiple of 29 years in 1980 to 45 years in 2008, and if unconventional petroleum resources (heavy oil, oil sands and oil shale) are included, that adds another 160 years of available supplies at current consumption levels. 
The capacity of petroleum prices to spike occasionally is not unlike that for grains. As Wright (2009) pointed out, wheat, rice and maize are highly substitutable in the global market for calories, and when aggregate stocks decline to minimal feasible levels for trading and processing, prices become highly sensitive to small shocks. By the middle of the past decade, grain stocks-to-use ratios had declined to their lowest levels for 25 years, due to high income growth in emerging economies and de-stocking in China (Wiggins & Keats 2010). When there were then some crop failures plus a surge in demand because of biofuel mandates and subsidies, grain prices started rising. The crude oil price spike in 2008 raised further the demand for biofuels (as well as fuel and fertilizer input costs for farmers), and a sequence of trade restrictions by key grain exporters, beginning in the thin global rice market in the autumn of 2007, led to panic buying. The linkage between crude oil and food prices will remain strong when petroleum prices exceed the threshold that makes biofuel production privately profitable on a significant scale, as in 2005–2008 (FAO 2008; IMF 2008; DEFRA 2010; Pfuderer et al. 2010). A continuation of biofuel subsidies and mandates will make this co-movement in above-trend prices more common, as will the development of new biofuel crop production technologies that effectively lower the threshold oil price above which ethanol or biodiesel production is profitable (Chakravorty et al. 2009; Rajagopal et al. 2009). The latter has considerable potential over the next four decades, especially if private life science companies view investments in biofuel crop R&D as more profitable than R&D in politically sensitive GM food crops. Mandates to include an increasing proportion of biofuels in road transport fuel are now in place in most OECD countries and in Brazil. The current targets in the EU mandate go through to 2020, and those of the US to 2022. 
These policy measures, if they continue and remain inflexible, will add a certain demand for biofuel crops no matter what happens to fossil fuel and food prices. This will not reduce the extent of any downward food price spike, however, because biofuel production will be privately profitable and so the mandates will tend to be redundant when grain and oilseed prices are very low relative to fossil fuel prices. On the other hand, mandates will exacerbate the extent of any upward food price spike, because fuel retailers will be required to include in their road fuel mix at least the mandated quantity of biofuel regardless of its high cost.

The ICT revolution will continue to lower trade costs, including for supermarkets as they search globally for the lowest-cost suppliers of products with the attributes desired by their customers. Such searching by supermarkets will also increase in response to governments lowering the remaining barriers to FDI in retailing and associated logistics services. This will more or less offset the impact of any new carbon taxes or their equivalent on transportation costs. The consequences of a continuing supermarket revolution will spread right along the food value chain. One is that first-stage processors, food and beverage manufacturers, and distributors will become more concentrated so as to better match the bargaining power of supermarkets. Even so, supermarkets will exploit their capacity to develop their own brands and even their own processing and distribution. In turn, these developments will alter dramatically the way farmers supply those markets, with the emphasis on timely delivery of uniform-quality products leading to more-efficient (possibly larger) farmers displacing less-efficient ones and thereby raising agricultural productivity growth. 
Insofar as large supermarkets in HICs also source from farmers in other countries, their private standards will be set with at least some consideration of the costs they impose on foreign suppliers, and so may be less trade-restricting than they would be without that feature of globalization.

The reasons why some countries have reformed their price-distorting agricultural and trade policies more than others in recent decades provide hints as to what to expect in coming decades. The reasons are varied. Some countries reformed unilaterally, apparently having become convinced that it is in their own national interest to do so; China is the most dramatic and significant example of the past three decades among developing countries, and Australia and New Zealand among the HICs (Anderson 2009). Other developing countries may have done so partly to secure bigger and better loans from international financial institutions and then, having taken that first step, have continued the process, even if somewhat intermittently; India is one example, but there are numerous others in Africa and Latin America. And some countries have reduced their agricultural subsidies and import barriers at least partly in response to the General Agreement on Tariffs and Trade's multilateral Uruguay Round Agreement on Agriculture and to opportunities to form or expand regional integration agreements. The EU is the most important example of committing to reductions in farm protection, helped by its desire for otherwise costly preferential trade agreements, including its expansion eastwards. The EU reforms suggest that growth in agricultural protection can be slowed and even reversed if accompanied by re-instrumentation away from price supports to decoupled measures or more direct forms of farm income support—but the wealthiest Western European countries (Norway and Switzerland), like Japan, continue to resist external pressure to undertake major reform. 
The stark example of Australia shows that one-off buyouts can bring faster and even complete reform. In the USA, by contrast, most subsidy cuts in the 1990s proved to be short lived and have since been reversed, with one set of analysts seeing few signs of that changing in the foreseeable future (Orden et al. 2010). In the developing countries, where levels of agricultural protection are generally below those in HICs, there are few signs of a slowdown of the upward trend in protection from agricultural import competition evident over the past half-century. Indeed, there are numerous signs that the governments of developing countries want to keep open their options to raise agricultural NRAs in the future, particularly via import restrictions. One indicator is the high tariff bindings to which developing countries committed themselves following the Uruguay Round (Anderson & Martin 2006, table 1.2). Another is the demand by many developing countries to be allowed to maintain their rates of agricultural protection from import competition for reasons of food security, livelihood security and rural development. This view has succeeded in bringing ‘special products’ and a ‘special safeguard mechanism’ into the multilateral trading system's agricultural negotiations, even though such policies would raise domestic food prices in developing countries and thus may worsen poverty and food security of the urban poor while exacerbating instability in international markets for farm products. If the WTO's Doha Development Agenda collapses, or if Doha leads to only a weak agricultural agreement full of exceptions for politically sensitive products and safeguards, the governments of HICs may find it more difficult to ward off agricultural protection lobbies. This would make it more likely that developing countries choose an agricultural protection path. 
The potential cost of this alternative counterfactual could be several times the estimated benefit of a successful Doha agreement when the counterfactual is assumed to be current policies (Bouët & Laborde 2008). Regional and other preferential trading arrangements may be able to reduce farm protection growth somewhat, but the experience with regional integration arrangements to date is mixed.

The effects of climate change on aggregate global agricultural production and its location across countries and regions, without and with mitigation and adaptation, are great unknowns, not least because there are many possible government policy responses, unilateral and multilateral. Moreover, the uncertainties about what policy instruments will be adopted by whom and when will be spread over decades rather than just the next few years. Land use undoubtedly will be affected non-trivially. Carbon credits and emissions trading will have unknown and possibly major effects, depending among other things on whether/how/when agriculture and forestry are included in the schemes of various countries, as will any border tax adjustments or other sanctions imposed on imports from countries deemed not to be sharing the burden of reducing greenhouse gases. Biofuel mandates and subsidies and emerging biofuel crop technologies are likely to increasingly affect food markets, and even more so if carbon taxes or emission caps raise the user price of fossil fuels. Crop yield fluctuations will be greater because of weather volatility and especially more extreme weather events, leading to further triggers for trade policy interventions aimed at stabilizing domestic food markets, and so on. The literature on these and myriad other ways in which agricultural markets are expected to be affected directly and indirectly by climate change and associated policy and technological responses is growing exponentially. 
Numerous global economic modellers have begun analysing the possible effects of some of the above influences on the international location of agricultural production and trade in particular. One of the more widely cited is Cline (2007), who predicted that by the 2080s, even with carbon fertilization, agricultural output would be 8 per cent lower in developing countries, 8 per cent higher in HICs and 3 per cent lower globally. However, mitigation policies could have an adverse effect on industrialization in developing countries and lead to their agricultural sector in aggregate benefitting indirectly, although different types of border tax adjustments by HICs would affect the outcome non-trivially (Mattoo et al. 2009a,b). It is clearly very difficult to discern what the main influences are likely to be over the next four decades, let alone to quantify the effects of even the most likely of them. This underscores the need for sensitivity analysis around any baseline scenario to 2050 that does not include any of the influences listed in the previous paragraph.

Water is essential for growing food and critical for food security, but in many parts of the world it has been one of the most abundant factors of production used in agriculture. Certainly, it is not evenly spread across the world (see column 2 of table 1), and irrigation water property rights and water markets are poorly developed in most countries. With population growth and the increasing need for non-farm uses of water, the urgency for policy reform in this area is growing, especially outside temperate, well-watered areas such as Europe (Rosegrant et al. 2002). The experiences with reforms to date, such as in the USA and Australia, indicate there will be much trial and error in policy design and implementation, and it will take many decades before water markets are as efficient as farm land markets. 
This suggests that irrigation water costs could well rise in coming decades, but to varying extents across the globe and in ways that could have non-trivial impacts on the optimal location of certain water-intensive crops.

Agricultural R&D investments have had a huge payoff (Alston et al. 2000). Yet there has been a considerable slowdown in such investments over the past two decades, and this may already be contributing to a slowing of agricultural productivity growth (Alston et al. 2009a,b). If that slowdown in investment was in response to the low prices of food in international markets in the mid-1980s, then the rise in those prices in recent years, together with the newly perceived need for adaptive research in response to climate change and increased water scarcity, may boost farm productivity growth over the next four decades. Advances in biotechnology will help raise potential yields in field trials and thus attainable yields on the best farms, but much can also be gained by reducing the gap between those attainable yields and average on-farm yields, particularly in developing countries. Part of the slowdown in traditionally measured gains from agricultural research in recent decades may be due to research being directed away from things such as maintaining and improving yields and towards conservation of natural resources and the environment. It is likely that climate change concerns will also lead to some re-direction of R&D investment, towards goals such as crop tolerance to drought and other extreme weather events. Another large dilemma for research administrators, both public and private, is how much effort to direct to transgenic foods. As long as strong opposition to GM food production and imports by some consumers and governments of large countries remains, the returns from such research will be dampened, both absolutely and relative to efforts to produce non-food GM crops (cotton, biofuels and other industrial crops). 
R&D on the latter will reduce the upward pressure that demands for those non-food crops would otherwise put on food prices, but the anti-GM food stance will continue to reduce the potential for biotechnology to lower food prices in countries where GM food is discouraged or banned—with major implications for bilateral trade flows, since it effectively divides world food supplies into two separate markets (Anderson & Jackson 2006).

Recent globalization has been characterized by a decline in the costs of cross-border trade in farm and other products. It has been driven by the ICT revolution, declines in real transport costs and—in the case of farm products—by reductions in governmental distortions to agricultural incentives and trade. The first but maybe not the second of these drivers will continue in coming decades. World food prices will depend also on whether/by how much farm productivity growth continues to outpace demand growth. Demand in turn will be driven not only by population and income growth, but also by crude oil prices if they remain at current historically high levels, since that will affect biofuel demand. Climate change mitigation policies and adaptation, water market developments and market access standards, including for transgenic foods, add to future agricultural production, price and trade uncertainties. The key issues that modellers need to grapple with in projecting world agricultural markets to 2050—assuming they have already dealt with simulating the macro-policy settings and the evolving pattern of international capital flows and their effects on currency exchange rates and broad comparative advantages—are what to assume about trends and fluctuations for each country and hence globally in:
Second, governments could commit to a more ambitious programme of support for agricultural R&D investment, so as to slow or reverse the decline since the 1990s in such investments. Lags between R&D investments and farm productivity growth are very long, but results would certainly show within the next four decades. Governments yet to embrace the relatively new agricultural biotechnologies could reassess their stance in the light of (i) the experiences of countries that have accepted this technology (environmental effects have been mostly benign or positive, and no food safety issues are evident) and (ii) the higher benefits from expanding such investments now that food price levels are higher and climate changes are requiring farmer adaptation. Finally, governments could make clear what their policy responses to climate change will be. The difficulties associated with this global issue make multilateral trade negotiations look easy, as was clearly demonstrated by the difficulty of drafting a communiqué at the end of the Copenhagen global conference on the issue in December 2009.

The author is grateful for helpful comments from the editor and referees, as well as for financial assistance for some of the underlying research from the World Bank and the Australian Research Council. Opinions and any errors are the responsibility of the author alone.

Footnotes

One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050’. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy.

© 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
World dollar prices of major agricultural food commodities (‘food prices’ in what follows) rose dramatically from late 2006 through to mid-2008, then collapsed in the second half of 2008 with the onset of the financial crisis. This episode is often referred to as the ‘2008 price spike’. Prices partially recovered in the second half of 2009 to levels that generally exceed pre-spike values. Figure 1 shows (nominal) monthly prices for major grains and oilseeds over the period 1990–2009.

Figure 1. Grains price index numbers (2005 = 100), 1990–2009. Dark grey line, wheat; brown line, maize; green line, rice; red line, soya beans.
A number of authors have discussed the factors that lie behind the 2008 price spike (Abbott et al. 2008; Mitchell 2008; Cooke & Robles 2009; Gilbert 2010a). A large number of potential explanations are available. Those given greatest prominence are:

We do not join this debate. Instead, we ask whether food prices have become more variable. Was the 2008 price spike a ‘one-off’ event without implications for the longer term, or does it signal the start of a more volatile period in which price spikes of this sort will become more frequent? Previous periods of high volatility have prompted the same questions, but the historical experience has generally been that periods of high volatility are relatively short and interspersed with longer periods of market tranquillity. It would therefore be wrong simply to extrapolate recent and current high volatility levels into the future. However, it remains valid to ask whether part of the volatility rise may be permanent.

The structure of the paper is as follows. Section 2 contains a historical review of food price volatility. Section 3 looks at volatility determinants. Section 4 then considers the likely future evolution of food price volatility, while §5 considers the effects of heightened volatility. Rice is discussed in §6 on the basis that it differs in significant ways from other food commodities. Section 7 considers public policy aimed at moderating volatility or offsetting its effects, and §8 concludes.

Volatility is a directionless measure of the extent of the variability of a price or quantity. It follows that volatility measures derive from the second moment of the distribution of the price or quantity in question, or transformations thereof. Economists generally focus on the standard deviation of logarithmic prices, since this is a unit-free measure. For low levels of volatility, the log standard deviation is approximately equal to the coefficient of variation. 
Economic series typically exhibit trends. Measurement of volatility therefore requires the series to be detrended since otherwise trend movements will be included in the volatility measures. Because trends are rarely linear and deterministic (Kim et al. 2003; Kellard & Wohar 2006), detrending requires a trend model, and this implies a judgemental trade-off between attributing variability to the trend itself and to variation about the trend. The volatility measure can therefore depend on the choice of the trend model in an undesirable manner. In looking at price volatility, economists often circumvent these issues by measuring volatility as the standard deviation of price returns, i.e. the standard deviation of changes in logarithmic prices. We adopt this standard measurement convention. Academic and policy analyses have tended to focus on price levels rather than volatilities. An exception is Gilbert (2006), who showed that agricultural price volatility was low in the 1960s but was higher in the 1970s and the first half of the 1980s. Volatility fell back in the second half of the 1980s and the 1990s but remained well above its 1960s level. Table 1 updates table 4 of Gilbert (2006), covering 1970 to 2009. The sample is divided at the end of 1989, which is the half-way point in the sample. The first column of the table reports the volatility estimate for the commodity over the entire 40 year period. The second column gives the estimates for 1970–1989 and 1990–2009. The third column reports the standard F-test for variance equality. The test outcome is summarized in the final column. Figure 2 shows the same figures graphically, with the commodities ordered by the extent to which volatility increased between the two periods.
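The measurement convention and the two-period variance comparison can be sketched as follows, using synthetic returns rather than the commodity data behind table 1 (the helper names are ours):

```python
import numpy as np

def volatility(prices):
    """Volatility as the std of returns, i.e. of changes in log prices."""
    returns = np.diff(np.log(prices))
    return returns.std(ddof=1)

# Synthetic series: +/-1% moves in the first half, +/-3% in the second
# (illustrative only, chosen so the second period is clearly more volatile).
r = np.array([0.01, -0.01] * 10 + [0.03, -0.03] * 10)
prices = 100 * np.exp(np.concatenate([[0.0], np.cumsum(r)]))

half = len(prices) // 2
r1 = np.diff(np.log(prices[:half + 1]))   # first-period returns
r2 = np.diff(np.log(prices[half:]))       # second-period returns

# F statistic for variance equality: ratio of the two sample variances.
F = r2.var(ddof=1) / r1.var(ddof=1)
print(volatility(prices), F)
```

With tripled return magnitudes in the second period, the variance ratio is 9; in practice the statistic would be compared with an F critical value for the appropriate degrees of freedom, as in the third column of table 1.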
Figure 2. Changes in volatility over time. Lavender coloured bars, 1970–1989; magenta bars, 1990–2009.
From the first column of table 1, we see that agricultural volatilities have been lowest for grains and meats and highest for fresh fruit. Fruit is perishable and storage, which can limit volatility, plays a more limited role for fruits than for the other commodities considered in the table—see the discussion in §3. Columns 2–4 of table 1 show that there was a statistically significant rise in volatility for only two commodities—bananas and rice. By contrast, nine commodities saw statistically significant falls in volatility—cocoa, soya beans, sugar, three vegetable oils (soya bean, groundnut and palm) and the three meat and fish products (beef, lamb and fishmeal). Overall, therefore, the most recent two decades have seen lower levels of agricultural volatility than in those of the 1970s and 1980s, with rice constituting the main exception to this tendency. These findings are in line with those of Balcombe (2009) who also failed to find evidence of any general increase in volatilities. In splitting the sample at the end of the 1980s, the tests reported in table 1 provide a relatively crude indication of whether volatilities have been changing. It is arguable that it is the high volatility levels of the most recent years that are out of line with past experience. This is difficult to judge because volatility itself is highly variable over time. Furthermore, periods of high volatility tend to bunch. One way of posing the question in relation to recent levels of volatility is to estimate a volatility model. The generalized autoregressive conditional heteroscedasticity (GARCH) model is now the standard procedure for modelling volatility in financial markets (Engle 1982; Bollerslev 1986). GARCH specifies an autoregressive moving average process for the variance (scedastic) process followed by a time series to yield an estimate of the conditional variance of the process at each date in the sample. 
In Gilbert & Morgan (2010), we use the first-order GARCH(1,1) framework to ask whether there was an upward shift in the mean of the scedastic process over the period 2007–2009. The question may be paraphrased as asking whether the conditional volatility of food prices was higher from 2006 or whether we simply observed a number of high prices, leaving expected volatility unchanged. Results are summarized in table 2. They vary across commodity groups. At the 5 per cent level, the mean of the conditional volatility process only increased significantly for soya bean oil. (Bananas show a significant volatility decrease.) However, the estimated increase in conditional volatility is positive for all five grains and all five vegetable oils, and the increases are significant at the 10 per cent level for soya beans and groundnut oil as well as for soya bean oil. There is no systematic pattern in the change in conditional volatility for the remaining nine food commodities.
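The scedastic recursion at the heart of this approach can be sketched as follows. This is a generic GARCH(1,1) variance filter with assumed parameter values, not the fitted estimates behind table 2:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta, h0):
    """Conditional variance recursion h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty(len(returns) + 1)
    h[0] = h0
    for t in range(len(returns)):
        h[t + 1] = omega + alpha * returns[t] ** 2 + beta * h[t]
    return h

omega, alpha, beta = 0.05, 0.1, 0.85   # assumed values; alpha + beta < 1
h_bar = omega / (1 - alpha - beta)     # long-run (unconditional) variance

# If squared returns sit exactly at the long-run variance, the conditional
# variance stays at its fixed point: the process has a well-defined mean
# level, and the question posed in the text is whether that mean shifted up.
r = np.full(100, np.sqrt(h_bar))
h = garch11_variance(r, omega, alpha, beta, h0=h_bar)
print(h_bar, h[-1])
```

A shift-in-mean test of the kind described amounts to allowing omega to take a higher value from 2007 onwards and testing whether the increase is significant, as opposed to attributing the observed large returns to chance draws from an unchanged variance process.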
To summarize, this analysis has generated two conclusions:
Agricultural prices vary because production and consumption are variable. Economists distinguish between predictable and unpredictable variability, the latter being characterized in terms of shocks. Shocks to production and consumption transmit into price variability. Production can vary either because of variations in area planted or because of yield variations, typically owing to weather. Consumption varies because of changes in incomes, changes in prices of substitutes and shifts in tastes. It is generally supposed that the most important source of price variability in agriculture is weather shocks to agricultural yields. Nevertheless, demand shocks, in particular income shocks (Gilbert 2010a) and policy shocks (Christiaensen 2009), may also play an important role. The extent to which given production and consumption shocks translate into price volatility depends on supply and demand elasticities, which, in turn, reflect the responsiveness of producers and consumers to changes in prices. It is generally agreed that these elasticities are low over the short term, in particular within the crop year. Farmers cannot harvest what they have not planted and will almost invariably harvest everything that they have planted. Consumers are reluctant to revise habitual dietary patterns and, in poor countries, they may have few alternatives. Furthermore, the commodity raw material may comprise only a small component of many processed foods, with the consequence that even large commodity price rises have a small impact on final product prices. Stockholding causes volatility to bunch. When stocks are low, relatively small production or consumption shocks can have large price impacts but when they are high, the reverse is the case. Moreover, once stock levels become high, they will remain high until consumption has exceeded production for sufficient time to absorb past surpluses. 
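The role of elasticities can be made concrete with a stylized log-linear market model (a textbook approximation, not a calibration from this paper): the proportional price change needed to clear a proportional supply shock is the shock divided by the sum of the absolute demand and supply elasticities, so low short-run elasticities magnify shocks into large price movements.

```python
def price_impact(supply_shock, elasticity_demand, elasticity_supply):
    """Proportional price change clearing a proportional supply shock
    in a stylized log-linear model: dp = -shock / (e_d + e_s)."""
    return -supply_shock / (elasticity_demand + elasticity_supply)

shortfall = -0.03  # a 3% harvest shortfall (illustrative)

# Short run (within the crop year): both elasticities low, large spike.
spike_short_run = price_impact(shortfall, 0.2, 0.1)   # +10% price rise
# Longer run: more responsive supply and demand, modest rise.
spike_long_run = price_impact(shortfall, 0.6, 0.4)    # +3% price rise
print(spike_short_run, spike_long_run)
```

The elasticity values are assumptions chosen for illustration; the point is the mechanism, with the same 3 per cent shortfall producing a price spike more than three times larger when elasticities are at plausibly low within-crop-year levels.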
Stockholding therefore results in a cyclical pattern in prices and volatilities even if supply and demand shocks are independent over time. World grain stocks fell to low levels by 2006, and this is seen as one cause of recent high grains price volatility. Since it takes time to rebuild stocks, it is possible that volatility levels will remain high over the next few years. But this does not imply that volatilities will be permanently higher. Other factors may also be important in either amplifying or attenuating volatility. Stockholding will reduce volatility so long as stocks are accumulated in periods of excess supply and released in times of excess demand. However, stockholding is more effective in reducing the extent of price falls in the event of positive supply shocks (abundant harvests) than in reducing the extent of price rises in the event of shortfalls since destocking depends on the existence of a carryover from previous years. Stockholding therefore reduces volatility but also gives a positive skew to the price distribution (Wright & Williams 1991; Deaton & Laroque 1992). Speculation is a second factor that may have either a positive or a negative impact on volatility. Speculation may be either through stockholding or through purchase and sale of commodity futures or other derivative contracts. However, not all futures markets transactions are speculative—the standard regulatory distinction between hedging, in which supply chain agents attempt to offset risk exposure through futures transactions, and speculation is that speculators are ‘non-commercials’, i.e. they do not have any involvement in the physical commodity trade. Commodity futures markets are seen as providing a structure in which risk is transferred from commercial to non-commercial traders, i.e. from hedgers to speculators. In assuming this price risk, speculators provide the market liquidity that enables hedgers to find counterparties in a relatively costless manner. 
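The smoothing and skewing effects of storage can be illustrated with a crude simulation. The demand curve and the storage rule below are assumptions made for the sketch, not a competitive-storage model of the Wright–Williams kind, but they capture the asymmetry: stocks can absorb surpluses, while destocking is limited by the available carryover.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
production = 100.0 + rng.normal(0.0, 5.0, T)   # iid harvest shocks (assumed)

def price(consumption):
    return 200.0 - consumption                 # stylized linear inverse demand

p_nostore = price(production)                  # no storage: consume the harvest

# Crude storage rule: carry half of any availability above trend forward.
inventory, cons = 0.0, np.empty(T)
for t in range(T):
    available = production[t] + inventory
    inventory = 0.5 * max(available - 100.0, 0.0)
    cons[t] = available - inventory
p_store = price(cons)

def skew(x):
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

print(p_nostore.std(), p_store.std())   # storage damps price variability...
print(skew(p_nostore), skew(p_store))   # ...but skews prices to the upside
```

Because the rule stores only out of surpluses, low-price episodes are compressed more than high-price episodes, reproducing the positive skew in the price distribution noted by Wright & Williams (1991) and Deaton & Laroque (1992).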
By analogy with insurance markets, in aggregate, speculators will expect to profit and hedgers to pay for this risk transfer. The traditional view among economists is that speculation will tend to be stabilizing (i.e. volatility reducing) because destabilizing speculation will be unprofitable and will therefore not persist (Friedman 1953). However, much speculation is undertaken by trend-following commodity trade advisors or amateur traders, and there is a worry that their extrapolatively based actions may result in self-fulfilling beliefs—if identified as a nascent trend, a randomly induced price rise will generate further buying, thereby reinforcing the initial movement (De Long et al. 1990; Irwin & Yoshimaru 1999; Irwin & Holt 2004; Gilbert 2010b). More recently, a significant group of institutional investors have started to invest in commodity futures through index-based swap transactions as a portfolio diversification strategy and to assume exposure to the commodity ‘asset class’. In agricultural futures markets, these positions are often large in relation to total activity—up to 40 per cent of market open interest (Gilbert 2010b). Differently from traditional speculation, these positions are relatively long term and are predominantly long, i.e. they involve purchase of futures contracts, which are then held as long-term investments. The sharp rise in index-based investment in commodity futures over the past five years may therefore be seen as a positive shock to inventory demand. Gilbert (2010a) argues that this shock was a significant contributory factor to the 2007–2008 food price spike; see also US Senate Permanent Subcommittee on Investigations (2009). Food price volatility arises from shocks that can come from a number of sources, with the impact being felt differently in each separate commodity market. On some occasions, these shocks will be correlated. 
Often, this will be the case if common factors simultaneously affect a range of different markets, perhaps including non-agricultural markets. This appears to have been the case in 2007–2008 when most agricultural prices and many non-agricultural prices (energy, metals and freight rates) rose simultaneously. It was also the case in the 1973–1974 food price spike. In such cases, it appears likely that there are common causal factors. There is less agreement on the identity of these causal factors, but demand growth, high oil prices perhaps generating demand for grains as biofuel feedstocks, dollar depreciation and futures market speculation are all candidates in this regard (Cooper & Lawrence 1975; Baffes 2007; Abbot et al. 2008; Mitchell 2008; Gilbert 2010b). Our focus in this discussion is grains and, to a lesser extent, vegetable oils since these are overall the most important food crops. Grains are the major staple food across the globe and also are an input into the production of meat products. As such, they are key within the food price volatility question. Within the grains group, we can distinguish between:
The impact of food price volatility can be viewed at both the economy level and at the individual (producer and consumer) level, although the impact will depend on which economy and which individuals are being examined. Focusing on the economy level first, there are a number of key factors that will affect the way food price volatility creates an impact. Virtually all economies trade in food—as importers and/or exporters—and thus volatility in world food prices will potentially have trade bill effects, the net outcome of which will depend on the country's net food export position and the extent to which it is integrated in world markets. As such, precise impacts could only be measured through a country-by-country evaluation, and even then only over specific, identified periods of time. However, it is possible to review some of the generic outcomes alongside case studies of particular countries. Importing, richer nations are concerned about food price volatility in terms of the impact it might have on consumer price inflation and, to a lesser extent, about balance of trade effects. As world commodity prices generally rise, food prices included, domestic price levels could rise with fears of price–wage spirals being set off (Bloch et al. 2007). Mundlak & Larsen (1992) explore the transmission of world prices to domestic levels and find that the null hypothesis of the law of one price rarely holds, owing to many factors, not least the impact of exchange rates and degrees of imperfect competition within domestic supply chains. It is possible to characterize richer nations as being more open to world price effects given established trading policies, which could suggest a greater concern over volatility, but this is dampened by the relatively low expenditure on food as a proportion of national income.
The same concerns arise with respect to oil price volatility but pass-through has been low over the most recent decade. Looking at individuals in richer nations, consumers of food, now largely in the form of processed food products, are affected to the extent that world agricultural prices are transmitted into the prices paid for products in retail outlets. Retail sectors are often imperfectly competitive (Clarke et al. 2002) and thus pass-through is often incomplete, dampening volatility effects. More pertinent is the possible link to rising wage demands to compensate for higher food prices, but this is now a relatively weak link given the relatively low proportion of household income spent on food (10–15% in many countries is typical). Perhaps of some interest is the relative impact on poorer consumers in rich countries who do spend a higher proportion of their income on food and thus who could potentially suffer greater welfare loss from more volatile (higher) prices. It is notable, however, that the high food prices in 2007–2008 were much lower on the political agenda in the rich countries, including Britain, than the high energy and fuel prices. Despite the inherent risks in agricultural production (Moschini & Hennessy 2001), producers in many richer nations may in principle cope with these risks and the resulting food price volatility through a range of different mechanisms such as forward and futures markets and crop insurance. While these arrangements do little to reduce price volatility, they do allow producers to cope more effectively with this volatility. As such, food price volatility can bring some short-term uncertainty, but in aggregate terms, the welfare impact for producers in richer nations is relatively minor. Many poorer nations are net importers of food products, either in raw or processed form. For these countries, the proportion of the import bill that goes on food is generally much higher than in richer nations. 
Grains are the principal commodities of concern, followed by vegetable oils. In Asia, food security concerns relate primarily to the adequacy of rice supplies. In southern and eastern Africa, white maize plays this role. Because many food-importing countries are landlocked, price volatility can be very high—see Dana et al. (2006) in relation to maize in Malawi and Zambia. The major use for soya beans is in meat production, so volatility in soya bean prices feeds through into meat prices. This factor is particularly important in China, which is the major world importer of soya beans. Volatile world food prices can create major import bill uncertainty with concomitant exchange rate uncertainty. Scarce foreign exchange reserves can be exhausted relatively quickly with a sudden spike in food prices as the elasticity of demand for food imports is relatively low. The Food and Agriculture Organization of the United Nations (FAO 2008) shows how increasing cereal import costs as a percentage of GDP can widen the current account deficit by more than 3 per cent in seven economies, while for another seven countries the anticipated increase is between 2 and 3 per cent (2006/2007–2007/2008). Many developing country governments act to stabilize the domestic prices of food staples in order to avoid importing volatility from the world market. In most cases, the countries will also be significant producers of the staple. Stabilization will then limit the incentive for domestic farmers to respond to signals from the world market. If a sufficient number of countries act in this way, the resulting reduction in the world supply elasticity will exacerbate volatility. Where countries are net importers, stabilization will require fiscal resources. Food price volatility therefore introduces volatility into government expenditure.
In the poorest nations, where poverty levels are high and where food security becomes a pressing concern, food price volatility can in extremis lead to great hardship for consumers and even revolt (the 2008 riots in Indonesia and Haiti, for example), reflecting the fact that food expenditure constitutes a significant proportion (70–80%) of total income. Large and sudden price increases, or indeed large increases alone, can ultimately cause hunger, poor nutrition and illness if consumers are unable to buy their staple needs. Equally, as with richer nations, there are potentially inflationary effects in poorer nations too. FAO (2008) shows the relationship between consumer price index (CPI) increases and food price increases for a number of countries, for example, Egypt seeing CPI rise by 15.4 per cent while food prices rose 24.6 per cent (January 2007–January 2008) and Haiti 10.3 and 14.2 per cent, respectively, for the same period. Clearly, such dramatic impacts on the population are unpalatable for governments who often employ controls on markets or subsidization of prices to mitigate the effects. Controls can take a number of forms, but in periods of very steeply rising prices, some governments have sought to limit food shortages by banning exports of staple products grown in their own country (e.g. rice markets in Vietnam, Cambodia and Egypt). Others try to stem the impact of higher prices by buying at the world market and then selling on to the domestic market at lower (subsidized) prices. The difficulty with this policy is that the expense can cause great stress on government finance as the difference between world and domestic prices gets larger. The current concern is that food price volatility may have increased over recent years and may increase further in the future. It follows from the discussion in §2 that an increase in price volatility must arise from one or more of the following four factors:
Gilbert (2010b) emphasizes the role of demand factors in the determination of food prices, and a number of commentators have pointed to rapid economic growth in China and elsewhere in Asia as the common driver of commodity price changes in energy and metals as well as for foods. If demand growth is becoming more variable as it becomes faster, this will also generate increased food price volatility. At the time of writing, the global macroeconomic outlook is highly uncertain and combines continuing fast growth in the emerging economies with a stagnant prospect in the developed economies. If the eventual resolution of current global imbalances involves further crises, these are likely to be reflected in greater food price volatility. The use of food crops as biofuel feedstocks also fits under the demand variability heading. Many commentators have claimed that the demand for food commodities, in particular corn, sugar and vegetable oils, as biofuel feedstocks has increased the correlation between agricultural prices and the oil price—see, in particular, Mitchell (2008). This allows transmission of oil price volatility to agricultural prices, in effect increasing the variance of demand shocks. If one concedes that oil price volatility has increased over time, this could lead to increased food price volatility. There has been no systematic study of the effect of biofuel demand on food price volatility, as distinct from the level of food prices. Scientific studies of the effects of biofuel demand on food price levels fail to find clear evidence of an increased linkage between the oil price and agricultural prices over recent years (Gilbert 2010a). This may be because biofuel production in Europe and the United States has to date been driven more by government mandate requirements than by direct profit considerations and has therefore not been sensitive to changes in the oil price. This may change as China becomes a major producer of biofuels. 
Index-based investment in commodity futures, discussed in §3 in relation to speculation, also relates to the demand variability heading. Index investors purchase long positions in commodity futures, generally via swap transactions, and hold these for extended periods of time. This may be regarded as a form of ‘virtual storage’ in which the investors pay the market to carry inventory on their behalf. The result is to add an additional component to the demand equation and hence also an additional source of demand variability with the implication that financial market shocks can be imported into food markets. Many commercial traders argue that this is precisely what has happened over recent years, with the consequence that price movements have sometimes been divorced from underlying developments in physical supply and demand. Gilbert (2010a,b) confirms the importance of index-based futures investment in amplifying price movements in 2008 but notes that these effects were smaller in food markets than in energy and metals markets, reflecting the lower involvement of index-based investors in agricultural futures. Poor Australian wheat harvests in 2006 and 2007 and a poor European 2007 harvest have been mentioned as possible causes of the 2006–2008 food price spike. However, these poor harvests were offset by good harvests elsewhere in the world, notably Argentina, Kazakhstan and Russia, and 2008 harvests were good. Mitchell (2008) discounts poor harvests as a major cause of the spike. Looking to the future, there must be a concern that global warming will increase the variance of agricultural production. Theoretical models, e.g. Schlenker et al. (2005) and FAO (2008), suggest damage to existing cropping areas if temperatures rise. It is certainly possible to find clear examples of specific crop–country combinations where this is the case. 
These mainly relate to production in relatively arid areas—grain production in much of Australia, cattle in areas of Africa bordering the Sahara and food production in South Asia and southern Africa (World Bank 2009). It is widely believed that global warming may result in more extreme weather conditions, and this may result in greater yield variability. We are not aware of scientific discussion of this possibility. In any case, there remains the question of the extent to which increased yield variability in specific crops and countries will generalize to the entire spectrum of food prices. Demand can only respond to price developments if food consumers face prices that are related to world markets. This forces attention on the issue of price transmission, i.e. the extent to which prices on world markets are passed through to local prices. Price transmission is generally high in developed countries but, because the food commodity itself often only accounts for a small share of the total value of the product—transportation and marketing dominate—even quite large changes in world prices only have small effects on retail prices. Transmission is more variable in developing countries and is often hindered by high transportation costs that can divorce local prices from those on world markets (Conforti 2004). Over time, greater market integration (‘globalization’) is tending to diminish these barriers. On the other hand, governments often respond to higher food prices by raising subsidies. Irrespective of the wisdom of such policies, they will diminish price responsiveness on the part of consumers. This has been cited as a contributory factor for oil price volatility but has not generally been regarded as important for food crops. The traditional view of speculation as price stabilizing, discussed in §3, may also be seen as affecting demand elasticities. By buying low and selling high, profitable speculation should reduce price variability.
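The arithmetic of incomplete pass-through is straightforward. The cost shares below are assumptions chosen for illustration, not estimates for any particular product:

```python
# Assumed cost structure of a processed food product (illustrative shares):
raw_commodity_cost = 20.0   # world-traded commodity content of the retail price
other_costs = 80.0          # processing, transportation, marketing
retail_price = raw_commodity_cost + other_costs

# A 50% rise in the world commodity price, holding all other cost
# components unchanged:
new_retail = raw_commodity_cost * 1.5 + other_costs
retail_rise = new_retail / retail_price - 1.0
print(retail_rise)   # 0.10: a 50% commodity spike moves retail prices only 10%
```

With a commodity share of one-fifth, retail price volatility is roughly one-fifth of world price volatility even under full transmission of the commodity component; imperfect competition in the retail chain dampens it further.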
It will do this more effectively as markets become more liquid. There are three qualifications to these arguments. First, the evidence is mixed that speculation is generally profitable (Edwards & Ma 1992, pp. 472–476). Second, not all speculation corresponds to this traditional view—see the discussion of index-based investment in §5a. Third, even if speculation does reduce variances at lower frequencies (e.g. month-to-month variability), it also appears to increase higher frequency variances (day-to-day and intraday variability). The overall effects of futures speculation are therefore more mixed than those predicted by the simple traditional account. Grain inventories have fallen over the period since the millennium, and this has been cited as a contributory factor in the 2006–2008 price spike. That argument is difficult to sustain in a simple form since the decline in inventory levels was slow and steady while the price rise, in 2007 and the first half of 2008, was sharp and sudden. What is clearer is that low inventory levels will have reduced the responsiveness of supply to the demand shocks which we argued above are seen as important in generating the price rise. Demand and supply shocks are responsible for the incidence of price changes while the level of inventories determines the amplitude of the resulting price movements. Grain reserves have fallen to low levels for two reasons. First, commercial users have sought to economize on inventory and have placed reliance on rapid and flexible delivery. Second, governments have come to rely more on trade than food security inventories to meet shortfalls in domestic availability. Both developments have been driven by the awareness that inventories are expensive to maintain. Commercial reliance on suppliers and national reliance on trade provide lower cost solutions to availability problems so long as shocks are idiosyncratic. They will fail when shocks are common. 
This was brought home to governments in 2008 who found that reliance on trade for food security objectives is likely to fail in exactly those circumstances in which it is required. The result is a move back to inventories both in the commercial supply chain and at the governmental level in relation to food security. Higher grain inventory levels should ensure that future supply and demand shocks are more easily absorbed. Underinvestment in agriculture, cited in World Bank (2007) and particularly acute in the developing world, by contrast, cannot be addressed so rapidly. It takes the form of poor agricultural infrastructure (roads, warehousing, port facilities), undeveloped rural credit, exhaustion of soil nutrients, often as the result of poor farming practice, and lack of research into new seed varieties (Thurow & Kilman 2009). All of these factors limit the ability of developing country farmers to respond to price incentives, and this exacerbates price volatility. There is a final factor, exchange rate variability, which does not fit easily into the four categories set out above. Changes in exchange rates reallocate purchasing power and price incentives across countries without changing the overall food supply–demand balance. Dollar depreciation raises prices to US producers and consumers but lowers prices to consumers outside the dollar area. This is because the dollar price of the commodity on world markets will rise as the result of the depreciation, but by less than the extent of the depreciation, implying a fall in say euro and sterling prices (Ridler & Yandle 1972). Exchange rate variability therefore contributes to the variability of prices measured in dollar terms, but would vanish if prices were measured in terms of an appropriately weighted basket of currencies. The overall scorecard is therefore mixed. 
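The Ridler & Yandle (1972) exchange rate result can be illustrated with a stylized calculation. Assuming, purely for illustration, equal elasticities in the dollar and non-dollar areas and a 30 per cent US share of world supply and demand:

```python
def dollar_price_response(us_market_share, depreciation):
    """Stylized Ridler-Yandle result under equal elasticities everywhere:
    a dollar depreciation raises the world dollar price in proportion to
    the non-dollar share of the market."""
    return (1.0 - us_market_share) * depreciation

d = 0.10   # dollar depreciates 10% against all other currencies (assumed)
w = 0.30   # assumed US share of world supply and demand

dp_dollar = dollar_price_response(w, d)   # dollar price rises by 7%...
dp_foreign = dp_dollar - d                # ...so the foreign-currency price
print(dp_dollar, dp_foreign)              # falls by 3%
```

The dollar price rises by less than the depreciation, so prices measured in euro or sterling fall, exactly as described in the text: the depreciation reallocates purchasing power without changing the overall supply–demand balance.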
Table 3 attempts a highly judgemental summary of the various factors considered, both their impact on the incidence and amplitude of the 2006–2008 price spike and their likely importance for future price volatility.
Rice, which is the staple food in much of Asia and is also widely imported and consumed in central and west Africa and in the Caribbean, is an exception to many of the general conclusions drawn above in relation to food price volatility.
The rice story in 2007–2009 is peculiar and in some sense pre-modern (Christiaensen 2009; Timmer 2009). Rice differs from other food commodities in that only a small proportion of world rice enters into international trade (most major consumers are also major producers), and that much rice which is traded is bought or sold at contracted rather than free market prices. The free market is therefore residual and has the potential to exhibit high volatility. There were no significant production or consumption shocks in the rice market, which was in surplus through the whole of 2007–2008. The initial price rise came in October 2007 when the Indian Government limited rice exports in order to offset the effects of rising wheat prices on the cost of living index. Fears that this might lead to a shortfall led to panic buying by governments of poor rice-importing countries, which drove up prices to unprecedented levels. Prices fell back in July 2008 when the Japanese Government agreed to sell rice from its World Trade Organization (WTO) stockpile. In the end, no rice was sold, but the offer was sufficient to cool the market. The international rice market is evidently highly problematic as well as politically important—most of the so-called food riots in 2007–2008 involved rice. It is urgent and important that steps are taken to avoid a repeat of this episode (Timmer 2010). In our view, however, it would be an error to see the problems affecting the rice market as generalizing to other grains markets or to wider agricultural markets. Both the sequence of events over 2007–2009 and the volatility statistics in §2 underline that ‘rice is different’. Whether rice price volatility increases or declines over the coming years will depend on how well the international community addresses the particular problems of that market, not on any general tendency of volatility to increase or decline.
There have been many attempts to deal with the problems associated with price volatility. These can be reviewed in terms of the time period of interest—the short term and the longer term. Taking the short term first, this refers to an instant and short-term response to increased volatility, often in conjunction with rising price levels. Many developing and middle income countries have sought to deal with significant price volatility either through export controls (as in Southeast Asia in relation to rice) or through price subsidies. The result is that shocks on the world market are not transmitted to domestic consumers. By insulating domestic producers and consumers from what is often seen as ‘imported volatility’, countries reduce demand and supply elasticities in the world market. When a significant number of major producers of the commodity act in this way, prices on the residual world market become highly volatile. The interesting aspect of these short-term measures is that while domestic markets might experience greater stability as a result of intervention, the impact on the world market and more open countries is that volatility increases. Such beggar-your-neighbour policies often arise when world markets are in decline or in periods of great instability. This was the situation in the rice market in 2007–2008 and characterized the world sugar market through much of the 1970s and 1980s. In these cases, we need to balance the advantage of reduced volatility in the protected markets against the costs of increased volatility for countries dependent on the residual free market. Longer term policies and responses are more systematic and expansive in what they try to achieve. At the aggregate level, economies have sought to work collectively to limit fluctuations in world prices of commodities, an approach manifest in the international commodity agreements that dominated the 1960s and 1970s for a range of commodities including sugar, coffee and cocoa.
Control in these markets came via a combination of buffer stocks (cocoa) and quota limitation of exports (coffee and sugar) with the aim of maintaining prices within target bands agreed between consumer and producer nations. The historical experience indicates that export controls are politically difficult and cannot easily accommodate the arrival of new producers, while buffer stock agreements are costly and vulnerable to speculative attack. Gilbert (1996) argued that the cocoa and sugar agreements achieved little success in their objectives, in the case of cocoa because of lack of adequate financing and in that of sugar because of political problems in relation to the Cuban export quota. The coffee agreement did, however, both raise and stabilize prices, and the ending of controls in 1989 resulted in both lower prices and greater volatility. Coffee market controls lapsed because of a diminished enthusiasm for their enforcement. As the largest coffee-consuming country, the United States saw less interest in supporting the export revenues of its Latin American allies in the post-Cold War period. Brazil, which remains the largest coffee-producing country, had seen its market share eroded by higher-cost African producers as the result of export restrictions and, having grown to become the second most important coffee-consuming country, had come to have mixed views on the benefits of high prices (Gilbert 1996). Arguably, if controls had been maintained beyond 1989, the agreement would have been unable to accommodate the arrival of Vietnam as a major new exporter in the 1990s, since this would have required existing exporters to cede export quotas. With the lapse of controls, Vietnamese exports displaced higher-cost African production, allowing Brazil to increase its market share. There have been calls for a return to a more regulated food trade environment as a means of combating some of the effects of world price instability. 
It is hard, however, to envisage that the current world order would countenance such a move, particularly in a trading environment dominated by multinational trade negotiations designed to create more free trading conditions and which seek to open up markets rather than close them down. Buffer stock intervention raises different issues. There is a widespread view, discussed in §3, that low levels of grain stocks may have exacerbated food price volatility over 2006–2008. If governments take the view that the private sector is unwilling or unable to hold adequate stocks, they may wish to augment these through public stocks. These could be held either nationally or through an international authority. This policy direction is dangerous. First, public stockholding discourages and crowds out private stockholding (Miranda & Helmberger 1988) as the private sector comes to rely on the availability of subsidized public inventory. The second problem is that any commitment to maintain prices within pre-announced bands, as in the cocoa agreement, makes the stockholding authority vulnerable to speculative attack (Salant 1983). There is a case for public stockholding of food commodities in landlocked developing countries that are largely isolated from world markets and where the private sector is poorly represented. This case is much weaker for developed countries and in relation to the world market, where it would be preferable to provide improved incentives for private stockholding. A possible mechanism is for an international agency to purchase grain futures contracts in periods of excess supply so as to induce, and have access to, larger inventories in subsequent years. Alternative measures for price stabilization came in the form of ex post policies such as the EU's STABEX scheme, which focused less on prices per se than on the impact volatility had on a country's current account balance. 
Under STABEX, payments were made to those countries that experienced large current account swings owing to increasing import bills or indeed a collapse in export earnings owing to price declines. However, such schemes were often viewed as insensitive to specific country concerns and were quite slow to respond to crises, with the consequence that their impact was probably to amplify rather than dampen the effects of price cycles. The successor FLEX scheme is generally seen as ineffective: while it sought to improve on STABEX, it still appears to contain some of the constraints and rigidities embodied in its predecessor. As Aiello (2009) suggests, the FLEX scheme has been dogged by a lack of finance to support its operation and also by delays in getting funding to those countries that meet eligibility criteria. In richer nations, agricultural policies have often been established with an explicit target of price volatility reduction, as seen in the original rationale for the EU's Common Agricultural Policy (CAP). While ostensibly more about raising farm incomes, as was also the case in US policy, the CAP did initially attempt to manage prices for both producers and consumers through elements of supply control. Thus, quotas in sugar and milk, and trade restrictions (import tariffs and export subsidies), sought to balance consumption and production at ‘reasonable’ prices. Much of the policy intervention in recent years (e.g. the MacSharry reforms of 1992) had been designed to curb the growing subsidization of exports onto world markets as EU production outstripped EU consumption and as the EU came under increasing pressure to negotiate a settlement in the Uruguay round of the GATT talks. 
Thus, input controls such as set-aside and variable levies were phased out to meet this requirement, and the recent WTO ruling on sugar has led to a reduction in the use of export subsidies in that crop too. Coupled with the more general liberalization of EU policy, this has limited the EU's ability to isolate its internal market from the global market. Instead, greater attention is being paid to market-based measures of price risk management (Morgan 2001). Insurance markets are well developed in most rich nations and offer some cover for crop failure, but not for price risk. Futures and options markets instead provide a means to hedge price risk that is far cheaper than the alternative use of forward contracts; major exchanges in the USA, Britain and, increasingly, India and China offer contracts in a range of major commodities such as grains and soya beans and in soft commodities such as sugar, coffee and cocoa. However, direct uptake by producers can be limited (Pannell et al. 2007) even when communication is good, awareness of opportunities is high and the advantages would appear strong. At the same time, producers benefit indirectly from the better pricing that futures-based risk management offers to intermediaries such as grain elevator companies. In cases where producers do not have such conditions—in poorer nations—use of futures and options markets becomes much more difficult. A World Bank-sponsored project—the International Task Force for Commodity Risk Management (ITF 2000)—sought to explore ways to design intermediation between producer nations and major commodity exchanges so that the benefits of hedging could be opened to all. Dana & Gilbert (2008) review this experience and argue that the major impact is more likely to be seen through the protection of supply chain intermediaries than directly by the producers themselves. The 2007–2008 food price spike has reawakened interest in food security issues. 
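The mechanics of such a futures hedge are simple: a producer sells futures at planting and buys them back at harvest, so that if futures and spot prices converge, revenue is locked in near the initial futures price whichever way the market moves. A minimal sketch with hypothetical prices and quantities (not taken from the paper):

```python
def hedged_revenue(spot_at_harvest, futures_at_planting, futures_at_harvest, tonnes):
    """Cash sale at harvest plus the gain or loss on a short futures position."""
    cash_sale = spot_at_harvest * tonnes
    short_futures_gain = (futures_at_planting - futures_at_harvest) * tonnes
    return cash_sale + short_futures_gain

# Price falls from 180 to 150: the futures gain offsets the weaker cash sale,
# so revenue stays at 180 per tonne; a price rise is offset symmetrically.
revenue = hedged_revenue(150.0, 180.0, 150.0, 1000)
```

The cost advantage over forward contracts noted above comes from the exchange standing between the parties, so no bilateral counterparty search or credit assessment is needed.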
Governments, whether or not democratic, have found that they cannot afford to leave these issues to the operation of the market. Indeed, the perception on the part of the private sector that governments are unable to commit to staying outside food issues makes it difficult for private traders to ensure adequate supply until government has declared its own hand. In many developing countries, the private sector makes insufficient preparation for food supply problems knowing that government will in the end act, and government does act, justifying the necessity to do so on the basis of the inadequate actions of the private sector. The question is therefore not whether governments should ensure food security, but how they should do so and how they should involve the private sector. Over the past two decades, western governments and multilateral agencies have emphasized trade over national food reserves. Food reserves were seen as expensive, inflexible and prone to generate corruption. To the extent that supply shocks are uncorrelated across countries, it is less costly to import to meet a domestic shortfall. This advice worked well until 2007 when agricultural prices rose across the board. However, in 2007–2008, exactly when many countries needed to import additional food, they found prices rising against them or, in the extreme case of rice, markets being closed, with the result that supplies were not available at any price. Governments have drawn the conclusion that the advice to rely on trade was incorrect and are now attempting to re-establish food security stocks. Concerns have been raised about the extent of speculation, and there have been calls for tougher regulation to ensure that supposedly destabilizing speculative activity is controlled. Index-based speculation in commodity futures was highlighted in §5 as a contributory factor in recent food price volatility that may have exacerbated the 2006–2008 food price spike.
There is a general tendency for commentators to assert that food price volatility has increased over time—however, the reverse appears to be true. Volatility has jumped over the most recent years, but there have also been periods of high volatility in the past and, except in the important case of grains, the recent episode does not appear exceptional. It is therefore possible to hope that volatility levels will drop back to historical levels over the coming years. Despite this, there are factors—global warming, oil price volatility transmitted via biofuel demand, index investment in futures markets—that may have led to a permanent increase in volatility in particular in grains prices. We cannot rule this possibility out, but we see little evidence that substantiates these claims, which we therefore regard as (perhaps reasonable) conjecture and not fact. It is unhelpful, but nevertheless correct, to say that we need to wait for several more years before firm conclusions will be possible. This review has emphasized the exceptionality of rice. Recent rice price volatility has been much greater than historical experience would have suggested as likely. To a considerable extent, perceptions of the recent food price spike were driven by the difficulties experienced in the rice market, and the dramatic price increases that these engendered. Rice was, however, not typical of other markets and the rice experience does not generalize. Low-income rice-importing countries do urgently need to address their food security problems, but the solutions to those problems will not necessarily be relevant to other food commodity markets. There are three areas in which it would be helpful to have more research.
We are grateful to the editors, two anonymous referees and to DEFRA for comments on the initial draft. The views expressed are, however, those of the authors and not of DEFRA. 
Footnotes: One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050'. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. © 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Since the Second World War, increasing agricultural productivity has allowed food output to more than keep pace with demand, so that the long-term trend in prices has been downward. This resulted from the application of science to agriculture on the biological side, first in the developed countries (DCs), but with a considerable public sector interest that ensured the transmission to less developed countries (LDCs). It also led to the success of the green revolution. But since 2008, rising food prices have swept away the apathy that had caused the neglect of agriculture over the last two decades (Piesse & Thirtle 2009; see the electronic supplementary material, table 1 and glossary).
Thus, §2 outlines the way in which R&D generates new technologies and extension transmits them to farmers, increasing productivity domestically and internationally. This encompasses the role of the DCs in generating basic advances that spill over to the developing countries and the role of extension services in the dissemination of technology. Section 3 addresses international comparisons of R&D expenditures and different measures of productivity growth. Increases in yields are crucial to output growth, but labour productivity growth is dominant in the DCs and is most closely correlated with wages and incomes in the LDCs. Labour productivity increases as human labour is replaced first by animal and then mechanical power, which has been driven predominantly by private sector R&D. These partial measures do not take account of this substitution process, but total factor productivity (TFP) indices do. TFP also distinguishes between technical progress, efficiency change and input intensification, so TFP growth has different implications, according to its cause. Section 4 explains the effects of the expansion of private R&D, biotechnology and patents, which have together led to a rapid concentration in the sector, so that there are now fewer than half a dozen major producers of new varieties. This represents a threat to the public good nature of agricultural technology on which the green revolution was based. Section 5 reports estimates of the levels of agricultural investment that would be needed to end hunger by 2025. The conclusion summarizes and evaluates the evidence. The two simplest productivity measures are partial: yield and the average product of labour. New data from Alston et al. (2008) show that for the US, between 1866 and 2007 average yields of maize increased by a factor of 6 and wheat yields by a factor of 3.5. In 2002, US agricultural production was more than five times its 1910 level. 
The increase in output from 1910 to 2002 was 1.82 per cent per year, achieved with only a 0.36 per cent per year increase in aggregate inputs. Thus, from 1911 to 2002, yields increased by a factor of 4.4, labour productivity by a factor of 15.3 and TFP by a factor of 4.1. Similarly, by the early 1980s in the UK, the labour input required to produce crops like potatoes, sugar beet, wheat and barley was only one-tenth the 1930 level, and over the same period wheat yields increased by a factor of 3 (Grigg 1989). From the Second World War to the early 1980s, tractor horsepower increased more than ten-fold and nitrogen fertilizer application grew by a factor of 6 (Holderness 1985). These achievements required massive and sustained expenditures on R&D. The US expenditures are recorded by Huffman & Evenson (1992) and those for the UK by Thirtle et al. (1997). The review of the literature on the returns to R&D by Alston et al. (2000) leaves no doubt that R&D expenditures have led to these productivity gains. For the developing countries, the average internal rate of return (IRR) is 43 per cent. Evenson (2001) similarly reviewed a large number of studies and found the IRR to be above 40 per cent but with a large variance. These results are generally taken to be evidence of under-investment in agricultural R&D and Evenson shows that the same is true for extension. Much of the improvement in plant materials was the work of the public sector, while mechanical innovations have been mostly attributable to private R&D. The diffusion of both biological and mechanical innovations takes many years, so there is a lag between the R&D expenditures and the productivity gains at the farm level that can be 25 to 40 years. R&D produces yield gains at the trial plot level, which then require expenditures on extension to take them to the farmer. Then, since more educated farmers are generally better at screening and adapting new technologies, farmer education plays a role. 
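The US figures above imply TFP growth as the residual between output growth and input growth. The sketch below (helper names are my own, not from the paper) checks that the annual rates are broadly consistent with the reported cumulative factors:

```python
def tfp_growth_pct(output_growth_pct, input_growth_pct):
    """Residual (TFP) growth: output growth not accounted for by input growth.
    The subtraction is a first-order approximation to the exact growth ratio."""
    return output_growth_pct - input_growth_pct

def cumulative_factor(annual_pct, years):
    """Compound an annual percentage growth rate over a number of years."""
    return (1 + annual_pct / 100) ** years

residual = tfp_growth_pct(1.82, 0.36)                 # 1.46% per year
factor = cumulative_factor(residual, 92)              # roughly 3.8x over 1910-2002
```

Compounded over the 92 years from 1910 to 2002, the 1.46 per cent residual gives roughly a four-fold increase, in line with the reported TFP factor of 4.1.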
There is also good evidence that spillovers between research jurisdictions are as important as direct benefits within countries (Schimmelpfennig & Thirtle 1999). The relationship between public and private R&D has been less studied, but it seems probable that the two are complements rather than substitutes (Thirtle et al. 2004). The relationship between basic and applied research was pioneered by Evenson et al. (1979) and the lags between basic and applied research and diffusion are modelled in Thirtle et al. (1998). The international transmission of productivity-enhancing technologies depends on the rate at which new technology becomes available, the extent to which it is allowed and encouraged to spill over into other jurisdictions and the capacity of the recipient countries to identify, customize and diffuse it. Hayami & Ruttan (1985) give an account of the development of sugarcane varieties that starts with Evenson & Kislev (1975) on the relationship between basic and applied science, continues with plant breeding and extends into the international diffusion of genetic material and research capacity. The productivity gain will be limited by the weakest link in this chain, but from 1961, when data became available, the international system saw public R&D growing at an increasing rate in both DCs and LDCs. This generated substantial yield growth in the DCs and the new varieties could be fairly quickly adapted by the LDCs. There is technology on the shelf for the countries that follow the leaders, and LDCs with good research and extension resources can prosper under these circumstances. However, extension and simple adaptive research lack the glamour of path-breaking new technologies and are often neglected. Agricultural extension has attracted relatively little attention in the past two decades and there is a serious lack of data for most LDCs. The only compilation is Swanson et al. (1991), which does not extend past 1988 and has much missing data. 
However, Anderson & Feder (2007) estimate that the number of extension personnel in the world is about half a million, of whom 80 per cent are public sector and 396 000 are in LDCs, where they outnumber research scientists by as many as 20 : 1. The regional distribution of extension workers in the developing countries is in need of updating, but the public-sector employment data, based on Swanson et al. (1990, p. 56), are given in table 2.
The balance between R&D and extension has long been an issue, as critics suggested that many of these workers had nothing to extend owing to weak R&D. Also, extension has tended to be the poor relation at the bottom of the funding chain (Thirtle & van Zyl 1994). This resulted in entire budgets being spent on recurrent items like salaries, while there was no fuel for vehicles and hence no farm visits. Despite these concerns Evenson's (2001) survey of the impact of extension services showed a median IRR of 80 per cent, but with a large variance. This study covered the available work from 1970 to the early 1990s, which fitted production functions with extension as an explanatory variable for output or yields. The problem with this technique, which accounts for most rate of return (ROR) studies on R&D, is that many covariates are omitted and hence the researchers find just what they are looking for: a positive and significant elasticity for extension or R&D. As studies have become more sophisticated, especially by allowing for international spillovers of technology (Schimmelpfennig & Thirtle 1999), RORs have fallen to more reasonable levels of around 30 per cent. Evenson & Pingali (2007) took a different approach, showing that whereas there was correlation between research scientists and the adoption of green revolution modern varieties, there was no correlation with extension officers. Many countries with an abundance of extension personnel did not have a green revolution. This suggests that the causes of failure and success in extension need to be examined and this challenge was taken up by Feder et al. (2001), who suggest there are some generic and universal difficulties in the operation of public extension systems and in the bureaucratic–political environment within which they are budgeted and managed. 
They find eight factors that can cause deficient performance: the scale and complexity of extension operations; the dependence of success on the broader policy environment; the problems that stem from the less than ideal interaction of extension with the knowledge generation system; the difficulties inherent in tracing extension impact; the profound problems of accountability; the oftentimes weak political commitment and support for public extension; the frequent encumbrance with public duties in addition to those related to knowledge transfer; and the severe difficulties of fiscal unsustainability faced in many countries. Anderson & Feder (2007) offer solutions to problems with training-and-visit extension, decentralized mechanisms for delivery, fee-for-service and privatized extension, and farmer field schools. Their review emphasizes the efficiency gains that can come from locally decentralized delivery with incentive structures based on largely private provision, much of which will inevitably remain largely publicly funded, especially for impoverished developing countries. In the 1950s and 1960s science was applied to agriculture in the DCs, with rapidly rising R&D expenditures and productivity growth, whether measured by yields, labour productivity or TFP. During the 1960s and 1970s this process was extended to the LDCs, as the green revolution raised yields, especially in the densely populated countries of Asia. The DCs have good data that have been used to substantiate the claims made above. Obviously, the DCs are responsible for much of the food that is traded, and basic R&D in the DCs should spill over and play a role in LDC productivity. The data for the rest of the world are less detailed, but it is possible to assess the changes in R&D and productivity for both the DCs and the LDCs. In the longer term, supply response depends on the availability of appropriate technology. 
Public R&D expenditures for the high-income countries fell from 10 534 million constant 2000 international US$ in 1991 to 10 191 million in 2000 (Pardey et al. 2006). This fall is minor, but R&D was also retargeted towards public interest areas such as the environment and food safety, so the allocation to productivity-enhancing research declined substantially (Alston et al. 1999). The decline in the high-income countries' R&D expenditures is shown in figure 1, which summarizes the world situation. The figure shows that the rate of growth of agricultural R&D expenditures has declined everywhere since the first period. Figure 1. World R&D expenditures by region and income level, in 2000 international US$.
These regional trends hide a growing divide between the scientific haves and have-nots. In the Asia-Pacific region, just two countries, China and India, accounted for 89 per cent of the $42.5 billion increase in regional spending from 1995 to 2000. Thus, China and India accounted for 59 per cent of the region's scientific spending in 1995, jumping to 73 per cent by 2000. While these huge national agricultural research systems (NARS) are successful, partly because the multinationals will collaborate to gain access to large markets (Pray et al. 2007), sub-Saharan Africa (SSA) has suffered from what Lipton & Longhurst (1989) called the Balkanization of research. There are marked returns to scale and the NARS of small countries fare poorly. In SSA the growth rate in full time staff since 1960 has been 4 per cent per annum and 75 per cent have post-graduate training and the remaining 25 per cent have doctorates. The proportion of expatriates has declined from 90 per cent in 1960 to 2 per cent in 2000, which is amazing, but the combination of HIV/AIDS and higher private sector salaries has left many NARS short of staff. SSA has always lagged so far behind that it is only now showing signs of increased growth. This is partly the result of better policies, institutions and infrastructure, and more robust systems of governance, all of which are allowing the backlog of available technology to be exploited. The World Development Report (WDR; World Bank 2008) states that developing countries achieved much faster agricultural growth (2.6% a year) than industrial countries (0.9% a year) from 1980 to 2004, accounting for 79 per cent of growth. Their share of world agricultural gross domestic product (GDP) rose from 56 per cent in 1980 to 65 per cent in 2004, with the newly industrializing economies (NIEs) in Asia accounting for two-thirds of the developing world's agricultural growth. 
The major contributor to growth in Asia, and the developing world in general, was productivity gains rather than expansion of land devoted to agriculture. Cereal yields in East Asia rose by an impressive 2.8 per cent a year in 1961–2004, much more than the 1.8 per cent growth in industrial countries. Only in SSA did area expansion have more impact than growth in yields. The FAO (2007) and USDA (2009) both report yields for individual crops, measured in physical terms. The World Bank (2007) data are in terms of agricultural value added per hectare, in constant 2000 US$, which obviously allows for enterprise switching and includes all outputs, such as animals and animal products. Figure 2 shows that for the most important cereal crops the growth rates in the developing countries were 3 per cent or better at the height of the green revolution in the early 1980s. Since then growth rates have fallen, so that by 2000 the rates for rice and wheat were about 1 per cent and maize a little better at around 1.5 per cent. There seems to have been a slight recovery since 2000, which is surely needed: these growth rates are less than population growth, so per capita food availability would otherwise be falling. Figure 2. Developing country productivity growth rates for major cereals. Source: World Development Report (WDR; World Bank 2008).
The calculations in this study are split pre- and post-1985, as that was the year at which a downturn in growth rates seemed to begin (table 3). Even so, the results show that all of Africa is doing substantially better, with South America almost doubling yield growth in the second period. These positive results need to be set against decline in the other six regions, which reduces LDC yield growth overall from 2.5 to 1.6 per cent. However, this performance is good relative to the DCs, where growth rates have fallen to one-third of their pre-1986 levels, and thus the greater decline in the DCs' R&D growth is possibly a cause of the greater decline in yield growth.
But other factors need to be considered, too. The gains from hybrids and the green revolution-type technologies were exploited earlier in the DCs, so by the second period progress may be more difficult. Note, too, that North America, which is exploiting the major new technology, genetically modified (GM) crops, shows no significant fall in yield growth, whereas the other DCs, which have so far been reluctant to embrace this science, are doing extremely poorly. As a general proposition, the low yield growth in the DCs must mean there will be less technology available to the LDCs in the future, so the falling yield growth in Asia may reflect this. Africa is further behind and has not yet reached this point in the sequence of diffusion. The work on spillovers suggests clear patterns, and within agricultural economics Schimmelpfennig & Thirtle (1999) found there was a cascade effect for the US and the EU. The direction of spillovers was from the US to Northern Europe, from there to Italy and from Italy to the least technologically advanced EU countries such as Greece. This suggests that the lags could be very long. SSA may be able to continue finding technologies for a long time before catching up with Asia, as Asia slows owing to fewer spillovers from the US and Europe. The very large and successful NARS of China, India and Brazil, which benefit from substantial economies of scale, will play an increasing role in technology diffusion. Productivity in the livestock sector is becoming increasingly important, as with higher incomes the expenditure share of meat, eggs, milk (and fruit and vegetables) rises. Evenson (2001) has shown that productivity growth in the US has always been slow in extensive livestock, but where feed concentrates and selective breeding can be applied, as in pigs and poultry, yield growth can be as fast as in crops. Conradie et al. (2009) show that in South Africa productivity growth in chicken production accounts for the most rapid increases in the Western Cape. 
Although productivity-enhancing livestock research is not further considered explicitly, it is included in §3c, which measures land productivity in value terms, regardless of use. The yields above are measured in physical terms, but the World Development Indicators data (World Bank 2007) are published with yields measured in value added per hectare, at constant 2000 US$. This incorporates all outputs and does allow for crop switching, so it may give lower or higher yield growth than the FAO crop level data. It has also been used to estimate labour productivity for Africa and Asia (Thirtle & Piesse 2008). The averages for both land and labour productivity were calculated by regressing the productivity measure on time, using a random coefficients model to give an average for the region. The regional averages are in table 4, starting with the yield for all three groups of developing countries, by continent, which is much the same sample as all the LDCs in table 3. This average yield growth is lower in the first period at 1.7 per cent but, instead of falling, rises slightly in the second period. Either enterprises other than basic cereals have done better, or switching to higher valued production has contributed to the increase in output value, or both. The Asia average for 1961–2006 of 2.6 per cent is above that for Africa (2%), but this is far higher than expected, given that African agriculture is regarded as failing. Over half the African sample (22 of the 42 countries) had yield growth of over 2 per cent and only eight had less than 1 per cent. Latin America and the Caribbean (LAC) have lower yield growth, but again no decline. Thirtle & Piesse (2008) did not include the DCs, but the last row in this section shows that the former Soviet Union (FSU) and non-EU countries in Europe actually had poorer yield growth than the developing countries.
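The averages in table 4 come from regressing the log of the productivity measure on time (the paper pools countries with a random coefficients model). A minimal single-country version of that trend regression, on a synthetic series with a known growth rate (the function name and data are illustrative, not from the paper), might look like:

```python
import numpy as np

def trend_growth(values):
    """OLS slope of ln(y) on time: the average exponential growth rate per period."""
    t = np.arange(len(values))
    slope, _intercept = np.polyfit(t, np.log(values), 1)
    return slope

# Synthetic yield series growing at exactly 2% a year; the regression
# should recover ln(1.02), about 0.0198 per year.
yields = 100.0 * 1.02 ** np.arange(20)
growth = trend_growth(yields)
```

The random coefficients extension lets the slope vary by country while shrinking each country's estimate towards the regional mean, which is what makes a single regional average meaningful.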
The lower part of table 4 shows that yield growth in Asia translated into labour productivity, growing at an average of 1.5 per cent, with only five of the 12 countries having less than 1 per cent growth. For Africa, labour productivity grew at only 0.4 per cent per annum and although the top few countries are in the same league as Asia, almost half the sample (18 countries) actually have negative growth in labour productivity. This problem has not attracted the attention it deserves. Labour productivity is also crucial to food security because 70 per cent of the poor are rural, and agricultural wages, incomes and poverty reduction are dependent on the productivity of the labour force. The common view is that by 2050 total food output will be sufficient, but there will be extreme insecurity because the poor in SSA will not be producing sufficient food or have enough income to buy it. There is a clear contrast between the poorest countries and both Europe groups, EU and non-EU, which have rather more than twice as much labour productivity growth. This again follows from the induced innovation hypothesis, as labour is the scarce resource in the DCs. It also fits the view that labour productivity is closely linked to incomes, as Europe, especially the EU member states, is substantially better off. The poorest countries are all still predominantly agricultural, while the richest have now moved beyond being purely industrial to being economies dominated by the service sector. The greatest empirical regularity in economics is the structural transformation, whereby during the development process agriculture declines in importance relative first to industry and later to services. Labour productivity does not increase dramatically until a country has passed the turning point in the structural transformation, at which the total numbers employed in agriculture start to fall. When countries pass the turning point, their economies change dramatically. 
Labour productivity in agriculture has to increase rapidly enough for the falling rural and agricultural population to feed the growing urban, industrial labour force. China passed the turning point some years ago and its agricultural population has been declining since 1999. India too has reached the turning point. These changes have serious implications for the rate and direction of technical change in agriculture. Biological technical change may continue, but it is quite quickly outweighed by mechanical technical change. In China and India, which comprise over 40 per cent of the world population, labour productivity in agriculture will now grow rapidly and so will agricultural incomes. The problem is that no country in SSA, except South Africa, has reached the turning point and there seems to be no means of transforming labour incomes at this stage of development. Worse still, labour productivity is a two-edged sword. Inappropriate labour-saving technology simply increases the numbers of beggars, as early machinery imports in South Asia showed. However, there are some successes. Herbicide-tolerant GM maize was developed in the US to save labour, but in SSA it has been used with minimum tillage in a way that has reduced erosion and increased area and yields rather than reducing employment (Piesse et al. 2009). Lack of data on input prices prevents the construction of TFP indices using accounting or index number methods. The alternative is programming techniques, such as data envelopment analysis, which allows the use of the Malmquist index. However, this index is constructed by comparing each observation with the best-practice frontier determined from all the observations, so the choice of the peer group changes the index for each country. The index number alternative is followed by Avila & Evenson (2007), who dealt with the lack of data for share weights by applying values from Brazil and India to all the other LDCs.
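For reference, the output-oriented Malmquist index between periods t and t+1 is conventionally defined as the geometric mean of two distance-function ratios; this is the standard textbook form (following Färe and co-authors), not necessarily the exact specification used in the studies cited:

```latex
M_o\left(x^{t+1}, y^{t+1}, x^{t}, y^{t}\right)
  = \left[
      \frac{D_o^{t}\left(x^{t+1}, y^{t+1}\right)}{D_o^{t}\left(x^{t}, y^{t}\right)}
      \cdot
      \frac{D_o^{t+1}\left(x^{t+1}, y^{t+1}\right)}{D_o^{t+1}\left(x^{t}, y^{t}\right)}
    \right]^{1/2}
```

where D_o^t is the output distance function measured against the period-t best-practice frontier, and values above one indicate TFP growth. The index decomposes multiplicatively into an efficiency-change (catch-up) term and a technical-change (frontier-shift) term, which is why the choice of peer group matters: the frontier itself is estimated from the sample.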
They apply the share weights calculated for India to Africa and Asia, and those for Brazil to Latin American and the middle income countries. Fuglie (2008) extends this by including China, Indonesia, Japan, the UK and the US as sources of shares, so that the allocation is slightly less crude. For example, the estimates for Brazil were applied to LAC, the Middle East and North Africa (MENA) region and South Africa. The estimates for India were applied to other countries in South Asia as well as SSA except South Africa. The US shares were used for the FSU and the 1967–1990 UK shares (from Thirtle & Bottomley 1992) were used for all of Europe except the USSR. Fuglie (2008) finds no evidence of a general slowdown in TFP growth from 1970 to 2006. Indeed, the world TFP up to 1989 grew at 0.87 per cent per annum and since 1990 has grown at 1.56 per cent per annum. He also notes that for maize, rice and wheat yield growth fell from 2.29 to 1.35 per cent per annum and output per hectare from 1.96 per cent to 1.95 per cent, while labour productivity growth rose from 1.25 to 1.51 per cent per annum. The major finding that reconciles these results is that it is input growth that has declined, as in the extreme case of the UK, where it was negative. TFP growth largely offset decelerating input growth to keep the real output of global agriculture growing at about 2 per cent per year since the 1960s, but there was a slowdown in the growth of agricultural investment. This is an important finding and explains why the supply response to the food price crisis was so strong. Lack of investment on farms can be corrected far more quickly than lack of new technology where the R&D and diffusion lags are very long. Fuglie's (2008) results show there is no general decline. Indeed, there is TFP growth that offsets the decline in inputs and keeps output rising. His results are reported by decade. 
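Fuglie's reconciliation of slowing yield growth with rising TFP growth rests on the standard growth-accounting identity, sketched here in its conventional form (the s_i are cost shares; this is the generic identity, not Fuglie's exact specification):

```latex
\frac{\dot{A}}{A} \;=\; \frac{\dot{Y}}{Y} \;-\; \sum_i s_i \,\frac{\dot{X}_i}{X_i},
\qquad \sum_i s_i = 1,
```

where A is TFP, Y is aggregate output and the X_i are the inputs. With real output growing at roughly 2 per cent per annum throughout, the reported rise in world TFP growth from 0.87 to 1.56 per cent per annum corresponds to share-weighted input growth falling from about 1.1 to 0.4 per cent, which is exactly the slowdown in agricultural investment described above.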
For the DCs and the LDCs, the USSR, the FSU, Eastern Europe, LAC, North-East Asia, China, South-East Asia and North America, TFP growth in the 1990s and since 2000 was greater than in the 1980s, which was in turn an improvement on the 1970s. For western Europe, Oceania and SSA the 1990s growth was an improvement on the 1980s, but growth since 2000 has been lower than in the 1980s. For the MENA, South Asia and India, both recent periods show lower growth than the 1980s. On balance, then, his results point heavily towards growth rather than decline. Since there is evidence that yield growth may have declined and even labour productivity growth has slowed, it is important to note that TFP is different precisely because it takes account of the modern intermediate and capital inputs that substitute for land and labour. Using less capital investment and fewer chemicals will increase TFP just as surely as more output or higher land or labour productivity. Ludena et al. (2007) use the Malmquist index to estimate agricultural productivity growth for 116 countries and find that average annual agricultural TFP growth increased from 0.6 per cent to 1.29 per cent between 1961–1980 and 1981–2000. These positive results are used to forecast optimistic increases in productivity to 2040. Most recently, Nin Pratt & Yu (2008) have produced estimates concentrating on SSA using the Malmquist index, but with shares constrained to stay within bounds derived from Avila & Evenson (2007). This is an interesting approach, but it may combine the best or the worst of both alternatives. The results in figure 3 show that TFP was declining in SSA until the mid-1980s, after which there is a dramatic recovery. Up to 1985, TFP declined at 1.67 per cent per annum but then grew at 1.73 per cent, with a further improvement to 1.83 per cent in the 1990s. This is in keeping with earlier Malmquist results, such as Lusigi & Thirtle (1997), although the technique may be inclined to have an upward bias.
Figure 3. Index of cumulative TFP growth in SSA (1961 = 1): filled diamonds, sub-Saharan Africa; filled squares, SSA excluding Nigeria.
Figure 4 shows TFP growth in SSA relative to Asia and Latin America. The conflict with the index number approach results is clear, as only Latin America had positive growth prior to 1983, and even then it was less than 1 per cent. Then, all the regions improve between 1984 and 1993 and all improve again between 1994 and 2003.
Figure 4. Average TFP growth rate of SSA's agriculture in different periods compared with TFP growth in other regions. Source: Nin Pratt & Yu (2008).
The improvement in SSA is partly owing to better agricultural practices, but it must be stressed that almost all of the gains are owing to efficiency increases rather than technological change. Many countries in SSA now have better policies, better governance and improved institutions and infrastructure, which have taken them closer to the technological frontier. However, the frontier is not moving outwards, so the R&D systems are not creating better technologies for the future. Thus, the prospects are limited, as the efficiency gains will cease if the frontier is static. So Nin Pratt & Yu (2008) take a pessimistic view in their paper to counterbalance the optimism of Ludena et al. (2007). The above review is sufficient to show that there is no consistent evidence of a decline in TFP. The methods may be imperfect, but all suggest growth rather than decline when using these broad sets of data. Thus, the evidence of decline is confined to TFP studies of individual countries, such as the UK (Thirtle et al. 2004), which turns out to be atypical and different even from the rest of the EU. However, the evidence on yields was more dismal and labour productivity is improving very slowly in SSA. This weakness in entitlements (the incomes with which the poor can command food) may well prove to be the least tractable problem. Although the literature concentrates on land, labour and TFP, there are also social TFPs, which incorporate changes in environmental quality. From the 1980s in the UK, R&D was redirected away from productivity and towards public interest issues such as the environment, animal welfare and food safety, in line with government policy. The UK disasters in animal production and food safety, such as bovine spongiform encephalopathy (mad cow disease), Creutzfeldt–Jakob disease, Escherichia coli, foot and mouth and Salmonella, suggest that this redirection was required.
The FAO argues that animal diseases are now transmitted across borders owing to globalization, and with selective breeding for performance comes higher disease risk, which increases animal health R&D on preventive measures and means more prophylactic pharmaceuticals. For livestock in South Africa, Townsend & Thirtle (2001) showed that maintenance research (animal health) is at least as important as productivity-enhancing research, and this distinction should also be made in the arable sector, where a substantial share of R&D goes to sustaining existing yields. In LDCs, more efficient natural resource use can improve water productivity. For instance, drip irrigation uses scarce water very parsimoniously and is labour-intensive, which suits LDCs with high unemployment. Much is expected of a Gates- and Buffett-funded initiative in which Monsanto and BASF are collaborating with CIMMYT and several NARS to develop water-efficient maize for Africa (WEMA; see www.monsanto.com). The expectation is that by 2020 the project will lead to 2 million extra tonnes of grain and will improve the nutrition of 14–21 million poor people. As climate change raises the incidence of drought, the gains will obviously increase further. Herbicide-tolerant GM white maize is already being used with ‘planting without ploughing’ in KwaZulu-Natal, and is high yielding while also preventing soil erosion (Gouse et al. 2006). Soil erosion can also be prevented by labour-intensive soil conservation measures, even when population pressure is increasing, as Tiffen et al. (1994) showed in their study of the Machakos district of Kenya. The oil price rises also mean expensive fertilizer and will militate in favour of precision agriculture, which uses only the necessary modern inputs.
A specific example of what can be done is the encapsulation of sugar beet seeds in the UK with nutrients and plant protection chemicals, which massively reduces input use and pollution as well as giving higher extraction rates (Thirtle 1999). What is needed are soil- and water-efficient, low-emission technologies that reduce modern input use, for both DCs and LDCs. Finally, simple improvements in storage technology can prevent heavy post-harvest losses, and there are health impacts too, as GM maize has been shown to have lower levels of carcinogenic toxins, while Bacillus thuringiensis (Bt) cotton reduces hospital admissions for burns and poisoning from pesticide sprays. Some R&D expenditures are targeted at counteracting disasters. As crops become less genetically diverse, the risk posed by new pests or viruses increases. Gene banks are some insurance that other genetic material will be available to counteract the threat. The recent outbreaks of animal diseases suggest that public resistance to GM crops is irrational: it is the animal production systems that are a danger to health. In the DCs, expenditures on preventive and reactive R&D for mad cow disease, swine flu, foot and mouth and Salmonella have increased. In the LDCs, the close proximity of people and animals lies behind the outbreaks of swine and bird flu and their transmission to humans. At present, there is an outbreak of Rift Valley fever in South Africa. Increased animal output and population pressure mean that such outbreaks should be regarded as an increasing threat in the future. The international system that produced the results outlined above was centred on open access to intellectual property, which Dalrymple (2004) called a global public good (GPG), albeit an impure one. This is a reasonable description of the system that was in place from the Second World War to the end of the millennium and it produced some excellent progress in international productivity growth. However, this is now increasingly under threat.
Most DC research is now private while over 90 per cent of LDC R&D is public, as shown in table 5. Twenty years ago universities and public laboratories in the DCs did all the basic and strategic research and this created a global commons of intellectual property. Now Monsanto and Syngenta lead and the Consultative Group on International Agricultural Research (CGIAR) and the rest of the international public systems tend to follow.
The extent of the domination of the agricultural chemicals, seeds and biotechnology market is apparent in table 6, which is from a survey conducted by the USDA. The ‘big six’, which are BASF, Bayer, Syngenta, DuPont, Dow and Monsanto, together spend US$3.6 billion, compared with US$1.42 billion for the other 249 companies operating in these areas and US$4 billion for all the other areas. Total private expenditure on agricultural chemicals is US$2.65 billion and on seeds and biotechnology US$2.37 billion. The effect of this increasing concentration is not clear, but there is an early study of the effect of market structure on innovation in biotechnology (Schimmelpfennig et al. 2004).
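The degree of concentration implied by these figures can be summarized as a simple six-firm concentration ratio; the snippet below merely restates the table 6 numbers (as quoted in the text) as a share:

```python
# Figures from table 6 (US$ billion, as quoted in the text)
big_six = 3.6       # BASF, Bayer, Syngenta, DuPont, Dow and Monsanto
other_firms = 1.42  # the other 249 companies in chemicals, seeds and biotech

cr6 = big_six / (big_six + other_firms)  # six-firm concentration ratio
print(f"CR6 = {cr6:.1%}")  # CR6 = 71.7%
```

So the big six account for roughly 72 per cent of private R&D spending in agricultural chemicals, seeds and biotechnology, on these figures.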
The several meanings of the term private in the R&D context were discussed by Thirtle & Echeverria (1994). A technology produced by a public R&D institution, funded by public taxes, producing outputs that are in the public domain, is clearly public. Historically, this best describes the invention of biological technologies at the basic end of the science spectrum, where patenting was not possible and a private R&D institution would have had no way of appropriating the returns to the investment. At the other extreme, John Deere, engaged in adaptive, near-market mechanical innovation, is purely private and the returns are secured when the new tractor in which the innovation is embodied is sold at a price that reflects its superiority, or, alternatively, when other companies pay for the use of the patented innovation. Most innovations lie somewhere between these two extremes. Research may be publicly or privately funded, performed by a public or private institution, and the innovation produced may be proprietary or in the public domain. Nor is location easy to state, as private agricultural R&D is now the province of a few huge multinationals with global reach, as noted above. Thus, any simple statement tends to be inaccurate. Pray et al. (2007) start from the fact that private expenditures are now larger than public in most industrialized countries. They outline the history of private research in the developing countries, where plantations (tea, rubber) and processors (tobacco) played a major role. Similarly, haciendas in Latin America were large enough to finance R&D, as were sugar plantations such as Lonrho in Africa. Pray et al. (2007) estimate that in 1995 private R&D in Asia and Latin America was only 10 per cent of the total, but rising. In SSA it is lower, but it was not non-existent even in the late 1980s (Thirtle & Echeverria 1994). South Africa is the exception, owing to Monsanto's involvement in maize and cotton. Pray et al.
(2007) report 525 field trials of GM crops in Africa, as compared with 1235 in Latin America and only 243 in Asia, as of 2003. This is a better indicator than expenditures for South Africa, as Monsanto does most of its R&D in the US. Total global R&D was estimated at US$33–35 billion per annum by the mid-1990s, with a little over one-third spent by the private sector. Public R&D was split about evenly between DCs and LDCs, but 94 per cent of private R&D was in the DCs. However, as noted, the location of the R&D and the use of the innovation can easily be as far apart as California and KwaZulu-Natal. The growing importance of private R&D activity is closely associated with biotechnology and GM crops for two reasons. GM crops account for much of the private sector activity in LDCs and Monsanto is the dominant player, responsible for 39 of the 54 GM events that have been approved for commercial use. The great majority of these involve Bt or herbicide-tolerant (or both) soy, maize or cotton. Technically, the development of genetic markers played a key role in moving the public–private boundary, as they allowed the identification of specific traits in biological material, which had not previously been possible. Hence, patenting became possible more widely and the courts pushed the process forward with decisions in favour of patenting. Changes in US law in 1980 were important, as the Diamond v. Chakrabarty decision allowed patenting in a case where living organisms were involved (actually an oil-eating bacterium that General Electric wanted to patent). Cohen and Boyer were awarded a US patent for their work on recombinant DNA and the Bayh–Dole Act allowed grant recipients, such as universities, to apply for patents on federally funded research. Other DCs followed and in the 1990s the trade-related aspects of intellectual property rights (TRIPS) agreement of the World Trade Organization (WTO) globalized the protection of intellectual property (Wright et al. 2007).
Many LDCs were unhappy, but had to accept TRIPS in order to keep the advantages of WTO membership. The technology problem has always been that patents help push the resources committed to R&D towards an optimum, but slow the diffusion of the innovations, given they are frequently non-rival in consumption. Excludability is at odds with maximizing welfare in the case of a non-rival good. Wright et al. (2007) compare patents and other means of providing incentives to innovation, such as prizes and contracts. The means of protecting intellectual property rights (IPRs) in agriculture, such as plant patents, plant breeders' rights, utility patents, trade secrets, trademarks and geographical indicators vary in terms of protection levels. There are also alternatives to IPRs, such as hybrid varieties, genetic use restriction technologies, contractually defined rights over tangible property, and mergers and acquisitions. The results are similar in that the trend is towards concentration. Thus, a consequence of extending patents to plants, in combination with the huge costs of biotechnology research is that the NARS, whose size led to their ascendancy over small private seed companies in the past century, are losing ground to massive multinationals. Herdt et al. (2007) note the concentration of varietal development and seed production in less than half a dozen such companies. Biotechnological discoveries and enabling technologies are patented, and since genetic improvement is a derivative process, each incremental improvement adds a further layer of IP constraints. Mergers increase a company's IP portfolio, giving it more freedom to operate and hence an advantage over smaller rivals. The building blocks and the tools all come with IP constraints and are commercially useful only to companies with portfolios covering most inputs. For example, Golden Rice required 40 patents and six material transfer agreements (MTAs). De Janvry et al. 
(2000) report that whereas in 1994, 77 per cent of patents for Bt were held by individuals and independent biotech companies, by 1999 six multinational companies (MNCs) held 67 per cent of Bt patents, 77 per cent of which had been obtained by acquiring smaller firms. The level of difficulty, in terms of legal costs and of simply having any idea of what to patent or otherwise protect, is increasingly putting agricultural research beyond the reach of most potential producers. North (1990) explains the importance of institutions that give the correct incentives by pointing out that throughout history, except for the past couple of hundred years and in limited parts of the world, the norm has been rent seeking and directly unproductive activities aimed at redistribution rather than production. Adam Smith's invisible hand is the exception, not the rule, and legal activities are almost redistributive by definition. Thus, the scientific revolution in agriculture is in danger of sinking into the mire of rent seeking, with the growth potential snuffed out. Wright et al. (2007) argue that decentralized ownership of IP and high transactions costs can lead to an ‘anti-commons’ phenomenon in which innovations with fragmented IP rights are underused. The evidence is sparse and largely anecdotal, but they do not reject the possibility that IPRs do prevent progress in agricultural technologies, much as they have in pharmaceuticals. In terms of forecasting, it seems probable that the tangle of IPRs will get worse as time passes. The concentration in technology production is unlikely to be reversed and the international organizations of the CGIAR are likely to finish second to the multinationals. This leaves a vision of a world of oligopolistic competition or collusion in agricultural innovations. The objective functions of Monsanto and its competitors may not be totally incompatible with that of the CGIAR, but they are surely not the same.
Any MNC has to protect its own position, make profits and satisfy its shareholders, whereas the CGIAR should have poverty reduction and adequate nutrition for the poorest at the top of its agenda. Since there is a strong negative correlation between profitability and ending poverty, it is unlikely that the MNCs can be at the forefront of poverty reduction. By 2050, the world population is expected to grow by 40 per cent (from 6.5 to 9.1 billion) and, allowing for increased incomes and changes in diet, global demand for food, feed and fibre is expected to grow by 50 per cent by 2030 and 70 per cent by 2050 (Bruinsma 2009). There is a wide range of estimates of demand and supply, but most conclude that although demand can be met, some intervention will be required to ensure that supply keeps up and thus that price rises are prevented. More will be required to reduce poverty and move towards the FAO's stated aim of ending world hunger by 2050. The conventional wisdom that the very high RORs to agricultural R&D indicate underinvestment implies that increasing investment is an economically efficient way of increasing food output and decreasing hunger. Beintema & Elliot (2009) suggest that the current yield growth rate of about 1 per cent can be increased to its historical level of about 2 per cent by increasing investments in the LDCs. The estimate in this paper is that the compound growth rate needed to increase output by 70 per cent over the next 40 years is 1.34 per cent per annum. If the output elasticity of agricultural R&D is as low as 0.05, which is the bottom end of the range of estimates (see Nin Pratt & Fan (2009), discussed below), then this crude approach suggests that, with a US$36 000 million world total for R&D, a 6.8 per cent increase would take growth from 1 per cent to 1.34 per cent and would cost about US$2500 million per annum. Von Braun et al.
(2008) use a lower estimate of current output growth, 0.53 per cent per annum, and calculate that an extra US$5000 million of R&D investment in the LDC NARS and the CGIAR would be needed to raise this to 1.55 per cent per annum. Using the more pessimistic R&D elasticity and without the targeting of expenditures, a 20 per cent increase of US$7200 million extra R&D investment would be needed to raise output growth by one percentage point per annum. These figures give a good indication of the magnitudes involved, against which other estimates can be judged. Tweeten & Thompson (2008) estimate that cereal demand will grow at 1.17 per cent per annum, giving an increase of 79 per cent by 2050. With linear yield growth projected to be 1.07 per cent per annum, giving 71 per cent more output, this implies excess demand at current prices, and they estimate that real prices would rise by 44 per cent. They assume no area expansion, so all the increase in output must come from yields. Estimates by Rosegrant et al. (2008) show cereal demand increasing by 0.9 per cent per annum, giving a total increase of 56 per cent, or 1.048 billion tons. After allowing for expansion of planted area and of irrigation, offset by an average diversion to biofuels, their yield gain estimates are 14 per cent lower, resulting in excess demand that increases rice prices by 60 per cent and wheat and maize by over 90 per cent (from Fisher et al. 2009). To prevent this outcome, Rosegrant et al. (2008) estimate that a 13 per cent per annum increase in public investment, especially in R&D, would produce a 0.4 per cent per annum increase in output, which would lower prices enough to halve the number of malnourished children. Under reasonable assumptions on R&D elasticities, von Braun et al. (2008) calculate the impact of LDC investment in R&D doubling from nearly 5000 million 2005 US$ per annum to nearly 10 000 million 2005 US$ by 2013.
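The back-of-the-envelope figures above are easy to reproduce. The sketch below recomputes the compound growth rate needed for 70 per cent more output over 40 years, and the R&D increase implied by an output elasticity of 0.05, using only the numbers quoted in the text (a crude check, not a model):

```python
# Compound annual growth rate that raises output by 70% over 40 years
required_growth = 1.70 ** (1 / 40) - 1
print(f"required growth: {required_growth:.2%}")  # required growth: 1.34%

# With an output elasticity of R&D of 0.05, lifting growth from 1.00% to 1.34%
# (0.34 percentage points) needs a 0.34 / 0.05 = 6.8% increase in R&D spending
elasticity = 0.05
rd_increase = (1.34 - 1.00) / elasticity  # per cent increase in R&D spending
world_rd = 36_000                         # world R&D total, US$ million per annum
extra_cost = rd_increase / 100 * world_rd
print(f"{rd_increase:.1f}% of US${world_rd:,}m = US${extra_cost:.0f}m per annum")
```

The US$2448 million this yields is the "about US$2500 million per annum" quoted above; the result is only as robust as the assumed elasticity.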
In the first scenario, where investments are allocated to maximize output (by equalizing the marginal return to R&D across regions), the impact of seven years of doubled R&D (US$35 000 million in total) is a 1 per cent increase in output by 2020 and a reduction in the number of people in $1 per day poverty of 203 million. The bulk of the increased R&D is allocated to East, South-East and South Asia, which have the highest R&D elasticities. If poverty reduction is the target, much more is allocated to SSA and South Asia, and by 2020 output is increased by 0.58 per cent, but the number taken out of $1 per day poverty is 282 million. In SSA 144 million would be taken out of $1 per day poverty, practically halving the poverty rate, from 48 per cent to 25 per cent. For South Asia, the equivalent figure is 124 million, with the poverty rate reduced from 35 per cent to 26 per cent. This work is an early report from an ambitious ongoing project on ‘best bet’ programmes to be scaled up in the future strategy of the CGIAR institutions (von Braun et al. 2009). The latest estimates of the R&D requirements are from Nin Pratt & Fan (2009), who have refined their earlier work. They review the evidence on R&D and poverty elasticities and determine the required investment in LDC NARS and the CGIAR, and its regional allocation, in order to maximize output growth or minimize poverty. They show that to maximize output growth, R&D investment should go mainly to South-East and South Asia, but if poverty reduction is the aim then SSA and South Asia should be particularly targeted. This paper investigates the productivity slowdown in the DCs and its impact on the prospects for productivity growth in the LDCs. It questions the existence of such a slowdown, pointing out that although yield growth has slowed in aggregate, labour productivity growth varies and TFP has improved in most regions.
To the extent that there is a break in the trends for all measures, it comes in the mid-1980s, which fits with the new FAO data showing that the long fall in food prices practically ceased from this time (Piesse & Thirtle 2009). The paper stresses the interactions between DCs and LDCs, between public and private R&D and between sectors, as countries such as China and India have reached the transitional stage where industry and urbanization have taken the leading role in the growth process. Similarly, agriculture not only reacts to the oil price because of fertilizer, fuel and transport costs, but has become a part of the energy industry, owing to the rapidly growing demand for biofuels. This is a challenge and an opportunity for the LDCs, as well as a threat. Although the food price crisis generated pleas for increased funding, the economic downturn will mean that few donors increase funding for public R&D. So Monsanto and the other multinationals will continue to account for a growing share of total R&D funding, and the gene revolution will lack the public sector agenda that gave the green revolution its poverty focus. Private companies must operate where profits can be made, and this precludes the least resource-rich, the most marginal and the most distant farming areas. The prospects for growth may well be better than those for poverty reduction. The public sector and the international institutions increasingly have to find a way through the growing tangle of IPRs that threatens what used to be a GPG. This is unfortunate, since in the past agricultural productivity has been an important source of poverty reduction: it helps the rural poor increase their welfare directly and also helps the urban poor by lowering food prices. Perhaps the reform of the CGIAR may help alleviate this potential loss by extending the role its institutions play in spreading the green revolution (Pingali & Kelley 2007).
These problems will be exacerbated by climate change, as both higher temperatures and a rise in sea levels will hit tropical LDCs hardest. Against this gloom, there is the fact that SSA is finally making slow progress, so that some of the poorest are likely to benefit. However, it is probably not possible to generate sufficient food output or incomes in much of SSA to feed the population at all adequately. Higher up the income distribution, more countries are reaching the turning point in the structural transformation, when agricultural labour productivity rises as a result of labour being withdrawn from agriculture. Agriculture has to provide both food and labour for industrialization. If it succeeds, it transforms itself and the country joins the ranks of the industrialized, urbanized group with greater prosperity. If it fails, it holds back industrialization and urbanization and slows the development process. So, for LDCs at all levels, there are prospects of productivity growth, but those with very little technological capacity will be disadvantaged. Increasing labour productivity is the norm for the industrializing countries, but it is in SSA that both labour productivity growth and employment need to increase, and this is where the biggest regional challenge lies. Increasing yields will also be needed if agriculture is to meet world demand for both food and energy. This competition for agricultural output will re-emerge as the recession eases and China and India resume their rapid growth and transformation. We gratefully acknowledge funding, provided by the University of Stellenbosch's Over-arching Strategic Plan, while working in South Africa.
Footnotes: One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050’. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. © 2010 The Royal Society.
The first documented attempt to use modelling to explore the uncertainty surrounding the world's ability to feed a growing population was possibly by Malthus (1798) in the first edition of An Essay on the Principle of Population. This essay famously put forth the hypothesis that exponential population growth and its associated demand for food would overwhelm linear growth of supply. Malthus's hypothesis has been subject to persistent challenge, both empirically and theoretically (Boserup 1965); but with the world population projected by the UN to reach 9.1 billion in 2050, concerns remain. More recently, according to McCalla & Revoredo (2001), there have been at least 30 different major long-term model-based simulations of global food supply and demand undertaken over the second half of the twentieth century. In the past decade, a number of further studies concerned with the future of the global food system have been published. In order to manage the uncertainties inherent in this system, these studies have used scenario analysis as well as model simulations. This article reviews a selection of contemporary studies which use scenario analysis and modelling to explore the future of the global food system to 2050. The case studies under review are World Agriculture towards 2030/2050, the Comprehensive Assessment of Water Management in Agriculture (CAWMA), a study on the effects of climate change on global food production based on Intergovernmental Panel on Climate Change socio-economic scenarios, the Millennium Ecosystem Assessment (MA) scenarios and the Agrimonde 1 scenario. Case studies have been chosen to illustrate a scenario typology and to demonstrate a diversity of modelling approaches. We begin with a short history of scenario analysis and then provide an explanation of why scenarios have been increasingly used to manage uncertainty in systems with socio-economic and biophysical dimensions.
A typology is proposed to classify the scenarios used by the case studies. An outline of the quantitative modelling approaches of the case studies is followed by brief summaries of their scenarios and a discussion of results. The article concludes by considering some of the challenges for food system scenario analysis and modelling. The Oxford English Dictionary defines a scenario as ‘a postulated sequence or development of events’. Prominent proponents of scenario analysis view scenarios variously as ‘hypothetical sequences of events constructed for the purpose of focusing attention on causal processes and decision-points’ (Kahn & Wiener 1967, p. 6), ‘focused descriptions of fundamentally different futures presented in a coherent script-like or narrative fashion’ (Schoemaker 1993, p. 195), ‘internally consistent and challenging narrative descriptions of possible futures’ (van der Heijden 2005, p. 14), ‘a tool for ordering one's perceptions about alternative future environments in which one's decisions might be played out’ (Schwartz 1991, p. 4) or ‘a description of potential future conditions, developed to inform decision-making under uncertainty’ (Parson et al. 2007, p. 1). The emphasis on exploring multiple futures underlines that scenario analysis does not aim to predict the future. Scenario analysis copes with uncertainty by presenting a range of plausible futures, usually without assigning probabilities to the outcomes. In particular, for complex socio-ecological systems, scenarios can be used to explore uncertainties over long-term horizons that cannot be represented by probability distributions on known parameters (Swart et al. 2004; Parson 2008). The origins of scenario analysis trace back to the Manhattan Project in 1942, where the limits of using probability in decision-making led to computer simulations of atomic explosions. 
The concept was further refined after World War II at the RAND Corporation, particularly by Herman Kahn, and especially for the large-scale early warning system Air Defence System Missile Command. Kahn's book On Thermonuclear War used scenario analysis to explore the uncertainties surrounding nuclear war (Kahn 1960). In 1961, Kahn left RAND to set up the Hudson Institute, a think-tank with a broader remit for scenario analysis. A subsequent book, The Year 2000, written in 1967, carried his methods beyond military planning and signalled growing interest in the comparative advantage scenario analysis might offer to business. Pierre Wack pioneered the use of scenario analysis for corporate planning at Shell, building on the possibilities presented by Kahn. In the late 1960s, Shell used a system of Unified Planning Machinery with a 6 year horizon to prepare its value chain for the future. It was premised, however, on a single ‘business as usual’ scenario. Wack participated in an experiment to look ahead 15 years in an exercise called Horizon Year Planning. The striking findings of the study, which suggested that transformative change could be imminent in the oil market, prompted Shell in 1971 to migrate from predictive forecasting to a new method of scenario analysis (Wack 1985a). The approach employed by Wack at Shell, and adapted from Kahn's early work, identifies predetermined elements in a system of interest so that the outcomes of strategically prioritized uncertainties can be explored in multiple scenarios. The food system shares an important attribute with the energy system: crop-based technologies often have long lead times. Strategic planning is likely to become increasingly necessary if the world is to feed a projected 9 billion people healthily and sustainably in 2050. The food system is multi-dimensional (Ericksen 2008) and includes social, economic, biophysical, political and institutional dimensions. 
Using a model as a proxy to this system raises ontological and epistemological issues (Rotmans & van Asselt 2001). Funtowicz & Ravetz (1990) suggest three types of uncertainty in integrated assessment: technical (inexactness in data and measurements), methodological (unreliability of methods and models) and epistemological (irreducible ignorance about the system).
These challenges notwithstanding, scenario analysis offers an opportunity to manage technical uncertainty in the socio-economic dimensions of the food system differently from uncertainties in its biophysical dimension (Rotmans & van Asselt 2001; Döös 2002). Model simulations using scenarios of multiple input assumptions for socio-economic variables may mitigate technical uncertainties in the model. However, managing uncertainty through multiple input assumptions for socio-economic variables will not be robust if uncertainties in the system's biophysical dimensions are not respected. For example, current models used to simulate the effects of climate change on sea-level rise may not be adequate proxies to the system because of epistemological uncertainties surrounding the dynamics of melting ice sheets (Hansen 2007). In addition, it will not be accurate to quantify socio-economic drivers of change as discrete or exogenous if they are actually endogenous to the system or correlated with other drivers (Garb et al. 2008). If scenarios are to be used to manage the uncertainties that can accumulate in models, the type of scenario chosen will depend on the purpose of the exercise. A typology modified from Börjeson et al. (2005) is proposed to classify three different approaches to scenarios of the future: predictive scenarios, which ask what will happen; exploratory scenarios, which ask what can happen; and normative scenarios, which ask how a specific target can be reached.
The narratives of exploratory scenarios are predominantly qualitative but usually with a quantitative underpinning provided by model simulation outputs. They can either focus on drivers of change that are exogenous to the system and outside the control of the actors for whom the scenarios are being developed (external scenarios), or they can include policy, in which case they are described as strategic. Exploratory scenarios are useful if the uncertainties in the system cannot be sufficiently managed using a model or modelling framework alone. For example, a technological surprise like the ‘green revolution’ would have been very difficult to simulate using prior historical data but nonetheless had a profound impact on the food system and its outcomes (Evans 1998). Methodological and epistemological uncertainty may be explored using qualitative narratives. Normative scenarios develop stories that meet specific outcomes or targets. Preserving scenarios seek out pathways for the system to reach an outcome without transformation. Alternatively, transforming scenarios assume that change in the system will be necessary to meet the normative target. Although normative scenarios meet a specific outcome or target, they are, paradoxically, the least predictive of scenario types. Indeed, such scenarios may be helpful in reducing dilemmas of legitimacy in futures analysis (Robinson 1992). The scenario studies included in this review use models of the food system to simulate endogenous variables including food production and consumption. Table 1 provides a brief synopsis of the various models employed, distinguishing the key variables determined endogenously by each model from drivers of change that are exogenous to the model and based on external assumptions. The geographical and sectoral resolution of the models is also provided.
The studies of the MA and Parry et al. (2004) adopt general equilibrium representations of global production, consumption and trade, in which sectoral and economy-wide variables including aggregate income, factor prices and real exchange rates are simultaneously determined in an internally consistent manner. In contrast, the World Agriculture Towards 2030/2050 and CAWMA scenarios are based on a partial equilibrium approach, which treats global markets for individual agricultural commodities one by one in isolation from each other. In these models, regional demand and regional supply for each agricultural commodity are functions of its market price for given levels of income and given productivity drivers, and the model solves endogenously for the world market price that equates global supply and demand. The partial-analytic approach ignores economy-wide constraints including budget constraints on the demand side, balance-of-payments constraints and aggregate land endowment constraints, as well as repercussions of shocks to agricultural markets on aggregate income. This simplifies the analysis considerably, but limits the domain of applicability of these partial-analytic models to scenarios in which major shocks that affect many agricultural commodities simultaneously do not occur. On the other hand, partial equilibrium multi-market models like the World Food Model, IMPACT and WATERSIM support a more detailed commodity disaggregation than global computable general equilibrium (CGE) models. Among the five scenario exercises considered here, the MA scenario study employs the most complex and sophisticated modelling framework. Its centrepiece is the global integrated assessment model IMAGE, developed at the Dutch National Institute for Public Health and the Environment (RIVM). IMAGE is designed to capture interactions between economic activity, land use, greenhouse gas (GHG) emissions, climate, crop yields and other environmental variables. 
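The market-clearing mechanism of these partial equilibrium models can be illustrated with a minimal single-commodity sketch. The regions, base quantities and elasticities below are invented for illustration and are not taken from the World Food Model, IMPACT or WATERSIM; the point is only the structure: constant-elasticity regional demand and supply curves, and a search for the world price at which global excess demand vanishes.

```python
# Minimal partial-equilibrium sketch: solve for the world price that
# clears a single commodity market across regions. Demand falls and
# supply rises with price (constant-elasticity forms); all numbers
# are illustrative, not drawn from any of the reviewed models.

regions = {
    # name: (base demand, base supply, demand elasticity, supply elasticity)
    "North": (100.0, 120.0, -0.3, 0.4),
    "South": (150.0, 110.0, -0.5, 0.3),
}

def excess_demand(price):
    """Global demand minus global supply at a given price (base price = 1)."""
    total = 0.0
    for d0, s0, ed, es in regions.values():
        total += d0 * price**ed - s0 * price**es
    return total

def clearing_price(lo=0.01, hi=100.0, tol=1e-10):
    """Bisection on the excess-demand function, which falls as price rises."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess_demand(mid) > 0:
            lo = mid  # demand exceeds supply, so the price must rise
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = clearing_price()
```

At the base price of 1 the toy world market has excess demand (250 demanded against 230 supplied), so the clearing price settles somewhat above 1. A multi-market model repeats this logic for each commodity, which is precisely why economy-wide feedbacks are absent.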
It includes a multi-region CGE model of global trade and production, a carbon-cycle module to calculate GHG emissions resulting from economic activity including energy and land use, a detailed land-use module and an atmosphere–ocean climate module that translates GHG emissions into climate outcomes. The model-determined temperature and precipitation outcomes in turn feed back into the performance of the economic system via agricultural productivity impacts. For the purposes of the MA study IMAGE has been ‘soft-linked’ to a range of other simulation models (listed in table 1) to achieve a further downscaling of variables of interest. In soft-linked model ensembles, output variables from one model are used to inform the selection of values for the input variables or parameters of another model, but the different models are not formally merged—or hard-wired—into a single consistent simultaneous-equation system. Downscaling refers to the process of disaggregating variables towards a more detailed spatial or commodity classification scale. For instance, changes in crop yields owing to climate change predicted by IMAGE have been used to adjust the agricultural productivity parameters of the agricultural market model IMPACT, which features a finer disaggregation of crops by type and region than IMAGE. Similarly, the soft link with the integrated assessment model AIM provides downscaled results for the Asia-Pacific region. Changes in irrigation within IMPACT as well as the climate projections of IMAGE have been used as inputs for the WaterGAP hydrology and water-use model simulations to assess water stress. Owing to the heterogeneity of scales, accounting methods and conceptual frameworks across different models, the soft-linking approach is associated with substantial problems in achieving consistency and is susceptible to error propagation. 
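The soft-linking idea can be shown schematically: outputs of one model are used by hand to adjust the exogenous parameters of another, without merging the two into a single simultaneous-equation system. The toy functions below are hypothetical stand-ins, not representations of IMAGE or IMPACT; they mirror only the adjustment step described above, in which climate-driven yield changes rescale a market model's productivity parameters.

```python
# Schematic "soft link": model A's outputs set model B's input
# parameters, rather than both being solved as one equation system.
# Both functions are toy stand-ins with invented coefficients.

def climate_model(emissions):
    """Stand-in for an IMAGE-like model: emissions -> fractional yield change."""
    return {"Africa": -0.02 * emissions, "Asia": -0.01 * emissions}

def market_model(productivity):
    """Stand-in for an IMPACT-like model: productivity -> output index."""
    return {region: 100 * p for region, p in productivity.items()}

# The link itself: take yield shocks from the first model and use them
# to adjust the second model's exogenous productivity parameters.
yield_shock = climate_model(emissions=1.0)
baseline_productivity = {"Africa": 1.0, "Asia": 1.0}
adjusted = {region: baseline_productivity[region] * (1 + yield_shock[region])
            for region in baseline_productivity}
output = market_model(adjusted)
```

Nothing in the second model feeds back into the first within a single solve, which is why consistency must be managed manually and errors in the first model propagate unchecked into the second.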
The scientific basis for linking models across disciplines and scales is still weak and requires specific attention in future research (Ewert et al. 2009). On the other hand, links can be based on established models and can exploit the embodied specialized knowledge from different disciplines rather than requiring new modelling work. As Böhringer & Löschel (2006) put it, these pragmatic advantages may outweigh, to some degree, the deficiencies in overall consistency. The Agribiom tool employed in the Agrimonde study endeavours to simulate regional supplies, uses and balances of physical food biomasses and their calorie equivalents without any attempt to determine market prices for agricultural commodities. Thus, the simulated outcomes may be achievable in a biophysical sense but are not necessarily viable in an economic sense. As pointed out in §2b, the simulation results from any dynamic global simulation analysis for a long-term horizon of several decades are surrounded by numerous uncertainties: about the adequacy of the model structure to capture the key factors at work, about the presence of nonlinearities that entail tipping points beyond which fundamental change in systems behaviour occurs, about model parameters, and about the evolution of the main drivers of change in agricultural systems. Model outputs should not be misinterpreted as forecasts with well-defined confidence intervals. Rather, they are meant to provide quantified insights into the complex interactions in a highly interdependent system and the potential order of magnitude of effects, which cannot be obtained by qualitative and theoretical reasoning alone. The results are crucially contingent on the current state of scientific knowledge used in the course of the development and parameterization of the model components. 
For example, the skill of the climate model component in IMAGE is necessarily restricted by the current state of the art in global circulation modelling; precipitation is poorly represented, which in turn limits the accurate simulation of crop responses. This review adopts a conceptualization of the food system and its outcomes suggested by Ericksen (2008), in which food system activities are linked to social welfare, food security and natural capital outcomes. Case studies have been chosen to illustrate our typology (figure 1).

Figure 1. Classification of review studies based on scenario typology. Source: modified from Börjeson et al. (2005).
The UN Food and Agriculture Organization (FAO) produced a baseline projection of the food system to 2050 using its partial equilibrium World Food Model (Alexandratos 2006). One of the main purposes of this scenario was to consider whether a revision by the UN in 2004 of its population growth projections could result in a Malthusian future. In this future, growth in cereal productivity declines from 2.1 per cent per annum in 1961–2001 to 1.2 per cent per annum in 2001–2030 and then to 0.6 per cent per annum in 2030–2050. However, this decline occurs alongside slowing population growth rates; and per capita consumption levels improve in developing countries to reach an average of 3070 kcal per capita by 2050. A peak in the population by the middle of the century is expected to ease the demands on natural capital from agricultural production. Reductions in absolute numbers of those malnourished are tempered by population growth, but the proportion falls from 20.3 per cent in 1990/1992 to 3.9 per cent by 2050. Nevertheless, countries that increase their per capita consumption levels could still face a ‘double burden of malnutrition’ on healthcare systems if diets contain a higher proportion of fat, sugar and salt. Increasing demand in developing countries heightens import dependencies, but the market is projected to adapt autonomously, and developing world net exporters increasingly trade with developing world net importers. Growing competition among developing world producers to supply a relatively static market of developed world consumers leads to some price instability. The scenario was produced before the food price spike of 2007, which has been attributed partly to a rise in first-generation biofuel production (World Bank 2008). Although the implications for the food system of future energy prices are not fully explored in this baseline projection, there is foresight in its call for more analysis on the prospects of competition for land between food and fuel. 
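The cumulative effect of these slowing growth rates is easy to check by compounding; the snippet below simply reproduces the arithmetic implied by the baseline projection's figures (2.1 per cent per annum over 1961–2001, 1.2 per cent over 2001–2030, 0.6 per cent over 2030–2050) and shows that even the slower rates still compound to a substantial gain.

```python
# Compound the FAO baseline growth rates for cereal productivity.
# Rates and periods are those quoted in the baseline projection.

def compound(rate_pct, years):
    """Cumulative growth factor from a constant annual percentage rate."""
    return (1 + rate_pct / 100) ** years

hist = compound(2.1, 2001 - 1961)   # 40 years at 2.1% per annum
near = compound(1.2, 2030 - 2001)   # 29 years at 1.2% per annum
far = compound(0.6, 2050 - 2030)    # 20 years at 0.6% per annum

# Despite the slowdown, productivity in 2050 still ends up roughly
# 59% above its 2001 level.
growth_2001_2050 = near * far
```

By comparison, the historical factor `hist` is about 2.3, i.e. productivity more than doubled over 1961–2001, which puts the projected deceleration in perspective.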
Finally, there remain several countries, identified as vulnerable to food insecurity in this future, challenged by a deleterious confluence of high population growth rates, limited prospects for enabling economic growth, and low capacity for agricultural production. The CAWMA created five ‘what-if?’ projections to test the efficacy of alternative investment approaches to meet the projected food demand in 2050 (de Fraiture et al. 2007) (figure 2). The scenario narratives rely on outputs from WATERSIM, a quantitative model consisting of two integrated modules: a partial equilibrium framework based on the IMPACT model simulating food supply and demand, and a water balance and accounting framework simulating the supply and demand of water.
To meet the projected food demand, it has been estimated that water use for crops, or evapotranspiration, will have to increase by around 70–90% (de Fraiture et al. 2007). However, agriculture is likely to face competition from other sectors for freshwater; its use is more consumptive; withdrawals may not be accessible or sustainable; and pollution is increasing (Shiklomanov 2000). Although equipped irrigated areas have more than doubled since 1960, more than half of agricultural production still comes from rainfed agriculture, which is inherently uncertain. In the ‘rainfed optimistic’ scenario increasing concerns about the high cost and environmental impacts of large-scale irrigation provoke a step-change, whereby there is no expansion in the irrigation area for crop production. Instead there is a focus on rural, poor smallholders in rainfed areas. Institutional reform encourages farm-level adoption of recommended production practices including in situ water management and harvesting techniques. Around 80 per cent of exploitable yield gaps are assumed to be bridged by 2050. The projections of this scenario suggest that there is at least the potential of rainfed agriculture to meet additional food requirements globally. The risks in a predominantly rainfed strategy are demonstrated in the ‘rainfed pessimistic’ scenario. In this scenario, only 20 per cent of exploitable yield gaps are bridged by 2050, mostly as a consequence of slow rates of adoption of recommended production practices. The rainfed area must increase by 53 per cent to meet future food demands; such expansion is feasible but there may be negative environmental consequences. Countries without potential to expand rainfed areas must increase food imports; and the volume of global food trade necessarily increases. Lower levels of food availability and accessibility in poorer countries exacerbate food insecurity, which is adjudged to be highest in this particular scenario. 
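The contrast between the two rainfed scenarios reduces to simple yield-gap arithmetic: output is area times yield, yield rises by the fraction of the exploitable gap that is bridged, and so a smaller fraction bridged forces a larger area expansion to meet the same demand. The numbers below are purely illustrative and are not CAWMA's; only the 80 per cent and 20 per cent gap-closure fractions echo the scenarios themselves.

```python
# Stylized yield-gap arithmetic behind the rainfed scenarios.
# All quantities (demand, yields, base area) are invented for
# illustration; only the gap-closure fractions follow the text.

def required_area(demand, current_yield, exploitable_yield, gap_bridged):
    """Cropped area needed once yield closes part of its exploitable gap."""
    yield_2050 = current_yield + gap_bridged * (exploitable_yield - current_yield)
    return demand / yield_2050

# Hypothetical figures: demand grows to 150 units, current yield 1.0 and
# exploitable ceiling 2.0 units per hectare, base area 100 hectares.
optimistic = required_area(150, 1.0, 2.0, 0.8)   # 80% of gap bridged
pessimistic = required_area(150, 1.0, 2.0, 0.2)  # 20% of gap bridged
```

With these toy numbers the optimistic case meets demand on less land than today, while the pessimistic case needs a 25 per cent area expansion, illustrating in miniature why slow adoption of improved practices translates into land pressure.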
Alternatively, in the ‘irrigation expansion’ scenario there is an emphasis on food self-sufficiency and improved access to agricultural water for more people, particularly in Asia and Sub-Saharan Africa. Yet, expanding the irrigated area by 33 per cent meets less than 25 per cent of additional global food demand. Furthermore, the costs of such expansion are substantial—estimated at around US$400 billion to expand the harvested area; with additional costs to build supporting infrastructure and create institutional capacity to manage irrigation schemes. Although food security improves and rural incomes are enhanced, pressure on freshwater resources increases. The number of people experiencing physical water scarcity increases from 1.2 billion to 2.6 billion in 2050. There is increased competition among sectors and trans-boundary conflicts intensify. In several basins, minimum environmental flow requirements are not satisfied, implying adverse environmental impacts of withdrawals on ecosystems and fisheries. Many irrigation schemes, particularly in South Asia, perform below their potential and there are opportunities for improving water productivity. The ‘irrigation yield improvement’ scenario assumes that 75–80% of exploitable yield gaps are bridged in coming decades from a combination of institutional reform, better motivation of farmers and water managers to improve productivity of land and water, and improved water allocation mechanisms among competing actors. Improving irrigated yields contributes around 50 per cent of increased global food demand by 2050; there is also a 9 per cent expansion of irrigated area globally. Irrigated diversions increase by 32 per cent but a larger amount of diverted water is used beneficially by crops, livestock or other productive processes. Investment costs are again substantial and are estimated at around US$300 billion. 
The efficacies of these alternative strategies are dependent on regional agro-ecological capability and capacity (Fisher et al. 2002), and outcomes for regions vary considerably. In the ‘trade’ scenario countries with capability and capacity export to countries that do not. The logic in this scenario recognizes an increasing awareness of the concept of virtual water trade (Allan 1998; Hoekstra & Chapagain 2008) as well as the relatively modest volumes of trade in developing countries. Cereal trade, for example, relieves pressure on irrigation water because major grain exporters in the USA, Canada, Argentina and France produce grain in highly productive rainfed conditions. Thus, trade has the potential to mitigate water scarcity and reduce environmental degradation. Increases in food demand can be satisfied through international trade without worsening water scarcity or requiring additional costly irrigation infrastructure. However, trade alone will not solve structural problems of water scarcity; and poor water-scarce countries may not be able to afford to import large amounts of agricultural commodities without foreign currency from exports. Countries struggling with food insecurity may be wary of depending on imports to satisfy basic food needs, especially after the recent food price spike. The inherently political nature of the food system also suggests that it is simplistic to assume that freer international trade is readily achievable even if it is considered by many to be beneficial to food system outcomes. Trade, furthermore, requires energy and recent spikes in oil prices have resulted in de-globalization hypotheses (Rubin 2009). Parry et al. (2004) explored the impact of climate change on food security outcomes to 2080. Socio-economic scenarios (A1FI, A2, B1, B2), previously produced by the Intergovernmental Panel on Climate Change were reused (Arnell et al. 2004). 
A modelling framework, based on a general equilibrium approach, was created to estimate the response of cereal yields to simulated climate change based on these scenarios, and then to quantify the implications for cereal production, prices and risk of hunger (figure 3). Uncertainty in the socio-economic dimension of the food system (e.g. population and GDP growth) is managed with scenarios, whereas uncertainty in the biophysical dimensions (cereal productivity growth) is managed using modelling. Although this study resembles a set of ‘what-if’ projections, in our typology it augments external exploratory scenarios. The A1FI scenario is a globalized future with very rapid economic growth and convergence of income between regions. Population growth is low and, similarly to the FAO baseline projection, the population peaks by mid-century. The energy system in this future is fossil fuel-intensive, global temperatures are the highest and cereal yields suffer most, especially in Africa and parts of Asia. Assuming no CO2 fertilization effects, aggregate cereal yields worldwide are depressed by roughly 10 per cent in 2050 compared with a reference scenario, there are large price increases, and an additional 100 million people may be at risk of hunger. With CO2 fertilization effects, many areas witness yield increases, apart from Africa, which is unable to counter a 20 per cent reduction. The effect of carbon fertilization limits price rises to around 10 per cent and the additional risk of hunger is hugely reduced. A2 is a heterogeneous world where there is more self-reliance and preservation of local identities. Population is higher and economic growth less rapid than in A1FI. Although there is an increasing divergence in cereal yields between developed and developing countries in all the scenarios, the differences are greatest in this scenario. In particular, yields dramatically decrease in developing countries with regional temperature increases and precipitation decreases. 
Although the impact on production up to 2050 is similar to A1FI, prices are higher, and with a larger and poorer population the additional number of people at risk of hunger is greater. Without CO2 fertilization effects, around 200 million people are additionally at risk of hunger by 2050 and almost 600 million by 2080. B1 is a globalized future with the same low population as A1FI, but economic development follows a more environmentally sustainable pathway. Global temperatures in B1 are the coolest of the IPCC scenarios and cereal production decreases without CO2 fertilization effects are around half those of A1FI and A2. The CO2 fertilization effect is less significant in this future because of the lower concentration of CO2 in the atmosphere; including it limits production decreases, but the benefit is smaller than in the A1FI and A2 scenarios. Price increases are the lowest of the scenarios with or without CO2 fertilization effects, at just over 10 per cent and just under 50 per cent, respectively. Without CO2 fertilization effects, the additional people at risk of hunger in 2050 and 2080 are considerably fewer than in the A1FI and A2 futures, which are dominated by economic growth. In contrast to B1, in the B2 world there is an emphasis on local rather than global solutions to economic, social and environmental sustainability. Population increases but at a rate lower than A2. Economic growth in this more regionalized world is also moderate. Food security outcomes such as production, prices and additional people at risk of hunger are a little worse than in B1 but better than in A1FI and A2. Parry et al. (2004) find that, based on IPCC scenarios, it will be possible to feed a growing world population in 2050. 
While climate change appears likely to widen the difference in cereal yields between developed and developing countries, global trade prevents negative food security outcomes. However, regional outcomes will vary, particularly in Africa, Latin America and parts of Asia, and the number of additional people at risk of hunger may increase, especially to 2080. CO2 fertilization effects are likely to be an important determinant of future food security outcomes in 2050; but if such effects are based on experimental results in either controlled environmental conditions or optimal conditions, the benefits for low-input, stressed environments may be over-estimated (Long et al. 2006). Results also suggest that the major climate stressors for agricultural production could lie from 2050 to 2080 (figure 3).

Figure 3. Additional millions of people at risk under seven SRES scenarios with and without CO2 fertilization effects, relative to a reference scenario with no climate change (blue bars, 2020; yellow bars, 2050; pink bars, 2080). Source: Parry et al. (2004).
The main objectives of the scenario study conducted as part of the 2005 MA are ‘to assess future changes in world ecosystems and resulting ecosystem services over the next 50 years and beyond, to assess the consequences of these changes for human well-being, and to inform decision-makers at various scales about these potential developments and possible response strategies and policies to adapt to or mitigate these changes’ (Carpenter et al. 2005, p. 450). The four MA scenarios are framed in terms of contrasting evolutions of governance structures for international cooperation and trade (globalized versus regionalized) and contrasting approaches towards ecosystem management (pro-active versus reactive). The approach to scenario development uses an iterative process of qualitative storyline development and quantitative modelling in order to capture aspects of ecosystem services that are quantifiable as well as those that are difficult or impossible to express in quantitative terms. The scenarios can be classified using our typology as exploratory and strategic. In conception, the results of the quantitative simulation models are meant to ensure the consistency of the storylines (figure 4). However, in practice, time constraints limited the number of iterations and the MA scenario report candidly admits the presence of remaining inconsistencies between storyline narratives and simulation results.

Figure. International cereal prices in the millennium ecosystem assessment (MA) scenarios in 2050 (light grey bars, 1997; dark grey bars, TechnoGarden; white bars, Global Orchestration; medium grey bars, Order from Strength; black bars, Adapting Mosaic). Source: Carpenter et al. (2005), Millennium Ecosystem Assessment 2005, Ecosystems and Human Well-being: Scenarios; reproduced by permission of Island Press, Washington, DC.
In all four scenarios global per capita food production in 2050 is higher than in the 2000 base. Thus, none of the futures presented is a classic Malthusian scenario (Willenbockel 2009). However, the global average masks considerable variation across regions within the individual scenarios. Under the Global Orchestration (GO) scenario, which is characterized by global trade liberalization, global cooperation and a reactive approach towards environmental management, by 2050 agricultural output in both developed and developing regions is mostly produced on large highly mechanized farms. Low-intensity farming continues only as a lifestyle choice and on marginal lands in least developed areas. Despite this intensification, crop area expands globally as the share of meat in people's diets increases with growing prosperity, which in turn raises the demand for animal feed. Around 50 per cent of sub-Saharan Africa's forests are envisaged to disappear towards 2050. Growth in per capita calorie availability is highest among the four scenarios, and child malnourishment drops to around 40 per cent of current levels. In the TechnoGarden (TG) scenario, a proactive technology- and market-based approach to ecosystems fosters a rapid transformation of agriculture across the globe. In developed regions, the assignment of property rights generates incentives for farmers to dedicate land increasingly to the provision of multiple ecosystem services. The elimination of agricultural trade barriers attracts investments from agri-business and supermarket chains into Latin American, African and Eastern European agriculture and leads to agricultural intensification in combination with an increasing development and adoption of locally adapted genetically modified crops in these regions. Indeed, sub-Saharan Africa is envisaged to turn into ‘one of the globe's ‘breadbaskets’ with some of the cleanest cities and most rational land use in the world’ (Carpenter et al. 2005, p. 259). 
Calorie consumption levels and child malnourishment are similar to the GO scenario. The Adapting Mosaic (AM) scenario is a future with an emphasis on local approaches and local learning for the improvement of ecosystem services, with diverse outcomes across regions. Under AM, the WTO Doha Round trade liberalization negotiations break down and climate change mitigation as a globally coordinated effort disappears from the policy agenda. Global increases in calorie availability are very low compared with GO and TG. Food system outcomes are worst under the Order from Strength (OS) scenario, which combines a reactive approach to ecosystem stresses with high trade barriers and low levels of global cooperation. Per capita food availability in 2050 reaches only around 80 per cent of GO levels. OS is the only MA scenario with rising child malnutrition. Owing to insufficient investment in yield improvements, production growth necessitates significant crop area expansion in both developed and developing regions. The outlook for sub-Saharan Africa is particularly concerning: OS envisages a significant decline in farm output exacerbated by climate change impacts, and widespread food insecurity as a trigger of mass migration from southern to West and East Africa, leading to social unrest and civil war in the latter regions. The CAWMA also developed a preferred future of optimistic investment approaches to meet the target of feeding a global population of 9 billion in 2050 (de Fraiture et al. 2007). In scenario analysis preferred futures are often referred to as a ‘fifth scenario’. The findings from the five scenarios developed previously (rainfed optimistic, rainfed pessimistic, irrigation expansion, irrigation yield improvement, trade) strongly favour a portfolio approach to investment that is customized for each region. 
In South Asia, the emphasis is on irrigation yield improvement, with limitations placed on new irrigation development so that the focus remains on smallholder poverty reduction and groundwater resources are protected. On the other hand, in sub-Saharan Africa, the emphasis is on improving the performance of rainfed agriculture. Smallholders concentrate on producing labour-intensive crops for local markets. Physical and institutional infrastructure enables rural growth and poverty reduction, and eventually, with urbanization and diversification, farm sizes and incomes increase. There is also an increase in the irrigated area by around 80 per cent to support production of high-value cash crops such as sugar, cotton and fruit. For the Middle East and North Africa, freshwater withdrawals are subject to increased regulation and there is a switch from irrigated cereal crops to higher value fruit and vegetables. East Asia improves existing irrigation productivity and, with the integration of fisheries into paddy production, aquaculture output increases. China, in particular, regulates environmental flows more carefully and becomes a major grain importer. There is an expansion of cultivated areas in Eastern Europe, Central Asia and Latin America, mostly for rainfed production. Latin America increases exports of sugar, soya beans and biofuels. In OECD countries aquatic ecosystem services are restored and agricultural exports fall with subsidy reform. The global average rainfed cereal yield increases by 58 per cent and rainfed water productivity improves by 31 per cent. For irrigated yields the increase is 55 per cent and water productivity improves by 38 per cent. Globally, harvested areas increase by 14 per cent, although much of the increase in the harvested irrigated area comes from cropping intensity rather than from expansion. Negative impacts on terrestrial ecosystems are mitigated by regulation. 
Freshwater withdrawals by agriculture increase by only 13 per cent in 2050 in this normative, preserving future. The Agrimonde project, jointly initiated by the Institut National de la Recherche Agronomique and the Centre de Coopération Internationale en Recherche Agronomique, created a mostly qualitative scenario of a sustainable food system that feeds a global population of 9 billion people in 2050. It uses a basic quantitative tool called Agribiom to simulate regional supplies, uses and balances of physical food biomasses and their calorie equivalents but does not attempt to determine market prices for agricultural commodities (Chaumet et al. 2009). This future, entitled Agrimonde 1, was inspired by a book that proposed a sustainability scenario for the food system driven by a ‘doubly green revolution’ (Griffon 2006). The normative target is, thus, that in 2050 the world has developed a sustainable food system. In fact, it is assumed provocatively that in each region there is an equalization of consumption to an average of 3000 kcal per person per day in 2050. In the late 2010s, increasing instances of food crises threaten social and political stability. Values converge among actors and the concept of a sustainable food system is pursued following ‘hunger riots'. A globalized community of practice evolves to manage ecosystem services and there are limits on proprietary intellectual property. Climate change has driven technological development in agriculture towards an ecological intensification that is sufficiently productive yet minimizes environmental externalities for soil, water and biodiversity. Greater biodiversity is assumed to improve system resilience. Such paradigms for sustainable agriculture have been advocated for developing countries (Pretty et al. 2006). An energy crisis in the 2020s provokes a step-change in the energy system towards decentralization of production. 
By 2050, there is global governance to prevent distorting policies and to intervene in the management of reserve stocks in order to protect import-dependent countries. Markets are regulated to prevent price volatility. National and regional food security strategies are also integrated across different layers of governance. Greater investments in infrastructure and social services have been partly made possible by improved income from rural areas. The industrial agricultural model, though initially dominant, merges with more local food and agricultural systems, especially in developing countries. There is a lower proportion of processed to raw products; and regulations impose greater accountability on companies to support nutritional objectives. In OECD countries, reductions in kilocalorie per capita consumption are driven by less waste, better nutrition policy and behaviour change; in sub-Saharan Africa, increases are driven by sustainable economic development. Latin America and sub-Saharan Africa successfully exploit supply-side yield gaps where agro-ecological capability and capacity are available. Countries in the former Soviet Union also exploit yield gaps but on land with less potential. Yield gaps between the least productive and the most productive have narrowed. A new generation of biofuels has also emerged by 2050. The world's total crop area (food and non-food) is extended by 39 per cent to 2050, with new croplands mainly in Latin America and sub-Saharan Africa. Pasture is the land cover most converted because of pressures to conserve forests. The irrigated area is static in all regions except sub-Saharan Africa, where it has doubled, and Asia, where there has been a slight increase. Three regions have aggregate import dependencies: Asia has to import calories for animal feed, while the Middle East, North Africa and sub-Saharan Africa must import to satisfy food demand. 
Three regions have surpluses—OECD countries, Latin America and the former Soviet Union. The challenge of communicating multiple futures of complex systems has led to a preference towards scenario axes built on two relatively independent, high-impact, highly uncertain dimensions of uncertainty (Alcamo 2001) (figure 5). Rigorous and transparent management of uncertainty is necessary to judge the adequacy of any model to be a proxy to the future system (Wack 1985a; Rotmans & van Asselt 2001). Nevertheless, quantitative food system models are valuable in managing existing knowledge on system behaviour and ensuring the credibility of qualitative stories.
Figure 5. Axes of the MA scenarios. Source: Carpenter et al. (2005); Millennium Ecosystem Assessment 2005, Ecosystems and Human Well-being: Scenarios. Reproduced by permission of Island Press, Washington, DC.
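The scenario-axes construction can be sketched in a few lines. This is a minimal illustration only; the axis labels and the mapping to the four MA scenarios follow the MA's two axes of globalized versus regionalized development and reactive versus proactive ecosystem management (Carpenter et al. 2005):

```python
from itertools import product

# The two MA scenario axes (Carpenter et al. 2005):
# world development (globalized vs. regionalized) and
# approach to ecosystem management (reactive vs. proactive).
world_axis = ["globalized", "regionalized"]
ecosystem_axis = ["reactive", "proactive"]

# Each corner of the 2x2 matrix is one named scenario.
scenario_names = {
    ("globalized", "reactive"): "Global Orchestration (GO)",
    ("regionalized", "reactive"): "Order from Strength (OS)",
    ("globalized", "proactive"): "TechnoGarden (TG)",
    ("regionalized", "proactive"): "Adapting Mosaic (AM)",
}

# Enumerate the full matrix: two axes generate four internally
# consistent, contrasting futures.
for world, eco in product(world_axis, ecosystem_axis):
    print(f"{world:12s} x {eco:9s} -> {scenario_names[(world, eco)]}")
```

The point of the technique is that two well-chosen axes span a small set of contrasting yet internally coherent futures, which is easier to communicate than a larger, unstructured scenario set.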
Wack (1985a) argues that the most important part of the scenario analysis process is to challenge the mental maps that actors use to navigate the future of the system of interest. Projections based solely on a model of the existing system may help to point towards sensitivities in the system and highlight new policy areas worthy of present attention, but they are less suitable for managing uncertainty over long-term horizons. Indeed, the FAO baseline projection acknowledges the need for greater analysis of the impact of rising energy prices on food system structure. The CAWMA is an interesting example of the potential of multiple scenarios to simplify policy challenges rather than complicate them (Schoemaker 1993). Its five scenarios point compellingly to a normative preferred future for the food system. Even under this ‘fifth scenario', these strategies can only ameliorate the increase in freshwater withdrawals that will be required to feed the global population in 2050. Variation in regional agro-ecological capability and capacity, and diversity in agricultural systems, suggest that a strategic portfolio of policy responses will be necessary. Input assumptions for highly significant socio-economic drivers of change such as GDP growth are held constant across the scenarios to test the sensitivity of the system to alternative investment strategies; and although this may have been suitable for the purpose of the exercise in question, this method would not have been appropriate for exploratory scenario analysis. There are no Malthusian scenarios in Parry et al. (2004), even if food security outcomes in the A2 scenario include an estimated increase in the number of people at risk of hunger to 600 million by 2080. The limitations of the analysis are transparently acknowledged and highlight areas where innovation is necessary in biophysical modelling. 
For example, crop yield change estimates assume pests and diseases are controlled; flooding is not simulated in the crop models; assumptions of farm-level adaptation are based on current technology; and hydrological processes are simplified because of the resolution of the climate simulations. The effect of CO2 fertilization on yields is an important ‘known unknown'. It should also be noted that where once the A2 scenario was considered to be an extreme future, it has increasingly begun to be viewed as ‘business as usual' (Nelson et al. 2009). Exploratory scenarios may be the most suitable scenario type for managing uncertainty in the food system over long-term horizons to 2050, but the development of such scenarios requires significantly more resources than projections (Willenbockel 2009). Interestingly, exploratory scenarios from previous exercises that include analysis of the food system are homogeneous and group into scenario families with similar food security outcomes. Cumming et al. (2005) propose five scenario families for integrated environmental assessments. In ‘market forces', economic growth is the overriding aim of the system and as a consequence there are negative environmental externalities; values are broadly individualistic. The ‘reformed market' uses hierarchical governance to address such externalities with regulation at the expense of some economic growth. A disconnected world of ‘higher fences' may be the result of de-globalization if protectionism rises and trade volume falls in response to anxiety and fatalism about the future. The ‘values change' family of scenarios is characterized by convergence towards a more sustainable and egalitarian society. Lastly, regionalization and localism may produce a ‘multipolar world'. For three recent integrated assessments that provide a reasonable fit to these families, food security outcomes are similar (Parry et al. 2004; Carpenter et al. 2005; UNEP 2007). 
Global aggregate food availability and accessibility outcomes are broadly similar in the ‘market forces' and ‘reformed market' scenario families, with significant reductions in malnourishment; at a regional scale, sub-Saharan Africa and Asia remain the regions most at risk of hunger. However, food security outcomes may worsen beyond 2050 in ‘market forces' as negative environmental externalities accumulate. The ‘values change' scenario produces the most positive food security outcomes at global and regional scales because this is a more equitable future, with positive economic convergence between regions, and livelihoods that are increasingly sustained by nature's income rather than from erosion of its capital. The ‘higher fences' scenario family produces noticeably negative outcomes at global and regional scales as a consequence of protectionist trade, which limits food availability, and low economic growth, which reduces food accessibility. Negative environmental externalities are especially severe as agro-ecological capabilities are stretched beyond appropriate limits. The full implications of climate change for the food system are not yet examined in these case studies because of technical, methodological and epistemological uncertainties. Nevertheless, climate change is expected to challenge the adaptive capacity of agricultural production in the developing world by 2050 (Parry et al. 2004; Nelson et al. 2009). If climate change widens the difference in yields between developed and developing countries in the future, such a divergence in outcomes may be exacerbated by existing yield gaps in the present. If fences are erected—politically, economically or technologically—food security outcomes for vulnerable regions in this future are very worrying. Agrimonde 1 provides the narrative of a pathway towards feeding the global population healthily and sustainably, but it is not able to underpin its analysis with a credible quantitative simulation of the food system. 
It is unsurprising that this scenario does not use model simulation outputs. Food system models simulate the future based on the past, and if the food system is expected to transform profoundly, as it does in this scenario, a quantitative proxy for the existing system is less valid. It is a scenario that deliberately challenges the mental maps of food system actors, not least in its assumption of an equalization in food demand and in its expectation of extensification. According to the internal logic of the scenario, a world with a sustainable food system is still vulnerable to negative food security outcomes. Moreover, even in the ‘values change' scenario family, the step-changes required to produce a paradigm shift to a sustainable food system in 2050 are non-trivial. Multiple scenarios are, therefore, recommended for food system actors to prepare for the future with strategies that adequately hedge against uncertainty (Lempert et al. 2006). Finally, if there is one conclusion that can be drawn across this diverse selection of case studies, it is that international trade will be a crucial determinant of food system outcomes, both for food security and sustainability. Yet, both the general and partial equilibrium modelling approaches have a tendency to smooth outcomes, based on a sequence of equilibria, which means that potential trade shocks and resulting discontinuities in the food system are difficult to simulate. Scenarios are not predictions; and scenario analysis is arguably at its most powerful as a vehicle for experiential learning (Wack 1985a). Alcamo (2001) suggests that integrated environmental assessments employing qualitative scenario analysis and quantitative modelling may influence policy-makers by managing knowledge in a way that is more communicable. Yet, there is a paucity of research on the impact of such assessments on system actors. 
An evaluation of the MA found ‘little evidence so far that the MA has had a significant direct impact on policy formulation and decision-making, especially in developing countries’ (Wells et al. 2006, p. 38). For environmental assessments more generally, Mitchell et al. (2006, p. 324) find that the nature of the process of knowledge co-production among stakeholders is a stronger determinant of influence than final outputs. For scenario analysis in particular, stakeholder participation is crucial (van der Heijden 2005, p. 220). Knowledge co-production may be impeded if scenario analysis is not sufficiently participatory or if the modelling process used to underpin narratives is not accessible. Garb et al. (2008) highlight a social divide between scenario developers and users that results in a ‘clumsy hand-off’ of learning. Drivers of change affect the food system at global, regional, national and local scales (Hazell & Wood 2008). Food system actors also interact with the system at different scales and in a variety of ways. Although scenario analysis is necessary at the global scale, participatory processes with key stakeholders at other geographical scales may increase the quality of scenario analysis and improve its impact (Zurek & Henrichs 2007). Alternatively, in circumstances where this is not feasible, improving the transparency of the scenario and modelling process may be a pragmatic compromise to encourage engagement with other food system actors (Parson 2008; table 2). The process for developing a new generation of normative climate scenarios builds on some of these principles and may offer a useful way forward (Moss et al. 2010).
Wack (1985a,b) evaluates the impact of scenario analysis based on its ability to provoke decision-makers to reconsider and ultimately redraw the mental maps with which they navigate the future of a system. Schoemaker (1993), in an exploration of the psychological benefits of scenario analysis, concludes that scenario analysis can indeed expand thinking; but more empirical research is required into the ways in which scenarios can successfully alter the mental maps actors have of a system (Garb et al. 2008). Thompson & Scoones (2009) challenge the worldviews with which the food system is envisaged. Basic narratives of growth, it is argued, have been over-emphasized, at the expense of more multi-dimensional narratives of adaptation. For long-term objectives of reducing poverty in the rural developing world and maintaining ecosystem services, alternative narratives of sustainable agriculture and participatory research and development are proposed. With notable exceptions such as the MA, the concept of sustainability across the social, economic, biophysical, political and institutional dimensions of the food system has been inadequately explored so far in integrated assessments, mostly for reasons of technical, methodological and epistemological uncertainty (Swart et al. 2004). Scenario analysis could be increasingly important in developing new worldviews of a food system that can feed a growing population healthily and sustainably in 2050. It is widely acknowledged that more work on the validation of model components used in integrated assessment studies is required, yet existing data sources often do not provide a sufficient basis for an ex-post comparison of simulation results with historical observations. On the other hand, in the presence of climate change and potential nonlinearities and tipping points, there is a risk of over-calibrating models to past processes that might not necessarily be the processes driving future developments (Uthes et al. in press). 
For modellers involved in integrated assessment, the availability, coverage, quality and accessibility of spatially explicit datasets for global crop production and trade, land use and hydrology are major concerns. In addition to primary data collection efforts, the development of an integrated data repository along with concordances between datasets that are based on different conceptual schemes and scales would be desirable. There is a need for scaling algorithms that ensure conceptual consistency of the data flow between model components that operate at different spatial, sectoral and temporal scales. Various up- and downscaling methods exist, but knowledge about scaling in integrated assessment is still in its infancy and often lacks scientific rigour (Ewert et al. 2009). The EU SEAMLESS project may be seen as a promising initial effort in this direction. The IPCC Fourth Assessment Report identifies a long list of knowledge gaps and associated research priorities related to climate change impacts on agricultural production (Easterling et al. 
2007), which includes inter alia the need for (i) further free air CO2 enrichment (FACE) experiments on an expanded range of crops, pastures, forests and locations, especially crops of importance for the rural poor in developing countries; (ii) basic knowledge of pest, disease and weed responses to elevated CO2 and climate change; (iii) a better representation of climate variability, including extreme events at different temporal scales, in crop models; (iv) new global simulation studies that incorporate new crop, forestry and livestock knowledge in models; (v) more research to identify highly vulnerable microenvironments and to provide economic coping strategies for the affected populations, since relatively moderate impacts of climate change on overall agro-ecological conditions are likely to mask much more severe climatic and economic vulnerability at the local level; and (vi) examination of a wider range of adaptation strategies and adaptation costs in modelling frameworks.
Footnotes. One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050'. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. © 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Attempts have been made to quantify global food waste over several decades, motivated partly by the need to highlight the scale of ‘waste' in relation to global malnutrition. Such assessments are reliant on limited datasets collected across the food supply chain (FSC) at different times and extrapolated to the larger picture. The most often quoted estimate is that ‘as much as half of all food grown is lost or wasted before and after it reaches the consumer' (Lundqvist et al. 2008). Such estimates are difficult to scrutinize but highlight the need for greater resource efficiencies in the global FSC. This paper presents results from a driver review of food waste issues, combining information on food waste from the international literature and interviews with supply chain experts. Although waste is formally defined in different legal jurisdictions, definitions relate to particular points of arising and are often framed in relation to specific environmental controls. Food waste occurs at different points in the FSC, although it is most readily defined at the retail and consumer stages, where outputs of the agricultural system are self-evidently ‘food' for human consumption. Unlike most other commodity flows, food is biological material subject to degradation, and different foodstuffs have different nutritional values. There are also moral and economic dimensions: the extent to which available food crops are used to meet global human needs directly, or diverted into feeding livestock, other ‘by-products' and biofuels or biomaterials production. Below are three definitions referred to herein:
Within the literature, food waste post-harvest is likely to be referred to as ‘food losses’ and ‘spoilage’. Food loss refers to the decrease in food quantity or quality, which makes it unfit for human consumption (Grolleaud 2002). At later stages of the FSC, the term food waste is applied and generally relates to behavioural issues. Food losses/spoilage, conversely, relate to systems that require investment in infrastructure. In this report, we refer to both food losses and food waste as food waste. Similarly, both ‘FSC’ and ‘post-harvest systems’ are used to mean the same thing in the literature, with ‘post-harvest loss’ also often used when describing agricultural systems and the onward supply of produce to markets. FSC is more associated with industrialized countries where post-harvest processing and large retail sectors are important features. ‘Post-consumer losses’ include food wasted from activities and operations at the point at which food is consumed. The method of measuring the quantity of food post-harvest is usually by weight, although other units of measure include calorific value, quantification of greenhouse gas impacts and lost inputs (e.g. nutrients and water). Where loss data are available for each step of a crop and are applied to production estimates, a cumulative weight loss can be calculated. When the Food and Agriculture Organization of the United Nations (FAO) was established in 1945, it had reduction of food losses within its mandate. By 1974, the first World Food Conference identified reduction of post-harvest losses as part of the solution in addressing world hunger. At this time, an overall estimate for post-harvest losses of 15 per cent had been suggested, and it was resolved to bring about a 50 per cent reduction by 1985. Consequently, the FAO established the Special Action Programme for the Prevention of Food Losses. 
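The cumulative weight-loss calculation described above can be sketched as follows. The stage names and loss rates are illustrative figures, not measured values; the key point is that per-stage losses compound multiplicatively rather than adding up:

```python
def cumulative_loss(stage_losses):
    """Combine per-stage fractional losses into one cumulative loss.

    Each stage removes a fraction of whatever survived the previous
    stages, so the surviving share is the product of (1 - loss) over
    all stages, and the cumulative loss is one minus that product.
    """
    surviving = 1.0
    for loss in stage_losses:
        surviving *= 1.0 - loss
    return 1.0 - surviving

# Illustrative per-stage loss rates for a grain chain
# (harvesting, drying, storage, milling) -- hypothetical figures.
stages = [0.04, 0.03, 0.06, 0.02]

total = cumulative_loss(stages)
print(f"cumulative loss: {total:.1%}")  # slightly less than the 15% naive sum
```

Applying per-stage loss rates to a production estimate in this way avoids the double-counting noted later, because each rate is applied only to the quantity that actually reaches that stage.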
The main focus was initially on reducing losses of durable grain; by the early 1990s, the scope of work had been broadened to cover roots and tubers, and fresh fruits and vegetables (FFVs). Poor adoption rates for interventions led to the recognition that a purely technical focus was inadequate for solving problems within the sector, and a more holistic approach was developed (Grolleaud 2002). There is no account of progress towards the 1985 post-harvest loss reduction target, and recently Lundqvist et al. (2008) called for action to reduce food waste, advocating a 50 per cent reduction in post-harvest losses to be achieved by 2025. Post-harvest losses are partly a function of the technology available in a country, as well as the extent to which markets have developed for agricultural produce. Three inter-related global drivers provide an overall structure for characterizing supply chains and future trends in developing and transitional countries.
To reflect these important global drivers, post-harvest losses are considered along a technological/economic gradient: ‘developing’, ‘intermediate’ and ‘industrialized’ FSCs. Figure 1 provides an overview of the development of post-harvest infrastructure along this gradient, expanded in table 2.
Figure 1. Schematic development of FSCs in relation to post-harvest infrastructure.
Developing countries: The majority of the rural poor rely on short FSCs with limited post-harvest infrastructure and technologies. More extended FSCs feeding urban populations are likely to involve many intermediaries between growers and consumers, which may limit the potential for growers to receive higher prices for quality. Farming is mostly small scale with varying degrees of involvement in local markets and a rapidly diminishing proportion of subsistence farmers who neither buy nor sell food staples (Jayne et al. 2006). Interventions within these systems focus on training and upgrading technical capacity to reduce losses, increase efficiency and reduce labour intensity of the technologies employed. However, attempts to reduce post-harvest losses must take account of cultural implications. In years with food surpluses, the prices received for goods will be low. One option is to store surplus for lean years, but there may not be suitable storage facilities. To rectify this, investment and engineering skills are needed. There are many instances of relatively simple technologies providing effective solutions, such as an FAO project in Afghanistan and elsewhere that provided simple, effective sealed storage drums for grain farmers, dramatically reducing post-harvest food losses (FAO 2008). Transitional and industrialized post-harvest systems have a closer integration of producers, suppliers, processors, distribution systems and markets, ensuring greater economies of scale, competitiveness and efficiency in the FSC. Supermarkets are the dominant intermediary between farmers and consumers. Even in poorer transitional economies, supermarkets are the main vehicle for delivering diversified diets, both for the growing middle classes and for the urban poor. This expansion is almost entirely dependent on foreign direct investment, with high growth rates in Eastern Europe, Asia and Latin America (Reardon et al. 2007). 
The sequence of transformation follows a different route in each country, particularly in the extent to which retailers bypass existing markets and traditional wholesalers to secure produce of the required standard and volume. There are often strong cross-links with export quality assurance, the quality standards set by supermarkets, and the procurement systems. Many of the issues identified are no different from supply chain issues in developed economies:
Accounts of supermarket expansion in some countries suggest there are instances of successful adaptation to traditional supply chains (Chen et al. 2005), particularly in regions that have not been so involved in export-orientated markets. Where central wholesale markets are used to source fresh produce, retailers may be reliant on wholesalers to perform the ‘out-grading' that in developed countries is likely to occur on-farm or at front-end packing operations. In countries with traditional two-tier produce markets (higher quality export and lower quality domestic markets), local supermarkets have created a third market for intermediate to high-quality products. At the same time, retailers provide upward pressure to improve product quality and food safety in the domestic market. Growth in FFV production has been particularly strong in the Asia-Pacific region (Rolle 2006), although the replacement of traditional markets with supermarkets has been slower in the fresh produce sector, compared with other food sectors. Within the region, FFV producers can be grouped into small farmers, groups of farmers, cooperatives, commercial farmers and foreign entities/multinationals. These producers target different markets, and show a gradient in their production capabilities, access to technologies, market information and infrastructure. Production is dominated by small farmers with limited access to resources and technology. Growers generally focus on production activities, showing little interest in post-harvest and marketing, which are primarily undertaken by middlemen and traders. Their major markets include highly disorganized traditional wholesale and wet markets, though many supply the requirements of institutions, supermarkets and fast food chains. With limited access to financial resources and low returns from agricultural production, these farmers do not invest in new technologies or improve yields through increasing inputs to production (Mittal 2007). 
Development of more industrialized FSCs can also result in growth in the food processing sector. In some BRIC countries, public sector investment is being considered to accelerate this process. In India, the government is discussing an ‘evergreen revolution', which will involve the build-up of food processing units. While this is a sensitive issue because of concerns about the industrialized sector taking control over small farmers, the improved infrastructure has helped farmers branch out into new foods, diversifying their incomes. Industrialized FSCs: in medium- to high-income countries it is often argued that better resource efficiency and less waste are achieved through centrally processing food. Although more food wastage occurs at the factory, logic suggests less waste overall is generated as there is less ‘scratch-cooking' at home. However, research on post-consumer food waste suggests that this is not the case, as consumers still waste significant quantities of food, thus potentially negating the benefits of centralized food processing. The distinction between perishable and non-perishable foodstuffs is an important consideration in post-harvest losses and the adequacy of FSC infrastructure (table 3). The following sections review post-harvest losses for cereals (non-perishables) and FFVs (perishables); few sources were found for other food types.
Losses in industrialized countries are not included as loss rates are generally considered to be low (e.g. barley losses can be as low as 0.07–2.81%; Smil 2004a) and are not considered significant under normal circumstances. Grain losses occur in post-harvest systems owing to physical losses (spillage, consumption by pests) or loss in quality. Few datasets were found relating to loss of grain quality, owing to difficulty in measurement. As most of the global production of maize, wheat, rice, sorghum and millet must be held in storage for periods from one month to more than a year, many studies focus largely on storage losses. Data available for rice post-harvest losses, based on field surveys and used here as an example, are quite extensive (table 4) and represent the ‘best case' compared with data for other crops. More extensive studies suggest that about 15 per cent of grain may be lost in the post-harvest system (Liang et al. 1993), with higher storage losses associated with the 80 per cent of China's grain stored by peasants inside their houses or in poorly constructed granaries. The extent to which variations in the data presented in table 4 might relate to different levels of post-harvest technology is unclear. For instance, data discussed in Grolleaud's (2002) review show the heaviest losses at the milling stage, perhaps attributable to case studies from more mechanized systems than Liang's data, where storage losses were predominant. This emphasizes the need for post-harvest loss data to be regularly updated and more fully described, particularly for transitional economies.
Climatic conditions are also an important consideration in determining the wider applicability of data. In humid climates, rice losses are generally greater at the drying stage (Grolleaud 2002). Hodges (undated) reviewed grain losses in East and South Africa, attempting to compare loss rates in hot humid climates (where open storage structures were required to maintain airflow) and hot dry climates (favouring sealed storage designs). Hodges concluded that data on storage losses were too limited to permit reliable comparisons of loss rates under different climates. In common with other authors, Tyler (1982) suggested that aggregated data reflecting losses on a worldwide basis are of little value. Long-term studies of post-harvest losses in Zambia and India were identified as using ‘reliable methodology’ and indicate that, when post-harvest losses are determined by field survey, storage and related post-harvest losses are usually lower than previously reported (table 5; Tyler 1982).
In summary, the main factors contributing to overestimation of grain losses were: (i) extremes being quoted rather than averages: ideally, sample size and standard deviation should be reported alongside the loss estimate to avoid this; (ii) removals from store over the season not always being accounted for: where they occur, percentage losses calculated on the basis of grain remaining in store will be overestimates unless an inventory is kept; (iii) partial damage being treated as total loss, when damaged grain would be used by farmers for home consumption or animal feed; and (iv) the potential for double-counting losses at different stages in the post-harvest system. The causes and rates of post-harvest losses for perishable crops are substantially different from those for grains. Horticultural products generally suffer higher loss rates within both industrialized and developing countries, although at different points in the FSC and for different reasons. Table 6 summarizes post-harvest loss estimates for FFVs for both developing and industrialized FSCs.
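Factor (ii) can be illustrated with a short numerical sketch; all quantities below are hypothetical, chosen only to show the direction and size of the bias:

```python
# Hypothetical example of overestimation factor (ii): computing percentage
# losses against grain remaining in store, rather than the full inventory,
# inflates the apparent loss rate as the store is drawn down.

initial_store_kg = 1000.0     # grain placed in store at harvest
monthly_offtake_kg = 100.0    # household withdrawals for consumption
monthly_pest_loss_kg = 5.0    # physical loss to pests

months = 6
offtake = monthly_offtake_kg * months           # 600 kg withdrawn for use
pest_loss = monthly_pest_loss_kg * months       # 30 kg lost to pests
remaining = initial_store_kg - offtake - pest_loss  # 370 kg still in store

# Correct rate: loss relative to the full initial inventory
true_rate = pest_loss / initial_store_kg                # 3.0%

# Biased rate: loss relative to what is left in store (plus the loss),
# as happens when no record of withdrawals is kept
biased_rate = pest_loss / (remaining + pest_loss)       # 7.5%

print(f"true loss rate:   {true_rate:.1%}")
print(f"biased loss rate: {biased_rate:.1%}")
```

Here a survey late in the season would report a loss rate two and a half times the true figure, purely because withdrawals for consumption shrink the denominator.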
Kader (2005) estimated that approximately one-third of all FFVs produced worldwide is lost before reaching consumers. Losses in the USA are estimated at 2–23 per cent, depending on the commodity, with an overall average of 12 per cent. A tentative estimate for the UK suggests losses of 9 per cent (Garnett 2006), but this disregards produce left in the field after failing to meet cosmetic or quality criteria. Although not strictly a post-harvest loss, such out-grading represents a significant aspect of waste that is difficult to quantify and largely anecdotal (Stuart 2009); some out-graded produce is likely to enter the food processing sector. In the EU, quality and size classifications for marketing FFVs have excluded non-conforming produce from the market, although recent moves have relaxed these rules to allow the sale of such FFVs where they are labelled appropriately (EC 2008). In general, FFV losses attributable to inadequate infrastructure are greater in developing than in developed countries. As with grain, more data on post-harvest losses are required to better understand the current situation, and the uncertainties around how post-harvest loss data are extrapolated are broadly similar. There is also evidence of overestimation of perishable crop losses from traditional subsistence systems. Within such agriculture, the chain from field to consumer is usually short, in both time and distance. Traditional harvesting techniques, e.g. using sticks to harvest papaya and mango, may bruise fruit, with loss implications for more extended supply chains. However, a high proportion of FFVs is consumed because produce of every quality finds a ready consumer within the locality. Where FFVs are marketed, there is potential for FSCs to be ill-adapted to the changing circumstances.
Post-harvest loss literature cites measures to reduce these losses, including gentler handling of produce, better conditioning, faster transportation and proper storage (FAO 1981; Rolle 2006). These measures require improved infrastructure and a heightened interest in produce quality on the part of the grower. Choudhury (2006) highlights high loss rates associated with a lack of packing houses in India, with FFVs generally packed in the field and some even transported without transit packaging. Furthermore, 30 per cent of FFV production in India is wasted through lack of a cold chain (Mittal 2007). Government-supported cold chain programmes are operational in countries such as Thailand and Taiwan. In lower income countries, low-cost, energy-efficient cool storage systems have been developed and implemented in an effort to minimize FFV storage losses. This review of post-harvest losses has considered wastage rates from the perspective of different food types. For industrialized countries, where waste arisings data are compiled, it is possible to quantify total food losses across different sectors of the FSC. Figure 2 provides this profile for the UK, with the post-consumer element included for comparison.

Figure 2. Food waste profile: UK food processing, distribution, retail and post-consumer. Light blue bars, recovery/reuse; magenta bars, disposal. From WRAP (2010).
Food and drink waste is estimated to be approximately 14 megatonnes (Mt) in the UK, of which 20 per cent is associated with food processing, distribution and retail. Household food waste makes the largest single contribution, but reliable estimates of other post-consumer wastes (hospitality, institutional sources) have yet to be published. The estimated total waste arisings from the food and drink manufacturing and processing sector are 5 Mt per annum, of which approximately 2.6 Mt is estimated to be food waste; a further 2.2 Mt of by-products are diverted into animal feed (WRAP 2010). Waste production surveys have identified that a large proportion of these arisings originate from the meat and poultry, FFV and beverage sectors. These wastes largely consist of by-products and unsold prepared food products. As an indication of the overall resource efficiency of the sector, a mass balance estimated that nearly 56 Mt of ingredients are used annually to produce 59 Mt of food products (C-Tech 2004). Further mass balances conducted at food and drink manufacturing sites suggested that around 16 per cent of raw materials were wasted (WRAP 2010). A small number of large retailers in the UK exercise market power over the 7000 suppliers within the sector. To avoid being ‘de-listed’, food manufacturers will often over-produce in case extra quantities are required at short notice. For manufacturers of supermarkets' own brands, packaged surplus production cannot be sold elsewhere and becomes waste; however, the sector is adept at reusing the majority of the food waste generated (C-Tech 2004). More detailed supply chain mapping studies are under way to understand where the greatest opportunities for increased resource efficiency lie (WRAP 2010). At the retail and distribution stage, the most recent estimate of food waste is 366 kt per annum (WRAP 2010). The amount of waste produced by food retailers varies between outlet types.
Small grocery stores produce proportionately more waste than large supermarkets, as the former tend to be used by consumers for top-up shopping, which makes demand unpredictable. This section summarizes knowledge of post-consumer food waste, focusing on household sources and the quantities of food wasted. Data from a handful of OECD countries and economies in transition were reviewed. We were unable to find published studies relating to post-consumer food waste in the developing world, where a ‘buy today, eat today’ food culture exists. Methodologies for post-consumer waste analysis vary, from small numbers of households weighing food waste or keeping kitchen diaries to waste compositional and behavioural studies involving thousands of households (WRAP 2008, 2009a). Others have used contemporary archaeological excavations of landfill sites to determine historical levels of food waste (Jones 2006); estimated household food waste indirectly from loss coefficients based upon existing research (Sibrián et al. 2006); or estimated wastage using statistical models relating population metabolism and body weight (Hall et al. 2009). Some studies have measured household food waste as a percentage of total consumed calories, others as a percentage of the total weight of consumed food or of the consumed food items. Some studies have sought to estimate the environmental impact of food waste, including embodied greenhouse gas emissions (WRAP 2008, 2009a) or water (Lundqvist et al. 2008). Most of the estimates relying on exogenous food loss coefficients come from studies dating back to the 1970s. Since then, technological progress, with fast changes in markets, distribution systems and household storage facilities, has rendered these estimates outdated (Kantor 1998; Naska et al. 2001). Increased consumer choice and a decrease in the proportion of disposable income spent on food have tended to increase wasteful behaviour.
As such, any study in which waste was measured over time as a constant proportion of food consumed is in danger of being inaccurate (Sibrián et al. 2006). In many studies, food scraps fed to domestic animals and food sent to sink disposal units were not included, yielding inaccurate estimates of total food waste (Harrison et al. 1975; Wenlock & Buss 1977; T. Jones 2003, unpublished data). In some cases, wastage owing to feeding food to pets reached 30 per cent of total food wastage in dietary energy terms (Mercado-Villavieja 1976; Wenlock et al. 1980; Osner 1982). Sources of food and drink consumed within the home include retail purchases together with contributions from home-grown food and takeaways. Figure 3 indicates which disposal routes are classified as household waste streams. In effect, this excludes significant quantities of food and drink eaten ‘on-the-go’, in the workplace or in catering establishments. Wherever possible, the distinction is made between three classifications of household food waste (figure 4): ‘avoidable’, ‘possibly avoidable’ and ‘unavoidable’.

Figure 3. Sources and disposal routes of household food and drink in UK homes. From WRAP (2009a), Household food and drink waste in the UK.
Figure 4. Definitions associated with household food and drink waste. From WRAP (2009a), Household food and drink waste in the UK.
Pre-Second World War studies (Cathcart & Murray 1939) showed that 1–3% of food was wasted in the home in Britain. The next major study, by the UK Ministry of Agriculture, Fisheries and Food in 1976, investigated the 25 per cent ‘crude energy gap’ between estimates of the energy embodied in domestically grown and imported food (an average of 12.3 MJ (2940 kcal) per person per day) and the average physiological requirement for energy according to the UK Department of Health and Social Security (9.6–9.8 MJ (2300–2350 kcal) per person per day) (Wenlock et al. 1980; Osner 1982). The resultant survey of 672 households recorded all the potentially edible food wasted in a week and found that, assessed against the expected usage of food in the home, wastage accounted on average for 6.5 per cent of energy intake in summer and 5.4 per cent in winter (Osner 1982). More recently, the Waste and Resources Action Programme (WRAP) has shown that household food waste has reached unprecedented levels in UK homes (WRAP 2008, 2009a,b), with 8.3 Mt of food and drink wasted each year (with a retail value of £12.2 billion at 2008 prices) and a carbon impact exceeding 20 Mt of CO2 equivalent emissions. The amount of food wasted per year in UK households is 25 per cent of that purchased (by weight). A 1998 study by Kantor et al. of food waste in the USA also found that 25 per cent of food was wasted. Archaeological excavations of US landfills by the University of Arizona (Griffin et al. 2009) also drew attention to food waste in the USA and provided quantitative data on its likely scale. Jones et al. (T. Jones, A. Bockhorst, B. McKee & A. Ndiaye 2003, unpublished data) estimated that American households discarded 211 kg of food waste per year, not including food poured down the drain, put into home composting or fed to pets. The amount of food loss at the household level was estimated to be 14 per cent (T. Jones, A. Bockhorst, B. McKee & A.
Ndiaye 2003, unpublished data), costing a family of four at least $589.76 annually (Jones 2004). Jones has estimated that overall food losses in the USA amount to US$90–100 billion a year, of which households throw away US$48.3 billion worth of food each year (Jones 2006). Finally, the US Environmental Protection Agency estimated that food waste in 2008 accounted for 12.7 per cent (31.79 Mt) of the municipal solid waste stream (USEPA 2009). Despite the dearth of food waste data in Australia, a submission to a Senate inquiry estimated that food waste comprises 15 per cent of the 20 Mt of waste that goes to landfill each year (Morgan 2009). A South Korean study (Yoon & Lim 2005), following the country's 2002 ban on food waste in the municipal landfill stream, suggested that food accounted for 26–27% of household waste (Baek 2009). Despite an awareness-raising effort in advance of the ban, food waste increased by almost 6 per cent over the 4 years after the ban, with increased consumption of FFVs linked to higher incomes cited as a reason. The Dutch Ministry of Agriculture, Nature and Food Quality has estimated that Dutch consumers throw away approximately 8–11% of the food they purchase, equating to 43–60 kg of food waste with an average value of €270–400 per person per year (Thönissen 2009). Finally, a UN FAO study (Pekcan et al. 2006) estimated household food wastage using a sample of 500 households in Ankara, Turkey, grouped according to socio-economic status. Mean energy intake levels per consumption unit and per person were 2692.6 and 2207.9 kcal per day, respectively. The mean daily energy loss from acquisition of food to plate waste was 481.7 kcal for the average household and 215.7 kcal per person, amounting to 8.9 per cent of daily per person dietary energy consumption. The average daily discards per household and per person were 816.4 and 318.8 g, respectively.
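Two of the per-capita figures quoted above can be checked arithmetically. The choice of denominator in each case is our inference, not something the original studies state explicitly:

```python
# Arithmetic checks on figures quoted in the text. Denominator choices are
# our assumption, flagged below where they go beyond what the text states.

# (1) UK 1976 'crude energy gap': supply of 2940 kcal/person/day against a
# DHSS requirement of 2300-2350 kcal/person/day. Expressing the gap
# relative to the upper requirement roughly reproduces the quoted 25%.
supply = 2940
gap = supply - 2350  # 590 kcal/day
print(f"gap = {gap} kcal/day = {gap / 2350:.1%} of requirement")  # 25.1%

# (2) Ankara study: a per-person loss of 215.7 kcal/day is quoted as 8.9%
# of dietary energy consumption. This matches if 'consumption' means
# intake (2207.9 kcal) plus the loss itself (our assumption).
loss, intake = 215.7, 2207.9
print(f"loss share = {loss / (intake + loss):.1%}")  # 8.9%
```

Both quoted percentages are thus internally consistent with the underlying figures, although the implied denominators differ between studies, which is itself a warning about comparing headline percentages directly.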
The few quantitative studies relating to post-consumer food waste are difficult to compare in terms of food wastage per household, as demonstrated in table 7. The different methods and definitions applied to the measurement of food waste reduce the comparability of the data, and some methods do not provide robust estimates owing to small samples. Different definitions of food waste are applied, particularly with regard to ‘edible’ and ‘inedible’ fractions and the extent to which alternative disposal routes are considered. Where differences are identified, the post-consumer element must also be considered in the context of the whole FSC.
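The comparability problem can be made concrete with a rough unit conversion. Assuming roughly 26 million UK households (an assumed figure for illustration, not one given in the text), the UK total of 8.3 Mt per annum converts to a per-household figure superficially close to the US estimate of 211 kg, yet the two measure different things:

```python
# Rough comparability check between two household food waste estimates.
# The UK household count (~26 million) is an assumption for illustration;
# it is not a figure from this review.

uk_total_tonnes = 8.3e6     # UK food *and drink* waste, all disposal routes
uk_households = 26e6        # assumed number of UK households
us_kg_per_household = 211   # US estimate, excluding drain/compost/pet routes

uk_kg_per_household = uk_total_tonnes * 1000 / uk_households
print(f"UK: {uk_kg_per_household:.0f} kg per household per year")  # 319
print(f"US: {us_kg_per_household} kg per household per year")

# The headline numbers look like-for-like, but differing scope (drinks
# included or not, which disposal routes are counted, edible vs inedible
# fractions) means they are not directly comparable -- the central point
# made by table 7.
```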
Most studies that have sought to identify the main food types wasted find that the most perishable food items account for the highest proportion of food waste. FFVs are usually among the most-wasted items, followed by other perishables such as bakery and dairy products, meat and fish (Pekcan et al. 2006; WRAP 2008; Morgan 2009; Thönissen 2009). There is often large variation in wastage rates between food types: WRAP (2009a) found that 7 per cent of milk purchased is wasted, compared with 36 per cent of bakery products and over 50 per cent of lettuce/leafy salads (by weight), while Jones et al. (T. Jones, A. Bockhorst, B. McKee & A. Ndiaye 2003, unpublished data) found similar variations in average wastage rates between food types. Although food and drink categories are not fully consistent across studies, figure 5 serves to highlight variation in household food waste composition. Thönissen (2009) found that an unusually high proportion of food waste consisted of dairy products, while in the Turkish data wasted FFVs accounted for the highest proportion (Pekcan et al. 2006). The extent to which such differences relate to consumption patterns or to different wastage rates cannot be divined from these data alone, although the Turkish study noted the importance of fruit in the diets of the households studied. Nor do these compositional data distinguish between avoidable and unavoidable food waste, the exception being the UK data shown in figure 6.

Figure 5. Summary of household food waste composition across five countries.
Figure 6. Weight of food and drink waste by food group, split by ‘avoidability’. Brown bars, avoidable; yellow bars, possibly avoidable; dark blue bars, unavoidable. From WRAP (2009a), Household food and drink waste in the UK.
The following factors may help to explain variation in quantities of household food waste generated.
There are a limited number of studies focusing specifically on the reasons why householders waste food, largely restricted to the UK (Exodus 2006; Brook Lyndhurst 2007; WRAP 2008, 2009a,b), the USA (Van Garde & Woodburn 1987) and Australia (Hamilton et al. 2005). These studies highlight a complex array of consumer attitudes, values and behaviours towards food, and show how varying degrees of food knowledge affect individuals' propensity to waste food. Identified attitudes, values and behaviours can be grouped by combining qualitative and quantitative consumer research techniques (to determine key claimed behaviours) with waste compositional analysis. The resultant information is then used to establish those attitudes, values and behaviours that are the strongest drivers of household food waste. In the UK, the detailed research findings described in figure 7 identify the two principal reasons why avoidable food waste occurs: ‘food is not used in time’ and ‘too much food is cooked, prepared or served’.

Figure 7. Classification of UK household food and drink waste by avoidability, reason for disposal and economic value. Bracketed figures show the tonnages and economic values for food and drink separately. From WRAP (2009a), Household food and drink waste in the UK.
These two broad categories are explained below:
Food wasted along the FSC is the outcome of many drivers: the market economy, resource limitations and climate, legislation and cultural differences, to name just a few. We have outlined the difficulties in defining and quantifying such waste and described how waste production differs across the developing, transitional and developed worlds. Here we discuss the trends likely to drive waste production in future, where the greatest potential for food waste reduction lies in the developing and developed worlds, and what policies and systems may be required to reduce food waste to 2050. In the developing world, lack of infrastructure and of associated technical and managerial skills in food production and post-harvest processing has been identified as a key driver in the creation of food waste, both now and over the near future (WFP 2009). This situation contrasts with that in developed countries, where our interviewees forecast that the majority of food waste will continue to be produced post-consumer, driven by the low price of food relative to disposable income, consumers' high expectations of cosmetic standards for food and the increasing disconnection between consumers and how food is produced. Similarly, increasing urbanization within transitioning countries will potentially disconnect those populations from how food is grown, which is likely to further increase food waste generation. Across the globe, resource and commodity limitations, partly a result of an increasing population but also owing to the impacts of climate change, were viewed as likely to increase the economic value of food, potentially driving more efficient processes that could lead to food waste reduction. Industrialized FSCs will continue to develop in response to these wider challenges through the development of shared logistics (e.g.
collaborative warehousing), identification and labelling of products (use of barcodes and RFID tags) and better demand forecasting (Global Commerce Initiative 2008); domestic kitchen technologies (smart fridges, cookers, online meal planning and recipe resources) may also make it easier for consumers to manage their food better and waste less of it. Interviewees emphasized the importance of implementing sustainable solutions across the entire FSC to fully realize the potential for food waste reduction. In developing and emerging economies, this would require market-led, large-scale investment in agricultural infrastructure, technological skills and knowledge, storage, transport and distribution. Such investments have been shown to stimulate rural economies (WFP 2009), e.g. the development of the Nile perch fishery in East Africa. In this case, and despite the unintended consequences of over-fishing and disruption of local communities, the international market for Nile perch stimulated infrastructure development and considerably reduced post-harvest losses. Where international markets and local policies and investment are lacking, large-scale capital investment in infrastructure in developing countries has often failed (FAO 2003; Kader 2005). For long-term sustainability, development across the FSC in the developing world requires locally supported government policies and investment alongside any market-led private investment connecting through to developed world markets. Examples of integrated cross-FSC approaches to food waste reduction include various cooperative schemes, e.g. the Common Code for the Coffee Community and the Sustainable Agriculture Initiative. Conversely, the greatest potential for the reduction of food waste in the developed world lies with retailers, food services and consumers.
Cultural shifts in the ways consumers value food, stimulated by education and by increased awareness of the FSC and of food waste's impact on the environment, have the potential to reduce waste production. Improved food labelling, and better consumer understanding of labelling and food storage, also have food waste reduction potential. WRAP's ongoing activities in this area, through programmes such as ‘Love Food Hate Waste’, are very recent and their impact is yet to be established. With food price recognized as the most important factor in determining consumer decisions, anecdotal evidence suggests that the economic crisis has stimulated a shift in consumer attitudes to food waste. Innovative technology throughout the FSC, in both the developed and developing worlds, particularly in packaging, contributes to improving shelf life for perishable foods and semi-prepared meals. Continued developments in packaging, e.g. utilizing nanotechnology and materials science, have the potential to further increase shelf life. In the developing world, the transfer of existing technologies and the spread of good practice, allied to market-led investment, have the greatest potential to reduce food waste across the FSC. It is of key importance, however, that practical developments address the problems of local farmers, drawing on indigenous knowledge where that has been shown to be sustainable. Without the participation of local farmers, such knowledge transfer is unlikely to succeed. While attempts to shift consumer behaviour may reduce food waste in developed countries, changes in legislation and business behaviour towards more sustainable food production and consumption will be necessary to reduce waste from its current high levels. An example might be the development of closed-loop supply chain models (WEF 2010).
In such models, waste of all forms would be fed back into the value chain (such as packaging waste being re-used); food graded as lower quality for cosmetic reasons, and food surplus to retailers' or manufacturers' requirements, would be made available through alternative routes (e.g. FareShare or as cheaper alternatives); while unavoidable food waste would be utilized as a by-product, e.g. in providing energy from waste using appropriate technology. A firm evidence base from which to assess food waste globally is lacking: the absence of specific information on the impact of food waste in BRIC countries is a major concern, and much of the loss data from developing countries were collected over 30 years ago. There is a pressing need for quantitative evidence covering developing countries and the rapidly evolving BRIC-country FSCs. Without systematic evidence, the arguments over the potential for reducing global food waste as a contribution to feeding nine billion people by 2050 will remain largely rhetorical, and measuring progress against any global reduction target will be impossible. As a consequence of these information gaps and uncertainties, there is no consensus on the proportion of global food production that is currently lost. Figures of between 10 and 40 per cent of total global food production, and as high as 50 per cent, are quoted, but on closer examination these estimates all link back to the same limited primary datasets, much of which relate to fieldwork undertaken in the 1970s and 1980s. Recent reviewers of these data note a tendency to overstate losses in relation to traditional agricultural systems in developing countries, a point reiterated in this review. The lack of infrastructure in many developing countries and poor harvesting/growing techniques are likely to remain major elements in the generation of food waste.
Less than 5 per cent of funding for agricultural research is allocated to post-harvest systems (Kader 2003), yet reduction of these losses is recognized as an important component of improved food security (Nellemann et al. 2009). Irrespective of global region, there is a need for the successful introduction of culture-specific innovations and technologies across the FSC to reduce losses. Linked to the above, market transformation has enormous potential to develop FSC infrastructure and reduce waste in developing and BRIC countries. Account should be taken of the impact of market transformation on the local communities to whom food may no longer be available. The rapid expansion of FFVs supplied to consumers in transitional countries is highly likely to have resulted in significant post-harvest losses, owing to inadequate infrastructure. In the industrialized world, meanwhile, post-harvest losses have been squeezed out of grain supply through heavy technological investment, while for FFVs, retailers' and consumers' demand for ‘cosmetically perfect’ produce has created significant post-harvest losses through ‘out-grades’. There is also strong evidence of an increase in post-consumer waste over the past several decades, particularly in the developed world, with pockets of data supporting similar behaviour in BRIC countries. The majority of studies show that as the proportion of income spent on food declines, food waste increases. There is clear evidence of a distribution of waste across demographic groups, with the lowest wastage rates in the immediate post-war generation. However, it would be a mistake to assume that this demographic distribution will remain the same in the future: today's elderly generally exhibit a ‘waste not, want not’ mentality, whereas the elderly of the future are likely to retain the attitudes and behaviours towards food that they have today.
There are clearly fundamental factors affecting post-consumer food waste worldwide, some of which may require solutions involving direct communication and awareness-raising among consumers of the importance of reducing food waste. Others require government interventions and the support and cooperation of the food industry itself, such as improving the clarity of food date labelling and advice on food storage, or ensuring that an appropriate range of pack or portion sizes is available to meet the needs of different households. The authors would like to acknowledge the input to this review of those individuals interviewed during the course of the work. We are also grateful to the Foresight Team, David Lawrence, WRAP and Gaby Bloem of the Science and Innovation Network.

Footnotes. One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050’. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy. © 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The relationship between agricultural production and population health is complex. Patterns of production lead to patterns of availability, price and distribution of food commodities. These raw ingredients are then processed in increasingly complex ways by the food manufacturing system and the combined effects of food production and processing influence individual food consumption and thereby population health. Besides these primarily nutritional links, agricultural and food systems act as conduits of food-borne and zoonotic disease and agrochemical pollutants and compete with the water supply and sanitation needs of local communities. In the context of international development, the interaction between health, agricultural productivity and income is particularly important since more than half of the world's poorest people live in farming communities, including many suffering from under-nutrition. Finally, the various interactions between agriculture, food and health increasingly play out on a global stage, with food produced in one region frequently consumed in another, mediated by trade liberalization and growing multinational food production and distribution industries. To a large extent, global food production has kept up with the demands of a growing human population (Dyson 1996), but inequalities remain in regional and national distribution of the available food (Sen 1981). Recent estimates suggest that globally the combined effect of inadequate macro (protein–energy)- and micro-nutrient (including iron and iodine) intakes underpin 35 per cent of all child deaths and are responsible for 11 per cent of the global disease burden (Black et al. 2008). At the other extreme, excess dietary consumption, or over-nutrition, is increasingly leading to global epidemics of obesity and diabetes resulting in rapidly increasing burdens of disability and death affecting all world regions (WHO/FAO 2003; Haslam & James 2005). 
Indeed, several nutrition-related chronic diseases such as coronary heart disease and stroke are now among the leading causes of death worldwide, with the burden growing most rapidly in the world's lowest income countries (WHO 2008), often leading to a ‘double burden’ of both under- and over-nutrition, placing a huge burden on societies and the existing health systems (FAO 2006). There remains a clear challenge to define ways in which agricultural production could better contribute, through the food chain, to improved health for all people. To achieve this, we need to understand the interactions between agriculture, food systems and health and to have tools that allow us to predict the effects on health of agricultural change and innovation. In this paper, we explore our capacity to measure and predict agricultural impacts on health, focusing particularly on nutrition. We begin by pulling together the diverse current literature on nutrition and health to identify what constitutes a healthy diet. We then examine how we currently measure food availability and consumption in different populations, looking particularly at our capacity to do this on a global scale. Finally, we explore whether, given the tools currently at our disposal, we are able accurately to assess the impact of changes in agriculture and food systems on population health and the potential for health to act as a driver to stimulate these changes. It has long been recognized that a balance of nutrients forms the basis of a healthy diet, and ongoing research continues to further our understanding in this area. The primary elements of a diet are the three macronutrients, carbohydrates, protein and fat (table 1), but the relative contribution of these macronutrients and their constituent sub-types to the diet are crucially important in the definition of a healthy diet.
Carbohydrates are the predominant source of energy in the diet, playing a key role in metabolism and the maintenance of homeostasis. The type and balance of carbohydrates in the diet are of great importance to health. For example, the consumption of foods containing large amounts of simple carbohydrates (refined sugars), such as sweetened beverages, can promote weight gain by increasing the energy density of the diet and by their lower satiety value (van Dam & Seidell 2007). In contrast, diets rich in complex carbohydrates such as whole-grain cereals, vegetables and nuts lower the risk of type 2 diabetes (de Munter et al. 2007; Barclay et al. 2008), cardiovascular disease (Streppel et al. 2005) and certain types of cancers (World Cancer Research Fund/American Institute for Cancer Research 2007), while also providing a good source of fibre and a range of vitamins and minerals. The most recent FAO/WHO Scientific Update on carbohydrates in human nutrition stated that ‘whole-grain cereals, vegetables, legumes and fruits are the most appropriate sources of dietary carbohydrate’ (Mann et al. 2007). Fats are a second major dietary energy source and are essential for growth and development in early life. The fat in our diets is composed mainly of fatty acids, which vary widely in their carbon chain length and the number and position of their double bonds (table 1). It is increasingly recognized that different structural categories of fats have contrasting impacts on health (Lecerf 2009). For example, there is strong evidence that the consumption of trans-fatty acids (TFAs) increases the risk of cardiovascular disease, with potential adverse effects also on insulin resistance and adiposity (Teegala et al. 2009). In contrast, the omega-3 long-chain polyunsaturated fatty acids (omega-3 LCPs), most commonly found in fish, have been shown to have beneficial effects for cardiovascular health (Scientific Advisory Committee on Nutrition 2004; Lecerf 2009). 
Omega-3 LCPs play a crucial role in brain and retinal development in utero (Uauy & Dangour 2006), but evidence is inconsistent that additional consumption of these oils in childhood enhances brain function. There is also no evidence that consuming supplemental omega-3 LCPs in later life helps slow cognitive decline (Dangour et al. 2010). Dietary intake of protein is vital for normal growth and development and the maintenance of body protein (WHO/FAO/UNU 2007). Proteins are composed of amino acids, some of which cannot be synthesized in the body and thus are termed ‘essential’, and the quality of protein in a diet is defined based on its provision of essential amino acids. The digestibility of proteins is also an important factor in defining dietary protein adequacy, with protein sources in typical Western diets having a digestibility of approximately 95 per cent, while proteins from a typical Indian rice-based diet have a digestibility of only 77 per cent (WHO/FAO/UNU 2007). Beyond this primarily metabolic demand, attention is now focusing on the role of protein intakes in promoting lifelong health, and there is emerging evidence that protein quality may have consequences for optimal muscle and bone growth (Millward et al. 2008). The most recent expert consultation on protein requirements stated that an intake of 0.83 g of high-quality protein per kilogram of body weight per day should be sufficient to meet the requirements of most of the adult population and highlighted that intakes three to four times higher than this may not be risk free (WHO/FAO/UNU 2007). In reality, diets are not categorized based purely on their macronutrient content, but instead are composed of different foods providing specific combinations of macro- and micro-nutrients. One of the most diverse food groups is fruits and vegetables, which play an important role in promoting health. 
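As a concrete illustration, the WHO/FAO/UNU (2007) safe protein intake quoted above scales linearly with body weight. The sketch below is our own, with an illustrative function name; it is not part of any official WHO tool.

```python
# Illustrative sketch (ours, not an official WHO calculator) of the
# WHO/FAO/UNU (2007) safe protein intake level: 0.83 g of high-quality
# protein per kilogram of body weight per day.

SAFE_LEVEL_G_PER_KG = 0.83  # WHO/FAO/UNU (2007) recommendation

def safe_protein_intake(body_weight_kg: float) -> float:
    """Safe daily intake of high-quality protein (g/day) for most adults."""
    return SAFE_LEVEL_G_PER_KG * body_weight_kg

# A 70 kg adult: 0.83 * 70 ≈ 58.1 g of high-quality protein per day.
print(round(safe_protein_intake(70), 1))  # 58.1
```

Note that the consultation treats this as a requirement floor, not a target: intakes three to four times higher may not be risk free.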
No single known component nutrient explains the observed beneficial health effects of consuming a diet high in vegetables and fruit, and their impact is likely due to a combination of being low in energy density, high in fibre and a source of vitamins and minerals, as well as to lesser-understood bioactive components such as polyphenols. The protective effect of fruit and vegetable consumption on cardiovascular disease and other chronic disease risk is well recognized (WHO/FAO 2003), and it has been estimated that 2.6 million deaths per year could be attributed to the inadequate consumption of fruit and vegetables, primarily through their effects on ischaemic heart disease and stroke (Lock et al. 2005). In some countries and cultures, meat and dairy products are an important part of the diet, representing good sources of protein and a range of minerals such as iron, zinc and calcium and micro-nutrients such as vitamin B12. In contexts where dietary intakes are sub-optimal, animal source food products can be an essential source of these important nutrients. However, some meat and dairy products are also major contributors of saturated fat in the human diet, and high intake of saturated fat is consistently associated with increased risk of heart disease, largely because of the effect on serum cholesterol concentrations (Hu et al. 2001; WHO/FAO 2003; Jakobsen et al. 2009). High consumption of red (and processed) meat has also been shown to be associated with increased risk of colorectal cancer (World Cancer Research Fund/American Institute for Cancer Research 2007) and total mortality (Sinha et al. 2009). There are evident complexities in defining the relationships between population nutritional intake and health. It is therefore a challenge to provide comprehensive dietary guidelines for population intakes based on the global diversity of primary foodstuffs. Dietary guidelines have been part of public health nutrition policies since the early twentieth century. 
These guidelines, often produced by expert bodies, initially focused on the prevention of specific nutrient inadequacies, but more recently, their focus has changed to the prevention of food and nutrition-related chronic diseases. However, expert reports rarely synthesize evidence into dietary guidelines that encompass nutritional inadequacy, infectious and chronic disease. This shortcoming was recently addressed in a systematic review of expert panel dietary recommendations for the prevention of nutritional deficiencies, infectious and chronic diseases published between 1990 and 2004 (World Cancer Research Fund/American Institute for Cancer Research 2007). The review identified 94 expert reports of which only three (two from India and one from South Africa) arose from expert panels in low-income countries. The reviewers identified a broad consensus in dietary recommendations for the prevention of disease (table 2). Generally, reports recommended diets high in cereals, vegetables, fruits and pulses and low in red and processed meats. Recommended diets are correspondingly high in dietary fibre and micro-nutrients and low in fats, saturated fatty acids, added sugars and salt (World Cancer Research Fund/American Institute for Cancer Research 2007).
In 2003, WHO published population nutrient intake goals (WHO/FAO 2003), which continue to reflect the current evidence and provide a simple definition of the nutritional composition of a ‘healthy diet’ for nine billion people (table 1). The WHO report did not focus on micro-nutrient intake requirements, although this continues to be an active area of research (FAO/WHO 2002). Currently, the WHO recommends, among other measures, vitamin A supplementation for children in at-risk areas (de Benoist et al. 2001), salt iodization to prevent iodine-deficiency disorders (WHO 1994) and either iron fortification or supplementation for the prevention of iron deficiency anaemia (WHO/UNICEF/UNU 2001). Evidence from around the world suggests that economic development results in major transitions in population-level dietary, and corresponding disease, patterns. The nutrition-related changes (encompassing both dietary intake and physical activity) have been termed the ‘nutrition transition’ and describe trends moving away from dietary patterns that typify those of hunter–gatherers, containing large amounts of fibre and low amounts of sugar and fat, towards energy-dense diets composed predominantly of highly processed foodstuffs common to much of the developed world today (Drewnowski & Popkin 1997; Popkin 2004, 2006). The dietary changes are themselves driven by a variety of culturally specific factors, including the increased production, availability and marketing of processed foods and the complex effects of urbanization (Popkin 2006). The future prospects look bleak, as societal change in low- and middle-income countries is accelerating the nutrition transition (Popkin 2002). Furthermore, as rural to urban migration continues, there will be increasing dependency on complex food chains, implying that these dietary transitions are, and will remain, one way. 
The consequences for population prevalence of nutrition-related chronic disease are all too evident; the WHO Global Burden of Disease project lists coronary heart disease and stroke within the top 10 leading causes of death worldwide with diabetes mellitus also a leading cause of death in high- and middle- and increasingly in low-income countries (WHO 2008). Changing patterns of agricultural production, food availability and processing will have profound impacts on individual food consumption and, as a result, on population health. A thorough understanding of these impacts requires a dependable means of measuring food consumption around the world. In the following sections, we compare the methods currently used to assess food consumption, particularly the estimation of food consumption from patterns of food production and availability through food balance sheets (FBS), from studies of food purchases as part of household budget surveys (HBS) and from individual dietary surveys. These methods are also critiqued elsewhere in this supplement as part of an analysis of food consumption trends (Kearney 2010). The United Nations Food and Agriculture Organization (FAO) compiles national data on food production and on per capita food availability for most countries in the world. These data are available online (http://faostat.fao.org) and are widely used to inform agricultural and food policy. Production data are presented for the top 20 most important food and agricultural commodities produced in a given country in terms of their value and size. Food availability data are presented in FBS and provide figures on the estimated availability of over 100 foodstuffs in grams per capita per day. The FBS are constructed using FAO information on food production and net trade. 
The food available for consumption is then calculated after estimating the amount used for industrial or agricultural purposes (for example, as seed or for animal consumption or bio-fuels), wastage in the production system and change in national stock levels. It is important to emphasize that measures of food availability are not measures of food consumption, but in the absence of other data, food availability is widely used as a proxy for food consumption. The calculation of food availability is subject to a range of potential errors, from the initial calculation of production and trade to the determination from this of what food is available for consumption. The statistics used for food production and net food trade by FAO have been criticized by both academics (Svedberg 1999) and independent evaluators (CC-IEE 2008). In 2008, an independent evaluation noted that ‘the quantity and quality of data coming from national official sources has been on a steady decline since the early 1980s’ (CC-IEE 2008). This lack of good quality data is particularly acute for certain developing countries where there may be no official statistics; FAO currently fills this gap by providing its own modelled or imputed estimates of food production, which are used for over 70 per cent of African countries and for over 50 per cent of countries from Asia and the Pacific (CC-IEE 2008). Figures on animal populations and production parameters provide further illustration of errors inherent in office-based estimates. A recent case study from South America revealed that livestock population figures reported by the FAO differed by 10–50% from the reality on the ground and that very sparse data on livestock production parameters were used to estimate production (Rushton & Viscarra 2010). 
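The balance-sheet arithmetic described above (production plus net trade, minus stock changes, non-food uses and system wastage, spread over the population) can be sketched as follows. This is our simplified reading of the FBS method; the field names and example figures are illustrative, not FAO data.

```python
# Hedged sketch of the food balance sheet (FBS) arithmetic: how per capita
# availability (g/capita/day) is derived for one commodity. All names and
# numbers here are illustrative, not taken from FAOSTAT.
from dataclasses import dataclass

@dataclass
class CommodityBalance:
    production_t: float            # domestic production (tonnes/year)
    imports_t: float
    exports_t: float
    stock_change_t: float          # net addition to national stocks
    seed_feed_industrial_t: float  # seed, animal feed, bio-fuels, etc.
    system_waste_t: float          # wastage in the production system

def per_capita_availability_g_day(b: CommodityBalance, population: int) -> float:
    """Grams per capita per day available for human consumption."""
    available_t = (b.production_t + b.imports_t - b.exports_t
                   - b.stock_change_t - b.seed_feed_industrial_t
                   - b.system_waste_t)
    return available_t * 1_000_000 / population / 365  # tonnes -> grams

b = CommodityBalance(5000, 800, 300, 100, 900, 500)
print(round(per_capita_availability_g_day(b, 100_000), 1))  # 109.6
```

Every term in this subtraction is itself an estimate, which is why errors in production or population figures propagate directly into the availability statistic.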
A striking example from this region is the difference in estimates in Brazilian cattle populations, with the official number being 180 million compared with estimates of 160 million (FNP Consultoria e Comercio 2006). As agricultural production numbers form the basis of FBS estimates of food availability, errors of this magnitude will have important consequences for the accuracy of the resulting food availability data, and any estimates of consumption calculated from these. At the level of estimating per capita food availability, errors in FBS estimates can result from incomplete or out-of-date country-specific population estimates which are usually based on the resident population and do not take into account tourists, illegal immigrants or refugees. This issue may be particularly pronounced for many sub-Saharan African countries where published population census data are often out of date and are likely to suffer from undercounting and misreporting due to issues of accessibility, risk and the conceptual problems of encompassing highly mobile populations and complex patterns of household formation (Sender et al. 2005). FBS data provide incomplete information on the level of home production of foods or on the level of processing different food commodities undergo prior to their availability for consumption. In many low-income countries, foods produced at home (which do not reach the market) remain largely unprocessed and are predominant in the household diet. In contrast, as countries undergo the nutrition transition, foods are often highly processed, and FBS data based on the production and trade of agricultural commodities are unable to provide information on the composition of the processed foods actually available for consumption. Finally, a key source of error in using FBS food availability statistics as a proxy for food consumption is that FBS data do not allow for food waste at the retail and household level. 
This level of food wastage can be particularly high in urban areas of developed countries, but will vary greatly both between and within countries. In the UK, it has been estimated that one-third of all food purchases (i.e. foods available for consumption at the household level) are thrown away, equating to 6.1 million tonnes of foodstuffs a year (WRAP 2008). HBS, generally conducted by national statistical offices, are available from many countries in the world, including an increasing number of low-income countries (Smith et al. 2006). These surveys generally aim to acquire nationally representative information on household expenditure for a range of commodities, including food, primarily to construct cost-of-living indices. Where HBS include information on the quantities of different types of foods purchased, as well as consumption from own production, this information equates to the food available at the household level and is therefore frequently used as a proxy estimate of consumption in a manner similar to FBS food availability data. In HBS, dietary data are collected as part of the larger household-level survey, which is a strength as the data can be related to the socio-economic status of the household and, provided the sample is representative, regional variations can also be investigated. In reality, however, samples are not always representative due to issues such as the lack of an accurate sampling frame, poor response rates and a tendency to over-sample urban compared with rural areas and poorer compared with wealthier households. Other important limitations of using HBS data to assess the composition of the household diet include a lack of information on food consumed outside the home, on waste within the household or on food used for other reasons (such as pet food) or fed to guests. Measuring the consumption of home-produced food may also prove difficult. In addition, the methodologies used may not be directly comparable between countries (Naska et al. 2009). 
A further important limitation when using the data as a proxy for individual dietary intake is the lack of information on the distribution of food within the household. Intra-household food allocation may be a particular concern in low-income country settings where food consumption is known to vary widely between members of a household, with higher-status household members often consuming considerably more, and better quality, foods than other members of the family (Gomna & Rana 2007; Leroy et al. 2008). A final consideration is that seasonal trends in food consumption are not captured by these surveys unless they are conducted year-round, which has its own consequences in terms of implementation costs. Few studies have quantitatively assessed the comparability of food availability data derived from FBS and HBS. However, a recent comparison of data from 18 European countries reported a general tendency for HBS-derived values to be lower than those from FBS for the major food groups (Naska et al. 2009). Despite the lower values in HBS, estimates from the two methods of the availability of most food groups, with the exception of meat products, correlated well (Naska et al. 2009). HBS and FBS are thus complementary methods of assessing food availability and have an important role to play in informing public policy. However, because of their inherent limitations, they are not able to provide accurate data on food consumption at the individual level (Serra-Majem et al. 2003); a concept that is explored further in the following sections. Direct estimates of individual food consumption for a population are generally derived from surveys conducted on nationally representative samples. When conducted properly, individual dietary intake data from population surveys can often be sub-divided by age and sex categories and used to investigate regional and socio-economic variations. 
There is a surprising paucity of nationally representative surveys even from high-income country settings. Indeed, in order to estimate the consumption of fruit and vegetables by individuals worldwide, the Global Burden of Disease project was only able to identify nationally representative dietary intake survey data from 26 countries and had to rely entirely on FBS food availability data for African countries (Lock et al. 2005). This lack of dietary intake surveys probably arises from the complexities and expense involved in conducting regular high-quality rounds of data collection and analysis, insufficient information on the energy and nutrient composition of local foods and low participant literacy levels in some countries (Ferro-Luzzi 2002). Collecting individual dietary intake data involves methods such as weighed records, 24 h recalls and food frequency questionnaires, none of which is error free. Weighed food records over seven days are generally viewed as the ‘gold standard’ by nutritionists, although it is recognized that respondents must be highly motivated and literate and that the burden of data collection may impact on their dietary behaviour (Gibson 2005). Twenty-four-hour recall methods are commonly used, although they must be repeated on several days to more accurately capture habitual dietary intake (Gibson 2005). Food frequency questionnaires require fewer resources, but there is ongoing debate around the validity of dietary intake data reported via this method (Bingham et al. 2003; Prentice 2003). Difficulties in the comparison and interpretation of individual dietary intake data collected in different countries also arise from the use of diverse study designs, sampling frames, seasonal variation in dietary intake and methods of data collection. 
In order to examine the challenges posed in the comparison of individual dietary intake surveys with the more globally available FBS data on food availability, we present an analysis involving national surveys of individual dietary intake and FBS food availability data from two countries: the UK and Mexico. We selected these two national surveys to compare countries at different stages of development from different regions of the world. We were greatly constrained by the need to find comparable dietary intake survey data, and in this regard, it is noteworthy that we found no low-income or lower middle-income countries for which national-level dietary intake survey data could be obtained. The UK National Diet and Nutrition Survey (NDNS) recruited around 2000 adults from across the UK and collected dietary information using a seven-day weighed record (Henderson et al. 2003). The Mexican Health and Nutrition Survey (MHNS) included 20 000 adults and used a 101-item food frequency questionnaire to record foods eaten over the previous seven days (Ramirez et al. 2009). FBS food availability data from the same year that the surveys were conducted were extracted for both countries from the FAO website. For both the UK and Mexico, individual dietary intake of all macronutrients was substantially lower than that estimated to be available at a national level from FBS data (tables 3 and 4). In the UK and Mexico, energy availability was approximately 70 per cent and 83 per cent higher, respectively, than the average adult energy consumption as estimated from dietary intake surveys. These findings mirror those reported from a comparison of four other high-income countries (Canada, Finland, Poland and Spain), which also demonstrated that FBS food availability data overestimated actual food consumption (Serra-Majem et al. 2003). 
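The 70 and 83 per cent figures above are simple ratios of FBS availability to surveyed intake. A minimal sketch of that arithmetic, using illustrative placeholder values rather than the published UK or Mexican figures:

```python
# Hedged sketch of the overestimation calculation; the kcal values below
# are illustrative placeholders, not the published survey or FBS data.

def availability_excess_pct(fbs_kcal: float, survey_kcal: float) -> float:
    """Percentage by which FBS energy availability exceeds surveyed intake."""
    return (fbs_kcal - survey_kcal) / survey_kcal * 100

# e.g. availability of 3400 kcal/day against a surveyed intake of 2000 kcal/day
print(round(availability_excess_pct(3400, 2000)))  # 70
```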
Similarly, FBS data on fruit and vegetable availability in 15 countries (mostly high income) have been reported to substantially over-estimate actual consumption, although the degree of overestimation varied widely (Pomerleau et al. 2003). It has been suggested that as the food system develops and becomes more complex, the discrepancy between dietary intake and food availability data increases due to a lack of information at the manufacturing level as well as the variations in waste (FAO 1983; Dowler & Ok Seo 1985; Sekula et al. 1991).
Population dietary intake data can be used to assess the adequacy of the diet, to highlight at-risk groups and to assess the effectiveness of interventions aimed at population dietary change. Data from the NDNS suggest that adults in the UK are on average exceeding the recommended intakes of free sugars, total fat and saturated fat (table 3). These average figures obscure what can be a wide variation in intakes; the range between the lowest and highest 2.5 percentiles of percentage energy from fat was 24–47% for men and 22–48% for women (Henderson et al. 2003). In contrast to the UK, total and saturated fat intakes in Mexico appear to lie within the range recommended as a healthy nutrient intake goal, although again these mean values obscure a range of intakes and some individuals will be consuming over 35 per cent of energy from fat. The intake of fats has been shown to increase as countries progress through the nutrition transition, and this difference in intakes may reflect the different transition stages attained by the two countries (Popkin 2006). In Mexico, fruit and vegetable intakes are much lower than the 400 g intake goal and may point to an area of health promotion that requires emphasis. A significant shortcoming in the use and interpretation of FBS food availability data is that they provide no information on the variation of availability by sex, socio-economic status, region or age. Comprehensive national dietary intake surveys, such as the NDNS and MHNS, will stratify dietary intakes into sub-groups, thereby providing important insights into the differential burdens of disease risk factors in addition to highlighting at-risk groups. For example, in the NDNS, low socio-economic status, defined as individuals receiving state benefits, was associated with greater intake of free sugars in both men and women (table 3). 
Such wealth-related differences in diet pattern are well recognized as one of the main causes of social inequalities in health (Robinson et al. 2004; Shelton 2005). Similarly, data from the MHNS showed that individuals from urban areas reported substantially higher intakes of fat and saturated fat than those in rural areas (table 4), highlighting one of the commonly observed trends associated with urbanization, which is in turn one of the key drivers of the nutrition transition (Drewnowski & Popkin 1997). From this brief synopsis of the nutritional intakes of two countries at different stages of development, we can see the wealth of information that may be derived from national surveys and the usefulness of this information for informing nutrition policy. Nationally representative nutritional surveys have not been conducted in the majority of low-income countries (Smith et al. 2006). In South Africa, for example, intake surveys have been carried out for particular regions or for particular population groups (children and pregnant women), but not for the population as a whole. In these settings, FBS food availability data are often used as a proxy for individual dietary intakes despite their important limitations outlined above. FBS data for Bangladesh and Tanzania suggest very low energy availability (table 5), which for Tanzania does not meet the World Food Programme target level of calorie consumption (2100 kcal d−1) (WFP 2007). In addition, only a small proportion of this energy is derived from animal sources, suggesting a diet that may be low in certain key vitamins and minerals that are less available from vegetable sources. The Bangladesh data also reveal a level of fat availability that is below the minimal desirable intake of 15 per cent of energy (FAO/WHO 1994). 
However, given the substantial limitations of using FBS food availability data as a proxy measurement of food consumption, it seems pertinent to question the validity of the data presented in table 5.
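The adequacy checks applied in the discussion of table 5 above (energy availability against the World Food Programme's 2100 kcal per day target, and fat energy share against the FAO/WHO minimum of 15 per cent of energy) can be sketched as below. The input figures are illustrative, not the table 5 values; the 9 kcal per gram energy factor for fat is the standard Atwater factor.

```python
# Hedged sketch of the two adequacy checks discussed in the text; input
# figures are illustrative, not the Bangladesh or Tanzania FBS values.

WFP_ENERGY_TARGET_KCAL = 2100  # WFP target calorie consumption (WFP 2007)
MIN_FAT_ENERGY_PCT = 15        # minimal desirable fat share (FAO/WHO 1994)
KCAL_PER_G_FAT = 9             # standard Atwater factor for fat

def diet_flags(energy_kcal: float, fat_g: float) -> dict:
    """Flag whether daily energy and fat-energy share meet the two thresholds."""
    fat_energy_pct = fat_g * KCAL_PER_G_FAT / energy_kcal * 100
    return {
        "meets_energy_target": energy_kcal >= WFP_ENERGY_TARGET_KCAL,
        "meets_min_fat_share": fat_energy_pct >= MIN_FAT_ENERGY_PCT,
    }

# A diet of 2050 kcal/day with 30 g of fat fails both checks.
print(diet_flags(2050, 30))
```

Of course, such flags inherit every limitation of the underlying FBS availability figures, which is exactly the caveat raised above.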
Only a few studies have investigated the applicability of FBS food availability statistics for assessing dietary consumption in low-income country settings, and these generally conclude that FBS data may underestimate actual intake (Poleman 1981; Svedberg 1999), primarily because people grow, catch and process a large proportion of their diet that does not appear in country-level production statistics. For example, data on milk production and consumption in Bolivia, Kenya and Nepal indicate that only 13 per cent of milk is produced and traded in formal milk chains (Anderson et al. 2004). It is arguably of greater concern to have accurate measurement of food consumption in low- and middle-income countries, where under-nutrition remains coupled with an increasing transition to high-energy, low-nutrient diets. These transitions may not occur uniformly across a country or even within a household (FAO 2006), questioning the usefulness of country-level FBS for providing data that will inform nutrition policy. Nationally representative nutritional surveys are a more accurate and nuanced method of characterizing the diet of a population, and the widespread reliance on FBS food availability data in poorer countries has important implications for the limits of our understanding of diet in these settings, not least because of the paucity of FBS statistics from these regions. The incomplete nature of the available agricultural production and dietary intake data poses significant limitations on our ability to provide guidance to policy makers on ensuring food security for all. Projected agricultural production estimates are based on global food availability data and the likely changes in availability in light of historical patterns (FAO 2009b). Thus, the FAO estimates that by 2050 the global average daily calorie availability will reach 3050 kcal per person (FAO 2009b). 
While this estimate suggests that there should be sufficiency in terms of calorie availability, it does not mean that in 2050 all nine billion inhabitants of the Earth will be able to consume a healthy diet. Inaccuracies in measuring or estimating food consumption undermine our capacity to know whether we are currently able to feed the world healthily and to assess the impact of projected agricultural trends. It is noteworthy that in the millennium development goals (MDGs), the consumption-related indicator for reducing hunger (MDG 1C) is the proportion of the population below the minimum level of dietary energy consumption (based on food availability data), a statistic undermined by the limitations of FBS with its limited information on the distribution of food consumption, and also a statistic lacking any direct emphasis on dietary quality. Influencing the future production and processing of food requires a thorough understanding of the impacts of a changing food system on health, which will in turn rely on accurate data from each stage of the food system: from production to consumption. A good understanding of what foodstuffs are being produced, imported and exported in different countries and regions not only allows surveillance of current production for nutritional planning, but provides a means of evaluating policy interventions aimed at improving production for nutritional (and other) goals or assessing other shocks to the food system, such as the recent global financial crisis. As countries progress through economic and nutrition transitions, with a greater proportion of the diet becoming processed foods, the food system becomes increasingly complex, and traditional calculations of commodity availability are a poor proxy for consumption patterns of nutrients (Dowler & Ok Seo 1985). 
A thorough understanding of the impact of the changing food system on health will therefore require information on the combining, mixing and removing of nutrients during the manufacturing of processed products (FAO 2004) and/or detailed information on nutritional intakes. Although considered the ‘gold standard’ for monitoring population dietary intake, nationally representative data on food consumption are only available for a small minority of countries and this situation is unlikely to improve in the short term due to resource constraints. We have shown that food availability data cannot be used interchangeably with food consumption data. Moreover, the accuracy of statistics behind food availability data is extremely variable, and it seems unlikely that current institutional incentives to improve the system will be adequate to significantly enhance data collection and analysis. Notwithstanding these concerns, accurate data on food consumption are a vital component of effective planning of public agricultural investments and for the implementation of sound public health nutrition policy. To improve our capacity to predict the health consequences of changes in agriculture and food systems, we propose the following areas for future work:
In recent years, the world has seen dramatic change and improvements in data collection for other aspects of the economies of low- and middle-income countries, such as poverty data capture and analysis relating to the MDGs. First, there are strong arguments that, as the MDGs come to be reviewed towards 2015, there should also be a refinement in data collection and analysis processes to ensure that links between food production, processing and consumption can be placed in a systems framework that demonstrates access not only to food but also to the right balance of key nutrients. This will require substantial resources, but its linkage to globally agreed goals will make such investment more likely. Secondly, the conditions are right today for public–private partnership approaches to healthier diets, with potential for greater collection of consumption data by the private sector. Major food manufacturers and retailers are increasingly aware of the significance of food quality, diet and health for social responsibility in relation to consumers, as indeed they are of the significance of agricultural production conditions for social and environmental responsibilities among suppliers. Moreover, through electronic data collection at the point of sale, major manufacturers and retailers are the repositories of at least some of the food production, processing, preference and purchase data for which there are public sector lacunae. While there remain considerable shortcomings in these data for assessing food consumption (no information on food distribution, waste and so on), they could represent an important untapped resource on patterns of food purchase in the retail sector. The dramatic spread of supermarkets in low- and middle-income countries (Reardon et al. 2003) may make such measurement particularly valuable there, where there is little public sector investment. 
The arguments presented so far address our need for a better understanding of the current relationship between agriculture and health. But they also apply to our desire to predict the health consequences of future agricultural change and to support the evaluation of different potential interventions to improve health through changing agriculture and food systems. Here we highlight a few trends and opportunities where improvements in measurement will be essential. Many of these relate to diet and nutrition, but others relate to factors resulting from the health ‘externalities’ of agricultural change. With an increasingly clear picture of what constitutes a healthy diet, we will see a growing effort to ensure equity of access. The public sector will see this as a social obligation, and the private sector will be increasingly motivated to contribute, as is clear from the recent investment of food producers in research and promotion for healthy diets. We will be faced with a range of opportunities to improve diets, many of which exist today at some level, and include among others:
These different agri-health interventions and others may have the potential to improve the health of all populations. Predicting their health outcomes will be essential to calculate the long-term health gains associated with the short-term private or public sector investment required, providing the basis for selecting—and selecting between—these different approaches for specific situations. For instance, the vitamin A-associated health benefits of uptake of new ‘golden rice’ varieties, genetically modified to express beta-carotene, have been calculated in terms of disability adjusted life years (Stein et al. 2006), which can in turn be used to calculate the rates of return on agricultural investment. There will also be a need to measure non-dietary health effects of changes in agriculture and food systems, as exemplified by the ‘livestock revolution’, an increase in meat and dairy production to respond to the growing demands of wealthier, urban populations in developing countries (Delgado et al. 1999). Much of the recent growth in the livestock protein supply has come from intensive monogastric systems and, to some extent, from a growth in milk production. The dramatic increases in livestock production have been celebrated, but this trend has also generated concerns about the contribution of meat and dairy products to the dietary transition and the growth of chronic diseases (Popkin 2009). Concerns have also been raised in relation to health externalities such as the impact on the livelihoods of traditional farmers (Haan et al. 2001; Hefferman 2002) and potential negative environmental impacts (Steinfeld et al. 2006). There has been concern about growing problems with the control of transboundary animal diseases and, more specifically, the emergence and resurgence of dangerous zoonotic diseases (Greger 2007).
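The step from DALYs to rates of return mentioned above amounts to a simple cost-effectiveness calculation: discounting the stream of health gains and dividing the investment by it. The sketch below is a generic illustration in that spirit, not the method of Stein et al. (2006); the function name, the programme figures and the assumption of a constant annual health gain are all ours.

```python
# Hedged sketch: converting estimated DALYs averted by an agricultural
# intervention into a cost-effectiveness figure. All inputs are
# hypothetical; this is not the Stein et al. (2006) model itself.

def cost_per_daly_averted(investment, dalys_averted_per_year,
                          years, discount_rate=0.03):
    """Cost per DALY averted, discounting future health gains at the
    3% rate conventionally used in burden-of-disease analyses."""
    discounted_dalys = sum(
        dalys_averted_per_year / (1 + discount_rate) ** t
        for t in range(1, years + 1))
    return investment / discounted_dalys

# E.g. a hypothetical $20m biofortification programme averting
# 15,000 DALYs per year over 10 years:
ratio = cost_per_daly_averted(20_000_000, 15_000, 10)
print(f"${ratio:,.0f} per DALY averted")
```

Figures of this kind allow agricultural investments to be compared on the same footing as conventional health interventions, which is what makes them useful for the between-approach selection the text describes.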
There are persuasive arguments that recent rises in disease problems are related to changes in livestock production systems and the increase in livestock populations (Leibler et al. 2009), although the capacity to collect data on and to analyse these systems continues to be weak. These health externalities add to the challenge of developing agricultural systems that support health, but they also create indirect opportunities for health improvement. For example, a recent study estimates that reducing the production of animal source food products (especially but not only in high-income countries) could be an important strategy for achieving greenhouse gas mitigation targets. If reduced production also results in reduced consumption of animal source foods, it will represent an important health ‘co-benefit’ of an agri-environmental intervention (Friel et al. 2009). The global agricultural system is primarily concerned with ensuring that sufficient food (in terms of calories) will be produced to feed the projected global population of nine billion in 2050 (FAO 2009b). However, to tackle global public health problems associated with both under- and over-nutrition, healthy diets must be sufficient not just in calories but also in the balance of macronutrients, vitamins and minerals. Our increasingly sophisticated understanding of the association between diet and health should now prioritize health as a key driver of future agricultural production. The poor quality and paucity of available information on food production and individual-level food consumption, especially in the most nutritionally challenged regions of the world, severely hamper our efforts to link agricultural production with health. Furthermore, limitations in the available evidence look set to increase as the food system becomes more complex and global in nature.
It is clear that food availability statistics should not be used as an estimate of individual dietary consumption, and that actual food consumption data will be needed to assess the health impacts of future developments in agricultural and food systems. The enormous challenge of global food security is likely to stimulate considerable investment and innovation in agriculture and food science in the coming decades, which will hopefully contribute to improving food supply at a global level. However, too narrow a focus on cereal improvements and calorie supply alone will not eradicate under-nutrition or address the health challenges arising from the nutrition transition. An integration of agricultural innovation and population health planning is required, based on metrics that will allow us to better understand the impact of agriculture and food systems on population health.

We are grateful to Sema El-Jamali for assisting with data extraction. Funding support for the production of this report was provided by the UK Government Office for Science's Foresight Global Food and Farming Futures Project. The authors have no conflicts of interest to declare.

Footnotes: One contribution of 23 to a Theme Issue ‘Food security: feeding the world in 2050’. While the Government Office for Science commissioned this review, the views are those of the author(s), are independent of Government, and do not constitute Government policy.

© 2010 The Royal Society. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.