[eng] Territories change constantly and systematically, driven by anthropogenic influence and encouraged or constrained by the biophysical environment. The factors that gave rise to the particular spatial expression of the cantons of Otavalo, Pujilí and Guamote are related to their differentiated cultural roots, a product of a historical heritage in which the indigenous peoples who settled in the study area, living alongside inhabitants of other ethnic groups, have set the patterns of behaviour, especially in the canton of Otavalo. This research found that the social, cultural, environmental and economic impacts and consequences of biophysical and anthropogenic territorial dynamics are dissimilar across the cantons under study, despite the fact that rural population density and a majority indigenous presence are similar in all three. Greater poverty was evident in Pujilí and Guamote than in Otavalo, even though Otavalo's surface area is one third that of either of the other two cantons and its population is three times larger. It can be inferred that scarce land and a large population led the indigenous people of Otavalo to move into economic activities that do not depend on the land resource, whereas in Pujilí and Guamote the availability of more territory has encouraged the population to see agriculture and livestock farming as their productive option. One consequence of these differentiated dynamics is land-use conflict, most visibly the transgression of the land's natural aptitude by farming soils whose aptitude is conservation.
Differentiated public policies that consider environmental, sociocultural and economic aspects from an integrating perspective at the level of the territorial unit, the canton, would make a difference, especially in Guamote and Otavalo, where indigenous people have held local political power for some decades through popular elections, while in Pujilí local political power remains in the hands of the mestizo ethnic group. To measure poverty as a consequence of the dissimilar territorial and sociodemographic dynamics in the study area, the Multidimensional Poverty Index (MPI) was applied, which comprehensively identifies the deprivations of the inhabitants of a given territorial unit. The results show that Otavalo has the best conditions, with an MPI of only 43%, against 50% for Guamote and 53% for Pujilí; that is, the population of Otavalo is better off than that of the other two cantons, even though it is the most densely populated. In this context, the main conclusion of the study is that the dissimilar social, cultural, environmental and economic impacts and consequences of biophysical and anthropogenic territorial dynamics in Otavalo, Guamote and Pujilí have produced a differentiated territorial configuration, with greater poverty evident in Pujilí and Guamote than in Otavalo. The higher percentage of people living in multidimensionally poor households suggests that moving into activities that do not depend on cultivating the land as the sole livelihood should be an option for reducing poverty in Pujilí and Guamote, accompanied by differentiated public policies that consider environmental, sociocultural and economic aspects from an integrating perspective at the level of the territorial unit, in this case the canton.
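The MPI figures reported above can, in principle, be reproduced with the Alkire-Foster counting method, on which the standard Multidimensional Poverty Index is built. The sketch below is illustrative only: the indicators, weights and poverty cutoff `k` are assumptions for demonstration, not the actual choices made in the study.

```python
# Illustrative Alkire-Foster MPI computation. Indicator set, weights and
# the poverty cutoff k are hypothetical, not the study's actual values.

def mpi(deprivations, weights, k=1/3):
    """deprivations: one list of 0/1 deprivation flags per household.
    weights: one weight per indicator, summing to 1.
    k: poverty cutoff - a household is multidimensionally poor if its
       weighted deprivation score is >= k.
    Returns MPI = H * A (incidence times intensity among the poor)."""
    scores = [sum(w * d for w, d in zip(weights, hh)) for hh in deprivations]
    poor = [s for s in scores if s >= k]
    if not poor:
        return 0.0
    H = len(poor) / len(deprivations)  # incidence: share of poor households
    A = sum(poor) / len(poor)          # intensity: mean score among the poor
    return H * A

# Toy example with three equally weighted indicators
households = [
    [1, 1, 0],  # deprived in 2 of 3 indicators -> score 2/3, poor
    [0, 0, 1],  # score 1/3, exactly at the cutoff -> poor
    [0, 0, 0],  # not deprived
]
print(mpi(households, [1/3, 1/3, 1/3]))  # MPI for the toy data
```

The decomposition into incidence H and intensity A is what makes the index "comprehensive": two territories with the same headcount of poor households can still differ in how deprived those households are.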
Human beings, with their intelligence and knowledge, should be key actors in achieving better sustainable development.
There are obvious cases in which natural resources have been wasted or sterilized through decisions taken without appropriate information, or by omission, and in which natural hazards have been neither prevented nor adequately managed. The subsoil and its processes therefore have implications for land use planning, involving countless variables that are generally difficult to quantify, weight and integrate. Many methodological designs have been proposed to approach land use planning. This document presents an integral proposal that gives greater relevance to the subsoil, together with biotic and anthropogenic aspects. The document, titled "Consideration of the subsoil in land use planning", is a methodological proposal for regional land use planning and management with emphasis on the subsoil. It demonstrates how the subsoil plays a decisive role among the criteria for constructing proposals and scenarios, and ultimately in human development. Three case studies are developed. The state of the art reviewed includes several methodologies and countless cases. Most of the reported work emphasizes urban, tourism, economic, legal, political and cultural variables, among many others. However, the subsoil, the geology, the mineral resources and the natural hazards present there are rarely considered in most plans, methodologies and, above all, case studies. These are reasons to propose a methodology with its main emphasis on the subsoil, one that does not remain merely conceptual but is shown with concrete equations and examples. The subsoil comprises mineral and groundwater resources together with natural restrictions such as seismicity, volcanoes and landslides, among others.
All of these features in turn have implications for edaphology, landforms, geography, the biotic sphere (flora and fauna) and the anthropogenic sphere (population, education, health and culture). In this way, considering the subsoil is fundamental to any territorial planning process. The subsoil should always be among the variables considered, since it represents the long term. The fact that some projects, cities and regions have been planned without considering the subsoil, and have suffered no incidents, does not mean the approach is correct. Cases are known of planned cities or regions that, after several decades, have been devastated by landslides, mudflows or seismic activity, or that simply lack building materials, drinking water or energy. The purpose of this document is to present an integral, holistic land use planning methodology grounded in the subsoil that involves diverse components and variables: the physical, biotic and anthropogenic environments. The methodology shows how the different variables can be measured, correlated and integrated hierarchically in order to build indicators of geopotential, biopotential and sociopotential. The carrying capacity of a territory for different uses, relative to its potentials, can then be estimated. Integrated indicators are generated for the different environmental conflicts and, drawing on the knowledge of the people taking part in planning and development processes, different land use planning scenarios can be constructed. The methodology is applied to three regions in Colombia. The first is a region that is large and diverse in its geographic, human and cultural aspects, the Department of Cundinamarca, with an area of more than 20,000 km2, where the physical component receives greater weight. The second case is considered at a more local level, where the different components of the system are treated with emphasis on social and cultural matters, up to the construction of development scenarios for the municipality of La Peña.
The third case is a proposal for ordering clay mining in the city of Bogotá, rationalizing the use of the mineral resource by concentrating extraction in the zones with the greatest potential, avoiding dispersion across the territory and making it compatible with other demands on land use. In all three cases the existing information on the territory is the starting point: the different potentials and restrictions are assessed and the carrying capacities are determined, involving the different actors (communities, politicians and interdisciplinary professionals). Different land use planning scenarios are then proposed, according to principles of high consumption, conservation or sustainability of nature. This methodology has some limitations and requires certain adjustments to achieve a greater impact on civil society. The limitations are mainly political: however objective the proposals, decision making is influenced by feelings, pressures, commitments and ego. Nevertheless, it is hoped that this more technical contribution from the geosciences and natural resources will gain greater relevance in the development of the worldwide human community. ; Postprint (published version)
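The hierarchical integration of variables into geopotential, biopotential and sociopotential indicators, and their combination into a capacity-of-acceptance score, can be sketched as a weighted aggregation. Everything below is an illustrative assumption: the variable names, weights and class breaks are invented for the example and are not the thesis's actual equations.

```python
# Illustrative sketch of hierarchical indicator aggregation: normalized
# variables combine by weights into component potentials, which combine
# into an overall carrying-capacity score. All variables, weights and
# class thresholds are hypothetical.

def weighted_index(values, weights):
    """Weighted average of normalized (0-1) variable scores."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(v * w for v, w in zip(values, weights))

def capacity_class(score):
    """Map a 0-1 suitability score to a qualitative acceptance class."""
    if score >= 0.75:
        return "high"
    if score >= 0.50:
        return "moderate"
    if score >= 0.25:
        return "low"
    return "excluded"

# Hypothetical normalized scores for one land unit
geopotential   = weighted_index([0.8, 0.6, 0.9], [0.5, 0.3, 0.2])  # minerals, groundwater, low hazard
biopotential   = weighted_index([0.4, 0.7],      [0.6, 0.4])       # soils, vegetation cover
sociopotential = weighted_index([0.5, 0.6],      [0.5, 0.5])       # accessibility, services

overall = weighted_index([geopotential, biopotential, sociopotential],
                         [0.4, 0.3, 0.3])
print(round(overall, 3), capacity_class(overall))
```

In a real application each land unit of the territory would receive such a score per candidate use, and the resulting classes would feed the high-consumption, conservation or sustainability scenarios mentioned above.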
The aim of this report is to describe and analyze the embodiment of acceptance and recognition in discourses and practices which address cultural diversity in the Swedish educational system. In order to fulfil this general aim, we study how different categories of practitioners in the Swedish school system, such as teachers, headmasters and union representatives, and other stakeholders, such as civil servants and representatives of political parties and civil society, discuss and relate to the claims of recognition put forth by Muslim practitioners and/or to policy measures designed to fulfil those claims. Two cases are studied: the establishment of Muslim independent schools, and the claims put forth by Muslim youth to dress veiled in public schools. The cases were selected with consideration to a number of circumstances. First, the faith and belief practices of Muslim migrants have been debated on a large scale in Swedish media during the last decade, as in many other West European and North American countries. It is quite common that these practices have been put under scrutiny and subjected to extensive critique. The attention paid to Muslim belief practices and institutions has also reached Muslim denominational schools and the practice of wearing the burqa and niqab. The establishment of denominational schools during the last two decades, whether Islamic or not, has also received a lot of attention, in mainstream media as well as in debates on education policy. For instance, a number of political parties have voiced demands to keep down the number of Islamic denominational schools. Second, Muslim migrants have, according to a number of studies, been subjected to direct and indirect discrimination.
Whether this discrimination is primarily religious in nature, or ethnic, and hence targeting their ethnic identity, is not always established, but the extensive negative attention mentioned above suggests that religiously motivated discrimination is either predominant or on the rise. The enactment of Muslim belief practices is not infrequently obstructed. For example, the construction of mosques seldom takes place in silence; frequent and high-pitched voices of rejection and disapproval are common, and once the buildings are completed, the congregations receive numerous threats and insults. The opposition is evident, and two mosques have been burned down. Moreover, women wearing the burqa or niqab report being harassed in public. Apart from the lack of recognition and acceptance in religious matters, the prevalence of discriminatory mechanisms may also obstruct access to welfare services and entry to the labour market. This report consists of two case studies, which rely solely on qualitative data. The main part of the empirical material consists of interviews with 13 persons: three teachers, three headmasters, two union representatives, two civil servants, one jurist, one imam and one representative of a political party. The interviews are used as a source for both cases. In addition to the interviews, we have collected newspaper articles, memos from public authorities, bills introduced to the parliament, debates in the comment fields of newspapers' web editions, et cetera. Being a minor study, some reservations concerning the reliability of our material are necessary: it is difficult to determine whether generalizations can be made from it, that is, whether the viewpoints found in our material overlap with or resemble the attitudes of other teachers, headmasters, et cetera. In sum, a number of disadvantages of the establishment of Islamic denominational schools are expressed.
They are allegedly divisive, both culturally and socially, and the quality of their instruction is supposed to be inadequate in relation to the standards explicated in the national curriculum and syllabi. If the attitudes found in this study are spread all over Sweden, it could reasonably be said that Muslim schools are met with suspicion. Still, few calls for shutting these schools down are voiced. It seems that Muslim denominational schools are tolerated in a literal sense: they are accepted, sometimes pragmatically, but not liked. On the other hand, it could be said that the provision of a juridical and institutional space for religious minorities to establish denominational schools is part of a politics of recognition, i.e. an educational policy which, under auspicious circumstances, might provide the means for religious minorities to receive respect as equal and gain admission as normal. It must also be noted that some of the objections to the existence of denominational schools relate, implicitly and explicitly, to central notions in Swedish educational policy. The notion of equivalence is a keyword in this context, signifying on the one hand a demand for abidance by the national curriculum and syllabi, and on the other a priority of equalizing measures over freedom of choice. The equalizing and integrative objectives of the compulsory school project seem to be vital, but the quest for recognition of minority belief systems is circumscribed. Thus, the reproduction of "demos" is given priority over the recognition of "ethnos". The notion of "equivalence" [likvärdighet] has been a keyword in Swedish educational policy since the 1980s, denoting equalizing ambitions as well as educational uniformity and compliance with steering documents. A number of objections to the practice of wearing the burqa or niqab are put forth by our interviewees. In contrast to the media debate, the argument of gender equality was relatively downplayed.
Rather, the interviewees focused on assumed problems with identification and communication. It was said that the abovementioned veiling practices obstructed the possibility of identifying the students at school, and also rendered communication, and hence instruction, at school more difficult. In comparison with the question of Islamic denominational schools, the non-tolerant stance was more manifest, although few explicit calls for a prohibition were made. Moreover, a specific discursive framing of the veiling practices could be discerned. The wearing of the burqa or niqab was associated with phenomena such as mischief and the hidden, thus casting suspicion over the practice in question. As an instance of everyday life rather than an institutional arrangement, veiling practices could arguably be considered of less concern for educational policy than the establishment and maintenance of Islamic denominational schools. Still, the question of prohibition has gained a lot of media attention during the last years and brought the regulating dimension to the fore. And though our material contains few explicit calls for prohibition, several interviewees claimed that a teacher must see the face of the student in order to instruct and educate. And although the goal of equivalence was less relevant in this matter, the practice of veiling was questioned with reference to universal human rights, such as the rights of the child. The right of the parent to exert influence in religious matters was questioned, since it could be regarded as a limitation of the child's freedom to choose a direction in life. Thus, it seems that the right to wear the burqa and niqab in public schools is among the non-tolerable, although few explicit calls for prohibition can be discerned.
So far, the material in our report, consisting of a relatively limited set of qualitative data, resonates with the broader tendency discerned by Orlando Mella, Irving Palm and Kristin Bromark (2011): the resistance in Sweden against the burqa and the niqab is compact; almost nine Swedes out of ten find it (totally or partly) unacceptable to wear the burqa or niqab at school or at work (Mella et al. 2011:30), whereas seven out of ten find it (totally or partly) unacceptable to wear them at other public places. As noted above, the stress on equivalence consists of two distinct although related arguments. On the one hand, there is a demand for abidance by the law (here: steering documents such as the national curriculum and syllabi), which receives attention partly because Islamic schools are suspected of not following these steering documents. This interpretation of "equivalence" is related to an understanding of the term which has become more and more frequent since the introduction of freedom of choice and independent schools in Swedish educational policy, and the decentralized system of governance of education in Sweden (Lindensjö & Lundgren 2002). In this context, where regulation is obtained through management by objectives and evaluation, and responsibilities are spread between numerous responsible organizations, the goal of equivalence amounts to abiding by the law. On the other hand, there is a wish to maintain socially integrated educational environments, in which students of different ethnicities, classes and genders meet and interact. Thus, the equalizing and integrative objectives which were central to the compulsory school project implemented during the heyday of the Scandinavian welfare regime (Esping-Andersen 1990) seem to be "alive and kicking". But the quest for recognition of minority belief systems, central to the policy of multiculturalism, is circumscribed.
In so far, the arguments employed here give priority to the reproduction of "demos" over the reproduction of "ethnos". It must also be noted that freedom of choice, an important feature in the neoliberal turn of educational policy, does not seem to be so important for the interviewees in this particular matter. If we focus on the most elaborated objections in the report, we find arguments which 1) were presented as a response to the presumably universalist claims of freedom of religion, thus setting the professional considerations presented above in a more general, ethical context, and 2) focused on an ethical value of overriding importance, viz. the rights of the child. Emphasis is laid on the right of the child to "choose his own path", a wording used by several interviewees, which most of all seems to refer to the first paragraph of article 14 of the United Nations Convention on the Rights of the Child, which aims at protecting "the right of the child to freedom of thought, conscience and religion". In the arguments of the teachers, the headmasters and the union representatives, this ethical principle makes it possible to assert that children possess freedom from the religion (as well as from other loyalties, or sets of ideas and beliefs) of their parents. Although not explicitly questioning the parents' right to raise and guide their own children, they distinctly emphasize the autonomy of the child, and its potential to choose something other than that which is given within the family. The emphasis on the rights of the child is regularly explicated in a specific discursive context. The right to "choose one's own path" is contrasted with the restrictions inherent in the religiosity of the parents. Religion is regularly depicted as the repressive force, and the secular mindset as the entity in need of protection. The possibility of secular parents suppressing religious inclinations among their children is never represented.
Evidently, a discursive coupling of religion with repression and secularism with liberation may be discerned in the claims for freedom from religion. It may also be noted that the impact of parental (Islamic) faith is the only aspect of upbringing which is questioned in this context. The arguments against tolerance or recognition of Islamic belief practices in this report are not primarily based on Islamophobic or Orientalist discourses, but on notions of equality. The interviewees stress the professional aspect of their opposition to veiling practices. They dissociate themselves from standpoints put forth in the media, above all those which solely focus on the gender aspect of complete veiling practices. Instead, their emphasis on the professional educator dimension entails a focus on communication and identification. These acts of discursive positioning might be seen as an effort to "maximize the intertextual gap" between their own argument and the media discourse, which to a fair-sized extent was articulated by radical right-wing populists. This dilemma is solved by the rhetoric of equivalence, which offers a way to reject claims of recognition in tandem with the defence of values as diverse and important as social justice, the rule of law and the freedom of the individual (child). Thus, the non-tolerance of religiously motivated veiling practices can be motivated with values which are central to diverse but culturally dominant ideological universes, such as socialism and (neo-)liberalism.
Covering one-third of the world's land, forests provide many essential resources and services on which more than 25% of the world's population directly depend for their livelihoods. Moreover, because of their importance in the carbon and water cycles and in the production of renewable materials, forests are one of the main means of mitigating the consequences of current and future global climate change. The need to balance ecological, economic and social roles to achieve sustainable management of forest resources is therefore evident from the local to the global scale. Accurate and up-to-date information on forest dynamics is essential to design and implement sustainable forest management plans. In particular, the development of customized management scenarios based on forest stand characteristics requires accurate growth and yield estimates that can only be derived from permanent forest inventory data. This thesis focuses on the relevance of forest modelling from permanent forest inventory data to improve knowledge of current forest resource stocks and dynamics and to help design sustainable forest management policies. The study concerns the forests of Wallonia, the southern region of Belgium. Walloon forests represent an interesting case study as they are intensively managed, highly fragmented, heterogeneous in structure and composition, and undergoing fast change to adapt to new environmental and economic conditions. The first part of this thesis concerns the search for an appropriate modelling methodology to assess the level of productivity and species-site suitability of forest plantations. We developed an easily replicable methodology relying on stem analysis data to fit robust site index (SI) models, which make it possible to estimate the level of productivity of plantations according to their top height and age. New SI models were calibrated for the three main plantation species in Wallonia: Norway spruce, Douglas-fir and larches.
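The general idea of a site index model (predicting stand productivity from top height and age) can be sketched as follows. The Chapman-Richards curve form, the synthetic stem-analysis observations and the 50-year reference age are assumptions chosen for illustration only, not the model form or data actually used in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(age, asymptote, rate, shape):
    """Top height (m) as a function of stand age (years)."""
    return asymptote * (1.0 - np.exp(-rate * age)) ** shape

# Synthetic stem-analysis observations (age in years, top height in m),
# invented for the sketch.
ages = np.array([10, 20, 30, 40, 50, 60, 70], dtype=float)
heights = np.array([8.3, 16.2, 22.5, 26.3, 29.2, 30.7, 32.1])

# Fit the growth curve; bounds keep the optimizer in a physically sensible region.
params, _ = curve_fit(chapman_richards, ages, heights,
                      p0=(35.0, 0.05, 1.5),
                      bounds=([1.0, 0.001, 0.5], [60.0, 0.5, 5.0]))

# Site index: predicted top height at a reference age (here 50 years).
site_index = float(chapman_richards(50.0, *params))
```

In practice, one such curve (or a family of polymorphic curves) is fitted per species, and a plantation's SI is read off from its measured top height and age.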
These new models make it possible to account for the significant changes observed in the SI distribution over time, which caused biases in previous models. Our work suggests that these changes are mainly due to later harvesting and lower replanting rates of softwood plantations located on less productive sites, practices that are not exclusive to Wallonia. The second part of this work concerns the development of forest models able to accurately estimate the effect of novel management scenarios on the development of forest resources. Using our new SI models and thousands of field experiment data points, we developed new harmonized growth models for even-aged pure stands of Norway spruce, Douglas-fir and larches that have a wider validity than is possible with regional data alone. They highlight the very significant effect of stand density and social status on individual tree growth, and demonstrate a lasting effect of early selective thinning on growth which does not decline as rapidly with age as previously thought. We also found that, under similar growing conditions, Douglas-fir yields are one-third higher than those of Norway spruce and larches. An economic analysis of the evolution of harvest revenues in Norway spruce and Douglas-fir plantations also shows that although the discounted value of final harvests peaks at around 60 years, selective thinning can provide significant regular income until much later. The final part of this thesis concerns the design of a forest modelling methodology that A) provides detailed estimates of stock, growth, yield and harvested volumes, B) allows a direct use of permanent forest inventory data for calibration and initialization of the simulation, and C) can be applied at the regional level. It resulted in the development of the individual-tree, distance-independent forest model SIMREG, which is based on permanent data from the Permanent Forest Inventory of Wallonia (IPRFW).
The tree-level approach made it possible to identify several inconsistencies in previous estimates from the IPRFW. In particular, we conclude that the net growing stock increase between the first and second IPRFW cycles was previously overestimated by several million m³, while the total yield was underestimated by about 25%. These differences result from SIMREG's consideration of previously unmeasured forest resources and from significant methodological changes between the two IPRFW cycles. Our simulations also confirm the rapid decline of spruce in favour of Douglas-fir and various hardwood species, but suggest that the higher yield of Douglas-fir and the renewal of aging or low-productivity Norway spruce plantations could largely compensate for the net decline in softwood forest area. We conclude that the standing stock of Walloon forests is managed sustainably overall, but not at the species level. The decline of Norway spruce plantations is arguably justified by numerous relevant reasons, such as its presence on unsuitable sites, its sensitivity to drought, bark beetles and windthrow damage, and the higher yield of Douglas-fir. Nevertheless, Norway spruce is still the main production species in Wallonia, and our work has resulted in several new recommendations to improve its management and that of its main substitute species, Douglas-fir. Forest modelling and simulation is a timely issue and similar work is currently being done worldwide. Although our study focused on Wallonia, our modelling methods were designed to use only data collected by most permanent National Forest Inventories (NFIs). For example, SIMREG was successfully adapted to the Flemish and Brussels forests in order to develop the national forest accounting plan of Belgium. We are thus confident that our modelling framework is generalizable to other countries.
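The structure of an individual-tree, distance-independent simulator of this kind can be sketched in a few lines: each tree's growth depends on its own size and on stand-level density, never on the positions of its neighbours. The increment equation and every coefficient below are invented for the sketch and bear no relation to SIMREG's fitted models.

```python
import math
from dataclasses import dataclass

@dataclass
class Tree:
    dbh_cm: float  # diameter at breast height (cm)

def basal_area(trees, expansion=25.0):
    """Stand basal area (m²/ha); each sample tree represents `expansion` stems/ha."""
    return sum(math.pi * (t.dbh_cm / 200.0) ** 2 * expansion for t in trees)

def diameter_increment(tree, stand_basal_area):
    """Annual dbh increment (cm/yr): a size-related potential reduced by stand
    density. All coefficients are placeholders invented for the sketch."""
    potential = 0.8 * max(0.0, 1.0 - tree.dbh_cm / 80.0)
    competition = max(0.0, 1.0 - stand_basal_area / 60.0)
    return potential * competition

def simulate(trees, years):
    """Distance-independent simulation: growth uses stand totals, not positions."""
    for _ in range(years):
        g = basal_area(trees)
        for t in trees:
            t.dbh_cm += diameter_increment(t, g)
    return trees
```

A real simulator adds mortality, ingrowth and harvesting modules on top of this loop, and initializes the tree list directly from permanent inventory plot records.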
In poultry farming, the intestinal microbiota and "gut health" are topical subjects, especially since the EU banned the use of auxinic (growth-promoting) antibiotics to avoid the onset of antibiotic resistance and safeguard consumer health. As a consequence of this prohibition, a higher incidence of enteric diseases has been observed in poultry farming, with loss of productivity and increased mortality. In the post-antibiotics era, probiotics and prebiotics are proposed as a solution to the intestinal problems of poultry. Studies carried out on these bioactives, administered in feed or water, show conflicting results due to different environmental conditions (experimental and field conditions) and modes of use. The in ovo injection of pre-/probiotics and their combination (synbiotics), an emerging and original technique, shows promising results similar to those of auxinic antibiotics. This work aimed to evaluate the effects of these substances, administered in ovo, on growth performance, meat quality traits (cholesterol content, intramuscular collagen properties, fiber measurements), and the presence of histopathological changes in the pectoral muscle (PS) of Ross 308 broiler chickens. On d 12 of incubation, 480 eggs were randomly divided into five experimental groups injected in ovo with different bioactives: C, control with physiological saline solution; T1 with 1.9 mg of raffinose family oligosaccharides (RFOs); T2 and T3 with 1.9 mg of RFOs enriched with two different in-house probiotic bacteria (from the Microbiological Bank of the Institute of Biochemistry and Biophysics, Warsaw, Poland), specifically 1,000 cfu of Lactococcus lactis ssp. lactis SL1 and Lactococcus lactis ssp. cremoris IBB SC1, respectively; and T4 with the commercially available synbiotic Duolac, containing 500 cfu each of Lactobacillus acidophilus and Streptococcus faecium with the addition of lactose (0.001 mg/embryo).
Among the hatched chickens, sixty males were randomly chosen (12 birds from each group) and reared, in accordance with the animal welfare recommendations of European Union Directive 86/609/EEC, in an experimental poultry house that provided good husbandry conditions. Birds were grown up to 42 d in collective cages (n = 3 birds in each of 4 cages per group, the cages serving as replicates). Broilers were fed commercial diets according to their age, and feed and water were provided ad libitum. The amount of feed offered to each cage was recorded, and uneaten feed in each cage was weighed daily (from 1 to 42 d). Cumulative feed intake and feed conversion ratio (FCR) were calculated on a cage basis. At 42 d of age, broilers were weighed individually (after a fasting period of 12 h) and then electrically stunned and slaughtered at a commercial poultry slaughterhouse. At slaughter, hot carcass weight was recorded and carcass yield percentage was calculated. Abdominal fat was removed and weighed, and its percentage was calculated based on hot carcass weight. The pectoral muscle was removed from all carcasses (n = 60) and its percentage was calculated based on hot carcass weight. In addition, pectoral muscle pH was recorded at 45 min (pH45), 12 h (pH12), and 24 h (pH24) postmortem. Samples of the right pectoral muscle of 40 animals (8 birds from each experimental group) were taken and frozen in liquid nitrogen (-196°C) for histological and histopathological analyses. The left pectoral muscle was vacuum packaged and stored frozen (-40°C) until intramuscular collagen (IMC) and cholesterol analyses. To test for significant treatment effects, the data were evaluated by one-way ANOVA and means were separated by Scheffé's pairwise tests (SPSS Inc., 2010). In ovo prebiotic and synbiotic administration had a limited effect on the investigated traits, which depended on the kind of bioactive administered.
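The cage-level FCR computation and the one-way ANOVA step can be sketched as follows. The cage values are hypothetical, and `scipy.stats.f_oneway` stands in for the SPSS procedure; Scheffé's post-hoc pairwise tests are not included in this sketch.

```python
import numpy as np
from scipy.stats import f_oneway

def feed_conversion_ratio(feed_offered_g, feed_refused_g, weight_gain_g):
    """Cage-level FCR = feed actually consumed / body-weight gain."""
    consumed = np.asarray(feed_offered_g, dtype=float) - np.asarray(feed_refused_g, dtype=float)
    return consumed / np.asarray(weight_gain_g, dtype=float)

# Hypothetical cage-level FCR values (4 replicate cages per group, values invented).
fcr_c = [1.62, 1.58, 1.65, 1.60]
fcr_t3 = [1.74, 1.78, 1.71, 1.76]
fcr_t4 = [1.77, 1.73, 1.79, 1.75]

# One-way ANOVA across the three groups; a small p-value indicates that at least
# one group mean differs, which a post-hoc test would then localize.
f_stat, p_value = f_oneway(fcr_c, fcr_t3, fcr_t4)
```

Using the cage rather than the bird as the experimental unit, as the study does, keeps the ANOVA's independence assumption intact, since birds within a cage share feed.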
Commercial synbiotic treatment (T4) reduced carcass yield percentage, and the feed conversion ratio was higher in the T3 and T4 groups than in the other groups. Abdominal fat, ultimate pH, and cholesterol content of the PS were not affected by treatment. Broiler chickens of the treated groups, with slightly greater PS percentage and fiber diameter, had a significantly lower amount of collagen. The greater thickness of muscle fibers (not significant) and the lower fiber density (statistically significant) observed in treated birds, compared with those of the C group, were not associated with histopathological changes in the PS of broilers. The incidence of histopathological changes in broiler chickens from the examined groups was low and did not impair the quality of the meat obtained from these birds. Overall, the results of this work indicate that in ovo administration, by ensuring uniformity of application, dose and duration of treatment, as well as homogeneity of the study population (age, weight), may represent a valid alternative to the traditional, well-established post-hatching administration routes (feed and water), minimizing the variables that could affect the effectiveness of the bioactives. Moreover, the study provides information for an effective application of these natural agents in the future breeding industry, with a significant positive impact on animal welfare and public health.
Nell'era post-antibiotici, i probiotici ed i prebiotici sono proposti come soluzione dei problemi intestinali dei polli. Gli studi condotti su questi composti bioattivi, somministrati direttamente nel cibo o nell'acqua, mostrano risultati contrastanti a causa delle diverse condizioni ambientali (condizioni sperimentali e di campo) e del modo di utilizzo. L'iniezione in ovo di pre-/pro-biotici e loro combinazione (simbiotici), tecnica emergente ed innovativa, mostra risultati promettenti simili a quelli degli antibiotici auxinici. Il presente lavoro ha avuto come scopo quello di valutare gli effetti di prebiotici, utilizzati da soli od in combinazione con batteri probiotici rigorosamente selezionati e caratterizzati (simbiotici; Bardowski e Kozak, 1981;. Boguslawska et al, 2009), somministrati "in ovo", sulle performance di crescita, sulla qualità della carne (contenuto di colesterolo, proprietà del collagene intramuscolare, misure delle fibre muscolari), nonché sull'incidenza di alterazioni istopatologiche a carico del muscolo pettorale di polli da carne linea Ross 308. Al 12° giorno di incubazione, 480 uova sono state divise a random in cinque gruppi sperimentali, cui sono stati somministrati, mediante tecnica di iniezione in ovo, differenti bioattivi: C, gruppo controllo iniettato con soluzione fisiologica; gruppo T1 iniettato con 1,9 mg di prebiotico (oligosaccaride della famiglia del raffinosio, RFO's); gruppi T2 e T3 iniettati condue formulazioni simbiotiche contenenti 1,9 mg di RFO's arricchiti con due diversi batteri probiotici appartenenti alla banca microbiologica dell'Istituto di Ricerca "Biochemistry and Biophysics" (IBB) di Varsavia (Polonia), nello specifico 1,000 ufc di Lactococcus lactis ssp. lactis SL1 e Lactococcus lactis ssp. 
cremoris IBB SC1, respectively; and group T4, injected with a commercially available probiotic, Duolac, containing 500 cfu of Lactobacillus acidophilus and 500 cfu of Streptococcus faecium with added lactose (0.001 mg/embryo). After hatching, sixty male chicks were randomly selected (12 chicks per group) and reared on an experimental farm under good husbandry conditions, in accordance with EU animal-welfare recommendations (Directive 86/609/EEC). The chickens were reared to 42 days of age in collective cages (n = 3 birds per cage; 4 replicates per experimental group), fed ad libitum with commercial diets formulated for their age, and given water ad libitum. The amount of feed offered per cage was recorded, and unconsumed feed was weighed daily for each cage (days 1-42). Feed intake and the feed conversion ratio (FCR) were calculated per cage. At 42 days of age, the broilers were weighed individually (after a 12-hour fast), then electrically stunned and slaughtered at a commercial abattoir. At slaughter, carcass weight was recorded and yield was calculated. Abdominal fat and the pectoral muscle were removed from all carcasses (n = 60) and weighed, and their percentages were calculated relative to carcass weight. In addition, the pH of the pectoral muscle was recorded at 45 min (pH45), 12 h (pH12), and 24 h (pH24) post-mortem. Samples of the right pectoral muscle of 40 animals (8 birds per experimental group) were collected and frozen in liquid nitrogen (-196°C) for histological and histopathological analyses. 
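The per-cage feed bookkeeping described above reduces to a simple ratio; a minimal sketch (all quantities are invented for illustration, not data from the study):

```python
# Illustrative per-cage feed conversion ratio (FCR) calculation,
# mirroring the per-cage bookkeeping described in the abstract.
# All numbers are hypothetical, not data from the study.

def fcr(feed_offered_g, feed_refused_g, weight_gain_g):
    """FCR = feed actually consumed / live-weight gain (same units)."""
    consumed = feed_offered_g - feed_refused_g
    return consumed / weight_gain_g

# One cage of 3 birds over days 1-42 (hypothetical totals):
offered = 21_000.0   # g of feed offered to the cage
refused = 1_500.0    # g weighed back daily as unconsumed, summed
gain = 10_500.0      # g total live-weight gain of the 3 birds

print(round(fcr(offered, refused, gain), 2))  # 1.86
```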
The left pectoral muscle was vacuum-packed and frozen (-40°C) until the analyses of intramuscular collagen (IMC) and cholesterol content. To test for significant differences among treatments, the data were analyzed by one-way ANOVA, and differences between means were evaluated with Scheffé's test (SPSS Inc., 2010). 
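The statistical comparison described above is a one-way ANOVA across the five treatment groups. A minimal from-scratch sketch (the study used SPSS; the per-cage FCR values below are invented for illustration):

```python
# A from-scratch one-way ANOVA, as used to compare the five treatment
# groups. The per-cage FCR values are hypothetical, not study data.

def one_way_anova(groups):
    """Return (F statistic, df_between, df_within) for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares: group sizes times squared mean offsets
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: squared deviations from each group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical FCR values for C, T1, T2, T3, T4 (4 cage replicates each):
fcr_by_group = [
    [1.78, 1.82, 1.80, 1.79],  # C
    [1.77, 1.80, 1.79, 1.78],  # T1
    [1.81, 1.83, 1.80, 1.82],  # T2
    [1.90, 1.93, 1.91, 1.89],  # T3 (higher FCR, as reported)
    [1.92, 1.94, 1.91, 1.93],  # T4 (higher FCR, as reported)
]
f, df_b, df_w = one_way_anova(fcr_by_group)
print(f"F({df_b},{df_w}) = {f:.1f}")  # compare against the F critical value
```

Scheffé's post-hoc test would then compare pairs of group means against a criterion derived from the same F distribution.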
International doctoral programme in Welfare, biotechnology and quality of animal production (XXIV cycle)
Thank you, Chairman. I would like to extend a warm welcome to our keynote speakers: David Byrne of the European Commission, Derek Yach from the World Health Organisation, and Paul Quinn, representing Congressman Marty Meehan, who sends his apologies. When we include the speakers who will address later sessions, this is, undoubtedly, one of the strongest teams that have been assembled on tobacco control in Europe. The very strength of the team underlines what I see as a shift – a very necessary shift – in the way we perceive the tobacco issue. For the last twenty years, we have lived out a paradox. It isn't a social side issue. I make no apology for the bluntness of what I'm saying, and will come back, a little later, to the radicalism I believe we need to bring – nationally – to this issue. For starters, though, I want to lay it on the line that what we're talking about is an epidemic as deadly as any suffered by humankind throughout the centuries. Slower than some of those epidemics in its lethal action, perhaps. But an epidemic, nonetheless. According to the World Health Organisation, tobacco accounted for just over 3 million annual deaths in 1990, rising to 4.023 million annual deaths in 1998. The number of deaths due to tobacco will rise to 8.4 million in 2020 and reach roughly 10 million annually by 2030. This is quite simply ghastly. Tobacco kills. It kills in many different ways. It kills increasing numbers of women. It does its damage directly and indirectly. For children, much of the damage comes from smoking by adults where children live, study, play and work. The very least we should be able to offer every child is breathable air. Air that doesn't do them damage. We're now seeing a global public health response to the tobacco epidemic. The Tobacco Free Initiative launched by the World Health Organisation was matched by significant tobacco control initiatives throughout the world. 
During this conference we will hear about the experiences our speakers had in driving these initiatives. This Tobacco Free Initiative poses unique challenges to our legal frameworks at both national and international levels; in particular it raises challenges about the legal context in which tobacco products are traded and asks questions about the impact of commercial speech especially on children, and the extent of the limitations that should be imposed on it. Politicians, supported by economists and lawyers as well as the medical profession, must continue to explore and develop this context to find innovative ways to wrap public health considerations around the trade in tobacco products – very tightly. We also have the right to demand a totally new paradigm from the tobacco industry. Bluntly, the tobacco industry plays the PR game at its cynical worst. The industry sells its products without regard to the harm these products cause. At the same time, to gain social acceptance, it gives donations, endowments and patronage to high profile events and people. Not good enough. This model of behaviour is no longer acceptable in a modern society. We need one where the industry integrates social responsibility and accountability into its day-to-day activities. We have waited for this change in behaviour from the tobacco industry for many decades. Unfortunately the documents disclosed during litigation in the USA and from other sources make very depressing reading; it is clear from them that any trust society placed in the tobacco industry in the past to address the health problems associated with its products was misplaced. This industry appears to lack the necessary leadership to guide it towards just and responsible action. Instead, it chooses evasion, deception and at times illegal activity to protect its profits at any price and to avoid its responsibilities to society and its customers. 
It has engaged in elaborate 'spin' to generate political tolerance, scientific uncertainty and public acceptance of its products. Legislators must act now. I see no reason why the global community should continue to wait. Effective legal controls must be laid on this errant industry. We should also keep these controls under review at regular intervals, and if they are failing to achieve the desired outcomes we should be prepared to amend them. In Ireland, as Minister for Health and Children, I launched a comprehensive tobacco control policy entitled "Towards a Tobacco Free Society". OTT? Excessive? Unrealistic? On the contrary – I believe it to be imperative and inevitable. I honestly hold that, given the range of fatal diseases caused by tobacco use, we have little alternative but to pursue the clear objective of creating a tobacco free society. Aiming at a tobacco free society means ensuring public and political opinion are properly informed. It requires help to be given to smokers to break the addiction. It demands that people are protected against environmental tobacco smoke and children are protected from any inducement to experiment with this product. Over the past year we have implemented a number of measures which will support these objectives: we have established an independent Office of Tobacco Control, we have introduced free nicotine replacement therapy for low-income earners, and we have extended our existing prohibitions on tobacco advertising to the print media, with some minor derogations for international publications. We have raised the legal age at which a person can be sold tobacco products to eighteen years. We have invested substantially more funds in health promotion activities and we have mounted sustained information campaigns. We have engaged in sponsorship arrangements which are new and innovative for public bodies. I have provided health boards with additional resources to let them mount a sustained inspection and enforcement service. 
Health boards will engage new Directors of Tobacco Control, responsible for coordinating each health board's response and for liaising with the Tobacco Control Agency I set up earlier this year. Most recently, I have published a comprehensive Bill – the Public Health (Tobacco) Bill, 2001. This Bill will, among other things, end all forms of product display and in-store advertising and will require all retailers to register with the new Tobacco Control Agency. Ten-packs of cigarettes will be banned, and transparent and independent testing procedures for tobacco products will be introduced. Enforcement officers will be given all the necessary powers to ensure there is full compliance with the law. On smoking in public places, we will extend the existing areas covered, and it is proposed that I, as Minister for Health and Children, will have the powers to introduce further prohibitions in public places such as pubs and the workplace. I will also provide for the establishment of a Tobacco Free Council to advise and assist on an ongoing basis. I believe the measures already introduced, and those additional ones proposed in the Bill, have widespread community support. In fact, you're going to hear a detailed presentation from the MRBI which will amply illustrate the extent of this support. The great thing is that the support comes from smokers and non-smokers alike. Bottom line, Ladies and Gentlemen, is that we are at a watershed. As a society (if you'll allow me to play with a popular phrase) we've realised it's time to 'wake up and smell the cigarettes.' Smell them. See them for what they are. And get real about destroying their hold on our people. The MRBI survey makes it clear that the single strongest weapon we have when it comes to preventing the habit among young people is price. Simple as that. Price. Up to now, the fear of inflation has been a real impediment to increasing taxes on tobacco. It sounds a serious, logical argument. 
Until you take it out and look at it a little more closely. Weigh it, as it were, in two hands. I believe – and I believe this with a great passion – that we must take cigarettes out of the equation we use when awarding wage increases. I am calling on IBEC and ICTU, on employers and trade unions alike, to move away from any kind of tolerance of a trade that is killing our citizens. At one point in industrial history, cigarettes were a staple of the workingman's life. So it was legitimate to include them in the 'basket' of goods that goes to make up the Consumer Price Index. It isn't legitimate to include them any more. Today, I'm saying that society collectively must take the step to remove cigarettes from the basket of normality, from the list of elements which constitute necessary consumer spending. I'm saying: "We can no longer delude ourselves. We must exclude cigarettes from the considerations we address in central wage bargaining. We must price cigarettes out of the reach of the children those cigarettes will kill." Right now, in the monthly Central Statistics Office reports on consumer spending, the figures include cigarettes. But – right down at the bottom of the page – there's another figure. Calculated without including cigarettes. I believe that if we continue to use the first figure as our constant measure, it will be an indictment of us as legislators, as advocates for working people, as public health professionals. If, on the other hand, we move to the use of the second figure, we will be sending out a message of startling clarity to the nation. We will be saying: "We don't count an addictive, killer drug as part of normal consumer spending." Taking cigarettes out of the basket used to determine the Consumer Price Index will take away the inflation argument. It will not be easy, in its implications for the social partners. But it is morally inescapable. We must do it. Because it will help us stop the killer that is tobacco. 
If we can do it, we will give so much extra strength to health educators and the new Tobacco Control Association. This new organisation of young people, which already has branches in over fifteen counties, is represented here today. The young adults who make up its membership are well placed to advise children of the dangers of tobacco addiction in a way that older generations cannot. It would strengthen their hand if cigarettes moved – in price terms – out of the easy reach of our children. Finally, I would like to commend the many public health advocates who have shown professional, and indeed personal, courage in their commitment to this critical public health issue down through the years. We need you to continue to challenge and confront this grave public health problem and to repudiate the questionable science of the tobacco industry. The Research Institute for a Tobacco Free Society represents a new and dynamic form of partnership between government and civil society. It will provide an effective platform to engage and mobilise the many different professional and academic skills necessary to guide and challenge us. I wish the conference every success.
The very name of this journal, Frontiers in Education, raises the questions of where the "frontiers" should be at present in education research, and how we can ensure that breakthroughs at these frontiers are rapidly put into practice. In this context, this short opinion article argues that in order to answer these two questions adequately, we educators should stop zeroing in on the content or skills we want students to have acquired right at the end of a cycle of studies, be it elementary school, high school, or university. Instead, we should not be afraid to ask people who are many years, even decades, past formal education what they think we should focus on, and where our teaching can be most effective. My own experience with two eye-opening events suggests that, from this lifelong perspective, the teaching activities that are likely to be most influential over the course of a person's life do not necessarily follow traditional formats. In terms of facilitating the implementation of innovative teaching activities, since it seems very arduous to convince schools to add anything to already overloaded curricula, perhaps this is not where we should concentrate our efforts. I argue that the easiest way to implement change may be to first demonstrate practically to future teachers, enrolled in teacher education programs, that even brief learning activities can have enormous merit.

Input from the "real world"

If one considers that the objective of high-school and university education is to eventually equip students, at the age of 22 or 23, with the "body of knowledge" and skills that are relevant in their field of specialty, then the frontier one needs to consider in research encompasses all the novel approaches currently under development to better achieve this specific objective. 
Alternatively, one might adopt the perspective that what happens at the end of undergraduate studies is not necessarily that meaningful, given the fact that graduates will generally work for 42 or 43 years afterwards, change jobs a number of times, and end up doing things for which the content and skills acquired during their undergraduate studies may not be relevant at all. In that lifelong context, one could assume that the "frontier" on which our research in education should focus relates to novel activities that would prepare individuals for the constantly changing landscape that they will face all along their life, including, but not exclusively, during their professional career. One way to get ideas about novel teaching activities we should develop in that latter context consists of simply asking individuals who were in school or at the university 20, 30, or even 40 years ago, to identify the part of their education, taken broadly, that was most valuable to them. The idea is not entirely new, of course, since it was adopted 60 years ago already in the pioneering work of Bertrand Schwartz, in preparation for his landmark reform of the School of Mines in Nancy (France) during the late 1950s (Lambrichs, 2009). Nevertheless, given the strong emphasis in education research on psychometric tests and the application of statistical methods to surveys of large cohorts, a perspective centered on individuals' opinions is likely to be criticized as "anecdotalist", potentially biased, and not sufficiently rigorous. Fortunately, it is not necessary to simply rely on individuals' perceptions, which may indeed be occasionally off if questions are not asked appropriately. Over the last few decades, various educational theorists (e.g., Dominicé, 2000; Dhunpath and Samuel, 2009) have proposed and perfected several techniques to indirectly derive meaningful information about the education of individuals by carefully analyzing biographical stories they may have written. 
Perhaps, as in the field of self-directed learning (Baveye, 2003), this "life history" approach could help us establish on a firm foundation the expanding frontier(s) in education. This message is clearly conveyed by Goodson and Gill (2014), who argue that "life narratives" can help educators shift from a disenfranchised tradition to one of empowerment. Of course, this incorporation of the experience of people "in the real world" into school and university curricula needs to be carried out cautiously. It would not be appropriate for the economic and competitive interests of groups outside the school community to sway the educational process in a direction that is not ultimately in the best interest of students. One also needs to be careful not to disenfranchise teachers or weaken their agency and collaboration. Because they are intimately involved in the educational process, any discussion about its future evolution absolutely needs to involve them directly. In any case, in this context of input "from the real world", the question of which activities were most influential in my own education has intrigued me over the last two or three decades. Clearly, the answer is not to be found in the formal lectures I attended, since I promptly forgot just about everything I learned from them, except in a few rare instances where I had a strong prior motivation to learn the material, on my own if need be. Even in these few cases, most of the content and skills I learned back then turned out to be obsolete after a decade, and had little influence on my later career in science. 
Careful reflection suggests that by far the most significant learning experiences of my entire schooling were associated with two very short, active learning events which, perhaps paradoxically, had nothing at all to do with the training I got at the university.

Short teachable moments…

The first of these two influential events took place when I was 17, in 1974, many years before the Internet or Facebook existed. One of the teachers in my junior year in high school asked our class to attend a politically motivated street demonstration held during the weekend in the nearest big city. The protest was against an exhibition that was organized to promote tourism in a foreign country, which at that time was under a brutal dictatorship. Many non-governmental organizations and citizens' groups were staunchly opposed to the exhibition. The experience of attending the peaceful demonstration, among a crowd we estimated to be around 250,000 to 300,000 people, was eye-opening in itself, but what made it unique was what happened in school the following Monday. In the early morning, the teacher had purchased all the local newspapers he could find. He assigned us to read and analyze the different accounts these newspapers provided of the street demonstration. Whereas left-leaning newspapers reported that upwards of 500,000 people had marched peacefully through the streets, right-wing journals described the demonstration as a small gathering of merely 50,000 people, and commented incorrectly that some of them had become violent. Several newspapers did not report the protest at all. Meanwhile, the police account of the demonstration mentioned an intermediate number of approximately 200,000 people. After less than an hour of reviewing the different articles, many of us admitted in the ensuing discussion that we were stunned by what we had found. 
Even though nowadays "alternative facts" have become commonplace, at the time none of us anticipated the disparities between different newspapers to be so blatant. In hindsight, I feel that in one hour of hands-on, discovery-based exposure to a variety of perspectives on the same event, I learned more about the need for a critical analysis of texts, and for a proper account of the socio-economic and historical context in which texts are written, than in any of the formal courses I took on the subject later on in college. Undoubtedly, my upbringing in a family of intellectuals made me particularly receptive to this kind of teachable moment, and individuals reared in very different environments may not have been as fertile ground as I was, but the fact is that after this single exercise I have never looked at the written word as I did before, nor, for that matter, at anything that lecturers would talk about in the courses I took afterwards (some of which turned out, after a little scrutiny, to deliver information that was obsolete or even erroneous). The second event occurred during the summer just before I started attending the university. My high school had made it possible for a group of students to help with development activities in what was still commonly referred to at the time as the "Third World". Specifically, we were supposed to help missionaries in Rwanda with various practical, mostly agricultural, projects. In the evenings, in the tiny village where we were staying, there was very little to do, except play card and board games. A favorite activity of the missionaries was to play Scrabble. On one of the first nights I was there, we started a game with them, and I played basically the way my mother had shown me: trying to win by placing words that would earn me more points than what the other players could do with their letters. After a couple of moves, the missionaries stopped the game and told me that this was not how they played it at all. 
They proceeded to show me how each of them was trying to place words in such a way that all players together would eventually make the most points possible. So instead of proudly putting "taxes" down in one move, someone would only put down "ax", then another would add a "t" to obtain "tax", then a third could make it "taxes", and finally someone could add "sur" to come up with "surtaxes". All in all, this means that the points of the very valuable letter "x" would count four times instead of one, increasing the combined score of all the players at the end of the game. The details of the Scrabble strategy do not matter that much; what was really a revelation in what the missionaries showed me was that rules can easily be changed to maximize the benefits, in this case the enjoyment, for all involved. The original rules of Scrabble require individual competition among players, and they invariably lead to one player being happy at the end while all the others are less so, to put it mildly. The lesson I got was that it is easy to transform the game into something that is eminently collaborative, such that everyone feels pretty much the same way at the end, depending on how well the group did. As a result, what could be a divisive pastime, especially if one of the players happens to be unreasonably competitive, becomes an activity that strengthens the cohesion of the group and enhances team spirit.

Long-lasting lessons learned…

Over the years, during my subsequent career as a researcher, I have had many occasions to reminisce about these two events. The older I get, the more I realize how extremely significant they were in my education, overshadowing many other aspects of it. As short as it was, the newspaper activity singlehandedly shaped my attitude toward the written word. Every single time I now read an article or a book, I remember the fateful Monday when my classmates and I compared multiple newspaper clippings. 
As a result, I never take any text (or, for that matter, lecture notes) for granted any more, and I am compelled to systematically look for alternative perspectives on the same topics before I formulate my own opinion. This critical attitude toward texts has played a key role in my work, as it has stimulated me not only to keep learning efficiently on my own, as a self-directed learner, from a variety of sources, but also to try to foster self-directed learning in my students (Baveye, 1994). This capacity is particularly crucial in fields in which phenomenal technological breakthroughs occur at very regular intervals, and the need to renew one's knowledge base is constant.
The very name of this journal, Frontiers in Education, begs the questions of where the "frontiers" should be at present in education research, and how we can ensure that breakthroughs at these frontiers be rapidly put into practice. In this context, this short opinion article argues that in order to answer these two questions adequately, we, educators, should stop zeroing in on the content or skills we want students to have acquired right at the end of a cycle of studies, be it elementary, high school, or university. Instead, we should not be afraid to ask people who are many years, even decades, past formal education what they think we should focus on, where our teaching can be most effective. My own experience with two eye-opening events suggests that from this lifelong perspective, teaching activities that are likely to be most influential over the course of a person's life do not necessarily follow traditional formats. In terms of facilitating implementation of innovative teaching activities, since it seems very arduous to convince schools to add anything to already overloaded curricula, perhaps this is not where we should concentrate our efforts. I argue that the easiest way to implement change may be to first demonstrate practically to future teachers, enrolled in teacher education programs, that even brief learning activities can have enormous merit. Input from the "real world"If one considers that the objective of high-school and university education is to eventually equip students, at the age of 22 or 23, with the "body of knowledge" and skills that are relevant in their field of specialty, then the frontier one needs to consider in research encompasses all the novel approaches currently under development to better achieve this specific objective. 
Alternatively, one might adopt the perspective that what happens at the end of undergraduate studies is not necessarily that meaningful, given the fact that graduates will generally work for 42 or 43 years afterwards, change jobs a number of times, and end up doing things for which the content and skills acquired during their undergraduate studies may not be relevant at all. In that lifelong context, one could assume that the "frontier" on which our research in education should focus relates to novel activities that would prepare individuals for the constantly changing landscape that they will face all along their life, including, but not exclusively, during their professional career. One way to get ideas about novel teaching activities we should develop in that latter context consists of simply asking individuals who were in school or at the university 20, 30, or even 40 years ago, to identify the part of their education, taken broadly, that was most valuable to them. The idea is not entirely new, of course, since it was adopted 60 years ago already in the pioneering work of Bertrand Schwartz, in preparation for his landmark reform of the School of Mines in Nancy (France) during the late 1950s (Lambrichs, 2009). Nevertheless, given the strong emphasis in education research on psychometric tests and the application of statistical methods to surveys of large cohorts, a perspective centered on individuals' opinions is likely to be criticized as "anecdotalist", potentially biased, and not sufficiently rigorous. Fortunately, it is not necessary to simply rely on individuals' perceptions, which may indeed be occasionally off if questions are not asked appropriately. Over the last few decades, various educational theorists (e.g., Dominicé, 2000; Dhunpath and Samuel, 2009) have proposed and perfected several techniques to indirectly derive meaningful information about the education of individuals by carefully analyzing biographical stories they may have written. 
Perhaps, as in the field of self-directed learning (Baveye, 2003), this "life history" approach could help us establish on a firm foundation the expanding frontier(s) in education. This message is clearly conveyed by Goodson and Gill (2014), who argue that "life narratives" can help educators shift from a disenfranchised tradition to one of empowerment.Of course, clearly, this incorporation of the experience of people "in the real world" into school and university curricula needs to be carried out cautiously. It would not be appropriate for economic and competitive interests of groups outside the school community to sway the educational process in a direction that is not ultimately in the best interest of students. One also needs to be careful not to disenfranchise teachers or weaken their agency and collaboration. Because they are intimately involved in the educational process, any discussion about its evolution in the future absolutely needs to involve them directly. Anyway, in this context of input "from the real world", the question of which activities were most influential in my education has intrigued me a lot over the last two or three decades. Clearly, the answer is not to be found in the formal lectures I attended, since I promptly forgot just about everything I learned from them, except in a few rare instances where I had a strong prior motivation to learn the material, on my own if need be. Even in these few cases, most of the content and skills I learned back then turned out to be obsolete after a decade, and had little influence on my later career in science. 
Careful reflection suggests that by far the most significant learning experiences of my entire schooling were associated with two very short, active learning events, which, perhaps paradoxically, had nothing at all to do with the training I got at the university.

Short teachable moments…

The first of these two influential events took place when I was 17, in 1974, many years before the Internet or Facebook existed. One of the teachers in my junior year in high school asked our class to attend a politically motivated street demonstration held during the weekend in the nearest big city. The protest was against an exhibition organized to promote tourism in a foreign country, which at that time was under a brutal dictatorship. Many non-governmental organizations and citizens' groups were staunchly opposed to the exhibition. The experience of attending the peaceful demonstration, among a crowd we estimated to be around 250,000 to 300,000 people, was eye-opening in itself, but what made it unique was what happened in school the following Monday. In the early morning, the teacher had purchased all the local newspapers he could find. He assigned us to read and analyze the different accounts these newspapers provided of the street demonstration. Whereas left-leaning newspapers reported that upwards of 500,000 people had marched peacefully through the streets, right-wing journals described the demonstration as a small gathering of merely 50,000 people, and commented incorrectly that some of them had become violent. Several newspapers did not report the protest at all. Meanwhile, the police account of the demonstration mentioned an intermediate number of approximately 200,000 people. After less than an hour of reviewing the different articles, many of us admitted in the ensuing discussion that we were stunned by what we had found.
Even though nowadays "alternative facts" have become commonplace, at the time none of us anticipated the disparities between different newspapers to be so blatant. In hindsight, I feel that in one hour of hands-on, discovery-based exposure to a variety of perspectives on the same event, I learned more about the need for a critical analysis of texts, and for a proper account of the socio-economic and historical context in which texts are written, than in any of the formal courses I took on the subject later on in college. Undoubtedly, my upbringing in a family of intellectuals made me particularly receptive to this kind of teachable moment, and individuals reared in very different environments may not have been as fertile ground as I was, but the fact is that after this single exercise I never looked at the written word as I did before, nor for that matter at anything that lecturers would talk about in the courses I took afterwards (some of which turned out, after a little scrutiny, to deliver information that was obsolete or even erroneous). The second event occurred during the summer just before I started attending the university. My high school had made it possible for a group of students to help with development activities in what was still commonly referred to at the time as the "Third World". Specifically, we were supposed to help missionaries in Rwanda with various practical, mostly agricultural, projects. In the evenings, in the tiny village where we were staying, there was very little to do except play card and board games. A favorite activity of the missionaries was to play Scrabble. On one of the first nights I was there, we started a game with them, and I played the way my mother had shown me: trying to win by placing words that would earn me more points than the other players could get with their letters. After a couple of moves, the missionaries stopped the game and told me that this was not how they played it at all.
They proceeded to show me how each of them was trying to place words in such a way that all players together would eventually make the most points possible. So instead of proudly putting down "taxes" in one move, someone would only put down "ax", then another would add a "t" to obtain "tax", then a third could make it "taxes", and finally, someone could add "sur" to come up with "surtaxes". All in all, that means that the points of the very valuable letter "x" would count four times instead of once, increasing the combined score of all the players at the end of the game. The details of the Scrabble strategy do not matter that much; what was really a revelation in what the missionaries showed me is that rules can easily be changed to maximize the benefits, in this case the enjoyment, for all involved. The original rules of Scrabble require individual competition among players, and they invariably lead to one player being happy at the end while all the others are less so, to put it mildly. The lesson I got was that it is easy to transform the game into something eminently collaborative, such that everyone feels pretty much the same way at the end, depending on how well the group did. As a result, what could be a divisive pastime, especially if one of the players happens to be unreasonably competitive, becomes an activity that strengthens the cohesion of the group and enhances team spirit.

Long-lasting lessons learned…

Over the years, during my subsequent career as a researcher, I have had many occasions to reminisce about these two events. The older I get, the more I realize how extremely significant they were in my education, overshadowing many other aspects of it. As short as it was, the newspaper-related activity single-handedly shaped my attitude toward the written word. Every single time I now read an article or a book, I remember the fateful Monday when my classmates and I compared multiple newspaper clippings.
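The scoring arithmetic behind the Scrabble anecdote above can be made concrete with a short sketch. This is purely a hypothetical illustration: the letter values follow standard Scrabble tiles, and premium squares and rack bonuses are ignored.

```python
# Standard Scrabble values for the letters used in the anecdote's example.
LETTER_VALUES = {"a": 1, "e": 1, "r": 1, "s": 1, "t": 1, "u": 1, "x": 8}

def word_score(word):
    """Score a single word placement (no premium squares, for simplicity)."""
    return sum(LETTER_VALUES[letter] for letter in word)

def combined_score(plays):
    """Total points earned by all players together over a sequence of placements."""
    return sum(word_score(word) for word in plays)

# Cooperative play: the word is built up move by move, so every letter already
# on the board is re-scored each time the word is extended.
cooperative = ["ax", "tax", "taxes", "surtaxes"]
# Competitive play: one player places the full word in a single move.
competitive = ["taxes"]

print(combined_score(cooperative))  # 46: the "x" (8 points) is counted in each of the four placements
print(combined_score(competitive))  # 12: the "x" is counted only once
```

The gap between the two totals is exactly the point of the missionaries' variant: the same tiles generate far more shared points when the word is grown incrementally by the whole table.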
As a result, I never take any text (or, for that matter, lecture notes) for granted any more, and I am compelled to systematically look for alternative perspectives on the same topics before I formulate my own opinion. This critical attitude toward texts has played a key role in my work, as it has stimulated me not only to keep learning efficiently on my own, as a self-directed learner, from a variety of sources, but also to try to foster self-directed learning in my students (Baveye, 1994). This capacity is particularly crucial in fields in which phenomenal technological breakthroughs occur at very regular intervals, and the need to renew one's knowledge base is constant.
Keywords: the concept of «technology transfer», technology fields, local and basic technologies, the object of transfer, contractual relations, intellectual property.

The article considers the methodology of technology transfer from the standpoint of its economic and legal content in the field of intellectual property. It is noted that there is no single definition of technology transfer, as scholars in various fields interpret it through the peculiarities of their own area of activity. At the general level, the field of technology is considered as the birthplace of technologies, their types and maturity, which are the objects of transfer, taking into account the peculiarities of state regulation in the field of transfer. It is in the field of technology that an invention (utility model) is born as a result of intellectual, creative human activity; this process is associated with the material carriers of technologies, or with an intangible phenomenon passing into a material state. Technology transfer is associated with the transition to technical means, technological processes, and computer networks. From a legal standpoint, it is considered a type of communication between business entities on the basis of contractual relations. It is determined that, from the standpoint of the methodology of the transfer of technologies and their components, the question of the origin of technologies and the nature of their creation requires in-depth study, and that it is important to indicate the author(s) (owner(s)) of the result of intellectual, creative activity in the field of intellectual property. The main goal of a technology can be achieved only if there is a quantitative assessment of the perfection of the process and of product quality.
Technology uses two types of models: the ideal objects of the basic sciences, on the basis of which the most general laws and regularities of the natural sciences are formulated, and the ideal objects of technology itself, on the basis of which morphological descriptions of separate stages (technological operations) and functional descriptions of the structure of technological lines are composed. New local technologies (innovative technologies) are the result of inventions and utility models in the field of technology; they have a specific author or authors (inventors) and become the object of transfer. Finally, amendments and additions are proposed to the terms of Article 1 of the Law of Ukraine «On State Regulation of Technology Transfer Activities», namely «intangible assets», «object of technology», «component of technology», and «technology», so that these terms receive a newer and more systematic formulation.
Authors' Introduction

Similar to race, class, and gender, the body is an important signifier that shapes identity, social processes, and life outcomes. In our article, we examine the individual and institutional rewards conferred upon physically attractive individuals and the social stigma and discrimination experienced by the less physically attractive. This body hierarchy is tied in part to the performance of beauty work, including attempts to transform and/or manipulate one's hair, make-up, and body shape or size. We explore these beauty work practices, highlight the gendered nature of this body hierarchy, and situate these practices in debates about agency and cultural structure. Are beauty conformists 'cultural dopes' who buy into an oppressive patriarchal beauty culture that creates docile bodies? Or are these individuals 'savvy cultural negotiators' who participate in beauty work practices to reap material and psychological rewards?

Authors Recommend

Bordo, Susan. 2003. Unbearable Weight: Feminism, Western Culture & the Body. Berkeley, CA: University of California Press.
A series of essays that examine Western body culture, including media images, weight loss practices, reproduction, psychology, medicine, and eating disorders. In her analysis, Bordo adopts a postmodern feminist interpretation, problematizing the female body as a cultural construct.

Davis, Kathy. 1991. 'Remaking the She-Devil: A Critical Look at Feminist Approaches to Beauty'. Hypatia, 6, 21–43.
Drawing on interviews with Dutch cosmetic surgery patients, Davis examines how women account for their decisions to participate in cosmetic surgery and how they view it in light of surgery outcomes. She argues that women actively pursue cosmetic surgery for instrumental reasons, including regaining control of their lives, feeling normal, and/or righting the wrong of ongoing suffering.

Dellinger, Kirsten and Christine L. Williams. 1997. 'Makeup at Work: Negotiating Appearance Rules in the Workplace'.
Gender & Society, 11, 151–77.
Dellinger and Williams analyze in-depth interviews to understand the reasons why women do – or do not – wear makeup in the workplace. Women are negatively sanctioned when they do not wear makeup (e.g. they are questioned about their health or heterosexuality) and are positively rewarded when they do (e.g. they are seen as more credible, feel more confident, etc.). The authors argue that such practices ultimately reinforce inequality between women and men, but that individual resistance strategies are unlikely to be successful given the institutional and structural constraints faced by women.

Gagné, Patricia and Deanna McGaughey. 2002. 'Designing Women: Cultural Hegemony and the Exercise of Power Among Women Who Have Undergone Elective Mammoplasty'. Gender & Society, 16, 814–38.
The authors address two feminist perspectives on cosmetic surgery using interviews with women who have undergone elective mammoplasty. One perspective suggests that women who elect cosmetic surgery are victims of false consciousness whose bodies are disciplined by a male gaze. A second perspective centralizes women's agency; surgery enables women to achieve greater power and control over their lives. They propose a grounded theoretical synthesis, maintaining that surgery can be empowering at an individual level but can also reinforce hegemonic ideals that oppress women as a group.

Gimlin, Debra L. 2002. Body Work: Beauty and Self-Image in American Culture. Berkeley, CA: University of California Press.
Gimlin examines four sites of body work – the beauty salon, aerobics classes, a plastic surgery clinic, and a fat acceptance organization. Relying on ethnographic and interview data, she discusses women's body transformation efforts and how they negotiate the relationship between body and self.

Lovejoy, Meg. 2001. 'Disturbances in the Social Body: Differences in Body Image and Eating Problems among African American and White Women'.
Gender & Society, 15, 239–61.
Lovejoy reviews several perspectives on racial/ethnic differences in body image and eating disorders, including: (1) a psychometric perspective that focuses on attitudinal and perceptual body image; (2) white feminist perspectives that focus on social control and changing gender roles; and (3) black feminist perspectives that claim obesity is a problem for black women, see eating as a mechanism to cope with oppression, and acknowledge black women's susceptibility to eating disorders. According to Lovejoy, black women's positive body satisfaction can be explained through an alternative beauty aesthetic and the cultural construction of femininity in black communities.

Pope, Harrison G., Jr., Katharine A. Phillips and Roberto Olivardia. 2000. The Adonis Complex: The Secret Crisis of Male Body Obsession. New York: The Free Press.
In contrast to the many works that focus on women, these authors discuss appearance stereotypes and appearance work related to men and masculinity. While more journalistic than academic in tone (and quality of research design), the authors draw on surveys, interviews, and archival documents to argue that women's entrance into previously masculine arenas (e.g. male-dominated occupations) has led to a sort of 'threatened masculinity.' As a result, men use their bodies to demonstrate masculinity (e.g. increased musculature) – often through unhealthy behaviors and practices, including steroid use and eating disorders.

Weitz, Rose. 2001. 'Women and Their Hair: Seeking Power through Resistance and Accommodation'. Gender & Society, 15, 667–86.
Based on in-depth interviews with women, Weitz shows how women use their hair (style, length, color, etc.) to conform to, resist, and negotiate hegemonic beauty norms, thereby gaining – or losing – personal and professional power and other advantages.
Weitz's article is particularly useful for illuminating how personal advantages can belie group advantages, as well as the limitations of the agency versus docile bodies argument.

West, Candace and Don H. Zimmerman. 1987. 'Doing Gender'. Gender & Society, 1, 125–51.
This article introduces the idea of gender as an accomplishment or a performance. Femininity and masculinity, the authors argue, do not automatically follow from biological sex. Rather, males and females perform gender in their daily routines and interactions with others. We 'do gender,' for example, through our appearance, behaviors, speech patterns, etc.

Wolf, Naomi. 1991. The Beauty Myth: How Images of Beauty Are Used Against Women. New York: Harper Collins.
This book explores the relationship between unattainable beauty ideals and women's social advancement. Examining issues including work, culture, religion, sex, and hunger, Wolf argues that despite increased advancement in the public sphere, women's self-esteem and equality are stymied by the beauty myth and an obsession with body perfection.

Online Materials

About Face! http://www.about-face.org/
About Face is an organization whose mission is to equip women and girls with tools to understand and resist harmful media messages that affect their self-esteem and body image. The website contains images of positive and negative advertisements (along with discussion questions and company contact information), further reading suggestions, and links to other organizations dealing with either body image or media literacy.

Adios Barbie http://www.adiosbarbie.com/
A website devoted to creating awareness about disempowering cultural messages about bodies, encouraging positive body image, and taking an active role in creating unique versions of beauty and identity.

Jean Kilbourne http://www.jeankilbourne.com/lectures.html
Jean Kilbourne is an author and lecturer whose work focuses extensively on the depiction of women in advertising.
Her website includes resources for change and postings from organizations with opportunities for individuals to get involved in activities/events that challenge destructive media images. The 'Film & Video' link also includes films on advertising and western beauty culture.

Lauren Greenfield http://www.laurengreenfield.com/
Lauren Greenfield is a photographer whose images capture, among other things, the toll of beauty stereotypes and beauty work on women of all ages. Particularly relevant are Greenfield's collections titled Girl Culture and Thin. The website includes photographic images, short films, links to Greenfield's books and films, and further resources, including readings for teens, activists, and educators (including an extensive discussion/exercise guide for Girl Culture).

Love Your Body Day Campaign (National Organization for Women) http://loveyourbody.nowfoundation.org/
Website for NOW's annual body-image campaign, which began in 1998. Includes activism resources (primarily for college campuses), including a PowerPoint presentation with images and text about how commercial images (with a focus on advertising) affect both women and men ('Sex, Stereotypes and Beauty: The ABCs and Ds of Commercial Images of Women').

Newsweek, Lifetime Spending on Beauty http://www.newsweek.com/id/187758
The interactive graphic 'The Beauty Breakdown' shows the average amount that women in various age groups spend on beauty products and services. The graphic also includes links in the right-side menu to other Newsweek articles and photo essays related to beauty work.

Sample Syllabus

We encourage use of this article in various Sociology, Gender and Women's Studies, and Cultural Studies courses, including Introduction to Sociology, Sociology of Gender, and the Sociology of the Body.

Focus Questions
1. In what ways does your level of physical attractiveness affect how others treat you? How do your race and gender shape your response? Consider various contexts including school, work, gym, church, etc., and how social context might affect social treatment.
2. What are some individual and institutional rewards conferred upon physically attractive individuals? How are physically unattractive individuals stigmatized and treated differently? Why do you think individuals make assumptions and treat people differently based on physical attractiveness?
3. What are some common forms of beauty work practices? Do you engage in any of these practices? Why? Why do you think others engage in these practices? How do practices and consequences differ by gender? By race? By sexual orientation?
4. How is beauty work a gendered double standard? That is, how do beauty work 'obligations' differ for women and men? Also, what are some contradictions women face when they perform beauty work? In other words, what are some of the costs of performing – as well as not performing – beauty work?
5. What, if any, forms of resistance are an effective means of social change? Do 'alternative' appearances (e.g. body piercings, scarring, or tattoos) or advertising campaigns such as the Dove Campaign for Real Beauty constitute resistance to beauty ideals that promotes social change? How might different strands of feminist thought envision social change?
Seminar/Project Ideas

Reading Assignment: Beauty Assumptions
Select photos of both conventionally attractive and unattractive men and women from various racial and ethnic backgrounds. Select these photos in pairs, preferably varying everything but the level of physical beauty, e.g. attractive white woman versus unattractive white woman, attractive black man versus unattractive black man. If possible, use 'before and after' makeover photos. Before students read the assigned article, ask them to rate the person depicted in each photo on various personality characteristics. Use semantic differential scales and pairs such as happy–sad, beautiful–ugly, intelligent–unintelligent, healthy–unhealthy, honest–dishonest, friendly–unfriendly, etc. After students have read the article, revisit their responses. Are there any patterns of assumed characteristics based on level of physical attractiveness? How do race and/or gender affect responses? Use this exercise to transition into a discussion of the article.

Journal Assignment: Media and Our Beauty Culture
Ask students to critically examine and document observations about the beauty culture that surrounds them. Over the course of a week, students should pay special attention to what they see on television. In terms of physical attractiveness, who is depicted on television? Moreover, how do depictions vary by physical attractiveness? What roles do physically attractive individuals play? How are they depicted? Conversely, what roles and portrayals are associated with less physically attractive individuals? Would they see similar depictions in other media such as film, magazines, and the internet? In their write-up, students should also discuss the social meanings and significance of these television depictions. For example, do they think these portrayals affect their views of beauty, their assumptions about others, and how they treat others?
This post takes up from two previous posts (part 1; part 2), asking just what do we (we economists) really know about how interest rates affect inflation. Today, what does contemporary economic theory say? As you may recall, the standard story says that the Fed raises interest rates; inflation (and expected inflation) don't immediately jump up, so real interest rates rise; with some lag, higher real interest rates push down employment and output (IS); with some more lag, the softer economy leads to lower prices and wages (Phillips curve). So higher interest rates lower future inflation, albeit with "long and variable lags." Higher interest rates -> (lag) lower output, employment -> (lag) lower inflation.

In part 1, we saw that it's not easy to see that story in the data. In part 2, we saw that half a century of formal empirical work also leaves that conclusion on very shaky ground. As they say at the University of Chicago, "Well, so much for the real world, how does it work in theory?" That is an important question. We never really believe things we don't have a theory for, and for good reason. So, today, let's look at what modern theory has to say about this question. And the two questions are not unrelated: theory has been trying to replicate this story for decades.

The answer: modern (anything post-1972) theory really does not support this idea. The standard new-Keynesian model does not produce anything like the standard story. Models that modify that simple model to achieve something like the result of the standard story do so with a long list of complex ingredients. The new ingredients are not just sufficient, they are (apparently) necessary to produce the desired dynamic pattern. Even these models do not implement the verbal logic above. If the pattern that high interest rates lower inflation over a few years is true, it is by a completely different mechanism than the story tells.
I conclude that we don't have a simple economic model that produces the standard belief. ("Simple" and "economic" are important qualifiers.)

The simple new-Keynesian model

The central problem comes from the Phillips curve. The modern Phillips curve asserts that price-setters are forward-looking. If they know inflation will be high next year, they raise prices now. So inflation today = expected inflation next year + (coefficient) x output gap: \[\pi_t = E_t\pi_{t+1} + \kappa x_t.\] (If you know enough to complain about a \(\beta\approx0.99\) in front of \(E_t\pi_{t+1}\), you know enough to know that it doesn't matter for the issues here.)

Now, if the Fed raises interest rates, and if (if) that lowers output or raises unemployment, inflation today goes down. The trouble is, that's not what we're looking for. Inflation goes down today (\(\pi_t\)) relative to expected inflation next year (\(E_t\pi_{t+1}\)). So a higher interest rate and lower output correlate with inflation that is rising over time.

Here is a concrete example. The plot is the response of the standard three-equation new-Keynesian model to an \(\varepsilon_1\) shock at time 1: \[\begin{align} x_t &= E_t x_{t+1} - \sigma(i_t - E_t\pi_{t+1}) \\ \pi_t & = \beta E_t \pi_{t+1} + \kappa x_t \\ i_t &= \phi \pi_t + u_t \\ u_t &= \eta u_{t-1} + \varepsilon_t. \end{align}\] Here \(x\) is output, \(i\) is the interest rate, \(\pi\) is inflation, \(\eta=0.6\), \(\sigma=1\), \(\kappa=0.25\), \(\beta=0.95\), \(\phi=1.2\).

In this plot, higher interest rates are said to lower inflation. But they lower inflation immediately, on the day of the interest rate shock. Then, as explained above, inflation rises over time. In the standard view, and in the empirical estimates from the last post, a higher interest rate has no immediate effect, and then future inflation is lower.
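To see these claims in numbers, here is a minimal sketch (my own illustrative code, not from the post) that solves the three-equation model above by the method of undetermined coefficients, using the quoted parameter values:

```python
# Illustrative sketch (not the post's code): solve the three-equation
# new-Keynesian model by undetermined coefficients.  Guess pi_t = a*u_t
# and x_t = b*u_t, with E_t u_{t+1} = eta*u_t, and impose
#   Phillips curve: a = beta*eta*a + kappa*b  ->  b = a*(1 - beta*eta)/kappa
#   IS curve:       b*(1 - eta) = -sigma*(1 + a*(phi - eta))
sigma, kappa, beta, phi, eta = 1.0, 0.25, 0.95, 1.2, 0.6

a = -sigma / ((1 - beta * eta) * (1 - eta) / kappa + sigma * (phi - eta))
b = a * (1 - beta * eta) / kappa

T = 10
u = [eta ** t for t in range(T)]            # policy disturbance, u_1 = 1
pi = [a * ut for ut in u]                   # inflation response
x = [b * ut for ut in u]                    # output response
i = [phi * p + ut for p, ut in zip(pi, u)]  # equilibrium interest rate
```

With these parameters, inflation jumps down by about 0.78 on impact and then rises geometrically back toward zero, and output does the same: the "jump down, then rise" shape described in the text, not "nothing now, lower inflation later."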
See the plots in the last post, or this one from Romer and Romer's 2023 summary. Inflation jumping down and then rising in the future is quite different from inflation that does nothing immediately, might even rise for a few months, and then starts gently going down.

You might even wonder about the downward jump in inflation. The Phillips curve makes it clear why current inflation is lower than expected future inflation, but why doesn't current inflation stay the same, or even rise, and expected future inflation rise more? That's the "equilibrium selection" issue. All those paths are possible, and you need extra rules to pick a particular one. Fiscal theory points out that the downward jump needs a fiscal tightening, so it represents a joint monetary-fiscal policy. But we don't argue about that today. Take the standard new-Keynesian model exactly as is, with passive fiscal policy and standard equilibrium selection rules. It predicts that inflation jumps down immediately and then rises over time. It does not predict that inflation slowly declines over time.

This is not a new issue. Larry Ball (1994) first pointed out that the standard new-Keynesian Phillips curve says that output is high when inflation is high relative to expected future inflation, that is, when inflation is declining. Standard beliefs go the other way: output is high when inflation is rising.

The IS curve is a key part of the overall prediction, and output faces a similar problem. I just assumed above that output falls when interest rates rise. In the model it does; output follows a path with the same shape as inflation in my little plot. Output also jumps down and then rises over time. Here too, the (much stronger) empirical evidence says that an interest rate rise does not change output immediately, and output then falls rather than rises over time. The intuition has even clearer economics behind it: higher real interest rates induce people to consume less today and more tomorrow.
Higher real interest rates should go with higher, not lower, future consumption growth. Again, the model only apparently reverses the sign by having output jump down before rising.

Key issues

How can we be here, 40 years later, with the benchmark textbook model so utterly failing to replicate standard beliefs about monetary policy? One answer, I believe, is confusing adjustment to equilibrium with equilibrium dynamics. The model generates inflation lower than yesterday (time 0 to time 1) and lower than it otherwise would be (time 1 without shock vs. time 1 with shock). Now, all economic models are a bit stylized. It's easy to say that when we add various frictions, "lower than yesterday" or "lower than it would have been" is a good parable for "goes down over time." If in a simple supply and demand graph we say that an increase in demand raises prices instantly, we naturally understand that as a parable for a drawn-out period of price increases once we add appropriate frictions. But dynamic macroeconomics doesn't work that way. We have already added what was supposed to be the central friction, sticky prices. Dynamic economics is supposed to describe the time path of variables already, with no extra parables. If adjustment to equilibrium takes time, then model that. The IS and Phillips curves are forward-looking, like stock prices. It would make little sense to say "news comes out that the company will never make money, so the stock price should decline gradually over a few years." It should jump down now. Inflation and output behave that way in the standard model.

A second confusion, I think, is between sticky prices and sticky inflation. The new-Keynesian model posits, and a huge empirical literature examines, sticky prices. But that is not the same thing as sticky inflation. Prices can be arbitrarily sticky and inflation, the first derivative of prices, can still jump. In the Calvo model, imagine that only a tiny fraction of firms can change prices at each instant.
But when they do, they will change prices a lot, and the overall price level will start increasing right away. In the continuous-time version of the model, prices are continuous (sticky), but inflation jumps at the moment of the shock. The standard story wants sticky inflation. Many authors explain the new-Keynesian model with sentences like "the Fed raises interest rates. Prices are sticky, so inflation can't go up right away and real interest rates are higher." This is wrong. Inflation can rise right away. In the standard new-Keynesian model it does so with \(\eta=1\), for any amount of price stickiness. Inflation rises immediately with a persistent monetary policy shock.

Just get it out of your heads. The standard model does not produce the standard story. The obvious response is: let's add ingredients to the standard model and see if we can modify the response function to look something like the common beliefs and VAR estimates. Let's go.

Adaptive expectations

We can reproduce standard beliefs about monetary policy with thoroughly adaptive expectations, in the 1970s ISLM form. I think this is a large part of what most policy makers and commenters have in mind. Modify the above model to leave out the dynamic part of the intertemporal substitution equation, to just say in a rather ad hoc way that higher real interest rates lower output, and specify that the expected inflation that drives the real rate and that drives pricing decisions is mechanically equal to previous inflation, \(E_t \pi_{t+1} = \pi_{t-1}\). We get \[ \begin{align} x_t &= -\sigma (i_t - \pi_{t-1}) \\ \pi_t & = \pi_{t-1} + \kappa x_t .\end{align}\] We can solve this system analytically to \[\pi_t = (1+\sigma\kappa)\pi_{t-1} - \sigma\kappa i_t.\] Here's what happens if the Fed permanently raises the interest rate: higher interest rates send future inflation down (\(\kappa=0.25,\ \sigma=1\)). Inflation eventually spirals away, but central banks don't leave interest rates alone forever.
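A quick way to see both the spiral and how a policy response stabilizes it is to iterate the recursion directly. This is an illustrative sketch (my code, not the post's); the Taylor-rule case anticipates the closure the text turns to next:

```python
# Illustrative sketch of the adaptive-expectations recursion
#   pi_t = (1 + sigma*kappa)*pi_{t-1} - sigma*kappa*i_t.
sigma, kappa, phi = 1.0, 0.25, 1.2

# (a) Permanent peg at i = 1: the root 1 + sigma*kappa = 1.25 exceeds one,
# so inflation spirals down without bound.
pi_peg = [0.0]
for _ in range(20):
    pi_peg.append((1 + sigma * kappa) * pi_peg[-1] - sigma * kappa * 1.0)

# (b) Close the model with a Taylor rule i_t = phi*pi_t + u_t and a
# permanent disturbance u = 1.  Substituting and solving for pi_t gives a
# stable recursion with root (1 + sigma*kappa)/(1 + sigma*kappa*phi) < 1,
# converging to pi = -u/(phi - 1).
u = 1.0
pi_tr = [0.0]
for _ in range(400):
    pi_tr.append(((1 + sigma * kappa) * pi_tr[-1] - sigma * kappa * u)
                 / (1 + sigma * kappa * phi))
```

In case (a) inflation heads off toward minus infinity; in case (b) it settles at \(-u/(\phi-1) = -5\) and the funds rate follows it down, matching the stabilization story described in the text.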
If we add a Taylor rule response \(i_t = \phi \pi_t + u_t\), so the central bank reacts to the emerging spiral, we get this response to a permanent monetary policy disturbance \(u_t\): the higher interest rate sets off a deflation spiral, but the Fed quickly follows inflation down to stabilize the situation. This is, I think, the conventional story of the 1980s.

In terms of ingredients, an apparently minor change of index from \(E_t \pi_{t+1}\) to \(\pi_{t-1}\) is in fact a big change. It means directly that higher output comes with increasing inflation, not decreasing inflation, solving Ball's puzzle. The change basically changes the sign of output in the Phillips curve. Again, it's not really all in the Phillips curve. This model with rational expectations in the IS equation and adaptive in the Phillips curve produces junk. To get the result you need adaptive expectations everywhere.

The adaptive expectations model gets the desired result by changing the basic sign and stability properties of the model. Under rational expectations the model is stable; inflation goes away all on its own under an interest rate peg. With adaptive expectations, the model is unstable; inflation or deflation spirals away under an interest rate peg or at the zero bound. The Fed's job is like balancing a broom upside down: if you move the bottom (interest rates) one way, the broom zooms off the other way. With rational expectations, the model is stable, like a pendulum. This is not a small wrinkle designed to modify dynamics. This is major surgery. It is also a robust property: small changes in parameters do not change the dominant eigenvalue of a model from over one to less than one.

A more refined way to capture how Fed officials and pundits think and talk might be called "temporarily fixed expectations." Policy people do talk about the modern Phillips curve; they say inflation depends on inflation expectations and employment. Expectations are not mechanically adaptive.
Expectations are a third force, sometimes "anchored," and amenable to manipulation by speeches and dot plots. Crucially, in this analysis, expected inflation does not move when the Fed changes interest rates. Expectations are then very slowly adaptive, if inflation is persistent, or if there is a more general loss of faith in "anchoring." In the above new-Keynesian model graph, the minute the Fed raises the interest rate, expected inflation jumps up to follow the graph's plot of the model's forecast of inflation. As a simple way to capture these beliefs, suppose expectations are fixed or "anchored" at \(\pi^e\). Then my simple model is \[\begin{align}x_t & = -\sigma(i_t - \pi^e) \\ \pi_t & = \pi^e + \kappa x_t\end{align}\] so \[\pi_t = \pi^e - \sigma \kappa (i_t - \pi^e).\] Inflation is expected inflation, lowered by higher interest rates (the last minus sign). But those rates need only be higher than the fixed expectations; they do not need to be higher than past rates, as they do in the adaptive expectations model. That's why the Fed thinks 3% interest rates with 5% inflation are still "contractionary"--expected inflation remains at 2%, not the 5% of recent adaptive experience. Also, by fixing expectations, I remove the instability of the adaptive expectations model... so long as those expectations stay anchored. The Fed recognizes that eventually higher inflation moves the expectations, and with a belief that expectations are adaptive, they fear that an inflation spiral can still break out.

Even this view does not give us any lags, however. The Fed and commenters clearly believe that higher real interest rates today lower output next year, not immediately; and they believe that lower output and employment today drive inflation down in the future, not immediately. They believe something like \[\begin{align}x_{t+1} &= - \sigma(i_t - \pi^e) \\ \pi_{t+1} &= \pi^e + \kappa x_t.\end{align}\] But now we're at the kind of non-economic ad-hockery that the whole 1970s revolution abandoned.
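As an illustrative sketch (hypothetical numbers, my own code), the lagged version of these beliefs delivers exactly the timing policy makers describe: a rate hike moves output one period later and inflation two periods later.

```python
# Illustrative sketch of the lagged "anchored expectations" beliefs above:
#   x_{t+1}  = -sigma*(i_t - pi_e)
#   pi_{t+1} = pi_e + kappa*x_t
# Expectations anchored at pi_e = 2, with a one-period rate hike to 3.
sigma, kappa, pi_e = 1.0, 0.25, 2.0
T = 6
i = [pi_e] * T
i[0] = 3.0                    # hike at t = 0 only
x = [0.0] * T                 # output gap starts at zero
pi = [pi_e] * T               # inflation starts at anchored expectations
for t in range(T - 1):
    x[t + 1] = -sigma * (i[t] - pi_e)
    pi[t + 1] = pi_e + kappa * x[t]
```

The hike at t = 0 leaves inflation at 2 in period 1 and only lowers it (to 1.75) in period 2, after output has fallen: the "rates today, output next year, inflation the year after" timing, obtained only through this ad hoc lag structure.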
And for a reason: ad hoc models are unstable; regimes are always changing. Moreover, let me remind you of our quest: is there a simple economic model of monetary policy that generates something like the standard view? At this level of ad-hockery you might as well just write down the coefficients of Romer and Romer's response function and call that the model of how interest rates affect inflation. Academic economics gave up on mechanical expectations and ad hoc models in the 1970s. You can't publish a paper with this sort of model. So when I say a "modern" model, I mean rational expectations, or at least the consistency condition that the expectations in the model are not fundamentally different from forecasts of the model. (Models with explicit learning or other expectation-formation frictions count too.) It's easy to puff about how people aren't rational, and looking out the window lots of people do dumb things. But if we take that view, then the whole project of monetary policy rests on the proposition that people are fundamentally unable to learn patterns in the economy, and that a benevolent Federal Reserve can trick the poor little souls into a better outcome. And somehow the Fed is the lone super-rational actor who can avoid all those pesky behavioral biases. We are looking for the minimum necessary ingredients to describe the basic signs and function of monetary policy. A bit of irrational or complex expectation formation as icing on the cake, a possible sufficient ingredient to produce quantitatively realistic dynamics, isn't awful. But it would be sad if irrational expectations or other behavioral frictions are a necessary ingredient to get the most basic sign and story of monetary policy right.
If persistent irrationality is a central necessary ingredient for the basic sign and operation of monetary policy -- if higher interest rates will raise inflation the minute people smarten up; if there is no simple supply and demand, MV=PY sensible economics underlying the basic operation of monetary policy; if it's all a conjuring trick -- that should really weaken our faith in the whole monetary policy project. Facts help, and we don't have to get religious about it. During the long zero bound, the same commentators and central bankers kept warning about a deflation spiral, clearly predicted by this model. It never happened. Interest rates below inflation from 2021 to 2023 should have led to an upward inflation spiral. It never happened -- inflation eased all on its own with interest rates below inflation. Getting the desired response to interest rates by making the model unstable isn't tenable, whether or not you like the ingredient. Inflation also surged in the 1970s faster than adaptive expectations came close to predicting, and fell faster in the 1980s. The ends of many inflations come with credible changes in regime. There is a lot of work now desperately trying to fix new-Keynesian models by making them more old-Keynesian, putting lagged inflation in the Phillips curve, current income in the IS equation, and so forth. Complex learning and expectation-formation stories replace the simplistic adaptive expectations here. As far as I can tell, to the extent they work they largely do so in the same way, by reversing the basic stability of the model.

Modifying the new-Keynesian model

The alternative is to add ingredients to the basic new-Keynesian model, maintaining its insistence on real "micro-founded" economics and forward-looking behavior, and describing explicit dynamics as the evolution of equilibrium quantities. Christiano, Eichenbaum and Evans (2005) is one of the most famous examples.
Recall that these same authors created the first and most influential VAR that gave the "right" answer to the effects of monetary policy shocks. This paper modifies the standard new-Keynesian model with a specific eye to matching impulse-response functions. They want to match all the impulse-responses, with a special focus on output. When I started asking my young macro colleagues for a standard model that produces the desired response shape, they still cited CEE first, though it's 20 years later. That's quite an accomplishment. I'll look at it in detail, as the general picture is the same as in many other models that achieve the desired result. Here's their bottom-line response to a monetary policy shock (figure from the 2018 Christiano, Eichenbaum and Trabandt Journal of Economic Perspectives summary paper). The solid line is the VAR point estimate and the gray shading is the 95% confidence band. The solid blue line is the main model. The dashed line is the model with only price stickiness, to emphasize the importance of wage stickiness. The shock happens at time 0. Notice the funds rate line that jumps down at that date. That the other lines do not move at time 0 is a result. (I graphed the response to a time 1 shock above.)

That's the answer; now what's the question? What ingredients did they add to the textbook model to reverse the basic sign and jump problem and to produce these pretty pictures? Here is a partial list:

Habit formation. The utility function is \(\log(c_t - bc_{t-1})\).

A capital stock with adjustment costs in investment. Adjustment costs are proportional to investment growth, \([1-S(i_t/i_{t-1})]i_t\), rather than the usual formulation in which adjustment costs are proportional to the investment-to-capital ratio, \(S(i_t/k_t)i_t\).

Variable capital utilization. Capital services \(k_t\) are related to the capital stock \(\bar{k}_t\) by \(k_t = u_t \bar{k}_t\).
The utilization rate \(u_t\) is set by households facing an upward-sloping cost \(a(u_t)\bar{k}_t\). Calvo pricing with indexation: firms randomly get to reset prices, but firms that aren't allowed to reset prices automatically raise prices at the rate of inflation. Prices are also fixed for a quarter. Technically, firms must post prices before they see the period's shocks. Sticky wages, also with indexation: households are monopoly suppliers of labor, and set wages Calvo-style like firms. (Later papers put all households into a union which does the wage setting.) Wages are also indexed; households that don't get to reoptimize their wage still raise wages following inflation. Firms must borrow working capital to finance their wage bill a quarter in advance, and thus pay interest on the wage bill. Money in the utility function, and money supply control: monetary policy is a change in the money growth rate, not a pure interest rate target. Whew! But which of these ingredients are necessary, and which are just sufficient? Knowing the authors, I strongly suspect that they are all necessary to get the suite of results. They don't add ingredients for show. But they want to match all of the impulse response functions, not just the inflation response. Perhaps a simpler set of ingredients could generate the inflation response while missing some of the others. Let's understand what each of these ingredients is doing, which will help us to see whether they are necessary and essential to getting the desired result. I see a common theme in habit formation, adjustment costs that scale with investment growth, and indexation. These ingredients each add a derivative; they take a standard relationship between levels of economic variables and change it to one in growth rates. Each of consumption, investment, and inflation is a "jump variable" in standard economics, like stock prices. Consumption (roughly) jumps to the present value of future income.
The level of investment is proportional to the stock price in the standard q theory, and jumps when there is new information. Iterating forward the new-Keynesian Phillips curve \(\pi_t = \beta E_t \pi_{t+1} + \kappa x_t\), inflation jumps to the discounted sum of future output gaps, \(\pi_t = \kappa E_t \sum_{j=0}^\infty \beta^j x_{t+j}.\) To produce responses in which output, consumption, and investment as well as inflation rise slowly after a shock, we don't want the levels of consumption, investment, and inflation to jump this way. Instead we want growth rates to do so. With standard utility, the consumer's linearized first-order condition equates expected consumption growth to the interest rate, \( E_t (c_{t+1}/c_t) = \delta + r_t \). Habit, with \(b=1\), gives \( E_t [(c_{t+1}-c_t)/(c_t-c_{t-1})] = \delta + r_t \). (I left out the strategic terms.) Mixing logs and levels a bit, you can see we put a growth rate in place of a level. (The paper has \(b=0.65\).) An investment adjustment cost function with \(S(i_t/i_{t-1})\) rather than the standard \(S(i_t/k_t)\) puts a derivative in place of a level. Normally we tell a story that if you want a house painted, doubling the number of painters doesn't get the job done twice as fast, because they get in each other's way. But you can double the number of painters overnight if you want to do so. Here the cost is on the increase in the number of painters each day. Indexation results in a Phillips curve with a lagged inflation term, and that gives "sticky inflation." The Phillips curve of the model (equations (32) and (33)) is \[\pi_t = \frac{1}{1+\beta}\pi_{t-1} + \frac{\beta}{1+\beta}E_{t-1}\pi_{t+1} + (\text{constants}) E_{t-1}s_t\] where \(s_t\) is marginal cost (more later). The \(E_{t-1}\) come from the assumption that prices can't react to time \(t\) information.
Iterate that forward to get (33): \[\pi_t - \pi_{t-1} = (\text{constants}) E_{t-1}\sum_{j=0}^\infty \beta^j s_{t+j}.\] We have successfully put the change in inflation in place of the level of inflation. The Phillips curve is anchored by real marginal costs, and they are not proportional to output in this model as they are in the textbook model above. That's important too. Instead, \[s_t = (\text{constants}) (r^k_t)^\alpha \left(\frac{W_t}{P_t}R_t\right)^{1-\alpha}\] where \(r^k_t\) is the return to capital, \(W_t/P_t\) is the real wage, and \(R_t\) is the nominal interest rate. The latter term crops up from the assumption that firms must borrow the wage bill one period in advance. This is an interesting ingredient. There is a lot of talk that higher interest rates raise costs for firms, and that firms reduce output as a result. That might get us around some of the IS curve problems. But that's not how it works here. Here's how I think it works: higher interest rates raise marginal costs, and thus push up current inflation relative to expected future inflation. The equilibrium-selection rules and the rule against instant price changes (coming up next) tie down current inflation, so the higher interest rates have to push down expected future inflation. CEE disagree (p. 28), writing of an interest rate decline, so all the signs are opposite of my stories: "...the interest rate appears in firms' marginal cost. Since the interest rate drops after an expansionary monetary policy shock, the model embeds a force that pushes marginal costs down for a period of time. Indeed, in the estimated benchmark model the effect is strong enough to induce a transient fall in inflation." But pushing marginal costs down lowers current inflation relative to future inflation -- they're looking at the same Phillips curve just above. It looks to me like they're confusing current with expected future inflation. Intuition is hard.
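To see the level-versus-growth-rate point in numbers, here is a small sketch of the two iterated Phillips curves, with made-up parameter values and a stylized, decaying cost path:

```python
# Compare the iterated standard and indexed Phillips curves
# (my own sketch; beta, kappa, and the cost path are invented for illustration)
beta, kappa, rho, T, J = 0.99, 0.1, 0.9, 20, 200
s = [rho**t for t in range(T + J)]     # persistent marginal-cost / output-gap path
# discounted sum kappa * sum_j beta^j s_{t+j}, truncated at J terms
disc = [kappa * sum(beta**j * s[t + j] for j in range(J)) for t in range(T)]

pi_standard = disc[:]                  # standard curve: the LEVEL of inflation
                                       # jumps at t = 0, then decays with the path
pi_indexed = []                        # indexed curve: the CHANGE in inflation
total = 0.0                            # equals the same sum, so cumulate it
for d in disc:
    total += d
    pi_indexed.append(total)           # inflation starts low and builds slowly
```

Under the standard curve inflation jumps at once to the discounted sum and then decays; under the indexed curve the same sum drives the change in inflation, so inflation starts near zero and builds gradually -- "sticky inflation."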
There are plenty of Fisherian forces in this model that want lower interest rates to lower inflation. More deeply, we see here a foundational trouble of the Phillips curve. It was originally a statistical relation between wage inflation and unemployment. It became a (weaker) statistical relation between price inflation and unemployment or the output gap. The new-Keynesian theory naturally wants to describe a relation between marginal costs and price changes, and it takes contortions to make output equal to marginal costs. Phillips curves fit the data terribly. So authors estimating Phillips curves (an early favorite is by Tim Cogley and Argia Sbordone) go back and separate marginal cost from output or employment. As CET write later, they "build features into the model which ensure that firms' marginal costs are nearly acyclical." That helps the fit, but it divorces the Phillips curve shifter variable from the business cycle! Standard doctrine says that for the Fed to lower inflation it must soften the economy and risk unemployment. Doves say don't do it, live with inflation to avoid that cost. Well, if the Phillips curve shifter is "acyclical," you have to throw all that out the window. This shift also points to the central conundrum of the Phillips curve. Here it describes the adjustment of prices to wages or "costs" more generally. It fundamentally describes a relative price, not a price level. OK, but the phenomenon we want to explain is the common component, how all prices and wages tie together, or equivalently the decline in the value of the currency, stripped of relative price movements. The central puzzle of macroeconomics is why the common component, a rise or fall of all prices and wages together, has anything to do with output, and for us, how it is controlled by the Fed. Christiano Eichenbaum and Evans write (p. 3) that "it is crucial to allow for variable capital utilization." I'll try to explain why in my own words.
Without capital adjustment costs, any change in the real return leads to a big investment jump: \(r=f'(k)\) must jump, and that takes a lot of extra \(k\). We add adjustment costs to tamp down the investment response. But now, when there is any shock, capital can't adjust enough and there is a big rate-of-return response. So we need something that acts like a big jump in the capital stock to tamp down \(r=f'(k)\) variability, but without a big investment jump. Variable capital utilization acts like the big investment jump without us seeing a big investment jump. And all this is going to be important for inflation too. Remember the Phillips curve; if output jumps then inflation jumps too. Sticky wages are crucial, and indeed CEE report that they can dispense with sticky prices. One reason is that otherwise profits are countercyclical. In a boom, prices go up faster than wages, so profits go up. With sticky prices and flexible wages you get the opposite sign. It's interesting that the "textbook" model has not moved this way. Again, we don't write textbooks often enough. Fixing prices and wages during the period of the shock, by assuming price setters can't see the shock for a quarter, has a direct effect: it stops any price or wage jumps during the quarter of the shock, as in my first graph. That's almost cheating. Note the VAR also has absolutely zero instantaneous inflation response. This too is by assumption. They "orthogonalize" the variables so that all the contemporaneous correlation between monetary policy shocks and inflation or output is considered part of the Fed's "rule," and none of it reflects within-quarter reaction of prices or quantities to the Fed's actions. Step back and admire. Given the project "find elaborations of the standard new-Keynesian model to match VAR impulse response functions," could you have come up with any of this? But back to our task. That's a lot of apparently necessary ingredients.
And reading here, or CEE's verbal intuition, the logic of this model is nothing like the standard simple intuition, which includes none of the necessary ingredients. Do we really need all of this to produce the basic pattern of monetary policy? As far as we know, we do. And hence, that pattern may not be as robust as it seems. For all of these ingredients are pretty, ... imaginative. Really, we are a long way from the Lucas/Prescott vision that macroeconomic models should be based on well-tried and measured microeconomic ingredients that are believably invariant to changes in the policy regime. CEE argue hard for the plausibility of these microeconomic specifications (see especially the later CET Journal of Economic Perspectives article), but they have to try so hard precisely because the standard literature doesn't have any of these ingredients. The "level" rather than "growth rate" foundations of consumption, investment, and pricing decisions pervade microeconomics. Microeconomists worry about labor monopsony, not labor monopoly; firms set wages, households don't. (Christiano Eichenbaum and Trabandt (2016) get wage stickiness from a more realistic search-and-matching model. Curiously, the one-big-labor-union fiction is still the most common, though few private sector workers are unionized.) Firms don't borrow the wage bill a quarter ahead of time. Very few prices and wages are indexed in the US. Like habits, perhaps these ingredients are simple stand-ins for something else, but at some point we need to know what that something else is. That is especially true if one wants to do optimal policy or welfare analysis. Just how much economics must we reinvent to match this one response function? How far are we really from the ad hoc ISLM equations that Sims (1980) destroyed? Sadly, subsequent literature doesn't help much (more below).
Subsequent literature has mostly added ingredients, including heterogeneous agents (big these days), borrowing constraints, additional financial frictions (especially after 2008), zero bound constraints, QE, learning, and complex expectations dynamics. (See CET 2018 JEP for a good verbal survey.) The rewards in our profession go to those who add a new ingredient. It's very hard to publish papers that strip a model down to its basics. Editors don't count that as "new research," but just "exposition," below the prestige of their journals. Though boiling a model down to essentials is maybe more important in the end than adding more bells and whistles. This is about where we are. Despite the pretty response functions, I still score that we don't have a reliable, simple, economic model that produces the standard view of monetary policy.

Mankiw and Reis, sticky expectations

Mankiw and Reis (2002) expressed the challenge clearly over 20 years ago. In reference to the "standard" New-Keynesian Phillips curve \(\pi_t = \beta E_t \pi_{t+1} + \kappa x_t\) they write a beautiful and succinct paragraph: "Ball [1994a] shows that the model yields the surprising result that announced, credible disinflations cause booms rather than recessions. Fuhrer and Moore [1995] argue that it cannot explain why inflation is so persistent. Mankiw [2001] notes that it has trouble explaining why shocks to monetary policy have a delayed and gradual effect on inflation. These problems appear to arise from the same source: although the price level is sticky in this model, the inflation rate can change quickly. By contrast, empirical analyses of the inflation process (e.g., Gordon [1997]) typically give a large role to 'inflation inertia.'" At the cost of repetition, I emphasize the last sentence because it is so overlooked. Sticky prices are not sticky inflation.
Ball already said this in 1994: Taylor (1979, 1980) and Blanchard (1983, 1986) show that staggering produces inertia in the price level: prices adjust slowly to a fall in the money supply. ...Disinflation, however, is a change in the growth rate of money, not a one-time shock to the level. In informal discussions, analysts often assume that the inertia result carries over from levels to growth rates -- that inflation adjusts slowly to a fall in money growth. As I see it, Mankiw and Reis generalize the Lucas (1972) Phillips curve. For Lucas, roughly, output is related to unexpected inflation: \[\pi_t = E_{t-1}\pi_t + \kappa x_t.\] Firms don't see everyone else's prices in the period. Thus, when a firm sees an unexpected rise in prices, it doesn't know if it is a higher relative price or a higher general price level; the firm expands output based on how much it thinks the event might be a relative price increase. I love this model for many reasons, but one, which seems to have fallen by the wayside, is that it explicitly founds the Phillips curve in firms' confusion about relative prices vs. the price level, and thus faces up to the problem of why a rise in the price level should have any real effects. Mankiw and Reis basically suppose that firms find out the general price level with lags, so output depends on inflation relative to a distributed lag of its expectations. It's clearest for the price level (p. 1300): \[p_t = \lambda\sum_{j=0}^\infty (1-\lambda)^j E_{t-j}(p_t + \alpha x_t).\] The inflation expression is \[\pi_t = \frac{\alpha \lambda}{1-\lambda}x_t + \lambda \sum_{j=0}^\infty (1-\lambda)^j E_{t-1-j}(\pi_t + \alpha \Delta x_t).\] (Some of the complication is that you want it to be \(\pi_t = \lambda\sum_{j=0}^\infty (1-\lambda)^j E_{t-1-j}\pi_t + \kappa x_t\), but output doesn't enter that way.) This seems totally natural and sensible to me. What is a "period" anyway? It makes sense that firms learn heterogeneously whether a price increase is relative or price level.
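The mechanics are easy to simulate. Consider a one-time, unanticipated, permanent rise in \(m\) from 0 to 1 at \(t=0\), with \(m_t + v = p_t + x_t\) so that \(x_t = 1 - p_t\) after the shock. This is my own sketch of the price-level equation above, with invented values of \(\lambda\) and \(\alpha\):

```python
# Sticky-information response to a permanent money shock (my own sketch;
# lambda and alpha values are invented for illustration)
lam, alpha = 0.25, 0.1
p = []
for t in range(20):
    q = 1 - (1 - lam) ** (t + 1)   # fraction of firm vintages informed of the shock
    # informed firms expect the actual p_t + alpha*x_t, with x_t = 1 - p_t;
    # uninformed firms still expect the pre-shock value, zero.
    # Solving p_t = q * (p_t + alpha * (1 - p_t)) for p_t:
    p.append(q * alpha / (1 - q * (1 - alpha)))
inflation = [p[0]] + [p[t] - p[t - 1] for t in range(1, 20)]
```

The price level crawls up toward its new long-run value of 1, and inflation \(p_t - p_{t-1}\) is hump-shaped, building for several periods before dying out -- inflation inertia, which the standard Calvo curve cannot produce.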
And it obviously solves the central persistence problem with the Lucas (1972) model, that it only produces a one-period output movement. Well, what's a period anyway? (Mankiw and Reis don't sell it this way, and actually don't cite Lucas at all. Curious.) It's not immediately obvious that this curve solves the Ball puzzle and the declining inflation puzzle, and indeed one must put it in a full model to do so. Mankiw and Reis (2002) mix it with \(m_t + v = p_t + x_t\) and make some stylized analysis, but don't show how to put the idea in models such as I started with, or make a plot. Their less well known follow-on paper Sticky Information in General Equilibrium (2007) is much better for this purpose, because they do show you how to put the idea in an explicit new-Keynesian model, like the one I started with. They also add a Taylor rule, an interest rate rather than money supply instrument, wage stickiness, and a few other ingredients. They show how to solve the model, overcoming the problem that there are many lagged expectations as state variables. But here is the response to the monetary policy shock: (Figure: Response to a Monetary Policy Shock, Mankiw and Reis (2007).) Sadly they don't report how interest rates respond to the shock. I presume interest rates went down temporarily. Look: the inflation and output gap plots are about the same. Except for the slight delay going up, these are exactly the responses of the standard NK model. When output is high, inflation is high and declining. The whole point was to produce a model in which a high output level would correspond to rising inflation. Relative to the first graph, the main improvement is just a slight hump shape in both inflation and output responses. Describing the same model in "Pervasive Stickiness" (2006), Mankiw and Reis describe the desideratum well: The Acceleration Phenomenon. ...inflation tends to rise when the economy is booming and falls when economic activity is depressed.
This is the central insight of the empirical literature on the Phillips curve. One simple way to illustrate this fact is to correlate the change in inflation, \(\pi_{t+2}-\pi_{t-2}\), with [the level of] output, \(y_t\), detrended with the HP filter. In U.S. quarterly data from 1954-Q3 to 2005-Q3, the correlation is 0.47. That is, the change in inflation is procyclical. Now look again at the graph. As far as I can see, it's not there. Is this version of sticky inflation a bust, for this purpose? I still think it's a neat idea worth more exploration. But I thought so 20 years ago too. Mankiw and Reis have a lot of citations, but nobody followed them. Why not? I suspect it's part of a general pattern: lots of great micro sticky-price papers are not used because they don't produce an easy aggregate Phillips curve. If you want cites, make sure people can plug it into Dynare. Mankiw and Reis' curve is pretty simple, but you still have to keep all past expectations around as a state variable. There may be alternative ways of doing that with modern computational technology, putting it in a Markov environment or cutting off the lags so that, say, everyone learns the price level after 5 years. HANK models have even bigger state spaces!

Some more models

What about within the Fed? Chung, Kiley, and Laforte 2010, "Documentation of the Estimated, Dynamic, Optimization-based (EDO) Model of the U.S. Economy: 2010 Version" is one such model. (Thanks to Ben Moll, in a lecture slide titled "Effects of interest rate hike in U.S. Fed's own New Keynesian model.") They describe it as: "This paper provides documentation for a large-scale estimated DSGE model of the U.S. economy -- the Federal Reserve Board's Estimated, Dynamic, Optimization-based (FRB/EDO) model project. The model can be used to address a wide range of practical policy questions on a routine basis." Here are the central plots for our purpose: the response of interest rates and inflation to a monetary policy shock.
No long and variable lags here. Just as in the simple model, inflation jumps down on the day of the shock and then reverts. As with Mankiw and Reis, there is a tiny hump shape, but that's it. This is nothing like the Romer and Romer plot. Smets and Wouters (2007) "Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach" is about as famous as Christiano Eichenbaum and Evans as a standard new-Keynesian model that supposedly matches data well. It "contains many shocks and frictions. It features sticky nominal price and wage settings that allow for backward inflation indexation, habit formation in consumption, and investment adjustment costs that create hump-shaped responses... and variable capital utilization and fixed costs in production." Here is their central graph of the response to a monetary policy shock. Again, there is a little hump shape, but the overall picture is just like the one we started with. Inflation mostly jumps down immediately and then recovers; the interest rate shock leads to future inflation that is higher, not lower, than current inflation. There are no lags from higher interest rates to future inflation declines. The major difference, I think, is that Smets and Wouters do not impose the restriction that inflation cannot jump immediately on either their theory or empirical work, while Christiano, Eichenbaum and Evans impose that restriction in both places. This is important. In a new-Keynesian model some combination of state variables must jump on the day of the shock, as it is only saddle-path stable. If inflation can't move right away, that means something else does. Therefore, I think, CEE also preclude inflation jumping the next period. Comparing otherwise similar ingredients, it looks like this is the key ingredient for producing Romer-Romer-like responses consistent with the belief in sticky inflation. But perhaps the original model and Smets-Wouters are right!
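For concreteness, here is that jump in the simplest three-equation model, my own sketch with invented parameter values, solving for the unique bounded response to a Taylor-rule disturbance \(u_t = \rho^t\) with the guess \(\pi_t = b\rho^t\), \(x_t = a\rho^t\):

```python
# Textbook three-equation model, perfect foresight (my own sketch;
# all parameter values invented):
#   IS:   x_t = x_{t+1} - sigma*(i_t - pi_{t+1})
#   NKPC: pi_t = beta*pi_{t+1} + kappa*x_t
#   rule: i_t = phi*pi_t + u_t, with shock u_t = rho^t
# Substituting the guess and matching coefficients on rho^t gives:
sigma, kappa, beta, rho, phi = 1.0, 0.25, 0.99, 0.7, 1.5
b = -sigma / ((1 - beta * rho) * (1 - rho) / kappa + sigma * (phi - rho))
a = b * (1 - beta * rho) / kappa
pi = [b * rho**t for t in range(12)]
# inflation is most negative on the shock date and reverts geometrically:
# the EDO / Smets-Wouters pattern, with no hump and no long lags
```

Matching coefficients on \(\rho^t\) pins down \(b<0\): inflation jumps down at date 0 and mean-reverts, which is exactly the pattern in the EDO and Smets-Wouters plots.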
I do not know what happens if you remove the CEE orthogonalization restriction and allow inflation to jump on the day of the shock in the data. That would rescue the new-Keynesian model, but it would destroy the belief in sticky inflation and long and variable lags.

Closing thoughts

I'll reiterate the main point. As far as I can tell, there is no simple economic model that produces the standard belief. Now, maybe the belief is right and models just have to catch up. It is interesting that there is so little effort going on to do this. As above, the vast outpouring of new-Keynesian modeling has been to add even more ingredients. In part, again, that's the natural pressure of journal publication. But I think it's also an honest feeling that after Christiano Eichenbaum and Evans, this is a solved problem and adding other ingredients is all there is to do. So part of the point of this post (and "Expectations and the neutrality of interest rates") is to argue that this is not a solved problem, and that removing ingredients to find the simplest economic model that can produce standard beliefs is a really important task. Then, does the model incorporate anything at all of the standard intuition, or is it based on some different mechanism altogether? These are first-order important and unresolved questions! But for my lay readers, here is, as far as I know, where we are. If you, like the Fed, hold to standard beliefs that higher interest rates lower future output and inflation with long and variable lags, know that there is no simple economic theory behind that belief, and certainly the standard story is not how economic models of the last four decades work.

Update: I repeat a response to a comment below, because it is so important. I probably wasn't clear enough that the "problem" of high output with inflation falling rather than rising is a problem of models vs. traditional beliefs, rather than of models vs. facts.
The point of the sequence of posts, really, is that the traditional beliefs are likely wrong. Inflation does not fall, following interest rate increases, with dependable, long, and perhaps variable lags. That belief is strong, but neither facts, empirical evidence, nor theory supports it. ("Variable" is a great way to scrounge data to make it fit priors.) Indeed, many successful disinflations, like the ends of hyperinflations, feature a sigh of relief and an output surge on the real side.