The main topic of this dissertation is the problem of rural FTTH service roll-out. The problem has become topical worldwide, typically in countries with scattered rural populations. Nevertheless, the FTTH service penetration rate generally remains low, which leaves room for research and innovation through which the planning of FTTH network deployment and roll-out can be advanced. In fact, the rural FTTH roll-out problem falls into the category of worst-case scenarios, thereby driving technological progress to an even greater extent. Inadequate future-proof broadband service in rural areas leads to negative consequences, widely known as the Digital Divide problem. Unequal development of urban and rural areas deepens the isolation of rural areas from the information society. Resolving the problem is thus of great importance, also beyond purely technical outcomes. Indeed, this work goes beyond technical aspects and includes recommendations for local governments, policy regulators, investors, utility companies, and other likely contributors. The first half of this work presents the following content. The background to cutting-edge broadband technologies is presented with regard to the ultimate success of rural fibre roll-out. The discussion focuses on one of the most promising future-proof broadband technologies for the rural case, i.e. PON or LR-PON technology. A number of deliberations address the statement of the problem, the research questions and hypotheses, and finally the research methodology. Key aspects of the problem are then explored further. The literature review focuses on state-of-the-art means and deployment technologies in different localities worldwide, and on suitable mathematical formulations needed to model and optimise the deployment. Finally, research gaps are identified and a detailed research plan is proposed.
The increased deployment cost with respect to expected revenue is overall perceived as the main barrier to FTTH ...
Over the past decade, reflexive organisational research has grown considerably in importance. The discussion highlights the great significance of researchers' own interpretations and of the publication of research results. By contrast, the relationship between the research partners in practice and the researchers has rarely been the focus of attention. Yet this relationship is the basis on which data are generated and interpretations emerge. In this contribution we aim to offer a methodological heuristic, as a form of orientation, that takes this relationship as its starting point. The "reflexive system theory heuristic", inspired by the theory of social systems, is illustrated empirically through a longitudinal single-case study of a change process in a hospital. The heuristic supports the observation and explanation of the dynamics of the field relationship and helps to clarify the status of research results. The practice under study and the research practice are conceptualised as a communication system. Research thereby acquires a generative component and is understood as a reflexive relationship. Taking into account the respective contexts of practice and of science also makes it possible to incorporate existing findings on reflexive organisational research. For research in the field of systems theory, in turn, the heuristic presented here can serve as a starting point for a hitherto marginalised methodological discussion.
Over the April-June 2008 period, the prices of commodities such as wheat, maize, rice and vegetable oils reached impressive, yet not exceptional, peaks. At the same time, the populations of 48 countries were stricken by severe undernutrition. Most of them had already been weakened not only by conflicts, social disorder, dramatic and unusual climatic and natural disasters, but also by disease outbreaks, epizootics, and population displacements. In some cases, all these factors together played a significant role in the worsening situation. However, another important reason can be advanced to explain why the 2008 food crisis was an "extraordinary problem": "normal" scientific knowledge was defeated by the complexity of what now appears as a food "poly-crisis" (Morin, 2011). We responded by designing an epistemological, methodological, and technical knowledge base drawn from two very different and alternative economic approaches to complexity: the first is the approach of Hayek (1899-1992), the second that of Simon (1916-2001). The research intends to fulfill criteria of cumulativity, traditionally difficult to reconcile with those of complexity. From the analysis we mostly learned two things. First, the epistemological grounds of economics need to be revised in complex environments: 1) from certainty/objectivity to uncertainty/subjectivity; 2) from accurate prediction to design; 3) from linear causality, deemed inappropriate or, worse, threatening people's freedoms, to complex causality. Second, in the adaptation process, the role of "tacit" knowledge production and sharing is central. For that reason, the core problem of economics is no longer the allocation of resources.
Now, the main problem for humans whose cognitive capacities are "bounded" is to compute and to "socialize" (Nonaka et al., 1994, 2001) available but dispersed information and knowledge, and to convert them into heuristics or patterns allowing adaptation to complex and uncertain environments. Two other auxiliary hypotheses, which E. Ostrom (2011) would later endorse, can be drawn from that preliminary work: 1) the dynamics of change are rooted "in the thinking and in the creativity of people involved in complex situations and their capacity to restructure their own models for interactions"; 2) reciprocal altruism (Simon, 1992, 1993) is a rational behavior that can be more effective in and for social interactions in complex environments than maximizing or selfish behavior. To present the preliminary results in an effective way, we created a very simple interface scheme. It takes the form of a three-dimensional knowledge loop with two strands, "generic" and "tacit" knowledge, connected to each other so as to produce, by recursion, a meta-knowledge. We chose the interface because it most accurately reflects the position defended by Hayek and Simon, namely that economics is a frontier science. Moreover, the interface has the advantage of being both open and closed. A part of the research is more specifically dedicated to designing tools that increase the understanding of the food "poly-crisis". We elaborated a three-level indicator comprising: 1) perceptions of the contribution of each factor to the outbreak and the worsening of the situation; 2) contributions of actors to the explanations of the food crisis proposed in 2008.
It was developed from: 1) a case study comparing and contrasting explanations proposed a) in the statements of the 138 Heads of State and Government who attended the High Level Conference on World Food Security (3-5 June 2008), b) in the analyses of economists, c) in the testimonies of people hit by undernutrition and rising food prices (IRIN database); 2) a new and updated typology focused on the policy mixes implemented by 18 countries split into 3 groups: developing countries, mostly net importers, severely hit by the crisis and having experienced "food riots" (Egypt, Tunisia, Cameroon, Côte d'Ivoire, Senegal, Mauritania, Haiti, Bangladesh); member countries of the Cairns Group having experienced either "food riots" or social disorder (Indonesia, the Philippines, Thailand, South Africa); and, finally, countries having adopted export restrictions and/or prohibitions (China, India, Indonesia, Egypt, Cambodia, Ukraine, Vietnam).
The book is dedicated to the biography and scholarly legacy of John Michael Steiner, a Czech-American sociologist born in Prague in 1925, who died in Novato, California, in 2014. He survived Auschwitz and other concentration camps and is one of the few persecuted people who later became involved in scientific research on perpetrators. After the war he had to flee Prague because of the seizure of power by the Communist Party, first to Australia, then to the USA. He came to Germany in 1962 and was able to continue his research after his doctorate at the University of Freiburg i.Br., supported by fellowships from the Alexander von Humboldt Foundation and as a Senior Fulbright Professor. His comparative study (1962 to 1966) of former members of the Waffen-SS and SS and former members of the Wehrmacht is unique. The results, indicating a persisting authoritarian attitude, markedly stronger in former members of the Waffen-SS and SS, were published in 1970. Far more important to him were the CVs and social behavior of typical SS men. For various reasons, the CVs were not published. Ten of these CVs, written for him, stand out here: four authors were sentenced to life imprisonment for their multiple murders in the Auschwitz, Buchenwald or Sobibor camps; six were officers of the Waffen-SS, including an adjutant of Himmler and an adjutant of Hitler. These CVs are published here for the first time (plus two transcribed interviews). Chapter 1 describes Steiner's biography, his scientific work, and his public commitment. This chapter also contains his essays on the fragmented conscience and on role margin, i.e., the individual discretion of an SS man in a camp command, dependent on social and moral intelligence, as well as Steiner's correspondence with Erich Fromm on the interpretation of the CVs of SS men.
Three reports of Steiner's extreme experiences were translated: Slave Laborer at the Blechhammer (Ehrenforst) Synfuel Plant; On a Death March from Blechhammer to Reichenbach; In a Cattle Wagon to Dachau. Chapter 2, Basic Studies on the Authoritarian Personality: Concepts and Research Results, develops a frame of reference for the subsequent interpretations. This includes an overview of two milestone studies on the authoritarian personality: (1) Erich Fromm and co-workers (1936, 1980; initial publication prohibited by Max Horkheimer), and (2) Adorno, Frenkel-Brunswik, Levinson and Sanford (1950). The detailed commentary on recent research in this field refers, on the one hand, to the "authoritarian personality" as a personality type requiring a multimethod assessment of motivation and action, versus "authoritarianism" as a social attitude, usually captured by self-assessment, i.e., a questionnaire without behavioral data. Important work in perpetrator research is presented, e.g., the outstanding research by Henry Dicks (1950, 1972). Other topics are: the low interest of German psychologists in perpetrator research, the so-called new perpetrator research by German historians, and the serious lack of interdisciplinarity. In Chapter 3, Perpetrator Research: Steiner's Concepts and Methods, the questionnaire study is presented, and the ten CVs are reported in detail. The psychological interpretation is primarily based on Fromm's conception of the authoritarian personality and on Steiner's questions about the fragmentation of conscience and role margin. These guiding concepts are undoubtedly suitable as heuristics, but there is often a lack of sufficiently detailed CV information to adequately capture the situational triggering of latent violence and sadistic actions. Secondary information, deliberately obtained only after interpretation, is important as a corrective to the overall picture but cannot replace a systematic exploration of psychosocial processes.
In Chapter 4, Perpetrator Research and Educational Reform, Steiner's demand for a fundamental reform of education is discussed in the context of other reform ideas, among them Neill's Summerhill, denazification programs (re-education), and drop-out programs for members of extremist groups. However, "education after Auschwitz" hardly ever took place. There is a widespread lack of ethics teaching for all, and at all school levels, in Germany. Only in recent years have there been more initiatives in Germany to renew ethics teaching, for example projects by the Bertelsmann Foundation, new pedagogical initiatives, and textbooks with many suggestions not only for teaching knowledge about ethics but also for practicing skills such as perspective-taking and empathy. Chapter 5 summarizes the essential perspectives and quotes Steiner's questions and theses (chosen as subtitles of the present book): "How can people become such that they can do something like this?"; "The more I understand, the less I have to hate"; "Only memory and education can prevent new horrors and genocides". In addition to copies of the original CVs, the appendix contains Steiner's list of publications, his homepage at Sonoma State University (since deleted), and his media appearances, e.g., in the four-part report Faces of Evil (SPIEGEL-TV, 2009, and ZDF) as a direct contemporary witness and as an expert on perpetrator research.
Nowadays, around 90% of city dwellers in the European Union are exposed to high concentrations of health-harmful pollutants, reducing the life expectancy of the population and having a large impact on the gross domestic product of European countries. Among the economic sectors, transport stands out as one of the main sources of pollution, because it generates harmful levels of emissions and is responsible for up to 24% of greenhouse gas (GHG) emissions in the European Union. These emissions depend heavily on the fuel type used, the carried load, the engine technology and the total distance covered. The problem of distributing goods from warehouses to end users plays a central role in logistics systems management, where the design of efficient routes is critical to reducing costs. This real-life problem is characterized by a heterogeneous fleet, in which vehicles with different features are incorporated over time for a better adaptation to changing customer demands. These features include different capacities and ages, alternative fuels and engine technologies.
Therefore, in the present context, environmental targets must be added to the traditional logistics strategies, based on cost and time, in the decision-making process. The research of this Thesis has focused on the development of new mathematical models and algorithms for solving the Fixed Fleet Heterogeneous Vehicle Routing Problem with Time Windows (HVRPTW), with the additional consideration of reducing GHG and pollutant emissions. The problem is formulated from two different perspectives. The first incorporates a methodology based on the estimation of the external costs of transport activities. The second comprises a multiobjective optimization method with a priori articulation of preferences, in which the decision maker can establish preferences in advance. The design of eco-efficient routes is posed through linear mathematical programming models and solved using quantitative techniques. These techniques include heuristics and metaheuristics that combine various advanced procedures to deal with the complexity of the problem. In particular, this Thesis describes a semi-parallel insertion heuristic and a hybrid variable neighborhood descent metaheuristic, based on a tabu search algorithm for the local search and a holding list, that achieves the flexibility to solve any HVRPTW variant. The algorithms have been applied to benchmark problems from the scientific literature and to a real-world case that deals with a routing and scheduling problem in a service company with particular characteristics and constraints. The results show that the algorithm efficiently solves the problem addressed and can be extended to other problem variants. The result of the Thesis is a decision-making tool that helps in the design and control of eco-efficient routes.
This tool can be integrated with the particular geographic information system (GIS) of each company, allowing the display of eco-efficient routes and the assessment of the economic, energy, operational and environmental impacts. The tool will therefore have a direct economic impact on end users, allowing the comparison of the final routes and the results obtained from different alternatives, thereby achieving greater competitiveness and fulfilling the company's sustainability commitments. At the global level, the tool contributes to a social improvement resulting from the reduction of fuel consumption and pollutant emissions from road transport, which have an impact at the local, national and international levels. In this sense, the Thesis clearly contributes to the strategic development of the transport sector, increasing the efficiency of road transport fleets and achieving greater sustainability and competitiveness.
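As a toy illustration of the a priori articulation of preferences described above, one could fold an economic objective and a monetised emission objective into a single route score with decision-maker weights. All names, weights, prices and vehicle figures below are hypothetical illustrations, not values taken from the thesis:

```python
# Hypothetical weighted-sum scoring for one eco-efficient HVRPTW route:
# combines monetary operating cost and a monetised CO2 emission cost
# using a priori weights supplied by the decision maker. All figures
# are illustrative only.

def route_score(distance_km, cost_per_km, co2_g_per_km,
                w_cost=0.5, w_emissions=0.5, co2_price_per_g=0.001):
    """Return a single scalar score for one vehicle route (lower is better)."""
    monetary = distance_km * cost_per_km                       # operating cost
    emissions = distance_km * co2_g_per_km * co2_price_per_g  # monetised CO2
    return w_cost * monetary + w_emissions * emissions

# Two hypothetical vehicles from a heterogeneous fleet serving the same
# 120 km route: an older diesel truck (cheaper per km, higher emissions)
# versus a newer CNG truck (costlier per km, lower emissions).
old_diesel = route_score(120, cost_per_km=0.50, co2_g_per_km=900)
new_cng = route_score(120, cost_per_km=0.55, co2_g_per_km=600)
print(old_diesel > new_cng)  # under these weights the greener vehicle wins
```

With equal weights and this (assumed) carbon price, the lower-emission vehicle scores better despite its higher per-kilometre cost; shifting the weights lets the decision maker express a stronger cost or environmental preference before optimisation runs.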
The doctoral thesis entitled "Fotografía y educación en la prensa de guerra republicana en España (1936-1939)" (Photography and Education in the Republican War Press in Spain (1936-1939)) focuses on the so-called war press or trench press (soldier newspapers in the Anglo-Saxon world). We understand as war press that produced by and for the soldiers of an army during a war. In our case, we refer to the illustrated war press published by the Republican army in Spain during the Spanish Civil War. The main objective of this thesis is to analyse how, through educational and cultural articles, the image was used to educate and influence the Republican soldiers. The educational work carried out by the Republican government on the fronts during the Spanish Civil War has been previously studied by different historians of education, who have used, among other sources, soldier newspapers to document experiences and the educational project developed during the war.
This thesis, however, aims to offer a different and at the same time complementary approach to the study of education at the front. Here we propose to analyse the uses of the image in these soldier newspapers as an educational resource used by the Republican government itself to influence its combatants ideologically and morally. The methodology used is the historical method adapted to the field of the history of education, together with contributions from other disciplines, such as sociology or semiotics, for the interpretation of the photographs which, as the social constructs that they are, are subjective manifestations of reality. Despite enjoying the appearance of veracity, since what they show existed, these visual texts show us a moment that the photographer, or whoever commissioned the photograph, decided to immortalize and that, on many occasions, is theatricalized or choreographed to convey a certain vision of what they reflect. The sources used have been the publications that, within the war press as a whole, were not daily and contained at least 15% illustration. Other complementary sources have been official documents, especially the Decrees of the time. Regarding the heuristics, we point out that access to the Republican war press has been gained through the Historical Memory Documentary Centre (CDMH), the General Military Archive of Avila (AGMA) and the Municipal Newspaper Library of Madrid (HMM). These three collections are complementary to each other, as they often hold the same titles, but different issues. In any case, many of the publications are incomplete, and of others only one or two issues are preserved. The main results of the thesis show that the educational and cultural photography that appeared in the soldier newspapers allows us to know what visual strategies were designed to promote certain values and attitudes among Republican soldiers.
Taking into account the high rate of illiteracy among the troops, the use of the image as a means of direct transmission was very useful, and we believe that the images that appear in the soldier newspapers were expressly designed and selected to produce the desired effect. Visual discourses were created that link the cultural training of soldiers with their political, moral and military training.
The main focus of this thesis is the evaluation of crowdsourcing techniques to measure personalization on the Web. Overall, I apply my methodology to four different aspects of the Web. (1) First, I investigate price discrimination and how personal data can influence online pricing. (2) Then, I turn my attention to targeted web advertisements and investigate how targeted ads can be detected in real-time. (3) Next, I focus on web tracking and develop a methodology to measure the levels of compliance, as defined by the new European Union General Data Protection Regulation (GDPR), with respect to the physical location of the tracking servers. (4) Finally, I measure the extent of web tracking on sensitive-topic websites as defined by the new EU GDPR. To that end, I develop a methodology to identify specialized trackers that operate exclusively on such websites. (1) For the first aspect, related to price discrimination, I present the design, implementation, validation, and deployment of the Price $heriff, a highly distributed system for detecting various types of online price discrimination in e-commerce. The Price $heriff uses a peer-to-peer architecture, sandboxing, and secure multiparty computation to allow users to tunnel price-check requests through the browsers of other peers without tainting their local or server-side browsing history and state. Having operated the Price $heriff for several months, with approximately one thousand real users, I identify several instances of cross-border price discrimination based on the country of origin. Even within national borders, I identify several retailers that return different prices for the same product to different users. I examine whether the observed differences are due to personal-data-induced discrimination or A/B testing, and conclude that it is the latter. (2) The second aspect is related to targeted ads on the Web.
More specifically, being able to check whether an online advertisement has been targeted is essential for resolving privacy controversies and for implementing in practice data protection regulations like the GDPR, the California Consumer Privacy Act (CCPA) and the Children's Online Privacy Protection Act (COPPA). In this work, I describe the design, implementation, and deployment of an advertisement auditing system called eyeWnder that uses crowdsourcing to reveal in real-time whether a display advertisement has been targeted or not. Crowdsourcing simplifies targeted advertisement detection but expects users to report back encountered advertisements, thereby incurring privacy risks. I break the deadlock with a privacy-preserving data sharing protocol that allows eyeWnder to compute the global statistics required to detect targeting, while keeping the advertisements seen by users and their browsing history private. Using a total population of 100 users, I show that eyeWnder permits end users to audit in real-time any advertisement that may appear in their browser, without jeopardizing their privacy. eyeWnder can even detect indirect targeting, i.e., marketing campaigns that promote a product or service whose description bears no semantic overlap with the targeted audience. (3) The third aspect is related to web tracking and the new EU GDPR. To that end, I define a tracking flow as a flow between an end user and a web tracking service. I develop an extensive measurement methodology for quantifying at scale the number of tracking flows that cross data protection borders, be they national or international, such as the EU28 border within which the GDPR applies. My methodology uses the eyeWnder browser extension to fully render advertising and tracking code, various lists and heuristics to extract well-known trackers, passive DNS replication to obtain all the IP ranges of trackers, and state-of-the-art geolocation.
I employ my methodology on a dataset from 350 real users of the browser extension over a period of more than four months, and then generalize my results by analyzing billions of web tracking flows from more than 60 million broadband and mobile users of 4 large European ISPs. I show that the majority of tracking flows cross national borders in Europe but, contrary to popular belief, are fairly well confined within the larger GDPR jurisdiction. Simple DNS redirection and PoP mirroring can increase national confinement while sealing almost all tracking flows within Europe. Last, I show that cross-border tracking is prevalent even in sensitive and hence protected data categories and groups, including health, sexual orientation, minors, and others. (4) Finally, the last aspect is related to the sensitive categories defined by the GDPR. In this work I turn my attention to the elephant in the room of data protection, which is none other than the simple and obvious question "Who is tracking sensitive domains?". Despite a fast-growing amount of work on more complex facets of the interplay between privacy and the business models of the Web, the obvious question of who collects data on users in domains where they would rather not be seen has been largely ignored. I develop a methodology for discovering the trackers operating on sensitive domains, both those collaborating directly with publishers and those appearing implicitly through recursive inclusions. I identify several trackers that specialize in specific sensitive categories, such as sexual orientation on adult-content websites. I also investigate whether there is an exchange of information between such specialized trackers and other, more mainstream advertisers and marketers. ; EC/FP7/607728/EU/Measurement for Europe: Training and Research for Internet Communications Science/METRICS ; EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNet
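The border-crossing check at the heart of the tracking-flow methodology above can be sketched as follows. The country codes and the EU28 membership set are factual, but the `classify_flow` helper and the sample flows are hypothetical illustrations, not the thesis's actual implementation:

```python
# Hypothetical classification of one user->tracker flow as national,
# intra-EU28, or extra-EU28, given the geolocated ISO 3166-1 alpha-2
# country codes of the user and the tracking server. EU28 is the
# pre-Brexit set of member states within which the GDPR applied.

EU28 = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE", "GB",
}

def classify_flow(user_cc, tracker_cc):
    """Label a tracking flow by the data protection borders it crosses."""
    if user_cc == tracker_cc:
        return "national"
    if user_cc in EU28 and tracker_cc in EU28:
        return "cross-border, within GDPR jurisdiction"
    return "cross-border, leaves GDPR jurisdiction"

print(classify_flow("DE", "DE"))  # national
print(classify_flow("DE", "IE"))  # cross-border, within GDPR jurisdiction
print(classify_flow("DE", "US"))  # cross-border, leaves GDPR jurisdiction
```

Aggregating such labels over all observed flows yields the confinement statistics reported above; in the real methodology, the country codes would come from passive DNS replication and IP geolocation of the tracker endpoints.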
Es inaudito encontrar un sistema de cualquier tipo basado en computadores en el que el humano no interactúe con él. Incluso aunque se trate de una comunicación exclusivamente entre dos computadores, normalmente se incluye una interfaz que permita solucionar el enlace entre máquinas cuando aparece algún problema [Mulligan et al. 1991]. El usuario, como ser humano, necesita información, lo que se considera una de las pocas constantes en nuestro mundo, independientemente de la cultura a la que pertenezca o en el instante de tiempo en el que transcurra su actividad. Esta necesidad se combina con la ocupación del usuario en otras tareas que requieren su atención, de forma que hay al menos dos focos de atención que compiten por el bien más preciado de cualquier interfaz humano-computador: la atención del usuario. La existencia de diferentes escenarios, en las que el usuario espera resultados diferentes y consecuencias distintas ante la existencia de un error controla la aparición de distintos tipos de aplicaciones y dispositivos para soportar las demandas del usuario en condiciones de multitarea. En estos sistemas, conocidos como Sistemas de Notificación, las interfaces están diseñadas como un medio para acceder a información valiosa para el usuario de una forma efectiva y eficiente sin introducir interrupciones no deseadas en la ejecución de la tarea principal. Pueden encontrarse numerosas implementaciones de Sistemas de Notificación en una gran variedad de plataformas. Quizás en los sistemas de sobremesa son más fácilmente identificables: servicios de mensajería instantánea, aplicaciones que muestran el estado de un sistema, servicios de noticias e indicadores de cantidades almacenadas. 
Other familiar examples can be found in industrial and technological processes, such as Weiser's dangling string as a representation of network traffic [Weiser and Brown 1996], in-vehicle information systems, systems that provide information about the environment, multi-monitor systems that indicate when a system goes out of range, and the cockpits of modern aircraft. On the other hand, a system is considered critical when a single failure, due to the system, the user, or the way the user performs the tasks for which the system was designed, can cause serious damage to the system, to its environment or, worse still, to the users of the system. Examples of this type of system can be found in aircraft, corresponding to systems handled by highly specialised users. Generically speaking, usability evaluation includes a wide range of methods and techniques used by evaluators to assess, qualitatively or quantitatively, different aspects related to usability. As support for design, evaluation must allow, first, the verification of design decisions and, second, a choice among several alternatives. Although it is granted great importance, the evaluation of any system is rarely established as a regular process in the life cycle of a product, since it is usually considered costly in time and resources and as adding no value to the product. Evaluation of the human-computer interface allows the comparison of different system design alternatives and, ultimately, the determination of which system is better. Evaluation must therefore provide quantitative measures to determine the usability of an interface and to compare two different interfaces objectively.
Although computer users are accustomed to notification systems, their evaluation has been limited to probing the intrinsic value of isolated design paradigms based on a single implementation. Usability evaluation of notification systems is very poorly developed for general-purpose interfaces and, in particular, for the interfaces of critical systems. The only available experiences are based on limited empirical data in simulated environments and on following certain design guidelines proven by experience. These guidelines come from military applications and, although they can be questioned from the point of view of the cognitive sciences, they are widely accepted in interface design for critical notification systems. The designers of these systems may reasonably reject them on the basis of their own experience and judgement. If there is no quantitative evaluation based on cognitive models, the evaluator cannot scientifically determine the errors in the interface. In fact, a deadlock can arise in which the judgement and knowledge of the designer are set against those of the evaluator. This situation can be resolved with an evaluation framework that provides quantitative data showing, on the basis of objective principles, whether one design alternative is better than another. The design and evaluation of human-computer interfaces have evolved much faster than the theories grounded in psychology and cognitive science, and a large number of problems remain unresolved.
Thus, interface design has evolved from line editors and bespoke applications with minimal interfaces based on cryptic command lines to WIMP (Windows-Icons-Menus-Pointers) interfaces, information retrieval systems, intelligent agents, CSCW (Computer-Supported Cooperative Work) servers, virtual environments, augmented reality, tangible bits, ubiquitous computing, the web, and many types of interfaces capable of speaking and displaying emotions. Meanwhile, usability evaluation has evolved based mainly on heuristics and empirical evaluation [Nielsen 1993; Dix et al. 2003]. Theory-based evaluation has been considered of very limited value [Landauer 1987] because of its narrow scope of application, applicable only to particular features of an interface and with very little capacity to be reused in other real-world designs far from the laboratory. Although great efforts are being made to develop theories covering a wide range of topics, most of them are merely adjustments made to explain everyday applications [Rudisill et al. 1996]. At present, performance is not measured against specific requirements established before design; the absence of this type of requirement in the development life cycle of a human-computer interface is notable. In such cases, a set of critical parameters can be introduced to define measurement units [Newman et al. 2000]. These parameters are called critical because the success or failure of the design depends critically on meeting the objectives set for them. Critical parameters make it possible to formalise a design space in which knowledge is reused [Chewar et al. 2004], expressing the problems associated with an interface in a consistent language that allows experts to express judgements through a "mediated evaluation" [Carroll et al. 1992]. The main objective of this thesis is to develop a predictive evaluation framework for Notification Systems in Avionics (NSA), based on cognitive theories, that can provide qualitative and quantitative data allowing interface designers and evaluators to determine whether an interface will be usable before it is built. In this way, the interface designer can introduce evaluation into human-computer interface design patterns without requiring additional resources. In this thesis, the approach to the evaluation of the human-computer interface of notification systems for critical systems in avionics is proposed in several steps. First, a set of critical parameters based on psychological models is established; then a predictive usability evaluation based on the quantification of these parameters is carried out; and finally, design guidelines are applied incrementally following a scenario-based strategy. This approach enables a systematic evaluation that, on the one hand, focuses the design on quantifiable objectives, independently of the evaluator and the user, and, on the other hand, allows the specific designs applicable to a given problem to be improved continuously and cumulatively. The main contribution of this thesis is the mathematical formalisation of each of the critical parameters selected for carrying out the usability evaluation of the interfaces of notification systems integrated into critical systems. The objective of this thesis is to obtain an evaluation function Φ such that, applied to the interface of the notification system, it yields numerical data in the domain of the real numbers.
Formally, the objective is to obtain ∀I ∃Φ : Φ(I) ∈ ℝ, where I is the interface of the notification system and Φ is the usability evaluation function, which produces a real number as its result. To assess the validity of this evaluation framework, it is applied to the power plant interfaces of two modern aircraft and to the power plant control interface of a military transport aircraft. The application of this evaluation framework is based on the execution of automated tests, implemented in Ada95, applied to an XML characterisation of the interfaces. ____________________________________________ ; It is unheard of to find a system of any kind, whether critical or not, based on a computer with no human interaction. Even when a system involves communication exclusively between two computers, it usually includes an interface that allows any problem with the link between the machines to be resolved [Mulligan et al. 1991]. The user of a computer interface, like any human being, needs information, which is considered one of the few constants in our everyday world, whatever the user's culture or activity. This need plays together with the other tasks the user has to perform, so that there are at least two foci competing for the most prized resource at any human-computer interface: the user's attention. Different usage situations, expectations, and error consequences govern the growing breed of applications and devices being introduced to support multitasking information demands. Referred to as notification systems, these interfaces are generally designed as a means to access valued information in an efficient and effective manner without introducing unwanted interruption to a primary task. They can be found in many implementation forms and on a variety of platforms. Perhaps classic desktop systems are the most readily identifiable: instant messengers, status programs, and news and stock tickers.
Other familiar examples, such as Weiser's dangling string representation of network traffic, in-vehicle information systems, ambient media, multi-monitor displays, and the modern aircraft cockpit, hint at the potential range of such systems. While the use of these systems and the range of solutions have skyrocketed, our ability within the HCI community to scientifically recognise, pattern, and improve success has not kept pace. There are surprisingly few efforts in the literature that effectively evaluate the usability of the information and interaction design of notification systems. On the other hand, a system is considered critical when a single failure, due to the system, the user, or the way the user executes the tasks the system is designed for, could produce serious damage to the system itself, to its surroundings or, even worse, to the users of the system. Examples of this kind of system can be found on many aircraft, corresponding to critical systems handled by highly specialised users. Generically, usability evaluation includes a broad set of methods and techniques used by evaluators to examine, qualitatively or quantitatively, several aspects related to usability. As support for design, evaluation allows, firstly, the verification of design decisions and, secondly, a choice among possible alternatives. Although evaluation is recognised as very valuable, it is rarely introduced into the design cycle of a product, since it is deemed costly in terms of time and resources and is perceived as adding no value to the product. Human-computer interface evaluation permits the comparison of different systems and, ultimately, the determination of which system is better. Evaluation shall therefore provide quantitative measurement units to determine interface usability and to compare two interfaces objectively.
Even though computer users are widely accustomed to notification systems, their evaluation has been limited to probing the intrinsic value of isolated design paradigms based on a single implementation. The evaluation of notification systems is poorly developed for interfaces in general and extremely rare for critical systems. The only available experiences are based on empirical data from simulated environments and on following certain guidelines proven by experience. These guidelines come from military applications and, even though they can be questioned from the cognitive-science point of view, they are widely accepted in user interface design for critical notification systems. The designer of these interfaces can justifiably reject them based on experience and judgement. If there is no quantitative evaluation based on cognitive models, the evaluator will never be able to scientifically determine the errors in the interface. A deadlock can then be reached, with the designer and the evaluator set against each other with conflicting judgements. This situation can be resolved with an evaluation framework that provides quantitative data showing whether one design is better than another, based on objective principles. The design and evaluation of human-computer interfaces have evolved much faster than theories based on psychology and cognitive science, and a number of issues remain unresolved. Design has evolved from line editors and customised applications, with interfaces reduced to very cryptic command lines, up to WIMP (Windows-Icons-Menus-Pointers) interfaces, information retrieval systems, smart agents, CSCW (Computer-Supported Cooperative Work) servers, virtual environments, augmented reality, tangible bits, ubiquitous computing, the web, and many types of interfaces able to talk and show emotions. Usability evaluation has evolved based mainly on heuristics and empirical evaluation [Nielsen 1993; Dix et al. 2003]. Theory-based evaluation has been considered to be of very limited value [Landauer 1987] because of its limited scope and application areas, applicable only to local features of an interface and with a very reduced capacity to be reused in other real-world designs, far away from the laboratory. Although great effort is devoted to developing theories covering a wide set of topics, most of them are only adjustments made to explain everyday applications [Rudisill et al. 1996]. Currently, performance measurements are not made against specific requirements established prior to design; the absence of this kind of requirement in the human-computer interface development life cycle is notable. In the absence of performance requirements, a set of critical parameters can be introduced in order to define measurement units [Newman et al. 2000]. These parameters are called critical because the success or failure of a design depends critically on whether the objectives fixed for them are fulfilled. Critical parameters allow the formalisation of a design space [Chewar et al. 2004] that supports knowledge reuse, expressing interface problems in a consistent language and letting experts express judgements through a mediated evaluation [Carroll et al. 1992]. The main objective of this thesis is to develop a predictive evaluation framework for notification systems in avionics (NSA) based on cognitive theories, which can provide both quantitative and qualitative data allowing interface designers and evaluators to determine whether a system will be usable before it is built. The interface designer will thus be able to include evaluation in human-computer interface design patterns without requiring additional resources. In this thesis, the approach to usability evaluation for the design of the human-computer interface of notification systems for critical systems in avionics is proposed in several steps.
First, a set of critical parameters based on psychological models is established; then a predictive usability evaluation based on the quantification of these parameters is carried out; and finally, the design guidelines are applied incrementally following a scenario-based strategy. This approach allows a systematic evaluation that, on the one hand, focuses the design on quantifiable objectives, independently of the evaluator and the user, and, on the other hand, continuously and cumulatively improves the different designs applicable to a specific problem. The main contribution of this thesis is a mathematical formalisation of each critical parameter for carrying out a quantitative usability evaluation of interfaces for notification systems integrated into critical systems. The objective is to obtain an evaluation function Φ such that, applied to the interface I of a notification system, it yields quantitative data in the domain of the real numbers. Formally, the objective is to obtain ∀I ∃Φ : Φ(I) ∈ ℝ, where I is the interface of the notification system and Φ is the usability evaluation function, which produces a real number. To evaluate the validity of this framework, it is applied to two power plant control interfaces of modern aircraft and to a power plant control interface to be implemented in a military transport aircraft. The application of the framework is based on the execution of automated tests implemented in Ada95 and applied to an XML characterisation of the interfaces.
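As a rough illustration of the idea of an evaluation function Φ applied to an XML characterisation of an interface, consider the following Python sketch. The schema, parameter names, values, and weights are invented for illustration (the parameter names loosely echo the interruption/reaction/comprehension framing of the critical-parameters literature); the thesis's actual Ada95 tests and XML schema are not reproduced here:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML characterisation of a notification interface.
# Element and attribute names are illustrative, not the thesis's actual schema.
INTERFACE_XML = """
<interface name="power-plant-display">
  <parameter name="interruption" value="0.2"/>
  <parameter name="reaction" value="0.7"/>
  <parameter name="comprehension" value="0.9"/>
</interface>
"""

# Assumed weights per critical parameter: interruption of the primary task
# counts against usability, reaction and comprehension count in its favour.
WEIGHTS = {"interruption": -1.0, "reaction": 1.0, "comprehension": 1.0}

def phi(xml_text: str) -> float:
    """Toy evaluation function Φ: maps an interface characterisation to a real number."""
    root = ET.fromstring(xml_text)
    score = 0.0
    for param in root.findall("parameter"):
        name = param.get("name")
        value = float(param.get("value"))
        score += WEIGHTS.get(name, 0.0) * value
    return score

print(round(phi(INTERFACE_XML), 2))  # prints 1.4
```

Under these assumptions a higher Φ(I) reads as a more usable interface, and two candidate designs can be compared by comparing their scores, which is the kind of objective, evaluator-independent comparison the framework aims at.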
Loet Leydesdorff on the Triple Helix: How Synergies in University-Industry-Government Relations can Shape Innovation Systems
This is the sixth and last in a series of Talks dedicated to the technopolitics of International Relations, linked to the forthcoming double volume 'The Global Politics of Science and Technology' edited by Maximilian Mayer, Mariana Carpes, and Ruth Knoblich
The relationship between technological innovation processes and the nation state remains a challenge for the discipline of International Relations. Non-linear and multi-directional characteristics of knowledge production, and the diffusive nature of knowledge itself, limit the general ability of governments to influence and steer innovation processes. Loet Leydesdorff advances the framework of the "Triple Helix", which disaggregates national innovation systems into evolving university-industry-government eco-systems. In this Talk, among other things, he shows that these eco-systems can be expected to generate niches with synergy at all scales, and emphasizes that, though politics are always involved, synergies develop unintentionally.
Print version of this Talk (pdf)
What is the most relevant aspect of the dynamics of innovation for the discipline of International Relations?
The main challenge is to endogenize the notions of technological progress and technological development into theorizing about political economies and nation states. The endogenization of technological innovation and technological development was first placed on the research agenda of economics by evolutionary economists like Nelson and Winter in the late 1970s and early 1980s. In this context, the question was how to endogenize the dynamics of knowledge, organized knowledge, science and technology into economic theorizing. However, one can equally well formulate the problem of how to reflect on the global (sub)dynamics of organized knowledge production in political theory and International Relations.
From a longer-term perspective, one can consider that the nation states – the national or political economies in Europe – were shaped in the 19th century, somewhat later for Germany (after 1871), but for most countries it was during the first half of the 19th century. This was after the French and American Revolutions and in relation to industrialization. These nation states were able to develop an institutional framework for organizing the market as a wealth-generating mechanism, while the institutional framework permitted them to retain wealth, to regulate market forces, and also to steer them to a certain extent. However, the market is not only a local dynamics; it is also a global phenomenon.
Nowadays, another global dynamics is involved: science and technology add a dynamics different from that of the market. The market is an equilibrium-seeking mechanism at each moment of time. The evolutionary dynamics of science and technology nowadays adds a non-equilibrium-seeking dynamics over time on top of that, and this puts the nation state in a very different position. Combining an equilibrium-seeking dynamics at each moment of time with a non-equilibrium seeking one over time results in a complex adaptive dynamics, or an eco-dynamics, or however you want to call it – these are different words for approximately the same thing.
For the nation state, the question arises of how it relates to the global market dynamics on the one side, and the global dynamics of knowledge and innovation on the other. Thus, the nation state has to combine two tasks. I illustrated this model of three subdynamics with a figure in my 2006 book entitled The Knowledge-Based Economy: Modeled, measured, simulated (see image). The figure shows that first-order interactions generate a knowledge-based economy as a next-order or global regime on top of the localized trajectories of nation states and innovative firms. These complex dynamics have first to be specified and then to be analyzed empirically.
For example, the knowledge-based dynamics change the relation between government and the economy; and they consequently change the position of the state in relation to wealth-retaining mechanisms. How can the nation state be organized in such a way as to retain wealth from knowledge locally, while knowledge (like capital) tends to travel beyond boundaries? One can envisage the complex system dynamics as a kind of cloud – a cloud that touches the ground at certain places, as Harald Bathelt, for example, formulated.
How can national governments shape conditions for the cloud to touch and to remain on the ground? The Triple Helix of University-Industry-Government Relations can be considered as an eco-system of bi- and tri-lateral relations. The three institutions and their interrelations can be expected to form a system carrying the three functions of (i) novelty production, (ii) wealth generation, and (iii) normative control. One tends to think of university-industry-government relations first as neo-corporatist arrangements between these institutional partners. However, I am interested in the ecosystem shaped through the tri- and bilateral relationships.
This ecosystem can be shaped at different levels. It can be a regional ecosystem or a national ecosystem, for instance. One can ask whether there is a surplus of synergy between the three (sub-)dynamics of university-industry-government relations and where that synergy can generate wealth, knowledge, and control; in which places, and along trajectories for which periods of time – that is, the same synergy as meant by "a cloud touching the ground".
For example, when studying Piedmont as a region in Northern Italy, it is questionable whether the synergy in university-industry-government relations is optimal at this regional level or should better be examined from a larger perspective that includes Lombardy. On the one hand, the administrative borders of nations and regions result from the construction of political economies in the 19th century; but on the other hand, the niches of synergy that can be expected in a knowledge-based economy are bordered also; for example, in terms of metropolitan regions (e.g., Milan–Turin–Genoa).
Since political dynamics are always involved, this has implications for International Relations as a field of study. But the dynamic analysis is different from comparative statics (that is, measurement at different moments of time). The knowledge dynamics can travel and be "footloose" to use the words of Raymond Vernon, although it leaves footprints behind. Grasping "wealth from knowledge" (locally or regionally) requires taking a systems perspective. However, the system is not "given"; the system remains under reconstruction and can thus be articulated only as a theoretically informed hypothesis.
In the social sciences, one can use the concept of a hypothesized system heuristically. For example, when analyzing the knowledge-based economy in Germany, one can ask whether more synergy can be explained when looking at the level of the whole country (e.g., in terms of the East-West or North-South divide) or at the level of Germany's Federal States? What is the surplus of the nation or at the European level? How can one provide political decision-making with the required variety to operate as a control mechanism on the complex dynamics of these eco-systems?
A complex system can be expected to generate niches with synergy at all scales, but as unintended consequences. To what extent and for which time span can these effects be anticipated and then perhaps be facilitated? At this point, Luhmann's theory comes in because he has this notion of different codifications of communication, which then, at a next-order level, begin to self-organize when symbolically generalized.
Codes are constructed bottom-up, but what is constructed bottom-up may thereafter begin to control top-down. Thus, one should articulate reflexively the selection mechanisms that are constructed from the bottom-up variation by specifying the why as an hypothesis. What are the selection mechanisms? Observable relations (such as university-industry relations) are not neutral, but mean different things for the economy and for the state; and this meaning of the observable relations can be evaluated in terms of the codes of communication.
Against Niklas Luhmann's model, I would argue that codes of communication can be translated into one another since interhuman communications are not operationally closed, as in the biological model of autopoiesis. One also needs a social-scientific perspective on the fluidities ("overflows") and translations among functions, as emphasized, for example, by French scholars such as Michel Callon and Bruno Latour. In evolutionary economics, one distinguishes between market and non-market selection environments, but not among selection environments that are differently codified. Here, Luhmann's theory offers us a heuristic: The complex system of communications tends to differentiate in terms of the symbolic generalizations of codes of communication because this differentiation is functional in allowing the system to process more complexity and thus to be more innovative. The more orthogonal the codes, the more options for translations among them. The synergy indicator measures these options as redundancy. The selection environments, however, have to be specified historically because these redundancies—other possibilities—are not given but rather constructed over long periods of time.
How did you arrive at what you currently work on?
I became interested in the relations between science, technology, and society as an undergraduate (in biochemistry) which coincided with the time of the student movement of the late 1960s. We began to study Jürgen Habermas in the framework of the "critical university," and I decided to continue with a second degree in philosophy. After the discussions between Luhmann and Habermas (1971), I recognized the advantages of Luhmann's more empirically oriented systems approach and I pursued my Ph.D. in the sociology of organization and labour.
In the meantime, we got the opportunity to organize an interfaculty department for Science and Technology Dynamics at the University of Amsterdam after a competition for a large government grant. In the context of this department, I became interested in methodology: how can one compare across case studies and make inferences? Actually, my 1995 book The Challenge of Scientometrics had a kind of Triple-Helix model on the cover: How do cognitions, texts, and authors exhibit different dynamics that influence one another?
For example, when an author publishes a paper in a scholarly journal, this may add to his reputation as an author, but the knowledge claimed in the text enters a process of validation which can be much more global and anonymous. These processes are mediated since they are based on communication. Thus, one can add to the context of discovery (of authors) and the context of justification (of knowledge contents) a context of mediation (in texts). The status of a journal, for example, matters for the communication of the knowledge content in the article. The contexts operate as selection environments upon one another.
In evolutionary economics, one is used to distinguishing between market and non-market selection environments, but not among more selection environments that are differently codified. At this point, Luhmann's theory offers a new perspective: The complex system of communications tends to differentiate in terms of the symbolic generalization of codes of communication because this differentiation among the codes of communication allows the system to process more complexity and to be more innovative in terms of possible translations. The different selection environments for communications, however, are not given but constructed historically over long periods of time. The modern (standardized) format of the citation, for example, was constructed at the end of the 19th century, but it took until the 1950s before the idea of a citation index was formulated (by Eugene Garfield). The use of citations in evaluative bibliometrics is even more recent.
In evolutionary economics, one distinguishes furthermore between (technological) trajectories and regimes. Trajectories can result from "mutual shaping" between two selection environments, for example, markets and technologies. Nations and firms follow trajectories in a landscape. Regimes are global and require the specification of three (or more) selection environments. When three (or more) dynamics interact, symmetry can be broken and one can expect feed-forward and feedback loops. Such a system can begin to flourish auto-catalytically when the configuration is optimal.
From such considerations, that is, a confluence of the neo-institutional program of Henry Etzkowitz and my neo-evolutionary view, our Triple Helix model emerged in 1994: how do institutions and functions interrelate and change one another or, in other words, provide options for innovation? Under what conditions can university-industry-government relations lead to wealth generation and organized knowledge production? The starting point was a workshop about Evolutionary Economics and Chaos Theory: New directions for technology studies held in Amsterdam in 1993. Henry suggested thereafter that we could collaborate further on university-industry relations. I answered that I needed at least three (sub)dynamics from the perspective of my research program, and then we agreed about "A Triple Helix of University-Industry-Government Relations". Years later, however, we took our two lines of research apart again, and in 2002 I began developing a Triple-Helix indicator of synergy in a series of studies of national systems of innovation.
What would you give as advice to students who would like to get into the field of innovation and global politics?
In general, I would advise them to be both a specialist and broader than that. Innovation involves crossing established borders. Learn at least two languages. If your background is political science, then take a minor in science & technology studies or in economics. One needs both the specialist profile and the potential to reach out to other audiences by being aware of the need to make translations between different frameworks. Learn to be reflexive about the status of what one can say in one or the other framework.
For example, I learned to avoid the formulation of grandiose statements such as "modern economies are knowledge-based economies," and to say instead: "modern economies can increasingly be considered as knowledge-based economies." The latter formulation provides room for asking "to what extent," and thus one can ask for further information, indicators, and results of the measurement.
In the sociology of science, specialisms and paradigms are sometimes considered as belief systems. It seems to me that by considering scholarly discourses as systems of rationalized expectations one can make the distinction between normative and cognitive learning. Normative learning (that is, in belief systems) is slower than cognitive learning (in terms of theorized expectations) because the cognitive mode provides us with more room for experimentation: One can afford to make mistakes, since one's communication and knowledge claims remain under discussion, and not one's status as a communicator. The cognitive mode has advantages; it can be considered as the surplus that is further developed during higher education. Normative learning is slower; it dominates in the political sphere.
What does the "Triple Helix" reveal about the fragmentation of "national innovation systems"?
In 2003, colleagues from the Department of Economics and Management Studies at the Erasmus University in Rotterdam offered me firm-level data from the Netherlands on more than a million Dutch firms, covering three dimensions: the economic, the geographical, and the technological. I presented the results at the Schumpeter Society in Turin in 2004, and asked whether someone in the audience had similar data for other countries. I expected Swedish or Israeli colleagues to have this type of statistics, but someone from Germany stepped in, Michael Fritsch, and so we did the analysis for Germany. These studies were first published in Research Policy. Thereafter, we did studies on Hungary, Norway, Sweden, and recently also China and Russia.
Several conclusions arise from these studies. Using entropy statistics, the data can be decomposed along the three different dimensions. One can decompose national systems geographically into regions, but one can also decompose them in terms of the technologies involved (e.g., high-tech versus medium-tech). We were mainly relying on national data. And of course, there are limitations to the data collections. Actually, we now have international data, but this is commercial data and therefore more difficult to use reliably than governmental statistics.
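The entropy decomposition described here rests on the mutual information among the three dimensions. As a minimal sketch, using made-up firm records rather than the actual Dutch or German data, the following computes the three-dimensional mutual information T(g,t,o) = H(g) + H(t) + H(o) − H(g,t) − H(g,o) − H(t,o) + H(g,t,o), which the Triple-Helix studies use as a synergy indicator; unlike its two-dimensional counterpart, this quantity can become negative, and negative values are read as synergy (redundancy).

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a frequency distribution given as counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

def triple_mutual_information(records):
    """T(g,t,o) = H(g)+H(t)+H(o) - H(g,t) - H(g,o) - H(t,o) + H(g,t,o).

    records: iterable of (geography, technology, organization) tuples,
    one per firm. Negative values indicate synergy among the dimensions.
    """
    def H(key):
        return entropy(Counter(key(r) for r in records).values())
    return (H(lambda r: r[0]) + H(lambda r: r[1]) + H(lambda r: r[2])
            - H(lambda r: (r[0], r[1])) - H(lambda r: (r[0], r[2]))
            - H(lambda r: (r[1], r[2])) + H(lambda r: r))

# Hypothetical firm records: (region, technology level, size class).
firms = [("west", "high", "small"), ("west", "high", "large"),
         ("east", "medium", "small"), ("east", "medium", "large"),
         ("west", "medium", "small"), ("east", "high", "large")]
print(round(triple_mutual_information(firms), 3))
```

In the published studies this measure is computed within and across regions, so that a national surplus can be compared with the sum of the regional contributions; the toy records above only illustrate the mechanics of the decomposition.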
For the Netherlands, we obtained the picture that would more or less be expected: Amsterdam, Rotterdam, and Eindhoven are the most knowledge-intensive and knowledge-based regions. This is not surprising, although there was one surprise: We know that in terms of knowledge bases, Amsterdam is connected to Utrecht, and then the geography extends a bit to the east in the direction of Wageningen. What we did not know was that the niche also spreads to the north in the direction of Zwolle. The highways to Amsterdam Airport (Schiphol) are probably the most important factor in this pattern.
In the case of Germany, when we first analyzed the data at the level of the "Laender" (Federal States), we could see the East-West divide still prevailing, but when we repeated the analysis at the lower level of the "Regierungsbezirke" we no longer found the East-West divide as dominant (using 2004 data). So, the environment of Dresden for example was more synergetic in Triple-Helix terms than that of Saarbruecken. And this was nice to see considering my idea that the knowledge-based economy increasingly prevails since the fall of the Berlin Wall and the demise of the Soviet Union. The discussion about two different models for organizing the political economy—communism or liberal democracy—had become obsolete after 1990.
After studying Germany, I worked with Balázs Lengyel on Hungarian data. Originally, we could not find any regularity in the Hungarian data, but then the idea arose to analyze the Hungarian data as three different innovation systems: one around Budapest, which is a metropolitan innovation system; one in the west of the country, which has been incorporated into Western Europe; and one in the east of the country, which has remained the old innovation system that is state-led and dependent on subsidies. For the western part, one could say that Hungary has been "europeanized" by Austria and Germany; it has become part of a European system.
When Hungary came into the position to create a national innovation system, free from Russia and the Comecon, it was too late: Europeanization had already stepped in, and national boundaries were no longer as dominant. Accordingly, and this was a very nice result, when assessing this synergy indicator on Hungary as a nation, we did not find additional synergy at the national (that is, above-regional) level. While we clearly found synergy at the national level for the Netherlands, and in Germany at the level of the Federal States, we could not find it at the national level for Hungary. Hungary probably developed too late to build a nationally controlled system of innovation.
A similar phenomenon appeared when we studied Norway; my Norwegian colleague, Øivind Strand, did most of our analysis there. To our surprise, the knowledge-based economy was not generated where the universities are located (Oslo and Trondheim), but on the West Coast, where the off-shore, marine, and maritime industries are most dominant. FDI (foreign direct investment) in the marine and maritime industries leads to knowledge-based synergy in the regions on the West Coast of Norway. Norway is still a national system, but the Norwegian universities like Trondheim or Oslo are not so much involved in entrepreneurial networks. These are traditional universities, which tend to keep their hands off the economy.
Actually, when we discussed these two cases, Norway and Hungary, which both show that internationalization had become a major factor, either in the form of Europeanization in the Hungarian case or in the form of foreign-driven investments (off-shore industry and oil companies) in the Norwegian case, I became uncertain and asked myself whether we believed too much in our indicators. Therefore, I proposed to Øivind that we study Sweden, given the availability of well-organized data on this national system.
We expected to find synergy concentrated in the three regional systems of Stockholm, Gothenburg, and Malmö/Lund. Indeed, 48.5 percent of the Swedish synergy is created in these three regions. This is more than one would expect on the basis of the literature. Some colleagues were upset, because they had already started trying to work on new developments of the Triple Helix, for example, in Linköping. But the Swedish economy is organized and centralized in this geographical dimension. Perhaps that is why one talks so much about "regionalization" in policy documents. Sweden is very much a national innovation system, with additional synergy between the regions.
Can governments alter historical trajectories of national, regional or local innovation systems?
Let me mention the empirical results for China in order to illustrate the implications of empirical conclusions for policy options. We had no Chinese data set, but we obtained access to the database Orbis of the Bureau van Dijk (an international company, which is Wall Street oriented, assembling data about companies) that contains industry indicators such as names, addresses, NACE-codes, types of technology, the sizes of each enterprise, etc. However, this data can be very incomplete. Using this incomplete data for China, we said that we were just going to show how one could do the analysis if one had full data. We guess that the National Bureau of Statistics of China has complete data. I did the analysis with Ping Zhou, Professor at Zhejiang University.
We analyzed China first at the provincial level, and as expected, the East Coast emerged as much more knowledge-intensive than the rest of the country. After that, we also looked at the next-lower level of the 339 prefectures of China. From this analysis, four of them popped up as far more synergetic than the others. These four municipalities were: Beijing, Shanghai, Tianjin, and Chongqing.
These four municipalities stood out as an order of magnitude more synergetic than other regions. Their special characteristic is that, unlike the others, these four municipalities are administered directly by the central government. Actually, this came out of my data and I did not understand it at first; but my Chinese colleague said that this result was very nice and explained the relationship.
The Chinese case thus illustrates that government control can make a difference. It shows – and that is not surprising, as China runs on a different model – that the government is able to organize the four municipalities in such a way as to increase synergy. Of course, I do not know what is happening on the ground. We know that the Chinese system is more complex than these three dimensions suggest. I guess the government agencies may wish to consider the option of extending the success of this development model, to Guangdong for example or to other parts of China. Isn't it worrisome that all the other and less controlled districts have not been as successful in generating synergy?
Referring more generally to innovation policies, I would advise as a heuristic that political discourse is able to signal a problem, but policy questions do not enable us to analyze the issues. Regional development, for example, is an issue in Sweden because the system is very centralized, more so than in Norway. But there is nothing in our data that supports the claim that the Swedish government has been successful in decentralizing the knowledge-based economy beyond the three metropolitan regions. We may be able to reach conclusions like these, which can serve as policy advice. One develops policies on the basis of intuitive assumptions, which a researcher is sometimes able to test.
As noted, one can expect a complex system continuously to produce unintended consequences, and thus it needs monitoring. The dynamics of the system are different from the sum of the sub-dynamics because of the interaction effects and feedback loops. Metaphors such as a Triple Helix, Mode-2, or the Risk Society can be stimulating for the discourse, but these metaphors tend to develop their own dynamics of proliferating discourses.
The Triple Helix, for example, can first be considered as a call for collaboration in networks of institutions. However, in an ecosystem of bi-lateral and tri-lateral relations, one has a trade-off between local integration (collaboration) and global differentiation (competition). The markets and the sciences develop at the global level, above the level of specific relations. A principal agent such as government may be locked into a suboptimum. Institutional reform that frees the other two dynamics (markets and sciences) requires translation of political legitimation into other codes of communication. Translations among codes of communication provide the innovation engine.
Is there a connection between infrastructures and the success of innovation processes?
One of the conclusions, which holds across all advanced economies, is that knowledge-intensive services (KIS) are not synergetic locally because they can be disconnected, or uncoupled, from the location. For example, if one offers a knowledge-intensive service in Munich and receives a phone call from Hamburg, the next step is to take a plane to Hamburg, or perhaps to catch a train within Germany. Thus, it does not matter whether one is located in Munich or Hamburg, as knowledge-intensive services uncouple from the local economy. The main point is proximity to an airport or train station.
This is also the case for high-tech knowledge-based manufacturing. But it is different for medium-tech manufacturing, because in this case the dynamics are more embedded in the other parts of the economy. If one looks at Russia, the knowledge-intensive services operate differently from the Western European model, where the phenomenon of uncoupling takes place. In Russia, KIS contribute to coupling, as knowledge-intensive services are related to state apparatuses.
In the Russian case, the knowledge-based economy is heavily concentrated in Moscow and St. Petersburg. So, if one aims, as the Russian government proclaims, to create not "wealth from knowledge" but "knowledge from wealth" (that is, from oil revenues), it might be wise to uncouple the knowledge-intensive services from the state apparatuses. Of course, this is not easy to do in the Russian model because traditionally, the center (Moscow) has never done this. Uncoupling knowledge-intensive services, however, might give them a degree of freedom to move around, from Tomsk to Minsk or vice versa, steered by economic forces more than they currently are (via institutions in Moscow).
Final question. What does path-dependency mean in the context of innovation dynamics?
In The Challenge of Scientometrics: The Development, Measurement, and Self-Organization of Scientific Communications (1995), I used Shannon-type information theory to study scientometric problems, as this methodology combines both static and dynamic analyses. Building on this theory, I developed a measurement method for path-dependency and critical transitions.
In the case of a radio transmission, for example, you have a sender and a receiver, and in between you may have an auxiliary station. For instance, the sender is in New York, the receiver is in Bonn, and the auxiliary station is in Iceland. The signal originates in New York and travels to Bonn, but it may be possible to improve the reception by listening to Iceland instead of New York. When Iceland provides a better signal, one can forget the history of the signal before it arrived in Iceland. It no longer matters whether Iceland obtained the signal originally from New York or Boston. One takes the signal from Iceland, and the pre-history of the signal no longer matters for the receiver.
Such a configuration provides a path-dependency (on Iceland) in information-theoretical terms, measurable in terms of bits of information. In a certain sense you get negative bits of information, since the shortest path in the normal triangle would be from New York to Bonn, and in this case the shortest path is from New York via Iceland to Bonn. I called this at the time a critical transition. In a scientific text for instance, a new terminology can come up and if it overwrites the old terminology to the extent that one does not have to listen to the old terminology anymore, one has a critical transition that frees one from the path-dependencies at a previous moment of time.
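The relay example can be restated as a Markov chain New York → Iceland → Bonn: once the receiver conditions on Iceland's signal, the original sender contributes no further information, that is, the conditional mutual information I(NY; Bonn | Iceland) is zero. The sketch below is my own toy illustration with an assumed binary noisy channel, not anything from the book, and verifies this numerically.

```python
from itertools import product
from math import log2

def H(p):
    """Shannon entropy (bits) of a distribution given as a dict of probabilities."""
    return -sum(v * log2(v) for v in p.values() if v > 0)

def marginal(joint, axes):
    """Marginalize a joint distribution over the given tuple of axis indices."""
    out = {}
    for key, p in joint.items():
        sub = tuple(key[i] for i in axes)
        out[sub] = out.get(sub, 0.0) + p
    return out

# Markov chain X -> Y -> Z: Y is a noisy copy of X, Z a noisy copy of Y.
# p(x, y, z) = p(x) * p(y|x) * p(z|y), with toy binary channels at 10% noise.
px = {0: 0.5, 1: 0.5}
channel = lambda a, b: 0.9 if a == b else 0.1
joint = {(x, y, z): px[x] * channel(x, y) * channel(y, z)
         for x, y, z in product([0, 1], repeat=3)}

# Conditional mutual information I(X; Z | Y) = H(X,Y) + H(Y,Z) - H(Y) - H(X,Y,Z).
i_xz_given_y = (H(marginal(joint, (0, 1))) + H(marginal(joint, (1, 2)))
                - H(marginal(joint, (1,))) - H(joint))
print(round(i_xz_given_y, 6))  # ~0: given the relay's signal, the sender adds nothing
```

A critical transition, in this vocabulary, corresponds to the moment when conditioning on the intermediate station (or the new terminology) makes the earlier history redundant for the receiver.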
Thus, my example is about radical and knowledge-based changes. As long as one has to listen to the past, one does not make a critical transition. The knowledge-based approach is always about creative destruction and about moving ahead, incorporating possible new options in the future. The hypothesized future states become more important than the past. The challenge, in my opinion, is to make the notion of options operational and to bring these ideas into measurement. The Triple-Helix indicator measures the number of possible options as additional redundancy. This measurement has the additional advantage that one becomes sensitive to uncertainty in the prediction.
Loet Leydesdorff is Professor Emeritus at the Amsterdam School of Communications Research (ASCoR) of the University of Amsterdam. He is Honorary Professor of the Science and Technology Policy Research Unit (SPRU) of the University of Sussex, Visiting Professor at the School of Management, Birkbeck, University of London, Visiting Professor of the Institute of Scientific and Technical Information of China (ISTIC) in Beijing, and Guest Professor at Zhejiang University in Hangzhou. He has published extensively in systems theory, social network analysis, scientometrics, and the sociology of innovation (see at http://www.leydesdorff.net/list.htm). With Henry Etzkowitz, he initiated a series of workshops, conferences, and special issues about the Triple Helix of University-Industry-Government Relations. He received the Derek de Solla Price Award for Scientometrics and Informetrics in 2003 and held "The City of Lausanne" Honor Chair at the School of Economics, Université de Lausanne, in 2005. In 2007, he was Vice-President of the 8th International Conference on Computing Anticipatory Systems (CASYS'07, Liège). In 2014, he was listed as a highly-cited author by Thomson Reuters.
Literature and Related links:
Science & Technology Dynamics, University of Amsterdam / Amsterdam School of Communications Research (ASCoR)
Leydesdorff, L. (2006). The Knowledge-Based Economy: Modeled, Measured, Simulated. Universal Publishers, Boca Raton, FL.
Leydesdorff, L. (2001). A Sociological Theory of Communication: The Self-Organization of the Knowledge-Based Society. Universal Publishers, Boca Raton, FL.
Leydesdorff, L. (1995). The Challenge of Scientometrics: The Development, Measurement, and Self-Organization of Scientific Communications. DSWO Press, Leiden University, Leiden.
http://www.leydesdorff.net/
School of Global Studies, University of Gothenburg