From the introduction: Environmental policy is globalizing, and international climate protection is one of the greatest global challenges of the 21st century. The global environment is being used beyond its capacity for self-renewal and has thereby become a scarce resource. The feared global warming caused by using the Earth's atmosphere as a dump for greenhouse gases is only one example. Global environmental media are indivisible: physical possession of them can neither be appropriated nor defended against interference by others. Global environmental goods are characterized by the fact that all peoples share in them and suffer from their collective destruction. Herein lies the "tragedy of the commons": the common environment is damaged jointly because the gains from its use accrue privately, while the costs of that use must be borne by all countries. Since no exclusion principle applies to environmental pollution, international environmental policy exhibits free-rider behavior: if some states take measures to protect the environment, all benefit from them. Free riders therefore have no incentive to bear the cost of such measures themselves, and as a result effective international environmental protection fails to come about in the first place. Environmental problems have so far been solvable only at the national level: in most industrialized countries the most pressing environmental problems, such as soil and water pollution, could be addressed nationally, because there the state can prohibit environmentally harmful behavior by coercion. No such coercion can be enforced at the international level; there is no international authority that can impose environmental protection measures on states. The United Nations would be conceivable as a supreme environmental protection authority, but it lacks the legitimacy. Global environmental problems must therefore be met not with coercion but with incentives.
Game theory explains how such a stable international environmental agreement should be designed to be incentive-compatible. Public-choice theory explains how the different positions, and the conflicts resulting from them, arise in the negotiations of international environmental policy. In this thesis I single out international climate policy as a concrete global environmental problem and examine what contribution game theory and public-choice theory make to explaining the development of international climate policy. Chapter 2 first describes climate change and its recognition as a problem, and traces the course of international climate policy and its results. Section 3 presents the contribution of game theory to explaining the development of international climate policy. The theoretical background on the relevant elements of game theory in Section 3.1 is followed in Section 3.2 by an approach to a stable climate protection agreement: the potential conflicts of an international climate protection treaty are described in game-theoretic terms, and solution concepts for a climate agreement that is stable and incentive-compatible from the perspective of game theory are then set out. Section 3.3 finally shows the contribution of game theory to explaining the development of international climate policy. A game-theoretic analysis demonstrates that the solution approaches worked out in Section 3.2 can be found in the results of the international climate negotiations, and that individual positions of actors in climate policy can also be explained game-theoretically. Public-choice theory, by contrast, is not a normative theory but an explanatory one.
The approach to presenting the contribution of public-choice theory to explaining the development of international climate policy therefore differs: no normative derivation as in the game-theoretic case of Section 3.2 is carried out. Section 4.1 describes the theoretical background of public-choice theory. Section 4.2 identifies the various states and groups of states as actors in international climate policy, and Section 4.3 then sketches the lines of conflict in international climate policy. Section 4.4 presents the contribution of public-choice theory to explaining international climate policy: against the background of Sections 4.2 and 4.3, the positions of the actors in international climate policy are compared with the public-choice model of Section 4.1. This comparison shows the contribution of public-choice theory to explaining the development of international climate policy.

Table of contents:
I. List of figures
II. List of tables
III. List of abbreviations
1. Introduction
2. Climate change and international climate policy
2.1 Causes of climate change and its consequences
2.2 From recognizing the climate problem to recognizing the need for action
2.3 Stages of international climate policy
2.4 Cornerstones and results of the climate negotiations
2.4.1 Cornerstones of the climate negotiations
2.4.2 The Kyoto Protocol: reduction obligations
2.4.3 The Kyoto Protocol: measures and provisions
3. The contribution of game theory to explaining the development of international climate policy
3.1 Theoretical background
3.1.1 Prisoner's dilemma and Nash equilibrium
3.1.2 Chicken game
3.1.3 Tit-for-tat strategy
3.1.4 Coase theorem
3.2 A stable climate protection agreement
3.2.1 Potential conflicts of an international CO2 treaty from the perspective of game theory
3.2.1.1 Cooperation versus non-cooperation
3.2.1.2 Prisoner's and free-rider dilemma
3.2.1.3 Efficiency and distribution aspects
3.2.1.4 Treaty violation and treaty stability
3.2.1.5 Supergames and hypergames
3.2.2 Coalition formation in international environmental negotiations
3.2.2.1 Description of a coalition model
3.2.2.2 The process of coalition formation as a numerical example
3.2.3 Game-theoretic solution concepts
3.2.3.1 Internal participation and stabilization incentives
3.2.3.2 External participation and stabilization incentives
3.2.3.3 Efficiency
3.3 Game-theoretic analysis of international climate policy
3.3.1 Cooperation decisions
3.3.2 Treaty design
3.3.2.1 Efficiency and distribution aspects
3.3.2.2 Issue linkage
3.3.2.3 Sanctions
3.4 Conclusion to Chapter 3
4. The contribution of public-choice theory to explaining the development of international climate policy
4.1 Theoretical background
4.1.1 Voters
4.1.2 Interest groups
4.1.3 Politicians
4.1.4 Bureaucrats
4.2 Conflict parties in international climate policy
4.2.1 The European Community
4.2.2 The USA and the JUSSCANZ and UMBRELLA states
4.2.3 The developing countries
4.2.4 Non-governmental organizations
4.2.4.1 Environmental organizations and science
4.2.4.2 Emitter associations and trade unions
4.2.4.3 Climate-protection industry and service associations
4.3 The lines of conflict in international climate policy
4.4 Analysis of the positions and conflict lines of international climate policy from the perspective of public-choice theory
4.4.1 Voters and the public
4.4.2 Interest groups and non-governmental organizations
4.4.3 Politicians
4.4.4 Bureaucrats
4.5 Conclusion to Chapter 4
5. Conclusion
IV. Bibliography

Sample text: Chapter 3.1.3, Tit-for-tat strategy: The tactic underlying this strategy can be described by the motto "as you do to me, so I do to you". It is thus a strategy for a multi-period game.
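As a minimal illustration (not taken from the thesis), the rule can be simulated in a repeated prisoner's dilemma; the payoff values used here (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for unilateral defection) are an assumed standard convention:

```python
# Iterated prisoner's dilemma: tit-for-tat against another tit-for-tat
# player and against an unconditional defector. Payoffs are assumed
# illustrative values, not figures from the thesis.

C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(my_history, opp_history):
    # Cooperate in the first round, then copy the opponent's last move.
    return C if not opp_history else opp_history[-1]

def always_defect(my_history, opp_history):
    return D

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a); hist_b.append(move_b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

# Two tit-for-tat players cooperate in every round: 3 points each per round.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# Against a defector, tit-for-tat loses only the opening round and then
# retaliates, so the deficit stays bounded at a single round's loss.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

This reproduces the two properties discussed in the text: a tit-for-tat player never falls behind another tit-for-tat player, and against a defector the maximum deficit remains comparatively small.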
A player using this strategy will always do what his opponent has just done. However, although the name does not reveal it, the player always cooperates at the outset; it is therefore a friendly strategy. When two tit-for-tat players meet, they cooperate throughout. Tit-for-tat became known as a successful strategy in the repeated prisoner's dilemma, in which two prisoners are pressed to incriminate each other; the outcome of that game was already explained in 3.1.1. If the prisoners face this decision repeatedly, and each knows the other's previous decision, there are various strategies for playing the game successfully, and tit-for-tat is one of the most successful. In this game it means that a prisoner enters the game cooperatively and helps the other participant by remaining silent. If the other prisoner does not remain silent, the tit-for-tat player retaliates in the following round by not remaining silent either. He is, however, willing to forgive immediately: if the fellow player mends his ways and plays cooperatively again, he too will play cooperatively in the next round. Over several rounds one can thus never do better than one's own opponent, but in return the maximum deficit is comparatively small; if the other player also plays tit-for-tat, no deficit arises at all. In a game with several players, by contrast, one often does better than players with other strategies, since cooperation pays off there while the tit-for-tat strategy cannot be exploited. If this strategy is transferred to international environmental agreements such as emission reductions, one example of a response to non-cooperative behavior by an opponent is punishment through re-optimization of one's abatement quantity. The remaining cooperating states are entitled to adjust their emission quantities to the new circumstances, that is, to raise them. If the aggregate equilibrium emissions of the remaining coalition members rise in this process, this constitutes a "punishment" of the treaty-breaking coalition member, since it too suffers from the expansion of emissions. The defecting state is, however, readmitted to the coalition once it has done sufficient "penance", which it can do by paying a fine or by correspondingly over-obligatory emission-abatement measures. Chapter 3.1.4, Coase theorem: Ronald Coase developed an approach to internalizing external effects through negotiations between the actors involved in an externality, which in this case is the pollution of the Earth's atmosphere. In a market system of perfect competition, the Coase theorem rests on the assumption that transaction costs do not exist. The actors must be given the opportunity to enter into negotiations in order to reach an agreement advantageous to both sides. This requires no state intervention in the price system, only the unambiguous assignment of the property rights with which the external effects are associated; with regard to international climate policy, this means pollution rights to the Earth's atmosphere. Coase describes two polar approaches to the bargaining solution. First, the laissez-faire rule (non-liability rule): in the absence of legal regulation, the polluter is not liable for the damage he causes and may carry on his activity at any level he chooses.
To induce the polluter to reduce the external effect, the injured party must bribe him. Second, the polluter-pays rule (liability rule): if the property rights lie with the injured party, the polluter is not permitted to take up an activity from which external effects emanate. If the polluter nevertheless wishes to do so, he must pay the injured party compensation for tolerating the external effect. Negotiations between the two actors always lead to a Pareto-optimal allocation of resources; this statement is known as the efficiency thesis of the Coase theorem. For optimality it does not matter who holds the property rights. The Pareto-optimal level of the activity causing the external effect lies where the marginal cost of avoiding the external effect equals the "marginal suffering" of the injured party, i.e. the marginal cost of eliminating the external effect. Chapter 3.2, A stable climate protection agreement: The contribution of game theory to explaining the development of international climate policy lies mainly in the decision problems of the individual sovereign states as players in this field. These players face the decision whether to join the international climate regime and, having joined, the decision whether to comply with the obligations thereby entered into. The design of the Kyoto Protocol and its mechanisms can also be explained by game-theoretic approaches. First, the potential conflicts of an international climate protection treaty are analyzed game-theoretically.
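The efficiency thesis of the Coase theorem described above can be illustrated with a small numerical sketch; the quadratic benefit and damage functions are assumptions chosen for this example, not figures from the text:

```python
# Illustrative Coase bargaining sketch: a polluter gains benefit(x) from
# activity level x, the injured party suffers damage(x). Bargaining under
# either assignment of property rights maximizes the joint surplus, so the
# chosen level is where marginal benefit equals marginal damage.

def benefit(x):   # polluter's gain from activity level x (assumed form)
    return 20 * x - x ** 2

def damage(x):    # injured party's harm from activity level x (assumed form)
    return x ** 2

# Search the joint surplus over a fine grid of activity levels.
levels = [x / 100 for x in range(0, 2001)]
x_star = max(levels, key=lambda x: benefit(x) - damage(x))

# Marginal benefit 20 - 2x equals marginal damage 2x at x = 5.
print(x_star)  # 5.0

# Under the laissez-faire rule the injured party pays the polluter to cut
# back from the private optimum (x = 10) to x_star; under the liability
# rule the polluter compensates the injured party for operating at x_star.
# Only the distribution of the surplus differs, not the chosen level.
```

The grid search lands on the same level regardless of who holds the rights, which is exactly the invariance claim of the efficiency thesis.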
The description of coalition formation in international environmental treaties then leads to a theoretical climate agreement that is made so contract-stable by ex-ante and ex-post incentives that the obligations assumed by the signatories are fulfilled even by opportunistic states. Johannes Heister describes such a treaty as an international CO2 treaty, comparable to a climate agreement with CO2 reduction targets. This construct, with its theoretically derived incentives and sanction mechanisms, is then compared with the results of international climate policy for substantive agreement, in order to see what explanatory contribution game theory makes in this field. Chapter 3.2.1, Potential conflicts of an international CO2 treaty from the perspective of game theory: It can be assumed that many treaties between sovereign states never come about at all because they cannot be enforced, so that possible welfare gains are lost to the world. This also applies to international environmental treaties such as an international climate agreement. Global environmental problems such as (man-made) climate change are generally characterized by lasting, global externalities whose control requires the multilateral cooperation of (almost) all sovereign states. The Earth's atmosphere can be understood as a global environmental medium used jointly by all countries. The use of the atmosphere as a sink for CO2 emissions by individual countries worsens climatic conditions for all other countries as well. CO2 emissions are therefore, regardless of where they occur, a "public bad" for the entire community of nations: they produce external costs that are borne not by the polluter but by third countries.
States neglect the external costs they cause in their individual cost-benefit calculations. They merely compare the reduction in environmental damage achieved by their own abatement efforts, which they can influence, with their individual abatement costs, taking the emissions of all other countries as given. The external costs of CO2 emissions have their counterpart in the external benefits of climate-protection measures. Unilateral abatement by a single country can therefore be understood as the production and provision of an international public good, since the advantages of a lower atmospheric CO2 concentration flow to all countries as an external benefit without their having to provide anything in return. From the theory of public goods, however, it is known that the uncoordinated, individual provision of public goods remains suboptimal. The same holds for climate protection when states pursue it solely out of self-interest. Beyond that, the external benefit may even induce individual countries to scale back their climate-protection efforts, since the problem has become less pressing once others have reduced their emissions. For these reasons it can be in the interest of most countries, from a purely national point of view, not to reduce their CO2 emissions at all or only slightly. Consequently, the climate-protection efforts of all countries remain far below the global optimum that would be reached if every country took the global external effects of its national energy and CO2 policies into account when setting its emission targets.
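The gap between self-interested and globally optimal abatement can be sketched in a stylized symmetric model (the functional forms are assumptions for illustration, not Heister's model): each of n identical countries chooses abatement a at cost a**2 / 2, while every country gains b per unit of total abatement.

```python
# Nash vs cooperative provision of abatement as a public good.
# Acting alone, a country equates its marginal cost (a) with only its own
# marginal benefit (b); the global optimum equates it with the marginal
# benefit to all n countries (n * b).

def nash_abatement(b, n):
    return b          # marginal cost a = own marginal benefit b

def cooperative_abatement(b, n):
    return n * b      # marginal cost a = global marginal benefit n*b

def welfare(a, b, n):
    # Total welfare when each of n countries abates a: per-country benefit
    # is b times total abatement (b * n * a), per-country cost is a**2 / 2.
    return n * (b * n * a - a ** 2 / 2)

b, n = 1.0, 10
print(nash_abatement(b, n))         # 1.0 per country
print(cooperative_abatement(b, n))  # 10.0 per country
# Uncoordinated provision stays at a tenth of the optimum here, because
# each country ignores the external benefit to the other n - 1 countries.
```

The welfare comparison confirms the text's point: total welfare at the cooperative level strictly exceeds total welfare at the Nash level.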
To overcome the non-cooperative behavior of sovereign states that maximize only their own utility, and to implement CO2 policies that yield the greatest possible net benefit, an international coordination mechanism must be established. Because a central authority is lacking, this mechanism must link the climate-protection efforts of each state with the reciprocal efforts of all other states in such a way that benefits received and benefits provided are mutually conditional. Such a link can be an international CO2 treaty that lays down a cooperative CO2 strategy and specifies the permitted emissions for each contracting party, or an equivalent allocation mechanism, so that the global net benefit is maximized, i.e. the sum of global climate damages and abatement costs is minimized. In any case it must be ensured that the agreed CO2 reduction is carried out efficiently, that is, at the lowest possible global cost. Because sovereign states must cooperate voluntarily, two further restrictions arise compared with national environmental policy: the net gains must be distributed so that no state is made worse off by the cooperative climate policy than without it, and precautions must be taken to protect the CO2 treaty against violations by opportunistic states. The points raised can be well illustrated within the following model by Heister.
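As a minimal sketch of the first restriction just named, with purely hypothetical payoff numbers (this is not Heister's model), one can check whether a proposed treaty leaves any state worse off than the non-cooperative status quo and what side payments would repair that:

```python
# Participation-constraint check for a hypothetical three-state treaty.
# Payoffs are invented for illustration only.

status_quo  = {"A": 10.0, "B": 8.0, "C": 5.0}   # payoffs without the treaty
cooperative = {"A": 15.0, "B": 7.0, "C": 9.0}   # payoffs under the treaty

global_gain = sum(cooperative.values()) - sum(status_quo.values())
losers = {s: status_quo[s] - cooperative[s]
          for s in status_quo if cooperative[s] < status_quo[s]}

# The treaty raises global welfare by 8, but B loses 1 and would not sign.
# A transfer of at least 1 to B satisfies every participation constraint
# while leaving a surplus of 7 to distribute among the winners.
print(global_gain)  # 8.0
print(losers)       # {'B': 1.0}
```

The same bookkeeping illustrates why the distribution of net gains, not only their total, decides whether a voluntary treaty among sovereign states is feasible.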
For more than two centuries Santander has been one of the pioneering departments in the garment industry, which has made Bucaramanga rank among the principal cities working intensively in this sector, alongside regions such as Medellín, Bogotá, Cali, Pereira, Barranquilla and Ibagué. Although Santander is one of the departments with the smallest share of national exports, its growth in recent years has exceeded the national average. The garment industry, made up largely of small and medium-sized enterprises, faces the dilemma of competitiveness; these firms matter greatly to Colombia for the economic and social value they generate. This is why many organizations have turned their attention to SMEs, seeking ways to support and guide them through the current situation. The balance has tilted in their favor: together with the ongoing development of information and communication technologies, this support allows them to modernize and keep up with business requirements. The government, for its part, has sought to improve the economic environment of Colombia and its industries, making changes to the economic model and to national policies intended to push the country and its companies to compete in a global market, and to bring productivity and competitiveness under deeper study as key topics for the industry. This strengthening is sought especially at a moment when the FTAA is regarded as a major threat: if companies are not prepared to compete, they will very likely be unable to maintain their current levels of sales and market share, which will be taken by more competitive firms better suited to today's requirements. 
At present, the government and support institutions are seeking mechanisms to strengthen and benefit the country, taking advantage of the sector's high international recognition for quality, price and service. Santander offers a wide range of products that effectively drive the production chain, among them own-brand children's clothing and maquila (contract manufacturing) processes. The government has recognized the importance of implementing a set of instruments that allow the country's productive sectors to develop in an orderly and coherent way. Among the instruments the government has led for the development of these sectors are the Development Plan, the Strategic Export Plan, the Competitiveness and Productivity Policy, the Red Colombia Compite network, the CARCEs, and the promotion of cluster development. These strategic actions aim to encourage the industry's export vocation, improve competitiveness, and considerably increase sales and job creation. The goal is to make the private sector and institutions aware of the need to work cooperatively in order to grow stronger and reach adequate levels of international competitiveness, thereby facilitating the strengthening of the industry.

Instituto Tecnológico y de Estudios Superiores de Monterrey (ITESM)

Table of contents:
INTRODUCTION
1. THEORETICAL FRAMEWORK
1.1 Fundamentals of productivity and competitiveness
1.2 Strategic thinking and competitiveness
1.3 Approaches to competitiveness
1.3.1 Traditional competitiveness model
1.3.1.1 Competitiveness at the internal level
1.3.1.2 Competitiveness at the external level
1.3.1.3 Analysis of the environment
1.3.2 Contemporary competitiveness
1.3.2.1 Competitiveness based on internal resources and capabilities
1.3.2.2 Competitiveness based on information technologies
1.4 Strategic options
1.4.1 Traditional strategic logic
1.4.2 Contemporary strategic logic
1.5 Strategic support tools
1.5.1 Strategic benchmarking
1.5.2 Competitive intelligence
2. METHODOLOGY
2.1 Type of research
2.1.1 With respect to the ends
2.1.2 With respect to the means
2.1.3 Data collection
2.2 Treatment of the collected data
2.3 Limitations of the method
2.4 Population and sample
3. TEXTILE AND GARMENT INDUSTRY
3.1 Overview of the industry
3.2 The garment industry at the international level
3.3 The garment industry in Colombia
3.4 The garment industry in Bucaramanga
4. INFORMATION TECHNOLOGY AND COMPETITIVENESS
5. COMPETITIVENESS FACTORS OF THE GARMENT INDUSTRY IN BUCARAMANGA
6. TOWARDS A PROPOSAL FOR THE SECTOR
6.1 Possible strategic alternatives for the industry
7. CONCLUSIONS AND FUTURE WORK
BIBLIOGRAPHY
GLOSSARY OF TERMS
ANNEXES

Master's thesis (Maestría)
ÖZETSon yıllarda tüm dünyada yoğun biçimde özellikle de gelişmekte olan ülkelerde bankacılık krizleri yaşanmaktadır. Gelişmekte olan ülkelerde finansal liberalizasyon sürecinde denetleme ve düzenleme kurumlarına yeterince önem verilmemesi ve iç ve dış etmenlerden kaynaklanan makroekonomik istikrarsızlıklar bankacılık krizlerinin ana nedenleri olarak gösterilebilir. Faiz oranları ve döviz kurlarındaki dalgalanmalar ve vade uyumsuzlukları finansal ve finansal olmayan kurumlar için piyasa riskinin ve kırılganlığın artmasına neden olmuştur. Bankacılık sektöründe devletin verimsiz bir şekilde bulunması ve kredilerin banka sahipleri ile bağlantılı kişilere verilmesi ve sektörde etkinliğin arttırılmaması sistemik banka krizlerinin artış trendinde önemli etmenlerdir. Diğer yandan muhasebe standartları, yasal düzenlemeler, şeffaflık, risk yönetimi konularındaki eksiklikler bankacılık sektörü gözetim ve denetim otoritelerindeki boşluktan kaynaklanmıştır. Tüm bu bileşenler ülke ve zamana göre farklılık göstermekle birlikte bankacılık sektöründe yaşanan sistemik krizlere neden olan en temel etmenlerdir.Türkiye'de 1980 sonrası yaşanan finansal liberalizasyon 1989 yılında sermaye hareketlerinin de serbestleştirilmesi ile tamamlanmıştır. Bu süreçte finansal piyasaların derinleştirilmesi hedefine ulaşılsa da, seksenli yılların ilk yarısında yakalanan ekonomik istikrar doksanlı yıllarla birlikte kendini ekonomik dalgalanmalara bırakmıştır. Yüksek kamu açıkları iç borçlanma ile finanse edilmiş, devletin borçlanma gereği özel sektör yatırımlarını dışlamıştır. Bankacılık sektöründe finansal aracılık faaliyetlerinin payı düşmüş, kamu borcunun finansmanı ve arbitraj gelirleri sektörün en önemli gelir kaynakları haline gelmiştir. Makroekonomik istikrarsızlığa son vermek amacıyla IMF ile 2000 yılı başında döviz kuru çıpasına dayalı bir istikrar programı anlaşması imzalanmıştır. 
Ne var ki programın maliye ve para politikası hedefleri tutturulsa da enflasyon beklentilerinin inatçılığı istenen ölçüde kırılamamıştır. Döviz kurunun olağanüstü değerlenmesi ve büyüme sürecine giren ekonomide ithalat harcamalarının artması, cari dengenin beklenenin iki katı (GSMH'nın%5'i) açık vermesine neden olmuştur. İç politikadaki belirsizlikler ve dış ekonomik dalgalanmalar programa olan güvenin azalmasına neden olmuş ve ard arda yaşanan iki kriz sonrası kur çıpası hedefi terk edilip döviz kuru serbest dalgalanmaya bırakılmıştır.İstikrar programının uygulanma sürecinde, zaten zayıf olan bankacılık sistemi programın yapısı gereği piyasa risklerine çok daha açık hale gelmiş ve kırılgan bir hal almıştır. Kriz döneminde faiz oranlarının inanılmaz seviyelere ulaşması, döviz kurunun ise aşırı değer kaybetmesi birçok bankanın sermaye yapısını olumsuz yönde etkilemiştir. Kamu bankalarının büyük montanlı "görev zararlarını" çok kısa vadeli kaynaklarla fonlaması piyasada baskı yaratmıştır. Ekonomideki yavaşlama reel sektör bilançolarının da bozulmasına neden olmuş finansal sektörde geri dönmeyen kredilerin oranı giderek artmıştır. Bütün bu problemlerin çözümü için 2001 yılı ortasında Bankacılık Sektörü Yeniden Yapılandırma Programı yürürlüğe konmuştur. Kamu bankalarının yeniden yapılandırılması ile işe başlanmış, kısa vadeli borçlanma ihtiyacını ortadan kaldıracak önlemler alınmış, eriyen sermaye yapısı onarılmıştır. Tahsili gecikmiş alacakları toplam kredilerinin 'ine ulaşan kamu bankalarına aktarılan kaynağın 2001 GSMH'na oranı .8'i bulmuştur. Kriz sonrası zor duruma düşen ve bankacılık sektöründe toplam paya sahip olan pek çok özel bankaya Tasarruf Mevduatı Sigorta Fonu (TMSF) tarafından el konulmuştur. Bu bankaların rehabilitasyon ve satışı için kamu kaynaklarından 2001 yılı GSMH'nın .9'u kadar kaynak ayrılmıştır. 
Bunlara ilaveten krizden olumsuz etkilenen diğer özel bankalar için sermayelerinin güçlendirilmesi ve tahsili gecikmiş alacaklarının çözümü için girişimlerde bulunulmuştur. Ayrıca etkin çalışan, global rekabete açık ve daha güçlü bir bankacılık sektörüne sahip olabilmek için düzenleme ve denetleme yapısının güçlendirilmesi çalışmalarına hız verilmiştir. Şimdiye kadar kayda değer bir ilerleme kaydedilmiş olsa da, yeniden yapılandırma programının başarısı, istikrarlı bir ekonomik ve politik düzenle doğrudan ilişkilidir. Uzun dönemdeki hedef büyüme sürecini yeniden yakalayıp ve o çizgide devam edip olası Avrupa Birliği üyeliğine zemin hazırlamaktır.SUMMARY (Banking Crises and Turkish Banking Sector Restructuring Program)In recent years there have been an increasing trend in banking crises in both developing and developed countries. Attaching inadequate importance to the regulatory and supervisory institutions during the financial liberalization process and macroeconomic instability due to domestic and foreign disturbances are the main determinants of banking crises in the developing countries. Volatility in interest and foreign exchange rates and maturity mismatches increased market risk and vulnerability of both financial and nonfinancial institutions. Inefficient occurrence of government in the banking sector, increase in connected lending and no correction policies for the inefficiencies in the sector are important factors contributing to the increasing trend of the systemic banking crises. Deficiencies in accounting and disclosure standards, legal arrangements and risk management arise from the absence of sound regulatory and supervisory framework in these countries. Even if it can change from country to country and from time to time all the above stated distortions are the main determinants of banking crises.Financial liberalization process in Turkey after 1980 has been completed by the liberalization of capital account in 1989. 
During this process, even though the target of financial deepening was attained, the economic stability reached in the 1980s gave way to the economic instability of the 1990s. The government's high borrowing requirement was financed through domestic borrowing, and private sector investment was crowded out. The share of financial intermediation declined, and financing government debt and arbitrage gains became the main sources of income for the banking sector. At the beginning of 2000, Turkey signed a stabilisation program with the IMF based upon an exchange rate anchor. The main target of the program was to attain macroeconomic stability. Although the fiscal and monetary targets of the program were achieved, inflation expectations could not be reduced. The extraordinary appreciation of the exchange rate and the growth-induced increase in import expenses pushed the current account deficit to twice its target, reaching 5% of GNP. Uncertainties in domestic politics and instability in the foreign economic environment caused a decline in confidence in the program. After the Nov. 2000 and Feb. 2001 crises the program became unsustainable; the government had to abandon the peg and float the currency. During the implementation of the stabilisation policy, the already weak banking sector became, owing to the structure of the program, more open to market risks and more fragile. The rise of interest rates to extraordinary levels and the sharp depreciation of the exchange rate negatively influenced the capital structure of many banks. The state banks' large financing requirements for their "duty losses" put pressure on the markets. The crises led to a serious contraction in the economy, which exerted an adverse impact on the asset quality of the banking sector and increased nonperforming credits. In consequence, in mid-2001 the government adopted the "Banking Sector Restructuring Program" in order to eliminate these problems. 
The program started with the restructuring of the state banks: measures were taken to limit their short-term exposure, and their depleted capital base was restored. With 45% of the state banks' total loans nonperforming, the total cost of their financial restructuring was 15.8% of 2001 GNP. The banks that became insolvent after the crises and were taken over by the Savings Deposit Insurance Fund accounted for 14% of banking sector assets. For the rehabilitation and sale of these banks, resources equal to 11.9% of 2001 GNP were allocated from public funds. In addition, for other banks that were affected adversely by the crises, steps have been taken to strengthen the capital base and to resolve the problem of nonperforming assets. To achieve an efficiently working, globally competitive and sound Turkish banking sector, strengthening the regulatory and supervisory framework is essential. Although significant progress has been made in implementing the program so far, its successful completion depends on political and economic stability. For the longer term, the challenge is to recapture growth momentum and translate it into sustained economic convergence, as a basis for prospective entry into the European Union.
Dzud is the Mongolian term for a winter weather disaster in which deep snow, severe cold, or other conditions render forage unavailable or inaccessible and lead to high livestock mortality. Dzud is a regular occurrence in Mongolia, and plays an important role in regulating livestock populations. However, dzud, especially when combined with other environmental or socio-economic stresses and changes, can have a significant impact on household well-being as well as on local and national economies, and these impacts remain poorly documented. This study aims to fill this gap in knowledge by conducting in-depth case studies of four communities' responses to the 2009-2010 dzud, documenting both household- and community-level impacts and responses. The case studies use a mixed-methods approach employing qualitative and quantitative data collection and analysis techniques, including interviews, focus groups, household questionnaires, photovoice and document review, and were carried out in two soums (districts) located in the forest-steppe zone of Arkhangai Aimag (province), Ikhtamir and Undur Ulaan, and two soums in the Gobi desert-steppe zone of Bayankhongor Aimag, Jinst and Bayantsagaan. The specific objectives of this study are to assess herder household and community vulnerability, adaptive capacity, and medium-term recovery and resilience following the dzud of 2010.
This dissertation investigates the ways in which societies are coming to know and govern solar geoengineering. The question at the heart of this dissertation is not whether solar geoengineering will succeed, or even whether it should, but rather what makes it --- and its governance --- imaginable. To this end, the bulk of this dissertation aims to analyze the co-production of the evidence --- and governance assumptions --- for a sociotechnical system that does not yet exist. To do so, I draw on work in science and technology studies (STS) and political science to elucidate and analyze the political and scientific claims underpinning expert attempts to capture the public imagination and put solar geoengineering on mainstream public policy agendas. I argue that the ability to put an emerging technology on the public agenda constitutes an exercise of power, determined neither by social structures nor entrepreneurial social actors alone, and entails its own, oft-neglected, evidentiary politics. Decades of scholarship in the interpretive social sciences demonstrate that framing and producing technoscience requires imaginative as much as technical work. Sheila Jasanoff's concept of `sociotechnical imaginaries' offers a useful point of entry into these dynamics. Sociotechnical imaginaries describe ``collectively held, institutionally stabilized, and publicly performed visions of desirable futures'' co-produced with advances in science and technology. As a theoretical concept, imaginaries help to explain why some visions of scientific and social order are co-produced, while others are not. 
Coupling this work with responsible research and innovation (RRI), which is concerned with the responsible steering of technoscientific developments, draws attention to the ways these imaginaries may play a vital role in the development, assessment, and governance of emerging technologies in the present, making scrutiny of their content and prospects for institutionalization urgent and timely. Any social scientific study of solar-geoengineering-in-the-making presents challenges for the analyst, some of which are shared across `emerging technologies,' and some of which are unique to this topic, at least at this stage. For one, the supply of research on solar geoengineering --- social scientific and otherwise --- has outpaced any demand function. It is not yet a topic of research in the private sector, nor is it entangled in broader imaginaries of national identity or competitiveness, though this may change. As Steve Rayner has pointed out, solar geoengineering is at a research impasse. Moreover, the primacy of models as an evidentiary basis for contemplating solar geoengineering has contributed to its stabilization as an object of governance before we know much about what it is likely to become, or even whether it is doable at all. This has contributed to a set of early assumptions about solar geoengineering (for example, as cheap and easy, or likely to make things better or worse for specific people in specific places) that need to be revisited. In this supply-driven context, the visions of a relatively narrow set of actors --- and narrow kinds of evidence --- are forming the foundation for future policy regimes. In Evoking equity as a rationale for solar geoengineering research? Scrutinizing emerging expert visions of equity, I examine the scientization of debates about the equity implications of solar geoengineering research. 
In so doing, I identify three sets of equity-related arguments advanced by sociotechnical vanguards advocating for more solar geoengineering research. The first is a call for more research as a means to shed light on the distributional outcomes of envisioned futures with and without solar geoengineering. This includes a call to reduce uncertainties inherent in scientific models examining distributional outcomes of potential deployment of solar geoengineering. Accompanying such calls is a discernible shift in the content of science itself, from more extreme to more `realistic' modeled scenarios of deployment, and from consideration of global to regional effects. The second equity-related rationale for more research is a call for comparative risk-risk assessment, underpinned by the claim that equity demands that potential risks and benefits of solar geoengineering be compared to the risks of climate change itself, especially for vulnerable populations. The third equity-related rationale for more solar geoengineering research is the invocation of the 1.5 degree aspirational goal of the Paris Agreement as requiring research on solar geoengineering, out of concern for the global poor and those most vulnerable to the consequences of climate change. My research reveals several implications of this expert-driven, outcome-oriented, and risk-based understanding of equity. First, it may suggest that more research on solar geoengineering is the only rational choice, since many of the relevant equity concerns are empirical matters, amenable to resolution through the provision of more science. Second, it sidesteps the question of whether and how diverse non-experts should have a say in whether and how such research moves forward --- even if it is to occur on their behalf, in part by assuming that climate-related preferences are knowable and quantifiable. 
Third, the focus on predicting the outcomes of any future deployment at this stage represents an exercise in speculative ethics, and risks ignoring alternative ways of thinking about equity and responsibility in the context of technological innovation. Finally, I suggest that further analysis should be directed toward whether the vanguard visions I explore reflect a broader shift in operationalizing equity within multilateral climate politics, with those bearing the greatest responsibility now recast as `risk managers' on behalf of the global poor and the vulnerable. I argue that those characterized as `the vulnerable' in expert discourses should regain their status as agential subjects, rather than remain undifferentiated objects in expert discourse. Empirical research suggests that publics have a set of concerns not captured in the approach to equity I analyze in this dissertation, including issues around moral responsibility, historical global injustices, the ability to be included in, and benefit from, technological development, and concerns around lack of agency and self-determination in shaping innovation pathways. In The Politics of Climate Models for Solar Geoengineering Research, I argue that there is an oft-neglected politics of evidence around attempts to put emerging topics on the formal public agenda, which has the potential to shape future policy regimes. In this chapter, I analyze the mutual construction of solar geoengineering modeling and policy framing. Climate models have been understood as important nodes at the interface of climate science and policy, and as capable of shaping societies' understanding of, and responses to, climate change. 
As other scholars have pointed out, less has been said about the development of this relationship over time, which can help explain how it is that the intersection of modeling and politics takes on the form that it does. There are at least two issues around uncertainty and representation in the use of climate models for knowledge about solar geoengineering, which raise questions at the intersection of modeling and politics. The first is that models are being used to represent technologies which do not yet exist, black-boxing the engineering in geoengineering ideas. As one interviewee stated, ``In the model, you can just make geoengineering work. You can just assume that the oceans have a higher albedo because of ocean bubbles, whether it's possible or not.'' This results in the management of the representation of a technology in models, rather than managing the development of the technology itself, eliding important near-term questions around the complexities of technology development and the structure of responsible research programs, and stabilizing solar geoengineering as an object of governance in potentially problematic ways. Secondly, there is significant debate about whether these models can usefully predict outcomes at all; uncertainties that may be less relevant to models of and for climate science and mitigation policy may become `matters of concern' when it comes to predicting, or making promises about, the effects of geoengineering. I argue that imaginaries of solar geoengineering technologies --- despite not serving current regulatory demands, and despite the non-existence of the technologies themselves (perhaps because of it) --- are engaging directly with policy needs (both current and predicted). 
With regard to current needs, the focus on models as proxies for actual deployment of these imagined technologies has the effect of making it seem as though societies `know' more about whether and how to develop these techniques than they do, which is resulting in debates about the management of the representation of a technology which does not yet exist. This has contributed to the current research impasse, in which ``technologists await a green light from social scientists before proceeding with research, while social scientists are limited to commenting on highly speculative ideas about how geoengineering might turn out in practice.'' In this context, policymakers are avoiding decisions regarding the advisability of a research program aimed at answering societally-relevant questions about technology development, and are content to fund indoor modeling studies. Alternatively, one might argue that the existing settlement, at least in the US, between governments, non-governmental organizations (NGOs), and scientists, in which governments seem willing to fund indoor modeling studies but accept an informal moratorium on everything else, may itself be a kind of clumsy solution, the stability of which depends on its non-articulation. There is a broader question around displacement in the realm of climate policy raised by this research. Several scholars and commentators have raised questions about the role of imagined technologies in the present, especially since the 2015 Paris climate agreement. As Steve Rayner has pointed out, the agreement maintains the belief that global temperature targets are achievable via the inclusion of imaginary technologies, which represents a kind of `magical thinking.' Noting that the line between ambition and delusion is not always sharp, Rayner argues that the reality seems to be that the world is already likely to exceed the temperature limit agreed to absent some form of geoengineering. 
Despite this reality, the inclusion of climate engineering technologies in modeled scenarios has the effect of making political targets seem achievable. This is true even without any instrumental action --- and potential near-term political costs --- to policymakers when it comes to actually funding research and development on these imagined technologies, and assessing their impacts and implications. Finally, in Climate Researchers' Views of Solar Geoengineering: Benefits, Risks, and Governance, I present the results of the first survey of climate change researchers' views of solar geoengineering research and its appropriate oversight. I argue that the definition of `expert' in emerging domains is itself a contested political category, and far from straightforward, particularly when the technologies under consideration do not yet exist. Respondents in this survey, much like respondents in surveys of general publics, report concern about the moral hazard operating at the level of political decision-making. Nevertheless, respondents generally support research on solar geoengineering, including small-scale outdoor studies --- despite both a general concern that research may result in lock-in and slippery slopes to deployment, and skepticism about the advisability of ever deploying these techniques. I find strong support for some form of novel or supplementary governance arrangement(s) for research, and a belief that scientific self-regulation is insufficient to manage risks. There seems to be less agreement, however, on particular governance approaches; I find mixed responses regarding the desirability of a `physical thresholds' approach to governing geoengineering experiments, for example. Despite the fact that most respondents express skepticism about the desirability of future deployment, respondents tend to support more research into these techniques, both indoor and, to a lesser extent, outdoors. 
This might be explained by a view that research will reveal reasons not to move forward, or because of a belief that concerns about slippery slopes are overstated (although this seems less likely, given that most respondents report concern that research may result in lock-in and slippery slopes to deployment). Alternatively, a substantial number of researchers surveyed here may have an interest in scientific research moving forward in general, irrespective of its strategic aims. Respondents express skepticism about prediction and controllability when it comes to solar geoengineering deployment. It remains an open question whether a desirable future world with solar geoengineering would depend upon predicting such outcomes, although most respondents do report a belief that uncertainty in our understanding of the climate system means we should never deploy solar geoengineering. Given low awareness of solar geoengineering, participation by a narrow set of actors --- including scientists, but also those who claim to represent the views of civil society --- can close down discussion of this imaginary technology, rather than open it up. In this way, the views of relevant but disempowered publics are assumed before most people have even heard of these ideas. It remains to be seen whether and how early visions of solar geoengineering will cohere or acquire collective stability, or whether they will be radically disrupted. My hope is that the data and analysis in this dissertation may prove useful in tracing the evolution of solar geoengineering and its governance over time.
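The interviewee's remark quoted in this dissertation, that in a model you can "just make geoengineering work" by assuming a higher albedo, can be made concrete with a toy zero-dimensional energy-balance model. This is a standard textbook construction, not code from the dissertation: planetary albedo is a free parameter, and raising it cools the modeled planet by construction, regardless of whether any actual technology could change it.

```python
# Toy zero-dimensional energy-balance model. Albedo is a free dial here,
# illustrating how a model can "just assume" a brightened planet.
SOLAR_CONSTANT = 1361.0  # incoming solar flux at Earth, W/m^2
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temp(albedo: float) -> float:
    """Effective equilibrium temperature (K) for a given planetary albedo."""
    absorbed = SOLAR_CONSTANT * (1.0 - albedo) / 4.0  # mean absorbed flux
    return (absorbed / SIGMA) ** 0.25                 # radiative balance

baseline = equilibrium_temp(0.30)    # roughly Earth's effective temperature
engineered = equilibrium_temp(0.32)  # hypothetical brightened planet: cooler
```

In the model the cooling follows from the assumed albedo alone; none of the engineering that would have to produce it appears anywhere in the calculation, which is precisely the representational point at issue.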
Doctoral thesis in Governance, Knowledge and Innovation, presented to the Faculty of Economics of the University of Coimbra ; Carbon trading, a market-based climate policy that allows polluters to meet emissions reduction commitments using tradable pollution rights, is presented by its proponents as the most efficient alternative for climate change mitigation, while opponents counter that the cost-efficiency argument neglects the harms that result from the commodification of carbon. This thesis contributes to this debate, which is fundamental to the future of environmental policies, by exposing the social costs of carbon trading and taking a stand against the inclusion of carbon trading in the range of climate policies. The argument developed here is based on theoretical contributions on the social costs of private activities and on value conflicts, as well as critical perspectives on the neoliberalization of nature and the limits of the market. Emissions trading was first proposed as an alternative to efficiency-maximizing Pigouvian environmental taxes. Based on the property-rights approach to social costs, emissions trading would allow the regulator to escape the impossible task of calculating an optimal level of pollution and would instead provide a cost-efficient way of reaching an exogenously determined level of pollution. This theoretical shift would allow economics to focus on discussing the best means of achieving given ends and to avoid discussing the ends themselves. The ends-means dichotomy, however, does not apply outside economic theory, nor does the description of emissions trading as a simple and efficient alternative to direct regulation. 
As the US experience with emissions trading demonstrates, creating markets for tradable pollution rights requires government investment in a regulatory apparatus that is no less complex than what direct regulation or taxation requires. This experience also illustrates the extent to which the alleged efficiency of emissions markets is a result of their weak environmental performance and their disregard for social justice and democratic participation. The carbon markets created under the Kyoto Protocol raise additional problems. Compared with "cap and trade" schemes based on a single pollutant and a restricted number of sources, schemes such as the EU Emissions Trading System are more complex and require greater government intervention. Moreover, flexibility instruments such as the Clean Development Mechanism allow industrialized countries to pollute beyond their emissions commitments and raise concerns about the disputable integrity of methodologies that account for emissions reductions from offset projects relative to an arbitrary baseline. The weak environmental performance of these schemes is illustrated by their inability to encourage decarbonization, while they distribute rents to polluters and create new sources of corruption. These issues are not reducible to discussions of accounting procedures and other technicalities. Opening the "black box" of carbon quantification and commensuration reveals that its calculations marginalize relevant uncertainties and assume a degree of accuracy that scientific knowledge and technology cannot at present provide. 
However, given that accounting for emissions increases and reductions requires political decisions about what is to be counted, what the relevant metric is and what constitutes an acceptable degree of uncertainty, scientific and technological advances are not a sufficient condition for producing the unambiguous numbers that carbon trading requires. Going further into the discussion of the implications of carbon commensuration and abstraction, this thesis presents an argument against the inclusion of carbon trading in the range of climate policies, based on four normative critiques. With the support of the critical literature, it is argued that carbon trading is ineffective, undemocratic, unjust and unethical, and that, for these reasons, it can only be considered a cost-efficient policy when its social costs are ignored. An argument against carbon trading reformism is then presented, showing how attempts to counter the negative effects of carbon markets through restrictions on trading lead to the erosion of these markets. A better alternative is to support climate policies that foster a plurality of values and provide social benefits. The thesis concludes by advocating a shift in the climate policy debate toward a discussion of the values that each policy fosters or harms. A general framework is proposed that respects value pluralism and acknowledges conflicts between incommensurable values, which is not compatible with market-based policies. ; Carbon trading, as a market-based climate policy that allows polluters to comply with emissions reductions commitments with tradable pollution rights, is presented by its proponents as the most cost-efficient alternative for climate change mitigation, while critics counter that the cost-efficiency argument ignores the harms that result from commodifying carbon. 
This thesis contributes to this debate, which is fundamental for the future of environmental policies, by exposing the social costs of carbon trading and making the case against its inclusion in the climate policy-mix. The argument developed here draws from theoretical contributions on the social costs of private activities and on value conflicts, as well as critical perspectives on the neoliberalization of nature and the limits of the market. Emissions trading was first proposed as an alternative to efficiency-maximizing or Pigouvian environmental taxation. Based on the property rights approach to social costs, emissions trading would allow regulators to escape the impossible task of calculating the optimal level of pollution and offer instead a cost-efficient way to achieve an exogenously determined level of pollution. This theoretical shift would allow economics to be centred on discussing the best means to achieve given ends and would relieve it of discussing ends. The ends-means dichotomy, however, does not hold outside textbook economics, nor does the description of emissions trading as a simple and efficient alternative to direct regulation. As the US experience with emissions trading shows, creating markets for tradable pollution rights requires government investment in a regulatory apparatus that is no less complex than what is required for direct regulation or taxation. This experience also illustrates how the purported efficiency of emissions trading systems is a flip side of their weak environmental performance and their disregard for social justice and democratic participation. Carbon trading schemes created under the Kyoto Protocol raise additional problems. Compared to "cap and trade" schemes based on a single pollutant and a restricted number of sources, schemes like the EU Emissions Trading System are more complex and require further government intervention. 
Furthermore, flexibility instruments like the Clean Development Mechanism allow industrialized countries to pollute beyond their emissions commitments and raise issues with the disputable integrity of methodologies that account for emissions reductions from offset projects relative to an arbitrary baseline. The dismal performance of these schemes is illustrated by their inability to provide an incentive for decarbonization, while distributing rents to polluters and creating new sources of corruption. These issues are not reducible to discussions on accounting procedures and other technicalities. Opening the "black box" of carbon quantification and commensuration reveals that its calculations sideline relevant uncertainties and assume a degree of accuracy that scientific knowledge and technology cannot deliver at present. Yet, since accounting for emissions increases or reductions requires political decisions on what is to be accounted for, what the relevant metric is and what an acceptable degree of uncertainty is, further scientific and technological developments are not enough to make it possible to produce the unambiguous numbers that carbon trading requires. Going further in the discussion of the implications of carbon commensuration and abstraction, this thesis presents an argument against the inclusion of carbon trading in the climate policy-mix based on four normative critiques. With the support of critical literature, it is argued that carbon trading is ineffective, undemocratic, unjust and unethical and that, for these reasons, it can only be considered a cost-effective policy when its social costs are ignored. An argument against carbon trading reformism is then presented by illustrating how trying to mitigate the negative effects of carbon markets by imposing restrictions on trading leads to the erosion of these markets. A better alternative is claimed to be supporting climate policies that foster a plurality of values and deliver social benefits. 
The thesis concludes by advocating a shift in the climate policy debate to a discussion on the values that are fostered or hindered by each policy. A general framework is proposed that respects value pluralism and acknowledges conflicts between incommensurable values, which is not compatible with market-based policies. ; FCT - "Projeto BECOM" - FCOMP-01-0124-FEDER-009234
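The cost-efficiency claim that this thesis disputes, namely that tradable permits reach a given abatement target at least cost, can be made concrete with a stylized two-firm sketch. All firms, costs and quantities below are hypothetical, and the thesis's argument is precisely that this arithmetic leaves social costs out of view:

```python
# Stylized illustration of the cost-efficiency argument for emissions trading.
# Two hypothetical firms face constant (unrealistically simple) abatement costs.
ABATEMENT_COST = {"firm_A": 10.0, "firm_B": 40.0}  # $ per tonne abated
REQUIRED_ABATEMENT = 10.0  # tonnes, fixed exogenously by the regulator

def uniform_standard_cost() -> float:
    """Direct regulation: each firm abates an equal share of the target."""
    share = REQUIRED_ABATEMENT / len(ABATEMENT_COST)
    return sum(cost * share for cost in ABATEMENT_COST.values())

def trading_cost() -> float:
    """Permit trading: abatement migrates to the cheapest abater."""
    return min(ABATEMENT_COST.values()) * REQUIRED_ABATEMENT

# Same 10 tonnes abated either way; trading does it at 100 rather than 250.
```

The saving between the two totals is the entire textbook case for cost-efficiency; the normative critiques rehearsed above concern everything this calculation omits.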
This paper will present our explorative work in software reusability and concurrent programming. This work was divided into two parts. First, in order to abstract reusable components, an attempt was made to solve three application problems by means of object-oriented programming using Ada. Second, in order to address how Ada provides an environment for concurrent programming, several concurrent programming concepts were described using Ada. ; Technical Report 2018-07-ECE-005, a reissue of Technical Report 87-CSE-11, Reusability and Concurrency Issues in the Real-time Use of Ada*, W. P. Yin, P. H. Liou, Murat M. Tanik, Department of Computer Science and Engineering, Southern Methodist University, Dallas, Texas 75275, May 1987; reissued by the Department of Electrical and Computer Engineering, University of Alabama at Birmingham, July 2018. *Ada is a registered trade mark of the U.S. government, Ada Joint Program Office. 1. Introduction Reusability is a general engineering principle. It derives from the desire to avoid duplication and to capture commonality in undertaking classes of inherently similar works [1]. 
When software engineers try to apply this principle to software production, it brings many research questions into the open. The arguments focus on what the candidates for software reuse are, how reusable software components should be stored, how we can locate reusable software components, and how we can incorporate reusable software components into our own software systems. Concurrent programming is the name given to programming notations and techniques for expressing potential parallelism and for solving the resulting synchronization and communication problems. Traditionally, programs that run asynchronously were written in assembly language for the following reasons: • High-level languages did not provide the appropriate tools for writing concurrent programs. • High-level languages for concurrent programming were not efficient. However, high-level language programs are easier to test, verify, and modify. Due to progress in compiler techniques, we can obtain efficient object code for concurrent programs written in a high-level language. Concurrent programming is important because it provides an abstract setting in which studying parallelism becomes possible. The basic problem in writing a concurrent program is to identify the activities which are concurrent. It is also difficult to ensure the correctness of concurrent programs. In addition, concurrent programs are much more difficult to debug than sequential programs. Occasionally, asynchronous processes must interact with one another, and these interactions can be complex. The following sections constitute a brief presentation of our explorative work on software reusability and concurrent programming using Ada. 2. Reusability Issue 2.1 Software Components and Their Reusability The term "Computer Software" is used very often by most professionals and many members of the public at large. 
They feel they understand it. Most professionals have an intuitive feeling for it, but there is no complete and formal definition. Informally, computer software can be regarded as information having two basic formats: non-machine-executable and machine-executable [2]. Any information unit created by a software engineer during software development, such as a specification, design, code, data and so on, is a software component. More abstractly, the problem-solving knowledge, programming knowledge, problem-domain knowledge and other knowledge used by software engineers in order to solve a problem by computer software are also software components. This knowledge assumes specification, design, code and data as its external formats. Therefore, software reusability manifests itself in many forms. It can roughly be classified into reuse of data; reuse of code, including programs, systems and libraries; reuse of programming knowledge, including system architecture and detailed design; reuse of domain knowledge, including specification; and reuse of abstract modules [3,4]. With respect to the time at which the reusable components are used, software reusability can be divided into two groups: reusability of components in building a variety of structures, and reusability of components in performing a variety of tasks. Figure 1 depicts this idea.

2.2 Software Reusability Problems

As a general engineering principle, reusability implies the obvious system benefits of lower cost, increased reliability and easier maintenance. It appears that the reusability principle should be used widely in software engineering. Unfortunately, this is not true. According to some statistics, in commercial banking and insurance applications, about 75% of the functions were common ones that occurred in more than one program.
There is also statistical data indicating that less than 15% of the code written in 1983 was unique, novel and specific to individual applications, while the remaining 85% was more or less generic [3]. The main reason for this situation is that, regardless of the particular programming technique, design methodology or development environment, software engineering is divided into individual creative processes. The exact nature of those individual processes, such as problem identification, conceptual solution, design of implementation, testing of the solution and so on, is poorly understood. Hence, reusing software designed by other people is in general not a simple matter. Besides this, there are other reasons. First, some software is "malevolent" because it is strongly self-centered and highly proprietary; it cannot be reused by organizations other than the developer. Second, even with "benevolent" software, there are software engineers who may feel that they could produce a "better" solution anyway. Third, some software may have to be modified excessively to fit the new application precisely. Fourth, some software may require a great effort to be understood in order to be reused. In the last two situations, software engineers would rather rewrite [5]. Figure 2 shows reusable component characteristics in terms of their functionality and scope.

2.3. Explorative Work in Software Reusability

2.3.1. The Problems

In the following sections three problems are investigated: the environmental monitor problem [6], the cruise-control problem [7] and the message switching problem [8]. All three are real-time problems. All of them require parallel processing, real-time control, exception handling and unique input/output control.

2.3.2. The Method

The "object-oriented methodology" is chosen for solving the problems.
Object-oriented methodology is a software approach in which the decomposition of a system is based upon the concept of objects. In real-time systems, the problem is often given by a description of entities, their behaviors and the relations among the entities. In addition, object abstraction is a promising avenue for reusability. The object-oriented design methodology has the following steps [6, 7]:
• Identify the objects and their attributes.
• Identify the operations.
• Establish the visibility of each object in relation to other objects.
• Establish the interface of each object.
• Implement each object.

2.3.3. The Language

Ada was chosen as the design language because of its rich variety of program units, such as subprograms, packages and tasks. It is convenient for software engineers to choose the most suitable program unit to represent classes of objects, instances of objects and the primitive operations of each object. More importantly, the capabilities of Ada make it possible to break from the traditional flat, sequential design style into an object-centered design style. In particular, using Ada as a design language can improve the quality of the design by highlighting interfaces and formally capturing many important design decisions.

2.3.4. The Case Studies

The case studies are concerned with the use of the object-oriented design method for software reusability. The goal of the case studies was to explore how well the object-oriented method can apply the reusability principle; more specifically, how well the object-oriented method can recognize and abstract reusable software components for a specific class of problems (real-time systems). In this paper, detailed case explanations will not be presented; only the observations and experiences are listed.
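As an aside, the five design steps above can be sketched in miniature. The sketch below is not from the report's case studies; it uses Python (chosen here purely for illustration, since the report's own design language is Ada), and all class and operation names are hypothetical.

```python
# A minimal sketch of the five object-oriented design steps, applied to a
# hypothetical temperature-sensor object. Names are illustrative assumptions.

class Sensor:
    """Step 1: an identified object; its attributes are a reading and a limit."""

    def __init__(self, limit):
        # Step 4: the constructor and the methods below form the interface.
        self._reading = 0
        self._limit = limit

    def read(self, value):
        # Step 2/5: an identified operation and its implementation.
        self._reading = value

    def out_of_limits(self):
        # A visible operation that other objects may call.
        return self._reading > self._limit


class Monitor:
    """Another identified object, responding to out-of-limits readings."""

    def __init__(self, sensor):
        # Step 3: Monitor's visibility includes the Sensor object.
        self.sensor = sensor

    def check(self):
        return "ALARM" if self.sensor.out_of_limits() else "OK"


s = Sensor(limit=100)
m = Monitor(s)
s.read(120)
print(m.check())  # ALARM
```

The point of the sketch is only that objects, their operations, their mutual visibility and their interfaces are fixed before any implementation detail; the report's case studies carry out the same steps in Ada.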
For each problem, first a problem definition in the problem space is given; then an informal system architecture design in the conceptual solution space is presented. In the solution space the details are ignored; only objects and their operations are indicated.

2.3.4.1. Case-1: Environmental Monitor Problem

The environmental monitor problem is explained in detail in [6]. Figure 3 is the problem definition abstracted from [6], and figure 4 is the partial formalization of the system architecture design.

Problem Space Objects and their Operations:
(1) A user interacts with the system by setting the sensor limits, reading the status of all sensors, or quitting the system.
(2) A printer prints the current reading of each sensor, or is shut down by the user.
(3) A sensor reads the temperature, has its limit set, is shut down, or is initialized by the user.
(4) The monitor responds to an out-of-limits sensor reading, or detects a printer failure, by setting an alarm. The alarm can also be shut off.

Keeping the object-oriented methodology in mind, the transformation from figure 3 to figure 4 is straightforward. In the step of identifying the objects and their attributes, the decision on a specific representation of the objects is delayed; we only take into account which objects in the problem space we are interested in. In general, the nouns denote the objects and the adjectives represent the attributes of each object. After identifying the objects, extracting the operations appropriate to each object is straightforward: the verbs attached to each object can be abstracted as the corresponding operations. Those operations are visible outside. An object together with its operations forms one unit, which can be defined by one program unit. Arrows are used to indicate the direction of operation application. If there is an arrow from object A to object B, it indicates that object A performs an operation requiring something from B, or triggering B's operation.
In this situation, object A is an active object. If A is a passive object which has no operations, all the arrows connected to A must point to A.

System Specification in Ada:

package PRINTER is
  task THE_PRINTER is
    entry PRINT_READING (THE_ITEM : in STRING);
    entry SHUT_DOWN;
  end THE_PRINTER;
end PRINTER;

package ALARM is
  task THE_ALARM is
    entry REPORT_OF_LIMIT;
    entry REPORT_PRINTER_ERROR;
    entry SHUT_DOWN;
  end THE_ALARM;
end ALARM;

generic
  type NAME is (<>);
  type VALUE is range <>;
  SENSE_RATE : in DURATION;
  with function VALUE_OF (THE_NAME : in NAME) return VALUE;
  with procedure SOUND_ALARM;
package SENSORS is
  task type SENSOR is
    entry START (THE_NAME : in NAME);
    entry SET_LIMIT (THE_VALUE : in VALUE);
    entry GET_STATUS (THE_VALUE : out VALUE; OUT_OF_LIMITS : out BOOLEAN);
    entry SHUT_DOWN;
  end SENSOR;
end SENSORS;

type COMMAND is (SET_LIMIT, GET_STATUS, SHUT_DOWN);

procedure MONITOR is
  -- local type declarations
  -- ALARM task specification
  -- PRINTER task specification
  -- SENSOR task specification
  -- USER_COMMAND declaration
  -- task bodies
begin
  -- manipulation of USER_COMMAND
end MONITOR;

Abstractions from case-1:
1. The object-oriented design methodology is fundamentally different from traditional functional methods. Traditional functional methods factor the system in the problem space into operational modules in the solution space, in which each operational module represents a major step in the overall transformation process. The object-oriented design method decomposes the problem around objects that exist in the real world.
2. The object-oriented design method needs a different requirements analysis to support it. During the problem definition step, the requirements analyst must keep object-orientedness in mind, because a different analysis will produce a different problem decomposition. During problem analysis, good domain knowledge certainly helps a lot.
3. It is necessary to use an object-oriented system specification methodology during the system specification step.
The specification is the result of a process of requirements analysis, and represents the first complete description of the conceptual solution. It contains clear descriptions of the external view of the system the user requires, along with any related or implied system constraints. The object-oriented system specification ideally matches the user's problem closely. It is desirable to make the system specification consistent, complete, comprehensible and traceable to the requirements. Also, the object-oriented specification will make the transition between system specification and system design smooth and easy.
4. It is desirable to keep the system specification independent from the implementation. That means the transition from the problem space to the conceptual solution space should not be restricted by implementations, and especially not limited by the capability of the implementation tools. Ada has design description capability, but there is no direct notation for objects.

2.3.4.2. Case-2: Cruise-Control System

The cruise-control system problem is given in [7]. A data flow diagram (figure 5) is used to express the problem. This problem is more complicated than the environmental monitor problem. The data flow diagram gives a clear view of each main step of the system's transformation. Using the object-oriented method, the problem space is abstracted as in figure 6, and from the problem space the system architecture was abstracted (figure 7). First, the objects and their operations are identified. In particular, the passive objects (no operations) and active objects (having operations) are distinguished, and the required operations (triggered externally) and suffered operations (not triggered by the outside world) are distinguished. For example, the brake and accelerator are passive objects; the others are active objects. The throttle has two visible operations which are triggered by other objects, and one invisible operation which is hidden in the throttle's body.
That invisible operation can only be seen by the throttle itself.

Problem Space Objects and their Operations:
(1) Pulse from wheels: a pulse is sent for every revolution of the wheel.
(2) Clock: a timing pulse occurs every millisecond.
(3) Driver: if the driver sets the system on, the cruise-control system should maintain the car's speed. The driver may also require the maintained speed to be increased or decreased while cruise control is on, or require resumption of the last maintained speed.
(4) Brake: if the brake is pressed, cruise control temporarily reverts to manual control.
(5) Brake state: cruise control requires the current brake state.
(6) Engine state: if the engine is on, cruise control may be active.
(7) Accelerator: the accelerator state is required by the cruise-control system.
(8) Throttle: sets the throttle value.

Abstractions from case-2:
1. An object is an entity that exists in time and space. An object also has state, and its operations indicate that state. Each object is in one state at any one time. The object's state may change through the activity of other objects or as time passes. We can trace the system's activity in the state space.
2. We need facilities to indicate the time constraints of the system. For example, the clock's and wheel's operations must be synchronized, and the throttle has one operation, desired speed, which must be visible to all three control operations: increase, decrease and resume.
3. Case-1 and case-2 deal with different problems, and the objects abstracted from the two problems are different, with one exception: the control object in the cruise-control system interacts with the driver's requirements, just as the monitor object in the environmental monitor system interacts with the user's commands. Both systems need an interface with a user who dynamically inputs his requirements or commands. This interface can be a reusable component.

2.3.4.3.
Case-3: Message Switching System

The message switching problem is addressed in [8]. Figure 8 is the problem abstraction. The message switching system consists of a network of switching nodes connected via trunk lines. Each switching node is locally attached to subscribers, an operator, an archive tape, and auxiliary memory. The operator can send and receive messages like any subscriber; in addition, the operator monitors and controls the node activity. The function of each node is to route input messages to one or more output destinations. Three successive phases are involved in processing each message: input, switch and output.

input: Reading input from a local subscriber or trunk link and storing the message on both auxiliary memory and an archive tape.
switch: Each input message contains a header, body and end marker. The header is examined to determine the output destination. For each destination, a directory is consulted to determine the appropriate output line to use, and a copy of the message is queued for output on each distinct line.
output: A message is retrieved from auxiliary memory and written on the appropriate output line. Each message contains a priority as part of its header so that, at all times, the highest-priority message for an output line is transmitted. If preempted, a message is later transmitted in its entirety.

With the experience of solving the previous two problems, abstract objects and their operations can be obtained by repeatedly applying object abstraction. Thus we get the problem definition in the object space (figure 9) and the conceptual solution (figure 10).

Problem Space Objects and their Operations:
(1) The switch inputs the message head and control signal.
(2) The switch stores the message in the auxiliary memory.
(3) The switch archives the message on the tape.
(4) The switch consults the cross-reference table to determine the appropriate output port.
(5) The switch handles the output message priority and preemption (output queue).
(6) The switch outputs the message head and control signal to the output port.
(7) The operator monitors the switch system.
(8) The output ports retrieve the message body.

The solution to this problem must address the following four issues:
• maximize I/O parallelism,
• control different I/O devices,
• coordinate node activity,
• handle output message preemptions.

System Specification in Ada:

subtype MSG_ADDR is STRING(1..20);

type MSG is record
  HEAD : STRING(1..20);
  MSG_BODY : STRING(1..100);
end record;

task type ARCHIVE_TAPE is
  entry ARCHIVE (THE_MSG : in MSG);
end ARCHIVE_TAPE;

task type AUX_MEM is
  entry OUTPUT_MSG (THE_MSG_ADDR : in MSG_ADDR);
  entry INPUT_MSG (THE_MSG_ADDR : in MSG_ADDR);
end AUX_MEM;

task type OUTPUT_CONTROL is
  entry OUTPUT_MSG (OUTPUT_PORT : in STRING(1..20);
                    THE_MSG_ADDR : in MSG_ADDR);
end OUTPUT_CONTROL;

task type SWITCH is
  entry INPUT_CONTROL (THE_MSG : in MSG);
end SWITCH;

task type OPERATORS is
  entry INPUT_MSG (THE_MSG : in MSG);
end OPERATORS;

task type SUBSCRIBER is
  entry INPUT_MSG (THE_MSG : in MSG);
end SUBSCRIBER;

OPERATOR : OPERATORS;

task body OPERATORS is
  -- local type declarations
  -- ARCHIVE_TAPE task declaration
  -- AUXILIARY MEMORY task declaration
  -- OUTPUT CONTROL task declaration
  -- REFERENCE TABLE data structure declaration
  -- OUTPUT QUEUE data structure declaration

  THE_SUBSCRIBER : array (1..100) of SUBSCRIBER;

  task body SUBSCRIBER is
    task OUTPUT_MSG;
    ...
  end SUBSCRIBER;

  THE_SWITCH : SWITCH;

  task body SWITCH is
    procedure STORE_MSG (THE_MSG : in MSG);
    procedure ARCHIVE_MSG (THE_MSG : in MSG);
    procedure CONSULT_TABLE (OUTPUT_PORT : out STRING(1..10);
                             THE_MSG : in MSG);
    procedure PREEMPTION (THE_MSG : in MSG;
                          PRIORITY : out INTEGER);
    -- subprogram bodies
    ...
  end SWITCH;

  -- other task bodies
begin
  loop
    accept INPUT_MSG (THE_MSG : in MSG) do
      ...
    end INPUT_MSG;
  end loop;
end OPERATORS;

Abstractions from case-3:
1. Using the
object-oriented method to do system design really requires a great deal of real-world knowledge and an intuitive understanding of the problem, especially for abstracting operations. Listing the goals of the system requirements helps in deciding which object should perform which operation. For example, since the solution to this specific problem must maximize I/O parallelism and control different I/O devices, it is better to make the auxiliary memory and archive tape active objects.
2. The control issues and time constraints are important, and facilities are definitely needed to specify them. For example, the input control for the switch needs to specify that its input trigger is exclusive-or and its output is sequential. In system architecture design using Ada, it seems that Ada's program units are not sufficient for this specification.
3. The three case studies come from different application fields. The software systems are required for different purposes and deal with totally different objects. At the domain-object level, it is not clear what the reusable components are.

2.3.5. Summary of Reusability Concepts
1. Software reusability is an attribute of software relative to its applicability in different computational contexts as well as different application areas. The object-oriented methodology is a better fit for real applications than other traditional methodologies. It is at least useful to apply the reusability principle within the same application domain.
2. Reusable software components tend to be objects or classes of objects. Given a rich set of reusable software components, implementation would proceed via composition of these parts, rather than further decomposition. The greater abstraction of object models provides greater potential reusability.
3. The level of abstraction has a great effect on reusability. The higher the abstraction, the greater the overhead it may require for interpretation, and the less intuitive understanding it provides.
The lower the abstraction, the smaller the chance of recognizing reusable components becomes.

3. Concurrency Issues

3.1. Synchronization

In a real-time system, several processes may access the same data at the same time. This situation may result in inconsistent data. A language dealing with concurrent programming must guard against this possibility; that is, the language must provide the means to guard against time-dependent errors. When a process is accessing shared data, the process is said to be in its critical section (or critical region). The concept of allowing only one process into its critical region at a time is known as mutual exclusion. An elegant software implementation of mutual exclusion was presented by Dekker. Dijkstra also abstracted the key notion of mutual exclusion in his concept of semaphores [10].

3.1.1. Semaphores

A semaphore is a protected integer variable which can take on only non-negative values and whose value can be accessed and altered only by the operations P(s) (which stands for wait) and V(s) (which stands for signal), together with an initialization operation. Binary semaphores can take only the values 0 or 1; general semaphores can take any non-negative integer value. The definitions of P and V are as follows:

P(s): if s > 0 then s := s - 1, else the execution of the process that called P(s) is suspended.
V(s): if some process P has been suspended by a previous P(s) on the semaphore s, then wake up P, else s := s + 1.

3.1.2. Monitors

The above methods are so primitive that it is difficult to express solutions for more complex concurrency problems, and their presence in concurrent programs increases the existing difficulty of proving program correctness [12]. Another drawback of the above methods is that every procedure has to provide its own synchronization explicitly. The desire to provide the appropriate synchronization automatically led to the development of a new construct, the monitor [10].
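As an aside, the P/V semantics defined in 3.1.1 can be sketched outside Ada as well. The sketch below uses Python purely for illustration (the report's own implementations, given in section 3.2, are in Ada); it implements a general semaphore on top of a condition variable and uses it as a mutual-exclusion lock around a critical section. Unlike the definition above, V here always increments s and then wakes a waiter, which is behaviourally equivalent for this use.

```python
# A general (counting) semaphore sketched with the P/V semantics of 3.1.1,
# then used to enforce mutual exclusion on a shared counter.

import threading

class Semaphore:
    def __init__(self, value=1):
        self._s = value                      # the protected integer variable
        self._cond = threading.Condition()

    def P(self):
        with self._cond:
            while self._s == 0:              # caller is suspended while s = 0
                self._cond.wait()
            self._s -= 1                     # s > 0: s := s - 1

    def V(self):
        with self._cond:
            self._s += 1                     # s := s + 1
            self._cond.notify()              # wake one suspended process

# Mutual exclusion: the increment below is the critical section.
mutex = Semaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(10000):
        mutex.P()
        counter += 1                         # only one thread at a time here
        mutex.V()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000 -- no updates lost, since P/V guard the increment
```

The busy-waiting of Dekker's algorithm is avoided: a process blocked in P consumes no processor time until some V wakes it, which is exactly the property that motivated semaphores.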
A monitor is a concurrency construct that contains both the data and the procedures needed to perform allocation of a shared resource or group of shared resources. The monitor enforces information hiding: processes calling the monitor have no idea of, nor access to, the data inside the monitor. Mutual exclusion is rigidly enforced at the monitor boundary; only one process at a time is allowed to enter. If a process inside the monitor cannot proceed until a certain condition becomes true, the process calls wait(variable_name) and waits outside the monitor on a queue for the condition variable to be signaled by another process. To ensure that a process already waiting for a resource eventually does get it, the monitor gives a waiting process higher priority than a new requesting process attempting to enter the monitor. A process calling wait is threaded into the queue; a process calling signal causes a waiting process to be removed from the queue.

3.2. Ada Rendezvous

Ada is a high-level programming language which can also be used for conventional programming. In this section, we are concerned with the features of Ada related to concurrent programming. Central to these features is the concept of the task, which is a program module that is executed asynchronously. Tasks may communicate and synchronize their actions through:
1. the accept statement: a combination of procedure call and message transfer;
2. the select statement: a non-deterministic control structure based on the guarded-command construct.
Their BNF is as follows. The accept statement has the form:

accept entry_simple_name [(entry_index)] [formal_part] do
  sequence_of_statements
end [entry_simple_name];

The select statement has the form:

select
  [when boolean_expression =>]
    accept_statement
    sequence_of_statements
{or
  [when boolean_expression =>]
    accept_statement
    sequence_of_statements}
[else
  sequence_of_statements]
end select;

The following sections are Ada programs that implement the concurrency concepts mentioned above.

3.2.1. Dekker's Algorithm

procedure DEKKER is
  FAVOREDPROCESS : INTEGER;
  P1WANTSTOENTER, P2WANTSTOENTER : BOOLEAN;

  procedure TWO_PROC (P1WANTSTOENTER, P2WANTSTOENTER : in out BOOLEAN;
                      FAVOREDPROCESS : in out INTEGER) is
    task P1;
    task body P1 is
    begin
      loop
        P1WANTSTOENTER := TRUE;
        while P2WANTSTOENTER loop
          if FAVOREDPROCESS = 2 then
            P1WANTSTOENTER := FALSE;
            while FAVOREDPROCESS = 2 loop
              null;  -- busy waiting
            end loop;
            P1WANTSTOENTER := TRUE;
          end if;
        end loop;
        -- the critical region for P1 goes here
        FAVOREDPROCESS := 2;
        P1WANTSTOENTER := FALSE;
        -- non-critical work may go here
      end loop;
    end P1;

    task P2;
    task body P2 is
    begin
      loop
        P2WANTSTOENTER := TRUE;
        while P1WANTSTOENTER loop
          if FAVOREDPROCESS = 1 then
            P2WANTSTOENTER := FALSE;
            while FAVOREDPROCESS = 1 loop
              null;  -- busy waiting
            end loop;
            P2WANTSTOENTER := TRUE;
          end if;
        end loop;
        -- the critical region for P2 goes here
        FAVOREDPROCESS := 1;
        P2WANTSTOENTER := FALSE;
        -- non-critical work may go here
      end loop;
    end P2;
  begin
    null;  -- main program for TWO_PROC
  end TWO_PROC;

begin
  P1WANTSTOENTER := FALSE;
  P2WANTSTOENTER := FALSE;
  FAVOREDPROCESS := 1;
  TWO_PROC (P1WANTSTOENTER, P2WANTSTOENTER, FAVOREDPROCESS);
end DEKKER;

3.2.2. Binary Semaphore

The following are two approaches to a binary semaphore. The first is described in [10]; the second exactly follows the original definition of the semaphore.

3.2.2.1.
procedure BINARY_SEMAPHORE is
  task SEMAPHORE is
    entry P;
    entry V;
  end SEMAPHORE;

  task body SEMAPHORE is
  begin
    loop
      accept P;  -- only after P has been called can V
      accept V;  -- be accepted, and vice versa
    end loop;
  end SEMAPHORE;

  task P1;
  task body P1 is
  begin
    loop
      -- non-critical work may go here
      SEMAPHORE.P;  -- call the P entry
      -- now you can go ahead and access the critical region
      SEMAPHORE.V;  -- call the V entry
    end loop;
  end P1;

  task P2;
  task body P2 is
  begin
    loop
      -- non-critical work may go here
      SEMAPHORE.P;  -- call the P entry
      -- now you can go ahead and access the critical region
      SEMAPHORE.V;  -- call the V entry
    end loop;
  end P2;

begin
  null;  -- main program for BINARY_SEMAPHORE
end BINARY_SEMAPHORE;

3.2.2.2.

According to the definition, a semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and by an initialization operation. We therefore declare the semaphore as a private type, so that only the subprograms inside this package can access its value:

package BIN_SEMAPHORE is
  type SEMAPHORE is private;
  procedure P (S : in out SEMAPHORE);
  procedure V (S : in out SEMAPHORE);
  procedure INITIAL_SEMAPHORE (S : in out SEMAPHORE; VALUE : in INTEGER);
private
  type SEMAPHORE is record
    VAL : INTEGER;
  end record;
end BIN_SEMAPHORE;

package body BIN_SEMAPHORE is
  NO_WAITING : INTEGER := 0;  -- number of processes that have been suspended

  task CONTROL is
    entry SUSPEND;
    entry WAKE_UP;
  end CONTROL;

  task body CONTROL is
  begin
    loop
      accept WAKE_UP do
        accept SUSPEND;
      end;
    end loop;
  end CONTROL;

  procedure P (S : in out SEMAPHORE) is
  begin
    if S.VAL > 0 then
      S.VAL := S.VAL - 1;
    else
      NO_WAITING := NO_WAITING + 1;
      CONTROL.SUSPEND;  -- suspend the process
    end if;
  end P;

  procedure V (S : in out SEMAPHORE) is
  begin
    if NO_WAITING > 0 then
      CONTROL.WAKE_UP;  -- wake up one of the suspended processes
      NO_WAITING := NO_WAITING - 1;
    else
      S.VAL := S.VAL + 1;
    end if;
  end V;

  procedure INITIAL_SEMAPHORE (S : in out SEMAPHORE; VALUE : in INTEGER) is
  begin
    S.VAL := VALUE;
  end INITIAL_SEMAPHORE;
end BIN_SEMAPHORE;

with BIN_SEMAPHORE; use BIN_SEMAPHORE;
procedure SEMAPHORE_EXAMPLE is
  S : SEMAPHORE;

  procedure TWO_PROC is
    task PROCESSONE;
    task body PROCESSONE is
    begin
      loop
        -- non-critical work may go here
        P(S);
        -- now you are inside critical region one
        V(S);
        -- other work may go here
      end loop;
    end PROCESSONE;

    task PROCESSTWO;
    task body PROCESSTWO is
    begin
      loop
        -- non-critical work may go here
        P(S);
        -- now you are inside critical region two
        V(S);
      end loop;
    end PROCESSTWO;
  begin
    null;  -- main program for TWO_PROC
  end TWO_PROC;

begin
  -- main program for the semaphore example
  INITIAL_SEMAPHORE(S, 1);
  TWO_PROC;  -- now the two processes are executing concurrently
end SEMAPHORE_EXAMPLE;

3.2.3. Binary Semaphore Using the Monitor Concept

In the following example we describe the implementation of a binary semaphore by a monitor written in Ada.

generic
package GENERIC_MONITOR is
  task type COND_PTR is
    entry WAIT;
    entry SIGNAL;
  end COND_PTR;
  type CONDITION is access COND_PTR;
  -- A variable of this condition type provides a queue for the WAIT entry,
  -- and likewise for the SIGNAL entry.
end GENERIC_MONITOR;

package body GENERIC_MONITOR is
  task body COND_PTR is
  begin
    loop
      accept SIGNAL do
        accept WAIT;
      end;
    end loop;
  end COND_PTR;
end GENERIC_MONITOR;

with GENERIC_MONITOR;
procedure SEMAPHORE_USE_MONITOR is
  -- The following package (the monitor) performs information hiding:
  -- procedures calling the monitor have no idea of, nor access to,
  -- the data inside the monitor.
  package MONITOR is
    procedure P;
    procedure V;
  end MONITOR;

  package body MONITOR is
    package TEMP is new GENERIC_MONITOR;
    use TEMP;
    NOT_BUSY : CONDITION := new COND_PTR;
    BUSY : BOOLEAN := FALSE;

    procedure P is
    begin
      if BUSY then
        NOT_BUSY.WAIT;  -- the WAIT entry provides a queue for the
                        -- procedures waiting to be accepted
      end if;
      BUSY := TRUE;
    end P;

    procedure V is
    begin
      BUSY := FALSE;
      NOT_BUSY.SIGNAL;  -- wake up the first procedure on the
                        -- queue of the WAIT entry
    end V;
  end MONITOR;

  use MONITOR;

  procedure TWO_PROC is
    task P1;
    task body P1 is
    begin
      loop
        P;
        -- you can enter critical region 1 now
        V;
        -- other work may go here
      end loop;
    end P1;

    task P2;
    task body P2 is
    begin
      loop
        P;
        -- you can enter critical region 2 now
        V;
        -- other work may go here
      end loop;
    end P2;
  begin
    null;  -- main program of TWO_PROC; now P1 and P2 execute concurrently
  end TWO_PROC;

begin
  -- main program of SEMAPHORE_USE_MONITOR
  TWO_PROC;
end SEMAPHORE_USE_MONITOR;

3.3. Real-Time Interrupt Handling

Efficient interrupt handling is critical in real-time environments. Interrupts are used to control the transfer of data to and from external devices, which often generate interrupts at high frequencies. If an interrupt is not handled quickly, external data can be lost or the overall efficiency of the system can be severely degraded. Real-time performance requirements are determined by the minimum time between arrivals of interrupts and the maximum time that can elapse while an interrupt is pending before data are lost or a hardware time-out occurs. When an interrupt occurs, a processor must begin executing code in another environment. Context switching is machine dependent, and in most modern computers it is supported by special privileged instructions. Interrupt handling takes at least two context switches: one from the currently running program to the interrupt handler, and one at the completion of the interrupt handler.
However, neither of these need be a full context switch, nor do interrupts need to be disabled for long.

3.3.1. Language Mechanisms for Interrupt Handlers in Ada

Most real-time software for embedded systems uses interrupt handlers to control and communicate with external devices. Interrupt handlers are usually responsible for initializing devices, initiating physical I/O operations and responding to both anticipated and unanticipated interrupts. Ideally, interrupts would arrive only as a direct consequence of a previously issued software command. In practice, however, interrupts can arrive unexpectedly or fail to arrive when expected. Interrupt handlers have traditionally been written in assembly language because few high-level languages provide support for interrupts and because interrupt handlers must often meet severe real-time constraints [15]. The mechanisms for implementing interrupt handlers provided by systems programming languages such as Concurrent Pascal and Modula-2 are usually not optimized for real-time applications. Since Ada was intended for embedded applications, interrupt-handling mechanisms were integrated into the language. The Ada Language Reference Manual (LRM) [18] briefly describes interrupt handlers and their semantics (in sec. 13.7). The following example from the LRM illustrates the specification of an interrupt handler:

task INTERRUPT_HANDLER is
  entry DONE;
  for DONE use at 16#40#;
end INTERRUPT_HANDLER;

The task specification, or interface, defines each externally visible task operation, referred to as an entry. The semantics of an interrupt are defined in terms of the rendezvous, which was discussed in the previous sections. Each Ada process, or task, declares a list of entries that can be called by other tasks. A rendezvous occurs between a calling task and a serving task when the caller is waiting to execute an entry call and the server is waiting to accept the entry call.
Each task specification must have a corresponding body that contains the executable code of the task. The following is a more realistic example of an interrupt handler for a printer device, which illustrates some of the hardware and software run-time support actions that must be considered when programming interrupt handlers.

task PRINTER_SERVER is
   entry OUTPUT_LINE (ST : in STRING);
   entry IO_INTERRUPT;
   for IO_INTERRUPT use at 16#1234#;
end PRINTER_SERVER;

task body PRINTER_SERVER is
   HARDWARE_PORT : CHARACTER;
   for HARDWARE_PORT use at 16#1234#;
begin
   loop
      accept OUTPUT_LINE (ST : in STRING) do
         for INDEX in ST'RANGE loop
            HARDWARE_PORT := ST(INDEX);
            accept IO_INTERRUPT;
         end loop;
      end OUTPUT_LINE;
      HARDWARE_PORT := ASCII.CR;
      accept IO_INTERRUPT do
         HARDWARE_PORT := ASCII.LF;
      end IO_INTERRUPT;
      accept IO_INTERRUPT;
   end loop;
end PRINTER_SERVER;

The above example illustrates how it is possible in Ada to serve the same interrupt entry point with different accept bodies.

3.3.2. Interrupt Handling Model in Ada

Hardware interrupts generated by a device or its controller are usually described informally by means of flowcharts and timing diagrams, in contrast to software, whose behaviour is defined by a program. A uniform description of both the hardware and the software makes it possible to define a model for a general-purpose interrupt-handling mechanism [17]. The complete chain of control from the hardware to the server can be modeled by three Ada tasks, where the first two are asynchronous tasks external to the server. The first task represents a hardware device, which is a producer of interrupts and a producer or consumer of data. The second task represents the hardware/software interface, and performs interrupt enabling, disabling and context switching outside the normal Ada rendezvous mechanisms.
The task specifications are as follows:

HARDWARE_DATA : DEVICE_DEPENDENT;

task ASYNCHRONOUS_HARDWARE;

task INTERFACE is
   entry DISPATCH_INTERRUPT;
end INTERFACE;

task SERVER is
   entry OUTPUT_LINE (ST : in STRING);
   entry IO_INTERRUPT;
   for IO_INTERRUPT use at 16#1234#;
end SERVER;

The advantage of adopting an Ada model for devices and their run-time support is that the semantics of interrupt handling can be defined entirely in Ada. This model can be used conveniently to illustrate some of the problems an effective implementation must be able to handle:

1. hardware that generates interrupts at power-up and in error situations where there is no Ada program or handler ready to serve interrupts;
2. hardware that generates spurious interrupts when the interrupt handlers are not ready to serve interrupts;
3. hardware that requires immediate action on the interrupt to prevent the loss of data;
4. a hardware interrupt that demands a specific program action to mask it out so that it is not constantly pending.

3.3.2.1. The Hardware/Software Interface

The interface is modeled by a task representing the connection between the hardware and server tasks, which run concurrently on two conceptually different processors with a need to communicate. The hardware task has no knowledge of the state of the software and can try to interact with it at unexpected times. Some hardware tasks must be serviced immediately, even if the server is not ready, and can therefore generate unexpected interrupts (and race conditions in the server) when interrupt handlers are too slow to handle successive interrupts. A model for a robust and usable interrupt-support environment must provide services for situations in which either the software or the hardware is malfunctioning. This kind of failure handling can be represented by the following body of the interface task.
task body INTERFACE is
begin
   loop
      accept DISPATCH_INTERRUPT do
         select   -- conditional entry call
            SERVER.IO_INTERRUPT;
         else
            FAILURE_SERVER.SERVER_NOT_READY;
         end select;
      end DISPATCH_INTERRUPT;
   end loop;
end INTERFACE;

3.3.2.2. The Hardware Task

The Ada hardware task example below models many of the problems caused by actual hardware. In the example, the server and interface tasks communicate with the hardware task via a global HARDWARE_DATA variable, which includes the fields INTERRUPT_ENABLED, START_IO and IO_DATA.

task body ASYNCHRONOUS_HARDWARE is
   -- local declarations
   procedure GENERATE_INTERRUPT (TIMEOUT : NATURAL) is
   begin
      PENDING_INTERRUPT:
      for I in 0 .. TIMEOUT loop
         if HARDWARE_DATA.INTERRUPT_ENABLED then
            select   -- conditional entry call
               INTERFACE.DISPATCH_INTERRUPT;
               if DATA_UNSTABLE then
                  HARDWARE_DATA.IO_DATA := INDETERMINANT;
               end if;
               exit PENDING_INTERRUPT;
            else
               if DATA_UNSTABLE then
                  HARDWARE_DATA.IO_DATA := INDETERMINANT;
               end if;
            end select;
         end if;
      end loop PENDING_INTERRUPT;
   end GENERATE_INTERRUPT;
begin
   loop
      SERVICE_INTERVAL := SERVICE_INTERVAL + 1;
      if SERVICE_INTERVAL > SERVICE_TIMEOUT then
         GENERATE_INTERRUPT (INTERRUPT_TIMEOUT);
         SERVICE_INTERVAL := 0;
      end if;
      if HARDWARE_DATA.START_IO then
         for I in 0 .. DO_IO_TIME loop
            null;
         end loop;
         HARDWARE_DATA.IO_DATA := VALID_DATA;
         GENERATE_INTERRUPT (IO_DONE_TIMEOUT);
      end if;
   end loop;
end ASYNCHRONOUS_HARDWARE;

The hardware and interface model is sufficiently general to cover a wide range of hardware devices and enables a specification of requirements for designing a system hardware support package. Without such a formal definition, it is difficult to verify the correctness of the interrupt run-time support package. In addition, the model permits a software task to simulate a hardware device and test the interrupt run-time support package.

3.4.
Conclusion

The traditional approach of implementing interrupt handlers in assembly language leads to systems that are difficult to develop, maintain, or adapt to new hardware and software requirements. By providing a high-level interface, Ada simplifies the design and maintenance of interrupt handlers. Ada also defines the semantics of its tasking mechanism, making it possible to construct asynchronous and synchronous programming models. Ada is thus a powerful tool not only from the software reusability point of view but also for concurrent programming. It really is "The Language for the 1980s" (and maybe the 1990s).

References

[1] P. Wegner, "Capital-Intensive Software Technology," IEEE Software, Vol. 1, No. 3, July 1984, pp. 3-45.
[2] R. S. Pressman, Software Engineering, McGraw-Hill, Inc., 1987, pp. 5-8.
[3] T. C. Jones, "Reusability in Programming: A Survey of the State of the Art," IEEE Trans. Software Eng., Vol. SE-10, No. 5, Sept. 1984, pp. 488-497.
[4] G. Jones, "Software Reusability: Approaches and Issues," Proc. of IEEE Computer Software & Applications Conf., Nov. 1984, pp. 476-477.
[5] M. D. Lubars, "Code Reusability in the Large vs. in the Small," ACM SIGSOFT Software Engineering Notes, Vol. 11, No. 1, Jan. 1986, pp. 21-27.
[6] G. Booch, Software Engineering with Ada, The Benjamin/Cummings Publishing Company, Inc., 1987, pp. 334-354.
[7] G. Booch, "Object-Oriented Development," IEEE Trans. Software Eng., Vol. SE-12, No. 2, Feb. 1986, pp. 211-221.
[8] G. R. Andrews, "The Design of a Message Switching System: An Application and Evaluation of Modula," IEEE Trans. Software Eng., Vol. SE-5, No. 2, Mar. 1979, pp. 138-147.
[9] J. G. P. Barnes, Programming in Ada, Addison-Wesley, 1984.
[10] M. Ben-Ari, Principles of Concurrent Programming, Prentice-Hall International, 1983.
[11] G. Booch, Software Engineering with Ada, Benjamin/Cummings, 1983.
[12] H. M. Deitel, An Introduction to Operating Systems, Addison-Wesley, 1983.
[13] J. Peterson and A. Silberschatz, Operating System Concepts, Addison-Wesley, 1983.
[14] M. M. Tanik, "A Comparative Study of Synchronization Models Exploitable for Real-Time Software Development Environment Design and Testing," SMU Technical Report 87-CSE-1, 1987.
[15] M. M. Tanik, "Message Based Kernel in Communications," AACI Tech. Report, 1984.
[16] "Analyzing Ada Concurrent Programming," ACM Ada Letters, March-April 1987.
[17] J. B. Rasmussen and B. Appelbe, "Real-Time Interrupt Handling in Ada," Software Practice and Experience, Vol. 17, No. 3, Mar. 1987, pp. 197-213.
[18] United States Department of Defense, Reference Manual for the Ada Programming Language, ANSI/MIL-STD-1815A, Feb. 1983.
There is a growing body of practice and literature on the role of information and communication technologies (ICTs) in preventing and responding to violence. There is also a lot of excitement, and corresponding literature, about the role of the internet in non-violent change and democratization. The use of mobile phones, social networks such as Facebook and Twitter, and user-generated content (UGC) like blogs and YouTube videos in the protests in Tunisia and Egypt, as well as throughout the wider Middle East and North Africa (MENA) region, has shown how ICTs can complement and augment the exercise of the rights to freedom of expression, freedom of association, and freedom of peaceful assembly. This literature focuses on the use of ICTs before and during conflict, for example in conflict prevention and early warning. What about the use of ICTs in post-conflict situations, after the negotiation of peace agreements? How can ICTs be used in post-conflict interventions, more specifically in post-conflict peacebuilding and post-conflict reconstruction and recovery? What role can be played here by social media and user-generated content?
This paper describes agricultural policy choices and tests some predictions of political economy theories. It begins with three broad stylized facts: governments tend to tax agriculture in poorer countries and subsidize it in richer ones; they tax both imports and exports more than nontradables; and they tax more and subsidize less where there is more land per capita. We test a variety of political economy explanations, finding results consistent with hypothesized effects of rural and urban constituents' rational ignorance about small per-person effects, governance institutions' control of rent seeking by political leaders, governments' revenue motive for taxation, and the role of time consistency in policy making. We also find that larger groups obtain more favorable policies, suggesting that positive group-size effects outweigh any negative influence from free ridership, and that demographically driven entry of new farmers is associated with less favorable farm policies, suggesting that the arrival of new farmers erodes policy rents and discourages political activity by incumbents. Another new result is that governments achieve very little price stabilization relative to our benchmark estimates of undistorted prices, and governments in the poorest countries actually destabilize domestic prices.
The Africa Infrastructure Country Diagnostic (AICD) has produced continent-wide analysis of many aspects of Africa's infrastructure challenge. The main findings were synthesized in a flagship report titled Africa's Infrastructure: A Time for Transformation, published in November 2009. Meant for policy makers, that report necessarily focused on the high-level conclusions. It attracted widespread media coverage, feeding directly into discussions at the 2009 African Union Commission Heads of State Summit on Infrastructure. Although the flagship report served a valuable role in highlighting the main findings of the project, it could not do full justice to the richness of the data collected and technical analysis undertaken. There was clearly a need to make this more detailed material available to a wider audience of infrastructure practitioners. Hence the idea of producing four technical monographs, such as this one, to provide detailed results on each of the major infrastructure sectors (information and communication technologies (ICT), power, transport, and water) as companions to the flagship report. These technical volumes are intended as reference books on each of the infrastructure sectors. They cover all aspects of the AICD project relevant to each sector, including sector performance, gaps in financing and efficiency, and estimates of the need for additional spending on investment, operations, and maintenance. Each volume also comes with a detailed data appendix, providing easy access to all the relevant infrastructure indicators at the country level, which is a resource in and of itself.
"This book documents the decline of white working-class lives over the last half-century and examines the social and economic forces that have slowly made these lives more difficult. Case and Deaton argue that market and political power in the United States have moved away from labor and towards capital: as unions have weakened and politics has become more favorable to business, corporations have become more powerful. Consolidation in some American industries, healthcare especially, has brought an increase in monopoly power in some product markets, so that it is possible for firms to raise prices above what they would be in a freely competitive market. This, the authors argue, is a major cause of wage stagnation among working-class Americans and has played a substantial role in the increase in deaths of despair. Case and Deaton offer a way forward, including ideas that, even in our current political situation, may be feasible and improve lives." --
Why the state is the elephant in the room of political theory, too long ignored, and how to put this right. The future of our species depends on the state. Can states resist corporate capture, religious zealotry, and nationalist mania? Can they find a way to work together so that the earth heals and its peoples prosper? Or is the state just not up to the task? In this book, the prominent political philosopher Philip Pettit examines the nature of the state and its capacity to serve goals like peace and justice within and beyond its borders. In doing so, he breaks new ground by making the state the focus of political theory, with implications for economic, legal, and social theory, and presents a persuasive, historically informed image of an institution that lies at the center of our lives. Offering an account that is more realist than utopian, Pettit starts from the function the polity is meant to serve, looks at how it can best discharge that function, and explores its ability to engage beneficially in the life of its citizens. This enables him to identify an ideal of statehood that is a precondition of justice. Only if states approximate this functional ideal will they be able to deal with the perennial problems of extreme poverty and bitter discord, as well as the challenges that loom over the coming centuries, including climate change, population growth, and nuclear arms.
Now we know more about why the Environmental Protection Agency last year suddenly punted on one Louisiana case in which it tried to expand its powers beyond its legal authority, thanks to a similar case initiated by Republican Gov. Jeff Landry.
Last week, a federal district court blocked the EPA from creating rules that would allow the use of disparate impact requirements in its decision-making process. This process utilizes a disparate impact study, which assesses whether proposed actions may have differential impacts on protected classes under Title VI of the Civil Rights Act, and which assumes foundationally that significant differences must connote racist intentions deemed illegal.
Over two decades ago the U.S. Supreme Court instructed the EPA that it couldn't impose this requirement, but only at the tail end of the Republican Pres. Donald Trump Administration did it issue a repeal. Before the rule became final, the new Democrat Pres. Joe Biden EPA predictably dropped it. Illegally imposing the rule threatens, in this instance, hundreds of millions of dollars in state grant money from the federal government because, as part of its role in approving these grants, the EPA insists on including language that the state must follow.
That led to a special interest group ideologically opposed to chemical facilities in its neighborhood, RISE St. James, lodging a complaint that used disparate impact as a justification to curtail development of these facilities. Louisiana argued that doing this violated statute and constituted reverse discrimination: conjuring a power not granted by statute made for an illegal race-based approach, since statute allows race-based remedies only in cases of intentional discrimination, not simply on disparate outcomes taken as an indicator of it.
The court agreed with the state that the EPA showed no reluctance to try to impose such a standard again, and so enjoined it. Landry launched the case (new GOP Atty. Gen. Liz Murrill pushed it across the goal line), which proved to be his last solo victory in that office (along with other states, just as he exited office he also beat the EPA in another case). The ruling will have ramifications nationwide, as the EPA now can't overstretch its authority anywhere in overseeing the use of federal grant monies.
But an interesting side note to that case resonates with both a case against the state that the EPA abandoned last summer and yet another, now overseen by Murrill, still working its way through the courts. The only point on which the state didn't prevail was getting the court to declare that the EPA had illegally coordinated with an interest group, the Sierra Club, in having the case play out.
It turns out the state suspected that coordination between the Biden EPA and sympathetic private parties, spanning not just interest groups but journalists, was much wider. Attorney general Landry was, and now Murrill is, pursuing another suit against the EPA for failure to respond to federal Freedom of Information Act requests detailing communications between it and these parties. The state asserts that the EPA is stonewalling production of these records. The request goes back a couple of years, surrounding previous efforts the EPA made to bludgeon the state over its permitting of chemical producers to expand their footprints, on the basis that those permits constituted "environmental racism." This alleged that the state's decisions were a form of illegal racial discrimination based upon the alleged impact the subsequent activities would have on protected classes under the Civil Rights Act.
Landry stepped up to the plate and sued the EPA for acting on the complaint, asserting that it exceeded its authority under federal law and in the process delegated authority to special interest groups. Within a month, the EPA abruptly closed its investigation without any action, an unusual resolution, since in almost every case it typically extracts concessions. At the time, speculation was that it had such a thin case using environmental racism as a justification that, had the case continued, the judiciary could have ruled the EPA had exceeded its authority. The recent ruling partially addresses that issue insofar as the use of disparate impact requirements is concerned.
Yet in light of the recent ruling, back then another and perhaps even greater motivation existed for Biden's EPA to fold up – evidence of coordination with outside sympathetic parties. Keep in mind that only weeks earlier Landry and Missouri's attorney general had succeeded in another federal court case where a number of federal agencies were enjoined from contact with social media companies after finding sufficient evidence that they had colluded to support Biden Administration policies and election activities. That case, after enforcement was put on hold by the U.S. Supreme Court, will be heard on Mar. 18.
Production of these public records would touch not only upon the communications with the Sierra Club subject to the other suit, but also apparently upon those with a host of other entities, including the special interests RISE St. James, Concerned Citizens of St. John, and the Deep South Center for Environmental Justice, as well as with journalists working with news outlets including the Times-Picayune, The Advocate, the Guardian, WGNO, WWNO, and MSNBC. These groups and outlets sympathize with climate alarmism and other politically leftist preferences on issues of the environment and race, and therefore are antipathetic to Landry.
Given the facts of the case before the Supreme Court, Murthy v. Missouri, it's not unreasonable to think that the EPA engaged in similar tactics to collude with certain special interests and to influence certain media outlets in order to impose its will on Louisiana. Indeed, that behavior would be a variant of the "sue-and-settle" tactics first employed by Democrat Pres. Barack Obama and adopted by Landry's predecessor, Democrat former Gov. John Bel Edwards. With this, government would try to make end runs around the law by having a friendly special interest sue over a certain practice, then give the group its way (also preferred by government) through a court settlement. In the present variation with the EPA, it would come through administrative law processes, namely the complaint procedure.
And maybe that's another reason why the complaint last summer was dropped like a hot potato: the threat of revealing that deep ties perhaps exist among these groups, and even these journalists, and the Biden EPA and its agenda. Continuing it further might have exposed them and opened up litigation along the lines of Murthy v. Missouri.
Except that Louisiana, and now Murrill, don't plan on stopping with discovery, pursued through FOIA requests instead of as attendant to an administrative law case. Which is the right thing to do to ensure oppressive government and its beneficiary allies don't run roughshod over democracy.
The history of law of Kyiv Rus, despite more than 200 years of research, remains a focus of attention for scholars as well as politicians. One of the important questions in the legal history of this period concerns judicial procedure and the role of witness testimony in it. This problem was studied quite thoroughly as early as the nineteenth century, and one of the active participants in the discussion around this theme was a historian of law, a native of Ukraine and professor of Kyiv University, Vasyl Hryhorovych Demchenko. In his master's dissertation, "A Historical Study of Witness Testimony as Evidence in Judicial Matters under Russian Law before Peter the Great" (1859), V.H. Demchenko examined in detail the formation of such an important institution of procedural law as the institution of testimony. The scholar notes the significance of judicial evidence in the general genesis of law, emphasizing that self-help was once the only means of protecting rights, but the development of society led to the protection of rights by the courts being used alongside it. He considered that the procedural aspects related to witness testimony are set out quite systematically in the Russkaya Pravda (Russian Truth). The researcher drew attention to the fact that the Russkaya Pravda speaks of witnesses in most cases in connection with the offences they are to certify, according to each case in which this proof is required. The provisions concerning witnesses therefore have the significance not of general rules extending to all possible cases of their use, but concern only particular cases, having force only where they are directly established. Only some provisions, unconnected with definitions of particular offences, have the significance of general rules. The scholar emphasized that the procedural role of witnesses under the Russkaya Pravda was not limited to their significance as judicial evidence.
In some cases, witnesses took a certain part in the conduct of the process itself. This concerned those procedural measures whose application was left to the parties themselves, without any participation of governmental bodies (for example, the zvid). In his study, V.H. Demchenko also analysed the question of the capacity of witnesses to testify, as well as the scope of application of proof with the participation of witnesses under the Russkaya Pravda, the order of proof with the participation of witnesses, and the force of witness testimony. Thus, V.H. Demchenko was undoubtedly one of the most qualified specialists in the history of law to have studied the era of Kyiv Rus.
Liberal feminism, rooted in modernity, is closely connected with emancipationist political activity in which gender equality is a long-term goal guaranteed by democracy; at the same time, that goal is considered a "self-legitimizing myth". Feminism in the "postmodern condition" faces complicated and ambiguous processes of critical debate and sharp conflict. On the one hand, this concerns acute arguments about the nature of contemporary feminism; on the other, the conflict between feminist theory and gender research, which has lately become evident. This mainly reflects disputes over the sharply actualized relations between feminism and gender studies, with the obvious actualization of queer theories and the corresponding discourses, which expand their impact on cultural and social discursive practices and enjoy visible appreciation within gender studies. It is necessary to stress that queer movements have never been simply movements for emancipation and civil rights. For feminism, the latter means that the first phase (not "wave") of feminism has been completed. Opposition to feminism is not new. However, the claim that we are now in a post-feminist epoch is challenged. Feminism is active in many countries of the contemporary world. The next phase will develop in the context of the transformation of gender relations. At the same time, women's interests, as always, occupy a constant place on the gender agenda. Moreover, feminist projects are very important for the transformation of gender regimes and forms. Feminism can change not only the nature of the gender regime but also the nature of "late capitalism": labour regimes, the regulation of labour time, the elimination of violence in both private and public spheres, and so on. All these factors concern capitalism and gender regimes simultaneously.
If democratic processes are strengthened, feminist projects will be able to influence both the form of capitalism and the form of gender regimes. In this context, fundamental questions should be answered: does the transgender epoch proclaim the end of gender in its traditional meaning, and how does all this influence the theories of feminism and gender? On the whole, the "sexuality approach" provides an opportunity to consider the fundamental problems arising from our indefinite responses to the messages of discourses and discursive practices in the world. However, the fact of the power component in the distribution of gender roles is no less valid nowadays, and gender as a theory cannot be presented without an analysis of the corresponding power systems. Thus, if feminism is not the struggle for the equality of women, then it becomes merely a method of scientific analysis. The idea of the non-importance of feminism today intersects with questions about the false importance of gender categories, given that we live in a culture of "liquid gender" where a stable gender has become non-obligatory and arbitrary. It is also important that the intention to overthrow the "tyranny of the normal" is obvious both in the theory and in the practices of postmodernism.
Постмодернізм, відкидаючи не лише біологічний, але й психологічний детермінізм, проголошує «трансгендерне» століття: гендер як ідентифікація з однією статтю або суб'єктом – це фікція. Для феміністок останнє означає, що перша фаза (саме фаза) фемінізму завершилася. Наступна фаза, розвиваючись в контексті гендерних стосунків, що трансформуються, може змінити не лише природу гендерного режиму, але й природу «пізнього» капіталізму (режими праці, робочого часу, боротьба з насильством тощо). І хоча гендерні проекти динамічно реконструюються в контексті гендерних стосунків, що трансформуються в ХХІ ст., проте жінки як і раніше займають тверді позиції у феміністських проектах, крім того, саме феміністські проекти дуже важливі для зміни гендерного режиму.