Frontiers of computational fluid dynamics (CFD) are constantly expanding and eagerly demanding more computational resources. Currently, we are experiencing a rapid evolution of high-performance computing (HPC) systems driven by power consumption constraints. New HPC nodes incorporate accelerators that are used as math co-processors to increase the throughput and the FLOP-per-watt ratio. On the other hand, multi-core CPUs have turned into energy-efficient system-on-chip architectures, in which the main components of the node are fused into a single chip, reducing energy costs. Nowadays, several institutions and governments are investing in the research and development of different aspects of HPC that could lead to the next generations of supercomputers. These initiatives have named the problem the exascale challenge. This goal can only be achieved by incorporating major changes in computer architecture, memory design, and network interfaces. The CFD community faces an important challenge: keeping pace with the rapid changes in HPC resources. Codes and formulations need to be redesigned to exploit the different levels of parallelism and the complex memory hierarchies of the new heterogeneous systems. The main characteristics demanded of the new CFD software are memory awareness, extreme concurrency, modularity, and portability.

This thesis is devoted to the study of a CFD algorithm refactoring for the adoption of new technologies. Our application context is the solution of incompressible flows (DNS or LES) on unstructured meshes. The first approach was to use GPUs to accelerate the Poisson solver, which is the most computationally intensive part of our application. The positive results obtained in this first step motivated us to port the complete time-integration phase of our application, which required a major redesign of the code. We propose a portable implementation model for CFD applications. The main idea is to substitute stencil data structures and kernels with algebraic storage formats and operators. By doing so, the algorithm is restructured into a minimal set of algebraic operations. The implementation strategy consisted in creating a low-level algebraic layer for computations on CPUs and GPUs, and a high-level, user-friendly discretization layer for CPUs that is fully localized at the preprocessing stage, where performance does not play an important role. As a result, during the time-integration phase the code relies on only three algebraic kernels: the sparse matrix-vector product (SpMV), the linear combination of two vectors (AXPY), and the dot product (DOT). Such a simple set of basic linear algebra operations naturally provides the desired portability to any computing architecture. Special attention was paid to the development of data structures compatible with the stream processing model. A detailed performance analysis was carried out for both sequential and parallel execution, engaging up to 128 GPUs in a hybrid CPU/GPU supercomputer. Moreover, we tested the portable implementation model of the TermoFluids code on the Mont-Blanc mobile-based supercomputer. The redesign of the kernels exploits a heterogeneous execution model using both computing devices, CPU and GPU, of the ARM-based nodes. The load balancing between the two computing devices relies on a tabu search strategy that tunes the workload distribution during the preprocessing stage.
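To make the central idea concrete, the following minimal Python sketch (an illustration only, not the actual TermoFluids implementation) shows how a complete iterative solve, here a plain conjugate gradient for a symmetric positive-definite system standing in for the pressure Poisson equation, can be written using nothing but the three kernels named above; the small example matrix is hypothetical.

import numpy as np
from scipy.sparse import csr_matrix

def spmv(A, x):            # sparse matrix-vector product (SpMV)
    return A @ x

def axpy(a, x, y):         # linear combination of two vectors (AXPY): a*x + y
    return a * x + y

def dot(x, y):             # dot product (DOT)
    return float(np.dot(x, y))

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b using only the three algebraic kernels above."""
    x = np.zeros_like(b)
    r = axpy(-1.0, spmv(A, x), b)      # r = b - A x
    p = r.copy()
    rho = dot(r, r)
    for _ in range(max_iter):
        q = spmv(A, p)
        alpha = rho / dot(p, q)
        x = axpy(alpha, p, x)          # x = x + alpha p
        r = axpy(-alpha, q, r)         # r = r - alpha q
        rho_new = dot(r, r)
        if rho_new ** 0.5 < tol:
            break
        p = axpy(rho_new / rho, p, r)  # p = r + beta p
        rho = rho_new
    return x

# Tiny hypothetical SPD system standing in for the pressure Poisson equation.
A = csr_matrix(np.array([[4.0, -1.0, 0.0],
                         [-1.0, 4.0, -1.0],
                         [0.0, -1.0, 4.0]]))
b = np.array([1.0, 2.0, 3.0])
print(conjugate_gradient(A, b))

Because every step of the time integration reduces to these three calls, porting the solver to a new device amounts to reimplementing SpMV, AXPY, and DOT for that device.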
A comparison of the Mont-Blanc prototypes with high-end supercomputers in terms of achieved net performance and energy consumption provided some guidelines on the behavior of CFD applications on ARM-based architectures. Finally, we present a memory-aware, auto-tuned Poisson solver for problems with one Fourier-diagonalizable direction. This work was developed and tested on the BlueGene/Q Vesta supercomputer and aims at demonstrating the relevance of vectorization and memory awareness for fully exploiting modern energy-efficient CPUs.
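The Fourier-diagonalizable direction mentioned above refers to a standard decomposition technique: a fast Fourier transform along a uniform, periodic direction decouples the 3D Poisson problem into independent 2D systems, one per Fourier mode. The Python sketch below illustrates the principle under those assumptions; it is not the memory-aware, auto-tuned solver developed in the thesis, and the plane operator A2d is assumed to be non-singular (e.g., through Dirichlet conditions somewhere in the plane).

import numpy as np
from scipy.sparse import identity
from scipy.sparse.linalg import spsolve

def poisson_fourier(A2d, rhs, dz):
    """Solve a 3D Poisson problem with one uniform, periodic direction (z)
    by Fourier diagonalization along that direction.

    A2d : sparse discrete Laplacian of one 2D plane (n2d x n2d), non-singular
    rhs : right-hand side, shape (n2d, nz)
    dz  : mesh spacing in the periodic direction
    """
    n2d, nz = rhs.shape
    rhs_hat = np.fft.fft(rhs, axis=1)                 # decouple the z-direction
    # Eigenvalues of the 1D periodic second-difference operator along z.
    k = np.arange(nz)
    lam = 2.0 * (np.cos(2.0 * np.pi * k / nz) - 1.0) / dz ** 2
    sol_hat = np.empty_like(rhs_hat)
    I = identity(n2d, format="csr")
    for m in range(nz):                               # one independent 2D system per mode
        A_m = A2d + lam[m] * I
        sol_hat[:, m] = (spsolve(A_m, rhs_hat[:, m].real)
                         + 1j * spsolve(A_m, rhs_hat[:, m].imag))
    return np.real(np.fft.ifft(sol_hat, axis=1))

The independent per-mode systems are what make this approach attractive for vectorized, memory-aware execution: each mode touches its own data and the loop over modes is embarrassingly parallel.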
Ambient Assisted Living (AAL) is an emerging multidisciplinary research area that aims to create an ecosystem of different types of sensors, computers, mobile devices, wireless networks, and software applications for enhanced living environments and occupational health. There are several challenges in the development and implementation of an effective AAL system, such as system architecture, human-computer interaction, ergonomics, usability, and accessibility. There are also social and ethical challenges, such as acceptance by seniors and the privacy and confidentiality that must be a requirement of AAL devices. It is also essential to ensure that technology does not replace human care and is used as a relevant complement. The Internet of Things (IoT) is a paradigm in which objects are connected to the Internet and support sensing capabilities. IoT devices should be ubiquitous, recognize the context, and support intelligence capabilities closely related to AAL. Technological advances make it possible to define new advanced tools and platforms for real-time health monitoring and decision making in the treatment of various diseases. IoT is a suitable approach to building healthcare systems, and it provides a suitable platform for ubiquitous health services, using, for example, portable sensors to carry data to servers and smartphones for communication. Despite the potential of the IoT paradigm and technologies for healthcare systems, several challenges still need to be overcome. The direction and impact of IoT on the economy are not clearly defined, and there are barriers to the immediate and ubiquitous adoption of IoT products, services, and solutions. Several sources of pollutants have a high impact on indoor living environments. Consequently, indoor air quality is recognized as a fundamental variable to be controlled for enhanced health and well-being. It is critical to note that most people typically spend more than 90% of their time inside buildings, and poor indoor air quality negatively affects performance and productivity. Research initiatives are required to address air quality issues, supporting the adoption of legislation and real-time inspection mechanisms that improve public health, not only by monitoring public places such as schools and hospitals but also by increasing the rigor of building rules. Therefore, it is necessary to use real-time monitoring systems for correct analysis of indoor air quality to ensure a healthy environment, at least in public spaces. In most cases, simple interventions carried out by homeowners can produce substantial positive impacts on indoor air quality, such as avoiding indoor smoking and the correct use of natural ventilation. An indoor air quality monitoring system helps detect and improve air quality conditions. Local and distributed assessment of chemical concentrations is significant for safety (e.g., detection of gas leaks and monitoring of pollutants) as well as for controlling heating, ventilation, and air conditioning (HVAC) systems to improve energy efficiency. Real-time indoor air quality monitoring provides reliable data for the correct control of building automation systems and should be treated as a decision support platform for planning interventions for enhanced living environments. However, the monitoring systems currently available are expensive and only allow the collection of random samples without timestamp information.
Most solutions on the market only allow data consulting limited to device memory and require procedures for downloading and manipulating data with specific software. In this way, the development of innovative environmental monitoring systems based on ubiquitous technologies that allow real-time analysis becomes essential. This thesis resulted in the design and development of IoT architectures using modular and scalable structures for air quality monitoring based on data collected from cost-effective sensors for enhanced living environments. The proposed architectures address several concepts, including acquisition, processing, storage, analysis, and visualization of data. These systems incorporate an alert management Framework that notifies the user in real time in poor indoor air quality scenarios. The software Framework supports multiple alert methods, such as push notifications, SMS, and e-mail. The real-time notification system offers several advantages when the goal is to achieve effective changes for enhanced living environments. On the one hand, notification messages promote behavioral changes. These alerts allow the building manager to identify air quality problems and plan interventions to avoid unhealthy air quality scenarios. The proposed architectures incorporate mobile computing technologies such as mobile applications that provide ubiquitous air quality data consulting methods. Also, the data is stored and can be shared with medical teams to support diagnosis. The state-of-the-art analysis resulted in a review article on technologies, applications, challenges, opportunities, open-source IoT platforms, and operating systems. This review was instrumental in defining the IoT-based Framework for indoor air quality supervision. The research led to the development and design of cost-effective solutions based on open-source technologies that support Wi-Fi communication and incorporate several advantages such as modularity, scalability, and easy installation. The results obtained are auspicious, representing a significant contribution to enhanced living environments and occupational health. Particulate matter (PM) is a complex mixture of solid and liquid particles of organic and inorganic substances suspended in the air. Moreover, it is considered the pollutant that affects the most people. The particles most damaging to health are those of 10 microns or less in diameter (PM10), which can penetrate and lodge deep within the lungs, contributing to the risk of developing cardiovascular and respiratory diseases as well as lung cancer. Taking into account the adverse health effects of PM exposure, an IoT architecture for automatic PM monitoring was proposed. The proposed architecture is a real-time PM monitoring system and a decision-making tool. The solution consists of a hardware prototype for data acquisition and a Web Framework developed in .NET for data consulting. This system is based on open-source technologies, with several advantages compared to existing systems, such as modularity, scalability, low cost, and easy installation. The data is stored in a database developed in SQL Server using .NET Web services. The results show the ability of the system to analyze the indoor air quality in real time and the potential of the Web Framework for the planning of interventions to ensure safe, healthy, and comfortable conditions. Associations of high concentrations of carbon dioxide (CO2) with low productivity at work and increased health problems are well documented.
There is also a clear correlation between high levels of CO2 and high concentrations of pollutants in indoor air. There are sufficient reasons to monitor CO2 and provide real-time notifications to improve occupational health and provide a safe and healthy indoor living environment. Taking into account the significant influence of CO2 on living environments, a real-time IoT architecture for CO2 monitoring was proposed. CO2 was selected because it is easy to measure and is produced in quantity (by people and combustion equipment). It can be used as an indicator of other pollutants and, therefore, of air quality in general. The solution consists of a hardware prototype for data acquisition, a Web application, and a smartphone application for data consulting. The proposed architecture is based on open-source technologies, and the data is stored in a SQL Server database. The mobile Framework allows the user not only to consult the latest data collected but also to receive real-time notifications in poor indoor air quality scenarios and to configure the alert threshold levels. The results show that the mobile application not only provides easy access to real-time air quality data but also gives the user access to the parameter history and a record of changes over time. Consequently, this system allows the user to analyze the behavior of air quality precisely and in detail. Finally, an air quality monitoring solution was implemented, consisting of a hardware prototype that incorporates only the MICS-6814 sensor as the detection unit. This system monitors various air quality parameters such as NH3 (ammonia), CO (carbon monoxide), NO2 (nitrogen dioxide), C3H8 (propane), C4H10 (butane), CH4 (methane), H2 (hydrogen) and C2H5OH (ethanol). The monitoring of the concentrations of these pollutants is essential to provide enhanced living environments. This solution is Cloud-based, and the collected data is sent to the ThingSpeak platform. The proposed Framework combines sensitivity, flexibility, and real-time measurement accuracy, allowing a significant evolution of current air quality controls. The results show that this system provides easy, intuitive, and fast access to air quality data as well as relevant notifications in poor air quality situations to enable real-time intervention and improve occupational health. These data can be accessed by physicians to support diagnoses and correlate the symptoms and health problems of patients with the environment in which they live. As future work, the results reported in this thesis can be considered a starting point for the development of a secure system sharing data with health professionals in order to serve as decision support in diagnosis.
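The common data flow of the architectures described above can be sketched as follows. The snippet is a hypothetical Python illustration, not the actual .NET/SQL Server or ThingSpeak implementation: the endpoint URL, threshold value, and notification channel are placeholders, and the sensor read is stubbed.

import time
import requests  # third-party HTTP client, assumed to be installed

API_URL = "http://example.org/api/readings"   # hypothetical storage endpoint
CO2_THRESHOLD_PPM = 1000                      # configurable alert threshold

def read_co2_ppm():
    """Stub for the actual sensor driver (e.g., a serial or I2C read)."""
    return 800.0

def notify(message):
    """Stub for the alert channel (push notification, SMS, or e-mail)."""
    print("ALERT:", message)

def acquisition_loop(period_s=60):
    while True:
        ppm = read_co2_ppm()
        reading = {"sensor": "co2", "value_ppm": ppm, "timestamp": time.time()}
        requests.post(API_URL, json=reading, timeout=10)   # store for later consulting
        if ppm > CO2_THRESHOLD_PPM:                        # real-time alert rule
            notify(f"CO2 level {ppm} ppm exceeds threshold {CO2_THRESHOLD_PPM} ppm")
        time.sleep(period_s)

The same loop structure applies to the PM and multi-gas prototypes: only the sensor driver, the posted payload, and the threshold rules change.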
According to many publications and discussions, fast reactors promise to improve safety, non-proliferation, and economic aspects, and to reduce the nuclear waste problem. Consequently, several reactor designs advocated by the Generation IV Forum are fast reactors. In reality, however, after decades of research and development and billions of dollars of investment worldwide, there are only two fast breeders currently operational on a commercial basis: the Russian reactors BN-600 and BN-800. Energy generation alone is apparently not a sufficient selling point for fast breeder reactors. Therefore, other possible applications for fast nuclear reactors are advocated. Three relevant examples are investigated in this thesis. The first one is the disposition of excess weapon-grade plutonium. Unlike highly enriched uranium, which can be downblended for use in light water reactors, there exists no scientifically accepted solution for the disposition of weapon-grade plutonium. One option is its use in fast reactors that are operated for energy production. In the course of burn-up, the plutonium is irradiated, which is intended to fulfill two objectives: the resulting isotopic composition of the plutonium is less suitable for nuclear weapons, while at the same time the build-up of fission products results in a radiation barrier. Appropriate reprocessing technology is required to extract the plutonium from the spent fuel. The second application is the use as so-called nuclear batteries, a special type of small modular reactor (SMR). Nuclear batteries offer very long core lifetimes and have a very small energy output of sometimes only 10 MWe. They can supposedly be placed (almost) everywhere and supply energy without the need for refueling or shuffling of fuel elements for long periods. Since their cores remain sealed for several decades, nuclear batteries are claimed to have a higher proliferation resistance. The small output and the reduced maintenance and operating requirements should make them attractive for remote areas or electrical grids that are not large enough to support a standard-sized nuclear power plant. The last application of fast reactors this thesis investigates promises a solution to the problem of the radioactive waste from nuclear energy production. The separation of the spent fuel into different material streams (partitioning) and the irradiation of minor actinides in a fast neutron spectrum (transmutation) are claimed to solve this problem. Implementation of partitioning and transmutation (P&T) would require centuries of dedicated efforts, since several irradiation cycles and repeated reprocessing of the spent fuel elements between the irradiation cycles would be necessary. For all three applications, computer models of exemplary reactor systems were set up to perform criticality, depletion, and dose rate calculations. Based on the results, a specific critique of the viability of these fast reactor applications was conducted. Possible risks associated with their deployment were investigated. A Super-Safe, Small and Simple reactor promises to meet the energy demand of remote, small energy grids. The discussion of the proliferation risks associated with the spread of this kind of reactor often addresses the sealed core. The fissile material produced in the core and the possibility of breaking the seal and extracting the fuel are neglected.
To address these questions, the Toshiba 4S reactor was modeled as an example of a fast small reactor with a core lifetime of 30 years and an energy output of 10 MW. The fast SMR core is said to have a high level of proliferation resistance. Depletion calculations, however, show a production rate of more than 5 kg of plutonium per year. Furthermore, the plutonium-239 fraction in the fuel is higher than 90% even at planned discharge from the reactor, resulting in very attractive material for a possible proliferator. Several SMR characteristics complicate the unauthorized removal: the refueling intervals are extraordinarily long, and in between the core does not have to be opened for reshuffling of fuel elements. It supposedly remains sealed the whole time. Also, the machines needed to remove the spent fuel elements are not kept at the reactor site but will be transported there only for refueling. Still, the fissile material produced in the core poses a proliferation risk. The dose rates emitted from fuel elements 30 years after discharge are higher than 1 Sv/hr; they fulfill what is currently considered to be an important part of the spent fuel standard. Yet, there is only a one-year on-site cooling period planned before the spent fuel elements are transported back to a central facility. At this point, the spent fuel elements emit about 100 Sv/hr. This of course impedes diversion of the spent fuel from the reactor site, but it also complicates transportation to the reprocessing facility. Especially if these nuclear reactors are to be deployed on a global scale, the proliferation risks posed by the material production in the core have to be addressed. The likely detection of unauthorized fissile material diversion might discourage some actors from this pathway. But for a state determined to acquire nuclear weapons, and thus most likely willing to break its obligations under the Non-Proliferation Treaty and face the corresponding reactions from the international community, detection might not be a prohibiting factor. In the case of an open break-out, at nearly any point of the SMR operation cycle the state has access to significant quantities of weapon-grade plutonium. After only two years of reactor operation, more than one significant quantity (8 kg) of weapon-grade plutonium has already been produced in the core. For a state opting for this type of nuclear battery to power its remote small-grid locations, the bottleneck for acquiring nuclear weapons is not access to fissile material but reprocessing. Due to the modularity of small reactors, deployment of several of them in one country would not raise suspicion, while the latent option of becoming a nuclear weapon state would emerge. The first generation of deployed SMRs would most likely be operated in a once-through fuel cycle, and their number would be limited. In such a scenario, the concept of having only central facilities for (re)fueling might be realistic, and access to sensitive technology would be limited. But it is not yet clear by whom these facilities would be owned and how safe transportation to and from reactor sites can be ensured. For the second generation of SMRs, a closed fuel cycle is foreseen. With the projected high number of deployed SMRs, several reprocessing and fuel fabrication facilities would be needed. To reduce transportation efforts, those facilities might be decentralized as well.
In these scenarios, the number of states that have access to key technologies needed to acquire fissile material and build nuclear weapons increases, and the obstacles for non-state actors are reduced. SMRs can only realize the economic advantage offered by their modularity if they are produced and deployed in high numbers. Thus, this should also be taken into account in the proliferation risk assessment. Even though SMRs provide enhanced features against the possible proliferation of nuclear material, the overall security case is not as easily made as their proponents suggest. The BN-800 breeder reactor was awarded Top Plant 2016 in the nuclear generation category by POWER Magazine, the oldest American journal for the global power generation industry. This award is given to what are considered to be the most advanced and innovative projects. Among the winning attributes is the possibility of using the reactor for various purposes, including plutonium consumption. The BN-800 is essential for Russia's efforts to dispose of its excess weapon-grade plutonium, as agreed upon in the recently suspended Plutonium Management and Disposition Agreement (PMDA) signed between Russia and the United States. Depletion calculations for the BN-800 verify the viability of this disposition method according to the requirements set by the PMDA. The ratio of plutonium-240 to plutonium-239 is 0.17 in the spent fuel, thus fulfilling the agreed-upon fraction of 0.1 or higher. Yet, depending on its position in the core, the plutonium content in the spent fuel amounts to 82%-88% and is very close to what is generally labeled weapon-grade (more than 93% plutonium-239). After a cooling period of 30 years, the spent BN-800 fuel elements emit more than 1 Sv/hr and can therefore be considered to be self-protecting. According to IAEA regulations, they require less strict safeguards. At the same time, attractive nuclear material is bred in the blanket elements of the reactor even when the reactor is operated with a breeding ratio below one. Not only is the plutonium produced in the blankets of weapon-grade quality, with a plutonium-239 fraction significantly higher than 93%, but the radiation barrier also deteriorates quickly. The elements cannot be considered self-protecting after a cooling period of 30 years. Currently, no separation and reprocessing of blanket material is planned, but it is not clear why the blankets are necessary at all. In particular for the purpose of plutonium disposition, it would be preferable if no new weapon-grade material were bred. Further research should be done to assess the possibility of operating the BN-800 without blankets. Additionally, the introduction of inert matrix fuel could further increase the rate of achievable plutonium reduction in the reactors. Unfortunately, with the PMDA suspended in September 2016, the issue lost its urgency. The BN-800 is planned to play a key role in Russia's efforts to establish a closed nuclear fuel cycle in the future. A closed nuclear fuel cycle always implies reprocessing of spent fuel. In the case of the breeding blankets, weapon-grade plutonium will be separated at a certain stage of the fuel cycle, which contradicts the current efforts to dispose of such material. Once the BN-800 is exported to other countries for energy production, the possible proliferation of nuclear materials becomes of even greater concern. It is widely accepted that fast reactors are more suitable for the production of nuclear weapons material.
Especially for newcomers to nuclear energy, the possible advantages of fast reactors, namely the option to close the nuclear fuel cycle, seem a distant prospect. On the other hand, the operating history and economic viability of fast reactors are far worse than those of light water reactors, but they offer the option of access to nuclear weapons material. Several measures could reduce the proliferation risk of the BN-800 in the case of export. The most obvious are of course IAEA safeguards, preferably including an Additional Protocol. Until now, only China has shown interest in the BN-800. It would be advisable to achieve transparency during all steps of building the nuclear power plant, operation, and decommissioning. Comprehensive monitoring and inspection mechanisms would increase trust among the different parties and could also act as an example for other countries. For instance, precise and continuous monitoring of the reactor power output and irradiation times would provide the basis for a reliable assessment of the amount of plutonium and fission products produced in the core and blanket. The case of the BN-800 shows that, especially in light of several newcomer countries interested in buying nuclear technology, the proliferation risk has to be assessed more comprehensively. Limiting the focus to the reactor itself and the country originally developing it is not sufficient. Disadvantages of fast reactors, such as the high costs and the proliferation and safety risks, have long been known. They should not be forgotten with the new generation of reactors, even though some enhanced safety and security technologies are in place. Under current economic circumstances, the implementation of a transmutation fuel cycle is not competitive compared to other means of energy production. The use of plutonium in MOX fuel alone brings an economic penalty compared to the once-through fuel cycle and is motivated by other reasons, such as better resource utilization, which is necessary if nuclear power is to be used on a global scale. An objective of introducing a double-strata partitioning and transmutation fuel cycle using accelerator-driven systems for the transmutation of minor actinides is the treatment of high-level waste. The implementation of such a fuel cycle requires long-term dedication to the use of nuclear energy and the deployment of all facilities that make up a closed nuclear fuel cycle. Before taking such far-reaching decisions, it should be ensured that the promised benefits will hold true in reality. To date, even the proof of concept of an accelerator-driven system is still pending. An analysis of the existing literature shows that some crucial points regarding a P&T scenario are not dealt with in sufficient detail. In this thesis, a closer look was taken at some of these issues. Burn-up calculations were performed based on computer models of the European proof-of-concept reactor MYRRHA and the facility explicitly designed for the transmutation of minor actinides (EFIT). Both are accelerator-driven systems (ADS). They consist of a sub-critical reactor core, a spallation target to provide extra source neutrons, and a particle accelerator to provide high-energy protons for the spallation reaction.
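As background, and not as a result of the thesis, the standard source-multiplication relation of reactor physics explains why the effective multiplication factor of an ADS must remain below one: each source neutron injected by the spallation target starts a decaying chain of fission generations, so the total neutron multiplication per source neutron is

    M = \sum_{n=0}^{\infty} k_{\mathrm{eff}}^{\,n} = \frac{1}{1 - k_{\mathrm{eff}}}, \qquad k_{\mathrm{eff}} < 1,

which stays finite only for a sub-critical core (for an illustrative design value of, say, k_eff = 0.95, M = 20). A fuel composition that drives k_eff well above one, as discussed below, is therefore incompatible with source-driven operation.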
Besides some general characteristics, such as the possible transmutation rate in those reactor systems, three key issues that might affect the implementation were investigated in detail: the change in the fuel composition, the characteristics of the spent fuel elements, and the concentration of long-lived fission products in the spent fuel. The minor actinides have to be irradiated in the ADS for several cycles. For efficient transmutation, plutonium and minor actinides must be mixed in the fuel according to fixed fractions. After each cycle, the fuel has to be reprocessed and fresh fuel elements must be fabricated. It is noteworthy that even today's nuclear reactor fuel is only reprocessed once, and its use as MOX fuel is limited to a second cycle. Calculations of the effective neutron multiplication factor keff for various fuel compositions that depend on the number of previous cycles show the influence of the changing isotope vector. The claim that one initial load of plutonium is sufficient for several irradiation cycles cannot be confirmed. Moreover, criticality calculations show that fuel compositions as published for European implementation scenarios (PATEROS) result in keff = 1.6, a figure far too high, suggesting that the P&T scenarios published so far are not feasible. Calculations were done with the EFIT reactor, the reference reactor in the PATEROS study. As a consequence, major adjustments of the fissile material content in the fuel are necessary to resolve the overall reactivity problem. This in turn might lead to performance losses regarding the intended reduction of minor actinides within one reactor cycle. The claimed benefit of a P&T scenario is the reduction of the minor actinide inventory in the deep geological repository. After each irradiation cycle, the spent fuel elements must be cooled and reprocessed before new fuel elements can be fabricated. Since transmutation requires several cycles, the necessary cooling periods before reprocessing of the spent fuel play an important role in assessing P&T scenarios. The calculations show that, due to the increased residual heat of the spent P&T fuel elements, longer cooling periods than currently assumed would be necessary. The decay heat from the spent P&T fuel elements after a 40-year cooling period is still higher than the decay heat from spent MOX and fast reactor fuel elements, although these contain significantly more fuel. Also, the dose rates and the activity of the spent fuel would pose challenges for the overall reprocessing and fuel fabrication scheme. The build-up of curium-242, with its high spontaneous fission rate, causes a strong neutron background. Thus, heavy shielding would be necessary for the processing of the spent fuel elements. The high specific power makes permanent cooling of the tools and material unavoidable during all phases. Finally, this thesis questions whether the benefit of minor actinide transmutation is as significant as claimed by the proponents of P&T. With regard to the risks emerging from a deep geological repository, several long-lived fission products dominate the dose rate released to the biosphere. The production of the relevant nuclides zirconium-93, technetium-99, and iodine-129 in an ADS is mostly comparable to their generation in light water reactors. However, the fraction of cesium-135 increases four-fold.
For a German P&T scenario, the cesium-135 inventory in the deep geological repository would more than double compared to the agreed-upon phase-out scenario, in which the spent fuel elements are directly disposed of. The overall inventory of long-lived fission products in a German deep geological repository would increase by more than 50% in a P&T scenario. It can be stated that the reduction of the minor actinide inventory would be bought in exchange for an increase in the inventory of long-lived fission products. These results question the benefits of the currently researched P&T strategy, which claims to reduce the nuclear waste burden. At the current stage of P&T research and development, there are several open questions that need to be answered before actual implementation. This includes not only technical challenges such as the ones already discussed. Other crucial issues are the endurance of the cladding material in the core and the partitioning efficiency realistically achievable on an industrial scale. Even if all these issues prove resolvable, the benefit of the technology remains uncertain. Over the years, the number of targeted isotopes published in P&T schemes has declined: while in the beginning the plan was to transmute long-lived fission products as well, it now seems that even curium must be left out of the minor actinide composition because of the challenges it poses to reprocessing and fuel fabrication. Even though fast reactor research and development has a long history, operational experience with fast reactors is quite limited. Since more suitable solutions exist for energy generation, in recent years additional applications have been discussed for new and emerging fast reactor designs. The examples above show that the use of fast reactors is not as straightforward and beneficial as the advocates of this technology would argue. When looking at specific applications, fast reactors seem to offer solutions for various tasks, such as plutonium disposition, safe and secure energy supply in remote areas, and the treatment of radioactive waste. In a more comprehensive view, the promises fade and it turns out that the suggested applications bear risks. Critical fast reactors lead to the spread of nuclear weapons material and, even more importantly, of the technology and facilities to handle it. This is also true for fast sub-critical ADS, which would be deployed in a P&T fuel cycle. It is not yet clear to what extent P&T technology can actually help to solve the nuclear waste problem. The argument in favor of nuclear waste treatment in an ADS is based on one simple index value: the radiotoxicity based on total ingestion by humans. Besides, the development risks regarding P&T are high, and it is not clear whether a P&T fuel cycle could actually be implemented in the near future. Several crucial technologies do not yet exist. Moreover, nuclear reactors are first of all designed for energy production. Still, the vast majority of the current nuclear fleet are light water reactors and not fast breeder reactors. This might partly be attributed to soft factors, such as political considerations and public opinion. But maybe the reasons are intrinsic to the technology: fast reactors might just not be competitive for energy production. And it has not yet been proven that they are competitive with regard to emerging applications beyond power supply. In general, the new applications will lead to higher costs and risks, and it sometimes seems puzzling why they are promoted by academia, industry, and policymakers.
It also seems somewhat contradictory to try to solve a problem, namely the excess plutonium stockpiles and the radioactive waste, by using the same technology that originally produced it. Research efforts in this field have been going on for decades, and they have been substantially funded. Apart from the fact that this money is lost for other purposes, it could be argued that if even huge investments do not result in the desired outcome, other approaches should be tried. Critical assessment of the technology, however, is difficult as long as research is almost exclusively conducted by institutions that would benefit from a future implementation. Especially when official entities, such as the European Union, allocate funds for research and design efforts, they should take care that at least a fraction of the money also goes to independent researchers. This is the only way to guarantee that transparent and comprehensive data and assessments are available. And only then can society come to informed decisions on whether or not it supports fast reactor technologies.
Dottorato di ricerca in Economia e territorio ; Il cambiamento tecnologico comporta una "rimodellatura" e, a volte, un vero e proprio rovesciamento dell'ordine esistente all'interno delle organizzazioni produttive. La conoscenza generata dall'innovazione tecnologica, per essere "assorbita", necessita di un corredo di pratiche organizzative adeguate: per tale ragione è sempre più stretto il processo co-evolutivo tra sviluppo tecnologico e cambiamento organizzativo. Il coordinamento e la gestione delle sinergie e dei feedbacks tra diversi aspetti dell'attività innovativa diventa una specifica linea d'azione strategica per le imprese al fine di ottenere performances economiche superiori. La stretta complementarità tra investimenti in beni tangibili (nuove tecnologie) e intangibili (struttura organizzativa), da cui scaturisce una maggiore crescita della produttività, è il fulcro del nuovo approccio a queste tematiche. L'ipotesi di complementarità nei processi innovativi assume particolare rilievo con l'avvento delle tecnologie ICT, con la loro natura generalista o aspecifica (general purpose technology), il loro carattere ampiamente pervasivo, e l'esigenza connessa di una prestazione a più alto contenuto cognitivo e relazionale (Breshnahan et al. 2002, Brynjolfsson et al., 2000, Brynjolfsson et al., 2002, Bugamelli e Pagano 2004). La penetrazione di queste tecnologie nel tessuto produttivo favorisce lo sviluppo di diversi input complementari e comporta diverse ondate di innovazioni "secondarie" che creano nuovi prodotti e nuovi processi, dando luogo a periodi più o meno prolungati di aggiustamento strutturale che coinvolgono la riorganizzazione aziendale e l'implementazione delle pratiche del lavoro ad alta performance o High Performance Workplace Practices (Breshanan e Trajtenberg 1995). Quest'ultime si esplicitano in una serie di azioni che hanno nell'empowerment delle risorse umane l'elemento centrale, e che si concretizzano nella riduzione dei livelli gerarchici, nell'assunzione generalizzata di responsabilità, nel coinvolgimento dei lavoratori, nello svolgimento di ruoli attivi, nel lavoro in team, nella polivalenza e nella policompetenza, nei sistemi di valutazione della performance e dei suggerimenti dal basso, e infine nelle buone relazioni industriali. La concettualizzazione dell'organizzazione come un insieme di elementi profondamente eterogenei ma complementari risale a Milgrom e Roberts (1990 e 1995) che, dapprima, ne forniscono una definizione basata sulle proprietà di supermodularità della funzione di redditività dell'impresa, e poi modellano il raggruppamento delle pratiche risultanti dalla complementarità tra innovazioni tecnologiche e cambiamenti organizzativi. Implicita nella definizione di complementarità è l'idea che fare di più in una certa attività non impedisce di fare di più in un'altra, contrariamente alla teoria tradizionale dell'impresa in cui l'ipotesi di rendimenti di scala decrescenti può porre dei vincoli alla possibilità di incremento simultaneo delle variabili di scelta dell'impresa. Le analisi empiriche hanno messo in rilievo come frequentemente innovazioni tecnologiche ed organizzative siano adottate congiuntamente e come entrambe influiscano sulle performances delle imprese (Black e Lynch 2000, Bresnahan, Brynjolfsson e Hitt 2002, Brynjolfsson, Lindbeck e Snower 1996, Malone et al. 1994, Pini 2006, Pini et al. 2010). Nel nostro paese gli studi empirici sulle complementarità tra sfere innovative sono ancora pochi. 
I principali lavori di natura econometrica realizzati, sulla base di limitati campioni di imprese a livello provinciale, sono attribuibili a Cristini et al. (2003 e 2008), Leoni (2008), Mazzanti et al. (2006), Piva et al. (2005), Pini et al. (2010). Un aspetto poco indagato, anche nei lavori citati, è quello dell'interazione tra tecnologie ICT, cambiamenti organizzativi e pratiche lavorative ad alta performance sulla produttività del lavoro, che è proprio l'argomento specifico che ci siamo proposti di indagare. Preliminarmente abbiamo ricostruito il dibattito teorico ed empirico sul ruolo di driver al fine dell'ottenimento di performances superiori delle tecnologie ICT, dei cambiamenti organizzativi e delle nuove pratiche del lavoro, singolarmente presi. In una seconda fase abbiamo verificato l'esistenza di legami virtuosi tra le tre attività innovative e la produttività del lavoro mettendo in evidenza le complementarità tra le sfere innovative. Per questo abbiamo effettuato un'analisi empirica utilizzando due fonti principali: IX e X indagine sulle imprese manifatturiere del Mediocredito Centrale (ora Capitalia) e la Community Innovation Survey (Cis-4) dell'Istat. Questi ultimi dati sono integrati con quelli di bilancio delle imprese società di capitali attive dal 2001 al 2008, con i caratteri strutturali del Registro delle imprese (Asia), con i dati del commercio estero (Coe), e dell'occupazione (Oros). Seguendo il productivity approach, abbiamo ricercato i legami di complementarità eseguendo, con il software STATA 10, una serie di regressioni multivariate, utilizzando funzioni di produzione aggiustate con le strategie innovative e le loro interazioni. I modelli, stimati con la tecnica dell'Ordinary Least Square (OLS), sono differenti a seconda della tipologia di dati disponibili: con i dati Mediocredito si è stimata una funzione di produzione di tipo Cobb-Douglas, per i dati Cis-4 un stimato un modello a effetti fissi tramite una funzione di produzione di tipo Translog. Se il ricorso alla funzione Cobb-Douglas è ricorrente nella letteratura internazionale, soprattutto per stimare gli effetti delle singole strategie innovative sulla produttività del lavoro (Black e Lynch 2001, 2004, Breshnan et al. 2002, Gera e Gu 2004), l'utilizzo di una funzione Translog, è scelta assolutamente non ricorrente in letteratura per quanto riguarda l'oggetto di analisi. A tal riguardo ci si è ispirati al lavoro di Amess (2003), nel quale vengono valutati gli effetti del management buyouts sull'efficienza di lungo termine delle imprese manifatturiere della Gran Bretagna. Inoltre abbiamo testato la presenza di complementarità attraverso l'analisi delle differenze in termini di performance, suddividendo le imprese in base a diverse combinazioni nell'utilizzo delle strategie innovative. Un aspetto da rilevare è che, le nostre analisi realizzate sul panel integrato Cis-4 utilizzano un campione particolarmente numeroso e rappresentativo della realtà industriale italiana, un fatto, come detto, non frequente negli studi sull'argomento condotti nel nostro Paese. I risultati ottenuti dall'analisi di entrambi i campioni sono in linea con i principali studi empirici italiani (Cristini et al. 2003 e 2008, Mazzanti et al. 2006, Pini 2006, Pini et al. 2010), convalidando ampiamente l'ipotesi di un impatto positivo delle tre strategie innovative sull'aumento delle performances produttive delle imprese, anche se implementate singolarmente in azienda. 
Per quanto riguarda la verifica di un legame di complementarità tra le tre aree innovative emerge chiaramente un effetto additivo sul valore aggiunto attraverso l'analisi dei differenziali e seguendo l'approccio sulla supermodularità di Milgrom e Roberts (1990, 1995). L'aspetto più rilevante dei risultati ottenuti è costituito dal fatto che alcune variabili diventano particolarmente significative quando le imprese le adottano simultaneamente: ciò vale in particolare per la formazione e la partnership in R&D. L'attività di formazione è associata positivamente alla presenza di tecnologie ICT e all'innovazione organizzativa, intesa come instaurazione di partnership per la R&D. Inoltre dall'analisi sui dati Mediocredito emerge, in conformità alla teoria skill biased technical change, una propensione a domandare lavoratori in possesso di qualifiche più elevate da parte delle imprese che hanno implementato in maniera significativa cambiamenti tecnologico-organizzativi (Berman, Bound e Griliches 1994, Bresnahan et al. 2002, Draca, Sadun e Van Reenen 2006). ; Technological change entails a "reshaping" and, at times, an outright overturning of the existing order within productive organizations. The knowledge generated by technological innovation, in order to be absorbed, requires a set of adequate organizational practices: for this reason the co-evolutionary process between technological development and organizational change is becoming ever closer. Coordinating and managing the synergies and feedbacks among the different aspects of innovative activity becomes a specific line of strategic action for firms seeking superior economic performance. The close complementarity between investments in tangible assets (new technologies) and intangible ones (organizational structure), from which higher productivity growth arises, is the core of the new approach to these issues. The complementarity hypothesis in innovation processes takes on particular relevance with the advent of ICT, given their general purpose nature, their broadly pervasive character, and the associated demand for work with a higher cognitive and relational content (Bresnahan et al. 2002, Brynjolfsson et al. 2000, Brynjolfsson et al. 2002, Bugamelli and Pagano 2004). The penetration of these technologies into the productive fabric fosters the development of several complementary inputs and brings successive waves of "secondary" innovations that create new products and processes, giving rise to more or less prolonged periods of structural adjustment involving corporate reorganization and the implementation of High Performance Workplace Practices (Bresnahan and Trajtenberg 1995). The latter translate into a set of actions centred on the empowerment of human resources: the reduction of hierarchical levels, the generalized assumption of responsibility, worker involvement and active roles, teamwork, multi-skilling and multi-competence, performance appraisal and bottom-up suggestion systems, and, finally, good industrial relations. 
The conceptualization of the organization as a set of deeply heterogeneous but complementary elements goes back to Milgrom and Roberts (1990 and 1995), who first provide a definition based on the supermodularity properties of the firm's profit function and then model the clustering of practices resulting from the complementarity between technological innovations and organizational changes. Implicit in the definition of complementarity is the idea that doing more of one activity does not prevent doing more of another, in contrast with the traditional theory of the firm, in which the assumption of decreasing returns to scale may constrain the possibility of simultaneously increasing the firm's choice variables. Empirical analyses have shown that technological and organizational innovations are frequently adopted jointly and that both affect firm performance (Black and Lynch 2000, Bresnahan, Brynjolfsson and Hitt 2002, Brynjolfsson, Lindbeck and Snower 1996, Malone et al. 1994, Pini 2006, Pini et al. 2010). In Italy, empirical studies on the complementarities between innovation spheres are still few. The main econometric works, based on limited samples of firms at the provincial level, are due to Cristini et al. (2003 and 2008), Leoni (2008), Mazzanti et al. (2006), Piva et al. (2005), and Pini et al. (2010). One aspect that has received little attention, even in the works cited, is the interaction between ICT, organizational change, and high-performance work practices and its effect on labour productivity, which is precisely the specific topic we set out to investigate. We first reconstructed the theoretical and empirical debate on the role of ICT, organizational change, and new work practices, each taken individually, as drivers of superior performance. In a second phase we verified the existence of virtuous links between the three innovative activities and labour productivity, highlighting the complementarities between the innovation spheres. To this end we carried out an empirical analysis using two main sources: the IX and X surveys of manufacturing firms by Mediocredito Centrale (now Capitalia) and Istat's Community Innovation Survey (Cis-4). The latter data are integrated with the balance-sheet data of limited companies active from 2001 to 2008, with the structural characteristics from the business register (Asia), with foreign trade data (Coe), and with employment data (Oros). Following the productivity approach, we searched for complementarity links by running, with the STATA 10 software, a series of multivariate regressions using production functions augmented with the innovation strategies and their interactions. 
The models, estimated with the Ordinary Least Squares (OLS) technique, differ according to the type of data available: with the Mediocredito data a Cobb-Douglas production function was estimated; with the Cis-4 data a fixed-effects model was estimated using a Translog production function. While the use of the Cobb-Douglas function is common in the international literature, especially for estimating the effects of individual innovation strategies on labour productivity (Black and Lynch 2001, 2004, Bresnahan et al. 2002, Gera and Gu 2004), the use of a Translog function is a choice that is not at all common in the literature for this object of analysis. In this respect we drew on the work of Amess (2003), which evaluates the effects of management buyouts on the long-term efficiency of British manufacturing firms. We also tested for the presence of complementarity by analysing performance differentials, dividing the firms according to different combinations of the innovation strategies adopted. One point worth noting is that our analyses on the integrated Cis-4 panel use a particularly large sample that is representative of Italian industry, a circumstance, as already mentioned, not frequent in the studies on this subject conducted in Italy. The results obtained from both samples are in line with the main Italian empirical studies (Cristini et al. 2003 and 2008, Mazzanti et al. 2006, Pini 2006, Pini et al. 2010), broadly confirming the hypothesis of a positive impact of the three innovation strategies on firms' productive performance, even when implemented individually. As regards the verification of a complementarity link between the three innovation areas, a clear additive effect on value added emerges from the analysis of the differentials, following the supermodularity approach of Milgrom and Roberts (1990, 1995). The most relevant aspect of the results is that some variables become particularly significant when firms adopt them simultaneously: this holds in particular for training and R&D partnerships. Training activity is positively associated with the presence of ICT and with organizational innovation, understood as the establishment of R&D partnerships. Moreover, in line with the skill-biased technical change theory, the analysis of the Mediocredito data shows a propensity to demand more highly qualified workers on the part of firms that have implemented significant technological-organizational changes (Berman, Bound and Griliches 1994, Bresnahan et al. 2002, Draca, Sadun and Van Reenen 2006).
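As a purely illustrative sketch of the productivity approach described above (the actual estimation was carried out in STATA 10; the variable names and the synthetic data below are assumptions made for the example), a Cobb-Douglas production function augmented with innovation-strategy dummies and their interaction terms could be estimated by OLS as follows:

# Illustrative sketch only (not the authors' STATA code): OLS estimation of a
# Cobb-Douglas production function augmented with innovation-strategy dummies
# (ICT, organizational change ORG, high-performance work practices HPWP) and
# their interactions, the core of the complementarity test in the text.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "capital": rng.lognormal(3.0, 1.0, n),   # capital stock per firm (synthetic)
    "labor": rng.lognormal(4.0, 0.8, n),     # employees per firm (synthetic)
    "ict": rng.integers(0, 2, n),            # ICT adoption dummy
    "org": rng.integers(0, 2, n),            # organizational change dummy
    "hpwp": rng.integers(0, 2, n),           # high-performance work practices dummy
})
# Synthetic value added with a positive joint-adoption (complementarity) term.
df["value_added"] = np.exp(
    0.3 * np.log(df.capital) + 0.6 * np.log(df.labor)
    + 0.05 * df.ict + 0.04 * df.org + 0.03 * df.hpwp
    + 0.08 * df.ict * df.org * df.hpwp
    + rng.normal(0, 0.2, n)
)

# log(VA) = a + b*log(K) + c*log(L) + strategy dummies + interaction terms + error
model = smf.ols(
    "np.log(value_added) ~ np.log(capital) + np.log(labor)"
    " + ict + org + hpwp + ict:org + ict:hpwp + org:hpwp + ict:org:hpwp",
    data=df,
).fit(cov_type="HC1")
print(model.summary())
# Positive, significant coefficients on the interaction terms are the kind of
# evidence the complementarity (supermodularity) hypothesis looks for.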
This paper proposes a framework for software system design. The framework is based on decomposition and abstraction. The design formalism employs an Object Descriptive Attributed Notation (ODAN) for software design representation, which records three types of primary information about a software system's detailed design: the decomposition hierarchy (of the system being designed), the taxonomic structure (recognizing construction and function similarities), and the coupling specification (specifying the way of component integration). A message switching simulation system is taken as an example during the discussion. An Ada program based on this design is also presented. ; Technical Report 2018-07-ECE-017, Department of Electrical and Computer Engineering, University of Alabama at Birmingham, July 2018; a reissue of Technical Report 88-CSE-11, "The Design of a Message Switching System: Software Reusability Applied to Discrete Event Simulation," W. P. Yin and M. M. Tanik, Department of Computer Science and Engineering, Southern Methodist University, Dallas, Texas 75275-0122, February 1988.
I. INTRODUCTION
In recent years, software engineers have gradually realized that reuse concepts play a key role in several issues [1]: productivity, maintainability, portability, quality, and standards. A carefully engineered collection of reusable software components can reduce the cost of software development, improve the quality of software products, and accelerate software production [2]. Many approaches have been proposed and implemented which try to make the reuse of software components a reality. Among them, subroutine libraries, software generators, and object-oriented programming have achieved relative popularity [3, 9]. The technical foundations for making software reuse a viable alternative to program development have been identified and demonstrated by several projects [4, 11]. In this paper a software detailed-design methodology and a design representation system (ODAN) [10] are presented. One specific example, a message switching communication system, is developed using this methodology and representation. A message switching communication system was chosen as the application because it shows the range of ODAN's applicability and presents a number of interesting design problems. 
Ada was chosen as the implementation language because it is capable of dedicated concurrent programming, provides much-needed facilities for synchronization, and appears to be good at supporting software reusability. The specific design technique used here is described in Section II. The communication system itself is then developed in Sections III through V. Section III specifies the functional requirements of the system. Section IV summarizes the major features of ODAN and presents the system decomposition, abstraction, and integration in terms of ODAN. Section V refines the decomposition by giving outlines of Ada programs for some of the interesting parts of the system. Finally, Section VI summarizes the conclusions of our exploratory work.
II. DESIGN TECHNIQUE
The process of design is a transformation of a designer's ideas and expertise into a concrete implementation [5]. Observing the design process, we can see the following facts:
• Software design is a creative act of individuals using basic problem-solving techniques, building conceptual solutions based upon a software system specification.
• By providing decomposition and abstraction mechanisms, a large-scale, complex problem becomes an aggregation of subproblems. The solution for the original problem will be the combination of the solutions for the subproblems.
• The design representation is a knowledge representation that facilitates expressing the system decomposition hierarchy, the similarities of the system components, and the coupling constraints on the system components identified in the decomposition hierarchy.
The design described here is developed in two major steps: system decomposition and component abstraction. By iterating between decomposition and abstraction, three kinds of information are derived: decomposition, taxonomy, and integration. By decomposition we mean dividing the original problem into smaller modules that are themselves small problems and interact with one another in simple, well-defined ways [6]. By abstraction we mean a change in the level of detail to be considered, in which certain details are ignored in an effort to convert the original problem into a simpler one. In our design technique, we use a decomposition form which gives software system designers the opportunity to concentrate on the behavior of the entities of the application and the relationships among them. Therefore, a designer works at the level of general construction and functionality description, but not to the degree of precision necessary for executability. The design process can be characterized as follows [5]: "The design procedure is a series of successive refinements comprising two types of design activities. The first type concerns the transitions between the so-called design levels. The second type defines a set of design actions associated with a given design level. The design levels are successive refinements of the decomposition of the system under consideration." At the beginning of a software system design, designers decompose the original problem into subproblems based upon a behavioral scenario. Each subproblem corresponds to one object in the problem, in which changing the state of one object will affect the state of other objects. Decomposition continues until the subproblems are not further decomposable, i.e., until any state change on an object has no effect on any other object. During the decomposition, each object in the problem space will be designed as an entity in the solution space. 
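As an illustrative aside (not part of ODAN's published notation, which is defined in [10]), the following sketch shows one way the three groups of per-entity design information just introduced could be held in a machine-representable record; the field names are assumptions, while the example values anticipate the Aux_Mem entity described later in the report.

# Illustrative sketch only (not ODAN itself): one possible machine-representable
# record for the three kinds of per-entity design information -- decomposition,
# taxonomy, and integration. Field names are assumptions for the example.
from dataclasses import dataclass, field

@dataclass
class DesignEntity:
    name: str
    # Decomposition: components (data items / sub-entities) and operations.
    components: list[str] = field(default_factory=list)
    operations: list[str] = field(default_factory=list)
    # Taxonomy: inheritance-like "instance of" links recording reuse relations.
    instance_of: list[str] = field(default_factory=list)
    # Integration: coupling rules such as access constraints and the interface.
    access_constraint: str | None = None
    interface: dict[str, str] = field(default_factory=dict)

aux_mem = DesignEntity(
    name="Aux_Mem",
    components=["Storage_Cell", "Directory_Cell"],
    operations=["Write", "Read", "Write_Directory", "Read_Directory"],
    access_constraint="mutual exclusive",
    interface={"time_rule": "concurrent", "control_rule": "iterative"},
)
print(aux_mem)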
Entities in the solution space will be represented in computers. For each solution entity, three kinds of information are derived. First, the decomposition hierarchy (of the system) is declared. The decomposition hierarchy is the composition schema for the object. The composition schema contains data items, which indicate the object states, and operations, which manipulate the data items. Second, the taxonomic knowledge is specified. This knowledge can be viewed as a representation of inheritance relations that declare the reusability information, i.e., one entity in the solution space can inherit some facilities designed in another entity. Inherited facilities include data items, operations, or a whole entity. One entity can have multiple inheritance. This taxonomic knowledge will be used during further evolutionary stages in making reusable code segments. Third, the integration rules are specified. The primary goal of decomposing the original problem is divide-and-conquer. The partial solutions of the subproblems must be coupled together to solve the original problem. The integration rules specify the data flow and control flow among the entities, input and output constraints, and an activating algorithm. ODAN [10] is used for design representation. For each entity in the solution space a set of attributes is attached. Attributes are classified into three groups corresponding to the above three kinds of information. The values of the attributes are designer-defined; e.g., a value can be a single word, a sentence, a piece of code, a rule, an abstract algorithm, or a link to another entity, depending on the attribute's meaning. We chose ODAN as our design representation because it gives software designers the flexibility to add or erase attributes, or to assign a specific value to an attribute. In addition, ODAN is easily machine-representable and computable. This benefit helps software engineers store and retrieve previous designs.
III. SPECIFICATION OF A MESSAGE SWITCHING SYSTEM
A typical message switching system [7] has been chosen as an example to show the details of our design technique because it is a problem that is realistic in nature and implements many of the concepts of embedded systems. This generalized system is typical of several communication systems used by the U.S. government and NATO. The complete system consists of a network of switching nodes connected via high-speed trunk lines. Each switching node has an auxiliary memory, an archive facility, an operator, and can support up to fifty subscriber terminals. Figure 1 shows the configuration for a given node in the system. The general function of each node is to accept input messages from the trunk or local subscriber lines and route them to one or more output destinations. Input can be received from local subscribers or from another switching node (via the trunk line). The input message is stored in the auxiliary memory and then forwarded to the output destinations, which can be either local subscribers or another node in the network. Since messages must be completely received before being forwarded, this type of communication is often called store-and-forward message switching. Three successive phases are required to process each message: input, switching, and output. Figure 2 presents the system components and interfaces required to perform these functions. The following summary describes the processing that must be done during each of these phases. 
Figure 1. Typical node configuration.
Input
Read the input message from a subscriber or trunk link and store the message on both auxiliary memory and archive tapes. Each input message consists of a header, a body, and an end (end-of-message indicator).
Switch
Examine the header to determine the output destinations. For each destination, consult a directory to determine the appropriate output line to use (local subscriber or trunk to remote destination). Add a copy of the message header to the output queue on each line.
Output
Retrieve the message from the auxiliary memory and display it. Each message contains a priority; at all times, the message with the highest priority is transmitted first.
Each node has an operator who can send and retrieve messages like a subscriber. In addition, he can monitor and control the message activity at the node; for example, he can cancel a message or check the messages in each output queue. Also, the operator is notified of exceptions, for example, end of archive tape. The simulation of this system addresses the following requirements.
Figure 2. Message switch system components.
• Maximum I/O parallelism must be provided.
• Two different types of I/O devices exist (trunks and terminals). Both process messages.
• The switch must coordinate output to multiple destinations.
• Messages have priorities.
• The auxiliary memory and terminal devices must be controlled and synchronized because we are simulating more than one I/O device.
We now turn our attention to the design of a message switching simulation system that solves each of these problems.
IV. DETAILED SYSTEM DESIGN IN ODAN
The main activity of software design is not only generating new programs but also maintaining, integrating, modifying, and explaining existing ones [8]. Based upon the problem modelling and system specification, designers know what the message switching system should look like and how it should behave. By iterating between decomposition and abstraction, designers learn how the system is divided, what the functionalities of each component are, and how those components will be integrated. Our objective in using ODAN to represent the design is to keep reusability in mind before and during coding, not after.
A. System Decomposition
According to Grady Booch [2]: "Simply stated, object-oriented development is an approach to software design and implementation in which the decomposition of a system is based upon the concept of an object. An object is an entity whose behavior is characterized by the operations that it suffers and that it requires of other objects." Using an object-oriented design methodology, an object existing in the model of reality will have a corresponding structure entity in the solution. As specified above, each message processed by a switching node goes through three phases: input, switch, and output. In the problem space, several different objects participate in these three phases: I/O devices for sending and reading messages, a temporary memory for storing messages, a long-term memory for message backup, a reference table for the destination list, and a switching node for scheduling message transmission. The decomposition is shown in Figure 2. Each I/O device manages one subscriber or trunk line. The auxiliary memory provides temporary storage. The archive tape provides tape storage for recording all message transmissions. The table provides the destination cross-reference. 
The switch coordinates the output to multiple destinations. Using ODAN, we can describe the system decomposition in the following way.
Message_Switching_System
  Components: (Aux_Mem, Archive_Tape, Reference_Table, Subscribers, Trunks, Switch, Operator)
  Interface: Operator
Aux_Mem
  Components: (Storage_Cell, Directory_Cell)
  Access_Constraint: mutual exclusive
  Operations: (Write, Read, Write_Directory, Read_Directory)
Archive_Tape
  Components: (Tape)
  Access_Constraint: mutual exclusive
  Operations: (Archive_Msg, Retrieve_Msg)
Reference_Table
  Components: (Table)
  Operations: (Look_Up, Insert, Delete)
Trunk
  Components: (Msg_Queue)
  Access_Constraint: mutual exclusive
  Operations: (Broadcast_Msg)
Subscriber
  Components: (Msg_Queue)
  Access_Constraint: mutual exclusive
  Operations: (Add, Delete, Is_Empty, Read_Terminal, Write_Terminal)
Switch
  Components: (Msg_Queue)
  Access_Constraint: mutual exclusive
  Operations: (Add, Delete, Is_Empty)
For abstraction reasons, we ignore some details here. In fact, for each component and operation, ODAN provides a set of attributes. For example, in the Aux_Mem entity, the system designer may specify the structure for Storage_Cell and Directory_Cell, an algorithm skeleton for each operation, and exceptions for the operations. The algorithm skeleton may take existing program code as its body, or a set of rules, or a PDL-like specification. We take an Ada-like specification as the algorithm body.
B. Similarity Recognition
As mentioned before, our goal is to make the design information usable not only to develop a new program but also to provide reusable designs. During the system design stage, based upon the object-oriented decomposition and construction as well as the function specification of those objects, designers have an opportunity to recognize the construction and function similarities among those objects. In ODAN, we introduce knowledge representation into software design. More specifically, we take inheritance relations from semantic nets and modify the semantics of those relations to represent construction or function similarities. For example, the components of two entities, Reference_Table and Archive_Tape, have a functional similarity. Reference_Table uses a "table" to save all the destination information; Archive_Tape uses a "tape" to back up all the messages. Ignoring the structures of the destination and message information, the functions of a table and a tape can be the same: a sequential data structure with no priority. In our design, we use a non-priority queue to implement the reference table and the archive tape. The design representation is as follows.
Non_Priority_Queue
  Components: (Queue_Entry)
  Operations: (Clear, Is_Empty, Add, Position_Of, Remove, Entry_Of)
Archive_Tape
  Components: (Tape) instance of Non_Priority_Queue
  ...
Reference_Table
  Components: (Table) instance of Non_Priority_Queue
  ...
Here we modified the semantics of the instance of relation: because the component of the reference table is an instance of a non-priority queue, the non-priority queue is hidden inside the reference table. The manipulations on the table are specified by the reference table operations. In the same way, the message queues are designed as follows:
Priority_Queue
  Components: (Queue_Entry)
  Operations: (Clear, Is_Empty, Add, Delete)
Trunk
  Components: (Msg_Queue) instance of Priority_Queue
  ...
Subscriber
  Components: (Msg_Queue) instance of Priority_Queue
  ...
Switch
  Components: (Msg_Queue) instance of Priority_Queue
  ...
C. Component Integration
The entities of a software system are not isolated. They are related to each other to perform a specific task. One important piece of information in the software design is the coupling specification (how those entities coordinate). In ODAN we use the "interface" attribute to indicate the relation information and the coupling specification. The operations provide a static default interface if no explicit interface is specified. In the message switching system, we specify the interfaces for the auxiliary memory, the switch, and the subscriber in the following way.
Aux_Mem_Interface
  time_rule: concurrent
  control_rule: iterative
  import_rule: single
  body:
    (loop
       select
         accept Read_Msg
       or
         accept Write_Msg
       end select
     end loop)
Switch_Interface
  time_rule: concurrent
  control_rule: iterative
  body:
    (loop
       if not Is_Empty(Msg_Queue)
         -- Delete(Msg_Header)
         -- Look_Up(Msg_Header)
         -- Add(Subscriber_Msg_Queue)
       end if
     end loop)
Subscriber_Interface
  time_rule: concurrent
  control_rule: iterative
  body:
    (loop
       while not Is_Empty(Msg_Queue)
         -- Delete(Msg_Header)
         -- Aux_Mem.Read_Msg(Msg)
         -- Display(Msg)
       end loop
       -- Read_Terminal(Msg)
       -- Aux_Mem.Write_Msg(Msg)
       -- Archive_Tape.Archive_Msg(Msg)
       -- Add(Switch_Msg_Queue)
     end loop)
V. DESIGN IMPLEMENTATION
We chose Ada as the implementation language. Ada is a general-purpose language that embodies and enforces the modern software engineering principles of abstraction, information hiding, modularity, and locality. Ada offers a number of features that facilitate the expression of reusable software components and real-time systems. For example, generic program units are parameterized templates for generating software components; tasks operate in parallel with other program units and imply mutual exclusion; and the systematic separation between visible syntactic interface specifications and hidden bodies allows the programmer to separate concerns of module interconnection from concerns about how a module performs its task. Ada is used here as the implementation language for the message switching system also because it is available on our VAX 11/780 under the Unix operating system. Some specific algorithm designs can be written in Ada and taken as the values of some ODAN attributes. Those algorithms written in Ada serve as intermediate steps between system detail design and coding. Since our goal is also to show how to make a reusable software component during the design stage, we will not show the complete detail of the program code. The executable code runs on a VAX 11/780 under the Unix operating system.
A. Message Queue
As mentioned in the system decomposition section, we decided to design a priority queue to implement the message queue for subscribers, trunks, and switches. Ada's generic package provides a powerful tool at this point. Generic packages have the ability to create templates of program units with generic parameters supplied at translation time. The specification of the generic priority queue package is as follows. 
generic
   type QUEUE_ENTRY is private;
   type PRIORITY is limited private;
   with function PRIORITY_OF(THE_ENTRY : in QUEUE_ENTRY) return PRIORITY;
   with function ...
... => SIGNAL, PRIORITY => PRIORITY_TYPE, PRIORITY_OF => CHECK_PRIORITY, ...
... (PORT => PORT2,
     PORT_QUEUE => QUEUE_PKG.SUBSCRIBER1_QUEUE,
     PORT_QUEUE_SEMAPHOR => TABLE_PKG.SUBSCRIBER1_QUEUE_SEMAPHOR,
     GET_HEADER => DEVICE_DRIVERS_PKG.GET_HEADER_VT100,
     GET_BLOCK => DEVICE_DRIVERS_PKG.GET_BLOCK_VT100,
     PUT_HEADER => DEVICE_DRIVERS_PKG.PUT_HEADER_VT100,
     PUT_BLOCK => DEVICE_DRIVERS_PKG.PUT_BLOCK_VT100);
package TRUNK is new NODE_PKG
    (PORT => PORTS,
     PORT_QUEUE => QUEUE_PKG.TRUNK_QUEUE,
     PORT_QUEUE_SEMAPHOR => TABLE_PKG.TRUNK_QUEUE_SEMAPHOR,
     GET_HEADER => DEVICE_DRIVERS_PKG.GET_HEADER_TRUNK,
     GET_BLOCK => DEVICE_DRIVERS_PKG.GET_BLOCK_TRUNK,
     PUT_HEADER => DEVICE_DRIVERS_PKG.PUT_HEADER_TRUNK,
     PUT_BLOCK => DEVICE_DRIVERS_PKG.PUT_BLOCK_TRUNK);
C. Simulation Control
The message switching system is a multiprocessing system, but we simulate the activities of multiple I/O devices on a single I/O device, one terminal. Thus, it is necessary to synchronize the access to the terminal. This synchronization is implemented by an Ada task. The semantics of Ada tasks guarantee mutual exclusion. Only one task can access the terminal at a time, and if more than one task tries to access it at the same time, all but one have to wait in an implicit queue so as not to interfere with each other. If those tasks arrive at different times, the first task is permitted access first, and the remaining ones are put in the queue based upon a time stamp. The task for synchronizing terminal access is as follows:
task IO_SYNC is
   entry REQUEST_IO_DEVICE;
   entry RELEASE_IO_DEVICE;
end IO_SYNC;
task body IO_SYNC is
   BUSY : BOOLEAN := false;
begin
   loop
      select
         when not BUSY =>
            accept REQUEST_IO_DEVICE do
               BUSY := true;
            end REQUEST_IO_DEVICE;
      or
         accept RELEASE_IO_DEVICE do
            BUSY := false;
         end RELEASE_IO_DEVICE;
      end select;
   end loop;
end IO_SYNC;
VI. CONCLUSION
Current approaches for software reusability are primarily based on code sharing and subroutine libraries. Ada's generic units provide additional reusability techniques. We believe that if we can find ways to express reusable software components at a higher level than the programming code level, software reusability will significantly improve software productivity. The message switching system design is our exploratory work on software reusability. We feel that it is necessary to develop a software design representation. Such a representation must not bind the implementation too early and must capture the logic of system functions. Programming environment support is also important for applying software reusability more effectively.
REFERENCES
[1] P. G. Bassett, "Frame-Based Software Engineering," IEEE Software, July 1987.
[2] G. Booch, Software Components with Ada, Benjamin/Cummings Publishing Company, Inc., 1987.
[3] G. E. Kaiser and D. Garlan, "Melding Software Systems from Reusable Building Blocks," IEEE Software, July 1987.
[4] W. Tracz, "Reusability Comes of Age," IEEE Software, July 1987.
[5] J. W. Rozenblit and B. P. Zeigler, "Concepts for Knowledge-Based System Design Environments," Proc. of the 1985 Winter Simulation Conference, San Francisco, Dec. 1985.
[6] B. Liskov and J. Guttag, Abstraction and Specification in Program Development, The MIT Press, McGraw-Hill Book Company, 1986.
[7] G. R. Andrews, "The Design of a Message Switching System: An Application and Evaluation of Modula," IEEE Trans.
Software Eng., Vol. SE-5, No. 2, March 1979.
[8] G. Fischer, "Cognitive View of Reuse and Redesign," IEEE Software, July 1987.
[9] W. P. Yin, M. M. Tanik, D. Y. Y. Yun, T. J. Lee, and A. G. Dale, "Software Reusability: A Survey and a Reusability Experiment," Proc. of FJCC, Dallas, Oct. 1987.
[10] W. P. Yin, M. M. Tanik, and D. Y. Y. Yun, "Software Design Representation: Object Descriptive Attributed Notation (ODAN)," (available from the authors).
[11] R. T. Yeh and T. A. Welch, "Software Evolution: Forging a Paradigm," Proc. of FJCC, Dallas, Oct. 1987.
Technical Report 2018-08-ECE-137, Department of Electrical and Computer Engineering, University of Alabama at Birmingham, August 2018; a reissue of Technical Report 2002-09-ECE-006, "Engineering of Enterprises: A Transdisciplinary Activity," Murat M. Tanik, Ozgur Aktunc, and John Tanik, Department of Electrical and Computer Engineering, University of Alabama at Birmingham, September 2002.
ENGINEERING OF ENTERPRISES: A TRANSDISCIPLINARY ACTIVITY
OVERVIEW
Contributed by: Murat M. Tanik, Ozgur Aktunc, and John U. Tanik
This module is composed of two parts: Part I surveys and defines Enterprise Engineering in the context of transdisciplinarity. Part II introduces the Internet Enterprise and addresses engineering implementation considerations.
PART I: ENTERPRISE ENGINEERING ESSENTIALS
1 INTRODUCTION
When Henry Ford rolled out his first automobile assembly line in 1913, he created the archetype of the single-discipline enterprise. Ford's adventure was a self-contained and efficient exercise in mechanical engineering. With no competition, no regulatory constraints, and no pressing need for cross-disciplinary partnerships, from design development to process development, all ideas primarily originated from Ford's own engineers. The world is a different place today. Automobiles are complicated hybrids of mechanical, electrical, electronic, chemical, and software components. Modern manufacturers must now pay close attention to new technological developments in hardware (mechanisms associated with the physical world), software (mechanisms associated with the computational world), netware (mechanisms associated with communications), and peopleware (mechanisms associated with the human element). The changes experienced in the automotive industry exemplify the increasingly complex nature of today's modern enterprise. In other words, the ubiquitous existence of the "computing element" forces us to take into account disciplinary notions ranging from psychology to ecology. In one word, the world is becoming transdisciplinary. In this world of transdisciplinary needs, we need to approach the design of enterprises as engineers, moving away from the traditional ad hoc approach of the past. This module explains the changes to be made to current enterprise organization in order to be successful in the networked economy. A brief definition of Enterprise Engineering is given as an introduction, followed by a summary of Enterprise Engineering subtopics, namely modeling, analysis, design, and implementation. In the last section of Part I, the definition of an intelligent enterprise is given, with an emphasis on knowledge management and integration using Extensible Markup Language (XML) technology [1].
2 DEFINITION
The Society for Enterprise Engineering (SEE) defined Enterprise Engineering as "the body of knowledge, principles, and practices having to do with the analysis, design, implementation and operation of an enterprise" [2]. Enterprise Engineering methods include modeling, cost analysis, simulation, workflow analysis, and bottleneck analysis. 
In a continually changing and unpredictable competitive environment, the Enterprise Engineer addresses a fundamental challenge: "How to design and improve all elements associated with the total enterprise through the use of engineering and analysis methods and tools to more effectively achieve its goals and objectives" [3]. Enterprise Engineering has been considered a discipline since its establishment in the last decade of the 20th century. The discipline has a worldview that is substantial enough to be divided into sub-areas, with a foundation resting on several reference disciplines. In the Enterprise Engineering worldview, the enterprise is viewed as a complex system of processes that can be engineered to accomplish specific organizational objectives. Enterprise Engineering has used several reference disciplines to develop its methods, technologies, and theories. These reference disciplines can be listed as follows: Industrial Engineering, Systems Engineering, Information Systems, Information Technology, Business Process Reengineering, Organizational Design, and Human Systems [2].
2.1 Understanding Enterprise Engineering
Like most engineering professionals, Enterprise Engineers work on four main areas: modeling, analysis, design, and implementation. One important issue facing Enterprise Engineering is the development of tools and techniques to support the work of analyzing, designing, and implementing organizational systems. These tools must assist enterprise engineers in the initial transformation of functional, often disjoint, operations into a set of integrated business processes replete with supporting information and control systems [4]. To develop new models of enterprises, the enterprise should be analyzed using process analysis, simulation, activity-based analysis, and other tools. Also, an abstract representation of the enterprise and its processes should be modeled in a graphical, textual, or mathematical representation. The design issues in Enterprise Engineering consist of developing vision and strategy, integration and improvement of the enterprise, and developing technology solutions. Lastly, implementation deals with the transformation of the enterprise and the integration of corporate culture, strategic goals, enterprise processes, and technology. We will take a look at these areas in the following sections:
• Enterprise Engineering Modeling (EEM),
• Analyzing Enterprises,
• Design of Enterprises, and
• Implementation.
2.2 Enterprise Engineering Modeling
Enterprise Engineering Modeling (EEM) basically deals with the abstraction of the engineering aspects of enterprises and their connection to other business systems. The model encompasses engineering organizations' products, processes, projects, and, ultimately, the "engineered assets" to be operated and managed. EEM coordinates the design and deployment of products and assets at the enterprise level. It integrates engineering information across many disciplines, allows engineering and business data to be shared through the combination of enterprise IT (information technology) and engineering IT, and simulates the behavior of intelligent, component-based models [5]. The selection and design of enterprise processes for effective cooperation is a prime objective of Enterprise Engineering. Enterprise models can assist the goal of Enterprise Engineering by helping to represent and analyze the structure of activities and their interactions. 
Models eliminate the irrelevant details and enable focusing on one or more aspects at a time. Effective models also facilitate the discussions among different stakeholders in the enterprise, helping them to reach agreement on the key fundamentals and to work toward common goals. A model can also be a basis for other models and for different information systems that support the enterprise and the business. The enterprise model will differ according to the perspective of the person creating the model, including the visions of the enterprise, its efficiency, and other various elements. The importance of an enterprise model is that it will provide a simplified view of the business structure that will act as a basis for communication, improvements, or innovations and define the Information Systems requirements that are necessary to support the business. The term business in this context is used as a broad term. The businesses or activities that can be represented with Enterprise Engineering models do not have to be profit making. For example, it can be a research environment with the properties of an enterprise. Any type of ongoing operation that has or uses resources and has one or more goals, with positive or negative cash flow, can be referred to as a business [6]. The ideal business model would be a single diagram representing all aspects of a business. However, this is impossible for most businesses. The business processes are so complex that one diagram cannot capture all the information. Instead, a business model is composed of different views, diagrams, objects, and processes: a business model is illustrated with a number of different views, and each captures information about one or more specific aspects of the business. Each view consists of a number of diagrams, each of which shows a specific part of the business structure. A diagram can show a structure (e.g., the organization of the business) or some dynamic collaboration (a number of objects and their interaction to demonstrate a process). Concepts are related in the diagrams through the use of different objects and processes. The objects may be physical, such as people, machines, and products, or more abstract, such as instructions and services. Processes are the functions in the business that consume, refine, or use objects to affect or produce other objects. There are currently hundreds of modeling tools for enterprises, and many modeling techniques such as the Integrated Definition Language (IDEF), Petri nets, the Unified Modeling Language (UML), and meta-modeling. Modeling involves a modeling language and the associated modeling tools. Different enterprises may need different modeling tools according to the nature of the enterprise. Before selecting the modeling tool, a detailed analysis should be made to select the most appropriate modeling language and tool. For the software industry, UML has become the standard modeling language [7].
2.3 Enterprise Analysis
The increasing complexity of enterprises has stimulated the development of sophisticated methods and tools for modeling and analysis of today's modern enterprises. Recent advances in information technology along with significant progress in analytical and computational techniques have facilitated the use of such methods in industry. 
Applying Enterprise Analysis methods results in documentation that supports a number of programs, which are as follows: strategic information resource planning, information architecture, technology and services acquisition, systems design and development, and functional process redesign. Most organizations have a wealth of data that can be used to answer the basic questions supporting strategic planning: who, what, where, and how much. By modeling with these data using an Enterprise Analysis toolset, the enterprise models can be built incrementally and in less time. The most important use of Enterprise Analysis is that it presents the organization's own business, demographic, and workload data in a compelling manner to tell the story. Whether they are used to support programs for acquisitions, information architectures, or systems development, Enterprise Analysis studies are rooted in the business of the organization and thus are easily understood and supported by executive management.
2.4 Enterprise Design
The design of an enterprise deals with many issues, including the development of a vision and a strategy, the establishment of a corporate culture and identity, integration and improvement of the enterprise, and the development of technology solutions. Optimization of several perspectives within an enterprise is the objective of Enterprise Design. Examples of enterprise perspectives include quality, cost, efficiency, and agility, and management perspectives such as motivation, culture, and incentives. For example, consider the efficiency perspective. The modeling task will provide ontologies (i.e., object libraries) that can be used to construct a model of the activities of a process, such as its resource usages, constraints, and time. Based on these models, the efficiency perspective will provide tools to design, analyze, and evaluate organizational activities, processes, and structures. These tools will also be capable of representing and modeling the current status of an enterprise and of analyzing and assessing potential changes. One issue is whether there exists sufficient knowledge of the process of designing and optimizing business activities and processes to incorporate in knowledge-based tools. The main goal of an Enterprise Design application is to develop a software tool that enables a manager to explore alternative Enterprise Designs that encompass both the structure and behavior of the enterprise over extended periods of time. Issues such as motivation, culture, and incentives are explored, along with other relevant parameters such as organizational structure, functions, activity, cost, quality, and information [8].
3 STRATEGY FORMULATION FOR E-BUSINESS
Electronic commerce is becoming a growing part of industry and commerce. The speed of technological change is enabling corporations large and small to transact business in a variety of ways. Today, it is routine practice to transact some aspect of business electronically, from e-mail to exchanging data via Electronic Data Interchange (EDI), the World Wide Web (WWW), and various shades of these technologies. Numerous benefits accrue to corporations when they use automated capabilities. In order to maximize such benefits, electronic enterprises must base their efforts on well-developed strategies. In this manner, the probability of success is increased manyfold. Embarking on electronic commerce or business should never be thought of as the sole quest of the information systems department. 
The following strategies are a synthesis of best practices introduced to assist information systems departments in preparing the organization for the information age [9, 10].
3.1 Strategy 1 - Improve Corporate System Development Skills
In addition to developing technical skills, corporations must pay close attention to effective communication, eliminating cross-functional language barriers, and improving inadequate facilities in geographically dispersed systems.
3.2 Strategy 2 - Build a Proactive Infrastructure
There must be a constant effort to keep up with technological changes. Frequently, these changes trickle down from the top as a result of various business strategies. For example, top managers may discover that they need video-conferencing capability, and the information technology people are under pressure to deliver it. This kind of approach will put the chief information officer (CIO) in a reactive posture, trying to put out fires as they appear. In putting out such fires, local resources may be used to satisfy higher-level needs without any obvious benefits to local managers, who may resent this fact and create barriers against success. CIOs should try to get the cooperation of all users in anticipating system needs. If users are not satisfied with an imposed system, they will try to build bootleg systems for their own needs. Thus, project needs should be anticipated as far as possible and should be planned to meet both the short-term goals of management and yield benefits for the development of the infrastructure of the corporation in the long term.
3.3 Strategy 3 - Consolidate Data Centers
A corporation embarking on developing an e-business system must realize that there already exist semiautonomous data centers distributed throughout various geographical locations. There may have been a time when such data centers were desirable. Today, e-business demands integrated information systems, and the data centers must be consolidated. An integrated information system is far more efficient in controlling corporate operations. Obviously, operating fewer facilities, maintaining minimum levels of inventory, and giving better service to customers will bring handsome returns to corporations. During the consolidation process, a number of problems of compatibility and standardization will occur, but tackling such problems is better than having semiautonomous data centers.
3.4 Strategy 4 - Standardize Data Structures
As corporations grow, different data processing systems and data centers proliferate, especially in transnational corporations. Consolidating data centers and systems as suggested in Strategy 3 may not be sufficient. Corporations need to determine the data needed at global levels and standardize them. Standardization may not be possible for certain applications in an international setting, since the regulatory accounting of different countries may be a roadblock. However, this should not be taken as a signal for non-standardization. Standardization will make useful information available throughout the corporation. For example, these days many corporations are adopting XML as part of a data structure consolidation strategy. XML issues are addressed in the next section in more detail.
3.5 Strategy 5 - Accommodate Linkages with Current Strategic Allies and Provide Expansion for Future Strategic Alliances
Recent developments in globalization and Internet technology are spurring corporations to form strategic alliances. 
Automobile manufacturers are, for example, forming alliances to influence the prices and qualities of their raw materials and parts purchases. Similar alliances are growing at an accelerated pace in other industries. These alliances are designed to create not only purchasing power but also a variety of other mutual interests, from technological cooperation to joint production.

3.6 Strategy 6 - Globalize Human Resource Accounting

As companies centralize their information systems through computerization, a global inventory of human skills should be developed. Frequent human resource problems arise when Information Systems (IS) personnel focus locally rather than globally. Recruiting of specialists, for example, must be done not with a local perspective but with a global one. This will help eliminate possible redundancies with potential savings.

4 INTELLIGENT ENTERPRISES

Enterprises competing in global markets assume complex organizational forms such as the supply chain, virtual enterprise, Web-based enterprise, production network, e-business, and e-manufacturing. Common traits of these organizations are willingness to cooperate, globally distributed product development and manufacturing, and high coordination and communication. These traits have led the trend of transformation from capital-intensive to intelligence-intensive enterprises [11]. Visions of the organization's future e-business roles as an intelligent enterprise can be formulated as follows [12]:

• Transparent - Intelligent enterprises will contain substantial amounts of information on capabilities, capacities, inventories, and plans that can be exchanged between tools, servers, and optimizing agents that will augment the capabilities of their human masters.

• Timely - Intelligent enterprises will be designed to meet a customer need exactly when the customer wants it.

• Tuned - Through collaboration and sharing of knowledge, the intelligent enterprise will serve customer needs with a minimum of wasted effort or assets.

4.1 Knowledge Management and Integration with XML

One important challenge for enterprises today is storing and reusing knowledge. For many organizations, up-to-date knowledge of what is relevant and important to customers distinguishes their offerings. The challenge is to assimilate this rapidly changing knowledge about products and services quickly and distribute it rapidly to leverage it for improved performance and quality service. This means finding all knowledge that is embedded in and accessed through technologies and processes and stored in documents and external repositories, and being able to share it quickly with customers. Capital-based organizations need to transform into high-performing, process-based, knowledge-based enterprises characterized by agility, flexibility, adaptability, and willingness to learn. To overcome the difficulties during the transformation, powerful tools are needed to manage the knowledge within the enterprise and to develop the communication between the company and the customers. The key tool to be used within this process is XML, which will set the standards of communication and will help to manage the knowledge [13]. To understand how XML will help us manage the knowledge, a definition of a knowledge-based business is needed.

4.2 Knowledge-Based Businesses

The following six characteristics of knowledge-based business were defined by Davis and Botkin [14]. These characteristics are actually guidelines for businesses to put their information to productive use.
4.2.1 The More You Use Knowledge-Based Offerings, the Smarter They Get

This characteristic fits the customer-defined offerings that companies give. For example, a credit card company can build a system that understands the buying patterns of a customer and can protect the customer from fraud. A news agency can change the interface of its system to give the type of news that a newspaper or journal requests. Knowledge-based systems not only get smarter but also enable their users to learn. For example, General Motors' computer-aided maintenance system not only helps novice mechanics to repair automobiles but also helps expert mechanics to refine their knowledge. As technology advances, the amount of information that a mechanic needs to know expands rapidly. With this system a mechanic can leverage the knowledge of all mechanics using the system. As a result, the system continually improves, as does the service quality.

4.2.2 Knowledge-Based Products and Services Adjust to Changing Circumstances

When knowledge is built into a product, the product may adjust itself in a smart manner to changing conditions. For example, a glass window that may reflect or transmit sunlight according to temperature is such a product. These products will not only be marketed well but will also have important economic advantages. The smart products will guide their users as well.

4.2.3 Knowledge-Based Businesses Can Customize Their Offerings

Knowledge-based products and services can determine customers' changing patterns, idiosyncrasies, and specific needs. For example, a smart telephony system can understand which language will be used on specific numbers; also, by using a voice recognition system, the need for telephone credit cards can be diminished.

4.2.4 Knowledge-Based Products and Services Have Relatively Short Life Cycles

Many knowledge-based products have short life cycles because they depend on the existing market conditions; their viability is short-lived. For example, the foreign exchange advisory services offered by a commercial bank are highly specialized and customized for corporate clients. Such services must be constantly upgraded to keep the profits and the proprietary edge.

4.2.5 Knowledge-Based Businesses Enable Customers to Act in Real Time

Information becomes more valuable when it can be acted on constantly. A system that delivers tour book information while you are driving the car will have great value. Interactivity added to the system will make the product's value even higher. Knowledge-based products can also act in real time. For example, a copier machine that calls the maintenance provider when an error occurs will have great value in this sense.

4.3 XML's Role in Business Applications

The smallest cluster of knowledge is data. These are the basic building blocks of information, and they come in four particular forms: numbers, words, sounds, and images. Manipulation of the data determines its value. The arrangement of data into meaningful patterns is information. For example, numbers can be arranged in tables, which is information; a series of sounds, which is music, can also be considered information. Today, an important challenge for Internet-based businesses is using the information efficiently and in a productive way that will upgrade the information to knowledge. Thus, we say that knowledge is the application and productive use of information. The shift from the information age to the knowledge age will be via technology.
The new enabling technologies of software development such as XML, J2EE, and Visual Studio are forcing e-businesses to build knowledge-based businesses. Here we will explain the most important enabling technology, XML, within the development of e-businesses. XML can be used effectively for exchanging business documents and information over the Internet. XML is a standard language that simultaneously presents content for display on the Internet and describes the content so that other software can understand and use the data. Therefore, XML can be a medium through which any business application can share documents, transactions, and workload with any other business application [15]. In other words, XML can become the common language of e-business and knowledge management. One important property of XML is that it provides information about the meaning of the data. Thus, an XML-formatted document could trigger a software application at a receiving company to launch an activity such as shipment loading. But to provide that level of data integration, trading partners would have to agree on definitions for the various types of documents as well as standard ways of doing business. In addition to facilitating e-commerce, having common definitions and uses for data also enables an enterprise to better leverage the knowledge currently stored in information silos. XML supports the searching and browsing of such information silos [16]. It structures documents for granularity, such as allowing access to sections within documents and fine-tuning retrieval. Also, it annotates documents, which enables users not to restrict themselves to what is in the document. XML organizes documents by classifying them into groups and supports browsing them. Additionally, it has Hyper Text Markup Language (HTML)-like linking options that help information users find the documents they are seeking. Fig. 1 shows the tools that are common in the organization of information through XML.

[Fig. 1. Organization of information through XML: XML at the center, connected to document annotation, RDF, schemas, and XPointer.]

XML is the next evolution in knowledge management, and organizations are beginning to understand the potential of this technology to develop enterprise-wide information architectures. As a technology, XML does not by itself bring any value to an organization. The value of XML will depend on how it is used within a company. The agreement on data definitions within an enterprise has always been a hard task. At minimum, XML should be implemented strategically within the organization. Ideally, the implementation should include strategic partners and other organizations that have a need to share data and information. XML is a major advance in the standardization of information sharing across traditional information boundaries, both internal and external. Information security and privacy issues are major concerns revolving around customer and corporate data flowing across wires. Successful knowledge management in a company often depends on having access to information outside the enterprise walls. XML can also be of value here by helping to improve the functioning of supply chains and the extranet. In conclusion, it becomes obvious that managing knowledge requires better tools. We need to create systems that manage documents as people would, and we know that better tools need better documents; a small illustration of this idea is sketched below.
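To make the idea of self-describing business documents concrete, the following minimal Python sketch parses a hypothetical XML shipping notice and triggers an activity based on what the tags say the data means. The tag names, attributes, and the schedule-shipment-loading action are illustrative assumptions made for this example, not an agreed trading-partner standard.

    # Illustrative sketch: the document layout is hypothetical, not a standard.
    import xml.etree.ElementTree as ET

    notice = """
    <ShippingNotice partner="ACME Corp" order="PO-1847">
      <Item sku="GLASS-01" quantity="120"/>
      <Item sku="FRAME-07" quantity="60"/>
      <RequestedAction>schedule-shipment-loading</RequestedAction>
    </ShippingNotice>
    """

    root = ET.fromstring(notice)

    # Because the tags carry the meaning of the data, a receiving application
    # can act on the document instead of merely displaying it.
    if root.findtext("RequestedAction") == "schedule-shipment-loading":
        for item in root.findall("Item"):
            print("Load", item.get("quantity"), "units of", item.get("sku"),
                  "for order", root.get("order"))

An HTML page carrying the same figures could only be rendered for a human reader; the XML version lets the receiving software locate the items and the requested action by name.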
Thus, by building on a solid knowledge management strategy using XML, we believe an organization can gain competitive differentiation in the near future.

PART II
INTERNET ENTERPRISE IMPLEMENTATION CONSIDERATIONS

1 INTRODUCTION

In the first section of this module we introduced the essential elements of enterprise engineering in abstract and general terms. Building on the notions explored in the first section, we will address here specifics concerning the design and implementation of Internet enterprises. In this section, a review is provided of the key concepts and concerns an Internet enterprise engineering (IEE) project would encounter and need to address. Business engineering fundamentals, technologies, and strategies for the Internet such as the Unified Modeling Language, the Cosmos Model, the Enterprise Maturity Model, Web Business Models, Methods of Electronic Transaction, Online Contracts, Security Protocols, selected integrated development tools, the Next Generation Internet, and Internet2 are covered. Over 20 occupational roles within IEE are identified and described separately. A technology implementation platform and strategy are introduced, along with marketing and customer retention technologies and strategies on the Internet. A detailed overview is provided of the various Internet business tools, technologies, and terminology for the systematic construction of new ventures on the Internet [7]. For convenience, all these issues are summarized in table form at the end of this section.

2 BUSINESS ENGINEERING FUNDAMENTALS

2.1 UML: Officially introduced in November 1997, UML has quickly become the standard modeling language for software development [6]. It has a business model approach that provides a plan for engineering an orchestrated set of business functions. It provides a framework by which business is to be performed, allowing for changes and various improvements in the process. The model is designed to be able to anticipate changes in business function in order to maintain an edge on the competition. One of the advantages of modeling in UML is that it can visually depict functions, relationships, and paradigms. UML is a recommended tool for business analysts to break down a large-scale business operation into its constituent parts. Capturing a business model in one diagram is not realistic, so it should be noted that a business model is actually composed of a number of different views. Each view is designed to capture a separate purpose or function without losing any important overall understanding of the business operation. A view is composed of a set of diagrams, each of which shows a specific aspect of the business structure. A diagram can show a structure or a kind of dynamic collaboration. The diagrams contain objects, processes, rules, goals, and visions as defined in the business analysis. Objects contain information about mechanisms in the business, and processes are functions that use objects to affect or produce other objects. Object-oriented techniques can be used to describe a business; a minimal sketch of this idea is given at the end of this subsection. There are similar concepts in business functions that run parallel to object-oriented techniques of design conceptualization. Another advantage of UML is derived from the ability of business modelers and software developers to use the same conceptualization tools and techniques to achieve a common business end. Additionally, the power of UML is derived from its ability to transcend the standard organizational chart [17].
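Since the UML discussion above describes a business in terms of objects and the processes that transform them, a minimal sketch may help fix the idea. The example below is a hypothetical illustration in Python, not part of the thesis's own model: Order and Invoice stand for business objects, and the billing process is a function that uses one object to produce another. The names and the tax rate are assumptions made only for the example.

    # Hypothetical illustration of object-oriented business description:
    # business objects carry information; processes use objects to produce
    # other objects.
    from dataclasses import dataclass

    @dataclass
    class Order:          # business object: information about a business mechanism
        customer: str
        amount: float

    @dataclass
    class Invoice:        # business object produced by a process
        customer: str
        total: float

    def billing_process(order: Order, tax_rate: float = 0.08) -> Invoice:
        # Business process: a function that uses an Order to produce an Invoice.
        return Invoice(customer=order.customer, total=order.amount * (1 + tax_rate))

    print(billing_process(Order(customer="ACME Corp", amount=100.0)))

A UML class diagram of the same fragment would show Order and Invoice as classes and the billing process as the behavior relating them, which is exactly the parallel the text draws between business functions and object-oriented design.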
2.2 Cosmos Model: A generic approach for a business to manage change is through a holistic framework, as described by Yeh in his three-dimensional model called Cosmos (Fig. 1). One of the important aspects of this model is that the three dimensions exist interdependently, because each dimension behaves as an enabler and an inhibitor to the other dimensions. The "activity structure" dimension covers how work is structured in an organization, factoring in the steps and tasks that are taken to achieve an appropriate level of workflow. The "infrastructure" dimension covers how resources are allocated and factors in the assets of an enterprise. The "coordination" dimension covers how information is created, shared, and distributed; the cultural aspects of the enterprise are factored in here. The Cosmos model provides a conceptual space bounded by concrete factors for successfully navigating from one point of an organizational situation to another.

[Fig. 1. Cosmos model, a holistic framework for managing change: the target is surrounded by three dimensions, infrastructure (long-term vs. short-term objectives), activity structure (stability vs. flexibility), and coordination structure (modularity vs. interconnectedness).] [13]

The Cosmos model is an abstract tool for managers to guide their company along the best possible path. The trade-offs between the three dimensions at each point in the journey along the path are what the manager must determine to be most effective and best for the organization as a whole. In the case of the activity structure, there is an inherent trade-off between stability and flexibility. In the case of the coordination structure, there is a trade-off between strictly aligning human resources with company objectives and providing each operating unit with sufficient autonomy. More autonomous organizations are generally organized with a greater degree of modularity, allowing for the ability to make rapid decisions by adapting to changing market conditions. In the case of infrastructure, there is a trade-off between seeking short-term gain and long-term gain. Overall, the Cosmos model provides an executive or project manager with another technique to visualize the overall situation and path of an organization by taking into account the three dimensions that correspond to the three main forces that affect its future [18].

2.3 Enterprise Maturity Model: In order to characterize a business in terms of its level of maturity, focus, activity, coordination, and infrastructure, please refer to Table 1, provided by Yeh [18]. The table provides an overview of the various levels of enterprise maturity.

2.4 Web Business Models: Entrepreneurs who wish to start e-businesses need to be aware of e-business models and how to implement them effectively. The combination of a company's policy, operations, technology, and ideology defines its business model. Table 2 describes in more detail the types of business models in existence today [6, 19].

2.5 Methods of Electronic Transaction: There are various methods and mechanisms by which merchants can collect income through electronic transactions. Table 3 describes the types of transactions covered, such as credit card, e-wallets, debit cards, digital currency, peer-to-peer, smart cards, micro-payments, and e-billing [19].

2.6 Online Contracts: An online contract can be accomplished through the use of a digital signature. Digital signatures are the electronic equivalent of written signatures.
The Electronic Signatures in Global and National Commerce Act of 2000 (E-sign Bill) was recently passed into law. Digital signatures were developed for use in public-key cryptography to solve the problems of authentication and integrity. The purpose of a digital signature is electronic authorization; a short signing-and-verification sketch is given below, after Section 2.8. The U.S. government's digital authentication standard is called the Digital Signature Algorithm. The U.S. government also recently passed digital-signature legislation that makes digital signatures as legally binding as handwritten signatures. This legislation is designed to promote more activity in e-business by legitimizing online contractual agreements.

2.7 Security Protocols: Netscape Communications developed the SSL protocol, a non-proprietary protocol commonly used to secure communication on the Internet and the Web. SSL is designed to use public-key technology and digital certificates to authenticate the server in a transaction and to protect private information as it passes from one party to another over the Internet. SSL can effectively protect information as it passes through the Internet, but it does not necessarily protect private information once it is stored on the merchant's server. An example of private information would be credit card numbers. When a merchant receives credit-card information with an order, the information is often decrypted and stored on the merchant's server until the order is placed. An insecure server with data that are not encrypted is vulnerable to unauthorized access by a third party to that information. The SET protocol was developed by Visa International and MasterCard and was designed specifically to protect e-commerce payment transactions [20]. SET uses digital certificates to authenticate each party in an e-commerce transaction, including the customer, the merchant, and the merchant's bank. In order for SET to work, merchants must have a digital certificate and special SET software to process transactions. Additionally, customers must have a complementary digital certificate and digital wallet software. A digital wallet is similar to a real wallet to the extent that it stores credit (or debit) card information for multiple cards, as well as a digital certificate verifying the cardholder's identity. Digital wallets add convenience to online shopping because customers no longer need to re-enter their credit card information at each shopping site.

2.8 Integrated Tool Example: Drumbeat 2000: Macromedia Drumbeat 2000 is a tool capable of accepting and delivering complex information and functionality through a Web interface [21]. The tool aids a visually skilled Web designer in competitively building a website without necessarily having to do any coding, which is useful in the initial prototyping phase. It is a tool that can interact with the back-end database, with the ability to build a user-friendly client side using Active Server Page (ASP) Web technology. ASP technology enables a real-time connection to the database, so any changes made to the database are immediately reflected on the client side. Macromedia Drumbeat 2000 claims to provide everything needed to build dynamic Web applications and online stores visually at a fraction of the typical development time and expense. The designers of Drumbeat 2000 also claim that the development environment can keep up with continuously evolving Web technology, thus making it a future-oriented technology.
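Before moving on to the NGI initiative, the public-key mechanics behind Sections 2.6 and 2.7 can be made concrete with a minimal sketch. The example below uses the third-party Python package cryptography; the 2048-bit key size, the hash choice, and the message are illustrative assumptions, and a real deployment would distribute the public key inside a certificate issued by a trusted authority, as described above.

    # Minimal digital-signature sketch (illustrative only).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import dsa
    from cryptography.exceptions import InvalidSignature

    message = b"Purchase order PO-1847: 120 units, net 30 days"

    private_key = dsa.generate_private_key(key_size=2048)   # kept secret by the signer
    signature = private_key.sign(message, hashes.SHA256())  # the electronic authorization

    public_key = private_key.public_key()                   # shared with the verifier
    try:
        public_key.verify(signature, message, hashes.SHA256())
        print("Signature valid: the message is authentic and unaltered.")
    except InvalidSignature:
        print("Signature invalid: the message was altered or not signed by this key.")

Verification fails if even one byte of the message changes, which is what gives a digital signature its authentication and integrity properties.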
2.9 NGI: This initiative is a multi-agency Federal research and development program that began on October 1, 1997 with the participation of the following agencies: DARPA, DOE, NASA, NIH, NIST, and NSF (Table 4). These agencies are charged with the responsibility of developing advanced networking technologies and revolutionary applications that require advanced networking.

2.10 Internet2: Internet2 is a consortium of over 180 universities leading the way, in partnership with industry and government, to develop advanced network applications and technologies in order to accelerate the formation of a more advanced Internet. The primary goals of Internet2 are to create a leading-edge network capability for the national research community, enable revolutionary Internet applications, and ensure the rapid transfer of new network services and applications to the broader Internet community. Through Internet2 working groups and initiatives, Internet2 members are collaborating on advanced applications, middleware, new networking capabilities, advanced network infrastructure, partnerships, and alliances [22].

3 OCCUPATIONAL ROLES IN IEE

In order to build, deploy, and maintain an Internet enterprise, certain roles and positions must be filled for the organization to be effective. Table 5 lists and describes many of the relevant roles required within an enterprise initiative, such as the Chief Privacy Officer (CPO), in addition to the more traditional organizational roles such as Chief Executive Officer (CEO), Chief Technology Officer (CTO), and Chief Financial Officer (CFO) [20, 23].

4 TECHNOLOGY IMPLEMENTATION AND STRATEGY

4.1 Microsoft Dotsmart Initiative: There are various approaches to implementing strategic planning and technology implementations. For illustrative purposes, Microsoft is considered in this thesis to be one such approach for enterprise planning. Once the overall conceptualization and business pattern is created and all the necessary occupational roles within the organization are identified, it is necessary to identify exactly which technology to utilize in order to build and implement the business venture. As the requirements of a business are analyzed, a useful guide is the Microsoft Dotsmart Initiative. This mode of business analysis will help determine which business engineering concepts to use and what kinds of personnel are needed to run the operation. Additionally, the Microsoft Dotsmart Initiative provides key points to address when building an Internet operation from scratch.

4.2 Microsoft Technology Centers (MTCs): MTCs are areas designed for groups of entrepreneurs, information technology personnel, and businessmen for the rapid development of robust e-commerce solutions. At these facilities, developers, entrepreneurs, and high-technology business persons use Microsoft technology and the relevant knowledge to build enterprise solutions. The centers provide the essentials a team would need to develop an enterprise from the initial conception of the idea to launch. Microsoft provides essential equipment, support, and expertise, with an application of a "best-practices" approach. These best practices have been tested before at MTCs, expediting the development progress and time to market. Laboratory sessions are designed to bring together an assortment of entrepreneurial individuals as they facilitate the development process using the latest Microsoft products.
The MTCs offer customers wishing to capitalize on emerging Microsoft .NET technologies the service, infrastructure, and development environment to accelerate their projects and reduce their risk. The working laboratory is intended to help customers develop and test next-generation e-commerce technologies and demonstrate further the value of Windows platforms and other industry-standard systems for powering e-business.

4.3 Impact of XML: XML represents a more general way of defining text-based documents compared to Hypertext Markup Language (HTML). Both HTML and XML descend from Standard Generalized Markup Language (SGML). The greatest difference between HTML and XML is the flexibility of the allowable tags found in XML. An XML-based document can define its own tags, in addition to including a set of tags defined by a third party. This ability can become very useful for applications that need to deal with very complex data structures. An example of an XML-based language is the Wireless Markup Language (WML). WML essentially allows text portions of Web pages to be displayed on wireless devices, such as cellular phones and personal digital assistants (PDAs). WML works with the Wireless Application Protocol (WAP) to deliver this content. WML is similar to HTML but does not require input devices such as a keyboard or mouse for navigation. In the case of a PDA that requests a Web page on the Internet, a WAP gateway receives the request, translates it, and sends it to the appropriate Internet server. In response, the server replies by sending the requested WML document. The WAP gateway parses this document's WML and sends the proper text to the PDA. This introduces the element of device portability.

4.4 Microsoft .NET Initiative: Microsoft announced a new generation of software called Microsoft .NET. This software is intended to enable every developer, business, and consumer to benefit from the combination of a variety of new Internet devices and programmable Web services that characterize NGI. Microsoft is trying to create an advanced new generation of software that will drive NGI. This initiative is called Microsoft .NET, and its key purpose is to make information available at any time, in any place, and on any device.

4.5 Microsoft BizTalk Orchestration: For IEE purposes, BizTalk Server 2000 is considered next-generation software that plays an important role in forming the infrastructure and tools for building successful e-commerce communities. The core of BizTalk Server offers business document routing, transformation, and tracking infrastructure that is rules-based. BizTalk Server offers many services that allow for quickly building dynamic business processes for smooth integration of applications and business partners while utilizing public standards to ensure interoperability. Essentially, BizTalk Server provides a method to build dynamic business processes quickly.

4.6 Back-end Configurations Using Microsoft Technology: In the design of the back-end of a website, special consideration must be given to security. This is done by providing a kind of safety buffer from the greater world of the Internet using a demilitarized-zone (DMZ) strategy. The components of a DMZ such as the firewall, the front-end network, the back-end network, and the secure network function as a security buffer from the outside world.
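As a rough illustration of the DMZ idea, the sketch below encodes a default-deny policy between hypothetical network zones. The zone names, ports, and flows are assumptions for the example, not a specific Microsoft reference configuration: Internet traffic may reach only the front-end network, and only the back-end network may reach the secure network where the database lives.

    # Hypothetical DMZ policy sketch: default-deny between network zones.
    ALLOWED_FLOWS = {
        ("internet", "front-end"): {443},    # HTTPS from customers to the web tier
        ("front-end", "back-end"): {8080},   # application traffic inside the DMZ
        ("back-end", "secure"): {1433},      # database access only from the back-end
    }

    def firewall_permits(src_zone: str, dst_zone: str, port: int) -> bool:
        # A flow is allowed only if it is explicitly listed.
        return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

    print(firewall_permits("internet", "front-end", 443))   # True
    print(firewall_permits("internet", "secure", 1433))     # False: no direct path

Even if the front-end web servers are compromised, a policy of this shape keeps the outside world one zone removed from the systems holding private data such as credit card numbers.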
4.7 Rapid Economic Justification (REJ): The REJ framework makes it possible for IT and business executives to demonstrate how specific investments in IT will eventually benefit the business, ensuring in the process that the IT projects are aligned with the specific business strategies and priorities. IT investments play a critical role in Internet enterprises. Important decision-making at the early stages of any venture requires an effective methodology to identify the best strategic IT investments. Leaders in the upper echelon of organizations such as CEOs, CTOs, and CFOs are being overwhelmed with complex information. REJ may prove to be a reliable method to quickly evaluate the true value and potential of a company by taking into consideration its intangible IT assets. In the past, companies developed metrics for the valuation of IT investments on the basis of cost improvements. Metric methodologies have focused on Total Cost of Ownership (TCO), whereas the strategic role of IT in boosting new opportunities for business has been largely ignored. Understandably, the benefits of IT can be traced to ways of measuring business value the traditional way. Unfortunately, current business practices are not necessarily adequately equipped to handle the complexities of the New Economy. Although the economic justification of IT projects has been researched extensively in the past decade, the problem is that these methods and techniques require too much data-crunching power and time to prepare. These unwieldy research techniques need to be replaced by a new and practical approach to quantify swiftly and accurately the true value of IT investments.

5 MARKETING AND CUSTOMER RETENTION

5.1 Online Marketing: The Internet provides marketers with new tools and conveniences that can considerably increase the success of their marketing efforts. Internet marketing campaign elements such as advertising, promotions, public relations, partnering, and Customer Relationship Management (CRM) systems are all an integral part of the marketing process. Table 6 describes the various techniques at the marketer's disposal when using the Internet as the medium of customer information delivery [19].

5.2 CRM Systems: CRM is a general but systematic methodology using both business and technological techniques to maintain and grow a business's customer base. CRM systems enable a business to keep detailed records on the activity of its customers by using new, sophisticated tracking systems on the Internet. Table 7 shows various areas where CRM technology and CRM business techniques can assist in managing a customer base [19].

5.3 Web Design Technology Example: Dreamweaver Technology: Macromedia Dreamweaver is a Web technology for building websites on the Internet without the need for programming directly in HTML [21]. Also, Web designers are easily able to create Web-based learning content with Dreamweaver 4.0. A Web designer has the ability to create site maps of the website that can be easily maintained and enhanced. This is a very popular technology available on the market that can be used to make professional-quality websites for marketing and promotional purposes.

5.4 Web Enhancement Technology Example: Flash Technology: Macromedia Flash is a multimedia technology for applications on the Web. This technology gives the user, especially one not artistically talented, the ability to develop interactive animations that can look quite impressive.
A flash movie can be embedded into a Web site or run as a standalone program, and Flash is compatible with Dreamweaver. Flash movies can be made with sound and animation, so it is useful as a software tool to produce demonstrations at the user-interface. Flash can be used on CD-ROMs and allows for the construction of cross-platform audio/video animations and still jmages. 33 \ 6 SUMI\-IARY TABLES We would like to reiterate emphasis areas for Electronic Enterprise as listed in the introduction of this module. These are a) hardware (mechanisms associated with physical world), b) software (mechanisms associated with computational world), c) netware (mechanisms associated with communications), and d) peopleware (mechanisms associated with human element) [23, 24]. Following tables provide a useful Jist in all these areas. For convenience, we include all summary tables in following order: Table 1 Enterprise Maturity Levels Table 2 Web Business Models Table 3 Electronic Transactions Table 4 NGI Participating Agencies Table 5 Occupational Roles in lEE Table 6 Marketing Techniques on the Internet Table 7 Customer Relationship Management 34 Table 1 Enterprise Maturity Levels Levels Focus Activity Coordination Infrastructure 5. Whole Human-society Process Self-directed teams Long-term oriented; in engineering dominate orientation, harmony with methodology workplace; toLal personal mastery, nature, people institutionalized; alignment; open, heavy investments routinely do the Flexible and honest in IT, continuous right things: predictable communication improvement change is second process, right the channels institutionalized nature first time, value- throughout adderl activities only 4. Wise Stakeholders and Process monitored Organjzational Organi:z.ation community automatically for structure based on competency oriented in high performance; cross-trained case management; harmony with dominated by teams; vision continuing community; value-added al igned with the education; team-people routinely activities; high needs of the based structure; doing things right. degrl:e of society tenm-oriented HR Changes are concurrency; few policy planned and handoffs mannged 3. Mature Customer oriented; Process defined Vision defined Integrated customer's needs and is measured with extensive capacity, are anticipated; buy-ins, multi- con sol ida ted people are proud to functional project function; work here teams exist; investment in participatory training and work culture with force planning; managers as flattened coaches organization 2. Stable Competition- Process under Internal focus, Short-term focus, oriented reactive statistical control; control oriented, fragmented bench-marking as functional division capacity, little IT, a result of reaction, hierarchical, inflexible process, difficult to get has many information, no handoffs and a formal HR policy substantial number of non-value-added tasks I . Ignorant Disoriented- Fire-fighting Ad- No clear vision, Don' t know where chaotic hoc, unpredictable, resources exist fragmented Rumor mill rampant 35 e-Business Model Storefront Model Auction Model Portal Model Dynamic Pricing Model Comparison Pricing Model Demand-Sensitive Pricing Model Table 2 Web Business Models Description The~ storefront model is what many persons think of when they bear the word ebusiness. The storefront model combines transaction processing, security, online payment and information storage to enable merchants to sell their products on lhe web. 
This is a basic form of e-commcrce where the buyer and seller interact directly. To conduct storefront c-commerce, merchants need to organize an online catalog of products, take orders through their Web sites, accept pnyments in a secure envi ronment, send merchandise to customers, and manage customer data. One of the most commonly used e-commercc enablers is the shopping cart. This order-processing technology allows customers to accumulate items they wish to buy as they continue to shop. www.amazon.com is a good example. Forrester Research reveals that an estimated $3.8 billion will be spent on online person-to-person auctions in the year 2000 alone. This number is expected to rise to $52 billion for Business-to-Business (B2B) auctions. Usually auction sites act as forums through which Internet users can log-on and assume the role of either bidder or seller. As a seller, you are able to post an item you wish to sell, the minimum price you require to sell it, your item, and a deadline to close the auction. As a bidder, you may search the site for availability of the item you are seeking, view lhe current bidding activity and place a bid. They usually do not involve themselves in payment and delivery. www.ebay.com is a good example. Portal sites give visitors the chance to find almost everything they are looking for in one place. They often offer news, sports, and weather, as weU as the ability to search the Web. Search engines are h01i zontal portals, or portals that aggregate information on a broad range of topics. Yahoo! at www.yahoo.com is an example of a horizontal portal. America Online (AOL) www.aol.com is an example of a vertical portal because it is a community-based site. The Web has changed the way business is done and the way products are priced. Companies such as Priceline (www.pricelinc.com) and Imandi (www.imandi.com) have enabled customers to name their prices for travel, homes, automobiles, and consumer goods. The name-your-price model empowers customers by allowing them to choose their price for products and services. The comparison pricing model allows customers to polJ a variety of merchants and find a desired product or service at the lowest price (i.e. www.bottomdollar.com). The Web has enabled customers to demand bener, faster service at cheaper prices. It has also empowered buyers to shop in large groups to achieve a group rate (i.e., www.rnercata.com). Customers become loyal to Mercata because it helps them save money. 36 e-Business Model Bartering Model Advertising Model Procurement Model B2B Service Provider Model · Online Trading Model Online Lending Model Online Recruiting Model Online Travel Service Model TabJe 2 (Continued) Description A popular method of conducting e-business is bartering, offering one item in exchange for anotiier. If a business is looking to get rid of an overstocked product, iSolve ~isolve.com) can help sell it PotenHal customers send their pricing pre ferences to the merchant who evaluates the offer. Deals are often part barter and part cash. Examples of items typically bartered are overstocked inventory items, factory surplus, and unneeded assets. Forming business models around advertising-driven revenue streams is the advertising model. Television networks, radio stations, magazines, and print media usc advertising to fund their operations and make a profit. www.Iwon.com is a portal site that rewards users with raffle points as they browse the site's content. 
www.freemerchant.com offers free hosting, a free store builder, a free shopping cart, free traffic logs, free auction tools and all the necessary elements for running an e-commerce storefront. Frccmerchanl makes money from its strategic partnerships and referrals. The procurement model means acquiring goods and services with effective supply chain management via a B2B Exchange. ICG Commerce Systems (www.icgcommerce.com) is a site that enables businesses, customers, suppliers, purchasers, and any combination of these to interact and conduct transactions over the Internet. The system supports B2B, B2C, and all variations of these models. · B2B service providers make B2B transactions on the Internet easier. These e-businesscs help other businesses improve policies, procedures, customer service, and general operations. Ariba (www.ariba.com) is a B2B service provider. The online trading model is essentially securities trading on the Internet. Trading sites allow you to research securities and to buy, sell, and manage all of your investments from your desktop; they usually cost less. Charles Schwab (www.schwab.com) is a notable example. Companies are now making loans online. E-loan (www.eloan.com) offers creditcard services, home equity loans, and the tools and calculators to help you make educated borrowing decisions. Recruiting and job searching can be done effectively on the Web whether you are an employer or a job seeker. Refer.com (www.refer.com) rewards visitors for successful job referrals. Web surfers can search for and arrange for all their travel and accommodations online, and can often save money doing so. Cheaptickets (www.cheaptic kets.com) .is a similar site that helps customers find discount fares for airl.ine tickets, hotel rooms, cruise vacations and rental cars. 37 e-Business Model Online Entertainment Model Energy Distribution Model Braintrust Model Online Learning Model Click-and-Mortar Model Table 2 (Continued) Description The entertainment industry has recognized this and has leveraged its power to sell movie tickets, albums and any other entertainment-related content they can fit on a Web page. ICast.corn (www.icast.com) is a multimedia-rich entertainment site. A number of companies have set up energy exchanges where buyers and sellers come together to corrununicate, buy, sell, and distribute energy. These companies sell crude oil, electricity, and the products and systems for distributing them. Altranet (_www.altranet.com) also sells energy commodities. Companies can buy patents and other intellectual property online. Yet2 (www.yct2.com) is an e-business designed to help companies raise capital by selling intellectuaJ property such as patents and trademarks. Universities and corporate-training companies offer high-quality distance education directly over the Web. Click2learn ~www.click2 1earn.com) has created a database of products and services to elp mdtvtdunls and companies fi.nd the education they need. Brick-and-mortar companies who wish to bring their businesses to the Web must determine the level of cooperation and integration the two separate entities will share. A company that can offer its services both offline and o nline is called click-and-mortar, such as Barnes & Noble (www.bn.com). 38 Electronic Transaction T e Credit Card Transactions E-wallets Debit cards Digital Currency Table 3 Electronic Transactions Descrjption Merchant must have a merchant. account with a bank. 
Specialized Internet merchant accounts have been established to handle online credit card transactions. These transactions are processed by banks or third-party services. To faci litate the credit card process, many companies are introducing electronic wallet services. E-wallets allow you to keep track of your billing and shipping information so it can be entered with one click. Banks and businesses are also creating options for online payment that do not involve credit cards. There are many forms of digital currency; digital cash is one example. It is stored electronically and can be used to make online electronic payments. Digjtal cash is often used with other payment technologies such as digital wallets. Digital cash allows people who do not have credit cards to shop online, and merchants accepting digital-cash payments avoid creditcard transaction fees. 39 Examples Companies like Cybercnsh (www.cybercash.com) and ICat (www.icat.com) enable merchants to accept credit card payments online like www.Charge.com. www. visa.com offers a variety of ewallets. Entrypoint.com offers a free, personalized desktop toolbar that includes an e-wallct to facltitate one click shopping at its affiliate stores. In order to standardize e-wallet technology and gain wider acceptance among vendors, Visa, Mastercard, and a group of e-wallet vendors have standardized the technology with the Electronic Commerce Modeling Language (ECML), unveiled in June 1999 and adopted by many online vendors. Companies such as AroeriNet allow merchants to accept a customer's checking-account number as a valid form of payment. AmeriNet provides authorization and account settlement, handles distribution and shipping (fulfi11ment), and manages customer service inquiries. E-Cash Technologies (www.ccas.b.com) is a secure digitalcash provider that allows you to withdraw funds from your traditional bank account. Gift cash is another form of digital currency that can be redeemed at leading shopping sites. Web. Flooz (www.Jlooz.wm) is an example of gift currency. Some companies offer points-based rewards. www.beenz.com is an international, points-based currency system. Electronic Transaction Peer-to-peer Smart Cards Micropaymenls Table 3 (Continued) Description Peer-to-peer transactions allow online monetary transfers between consumers. A card with a computer chip embedded on its face is able to hold more information than an ordinary credit card with a magnetic strip. There are contact and contactless smartcards. Similar to smart cards, ATM cards can be used to make purchases over the Internet. Merchants must pay for each credit card transaction that is processed. The cost of some items could be lower than the standard transaction fees, causing merchants to incur losses. Micropayments, or payments that generally do not exceed $10.00, offer a way for companies offering nominal.ly priced products and services to generate a profit. 40 Examples cCash runs a peer-to-peer payment services that allows the transfer of digital cash via email between two people who have accounts at eCashcnablcd banks. Pay Pal offers a digital payment system known as X payments. PayPal allows a user to send money to anyone with an email nddress, regardless of what bank either person uses or whether the recipient is pre-registered with the service. EConnect has technology in the form of a device that connects to your computer and scrambles financial data, making it secure to send the data over the Internet. 
EpocketPay is another product developed by eConnect that allows a consumer to make secure purchases from the ePocketPay portable device. This device acts as a cell phone with a card reader built into it and will allow you to make secure purchases anywhere. Millicent js a micropayment technology provider. Millicent handles all of the payment processing needed for the operation of an e-busi ness, customer support, and distribution services. Millicent's services are especially useful to companies that offer subscription fees and small pay-per-download fees for digjtal content. c-Billi ng Electronic llill Presentment and payment (EllPP) offers the ability to present a company's bill on multiple platforms online. Payments arc generally electronic transfers from consumer checking accounts. 41 The Automated Clearing House (ACH) is the current method for processing electronic monetary transfers. Table4 NGI Participating Agencies _A~c~ro~t~1Y~n_l_ _~ E_x~p_a_n_si~n --- ~ --- ~--~ --- DARPA Defense Advnnced Research Projects Agency DOE Department of Energy (beg inning in PY 1999) NASA National Aeronautics and Space Administration NIH National Insti tutes of Health NIST National Institute of Standards and Tec hnology NSF National Science Foundation 42 Occupation Entrepreneur e-Commerce Program Manager Enterprise Architect Business and Infonnation Architect Table 5 Occupational Roles in illE Descdptjon An entrepreneur on the Internet is usually the person with the initial idea for the entire business and is involved in its early stages of inception before official management takes over. e-Commerce Program Managers are involved in enterprise-wide ecommerce initiatives and projects, managing e-cornmerce integration and overall business and technology architecture and infrastn1cture. Usually, they arc senior-level line managers who are effective at uniting the business and technology front by coordinating units within an organization and across the extended enterprise. Enterprise Arc hitects are involved in the definition, alignment, and refinement of the overall ente rprise architecture. Their responsibilities include seeing to it that many of the tasks of program management are can·ied out properly. More important, they must provide guidance so individual projects can make optimal use of infrastructure resources for e-Cornmerce. A balancing act between business requirements and tcchnologicnl capabilities is accomplished through their efforts . Enterprise Architects have a duty to identify the requirements, goals, and constraints of the project. They allocate responsibilities for each of the architectural elements. They are also responsible for lhe coordination of the modeling and design activities for the overall enterprise architecture. They are the chief e-commerce architects because they coordinate the work information, infrastructure and application architects. All architects and modelers should be completely capable in design patterns common to the many facets of business and technology. The design pattern movement has affected all aspects of analysis, design, and implementation of componentbased systems. Design patterns are the reusable material of architecture and have an important role in the complex distributed information systems lhat are conceived and developed today. Business and Information Architects have business domain knowledge, including business processes and logical information structures. 
They coordinate the work of business and technology analysts and modelers who develop abstract representations or business object models of the subjects, rules, roles, events, tasks, activities, and policies of the business domain. Application-neutral models that are built enable the reuse of business engineering analysis and design patterns and artifacts 43 Occupation Infrastructure Architect Application Architect Humru1 Factors Engineer Business Manager Internet Commerce Architect Table 5 (Continued) Description Infrastructure Architects identify the technical services required of the technology infrastructure to empower and support the logical busi ness and information architecture. They evaluate existing infrastructure services, s\~l ect those appropriate to a given project and acquire (via build or buy) new components needed in the infrastructure. They oversee the work of technical specialists in modeling the service architecture of the technical infrastmcturc. They maintain the technical components of the development repository. Application Architects coordinate the business process modeling activities across multiple projects and business domains. They coordinate the work of domain modelers and maintain the repository of business and component models. They evaluate existing business component services, sclectthose appropriate to a given project and (via build or buy) new components needed in the evolving business model. They maintain the business application components of thC development repository. Most importantly. tl1ey guide solution developers in blending the business object model with the infrastruchJre services needed to implement the models in an e~com merce platform. Human Factors Engineers are needed to design the next generation of user interfaces. While the graphical user Interface (GUD is recognized as the enabler of wide-spread personnl computing, task centered user interfaces provide assistance to end-users and can be a boon to productivity in the world of e-commerce. E-commerce transactions can involve a multitude of complex steps and processes. Well-designed user interfaces can help navigate and guide the user through these tasks, keeping track of the progress, and picking up where users leave off when transactions span multiple sessions of work. The Business Manager is responsible for the business approach on the Internet, creating and operating the Internet presence for the business, deciding what products and services are sold online, determining pricing, and establishing the key business relationships needed lo make a venture successful. This is primarily a business role, with particular attention paid to the success of the online business and bottom line. The Internet Commerce Architect is generally a systems analyst who turns the business requirements into a system design that incorporates the creation and management of content, the tnmsaction processing, fulfillment, and technical aspects of customer service 44 Occupation Solution Developer Content Designer Content Author Implementor Database Administrator Internet Sales and Marketing Customer Service Representative T~lble 5 (Continued) Description Solution Developers are application developers. They develop the use cases for the specific application at hand, compose solutions through extensive use of business object models, and use repositories. They assemble application components to implement c-commercc application. 
Unlike conventional programmers or programmer/analysts, they do not build or pmgram components. Instead, they assemble or glue together business solut ions from prefabricated components. They use highly integrated development environments (IDEs) such as IBM's VisuaiAge, Symantec's Visual Caf6, Sybase's PowcrJ, and Inprise's Jbuilder. Emerging Computer Assisted Software Engineering (CASE) tools and related methods will likely appear that tighten the link between business modeling and software development. Tools for understanding and managing business processes, such as Inte11icorp's LiveModel allows solution developers to build logical business that can automate the configuration and management of the SAP/R3 ERP system. The Content Designer is responsible for the look and feel of an Internet commerce system, including the graphic design, page layout, and user experience. The Content Author creates or adapts product information into a form that can be used for internet commerce, working within the design laid out by the content designer. The Impleme::ntor is responsible for creating any programs or software extensions needed to make the Internet commerce system work. For example, an Implementor might write the software or construct an ASP page using Drumbeat 2000 that takes product information from a database and dynamically renders it into a Web page. In the case that a database is used in the back-end, the Database Administrator (DBA) manages the creation and operation of the database to ensure its reliability, integrity, and performance. The Sales and Marketing team is responsible for focused efforts in promoting Internet-based commerce. Customer Service Representatives answer questions about products, assist buyers with registration or the purchasing of goods and services. 45 Occupation Component Developer Operations Manager System Supervisor System Administrator Security Officer Fulfillment Agent CPO Internet Lawyer Internet Accountant Table 5 (Continued) Description Component Developers usually build components in the form of coding projects. They are masters of component technology and know the intricacies of composition, delegation, and object-oriented systems analysis and design. They are proficient in component development languages (such as Java and C++), modeling standards (such as UMLand XMI), and distributed computing platforms (such as CORBA, DCOM, EJB). They understand and think in terms of architectural design patterns. In the meanti me, they will close the gap between business requirements and available components. Component developers must be highly qualified software engineers since quality'components do not just happen. They are carefully constructed using quality soflware engineering disciplines. Component Developers, therefore, must be highly trained specialists and masters of software quality processes such as CMM and ISO, as well as masters of component-based development methods. The Operations Manager is responsible for managing all service activities for the Internet commerce system. The System Supervisor manages the system staff. The System Administrator is responsible for the technical operations of the computer systems and networks. The Security Officer ensures that appropriate security measures have been taken in the design and implementation of the Internet commerce system. The Fulfillment Agent is responsible for shipping and handling of physical goods or delivery of services. 
In the case of digital goods, the fulfillment agent is responsible for overseeing the operation of the fulfillment system. The Chief Privacy Officer is io charge of measures for ensuring the security of vital company information, such as customer credit card numbers remains secure within the company network. An Internet Lawyer is a legal expert for Internet fu nctions. The .importance of this position cannot be overstated, because new laws and regulations could ki ll a company without legal assistance, prevention, or intervention. The Internet Accountant is responsible for ensuring that the proper accounting procedures have been followed for Internet-based transactions. 46 Technique Domain name FAQ Forum Networking Faci litation Promotions c-Business advertising Pay-per-click Pay-per-lead Pay-per-sale Webcasting Interactive Advertising Public Relations and press releases Trade shows Table 6 Marketing Techniques on the Internet Description The Universal Resource Locator (URL) represents the address of the domain name, which must be chosen with care because it reflects the company's values immediately and connotes immediate meaning to customers with its first impression. One can purchose a domain name at www.networksolutions.com. A frequently asked questions (FAQ) section contributes to a userfiiendly site. An onli ne forum on the website enables customers to congregate at a pre-de~ign at cd place on the site to post comments and to share ideas. This promotes site activi ty. It is important to make it easy for the customer to recommend a site to a friend. This can be accomplished with a quick button that brings up an email exchange. c-Business promotions can attract visitors to your s ite and can influence purchasing. Netcenlives.com is a company that can provide your business with customer reward programs. P ublicizing through traditional channels such as television slots, movies, newspapers, and magazines is effective. Pay-per-click is a mode of operation that calls for paying the host according to the number of click-throughs to a site. Pay-per-lead is a mode of operation that pays the host for every lead generated from the advertisement. Pay-per-sale is a mode of operation that pays the host for every sale resulting from a click through. Webcasting is a broadcasting technique on the Web that uses streaming media to broadcast an event over the Web. Interactive Advertising involves consumers in the advertising campaign. An example is WebRIOT, a game show on MTV. The game is aired on television, and viewers can join in the game at the same time by playing online. Public Relations (PR) and press releases keep customers and your company's employees current on the latest information about products, services, and intemal and external issues such as company promotions and consumer reactions. Trade shows arc excellent opportunities to generate site interest by speaking at conferences, which increases brand awareness 47 Table 7 Customer Relationship Management CR.M:Area Handling Sales tracking Transaction support Data-mining Call center Log-file analysis Cookie Customer registrntion Personalization One-to-one marketing Onsite Search engine Registering with Internet search engines Partnering Afffiiate Programs Culture management Description Handling is essentially the maintenance of out-bound and in-bound calls from customers and service representatives. Sales tracking is the process of tracing and recording all sales made. 
Table 7. Customer Relationship Management

Handling: Handling is essentially the maintenance of outbound and inbound calls from customers and service representatives.

Sales tracking: Sales tracking is the process of tracing and recording all sales made.

Transaction support: Transaction support entails the technology and personnel used for conducting transactions.

Data-mining: Data-mining is a way to analyze information collected from visitors. It uses algorithms and statistical tools to find patterns in data gathered from customer visits.

Call center: A call center gathers customer-service representatives who can be reached by an 800 number or through email, online text chatting, or real-time voice communications.

Log-file analysis: A log-file analysis is a useful way to keep track of your visitors in terms of site visits, including each visitor's location, IP address, time of visit, frequency of visits, and other key indicators (a brief sketch of such an analysis follows this table).

Cookie: A cookie is a technology that keeps a profile on each visitor.

Customer registration: Customer registration is an excellent method to create customer profiles because visitors fill out a form with personal information.

Personalization: Personalization technology can help a company understand the needs of its customers and the effectiveness of its website, thereby catering to the whims of the customer.

One-to-one marketing: One-to-one marketing, such as e-mails that confirm purchases and offer new products, shows customers that the business appreciates their patronage.

Onsite search engine: Onsite search engines allow people to find information relevant to a subject of interest amidst the large amount of information available on a website.

Registering with Internet search engines: Registering with Internet search engines is important because there are reportedly over 400 search engines in use on the Internet. This process makes a website known to the world by submitting it as a searchable domain name in a sea of domain names.

Partnering: Partnering is a way of forming a strategic union with another company. Legal contracts are usually written to define the relationship in a way that helps a company provide customers with complementary services and products.

Affiliate Programs: An Affiliate Program is an agreement between two parties whereby one pays the other a commission based on a designated consumer action. Affiliate programs establish new income streams for companies and individuals that host affiliate advertising on their websites.

Culture management: Culture management is the ability to understand and cater to a target audience's patronage and culture, especially in global enterprises.
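The following is a minimal sketch of the log-file analysis technique in Table 7: counting how many requests each visitor's IP address made. It assumes a simplified access log named "access.log" whose lines begin with the client IP address, as in the common log format; the file name and layout are assumptions made only for illustration.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical example: tally visit frequency per visitor from a Web server log.
    public class LogFileAnalysis {
        public static void main(String[] args) throws Exception {
            Map<String, Integer> visitsPerIp = new HashMap<>();
            try (BufferedReader in = new BufferedReader(new FileReader("access.log"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.isEmpty()) continue;
                    String ip = line.split(" ")[0]; // first field is assumed to be the client IP
                    visitsPerIp.merge(ip, 1, Integer::sum);
                }
            }
            // Report frequency of visits, one of the key indicators mentioned in Table 7.
            visitsPerIp.forEach((ip, count) -> System.out.println(ip + " visited " + count + " times"));
        }
    }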