Novel Battery Thermal Management Via Scalable Dew-Point Evaporative Cooling
In: RSER-D-22-03749
SSRN
In: Computers and Electronics in Agriculture, Vol. 117, pp. 214-225
SSRN
In: Journal of South Asian Studies, Vol. 9, No. 2, pp. 107-111
ISSN: 2307-4000
The dew point temperature is related to the total water vapor content of the atmospheric column. In this study, Water Vapor Content (WVC) from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) and relative humidity from Research Moored Array for African-Asian-Australian Monsoon Analysis (RAMA) buoy data have been used to establish a relationship between satellite-measured WVC and dew point temperature. The study focuses on developing an algorithm to estimate surface dew point temperature from satellite-based WVC. Regression coefficients are established using nine years (2004-2012) of dew point temperature computed from relative humidity, together with satellite-measured WVC. From 1594 weekly observations, mean monthly collocated data points are used to examine the relationship between dew point temperature and WVC.
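The regression step described above can be sketched in a few lines; the data here are synthetic stand-ins, and the slope, intercept, and noise level are invented for illustration, not the study's fitted coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the collocated monthly means: satellite water
# vapor content (kg/m^2) and buoy-derived dew point temperature (deg C).
# The linear relation and noise level below are assumptions.
wvc = rng.uniform(20.0, 60.0, size=120)
dew_point = 5.0 + 0.45 * wvc + rng.normal(0.0, 1.0, size=120)

# Establish the regression coefficients by ordinary least squares.
slope, intercept = np.polyfit(wvc, dew_point, deg=1)

def estimate_dew_point(wvc_value):
    """Estimate surface dew point (deg C) from satellite-measured WVC."""
    return slope * wvc_value + intercept
```

With real TMI WVC and RAMA-derived dew points in place of the synthetic arrays, the same two lines would recover the study's regression coefficients.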
Abstract The problem addressed here is that in Bali, hotels, government offices, and urban residential areas increasingly rely on conventional Air Conditioning (AC), which is not environmentally friendly, contributes to global warming, and wastes electricity. This motivates an energy-efficient, environmentally friendly alternative: the dew point evaporative cooling system, a cooling process that does not increase humidity. In this research, dry ice is used as the pad material. Dry ice was cut to the pad size and loaded into 20 in-line tubes, each 400 mm long with a 40 mm x 40 mm cross-section, at 75% tube filling. Tests were carried out to determine the drop in dry-bulb air temperature (ΔTdb) and to calculate cooling effectiveness, cooling capacity, EER, and condensate rate. The variable measured during testing was air flow velocity: V1 (10 Regavolt) = 4.8 m/s, V2 (30 Regavolt) = 9.5 m/s, V3 (50 Regavolt) = 11.3 m/s. The research found that at low air flow velocity (10 Regavolt, 4.8 m/s), with a constant flow rate, the tubes absorb more heat; at high velocity (50 Regavolt, 11.3 m/s) the tubes also absorb more heat, but the dry ice in the pads sublimes quickly. Higher fan speeds produced a larger drop in dry-bulb air temperature, lower cooling effectiveness, higher cooling capacity, lower EER, and a higher condensate rate. Keywords: dew point, evaporative, temperature, pads
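The performance metrics the tests compute follow standard definitions; a minimal sketch, where the duct cross-section, temperatures, and fan power are assumed for illustration rather than taken from the measurements:

```python
AIR_DENSITY = 1.2  # kg/m^3, approximate air density at room conditions
CP_AIR = 1006.0    # J/(kg*K), specific heat of air at constant pressure

def cooling_effectiveness(t_db_in, t_db_out, t_wb_in):
    """Fraction of the inlet wet-bulb depression actually achieved."""
    return (t_db_in - t_db_out) / (t_db_in - t_wb_in)

def cooling_capacity(duct_area_m2, velocity_m_s, t_db_in, t_db_out):
    """Sensible cooling capacity in watts from the measured air velocity."""
    mass_flow = AIR_DENSITY * duct_area_m2 * velocity_m_s  # kg/s
    return mass_flow * CP_AIR * (t_db_in - t_db_out)

def eer(capacity_w, electrical_input_w):
    """Energy efficiency ratio: cooling delivered per unit electrical input."""
    return capacity_w / electrical_input_w

# Lowest fan setting (10 Regavolt, 4.8 m/s) through an assumed single
# 40 mm x 40 mm cross-section, with assumed temperatures and fan power:
eff = cooling_effectiveness(30.0, 24.0, 22.0)
q_w = cooling_capacity(0.040 * 0.040, 4.8, 30.0, 24.0)
rating = eer(q_w, 25.0)
```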
BASE
In: Computers and Electronics in Agriculture, Vol. 153, pp. 334-346
In: IJIN-D-23-00410
SSRN
Blog: The Grumpy Economist
I've been reading a lot of macro lately. In part, I'm just catching up from a few years of book writing. In part, I want to understand inflation dynamics, the quest set forth in "expectations and the neutrality of interest rates," and an obvious next step in the fiscal theory program. Perhaps blog readers might find interesting some summaries of recent papers, when there is a great idea that can be summarized without a huge amount of math. So, I start a series on cool papers I'm reading. Today: "Tail risk in production networks" by Ian Dew-Becker, a beautiful paper. A "production network" approach recognizes that each firm buys from others, and models this interconnection. It's a hot topic for lots of reasons, below. I'm interested because prices cascading through production networks might induce a better model of inflation dynamics. (This post uses Mathjax equations. If you're seeing garbage like [\alpha = \beta] then come back to the source here.) To Ian's paper: Each firm uses other firms' outputs as inputs. Now, hit the economy with a vector of productivity shocks. Some firms get more productive, some get less productive. The more productive ones will expand and lower prices, but that changes everyone's input prices too. Where does it all settle down? This is the fun question of network economics. Ian's central idea: The problem simplifies a lot for large shocks. Usually when problems are complicated we look at first or second order approximations, i.e. for small shocks, obtaining linear or quadratic ("simple") approximations. On the x axis, take a vector of productivity shocks for each firm, and scale it up or down. The x axis represents this overall scale. The y axis is GDP. The right hand graph is Ian's point: for large shocks, log GDP becomes linear in log productivity -- really simple. Why? Because for large enough shocks, all the networky stuff disappears. Each firm's output moves up or down depending only on one critical input. 
To see this, we have to dig deeper to complements vs. substitutes. Suppose the price of an input goes up 10%. The firm tries to use less of this input. If the best it can do is to cut use 5%, then the firm ends up paying 5% more overall for this input, the "expenditure share" of this input rises. That is the case of "complements." But if the firm can cut use of the input 15%, then it pays 5% less overall for the input, even though the price went up. That is the case of "substitutes." This is the key concept for the whole question: when an input's price goes up, does its share of overall expenditure go up (complements) or down (substitutes)? Suppose inputs are complements. Again, this vector of technology shocks hits the economy. As the size of the shock gets bigger, the expenditure of each firm, and thus the price it charges for its output, becomes more and more dominated by the one input whose price grows the most. In that sense, all the networkiness simplifies enormously. Each firm is only "connected" to one other firm. Turn the shock around. Each firm that was getting a productivity boost now gets a productivity reduction. Each price that was going up now goes down. Again, in the large shock limit, our firm's price becomes dominated by the price of its most expensive input. But it's a different input. So, naturally, the economy's response to this technology shock is linear, but with a different slope in one direction vs. the other. Suppose instead that inputs are substitutes. Now, as prices change, the firm expands more and more its use of the cheapest input, and its costs and price become dominated by that input instead. Again, the network collapsed to one link. Ian: "negative productivity shocks propagate downstream through parts of the production process that are complementary (\(\sigma_i < 1\)), while positive productivity shocks propagate through parts that are substitutable (\(\sigma_i > 1\)). 
...every sector's behavior ends up driven by a single one of its inputs....there is a tail network, which depends on \(\theta\) and in which each sector has just a single upstream link." Equations: Each firm's production function is (somewhat simplifying Ian's (1)) \[Y_i = Z_i L_i^{1-\alpha} \left( \sum_j A_{ij}^{1/\sigma} X_{ij}^{(\sigma-1)/\sigma} \right)^{\alpha \sigma/(\sigma-1)}.\] Here \(Y_i\) is output, \(Z_i\) is productivity, \(L_i\) is labor input, \(X_{ij}\) is how much of good \(j\) firm \(i\) uses as an input, and \(A_{ij}\) captures how important each input is in production. \(\sigma>1\) means inputs are substitutes, \(\sigma<1\) complements. Firms are competitive, so price equals marginal cost, and each firm's price is \[ p_i = -z_i + \frac{\alpha}{1-\sigma}\log\left(\sum_j A_{ij}e^{(1-\sigma)p_j}\right).\; \; \; (1)\] Small letters are logs of big letters. Each price depends on the prices of all the inputs, plus the firm's own productivity. Log GDP, plotted in the figure above, is \[gdp = -\beta'p\] where \(p\) is the vector of prices and \(\beta\) is a vector of how important each good is to the consumer. In the case \(\sigma=1\), (1) reduces to a linear formula. We can easily solve for prices and then gdp as a function of the technology shocks: \[p_i = -z_i + \alpha \sum_j A_{ij} p_j\] and hence \[p=-(I-\alpha A)^{-1}z,\] where the letters represent vectors and matrices across \(i\) and \(j\). This expression shows some of the point of networks: the pattern of prices and output reflects the whole network of production, not just individual firm productivity. But with \(\sigma \neq 1\), (1) is nonlinear with no known closed-form solution. Hence approximations. You can see Ian's central point directly from (1). Take the \(\sigma<1\) case, complements. Parameterize the size of the technology shocks by a fixed vector \(\theta = [\theta_1, \ \theta_2, \ ...\theta_i,...]\) times a scalar \(t>0\), so that \(z_i=\theta_i \times t\).
Then let \(t\) grow keeping the pattern of shocks \(\theta\) the same. Now, as the \(\{p_i\}\) get larger in absolute value, the term with the greatest \(p_j\) has the greatest value of \( e^{(1-\sigma)p_j} \). So, for large technology shocks \(z\), only that largest term matters, the log and e cancel, and \[p_i \approx -z_i + \alpha \max_{j} p_j.\] This is linear, so we can also write prices as a pattern \(\phi\) times the scale \(t\), in the large-t limit \(p_i = \phi_i t\), and \[\phi_i = -\theta_i + \alpha \max_{j} \phi_j.\;\;\; (2)\] With substitutes, \(\sigma>1\), the firm's costs, and so its price, will be driven by the smallest (most negative) upstream price, in the same way: \[\phi_i \approx -\theta_i + \alpha \min_{j} \phi_j.\] To express gdp scaling with \(t\), write \(gdp=\lambda t\), or, when you want to emphasize the dependence on the vector of technology shocks, \(\lambda(\theta)\). Then we find gdp by \(\lambda =-\beta'\phi\). In this big-price limit, the \(A_{ij}\) contribute a constant term, which also washes out. Thus the actual "network" coefficients stop mattering at all so long as they are not zero -- the max and min are taken over all non-zero inputs. Ian: ...the limits for prices, do not depend on the exact values of any \(\sigma_i\) or \(A_{i,j}.\) All that matters is whether the elasticities are above or below 1 and whether the production weights are greater than zero.
In the example in Figure 2, changing the exact values of the production parameters (away from \(\sigma_i = 1\) or \(A_{i,j} = 0\)) changes...the levels of the asymptotes, and it can change the curvature of GDP with respect to productivity, but the slopes of the asymptotes are unaffected....when thinking about the supply-chain risks associated with large shocks, what is important is not how large a given supplier is on average, but rather how many sectors it supplies... For a full solution, look at the (more interesting) case of complements, and suppose every firm uses a little bit of every other firm's output, so all the \(A_{ij}>0\). The largest input price in (2) is the same for each firm \(i\), and you can quickly see then that the biggest price will belong to the firm with the smallest technology shock. Now we can solve the model for prices and GDP as a function of technology shocks: \[\phi_i \approx -\theta_i - \frac{\alpha}{1-\alpha} \theta_{\min},\] \[\lambda \approx \beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\min}.\] We have solved the large-shock approximation for prices and GDP as a function of technology shocks. (This is Ian's example 1.) The graph is concave when inputs are complements, and convex when they are substitutes. Let's do complements. We do the graph to the left of the kink by changing the sign of \(\theta\). If the identity of \(\theta_{\min}\) did not change, \(\lambda(-\theta)=-\lambda(\theta)\) and the graph would be linear; it would go down on the left of the kink by the same amount it goes up on the right of the kink. But now a different \(j\) has the largest price and the worst technology shock. Since this must be a worse technology shock than the one driving the previous case, GDP is lower and the graph is concave.
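A quick numerical check of this closed form (my own sketch, with made-up shock and consumption vectors): iterate the fixed point (2) for a dense complements network and compare with \(\phi_i = -\theta_i - \frac{\alpha}{1-\alpha}\theta_{\min}\).

```python
import numpy as np

alpha = 0.5
theta = np.array([0.3, -0.2, 0.1])   # made-up pattern of shocks
beta = np.array([0.5, 0.3, 0.2])     # made-up consumption weights, sum to 1

# Iterate phi_i = -theta_i + alpha * max_j phi_j; with all A_ij > 0 the max
# runs over every sector, so the A_ij values themselves never appear.
phi = -theta.copy()
for _ in range(200):
    phi = -theta + alpha * phi.max()

closed_form = -theta - alpha / (1.0 - alpha) * theta.min()
lam = beta @ theta + alpha / (1.0 - alpha) * theta.min()  # gdp slope
```

The iteration converges because \(\alpha<1\), and it lands exactly on the closed form: the sector with the worst shock pins down the max, and everything else follows.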
\[-\lambda(-\theta) = \beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\max} \ge\beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\min} = \lambda(\theta).\] Therefore \(\lambda(-\theta)\le-\lambda(\theta),\) the left side falls by more than the right side rises. Does all of this matter? Well, surely more for questions when there might be a big shock, such as the big shocks we saw in a pandemic, or big shocks we might see in a war. One of the big questions that network theory asks is, how much does GDP change if there is a technology shock in a particular industry? The \(\sigma=1\) case in which expenditure shares are constant gives a standard and fairly reassuring result: the effect on GDP of a shock in industry \(i\) is given by the ratio of \(i\)'s output to total GDP. ("Hulten's theorem.") Industries that are small relative to GDP don't affect GDP that much if they get into trouble. You can intuit that constant expenditure shares are important for this result. If an industry has a negative technology shock, raises its prices, and others can't reduce use of its inputs, then its share of expenditure will rise, and it will all of a sudden be important to GDP. Continuing our example, if one firm has a negative technology shock, then it is the minimum technology, and \[\frac{d\,gdp}{dz_i} = \beta_i + \frac{\alpha}{1-\alpha}.\] For small firms (industries) the latter term is likely to be the most important. All the \(A\) and \(\sigma\) have disappeared, and basically the whole economy is driven by this one unlucky industry and labor. Ian: ...what determines tail risk is not whether there is granularity on average, but whether there can ever be granularity – whether a single sector can become pivotal if shocks are large enough. For example, take electricity and restaurants. In normal times, those sectors are of similar size, which in a linear approximation would imply that they have similar effects on GDP.
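To make the \(\sigma=1\) Hulten benchmark concrete: differentiating \(gdp = \beta'(I-\alpha A)^{-1}z\) gives the Hulten (Domar) weights \(\beta'(I-\alpha A)^{-1}\), which exceed the consumption shares because intermediate sales count. A sketch with an invented three-sector network:

```python
import numpy as np

alpha = 0.5
A = np.array([[0.0, 0.7, 0.3],    # invented input-output shares,
              [0.2, 0.0, 0.8],    # each row sums to 1
              [0.5, 0.5, 0.0]])
beta = np.array([0.5, 0.3, 0.2])  # invented consumption weights

# Hulten (Domar) weights: d gdp / d z_i in the sigma = 1 case.
hulten = beta @ np.linalg.inv(np.eye(3) - alpha * A)
```

Each weight is at least the corresponding \(\beta_i\), and here they sum to \(1/(1-\alpha) = 2\): sales over GDP, not value added over GDP.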
But one lesson of Covid was that shutting down restaurants is not catastrophic for GDP, [Consumer spending on food services and accommodations fell by 40 percent, or $403 billion between 2019Q4 and 2020Q2. Spending at movie theaters fell by 99 percent.] whereas one might expect that a significant reduction in available electricity would have strongly negative effects -- and that those effects would be convex in the size of the decline in available power. Electricity is systemically important not because it is important in good times, but because it would be important in bad times. Ben Moll turned out to be right and Germany was able to substitute away from Russian gas a lot more than people had thought, but even that proves the rule: if it is hard to substitute away from even a small input, then large shocks to that input imply larger expenditure shares and larger impacts on the economy than its small output in normal times would suggest. There is an enormous amount more in the paper and voluminous appendices, but this is enough for a blog review. ****Now, a few limitations, or really thoughts on where we go next. (No more in this paper, please, Ian!) Ian does a nice illustrative computation of the sensitivity to large shocks: Ian assumes \(\sigma>1\), so the main ingredients are how many downstream firms use your products and, to a lesser extent, their labor shares. No surprise: trucks and energy have big tail impacts. But so do lawyers and insurance. Can we really not do without lawyers? Here I hope the next step looks hard at substitutes vs. complements. That raises a bunch of issues. Substitutes vs. complements surely depends on time horizon and size of shocks. It might be easy to use a little less water or electricity initially, but then really hard to reduce more than, say, 80%. It's usually easier to substitute in the long run than the short run. The analysis in this literature is "static," meaning it describes the economy when everything has settled down.
The responses -- you charge more, I use less, I charge more, you use less of my output, etc. -- all happen instantly, or equivalently the model studies a long run where this has all settled down. But then we talk about responses to shocks, as in the pandemic. Surely there is a dynamic response here, not just including capital accumulation (which Ian studies). Indeed, my hope was to see prices spreading out through a production network over time, but this structure would have all price adjustments instantly. Mixing production networks with sticky prices is an obvious idea, which some of the papers below are working on. In the theory and data handling, you see a big discontinuity. If a firm uses any inputs at all from another firm, if \(A_{ij}>0\), that input can take over and drive everything. If it uses no inputs at all, then there is no network link and the upstream firm can't have any effect. There is a big discontinuity at \(A_{ij}=0.\) We would prefer a theory that does not jump from zero to everything when the firm buys one stick of chewing gum. Ian had to drop small but nonzero elements of the input-output matrix to produce sensible results. Perhaps we should regard very small inputs as always substitutes? How important is the network stuff anyway? We tend to use industry categorizations, because we have an industry input-output table. But how much of the US industry input-output is simply vertical: Loggers sell trees to mills who sell wood to lumberyards who sell lumber to Home Depot who sells it to contractors who put up your house? Energy and tools feed each stage, but don't use a whole lot of wood to make those. I haven't looked at an input-output matrix recently, but just how "vertical" is it? ****The literature on networks in macro is vast. One approach is to pick a recent paper like Ian's and work back through the references. I started to summarize, but gave up in the deluge. Have fun.
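For readers who want to experiment, the \(\sigma=1\) benchmark from the equations above takes only a few lines; the three-firm network and shocks here are invented, not from the paper:

```python
import numpy as np

alpha = 0.5
A = np.array([[0.0, 0.7, 0.3],    # invented input-output shares
              [0.2, 0.0, 0.8],
              [0.5, 0.5, 0.0]])
beta = np.array([0.5, 0.3, 0.2])  # invented consumption weights
z = np.array([0.1, -0.05, 0.0])   # invented log productivity shocks

# Prices solve p = -(I - alpha*A)^{-1} z, and log GDP is -beta'p.
p = -np.linalg.solve(np.eye(3) - alpha * A, z)
gdp = -beta @ p
```

Solving the nonlinear version of (1) for \(\sigma \neq 1\) requires a numerical fixed point instead; this linear case is the tractable benchmark.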
One way to think of a branch of economics is not just "what tools does it use?" but "what questions is it asking?" Long and Plosser's "Real Business Cycles," a classic, went after the idea that the central defining feature of business cycles (since Burns and Mitchell) is comovement. States and industries all go up and down together to a remarkable degree. That pointed to "aggregate demand" as a key driving force. One would think that "technology shocks," whatever they are, would be local or industry-specific. Long and Plosser showed that an input-output structure led idiosyncratic shocks to produce business-cycle comovement in output. Brilliant. Macro went another way, emphasizing time series -- the idea that recessions are defined, say, by two quarters of aggregate GDP decline, or by the greater decline of investment and durable goods than consumption -- and the aggregate models of Kydland and Prescott, and the stochastic growth model as pioneered by King, Plosser and Rebelo, driven by a single economy-wide technology shock. Part of this shift is simply technical: Long and Plosser used analytical tools, and were thereby stuck in a model without capital, and they did not inaugurate matching to data. Kydland and Prescott brought numerical model solution and calibration to macro, which is what macro has done ever since. Maybe it's time to add capital, solve numerically, and calibrate Long and Plosser (with up-to-date frictions and consumer heterogeneity too, maybe). Xavier Gabaix (2011) had a different Big Question in mind: Why are business cycles so large? Individual firms and industries have large shocks, but \(\sigma/\sqrt{N}\) ought to dampen those at the aggregate level. Again, this was a classic argument for aggregate "demand" as opposed to "supply." Gabaix notices that the US has a fat-tailed firm distribution with a few large firms, and those firms have large shocks.
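Gabaix's point is easy to see by simulation (a sketch with invented sizes and volatilities): with \(N\) equal-sized firms, aggregate volatility shrinks like \(\sigma/\sqrt{N}\), while a fat-tailed (Zipf) size distribution keeps it an order of magnitude larger.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_draws = 2_000, 2_000
shock_sd = 0.10  # assumed 10% firm-level volatility

# Equal-sized firms: idiosyncratic shocks wash out at rate 1/sqrt(N).
equal_weights = np.full(n_firms, 1.0 / n_firms)

# Zipf-sized firms: a few giants dominate, so shocks wash out far less.
zipf_weights = 1.0 / np.arange(1, n_firms + 1)
zipf_weights /= zipf_weights.sum()

shocks = rng.normal(0.0, shock_sd, size=(n_draws, n_firms))
vol_equal = (shocks @ equal_weights).std()
vol_zipf = (shocks @ zipf_weights).std()
```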
He amplifies his argument via the Hulten mechanism, a bit of networkiness, since the impact of a firm on the economy is sales / GDP, not value added / GDP. The enormous literature since then has gone after a variety of questions. Dew-Becker's paper is about the effect of big shocks, and obviously not that useful for small shocks. Remember which question you're after. My quest for a new Phillips curve in production networks is better represented by Elisa Rubbo's "Networks, Phillips Curves and Monetary Policy," and Jennifer La'o and Alireza Tahbaz-Salehi's "Optimal Monetary Policy in Production Networks." If I can boil those down for the blog, you'll hear about it eventually. The "what's the question" question is doubly important for this branch of macro that explicitly models heterogeneous agents and heterogeneous firms. Why are we doing this? One can always represent the aggregates with a social welfare function and an aggregate production function. You might be interested in how aggregates affect individuals, but that doesn't change your model of aggregates. Or, you might be interested in seeing what the aggregate production or utility function looks like -- is it consistent with what we know about individual firms and people? Does the size of the aggregate production function shock make sense? But still, you end up with just a better (hopefully) aggregate production and utility function. Or, you might want models that break the aggregation theorems in a significant way; models for which distributions matter for aggregate dynamics, theoretically and (harder) empirically. But don't forget you need a reason to build disaggregated models. Expression (1) is not easy to get to. I started reading Ian's paper in my usual way: to learn a literature, start with the latest paper and work backward. Alas, this literature has evolved to the point that authors plop results down that "everybody knows" and that will take you a day or so of head-scratching to reproduce.
I complained to Ian, and he said he had the same problem when he was getting into the literature! Yes, journals now demand such overstuffed papers that it's hard to do, but it would be awfully nice for everyone to start including ground-up algebra for major results in one of the endless internet appendices. I eventually found Jonathan Dingel's notes on Dixit-Stiglitz tricks, which were helpful. Update: Chase Abram's University of Chicago Math Camp notes here are also a fantastic resource. See Appendix B starting p. 94 for production network math. The rest of the notes are also really good. The first part goes a little deeper into more abstract material than is really necessary for the second part and applied work, but it is a wonderful and concise review of that material as well.
12.2. Thermal conditioning of simple-form solids: semi-empirical study; 12.3. Thermal conditioning and hydrothermal processing; Chapter 13. Thermal Insulation of Piping: Tracing; 13.1. Thermal insulation; 13.2. Pipe tracing; Chapter 14. Combustion and Sulfur Dew Point; 14.1. Characteristics of combustion; 14.2. SO3 content and dew point; Chapter 15. Heat Supply by Microwave or Infrared Radiation; 15.1. Microwave heating (theory); 15.2. Microwave heating (practical); 15.3. Infrared drying; Chapter 16. Freezing, Deep-freezing and Thawing; 16.1. Introduction; 16.2. Industrial freezing apparatus
In: Sugar Industry, pp. 684-690
Corrosion of boiler air-heater tubes costs the industry several million dollars each year in repairs, reduced boiler steam output and reduced boiler efficiency. There have been many cases where the reduced boiler steam output caused by leaking air-heater tubes has reduced factory crushing rates and electricity export. Corrosion of air-heater tubes can, in many cases, be minimised by improving the gas and air flow distributions with turning vanes and ductwork redesign, but in nearly all cases, there are some tubes on the cold air side of air-heaters that are still susceptible to dew-point corrosion. Using improved tube materials in parts of boiler air-heaters that are susceptible to dew-point corrosion will significantly extend air-heater life. Available materials and coatings were reviewed, and laboratory- and factory-scale trials, metal temperature measurements, dew-point calculations and a financial analysis-based ranking of commercially available tube materials were undertaken. S-TEN 1 had a similar corrosion performance to SS304 stainless steel. Both the S-TEN 1 and SS304 stainless steel tubes had significantly greater resistance to dew-point corrosion than the carbon-steel tubes typically used in the air-heaters of Australian sugar factory boilers. The good performance of S-TEN 1 in this project is not consistent with the poor performance of S-TEN 1 (no better than carbon-steel) in earlier trials carried out by Isis Mill. The conditions experienced by the trial tubes in this project were not as severe as those experienced by the tubes in the earlier Isis Mill trials, and this appears to be the main reason for the current improved performance of the S-TEN 1 tubes. This requires further investigation.
In: GeoScience Engineering, Vol. 61, No. 2, pp. 23-36
ISSN: 1802-5420
Abstract
This paper presents an energy and exergy analysis of an air cooling cycle based on the novel Maisotsenko indirect evaporative cooling cycle. The Maisotsenko cycle (M-cycle) provides the desired cooling condition above the dew point and below the wet-bulb temperature. In this study, based on average annual temperature, Iran is segmented into eleven climate zones. In the energy analysis, wet-bulb and dew-point effectiveness and cooling capacity rate are calculated; in the exergy analysis, exergy input rate, exergy destruction rate, exergy loss, exergy efficiency, exergetic COP and entropy generation rate are calculated for Iran's weather conditions in the indicated climates. Moreover, a feasibility study based on water evaporation rate and the Maisotsenko cycle is presented. The energy and exergy results show that the fifth, sixth, seventh and eighth climates are quite compatible with the Maisotsenko cycle, while the cities of Rasht, Sari, Ramsar and Ardabile are incompatible with it.
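The two effectiveness measures used in such M-cycle analyses are simple ratios of temperature differences; a sketch with illustrative temperatures, not the paper's climate data:

```python
def dew_point_effectiveness(t_in, t_out, t_dew):
    """How far supply air is cooled relative to the inlet dew-point depression."""
    return (t_in - t_out) / (t_in - t_dew)

def wet_bulb_effectiveness(t_in, t_out, t_wb):
    """Same ratio against the wet-bulb depression; the M-cycle can exceed 1."""
    return (t_in - t_out) / (t_in - t_wb)

# Assumed inlet: 38 C dry bulb, 21 C wet bulb, 15 C dew point; supply 22 C.
eps_dp = dew_point_effectiveness(38.0, 22.0, 15.0)
eps_wb = wet_bulb_effectiveness(38.0, 22.0, 21.0)
```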
In: http://hdl.handle.net/2027/mdp.39015094995456
"Oak Ridge and Paducah Facilities operated by Union Carbide Nuclear Company, a Division of Union Carbide and Carbon Corporation for the Atomic Energy Commission acting under U.S. Government Contract W7405 eng 26." ; "Report No. K-1300, Vol. 1, No. 1." ; "Date of issue: August 3, 1956." ; Three-dimensional graphic-display device / Glenn C. Williams -- Modified card sorting machine / Fred Campbell -- Improved contact between an electrode and a mercury pool / Glenn H. Jenks -- Replaceable dry-box glove / Bernard G. Jenkins -- Machining of crystals / Richard J. Fox -- Powder valve / Ralph D. Frey -- Method for copper flashing / Abraham Krieg -- Vacuum gage / George A. Kuipers -- Ionization gauge with removable filament assembly / William R. Rathkamp -- Gated oscillator / Edward Fairstein -- Device for illustrating the principles of a nuclear reactor / Edward C. Campbell -- Special hoist control / Chauncey D. Stout -- Ground-water chaser / William J. Lacy -- Motor inspection technique / Aaron W. Williams -- Precise automatic manometer reader / John Farquharson -- Improved Moineau pump / Tom S. Mackey -- Improvement in sequential data recorder / Arthur M. Sherman -- Radial switch for safety exhibit / E.H. Ressor and D.A. Smith -- Portable dew point meters reported at K-25 / Harry Kohne et al. -- Multiple spot dew point meter / Harold H. Lunn -- Modified dew point meter / Jack G. Thompson et al. -- Colorimetric moisture indicator / Ralph P. Levey, Jr., Harold H. Lunn and Seymour H. Smiley -- Frost detector / William S. Pappas.
BASE
The turning point in China's economic development /Ross Garnaut --Continued rapid growth and the turning point in China's development /Ross Garnaut and Yiping Huang --Growth accounting after statistical revisions /Xiaolu Wang --Quadrupling the Chinese economy again: constraints and policy options /Justin Yifu Lin --China's interaction with the global economy /Nicholas R. Lardy --Global imbalance, China and the international currency system /Fan Gang --Who foots China's bank restructuring bill? /Guonan Ma --Keeping fiscal policy sustainable in China: challenges and solutions /Jinzhi Tong and Wing Thye Woo --Employment growth, labour scarcity and the nature of China's trade expansion /Cai Fang and Dewen Wang --The impact of the guest-worker system on poverty and the well-being of migrant workers in urban China /Yang Du, Robert Gregory and Xin Meng --China's growth to 2030: demographic change and the labour supply constraint /Jane Golley and Rod Tyers --Changing patterns in China's agricultural trade after WTO accession /Chen Chunlai --Village elections, accountability and income distribution in rural China /Yang Yao --China's resources demand at the turning point /Ross Garnaut and Ligang Song --Economic growth and environmental pollution: a panel data analysis /Bao Qun and Shuijun Peng --Harmonising the coal industry with the environment /Xunpeng Shi --Growth, energy use and greenhouse gas emissions in China /Warwick McKibbin.
Gas produced at different deposits has different compositions, namely different amounts of methane, propane, nitrogen and so on. Different gas composition determines different physical and chemical characteristics of the gas: its heat of combustion, water and hydrocarbon dew points, and Wobbe number. Previously, little attention was paid to these characteristics when using gas; users were guided by the requirements of GOST 5542-87. With the restoration of Ukraine's independence and the transition to market conditions, attitudes toward gas have changed. Gas is a commodity and has a price. With integration into the European Union came not only the implementation of European legislation but also the widespread use of advanced equipment and technologies from these countries. Gas will be metered in units of energy, not volume as before. This forces us to pay more attention to the quality of gas, set stricter requirements for its component composition, and monitor compliance with regulatory requirements in the process of transporting and supplying gas to the final consumer. GOST 5542-87 defined only a few quality parameters. Currently, the Code of Gas Transmission and Gas Distribution Systems, the Technical Regulation of Natural Gas and other regulations determine the quality of gas by more than 20 parameters that meet European standards. The methane content of natural gas must be at least 90 %, and the other components of the gas are also regulated. However, different deposits have different methane contents, ranging from 85 % to 99 %. Different natural gas composition affects not only its properties but also the reliability of the gas transmission and distribution system, and affects the work of individual end-users. This problem is especially relevant when using gas appliances with high efficiency. The problem of gas quality is important for gas metering in Ukraine and in settlements with other countries.
The analysis of gas composition and quality during its transportation from the field to the consumer is carried out. It is determined that in the distribution part of the gas transmission system of Ukraine there are deviations in gas quality from regulatory requirements. This reduces the efficiency of some gas appliances and throttling equipment.
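Of the quality parameters mentioned above, the Wobbe number is the easiest to sketch: the higher heating value divided by the square root of the gas's specific gravity relative to air. The figures below are typical order-of-magnitude values, not measurements from the Ukrainian gas transmission system:

```python
def wobbe_index(hhv_mj_m3, gas_density_kg_m3, air_density_kg_m3=1.293):
    """Wobbe index (MJ/m^3): HHV over the square root of specific gravity.

    Gas and air densities must be taken at the same reference conditions.
    """
    specific_gravity = gas_density_kg_m3 / air_density_kg_m3
    return hhv_mj_m3 / specific_gravity ** 0.5

# Pipeline-quality natural gas, roughly 37.7 MJ/m^3 HHV and 0.75 kg/m^3:
w = wobbe_index(37.7, 0.75)
```

Two gases with the same Wobbe index deliver the same heat through the same burner orifice, which is why the parameter matters for appliance interchangeability.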
BASE
In: Australian Journal of Human Rights: AJHR, Vol. 24, No. 2, pp. 162-181
ISSN: 1323-238X