Regularization techniques in joinpoint regression
In: Statistical papers, Volume 57, Issue 4, pp. 939-955
ISSN: 1613-9798
In: Computers and electronics in agriculture: COMPAG online; an international journal, Volume 143, pp. 79-89
ISSN: 1872-7107
In: Asian journal of research in social sciences and humanities: AJRSH, Volume 6, Issue 12, p. 592
ISSN: 2249-7315
In: Vojnotehnički glasnik: naučni časopis Ministerstva Odbrane Republike Srbije = Military technical courier : scientific periodical of the Ministry of Defence of the Republic of Serbia = Voenno-techničeskij vestnik : naučnyj žurnal Ministerstva Oborony Respubliki Serbija, Volume 70, Issue 3, pp. 720-733
ISSN: 2217-4753
Introduction/purpose: The principal regularization schemes and their validity for gauge field theories are discussed. Methods: Dimensional regularization, Pauli-Villars regularization and lattice regularization are reviewed. Results: The Coleman-Mandula theorem shows which gauge theories are renormalizable. Conclusion: Some gauge field theories are renormalizable, the Standard Model in particular.
Paper presented at the 9th International Conference, FIMH 2017, held 11-13 June 2017 in Toronto, Canada. The electrocardiographic imaging (ECGI) inverse problem is highly ill-posed, and regularization is needed to stabilize the problem and to provide a unique solution. When Tikhonov regularization is used, choosing the regularization parameter is a challenging problem. Mathematically, a suitable value for this parameter needs to fulfill the Discrete Picard Condition (DPC). In this study, we propose two new methods to choose the regularization parameter for ECGI with the Tikhonov method: i) a new automatic technique based on the DPC, which we named ADPC, and ii) the U-curve method, introduced in other fields for cases where the well-known L-curve method fails or provides an over-regularized solution, and not yet tested in ECGI. We calculated the Tikhonov solution with the ADPC and U-curve parameters for in-silico data, and we compared them with the solutions obtained with other automatic regularization choice methods widely used for the ECGI problem (CRESO and L-curve). ADPC provided a better correlation coefficient of the potentials in time and of the activation time (AT) maps, with lower error in most cases compared to the other methods. Furthermore, we found that for in-silico spiral wave data the L-curve method over-regularized the solution, and the AT maps could not be solved for some of these cases. U-curve and ADPC provided the best solutions in these last cases. This study received financial support from the French Government under the "Investments of the Future" program managed by the National Research Agency (ANR), Grant reference ANR-10-IAHU-04, and from the Conseil Régional Aquitaine as part of the project "Assimilation de données en cancérologie et cardiologie". This work was granted access to the HPC resources of TGCC under the allocation x2016037379 made by GENCI.
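For intuition, here is a minimal sketch of Tikhonov regularization with a U-curve-style choice of the parameter on a toy ill-posed problem; the forward matrix, noise level and lambda grid are illustrative assumptions, not the ECGI setup of the study.

```python
# Minimal sketch (not the authors' code): Tikhonov regularization with a
# U-curve style choice of the regularization parameter on a toy problem.
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Toy ill-conditioned forward operator (stand-in for the ECGI transfer matrix)
U_, _ = np.linalg.qr(rng.normal(size=(n, n)))
V_, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -8, n)                       # rapidly decaying singular values
A = U_ @ np.diag(s) @ V_.T
x_true = rng.normal(size=n)
b = A @ x_true + 1e-6 * rng.normal(size=n)      # noisy measurement data

U, sigma, Vt = np.linalg.svd(A)
beta = U.T @ b                                   # Picard coefficients u_i^T b

def tikhonov(lam):
    """x_lambda = sum_i sigma_i / (sigma_i^2 + lam^2) * (u_i^T b) v_i."""
    filt = sigma / (sigma**2 + lam**2)
    return Vt.T @ (filt * beta)

def u_curve(lam):
    x = tikhonov(lam)
    rho = np.sum((A @ x - b) ** 2)               # squared residual norm
    eta = np.sum(x ** 2)                         # squared solution norm
    return 1.0 / rho + 1.0 / eta                 # U-curve functional

lams = np.logspace(-10, 2, 200)
lam_best = min(lams, key=u_curve)                # minimizer of the U-curve
x_hat = tikhonov(lam_best)
print(f"lambda* = {lam_best:.3e}, rel. error = "
      f"{np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```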
BASE
In: Forthcoming, Journal of the Knowledge Economy
SSRN
In: Preprint 160
In the numerical solution of stochastic differential equations (SDEs), phenomena such as sudden large fluctuations (explosions), negative paths, or unbounded solutions are sometimes observed, in contrast to the qualitative behaviour of the exact solution. To overcome this dilemma we construct regular (bounded) numerical solutions through implicit techniques without discretizing the state space. For discussion and classification, the notion of the life time of numerical solutions is introduced. The task then consists in constructing numerical solutions with lengthened life time, up to an eternal one. During the exposition we outline the role of implicitness in this `process of numerical regularization'. Boundedness (nonnegativity) of some implicit numerical solutions can be proved, at least for a class of linearly bounded models. Balanced implicit methods (BIMs) turn out to be very efficient for this purpose. Furthermore, the local property of conditional positivity of numerical solutions is shown constructively (by special BIMs). The suggested approach also gives some motivation to use BIMs for the construction of numerical solutions for SDEs on bounded manifolds with `natural conditions' on their boundaries ...
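As a rough illustration of the balanced implicit idea, the following sketch applies one common BIM weight choice to geometric Brownian motion; the model, the weights c0 = |mu| and c1 = sigma, and the step sizes are assumptions made for illustration, not taken from the preprint.

```python
# Hedged sketch: one balanced implicit method (BIM) step for geometric
# Brownian motion dX = mu*X dt + sigma*X dW.  With c0 = |mu| and c1 = sigma
# the numerical path stays strictly positive whenever X0 > 0.
import numpy as np

def bim_path(x0, mu, sigma, T=1.0, n_steps=1_000, seed=0):
    rng = np.random.default_rng(seed)
    h = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dW = np.sqrt(h) * rng.standard_normal()
        drift = mu * x[n] * h
        diff = sigma * x[n] * dW
        # balancing weight C_n = c0*h + c1*|dW| with c0 = |mu|, c1 = sigma
        C = abs(mu) * h + sigma * abs(dW)
        # implicit balanced step: X_{n+1} = X_n + (drift + diff) / (1 + C)
        x[n + 1] = x[n] + (drift + diff) / (1.0 + C)
    return x

path = bim_path(x0=1.0, mu=-2.0, sigma=1.0)
print("min of BIM path:", path.min())   # stays > 0, unlike an explicit Euler path with large steps
```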
SSRN
In: IMF Working Paper No. 17/107
SSRN
The power amplifier (PA) is the most critical subsystem in terms of linearity and power efficiency. Digital predistortion (DPD) is commonly used to mitigate nonlinearities while the PA operates at levels close to saturation, where the device presents its highest power efficiency. Since the DPD is generally based on Volterra series models, its number of coefficients is high, producing ill-conditioned and over-fitted estimations. Recently, a plethora of techniques have been independently proposed for reducing their dimensionality. This paper presents a fair benchmark of the most relevant order-reduction techniques in the literature, categorized as follows: (i) greedy pursuits, including Orthogonal Matching Pursuit (OMP), Doubly Orthogonal Matching Pursuit (DOMP), Subspace Pursuit (SP) and Random Forest (RF); (ii) regularization techniques, including ridge regression and least absolute shrinkage and selection operator (LASSO); (iii) heuristic local search methods, including hill climbing (HC) and dynamic model sizing (DMS); and (iv) global probabilistic optimization algorithms, including simulated annealing (SA), genetic algorithms (GA) and adaptive Lipschitz optimization (adaLIPO). The comparison is carried out in terms of modeling and linearization performance and runtime. The results show that greedy pursuits, particularly the DOMP, provide the best trade-off between execution time and linearization robustness against dimensionality reduction. This work was supported in part by the Spanish Government (Ministerio de Ciencia, Innovación y Universidades) and Fondo Europeo de Desarrollo Regional (FEDER) under Grants TEC2017-83343-C4-2-R and PID2020-113832RB-C21 and in part by the Government of Catalonia and the European Social Fund under Grants 2017-SGR-813 and 2021-FI-B-137.
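As a small illustration of one of the benchmarked regularization techniques, the sketch below fits a memory-polynomial PA model with complex-valued ridge regression; the basis, orders and synthetic signals are assumptions and do not reproduce the paper's benchmark.

```python
# Hedged sketch: ridge-regularized estimation of memory-polynomial PA/DPD
# coefficients on made-up complex baseband data.
import numpy as np

def memory_polynomial_basis(x, K=5, M=3):
    """Regressors phi_{k,m}(n) = x(n-m) * |x(n-m)|**(k-1) for odd k."""
    N = len(x)
    cols = []
    for m in range(M + 1):
        xm = np.concatenate([np.zeros(m, dtype=complex), x[:N - m]])
        for k in range(1, K + 1, 2):                 # odd orders 1, 3, 5, ...
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

def ridge_fit(Phi, y, lam=1e-3):
    """w = (Phi^H Phi + lam I)^{-1} Phi^H y  (complex-valued ridge)."""
    G = Phi.conj().T @ Phi
    return np.linalg.solve(G + lam * np.eye(G.shape[0]), Phi.conj().T @ y)

# toy complex baseband input and a mildly nonlinear "PA" output
rng = np.random.default_rng(1)
x = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
y = x - 0.05 * x * np.abs(x) ** 2 \
    + 0.01 * (rng.normal(size=4096) + 1j * rng.normal(size=4096))

Phi = memory_polynomial_basis(x)
w = ridge_fit(Phi, y)
nmse = np.sum(np.abs(y - Phi @ w) ** 2) / np.sum(np.abs(y) ** 2)
print(f"model NMSE: {10 * np.log10(nmse):.1f} dB")
```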
BASE
In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission. ; This work was supported by the NASA/Goddard Space Flight Center, the Spanish Ministry of Science and Technology through projects CGL2010-18782 and CSD2007-00067, the Andalusian Regional Government through projects P10-RNM-6299 and P08-RNM-3568, the EU through ACTRIS project (EU INFRA-2010-1.1.16-262254) and the Postdoctoral Program of the University of Granada.
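The additive property of bias-induced deviations mentioned above can be illustrated with a toy, linear stand-in for the retrieval; the kernel, parameters and 5% bias below are invented for illustration and are unrelated to the actual PIC inversion software.

```python
# Illustrative sketch (toy stand-in, not the PIC/NASA retrieval code): perturb
# each optical channel (3 backscatter + 2 extinction) separately and jointly,
# then compare the retrieved-parameter deviations.
import numpy as np

rng = np.random.default_rng(2)
K = rng.normal(size=(5, 3))            # toy forward kernel: 5 optical data <- 3 microphysical params
p_true = np.array([0.2, 1.5, 0.8])     # e.g. effective radius, number conc., volume conc. (arbitrary units)
g = K @ p_true                          # unperturbed optical data

def retrieve(g_obs, lam=1e-2):
    """Tikhonov-regularized least squares (stand-in for the inversion)."""
    return np.linalg.solve(K.T @ K + lam * np.eye(3), K.T @ g_obs)

p0 = retrieve(g)
bias = 0.05                             # 5 % systematic error per channel
dev_single = []
for i in range(5):
    gi = g.copy()
    gi[i] *= 1 + bias                   # perturb one channel at a time
    dev_single.append(retrieve(gi) - p0)

g_all = g * (1 + bias)                  # all channels biased simultaneously
dev_joint = retrieve(g_all) - p0
print("sum of single-channel deviations:", np.sum(dev_single, axis=0))
print("deviation with joint perturbation:", dev_joint)   # ~equal if additivity holds
```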
BASE
ABSTRACT The great expectation for the New Forest Code was that it would reduce hermeneutic distortions and increase legal certainty for farmers. However, the new legislation raised more uncertainties and discussions, mainly because it consolidates anthropic use and allows low-impact activities in areas that should, by law, be kept untouched. This study aimed to survey and describe the legislation related to protected areas in rural properties (APP and RL), to analyze the consolidated forms of use and occupation and the low-impact activities that can be developed in these areas, and to propose sustainable technical alternatives for interventions in already consolidated areas and for their recovery. The text is based on a literature and document review of the legal aspects of protected areas on rural properties in Brazil and of the main low-impact farming techniques, highlighting agroforestry systems as an alternative for consolidated occupations in environmental protection areas. The text presents, in an organized way, the main aspects of the legislation on such areas and describes the sustainable activities allowed in APP and RL under the flexibility introduced by the new Forest Code.
BASE
In: Emerging science journal, Volume 7, Issue 3, pp. 791-798
ISSN: 2610-9182
The Takagi-Sugeno-Kang (TSK) fuzzy approach is popular because its output is either a constant or a function. Parameter identification and structure identification are the two key requirements for building a TSK fuzzy system. The dimensionality of the input used in a TSK fuzzy system affects the number of rules produced: employing more data dimensions typically results in more rules, which increases rule complexity. This issue can be addressed by employing a dimension-reduction technique that reduces the number of dimensions in the data. The resulting rules are then optimized with mini-batch gradient descent (MBGD), modified with uniform regularization (UR). UR can enhance the generalization performance of the fuzzy TSK classifier. This study examines how the rough-set method can be used to reduce data dimensions and how mini-batch gradient descent with uniform regularization (MBGD-UR) can optimize the rules produced by TSK. Body-fat data from 252 respondents were used as input, and the mean absolute percentage error (MAPE) was used to evaluate the results. Data processing was carried out in the Python programming language using Jupyter Notebook. The analysis revealed a MAPE of 37%, which falls into the moderate range. DOI: 10.28991/ESJ-2023-07-03-09
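For orientation, here is a minimal sketch of a first-order TSK model trained with mini-batch gradient descent; a plain L2 penalty stands in for the paper's uniform regularization term (whose exact form is not reproduced here), and the data, rule count and hyper-parameters are illustrative assumptions.

```python
# Hedged sketch: a small first-order TSK fuzzy model with fixed Gaussian
# antecedents; consequent parameters are learned by mini-batch gradient
# descent with an L2 penalty (a stand-in for the paper's UR term).
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(252, 4))                   # e.g. reduced body-fat features (synthetic)
y = 10 + 20 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 252)

R = 5                                                  # number of rules
centers = X[rng.choice(len(X), R, replace=False)]      # rule centres
width = 0.3
W = np.zeros((R, X.shape[1] + 1))                      # consequent params [bias | slopes]

def firing(Xb):
    d2 = ((Xb[:, None, :] - centers[None]) ** 2).sum(-1)
    f = np.exp(-d2 / (2 * width ** 2))
    return f / f.sum(axis=1, keepdims=True)            # normalized firing levels

def predict(Xb):
    Xe = np.hstack([np.ones((len(Xb), 1)), Xb])        # add bias column
    return (firing(Xb) * (Xe @ W.T)).sum(axis=1)       # rule-weighted consequents

lr, lam, batch = 0.05, 1e-3, 32
for epoch in range(200):
    idx = rng.permutation(len(X))
    for s in range(0, len(X), batch):
        b = idx[s:s + batch]
        Xb, yb = X[b], y[b]
        Xe = np.hstack([np.ones((len(Xb), 1)), Xb])
        F = firing(Xb)
        err = predict(Xb) - yb
        grad = (F * err[:, None]).T @ Xe / len(Xb)      # gradient of the squared error w.r.t. W
        W -= lr * (grad + lam * W)                      # L2-regularized MBGD step

mape = np.mean(np.abs((y - predict(X)) / y)) * 100
print(f"MAPE: {mape:.1f} %")
```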
In: Child maltreatment: journal of the American Professional Society on the Abuse of Children, Volume 25, Issue 3, pp. 318-327
ISSN: 1552-6119
Despite an increasing awareness about the existence and harms of commercial sexual exploitation of children (CSEC), the identification of victims remains a challenge for practitioners, hindering their ability to provide appropriate services. Tools that gauge risk of CSEC support the identification of victims but are underdeveloped because most tools assess risk of CSEC within a general youth population. An understanding of what predicts actual CSEC victimizations among youths at higher risk of CSEC due to experiences of childhood adversities has been left unassessed. Research in this area is limited in part because traditional methods do not allow for an assessment of the unique impact of childhood adversities that tend to co-occur. To address these difficulties, the current study applied predictive regularization methods to identify the most decisive risk items for CSEC. Proximal risk of CSEC was assessed among 317 youths who were referred to a specialized program in the Northeast of the United States due to suspicion of CSEC. With an innovative methodological approach, this study seeks to prompt other scholars to examine risk utilizing novel techniques and provides a foundation for the development of concise tools that assess risk of CSEC among populations of youths at higher levels of risk.
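As an illustration of the general idea (not the study's exact method), the sketch below uses L1-penalized logistic regression, one common predictive regularization technique, to select decisive risk items from simulated binary indicators; the item count, coefficients and data are invented for the example.

```python
# Hedged illustration: lasso (L1-penalized) logistic regression selecting a
# sparse set of risk items from simulated binary adversity indicators.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n, items = 317, 20                                        # sample size ~study; items are hypothetical
X = rng.binomial(1, 0.3, size=(n, items)).astype(float)   # binary adversity/risk indicators
true_w = np.zeros(items)
true_w[[1, 4, 7]] = [1.5, 1.0, -0.8]                      # only a few items truly matter (simulated)
p = 1 / (1 + np.exp(-(X @ true_w - 0.5)))
y = rng.binomial(1, p)                                    # simulated CSEC indicator

Xs = StandardScaler().fit_transform(X)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.3).fit(Xs, y)
selected = np.flatnonzero(model.coef_[0])                 # items surviving the L1 penalty
print("retained risk items:", selected,
      "coefficients:", np.round(model.coef_[0][selected], 2))
```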
In: International Journal of Computer Engineering and Technology 10(3), pp. 110-118, 2019
SSRN