Cover; Half-title; Title; Copyright; Contents; Figures; Tables; Preface
1 In the beginning
2 Basic notions of statistics
3 Choosing
4 Paradigms of choice data
5 Processes in setting up stated choice experiments
6 Choices in data collection
7 NLOGIT for applied choice analysis: a primer
8 Handling choice data
9 Case study: mode-choice data
10 Getting started modeling: the basic MNL model
11 Getting more from your model
12 Practical issues in the application of choice models
13 Allowing for similarity of alternatives
14 Nested logit estimation
15 The mixed logit model
In this study we show how the coexistence of different decision rules can be accommodated in discrete choice models. Specifically, we present a generic hybrid model specification in which some attributes are processed using conventional linear-additive utility-maximization rules, while others are processed using regret-minimization rules. On two datasets of revealed and stated choices, we show that particular hybrid specifications, containing both regret-based and utility-based attribute decision rules, outperform choice models in which all attributes are assumed to be processed by one and the same decision rule, in terms of both model fit and out-of-sample predictive ability, although in our data the differences between models are very small. Implications in terms of marginal willingness-to-pay (WtP) measures are derived for the different hybrid model specifications and applied in the context of the two datasets. We find that, in the context of our data, hybrid WtP measures differ substantially from conventional utility-based WtP measures, and that the hybrid WtP specifications allow for a richer (choice-set-specific) interpretation of the trade-offs that people make.
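The abstract above describes hybrid utility/regret attribute processing at a high level. As a hedged illustration, the Python sketch below shows one common way such a specification can be written down: utility-processed attributes enter a linear-additive term, while regret-processed attributes enter through pairwise log-sum comparisons against every competing alternative, in the style of random regret minimization. The function names, coefficient values, and the split of attributes between the two rules are illustrative assumptions, not the specification estimated in the paper.

```python
import math

def hybrid_propensity(alternatives, util_betas, regret_betas):
    """Illustrative hybrid specification: attributes in `util_betas`
    enter a linear-additive utility term; attributes in `regret_betas`
    enter a regret term built from ln(1 + exp(beta * (x_j - x_i)))
    comparisons against every rival alternative. Returns one choice
    propensity (utility minus regret) per alternative."""
    propensities = []
    for i, x_i in enumerate(alternatives):
        # Utility-processed attributes: conventional linear-additive term.
        utility = sum(b * x_i[m] for m, b in util_betas.items())
        # Regret-processed attributes: pairwise comparisons with rivals.
        regret = 0.0
        for j, x_j in enumerate(alternatives):
            if j == i:
                continue
            for m, b in regret_betas.items():
                regret += math.log(1.0 + math.exp(b * (x_j[m] - x_i[m])))
        propensities.append(utility - regret)
    return propensities

def logit_probs(propensities):
    """Standard logit transform of the propensities into probabilities."""
    exps = [math.exp(p) for p in propensities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical two-alternative choice set: travel time processed via
# utility, cost processed via regret (an arbitrary split for illustration).
alts = [{"time": 10.0, "cost": 5.0}, {"time": 20.0, "cost": 3.0}]
probs = logit_probs(hybrid_propensity(alts, {"time": -0.1}, {"cost": -0.2}))
```

Note that with a negative regret coefficient on cost, an alternative accumulates regret whenever a rival is cheaper, which is the intended asymmetry of regret-based processing relative to linear utility.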
There is a small but growing literature that promotes deriving distributions of willingness-to-pay (WTP) estimates using information specific to each individual observation. These are referred to as individual conditional distributions, in contrast to approaches that rely on unconditional distributions, which use random assignment in constructing WTP distributions within a sampled population. The interest in alternative specifications is in large measure attributable to the search for empirical ways of deriving a WTP distribution that satisfies a behaviourally acceptable sign and range over the entire domain. In this paper we examine both conditional and unconditional approaches to establishing WTP distributions within the context of a mixed logit model. We find that calculating WTP measures from ratios of individual-level parameters, rather than drawing them from unconditional population distributions, empirically reduces the incidence of extreme values. Our results suggest that, although problematic estimates cannot be ruled out, the extra information in each individual's choices is a valuable input into the derivation of WTP distributions.
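As a hedged illustration of the mechanism described above, the simulation below contrasts WTP ratios computed from raw draws out of an assumed population distribution (the unconditional approach) with ratios computed after shrinking each draw toward the population mean, a crude stand-in for the information gained by conditioning on an individual's observed choices. All distributions, coefficient values, the shrinkage factor, and the 0–20 "acceptable range" are invented for illustration and are not the paper's estimates.

```python
import random

def wtp(time_draws, cost_draws):
    """WTP for a time saving as the ratio of time to cost coefficients."""
    return [bt / bc for bt, bc in zip(time_draws, cost_draws)]

def shrink(draws, mean, factor=0.3):
    """Pull each draw toward the population mean: an illustrative
    stand-in for conditioning on observed choices, not the actual
    Bayes-conditional estimator used with mixed logit."""
    return [mean + factor * (d - mean) for d in draws]

random.seed(42)
n = 10_000

# Unconditional draws from assumed population distributions. A normally
# distributed cost coefficient can land arbitrarily close to zero, which
# is what produces behaviourally implausible (extreme or wrong-signed)
# WTP ratios.
time_b = [random.gauss(-1.5, 0.5) for _ in range(n)]
cost_b = [random.gauss(-0.3, 0.1) for _ in range(n)]
uncond = wtp(time_b, cost_b)

# "Conditional" counterpart: shrunken coefficients keep the denominator
# away from zero, so the ratio is far better behaved.
cond = wtp(shrink(time_b, -1.5), shrink(cost_b, -0.3))

def extreme(ratios, lo=0.0, hi=20.0):
    """Count WTP values outside a behaviourally acceptable range."""
    return sum(1 for w in ratios if not (lo < w < hi))
```

In a real mixed logit application the individual-level parameters would come from the estimated model (conditioning each respondent's coefficient distribution on their observed choice sequence), but the qualitative effect is the one shown here: keeping the cost coefficient away from zero sharply reduces the incidence of extreme ratios.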
Over the past ten years, the use of the Internet and e-mail as communication tools has become ubiquitous. In the survey arena, the rising costs of gathering data have been partly offset by Internet and e-based technologies, which offer a range of new, relatively cost-effective survey design and delivery options. This paper reports on two studies in which Microsoft Excel was used to design surveys and gather data without the additional investment associated with specialist programs. The first study examines the development of a multi-attribute survey conducted to create a new scale using a local (Australian) population of students. The second describes the use of Excel in a stated choice experiment sent to an international sample of museum managers. These studies show that the approach requires minimal programming skill on the part of the researcher while offering many of the cost, administrative, and questionnaire-design benefits seen with specialist software and Internet delivery. We conclude that Microsoft Excel is worth considering when designing online surveys, as it provides a wide range of features that allow for flexible, rich instrument design and fast, potentially accurate data collection, checking, and entry.