In the context of Monte Carlo simulation, Latin hypercube sampling (LHS) is a well-studied method for reducing the variance of the estimator. I analyse Latin hypercube sampling with dependence (LHSD), a method first suggested by Stein [1987] and further developed by Packham and Schmidt [2010], which extends LHS to settings with dependent marginal distributions. I find that the LHSD estimator is consistent and asymptotically unbiased. As a final theoretical result, I show that the variance of the LHSD estimator never exceeds that of plain Monte Carlo simulation, provided the estimator fulfils certain monotonicity requirements. Subsequently, I apply LHSD, among other methods, to the pricing of a first-to-default swap and find that its use remains worthwhile even after accounting for the increased computation time.
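To make the idea concrete, the following is a minimal sketch of the LHSD construction in Python, under the assumption that we start from n dependent draws (e.g. generated via a copula) and a list of marginal inverse CDFs. Each marginal sample is replaced by its rank-based stratified counterpart, which preserves the rank dependence structure of the original draws while stratifying every marginal; the function name `lhsd` and its interface are illustrative, not taken from any particular library.

```python
import numpy as np

def lhsd(samples, marginal_ppf):
    """Latin hypercube sampling with dependence (illustrative sketch).

    samples      : (n, d) array of draws of a dependent random vector
    marginal_ppf : list of d inverse-CDF (quantile) callables
    """
    n, d = samples.shape
    out = np.empty_like(samples, dtype=float)
    for j in range(d):
        # rank (1..n) of each draw within the j-th marginal
        ranks = np.argsort(np.argsort(samples[:, j])) + 1
        # stratified uniforms: one point per stratum [(k-1)/n, k/n),
        # placed at the stratum midpoint, keeping the original ranks
        u = (ranks - 0.5) / n
        # map back through the marginal quantile function
        out[:, j] = marginal_ppf[j](u)
    return out
```

Because the ranks of the original draws are retained, the output has the same ordinal dependence between coordinates as the input sample, while each marginal is perfectly stratified over [0, 1] before the quantile transform.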