Adaptive estimation for the nonparametric bivariate additive model in random design with long-memory dependent errors

May 22, 2022

Rida Benhaddou, Qing Liu

Abstract

We investigate nonparametric bivariate additive regression estimation in random design with long-memory dependent errors and construct adaptive thresholding estimators based on wavelet series. The approach applies when the unknown function and its univariate additive components belong to Besov spaces. We consider two noise structures: homoskedastic Gaussian long-memory errors and heteroskedastic Gaussian long-memory errors. The convergence rates depend on the long-memory parameter only when long memory is strong enough. The approach extends to the general r-dimensional additive case and avoids the curse of dimensionality.

Introduction

Nonparametric additive regression models are the focus of this work. Additive models balance the flexibility of fully nonparametric estimation against the interpretability and efficiency of parametric procedures, and they have substantial applications, including the analysis of responses that depend nonlinearly on multiple predictors.

The problem has been studied extensively via a range of nonparametric methods, including kernel smoothing, local polynomials, and wavelets. The key difference between this study and its predecessors is that the errors exhibit long memory, which we handle under both homoskedasticity and heteroskedasticity.

Long-Memory Errors

Long memory arises in many nonparametric estimation problems, including regression and deconvolution, and has a substantial effect on the attainable convergence rates. It remains under-investigated in the context of additive models, a gap this study aims to fill.

Bridging the Gap

The primary goal of this paper is to study the nonparametric bivariate additive regression model with homoskedastic or heteroskedastic long-memory Gaussian noise. The proposed estimators recover the univariate components f and g of the additive regression function over Besov classes and attain quasi-optimal convergence rates asymptotically. The study also quantifies the effect of long memory, showing that it degrades the convergence rates only when it is strong enough.
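To fix ideas, a standard formulation of this model can be sketched as follows; the notation (sample size $n$, long-memory parameter $\alpha$, noise level $\sigma$) is an illustrative assumption, not necessarily the authors' exact setup:

\[
Y_i = f(X_i) + g(Z_i) + \sigma\,\varepsilon_i, \qquad i = 1, \dots, n,
\]

where $(X_i, Z_i)$ are random design points, $f$ and $g$ are the unknown univariate additive components, and $(\varepsilon_i)$ is a stationary Gaussian process whose autocovariance decays slowly, e.g. $\operatorname{Cov}(\varepsilon_i, \varepsilon_{i+k}) \sim c\,|k|^{-\alpha}$ with $0 < \alpha < 1$; smaller $\alpha$ corresponds to stronger long memory. In the heteroskedastic variant, $\sigma$ is replaced by a function of the design point.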

An Extension to the General r-Dimensional Additive Case

Beyond the bivariate model, the proposed approach accommodates the general r-dimensional additive model without a substantial increase in complexity and without suffering from the curse of dimensionality. The paper therefore extends existing work to the multivariate additive setting.

Estimation Algorithm

The methodology rests on an estimation algorithm built from wavelet bases with compact support and vanishing moments. Using the associated scaling functions, the algorithm expands the unknown functions as wavelet series and thresholds the empirical wavelet coefficients, yielding the adaptive estimators applied to the nonparametric bivariate additive regression model.
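The wavelet-thresholding idea above can be illustrated with a minimal, self-contained sketch for one univariate component. It uses the Haar basis (the simplest compactly supported wavelet) and a universal hard threshold with i.i.d. Gaussian noise as a stand-in; the signal, noise level, and threshold rule are illustrative assumptions, not the authors' exact estimator for long-memory errors.

```python
import numpy as np

def haar_dwt(v):
    """One level of the orthonormal Haar transform: (approximation, detail)."""
    a = (v[0::2] + v[1::2]) / np.sqrt(2)
    d = (v[0::2] - v[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Invert one level of the orthonormal Haar transform."""
    v = np.empty(2 * a.size)
    v[0::2] = (a + d) / np.sqrt(2)
    v[1::2] = (a - d) / np.sqrt(2)
    return v

def denoise(y, levels, lam):
    """Hard-threshold the detail coefficients at each scale, keep the
    coarse approximation, and reconstruct."""
    details, a = [], y
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(np.where(np.abs(d) > lam, d, 0.0))
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(0)
n = 1024                                   # length must be divisible by 2**levels
x = np.arange(n) / n
f = np.sin(2 * np.pi * x)                  # one smooth additive component
y = f + 0.3 * rng.standard_normal(n)       # i.i.d. noise stand-in for long memory

lam = 0.3 * np.sqrt(2 * np.log(n))         # universal threshold, known sigma
f_hat = denoise(y, levels=5, lam=lam)
print(np.mean((f_hat - f) ** 2), np.mean((y - f) ** 2))
```

Hard thresholding keeps only the large wavelet coefficients, which carry the signal, and zeroes the small ones, which are mostly noise; the reconstruction should have a noticeably smaller mean squared error than the raw observations. Adaptivity in the paper's sense means the threshold rule does not need to know the Besov smoothness of f in advance.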
