Statistics

My research in Statistics revolves around different topics in Dimension Reduction (the topics below are in no particular order; references can be found in my Publications list):

  1. SVM-based Sufficient Dimension Reduction: I have been working on a number of topics in this area since my Ph.D. years, when, together with my supervisor (Prof. Bing Li) and Dr. Lexin Li, I introduced what I now call the SVM-based SDR methodology (SVM stands for Support Vector Machines): a unified framework for linear and nonlinear feature extraction (see Li, Artemiou and Li (2011) for all the details; a schematic sketch of the idea follows the sub-topics below). Nowadays, I am working on a number of other projects:
    • Robustifying SVM-based SDR algorithms. There are many ideas that can be used to robustify the SVM-based SDR methodology; many of them come from the classification framework in which SVMs were originally introduced. One paper on this topic is currently submitted and two more are in progress. This is also the topic on which Prof. Alexandros Karagrigoriou and I will focus while supervising Mr. Kimon Ntotsis's Ph.D. dissertation.
    • Introducing real-time SVM-based SDR algorithms. I have a couple of papers in progress on this topic with my collaborators Dr. Y. Dong (Temple University) and Dr. SJ Shin (Korea University). The basic idea is to update our estimate of the reduced subspace in real time as new observations arrive; this is essential now that computing power allows for the continuous collection of data.
    • Using SVM-based SDR methodology in very high-dimensional problems. Motivated by the known problematic behaviour of SVMs in very high dimensions, we are developing algorithms that improve the performance of SVM-based SDR in such settings. This is current work with my Ph.D. student Hayley Randall.
    • Post-dimension-reduction inference. With my collaborator Prof. Gerda Claeskens (KU Leuven), I am looking to address the problem of performing valid inference after dimension reduction.
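To make the flavour of the SVM-based approach concrete, here is a minimal sketch in the spirit of Li, Artemiou and Li (2011), not the published algorithm itself: the response is dichotomized at several cutoffs, a linear SVM is fitted to each split, and the leading eigenvectors of the summed outer products of the SVM normal vectors estimate the reduction directions. The cutoffs, the cost parameter C and the use of scikit-learn's LinearSVC are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def svm_sdr_directions(X, y, n_cutoffs=5, n_dir=2, C=1.0):
    """Illustrative SVM-based SDR sketch: dichotomize y at several
    quantile cutoffs, fit a linear SVM per split, and take the leading
    eigenvectors of the summed outer products of the SVM normals."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize predictors
    M = np.zeros((X.shape[1], X.shape[1]))
    for q in np.linspace(0.1, 0.9, n_cutoffs):   # illustrative cutoffs
        labels = (y > np.quantile(y, q)).astype(int)
        w = LinearSVC(C=C, max_iter=5000).fit(Xs, labels).coef_.ravel()
        M += np.outer(w, w)                      # accumulate normal vectors
    _, vecs = np.linalg.eigh(M)                  # eigenvalues ascending
    return vecs[:, -n_dir:][:, ::-1]             # top n_dir directions
```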
  2. Inverse-moment-based Sufficient Dimension Reduction: Inverse-moment-based methods are the oldest class of algorithms in the Sufficient Dimension Reduction (SDR) literature, and they are still the first go-to methods when developing new methodology, mainly due to their theoretical and computational simplicity. I am working on a number of topics in this framework, which can be grouped as follows (a sketch of the prototypical method appears after the sub-topics):
    • Robustifying inverse-moment-based SDR algorithms. We are exploring different ideas for robustifying inverse-moment-based SDR algorithms. We currently have a paper with my former CUROP student Stephen Babos (Babos and Artemiou, 2020) and another paper in progress.
    • Sufficient dimension reduction in the presence of categorical predictors. A number of methods have been developed in the literature for SDR in the presence of categorical predictors. With my Ph.D. student Ben Jones, we are looking to extend these ideas in a number of different directions.
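As a reference point for this group of methods, here is a minimal sketch of sliced inverse regression (Li, 1991), the prototypical inverse-moment-based method. The slice count and number of directions are illustrative choices, and this is the basic method rather than any of the robustified or categorical-predictor variants described above.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dir=2):
    """Basic sliced inverse regression: eigenvectors of the weighted
    covariance of within-slice means of the whitened predictors."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X, rowvar=False))  # Sigma = L @ L.T
    Z = np.linalg.solve(L, Xc.T).T                   # whitened predictors
    M = np.zeros((p, p))
    for sl in np.array_split(np.argsort(y), n_slices):
        m = Z[sl].mean(axis=0)                       # within-slice mean
        M += (len(sl) / n) * np.outer(m, m)
    _, vecs = np.linalg.eigh(M)
    B = vecs[:, -n_dir:][:, ::-1]                    # top directions, Z-scale
    return np.linalg.solve(L.T, B)                   # back to the X-scale
```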
  3. Methods that apply to both of the large classes above, inverse-moment-based and SVM-based SDR algorithms.
    • Flexible Sufficient Dimension Reduction. The flexible sufficient dimension reduction idea is essentially an algorithm that lets you choose the most suitable framework from which to select a dimension reduction method. This is currently work in progress.
    • HDLSS problems. High-dimension, low-sample-size (large p, small n) problems pose a great challenge for SDR algorithms. With my collaborator Eugen Pircalabelu (UC Louvain), we are currently working on a couple of ideas to address this issue in different SDR algorithms. We have one paper submitted and another in progress.
  4. Dimension Reduction through Exponential Family Principal Component Analysis (PCA). With partial funding from the University Health Board, my Ph.D. student Luke Smallman found that there are only a limited number of methods for performing dimension reduction on text data. Text data cannot be assumed Gaussian and are more appropriately modelled using the Poisson distribution. They are also very high-dimensional (if we treat each word as a different variable), so there is a need for sparse dimension reduction methods. We have extended the previous literature, using appropriate penalties to induce sparsity. One method was published in Pattern Recognition (Smallman, Artemiou and Morgan, 2018) and one in Computational Statistics (Smallman, Underwood and Artemiou, 2019). We are also looking into creating a unified framework for exponential family PCA algorithms using different ideas; a toy sketch of the underlying idea follows.
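To show the flavour of these methods (this is a toy illustration, not the published algorithm from either paper), here is a sketch of sparse Poisson PCA: fit a low-rank natural-parameter matrix to a count matrix by gradient descent on the Poisson negative log-likelihood, soft-thresholding the loadings to induce sparsity. The rank, step size, penalty level and initialization are all arbitrary illustrative choices.

```python
import numpy as np

def sparse_poisson_pca(X, k=2, lam=0.1, lr=1e-3, n_iter=500):
    """Toy sparse Poisson PCA: model counts X through a rank-k natural
    parameter matrix Theta = U @ V.T, minimizing the Poisson negative
    log-likelihood with an L1 (soft-threshold) step on the loadings V."""
    n, p = X.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n, k))
    V = rng.normal(scale=0.1, size=(p, k))
    for _ in range(n_iter):
        R = np.exp(U @ V.T) - X        # gradient of the NLL w.r.t. Theta
        U_new = U - lr * (R @ V)
        V = V - lr * (R.T @ U)
        U = U_new
        # Soft-threshold the loadings for sparse, interpretable axes
        V = np.sign(V) * np.maximum(np.abs(V) - lr * lam, 0.0)
    return U, V                        # scores and sparse loadings
```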
  5. Dimension Reduction for Tensors. With my collaborator Dr. Joni Virta (Turku University), I am working on a number of different ideas for tensors. We are in the very early stages of these projects, trying to identify directions of mutual interest.
  6. Envelope models. With my Ph.D. student Alya Alzahrani, we are considering a number of topics to work on in this area; we do not yet have a concrete direction.
  7. Philosophical research on PCA. Following three papers with my Ph.D. supervisor Prof. Li (Artemiou and Li, 2009, 2013) and my current Ph.D. student Ben Jones (Jones and Artemiou, 2018+) on the appropriateness of PCA as a dimension reduction tool in linear regression, nonlinear regression and functional predictor settings, we have recently explored the appropriateness of kernel PCA as a nonlinear feature extraction tool in a regression setting (see Jones, Artemiou and Li, 2020); a small usage illustration follows. At this point, Ben and I are working on different extensions of this work.
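For readers unfamiliar with the setup, here is a generic illustration of what using kernel PCA as a nonlinear feature extractor before a regression fit looks like in practice; the data, kernel and number of components are made up, and this is not the analysis from Jones, Artemiou and Li (2020).

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Simulated data with a nonlinear signal in the first predictor
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Extract a few kernel principal components, then regress y on them
model = make_pipeline(KernelPCA(n_components=3, kernel="rbf"),
                      LinearRegression())
model.fit(X, y)
print(model.score(X, y))  # in-sample R^2 of the kPCA-then-regress pipeline
```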
  8. Classification algorithms. With a number of students (mainly MSc and BSc project students), I am trying to propose new classification algorithms that improve on the existing literature. We currently have work in progress with my former BSc student Michalis Panayides.