The influence of dimensions on the complexity of computing decision trees

Stephen Kobourov, Maarten Löffler, Fabrizio Montecchiani, Marcin Pilipczuk, Ignaz Rutter, Raimund Seidel, Manuel Sorge, Jules Wulms

Research output: Contribution to journal › Article › peer-review

Abstract

A decision tree recursively splits a feature space R^d and then assigns class labels based on the resulting partition. Decision trees have been part of the basic machine-learning toolkit for decades. A large body of work considers heuristic algorithms that compute a decision tree from training data, usually aiming in particular to minimize the size of the resulting tree. In contrast, little is known about the complexity of the underlying computational problem of computing a minimum-size tree for the given training data. We study this problem with respect to the number d of dimensions of the feature space R^d, which contains n training examples. We show that it can be solved in O(n^{2d+1}) time, but under reasonable complexity-theoretic assumptions it is not possible to achieve f(d)·n^{o(d/log d)} running time. The problem is solvable in (dR)^{O(dR)}·n^{1+o(1)} time if there are exactly two classes and R is an upper bound on the number of tree leaves labeled with the first class.
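
To make the object being minimized concrete, the following is a minimal illustrative sketch (not taken from the paper; all names and the choice of counting leaves as the size measure are assumptions): an axis-aligned decision tree over R^d that recursively splits on single coordinates and labels the cells of the resulting partition.

```python
# Minimal sketch of an axis-aligned decision tree over R^d.
# Illustrative only; names and structure are not the authors' notation or code.

class Leaf:
    def __init__(self, label):
        self.label = label          # class label assigned to this cell of the partition

class Split:
    def __init__(self, dim, threshold, left, right):
        self.dim = dim              # which of the d coordinates to test
        self.threshold = threshold  # split value along that coordinate
        self.left = left            # subtree for points with x[dim] <= threshold
        self.right = right          # subtree for points with x[dim] >  threshold

def classify(node, x):
    """Follow splits until a leaf is reached, then return its label."""
    while isinstance(node, Split):
        node = node.left if x[node.dim] <= node.threshold else node.right
    return node.label

def size(node):
    """Number of leaves, one common measure of tree size."""
    if isinstance(node, Leaf):
        return 1
    return size(node.left) + size(node.right)

# Example in d = 2: split on coordinate 0 at 0.5, then label the two cells.
tree = Split(0, 0.5, Leaf("A"), Leaf("B"))
assert classify(tree, (0.3, 0.9)) == "A"
assert classify(tree, (0.7, 0.1)) == "B"
assert size(tree) == 2
```

The paper asks how hard it is to find, given the n labeled training points, a tree of this kind that classifies them correctly while being as small as possible, with the dimension d as the governing parameter.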

Original language: English (US)
Article number: 104322
Journal: Artificial Intelligence
Volume: 343
DOIs
State: Published - Jun 2025

Keywords

  • Decision trees
  • Machine learning
  • Parameterized complexity

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
  • Artificial Intelligence

