---
abstract: |
  Developing foundation models for time series classification is of high practical relevance, as such models can serve as universal feature extractors for diverse downstream tasks. Although early models such as Mantis have shown the promise of this approach, a substantial performance gap remained between frozen and fine-tuned encoders. In this work, we introduce methods that significantly strengthen zero-shot feature extraction for time series. First, we introduce Mantis+, a variant of Mantis pre-trained entirely on synthetic time series. Second, through controlled ablation studies, we refine the architecture and obtain MantisV2, an improved and more lightweight encoder. Third, we propose an enhanced test-time methodology that leverages intermediate-layer representations and refines output-token aggregation. In addition, we show that performance can be further improved via self-ensembling and cross-model embedding fusion. Extensive experiments on UCR, UEA, Human Activity Recognition (HAR) benchmarks, and EEG datasets show that MantisV2 and Mantis+ consistently outperform prior time series foundation models, achieving state-of-the-art zero-shot performance.
author:
- Vasilii Feofanov
- Songkang Wen
- Jianfeng Zhang
- Lujia Pan
- Ievgen Redko
bibliography:
- references.bib
date: '`\vspace{-0.7cm}`{=latex}'
title: 'MantisV2: Closing the Zero-Shot Gap in Time Series Classification with Synthetic Data and Test-Time Strategies'
---

```{=latex}
\newcommand*\pct{\mkern2mu\scalebox{.9}{\%}}
```
```{=latex}
\newcommand{\theHalgorithm}{\arabic{algorithm}}
```
```{=latex}
\newcommand{\algoname}{Mantis}
```
```{=latex}
\newcommand{\tick}{\pmb{\textcolor{ForestGreen}{$\bm{\checkmark}$}}~}
```
```{=latex}
\newcommand{\crossi}{\pmb{\textcolor{BrickRed}{$\times$}}~}
```
```{=latex}
\renewcommand\Authfont{\bfseries}
```
```{=latex}
\renewcommand\Authsep{,\quad}
```
```{=latex}
\renewcommand\Authand{ and }
```
```{=latex}
\renewcommand\Authands{,\quad}
```
```{=latex}
\renewcommand\theaffil{}
```
```{=latex}
\renewcommand\Affilfont{\normalfont}
```
```{=latex}
\renewcommand{\shorttitle}{\algoname V2: Foundation Model for Time Series Classification}
```
```{=latex}
\newcommand{\mycite}[1]{\citeauthor{#1}, \citeyear{#1}}
```
```{=latex}
\maketitle
```
```{=latex}
\centering
```
  ---------------- -------------------------------------------------------
      **Mantis+**: <https://huggingface.co/paris-noah/MantisPlus>
     **MantisV2**: <https://huggingface.co/paris-noah/MantisV2>
    **CauKer 2M**: <https://huggingface.co/datasets/paris-noah/CauKer2M>
  ---------------- -------------------------------------------------------

Introduction
============

Classification of time series data is a fundamental problem across science and industry, arising in domains such as activity / action recognition [@chen2025comodo; @li2025zara; @jie2026novel], power electronics [@liao2025pe; @li2025data], observability [@feng2025telecomts], healthcare [@alchieri2025exploring; @wong2025large], finance [@lee2023stockemotions], and neuroscience [@wang2024cbramod; @gnassounou2025leveraging]. Following the success of foundation models in vision [@radford2021CLIP] and language [@achiam2023gpt], the development of time series foundation models (TSFMs) has become an active research direction. These models aim to serve as universal feature extractors for diverse downstream tasks, reducing the need for extensive labeled data and simplifying the process of model selection and tuning.

Over the past three years, a wide variety of TSFMs have been introduced. Several approaches adopt decoder-only architectures for forecasting [@cohen2025toto; @auer2025tirex; @ansari2025chronos2], while others rely on masked autoencoders [@goswami2024moment], adapt large language [@zhou2023onefitsall; @ashok2025beyond] or vision models [@chen2024visionts; @roschmann2025tivit]. For TSFMs designed specifically for classification, the main goal is to learn discriminative embeddings, thereby making self-distillation [@lin2024nutime] and contrastive objectives prevail [@albelwi2022survey]. Among these, Mantis [@feofanov2025mantis] stands out for achieving strong feature-extraction performance while remaining lightweight and operating in a purely frozen-encoder (zero-shot) regime.

Despite this progress, two important limitations remain. First, previous Mantis pre-training pipelines suffered from data leakage, as the pre-training corpus overlapped with evaluation datasets. Second, the performance gap between frozen and fine-tuned Mantis is still substantial, raising doubts about whether current foundation models can surpass modern self-supervised methods in time series classification. To address these issues, we present **MantisV2** and **Mantis+**, a new generation of Mantis-based time series foundation models built around three key ideas:

1.  *Synthetic-data pre-training.* We show that pre-training on large-scale synthetic time series [@xie2025cauker], generated to cover a broad range of temporal patterns, yields substantially more generalizable representations. Re-training the original Mantis architecture purely on synthetic data produces Mantis+.

2.  *Architecture refinement.* Through controlled ablation studies, we streamline and improve the architectural design of Mantis, resulting in MantisV2, a more lightweight yet more performant encoder.

3.  *Test-time optimization.* We introduce a comprehensive inference-time pipeline that enhances representation quality without any additional pre-training or fine-tuning. Our method incorporates the use of intermediate-layer representations, improved output-token aggregation, input-perturbation self-ensembling, and cross-model embedding fusion. Together, these components substantially improve the robustness and expressiveness of the frozen encoder.

Our extensive experiments on UCR [@dau2019ucr], UEA [@bagnall2018uea], Human Activity Recognition (HAR), and EEG datasets demonstrate the effectiveness of our approach. MantisV2 and Mantis+ consistently outperform prior TSFMs in the zero-shot setting. Moreover, we show that MantisV2 outperforms competitive self-supervised models such as TS2Vec [@yue2022ts2vec] and T-Loss [@franceschi2019tloss], while its fusion with a vision backbone [@roschmann2025tivit] can match the performance of fine-tuned Mantis, closing the zero-shot gap. We release all models by open-sourcing the pre-trained checkpoints on HuggingFace, updating the Mantis package on [GitHub](https://github.com/vfeofanov/mantis), and publishing the synthetic pre-training dataset on HuggingFace (**CauKer 2M**).

```{=latex}
\centering
```
![Final Performance on the UCR Benchmark.](pics/version2/teaser-plot-2.png){#fig:teaser-plot width="\\linewidth"}

Methodology {#sec:method}
===========

In this section, we present the main technical details behind Mantis: we formally introduce the problem setup, then present the architecture and the pre-training process.

Problem Setup {#sec:problem-setup}
-------------

Mathematically speaking, our time series classification foundation model is an encoder $F: \R^t \to \R^{q}$ that projects any time series $\mbf{x}\in\R^t$ with a fixed sequence length $t$ to a discriminative hidden space $\R^{q}$. During the pre-training phase, we observe an unlabeled pre-training set $\mathrm{X}_{\text{0}}$ that is large enough to learn rich embeddings that generalize well across different tasks. During the fine-tuning phase, we observe a supervised downstream task with observations $\mathrm{X}$ and labels $\mathrm{Y}$. Two options are then possible: 1) we use $F$ to extract deep embeddings $\mathrm{Z}=\{F(\mbf{x}),\ \mbf{x}\in\mathrm{X}\}$ and then learn any classifier $h:\R^{q}\to \{1,\dots,K\}$ using $\mathrm{Z}$ as features and $\mathrm{Y}$ as corresponding labels; 2) we append a classification head $h:\R^{q}\to \R^K$ and fine-tune $h\circo F$ by minimizing a loss function evaluated on the downstream dataset.

When time series with multiple channels $\mbf{x}=[\mbf{x}_1,\dots,\mbf{x}_d]\in\R^{d\times t},\ d>1$, are considered, we feed each channel $\mbf{x}_i,\ i\in[1,d]$, to the TSFM independently, i.e., the embedding of $\mbf{x}$ is defined as $\mbf{z}=\textrm{concat}\left[(F(\mbf{x}_i))_{1\leq i\leq d}\right]$, where $\textrm{concat}$ denotes the vector concatenation operator, and the input space of the classifier (head) is $\R^{dq}$. Alternatively, we can use adapters to mix channels and feed them to the encoder [@ilbert2025user; @benechehab2025adapts]. As this approach is complementary and orthogonal to the focus of our work, we leave its integration with the proposed model variants for future research.
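The channel-independent strategy above can be sketched in a few lines. This is a toy illustration, not the released implementation: the stand-in encoder `F` (a hypothetical name) simply computes four summary statistics per channel instead of the real Mantis embedding.

```python
import numpy as np

def encode_multivariate(F, x):
    """Channel-independent encoding: each of the d channels is embedded
    separately by F and the results are concatenated (d * q features)."""
    # x: array of shape (d, t); F maps a univariate series (t,) -> (q,)
    return np.concatenate([F(x[i]) for i in range(x.shape[0])])

# toy stand-in encoder: q = 4 summary statistics per channel
F = lambda s: np.array([s.mean(), s.std(), s.min(), s.max()])

x = np.random.randn(3, 512)   # d = 3 channels, t = 512
z = encode_multivariate(F, x)
print(z.shape)                # (12,) = d * q
```

A downstream classifier would then receive these $d\cdot q$-dimensional vectors as features.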

Architecture {#sec:architecture}
------------

In this section, we describe the architecture of Mantis, which adapts the Transformer architecture [@vaswani2017transformer] to time series data through a dedicated tokenization strategy. We fix the number of tokens to 32, which implies that the input sequence length $t$ must be a multiple of 32. This design choice differs from approaches such as @lin2024nutime and @goswami2024moment, which instead fix the patch (token) length. In earlier stages of development, we empirically found that both strategies lead to comparable performance. However, fixing the number of tokens is preferable under computational constraints, as the self-attention mechanism scales quadratically with the number of tokens. Consequently, an input time series must be resized or padded to a valid length: while @goswami2024moment and @lin2024nutime require the sequence length to be a multiple of the patch size, in our case it must be a multiple of the number of tokens, i.e., 32. By default, we resize all inputs to length 512. Further analysis of the impact of the interpolation length is provided in Section `\ref{sec:self-ensembling}`{=latex}.
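The default resizing step can be illustrated with linear interpolation; this is a minimal sketch of the idea (the actual preprocessing in the released package may differ in interpolation scheme):

```python
import numpy as np

def resize_series(x, new_len=512):
    """Resize a univariate series to `new_len` points via linear
    interpolation, so its length is a multiple of the 32 tokens."""
    old = np.linspace(0.0, 1.0, num=len(x))
    new = np.linspace(0.0, 1.0, num=new_len)
    return np.interp(new, old, x)

x = np.sin(np.linspace(0, 2 * np.pi, 300))  # arbitrary original length
print(resize_series(x).shape)               # (512,)
```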

```{=latex}
\vspace{0.2cm}
```
**Token Generator Unit.** The first stage of the model encodes a raw time series into a set of meaningful tokens, which are subsequently processed by the Transformer. The token generation procedure consists of the following steps:

-   *Instance normalization.* For each time series instance, we subtract the mean and divide by the standard deviation computed across time steps. This normalization is implemented as part of the model architecture and applied during the forward pass.

-   *Patch extraction from the signal.* We extract patch-level representations by applying a single convolutional layer with a fixed kernel size, followed by mean pooling configured to produce 32 patches. The convolution outputs 256 channels, resulting in patch embeddings of dimension 256.

-   *Patch extraction from the first-order differential.* We apply the same patching procedure to the first-order temporal difference of the time series, computed as the difference between adjacent time steps. This differential representation encourages stationarity and reduces the influence of long-term trends.

-   *Patch-wise statistics encoding.* To preserve information about the original measurement scale, we split the raw (unnormalized) time series into 32 non-overlapping patches. For each patch, we compute the mean and standard deviation and encode these statistics using the Multi-Scaled Scalar Encoder [@lin2024nutime].

-   *Token projection.* All patch-level features (signal, differential signal, and statistical descriptors) are concatenated and passed through a linear projection layer followed by layer normalization [@ba2016layernormalization], producing the final set of 32 tokens with dimensionality 256.
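The steps above can be sketched end-to-end with random, untrained weights. This is a shape-level illustration only: the convolution filters and projection matrix are random stand-ins, and the third branch keeps raw patch statistics instead of the Multi-Scaled Scalar Encoder used in the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N_TOKENS, D = 512, 32, 256     # input length, tokens, embedding dim

def instance_norm(x):
    return (x - x.mean()) / (x.std() + 1e-8)

def conv_mean_pool(x, kernel=17, n_out=N_TOKENS):
    """One 1-D convolution (random stand-in filters, 256 output
    channels) followed by mean pooling down to `n_out` patches."""
    w = rng.standard_normal((D, kernel))
    xp = np.pad(x, kernel // 2)
    fm = np.stack([np.convolve(xp, w[c], mode="valid")[:T]
                   for c in range(D)])            # (D, T)
    return fm.reshape(D, n_out, -1).mean(axis=2).T  # (32, 256)

def patch_stats(x_raw, n=N_TOKENS):
    """Mean and std of 32 non-overlapping patches of the raw signal."""
    patches = x_raw.reshape(n, -1)
    return np.stack([patches.mean(1), patches.std(1)], axis=1)  # (32, 2)

x_raw = rng.standard_normal(T)
x = instance_norm(x_raw)
branch1 = conv_mean_pool(x)                          # signal patches
branch2 = conv_mean_pool(np.diff(x, prepend=x[0]))   # first-order diff
branch3 = patch_stats(x_raw)                         # raw-scale stats
feats = np.concatenate([branch1, branch2, branch3], axis=1)
proj = rng.standard_normal((feats.shape[1], D)) / np.sqrt(feats.shape[1])
tokens = feats @ proj                                # linear projection
print(tokens.shape)                                  # (32, 256)
```

In the real model, a layer normalization follows the projection, producing the final 32 tokens of dimension 256.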

**Transformer.** The resulting tokens are processed by a Transformer encoder, as summarized below:

-   *Class token.* A learnable class token is prepended to the 32 generated tokens. This token attends to all other tokens and aggregates global information, with its final representation serving as a summary of the entire input sequence.

-   *Positional encoding.* Positional information is incorporated using positional encodings. In the original Mantis architecture, we employ sinusoidal positional embeddings [@vaswani2017transformer], which are added to the input tokens. Alternatively, Rotary Positional Encoding (RoPE; [@su2024roformer]) can be used which rotates the query and key representations.

-   *Transformer layers.* We apply six Transformer layers, each consisting of multi-head self-attention with eight heads followed by a feedforward network. All layers use a pre-normalization design.

-   *Output representation.* The final hidden state of the class token is taken as the output embedding of the foundation model.

**Projector and Prediction Head.** Depending on the training or evaluation regime, different heads are appended to the Transformer output:

-   *Pre-training:* A layer normalization followed by a linear projection is applied to produce embeddings used for similarity-based objectives.

-   *Fine-tuning:* A task-specific classification head maps the embeddings to class logits.

-   *Inference:* No additional layers are applied, and the Transformer output embedding is returned directly.

Pre-training {#sec:pre-training}
------------

We pre-train Mantis in a self-supervised manner using a contrastive learning objective. The goal is to learn an encoder that produces similar representations for two random augmentations of the same time series (a positive pair), while producing dissimilar representations for augmentations of different time series (negative pairs). Formally, let $\mathcal{T}$ be a space of transformations (augmentations) such that for all $\phi\in\mathcal{T}$ and $\mbf{x}\in\mathcal{X}$ we have $\phi(\mbf{x})\in\mathcal{X}$. To measure the similarity between two embeddings, we first project the output of the foundation model $F(\mbf{x})$ to a new dimension using a projector $g: \R^{q} \to \R^{q'}$ and then compute the cosine similarity between the two vectors defined as follows: $$\begin{aligned}
    s_{\cos}(\mbf{a}, \mbf{b}) := \frac{\mbf{a}^\top\mbf{b}}{\norm{\mbf{a}}\cdot\norm{\mbf{b}}},\qquad \forall(\mbf{a}, \mbf{b})\in\R^{q'}\times\R^{q'}.\end{aligned}$$ Given a batch $B=\{\mbf{x}_i\}_{i=1}^b$, for each example $\mbf{x}_i$, we independently sample two augmentation functions $\phi$ and $\psi$ uniformly from $\mathcal{T}$, i.e., $\phi,\psi\sim\mathcal{U}(\mathcal{T})$. We then compute the pairwise similarities between all the examples in the following way: $$\begin{aligned}
    \mbf{s}_i(\phi, \psi) = \left[s_{\cos}\left(g\circo F\circo\phi(\mbf{x}_i),  \, g\circo F\circo\psi(\mbf{x}_j)\right)\right]_{j=1}^b \in \R^b.\end{aligned}$$

Following @oord2018representation and @he2020momentum, and denoting the cross-entropy loss by $l_{\text{ce}}: \R^b\times\{1,\dots,b\}\to\R$, we update the parameters of $F$ and $g$ by minimizing the contrastive objective defined by $$\begin{aligned}
    \sum_{i=1}^b l_{\text{ce}}\left(\frac{\mbf{s}_i(\phi, \psi)}{T},\  i\right),\end{aligned}$$ where $T\in(0,+\infty)$ is a temperature, which we fix to $0.1$ in all experiments.
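A minimal numpy sketch of this objective (cosine similarities scaled by temperature, cross-entropy with target index $i$) is shown below; `za` and `zb` stand for the projected embeddings $g\circo F\circo\phi(\mbf{x}_i)$ and $g\circo F\circo\psi(\mbf{x}_j)$, here replaced by random vectors:

```python
import numpy as np

def info_nce_loss(za, zb, temperature=0.1):
    """Contrastive objective: za[i] and zb[i] are projections of two
    augmented views of example i; off-diagonal pairs are negatives."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / temperature              # (b, b) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # cross-entropy, target i

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
# nearly identical views incur a much smaller loss than unrelated ones
loss_pos = info_nce_loss(z, z + 0.01 * rng.standard_normal((8, 16)))
loss_rand = info_nce_loss(z, rng.standard_normal((8, 16)))
print(loss_pos < loss_rand)   # True
```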

`\setlength{\intextsep}{-5pt}`{=latex} `\setlength{\columnsep}{10pt}`{=latex}

```{=latex}
\begin{wrapfigure}[12]{r}{0.27\textwidth}
\includegraphics[width=0.26\textwidth, clip=True, trim=0 0 0 0]{pics/randomcropresize.png}
\caption{Random Crop Resize.}
\label{fig:randomcropresize}
\end{wrapfigure}
```
**Augmentation.** We empirically evaluated several time-series augmentation strategies and observed that their effectiveness is highly dataset-dependent, as they may aggressively distort the signal and remove discriminative information. For pre-training, we chose the Random Crop Resize (RCR) augmentation (Figure `\ref{fig:randomcropresize}`{=latex}). This transformation randomly crops a contiguous segment covering $(100\!-\!c)\pct$ of the original time series and then resizes it back to the original sequence length. We apply moderate distortions by sampling the crop rate $c$ uniformly between $0\pct$ and $20\pct$, thereby preserving the overall temporal structure of the signal. A key advantage of contrastive learning with RCR is that the encoder is encouraged to be invariant to small temporal stretches and compressions, which in turn enables flexible resizing of the input without degrading performance. It is important to note that RCR perturbs the original unit measurements, making it less suitable for forecasting tasks. However, since our focus is on time series classification, the primary goal is to capture discriminative temporal patterns rather than preserve absolute scales. Empirically, we find RCR to be the most effective augmentation for this purpose.
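RCR can be sketched as follows; this is a minimal illustration (crop rate as a fraction, linear interpolation for the resize), not necessarily the exact released implementation:

```python
import numpy as np

def random_crop_resize(x, max_crop=0.2, rng=None):
    """Random Crop Resize: crop a contiguous segment covering a
    (1 - c) fraction of the series, c ~ U(0, max_crop), then
    resize the segment back to the original length."""
    if rng is None:
        rng = np.random.default_rng()
    t = len(x)
    c = rng.uniform(0.0, max_crop)
    seg_len = max(2, int(round((1.0 - c) * t)))
    start = rng.integers(0, t - seg_len + 1)
    segment = x[start:start + seg_len]
    # stretch the crop back to the original length
    return np.interp(np.linspace(0, 1, t),
                     np.linspace(0, 1, seg_len), segment)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 512))
view = random_crop_resize(x, rng=rng)
print(view.shape)   # (512,)
```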

Key Improvements {#sec:key-improvements}
================

In this section, we describe the proposed methodology used to improve the zero-shot performance of Mantis. Since optimizing the architecture of a foundation model is a complex and high-dimensional problem, we conduct a series of controlled ablation studies in which each modification is evaluated relative to the original Mantis architecture. The final MantisV2 architecture is obtained by combining all components that individually lead to performance improvements. In parallel, we show that the proposed methodology can also substantially improve the original Mantis model without any architectural changes; we refer to this enhanced variant as Mantis+. All architectural and methodological choices are evaluated on the UCR benchmark [@dau2019ucr]. Experiments are conducted on NVIDIA Tesla V100 GPUs with 32GB of memory: pre-training is performed using four GPUs, while feature extraction and evaluation are carried out on a single GPU.

Pre-training with Synthetic Data
--------------------------------

Recently, @xie2025cauker proposed CauKer, a synthetic time-series generation framework based on Gaussian process kernel composition and structural causal models. A key and somewhat surprising finding of their work is that Mantis can be pre-trained entirely on synthetic data while retaining strong downstream performance. Unlike the original Mantis pre-training corpus (1.89 million samples), which partially overlapped with UCR and UEA training sets and therefore did not constitute a strictly out-of-distribution (OOD) setting, CauKer-generated data are OOD by construction. This property substantially increases the reliability of zero-shot evaluation results.

To further validate the effectiveness of synthetic pre-training data, we compare 100,000 time series generated by CauKer against two alternative pre-training datasets: (i) Anomaly UCR [@wu2021current], a collection of real-world datasets for anomaly detection, and (ii) a 100,000-sample subset of the original Mantis pre-training corpus that explicitly excludes all UCR and UEA samples. Table `\ref{tab:pre-train-data-comparison}`{=latex} reports zero-shot feature extraction results following the evaluation protocol used by @feofanov2025mantis. Specifically, a Random Forest classifier [@breiman2001random] is trained on encoded training time series and evaluated on test sets, with accuracy averaged over 128 UCR datasets. The results show that synthetic data not only match but even outperform real data of the same size, demonstrating their high effectiveness for pre-training.

```{=latex}
\vspace{0.3cm}
```
```{=latex}
\centering
```
::: {#tab:pre-train-data-comparison}
  Pre-training Data       Nature   Size             UCR Included?               UCR acc. (%)
  ---------------------- -------- ------- ---------------------------------- ---------------------
  Anomaly UCR              Real     38K    [No]{style="color: ForestGreen"}    $74.73_{\pm 0.14}$
  Subset of V1 Dataset     Real    100K    [No]{style="color: ForestGreen"}    $78.29_{\pm 0.08}$
  CauKer                  Synth    100K    [No]{style="color: ForestGreen"}    $78.81_{\pm 0.10}$
  V1 Dataset               Real    1.89M    [Yes]{style="color: BrickRed"}     $79.21_{\pm 0.12}$

  : Performance of Mantis on the UCR collection with different pre-training datasets. Accuracies are reported in percent.
:::

```{=latex}
\vspace{0.55cm}
```
This observation can be intuitively explained by the nature of the pre-training objective. Self-supervised contrastive learning explicitly promotes *uniformity* in the embedding space, encouraging representations to be evenly distributed [@wang2020understanding]. Achieving such uniformity requires high data diversity, which synthetic generation methods such as CauKer can provide in a scalable and controllable manner. As a result, synthetic data offer both strong sample efficiency and improved generalization performance.

In the remainder of Section `\ref{sec:key-improvements}`{=latex}, we use 100,000 synthetic samples for pre-training unless stated otherwise. This choice allows us to significantly reduce pre-training time while maintaining high accuracy, as evidenced in Table `\ref{tab:pre-train-data-comparison}`{=latex}. Each model is pre-trained using three different random seeds, and we report the average accuracy across seeds and the 128 UCR datasets. For downstream classification, we employ a Random Forest classifier with 200 trees and unlimited maximum depth. Once architectural choices are finalized, we pre-train the selected models on the full set of 2 million synthetic time series, which is publicly available at [HuggingFace](https://huggingface.co/datasets/paris-noah/CauKer2M).
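The downstream evaluation step described above can be sketched with scikit-learn; the random embeddings below are stand-ins for the features extracted by the frozen encoder on one UCR dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# stand-in embeddings and labels for one downstream dataset (in real
# usage, the frozen foundation model encodes the train/test series)
z_train = rng.standard_normal((100, 256))
y_train = rng.integers(0, 2, 100)
z_test = rng.standard_normal((40, 256))
y_test = rng.integers(0, 2, 40)

# Random Forest with 200 trees and unlimited depth, as in the protocol
clf = RandomForestClassifier(n_estimators=200, max_depth=None,
                             random_state=0)
clf.fit(z_train, y_train)
acc = accuracy_score(y_test, clf.predict(z_test))
print(round(acc, 3))
```

The reported numbers average this per-dataset accuracy over the 128 UCR datasets and three pre-training seeds.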

Refining Architecture through Ablation {#sec:refined-architecture}
--------------------------------------

Before introducing improvements to Mantis, we first conduct additional ablation studies to further validate several design choices made in the original architecture. In particular, we analyze the proposed Token Generator Unit, which combines three parallel branches: tokens extracted from the raw time series, tokens derived from its first-order differential, and the encoded patch-wise statistics (mean and standard deviation).

```{=latex}
\vspace{0.35cm}
```
```{=latex}
\centering
```
![Ablation study confirming the proposed Token Generator Unit.](pics/version2/ablation-study-tokenizer.png){#fig:ablation-tokenizer width="0.7\\linewidth"}

```{=latex}
\vspace{0.35cm}
```
We compare the proposed tokenizer against the following baselines:

-   Only the first branch is used, consisting of a convolutional layer followed by mean pooling.

-   The first and third branches are combined, i.e., patch-wise scalar encoding is added to the raw signal branch.

-   The first branch is duplicated and combined with the scalar statistics branch. This baseline is introduced to determine whether the benefit of the second branch arises from incorporating the first-order differential or simply from increasing the number of parameters.

-   The first and second branches are combined, removing the scalar encoder.

-   All three branches are retained, but the convolution with mean pooling is replaced by embeddings of non-overlapping patches. This baseline is introduced to compare the proposed convolution-based patching against the non-overlapping patching used in NuTime [@lin2024nutime].

-   All three branches are retained, but mean pooling is replaced with max pooling to assess the impact of the pooling strategy.

Figure `\ref{fig:ablation-tokenizer}`{=latex} presents the corresponding results. Each branch of the proposed Token Generator Unit contributes positively to performance (when comparing the 1st, 2nd, 4th, and 7th bars). Moreover, comparing the 3rd and 4th bars shows that incorporating features derived from the first-order differential yields a clear performance gain, confirming that the improvement is not merely due to increased model capacity. Regarding patching strategies, convolution with mean pooling consistently outperforms both max pooling and the non-overlapping patch embedder.

```{=latex}
\vspace{0.3cm}
```
```{=latex}
\centering
```
![Convolution kernel size.](pics/version2/kernel_ablation.png){#fig:ablation-kernel width="\\textwidth"}

![Transformer head dimension.](pics/version2/dimhead_ablation.png){#fig:ablation-dimhead width="\\textwidth"}

```{=latex}
\vspace{0.4cm}
```
Continuing our ablation study, we investigate the impact of several architectural hyperparameters. First, we vary the kernel size of the convolutional layers, which was originally set to 17. We evaluate kernel sizes in $\{9, 17, 25, 33, 41, 49\}$ and observe that performance improves monotonically up to a kernel size of 41, which provides the best trade-off between temporal resolution and receptive field (Figure `\ref{fig:ablation-kernel}`{=latex}).

Next, we study the effect of the per-head projection dimension in the Transformer, varying it over $\{32, 64, 128, 256\}$ (Figure `\ref{fig:ablation-dimhead}`{=latex}). In standard Transformer implementations, this dimension is typically set to the hidden dimension divided by the number of attention heads (32 for 8 heads). In the original Mantis architecture, it was increased to 128. However, when averaging performance over three random seeds, this increase does not yield consistent gains, indicating a seed-dependent bias of the previous conclusion. Consequently, we revert to the default value of 32, which slightly improves average performance while reducing the number of parameters.

```{=latex}
\centering
```
![Comparison of different transformer configurations.](pics/version2/ablation-transformer.png){#fig:ablation-transformer width="0.5\\linewidth"}

Finally, we explore refinements to the Transformer architecture itself, inspired by the design choices of @cohen2025toto. Specifically, we consider three modifications: Rotary Positional Encoding (RoPE; [@su2024roformer]), RMS Layer Normalization [@zhang2019root] in place of standard Layer Normalization, and the use of SwiGLU activations [@shazeer2020glu] instead of GELU in the feedforward layers. We empirically evaluate the following variants: (a) Classical activations and normalization with sinusoidal positional encoding (original Mantis), (b) SwiGLU and RMS normalization without positional encoding, (c) SwiGLU and RMS normalization with sinusoidal positional encoding, (d) Classical activations and normalization with RoPE, (e) SwiGLU and RMS normalization with RoPE.

Figure `\ref{fig:ablation-transformer}`{=latex} reports the corresponding results on the UCR benchmark, averaged over three seeds. The combination of SwiGLU, RMS Layer Normalization, and RoPE yields the best performance. Although the absolute improvement is modest, it is stable across runs. We further confirm this trend for larger-scale pre-training with one million synthetic samples in Appendix `\ref{sec:appendix-arch-refine}`{=latex}.

Layer by Layer, Epoch by Epoch {#sec:layer-by-layer}
------------------------------

@skean2025layerbylayer showed that intermediate layers of large language models encode rich representations and can even outperform final layers on downstream tasks. This observation has since motivated a broader line of work investigating layer-wise representations, including their use for cross-domain transfer. In particular, @roschmann2025tivit demonstrated that intermediate layers of vision transformers can be highly effective for time series classification when signals are converted to images. More recently, @auer2025tirexclassification showed that forecasting-oriented foundation models can also be leveraged for classification by aggregating representations across layers.

`\setlength{\intextsep}{5pt}`{=latex} `\setlength{\columnsep}{10pt}`{=latex}

```{=latex}
\begin{wrapfigure}[17]{r}{0.45\textwidth}
\includegraphics[width=\linewidth, clip=true, trim=0 0 1.3cm 1.4cm]{pics/version2/layer-by-layer-original-mantis.pdf}
\caption{Mantis, layer by layer performance.}
\label{fig:mantis-layer-by-layer}
\end{wrapfigure}
```
In this section, we extend the layer-by-layer analysis to Mantis. Figure `\ref{fig:mantis-layer-by-layer}`{=latex} reports the zero-shot feature extraction performance of each of the six Transformer layers. Despite Mantis being a relatively compact model, we observe the same phenomenon: intermediate layers yield stronger representations than the final layer, with the third layer achieving the highest accuracy. This observation motivated a deeper investigation into the evolution of layer-wise representations during pre-training.

In Figure `\ref{fig:layer-by-layer-epoch-by-epoch}`{=latex}, we track the UCR classification performance of all six Transformer layers across pre-training epochs. We additionally vary the size of the synthetic pre-training dataset from 100,000 to 2,000,000 samples. Several notable patterns emerge. First, the relative improvement of intermediate layers appears to be closely tied to the total number of parameter updates. In our training setup, each epoch processes the full dataset, so increasing the dataset size directly increases the number of updates. With 100K samples, the final layer remains the strongest throughout training. In contrast, for larger datasets, intermediate layers steadily improve over time and eventually surpass the final layer. To test whether this behavior is indeed driven by the number of updates rather than dataset size per se, we pre-train Mantis on 100K samples for 1,000 epochs, matching the number of updates used for 1M samples over 100 epochs. The corresponding results, reported in Appendix `\ref{sec:layer-by-layer-appendix}`{=latex} (Figure `\ref{fig:layer-by-layer-100k-1000epochs}`{=latex}), confirm that one of the intermediate layers eventually becomes the most performant even in this setting.

However, increasing the number of epochs for the 100K dataset does not lead to further gains in overall performance. Instead, performance follows a double-descent--like behavior and converges to the same level achieved during the first 100 epochs. This leads to our second key observation. While the final-layer performance remains largely unchanged (or may even degrade) as the pre-training dataset grows, the *best-performing intermediate layer* improves consistently with increased data scale. In other words, **intermediate representations unlock the scaling benefits of pre-training**. By selecting the most informative layer, Mantis exhibits a clear and monotonic improvement in performance as the number of pre-training samples increases.

For a more comprehensive comparison, we conduct a layer-by-layer analysis of several other foundation models, including NuTime [@lin2024nutime], MOMENT [@goswami2024moment], TiRex [@auer2025tirexclassification], and Chronos2 [@ansari2025chronos2], which are formally introduced in Section `\ref{sec:exp-setup}`{=latex}. Figure `\ref{fig:layer-by-layer-sota}`{=latex} summarizes the results. TiRex and Chronos2 exhibit strong discriminative power in early layers, while their final layers appear to be more specialized for forecasting. MOMENT's performance plateaus after approximately the 10th layer, suggesting that the model could be significantly compressed by truncating its final layers. NuTime benefits the least from intermediate representations, with its best-performing layer located immediately before the final one.
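Collecting the per-layer class-token representation needed for such an analysis can be sketched as follows. The residual linear maps below are random stand-ins for the frozen Transformer blocks; only the collection pattern is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_LAYERS = 256, 6

# toy stand-in layers: residual linear maps in place of the frozen
# Transformer blocks of the encoder
layers = [rng.standard_normal((D, D)) / np.sqrt(D)
          for _ in range(N_LAYERS)]

def layerwise_embeddings(tokens):
    """Return the class-token representation after every layer, so the
    best-performing intermediate layer can be selected downstream."""
    outs = []
    h = tokens
    for W in layers:
        h = h + h @ W             # residual update (stand-in block)
        outs.append(h[0].copy())  # class token sits at position 0
    return outs

tokens = rng.standard_normal((33, D))   # CLS + 32 tokens
embs = layerwise_embeddings(tokens)
print(len(embs), embs[0].shape)         # 6 (256,)
```

In practice, the layer whose embeddings yield the best downstream accuracy is selected per benchmark or kept fixed across tasks.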

**Final pre-training.** Based on these findings, we perform the final pre-training using 2 million synthetic time series generated by CauKer for 200 epochs. In the remainder of the paper, we refer to the original Mantis architecture trained under this protocol as **Mantis+**. The variant incorporating all architectural refinements---convolution kernel size set to 41, Transformer head dimension set to 32, and the upgraded Transformer design---is referred to as **MantisV2**. Layer by layer, epoch by epoch performance evolution curves for both models are provided in Appendix `\ref{sec:layer-by-layer-appendix}`{=latex} (Figure `\ref{fig:final-pre-training}`{=latex}).

Finally, we emphasize that the advantage of intermediate-layer representations is specific to the zero-shot feature extraction setting, where the encoder is kept frozen. In the fine-tuning regime, retaining and updating all layers remains preferable; additional details are provided in Appendix `\ref{sec:appendix-fine-tuning}`{=latex}.

```{=latex}
\centering
```
![Downstream performance evolution: layer by layer, epoch by epoch.](pics/version2/layer-by-layer-epoch-by-epoch.png){#fig:layer-by-layer-epoch-by-epoch width="\\textwidth"}

```{=latex}
\centering
```
![Layer-by-layer analysis of other foundation models.](pics/version2/layer-by-layer-other-methods.png){#fig:layer-by-layer-sota width="\\textwidth"}

Aggregation of Output Tokens {#sec:output-token}
----------------------------

When extracting representations from Transformer models, the choice of how to aggregate output tokens remains an open question. In discriminative Transformers such as BERT [@devlin2019bert] and ViT [@dosovitskiy2021vit], it is standard practice to discard all outputs except the classification (CLS) token. After multiple self-attention layers, this token is expected to aggregate information from all other tokens, whose representations are typically considered redundant and are often ignored to avoid increasing the dimensionality of the final embedding.

However, this assumption may no longer hold when considering intermediate layer representations. Recently, @roschmann2025tivit showed that, for vision transformers applied to time series classification, averaging representations across tokens can yield more discriminative embeddings than relying solely on the classification token. Inspired by this observation, we investigate alternative token aggregation strategies for Mantis. Specifically, we evaluate three approaches to generate the final embedding: (a) using only the classification token (as in the original Mantis), (b) computing the mean of all tokens except the classification token, (c) concatenating the classification token with the mean of the remaining tokens. Figure `\ref{fig:ablation-token}`{=latex} reports the corresponding results on the UCR benchmark. Consistent with the findings of @roschmann2025tivit, we observe that non-classification tokens encode complementary and discriminative information and should not be discarded. Moreover, concatenating the classification token with the mean token consistently yields the best performance, improving accuracy by $0.1\pct$ for Mantis+ and by $0.8\pct$ for MantisV2. Based on these results, we adopt the concatenation strategy as the default aggregation method, increasing the embedding dimensionality from 256 to 512.
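The three aggregation strategies can be expressed in a few lines. A minimal sketch, assuming the encoder returns a token matrix of shape `(n_tokens, dim)` with the classification token at index 0 (an assumption made for illustration, not a detail of the released implementation):

```python
import numpy as np

def aggregate_tokens(tokens: np.ndarray, strategy: str = "concat") -> np.ndarray:
    """Aggregate output tokens into a single embedding.

    tokens: (n_tokens, dim) array; the CLS token is assumed at index 0.
    """
    cls_token = tokens[0]
    mean_token = tokens[1:].mean(axis=0)
    if strategy == "cls":       # (a) classification token only
        return cls_token
    if strategy == "mean":      # (b) mean of the remaining tokens
        return mean_token
    if strategy == "concat":    # (c) CLS concatenated with the mean token
        return np.concatenate([cls_token, mean_token])
    raise ValueError(f"unknown strategy: {strategy}")
```

With strategy (c), a 256-dimensional CLS token and a 256-dimensional mean token yield the 512-dimensional embedding adopted as the default.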

```{=latex}
\centering
```
```{=latex}
\centering
```
![Output-token aggregation.](pics/version2/ablation-token-plot.png){#fig:ablation-token height="5.5cm"}

![All improvements.](pics/version2/improvement-evolution.png){#fig:improvement height="5.5cm"}

We note that the effectiveness of this combined aggregation strategy may depend on the depth of the Transformer, as the number of layers determines how effectively the classification token aggregates information from the remaining tokens. In Appendix `\ref{sec:appendix-output-token}`{=latex}, we show that the combined strategy does not strengthen representations extracted from the last Transformer layer. Nevertheless, in Appendix `\ref{sec:appendix-fine-tuning}`{=latex}, we show that concatenating the classification and mean tokens also improves the performance of fine-tuned truncated models.

Experimental Results, Part I: Towards the Strongest Feature Extractor
======================================================================

In this section, we compare Mantis with state-of-the-art (SOTA) time series classification foundation models. As in the previous section, we follow the zero-shot feature extraction setup, i.e., we extract features while keeping the encoder frozen and use them together with the training labels to learn a classifier.

Setup {#sec:exp-setup}
-----

We compare Mantis with the following baselines, whose implementation details can be found in Appendix `\ref{sec:appendix-exp-setup}`{=latex}:

-   Catch22 [@lubba2019catch22] is a set of statistical features known to be powerful for time series classification. In this paper, we significantly improve this baseline by also incorporating statistics computed over patches. We call this modification **Catch22+** and refer to Appendix `\ref{sec:catch22plus}`{=latex} for more details and the corresponding ablation study.

-   **TabPFN** [@hollmann2022tabpfn] and **TabICL** [@qu2025tabicl] are tabular foundation models. In this case, we treat each timestamp as a feature and ignore the sequential nature of data.

-   **MOMENT** [@goswami2024moment] is a T5-based auto-encoder [@raffel2020exploring] pre-trained in a self-supervised way on 1.13 billion samples. Following Section `\ref{sec:layer-by-layer}`{=latex}, we use the output of its 10th transformer layer, which shrinks the model size from 341.2 to 161.4 million parameters.

-   **TiRex** [@auer2025tirex] is a time series forecasting foundation model based on the xLSTM architecture [@beck2024xlstm]. We use the output of its 5th layer, reducing the model size from 35.3 to 16.5 million parameters.

-   **Chronos2** [@ansari2025chronos2] is a transformer-based time series forecasting foundation model. We use the output of its 4th layer, shrinking the model size from 119.5 to 44 million parameters.

-   **TiViT-H** is the approach proposed by [@roschmann2025tivit] to leverage a pre-trained vision model for time series classification. We follow their recommendations and use the 14th layer of CLIP ViT-H leading to 276.6 million parameters. **TiConvNext** leverages the pre-trained CLIP ConvNext in the same spirit. We use its 15th layer which results in 180.3 million parameters.

-   **NuTime** [@lin2024nutime] is a classification TSFM based on the BYOL self-distillation pre-training [@grill2020byol]. Their pre-training dataset is similar to the one used for pre-training of the first version of Mantis (1.89 million time series examples). We use its 5th layer to get embeddings, which reduces the model size from 2.4 to 2 million parameters.

We exclude UniTS [@gao2024units] and GPT4TS [@zhou2023onefitsall] from consideration due to their low performance (see @feofanov2025mantis for more details).

**Datasets.** We use the following datasets to evaluate the performance of Mantis and compare it to other methods:

-   **UCR** [@dau2019ucr] is our default benchmark that consists of 128 univariate datasets.

-   **UEA** [@bagnall2018uea] is a benchmark of 30 multivariate datasets. We exclude 3 datasets due to a small test size or short sequence length, and subsample one dataset to ease computations (see more details in Appendix `\ref{sec:appendix-exp-setup}`{=latex}). We refer to the remaining 27 datasets as UEA-27.

-   We additionally gathered a collection of datasets for Human Activity Recognition (**HAR**), one of the most widespread applications of time series classification models [@chen2025comodo; @li2025zara]. We consider 7 datasets, where the data come from either an inertial measurement unit (Ego4D, HHAR, UCI-HAR) or motion capture (EMOPain, MP8, MP50). For the HHAR dataset, we follow @gagnon2022woods and consider two splits: in-distribution (ID) and out-of-distribution (OOD). Please refer to Appendix `\ref{sec:har-data-appendix}`{=latex} for more details.

-   In a similar fashion, we also test Mantis on electroencephalogram (**EEG**) data, i.e., recordings of the brain's electrical activity. We consider different tasks, including sleep stage prediction (CAP, SEDFx), brain--computer interface (BCI) control tasks (FingerMovements, PCL, SelfRegulationSCP), seizure detection (Epilepsy-EEG), and blink-type classification (Blink). FingerMovements and SelfRegulationSCP are taken from UEA; the other datasets are described in Appendix `\ref{sec:eeg-data-appendix}`{=latex}.

Experimental Results {#sec:sota-exp-res-rf}
--------------------

**UCR.** Figure `\ref{fig:ucr-sota-rf}`{=latex} displays the performance of the considered models averaged over the 128 univariate datasets, while the complete table of results is deferred to Appendix `\ref{sec:complete-results}`{=latex} (Tables `\ref{tab:sota-ucr-res-rf-a}`{=latex} and `\ref{tab:sota-ucr-res-rf-b}`{=latex}). We can see that MantisV2 significantly outperforms the other models, with Mantis+ being the second-best model. Modern forecasting models (TiRex, Chronos2) and adapted vision models (TiViT-H, TiConvNext) are very competitive, establishing strong baselines. Interestingly, Catch22+, a set of statistical features, also establishes a good baseline, even outperforming foundation models such as NuTime and MOMENT. Tabular foundation models are competitive as well: although their average performance is not very high, their win rate is similar to that of MantisV2 (see Appendix `\ref{sec:complete-results}`{=latex}). This also indicates that a considerable portion of the UCR datasets are nearly tabular, so their sequential nature can simply be ignored.

```{=latex}
\centering
```
![Classification accuracy of different methods averaged over 128 univariate UCR datasets.](pics/version2/ucr-sota-rf.png){#fig:ucr-sota-rf width="0.75\\linewidth"}

**UEA-27.** The performance of the considered models averaged over 27 multivariate datasets is illustrated in Figure `\ref{fig:uea-sota-rf}`{=latex}, while the complete results are deferred to Appendix `\ref{sec:complete-results}`{=latex} (Table `\ref{tab:uea-sota-rf}`{=latex}). One can see that both Mantis+ and MantisV2 significantly outperform the other models by more than 2%. Interestingly, the model ranking is quite different on this benchmark. For example, NuTime, one of the worst models on UCR, is the third-best this time. Since its hidden dimension is the smallest among the foundation models (128, to be exact), we hypothesize that NuTime may be less sensitive to the curse of dimensionality, as the total number of deep features grows linearly with the number of channels.

```{=latex}
\centering
```
![Classification accuracy of different methods averaged over 27 multivariate UEA datasets.](pics/version2/uea-sota-rf.png){#fig:uea-sota-rf width="0.75\\linewidth"}

```{=latex}
\centering
```
`\scalebox{0.6}{
    \begin{tabular}{l|lllllllllll}
    \toprule
                                   & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2\\
    \midrule
    Ego4D & $0.4397_{\pm 0.002}$ & \texttt{NaN} & \texttt{NaN} & $0.4068_{\pm 0.001}$ & $0.5037_{\pm 0.001}$ & $0.4742_{\pm 0.0}$ & 0.1912 & 0.1907 & $0.5108_{\pm 0.0}$ & \textbf{0.5273}$_{\pm 0.0}$ & $0.5258_{\pm 0.001}$\\
    EMOPain & $0.8826_{\pm 0.006}$ & 0.7831 & 0.7831 & $0.8469_{\pm 0.002}$ & $0.7915_{\pm 0.003}$ & $0.8075_{\pm 0.004}$ & $0.83_{\pm 0.002}$ & $0.8225_{\pm 0.003}$ & $0.8901_{\pm 0.005}$ & \textbf{0.8939}$_{\pm 0.002}$ & $0.8798_{\pm 0.009}$\\
    HHAR-ID & $0.9738_{\pm 0.001}$ & 0.8938 & 0.9073 & $0.9299_{\pm 0.0}$ & $0.9481_{\pm 0.0}$ & $0.9475_{\pm 0.002}$ & $0.9338_{\pm 0.001}$ & $0.9461_{\pm 0.001}$ & $0.9808_{\pm 0.001}$ & $0.9835_{\pm 0.001}$ & \textbf{0.9845}$_{\pm 0.001}$\\
    HHAR-OOD & $0.4808_{\pm 0.008}$ & 0.5311 & 0.5441 & $0.3081_{\pm 0.001}$ & $0.5135_{\pm 0.014}$ & $0.4174_{\pm 0.008}$ & $0.3748_{\pm 0.007}$ & $0.4031_{\pm 0.01}$ & $0.56_{\pm 0.005}$ & $0.5624_{\pm 0.016}$ & \textbf{0.5822}$_{\pm 0.005}$\\
    MP8 & $0.572_{\pm 0.008}$ & 0.6235 & 0.6185 & $0.6392_{\pm 0.011}$ & $0.5994_{\pm 0.016}$ & $0.5966_{\pm 0.006}$ & $0.6022_{\pm 0.009}$ & $0.5966_{\pm 0.003}$ & $0.6504_{\pm 0.008}$ & $0.6403_{\pm 0.006}$ & \textbf{0.6857}$_{\pm 0.012}$\\
    MP50 & $0.3602_{\pm 0.016}$ & 0.5664 & 0.5042 & \textbf{0.7434}$_{\pm 0.017}$ & $0.6644_{\pm 0.01}$ & $0.6639_{\pm 0.007}$ & $0.6908_{\pm 0.008}$ & $0.6801_{\pm 0.008}$ & $0.6913_{\pm 0.012}$ & $0.7227_{\pm 0.008}$ & $0.7345_{\pm 0.007}$\\
    UCI-HAR & $0.8273_{\pm 0.003}$ & 0.809 & 0.8157 & $0.8782_{\pm 0.001}$ & $0.8744_{\pm 0.002}$ & $0.8873_{\pm 0.002}$ & $0.8936_{\pm 0.001}$ & $0.8918_{\pm 0.001}$ & $0.8842_{\pm 0.001}$ & $0.8911_{\pm 0.0}$ & \textbf{0.9013}$_{\pm 0.002}$\\
    \midrule
    Avg & 0.6481 & \texttt{NaN} & \texttt{NaN} & 0.6789 & 0.6993 & 0.6849 & 0.6452 & 0.6473 & 0.7382 & 0.7459 & \textbf{0.7562}\\
    Avg UCR HAR & 0.7564 & 0.7479 & 0.7285 & 0.7572 & 0.7687 & 0.7702 & 0.7726 & 0.772 & 0.756 & 0.7867 & \textbf{0.8007}\\
    Avg UEA HAR & 0.8138 & 0.7591 & 0.764 & 0.8431 & 0.846 & 0.8492 & 0.8535 & 0.8488 & 0.8539 & \textbf{0.8796} & 0.8739\\
    \bottomrule
    \end{tabular}
    }`{=latex}

**HAR.** Table `\ref{tab:exp-res-har-rf}`{=latex} displays the experimental results for human activity recognition tasks. We also include the average performance over the 19 UCR and 9 UEA datasets that are related to HAR. We can see that the gap between MantisV2/Mantis+ and the other models is quite noticeable, indicating the high relevance of the proposed models to this application domain. As on UEA, NuTime performs quite well, while the other models show less robust results.

```{=latex}
\vspace{0.2cm}
```
```{=latex}
\centering
```
`\scalebox{0.6}{
    \begin{tabular}{l|lllllllllll}
    \toprule
                                   & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2\\
    \midrule
    Blink & $0.9963_{\pm 0.001}$ & 0.9178 & 0.8978 & $0.9674_{\pm 0.003}$ & \textbf{0.997}$_{\pm 0.001}$ & $0.9733_{\pm 0.01}$ & $0.9852_{\pm 0.006}$ & $0.98_{\pm 0.002}$ & $0.66_{\pm 0.006}$ & $0.9778_{\pm 0.002}$ & $0.9956_{\pm 0.002}$\\
    CAP-ID & $0.751_{\pm 0.001}$ & \texttt{NaN} & \texttt{NaN} & $0.7374_{\pm 0.002}$ & $0.8104_{\pm 0.002}$ & $0.8179_{\pm 0.001}$ & $0.7983_{\pm 0.001}$ & $0.8126_{\pm 0.001}$ & $0.8044_{\pm 0.0}$ & \textbf{0.8189}$_{\pm 0.001}$ & $0.8152_{\pm 0.001}$\\
    CAP-OOD & $0.71_{\pm 0.003}$ & \texttt{NaN} & \texttt{NaN} & $0.7408_{\pm 0.001}$ & $0.7821_{\pm 0.001}$ & $0.7857_{\pm 0.001}$ & $0.7781_{\pm 0.001}$ & $0.784_{\pm 0.001}$ & $0.767_{\pm 0.002}$ & \textbf{0.791}$_{\pm 0.001}$ & $0.7859_{\pm 0.0}$\\
    Epilepsy-EEG & $0.9507_{\pm 0.002}$ & 0.9496 & 0.9447 & $0.952_{\pm 0.001}$ & \textbf{0.9614}$_{\pm 0.001}$ & $0.95_{\pm 0.001}$ & $0.9548_{\pm 0.0}$ & $0.9495_{\pm 0.001}$ & $0.9234_{\pm 0.002}$ & $0.9518_{\pm 0.001}$ & $0.9558_{\pm 0.001}$\\
    FingerMovements & $0.4967_{\pm 0.064}$ & 0.5 & 0.49 & $0.53_{\pm 0.02}$ & $0.52_{\pm 0.04}$ & $0.5233_{\pm 0.021}$ & $0.5367_{\pm 0.045}$ & $0.5333_{\pm 0.059}$ & $0.5267_{\pm 0.015}$ & $0.51_{\pm 0.026}$ & \textbf{0.55}$_{\pm 0.01}$\\
    PCL-ID & $0.5241_{\pm 0.003}$ & \texttt{NaN} & \texttt{NaN} & $0.5431_{\pm 0.012}$ & $0.5272_{\pm 0.011}$ & $0.5308_{\pm 0.002}$ & $0.5347_{\pm 0.007}$ & $0.5385_{\pm 0.005}$ & $0.5615_{\pm 0.003}$ & \textbf{0.5764}$_{\pm 0.006}$ & $0.5716_{\pm 0.003}$\\
    PCL-OOD & $0.5025_{\pm 0.004}$ & \texttt{NaN} & \texttt{NaN} & $0.5129_{\pm 0.001}$ & $0.4988_{\pm 0.001}$ & $0.5072_{\pm 0.004}$ & $0.5003_{\pm 0.005}$ & $0.5132_{\pm 0.001}$ & $0.5415_{\pm 0.004}$ & \textbf{0.5427}$_{\pm 0.002}$ & $0.5311_{\pm 0.009}$\\
    SEDFx-ID & $0.7574_{\pm 0.0}$ & \texttt{NaN} & \texttt{NaN} & $0.7516_{\pm 0.001}$ & $0.7904_{\pm 0.0}$ & $0.8_{\pm 0.0}$ & $0.7884_{\pm 0.001}$ & $0.8008_{\pm 0.001}$ & $0.7822_{\pm 0.001}$ & \textbf{0.8066}$_{\pm 0.0}$ & $0.8_{\pm 0.0}$\\
    SEDFx-OOD & $0.7142_{\pm 0.001}$ & \texttt{NaN} & \texttt{NaN} & $0.7244_{\pm 0.001}$ & $0.7714_{\pm 0.0}$ & \textbf{0.7758}$_{\pm 0.0}$ & $0.7709_{\pm 0.0}$ & $0.7755_{\pm 0.001}$ & $0.741_{\pm 0.0}$ & $0.7731_{\pm 0.0}$ & $0.7636_{\pm 0.0}$\\
    SelfRegulationSCP1 & $0.7702_{\pm 0.007}$ & \textbf{0.8942} & 0.8874 & $0.7747_{\pm 0.006}$ & $0.7884_{\pm 0.012}$ & $0.785_{\pm 0.003}$ & $0.7986_{\pm 0.007}$ & $0.7929_{\pm 0.01}$ & $0.7952_{\pm 0.003}$ & $0.7736_{\pm 0.005}$ & $0.8134_{\pm 0.009}$\\
    SelfRegulationSCP2 & $0.4926_{\pm 0.031}$ & 0.4778 & 0.5056 & $0.4907_{\pm 0.023}$ & $0.4963_{\pm 0.049}$ & $0.5167_{\pm 0.006}$ & $0.4907_{\pm 0.033}$ & $0.5056_{\pm 0.011}$ & $0.5074_{\pm 0.018}$ & \textbf{0.5611}$_{\pm 0.02}$ & $0.5167_{\pm 0.006}$\\
    \midrule
    Avg & 0.6969 & \texttt{NaN} & \texttt{NaN} & 0.7023 & 0.7221 & 0.7242 & 0.7215 & 0.726 & 0.6919 & 0.7348 & \textbf{0.7363}\\
    \bottomrule
    \end{tabular}
    }`{=latex}

```{=latex}
\vspace{0.4cm}
```
**EEG.** The experimental results for EEG data can be found in Table `\ref{tab:exp-res-eeg-rf}`{=latex}. For the large datasets (CAP, PCL, SEDFx), tabular foundation models do not pass the memory and/or time constraints (we set a deadline of 10 hours for the runtime of one method on one dataset). In contrast to the HAR results, the performance gap between Mantis and the other models is smaller, and the model ranking more closely resembles the one observed on UCR. As EEG classification is notoriously difficult, it will be interesting to see whether time series foundation models can drive progress in this field. Recently, @gnassounou2025leveraging demonstrated that Mantis can outperform CBraMod (a recent EEG-focused foundation model [@wang2024cbramod]) on sleep stage and motor imagery tasks, giving positive expectations for TSFMs.

Complexity Analysis
-------------------

In this section, we compare all the models in terms of memory consumption and running time. As a rule of thumb, the memory consumption of a deep learning model correlates with its total number of parameters. Table `\ref{tab:model-sizes}`{=latex} shows the number of parameters before and after layer pruning for each model (see Section `\ref{sec:layer-by-layer}`{=latex} for more details). For TabPFN and TabICL, layer pruning is not possible due to the fundamentally different nature of their approach. As we can see, the four biggest models are TiConvNext, TiViT-H, MOMENT, and Chronos2, each with more than 100 million parameters. After layer pruning, their sizes are significantly reduced, although TiViT-H, TiConvNext, and MOMENT still remain above 100 million parameters. Layer pruning also reduces Mantis+ and MantisV2 to under 3 million parameters, making them even more lightweight and comparable to the smallest model, NuTime.

```{=latex}
\setlength{\tabcolsep}{2.3pt}
```
::: {#tab:model-sizes}
  \# of Params.    TabPFN   TabICL   MOMENT   TiRex   Chronos2   TiViT-H   TiConvNext   NuTime   Mantis+   MantisV2
  --------------- -------- -------- -------- ------- ---------- --------- ------------ -------- --------- ----------
  Original          7.2M    27.1M    341.2M   35.3M    119.5M    630.8M      843.4M      2.4M     8.1M       4.2M
  After Pruning      \-       \-     161.4M   16.5M     44M      276.6M      180.3M       2M      2.9M       2.2M

  : Comparison of Model Sizes.
:::

To measure running time, we time a forward pass over an entire dataset with a batch size of 256. We generate univariate synthetic data of length 100, varying the total number of samples within $n\in\{10^2, 10^3, 10^4, 5\cdot10^4, 10^5\}$. Since TabPFN and TabICL require both training and test sets as input, for them we use $n/2$ samples for training and $n/2$ for testing, processing the test data in batches of 256. The experimental results are displayed in Figure `\ref{fig:runtime}`{=latex} and reveal that tabular foundation models are the slowest for large sample sizes, taking more than 10 hours for 100,000 examples. They are followed by the vision-based models, TiViT-H and TiConvNext, which are slow but nevertheless feasible. The next cluster comprises MOMENT and TiRex. Note that we did not manage to compile TiRex as its authors suggest; otherwise, the model should be faster. The remaining models, including Chronos2, Mantis+, MantisV2, and NuTime, are significantly faster. Interestingly, Mantis is as fast as Catch22+, which consists of simply computing pre-determined statistics. Together with Table `\ref{tab:model-sizes}`{=latex}, this result demonstrates the wide applicability of our models, including scenarios where computational resources are limited.
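The timing protocol amounts to one batched forward pass over the dataset. A minimal sketch, with a hypothetical `model` callable standing in for any of the encoders (device synchronization and warm-up, which matter for GPU timing, are omitted):

```python
import time
import numpy as np

def time_forward_pass(model, data: np.ndarray, batch_size: int = 256) -> float:
    """Wall-clock time of one forward pass over an entire dataset, batched."""
    start = time.perf_counter()
    for i in range(0, data.shape[0], batch_size):
        model(data[i:i + batch_size])  # embeddings are discarded; only time matters
    return time.perf_counter() - start
```

In practice, the same loop is repeated for each sample size $n$ to obtain the curves of Figure `\ref{fig:runtime}`{=latex}.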

```{=latex}
\centering
```
![The inference time of the considered models as a function of the number of samples.](pics/version2/runtime.png){#fig:runtime width="70%"}

Experimental Results, Part II: Closing the Zero-Shot Gap
========================================================

In this section, we extend our experiments by showing how we can push the zero-shot feature extraction performance further. We introduce two simple strategies, namely, self-ensembling and cross-model fusion, and show the importance of the choice of the classification method.

Self-Ensembling (SE) {#sec:self-ensembling}
--------------------

First, we propose a simple test-time strategy to improve performance by perturbing the input time series. Our motivation stems from the observation that interpolating the same signal to different sequence lengths leads to the extraction of complementary features.

```{=latex}
\centering
```
![Motivation for considering different interpolation sizes, which, in the context of our architecture, imply different degrees of overlap between tokens.](pics/version2/self-ensembling-motivation2.png){#fig:se-motivation width="0.9\\linewidth"}

In Figure `\ref{fig:se-motivation}`{=latex}, we illustrate how the convolution--mean-pooling pipeline generates a single token for the same input time series when interpolated to two different lengths. Since the number of tokens in our architecture is fixed, the pooling window size increases with the interpolation length, though not linearly. In the example shown, with a convolution kernel size of 41, the receptive field of a single token for interpolation lengths of 128 and 256 corresponds to approximately $(41 + (128/32 - 1)) / 128 \approx 34\pct$ and $(41 + (256/32 - 1)) / 256 \approx 19\pct$ of the input sequence, respectively. This implies that different interpolation lengths result in different degrees of overlap between tokens. As the sequence length tends to infinity, the receptive field converges to $1/32$, corresponding to the case of non-overlapping tokens. Conversely, decreasing the sequence length increases the degree of overlap between neighboring tokens.
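The receptive-field fractions quoted above can be verified directly; the sketch below reproduces the arithmetic of the example (kernel size 41, 32 output tokens), which is our reading of the convolution--mean-pooling pipeline rather than code from the released model:

```python
def receptive_field_fraction(seq_len: int, kernel_size: int = 41,
                             n_tokens: int = 32) -> float:
    """Fraction of the input covered by one token after convolution + pooling.

    One token pools seq_len / n_tokens stride-1 convolution outputs, each
    with receptive field kernel_size; their union spans
    kernel_size + (seq_len / n_tokens - 1) input positions.
    """
    return (kernel_size + (seq_len / n_tokens - 1)) / seq_len
```

As the formula shows, the fraction decreases monotonically with the interpolation length and converges to $1/32$.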

```{=latex}
\centering
```
![Self-Ensembling Ablation Study.](pics/version2/ablation-self-ensembling-plot.png){#fig:ablation-self-ensembling width="0.8\\linewidth"}

We can therefore generate multiple interpolated versions of the same input, encode each independently, and concatenate the resulting embeddings. In addition, we explore augmenting the representation with embeddings computed from the first-order difference of the input time series. @auer2025tirex showed that this strategy improves classification performance for TiRex and other forecasting-oriented foundation models, motivating us to investigate its effectiveness for Mantis.

We compare the following four approaches:

-   **(a)** interpolating the input time series to a fixed length of 512, as in the original setup;

-   **(b)** generating four interpolated versions of the input at lengths 128, 256, 512, and 1024, encoding each version independently, and concatenating the resulting embeddings;

-   **(c)** applying the same multi-scale interpolation strategy to the first-order difference of the input time series;

-   **(d)** concatenating the embeddings obtained from (b) and (c).
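The last approach, concatenating multi-scale embeddings of both the raw series and its first-order difference, amounts to a double loop over signals and interpolation lengths. A minimal sketch, with a hypothetical `encode` callable standing in for the frozen encoder:

```python
import numpy as np

def self_ensemble(x: np.ndarray, encode,
                  lengths=(128, 256, 512, 1024)) -> np.ndarray:
    """Concatenate embeddings of multi-scale interpolations of x and diff(x)."""
    def interpolate(signal, length):
        # Linear interpolation of the signal onto a grid of the target length.
        old = np.linspace(0.0, 1.0, num=signal.shape[0])
        new = np.linspace(0.0, 1.0, num=length)
        return np.interp(new, old, signal)

    parts = []
    for signal in (x, np.diff(x)):  # raw series and first-order difference
        for length in lengths:
            parts.append(encode(interpolate(signal, length)))
    return np.concatenate(parts)
```

Each input is thus encoded eight times, and the final embedding is the concatenation of the eight resulting vectors.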

Figure `\ref{fig:ablation-self-ensembling}`{=latex} reports the corresponding results for both Mantis+ and MantisV2. Multi-scale interpolation alone improves performance by $0.66\pct$ for Mantis+ and by $0.55\pct$ for MantisV2 on the UCR benchmark. While embeddings derived solely from the first-order difference are individually weak, incorporating them further improves performance by $0.67\pct$ and $0.29\pct$, respectively. This result is particularly noteworthy given that first-order differential features are already included in the Mantis architecture.

In the remainder of the paper, we refer to approach (d) as *Self-Ensembling (SE)*, and denote the resulting models as **SE-Mantis+** and **SE-MantisV2**.

Cross-model Embedding Fusion
----------------------------

@roschmann2025tivit showed that the embeddings of TiViT and Mantis are complementary, so their combination can improve performance. We perform a similar experiment, combining the embedding of MantisV2 with the embeddings of other models. The experimental results are shown in Figure `\ref{fig:combination-ucr}`{=latex}. One can observe that model combination is indeed beneficial, even when MantisV2 is combined with statistical features (Catch22+). The smallest improvement comes from MOMENT, NuTime, and Mantis+, which may be attributed to a lack of complementarity (especially for Mantis+) or low individual performance (particularly for NuTime and MOMENT). The highest performance is achieved when MantisV2 is combined with TiConvNext.
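Cross-model fusion itself is a plain concatenation of embeddings; the per-model standardization below is our own choice to keep encoders with different output scales comparable, not a detail taken from the experiments:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

def fuse_embeddings(train_embs, test_embs):
    """Standardize each model's embeddings on train data, then concatenate.

    train_embs / test_embs: lists of (n_samples, dim_i) arrays, one per model.
    """
    fused_train, fused_test = [], []
    for e_tr, e_te in zip(train_embs, test_embs):
        scaler = StandardScaler().fit(e_tr)  # fit on train only to avoid leakage
        fused_train.append(scaler.transform(e_tr))
        fused_test.append(scaler.transform(e_te))
    return np.hstack(fused_train), np.hstack(fused_test)
```

The fused features are then passed to the same downstream classifier as in the single-model setting.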

The Magic of Logistic Regression {#sec:log-reg}
--------------------------------

In our earlier work [@feofanov2025mantis], we found that using Random Forest on deep features outperforms linear probing (more specifically, layer normalization followed by a linear classification layer). Surprisingly, however, we find that this heavily depends on implementation details. Following @roschmann2025tivit, we couple the Standard Scaler and Logistic Regression from the scikit-learn package [@pedregosa2011scikit]. We set the maximum number of iterations to 500 and leave the other hyperparameters at their defaults, including the solver, which is L-BFGS-B [@zhu1997algorithm; @morales2011remark]. The results on UCR are displayed in Figure `\ref{fig:logreg-ucr}`{=latex}, while the remaining results are deferred to Appendix `\ref{sec:log-reg-exp-appendix}`{=latex}. We find that using Logistic Regression significantly improves the performance of all methods except Catch22+ on the UCR and UEA benchmarks. This is not the case, however, for the EEG benchmark, which indicates that there is no free lunch for practitioners. Interestingly, Logistic Regression is in practice slower than Random Forest when a sufficient number of CPUs is available. We have therefore decided to keep the results for both classifiers.
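The resulting classification head is a two-step scikit-learn pipeline with the stated settings (`max_iter=500`, remaining hyperparameters at their defaults, including the L-BFGS solver):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def make_probe():
    """Standard Scaler followed by Logistic Regression on frozen deep features."""
    return make_pipeline(StandardScaler(), LogisticRegression(max_iter=500))
```

The probe is fitted on the embeddings extracted from the training set and evaluated on those of the test set.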

```{=latex}
\vspace{0.3cm}
```
```{=latex}
\centering
```
![Combination of MantisV2 with other models.](pics/version2/combination-ucr-plot.png){#fig:combination-ucr width="\\linewidth"}

```{=latex}
\centering
```
![Using logistic regression for classification.](pics/version2/ucr-sota-logreg.png){#fig:logreg-ucr width="\\linewidth"}

Final Comparison {#sec:final-comparison}
----------------

Finally, we can now explain Figure `\ref{fig:teaser-plot}`{=latex} presented at the beginning of the paper. For Catch22+, we use Random Forest as the classifier, and Logistic Regression for all other methods. We add self-ensembling results for Mantis+ and MantisV2 and select the two best cross-model fusion results, namely, the combinations of MantisV2 with TiViT-H and with TiConvNext. In addition, we display the performance of MantisV2 when it is fine-tuned on each dataset (more details can be found in Appendix `\ref{sec:appendix-fine-tuning}`{=latex}). The obtained results are very encouraging: using frozen foundation models, we can reach the performance of fine-tuned Mantis, thereby closing the zero-shot gap that was present before.

In addition, by extracting experimental results from [@ismail2019deep] and [@goswami2024moment], we compare Mantis with 11 more baselines, including self-supervised methods such as TS2Vec [@yue2022ts2vec], T-Loss [@franceschi2019tloss], TNC [@tonekaboni2021unsupervised], and TS-TCC [@eldele2021time]; supervised deep learning methods such as CNN [@zhao2017convolutional], Encoder [@serra2018towards], TWIESN [@tanisaro2016time], FCN, MLP, and ResNet [@wang2017time]; and statistical methods such as DTW [@dau2019ucr]. In Figure `\ref{fig:more-baselines-comparison}`{=latex}, we illustrate the average performance over 91 UCR datasets for these 11 models, the official MOMENT pipeline [@goswami2024moment], and all previously considered methods. This result is also encouraging, as foundation models with frozen encoders (more specifically, MantisV2, SE-Mantis+, SE-MantisV2, MantisV2 & TiViT-H, and MantisV2 & TiConvNext) finally beat TS2Vec, which performs self-supervised learning on each dataset independently. This indicates that in time series classification it is possible to learn a universal encoder for all problems without sacrificing performance.

```{=latex}
\centering
```
![Comparison with more baselines (taken from [@goswami2024moment]) on 91 UCR datasets.](pics/version2/comparison-with-ts2vec.png){#fig:more-baselines-comparison width="\\linewidth"}

Conclusion and Future Work
==========================

In this paper, we presented MantisV2 and Mantis+, a new generation of time series classification foundation models pre-trained exclusively on synthetic data. We proposed an enhanced test-time methodology, achieving a positive outcome: we showed that zero-shot capabilities of a pre-trained model can be significantly improved by leveraging intermediate-layer representations, refined output-token aggregation, and self-ensembling. These findings suggest that scaling laws in time series classification are attainable, and that larger models can be trained whose full potential is unlocked at test time. Beyond scaling, several promising research directions remain open, including multi-modal architectures, zero-shot classification via in-context learning, and joint foundation models for classification and forecasting.

Acknowledgements {#acknowledgements .unnumbered}
================

We would like to thank all the members of Paris Noah's Ark Lab for their constructive comments that helped to improve the manuscript.

```{=latex}
\bibliographystyle{apalike}
```
```{=latex}
\appendix
```
```{=latex}
\newpage
```
Experimental Setup {#sec:appendix-exp-setup}
==================

In this section, we provide more details on the chosen datasets and models. We also experimentally justify Catch22+, a strong baseline that we have proposed.

Datasets {#sec:datasets}
--------

We consider four benchmarks. For UCR [@dau2019ucr], we follow the standard protocol and detail the three other benchmarks below.

### UEA Benchmark

From the original set [@bagnall2018uea], we have excluded the AtrialFibrillation and StandWalkJump datasets due to their very small test size and PenDigits due to its very short sequence length. For the InsectWingbeat dataset, we subsampled 1000 examples from the original training set (which contains 30,000 examples) and 1000 from the original test set (of 20,000 examples) to reduce computational overhead while maintaining sufficient variety in the data for robust model evaluation.

### HAR Datasets {#sec:har-data-appendix}

Below we give more details on the human activity recognition datasets used in our experiments, whose characteristics can be found in Table `\ref{tab:har-datasets}`{=latex}.

```{=latex}
\centering
```
::: {#tab:har-datasets}
  Dataset     \# of Channels   Seq. Length   Train Size   Test Size   Num. classes  
  ---------- ---------------- ------------- ------------ ----------- -------------- --
  Ego4D             6             1000         247095       66994          31       
  EMOPain           30             200          968          355           3        
  HHAR-ID           6              500          8716        3419           6        
  HHAR-OOD          6              500         11150        2154           6        
  MP8               8              161          1426         595           4        
  MP50              50             161          1426         595           4        
  UCI-HAR           3              206          5881        2947           6        

  : Characteristics of HAR datasets.
:::

-   *Ego4D* [@grauman2022ego4d] is a multi-modal dataset for ego-centric human activity recognition. The use of the dataset requires signing the license agreement. We take time series data coming from Inertial Measurement Units (IMU) and pre-process them using the script provided by @chen2025comodo.

-   *EMOPain* [@egede2020emopain] is a movement-based chronic pain detection dataset. We downloaded it using the script provided by @gao2024units.

-   *HHAR* [@stisen2015smart] is taken from the WOODS benchmark [@gagnon2022woods]. For the *ID* setting, we merge all domains and make a train-validation-test split in proportions 63.75%-11.25%-25%. For the *OOD* setting, we use three domains for training, namely \"nexus4\", \"s3\", and \"s3mini\", and the \"lgwatch\" domain for testing.

-   *Military Press* (MP) [@singh2023fast] is a dataset for human exercise performance classification. We use two versions of the dataset provided by @sungu2025empirical: *MP8* (8 key body-point coordinates) and *MP50* (all 50 coordinates from 25 body parts).

-   *UCI-HAR* [@anguita2013hardataset] is preprocessed following @NEURIPS2022_194b8dac.

-   In the UCR collection, the following datasets are related to HAR: AllGestureWiimoteX, AllGestureWiimoteY, AllGestureWiimoteZ, CricketX, CricketY, CricketZ, GestureMidAirD1, GestureMidAirD2, GestureMidAirD3, GesturePebbleZ1, GesturePebbleZ2, GunPoint, GunPointAgeSpan, GunPointMaleVersusFemale, GunPointOldVersusYoung, PickupGestureWiimoteZ, ShakeGestureWiimoteZ, UWaveGestureLibraryAll, UWaveGestureLibraryX, UWaveGestureLibraryY, UWaveGestureLibraryZ.

-   In the UEA collection, these datasets are related to HAR: BasicMotions, Cricket, ERing, Epilepsy, Handwriting, Libras, NATOPS, RacketSports, UWaveGestureLibrary.

### EEG Datasets {#sec:eeg-data-appendix}

Below we provide more details on the EEG datasets used in our experiments; their main characteristics are given in Table `\ref{tab:eeg-datasets}`{=latex}.

```{=latex}
\centering
```
::: {#tab:eeg-datasets}
  Dataset               \# of Channels   Seq. Length   Train Size   Test Size   Num. Classes  
  -------------------- ---------------- ------------- ------------ ----------- -------------- --
  Blink                       4              510          500          450           2        
  CAP-ID                      19            3000         25748        10098          6        
  CAP-OOD                     19            3000         27393        8265           6        
  Epilepsy-EEG                1              178           60         11420          2        
  FingerMovements             28             50           316          100           2        
  PCL-ID                      48             750         14405        5650           2        
  PCL-OOD                     48             750          9880        7800           2        
  SEDFx-ID                    4             3000         152178       59678          6        
  SEDFx-OOD                   4             3000         133746       52838          6        
  SelfRegulationSCP1          6              896          268          293           2        
  SelfRegulationSCP2          7             1152          200          180           2        

  : Characteristics of EEG datasets.
:::

-   *Blink* [@chicaiza2021blink] is a dataset for classification of eye blink types. We downloaded it using the script provided by @gao2024units.

-   *Epilepsy-EEG* [@andrzejak2001epilepsydataset] is preprocessed following @NEURIPS2022_194b8dac.

-   *FingerMovements*, *SelfRegulationSCP1* and *SelfRegulationSCP2* are taken from the UEA archive.

-   Three datasets are taken from the WOODS benchmark [@gagnon2022woods]: motor imagery classification with *PCL* [@schalk2004bci2000; @cho2017eeg; @lee2019eeg], and sleep stage classification with *CAP* [@terzano2002atlas] and *SEDFx* [@kemp2000analysis]. For the *ID* setting, we merge all domains and make a train-validation-test split in proportions 63.75%-11.25%-25%. For the *OOD* setting, we split by domain. For PCL, we use \"Cho2017\" for training, \"PhysionetMI\" for validation, and \"Lee2019MI\" for testing. For CAP, we use Machines 0, 1, and 3 for training, Machine 2 for validation, and Machine 4 for testing. For SEDFx, we use ages 20-60 for training, ages 60-80 for validation, and ages 80-100 for testing.
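The 63.75%-11.25%-25% ID proportions correspond to a standard 75%/25% train-test split followed by holding out 15% of the training part for validation. A small sketch of the resulting set sizes (`id_split_sizes` is a hypothetical helper and the rounding scheme is an assumption):

```python
def id_split_sizes(n: int) -> tuple[int, int, int]:
    """Train/validation/test sizes for the 63.75%-11.25%-25% ID split."""
    n_test = round(0.25 * n)      # 25% held out for testing
    n_val = round(0.1125 * n)     # 15% of the remaining 75% for validation
    n_train = n - n_val - n_test  # the rest (63.75%) for training
    return n_train, n_val, n_test

print(id_split_sizes(10_000))  # (6375, 1125, 2500)
```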

Models
------

Below, we give more implementation details on the baselines.

-   *Catch22.* We use the official python implementation `pycatch22==0.4.5`.

-   *NuTime.* We use the pre-trained weights provided by the authors in their [GitHub](https://github.com/chenguolin/NuTime/blob/main/ckpt/checkpoint_bias9.pth) repository, while fixing the hyperparameters of the architecture according to [this](https://github.com/chenguolin/NuTime/blob/main/configs/demo_ft_epilepsy.json) configuration file. In contrast to the original implementation, we do not use their adapter (described in Section 3.4 of their paper) but process all channels independently, as for Mantis and MOMENT. This allows us to use NuTime in the zero-shot feature extraction setting, since their adapter has to be fine-tuned.

-   *TabPFN.* We use the default version of `TabPFNClassifier` from `tabpfn==2.2.1`. As TabPFN does not support more than 10 classes, we use the `ManyClassClassifier` wrapper from TabPFN Extensions [@ye2025tabpfnextensions].

-   *TabICL.* We use the official implementation with default parameters from `tabicl==0.1.3`.

-   *MOMENT.* We use the MOMENT-large model (`d_model`=1024), whose pre-trained weights can be found in the corresponding [HuggingFace](https://huggingface.co/AutonLab/MOMENT-1-large) repository. To handle the multi-channel setup, we process every channel independently and concatenate all the embeddings before passing them to the classification head. In the original paper, the authors consider datasets with a sequence length $\leq 512$ and zero-pad inputs to a fixed size of 512. We also tried interpolating sequences to length 512 instead, which did not affect MOMENT's performance; we therefore adopted interpolation, as it allows us to evaluate MOMENT on any sequence length.

-   *TiRex.* We use the official implementation with default parameters from `tirex-ts==1.1.1`.

-   *Chronos2.* We use the official implementation with default parameters from `chronos-forecasting==2.0.0`.

-   *TiViT-H* and *TiConvNext*. We use the official implementation available at [GitHub](https://github.com/ExplainableML/TiViT/tree/main). For CLIP ViT-H, we use [this checkpoint](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K), while CLIP ConvNext's checkpoint can be found [here](https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg) [@wolf2020transformers; @radford2021CLIP; @ilharco2021openclip; @liu2022convnext; @schuhmann2022laionb].
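The fixed-length resizing described for MOMENT above can be sketched as follows; `resize_to_512` is a hypothetical helper, and linear interpolation is an assumption about the exact resizing scheme:

```python
import numpy as np

def resize_to_512(x: np.ndarray, target_len: int = 512) -> np.ndarray:
    """Interpolate a 1-D series to a fixed length (illustrative sketch)."""
    src = np.linspace(0.0, 1.0, num=len(x))      # original sampling grid
    dst = np.linspace(0.0, 1.0, num=target_len)  # target grid of 512 points
    return np.interp(dst, src, x)

# Works for sequences both shorter and longer than 512.
short = np.sin(np.linspace(0.0, 6.28, 100))
long = np.sin(np.linspace(0.0, 6.28, 3000))
```

Each channel of a multivariate series is resized independently before being embedded.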

Catch22+ {#sec:catch22plus}
--------

`\setlength{\intextsep}{-5pt}`{=latex} `\setlength{\columnsep}{10pt}`{=latex}

```{=latex}
\begin{wrapfigure}[15]{r}{0.4\textwidth}
\centering    
\vspace{0.1cm}
\includegraphics[width=0.825\linewidth]{pics/version2/ablation-catch22.pdf}
\caption{Catch22+ ablation study.}
\label{fig:ablation-catch22}
\end{wrapfigure}
```
Catch22 [@lubba2019catch22] is a manually selected set of time series statistics with high discriminative power. Although the authors later proposed an improvement by adding the global mean and standard deviation to the set (so-called Catch24), we found that it can be improved further by splitting the input into non-overlapping patches (following @auer2025tirexclassification, we set their number to 8) and computing the mean and standard deviation of each patch. We refer to the use of these per-patch means and standard deviations as classification features as Stats. Figure `\ref{fig:ablation-catch22}`{=latex} illustrates the average accuracy of different configurations across the UCR datasets. One can notice that combining Catch22 with Stats yields a substantial improvement of 3.58%, while the improvement from adding the global mean and standard deviation (Catch24) is more modest (0.75%). For this reason, we consider Catch22+ (the concatenation of Catch22 and Stats) as a baseline in the main part of our paper.
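As a minimal sketch, the Stats features can be computed as follows (`stats_features` is a hypothetical helper; only the 8-patch setting comes from the text above):

```python
import numpy as np

def stats_features(x: np.ndarray, n_patches: int = 8) -> np.ndarray:
    """Mean and standard deviation of each non-overlapping patch ("Stats")."""
    patches = np.array_split(x, n_patches)  # handles lengths not divisible by 8
    means = np.array([p.mean() for p in patches])
    stds = np.array([p.std() for p in patches])
    return np.concatenate([means, stds])    # 2 * n_patches features

x = np.random.default_rng(0).normal(size=160)
features = stats_features(x)  # 16 features for 8 patches
```

Catch22+ then simply concatenates these 16 values with the 22 Catch22 statistics per channel.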

Additional Experiments
======================

In this section, we present our additional experimental results that complement Section `\ref{sec:key-improvements}`{=latex}.

Layer by Layer, Epoch by Epoch {#sec:layer-by-layer-appendix}
------------------------------

Figure `\ref{fig:layer-by-layer-100k-1000epochs}`{=latex} illustrates the downstream performance on UCR over 1000 pre-training epochs for all transformer layers of Mantis pre-trained on 100,000 samples. We observe an interesting pattern: the intermediate layers start to dominate in performance as the number of updates increases. This suggests that the final layer may overfit the contrastive objective, so the intermediate layers generalize better.

```{=latex}
\vspace{0.5cm}
```
```{=latex}
\centering
```
![Layer-by-layer downstream performance (UCR) over 1000 epochs with 100K pre-training samples.](pics/version2/layer-by-layer-100k-1000epochs.png){#fig:layer-by-layer-100k-1000epochs width="0.65\\linewidth"}

```{=latex}
\vspace{0.35cm}
```
In Figure `\ref{fig:final-pre-training}`{=latex}, we show the layer-by-layer downstream performance on UCR over 200 pre-training epochs with 2 million synthetic time series samples. These curves are used to derive final checkpoints for Mantis+ and MantisV2.

```{=latex}
\centering
```
![Final pre-training. 200 epochs with 2M pre-training samples.](pics/version2/layer-by-layer-epoch-by-epoch-final-pretraining.png){#fig:final-pre-training width="\\linewidth"}

Aggregation of Output Tokens {#sec:appendix-output-token}
----------------------------

In Section `\ref{sec:output-token}`{=latex}, we showed that the combined output-token strategy improves performance when intermediate representations are used. This may be because, at a small transformer depth, the classification token does not aggregate the information from the remaining tokens well. To test this hypothesis, we compare the same three strategies (the cls token, the mean token, and their combination) when the last transformer layer is used. From the results illustrated in Figure `\ref{fig:output-token-last-layer}`{=latex}, we can see that the combined strategy is no longer clearly beneficial: although it slightly improves the performance of Mantis+, this is not the case for MantisV2.

```{=latex}
\centering
```
![Ablation study on output-token aggregation when the last transformer layer is used.](pics/version2/ablation-token-last-layer-plot.png){#fig:output-token-last-layer width="0.7\\linewidth"}
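For illustration, the three aggregation strategies can be sketched as follows (a minimal sketch; `aggregate_tokens` is a hypothetical helper, and placing the classification token first in the output sequence is an assumption):

```python
import numpy as np

def aggregate_tokens(tokens: np.ndarray, strategy: str = "combined") -> np.ndarray:
    """Aggregate an (n_tokens, d) output sequence into a single embedding."""
    cls_tok = tokens[0]                 # classification token (assumed first)
    mean_tok = tokens[1:].mean(axis=0)  # mean over the remaining tokens
    if strategy == "cls":
        return cls_tok
    if strategy == "mean":
        return mean_tok
    return np.concatenate([cls_tok, mean_tok])  # "combined": both views

tokens = np.random.default_rng(0).normal(size=(33, 256))
emb = aggregate_tokens(tokens)  # combined embedding of dimension 2 * 256
```

Note that the combined strategy doubles the embedding dimension fed to the downstream classifier.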

Architecture Refining {#sec:appendix-arch-refine}
---------------------

In Table `\ref{tab:v1-vs-v2-transformer-1m}`{=latex}, we compare the performance of Mantis with the first version of the transformer architecture (classical activations, sinusoidal positional encoding) and the second version (new activations, RoPE) when pre-trained on 1 million synthetic examples. Note that with 1 million samples the final layer is no longer the most powerful, so we examine all intermediate layers. As we can see, the second version of the transformer yields slightly better results.

```{=latex}
\vspace{0.35cm}
```
```{=latex}
\centering
```
::: {#tab:v1-vs-v2-transformer-1m}
  -------- ------------------------------------------- -------------------------------------------
   Layer    `\multirow{2}{*}{V1 Transformer}`{=latex}   `\multirow{2}{*}{V2 Transformer}`{=latex}
   Number                                              
     1                       0.7872                                      0.7669
     2                       0.7979                                      0.7987
     3                       0.7989                                    **0.7994**
     4                       0.7927                                      0.7986
     5                       0.7927                                      0.7919
     6                       0.7892                                      0.7800
  -------- ------------------------------------------- -------------------------------------------

  : Performance comparison between the first and the second version of the transformer architecture (keeping all other parameters of the Mantis architecture fixed as in the first version). Both architectures were pre-trained on 1 million synthetic examples generated by CauKer; the last-epoch accuracy on the UCR benchmark is reported.
:::

Fine-tuning {#sec:appendix-fine-tuning}
-----------

Following @feofanov2025mantis, we append a prediction head after the encoder and fine-tune all layers on the training data of each dataset. The prediction head is a layer normalization step followed by a linear layer. We use a fixed fine-tuning scheme: we minimize the cross-entropy loss for 500 epochs with a batch size of 128, using the AdamW optimizer [@loshchilov2017fixing] with a learning rate of $2\cdot10^{-4}$ and a weight decay of 0.05. We report the last-epoch performance averaged over 3 experimental runs.
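A minimal NumPy sketch of the prediction head's forward pass (layer normalization followed by a linear layer); the function name, shapes, and initialization are illustrative assumptions, not the actual training code:

```python
import numpy as np

def prediction_head(z, gamma, beta, W, b, eps=1e-5):
    """LayerNorm over the feature dimension, then a linear classifier."""
    mu = z.mean(axis=-1, keepdims=True)
    var = z.var(axis=-1, keepdims=True)
    z_ln = gamma * (z - mu) / np.sqrt(var + eps) + beta  # layer normalization
    return z_ln @ W + b                                  # class logits

rng = np.random.default_rng(0)
d, n_classes = 256, 6                      # illustrative dimensions
z = rng.normal(size=(4, d))                # a batch of 4 encoder embeddings
logits = prediction_head(z, np.ones(d), np.zeros(d),
                         0.02 * rng.normal(size=(d, n_classes)),
                         np.zeros(n_classes))
```

During fine-tuning, both the head parameters and all encoder layers receive gradient updates.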

First, we explore whether it is reasonable to fine-tune a truncated model instead of keeping all the layers, motivated by the success of intermediate layers for zero-shot feature extraction. As we can see in Figure `\ref{fig:fine-tuning-layer}`{=latex}, the answer is negative: keeping the original model size leads to superior performance. We conclude that, despite their low utility at the end of pre-training, the later transformer layers remain important for fine-tuning, as they increase the model's capacity to fit the training data.

```{=latex}
\vspace{0.35cm}
```
```{=latex}
\centering
```
![Truncated Model vs Full Model.](pics/version2/ablation-fine-tuning-layer.png){#fig:fine-tuning-layer width="0.9\\linewidth"}

```{=latex}
\centering
```
![Output-token aggregation strategies for Truncated Model.](pics/version2/ablation-fine-tuning-token.png){#fig:fine-tuning-token width="0.9\\linewidth"}

```{=latex}
\vspace{0.4cm}
```
As a side experiment, we also tested whether the output-token aggregation strategy proposed in Section `\ref{sec:output-token}`{=latex} improves the fine-tuning performance of the truncated Mantis+ and MantisV2. As can be seen in Figure `\ref{fig:fine-tuning-token}`{=latex}, incorporating the information from the other tokens has a positive impact, though it is smaller than in the zero-shot experiment.

Complete Results {#sec:complete-results}
================

Below we provide the complete tables that display the per-dataset performance.

Tables for Section `\ref{sec:sota-exp-res-rf}`{=latex}
------------------------------------------------------

First, we show results for Section `\ref{sec:sota-exp-res-rf}`{=latex}, which compares the models using a random forest classifier. Table `\ref{tab:sota-ucr-res-rf-a}`{=latex} and Table `\ref{tab:sota-ucr-res-rf-b}`{=latex} correspond to the results on UCR, while Table `\ref{tab:uea-sota-rf}`{=latex} refers to UEA.

```{=latex}
\newcommand{\scalefactor}{0.67}
```
```{=latex}
\vspace{0.3cm}
```
```{=latex}
\centering
```
`\scalebox{0.55}{
    \begin{tabular}{l|lllllllllll}
\toprule
                               & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2\\
\midrule
ACSF1 & $0.8233_{\pm 0.021}$ & 0.8 & 0.81 & $0.82_{\pm 0.026}$ & $0.85_{\pm 0.01}$ & $0.8467_{\pm 0.012}$ & $0.85_{\pm 0.0}$ & \textbf{0.8667}$_{\pm 0.006}$ & $0.75_{\pm 0.02}$ & $0.8033_{\pm 0.006}$ & $0.8233_{\pm 0.012}$\\
Adiac & $0.734_{\pm 0.007}$ & 0.8031 & 0.8031 & $0.7894_{\pm 0.003}$ & $0.7852_{\pm 0.011}$ & $0.8107_{\pm 0.009}$ & $0.7059_{\pm 0.007}$ & $0.6547_{\pm 0.018}$ & $0.7357_{\pm 0.006}$ & $0.8321_{\pm 0.006}$ & \textbf{0.8483}$_{\pm 0.003}$\\
AllGestureWiimoteX & $0.5957_{\pm 0.01}$ & 0.6229 & 0.5043 & $0.6138_{\pm 0.002}$ & $0.6495_{\pm 0.012}$ & $0.6038_{\pm 0.005}$ & $0.6029_{\pm 0.011}$ & $0.6676_{\pm 0.002}$ & $0.6505_{\pm 0.005}$ & $0.6414_{\pm 0.012}$ & \textbf{0.6843}$_{\pm 0.006}$\\
AllGestureWiimoteY & $0.6319_{\pm 0.016}$ & 0.6329 & 0.5114 & $0.6624_{\pm 0.004}$ & \textbf{0.7052}$_{\pm 0.019}$ & $0.6419_{\pm 0.002}$ & $0.6514_{\pm 0.01}$ & $0.6805_{\pm 0.011}$ & $0.63_{\pm 0.01}$ & $0.691_{\pm 0.006}$ & $0.691_{\pm 0.004}$\\
AllGestureWiimoteZ & $0.5462_{\pm 0.005}$ & 0.5329 & 0.4529 & $0.5767_{\pm 0.007}$ & $0.6057_{\pm 0.001}$ & $0.6205_{\pm 0.004}$ & $0.6129_{\pm 0.014}$ & $0.6067_{\pm 0.011}$ & $0.6224_{\pm 0.009}$ & $0.6452_{\pm 0.004}$ & \textbf{0.6881}$_{\pm 0.001}$\\
ArrowHead & $0.741_{\pm 0.012}$ & 0.7543 & 0.7429 & $0.7943_{\pm 0.015}$ & $0.7886_{\pm 0.01}$ & $0.779_{\pm 0.037}$ & $0.7638_{\pm 0.012}$ & $0.7714_{\pm 0.006}$ & $0.7733_{\pm 0.023}$ & $0.7714_{\pm 0.006}$ & \textbf{0.8057}$_{\pm 0.015}$\\
BME & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0} & 0.98 & $0.9867_{\pm 0.007}$ & $0.96_{\pm 0.007}$ & $0.9667_{\pm 0.007}$ & $0.9778_{\pm 0.017}$ & $0.9933_{\pm 0.007}$ & $0.8467_{\pm 0.035}$ & $0.9667_{\pm 0.007}$ & $0.9956_{\pm 0.004}$\\
Beef & $0.6889_{\pm 0.051}$ & 0.8 & 0.7667 & $0.7889_{\pm 0.019}$ & \textbf{0.8444}$_{\pm 0.019}$ & $0.7333_{\pm 0.033}$ & $0.7556_{\pm 0.019}$ & $0.8222_{\pm 0.019}$ & $0.6667_{\pm 0.067}$ & $0.6333_{\pm 0.033}$ & $0.6889_{\pm 0.019}$\\
BeetleFly & $0.7833_{\pm 0.029}$ & 0.9 & 0.8 & $0.95_{\pm 0.0}$ & $0.9167_{\pm 0.029}$ & $0.8667_{\pm 0.029}$ & $0.9333_{\pm 0.029}$ & $0.95_{\pm 0.0}$ & $0.8_{\pm 0.05}$ & $0.9167_{\pm 0.029}$ & \textbf{0.9833}$_{\pm 0.029}$\\
BirdChicken & $0.85_{\pm 0.0}$ & 0.85 & 0.75 & $0.9833_{\pm 0.029}$ & $0.8833_{\pm 0.029}$ & $0.9_{\pm 0.0}$ & $0.85_{\pm 0.087}$ & \textbf{1.0}$_{\pm 0.0}$ & $0.9667_{\pm 0.029}$ & $0.8833_{\pm 0.029}$ & $0.9_{\pm 0.0}$\\
CBF & $0.9763_{\pm 0.002}$ & 0.9133 & 0.9244 & $0.9678_{\pm 0.005}$ & $0.9915_{\pm 0.003}$ & $0.993_{\pm 0.002}$ & \textbf{0.9978}$_{\pm 0.001}$ & $0.9937_{\pm 0.002}$ & $0.9726_{\pm 0.002}$ & $0.9859_{\pm 0.002}$ & $0.9948_{\pm 0.002}$\\
Car & $0.7556_{\pm 0.025}$ & 0.7833 & \textbf{0.8167} & $0.7556_{\pm 0.025}$ & $0.6667_{\pm 0.017}$ & $0.7389_{\pm 0.025}$ & $0.7389_{\pm 0.019}$ & $0.7222_{\pm 0.035}$ & $0.7667_{\pm 0.029}$ & $0.7944_{\pm 0.01}$ & $0.7556_{\pm 0.025}$\\
Chinatown & $0.9796_{\pm 0.003}$ & \textbf{0.9854} & 0.9796 & $0.9776_{\pm 0.002}$ & $0.9689_{\pm 0.006}$ & $0.9718_{\pm 0.004}$ & $0.9631_{\pm 0.002}$ & $0.8717_{\pm 0.02}$ & $0.9281_{\pm 0.008}$ & $0.9495_{\pm 0.003}$ & $0.9534_{\pm 0.003}$\\
ChlorineConcentration & $0.6682_{\pm 0.001}$ & 0.95 & \textbf{0.9773} & $0.6973_{\pm 0.007}$ & $0.7018_{\pm 0.004}$ & $0.7095_{\pm 0.003}$ & $0.7076_{\pm 0.001}$ & $0.7073_{\pm 0.002}$ & $0.671_{\pm 0.002}$ & $0.6874_{\pm 0.004}$ & $0.7041_{\pm 0.001}$\\
CinCECGTorso & $0.8872_{\pm 0.013}$ & 0.8341 & 0.8225 & $0.6831_{\pm 0.005}$ & \textbf{0.8976}$_{\pm 0.002}$ & $0.8135_{\pm 0.018}$ & $0.8164_{\pm 0.004}$ & $0.8101_{\pm 0.012}$ & $0.7314_{\pm 0.016}$ & $0.7384_{\pm 0.005}$ & $0.8089_{\pm 0.01}$\\
Coffee & $0.9881_{\pm 0.021}$ & 0.9643 & \textbf{1.0} & $0.9286_{\pm 0.0}$ & $0.9881_{\pm 0.021}$ & $0.9286_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & $0.9286_{\pm 0.0}$ & $0.9762_{\pm 0.021}$ & \textbf{1.0}$_{\pm 0.0}$\\
Computers & $0.7347_{\pm 0.006}$ & 0.62 & 0.656 & $0.6267_{\pm 0.022}$ & $0.76_{\pm 0.011}$ & $0.7413_{\pm 0.008}$ & $0.7333_{\pm 0.008}$ & $0.7413_{\pm 0.014}$ & \textbf{0.7627}$_{\pm 0.01}$ & $0.74_{\pm 0.004}$ & $0.7213_{\pm 0.012}$\\
CricketX & $0.6974_{\pm 0.01}$ & 0.6667 & 0.6641 & $0.6855_{\pm 0.01}$ & $0.6923_{\pm 0.017}$ & $0.6991_{\pm 0.017}$ & $0.7137_{\pm 0.005}$ & $0.6641_{\pm 0.003}$ & $0.6667_{\pm 0.01}$ & $0.735_{\pm 0.007}$ & \textbf{0.7735}$_{\pm 0.011}$\\
CricketY & $0.6923_{\pm 0.008}$ & 0.7 & 0.6333 & $0.6735_{\pm 0.005}$ & $0.7222_{\pm 0.01}$ & $0.7137_{\pm 0.012}$ & $0.7128_{\pm 0.012}$ & $0.665_{\pm 0.004}$ & $0.6889_{\pm 0.01}$ & $0.7581_{\pm 0.009}$ & \textbf{0.788}$_{\pm 0.02}$\\
CricketZ & $0.7427_{\pm 0.008}$ & 0.6718 & 0.6692 & $0.7487_{\pm 0.017}$ & $0.7248_{\pm 0.004}$ & $0.7299_{\pm 0.004}$ & $0.7641_{\pm 0.007}$ & $0.712_{\pm 0.008}$ & $0.6778_{\pm 0.005}$ & $0.7795_{\pm 0.008}$ & \textbf{0.8171}$_{\pm 0.005}$\\
Crop & $0.7523_{\pm 0.001}$ & 0.7989 & \textbf{0.812} & $0.7141_{\pm 0.001}$ & $0.7201_{\pm 0.001}$ & $0.7208_{\pm 0.003}$ & $0.6552_{\pm 0.002}$ & $0.6409_{\pm 0.002}$ & $0.6754_{\pm 0.001}$ & $0.731_{\pm 0.001}$ & $0.7341_{\pm 0.0}$\\
DiatomSizeReduction & $0.9401_{\pm 0.008}$ & \textbf{0.9608} & 0.951 & $0.8824_{\pm 0.006}$ & $0.8617_{\pm 0.005}$ & $0.8998_{\pm 0.014}$ & $0.8649_{\pm 0.016}$ & $0.9009_{\pm 0.008}$ & $0.8617_{\pm 0.011}$ & $0.829_{\pm 0.03}$ & $0.8301_{\pm 0.02}$\\
DistalPhalanxOutlineAgeGroup & $0.717_{\pm 0.004}$ & 0.7626 & 0.7626 & $0.753_{\pm 0.004}$ & $0.741_{\pm 0.014}$ & \textbf{0.7818}$_{\pm 0.015}$ & \textbf{0.7818}$_{\pm 0.008}$ & $0.777_{\pm 0.007}$ & $0.7386_{\pm 0.004}$ & \textbf{0.7818}$_{\pm 0.015}$ & $0.7602_{\pm 0.008}$\\
DistalPhalanxOutlineCorrect & $0.7911_{\pm 0.002}$ & 0.7826 & 0.7754 & $0.7959_{\pm 0.006}$ & $0.7874_{\pm 0.008}$ & $0.8031_{\pm 0.002}$ & $0.779_{\pm 0.01}$ & $0.7548_{\pm 0.004}$ & $0.7778_{\pm 0.006}$ & \textbf{0.8188}$_{\pm 0.004}$ & $0.7886_{\pm 0.002}$\\
DistalPhalanxTW & $0.6475_{\pm 0.019}$ & 0.6978 & 0.6835 & $0.6571_{\pm 0.004}$ & $0.6667_{\pm 0.011}$ & $0.693_{\pm 0.015}$ & $0.6715_{\pm 0.004}$ & $0.6954_{\pm 0.015}$ & $0.6882_{\pm 0.018}$ & $0.6882_{\pm 0.015}$ & \textbf{0.7074}$_{\pm 0.011}$\\
DodgerLoopDay & $0.6417_{\pm 0.052}$ & 0.6125 & \textbf{0.725} & $0.4583_{\pm 0.052}$ & $0.5042_{\pm 0.019}$ & $0.4917_{\pm 0.007}$ & $0.4417_{\pm 0.014}$ & $0.5333_{\pm 0.014}$ & $0.5417_{\pm 0.029}$ & $0.5875_{\pm 0.025}$ & $0.5333_{\pm 0.019}$\\
DodgerLoopGame & \textbf{0.8333}$_{\pm 0.007}$ & 0.7899 & 0.7971 & $0.8116_{\pm 0.007}$ & $0.7633_{\pm 0.004}$ & \textbf{0.8333}$_{\pm 0.014}$ & $0.8164_{\pm 0.004}$ & $0.7826_{\pm 0.0}$ & $0.7826_{\pm 0.007}$ & $0.8164_{\pm 0.011}$ & $0.7995_{\pm 0.022}$\\
DodgerLoopWeekend & \textbf{0.9855}$_{\pm 0.0}$ & \textbf{0.9855} & 0.9783 & $0.9758_{\pm 0.004}$ & $0.9493_{\pm 0.007}$ & $0.9638_{\pm 0.013}$ & $0.9275_{\pm 0.0}$ & $0.901_{\pm 0.015}$ & $0.9589_{\pm 0.004}$ & \textbf{0.9855}$_{\pm 0.0}$ & $0.9517_{\pm 0.004}$\\
ECG200 & $0.85_{\pm 0.017}$ & \textbf{0.89} & 0.88 & $0.87_{\pm 0.01}$ & $0.86_{\pm 0.017}$ & $0.84_{\pm 0.01}$ & $0.83_{\pm 0.0}$ & $0.7833_{\pm 0.025}$ & $0.8233_{\pm 0.006}$ & $0.8167_{\pm 0.015}$ & $0.8367_{\pm 0.012}$\\
ECG5000 & $0.9398_{\pm 0.001}$ & 0.942 & \textbf{0.9447} & $0.938_{\pm 0.001}$ & $0.9331_{\pm 0.002}$ & $0.9331_{\pm 0.001}$ & $0.9403_{\pm 0.001}$ & $0.9386_{\pm 0.001}$ & $0.9332_{\pm 0.0}$ & $0.9374_{\pm 0.001}$ & $0.9381_{\pm 0.0}$\\
ECGFiveDays & $0.7975_{\pm 0.013}$ & 0.9245 & \textbf{0.9826} & $0.813_{\pm 0.002}$ & $0.8955_{\pm 0.022}$ & $0.9102_{\pm 0.02}$ & $0.9346_{\pm 0.015}$ & $0.971_{\pm 0.004}$ & $0.7871_{\pm 0.008}$ & $0.8448_{\pm 0.016}$ & $0.8707_{\pm 0.001}$\\
EOGHorizontalSignal & $0.5948_{\pm 0.004}$ & 0.5276 & 0.5 & $0.5654_{\pm 0.01}$ & $0.5313_{\pm 0.02}$ & $0.5387_{\pm 0.013}$ & $0.5387_{\pm 0.007}$ & $0.5626_{\pm 0.007}$ & $0.4521_{\pm 0.008}$ & \textbf{0.6013}$_{\pm 0.003}$ & $0.5884_{\pm 0.006}$\\
EOGVerticalSignal & \textbf{0.5092}$_{\pm 0.002}$ & 0.489 & 0.4337 & $0.453_{\pm 0.008}$ & $0.4144_{\pm 0.003}$ & $0.384_{\pm 0.003}$ & $0.442_{\pm 0.011}$ & $0.3757_{\pm 0.007}$ & $0.267_{\pm 0.002}$ & $0.4696_{\pm 0.015}$ & $0.4816_{\pm 0.011}$\\
Earthquakes & \textbf{0.7506}$_{\pm 0.008}$ & 0.7482 & 0.7482 & $0.7458_{\pm 0.011}$ & $0.7482_{\pm 0.0}$ & $0.7458_{\pm 0.004}$ & $0.741_{\pm 0.0}$ & $0.7458_{\pm 0.004}$ & $0.7434_{\pm 0.004}$ & $0.741_{\pm 0.012}$ & $0.7434_{\pm 0.004}$\\
ElectricDevices & $0.7396_{\pm 0.003}$ & 0.7025 & 0.6614 & $0.7258_{\pm 0.002}$ & $0.704_{\pm 0.003}$ & $0.7489_{\pm 0.002}$ & \textbf{0.762}$_{\pm 0.003}$ & $0.7617_{\pm 0.002}$ & $0.7066_{\pm 0.002}$ & $0.7269_{\pm 0.001}$ & $0.7424_{\pm 0.004}$\\
EthanolLevel & $0.38_{\pm 0.013}$ & \textbf{0.848} & 0.694 & $0.44_{\pm 0.007}$ & $0.306_{\pm 0.004}$ & $0.4367_{\pm 0.005}$ & $0.4047_{\pm 0.005}$ & $0.348_{\pm 0.007}$ & $0.3487_{\pm 0.005}$ & $0.332_{\pm 0.003}$ & $0.3793_{\pm 0.014}$\\
FaceAll & $0.7682_{\pm 0.028}$ & \textbf{0.8077} & 0.771 & $0.7178_{\pm 0.003}$ & $0.7651_{\pm 0.015}$ & $0.7024_{\pm 0.005}$ & $0.6911_{\pm 0.006}$ & $0.6866_{\pm 0.004}$ & $0.6487_{\pm 0.002}$ & $0.6921_{\pm 0.005}$ & $0.7205_{\pm 0.003}$\\
FaceFour & $0.8977_{\pm 0.02}$ & 0.9091 & 0.8864 & $0.7689_{\pm 0.046}$ & $0.7917_{\pm 0.046}$ & $0.6818_{\pm 0.057}$ & $0.6477_{\pm 0.05}$ & $0.8182_{\pm 0.011}$ & $0.803_{\pm 0.013}$ & $0.7841_{\pm 0.063}$ & \textbf{0.9167}$_{\pm 0.007}$\\
FacesUCR & $0.8558_{\pm 0.004}$ & 0.8766 & \textbf{0.8771} & $0.7959_{\pm 0.001}$ & $0.7995_{\pm 0.005}$ & $0.7603_{\pm 0.004}$ & $0.7815_{\pm 0.005}$ & $0.7665_{\pm 0.006}$ & $0.714_{\pm 0.009}$ & $0.8016_{\pm 0.005}$ & $0.8439_{\pm 0.006}$\\
FiftyWords & $0.726_{\pm 0.006}$ & \textbf{0.7385} & 0.7165 & $0.7099_{\pm 0.004}$ & $0.6381_{\pm 0.01}$ & $0.6689_{\pm 0.009}$ & $0.6425_{\pm 0.003}$ & $0.674_{\pm 0.006}$ & $0.6139_{\pm 0.005}$ & $0.6813_{\pm 0.011}$ & $0.7106_{\pm 0.001}$\\
Fish & $0.7619_{\pm 0.013}$ & 0.88 & 0.8857 & $0.8476_{\pm 0.014}$ & $0.8324_{\pm 0.012}$ & $0.8629_{\pm 0.006}$ & $0.9029_{\pm 0.011}$ & $0.9067_{\pm 0.012}$ & $0.9162_{\pm 0.007}$ & $0.8686_{\pm 0.01}$ & \textbf{0.9314}$_{\pm 0.0}$\\
FordA & $0.9101_{\pm 0.004}$ & 0.897 & 0.8758 & $0.8611_{\pm 0.006}$ & \textbf{0.9409}$_{\pm 0.002}$ & $0.9174_{\pm 0.002}$ & $0.8975_{\pm 0.004}$ & $0.9081_{\pm 0.001}$ & $0.8934_{\pm 0.003}$ & $0.896_{\pm 0.005}$ & $0.9303_{\pm 0.0}$\\
FordB & $0.7292_{\pm 0.007}$ & 0.7556 & 0.7136 & $0.7374_{\pm 0.002}$ & \textbf{0.814}$_{\pm 0.006}$ & $0.7918_{\pm 0.004}$ & $0.7794_{\pm 0.003}$ & $0.7626_{\pm 0.001}$ & $0.7527_{\pm 0.008}$ & $0.7584_{\pm 0.002}$ & $0.7979_{\pm 0.003}$\\
FreezerRegularTrain & \textbf{0.9996}$_{\pm 0.0}$ & 0.9986 & 0.9877 & $0.9108_{\pm 0.005}$ & $0.9261_{\pm 0.002}$ & $0.9718_{\pm 0.005}$ & $0.9827_{\pm 0.002}$ & $0.9908_{\pm 0.001}$ & $0.9766_{\pm 0.002}$ & $0.9608_{\pm 0.003}$ & $0.9857_{\pm 0.001}$\\
FreezerSmallTrain & $0.9251_{\pm 0.002}$ & 0.8933 & 0.8098 & $0.7878_{\pm 0.011}$ & $0.8208_{\pm 0.016}$ & $0.9185_{\pm 0.009}$ & $0.9353_{\pm 0.005}$ & \textbf{0.9559}$_{\pm 0.005}$ & $0.9395_{\pm 0.004}$ & $0.8713_{\pm 0.018}$ & $0.935_{\pm 0.004}$\\
Fungi & $0.9229_{\pm 0.041}$ & 0.8656 & 0.7849 & \textbf{1.0}$_{\pm 0.0}$ & $0.8405_{\pm 0.05}$ & $0.9104_{\pm 0.008}$ & $0.914_{\pm 0.014}$ & $0.8835_{\pm 0.03}$ & $0.7007_{\pm 0.031}$ & $0.7796_{\pm 0.019}$ & $0.9427_{\pm 0.028}$\\
GestureMidAirD1 & $0.6795_{\pm 0.036}$ & 0.6231 & 0.6769 & $0.6744_{\pm 0.009}$ & $0.6615_{\pm 0.013}$ & $0.6692_{\pm 0.023}$ & $0.7_{\pm 0.008}$ & \textbf{0.7359}$_{\pm 0.016}$ & $0.6564_{\pm 0.009}$ & $0.7077_{\pm 0.008}$ & $0.7128_{\pm 0.018}$\\
GestureMidAirD2 & $0.6205_{\pm 0.016}$ & 0.5615 & 0.5846 & $0.5769_{\pm 0.013}$ & $0.6513_{\pm 0.031}$ & \textbf{0.7436}$_{\pm 0.024}$ & $0.6667_{\pm 0.019}$ & $0.6795_{\pm 0.012}$ & $0.5667_{\pm 0.012}$ & $0.6077_{\pm 0.013}$ & $0.6154_{\pm 0.013}$\\
GestureMidAirD3 & $0.3923_{\pm 0.031}$ & 0.3692 & 0.3615 & $0.3846_{\pm 0.013}$ & $0.3897_{\pm 0.031}$ & $0.3538_{\pm 0.023}$ & $0.441_{\pm 0.019}$ & \textbf{0.4692}$_{\pm 0.008}$ & $0.3923_{\pm 0.008}$ & $0.4205_{\pm 0.024}$ & $0.4385_{\pm 0.013}$\\
GesturePebbleZ1 & $0.8779_{\pm 0.012}$ & 0.8488 & 0.8779 & $0.8779_{\pm 0.006}$ & $0.8702_{\pm 0.007}$ & $0.8857_{\pm 0.009}$ & $0.8663_{\pm 0.0}$ & $0.8295_{\pm 0.003}$ & $0.8915_{\pm 0.003}$ & $0.9167_{\pm 0.003}$ & \textbf{0.9205}$_{\pm 0.007}$\\
GesturePebbleZ2 & $0.7384_{\pm 0.004}$ & 0.7848 & 0.7532 & $0.865_{\pm 0.01}$ & $0.8755_{\pm 0.013}$ & $0.8186_{\pm 0.007}$ & $0.8439_{\pm 0.02}$ & $0.8439_{\pm 0.01}$ & $0.8376_{\pm 0.01}$ & \textbf{0.9388}$_{\pm 0.004}$ & $0.9114_{\pm 0.011}$\\
GunPoint & $0.9467_{\pm 0.018}$ & 0.9667 & 0.9533 & $0.9867_{\pm 0.007}$ & $0.9644_{\pm 0.01}$ & \textbf{0.9956}$_{\pm 0.004}$ & $0.9933_{\pm 0.0}$ & $0.9667_{\pm 0.018}$ & $0.9422_{\pm 0.004}$ & $0.98_{\pm 0.0}$ & $0.9778_{\pm 0.004}$\\
GunPointAgeSpan & $0.9884_{\pm 0.002}$ & \textbf{0.9905} & 0.9842 & $0.9525_{\pm 0.008}$ & $0.9821_{\pm 0.005}$ & $0.9852_{\pm 0.002}$ & $0.9873_{\pm 0.0}$ & $0.9673_{\pm 0.01}$ & $0.9715_{\pm 0.003}$ & $0.9852_{\pm 0.002}$ & $0.9842_{\pm 0.003}$\\
GunPointMaleVersusFemale & $0.9937_{\pm 0.0}$ & 0.9968 & \textbf{1.0} & $0.9916_{\pm 0.004}$ & $0.9947_{\pm 0.004}$ & $0.9884_{\pm 0.002}$ & $0.9968_{\pm 0.003}$ & $0.9979_{\pm 0.004}$ & $0.9789_{\pm 0.005}$ & $0.9905_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$\\
GunPointOldVersusYoung & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0} & \textbf{1.0} & $0.9598_{\pm 0.005}$ & $0.9672_{\pm 0.007}$ & $0.9778_{\pm 0.003}$ & $0.9852_{\pm 0.005}$ & $0.9937_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & $0.9968_{\pm 0.0}$ & $0.9968_{\pm 0.0}$\\
Ham & $0.6063_{\pm 0.005}$ & \textbf{0.7429} & 0.7238 & $0.7333_{\pm 0.019}$ & $0.7143_{\pm 0.01}$ & $0.6413_{\pm 0.015}$ & $0.6508_{\pm 0.04}$ & $0.6349_{\pm 0.02}$ & $0.7111_{\pm 0.048}$ & $0.7016_{\pm 0.02}$ & $0.6698_{\pm 0.031}$\\
HandOutlines & $0.9036_{\pm 0.004}$ & 0.9162 & \textbf{0.927} & $0.909_{\pm 0.006}$ & $0.8757_{\pm 0.016}$ & $0.9108_{\pm 0.003}$ & $0.8874_{\pm 0.009}$ & $0.8874_{\pm 0.01}$ & $0.8964_{\pm 0.007}$ & $0.8973_{\pm 0.003}$ & $0.9036_{\pm 0.006}$\\
Haptics & $0.4838_{\pm 0.015}$ & 0.4708 & 0.461 & $0.5238_{\pm 0.008}$ & $0.5271_{\pm 0.014}$ & \textbf{0.5444}$_{\pm 0.009}$ & $0.5076_{\pm 0.012}$ & $0.5141_{\pm 0.008}$ & $0.474_{\pm 0.006}$ & $0.54_{\pm 0.005}$ & $0.5011_{\pm 0.011}$\\
Herring & $0.5208_{\pm 0.024}$ & 0.5938 & 0.6406 & $0.5885_{\pm 0.024}$ & $0.6354_{\pm 0.065}$ & \textbf{0.6667}$_{\pm 0.018}$ & $0.5938_{\pm 0.031}$ & $0.5573_{\pm 0.033}$ & $0.6458_{\pm 0.018}$ & $0.6042_{\pm 0.009}$ & $0.6302_{\pm 0.018}$\\
HouseTwenty & $0.9692_{\pm 0.005}$ & 0.8487 & 0.7563 & $0.9384_{\pm 0.005}$ & $0.9692_{\pm 0.005}$ & $0.958_{\pm 0.008}$ & \textbf{0.9776}$_{\pm 0.005}$ & $0.9496_{\pm 0.008}$ & $0.8852_{\pm 0.013}$ & $0.9496_{\pm 0.0}$ & $0.9524_{\pm 0.013}$\\
InlineSkate & $0.3891_{\pm 0.005}$ & 0.3345 & 0.3436 & $0.3224_{\pm 0.006}$ & $0.4491_{\pm 0.014}$ & $0.4194_{\pm 0.007}$ & $0.3848_{\pm 0.012}$ & \textbf{0.4576}$_{\pm 0.006}$ & $0.32_{\pm 0.007}$ & $0.3927_{\pm 0.008}$ & $0.3855_{\pm 0.005}$\\
InsectEPGRegularTrain & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0} & \textbf{1.0} & $0.9304_{\pm 0.002}$ & $0.9692_{\pm 0.01}$ & $0.9518_{\pm 0.007}$ & \textbf{1.0}$_{\pm 0.0}$ & $0.9893_{\pm 0.006}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$\\
InsectEPGSmallTrain & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0} & \textbf{1.0} & $0.8246_{\pm 0.006}$ & $0.8889_{\pm 0.002}$ & $0.8715_{\pm 0.018}$ & $0.9451_{\pm 0.026}$ & $0.9505_{\pm 0.012}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$\\
InsectWingbeatSound & $0.629_{\pm 0.003}$ & \textbf{0.6672} & 0.6556 & $0.6258_{\pm 0.003}$ & $0.651_{\pm 0.001}$ & $0.6271_{\pm 0.003}$ & $0.5258_{\pm 0.013}$ & $0.5365_{\pm 0.004}$ & $0.5167_{\pm 0.006}$ & $0.5359_{\pm 0.002}$ & $0.6056_{\pm 0.009}$\\
\bottomrule
\end{tabular}
    }`{=latex}

```{=latex}
\newpage
```
```{=latex}
\centering
```
`\scalebox{0.55}{
    \begin{tabular}{l|lllllllllll}
\toprule
                               & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2\\
\midrule
ItalyPowerDemand & $0.9537_{\pm 0.002}$ & \textbf{0.9699} & 0.9631 & $0.9482_{\pm 0.004}$ & $0.9624_{\pm 0.004}$ & $0.9559_{\pm 0.003}$ & $0.8776_{\pm 0.003}$ & $0.9248_{\pm 0.002}$ & $0.8737_{\pm 0.004}$ & $0.9002_{\pm 0.005}$ & $0.9248_{\pm 0.004}$\\
LargeKitchenAppliances & $0.8169_{\pm 0.009}$ & 0.6373 & 0.696 & $0.7342_{\pm 0.007}$ & $0.8053_{\pm 0.005}$ & \textbf{0.8729}$_{\pm 0.003}$ & $0.816_{\pm 0.003}$ & $0.8702_{\pm 0.002}$ & $0.7556_{\pm 0.009}$ & $0.8133_{\pm 0.011}$ & $0.7662_{\pm 0.006}$\\
Lightning2 & $0.7322_{\pm 0.009}$ & 0.6721 & 0.7049 & $0.7869_{\pm 0.033}$ & $0.7432_{\pm 0.009}$ & $0.7268_{\pm 0.009}$ & $0.7596_{\pm 0.009}$ & $0.7432_{\pm 0.025}$ & $0.6776_{\pm 0.009}$ & \textbf{0.7923}$_{\pm 0.009}$ & $0.7268_{\pm 0.009}$\\
Lightning7 & \textbf{0.7671}$_{\pm 0.014}$ & 0.6986 & 0.7397 & $0.7352_{\pm 0.029}$ & $0.7352_{\pm 0.016}$ & $0.653_{\pm 0.021}$ & $0.7489_{\pm 0.029}$ & $0.7306_{\pm 0.016}$ & $0.6575_{\pm 0.027}$ & $0.7397_{\pm 0.027}$ & $0.7215_{\pm 0.044}$\\
Mallat & $0.9555_{\pm 0.009}$ & \textbf{0.9689} & 0.9484 & $0.9164_{\pm 0.016}$ & $0.9166_{\pm 0.027}$ & $0.9016_{\pm 0.009}$ & $0.9032_{\pm 0.007}$ & $0.8387_{\pm 0.005}$ & $0.8311_{\pm 0.005}$ & $0.8708_{\pm 0.01}$ & $0.8887_{\pm 0.002}$\\
Meat & $0.9222_{\pm 0.01}$ & \textbf{0.9833} & 0.9333 & $0.9222_{\pm 0.01}$ & $0.8611_{\pm 0.01}$ & $0.9333_{\pm 0.017}$ & $0.8389_{\pm 0.038}$ & $0.8667_{\pm 0.0}$ & $0.9278_{\pm 0.019}$ & $0.8944_{\pm 0.01}$ & $0.9111_{\pm 0.025}$\\
MedicalImages & $0.7737_{\pm 0.005}$ & 0.7947 & \textbf{0.8079} & $0.714_{\pm 0.008}$ & $0.7092_{\pm 0.007}$ & $0.7246_{\pm 0.009}$ & $0.7504_{\pm 0.005}$ & $0.7368_{\pm 0.0}$ & $0.7132_{\pm 0.002}$ & $0.7434_{\pm 0.005}$ & $0.7522_{\pm 0.006}$\\
MelbournePedestrian & $0.9597_{\pm 0.0}$ & \textbf{0.9803} & \textbf{0.9803} & $0.8643_{\pm 0.0}$ & $0.8812_{\pm 0.002}$ & $0.8927_{\pm 0.006}$ & $0.8074_{\pm 0.004}$ & $0.8195_{\pm 0.003}$ & $0.9172_{\pm 0.002}$ & $0.9464_{\pm 0.0}$ & $0.9513_{\pm 0.001}$\\
MiddlePhalanxOutlineAgeGroup & $0.5952_{\pm 0.004}$ & 0.6234 & \textbf{0.6299} & $0.5649_{\pm 0.017}$ & $0.5996_{\pm 0.016}$ & $0.5584_{\pm 0.017}$ & $0.6039_{\pm 0.006}$ & \textbf{0.6299}$_{\pm 0.017}$ & $0.6061_{\pm 0.004}$ & $0.6082_{\pm 0.025}$ & $0.5801_{\pm 0.004}$\\
MiddlePhalanxOutlineCorrect & $0.811_{\pm 0.012}$ & 0.8522 & 0.8351 & $0.858_{\pm 0.008}$ & $0.8179_{\pm 0.007}$ & \textbf{0.8717}$_{\pm 0.008}$ & $0.8247_{\pm 0.009}$ & $0.8156_{\pm 0.012}$ & $0.7892_{\pm 0.009}$ & $0.7927_{\pm 0.011}$ & $0.8339_{\pm 0.004}$\\
MiddlePhalanxTW & $0.5844_{\pm 0.006}$ & 0.6169 & \textbf{0.6234} & $0.5779_{\pm 0.017}$ & $0.5736_{\pm 0.01}$ & $0.5714_{\pm 0.013}$ & $0.5519_{\pm 0.023}$ & $0.5693_{\pm 0.019}$ & $0.5281_{\pm 0.014}$ & $0.5346_{\pm 0.02}$ & $0.5325_{\pm 0.006}$\\
MixedShapesRegularTrain & $0.9306_{\pm 0.003}$ & 0.9344 & 0.9299 & $0.9166_{\pm 0.001}$ & $0.9472_{\pm 0.001}$ & $0.9427_{\pm 0.001}$ & $0.9513_{\pm 0.001}$ & \textbf{0.9564}$_{\pm 0.003}$ & $0.9381_{\pm 0.002}$ & $0.9461_{\pm 0.003}$ & $0.9467_{\pm 0.0}$\\
MixedShapesSmallTrain & $0.8827_{\pm 0.002}$ & 0.8293 & 0.8767 & $0.8506_{\pm 0.008}$ & $0.909_{\pm 0.002}$ & $0.9019_{\pm 0.003}$ & \textbf{0.919}$_{\pm 0.005}$ & $0.9182_{\pm 0.002}$ & $0.908_{\pm 0.003}$ & $0.9146_{\pm 0.002}$ & $0.9157_{\pm 0.001}$\\
MoteStrain & $0.8818_{\pm 0.015}$ & 0.889 & 0.8794 & $0.8895_{\pm 0.008}$ & $0.9193_{\pm 0.003}$ & $0.9332_{\pm 0.004}$ & $0.8586_{\pm 0.005}$ & $0.9055_{\pm 0.002}$ & \textbf{0.9481}$_{\pm 0.002}$ & $0.9121_{\pm 0.003}$ & $0.931_{\pm 0.004}$\\
NonInvasiveFetalECGThorax1 & $0.8877_{\pm 0.004}$ & \textbf{0.941} & 0.9272 & $0.8863_{\pm 0.002}$ & $0.8656_{\pm 0.0}$ & $0.8295_{\pm 0.001}$ & $0.8137_{\pm 0.005}$ & $0.8244_{\pm 0.006}$ & $0.78_{\pm 0.005}$ & $0.8575_{\pm 0.004}$ & $0.864_{\pm 0.002}$\\
NonInvasiveFetalECGThorax2 & $0.9091_{\pm 0.0}$ & \textbf{0.9476} & 0.9405 & $0.9104_{\pm 0.001}$ & $0.888_{\pm 0.004}$ & $0.8675_{\pm 0.001}$ & $0.8755_{\pm 0.002}$ & $0.8692_{\pm 0.004}$ & $0.8175_{\pm 0.006}$ & $0.8906_{\pm 0.002}$ & $0.8867_{\pm 0.0}$\\
OSULeaf & $0.6832_{\pm 0.009}$ & 0.5661 & 0.595 & $0.7521_{\pm 0.007}$ & $0.9174_{\pm 0.007}$ & $0.8953_{\pm 0.01}$ & $0.9463_{\pm 0.004}$ & \textbf{0.9793}$_{\pm 0.004}$ & $0.8003_{\pm 0.005}$ & $0.9298_{\pm 0.004}$ & $0.9353_{\pm 0.002}$\\
OliveOil & $0.8444_{\pm 0.019}$ & \textbf{0.9333} & 0.9 & $0.8889_{\pm 0.019}$ & $0.8778_{\pm 0.019}$ & $0.8556_{\pm 0.019}$ & $0.5778_{\pm 0.038}$ & $0.8556_{\pm 0.019}$ & $0.7_{\pm 0.0}$ & $0.8333_{\pm 0.0}$ & $0.8667_{\pm 0.033}$\\
PLAID & $0.8752_{\pm 0.009}$ & 0.7896 & 0.5661 & $0.7393_{\pm 0.005}$ & $0.8684_{\pm 0.003}$ & $0.8591_{\pm 0.008}$ & $0.8709_{\pm 0.007}$ & \textbf{0.892}$_{\pm 0.005}$ & $0.7765_{\pm 0.004}$ & $0.8324_{\pm 0.004}$ & $0.8187_{\pm 0.004}$\\
PhalangesOutlinesCorrect & $0.8252_{\pm 0.003}$ & 0.8403 & \textbf{0.8613} & $0.8225_{\pm 0.002}$ & $0.8197_{\pm 0.003}$ & $0.8349_{\pm 0.004}$ & $0.796_{\pm 0.002}$ & $0.7949_{\pm 0.003}$ & $0.7766_{\pm 0.003}$ & $0.8116_{\pm 0.003}$ & $0.824_{\pm 0.002}$\\
Phoneme & $0.3216_{\pm 0.005}$ & 0.1097 & 0.1361 & $0.2938_{\pm 0.008}$ & $0.3771_{\pm 0.004}$ & \textbf{0.3936}$_{\pm 0.002}$ & $0.355_{\pm 0.001}$ & $0.3745_{\pm 0.004}$ & $0.2913_{\pm 0.008}$ & $0.3486_{\pm 0.001}$ & $0.3641_{\pm 0.005}$\\
PickupGestureWiimoteZ & $0.6933_{\pm 0.012}$ & 0.76 & 0.74 & $0.5867_{\pm 0.031}$ & $0.7333_{\pm 0.023}$ & $0.68_{\pm 0.0}$ & \textbf{0.82}$_{\pm 0.02}$ & \textbf{0.82}$_{\pm 0.035}$ & $0.6667_{\pm 0.031}$ & $0.7667_{\pm 0.031}$ & $0.7867_{\pm 0.031}$\\
PigAirwayPressure & $0.2372_{\pm 0.003}$ & 0.0192 & 0.1538 & $0.1074_{\pm 0.014}$ & $0.3462_{\pm 0.029}$ & $0.3333_{\pm 0.012}$ & $0.4744_{\pm 0.029}$ & \textbf{0.5769}$_{\pm 0.005}$ & $0.3478_{\pm 0.012}$ & $0.4663_{\pm 0.013}$ & $0.4904_{\pm 0.017}$\\
PigArtPressure & $0.891_{\pm 0.007}$ & 0.0337 & 0.2548 & $0.5369_{\pm 0.015}$ & $0.8734_{\pm 0.02}$ & $0.8061_{\pm 0.018}$ & $0.8173_{\pm 0.027}$ & $0.9087_{\pm 0.01}$ & $0.9359_{\pm 0.012}$ & \textbf{0.9391}$_{\pm 0.007}$ & $0.9343_{\pm 0.003}$\\
PigCVP & $0.5128_{\pm 0.011}$ & 0.0192 & 0.1731 & $0.4407_{\pm 0.007}$ & $0.8349_{\pm 0.007}$ & $0.6731_{\pm 0.024}$ & $0.6795_{\pm 0.01}$ & $0.7131_{\pm 0.025}$ & $0.8285_{\pm 0.01}$ & $0.8686_{\pm 0.007}$ & \textbf{0.8974}$_{\pm 0.02}$\\
Plane & \textbf{1.0}$_{\pm 0.0}$ & 0.9905 & 0.9905 & $0.9968_{\pm 0.005}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$\\
PowerCons & $0.9944_{\pm 0.0}$ & \textbf{1.0} & \textbf{1.0} & $0.9407_{\pm 0.003}$ & $0.8981_{\pm 0.008}$ & $0.9407_{\pm 0.006}$ & $0.8907_{\pm 0.008}$ & $0.9074_{\pm 0.017}$ & $0.9333_{\pm 0.006}$ & $0.95_{\pm 0.01}$ & $0.9648_{\pm 0.003}$\\
ProximalPhalanxOutlineAgeGroup & $0.8472_{\pm 0.007}$ & \textbf{0.8585} & 0.839 & $0.8341_{\pm 0.005}$ & $0.852_{\pm 0.003}$ & $0.8569_{\pm 0.007}$ & $0.8537_{\pm 0.013}$ & $0.8537_{\pm 0.01}$ & $0.8504_{\pm 0.003}$ & $0.8309_{\pm 0.003}$ & $0.8293_{\pm 0.005}$\\
ProximalPhalanxOutlineCorrect & $0.8648_{\pm 0.012}$ & 0.9038 & \textbf{0.9244} & $0.8603_{\pm 0.01}$ & $0.8774_{\pm 0.009}$ & $0.8706_{\pm 0.004}$ & $0.8511_{\pm 0.005}$ & $0.8328_{\pm 0.008}$ & $0.8373_{\pm 0.004}$ & $0.8442_{\pm 0.005}$ & $0.8671_{\pm 0.005}$\\
ProximalPhalanxTW & $0.8033_{\pm 0.007}$ & 0.8098 & \textbf{0.8293} & $0.8114_{\pm 0.01}$ & $0.8065_{\pm 0.023}$ & $0.7984_{\pm 0.003}$ & $0.7772_{\pm 0.011}$ & $0.7919_{\pm 0.01}$ & $0.8146_{\pm 0.005}$ & $0.7854_{\pm 0.015}$ & $0.8211_{\pm 0.016}$\\
RefrigerationDevices & $0.5493_{\pm 0.023}$ & 0.504 & 0.4933 & $0.5493_{\pm 0.01}$ & $0.568_{\pm 0.0}$ & $0.5502_{\pm 0.009}$ & $0.5467_{\pm 0.003}$ & \textbf{0.5911}$_{\pm 0.016}$ & $0.5369_{\pm 0.009}$ & $0.5316_{\pm 0.009}$ & $0.5636_{\pm 0.006}$\\
Rock & $0.62_{\pm 0.02}$ & 0.76 & 0.64 & $0.8333_{\pm 0.012}$ & \textbf{0.9}$_{\pm 0.04}$ & $0.8733_{\pm 0.031}$ & $0.8933_{\pm 0.031}$ & \textbf{0.9}$_{\pm 0.02}$ & $0.6533_{\pm 0.061}$ & $0.6733_{\pm 0.046}$ & $0.82_{\pm 0.02}$\\
ScreenType & $0.5271_{\pm 0.007}$ & 0.4187 & 0.4107 & $0.4284_{\pm 0.017}$ & $0.5156_{\pm 0.015}$ & $0.4871_{\pm 0.008}$ & \textbf{0.5449}$_{\pm 0.01}$ & $0.5396_{\pm 0.026}$ & $0.5058_{\pm 0.012}$ & $0.4498_{\pm 0.008}$ & $0.4596_{\pm 0.014}$\\
SemgHandGenderCh2 & $0.9272_{\pm 0.003}$ & \textbf{0.9467} & 0.8867 & $0.7778_{\pm 0.003}$ & $0.8817_{\pm 0.0}$ & $0.8906_{\pm 0.008}$ & $0.8394_{\pm 0.003}$ & $0.8717_{\pm 0.002}$ & $0.8561_{\pm 0.003}$ & $0.9283_{\pm 0.005}$ & $0.9139_{\pm 0.006}$\\
SemgHandMovementCh2 & \textbf{0.8615}$_{\pm 0.007}$ & 0.7711 & 0.5689 & $0.4252_{\pm 0.005}$ & $0.6489_{\pm 0.016}$ & $0.6259_{\pm 0.016}$ & $0.537_{\pm 0.008}$ & $0.597_{\pm 0.013}$ & $0.6756_{\pm 0.006}$ & $0.7711_{\pm 0.01}$ & $0.7311_{\pm 0.008}$\\
SemgHandSubjectCh2 & $0.8837_{\pm 0.007}$ & \textbf{0.9356} & 0.8333 & $0.6504_{\pm 0.006}$ & $0.8244_{\pm 0.008}$ & $0.8259_{\pm 0.007}$ & $0.7822_{\pm 0.008}$ & $0.8237_{\pm 0.008}$ & $0.7622_{\pm 0.004}$ & $0.8385_{\pm 0.012}$ & $0.8341_{\pm 0.011}$\\
ShakeGestureWiimoteZ & $0.8467_{\pm 0.031}$ & 0.82 & 0.74 & $0.84_{\pm 0.0}$ & $0.88_{\pm 0.035}$ & $0.86_{\pm 0.02}$ & $0.8467_{\pm 0.042}$ & $0.8267_{\pm 0.012}$ & $0.9133_{\pm 0.012}$ & \textbf{0.9333}$_{\pm 0.012}$ & \textbf{0.9333}$_{\pm 0.012}$\\
ShapeletSim & $0.9704_{\pm 0.008}$ & 0.4778 & 0.5056 & $0.9593_{\pm 0.008}$ & $0.9426_{\pm 0.008}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & $0.9204_{\pm 0.012}$ & $0.9556_{\pm 0.01}$ & $0.9593_{\pm 0.003}$\\
ShapesAll & $0.8206_{\pm 0.002}$ & 0.8017 & 0.7917 & $0.8322_{\pm 0.006}$ & $0.8356_{\pm 0.012}$ & \textbf{0.8839}$_{\pm 0.003}$ & $0.8678_{\pm 0.003}$ & $0.8444_{\pm 0.005}$ & $0.8394_{\pm 0.004}$ & $0.8628_{\pm 0.003}$ & $0.87_{\pm 0.009}$\\
SmallKitchenAppliances & $0.8302_{\pm 0.008}$ & 0.7867 & 0.7627 & $0.7111_{\pm 0.007}$ & $0.8382_{\pm 0.002}$ & $0.8373_{\pm 0.009}$ & $0.8284_{\pm 0.006}$ & $0.8302_{\pm 0.011}$ & $0.8196_{\pm 0.004}$ & \textbf{0.8436}$_{\pm 0.004}$ & $0.8373_{\pm 0.0}$\\
SmoothSubspace & $0.9867_{\pm 0.007}$ & \textbf{1.0} & \textbf{1.0} & $0.9578_{\pm 0.004}$ & $0.9222_{\pm 0.01}$ & $0.9289_{\pm 0.023}$ & $0.9333_{\pm 0.007}$ & $0.9267_{\pm 0.007}$ & $0.8733_{\pm 0.007}$ & $0.9578_{\pm 0.008}$ & $0.9511_{\pm 0.004}$\\
SonyAIBORobotSurface1 & $0.8397_{\pm 0.002}$ & 0.772 & 0.6722 & $0.8541_{\pm 0.008}$ & \textbf{0.8625}$_{\pm 0.03}$ & $0.6628_{\pm 0.02}$ & $0.7876_{\pm 0.01}$ & $0.7643_{\pm 0.012}$ & $0.8092_{\pm 0.012}$ & $0.8453_{\pm 0.012}$ & $0.8231_{\pm 0.011}$\\
SonyAIBORobotSurface2 & $0.8835_{\pm 0.021}$ & 0.809 & 0.8279 & $0.8279_{\pm 0.002}$ & $0.8255_{\pm 0.003}$ & $0.8475_{\pm 0.002}$ & $0.9038_{\pm 0.002}$ & $0.9185_{\pm 0.004}$ & $0.8391_{\pm 0.01}$ & $0.9164_{\pm 0.005}$ & \textbf{0.922}$_{\pm 0.009}$\\
StarLightCurves & $0.9702_{\pm 0.001}$ & 0.9732 & 0.9718 & $0.9768_{\pm 0.0}$ & $0.9789_{\pm 0.0}$ & $0.98_{\pm 0.0}$ & $0.9788_{\pm 0.0}$ & $0.9803_{\pm 0.0}$ & $0.979_{\pm 0.0}$ & $0.98_{\pm 0.0}$ & \textbf{0.9806}$_{\pm 0.0}$\\
Strawberry & $0.9333_{\pm 0.003}$ & 0.9811 & \textbf{0.9838} & $0.9568_{\pm 0.005}$ & $0.9532_{\pm 0.004}$ & $0.9432_{\pm 0.003}$ & $0.927_{\pm 0.007}$ & $0.9414_{\pm 0.002}$ & $0.936_{\pm 0.006}$ & $0.9649_{\pm 0.0}$ & $0.9595_{\pm 0.007}$\\
SwedishLeaf & $0.9115_{\pm 0.002}$ & 0.9504 & 0.9456 & $0.9211_{\pm 0.002}$ & $0.9381_{\pm 0.005}$ & $0.9381_{\pm 0.004}$ & $0.9408_{\pm 0.002}$ & $0.9376_{\pm 0.006}$ & $0.9221_{\pm 0.002}$ & $0.9456_{\pm 0.003}$ & \textbf{0.9547}$_{\pm 0.004}$\\
Symbols & $0.9618_{\pm 0.005}$ & 0.8824 & 0.8945 & $0.9377_{\pm 0.002}$ & $0.937_{\pm 0.004}$ & $0.9745_{\pm 0.006}$ & $0.9779_{\pm 0.005}$ & $0.9759_{\pm 0.003}$ & $0.9387_{\pm 0.005}$ & \textbf{0.9806}$_{\pm 0.002}$ & $0.9698_{\pm 0.005}$\\
SyntheticControl & $0.9922_{\pm 0.002}$ & 0.99 & 0.9833 & $0.9578_{\pm 0.002}$ & $0.9867_{\pm 0.0}$ & $0.9933_{\pm 0.003}$ & $0.9956_{\pm 0.002}$ & \textbf{0.9978}$_{\pm 0.002}$ & $0.9722_{\pm 0.002}$ & $0.9822_{\pm 0.002}$ & $0.99_{\pm 0.0}$\\
ToeSegmentation1 & $0.8553_{\pm 0.016}$ & 0.5746 & 0.6667 & $0.9313_{\pm 0.005}$ & $0.9474_{\pm 0.008}$ & $0.924_{\pm 0.024}$ & $0.9415_{\pm 0.003}$ & $0.8611_{\pm 0.028}$ & $0.8436_{\pm 0.022}$ & $0.9313_{\pm 0.009}$ & \textbf{0.9547}$_{\pm 0.007}$\\
ToeSegmentation2 & $0.7846_{\pm 0.013}$ & 0.6538 & 0.8154 & $0.8462_{\pm 0.008}$ & \textbf{0.9026}$_{\pm 0.004}$ & $0.8769_{\pm 0.008}$ & $0.8564_{\pm 0.012}$ & $0.8513_{\pm 0.016}$ & $0.7385_{\pm 0.008}$ & $0.8513_{\pm 0.009}$ & $0.8821_{\pm 0.004}$\\
Trace & \textbf{1.0}$_{\pm 0.0}$ & 0.91 & 0.98 & $0.99_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & $0.99_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$\\
TwoLeadECG & $0.8584_{\pm 0.015}$ & 0.9508 & 0.9254 & $0.9485_{\pm 0.007}$ & $0.9511_{\pm 0.007}$ & $0.9309_{\pm 0.011}$ & $0.9936_{\pm 0.001}$ & $0.983_{\pm 0.002}$ & $0.9166_{\pm 0.025}$ & $0.9886_{\pm 0.005}$ & \textbf{0.9956}$_{\pm 0.001}$\\
TwoPatterns & $0.9935_{\pm 0.001}$ & \textbf{0.995} & 0.9032 & $0.9158_{\pm 0.004}$ & $0.9813_{\pm 0.002}$ & $0.908_{\pm 0.006}$ & $0.9474_{\pm 0.001}$ & $0.9693_{\pm 0.003}$ & $0.8502_{\pm 0.002}$ & $0.9622_{\pm 0.003}$ & $0.9819_{\pm 0.001}$\\
UMD & $0.9398_{\pm 0.033}$ & \textbf{1.0} & 0.9375 & $0.9838_{\pm 0.004}$ & $0.919_{\pm 0.004}$ & $0.9931_{\pm 0.0}$ & $0.9861_{\pm 0.0}$ & $0.9907_{\pm 0.004}$ & $0.9352_{\pm 0.014}$ & $0.9861_{\pm 0.0}$ & $0.9815_{\pm 0.004}$\\
UWaveGestureLibraryAll & $0.9454_{\pm 0.001}$ & \textbf{0.9665} & 0.9629 & $0.9162_{\pm 0.002}$ & $0.9073_{\pm 0.002}$ & $0.9229_{\pm 0.001}$ & $0.8637_{\pm 0.0}$ & $0.8702_{\pm 0.003}$ & $0.8882_{\pm 0.002}$ & $0.8733_{\pm 0.001}$ & $0.8848_{\pm 0.001}$\\
UWaveGestureLibraryX & $0.8062_{\pm 0.003}$ & 0.8079 & 0.7984 & $0.7938_{\pm 0.002}$ & $0.7824_{\pm 0.003}$ & $0.8036_{\pm 0.001}$ & $0.7948_{\pm 0.003}$ & $0.7992_{\pm 0.001}$ & $0.8125_{\pm 0.002}$ & $0.8068_{\pm 0.001}$ & \textbf{0.813}$_{\pm 0.001}$\\
UWaveGestureLibraryY & $0.7248_{\pm 0.002}$ & 0.715 & 0.7164 & $0.7139_{\pm 0.002}$ & $0.7236_{\pm 0.002}$ & $0.7344_{\pm 0.003}$ & $0.739_{\pm 0.002}$ & \textbf{0.7633}$_{\pm 0.001}$ & $0.7391_{\pm 0.003}$ & $0.7475_{\pm 0.004}$ & $0.7572_{\pm 0.001}$\\
UWaveGestureLibraryZ & $0.7519_{\pm 0.002}$ & 0.7513 & 0.7379 & $0.7339_{\pm 0.004}$ & $0.7361_{\pm 0.001}$ & $0.7464_{\pm 0.004}$ & $0.743_{\pm 0.006}$ & $0.7562_{\pm 0.005}$ & $0.7518_{\pm 0.005}$ & $0.7495_{\pm 0.002}$ & \textbf{0.7587}$_{\pm 0.0}$\\
Wafer & $0.9988_{\pm 0.0}$ & 0.9953 & 0.9959 & $0.9878_{\pm 0.001}$ & $0.9986_{\pm 0.0}$ & $0.985_{\pm 0.001}$ & $0.9953_{\pm 0.0}$ & \textbf{0.9993}$_{\pm 0.0}$ & $0.9935_{\pm 0.001}$ & $0.995_{\pm 0.001}$ & $0.9944_{\pm 0.001}$\\
Wine & $0.6358_{\pm 0.039}$ & 0.7778 & 0.7222 & $0.7531_{\pm 0.021}$ & $0.6296_{\pm 0.049}$ & \textbf{0.8704}$_{\pm 0.019}$ & $0.5123_{\pm 0.021}$ & $0.6605_{\pm 0.028}$ & $0.7099_{\pm 0.039}$ & $0.716_{\pm 0.011}$ & $0.8086_{\pm 0.06}$\\
WordSynonyms & $0.639_{\pm 0.008}$ & \textbf{0.6442} & 0.5972 & $0.6186_{\pm 0.01}$ & $0.5204_{\pm 0.005}$ & $0.5711_{\pm 0.001}$ & $0.5674_{\pm 0.007}$ & $0.5569_{\pm 0.008}$ & $0.5303_{\pm 0.014}$ & $0.5914_{\pm 0.007}$ & $0.6181_{\pm 0.006}$\\
Worms & $0.7229_{\pm 0.015}$ & 0.5714 & 0.5584 & $0.7359_{\pm 0.015}$ & $0.8009_{\pm 0.007}$ & $0.7662_{\pm 0.0}$ & \textbf{0.8312}$_{\pm 0.013}$ & $0.8009_{\pm 0.02}$ & $0.7229_{\pm 0.02}$ & $0.7749_{\pm 0.007}$ & $0.7576_{\pm 0.02}$\\
WormsTwoClass & $0.8182_{\pm 0.013}$ & 0.6104 & 0.5714 & $0.8009_{\pm 0.007}$ & \textbf{0.8485}$_{\pm 0.015}$ & $0.8139_{\pm 0.027}$ & $0.8355_{\pm 0.02}$ & $0.8225_{\pm 0.015}$ & $0.7835_{\pm 0.007}$ & $0.7922_{\pm 0.022}$ & $0.8182_{\pm 0.013}$\\
Yoga & $0.8289_{\pm 0.005}$ & 0.8603 & \textbf{0.8653} & $0.8369_{\pm 0.006}$ & $0.7677_{\pm 0.002}$ & $0.8162_{\pm 0.003}$ & $0.8091_{\pm 0.007}$ & $0.8144_{\pm 0.002}$ & $0.8207_{\pm 0.002}$ & $0.789_{\pm 0.006}$ & $0.848_{\pm 0.003}$\\
\midrule
\textit{\textbf{Average}} & 0.7969 & 0.7806 & 0.7707 &  0.7789 & 0.8013 & 0.8002 & 0.7943 & 0.8029 & 0.7732 & 0.8061 & \textbf{0.8195}\\
\textit{\textbf{Best Counts}} & 13 & 30 & 24 &  1 & 11 & 14 & 13 & 21 & 6 & 14 & 27 \\
\bottomrule
\end{tabular}
    }`{=latex}

```{=latex}
\vspace{0.4cm}
```
```{=latex}
\centering
```
`\scalebox{0.55}{
    \begin{tabular}{l|lllllllllll}
        \toprule
                                       & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2\\
        \midrule
        ArticularyWordRecognition & $0.96_{\pm 0.007}$ & 0.93 & 0.91 & $0.9844_{\pm 0.002}$ & $0.99_{\pm 0.0}$ & $0.9933_{\pm 0.003}$ & $0.9833_{\pm 0.003}$ & $0.98_{\pm 0.003}$ & $0.9911_{\pm 0.004}$ & \textbf{0.9956}$_{\pm 0.002}$ & $0.9922_{\pm 0.002}$\\
        BasicMotions & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0} & 0.95 & $0.975_{\pm 0.0}$ & $0.9667_{\pm 0.014}$ & \textbf{1.0}$_{\pm 0.0}$ & $0.9833_{\pm 0.014}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$\\
        CharacterTrajectories & \textbf{0.9814}$_{\pm 0.001}$ & 0.9673 & 0.9694 & $0.9698_{\pm 0.002}$ & $0.9603_{\pm 0.0}$ & $0.9631_{\pm 0.001}$ & $0.9373_{\pm 0.002}$ & $0.9517_{\pm 0.002}$ & $0.967_{\pm 0.002}$ & $0.9761_{\pm 0.001}$ & $0.9761_{\pm 0.0}$\\
        Cricket & $0.9444_{\pm 0.014}$ & 0.8194 & 0.8472 & $0.9722_{\pm 0.0}$ & $0.9954_{\pm 0.008}$ & $0.9861_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & $0.9954_{\pm 0.008}$ & $0.9907_{\pm 0.008}$ & $0.9861_{\pm 0.0}$ & $0.9861_{\pm 0.0}$\\
        DuckDuckGeese & $0.5133_{\pm 0.023}$ & 0.32 & 0.28 & $0.4467_{\pm 0.042}$ & $0.42_{\pm 0.02}$ & $0.4133_{\pm 0.023}$ & $0.4333_{\pm 0.023}$ & $0.44_{\pm 0.04}$ & $0.4267_{\pm 0.023}$ & $0.5067_{\pm 0.031}$ & \textbf{0.54}$_{\pm 0.02}$\\
        ERing & $0.8827_{\pm 0.022}$ & 0.8889 & 0.8519 & $0.9679_{\pm 0.008}$ & $0.9765_{\pm 0.008}$ & \textbf{0.9877}$_{\pm 0.004}$ & $0.9815_{\pm 0.004}$ & $0.963_{\pm 0.007}$ & $0.9765_{\pm 0.004}$ & \textbf{0.9877}$_{\pm 0.006}$ & \textbf{0.9877}$_{\pm 0.002}$\\
        EigenWorms & $0.8753_{\pm 0.019}$ & 0.4198 & 0.5649 & $0.7608_{\pm 0.004}$ & $0.7863_{\pm 0.038}$ & $0.8015_{\pm 0.015}$ & $0.888_{\pm 0.027}$ & \textbf{0.9313}$_{\pm 0.008}$ & $0.7354_{\pm 0.022}$ & $0.8193_{\pm 0.004}$ & $0.8142_{\pm 0.016}$\\
        Epilepsy & $0.9855_{\pm 0.0}$ & 0.913 & 0.9565 & $0.9928_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0}$_{\pm 0.0}$ & $0.9976_{\pm 0.004}$\\
        EthanolConcentration & $0.4271_{\pm 0.002}$ & \textbf{0.7833} & 0.6046 & $0.2953_{\pm 0.019}$ & $0.275_{\pm 0.006}$ & $0.3485_{\pm 0.01}$ & $0.3866_{\pm 0.008}$ & $0.3663_{\pm 0.012}$ & $0.4132_{\pm 0.01}$ & $0.4056_{\pm 0.014}$ & $0.4183_{\pm 0.007}$\\
        FaceDetection & $0.5273_{\pm 0.0}$ & 0.6325 & \textbf{0.6402} & $0.554_{\pm 0.005}$ & $0.6133_{\pm 0.006}$ & $0.5716_{\pm 0.011}$ & $0.55_{\pm 0.005}$ & $0.5321_{\pm 0.003}$ & $0.5691_{\pm 0.004}$ & $0.5566_{\pm 0.005}$ & $0.5492_{\pm 0.004}$\\
        FingerMovements & $0.4967_{\pm 0.064}$ & 0.5 & 0.49 & $0.53_{\pm 0.02}$ & $0.52_{\pm 0.04}$ & $0.5233_{\pm 0.021}$ & $0.5367_{\pm 0.045}$ & $0.5333_{\pm 0.059}$ & $0.5267_{\pm 0.015}$ & $0.51_{\pm 0.026}$ & \textbf{0.55}$_{\pm 0.01}$\\
        HandMovementDirection & $0.3378_{\pm 0.049}$ & \textbf{0.4324} & 0.3919 & $0.2928_{\pm 0.021}$ & $0.3153_{\pm 0.028}$ & $0.2748_{\pm 0.008}$ & $0.2883_{\pm 0.031}$ & $0.3288_{\pm 0.055}$ & $0.2883_{\pm 0.021}$ & $0.3243_{\pm 0.049}$ & $0.2793_{\pm 0.067}$\\
        Handwriting & $0.2812_{\pm 0.009}$ & 0.1388 & 0.2259 & $0.26_{\pm 0.002}$ & $0.2525_{\pm 0.002}$ & $0.2576_{\pm 0.007}$ & $0.2373_{\pm 0.009}$ & $0.2494_{\pm 0.011}$ & $0.2055_{\pm 0.007}$ & \textbf{0.3086}$_{\pm 0.002}$ & $0.2808_{\pm 0.008}$\\
        Heartbeat & $0.7496_{\pm 0.006}$ & 0.722 & 0.7268 & $0.7317_{\pm 0.005}$ & $0.7252_{\pm 0.006}$ & $0.7317_{\pm 0.008}$ & $0.7252_{\pm 0.003}$ & $0.735_{\pm 0.007}$ & $0.7756_{\pm 0.005}$ & \textbf{0.7984}$_{\pm 0.017}$ & $0.7919_{\pm 0.003}$\\
        InsectWingbeatSubset & $0.3617_{\pm 0.01}$ & 0.238 & \texttt{NaN} & $0.267_{\pm 0.017}$ & $0.287_{\pm 0.004}$ & $0.2803_{\pm 0.018}$ & $0.3203_{\pm 0.014}$ & $0.3243_{\pm 0.006}$ & $0.6143_{\pm 0.012}$ & \textbf{0.6277}$_{\pm 0.006}$ & $0.6073_{\pm 0.007}$\\
        JapaneseVowels & \textbf{0.955}$_{\pm 0.003}$ & 0.8135 & 0.8162 & $0.8847_{\pm 0.006}$ & $0.8928_{\pm 0.015}$ & $0.8162_{\pm 0.026}$ & $0.882_{\pm 0.006}$ & $0.8721_{\pm 0.004}$ & $0.9342_{\pm 0.006}$ & $0.9405_{\pm 0.0}$ & $0.9396_{\pm 0.006}$\\
        LSST & $0.6158_{\pm 0.002}$ & 0.5479 & 0.5016 & $0.6313_{\pm 0.003}$ & $0.5733_{\pm 0.003}$ & $0.5892_{\pm 0.004}$ & $0.5953_{\pm 0.006}$ & $0.6004_{\pm 0.004}$ & $0.5673_{\pm 0.005}$ & \textbf{0.6676}$_{\pm 0.004}$ & $0.6599_{\pm 0.004}$\\
        Libras & $0.8389_{\pm 0.01}$ & 0.6722 & 0.6389 & $0.8463_{\pm 0.008}$ & $0.8963_{\pm 0.008}$ & $0.8648_{\pm 0.008}$ & $0.9037_{\pm 0.006}$ & $0.9074_{\pm 0.008}$ & $0.8778_{\pm 0.01}$ & \textbf{0.9278}$_{\pm 0.0}$ & $0.9241_{\pm 0.003}$\\
        MotorImagery & $0.4333_{\pm 0.086}$ & \textbf{0.58} & 0.57 & $0.5233_{\pm 0.015}$ & $0.4933_{\pm 0.038}$ & $0.53_{\pm 0.026}$ & $0.51_{\pm 0.01}$ & $0.49_{\pm 0.036}$ & $0.48_{\pm 0.026}$ & $0.4967_{\pm 0.038}$ & $0.4667_{\pm 0.025}$\\
        NATOPS & $0.7148_{\pm 0.029}$ & 0.7778 & 0.7944 & $0.8352_{\pm 0.012}$ & $0.8296_{\pm 0.012}$ & $0.8389_{\pm 0.011}$ & $0.8685_{\pm 0.017}$ & $0.8574_{\pm 0.014}$ & $0.8333_{\pm 0.031}$ & \textbf{0.8926}$_{\pm 0.02}$ & $0.8889_{\pm 0.01}$\\
        PEMS-SF & $0.8401_{\pm 0.009}$ & 0.948 & 0.9422 & \textbf{0.9961}$_{\pm 0.007}$ & \textbf{0.9961}$_{\pm 0.007}$ & \textbf{0.9961}$_{\pm 0.007}$ & $0.973_{\pm 0.007}$ & $0.9769_{\pm 0.0}$ & $0.9923_{\pm 0.007}$ & \textbf{0.9961}$_{\pm 0.007}$ & \textbf{0.9961}$_{\pm 0.007}$\\
        PhonemeSpectra & $0.2499_{\pm 0.003}$ & 0.1485 & 0.1712 & $0.2112_{\pm 0.006}$ & $0.2686_{\pm 0.003}$ & $0.2709_{\pm 0.003}$ & $0.2713_{\pm 0.005}$ & $0.2709_{\pm 0.003}$ & $0.2664_{\pm 0.005}$ & $0.3107_{\pm 0.005}$ & \textbf{0.3213}$_{\pm 0.004}$\\
        RacketSports & $0.8004_{\pm 0.004}$ & 0.8158 & 0.8487 & $0.8465_{\pm 0.025}$ & $0.8355_{\pm 0.007}$ & $0.8246_{\pm 0.03}$ & $0.8531_{\pm 0.01}$ & $0.8509_{\pm 0.004}$ & $0.9123_{\pm 0.019}$ & \textbf{0.9232}$_{\pm 0.01}$ & $0.9101_{\pm 0.008}$\\
        SelfRegulationSCP1 & $0.7702_{\pm 0.007}$ & \textbf{0.8942} & 0.8874 & $0.7747_{\pm 0.006}$ & $0.7884_{\pm 0.012}$ & $0.785_{\pm 0.003}$ & $0.7986_{\pm 0.007}$ & $0.7929_{\pm 0.01}$ & $0.7952_{\pm 0.003}$ & $0.7736_{\pm 0.005}$ & $0.8134_{\pm 0.009}$\\
        SelfRegulationSCP2 & $0.4926_{\pm 0.031}$ & 0.4778 & 0.5056 & $0.4907_{\pm 0.023}$ & $0.4963_{\pm 0.049}$ & $0.5167_{\pm 0.006}$ & $0.4907_{\pm 0.033}$ & $0.5056_{\pm 0.011}$ & $0.5074_{\pm 0.018}$ & \textbf{0.5611}$_{\pm 0.02}$ & $0.5167_{\pm 0.006}$\\
        SpokenArabicDigits & $0.8898_{\pm 0.003}$ & 0.9591 & \textbf{0.9613} & $0.9447_{\pm 0.003}$ & $0.7547_{\pm 0.01}$ & $0.7619_{\pm 0.006}$ & $0.8931_{\pm 0.005}$ & $0.9139_{\pm 0.004}$ & $0.9016_{\pm 0.002}$ & $0.9285_{\pm 0.001}$ & $0.9374_{\pm 0.003}$\\
        UWaveGestureLibrary & $0.876_{\pm 0.015}$ & 0.8062 & 0.7625 & \textbf{0.8917}$_{\pm 0.007}$ & $0.8615_{\pm 0.002}$ & $0.8833_{\pm 0.004}$ & $0.8542_{\pm 0.004}$ & $0.8156_{\pm 0.005}$ & $0.8885_{\pm 0.007}$ & $0.8906_{\pm 0.008}$ & $0.8896_{\pm 0.011}$\\
        \midrule
        \textbf{\textit{Average}} & 0.6963 & 0.6721 & 0.685 & 0.6991 & 0.6952 & 0.6967 & 0.7091 & 0.7105 & 0.7199 & \textbf{0.7449} & 0.742\\
        \textit{\textbf{Best Counts}} & 3 & 5 & 2 & 2 & 2 & 4 & 2 & 3 & 2 & \textbf{12} & 6\\
        \bottomrule
    \end{tabular}
    }`{=latex}

Tables for Section `\ref{sec:log-reg}`{=latex} {#sec:log-reg-exp-appendix}
----------------------------------------------

We also provide the complete results on all benchmarks when Logistic Regression is used as the classifier. Table `\ref{tab:sota-ucr-res-logreg-a}`{=latex} and Table `\ref{tab:sota-ucr-res-logreg-b}`{=latex} report the results on UCR, Table `\ref{tab:uea-sota-logreg}`{=latex} on UEA, Table `\ref{tab:har-sota-logreg}`{=latex} on HAR, and Table `\ref{tab:eeg-sota-logreg}`{=latex} on EEG.
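The evaluation protocol above trains a Logistic Regression classifier on top of frozen-encoder embeddings. A minimal sketch of such a linear probe is shown below, using scikit-learn; the synthetic arrays stand in for the encoder's train/test embeddings, and the function name `linear_probe_accuracy` is illustrative rather than part of the paper's codebase.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def linear_probe_accuracy(Z_train, y_train, Z_test, y_test):
    """Fit a logistic-regression probe on frozen embeddings; return test accuracy."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(Z_train, y_train)
    return clf.score(Z_test, y_test)


# Synthetic stand-in for encoder embeddings: two well-separated classes in 64-d.
rng = np.random.default_rng(0)
Z_train = np.vstack([rng.normal(0.0, 1.0, (50, 64)), rng.normal(3.0, 1.0, (50, 64))])
y_train = np.array([0] * 50 + [1] * 50)
Z_test = np.vstack([rng.normal(0.0, 1.0, (20, 64)), rng.normal(3.0, 1.0, (20, 64))])
y_test = np.array([0] * 20 + [1] * 20)

acc = linear_probe_accuracy(Z_train, y_train, Z_test, y_test)
```

In practice, `Z_train` and `Z_test` would be the embeddings produced by the frozen encoder on the raw time series, and accuracy is averaged over multiple probe fits.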

```{=latex}
\vspace{0.3cm}
```
```{=latex}
\centering
```
`\scalebox{0.58}{
    \begin{tabular}{l|lllllllllll}
    \toprule
                                   & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2\\
    \midrule
    ACSF1 & 0.7 & 0.8 & 0.81 & 0.75 & 0.75 & \textbf{0.86} & 0.83 & 0.83 & 0.7 & 0.79 & 0.76\\
    Adiac & 0.665 & 0.8031 & 0.8031 & 0.7775 & 0.7826 & 0.8286 & 0.7187 & 0.7059 & 0.8005 & \textbf{0.8414} & 0.8363\\
    AllGestureWiimoteX & 0.5486 & 0.6229 & 0.5043 & 0.6614 & 0.7014 & 0.6843 & 0.67 & 0.6943 & 0.6071 & 0.7171 & \textbf{0.73}\\
    AllGestureWiimoteY & 0.5229 & 0.6329 & 0.5114 & 0.7029 & 0.7314 & 0.7043 & 0.7371 & 0.7229 & 0.6529 & 0.7414 & \textbf{0.7471}\\
    AllGestureWiimoteZ & 0.4829 & 0.5329 & 0.4529 & 0.6057 & 0.6343 & 0.66 & 0.6771 & 0.6586 & 0.5414 & 0.6657 & \textbf{0.6814}\\
    ArrowHead & 0.7143 & 0.7543 & 0.7429 & 0.7771 & 0.7829 & \textbf{0.8514} & 0.8229 & 0.8229 & 0.7257 & 0.8457 & \textbf{0.8514}\\
    BME & 0.9933 & \textbf{1.0} & 0.98 & 0.9933 & 0.9933 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.9667 & \textbf{1.0} & \textbf{1.0}\\
    Beef & 0.6333 & 0.8 & 0.7667 & 0.8 & \textbf{0.9333} & 0.7 & 0.7667 & 0.8333 & 0.7333 & 0.6667 & 0.7333\\
    BeetleFly & 0.7 & 0.9 & 0.8 & \textbf{0.95} & 0.9 & 0.8 & 0.85 & \textbf{0.95} & 0.7 & 0.8 & 0.85\\
    BirdChicken & 0.8 & 0.85 & 0.75 & \textbf{1.0} & 0.9 & 0.9 & 0.9 & 0.95 & \textbf{1.0} & 0.9 & 0.9\\
    CBF & 0.9733 & 0.9133 & 0.9244 & 0.9889 & 0.9967 & \textbf{1.0} & 0.9989 & 0.9989 & 0.99 & 0.9956 & 0.9967\\
    Car & 0.8333 & 0.7833 & 0.8167 & \textbf{0.9} & 0.8 & 0.85 & 0.8167 & 0.85 & 0.7333 & 0.8833 & 0.8167\\
    Chinatown & 0.9767 & \textbf{0.9854} & 0.9796 & 0.9825 & 0.9825 & \textbf{0.9854} & 0.9679 & 0.9388 & 0.9621 & 0.9738 & 0.9621\\
    ChlorineConcentration & 0.5888 & 0.95 & \textbf{0.9773} & 0.7789 & 0.8018 & 0.7672 & 0.7622 & 0.7771 & 0.6086 & 0.6857 & 0.7115\\
    CinCECGTorso & 0.8428 & 0.8341 & 0.8225 & 0.7681 & \textbf{0.9565} & 0.8986 & 0.9355 & 0.942 & 0.8551 & 0.8558 & 0.8507\\
    Coffee & \textbf{1.0} & 0.9643 & \textbf{1.0} & 0.9643 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    Computers & 0.668 & 0.62 & 0.656 & 0.612 & 0.8 & 0.768 & 0.764 & \textbf{0.828} & 0.788 & 0.712 & 0.704\\
    CricketX & 0.6103 & 0.6667 & 0.6641 & 0.7436 & 0.7 & 0.7564 & 0.759 & 0.7487 & 0.7051 & 0.7564 & \textbf{0.7821}\\
    CricketY & 0.5949 & 0.7 & 0.6333 & 0.7077 & 0.7538 & 0.7359 & 0.7872 & 0.7462 & 0.6897 & 0.8077 & \textbf{0.8154}\\
    CricketZ & 0.6333 & 0.6718 & 0.6692 & 0.7436 & 0.7333 & 0.7231 & 0.7974 & 0.759 & 0.659 & 0.7923 & \textbf{0.8128}\\
    Crop & 0.6874 & 0.7989 & \textbf{0.812} & 0.7234 & 0.7029 & 0.7163 & 0.6867 & 0.6737 & 0.6895 & 0.7364 & 0.7348\\
    DiatomSizeReduction & \textbf{0.9673} & 0.9608 & 0.951 & 0.9412 & 0.9281 & 0.9477 & \textbf{0.9673} & \textbf{0.9673} & 0.915 & 0.9216 & 0.9281\\
    DistalPhalanxOutlineAgeGroup & 0.7122 & 0.7626 & 0.7626 & 0.6691 & 0.7482 & 0.741 & 0.7122 & 0.7266 & 0.705 & 0.7266 & \textbf{0.777}\\
    DistalPhalanxOutlineCorrect & 0.7536 & \textbf{0.7826} & 0.7754 & 0.7536 & 0.75 & 0.7645 & 0.7536 & 0.7681 & 0.7428 & 0.779 & 0.7681\\
    DistalPhalanxTW & 0.6187 & \textbf{0.6978} & 0.6835 & 0.6259 & 0.6547 & 0.6547 & 0.6691 & 0.6547 & 0.6835 & 0.6547 & 0.6835\\
    DodgerLoopDay & 0.575 & 0.6125 & \textbf{0.725} & 0.4375 & 0.55 & 0.525 & 0.5 & 0.5625 & 0.55 & 0.5875 & 0.5\\
    DodgerLoopGame & 0.8551 & 0.7899 & 0.7971 & 0.8406 & 0.7899 & \textbf{0.8913} & 0.8406 & 0.8406 & 0.8188 & 0.8406 & 0.8623\\
    DodgerLoopWeekend & 0.9783 & \textbf{0.9855} & 0.9783 & \textbf{0.9855} & 0.9565 & 0.9638 & 0.9493 & 0.913 & 0.9638 & \textbf{0.9855} & 0.971\\
    ECG200 & 0.85 & 0.89 & 0.88 & \textbf{0.9} & 0.83 & 0.84 & 0.86 & 0.86 & 0.85 & 0.89 & 0.88\\
    ECG5000 & 0.9391 & 0.942 & \textbf{0.9447} & 0.9333 & 0.9391 & 0.9342 & 0.9378 & 0.9311 & 0.9193 & 0.9322 & 0.9353\\
    ECGFiveDays & 0.8699 & 0.9245 & 0.9826 & 0.9698 & 0.9756 & 0.9907 & 0.9791 & 0.9895 & 0.899 & 0.9779 & \textbf{0.993}\\
    EOGHorizontalSignal & 0.5276 & 0.5276 & 0.5 & 0.5635 & 0.5718 & 0.5884 & \textbf{0.5967} & 0.5746 & 0.5331 & 0.5773 & 0.5691\\
    EOGVerticalSignal & 0.4779 & 0.489 & 0.4337 & 0.5 & 0.4171 & 0.4392 & 0.4724 & 0.4254 & 0.3315 & 0.4613 & \textbf{0.5138}\\
    Earthquakes & 0.741 & \textbf{0.7482} & \textbf{0.7482} & 0.705 & 0.6906 & \textbf{0.7482} & 0.705 & 0.6978 & 0.7338 & 0.6835 & 0.7194\\
    ElectricDevices & 0.6753 & 0.7025 & 0.6614 & 0.7404 & 0.6709 & 0.7168 & 0.7635 & \textbf{0.7764} & 0.6786 & 0.7301 & 0.7175\\
    EthanolLevel & 0.568 & \textbf{0.848} & 0.694 & 0.682 & 0.508 & 0.654 & 0.584 & 0.58 & 0.52 & 0.55 & 0.592\\
    FaceAll & 0.8024 & \textbf{0.8077} & 0.771 & 0.7787 & 0.7876 & 0.7533 & 0.7521 & 0.7367 & 0.7207 & 0.745 & 0.7574\\
    FaceFour & 0.8864 & 0.9091 & 0.8864 & 0.875 & 0.9545 & 0.9091 & 0.875 & 0.9205 & 0.9659 & 0.9205 & \textbf{0.9886}\\
    FacesUCR & 0.8498 & 0.8766 & 0.8771 & 0.8912 & 0.8776 & 0.8576 & 0.8751 & 0.8561 & 0.8195 & 0.881 & \textbf{0.9117}\\
    FiftyWords & 0.7231 & 0.7385 & 0.7165 & 0.7758 & 0.7231 & 0.7956 & 0.7868 & 0.7714 & 0.7165 & 0.7868 & \textbf{0.8198}\\
    Fish & 0.8629 & 0.88 & 0.8857 & 0.96 & 0.9029 & 0.9429 & 0.9486 & \textbf{0.9714} & 0.9143 & 0.96 & 0.96\\
    FordA & 0.8742 & 0.897 & 0.8758 & 0.9061 & \textbf{0.947} & 0.928 & 0.9136 & 0.9227 & 0.9061 & 0.9265 & 0.9432\\
    FordB & 0.7272 & 0.7556 & 0.7136 & 0.7556 & \textbf{0.816} & 0.8074 & 0.7963 & 0.779 & 0.7654 & 0.7889 & 0.8099\\
    FreezerRegularTrain & 0.9916 & \textbf{0.9986} & 0.9877 & 0.9881 & 0.9912 & 0.9951 & 0.9965 & 0.9975 & 0.994 & 0.9905 & 0.9961\\
    FreezerSmallTrain & 0.9407 & 0.8933 & 0.8098 & 0.8186 & 0.8912 & 0.9688 & 0.9839 & \textbf{0.9902} & 0.9863 & 0.9158 & 0.9849\\
    Fungi & 0.9194 & 0.8656 & 0.7849 & \textbf{1.0} & 0.9516 & 0.9462 & 0.9785 & 0.9946 & 0.7473 & 0.9247 & 0.9731\\
    GestureMidAirD1 & 0.6231 & 0.6231 & 0.6769 & 0.6846 & 0.6923 & 0.7308 & 0.7538 & 0.7615 & 0.6846 & 0.7385 & \textbf{0.7769}\\
    GestureMidAirD2 & 0.6385 & 0.5615 & 0.5846 & 0.5769 & 0.6154 & \textbf{0.7} & 0.6615 & 0.6769 & 0.5615 & 0.5615 & 0.6462\\
    GestureMidAirD3 & 0.4077 & 0.3692 & 0.3615 & 0.3692 & 0.4077 & 0.3538 & 0.4692 & \textbf{0.4923} & 0.4462 & 0.4462 & 0.4462\\
    GesturePebbleZ1 & 0.8198 & 0.8488 & 0.8779 & 0.8953 & 0.8837 & 0.9012 & 0.907 & 0.8547 & 0.8953 & 0.9244 & \textbf{0.9302}\\
    GesturePebbleZ2 & 0.7025 & 0.7848 & 0.7532 & 0.8924 & 0.8418 & 0.8671 & 0.8544 & 0.8165 & 0.7722 & 0.8797 & \textbf{0.9051}\\
    GunPoint & 0.9267 & 0.9667 & 0.9533 & \textbf{1.0} & 0.98 & 0.9933 & 0.9933 & 0.9933 & 0.9733 & 0.9867 & 0.98\\
    GunPointAgeSpan & 0.943 & 0.9905 & 0.9842 & 0.9778 & 0.9905 & 0.9905 & \textbf{0.9937} & 0.9873 & 0.9684 & 0.9873 & \textbf{0.9937}\\
    GunPointMaleVersusFemale & \textbf{1.0} & 0.9968 & \textbf{1.0} & 0.9968 & 0.9937 & 0.9937 & \textbf{1.0} & 0.9968 & 0.9873 & 0.9968 & \textbf{1.0}\\
    GunPointOldVersusYoung & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.927 & 0.9651 & 0.9778 & 0.9873 & 0.9937 & \textbf{1.0} & 0.9968 & 0.9968\\
    Ham & 0.6286 & \textbf{0.7429} & 0.7238 & 0.6571 & 0.7048 & 0.6952 & \textbf{0.7429} & 0.7238 & 0.7333 & 0.581 & 0.7048\\
    HandOutlines & 0.8838 & 0.9162 & 0.927 & 0.9405 & 0.8757 & 0.9378 & \textbf{0.9432} & 0.9216 & 0.8865 & 0.927 & 0.9297\\
    Haptics & 0.4188 & 0.4708 & 0.461 & 0.4903 & 0.4773 & 0.5032 & 0.4935 & 0.5227 & 0.4253 & \textbf{0.5325} & 0.5032\\
    Herring & 0.5781 & 0.5938 & 0.6406 & 0.6406 & 0.6094 & 0.5156 & 0.5312 & 0.6094 & 0.5312 & 0.625 & \textbf{0.6875}\\
    HouseTwenty & 0.9496 & 0.8487 & 0.7563 & 0.9328 & \textbf{0.9832} & 0.9748 & \textbf{0.9832} & \textbf{0.9832} & 0.8992 & 0.9412 & 0.9328\\
    InlineSkate & 0.3782 & 0.3345 & 0.3436 & 0.3636 & 0.4382 & 0.4364 & 0.3982 & \textbf{0.4473} & 0.3564 & 0.4036 & 0.3909\\
    InsectEPGRegularTrain & 0.992 & \textbf{1.0} & \textbf{1.0} & 0.9478 & 0.996 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    InsectEPGSmallTrain & 0.9317 & \textbf{1.0} & \textbf{1.0} & 0.8353 & 0.9558 & 0.9197 & 0.988 & 0.988 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    InsectWingbeatSound & 0.5818 & \textbf{0.6672} & 0.6556 & 0.6212 & 0.6253 & 0.6313 & 0.551 & 0.5652 & 0.5101 & 0.5838 & 0.596\\
    \bottomrule
    \end{tabular}
    }`{=latex}

```{=latex}
\newpage
```
```{=latex}
\centering
```
`\scalebox{0.58}{
    \begin{tabular}{l|lllllllllll}
    \toprule
                                   & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2\\
    \midrule
    ItalyPowerDemand & 0.9388 & \textbf{0.9699} & 0.9631 & 0.9543 & 0.9602 & 0.964 & 0.9407 & 0.9378 & 0.9116 & 0.9281 & 0.9456\\
    LargeKitchenAppliances & 0.8373 & 0.6373 & 0.696 & 0.76 & 0.8293 & 0.8613 & 0.872 & \textbf{0.8933} & 0.7707 & 0.824 & 0.816\\
    Lightning2 & 0.7541 & 0.6721 & 0.7049 & 0.7869 & 0.7377 & 0.7869 & 0.8197 & \textbf{0.8361} & 0.7377 & 0.7705 & 0.7869\\
    Lightning7 & 0.6986 & 0.6986 & 0.7397 & 0.7397 & 0.8082 & 0.7945 & 0.8493 & 0.7671 & 0.7808 & 0.8356 & \textbf{0.863}\\
    Mallat & 0.8866 & 0.9689 & 0.9484 & 0.9186 & 0.9548 & 0.9467 & \textbf{0.9765} & 0.9642 & 0.8486 & 0.9049 & 0.9198\\
    Meat & 0.9333 & 0.9833 & 0.9333 & \textbf{1.0} & 0.9 & 0.9333 & 0.8167 & 0.85 & 0.9333 & 0.9833 & 0.9667\\
    MedicalImages & 0.6789 & 0.7947 & \textbf{0.8079} & 0.7461 & 0.725 & 0.7474 & 0.7316 & 0.7513 & 0.7092 & 0.7618 & 0.7711\\
    MelbournePedestrian & 0.9028 & \textbf{0.9803} & \textbf{0.9803} & 0.8954 & 0.8938 & 0.9049 & 0.8626 & 0.87 & 0.9332 & 0.9582 & 0.9582\\
    MiddlePhalanxOutlineAgeGroup & 0.5584 & 0.6234 & \textbf{0.6299} & 0.5065 & 0.487 & 0.5519 & 0.5584 & 0.526 & 0.5519 & 0.5195 & 0.5455\\
    MiddlePhalanxOutlineCorrect & 0.7766 & \textbf{0.8522} & 0.8351 & 0.8076 & 0.8076 & 0.8351 & 0.7938 & 0.8179 & 0.7491 & 0.8351 & 0.8419\\
    MiddlePhalanxTW & 0.5455 & 0.6169 & \textbf{0.6234} & 0.5455 & 0.5 & 0.5 & 0.5195 & 0.4805 & 0.4416 & 0.487 & 0.513\\
    MixedShapesRegularTrain & 0.9155 & 0.9344 & 0.9299 & 0.9357 & 0.9699 & 0.9666 & \textbf{0.9781} & \textbf{0.9781} & 0.9365 & 0.9678 & 0.9744\\
    MixedShapesSmallTrain & 0.8693 & 0.8293 & 0.8767 & 0.8899 & 0.9365 & 0.939 & 0.9501 & \textbf{0.9542} & 0.9208 & 0.932 & 0.946\\
    MoteStrain & 0.8498 & 0.889 & 0.8794 & 0.905 & 0.9241 & 0.9257 & 0.9161 & 0.9065 & \textbf{0.9553} & 0.8866 & 0.9393\\
    NonInvasiveFetalECGThorax1 & 0.8992 & \textbf{0.941} & 0.9272 & 0.9237 & 0.8992 & 0.8845 & 0.9003 & 0.8911 & 0.8545 & 0.9303 & 0.9293\\
    NonInvasiveFetalECGThorax2 & 0.9181 & \textbf{0.9476} & 0.9405 & 0.9369 & 0.9033 & 0.913 & 0.9237 & 0.9226 & 0.9028 & 0.9344 & 0.9364\\
    OSULeaf & 0.7355 & 0.5661 & 0.595 & 0.7934 & 0.938 & 0.938 & 0.9835 & \textbf{0.9917} & 0.8595 & 0.9669 & 0.9628\\
    OliveOil & 0.8333 & \textbf{0.9333} & 0.9 & 0.9 & \textbf{0.9333} & 0.8333 & 0.7333 & 0.9 & 0.7667 & 0.8667 & 0.8667\\
    PLAID & 0.6536 & 0.7896 & 0.5661 & 0.7858 & 0.8696 & 0.8864 & 0.9181 & \textbf{0.9311} & 0.7561 & 0.8529 & 0.8529\\
    PhalangesOutlinesCorrect & 0.7459 & 0.8403 & \textbf{0.8613} & 0.8135 & 0.7925 & 0.8228 & 0.7995 & 0.7832 & 0.7471 & 0.7925 & 0.7949\\
    Phoneme & 0.2421 & 0.1097 & 0.1361 & 0.3001 & 0.3898 & \textbf{0.4167} & 0.3956 & 0.3972 & 0.2758 & 0.3492 & 0.355\\
    PickupGestureWiimoteZ & 0.66 & 0.76 & 0.74 & 0.7 & 0.84 & 0.66 & \textbf{0.92} & 0.88 & 0.64 & 0.82 & 0.76\\
    PigAirwayPressure & 0.1442 & 0.0192 & 0.1538 & 0.1106 & 0.4183 & 0.4183 & 0.6298 & \textbf{0.7452} & 0.4519 & 0.5433 & 0.5769\\
    PigArtPressure & 0.7644 & 0.0337 & 0.2548 & 0.5481 & 0.8894 & 0.8413 & 0.8894 & 0.9327 & \textbf{0.9423} & 0.9327 & 0.9087\\
    PigCVP & 0.4183 & 0.0192 & 0.1731 & 0.5337 & 0.8654 & 0.7885 & 0.7933 & 0.8317 & 0.8846 & 0.9135 & \textbf{0.9375}\\
    Plane & 0.9905 & 0.9905 & 0.9905 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    PowerCons & 0.9889 & \textbf{1.0} & \textbf{1.0} & 0.9556 & 0.9389 & 0.9333 & 0.9056 & 0.9222 & 0.9722 & 0.9722 & 0.9722\\
    ProximalPhalanxOutlineAgeGroup & 0.839 & \textbf{0.8585} & 0.839 & 0.8293 & 0.8488 & 0.8341 & 0.8293 & 0.8293 & 0.8537 & \textbf{0.8585} & 0.839\\
    ProximalPhalanxOutlineCorrect & 0.8454 & 0.9038 & \textbf{0.9244} & 0.8866 & 0.8866 & 0.8832 & 0.8729 & 0.8625 & 0.8385 & 0.8935 & 0.8763\\
    ProximalPhalanxTW & 0.7902 & 0.8098 & \textbf{0.8293} & 0.7951 & 0.761 & 0.7561 & 0.761 & 0.7463 & 0.7854 & 0.7415 & 0.761\\
    RefrigerationDevices & 0.5333 & 0.504 & 0.4933 & 0.5147 & 0.5413 & 0.568 & 0.576 & \textbf{0.5973} & 0.5627 & 0.544 & 0.5253\\
    Rock & 0.8 & 0.76 & 0.64 & 0.9 & 0.88 & 0.9 & \textbf{0.96} & 0.92 & 0.76 & 0.82 & 0.84\\
    ScreenType & 0.4693 & 0.4187 & 0.4107 & 0.392 & 0.528 & \textbf{0.5387} & 0.536 & 0.4987 & 0.4693 & 0.48 & 0.5093\\
    SemgHandGenderCh2 & 0.92 & \textbf{0.9467} & 0.8867 & 0.765 & 0.8583 & 0.8883 & 0.9017 & 0.91 & 0.8617 & 0.9017 & 0.8967\\
    SemgHandMovementCh2 & 0.6578 & \textbf{0.7711} & 0.5689 & 0.4444 & 0.52 & 0.5756 & 0.56 & 0.5911 & 0.6244 & 0.6978 & 0.6244\\
    SemgHandSubjectCh2 & 0.8267 & \textbf{0.9356} & 0.8333 & 0.7089 & 0.8133 & 0.8444 & 0.8644 & 0.8489 & 0.7556 & 0.86 & 0.8533\\
    ShakeGestureWiimoteZ & 0.82 & 0.82 & 0.74 & 0.84 & 0.88 & 0.92 & 0.84 & 0.86 & 0.9 & \textbf{0.94} & 0.92\\
    ShapeletSim & 0.9556 & 0.4778 & 0.5056 & 0.9667 & 0.9667 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.9278 & 0.9722 & 0.9778\\
    ShapesAll & 0.78 & 0.8017 & 0.7917 & 0.8483 & 0.8633 & 0.8967 & \textbf{0.9117} & 0.8967 & 0.8717 & 0.8933 & 0.9017\\
    SmallKitchenAppliances & 0.7733 & 0.7867 & 0.7627 & 0.6107 & 0.8133 & 0.8107 & 0.8213 & 0.8267 & \textbf{0.8293} & 0.8187 & 0.7867\\
    SmoothSubspace & 0.94 & \textbf{1.0} & \textbf{1.0} & 0.98 & 0.9667 & 0.9667 & 0.9667 & 0.98 & 0.9467 & 0.9733 & 0.9733\\
    SonyAIBORobotSurface1 & 0.7587 & 0.772 & 0.6722 & 0.8253 & \textbf{0.9368} & 0.8236 & 0.8636 & 0.8502 & 0.8153 & 0.8319 & 0.8336\\
    SonyAIBORobotSurface2 & 0.9276 & 0.809 & 0.8279 & 0.8909 & 0.8909 & 0.8846 & 0.9381 & \textbf{0.9454} & 0.8835 & 0.9224 & 0.937\\
    StarLightCurves & 0.9601 & 0.9732 & 0.9718 & 0.9641 & 0.9739 & 0.9745 & 0.974 & 0.9716 & 0.9745 & \textbf{0.9766} & 0.9733\\
    Strawberry & 0.9432 & 0.9811 & \textbf{0.9838} & 0.9622 & 0.9649 & 0.973 & 0.9568 & 0.9541 & 0.9595 & 0.9514 & 0.9622\\
    SwedishLeaf & 0.8976 & 0.9504 & 0.9456 & 0.9312 & 0.9536 & 0.9472 & 0.9584 & 0.9456 & 0.9392 & 0.9616 & \textbf{0.9728}\\
    Symbols & 0.9548 & 0.8824 & 0.8945 & 0.9558 & 0.9668 & 0.9839 & 0.9859 & 0.9859 & 0.9688 & \textbf{0.9869} & 0.9769\\
    SyntheticControl & 0.98 & 0.99 & 0.9833 & 0.9767 & 0.9933 & 0.9967 & \textbf{1.0} & \textbf{1.0} & 0.9733 & 0.99 & 0.99\\
    ToeSegmentation1 & 0.7588 & 0.5746 & 0.6667 & 0.9474 & \textbf{0.9693} & 0.9167 & 0.9386 & 0.9035 & 0.8421 & 0.9605 & 0.9649\\
    ToeSegmentation2 & 0.7692 & 0.6538 & 0.8154 & 0.9077 & 0.9154 & 0.8615 & 0.9 & 0.8769 & 0.8692 & 0.8769 & \textbf{0.9308}\\
    Trace & \textbf{1.0} & 0.91 & 0.98 & 0.99 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    TwoLeadECG & 0.9508 & 0.9508 & 0.9254 & 0.9921 & 0.9851 & 0.993 & \textbf{1.0} & 0.993 & 0.9622 & 0.9939 & 0.9991\\
    TwoPatterns & 0.9788 & 0.995 & 0.9032 & 0.9918 & 0.996 & 0.9948 & 0.998 & \textbf{0.999} & 0.9415 & 0.9948 & 0.997\\
    UMD & 0.9931 & \textbf{1.0} & 0.9375 & 0.9931 & 0.9583 & 0.9931 & 0.9931 & 0.9931 & 0.9722 & 0.9931 & 0.9931\\
    UWaveGestureLibraryAll & 0.9174 & \textbf{0.9665} & 0.9629 & 0.9595 & 0.9428 & 0.9587 & 0.9436 & 0.9375 & 0.9319 & 0.9492 & 0.9506\\
    UWaveGestureLibraryX & 0.7806 & 0.8079 & 0.7984 & 0.7931 & 0.7998 & 0.821 & 0.8314 & 0.8417 & 0.8004 & 0.8386 & \textbf{0.8445}\\
    UWaveGestureLibraryY & 0.6876 & 0.715 & 0.7164 & 0.7144 & 0.7074 & 0.7387 & 0.7741 & \textbf{0.7744} & 0.6957 & 0.7557 & 0.7658\\
    UWaveGestureLibraryZ & 0.7172 & 0.7513 & 0.7379 & 0.7401 & 0.7328 & 0.7552 & 0.7739 & 0.7744 & 0.7554 & \textbf{0.78} & 0.7747\\
    Wafer & 0.9945 & 0.9953 & 0.9959 & 0.994 & 0.9992 & 0.9977 & 0.9992 & \textbf{0.9998} & 0.9961 & 0.9974 & 0.9969\\
    Wine & 0.7037 & 0.7778 & 0.7222 & 0.8333 & \textbf{0.8704} & 0.8519 & 0.6667 & 0.8333 & 0.7778 & 0.8519 & 0.8333\\
    WordSynonyms & 0.5643 & 0.6442 & 0.5972 & 0.685 & 0.5987 & 0.6677 & 0.6771 & 0.6771 & 0.6003 & 0.6897 & \textbf{0.7116}\\
    Worms & 0.5844 & 0.5714 & 0.5584 & 0.6883 & 0.7922 & 0.7792 & 0.7792 & \textbf{0.8571} & 0.7662 & 0.7922 & 0.7792\\
    WormsTwoClass & 0.7273 & 0.6104 & 0.5714 & 0.7532 & 0.8052 & 0.8052 & 0.8182 & \textbf{0.8442} & 0.6753 & 0.7922 & 0.7922\\
    Yoga & 0.7113 & 0.8603 & 0.8653 & 0.8507 & 0.7947 & 0.819 & 0.8403 & \textbf{0.8793} & 0.7493 & 0.8497 & 0.8557\\
    \midrule
    \textbf{\textit{Average}} & 0.7704 & 0.7806 & 0.7707 & 0.7984 & 0.8202 & 0.8244 & 0.8288 & 0.8347 & 0.7905 & 0.8283 & \textbf{0.836}\\
    \textbf{\textit{Best Counts}} & 5 & 28 & 20 & 9 & 12 & 15 & 20 & \textbf{31} & 10 & 14 & \textbf{31}\\
    \bottomrule
    \end{tabular}
    }`{=latex}

```{=latex}
\centering
```
`\scalebox{0.6}{
    \begin{tabular}{l|lllllllllll}
    \toprule
                                   & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2\\
    \midrule
    ArticularyWordRecognition & 0.9333 & 0.93 & 0.91 & 0.9867 & 0.99 & \textbf{0.9933} & 0.9833 & 0.9767 & \textbf{0.9933} & \textbf{0.9933} & \textbf{0.9933}\\
    BasicMotions & \textbf{1.0} & \textbf{1.0} & 0.95 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    CharacterTrajectories & 0.984 & 0.9673 & 0.9694 & 0.984 & 0.9694 & 0.9742 & 0.9659 & 0.9714 & 0.9847 & 0.9812 & \textbf{0.9868}\\
    Cricket & 0.9028 & 0.8194 & 0.8472 & 0.9861 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.9861 & 0.9861\\
    DuckDuckGeese & 0.42 & 0.32 & 0.28 & 0.48 & 0.38 & 0.34 & 0.48 & \textbf{0.56} & 0.5 & 0.54 & 0.5\\
    ERing & 0.9074 & 0.8889 & 0.8519 & 0.9704 & 0.9778 & 0.9778 & 0.9852 & 0.9815 & 0.9815 & \textbf{0.9926} & 0.9889\\
    EigenWorms & 0.7634 & 0.4198 & 0.5649 & 0.8321 & 0.8397 & 0.8397 & 0.9313 & \textbf{0.9618} & 0.8702 & 0.916 & 0.8855\\
    Epilepsy & 0.9638 & 0.913 & 0.9565 & 0.9928 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    EthanolConcentration & 0.3726 & \textbf{0.7833} & 0.6046 & 0.4715 & 0.403 & 0.5475 & 0.5057 & 0.4601 & 0.4106 & 0.3954 & 0.4259\\
    FaceDetection & 0.5474 & 0.6325 & 0.6402 & 0.6206 & \textbf{0.66} & 0.6362 & 0.5996 & 0.5806 & 0.6189 & 0.5925 & 0.5834\\
    FingerMovements & 0.52 & 0.5 & 0.49 & 0.57 & 0.62 & 0.6 & 0.57 & 0.53 & 0.61 & 0.57 & \textbf{0.64}\\
    HandMovementDirection & 0.3514 & \textbf{0.4324} & 0.3919 & 0.3108 & 0.3243 & 0.3378 & 0.2973 & 0.3919 & 0.3784 & 0.3514 & 0.3108\\
    Handwriting & 0.2306 & 0.1388 & 0.2259 & 0.3788 & \textbf{0.4165} & 0.3541 & 0.3341 & 0.3471 & 0.2529 & 0.4153 & 0.3988\\
    Heartbeat & 0.7561 & 0.722 & 0.7268 & 0.7024 & 0.7707 & 0.7512 & 0.7268 & 0.7415 & 0.7415 & 0.7561 & \textbf{0.7951}\\
    InsectWingbeatSubset & 0.324 & 0.238 & \texttt{NaN} & 0.298 & 0.393 & 0.344 & 0.411 & 0.422 & \textbf{0.618} & 0.564 & 0.604\\
    JapaneseVowels & 0.9649 & 0.8135 & 0.8162 & 0.9243 & 0.9405 & 0.9 & 0.9486 & 0.9432 & \textbf{0.9811} & 0.9622 & 0.9595\\
    LSST & 0.5093 & 0.5479 & 0.5016 & 0.6054 & 0.5921 & 0.6014 & 0.6014 & 0.5912 & 0.5572 & \textbf{0.6594} & 0.659\\
    Libras & 0.8444 & 0.6722 & 0.6389 & 0.8556 & 0.8944 & 0.9056 & 0.9278 & 0.9278 & 0.9167 & \textbf{0.95} & 0.9444\\
    MotorImagery & 0.5 & 0.58 & 0.57 & 0.49 & 0.48 & 0.53 & 0.57 & 0.5 & \textbf{0.59} & 0.55 & 0.5\\
    NATOPS & 0.8167 & 0.7778 & 0.7944 & 0.8444 & 0.8389 & 0.8222 & 0.8722 & 0.8722 & 0.8667 & \textbf{0.9167} & 0.8944\\
    PEMS-SF & 0.711 & \textbf{0.948} & 0.9422 & 0.7861 & 0.7746 & 0.815 & 0.7052 & 0.8035 & 0.896 & 0.7977 & 0.8324\\
    PhonemeSpectra & 0.2058 & 0.1485 & 0.1712 & 0.1986 & 0.2359 & 0.2431 & 0.252 & 0.2568 & 0.2475 & 0.2869 & \textbf{0.3215}\\
    RacketSports & 0.7303 & 0.8158 & 0.8487 & 0.8487 & 0.8882 & 0.8684 & 0.8684 & 0.8553 & 0.9145 & \textbf{0.9211} & 0.9013\\
    SelfRegulationSCP1 & 0.744 & \textbf{0.8942} & 0.8874 & 0.8259 & 0.8498 & 0.8498 & 0.8498 & 0.8498 & 0.8567 & 0.8737 & 0.8669\\
    SelfRegulationSCP2 & 0.4778 & 0.4778 & 0.5056 & 0.5111 & 0.4944 & 0.4833 & 0.5278 & 0.4944 & \textbf{0.55} & 0.5389 & 0.5111\\
    SpokenArabicDigits & 0.9004 & 0.9591 & 0.9613 & 0.9791 & 0.9641 & 0.9159 & 0.96 & 0.9673 & 0.9645 & 0.9714 & \textbf{0.9804}\\
    UWaveGestureLibrary & 0.8594 & 0.8062 & 0.7625 & 0.9156 & 0.8719 & \textbf{0.925} & 0.9188 & 0.8594 & \textbf{0.925} & 0.9062 & 0.9219\\
    \midrule
    \textbf{\textit{Average}} & 0.6756 & 0.6721 & \texttt{NaN} & 0.7174 & 0.7248 & 0.7243 & 0.733 & 0.735 & 0.7491 & 0.7551 & \textbf{0.7552}\\
    \textbf{\textit{Best Counts}} & 1 & 5 & 0 & 1 & 5 & 5 & 3 & 5 & \textbf{8} & \textbf{8} & \textbf{8}\\
    \bottomrule
    \end{tabular}
    }`{=latex}

```{=latex}
\vspace{1cm}
```
```{=latex}
\centering
```
`\scalebox{0.60}{
    \begin{tabular}{l|lllllllllll}
    \toprule
                                   & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2\\
    \midrule
    Ego4D & $0.3719_{\pm 0.0}$ & \texttt{NaN} & \texttt{NaN} & $0.4324_{\pm 0.0}$ & $0.5267_{\pm 0.0}$ & $0.5216_{\pm 0.0}$ & 0.1394 & 0.1288 & $0.5248_{\pm 0.0}$ & $0.5519_{\pm 0.0}$ & \textbf{0.5551}$_{\pm 0.0}$\\
    EMOPain & $0.7915_{\pm 0.0}$ & 0.7831 & 0.7831 & $0.8225_{\pm 0.0}$ & $0.8254_{\pm 0.0}$ & $0.8338_{\pm 0.0}$ & \textbf{0.8507}$_{\pm 0.0}$ & $0.8028_{\pm 0.0}$ & $0.7831_{\pm 0.0}$ & $0.8451_{\pm 0.0}$ & $0.8169_{\pm 0.0}$\\
    HHAR-ID & $0.9111_{\pm 0.0}$ & 0.8938 & 0.9073 & $0.9491_{\pm 0.0}$ & $0.9708_{\pm 0.0}$ & $0.9699_{\pm 0.0}$ & $0.9661_{\pm 0.0}$ & $0.9693_{\pm 0.0}$ & $0.9751_{\pm 0.0}$ & $0.9871_{\pm 0.0}$ & \textbf{0.9883}$_{\pm 0.0}$\\
    HHAR-OOD & $0.4085_{\pm 0.0}$ & 0.5311 & 0.5441 & $0.3329_{\pm 0.0}$ & $0.3338_{\pm 0.0}$ & $0.3691_{\pm 0.0}$ & $0.318_{\pm 0.0}$ & $0.3222_{\pm 0.0}$ & $0.5084_{\pm 0.0}$ & $0.4907_{\pm 0.0}$ & \textbf{0.5604}$_{\pm 0.0}$\\
    MP8 & $0.642_{\pm 0.0}$ & 0.6235 & 0.6185 & $0.595_{\pm 0.0}$ & $0.6134_{\pm 0.0}$ & $0.6387_{\pm 0.0}$ & $0.6403_{\pm 0.0}$ & $0.642_{\pm 0.0}$ & $0.6084_{\pm 0.0}$ & \textbf{0.6588}$_{\pm 0.0}$ & $0.6521_{\pm 0.0}$\\
    MP50 & $0.4471_{\pm 0.0}$ & 0.5664 & 0.5042 & $0.7294_{\pm 0.0}$ & \textbf{0.7563}$_{\pm 0.0}$ & $0.7345_{\pm 0.0}$ & $0.7193_{\pm 0.0}$ & $0.6857_{\pm 0.0}$ & $0.6992_{\pm 0.0}$ & $0.6924_{\pm 0.0}$ & $0.716_{\pm 0.0}$\\
    UCI-HAR & $0.8483_{\pm 0.0}$ & 0.809 & 0.8157 & $0.8619_{\pm 0.0}$ & $0.8887_{\pm 0.0}$ & $0.8809_{\pm 0.0}$ & \textbf{0.9175}$_{\pm 0.0}$ & $0.9118_{\pm 0.0}$ & $0.8806_{\pm 0.0}$ & $0.8962_{\pm 0.0}$ & $0.9145_{\pm 0.0}$\\
    \midrule
    \textbf{\textit{Avg}} & 0.6315 & \texttt{NaN} & \texttt{NaN} & 0.6747 & 0.7022 & 0.7069 & 0.6502 & 0.6375 & 0.7114 & 0.7317 & \textbf{0.7433}\\
    \textbf{\textit{Avg UCR HAR}} & 0.7135 & 0.7479 & 0.7285 & 0.7733 & 0.7793 & 0.7919 & 0.809 & 0.8016 & 0.7541 & 0.8064 & \textbf{0.82}\\
    \textbf{\textit{Avg UEA HAR}} & 0.8061 & 0.7591 & 0.764 & 0.8658 & 0.8764 & 0.8726 & 0.8785 & 0.8715 & 0.873 & \textbf{0.8987} & 0.8929\\
    \bottomrule
    \end{tabular}
    }`{=latex}

```{=latex}
\vspace{1cm}
```
```{=latex}
\centering
```
`\scalebox{0.60}{
    \begin{tabular}{l|lllllllllll}
    \toprule
                                   & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2\\
    \midrule
    Blink & 0.96 & 0.9178 & 0.8978 & 0.9867 & 0.9889 & \textbf{1.0} & 0.9889 & 0.9822 & 0.5911 & 0.9489 & 0.9733\\
    CAP & 0.6415 & \texttt{NaN} & \texttt{NaN} & 0.7374 & 0.801 & \textbf{0.8168} & 0.8139 & 0.815 & 0.7767 & 0.8078 & 0.7945\\
    CAP-OOD & 0.6704 & \texttt{NaN} & \texttt{NaN} & 0.6946 & 0.7224 & \textbf{0.7535} & 0.7223 & 0.729 & 0.633 & 0.6415 & 0.6391\\
    Epilepsy-EEG & 0.947 & 0.9496 & 0.9447 & 0.9366 & \textbf{0.9563} & 0.9414 & 0.9322 & 0.9186 & 0.907 & 0.9424 & 0.9312\\
    FingerMovements & 0.52 & 0.5 & 0.49 & 0.57 & 0.62 & 0.6 & 0.57 & 0.53 & 0.61 & 0.57 & \textbf{0.64}\\
    PCL & 0.5402 & \texttt{NaN} & \texttt{NaN} & 0.5616 & 0.5667 & 0.5596 & 0.5685 & 0.5731 & 0.5754 & 0.575 & \textbf{0.5931}\\
    PCL-OOD & 0.5279 & \texttt{NaN} & \texttt{NaN} & 0.5215 & 0.5305 & 0.5291 & 0.5259 & 0.5237 & 0.5381 & 0.5329 & \textbf{0.5459}\\
    SEDFx & 0.7113 & \texttt{NaN} & \texttt{NaN} & 0.7714 & 0.8116 & \textbf{0.8218} & 0.8019 & 0.8097 & 0.7878 & 0.8126 & 0.8104\\
    SEDFx-OOD & 0.6934 & \texttt{NaN} & \texttt{NaN} & 0.7423 & 0.7763 & 0.7826 & 0.7692 & 0.7674 & 0.767 & 0.7777 & \textbf{0.7841}\\
    SelfRegulationSCP1 & 0.744 & \textbf{0.8942} & 0.8874 & 0.8259 & 0.8498 & 0.8498 & 0.8498 & 0.8498 & 0.8567 & 0.8737 & 0.8669\\
    SelfRegulationSCP2 & 0.4778 & 0.4778 & 0.5056 & 0.5111 & 0.4944 & 0.4833 & 0.5278 & 0.4944 & \textbf{0.55} & 0.5389 & 0.5111\\
    \midrule
    \textbf{\textit{Avg}} & 0.6758 & \texttt{NaN} & \texttt{NaN} & 0.7145 & 0.738 & \textbf{0.7398} & 0.7337 & 0.7266 & 0.6902 & 0.7292 & 0.7354\\
    \bottomrule
    \end{tabular}
    }`{=latex}

```{=latex}
\newpage
```
Tables for Section `\ref{sec:final-comparison}`{=latex} {#tables-for-section-1}
-------------------------------------------------------

Finally, we provide the complete result tables corresponding to Figure `\ref{fig:teaser-plot}`{=latex} (Table `\ref{tab:final-sota-ucr-res-a}`{=latex} and Table `\ref{tab:final-sota-ucr-res-b}`{=latex}) and to Figure `\ref{fig:more-baselines-comparison}`{=latex} (Table `\ref{tab:more-baselines-comparison}`{=latex}). `\vspace{0.4cm}`{=latex}

```{=latex}
\centering
```
`\scalebox{0.45}{
    \begin{tabular}{l|lllllllllll|ll|ll|l}
    \toprule
     & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2 & SE-Mantis+ & SE-MantisV2 & MantisV2 \&  & MantisV2 \& & MantisV2-FT\\
     & & & & & & & & & & & & & &  TiViT-H & TiConvNext & \\
    \midrule
    ACSF1 & $0.8233_{\pm 0.021}$ & 0.8 & 0.81 & 0.75 & 0.75 & \textbf{0.86} & 0.83 & 0.83 & 0.7 & 0.79 & 0.76 & 0.8 & 0.77 & 0.81 & 0.82 & $0.77_{\pm 0.01}$\\
    Adiac & $0.734_{\pm 0.007}$ & 0.8031 & 0.8031 & 0.7775 & 0.7826 & 0.8286 & 0.7187 & 0.7059 & 0.8005 & 0.8414 & 0.8363 & 0.8312 & \textbf{0.8465} & 0.7519 & 0.7749 & $0.8321_{\pm 0.007}$\\
    AllGestureWiimoteX & $0.5957_{\pm 0.01}$ & 0.6229 & 0.5043 & 0.6614 & 0.7014 & 0.6843 & 0.67 & 0.6943 & 0.6071 & 0.7171 & 0.73 & 0.69 & 0.6871 & 0.7229 & 0.7357 & \textbf{0.779}$_{\pm 0.006}$\\
    AllGestureWiimoteY & $0.6319_{\pm 0.016}$ & 0.6329 & 0.5114 & 0.7029 & 0.7314 & 0.7043 & 0.7371 & 0.7229 & 0.6529 & 0.7414 & 0.7471 & 0.73 & 0.7271 & 0.7557 & 0.7486 & \textbf{0.7905}$_{\pm 0.004}$\\
    AllGestureWiimoteZ & $0.5462_{\pm 0.005}$ & 0.5329 & 0.4529 & 0.6057 & 0.6343 & 0.66 & 0.6771 & 0.6586 & 0.5414 & 0.6657 & 0.6814 & 0.6329 & 0.6357 & 0.6886 & 0.7043 & \textbf{0.7505}$_{\pm 0.017}$\\
    ArrowHead & $0.741_{\pm 0.012}$ & 0.7543 & 0.7429 & 0.7771 & 0.7829 & 0.8514 & 0.8229 & 0.8229 & 0.7257 & 0.8457 & 0.8514 & 0.84 & 0.8514 & \textbf{0.88} & 0.8457 & $0.8457_{\pm 0.006}$\\
    BME & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0} & 0.98 & 0.9933 & 0.9933 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.9667 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & $0.9933_{\pm 0.007}$\\
    Beef & $0.6889_{\pm 0.051}$ & 0.8 & 0.7667 & 0.8 & \textbf{0.9333} & 0.7 & 0.7667 & 0.8333 & 0.7333 & 0.6667 & 0.7333 & 0.7333 & 0.8 & 0.7667 & 0.8333 & $0.7778_{\pm 0.069}$\\
    BeetleFly & $0.7833_{\pm 0.029}$ & 0.9 & 0.8 & \textbf{0.95} & 0.9 & 0.8 & 0.85 & \textbf{0.95} & 0.7 & 0.8 & 0.85 & \textbf{0.95} & \textbf{0.95} & \textbf{0.95} & \textbf{0.95} & $0.8833_{\pm 0.029}$\\
    BirdChicken & $0.85_{\pm 0.0}$ & 0.85 & 0.75 & \textbf{1.0} & 0.9 & 0.9 & 0.9 & 0.95 & \textbf{1.0} & 0.9 & 0.9 & 0.9 & 0.9 & 0.9 & 0.9 & $0.9_{\pm 0.0}$\\
    CBF & $0.9763_{\pm 0.002}$ & 0.9133 & 0.9244 & 0.9889 & 0.9967 & \textbf{1.0} & 0.9989 & 0.9989 & 0.99 & 0.9956 & 0.9967 & 0.9989 & 0.9944 & 0.9989 & \textbf{1.0} & $0.9959_{\pm 0.003}$\\
    Car & $0.7556_{\pm 0.025}$ & 0.7833 & 0.8167 & 0.9 & 0.8 & 0.85 & 0.8167 & 0.85 & 0.7333 & 0.8833 & 0.8167 & \textbf{0.9167} & 0.9 & 0.9 & 0.9 & $0.8667_{\pm 0.0}$\\
    Chinatown & $0.9796_{\pm 0.003}$ & \textbf{0.9854} & 0.9796 & 0.9825 & 0.9825 & \textbf{0.9854} & 0.9679 & 0.9388 & 0.9621 & 0.9738 & 0.9621 & 0.9621 & 0.9534 & 0.9592 & 0.9504 & $0.9738_{\pm 0.008}$\\
    ChlorineConcentration & $0.6682_{\pm 0.001}$ & 0.95 & \textbf{0.9773} & 0.7789 & 0.8018 & 0.7672 & 0.7622 & 0.7771 & 0.6086 & 0.6857 & 0.7115 & 0.7982 & 0.7867 & 0.7914 & 0.8026 & $0.818_{\pm 0.009}$\\
    CinCECGTorso & $0.8872_{\pm 0.013}$ & 0.8341 & 0.8225 & 0.7681 & \textbf{0.9565} & 0.8986 & 0.9355 & 0.942 & 0.8551 & 0.8558 & 0.8507 & 0.9181 & 0.9268 & 0.9507 & 0.9449 & $0.83_{\pm 0.007}$\\
    Coffee & $0.9881_{\pm 0.021}$ & 0.9643 & \textbf{1.0} & 0.9643 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}$_{\pm 0.0}$\\
    Computers & $0.7347_{\pm 0.006}$ & 0.62 & 0.656 & 0.612 & 0.8 & 0.768 & 0.764 & \textbf{0.828} & 0.788 & 0.712 & 0.704 & 0.756 & 0.724 & 0.756 & 0.804 & $0.716_{\pm 0.011}$\\
    CricketX & $0.6974_{\pm 0.01}$ & 0.6667 & 0.6641 & 0.7436 & 0.7 & 0.7564 & 0.759 & 0.7487 & 0.7051 & 0.7564 & 0.7821 & 0.7667 & 0.8 & 0.7769 & 0.7718 & \textbf{0.8256}$_{\pm 0.003}$\\
    CricketY & $0.6923_{\pm 0.008}$ & 0.7 & 0.6333 & 0.7077 & 0.7538 & 0.7359 & 0.7872 & 0.7462 & 0.6897 & 0.8077 & 0.8154 & 0.8103 & 0.8179 & 0.8231 & 0.8026 & \textbf{0.8427}$_{\pm 0.006}$\\
    CricketZ & $0.7427_{\pm 0.008}$ & 0.6718 & 0.6692 & 0.7436 & 0.7333 & 0.7231 & 0.7974 & 0.759 & 0.659 & 0.7923 & 0.8128 & 0.7923 & 0.8179 & 0.8103 & 0.7846 & \textbf{0.8556}$_{\pm 0.013}$\\
    Crop & $0.7523_{\pm 0.001}$ & 0.7989 & \textbf{0.812} & 0.7234 & 0.7029 & 0.7163 & 0.6867 & 0.6737 & 0.6895 & 0.7364 & 0.7348 & 0.7362 & 0.732 & 0.7231 & 0.7171 & $0.7634_{\pm 0.004}$\\
    DiatomSizeReduction & $0.9401_{\pm 0.008}$ & 0.9608 & 0.951 & 0.9412 & 0.9281 & 0.9477 & 0.9673 & 0.9673 & 0.915 & 0.9216 & 0.9281 & 0.9216 & 0.9281 & 0.9575 & \textbf{0.9706} & $0.9412_{\pm 0.018}$\\
    DistalPhalanxOutlineAgeGroup & $0.717_{\pm 0.004}$ & 0.7626 & 0.7626 & 0.6691 & 0.7482 & 0.741 & 0.7122 & 0.7266 & 0.705 & 0.7266 & \textbf{0.777} & 0.705 & 0.7122 & 0.705 & 0.7338 & $0.7362_{\pm 0.018}$\\
    DistalPhalanxOutlineCorrect & $0.7911_{\pm 0.002}$ & 0.7826 & 0.7754 & 0.7536 & 0.75 & 0.7645 & 0.7536 & 0.7681 & 0.7428 & 0.779 & 0.7681 & \textbf{0.7935} & 0.7536 & 0.75 & 0.7681 & $0.7899_{\pm 0.016}$\\
    DistalPhalanxTW & $0.6475_{\pm 0.019}$ & 0.6978 & 0.6835 & 0.6259 & 0.6547 & 0.6547 & 0.6691 & 0.6547 & 0.6835 & 0.6547 & 0.6835 & 0.6763 & \textbf{0.705} & 0.6691 & 0.6763 & $0.6547_{\pm 0.019}$\\
    DodgerLoopDay & $0.6417_{\pm 0.052}$ & 0.6125 & \textbf{0.725} & 0.4375 & 0.55 & 0.525 & 0.5 & 0.5625 & 0.55 & 0.5875 & 0.5 & 0.5375 & 0.525 & 0.5125 & 0.55 & $0.5333_{\pm 0.007}$\\
    DodgerLoopGame & $0.8333_{\pm 0.007}$ & 0.7899 & 0.7971 & 0.8406 & 0.7899 & 0.8913 & 0.8406 & 0.8406 & 0.8188 & 0.8406 & 0.8623 & 0.8478 & 0.8551 & 0.8188 & 0.8841 & \textbf{0.901}$_{\pm 0.022}$\\
    DodgerLoopWeekend & \textbf{0.9855}$_{\pm 0.0}$ & \textbf{0.9855} & 0.9783 & \textbf{0.9855} & 0.9565 & 0.9638 & 0.9493 & 0.913 & 0.9638 & \textbf{0.9855} & 0.971 & \textbf{0.9855} & 0.971 & \textbf{0.9855} & 0.9565 & $0.9758_{\pm 0.004}$\\
    ECG200 & $0.85_{\pm 0.017}$ & 0.89 & 0.88 & \textbf{0.9} & 0.83 & 0.84 & 0.86 & 0.86 & 0.85 & 0.89 & 0.88 & 0.87 & 0.87 & 0.85 & 0.87 & $0.88_{\pm 0.017}$\\
    ECG5000 & $0.9398_{\pm 0.001}$ & 0.942 & \textbf{0.9447} & 0.9333 & 0.9391 & 0.9342 & 0.9378 & 0.9311 & 0.9193 & 0.9322 & 0.9353 & 0.9338 & 0.9329 & 0.9371 & 0.9289 & $0.9404_{\pm 0.002}$\\
    ECGFiveDays & $0.7975_{\pm 0.013}$ & 0.9245 & 0.9826 & 0.9698 & 0.9756 & 0.9907 & 0.9791 & 0.9895 & 0.899 & 0.9779 & 0.993 & 0.9791 & \textbf{0.9977} & 0.9895 & 0.9895 & $0.9853_{\pm 0.007}$\\
    EOGHorizontalSignal & $0.5948_{\pm 0.004}$ & 0.5276 & 0.5 & 0.5635 & 0.5718 & 0.5884 & 0.5967 & 0.5746 & 0.5331 & 0.5773 & 0.5691 & 0.5994 & 0.605 & 0.6381 & 0.605 & \textbf{0.6464}$_{\pm 0.022}$\\
    EOGVerticalSignal & $0.5092_{\pm 0.002}$ & 0.489 & 0.4337 & 0.5 & 0.4171 & 0.4392 & 0.4724 & 0.4254 & 0.3315 & 0.4613 & 0.5138 & 0.4779 & 0.4807 & 0.489 & 0.4448 & \textbf{0.5451}$_{\pm 0.008}$\\
    Earthquakes & \textbf{0.7506}$_{\pm 0.008}$ & 0.7482 & 0.7482 & 0.705 & 0.6906 & 0.7482 & 0.705 & 0.6978 & 0.7338 & 0.6835 & 0.7194 & 0.7266 & 0.7122 & 0.6978 & 0.7122 & $0.6835_{\pm 0.019}$\\
    ElectricDevices & $0.7396_{\pm 0.003}$ & 0.7025 & 0.6614 & 0.7404 & 0.6709 & 0.7168 & 0.7635 & \textbf{0.7764} & 0.6786 & 0.7301 & 0.7175 & 0.7125 & 0.7257 & 0.7636 & 0.7679 & $0.7505_{\pm 0.009}$\\
    EthanolLevel & $0.38_{\pm 0.013}$ & \textbf{0.848} & 0.694 & 0.682 & 0.508 & 0.654 & 0.584 & 0.58 & 0.52 & 0.55 & 0.592 & 0.484 & 0.542 & 0.592 & 0.574 & $0.7947_{\pm 0.005}$\\
    FaceAll & $0.7682_{\pm 0.028}$ & \textbf{0.8077} & 0.771 & 0.7787 & 0.7876 & 0.7533 & 0.7521 & 0.7367 & 0.7207 & 0.745 & 0.7574 & 0.7444 & 0.7568 & 0.7562 & 0.7444 & $0.7964_{\pm 0.003}$\\
    FaceFour & $0.8977_{\pm 0.02}$ & 0.9091 & 0.8864 & 0.875 & 0.9545 & 0.9091 & 0.875 & 0.9205 & 0.9659 & 0.9205 & \textbf{0.9886} & 0.8977 & 0.9432 & 0.9659 & 0.9659 & $0.9697_{\pm 0.013}$\\
    FacesUCR & $0.8558_{\pm 0.004}$ & 0.8766 & 0.8771 & 0.8912 & 0.8776 & 0.8576 & 0.8751 & 0.8561 & 0.8195 & 0.881 & 0.9117 & 0.8776 & 0.9083 & 0.9088 & 0.898 & \textbf{0.9416}$_{\pm 0.004}$\\
    FiftyWords & $0.726_{\pm 0.006}$ & 0.7385 & 0.7165 & 0.7758 & 0.7231 & 0.7956 & 0.7868 & 0.7714 & 0.7165 & 0.7868 & 0.8198 & 0.7824 & 0.8088 & 0.8264 & 0.8088 & \textbf{0.8432}$_{\pm 0.01}$\\
    Fish & $0.7619_{\pm 0.013}$ & 0.88 & 0.8857 & 0.96 & 0.9029 & 0.9429 & 0.9486 & 0.9714 & 0.9143 & 0.96 & 0.96 & 0.9486 & 0.9714 & 0.9657 & \textbf{0.9829} & \textbf{0.9829}$_{\pm 0.0}$\\
    FordA & $0.9101_{\pm 0.004}$ & 0.897 & 0.8758 & 0.9061 & 0.947 & 0.928 & 0.9136 & 0.9227 & 0.9061 & 0.9265 & 0.9432 & 0.9432 & 0.9462 & 0.9242 & 0.9318 & \textbf{0.95}$_{\pm 0.005}$\\
    FordB & $0.7292_{\pm 0.007}$ & 0.7556 & 0.7136 & 0.7556 & 0.816 & 0.8074 & 0.7963 & 0.779 & 0.7654 & 0.7889 & 0.8099 & 0.816 & 0.8259 & 0.8136 & 0.8111 & \textbf{0.8358}$_{\pm 0.002}$\\
    FreezerRegularTrain & \textbf{0.9996}$_{\pm 0.0}$ & 0.9986 & 0.9877 & 0.9881 & 0.9912 & 0.9951 & 0.9965 & 0.9975 & 0.994 & 0.9905 & 0.9961 & 0.9968 & 0.9954 & 0.9972 & 0.9982 & $0.9946_{\pm 0.0}$\\
    FreezerSmallTrain & $0.9251_{\pm 0.002}$ & 0.8933 & 0.8098 & 0.8186 & 0.8912 & 0.9688 & 0.9839 & 0.9902 & 0.9863 & 0.9158 & 0.9849 & 0.974 & \textbf{0.9905} & 0.9891 & 0.9895 & $0.9473_{\pm 0.023}$\\
    Fungi & $0.9229_{\pm 0.041}$ & 0.8656 & 0.7849 & \textbf{1.0} & 0.9516 & 0.9462 & 0.9785 & 0.9946 & 0.7473 & 0.9247 & 0.9731 & 0.9839 & 0.9946 & 0.9839 & \textbf{1.0} & $0.9409_{\pm 0.023}$\\
    GestureMidAirD1 & $0.6795_{\pm 0.036}$ & 0.6231 & 0.6769 & 0.6846 & 0.6923 & 0.7308 & 0.7538 & 0.7615 & 0.6846 & 0.7385 & 0.7769 & 0.7538 & 0.7692 & 0.7769 & 0.7538 & \textbf{0.7846}$_{\pm 0.013}$\\
    GestureMidAirD2 & $0.6205_{\pm 0.016}$ & 0.5615 & 0.5846 & 0.5769 & 0.6154 & 0.7 & 0.6615 & 0.6769 & 0.5615 & 0.5615 & 0.6462 & 0.6 & 0.6231 & 0.7 & 0.6923 & \textbf{0.7026}$_{\pm 0.025}$\\
    GestureMidAirD3 & $0.3923_{\pm 0.031}$ & 0.3692 & 0.3615 & 0.3692 & 0.4077 & 0.3538 & 0.4692 & 0.4923 & 0.4462 & 0.4462 & 0.4462 & 0.4385 & 0.4308 & 0.4385 & \textbf{0.5} & $0.4641_{\pm 0.012}$\\
    GesturePebbleZ1 & $0.8779_{\pm 0.012}$ & 0.8488 & 0.8779 & 0.8953 & 0.8837 & 0.9012 & 0.907 & 0.8547 & 0.8953 & 0.9244 & 0.9302 & 0.9186 & 0.9302 & 0.936 & 0.9302 & \textbf{0.9457}$_{\pm 0.009}$\\
    GesturePebbleZ2 & $0.7384_{\pm 0.004}$ & 0.7848 & 0.7532 & 0.8924 & 0.8418 & 0.8671 & 0.8544 & 0.8165 & 0.7722 & 0.8797 & 0.9051 & 0.8987 & 0.8924 & \textbf{0.9114} & 0.8797 & $0.8903_{\pm 0.007}$\\
    GunPoint & $0.9467_{\pm 0.018}$ & 0.9667 & 0.9533 & \textbf{1.0} & 0.98 & 0.9933 & 0.9933 & 0.9933 & 0.9733 & 0.9867 & 0.98 & 0.9867 & 0.98 & \textbf{1.0} & 0.9933 & $0.9933_{\pm 0.007}$\\
    GunPointAgeSpan & $0.9884_{\pm 0.002}$ & 0.9905 & 0.9842 & 0.9778 & 0.9905 & 0.9905 & 0.9937 & 0.9873 & 0.9684 & 0.9873 & 0.9937 & 0.9905 & 0.9937 & \textbf{0.9968} & 0.9905 & \textbf{0.9968}$_{\pm 0.0}$\\
    GunPointMaleVersusFemale & $0.9937_{\pm 0.0}$ & 0.9968 & \textbf{1.0} & 0.9968 & 0.9937 & 0.9937 & \textbf{1.0} & 0.9968 & 0.9873 & 0.9968 & \textbf{1.0} & 0.9968 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & $0.9968_{\pm 0.0}$\\
    GunPointOldVersusYoung & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0} & \textbf{1.0} & 0.927 & 0.9651 & 0.9778 & 0.9873 & 0.9937 & \textbf{1.0} & 0.9968 & 0.9968 & 0.9968 & 0.9968 & 0.9968 & 0.9968 & $0.9968_{\pm 0.0}$\\
    Ham & $0.6063_{\pm 0.005}$ & \textbf{0.7429} & 0.7238 & 0.6571 & 0.7048 & 0.6952 & \textbf{0.7429} & 0.7238 & 0.7333 & 0.581 & 0.7048 & 0.6571 & 0.6667 & \textbf{0.7429} & 0.7333 & $0.6635_{\pm 0.02}$\\
    HandOutlines & $0.9036_{\pm 0.004}$ & 0.9162 & 0.927 & 0.9405 & 0.8757 & 0.9378 & \textbf{0.9432} & 0.9216 & 0.8865 & 0.927 & 0.9297 & 0.8892 & 0.9108 & 0.9378 & 0.9135 & $0.9306_{\pm 0.004}$\\
    Haptics & $0.4838_{\pm 0.015}$ & 0.4708 & 0.461 & 0.4903 & 0.4773 & 0.5032 & 0.4935 & 0.5227 & 0.4253 & 0.5325 & 0.5032 & 0.5097 & 0.5195 & 0.5325 & 0.5519 & \textbf{0.5714}$_{\pm 0.017}$\\
    Herring & $0.5208_{\pm 0.024}$ & 0.5938 & 0.6406 & 0.6406 & 0.6094 & 0.5156 & 0.5312 & 0.6094 & 0.5312 & 0.625 & 0.6875 & \textbf{0.7344} & 0.6562 & 0.6562 & 0.6406 & $0.6198_{\pm 0.009}$\\
    HouseTwenty & $0.9692_{\pm 0.005}$ & 0.8487 & 0.7563 & 0.9328 & \textbf{0.9832} & 0.9748 & \textbf{0.9832} & \textbf{0.9832} & 0.8992 & 0.9412 & 0.9328 & 0.958 & 0.9748 & 0.9748 & 0.9748 & $0.9552_{\pm 0.005}$\\
    InlineSkate & $0.3891_{\pm 0.005}$ & 0.3345 & 0.3436 & 0.3636 & 0.4382 & 0.4364 & 0.3982 & 0.4473 & 0.3564 & 0.4036 & 0.3909 & 0.4236 & 0.4309 & 0.4255 & \textbf{0.46} & $0.4473_{\pm 0.026}$\\
    InsectEPGRegularTrain & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0} & \textbf{1.0} & 0.9478 & 0.996 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}$_{\pm 0.0}$\\
    InsectEPGSmallTrain & \textbf{1.0}$_{\pm 0.0}$ & \textbf{1.0} & \textbf{1.0} & 0.8353 & 0.9558 & 0.9197 & 0.988 & 0.988 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}$_{\pm 0.0}$\\
    InsectWingbeatSound & $0.629_{\pm 0.003}$ & \textbf{0.6672} & 0.6556 & 0.6212 & 0.6253 & 0.6313 & 0.551 & 0.5652 & 0.5101 & 0.5838 & 0.596 & 0.5596 & 0.5793 & 0.5899 & 0.6081 & $0.6032_{\pm 0.008}$\\
    \bottomrule
    \end{tabular}
    }`{=latex}

```{=latex}
\newpage
```
```{=latex}
\centering
```
`\scalebox{0.45}{
    \begin{tabular}{l|lllllllllll|ll|ll|l}
    \toprule
     & Catch22+ & TabPFN & TabICL & MOMENT & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2 & SE-Mantis+ & SE-MantisV2 & MantisV2 \&  & MantisV2 \& & MantisV2-FT\\
     & & & & & & & & & & & & & &  TiViT-H & TiConvNext & \\
    \midrule
    ItalyPowerDemand & $0.9537_{\pm 0.002}$ & \textbf{0.9699} & 0.9631 & 0.9543 & 0.9602 & 0.964 & 0.9407 & 0.9378 & 0.9116 & 0.9281 & 0.9456 & 0.9436 & 0.9475 & 0.9427 & 0.9446 & $0.9517_{\pm 0.001}$\\
    LargeKitchenAppliances & $0.8169_{\pm 0.009}$ & 0.6373 & 0.696 & 0.76 & 0.8293 & 0.8613 & 0.872 & \textbf{0.8933} & 0.7707 & 0.824 & 0.816 & 0.832 & 0.816 & 0.8507 & 0.888 & $0.8596_{\pm 0.008}$\\
    Lightning2 & $0.7322_{\pm 0.009}$ & 0.6721 & 0.7049 & 0.7869 & 0.7377 & 0.7869 & 0.8197 & 0.8361 & 0.7377 & 0.7705 & 0.7869 & 0.8361 & 0.7705 & \textbf{0.8525} & 0.8033 & $0.847_{\pm 0.025}$\\
    Lightning7 & $0.7671_{\pm 0.014}$ & 0.6986 & 0.7397 & 0.7397 & 0.8082 & 0.7945 & 0.8493 & 0.7671 & 0.7808 & 0.8356 & 0.863 & 0.863 & 0.8219 & \textbf{0.8767} & 0.7945 & $0.7717_{\pm 0.032}$\\
    Mallat & $0.9555_{\pm 0.009}$ & 0.9689 & 0.9484 & 0.9186 & 0.9548 & 0.9467 & \textbf{0.9765} & 0.9642 & 0.8486 & 0.9049 & 0.9198 & 0.9339 & 0.9663 & 0.974 & 0.9697 & $0.9512_{\pm 0.021}$\\
    Meat & $0.9222_{\pm 0.01}$ & 0.9833 & 0.9333 & \textbf{1.0} & 0.9 & 0.9333 & 0.8167 & 0.85 & 0.9333 & 0.9833 & 0.9667 & 0.9 & 0.8833 & 0.8833 & 0.8833 & $0.8833_{\pm 0.033}$\\
    MedicalImages & $0.7737_{\pm 0.005}$ & 0.7947 & 0.8079 & 0.7461 & 0.725 & 0.7474 & 0.7316 & 0.7513 & 0.7092 & 0.7618 & 0.7711 & 0.7579 & 0.7592 & 0.7566 & 0.7711 & \textbf{0.8224}$_{\pm 0.003}$\\
    MelbournePedestrian & $0.9597_{\pm 0.0}$ & \textbf{0.9803} & \textbf{0.9803} & 0.8954 & 0.8938 & 0.9049 & 0.8626 & 0.87 & 0.9332 & 0.9582 & 0.9582 & 0.9574 & 0.9537 & 0.9377 & 0.9426 & $0.9641_{\pm 0.003}$\\
    MiddlePhalanxOutlineAgeGroup & $0.5952_{\pm 0.004}$ & 0.6234 & \textbf{0.6299} & 0.5065 & 0.487 & 0.5519 & 0.5584 & 0.526 & 0.5519 & 0.5195 & 0.5455 & 0.5 & 0.4675 & 0.539 & 0.539 & $0.5498_{\pm 0.023}$\\
    MiddlePhalanxOutlineCorrect & $0.811_{\pm 0.012}$ & \textbf{0.8522} & 0.8351 & 0.8076 & 0.8076 & 0.8351 & 0.7938 & 0.8179 & 0.7491 & 0.8351 & 0.8419 & 0.8076 & 0.811 & 0.8007 & 0.8144 & $0.819_{\pm 0.022}$\\
    MiddlePhalanxTW & $0.5844_{\pm 0.006}$ & 0.6169 & \textbf{0.6234} & 0.5455 & 0.5 & 0.5 & 0.5195 & 0.4805 & 0.4416 & 0.487 & 0.513 & 0.4935 & 0.513 & 0.5195 & 0.5 & $0.4935_{\pm 0.022}$\\
    MixedShapesRegularTrain & $0.9306_{\pm 0.003}$ & 0.9344 & 0.9299 & 0.9357 & 0.9699 & 0.9666 & 0.9781 & 0.9781 & 0.9365 & 0.9678 & 0.9744 & 0.9728 & 0.974 & 0.9781 & \textbf{0.9802} & $0.9739_{\pm 0.003}$\\
    MixedShapesSmallTrain & $0.8827_{\pm 0.002}$ & 0.8293 & 0.8767 & 0.8899 & 0.9365 & 0.939 & 0.9501 & 0.9542 & 0.9208 & 0.932 & 0.946 & 0.9621 & \textbf{0.9625} & 0.9567 & 0.9579 & $0.9372_{\pm 0.002}$\\
    MoteStrain & $0.8818_{\pm 0.015}$ & 0.889 & 0.8794 & 0.905 & 0.9241 & 0.9257 & 0.9161 & 0.9065 & \textbf{0.9553} & 0.8866 & 0.9393 & 0.9289 & 0.9545 & 0.9481 & 0.9497 & $0.93_{\pm 0.004}$\\
    NonInvasiveFetalECGThorax1 & $0.8877_{\pm 0.004}$ & 0.941 & 0.9272 & 0.9237 & 0.8992 & 0.8845 & 0.9003 & 0.8911 & 0.8545 & 0.9303 & 0.9293 & 0.9323 & 0.9379 & 0.9298 & 0.9196 & \textbf{0.943}$_{\pm 0.004}$\\
    NonInvasiveFetalECGThorax2 & $0.9091_{\pm 0.0}$ & 0.9476 & 0.9405 & 0.9369 & 0.9033 & 0.913 & 0.9237 & 0.9226 & 0.9028 & 0.9344 & 0.9364 & 0.945 & 0.9481 & 0.9344 & 0.9293 & \textbf{0.953}$_{\pm 0.001}$\\
    OSULeaf & $0.6832_{\pm 0.009}$ & 0.5661 & 0.595 & 0.7934 & 0.938 & 0.938 & 0.9835 & \textbf{0.9917} & 0.8595 & 0.9669 & 0.9628 & 0.9835 & 0.9835 & 0.9876 & 0.9835 & $0.9807_{\pm 0.002}$\\
    OliveOil & $0.8444_{\pm 0.019}$ & \textbf{0.9333} & 0.9 & 0.9 & \textbf{0.9333} & 0.8333 & 0.7333 & 0.9 & 0.7667 & 0.8667 & 0.8667 & 0.8667 & 0.8333 & 0.8 & 0.9 & $0.6889_{\pm 0.019}$\\
    PLAID & $0.8752_{\pm 0.009}$ & 0.7896 & 0.5661 & 0.7858 & 0.8696 & 0.8864 & 0.9181 & 0.9311 & 0.7561 & 0.8529 & 0.8529 & 0.851 & 0.8417 & \textbf{0.9385} & 0.9348 & $0.9143_{\pm 0.005}$\\
    PhalangesOutlinesCorrect & $0.8252_{\pm 0.003}$ & 0.8403 & \textbf{0.8613} & 0.8135 & 0.7925 & 0.8228 & 0.7995 & 0.7832 & 0.7471 & 0.7925 & 0.7949 & 0.7949 & 0.8054 & 0.7995 & 0.7972 & $0.8287_{\pm 0.008}$\\
    Phoneme & $0.3216_{\pm 0.005}$ & 0.1097 & 0.1361 & 0.3001 & 0.3898 & \textbf{0.4167} & 0.3956 & 0.3972 & 0.2758 & 0.3492 & 0.355 & 0.4114 & 0.4109 & 0.4088 & 0.4014 & $0.3557_{\pm 0.006}$\\
    PickupGestureWiimoteZ & $0.6933_{\pm 0.012}$ & 0.76 & 0.74 & 0.7 & 0.84 & 0.66 & \textbf{0.92} & 0.88 & 0.64 & 0.82 & 0.76 & 0.76 & 0.84 & \textbf{0.92} & 0.86 & $0.78_{\pm 0.02}$\\
    PigAirwayPressure & $0.2372_{\pm 0.003}$ & 0.0192 & 0.1538 & 0.1106 & 0.4183 & 0.4183 & 0.6298 & \textbf{0.7452} & 0.4519 & 0.5433 & 0.5769 & 0.5433 & 0.5625 & 0.6731 & \textbf{0.7452} & $0.7083_{\pm 0.02}$\\
    PigArtPressure & $0.891_{\pm 0.007}$ & 0.0337 & 0.2548 & 0.5481 & 0.8894 & 0.8413 & 0.8894 & 0.9327 & 0.9423 & 0.9327 & 0.9087 & 0.9471 & 0.9327 & 0.9471 & \textbf{0.9663} & $0.9295_{\pm 0.007}$\\
    PigCVP & $0.5128_{\pm 0.011}$ & 0.0192 & 0.1731 & 0.5337 & 0.8654 & 0.7885 & 0.7933 & 0.8317 & 0.8846 & 0.9135 & \textbf{0.9375} & 0.9279 & 0.9231 & 0.8798 & 0.9038 & $0.9199_{\pm 0.007}$\\
    Plane & \textbf{1.0}$_{\pm 0.0}$ & 0.9905 & 0.9905 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}$_{\pm 0.0}$\\
    PowerCons & $0.9944_{\pm 0.0}$ & \textbf{1.0} & \textbf{1.0} & 0.9556 & 0.9389 & 0.9333 & 0.9056 & 0.9222 & 0.9722 & 0.9722 & 0.9722 & 0.9778 & 0.9556 & 0.9667 & 0.9556 & $0.9648_{\pm 0.012}$\\
    ProximalPhalanxOutlineAgeGroup & $0.8472_{\pm 0.007}$ & \textbf{0.8585} & 0.839 & 0.8293 & 0.8488 & 0.8341 & 0.8293 & 0.8293 & 0.8537 & \textbf{0.8585} & 0.839 & 0.839 & 0.8341 & 0.8293 & 0.8293 & $0.8439_{\pm 0.01}$\\
    ProximalPhalanxOutlineCorrect & $0.8648_{\pm 0.012}$ & 0.9038 & \textbf{0.9244} & 0.8866 & 0.8866 & 0.8832 & 0.8729 & 0.8625 & 0.8385 & 0.8935 & 0.8763 & 0.8832 & 0.9107 & 0.8729 & 0.8557 & $0.9049_{\pm 0.002}$\\
    ProximalPhalanxTW & $0.8033_{\pm 0.007}$ & 0.8098 & \textbf{0.8293} & 0.7951 & 0.761 & 0.7561 & 0.761 & 0.7463 & 0.7854 & 0.7415 & 0.761 & 0.7512 & 0.7561 & 0.7561 & 0.761 & $0.7431_{\pm 0.012}$\\
    RefrigerationDevices & $0.5493_{\pm 0.023}$ & 0.504 & 0.4933 & 0.5147 & 0.5413 & 0.568 & 0.576 & \textbf{0.5973} & 0.5627 & 0.544 & 0.5253 & 0.536 & 0.56 & 0.584 & 0.5813 & $0.552_{\pm 0.019}$\\
    Rock & $0.62_{\pm 0.02}$ & 0.76 & 0.64 & 0.9 & 0.88 & 0.9 & \textbf{0.96} & 0.92 & 0.76 & 0.82 & 0.84 & 0.86 & 0.9 & 0.92 & 0.92 & $0.8933_{\pm 0.042}$\\
    ScreenType & $0.5271_{\pm 0.007}$ & 0.4187 & 0.4107 & 0.392 & 0.528 & 0.5387 & 0.536 & 0.4987 & 0.4693 & 0.48 & 0.5093 & 0.536 & 0.496 & \textbf{0.544} & 0.5253 & $0.5218_{\pm 0.011}$\\
    SemgHandGenderCh2 & $0.9272_{\pm 0.003}$ & \textbf{0.9467} & 0.8867 & 0.765 & 0.8583 & 0.8883 & 0.9017 & 0.91 & 0.8617 & 0.9017 & 0.8967 & 0.9117 & 0.93 & 0.9217 & 0.9317 & $0.9261_{\pm 0.011}$\\
    SemgHandMovementCh2 & \textbf{0.8615}$_{\pm 0.007}$ & 0.7711 & 0.5689 & 0.4444 & 0.52 & 0.5756 & 0.56 & 0.5911 & 0.6244 & 0.6978 & 0.6244 & 0.6956 & 0.6133 & 0.6467 & 0.6578 & $0.7978_{\pm 0.029}$\\
    SemgHandSubjectCh2 & $0.8837_{\pm 0.007}$ & \textbf{0.9356} & 0.8333 & 0.7089 & 0.8133 & 0.8444 & 0.8644 & 0.8489 & 0.7556 & 0.86 & 0.8533 & 0.8378 & 0.88 & 0.8933 & 0.9022 & $0.8948_{\pm 0.008}$\\
    ShakeGestureWiimoteZ & $0.8467_{\pm 0.031}$ & 0.82 & 0.74 & 0.84 & 0.88 & 0.92 & 0.84 & 0.86 & 0.9 & 0.94 & 0.92 & 0.88 & 0.9 & 0.88 & 0.88 & \textbf{0.9533}$_{\pm 0.031}$\\
    ShapeletSim & $0.9704_{\pm 0.008}$ & 0.4778 & 0.5056 & 0.9667 & 0.9667 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.9278 & 0.9722 & 0.9778 & 0.9944 & 0.9944 & \textbf{1.0} & \textbf{1.0} & $0.9815_{\pm 0.014}$\\
    ShapesAll & $0.8206_{\pm 0.002}$ & 0.8017 & 0.7917 & 0.8483 & 0.8633 & 0.8967 & 0.9117 & 0.8967 & 0.8717 & 0.8933 & 0.9017 & 0.8933 & 0.9167 & \textbf{0.925} & 0.9167 & $0.9139_{\pm 0.01}$\\
    SmallKitchenAppliances & \textbf{0.8302}$_{\pm 0.008}$ & 0.7867 & 0.7627 & 0.6107 & 0.8133 & 0.8107 & 0.8213 & 0.8267 & 0.8293 & 0.8187 & 0.7867 & 0.8 & 0.7893 & 0.8 & 0.8267 & $0.816_{\pm 0.003}$\\
    SmoothSubspace & $0.9867_{\pm 0.007}$ & \textbf{1.0} & \textbf{1.0} & 0.98 & 0.9667 & 0.9667 & 0.9667 & 0.98 & 0.9467 & 0.9733 & 0.9733 & 0.9533 & 0.96 & 0.9667 & 0.9667 & $0.9733_{\pm 0.0}$\\
    SonyAIBORobotSurface1 & $0.8397_{\pm 0.002}$ & 0.772 & 0.6722 & 0.8253 & \textbf{0.9368} & 0.8236 & 0.8636 & 0.8502 & 0.8153 & 0.8319 & 0.8336 & 0.9085 & 0.8952 & 0.8652 & 0.8552 & $0.8907_{\pm 0.002}$\\
    SonyAIBORobotSurface2 & $0.8835_{\pm 0.021}$ & 0.809 & 0.8279 & 0.8909 & 0.8909 & 0.8846 & 0.9381 & 0.9454 & 0.8835 & 0.9224 & 0.937 & 0.9486 & \textbf{0.9507} & 0.9486 & 0.9496 & $0.9307_{\pm 0.019}$\\
    StarLightCurves & $0.9702_{\pm 0.001}$ & 0.9732 & 0.9718 & 0.9641 & 0.9739 & 0.9745 & 0.974 & 0.9716 & 0.9745 & 0.9766 & 0.9733 & 0.9763 & 0.9734 & 0.9751 & 0.9726 & \textbf{0.978}$_{\pm 0.001}$\\
    Strawberry & $0.9333_{\pm 0.003}$ & 0.9811 & \textbf{0.9838} & 0.9622 & 0.9649 & 0.973 & 0.9568 & 0.9541 & 0.9595 & 0.9514 & 0.9622 & 0.9703 & 0.9622 & 0.9676 & 0.9541 & $0.9676_{\pm 0.005}$\\
    SwedishLeaf & $0.9115_{\pm 0.002}$ & 0.9504 & 0.9456 & 0.9312 & 0.9536 & 0.9472 & 0.9584 & 0.9456 & 0.9392 & 0.9616 & \textbf{0.9728} & 0.9648 & 0.9664 & 0.968 & 0.9584 & $0.9685_{\pm 0.002}$\\
    Symbols & $0.9618_{\pm 0.005}$ & 0.8824 & 0.8945 & 0.9558 & 0.9668 & 0.9839 & 0.9859 & 0.9859 & 0.9688 & 0.9869 & 0.9769 & 0.9879 & 0.9839 & 0.9879 & \textbf{0.9889} & $0.9792_{\pm 0.003}$\\
    SyntheticControl & $0.9922_{\pm 0.002}$ & 0.99 & 0.9833 & 0.9767 & 0.9933 & 0.9967 & \textbf{1.0} & \textbf{1.0} & 0.9733 & 0.99 & 0.99 & 0.99 & 0.9867 & \textbf{1.0} & \textbf{1.0} & $0.9989_{\pm 0.002}$\\
    ToeSegmentation1 & $0.8553_{\pm 0.016}$ & 0.5746 & 0.6667 & 0.9474 & 0.9693 & 0.9167 & 0.9386 & 0.9035 & 0.8421 & 0.9605 & 0.9649 & 0.943 & 0.9737 & 0.9561 & 0.9693 & \textbf{0.9795}$_{\pm 0.013}$\\
    ToeSegmentation2 & $0.7846_{\pm 0.013}$ & 0.6538 & 0.8154 & 0.9077 & 0.9154 & 0.8615 & 0.9 & 0.8769 & 0.8692 & 0.8769 & \textbf{0.9308} & 0.9231 & 0.9231 & 0.9154 & 0.9231 & $0.9179_{\pm 0.012}$\\
    Trace & \textbf{1.0}$_{\pm 0.0}$ & 0.91 & 0.98 & 0.99 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}$_{\pm 0.0}$\\
    TwoLeadECG & $0.8584_{\pm 0.015}$ & 0.9508 & 0.9254 & 0.9921 & 0.9851 & 0.993 & \textbf{1.0} & 0.993 & 0.9622 & 0.9939 & 0.9991 & 0.9939 & 0.9982 & \textbf{1.0} & 0.9956 & $0.9988_{\pm 0.001}$\\
    TwoPatterns & $0.9935_{\pm 0.001}$ & 0.995 & 0.9032 & 0.9918 & 0.996 & 0.9948 & 0.998 & 0.999 & 0.9415 & 0.9948 & 0.997 & 0.9988 & 0.9992 & 0.9995 & 0.9992 & \textbf{1.0}$_{\pm 0.0}$\\
    UMD & $0.9398_{\pm 0.033}$ & \textbf{1.0} & 0.9375 & 0.9931 & 0.9583 & 0.9931 & 0.9931 & 0.9931 & 0.9722 & 0.9931 & 0.9931 & 0.9861 & 0.9931 & 0.9931 & 0.9931 & $0.9931_{\pm 0.0}$\\
    UWaveGestureLibraryAll & $0.9454_{\pm 0.001}$ & 0.9665 & 0.9629 & 0.9595 & 0.9428 & 0.9587 & 0.9436 & 0.9375 & 0.9319 & 0.9492 & 0.9506 & 0.9453 & 0.9573 & 0.9559 & 0.952 & \textbf{0.9689}$_{\pm 0.003}$\\
    UWaveGestureLibraryX & $0.8062_{\pm 0.003}$ & 0.8079 & 0.7984 & 0.7931 & 0.7998 & 0.821 & 0.8314 & 0.8417 & 0.8004 & 0.8386 & 0.8445 & 0.8509 & 0.8495 & 0.8431 & 0.8498 & \textbf{0.8664}$_{\pm 0.004}$\\
    UWaveGestureLibraryY & $0.7248_{\pm 0.002}$ & 0.715 & 0.7164 & 0.7144 & 0.7074 & 0.7387 & 0.7741 & 0.7744 & 0.6957 & 0.7557 & 0.7658 & 0.7761 & 0.7834 & 0.7783 & 0.787 & \textbf{0.8135}$_{\pm 0.005}$\\
    UWaveGestureLibraryZ & $0.7519_{\pm 0.002}$ & 0.7513 & 0.7379 & 0.7401 & 0.7328 & 0.7552 & 0.7739 & 0.7744 & 0.7554 & 0.78 & 0.7747 & 0.7951 & 0.7956 & 0.7887 & 0.7859 & \textbf{0.8113}$_{\pm 0.004}$\\
    Wafer & $0.9988_{\pm 0.0}$ & 0.9953 & 0.9959 & 0.994 & 0.9992 & 0.9977 & 0.9992 & \textbf{0.9998} & 0.9961 & 0.9974 & 0.9969 & 0.999 & 0.9995 & 0.9995 & 0.9995 & $0.997_{\pm 0.001}$\\
    Wine & $0.6358_{\pm 0.039}$ & 0.7778 & 0.7222 & 0.8333 & 0.8704 & 0.8519 & 0.6667 & 0.8333 & 0.7778 & 0.8519 & 0.8333 & 0.8704 & 0.8148 & 0.8333 & \textbf{0.8889} & $0.7284_{\pm 0.057}$\\
    WordSynonyms & $0.639_{\pm 0.008}$ & 0.6442 & 0.5972 & 0.685 & 0.5987 & 0.6677 & 0.6771 & 0.6771 & 0.6003 & 0.6897 & 0.7116 & 0.7022 & 0.7194 & 0.732 & 0.721 & \textbf{0.7435}$_{\pm 0.013}$\\
    Worms & $0.7229_{\pm 0.015}$ & 0.5714 & 0.5584 & 0.6883 & 0.7922 & 0.7792 & 0.7792 & \textbf{0.8571} & 0.7662 & 0.7922 & 0.7792 & 0.7922 & 0.7922 & 0.8052 & 0.8312 & $0.7922_{\pm 0.039}$\\
    WormsTwoClass & $0.8182_{\pm 0.013}$ & 0.6104 & 0.5714 & 0.7532 & 0.8052 & 0.8052 & 0.8182 & 0.8442 & 0.6753 & 0.7922 & 0.7922 & 0.8312 & 0.8312 & 0.8182 & \textbf{0.8831} & $0.8182_{\pm 0.013}$\\
    Yoga & $0.8289_{\pm 0.005}$ & 0.8603 & 0.8653 & 0.8507 & 0.7947 & 0.819 & 0.8403 & 0.8793 & 0.7493 & 0.8497 & 0.8557 & 0.875 & 0.887 & 0.8713 & \textbf{0.901} & $0.893_{\pm 0.012}$\\
    \midrule
    \textit{\textbf{Average}} & 0.7969 & 0.7806 & 0.7707 & 0.7984 & 0.8202 & 0.8244 & 0.8288 & 0.8347 & 0.7905 & 0.8283 & 0.836 & 0.8369 & 0.8397 & 0.8466 & 0.8494 & 0.85 \\
    \textit{\textbf{Best Counts}} & 11 & 20 & 18 & 8 & 8 & 10 & 15 & 17 & 8 & 8 & 12 & 11 & 14 & 23 & 23 & 36 \\
    \bottomrule
    \end{tabular}
    }`{=latex}
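The \textit{Average} and \textit{Best Counts} rows above summarize per-dataset accuracies: the former is the unweighted mean over all datasets, and the latter counts, for each method, the datasets on which it attains (or ties) the top score. As an illustrative aside (not the paper's actual evaluation code), the aggregation can be sketched as follows; the method names and accuracies below are toy values:

```python
# Illustrative sketch of the "Average" and "Best Counts" summary rows.
# Not the authors' code; dataset/method names here are toy examples.

def summarize(results):
    """results: dict mapping dataset name -> dict of method -> accuracy."""
    methods = next(iter(results.values())).keys()
    n = len(results)
    # Unweighted mean accuracy per method across all datasets.
    averages = {m: sum(r[m] for r in results.values()) / n for m in methods}
    # A method earns a "best count" on every dataset where it ties the top score.
    best_counts = {m: 0 for m in methods}
    for r in results.values():
        top = max(r.values())
        for m, acc in r.items():
            if acc == top:
                best_counts[m] += 1
    return averages, best_counts

# Toy example with two datasets and two methods.
toy = {
    "DatasetA": {"MethodX": 0.84, "MethodY": 0.78},
    "DatasetB": {"MethodX": 1.00, "MethodY": 1.00},
}
avgs, bests = summarize(toy)
```

Note that ties count toward every tied method, which is why the per-row best counts can sum to more than the number of datasets.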

```{=latex}
\addtolength{\tabcolsep}{-0.3em}
```
`\scalebox{0.36}{
    \begin{tabular}{l|lllllllllllllllllllllllllll}
    \toprule
    & TS2Vec & T-Loss & TNC & TS-TCC & CNN & Encoder & FCN & MLP & ResNet & TWIESN & DTW & Official & Improved & Catch22+ & TabPFN & TabICL & TiRex & Chronos2 & TiViT-H & TiConvNext & NuTime & Mantis+ & MantisV2 & SE-Mantis+ & SE-MantisV2 & MantisV2 \& & MantisV2 \& \\
    & &  &  & &  &  & &  & &  & & MOMENT & MOMENT &  &  &  & &  & &  &  &  &  &  &  & TiViT-H & TiConvNext\\
    \midrule
    Adiac & 0.762 & 0.675 & 0.767 & 0.767 & 0.393 & 0.318 & 0.841 & 0.391 & 0.833 & 0.428 & 0.604 & 0.688 & 0.7775 & 0.734 & 0.8031 & 0.8031 & 0.7826 & 0.8286 & 0.7187 & 0.7059 & 0.8005 & 0.8414 & 0.8363 & 0.8312 & \textbf{0.8465} & 0.7519 & 0.7749\\
    AllGestureWiimoteX & \textbf{0.777} & 0.763 & 0.697 & 0.697 & 0.411 & 0.475 & 0.713 & 0.477 & 0.741 & 0.522 & 0.716 & 0.607 & 0.6614 & 0.5957 & 0.6229 & 0.5043 & 0.7014 & 0.6843 & 0.67 & 0.6943 & 0.6071 & 0.7171 & 0.73 & 0.69 & 0.6871 & 0.7229 & 0.7357\\
    AllGestureWiimoteY & 0.793 & 0.726 & 0.741 & 0.741 & 0.479 & 0.509 & 0.784 & 0.571 & \textbf{0.794} & 0.6 & 0.729 & 0.666 & 0.7029 & 0.6319 & 0.6329 & 0.5114 & 0.7314 & 0.7043 & 0.7371 & 0.7229 & 0.6529 & 0.7414 & 0.7471 & 0.73 & 0.7271 & 0.7557 & 0.7486\\
    AllGestureWiimoteZ & \textbf{0.746} & 0.723 & 0.689 & 0.689 & 0.375 & 0.396 & 0.692 & 0.439 & 0.726 & 0.516 & 0.643 & 0.537 & 0.6057 & 0.5462 & 0.5329 & 0.4529 & 0.6343 & 0.66 & 0.6771 & 0.6586 & 0.5414 & 0.6657 & 0.6814 & 0.6329 & 0.6357 & 0.6886 & 0.7043\\
    ArrowHead & 0.857 & 0.766 & 0.737 & 0.737 & 0.717 & 0.63 & 0.843 & 0.784 & 0.838 & 0.689 & 0.703 & 0.743 & 0.7771 & 0.741 & 0.7543 & 0.7429 & 0.7829 & 0.8514 & 0.8229 & 0.8229 & 0.7257 & 0.8457 & 0.8514 & 0.84 & 0.8514 & \textbf{0.88} & 0.8457\\
    BME & 0.993 & 0.993 & 0.933 & 0.933 & 0.947 & 0.827 & 0.836 & 0.905 & 0.999 & 0.819 & 0.9 & 0.96 & 0.9933 & \textbf{1.0} & \textbf{1.0} & 0.98 & 0.9933 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.9667 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    Beef & 0.767 & 0.667 & 0.6 & 0.6 & 0.767 & 0.707 & 0.68 & 0.713 & 0.753 & 0.527 & 0.633 & 0.833 & 0.8 & 0.6889 & 0.8 & 0.7667 & \textbf{0.9333} & 0.7 & 0.7667 & 0.8333 & 0.7333 & 0.6667 & 0.7333 & 0.7333 & 0.8 & 0.7667 & 0.8333\\
    BeetleFly & 0.9 & 0.8 & 0.8 & 0.8 & 0.9 & 0.62 & 0.91 & 0.88 & 0.85 & 0.79 & 0.7 & 0.9 & \textbf{0.95} & 0.7833 & 0.9 & 0.8 & 0.9 & 0.8 & 0.85 & \textbf{0.95} & 0.7 & 0.8 & 0.85 & \textbf{0.95} & \textbf{0.95} & \textbf{0.95} & \textbf{0.95}\\
    BirdChicken & 0.8 & 0.85 & 0.65 & 0.65 & 0.71 & 0.51 & 0.94 & 0.74 & 0.88 & 0.62 & 0.75 & 0.85 & \textbf{1.0} & 0.85 & 0.85 & 0.75 & 0.9 & 0.9 & 0.9 & 0.95 & \textbf{1.0} & 0.9 & 0.9 & 0.9 & 0.9 & 0.9 & 0.9\\
    CBF & \textbf{1.0} & 0.983 & 0.998 & 0.998 & 0.959 & 0.977 & 0.994 & 0.869 & 0.996 & 0.896 & 0.997 & 0.96 & 0.9889 & 0.9763 & 0.9133 & 0.9244 & 0.9967 & \textbf{1.0} & 0.9989 & 0.9989 & 0.99 & 0.9956 & 0.9967 & 0.9989 & 0.9944 & 0.9989 & \textbf{1.0}\\
    Chinatown & 0.965 & 0.951 & 0.983 & 0.983 & 0.977 & 0.966 & 0.98 & 0.872 & 0.978 & 0.825 & 0.957 & 0.965 & 0.9825 & 0.9796 & \textbf{0.9854} & 0.9796 & 0.9825 & \textbf{0.9854} & 0.9679 & 0.9388 & 0.9621 & 0.9738 & 0.9621 & 0.9621 & 0.9534 & 0.9592 & 0.9504\\
    ChlorineConcentration & 0.832 & 0.749 & 0.753 & 0.753 & 0.608 & 0.583 & 0.817 & 0.8 & 0.853 & 0.554 & 0.648 & 0.765 & 0.7789 & 0.6682 & 0.95 & \textbf{0.9773} & 0.8018 & 0.7672 & 0.7622 & 0.7771 & 0.6086 & 0.6857 & 0.7115 & 0.7982 & 0.7867 & 0.7914 & 0.8026\\
    Coffee & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.886 & \textbf{1.0} & 0.993 & \textbf{1.0} & 0.979 & \textbf{1.0} & 0.893 & 0.9643 & 0.9881 & 0.9643 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    CricketX & 0.782 & 0.713 & 0.731 & 0.731 & 0.535 & 0.644 & 0.794 & 0.591 & 0.799 & 0.627 & 0.754 & 0.749 & 0.7436 & 0.6974 & 0.6667 & 0.6641 & 0.7 & 0.7564 & 0.759 & 0.7487 & 0.7051 & 0.7564 & 0.7821 & 0.7667 & \textbf{0.8} & 0.7769 & 0.7718\\
    CricketY & 0.749 & 0.728 & 0.718 & 0.718 & 0.582 & 0.639 & 0.793 & 0.598 & 0.81 & 0.652 & 0.744 & 0.746 & 0.7077 & 0.6923 & 0.7 & 0.6333 & 0.7538 & 0.7359 & 0.7872 & 0.7462 & 0.6897 & 0.8077 & 0.8154 & 0.8103 & 0.8179 & \textbf{0.8231} & 0.8026\\
    CricketZ & 0.792 & 0.708 & 0.713 & 0.713 & 0.501 & 0.651 & 0.81 & 0.629 & 0.809 & 0.643 & 0.754 & 0.731 & 0.7436 & 0.7427 & 0.6718 & 0.6692 & 0.7333 & 0.7231 & 0.7974 & 0.759 & 0.659 & 0.7923 & 0.8128 & 0.7923 & \textbf{0.8179} & 0.8103 & 0.7846\\
    Crop & 0.756 & 0.722 & 0.742 & 0.742 & 0.67 & 0.76 & 0.738 & 0.618 & 0.743 & 0.489 & 0.665 & 0.734 & 0.7234 & 0.7523 & 0.7989 & \textbf{0.812} & 0.7029 & 0.7163 & 0.6867 & 0.6737 & 0.6895 & 0.7364 & 0.7348 & 0.7362 & 0.732 & 0.7231 & 0.7171\\
    DiatomSizeReduction & \textbf{0.984} & \textbf{0.984} & 0.977 & 0.977 & 0.954 & 0.88 & 0.346 & 0.909 & 0.301 & 0.914 & 0.967 & 0.879 & 0.9412 & 0.9401 & 0.9608 & 0.951 & 0.9281 & 0.9477 & 0.9673 & 0.9673 & 0.915 & 0.9216 & 0.9281 & 0.9216 & 0.9281 & 0.9575 & 0.9706\\
    DistalPhalanxOutlineAgeGroup & 0.727 & 0.727 & 0.755 & 0.755 & 0.758 & 0.761 & 0.718 & 0.647 & 0.718 & 0.705 & 0.77 & 0.669 & 0.6691 & 0.717 & 0.7626 & 0.7626 & 0.7482 & 0.741 & 0.7122 & 0.7266 & 0.705 & 0.7266 & \textbf{0.777} & 0.705 & 0.7122 & 0.705 & 0.7338\\
    DistalPhalanxOutlineCorrect & 0.761 & 0.775 & 0.754 & 0.754 & 0.772 & 0.724 & 0.76 & 0.727 & 0.77 & 0.711 & 0.717 & 0.717 & 0.7536 & 0.7911 & 0.7826 & 0.7754 & 0.75 & 0.7645 & 0.7536 & 0.7681 & 0.7428 & 0.779 & 0.7681 & \textbf{0.7935} & 0.7536 & 0.75 & 0.7681\\
    DistalPhalanxTW & 0.698 & 0.676 & 0.676 & 0.676 & 0.671 & 0.694 & 0.695 & 0.61 & 0.663 & 0.591 & 0.59 & 0.612 & 0.6259 & 0.6475 & 0.6978 & 0.6835 & 0.6547 & 0.6547 & 0.6691 & 0.6547 & 0.6835 & 0.6547 & 0.6835 & 0.6763 & \textbf{0.705} & 0.6691 & 0.6763\\
    DodgerLoopDay & 0.562 & \texttt{NaN} & \texttt{NaN} & \texttt{NaN} & 0.312 & 0.487 & 0.143 & 0.16 & 0.15 & 0.593 & 0.5 & 0.438 & 0.4375 & 0.6417 & 0.6125 & \textbf{0.725} & 0.55 & 0.525 & 0.5 & 0.5625 & 0.55 & 0.5875 & 0.5 & 0.5375 & 0.525 & 0.5125 & 0.55\\
    DodgerLoopGame & 0.841 & \texttt{NaN} & \texttt{NaN} & \texttt{NaN} & 0.816 & 0.81 & 0.768 & 0.865 & 0.71 & 0.716 & 0.877 & 0.623 & 0.8406 & 0.8333 & 0.7899 & 0.7971 & 0.7899 & \textbf{0.8913} & 0.8406 & 0.8406 & 0.8188 & 0.8406 & 0.8623 & 0.8478 & 0.8551 & 0.8188 & 0.8841\\
    DodgerLoopWeekend & 0.964 & \texttt{NaN} & \texttt{NaN} & \texttt{NaN} & 0.974 & 0.983 & 0.904 & 0.978 & 0.952 & 0.954 & 0.949 & 0.826 & \textbf{0.9855} & \textbf{0.9855} & \textbf{0.9855} & 0.9783 & 0.9565 & 0.9638 & 0.9493 & 0.913 & 0.9638 & \textbf{0.9855} & 0.971 & \textbf{0.9855} & 0.971 & \textbf{0.9855} & 0.9565\\
    ECG200 & 0.92 & \textbf{0.94} & 0.88 & 0.88 & 0.816 & 0.884 & 0.888 & 0.914 & 0.874 & 0.874 & 0.77 & 0.76 & 0.9 & 0.85 & 0.89 & 0.88 & 0.83 & 0.84 & 0.86 & 0.86 & 0.85 & 0.89 & 0.88 & 0.87 & 0.87 & 0.85 & 0.87\\
    ECG5000 & 0.935 & 0.933 & 0.941 & 0.941 & 0.928 & 0.941 & 0.94 & 0.93 & 0.935 & 0.922 & 0.924 & 0.942 & 0.9333 & 0.9398 & 0.942 & \textbf{0.9447} & 0.9391 & 0.9342 & 0.9378 & 0.9311 & 0.9193 & 0.9322 & 0.9353 & 0.9338 & 0.9329 & 0.9371 & 0.9289\\
    ECGFiveDays & \textbf{1.0} & \textbf{1.0} & 0.878 & 0.878 & 0.874 & 0.842 & 0.985 & 0.973 & 0.966 & 0.723 & 0.768 & 0.804 & 0.9698 & 0.7975 & 0.9245 & 0.9826 & 0.9756 & 0.9907 & 0.9791 & 0.9895 & 0.899 & 0.9779 & 0.993 & 0.9791 & 0.9977 & 0.9895 & 0.9895\\
    Earthquakes & 0.748 & 0.748 & 0.748 & 0.748 & 0.709 & 0.74 & 0.725 & 0.727 & 0.712 & 0.748 & 0.719 & 0.748 & 0.705 & \textbf{0.7506} & 0.7482 & 0.7482 & 0.6906 & 0.7482 & 0.705 & 0.6978 & 0.7338 & 0.6835 & 0.7194 & 0.7266 & 0.7122 & 0.6978 & 0.7122\\
    ElectricDevices & 0.721 & 0.707 & 0.686 & 0.686 & 0.686 & 0.702 & 0.706 & 0.593 & 0.728 & 0.605 & 0.602 & 0.646 & 0.7404 & 0.7396 & 0.7025 & 0.6614 & 0.6709 & 0.7168 & 0.7635 & \textbf{0.7764} & 0.6786 & 0.7301 & 0.7175 & 0.7125 & 0.7257 & 0.7636 & 0.7679\\
    FaceAll & 0.771 & 0.786 & 0.813 & 0.813 & 0.774 & 0.794 & \textbf{0.938} & 0.794 & 0.867 & 0.673 & 0.808 & 0.791 & 0.7787 & 0.7682 & 0.8077 & 0.771 & 0.7876 & 0.7533 & 0.7521 & 0.7367 & 0.7207 & 0.745 & 0.7574 & 0.7444 & 0.7568 & 0.7562 & 0.7444\\
    FaceFour & 0.932 & 0.92 & 0.773 & 0.773 & 0.905 & 0.852 & 0.93 & 0.836 & 0.955 & 0.857 & 0.83 & 0.852 & 0.875 & 0.8977 & 0.9091 & 0.8864 & 0.9545 & 0.9091 & 0.875 & 0.9205 & 0.9659 & 0.9205 & \textbf{0.9886} & 0.8977 & 0.9432 & 0.9659 & 0.9659\\
    FacesUCR & 0.924 & 0.884 & 0.863 & 0.863 & 0.873 & 0.867 & 0.943 & 0.831 & \textbf{0.954} & 0.641 & 0.905 & 0.811 & 0.8912 & 0.8558 & 0.8766 & 0.8771 & 0.8776 & 0.8576 & 0.8751 & 0.8561 & 0.8195 & 0.881 & 0.9117 & 0.8776 & 0.9083 & 0.9088 & 0.898\\
    FiftyWords & 0.771 & 0.732 & 0.653 & 0.653 & 0.624 & 0.658 & 0.646 & 0.708 & 0.74 & 0.518 & 0.69 & 0.802 & 0.7758 & 0.726 & 0.7385 & 0.7165 & 0.7231 & 0.7956 & 0.7868 & 0.7714 & 0.7165 & 0.7868 & 0.8198 & 0.7824 & 0.8088 & \textbf{0.8264} & 0.8088\\
    Fish & 0.926 & 0.891 & 0.817 & 0.817 & 0.855 & 0.734 & 0.961 & 0.848 & 0.981 & 0.878 & 0.823 & 0.8 & 0.96 & 0.7619 & 0.88 & 0.8857 & 0.9029 & 0.9429 & 0.9486 & 0.9714 & 0.9143 & 0.96 & 0.96 & 0.9486 & 0.9714 & 0.9657 & \textbf{0.9829}\\
    FordA & 0.936 & 0.928 & 0.93 & 0.93 & 0.896 & 0.928 & 0.914 & 0.816 & 0.937 & 0.555 & 0.555 & 0.936 & 0.9061 & 0.9101 & 0.897 & 0.8758 & \textbf{0.947} & 0.928 & 0.9136 & 0.9227 & 0.9061 & 0.9265 & 0.9432 & 0.9432 & 0.9462 & 0.9242 & 0.9318\\
    FordB & 0.794 & 0.793 & 0.815 & 0.815 & 0.749 & 0.777 & 0.772 & 0.707 & 0.813 & 0.512 & 0.62 & 0.798 & 0.7556 & 0.7292 & 0.7556 & 0.7136 & 0.816 & 0.8074 & 0.7963 & 0.779 & 0.7654 & 0.7889 & 0.8099 & 0.816 & \textbf{0.8259} & 0.8136 & 0.8111\\
    FreezerRegularTrain & 0.986 & 0.956 & 0.989 & 0.989 & 0.987 & 0.76 & 0.997 & 0.906 & 0.998 & 0.946 & 0.899 & 0.982 & 0.9881 & \textbf{0.9996} & 0.9986 & 0.9877 & 0.9912 & 0.9951 & 0.9965 & 0.9975 & 0.994 & 0.9905 & 0.9961 & 0.9968 & 0.9954 & 0.9972 & 0.9982\\
    FreezerSmallTrain & 0.87 & 0.933 & 0.979 & 0.979 & 0.739 & 0.676 & 0.683 & 0.686 & 0.832 & 0.917 & 0.753 & 0.902 & 0.8186 & 0.9251 & 0.8933 & 0.8098 & 0.8912 & 0.9688 & 0.9839 & 0.9902 & 0.9863 & 0.9158 & 0.9849 & 0.974 & \textbf{0.9905} & 0.9891 & 0.9895\\
    Fungi & 0.957 & \textbf{1.0} & 0.753 & 0.753 & 0.961 & 0.934 & 0.018 & 0.863 & 0.177 & 0.439 & 0.839 & 0.898 & \textbf{1.0} & 0.9229 & 0.8656 & 0.7849 & 0.9516 & 0.9462 & 0.9785 & 0.9946 & 0.7473 & 0.9247 & 0.9731 & 0.9839 & 0.9946 & 0.9839 & \textbf{1.0}\\
    GestureMidAirD1 & 0.608 & 0.608 & 0.369 & 0.369 & 0.534 & 0.528 & 0.695 & 0.575 & 0.698 & 0.549 & 0.569 & 0.646 & 0.6846 & 0.6795 & 0.6231 & 0.6769 & 0.6923 & 0.7308 & 0.7538 & 0.7615 & 0.6846 & 0.7385 & \textbf{0.7769} & 0.7538 & 0.7692 & \textbf{0.7769} & 0.7538\\
    GestureMidAirD2 & 0.469 & 0.546 & 0.254 & 0.254 & 0.518 & 0.48 & 0.631 & 0.545 & 0.668 & 0.575 & 0.608 & 0.608 & 0.5769 & 0.6205 & 0.5615 & 0.5846 & 0.6154 & \textbf{0.7} & 0.6615 & 0.6769 & 0.5615 & 0.5615 & 0.6462 & 0.6 & 0.6231 & \textbf{0.7} & 0.6923\\
    GestureMidAirD3 & 0.292 & 0.285 & 0.177 & 0.177 & 0.317 & 0.368 & 0.326 & 0.382 & 0.34 & 0.275 & 0.323 & 0.369 & 0.3692 & 0.3923 & 0.3692 & 0.3615 & 0.4077 & 0.3538 & 0.4692 & 0.4923 & 0.4462 & 0.4462 & 0.4462 & 0.4385 & 0.4308 & 0.4385 & \textbf{0.5}\\
    GesturePebbleZ1 & 0.93 & 0.919 & 0.395 & 0.395 & 0.844 & 0.821 & 0.88 & 0.792 & 0.901 & 0.84 & 0.791 & 0.849 & 0.8953 & 0.8779 & 0.8488 & 0.8779 & 0.8837 & 0.9012 & 0.907 & 0.8547 & 0.8953 & 0.9244 & 0.9302 & 0.9186 & 0.9302 & \textbf{0.936} & 0.9302\\
    GesturePebbleZ2 & 0.873 & 0.899 & 0.43 & 0.43 & 0.778 & 0.796 & 0.781 & 0.701 & 0.777 & 0.843 & 0.671 & 0.816 & 0.8924 & 0.7384 & 0.7848 & 0.7532 & 0.8418 & 0.8671 & 0.8544 & 0.8165 & 0.7722 & 0.8797 & 0.9051 & 0.8987 & 0.8924 & \textbf{0.9114} & 0.8797\\
    GunPoint & 0.98 & 0.98 & 0.993 & 0.993 & 0.948 & 0.784 & \textbf{1.0} & 0.928 & 0.991 & 0.989 & 0.907 & 0.927 & \textbf{1.0} & 0.9467 & 0.9667 & 0.9533 & 0.98 & 0.9933 & 0.9933 & 0.9933 & 0.9733 & 0.9867 & 0.98 & 0.9867 & 0.98 & \textbf{1.0} & 0.9933\\
    GunPointAgeSpan & 0.987 & 0.994 & 0.994 & 0.994 & 0.912 & 0.89 & 0.996 & 0.934 & \textbf{0.997} & 0.965 & 0.918 & 0.962 & 0.9778 & 0.9884 & 0.9905 & 0.9842 & 0.9905 & 0.9905 & 0.9937 & 0.9873 & 0.9684 & 0.9873 & 0.9937 & 0.9905 & 0.9937 & 0.9968 & 0.9905\\
    GunPointMaleVersusFemale & \textbf{1.0} & 0.997 & 0.997 & 0.997 & 0.977 & 0.978 & 0.997 & 0.98 & 0.992 & 0.988 & 0.997 & 0.991 & 0.9968 & 0.9937 & 0.9968 & \textbf{1.0} & 0.9937 & 0.9937 & \textbf{1.0} & 0.9968 & 0.9873 & 0.9968 & \textbf{1.0} & 0.9968 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    GunPointOldVersusYoung & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.922 & 0.923 & 0.989 & 0.941 & 0.989 & 0.975 & 0.838 & 0.981 & 0.927 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.9651 & 0.9778 & 0.9873 & 0.9937 & \textbf{1.0} & 0.9968 & 0.9968 & 0.9968 & 0.9968 & 0.9968 & 0.9968\\
    Ham & 0.714 & 0.724 & 0.743 & 0.743 & 0.72 & 0.682 & 0.707 & 0.699 & 0.758 & \textbf{0.768} & 0.467 & 0.581 & 0.6571 & 0.6063 & 0.7429 & 0.7238 & 0.7048 & 0.6952 & 0.7429 & 0.7238 & 0.7333 & 0.581 & 0.7048 & 0.6571 & 0.6667 & 0.7429 & 0.7333\\
    Herring & 0.641 & 0.594 & 0.594 & 0.594 & 0.531 & 0.512 & 0.644 & 0.491 & 0.6 & 0.625 & 0.531 & 0.594 & 0.6406 & 0.5208 & 0.5938 & 0.6406 & 0.6094 & 0.5156 & 0.5312 & 0.6094 & 0.5312 & 0.625 & 0.6875 & \textbf{0.7344} & 0.6562 & 0.6562 & 0.6406\\
    InsectWingbeatSound & 0.63 & 0.597 & 0.415 & 0.415 & 0.585 & 0.63 & 0.392 & 0.604 & 0.499 & 0.435 & 0.355 & 0.607 & 0.6212 & 0.629 & \textbf{0.6672} & 0.6556 & 0.6253 & 0.6313 & 0.551 & 0.5652 & 0.5101 & 0.5838 & 0.596 & 0.5596 & 0.5793 & 0.5899 & 0.6081\\
    ItalyPowerDemand & 0.925 & 0.954 & 0.955 & 0.955 & 0.954 & 0.964 & 0.963 & 0.953 & 0.962 & 0.871 & 0.95 & 0.911 & 0.9543 & 0.9537 & \textbf{0.9699} & 0.9631 & 0.9602 & 0.964 & 0.9407 & 0.9378 & 0.9116 & 0.9281 & 0.9456 & 0.9436 & 0.9475 & 0.9427 & 0.9446\\
    Lightning7 & 0.863 & 0.795 & 0.685 & 0.685 & 0.647 & 0.696 & 0.825 & 0.616 & 0.827 & 0.608 & 0.726 & 0.726 & 0.7397 & 0.7671 & 0.6986 & 0.7397 & 0.8082 & 0.7945 & 0.8493 & 0.7671 & 0.7808 & 0.8356 & 0.863 & 0.863 & 0.8219 & \textbf{0.8767} & 0.7945\\
    Meat & 0.95 & 0.95 & 0.883 & 0.883 & 0.913 & 0.787 & 0.803 & 0.893 & 0.99 & 0.97 & 0.933 & 0.917 & \textbf{1.0} & 0.9222 & 0.9833 & 0.9333 & 0.9 & 0.9333 & 0.8167 & 0.85 & 0.9333 & 0.9833 & 0.9667 & 0.9 & 0.8833 & 0.8833 & 0.8833\\
    MedicalImages & 0.789 & 0.75 & 0.747 & 0.747 & 0.671 & 0.664 & 0.778 & 0.719 & 0.77 & 0.649 & 0.737 & 0.762 & 0.7461 & 0.7737 & 0.7947 & \textbf{0.8079} & 0.725 & 0.7474 & 0.7316 & 0.7513 & 0.7092 & 0.7618 & 0.7711 & 0.7579 & 0.7592 & 0.7566 & 0.7711\\
    MelbournePedestrian & 0.959 & 0.944 & 0.949 & 0.949 & 0.813 & 0.884 & 0.912 & 0.863 & 0.909 & 0.73 & 0.791 & 0.876 & 0.8954 & 0.9597 & \textbf{0.9803} & \textbf{0.9803} & 0.8938 & 0.9049 & 0.8626 & 0.87 & 0.9332 & 0.9582 & 0.9582 & 0.9574 & 0.9537 & 0.9377 & 0.9426\\
    MiddlePhalanxOutlineAgeGroup & 0.636 & \textbf{0.656} & 0.63 & 0.63 & 0.534 & 0.577 & 0.535 & 0.522 & 0.545 & 0.578 & 0.5 & 0.461 & 0.5065 & 0.5952 & 0.6234 & 0.6299 & 0.487 & 0.5519 & 0.5584 & 0.526 & 0.5519 & 0.5195 & 0.5455 & 0.5 & 0.4675 & 0.539 & 0.539\\
    MiddlePhalanxOutlineCorrect & 0.838 & 0.825 & 0.818 & 0.818 & 0.744 & 0.752 & 0.795 & 0.755 & 0.826 & 0.743 & 0.698 & 0.467 & 0.8076 & 0.811 & \textbf{0.8522} & 0.8351 & 0.8076 & 0.8351 & 0.7938 & 0.8179 & 0.7491 & 0.8351 & 0.8419 & 0.8076 & 0.811 & 0.8007 & 0.8144\\
    MiddlePhalanxTW & 0.584 & 0.591 & 0.61 & 0.61 & 0.551 & 0.597 & 0.501 & 0.536 & 0.495 & 0.569 & 0.506 & 0.532 & 0.5455 & 0.5844 & 0.6169 & \textbf{0.6234} & 0.5 & 0.5 & 0.5195 & 0.4805 & 0.4416 & 0.487 & 0.513 & 0.4935 & 0.513 & 0.5195 & 0.5\\
    MoteStrain & 0.861 & 0.851 & 0.843 & 0.843 & 0.885 & 0.872 & 0.936 & 0.855 & 0.924 & 0.809 & 0.835 & 0.774 & 0.905 & 0.8818 & 0.889 & 0.8794 & 0.9241 & 0.9257 & 0.9161 & 0.9065 & \textbf{0.9553} & 0.8866 & 0.9393 & 0.9289 & 0.9545 & 0.9481 & 0.9497\\
    OSULeaf & 0.851 & 0.76 & 0.723 & 0.723 & 0.482 & 0.554 & 0.979 & 0.56 & 0.98 & 0.628 & 0.591 & 0.785 & 0.7934 & 0.6832 & 0.5661 & 0.595 & 0.938 & 0.938 & 0.9835 & \textbf{0.9917} & 0.8595 & 0.9669 & 0.9628 & 0.9835 & 0.9835 & 0.9876 & 0.9835\\
    PhalangesOutlinesCorrect & 0.809 & 0.784 & 0.804 & 0.804 & 0.799 & 0.745 & 0.818 & 0.756 & 0.845 & 0.656 & 0.728 & 0.652 & 0.8135 & 0.8252 & 0.8403 & \textbf{0.8613} & 0.7925 & 0.8228 & 0.7995 & 0.7832 & 0.7471 & 0.7925 & 0.7949 & 0.7949 & 0.8054 & 0.7995 & 0.7972\\
    PickupGestureWiimoteZ & 0.82 & 0.74 & 0.6 & 0.6 & 0.608 & 0.496 & 0.744 & 0.604 & 0.704 & 0.616 & 0.66 & 0.62 & 0.7 & 0.6933 & 0.76 & 0.74 & 0.84 & 0.66 & \textbf{0.92} & 0.88 & 0.64 & 0.82 & 0.76 & 0.76 & 0.84 & \textbf{0.92} & 0.86\\
    Plane & \textbf{1.0} & 0.99 & \textbf{1.0} & \textbf{1.0} & 0.962 & 0.964 & \textbf{1.0} & 0.977 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.99 & \textbf{1.0} & \textbf{1.0} & 0.9905 & 0.9905 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    PowerCons & 0.961 & 0.9 & 0.961 & 0.961 & 0.96 & 0.971 & 0.863 & 0.977 & 0.879 & 0.852 & 0.878 & 0.894 & 0.9556 & 0.9944 & \textbf{1.0} & \textbf{1.0} & 0.9389 & 0.9333 & 0.9056 & 0.9222 & 0.9722 & 0.9722 & 0.9722 & 0.9778 & 0.9556 & 0.9667 & 0.9556\\
    ProximalPhalanxOutlineAgeGroup & 0.834 & 0.844 & 0.839 & 0.839 & 0.812 & \textbf{0.872} & 0.825 & 0.849 & 0.847 & 0.839 & 0.805 & 0.863 & 0.8293 & 0.8472 & 0.8585 & 0.839 & 0.8488 & 0.8341 & 0.8293 & 0.8293 & 0.8537 & 0.8585 & 0.839 & 0.839 & 0.8341 & 0.8293 & 0.8293\\
    ProximalPhalanxOutlineCorrect & 0.887 & 0.859 & 0.873 & 0.873 & 0.807 & 0.768 & 0.907 & 0.73 & 0.92 & 0.817 & 0.784 & 0.856 & 0.8866 & 0.8648 & 0.9038 & \textbf{0.9244} & 0.8866 & 0.8832 & 0.8729 & 0.8625 & 0.8385 & 0.8935 & 0.8763 & 0.8832 & 0.9107 & 0.8729 & 0.8557\\
    ProximalPhalanxTW & 0.824 & 0.771 & 0.8 & 0.8 & 0.777 & 0.791 & 0.761 & 0.767 & 0.773 & 0.784 & 0.761 & 0.712 & 0.7951 & 0.8033 & 0.8098 & \textbf{0.8293} & 0.761 & 0.7561 & 0.761 & 0.7463 & 0.7854 & 0.7415 & 0.761 & 0.7512 & 0.7561 & 0.7561 & 0.761\\
    ShakeGestureWiimoteZ & 0.94 & 0.92 & 0.86 & 0.86 & 0.58 & 0.756 & 0.884 & 0.548 & 0.88 & 0.864 & 0.86 & \textbf{0.96} & 0.84 & 0.8467 & 0.82 & 0.74 & 0.88 & 0.92 & 0.84 & 0.86 & 0.9 & 0.94 & 0.92 & 0.88 & 0.9 & 0.88 & 0.88\\
    ShapeletSim & \textbf{1.0} & 0.672 & 0.683 & 0.683 & 0.497 & 0.51 & 0.706 & 0.513 & 0.782 & 0.546 & 0.65 & 0.961 & 0.9667 & 0.9704 & 0.4778 & 0.5056 & 0.9667 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & 0.9278 & 0.9722 & 0.9778 & 0.9944 & 0.9944 & \textbf{1.0} & \textbf{1.0}\\
    ShapesAll & 0.902 & 0.848 & 0.773 & 0.773 & 0.617 & 0.679 & 0.894 & 0.776 & \textbf{0.926} & 0.643 & 0.768 & 0.815 & 0.8483 & 0.8206 & 0.8017 & 0.7917 & 0.8633 & 0.8967 & 0.9117 & 0.8967 & 0.8717 & 0.8933 & 0.9017 & 0.8933 & 0.9167 & 0.925 & 0.9167\\
    SmoothSubspace & 0.98 & 0.96 & 0.953 & 0.953 & 0.976 & 0.964 & 0.975 & 0.98 & 0.98 & 0.849 & 0.827 & 0.82 & 0.98 & 0.9867 & \textbf{1.0} & \textbf{1.0} & 0.9667 & 0.9667 & 0.9667 & 0.98 & 0.9467 & 0.9733 & 0.9733 & 0.9533 & 0.96 & 0.9667 & 0.9667\\
    SonyAIBORobotSurface1 & 0.903 & 0.902 & 0.899 & 0.899 & 0.69 & 0.729 & 0.958 & 0.692 & \textbf{0.961} & 0.725 & 0.725 & 0.729 & 0.8253 & 0.8397 & 0.772 & 0.6722 & 0.9368 & 0.8236 & 0.8636 & 0.8502 & 0.8153 & 0.8319 & 0.8336 & 0.9085 & 0.8952 & 0.8652 & 0.8552\\
    SonyAIBORobotSurface2 & 0.871 & 0.889 & 0.907 & 0.907 & 0.831 & 0.844 & \textbf{0.98} & 0.831 & 0.975 & 0.635 & 0.831 & 0.829 & 0.8909 & 0.8835 & 0.809 & 0.8279 & 0.8909 & 0.8846 & 0.9381 & 0.9454 & 0.8835 & 0.9224 & 0.937 & 0.9486 & 0.9507 & 0.9486 & 0.9496\\
    Strawberry & 0.962 & 0.954 & 0.965 & 0.965 & 0.952 & 0.959 & 0.975 & 0.959 & 0.98 & 0.911 & 0.941 & 0.951 & 0.9622 & 0.9333 & 0.9811 & \textbf{0.9838} & 0.9649 & 0.973 & 0.9568 & 0.9541 & 0.9595 & 0.9514 & 0.9622 & 0.9703 & 0.9622 & 0.9676 & 0.9541\\
    SwedishLeaf & 0.941 & 0.914 & 0.923 & 0.923 & 0.884 & 0.902 & 0.967 & 0.845 & 0.963 & 0.837 & 0.792 & 0.923 & 0.9312 & 0.9115 & 0.9504 & 0.9456 & 0.9536 & 0.9472 & 0.9584 & 0.9456 & 0.9392 & 0.9616 & \textbf{0.9728} & 0.9648 & 0.9664 & 0.968 & 0.9584\\
    Symbols & 0.976 & 0.963 & 0.916 & 0.916 & 0.808 & 0.754 & 0.955 & 0.836 & 0.893 & 0.798 & 0.95 & 0.936 & 0.9558 & 0.9618 & 0.8824 & 0.8945 & 0.9668 & 0.9839 & 0.9859 & 0.9859 & 0.9688 & 0.9869 & 0.9769 & 0.9879 & 0.9839 & 0.9879 & \textbf{0.9889}\\
    SyntheticControl & 0.997 & 0.987 & 0.99 & 0.99 & 0.987 & 0.973 & 0.989 & 0.973 & 0.997 & 0.879 & 0.993 & 0.99 & 0.9767 & 0.9922 & 0.99 & 0.9833 & 0.9933 & 0.9967 & \textbf{1.0} & \textbf{1.0} & 0.9733 & 0.99 & 0.99 & 0.99 & 0.9867 & \textbf{1.0} & \textbf{1.0}\\
    ToeSegmentation1 & 0.917 & 0.939 & 0.93 & 0.93 & 0.598 & 0.706 & 0.961 & 0.589 & 0.957 & 0.882 & 0.772 & 0.925 & 0.9474 & 0.8553 & 0.5746 & 0.6667 & 0.9693 & 0.9167 & 0.9386 & 0.9035 & 0.8421 & 0.9605 & 0.9649 & 0.943 & \textbf{0.9737} & 0.9561 & 0.9693\\
    ToeSegmentation2 & 0.892 & 0.9 & 0.877 & 0.877 & 0.752 & 0.702 & 0.889 & 0.745 & 0.894 & 0.794 & 0.838 & 0.915 & 0.9077 & 0.7846 & 0.6538 & 0.8154 & 0.9154 & 0.8615 & 0.9 & 0.8769 & 0.8692 & 0.8769 & \textbf{0.9308} & 0.9231 & 0.9231 & 0.9154 & 0.9231\\
    Trace & \textbf{1.0} & 0.99 & \textbf{1.0} & \textbf{1.0} & 0.952 & 0.74 & \textbf{1.0} & 0.806 & \textbf{1.0} & 0.934 & \textbf{1.0} & \textbf{1.0} & 0.99 & \textbf{1.0} & 0.91 & 0.98 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0}\\
    TwoLeadECG & 0.986 & 0.999 & 0.976 & 0.976 & 0.877 & 0.784 & 0.999 & 0.753 & \textbf{1.0} & 0.949 & 0.905 & 0.847 & 0.9921 & 0.8584 & 0.9508 & 0.9254 & 0.9851 & 0.993 & \textbf{1.0} & 0.993 & 0.9622 & 0.9939 & 0.9991 & 0.9939 & 0.9982 & \textbf{1.0} & 0.9956\\
    TwoPatterns & \textbf{1.0} & 0.999 & 0.999 & 0.999 & 0.991 & \textbf{1.0} & 0.87 & 0.948 & \textbf{1.0} & 0.875 & \textbf{1.0} & 0.994 & 0.9918 & 0.9935 & 0.995 & 0.9032 & 0.996 & 0.9948 & 0.998 & 0.999 & 0.9415 & 0.9948 & 0.997 & 0.9988 & 0.9992 & 0.9995 & 0.9992\\
    UMD & \textbf{1.0} & 0.993 & 0.986 & 0.986 & 0.96 & 0.771 & 0.988 & 0.949 & 0.99 & 0.835 & 0.993 & 0.993 & 0.9931 & 0.9398 & \textbf{1.0} & 0.9375 & 0.9583 & 0.9931 & 0.9931 & 0.9931 & 0.9722 & 0.9931 & 0.9931 & 0.9861 & 0.9931 & 0.9931 & 0.9931\\
    UWaveGestureLibraryX & 0.795 & 0.785 & 0.733 & 0.733 & 0.721 & 0.771 & 0.754 & 0.768 & 0.781 & 0.608 & 0.728 & 0.821 & 0.7931 & 0.8062 & 0.8079 & 0.7984 & 0.7998 & 0.821 & 0.8314 & 0.8417 & 0.8004 & 0.8386 & 0.8445 & \textbf{0.8509} & 0.8495 & 0.8431 & 0.8498\\
    UWaveGestureLibraryY & 0.719 & 0.71 & 0.641 & 0.641 & 0.626 & 0.676 & 0.642 & 0.699 & 0.666 & 0.497 & 0.634 & 0.738 & 0.7144 & 0.7248 & 0.715 & 0.7164 & 0.7074 & 0.7387 & 0.7741 & 0.7744 & 0.6957 & 0.7557 & 0.7658 & 0.7761 & 0.7834 & 0.7783 & \textbf{0.787}\\
    UWaveGestureLibraryZ & 0.77 & 0.757 & 0.69 & 0.69 & 0.63 & 0.684 & 0.727 & 0.697 & 0.749 & 0.573 & 0.658 & 0.765 & 0.7401 & 0.7519 & 0.7513 & 0.7379 & 0.7328 & 0.7552 & 0.7739 & 0.7744 & 0.7554 & 0.78 & 0.7747 & 0.7951 & \textbf{0.7956} & 0.7887 & 0.7859\\
    Wafer & 0.998 & 0.992 & 0.994 & 0.994 & 0.961 & 0.998 & 0.997 & 0.996 & 0.998 & 0.916 & 0.98 & 0.997 & 0.994 & 0.9988 & 0.9953 & 0.9959 & 0.9992 & 0.9977 & 0.9992 & \textbf{0.9998} & 0.9961 & 0.9974 & 0.9969 & 0.999 & 0.9995 & 0.9995 & 0.9995\\
    Wine & 0.87 & 0.815 & 0.778 & 0.778 & 0.519 & 0.556 & 0.611 & 0.541 & 0.722 & 0.744 & 0.574 & 0.537 & 0.8333 & 0.6358 & 0.7778 & 0.7222 & 0.8704 & 0.8519 & 0.6667 & 0.8333 & 0.7778 & 0.8519 & 0.8333 & 0.8704 & 0.8148 & 0.8333 & \textbf{0.8889}\\
    WordSynonyms & 0.676 & 0.691 & 0.531 & 0.531 & 0.568 & 0.557 & 0.561 & 0.599 & 0.617 & 0.506 & 0.649 & 0.688 & 0.685 & 0.639 & 0.6442 & 0.5972 & 0.5987 & 0.6677 & 0.6771 & 0.6771 & 0.6003 & 0.6897 & 0.7116 & 0.7022 & 0.7194 & \textbf{0.732} & 0.721\\
    Yoga & 0.887 & 0.837 & 0.791 & 0.791 & 0.786 & 0.753 & 0.837 & 0.856 & 0.867 & 0.626 & 0.837 & 0.834 & 0.8507 & 0.8289 & 0.8603 & 0.8653 & 0.7947 & 0.819 & 0.8403 & 0.8793 & 0.7493 & 0.8497 & 0.8557 & 0.875 & 0.887 & 0.8713 & \textbf{0.901}\\
    \midrule
    \textit{\textbf{Average}} & 0.8516 & 0.8336 & 0.7933 & 0.7933 & 0.752 & 0.7433 & 0.8093 & 0.7506 & 0.8255 & 0.7268 & 0.7641 & 0.7941 & 0.8338 & 0.8145 & 0.8173 & 0.8083 & 0.8394 & 0.8415 & 0.8436 & 0.8458 & 0.8121 & 0.8465 & 0.8584 & 0.8539 & 0.8582 & 0.8612 & \textbf{0.862}\\
    \textit{\textbf{Best Counts}} & 13 & 7 & 4 & 4 & 1 & 2 & 6 & 0 & 10 & 2 & 4 & 2 & 6 & 6 & 10 & 15 & 5 & 8 & 9 & 10 & 6 & 4 & 10 & 8 & 14 & \textbf{20} & 16\\
    \bottomrule
    \end{tabular}
}`{=latex}
