This page was last updated on 2025-03-03 06:05:46 UTC

Recommendations for the article Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series

1. MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers
   Authors: L. Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, M. Lewis
   Published: 2023-05-12 (ArXiv) | Citations: 74 | Highest h-index: 111

2. PoNet: Pooling Network for Efficient Token Mixing in Long Sequences
   Authors: Chao-Hong Tan, Qian Chen, Wen Wang, Qinglin Zhang, Siqi Zheng, Zhenhua Ling
   Published: 2021-10-06 (ArXiv) | Citations: 10 | Highest h-index: 45

3. HAM-TTS: Hierarchical Acoustic Modeling for Token-Based Zero-Shot Text-to-Speech with Model and Data Scaling
   Authors: Chunhui Wang, Chang Zeng, Bowen Zhang, Ziyang Ma, Yefan Zhu, Zifeng Cai, Jian Zhao, Zhonglin Jiang, Yong Chen
   Published: 2024-03-09 (ArXiv) | Citations: 4 | Highest h-index: 13

4. DiTAR: Diffusion Transformer Autoregressive Modeling for Speech Generation
   Authors: Dongya Jia, Zhuo Chen, Jiawei Chen, Chenpeng Du, Jian Wu, Jian Cong, Xiaobin Zhuang, Chumin Li, Zhengnan Wei, Yuping Wang, Yuxuan Wang
   Published: 2025-02-06 (ArXiv) | Citations: 0 | Highest h-index: 4

5. DiTTo-TTS: Diffusion Transformers for Scalable Text-to-Speech without Domain-Specific Factors
   Authors: Keon Lee, Dong Won Kim, Jaehyeon Kim, Seungjun Chung, Jaewoong Cho
   Published: 2024-06-17 (ArXiv) | Citations: 13 | Highest h-index: 4

6. StyleTTS-ZS: Efficient High-Quality Zero-Shot Text-to-Speech Synthesis with Distilled Time-Varying Style Diffusion
   Authors: Yinghao Aaron Li, Xilin Jiang, Cong Han, N. Mesgarani
   Published: 2024-09-16 (ArXiv) | Citations: 3 | Highest h-index: 41

7. Mega-TTS: Zero-Shot Text-to-Speech at Scale with Intrinsic Inductive Bias
   Authors: Ziyue Jiang, Yi Ren, Zhe Ye, Jinglin Liu, Chen Zhang, Qiang Yang, Shengpeng Ji, Rongjie Huang, Chunfeng Wang, Xiang Yin, Zejun Ma, Zhou Zhao
   Published: 2023-06-06 (ArXiv) | Citations: 64 | Highest h-index: 107

8. E1 TTS: Simple and Fast Non-Autoregressive TTS
   Authors: Zhijun Liu, Shuai Wang, Pengcheng Zhu, Mengxiao Bi, Haizhou Li
   Published: 2024-09-14 (ArXiv) | Citations: 2 | Highest h-index: 9

9. Towards Lightweight and Stable Zero-shot TTS with Self-distilled Representation Disentanglement
   Authors: Qianniu Chen, Xiaoyang Hao, Bowen Li, Yue Liu, Li Lu
   Published: 2025-01-15 (ArXiv) | Citations: 0 | Highest h-index: 1

10. A Unified View of Long-Sequence Models towards Modeling Million-Scale Dependencies
    Authors: Hongyu Hè, Marko Kabic
    Published: 2023-02-13 (ArXiv) | Citations: 2 | Highest h-index: 5