Tag: Generative Modeling
All the articles with the tag "Generative Modeling".
-
Contaminated Multivariate Time-Series Anomaly Detection with Spatio-Temporal Graph Conditional Diffusion Models
TSAD-C introduces an unsupervised framework for multivariate time-series anomaly detection that trains directly on contaminated data, combining a Decontaminator built on S4-based conditional diffusion, long-range dependency modeling via a time-then-graph approach, and an anomaly scoring module; it achieves state-of-the-art performance across diverse datasets.
-
Deformable Beta Splatting
Deformable Beta Splatting (DBS) enhances real-time radiance field rendering by introducing deformable Beta Kernels for superior geometric fidelity, Spherical Beta for efficient color encoding, and kernel-agnostic MCMC optimization, achieving state-of-the-art visual quality with 45% fewer parameters and 1.5x faster rendering than 3DGS-MCMC.
-
Why Knowledge Distillation Works in Generative Models: A Minimal Working Explanation
Using Gaussian-mixture simulations and large-scale language model experiments, this paper explains why knowledge distillation works in generative models: the teacher model's entropy controls the student model's precision-recall tradeoff, thereby improving sample quality.
-
Compression via Pre-trained Transformers: A Study on Byte-Level Multimodal Data
Through large-scale experiments, this paper shows that small pre-trained Transformer models, once parameter size is accounted for, achieve compression ratios competitive with standard compression algorithms on out-of-distribution text, image, and audio data, performing especially well within their training modality but transferring weakly across modalities.
-
From Attention to Atoms: Spectral Dictionary Learning for Fast, Interpretable Language Models
This paper proposes the Spectral Dictionary Generative Model (SDGM), which replaces self-attention with a learned global Fourier dictionary and per-token mixing coefficients, enabling efficient language modeling with O(KL) complexity while achieving competitive perplexity and substantial resource savings on benchmark datasets.