
International Journal

LFTD: Transformer-Enhanced Diffusion Model for Realistic Financial Time-Series Data Generation
Posted by: Cho** | Registered: 2026-02-09 16:11
Authors: Gyumun Choi, Donghyeon Jo, Wonho Song, Hyungjong Na, Hyungjoon Kim
Journal: AI
Volume: 7, Issue: 2, Article number: 60
Published: 2026-02-05

https://www.mdpi.com/2673-2688/7/2/60


Abstract

Firm-level financial statement data form multivariate annual time series with strong cross-variable dependencies and temporal dynamics, yet publicly available panels are often short and incomplete, limiting the generalization of predictive models. We present Latent Financial Time-Series Diffusion (LFTD), a structure-aware augmentation framework that synthesizes realistic firm-level financial time series in a compact latent space. LFTD first learns information-preserving representations with a dual encoder: an FT-Transformer that captures within-year interactions across financial variables and a Time Series Transformer (TST) that models long-horizon evolution across years. On this latent sequence, we train a Transformer-based denoising diffusion model whose reverse process is FiLM-conditioned on the diffusion step as well as year, firm identity, and firm age, enabling controllable generation aligned with firm- and time-specific context. A TST-based Cross-Decoder then reconstructs continuous and binary financial variables for each year. Empirical evaluation on Korean listed-firm data from 2011 to 2023 shows that augmenting training sets with LFTD-generated samples consistently improves firm-value prediction for market-to-book and Tobin’s Q under both static (same-year) and dynamic (τ → τ + 1) forecasting settings and outperforms conventional generative augmentation baselines and ablated variants. These results suggest that domain-conditioned latent diffusion is a practical route to reliable augmentation for firm-level financial time series.
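The abstract states that the reverse diffusion process is FiLM-conditioned on the diffusion step, year, firm identity, and firm age. A minimal sketch of such a FiLM (feature-wise linear modulation) layer is shown below; the embedding sizes, the number of diffusion steps, and the single-linear-projection design are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FiLMCondition(nn.Module):
    """FiLM conditioning: scale and shift hidden features with a vector
    built from diffusion step, year, firm id, and firm age embeddings.
    All dimensions here are assumptions for illustration."""

    def __init__(self, hidden_dim, n_firms, n_years, max_age,
                 n_steps=1000, emb_dim=64):
        super().__init__()
        self.step_emb = nn.Embedding(n_steps, emb_dim)      # diffusion step
        self.year_emb = nn.Embedding(n_years, emb_dim)      # calendar year index
        self.firm_emb = nn.Embedding(n_firms, emb_dim)      # firm identity
        self.age_emb = nn.Embedding(max_age + 1, emb_dim)   # firm age in years
        # project concatenated condition to per-feature (gamma, beta)
        self.to_film = nn.Linear(4 * emb_dim, 2 * hidden_dim)

    def forward(self, h, step, year, firm, age):
        # h: (batch, seq_len, hidden_dim) latent sequence from the denoiser
        c = torch.cat([self.step_emb(step), self.year_emb(year),
                       self.firm_emb(firm), self.age_emb(age)], dim=-1)
        gamma, beta = self.to_film(c).chunk(2, dim=-1)
        # broadcast the per-sample modulation over the sequence dimension
        return gamma.unsqueeze(1) * h + beta.unsqueeze(1)

# usage: a batch of 4 latent sequences spanning 13 years (2011-2023)
film = FiLMCondition(hidden_dim=128, n_firms=2000, n_years=13, max_age=100)
h = torch.randn(4, 13, 128)
out = film(h,
           step=torch.randint(0, 1000, (4,)),
           year=torch.randint(0, 13, (4,)),
           firm=torch.randint(0, 2000, (4,)),
           age=torch.randint(0, 101, (4,)))
print(out.shape)
```

In a denoising Transformer block, a layer like this would typically be applied after attention or feed-forward sublayers, letting the same network adapt its activations to firm- and time-specific context without separate per-firm models.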