Paper Accepted: CAST-GNN: Continual adaptive learning for custom spatio-temporal knowledge graphs via graph neural networks


The paper "CAST-GNN: Continual adaptive learning for custom spatio-temporal knowledge graphs via graph neural networks" by Özbulak, G., Shrestha, Y. R., & Calbimonte, J. P. has been accepted at the Proceedings of the IEEE International Conference on Data Mining, ICDM 2025!

Real-time video streams present unique challenges for continual learning systems, demanding models that can incrementally update representations, preserve past knowledge, and reason over complex semantic relationships without sacrificing efficiency. In this paper, we introduce CAST-GNN, the first unified Graph Neural Network architecture expressly designed for continual adaptation on streaming Spatio-Temporal Knowledge Graphs derived from open-source video benchmarks. CAST-GNN integrates dynamic temporal embedding layers, adaptive self-attention, episodic graph pattern memory, and a novel hybrid selective replay buffer with Fisher-based regularization and knowledge distillation to mitigate catastrophic forgetting. Through comprehensive experiments on four diverse STKG benchmarks (UCF-101, HMDB-51, Kinetics-400, and Something-Something), our model achieves 96–97% accuracy and 0.13–0.31% forgetting, consistently outperforming re-implemented continual-learning baselines under identical conditions. Ablation studies confirm the critical synergy between temporal embeddings and adaptive attention. We further demonstrate XAI-driven interpretability by aligning global distributional shifts with local node-level attributions. CAST-GNN not only advances robust semantic reasoning and knowledge retention but also provides a scalable, explainable framework applicable to a wide array of real-world streaming scenarios.
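For readers unfamiliar with Fisher-based regularization, a rough intuition (this is a generic illustration in the style of elastic weight consolidation, not the paper's actual implementation, and all names below are hypothetical): a diagonal Fisher-information estimate marks which parameters mattered for past tasks, and a quadratic penalty makes those parameters expensive to move when adapting to new data.

```python
import numpy as np

def fisher_penalty(params, anchor_params, fisher, lam=1.0):
    """Quadratic penalty anchoring parameters important to past tasks.

    `fisher` holds a diagonal Fisher-information estimate (e.g. mean
    squared gradients observed on the previous task); large entries make
    the corresponding parameters costly to change, which is one common
    way to mitigate catastrophic forgetting.
    """
    diff = params - anchor_params
    return 0.5 * lam * float(np.sum(fisher * diff ** 2))

# Toy example: moving a high-Fisher parameter incurs a larger penalty
theta_old = np.array([1.0, 2.0, 3.0])  # parameters after the previous task
theta_new = np.array([1.5, 2.0, 2.0])  # parameters after adapting
F = np.array([4.0, 1.0, 0.1])          # diagonal Fisher estimate
penalty = fisher_penalty(theta_new, theta_old, F)
# 0.5 * (4*0.5**2 + 1*0 + 0.1*1.0**2) = 0.55
```

In practice such a penalty term is added to the task loss during training, so gradient updates trade off new-task performance against drift in previously important weights.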