Arik Reuter
I am a PhD student at the University of Cambridge and the Max Planck Institute for Intelligent Systems in Tübingen, working with Bernhard Schölkopf, Adrian Weller and José Miguel Hernández-Lobato.
Please feel free to contact me via email.
You can also find me on Google Scholar and GitHub.
Publications
- Do-PFN: In-Context Learning for Causal Effect Estimation. In Advances in Neural Information Processing Systems (NeurIPS), 2025. Spotlight
- Can Transformers Learn Full Bayesian Inference in Context? In Proceedings of the 42nd International Conference on Machine Learning (ICML), 2025
- Position: The Future of Bayesian Prediction Is Prior-Fitted. In Proceedings of the 42nd International Conference on Machine Learning (ICML), 2025
- Probabilistic Topic Modeling With Transformer Representations. IEEE Transactions on Neural Networks and Learning Systems, 2025
- Beyond Black-Box Predictions: Identifying Marginal Feature Effects in Tabular Transformer Networks. arXiv preprint arXiv:2504.08712, 2025
- STREAM: Simplified Topic Retrieval, Exploration, and Analysis Module. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2024
- Interpretable Additive Tabular Transformer Networks. Transactions on Machine Learning Research, 2024
- Topics in the haystack: Enhancing topic quality through corpus expansion. Computational Linguistics, 2024
- Mambular: A Sequential Model for Tabular Deep Learning. arXiv preprint arXiv:2408.06291, 2024
- GPTopic: Dynamic and Interactive Topic Representations. arXiv preprint arXiv:2403.03628, 2024
- Neural Additive Image Model: Interpretation through Interpolation. arXiv preprint arXiv:2405.02295, 2024
- Pseudo-document simulation for comparing LDA, GSDMM and GPM topic models on short and sparse text using Twitter data. Computational Statistics, 2023