Fast and Easy Infinite Neural Networks in Python (Jupyter Notebook; updated Mar 1, 2024)
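The tagline above matches the neural-tangents library for infinite-width networks; assuming that is the project listed, the sketch below illustrates the kind of closed-form neural tangent kernel (NTK) computation it advertises. The `neural_tangents.stax` layer constructors and the `'ntk'` kernel flag follow that library's published examples and should be read as assumptions rather than a verified API reference.

```python
# Minimal sketch of an infinite-width NTK computation, assuming the entry above
# refers to the neural-tangents library (built on JAX).
import jax.numpy as jnp
from neural_tangents import stax

# stax.serial returns (init_fn, apply_fn, kernel_fn); kernel_fn evaluates the
# exact infinite-width NNGP/NTK kernels, so the Dense widths only matter for
# the finite-width apply_fn.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

x_train = jnp.ones((20, 8))  # toy inputs: 20 examples, 8 features
x_test = jnp.ones((5, 8))

# NTK between test and train points for the corresponding infinite-width network.
k_test_train = kernel_fn(x_test, x_train, 'ntk')
print(k_test_train.shape)  # expected: (5, 20)
```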
CVPR 2024 - Improved Implicit Neural Representation with Fourier Reparameterized Training
ICML 2025 - Inductive Gradient Adjustment for Spectral Bias in Implicit Neural Representations
Existing literature on training-data analysis.
Official repository for "FOCUS: First Order Concentrated Updating Scheme"
A unified framework for attributing model components, data, and training dynamics to model behavior.
Code for "Effect of equivariance on training dynamics"
Source code for "Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies"
Official repository for the EMNLP 2024 paper "How Hard is this Test Set? NLI Characterization by Exploiting Training Dynamics"
Code for "Towards a Theoretical Understanding of the 'Reversal Curse' via Training Dynamics"
Code for "Abrupt Learning in Transformers: A Case Study on Matrix Completion" (NeurIPS 2024)
Supplementary code for the paper 'Dynamic Rescaling for Training GNNs' to be published at NeurIPS 2024
Code for the paper "What Happens During the Loss Plateau? Understanding Abrupt Learning in Transformers"
This repository documents the process of finding the optimal learning rate for deep neural networks (a generic learning-rate range test is sketched after this list).
Supplementary code for the paper 'Are GATs Out of Balance?' to be published at NeurIPS 2023
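The learning-rate entry above describes a search process without spelling it out; the sketch below shows one common approach, an exponential learning-rate range test, and is not necessarily that repository's method. The model, data, stopping heuristic, and sweep bounds are placeholder assumptions.

```python
# Sketch of a learning-rate range test ("LR finder"), assuming PyTorch;
# everything concrete here (model, data, bounds) is a toy stand-in.
import math
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-7)

# Toy data standing in for a real training loader.
x = torch.randn(512, 10)
y = torch.randn(512, 1)
batches = [(x[i:i + 32], y[i:i + 32]) for i in range(0, 512, 32)]

lr_min, lr_max, steps = 1e-7, 1.0, len(batches)
losses, lrs = [], []

for step, (xb, yb) in enumerate(batches):
    # Exponentially increase the learning rate from lr_min to lr_max.
    lr = lr_min * (lr_max / lr_min) ** (step / max(steps - 1, 1))
    for group in optimizer.param_groups:
        group["lr"] = lr

    optimizer.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    optimizer.step()

    lrs.append(lr)
    losses.append(loss.item())
    if math.isnan(loss.item()) or loss.item() > 4 * min(losses):
        break  # stop once the loss clearly diverges

# Common heuristic: start training somewhat below the lr with the lowest loss.
best = lrs[losses.index(min(losses))]
print(f"suggested starting lr ~= {best / 10:.2e}")
```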