
Commit 015ac30

Update README.md
1 parent 8b9b682 commit 015ac30

File tree: 1 file changed (+15 −0)


README.md

Lines changed: 15 additions & 0 deletions
@@ -11,6 +11,21 @@
- [Citing](#citing)

## What's New

## Nov 12, 2024
* Optimizer factory refactor
  * New factory works by registering optimizers using an OptimInfo dataclass w/ some key traits
  * Add `list_optimizers`, `get_optimizer_class`, `get_optimizer_info` alongside the reworked `create_optimizer_v2` fn to explore optimizers and fetch their info or class (see the sketch after this list)
  * Deprecate `optim.optim_factory`, move fns to `optim/_optim_factory.py` and `optim/_param_groups.py`, and encourage import via `timm.optim`
* Add Adopt (https://github.com/iShohei220/adopt) optimizer
* Add 'Big Vision' variant of Adafactor (https://github.com/google-research/big_vision/blob/main/big_vision/optax.py) optimizer
* Fix original Adafactor to pick better factorization dims for convolutions
* Tweak LAMB optimizer to take advantage of `torch.where` improvements since the original impl, refactor the clipping a bit
* Dynamic img size support in vit, deit, eva improved to support resize from non-square patch grids, thanks https://github.com/wojtke (sketch below)
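
A minimal sketch of exploring the reworked factory. The exact signatures of `list_optimizers`, `get_optimizer_info`, and `get_optimizer_class`, and the `'adopt'` registry name, are assumptions here rather than documented API:

```python
import timm
from timm.optim import (
    create_optimizer_v2,
    list_optimizers,
    get_optimizer_info,
    get_optimizer_class,
)

model = timm.create_model('resnet18', pretrained=False)

# Enumerate registered optimizer names (assumed to return a list of str).
print(list_optimizers())

# Inspect the OptimInfo traits for one optimizer, or grab its class directly
# ('adopt' assumed to be the registered name for the new Adopt optimizer).
print(get_optimizer_info('adopt'))
AdoptCls = get_optimizer_class('adopt')

# create_optimizer_v2 builds an optimizer (with weight-decay aware param groups) by name.
optimizer = create_optimizer_v2(model, opt='adopt', lr=1e-3, weight_decay=0.05)
```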
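
And a sketch of running a dynamic-img-size ViT on a non-square input; the model name and resolution are placeholders, not taken from this changelog:

```python
import torch
import timm

# dynamic_img_size lets the model adapt its position embeddings at runtime,
# so inputs whose patch grid differs from the pretraining grid (including
# non-square grids) can be handled without rebuilding the model.
model = timm.create_model('vit_base_patch16_224', pretrained=False, dynamic_img_size=True)
model.eval()

with torch.no_grad():
    out = model(torch.randn(1, 3, 256, 320))  # 16x20 patch grid, non-square
print(out.shape)
```
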
## Oct 31, 2024
Add a set of new very well trained ResNet & ResNet-V2 18/34 (basic block) weights. See https://huggingface.co/blog/rwightman/resnet-trick-or-treat
## Oct 19, 2024
* Cleanup torch amp usage to avoid CUDA-specific calls, merge support for Ascend (NPU) devices from [MengqingCao](https://github.com/MengqingCao) that should work now in PyTorch 2.5 w/ the new device extension autoloading feature. Tested Intel Arc (XPU) in PyTorch 2.5 too and it (mostly) worked.
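
A sketch of the device-agnostic AMP pattern this enables; illustrative only, not code from the commit:

```python
import torch

# Pick whatever accelerator is present; keying autocast on device.type avoids
# the CUDA-only torch.cuda.amp entry points.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(8, 16, device=device)

with torch.amp.autocast(device_type=device.type, dtype=torch.float16, enabled=device.type != 'cpu'):
    y = model(x)
```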
