Linear Warmup With Linear Decay is a learning rate schedule that increases the learning rate linearly over the first $n$ updates and then decays it linearly over the remaining updates.
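A minimal sketch of the schedule as a plain function (the function name and parameters are illustrative, not from a specific library; it assumes decay runs from the peak learning rate down to zero by `total_steps`):

```python
def lr_at_step(step, max_lr, warmup_steps, total_steps):
    """Linear warmup for `warmup_steps` updates, then linear decay to 0.

    step:         current update index (0-based)
    max_lr:       peak learning rate reached at the end of warmup
    warmup_steps: number of warmup updates (the $n$ above)
    total_steps:  total number of updates in training
    """
    if step < warmup_steps:
        # linear increase from max_lr / warmup_steps up to max_lr
        return max_lr * (step + 1) / warmup_steps
    # linear decay from max_lr at the end of warmup down to 0 at total_steps
    decay_steps = total_steps - warmup_steps
    return max_lr * max(0.0, (total_steps - step) / decay_steps)
```

In practice this is usually wrapped in a framework scheduler (e.g. a per-step multiplier passed to an optimizer) rather than called directly.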
Task | Papers | Share
---|---|---
Retrieval | 124 | 13.22%
Language Modelling | 96 | 10.23%
Question Answering | 63 | 6.72%
Large Language Model | 39 | 4.16%
Sentence | 29 | 3.09%
Text Classification | 28 | 2.99%
Sentiment Analysis | 26 | 2.77%
Information Retrieval | 24 | 2.56%
Text Generation | 22 | 2.35%