Zero-Shot Facial Expression Recognition
1 paper with code • 1 benchmark • 1 dataset
Zero-shot facial expression recognition classifies facial expressions without training on labelled examples of the target classes, typically by matching visual features to natural-language descriptions of each expression using a vision-language model.
Most implemented papers
EmoCLIP: A Vision-Language Method for Zero-Shot Video Facial Expression Recognition
The model, trained on sample-level natural-language descriptions, is evaluated via zero-shot classification on four popular dynamic FER datasets.
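The zero-shot classification step in vision-language methods like EmoCLIP typically scores a video embedding against text embeddings of each class description and picks the closest match. A minimal NumPy sketch of that matching step (the function name, embedding dimensions, and values are illustrative, not taken from the paper):

```python
import numpy as np

def zero_shot_classify(video_emb, text_embs):
    """Return the index of the class description whose embedding has the
    highest cosine similarity with the video embedding."""
    v = video_emb / np.linalg.norm(video_emb)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return int(np.argmax(t @ v))

# Toy 4-dim embeddings for three hypothetical expression descriptions
# (made-up values, not real CLIP features).
text_embs = np.array([
    [1.0, 0.0, 0.0, 0.0],  # "a person smiling broadly"
    [0.0, 1.0, 0.0, 0.0],  # "a person frowning with furrowed brows"
    [0.0, 0.0, 1.0, 0.0],  # "a person with a neutral expression"
])
video_emb = np.array([0.9, 0.1, 0.2, 0.0])
print(zero_shot_classify(video_emb, text_embs))  # → 0 (closest to class 0)
```

In practice the embeddings would come from the method's video and text encoders; the matching itself reduces to this cosine-similarity argmax.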