The Deep Learning for NLP course covers advanced topics in deep learning architectures for natural language processing. The focus is on attention-based architectures (such as Transformers), structure processing, and variational-Bayesian approaches, and on why these models are particularly well suited to the properties of human language, such as its categorical, unbounded, and structured representations.
- Professor: James Henderson
- Teacher: Andrei Catalin Coman
- Teacher: Fabio James Fehr
- Teacher: Haruki Shirakami