Hello, I reviewed the answers on Quiz 1 and have the following thoughts:
Q13: A major specificity of natural languages is that they are inherently implicit and ambiguous. How should this be taken into account in the NLP perspective? (penalty for wrong ticks)
- by teaching humans to talk and write in a way that reduces implicitness and ambiguity
- by increasing the amount of a priori knowledge that NLP systems are able to exploit
- by interacting with human experts to formulate precise interpretation rules for linguistic entities
- by designing NLP algorithms and data structures able to efficiently cope with very ambiguous representations
The correct answers are 2 & 4, but I think 3 should also be accepted.
This question is not posed specifically in the context of large language models. Bigram and trigram models incorporate some form of linguistic bias, and previous research shows that having models learn grammar does help. I believe this should also count under NLP, since formulating precise interpretation rules with human experts is one way of increasing a system's a priori knowledge.
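To illustrate the bigram point: a minimal sketch (toy corpus and names are my own, not from the quiz) of how an n-gram model encodes a built-in structural bias, namely the Markov assumption that the next word depends only on the previous one:

```python
from collections import Counter

# Toy corpus; in practice this would be a large tokenized text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams and the unigrams that can start a bigram.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def bigram_prob(w1, w2):
    # Maximum-likelihood estimate of P(w2 | w1):
    # the Markov assumption itself is the "linguistic bias".
    return bigrams[(w1, w2)] / unigrams[w1]

print(bigram_prob("the", "cat"))  # 0.25: "the" occurs 4 times, once before "cat"
```

The point is that such a model copes with ambiguity through a hand-designed structural assumption, not purely through learned parameters.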