This was the final project for a class at MIT (6.804 - Computational Cognitive Science), done in collaboration with Adib.
One of the most important human skills in generalizing language is compositionality: once a human learns a new word, they can combine it with previously known words to generate new sentences that are easily understood. For instance, a person who learns the meaning of "selfie" is very likely to understand "mirror selfie", "animal selfie", etc. A modern deep learning model, by contrast, would require vast amounts of training data to do the same. Recently, Lake et al. (2019) developed a task that quantitatively demonstrated this crucial difference between humans and machines. In this paper we develop a Bayesian inference model that approximates the human data.
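As a minimal illustration of the Bayesian framing (a sketch only, not the model developed in this paper), a learner can be viewed as maintaining a posterior over candidate meanings of a novel word, updated by Bayes' rule; the hypothesis names and probabilities below are invented for the example.

```python
# Toy Bayesian update over hypothetical meanings of a new word.
# Hypotheses and numbers are illustrative, not from the paper.

def posterior(priors, likelihoods):
    """Return P(h | data) ∝ P(data | h) * P(h), normalized over hypotheses."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two toy hypotheses about what the new word could mean.
priors = {"primitive_action": 0.7, "function_word": 0.3}
# How well each hypothesis explains the observed usage of the word.
likelihoods = {"primitive_action": 0.2, "function_word": 0.9}

post = posterior(priors, likelihoods)
print(post)
```

Even though the prior favors "primitive_action", the observed data shifts the posterior toward "function_word"; the full model in this paper applies the same principle to compositional instruction learning.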