In this paper, we have studied the LSTM (Long Short-Term Memory) network and presented a Siamese adaptation of it for labelled data composed of pairs of variable-length sequences. Our model first takes in the right answer and then assesses the semantic similarity between the right answer and the given answer. To accomplish this, we supply the LSTMs with word-embedding vectors supplemented with synonymic information. The network encodes the underlying meaning of each sentence as a fixed-size vector, accounting for both wording and syntax. By restricting subsequent operations to rely on the simple Manhattan metric, the model's learned sentence representations are compelled into a highly structured space whose geometry reflects complex semantic relationships. Our results show that LSTMs can be powerful language models and are especially well suited to tasks that require intricate understanding.
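As a minimal sketch of the similarity step described above, the following assumes each LSTM branch has already produced a fixed-size sentence encoding (here stand-in NumPy vectors, not actual model outputs) and scores the pair with the negative-exponentiated Manhattan (L1) distance, so identical encodings score 1 and dissimilar ones approach 0:

```python
import numpy as np

def manhattan_similarity(h1, h2):
    # exp(-||h1 - h2||_1) maps the L1 distance into (0, 1];
    # identical encodings give exactly 1.0
    return float(np.exp(-np.sum(np.abs(h1 - h2))))

# Hypothetical fixed-size encodings, e.g. the final hidden state of
# each Siamese LSTM branch (values chosen for illustration only).
right_answer = np.array([0.2, -0.5, 0.1, 0.8])
given_answer = np.array([0.25, -0.45, 0.05, 0.75])

score = manhattan_similarity(right_answer, given_answer)  # ≈ 0.819
```

Because both branches share weights, the metric is symmetric in its arguments; the choice of L1 over L2 follows the restriction to the simple Manhattan metric noted above.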