Harvard NLP studies machine learning methods for processing and generating human language. We are interested in deep learning methods for sequence generation, artificial-intelligence challenges grounded in human language, and statistical modeling of linguistic structure.

Our group's research publications and open-source projects have focused on text summarization, tools for neural machine translation, visualization of recurrent neural networks, algorithms for shrinking neural networks, models for entity tracking in documents, OCR for mathematical expressions, new approaches to grammatical error correction, and methods for extending deep learning to text generation.