English grammar for parsing in NLTK

Python, Nlp, Grammar, Nltk

Python Problem Overview


Is there a ready-to-use English grammar that I can just load and use in NLTK? I've searched around for examples of parsing with NLTK, but it seems that I have to manually specify a grammar before parsing a sentence.

Thanks a lot!

Python Solutions


Solution 1 - Python

You can take a look at pyStatParser, a simple Python statistical parser that returns NLTK parse trees. It comes with public treebanks and generates the grammar model only the first time you instantiate a Parser object (in about 8 seconds). It uses a CKY algorithm and parses average-length sentences (like the one below) in under a second.

>>> from stat_parser import Parser
>>> parser = Parser()
>>> print(parser.parse("How can the net amount of entropy of the universe be massively decreased?"))
(SBARQ
  (WHADVP (WRB how))
  (SQ
    (MD can)
    (NP
      (NP (DT the) (JJ net) (NN amount))
      (PP
        (IN of)
        (NP
          (NP (NNS entropy))
          (PP (IN of) (NP (DT the) (NN universe))))))
    (VP (VB be) (ADJP (RB massively) (VBN decreased))))
  (. ?))

Solution 2 - Python

My library, spaCy, provides a high performance dependency parser.

Installation:

pip install spacy
python -m spacy.en.download all

Usage:

from spacy.en import English
nlp = English()
doc = nlp(u'A whole document.\nNo preprocessing required.   Robust to arbitrary formatting.')
for sent in doc.sents:
    for token in sent:
        if token.is_alpha:
            print(token.orth_, token.tag_, token.head.lemma_)

Choi et al. (2015) found spaCy to be the fastest dependency parser available. It processes over 13,000 sentences a second, on a single thread. On the standard WSJ evaluation it scores 92.7%, over 1% more accurate than any of CoreNLP's models.
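
Note that spacy.en and the spacy.en.download command belong to the old 0.x/1.x releases. With a recent spaCy (2.x or 3.x) the rough equivalent looks like the sketch below; the model name en_core_web_sm is just the small English pipeline, so swap in whichever model you actually install:

# Minimal sketch for recent spaCy releases (2.x/3.x), replacing the old
# spacy.en API shown above. Assumes the small English pipeline is installed:
#   pip install spacy
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("A whole document.\nNo preprocessing required. Robust to arbitrary formatting.")
for sent in doc.sents:
    for token in sent:
        if token.is_alpha:
            print(token.text, token.tag_, token.dep_, token.head.lemma_)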

Solution 3 - Python

There are a few grammars in the nltk_data distribution. In your Python interpreter, issue nltk.download().
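
For example, here is a minimal sketch of downloading one of the shipped grammars and parsing with it. The package id large_grammars and the atis.cfg path are assumptions based on the nltk_data index, and whether a given sentence parses depends on that grammar's coverage; check the nltk.download() listing for what is actually available.

import nltk

# Fetch the bundled grammar packages (package id is an assumption; the
# interactive nltk.download() GUI lists the exact names).
nltk.download('large_grammars')

# Load one of the shipped CFGs and parse with a chart parser.
grammar = nltk.data.load('grammars/large_grammars/atis.cfg')
parser = nltk.ChartParser(grammar)

# Coverage depends on the grammar's vocabulary; this sentence is illustrative.
for tree in parser.parse('show me the flights to baltimore .'.split()):
    tree.pretty_print()
    break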

Solution 4 - Python

There is a library called Pattern. It is quite fast and easy to use.

>>> from pattern.en import parse
>>>  
>>> s = 'The mobile web is more important than mobile apps.'
>>> s = parse(s, relations=True, lemmata=True)
>>> print(s)

'The/DT/B-NP/O/NP-SBJ-1/the mobile/JJ/I-NP/O/NP-SBJ-1/mobile' ... 
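
Each slash-separated token packs word/POS/chunk/PNP/relation/lemma. If you prefer a more structured view, Pattern also ships pprint and parsetree helpers; a quick sketch, assuming a Pattern 2.x/3.x install:

# pprint lays the tagged tokens out as a readable table; parsetree returns
# chunked Sentence objects you can walk programmatically.
from pattern.en import parse, pprint, parsetree

s = 'The mobile web is more important than mobile apps.'
pprint(parse(s, relations=True, lemmata=True))

for sentence in parsetree(s):
    for chunk in sentence.chunks:
        print(chunk.type, [(w.string, w.type) for w in chunk.words])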

Solution 5 - Python

Use MaltParser. It comes with a pretrained English grammar, as well as pretrained models for some other languages. MaltParser is a dependency parser, not a simple bottom-up or top-down parser.

Just download MaltParser from http://www.maltparser.org/index.html and use it from NLTK like this:

import nltk
parser = nltk.parse.malt.MaltParser()
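
With newer NLTK versions, MaltParser() usually needs to be pointed at the unpacked MaltParser directory and a pretrained model such as engmalt.linear-1.7.mco (downloaded separately from the MaltParser site). The paths below are placeholder assumptions; adjust them to your setup, and note that Java must be on your PATH:

# Hedged sketch for newer NLTK releases; directory and model paths are
# placeholders for wherever you unpacked MaltParser and the pretrained model.
from nltk.parse.malt import MaltParser

mp = MaltParser(parser_dirname='/opt/maltparser-1.9.2',
                model_filename='/opt/engmalt.linear-1.7.mco')
graph = mp.parse_one('I saw a bird from my window .'.split())
print(graph.tree())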

Solution 6 - Python

I've tried NLTK, pyStatParser, and Pattern. IMHO, Pattern is the best English parser introduced above: it supports pip install and there is good documentation on the website (http://www.clips.ua.ac.be/pages/pattern-en). I couldn't find reasonable documentation for NLTK (and it gave me inaccurate results with its defaults, and I couldn't find how to tune it). pyStatParser is much slower than described above in my environment (about one minute for initialization, and a couple of seconds to parse long sentences; maybe I didn't use it correctly).

Solution 7 - Python

Did you try POS tagging in NLTK?

import nltk
from nltk import word_tokenize
text = word_tokenize("And now for something completely different")
nltk.pos_tag(text)

The output is something like this:

[('And', 'CC'), ('now', 'RB'), ('for', 'IN'), ('something', 'NN'), ('completely', 'RB'), ('different', 'JJ')]

I got this example from NLTK_chapter03.

Solution 8 - Python

I found out that NLTK works well with the parser grammars developed by Stanford.

Syntax Parsing with Stanford CoreNLP and NLTK

It is very easy to start using Stanford CoreNLP with NLTK. All you need is a little preparation; after that you can parse sentences with the following code:

from nltk.parse.corenlp import CoreNLPParser
parser = CoreNLPParser()
parse = next(parser.raw_parse("I put the book in the box on the table."))
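
The object you get back is a regular nltk.Tree, so the usual tree methods apply; for example:

parse.pretty_print()  # ASCII rendering of the constituency tree
print(parse.label())  # root label of the tree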

Preparation:

  1. Download the Stanford CoreNLP Java package (it includes the models jar)
  2. Run CoreNLPServer

You can use the following code to run CoreNLPServer:

import os
from nltk.parse.corenlp import CoreNLPServer
# The server needs to know the location of the following files:
#   - stanford-corenlp-X.X.X.jar
#   - stanford-corenlp-X.X.X-models.jar
STANFORD = os.path.join("models", "stanford-corenlp-full-2018-02-27")
# Create the server
server = CoreNLPServer(
   os.path.join(STANFORD, "stanford-corenlp-3.9.1.jar"),
   os.path.join(STANFORD, "stanford-corenlp-3.9.1-models.jar"),    
)
# Start the server in the background
server.start()

> Do not forget to stop the server by executing server.stop()

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type        | Original Author  | Original Content on Stackoverflow
Question            | roboren          | View Question on Stackoverflow
Solution 1 - Python | emilmont         | View Answer on Stackoverflow
Solution 2 - Python | syllogism_       | View Answer on Stackoverflow
Solution 3 - Python | Fred Foo         | View Answer on Stackoverflow
Solution 4 - Python | user3798928      | View Answer on Stackoverflow
Solution 5 - Python | blackmamba       | View Answer on Stackoverflow
Solution 6 - Python | Piyo Hoge        | View Answer on Stackoverflow
Solution 7 - Python | maverik_akagami  | View Answer on Stackoverflow
Solution 8 - Python | Stepan Rogonov   | View Answer on Stackoverflow