October 6, 2009 · notes · (No comments)

Here are the notes for today’s class on using Python and the NLTK to process raw texts, including a discussion of processing delimited text.

ling5200-nltk3-notes.pdf
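
As a quick taste of the delimited-text part (this snippet is not from the posted notes, and the file name is made up for illustration), splitting each line on its delimiter is usually the first step:

    # Illustrative only: assumes a tab-delimited file named 'example.tsv',
    # with one record per line.
    f = open('example.tsv')
    for line in f:
        fields = line.strip().split('\t')   # break the record into its fields
        print fields
    f.close()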

October 5, 2009 · homework · (No comments)

Overall students did very well on this assignment. Grades and comments have been committed to the repository. I also added my hmwk5.py file to the repository under resources/homework.

Class statistics for Homework 5:

  • mean: 53.67
  • standard deviation: 6.76

[Figure: Homework 5 grade distribution]

  1. Create a list called my_ints, with the following values: (10, 15, 24, 67, 1098, 500, 700) (2 points)

    my_ints = [10, 15, 24, 67, 1098, 500, 700]
  2. Print the maximum value in my_ints (3 points)
    print max(my_ints)
  3. Use a for loop and a conditional to print out whether the value is an odd or even number. For example, for 10, your program should print “10 is even”. (5 points)
    for my_int in my_ints:
        if my_int % 2 == 0:
            print my_int, "is even"
        else:
            print my_int, "is odd"
  4. Now create a new list called new_ints and fill it with values from my_ints which are divisible by 3. In addition, double each of the new values. For example, the new list should contain 30 (15*2). Use a for loop and a conditional to accomplish this task. (5 points)
    new_ints = []
    for my_int in my_ints:
        if my_int % 3 == 0:
            new_ints.append(my_int*2)        
    print new_ints
  5. Now do the same thing as in the last question, but use a list expression to accomplish the task. (5 points)
    new_ints = [my_int*2 for my_int in my_ints if my_int % 3 == 0]
    print new_ints
  6. Import the Reuters corpus from the NLTK. How many documents contain stories about coffee? (4 points)
    import nltk
    import nltk.corpus
    from nltk.corpus import reuters
    print len(reuters.fileids('coffee'))
  7. Print the number of words in the Reuters corpus which belong to the barley category. (5 points)
    print len(reuters.words(categories='barley'))
  8. Create a conditional frequency distribution of word lengths from the Reuters corpus for the categories barley, corn, and rye. (8 points)
    reuters_word_lengths = [(category, len(word))
                            for category in ['barley', 'corn', 'rye']
                            for word in reuters.words(categories=category)]
    reuters_word_lengths_cfd = nltk.ConditionalFreqDist(reuters_word_lengths)
  9. Using the cfd you just created, print out a table which lists cumulative counts of word lengths (up to nine letters long) for each category. (5 points)
    reuters_word_lengths_cfd.tabulate(samples=range(1,10),cumulative=True)
  10. Load the devilsDictionary.txt file from the ling5200 svn repository in resources/texts into the NLTK as a plaintext corpus (3 points)
    from nltk.corpus import PlaintextCorpusReader
    corpus_root = 'resources/texts'
    wordlists = PlaintextCorpusReader(corpus_root, 'devilsDictionary.txt')
  11. Store a list of all the words from the Devil’s Dictionary into a variable called devil_words (4 points)
    devil_words = wordlists.words('devilsDictionary.txt')
  12. Now create a list of words which does not include punctuation, and store it in devil_words_nopunc. Import the string module to get a handy list of punctuation marks, stored in string.punctuation. (5 points)
    import string
    devil_words_nopunc = [word for word in devil_words if word not in string.punctuation]
  13. Create a frequency distribution for each of the two lists of words from the Devil’s Dictionary, one which includes punctuation, and one which doesn’t. Find the most frequently occurring word in each list. (6 points)
    devil_fd = nltk.FreqDist(devil_words)
    devil_nopunc_fd = nltk.FreqDist(devil_words_nopunc)
    print devil_fd.max()
    print devil_nopunc_fd.max()
October 4, 2009 · homework · (No comments)

In this homework you will practice writing your own functions to extract information from various corpora. Please put your answers in an executable Python script named hmwk6.py, and commit it to the Subversion repository. Don’t forget to use normal division. It is due Oct. 9th and covers material up to Oct. 1st.
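
Before the questions, here is a hedged sketch of the setup: the first line turns on normal (true) division, and the imports show where the corpora used below live. The variable names are just suggestions, not part of the required answer.

    from __future__ import division     # so 7 / 2 gives 3.5 rather than 3

    import nltk
    from nltk.corpus import gutenberg   # question 3: the Project Gutenberg texts
    from nltk.corpus import stopwords   # questions 1-2: the English stopword list
    from nltk.corpus import cmudict     # question 4: the CMU pronouncing dictionary
    from nltk.corpus import wordnet     # question 5: hypernyms and hyponyms

    english_stopwords = stopwords.words('english')
    pronunciations = cmudict.entries()  # (word, phone list) pairs, e.g. ('fire', ['F', 'AY1', 'ER0'])
    print gutenberg.fileids()           # the texts to loop over in question 3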

  1. Create a function called mean_word_len, which accepts a list of words (e.g. text1 — Moby Dick), and returns the mean number of characters per word. You should remove punctuation and stopwords from the calculation; one possible skeleton is sketched after this list. (10 points)
  2. Create a function called mean_sent_len, which accepts a list of sentences, and returns the mean words per sentence. You should remove punctuation and stopwords from the calculation. Note that the NLTK .sents() method returns a list of lists. That is, each item in the list represents a sentence, which itself is composed of a list of words. (15 points)
  3. Now use your two new functions to print out the mean sentence length and the mean word length for all of the texts from Project Gutenberg included in the NLTK. Print these statistics one file per line, with the fileid first, then the mean word length and the mean sentence length. One example would be:
    melville-moby_dick.txt 5.64357969913 9.79009882174
    (10 points)
  4. Using the CMU pronouncing dictionary, create a list of all words which have 3 letters and 2 syllables. Your final list should include just the spelling of the words. To calculate the number of syllables, count the vowel phones in the pronunciation (every vowel phone includes the digit 1, 2, or 0, marking primary, secondary, or no stress). (15 points)
  5. Imagine you are writing a play, and you are thinking of interesting places to stage a scene. You would like it to be somewhere like a house, but not exactly. Use the WordNet corpus to help you brainstorm possible locations. First, find the hypernyms of the first definition of the word house. Then find all the hyponyms of those hypernyms, and print out the names of the words. Your output should contain one synset per line, with first the synset name, and then all of the lemma_names for that synset, e.g.:
    lodge.n.05 - lodge, indian_lodge
    (10 points)
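
For questions 1 and 2, one possible shape for the first function is sketched below. This is only a sketch, not the required answer; in particular, lowercasing words before the stopword check is a choice you may or may not want to make.

    from __future__ import division
    import string
    from nltk.corpus import stopwords

    def mean_word_len(words):
        # drop punctuation tokens and English stopwords, then average the lengths
        english_stopwords = stopwords.words('english')
        content_words = [w for w in words
                         if w.lower() not in english_stopwords
                         and w not in string.punctuation]
        return sum(len(w) for w in content_words) / len(content_words)

mean_sent_len follows the same pattern, except that it averages the number of (filtered) words per sentence over a list of sentences.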
October 1, 2009 · slides · (No comments)

Here are the slides from today’s class covering semantic relations. We also talked about automatic historical linguistics. You can try out the handy script from the repository in resources/py/auto_histling.py

Martha Palmer also gave a brief introduction to some of the corpora and databases that are available for use. Please look at the list on the linguistics website. If you are interested in using one of these, you will need to ask Martha for an account on the verbs or babel server.

ling5200-nltk2-2-slides.pdf

October 1, 2009 · News · (No comments)

I have made several changes to the syllabus, including:

  • Added a reading for Tue, Oct. 6th
  • Changed homework schedule, so that final project proposals are due Nov. 6th

In addition, I have added a file which contains all of the notes from the class so far. You can get it from the svn repository under slides.

October 1, 2009 · News, notes · (No comments)

Here are the notes for today’s class on semantic relations.

ling5200-nltk2-2-notes.pdf