```python
import nltk
from sklearn.feature_extraction.text import CountVectorizer

# tokenized text of Austen's Emma from the NLTK Gutenberg corpus
emma = nltk.Text(nltk.corpus.gutenberg.words('austen-emma.txt'))

# build a document-term matrix with English stop words removed
dtm_vectorizer = CountVectorizer(stop_words='english')
dtm = dtm_vectorizer.fit_transform(emma)
```
Calling `dtm.toarray()` raises a MemoryError, i.e. it runs out of memory. I don't yet know how to fix this, so I'm leaving it here as a note and will come back to it later.
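A minimal sketch of what I suspect is going on (my own guess, not a verified fix): `fit_transform` expects an iterable of documents, so passing the word list makes every token its own one-word "document", giving `dtm` one row per word; densifying that with `toarray()` then exhausts memory. Joining the words into a single document first keeps the matrix small, and working with the sparse matrix avoids `toarray()` altogether.

```python
import nltk
from sklearn.feature_extraction.text import CountVectorizer

words = nltk.corpus.gutenberg.words('austen-emma.txt')
doc = ' '.join(words)                  # one document instead of one "document" per token

vectorizer = CountVectorizer(stop_words='english')
dtm = vectorizer.fit_transform([doc])  # fit_transform takes an iterable of documents

print(dtm.shape)                       # (1, vocabulary size) -- small enough to densify
print(dtm.toarray())
```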