
text = c('the nurse was extremely helpful', 'she was truly a gem', 'helping', 'no issue', 'not bad')

I want to extract 1-gram tokens for most words, but 2-gram tokens for words such as extremely, no, and not.

For example, the tokens should look like this: the, nurse, was, extremely helpful, she, was, truly, a, gem, helping, no issue, not bad

These are the terms that should appear in the term-document matrix.

Thanks for your help!!


1 Answer


Here is a possible solution (assuming you don't want to handle only c("extremely", "no", "not"), but also want to include words similar to them). The qdapDictionaries package provides several dictionaries, such as amplification.words (which contains "extremely"), negation.words (which contains "no" and "not"), and so on.
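
If you are curious which words will end up being protected from splitting, you can inspect those dictionaries directly. A quick sketch (the exact contents depend on your qdapDictionaries version):

library(qdapDictionaries)

# peek at the dictionaries used below
head(amplification.words)    # amplifiers, contains "extremely"
head(negation.words)         # contains "no" and "not"
head(deamplification.words)  # downtoners that soften the following word

# confirm the words from the question are covered
c("extremely", "no", "not") %in% c(negation.words, amplification.words)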

Here is an example of how to split on a space unless the space follows a word in a predefined vector (here we build the vector from amplification.words, negation.words, & deamplification.words in qdapDictionaries). You can change the definition of no_split_words if you want to use a more customized word list (see the sketch after the output below).

Perform the split

library(stringr)
library(qdapDictionaries)

text <-  c('the nurse was extremely helpful', 'she was truly a gem','helping', 'no issue', 'not bad')

# define the list of words after which we don't want to split on a space
no_split_words <- c(amplification.words, negation.words, deamplification.words)
# collapse the words into the form "word1|word2| ... |wordn"
regex_or       <- paste(no_split_words, collapse="|")
# define a regex that splits on a space, given that the preceding word is not in no_split_words
split_regex    <- regex(paste("((?<!",regex_or,"))\\s"))

# perform split
str_split(text, split_regex)

#output
[[1]]
[1] "the"               "nurse"             "was"               "extremely helpful"

[[2]]
[1] "she"     "was"     "truly a" "gem"    

[[3]]
[1] "helping"

[[4]]
[1] "no issue"

[[5]]
[1] "not bad"

Create the dtm with tidytext

(assuming the code block above has already been run)

library(tidytext)
library(dplyr)

doc_df <- data_frame(text) %>% 
  mutate(doc_id = row_number())

# creates a document-term matrix (DocumentTermMatrix from the tm package)
# here the dtm is binary
# value could instead be defined as term frequency, tf-idf, etc. for a non-binary dtm
tm_dtm <- doc_df %>% 
  unnest_tokens(tokens, text, token="regex", pattern=split_regex) %>% 
  mutate(value = 1) %>%  
  cast_dtm(doc_id, tokens, value)

# can coerce to matrix if desired
matrix_dtm <- as.matrix(tm_dtm)
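
To double-check that the multi-word tokens made it into the dtm, you can look at its terms; and if you want raw counts rather than a binary indicator, one option is to count before casting. A sketch, assuming the objects above are still in the workspace:

library(tm)

# the terms should include "extremely helpful", "no issue", "not bad", etc.
Terms(tm_dtm)
inspect(tm_dtm)

# a term-frequency (count) dtm instead of the binary one
count_dtm <- doc_df %>%
  unnest_tokens(tokens, text, token="regex", pattern=split_regex) %>%
  count(doc_id, tokens) %>%
  cast_dtm(doc_id, tokens, n)
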
answered 2017-05-17T12:11:59.663