```python
import jieba

def sentiment_analysis(text):
    # Load the sentiment lexicon: one "word score" pair per line,
    # e.g. "喜欢 1" or "讨厌 -1"
    sentiment_words = []
    with open('sentiment_words.txt', 'r', encoding='utf-8') as f:
        for line in f:
            word, sentiment = line.strip().split()
            sentiment_words.append((word, int(sentiment)))

    # Tokenize the text
    words = list(jieba.cut(text))

    # Sum the scores of lexicon words that appear in the text
    sentiment_count = 0
    for word, sentiment in sentiment_words:
        if word in words:
            sentiment_count += sentiment

    # Classify by the sign of the total score
    if sentiment_count > 0:
        return '正面情感'  # positive
    elif sentiment_count < 0:
        return '负面情感'  # negative
    else:
        return '中立情感'  # neutral
```
In this example, we tokenize the text with jieba, load a sentiment lexicon, sum the scores of the lexicon words found in the text, and classify the text as positive, negative, or neutral according to the sign of that sum.
Note: this code is only illustrative; real applications usually call for more sophisticated sentiment analysis methods.
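The counting-and-classification step can be sketched without jieba or a lexicon file, using an in-memory toy lexicon and a pre-tokenized word list (both hypothetical, for illustration only):

```python
# Toy in-memory lexicon: word -> score (hypothetical values; the code
# above loads such pairs from sentiment_words.txt instead)
lexicon = {'棒': 1, '喜欢': 1, '差': -1, '讨厌': -1}

def classify(tokens):
    """Sum the scores of lexicon words present in tokens, classify by sign."""
    score = sum(s for w, s in lexicon.items() if w in tokens)
    if score > 0:
        return 'positive'
    elif score < 0:
        return 'negative'
    return 'neutral'

print(classify(['这部', '电影', '很', '棒']))  # → positive
print(classify(['这部', '电影', '很', '差']))  # → negative
```

Because each lexicon word is counted at most once, repeated sentiment words do not strengthen the score; whether that is desirable depends on the application.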
For English sentiment analysis, the following Python libraries are commonly used:
NLTK's built-in VADER analyzer:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download('vader_lexicon')  # required once before first use

sentiment_analyzer = SentimentIntensityAnalyzer()
sentiment = sentiment_analyzer.polarity_scores("This is a positive sentence.")
print(sentiment)
```

TextBlob:

```python
from textblob import TextBlob

text = "This is a positive sentence."
blob = TextBlob(text)
sentiment = blob.sentiment.polarity  # float in [-1.0, 1.0]
print(sentiment)
```

The standalone vaderSentiment package:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

sentiment_analyzer = SentimentIntensityAnalyzer()
sentiment = sentiment_analyzer.polarity_scores("This is a positive sentence.")
print(sentiment)
```
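Both VADER variants return a dict with `neg`, `neu`, `pos`, and `compound` fields; a common convention from the VADER authors is to threshold the `compound` score at ±0.05 to get a coarse label. A minimal sketch, with a hard-coded scores dict standing in for real `polarity_scores` output (the values shown are illustrative, not computed):

```python
def label_from_compound(scores, threshold=0.05):
    """Map a VADER-style scores dict to a coarse label via the compound score."""
    c = scores['compound']
    if c >= threshold:
        return 'positive'
    elif c <= -threshold:
        return 'negative'
    return 'neutral'

# Dict shaped like VADER output; the numbers here are made up for the demo
example = {'neg': 0.0, 'neu': 0.446, 'pos': 0.554, 'compound': 0.6124}
print(label_from_compound(example))  # → positive
```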
For Chinese sentiment analysis in Python, commonly used libraries include jieba, SnowNLP, and Pyltp.
A short jieba demo:
```python
import jieba

text = "这部电影很棒,非常喜欢!"
words = jieba.cut(text)
print(" ".join(words))
```

Note that jieba only handles tokenization; to get an actual sentiment label, combine it with a sentiment lexicon as in the first example.
A short SnowNLP demo:

```python
import snownlp

text = "这部电影很棒,非常喜欢!"
sentiment = snownlp.SnowNLP(text).sentiments  # probability of positive, in [0, 1]
print(sentiment)
```
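Since `sentiments` is a probability of the positive class, a simple cutoff turns it into a label; 0.5 is the natural default, though the exact threshold is a tuning choice:

```python
def label_from_probability(p, threshold=0.5):
    """Map a positive-class probability (e.g. SnowNLP's sentiments) to a label."""
    return 'positive' if p >= threshold else 'negative'

print(label_from_probability(0.92))  # → positive
print(label_from_probability(0.13))  # → negative
```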
A note on Pyltp: the released pyltp package (the Python binding of HIT's LTP toolkit) provides building blocks such as word segmentation, POS tagging, and dependency parsing, not a ready-made sentiment classifier, so you would combine its segmenter with a sentiment lexicon yourself. A segmentation sketch (the model path is a placeholder; pyltp requires a pretrained model from the LTP model release, and the constructor API differs between pyltp versions):

```python
from pyltp import Segmentor

segmentor = Segmentor()
segmentor.load("/path/to/ltp_data/cws.model")  # replace with the real model path
words = segmentor.segment("这部电影很棒,非常喜欢!")
print(" ".join(words))
segmentor.release()
```

Choose whichever library fits your use case.