
Scraping the Most Dazzling Chinese Animation, Fog Hill of Five Elements (《雾山五行》), with Python: What Are 100,000 Netizens Saying? [Word Cloud]


The text and images in this article come from the internet and are provided for learning and exchange only, with no commercial use; copyright remains with the original authors. If there is any issue, please contact us promptly so we can handle it.

This article comes from Tencent Cloud. Author: Python小二

Anime fans will know that a hit series, Fog Hill of Five Elements (《雾山五行》), came out recently. Its distinctive ink-wash art style and spectacular fight scenes have been widely praised: the first episode topped Bilibili's trending list within 24 hours of airing, and it opened at 9.5 on Douban, so its popularity speaks for itself. As far as fight scenes go, calling it the most dazzling animation around is no exaggeration. The one slight shortcoming is the episode count: there are only 3 episodes.

After seeing the GIFs, you'll agree that calling it the most dazzling animation is no empty claim. Next, let's scrape some comments to see what everyone thinks of the series; we'll pull data from three platforms: Bilibili, Weibo, and Douban.

Scraping Bilibili

Let's start with the Bilibili danmaku (bullet comment) data. The episode page is https://www.bilibili.com/bangumi/play/ep331423 and the danmaku link is http://comment.bilibili.com/186803402.xml. The scraping code is as follows:

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = "http://comment.bilibili.com/218796492.xml"
req = requests.get(url)
html = req.content
html_doc = str(html, "utf-8")  # decode the response as utf-8
# parse the XML
soup = BeautifulSoup(html_doc, "lxml")
results = soup.find_all("d")
contents = [x.text for x in results]
# save the results
dic = {"contents": contents}
df = pd.DataFrame(dic)
df["contents"].to_csv("bili.csv", encoding="utf-8", index=False)
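
For reference, each <d> element in the danmaku XML carries its metadata in a p attribute, a comma-separated list whose first field is, by the commonly documented layout, the playback offset in seconds. The code above only keeps the text; here is a minimal sketch that keeps the timing alongside it (the field layout is an assumption, not something the original code relies on):

from bs4 import BeautifulSoup

def danmaku_with_time(xml_text):
    # each <d> looks like <d p="12.3,1,25,...">text</d>; the first field
    # of p is assumed to be the playback offset in seconds
    soup = BeautifulSoup(xml_text, "lxml")
    items = []
    for d in soup.find_all("d"):
        fields = d.get("p", "").split(",")
        offset = float(fields[0]) if fields and fields[0] else None
        items.append((offset, d.text))
    return items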


 

If you're not familiar with scraping Bilibili danmaku data, take a look at: Scraping Bilibili Danmaku.

Next, we turn the scraped danmaku into a word cloud. The code is as follows:

import jieba
import numpy as np
from PIL import Image
from wordcloud import WordCloud

def jieba_():
    # open the comment data file
    content = open("bili.csv", "rb").read()
    # segment the text with jieba
    word_list = jieba.cut(content)
    words = []
    # stopwords to filter out
    stopwords = open("stopwords.txt", "r", encoding="utf-8").read().split("\n")[:-1]
    for word in word_list:
        if word not in stopwords:
            words.append(word)
    global word_cloud
    # join the words with commas
    word_cloud = ",".join(words)

def cloud():
    # open the word cloud background image
    cloud_mask = np.array(Image.open("bg.png"))
    # set the word cloud's attributes
    wc = WordCloud(
        # white background
        background_color="white",
        # mask image that shapes the cloud
        mask=cloud_mask,
        # maximum number of words to show
        max_words=500,
        # font that can render Chinese
        font_path="./fonts/simhei.ttf",
        # maximum font size
        max_font_size=60,
        repeat=True
    )
    global word_cloud
    # generate the word cloud
    x = wc.generate(word_cloud)
    # render it as an image
    image = x.to_image()
    # display the image
    image.show()
    # save the image
    wc.to_file("cloud.png")

jieba_()
cloud()
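
A side note on the comma-joining in jieba_(): WordCloud.generate() runs its own tokenizer over the input string, so the commas simply keep the already-segmented words apart. If jieba_() were tweaked to return its words list instead of only setting the global string, that re-tokenization could be skipped by passing frequencies directly; a sketch of that option (an alternative, not the author's method; wc is a WordCloud configured as in cloud() above):

from collections import Counter

def cloud_from_frequencies(words, wc):
    # count each segmented word and let WordCloud size them by frequency
    freqs = Counter(w for w in words if w.strip())
    wc.generate_from_frequencies(freqs)
    wc.to_file("cloud_freq.png")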

 

Take a look at the result:

Scraping Weibo

Next, we scrape the Weibo comments for the series. Our target is the comments on the pinned post of the official 雾山五行 Weibo account, as shown below:

The scraping code is as follows:

import re
import time
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# scrape one page of comments
def get_one_page(url):
    headers = {
        "User-agent" : "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3880.4 Safari/537.36",
        "Host" : "weibo.cn",
        "Accept" : "application/json, text/plain, */*",
        "Accept-Language" : "zh-CN,zh;q=0.9",
        "Accept-Encoding" : "gzip, deflate, br",
        "Cookie" : "your own cookie",
        "DNT" : "1",
        "Connection" : "keep-alive"
    }
    # fetch the page's html
    response = requests.get(url, headers=headers, verify=False)
    # request succeeded
    if response.status_code == 200:
        # return the html document, to be passed to the parser
        return response.text
    return None

# parse and save the comment content
def save_one_page(html):
    comments = re.findall('<span class="ctt">(.*?)</span>', html)
    for comment in comments[1:]:
        result = re.sub("<.*?>", "", comment)
        # skip replies (they start with 回复@)
        if "回复@" not in result:
            with open("wx_comment.txt", "a+", encoding="utf-8") as fp:
                fp.write(result)

for i in range(50):
    url = "https://weibo.cn/comment/Je5bqpmCn?uid=6569999648&rl=0&page=" + str(i)
    html = get_one_page(url)
    print("Scraping comment page %d" % (i + 1))
    save_one_page(html)
    time.sleep(3)
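
One caveat with save_one_page(): comments are appended back to back with no separator, so any later segmentation sees them as one long run of text. Here is a small variant (a sketch, not the original code) that writes one comment per line and reports how many were saved, which also makes it easy to stop paginating once a page yields nothing:

import re

def save_one_page_per_line(html, path="wx_comment.txt"):
    # same extraction as save_one_page, but newline-delimited output
    comments = re.findall('<span class="ctt">(.*?)</span>', html)
    saved = 0
    for comment in comments[1:]:
        result = re.sub("<.*?>", "", comment).strip()
        if result and "回复@" not in result:
            with open(path, "a+", encoding="utf-8") as fp:
                fp.write(result + "\n")
            saved += 1
    return saved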

 

If you're not familiar with scraping Weibo comments, you can refer to: Scraping Weibo Comments.

As before, we turn the comments into a word cloud. Take a look at the result:

Scraping Douban

Finally, we scrape the series' Douban comment data. Its Douban page is https://movie.douban.com/subject/30395914/, and the implementation is as follows:

import time
import random
import requests
import pandas as pd
from lxml import etree

def spider():
    url = "https://accounts.douban.com/j/mobile/login/basic"
    headers = {"User-Agent": "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)"}
    # comment URL; start is formatted in for pagination, 20 comments per page
    url_comment = "https://movie.douban.com/subject/30395914/comments?start=%d&limit=20&sort=new_score&status=P"
    data = {
        "ck": "",
        "name": "username",
        "password": "password",
        "remember": "false",
        "ticket": ""
    }
    session = requests.session()
    session.post(url=url, headers=headers, data=data)
    # four lists for username, star rating, time, and comment text
    users = []
    stars = []
    times = []
    content = []
    # fetch 500 comments, 20 per page (the cap Douban allows)
    for i in range(0, 500, 20):
        # fetch the HTML
        data = session.get(url_comment % i, headers=headers)
        # status code 200 means success
        print("Page", i, "status code:", data.status_code)
        # pause 0-1 seconds to avoid getting the IP banned
        time.sleep(random.random())
        # parse the HTML
        selector = etree.HTML(data.text)
        # grab every comment on the page with XPath
        comments = selector.xpath('//div[@class="comment"]')
        # walk the comments and pull out the details
        for comment in comments:
            # username
            user = comment.xpath(".//h3/span[2]/a/text()")[0]
            # star rating (one digit sliced out of the allstarXX class name)
            star = comment.xpath(".//h3/span[2]/span[2]/@class")[0][7:8]
            # time
            date_time = comment.xpath(".//h3/span[2]/span[3]/@title")
            # the time can be missing, so check first
            if len(date_time) != 0:
                date_time = date_time[0]
                date_time = date_time[:10]
            else:
                date_time = None
            # comment text
            comment_text = comment.xpath(".//p/span/text()")[0].strip()
            # append everything to the lists
            users.append(user)
            stars.append(star)
            times.append(date_time)
            content.append(comment_text)
    # wrap the lists in a dict
    comment_dic = {"user": users, "star": stars, "time": times, "comments": content}
    # convert to a DataFrame
    comment_df = pd.DataFrame(comment_dic)
    # save the data
    comment_df.to_csv("db.csv")
    # also save the comments on their own
    comment_df["comments"].to_csv("comment.csv", index=False)

spider()
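
As a quick sanity check on the saved data, the rating distribution can be summarized from db.csv (a sketch: it assumes spider() above has already run, and that the single character sliced from Douban's allstarXX class name is the 1-5 rating digit):

import pandas as pd

# count how many comments gave each star rating
df = pd.read_csv("db.csv")
print(df["star"].value_counts().sort_index())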

 

If you're not familiar with scraping Douban comments, you can refer to: Scraping Douban Comments.

Take a look at the generated word cloud:

