Preface
The text and images in this article are sourced from the internet and are provided for learning and exchange purposes only, with no commercial use; copyright belongs to the original author. If there are any issues, please contact us promptly so they can be addressed.
The following article is from 菜鸟学Python数据分析.
1. Web Page Analysis
This article uses the danmu (bullet comments) of the final episode of Rock & Roast Season 3 (《脱口秀大会 第3季》) as an example. The first step is to locate the real URL that serves the danmu data.
By removing the request parameters one at a time, we find that only the timestamp parameter affects which danmu are returned, and that timestamp takes values in an arithmetic sequence with first term 15 and common difference 30. A reasonable guess is that Tencent Video serves one page of danmu for every 30 seconds of playback; this video is 12399 seconds long. The response is standard JSON, so it can be parsed directly with json.loads.
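Before writing the full scraper, this guess can be verified by requesting a single page and parsing it. The snippet below is a minimal sketch, assuming the same danmu endpoint, target_id and vid values used in the full script later in this article; the "comments" and "content" fields are the ones that script relies on.

import requests
import json

# Request the first danmu page (timestamp=15), assuming the endpoint and IDs used later in this article.
url = ("https://mfm.video.qq.com/danmu?otype=json&timestamp=15"
       "&target_id=5938032297%26vid%3Dx0034hxucmw&count=80")
headers = {"User-Agent": "Mozilla/5.0"}

resp = requests.get(url, headers=headers)
data = json.loads(resp.text, strict=False)  # strict=False tolerates control characters inside comment text

# Each element of "comments" is one danmu record.
print(len(data["comments"]), "danmu on this page")
print(data["comments"][0]["content"])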
2. Scraping in Practice
import requests
import json
import time
import pandas as pd

df = pd.DataFrame()
for page in range(15, 12399, 30):
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"}
    url = "https://mfm.video.qq.com/danmu?otype=json&timestamp={}&target_id=5938032297%26vid%3Dx0034hxucmw&count=80".format(page)
    print("Extracting page at timestamp " + str(page))
    html = requests.get(url, headers=headers)
    bs = json.loads(html.text, strict=False)  # strict=False avoids JSON parse errors on some malformed comment content
    time.sleep(1)
    # Walk the comments and pull out the target fields
    for i in bs["comments"]:
        content = i["content"]             # danmu text
        upcount = i["upcount"]             # like count
        user_degree = i["uservip_degree"]  # VIP level
        timepoint = i["timepoint"]         # post time
        comment_id = i["commentid"]        # danmu id
        cache = pd.DataFrame({"弹幕": [content], "会员等级": [user_degree],
                              "发布时间": [timepoint], "弹幕点赞": [upcount], "弹幕id": [comment_id]})
        df = pd.concat([df, cache])

df.to_csv("tengxun_danmu.csv", encoding="utf-8")
print(df.shape)
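Once the CSV is written, it can be loaded back for a quick sanity check. The snippet below is an optional follow-up, not part of the original script: it reads tengxun_danmu.csv and lists the most-liked danmu, assuming the column names written by the scraper above.

import pandas as pd

# Load the CSV produced by the scraper above.
df = pd.read_csv("tengxun_danmu.csv", encoding="utf-8", index_col=0)

# Basic sanity checks: row count and a peek at the data.
print(df.shape)
print(df.head())

# Top 10 most-liked danmu, using the "弹幕点赞" (like count) column written above.
top = df.sort_values("弹幕点赞", ascending=False).head(10)
print(top[["弹幕", "弹幕点赞"]])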