Preface
The text and images in this article are sourced from the internet and are for learning and reference only; they are not for any commercial use. If any problem arises, please contact us promptly so it can be handled.
The following article is from 青灯编程, by 清风.
As shown in the screenshot above, the script grabbed 171 videos, roughly 2.6 GB in total, in just 88 seconds, which is less than a minute and a half.
Basic development environment
- Python 3.6
- PyCharm
Modules used
```python
import re
import time
import requests
import concurrent.futures
```
These modules can be installed with pip; only `requests` is third-party, while `re`, `time`, and `concurrent.futures` ship with the standard library.
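The speed of the crawler comes from `concurrent.futures.ThreadPoolExecutor`, which runs submitted tasks on a pool of worker threads so the network-bound downloads overlap instead of running one after another. Here is a minimal, self-contained sketch of that submit-and-wait pattern; `fake_download` is a hypothetical stand-in for the real page crawler, used only to show the timing effect:

```python
# Minimal sketch of the ThreadPoolExecutor pattern used by the full script below.
# fake_download is a hypothetical placeholder task, not part of the original code.
import concurrent.futures
import time


def fake_download(page):
    time.sleep(1)  # simulate the network latency of crawling one list page
    return f"page {page} done"


if __name__ == "__main__":
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        futures = [executor.submit(fake_download, page) for page in range(1, 11)]
        for future in concurrent.futures.as_completed(futures):
            print(future.result())
    # 10 one-second tasks on 5 workers finish in roughly 2 seconds, not 10
    print("elapsed:", round(time.time() - start, 2))
```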
Complete code
```python
import re
import time
import requests
import concurrent.futures


def get_response(html_url):
    """Send a GET request with a browser User-Agent and return the response."""
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
    }
    response = requests.get(url=html_url, headers=headers)
    return response


def save(video_url, video_title):
    """Download the video binary and write it to disk."""
    filename = "video" + video_title + ".mp4"
    video_data = get_response(video_url).content
    with open(filename, mode="wb") as f:
        f.write(video_data)
    print("Saving:", video_title)


def main(html_url):
    """Parse one list page, then fetch and save every video on it."""
    html_data = get_response(html_url).text
    # content IDs of the individual video detail pages
    lis = re.findall(r'<div id="(\d+)" class="newslv_share">', html_data)
    for li in lis:
        page_url = f"https://www.thepaper.cn/newsDetail_forward_{li}"
        page_data = get_response(page_url).text
        video_url = re.findall(r'<source src="(.*?)" type="video/mp4"/>', page_data)[0]
        video_title = re.findall(r"<h2>(.*?)</h2>", page_data)[0]
        save(video_url, video_title)
    end_time = time.time()
    use_time = end_time - start_time
    print("Total time:", use_time)


if __name__ == "__main__":
    start_time = time.time()
    # 5 worker threads; each one handles a whole list page
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)
    for page in range(1, 11):
        url = f"https://www.thepaper.cn/load_video_chosen.jsp?channelID=26916&pageidx={page}"
        executor.submit(main, url)
    executor.shutdown()
```
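Two details are worth hardening before running this at scale, although neither appears in the original: the requests have no timeout, and the video titles go straight into filenames, which breaks on characters such as `?` or `:`. Below is a hedged sketch of how the same two helpers could be tightened up; the `video/` output folder and the sanitizing regex are my own assumptions, not part of the original script:

```python
# A hedged variant of save()/get_response() with a request timeout and
# filename sanitization; the hardening choices here are additions, not
# something taken from the original article.
import os
import re
import requests

HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
}


def get_response(html_url):
    # a timeout keeps one stuck request from blocking a worker thread forever
    response = requests.get(url=html_url, headers=HEADERS, timeout=10)
    response.raise_for_status()  # fail loudly on 4xx/5xx instead of saving an error page
    return response


def save(video_url, video_title):
    # strip characters Windows refuses in filenames before writing
    safe_title = re.sub(r'[\\/:*?"<>|]', "_", video_title)
    os.makedirs("video", exist_ok=True)  # assumed output folder
    filename = os.path.join("video", safe_title + ".mp4")
    with open(filename, mode="wb") as f:
        f.write(get_response(video_url).content)
    print("Saved:", filename)
```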
That is the complete code; a few personal impressions. I was still only using 5 threads here. If you give it 10 threads it is even faster and the whole crawl can finish in under a minute, as in the sketch below.
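For reference, here is what that 10-thread entry point could look like; `main` refers to the function defined in the complete script above, and using the executor as a context manager waits for every submitted page before the total time is printed:

```python
# Illustrative only: the same entry point with 10 workers instead of 5.
# `main` is the function from the complete script above, not redefined here.
import concurrent.futures
import time

if __name__ == "__main__":
    start_time = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        for page in range(1, 11):
            url = f"https://www.thepaper.cn/load_video_chosen.jsp?channelID=26916&pageidx={page}"
            executor.submit(main, url)
    # leaving the with-block waits for all tasks, equivalent to executor.shutdown()
    print("Total time:", time.time() - start_time)
```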
Free online case-study video tutorials on Python web scraping, data analysis, website development and more:
https://space.bilibili.com/523606542