This article covers two examples. The first is a small, simple site-crawling example; the second fetches an article list from a site's listing page and stores it in a database, saving each article's title, link, and time. The code is as follows:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
# CnbetaItem is defined in the project's cnbeta/items.py
from cnbeta.items import CnbetaItem

class CBSpider(CrawlSpider):
    name = 'cnbeta'
    allowed_domains = ['cnbeta.com']
    start_urls = ['http://www.cnbeta.com']
    rules = (
        # Follow article links and hand each matching page to parse_page
        Rule(SgmlLinkExtractor(allow=('/articles/.*\.htm', )),
             callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        item = CnbetaItem()
        sel = Selector(response)
        item['title'] = sel.xpath('//title/text()').extract()
        item['url'] = response.url
        return item
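The spider above only yields items; the second example described in the introduction (saving each article's title, link, and time to a database) would be done in a Scrapy item pipeline. Below is a minimal sketch using SQLite. The class name, database file, and table/column names are my own assumptions, not from the original article; Scrapy items behave like dicts, so the pipeline is written to accept plain dicts as well.

```python
import sqlite3

class SQLitePipeline:
    """Sketch of an item pipeline that stores article
    title, link, and time in a SQLite table (assumed schema)."""

    def __init__(self, db_path="articles.db"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS articles "
            "(title TEXT, url TEXT, pub_time TEXT)"
        )

    def process_item(self, item, spider=None):
        # Scrapy calls this once per scraped item; insert and commit.
        self.conn.execute(
            "INSERT INTO articles (title, url, pub_time) VALUES (?, ?, ?)",
            (item.get("title"), item.get("url"), item.get("time")),
        )
        self.conn.commit()
        return item
```

To use this in a real project, register the class under ITEM_PIPELINES in the project's settings.py so Scrapy routes each scraped item through it.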
That concludes the example of crawling a website with Scrapy and the steps for implementing a web crawler (spider). Note that the scrapy.contrib import paths used above date from pre-1.0 Scrapy; in current releases the equivalents live under scrapy.spiders (CrawlSpider, Rule) and scrapy.linkextractors (LinkExtractor).