Resource Description
Find the top 100 highest-rated movies; implemented in Python as a web spider for the site.
Code Snippet and File Info
#-*- coding: UTF-8 -*-
import sys
import time
import urllib
import urllib2
# import requests
import numpy as np
from bs4 import BeautifulSoup
from openpyxl import Workbook

reload(sys)
sys.setdefaultencoding('utf8')

# Some User-Agents, rotated between requests to look less like a bot
hds = [{'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'},
       {'User-Agent': 'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11'},
       {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)'}]

def book_spider(book_tag):
    page_num = 0
    book_list = []
    try_times = 0
    while(1):
        #url = 'http://www.douban.com/tag/%E5%B0%8F%E8%AF%B4/book?start=0'  # For test
        url = 'http://www.douban.com/tag/' + urllib.quote(book_tag) + '/book?start=' + str(page_num*15)
        time.sleep(np.random.rand()*5)  # random delay to avoid being banned

        # Current version
        try:
            req = urllib2.Request(url, headers=hds[page_num % len(hds)])
            source_code = urllib2.urlopen(req).read()
            plain_text = str(source_code)
        except (urllib2.HTTPError, urllib2.URLError) as e:
            print e
            continue

        ## Previous version: the IP gets banned easily
        #source_code = requests.get(url)
        #plain_text = source_code.text

        soup = BeautifulSoup(plain_text)
        list_soup = soup.find('div', {'class': 'mod book-list'})

        try_times += 1
        if list_soup == None and try_times < 200:
            continue
        elif list_soup == None or len(list_soup) <= 1:
            break  # Break when no information is obtained after 200 requests

        for book_info in list_soup.findAll('dd'):
            title = book_info.find('a', {'class': 'title'}).string.strip()
            desc = book_info.find('div', {'class': 'desc'}).string.strip()
            desc_list = desc.split('/')
            book_url = book_info.find('a', {'class': 'title'}).get('href')

            try:
                author_info = '作者/译者: ' + '/'.join(desc_list[0:-3])
            except:
                author_info = '作者/译者: 暂无'
            try:
                pub_info = '出版信息: ' + '/'.join(desc_list[-3:])
            except:
                pub_info = '出版信息: 暂无'
            try:
                rating = book_info.find('span', {'class': 'rating_nums'}).string.strip()
            except:
                rating = '0.0'
            try:
                #people_num = book_info.findAll('s
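The snippet above relies on two small tricks: it rotates through the `hds` User-Agent list (one header per page request) and slices the `/`-separated `desc` string so that the last three fields become publication info and the rest become author/translator. A minimal Python 3 sketch of that logic, with a hypothetical sample `desc` string for illustration:

```python
def parse_desc(desc):
    """Split a Douban-style 'desc' string: the last three fields are
    publisher / date / price; everything before them is author/translator."""
    desc_list = desc.split('/')
    author_info = '/'.join(desc_list[0:-3])  # empty if fewer than 4 fields
    pub_info = '/'.join(desc_list[-3:])
    return author_info, pub_info

def pick_user_agent(hds, page_num):
    # Cycle through the header list so consecutive pages use different UAs
    return hds[page_num % len(hds)]

if __name__ == '__main__':
    # Hypothetical desc string, not taken from the site
    author, pub = parse_desc('J.K. Rowling/Bloomsbury/1997-6-26/GBP 6.99')
    print(author)  # J.K. Rowling
    print(pub)     # Bloomsbury/1997-6-26/GBP 6.99
```

Note that when `desc` has fewer than four fields, `desc_list[0:-3]` is empty rather than raising, which is why the original's bare `try/except` around the join rarely triggers.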
Related Resources
- A multithreaded smart spider that crawls novels from a website
- Python spider for weather-forecast information
- Single-book spider for the Dingdian novel site (.py)
- A simple Python spider
- Douban spider; Scrapy framework
- China city latitude/longitude spider (.ipynb)
- Python spider data analysis and visualization
- Scraping popular Douban movies
- Website listing-information spider
- Baidu Images spider (Python version)
- Python novel scraper 59868
- Wallpaper spider for the Bi'anhua site
- Python novel spider (.ipynb)
- Spider for NetEase Cloud Music
- BUPT Python spider for XuetangX
- Simple Python spider
- Scraping 58.com second-hand housing listings (.py)
- CNKI spider software (Python)
- Python spider for Weibo trending searches
- Python spider for travel information (with source code, c
- Python spider for Douban movie information
- Source for scraping hundreds of girl pictures, runs directly
- Scraping the Douban movies top 250 with XPath
- Hands-on beginner tutorial on Python spiders
- Web spider (pachong_anjuke.py)
- Python JD.com flash-purchase assistant, includes login and product lookup
- Python web-spider source code for scenic-spot information
- Python scraper for Wikipedia programming-language infoboxes (
- Python Sina Weibo spider
- 12306 spider implementation