Resource Description
A Python crawler for job listings in the Beijing area of Ganji.com (bj.ganji.com); the job categories are crawled in parallel using multiple processes.

Code Snippet and File Information
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import requests
from bs4 import BeautifulSoup
url = 'http://bj.ganji.com/zhaopin/'
url_host = 'http://bj.ganji.com'
headers = {
    # Headers copied from a logged-in browser session; the Cookie value is
    # session-specific and will expire, so refresh it before running.
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.119 Safari/537.36',
    'Cookie': 'statistics_clientid=me; citydomain=bj; ganji_xuuid=1126098c-9b0b-45a2-d1c5-696c24513e61.1527074080805; xxzl_deviceid=ut3zpBItorVTZAtr0DCbC9scViCnLY%2FCFM42aV5iWN59%2F298Q7Et0CyhHT%2F6Gl6j; ganji_uuid=4720982452122919813978; 58uuid=7a9bc91b-02ad-44f4-afee-dacf0f031296; als=0; new_uv=16; _gl_tracker=%7B%22ca_source%22%3A%22www.baidu.com%22%2C%22ca_name%22%3A%22-%22%2C%22ca_kw%22%3A%22-%22%2C%22ca_id%22%3A%22-%22%2C%22ca_s%22%3A%22seo_baidu%22%2C%22ca_n%22%3A%22-%22%2C%22ca_i%22%3A%22-%22%2C%22sid%22%3A59536909372%7D; GANJISESSID=0u7dffn8a7uu5v7t9bjkt4oo60; __utmc=32156897; STA_DS=1; __utma=32156897.1527240521.1527160626.1527582738.1527586203.5; __utmz=32156897.1527586203.5.5.utmcsr=bj.ganji.com|utmccn=(referral)|utmcmd=referral|utmcct=/; _wap__utmganji_wap_newCaInfo_V2=%7B%22ca_n%22%3A%22-%22%2C%22ca_s%22%3A%22self%22%2C%22ca_i%22%3A%22-%22%7D; Hm_lvt_655ab0c3b3fdcfa236c3971a300f3f29=1527586230; WantedListPageScreenType=1280; Hm_lvt_8da53a2eb543c124384f1841999dcbb8=1527586250; pos_detail_zcm_popup=2018-5-29; __utmt=1; zhaopin_lasthistory=zpxingzhenghouqin%7Czpxingzhenghouqin; zhaopin_historyrecords=bj%7Czpxingzhenghouqin%7C-%2Cbj%7Czpshichangyingxiao%7C-; Hm_lpvt_655ab0c3b3fdcfa236c3971a300f3f29=1527586904; gj_footprint=%5B%5B%22%5Cu5176%5Cu4ed6%5Cu9500%5Cu552e%5Cu804c%5Cu4f4d%22%2C%22%5C%2Fzpxiaoshouqita%5C%2F%22%5D%2C%5B%22%5Cu884c%5Cu653f%5C%2F%5Cu540e%5Cu52e4%22%2C%22%5C%2Fzpxingzhenghouqin%5C%2F%22%5D%2C%5B%22%5Cu9500%5Cu552e%22%2C%22%5C%2Fzpshichangyingxiao%5C%2F%22%5D%5D; Hm_lpvt_8da53a2eb543c124384f1841999dcbb8=1527586931; ganji_login_act=1527587108173; __utmb=32156897.41.10.1527586203',
    'Connection': 'keep-alive',
}
def get_genre_url(url):
    # Fetch the job-listing index page and build an absolute URL for every job category link.
    wb_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    genre_links = soup.select('dt > a')
    urls = []
    for genre_link in genre_links:
        genre_url = url_host + genre_link.get('href')
        urls.append(genre_url)
    return urls
genre_urls = get_genre_url(url)
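
The snippet above only collects the category URLs; the multi-process crawl mentioned in the description is presumably driven from main.py, which is not included here. The following is a minimal sketch of how genre_urls could be fanned out to a multiprocessing.Pool. The crawl_genre helper and the 'dl > dt > a' job-link selector are illustrative assumptions, not the project's actual code in get_jobs_info.py.

from multiprocessing import Pool

def crawl_genre(genre_url):
    # Hypothetical per-category worker: fetch the first listing page of one
    # category and return how many job links it contains (the real
    # get_jobs_info.py presumably parses and stores the job details instead).
    wb_data = requests.get(genre_url, headers=headers)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    job_links = soup.select('dl > dt > a')  # assumed selector for job entries
    return genre_url, len(job_links)

if __name__ == '__main__':
    # Crawl all categories in parallel with a small process pool.
    with Pool(processes=4) as pool:
        for genre_url, job_count in pool.map(crawl_genre, genre_urls):
            print(genre_url, job_count)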
Type   Size (bytes)   Date         Time    Name
----   ------------   ----------   -----   ----
Dir               0   2018-06-14   12:41   赶集招聘爬虫\
File           2351   2018-05-30   04:16   赶集招聘爬虫\all_genre_li
File            369   2018-06-02   00:49   赶集招聘爬虫\counter.py
File           4392   2018-05-30   15:25   赶集招聘爬虫\get_jobs_info.py
File            760   2018-06-14   12:41   赶集招聘爬虫\main.py
Dir               0   2018-06-14   12:38   赶集招聘爬虫\__pycache__\
File           2446   2018-06-14   12:38   赶集招聘爬虫\__pycache__\all_genre_li
File           4356   2018-05-31   07:13   赶集招聘爬虫\__pycache__\get_jobs_info.cpython-36.pyc