Fetching web pages with gzip/deflate support

Source: the Internet · Editor: 程序博客网 (Programmer Blog Network) · Date: 2024/05/20 03:04

Most web pages nowadays support gzip compression, which can save a great deal of transfer time. Take Douban's homepage as an example: the uncompressed version is 327 KB, the compressed one 61 KB, about 1/5 of the original. That means the page can be fetched roughly five times faster.
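The ratio is easy to check locally. A minimal sketch (Python 3; the sample markup is made up, real pages will compress differently, though repetitive HTML typically shrinks at least this well):

```python
import gzip

# repetitive markup compresses well, much like real HTML does
page = b"<div class='item'><a href='/subject/1'>title</a></div>\n" * 500
packed = gzip.compress(page)
print(len(page), len(packed))  # the gzip version is a small fraction of the original
assert len(packed) < len(page) // 5
```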

However, Python's urllib/urllib2 do not support compression by default. To receive a compressed response you must set 'Accept-Encoding' in the request header yourself, and after getting the response you must also check its 'Content-Encoding' header to decide whether the body needs decoding — tedious and fiddly. So how can urllib2 be made to support gzip and deflate automatically?
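To make the manual decoding step concrete, here is a minimal sketch (Python 3 for brevity; `decode_body` and the sample data are illustrative, not part of any library). It simulates both encodings locally instead of hitting the network:

```python
import gzip
import zlib

def decode_body(body, content_encoding):
    """Decode a response body according to its Content-Encoding header."""
    if content_encoding == "gzip":
        return gzip.decompress(body)
    if content_encoding == "deflate":
        try:
            # some servers send a raw deflate stream without the zlib wrapper
            return zlib.decompress(body, -zlib.MAX_WBITS)
        except zlib.error:
            return zlib.decompress(body)
    return body

# simulate both encodings locally
page = b"<html>hello</html>" * 100
assert decode_body(gzip.compress(page), "gzip") == page
assert decode_body(zlib.compress(page), "deflate") == page
```

This is exactly the check-and-decode dance that has to happen after every single request — which is why pushing it into a handler, as below, is attractive.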

One way is to subclass BaseHandler and wire it in via build_opener:


import urllib2
import zlib
from gzip import GzipFile
from StringIO import StringIO

# deflate support
def deflate(data):   # zlib only provides the zlib format, not the raw deflate format,
    try:             # so this workaround is needed on top:
        return zlib.decompress(data, -zlib.MAX_WBITS)
    except zlib.error:
        return zlib.decompress(data)

class ContentEncodingProcessor(urllib2.BaseHandler):
    """A handler to add gzip capabilities to urllib2 requests"""

    # add headers to requests
    def http_request(self, req):
        req.add_header("Accept-Encoding", "gzip, deflate")
        return req

    # decode responses
    def http_response(self, req, resp):
        old_resp = resp
        # gzip
        if resp.headers.get("content-encoding") == "gzip":
            gz = GzipFile(fileobj=StringIO(resp.read()), mode="r")
            resp = urllib2.addinfourl(gz, old_resp.headers, old_resp.url, old_resp.code)
            resp.msg = old_resp.msg
        # deflate
        if resp.headers.get("content-encoding") == "deflate":
            gz = StringIO(deflate(resp.read()))
            resp = urllib2.addinfourl(gz, old_resp.headers, old_resp.url, old_resp.code)
            resp.msg = old_resp.msg
        return resp

With the handler in place, build an opener and use it directly:

encoding_support = ContentEncodingProcessor
opener = urllib2.build_opener(encoding_support, urllib2.HTTPHandler)

# open the page directly with this opener; if the server supports gzip/deflate,
# the content is decompressed automatically
content = opener.open(url).read()
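The code above is Python 2 (urllib2, StringIO). A rough Python 3 port is sketched below, assuming `urllib.request` / `urllib.response` (where urllib2's classes now live); the demo at the end feeds the handler a hand-built response object rather than a live page, so it runs without network access:

```python
import gzip
import io
import urllib.request
import urllib.response
import zlib
from email.message import Message


class ContentEncodingProcessor(urllib.request.BaseHandler):
    """Rough Python 3 port: urllib2.BaseHandler is now urllib.request.BaseHandler."""

    def http_request(self, req):
        req.add_header("Accept-Encoding", "gzip, deflate")
        return req

    def http_response(self, req, resp):
        encoding = resp.headers.get("Content-Encoding", "")
        if encoding not in ("gzip", "deflate"):
            return resp
        body = resp.read()
        if encoding == "gzip":
            body = gzip.decompress(body)
        else:
            # deflate: try the raw stream first, fall back to zlib-wrapped
            try:
                body = zlib.decompress(body, -zlib.MAX_WBITS)
            except zlib.error:
                body = zlib.decompress(body)
        new_resp = urllib.response.addinfourl(
            io.BytesIO(body), resp.headers, resp.url, resp.code)
        new_resp.msg = resp.msg
        return new_resp


# exercise the handler with a hand-built gzip "response" (no network needed)
headers = Message()
headers["Content-Encoding"] = "gzip"
fake = urllib.response.addinfourl(
    io.BytesIO(gzip.compress(b"<html>ok</html>")),
    headers, "http://example.com/", 200)
fake.msg = "OK"
decoded = ContentEncodingProcessor().http_response(None, fake)
print(decoded.read())  # b'<html>ok</html>'
```

As in the original, the handler would be registered with `urllib.request.build_opener(ContentEncodingProcessor, urllib.request.HTTPHandler)`. For new code, libraries such as requests handle all of this transparently, but the handler approach keeps you in the standard library.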



