Requests Basics for Python Web Scraping

Published June 25, 2023

Requests and Responses

Request methods

```python
import requests

# GET
r = requests.get('url')
# POST
r = requests.post('url')
# PUT
r = requests.put('url')
# DELETE
r = requests.delete('url')
# HEAD
r = requests.head('url')
# OPTIONS
r = requests.options('url')
```

Sending data

Form (AJAX) data:

```python
import requests

data = {'key': 'value'}
r = requests.post('url', data=data)
```

URL query parameters:

```python
import requests

payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.get('url', params=payload)
print(r.url)
# url?key2=value2&key1=value1
```

File download:

```python
import requests

r = requests.get('/')  # the URL was elided in the original
with open('', 'wb') as f:  # the filename was elided in the original
    f.write(r.content)
```
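The `params` dict is percent-encoded into the query string for you. As a rough sketch of what that serialization does, here is the standard-library equivalent (the URL and values below are made up for illustration):

```python
from urllib.parse import urlencode

# serialize a dict into an application/x-www-form-urlencoded query string,
# roughly what requests does with the params argument
payload = {'key1': 'value1', 'key2': 'value 2'}
query = urlencode(payload)
print('http://example.com/get?' + query)
# http://example.com/get?key1=value1&key2=value+2
```

Note that the space becomes `+` and reserved characters are escaped as `%XX` sequences.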

Responses

```python
import requests

r = requests.get('url')

# the response status code
print(r.status_code)
# the response body as text
print(r.text)
# the response body as bytes
print(r.content)
# the response body parsed as JSON
print(r.json())
```

Note that even a failed request can still return a JSON string, so use r.status_code or r.raise_for_status() to decide whether the response actually succeeded before trusting r.json().

Common parameters

headers

```python
import requests

headers = {
    'content-encoding': 'gzip',
    'transfer-encoding': 'chunked',
    'connection': 'close',
    'server': 'nginx/1.0.4',
    'x-runtime': '148ms',
    'etag': '"e1ca502697e5c9317743dc078f67693f"',
    'content-type': 'application/json',
}
r = requests.get('url', headers=headers)
```

cookies

```python
import requests

cookies = ""  # the raw cookie string was elided in the original
jar = requests.cookies.RequestsCookieJar()
for cookie in cookies.split(';'):
    key, value = cookie.split('=', 1)
    jar.set(key, value)
r = requests.get('url', cookies=jar)
```

proxies

```python
import requests

proxies = {
    "http": "",   # proxy URLs were elided in the original
    "https": "",
}
r = requests.get('url', proxies=proxies)
```

timeout

```python
import requests

# give up if the server does not respond within 1 second
r = requests.get('url', timeout=1)
```
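The cookie-jar loop above takes a raw "key=value; key=value" string apart before feeding it to a RequestsCookieJar. The same parsing can be sketched with nothing but the standard library (the cookie string here is made up):

```python
# parse a raw cookie header string into a plain dict,
# mirroring the split(';') / split('=', 1) loop used with RequestsCookieJar
raw = "sessionid=abc123; theme=dark"
jar = {}
for cookie in raw.split(';'):
    key, value = cookie.strip().split('=', 1)
    jar[key] = value
print(jar)
# {'sessionid': 'abc123', 'theme': 'dark'}
```

The second argument to `split('=', 1)` matters: it splits only on the first `=`, so values that themselves contain `=` (common in base64-encoded cookies) survive intact.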

SSL certificate verification

```python
import requests

# disable certificate verification
r = requests.get('url', verify=False)
```

Session

A Session object lets you persist certain parameters across requests.

```python
import requests

s = requests.Session()
s.auth = ('username', 'password')
s.headers.update({'x-test': 'true'})
r = s.get('url')
```

Prepared Request

```python
from requests import Request, Session

data = {}
headers = {}
s = Session()
# the HTTP method was elided in the original; Request() requires one, 'POST' is assumed here
req = Request('POST', 'url', data=data, headers=headers)
# represent the request as a data structure
prepped = s.prepare_request(req)
r = s.send(prepped)
print(r.text)
```

Authentication

```python
import requests
from requests.auth import HTTPBasicAuth

r = requests.get('url', auth=HTTPBasicAuth('username', 'password'))
```
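HTTPBasicAuth works by adding an `Authorization: Basic ...` header whose value is the base64-encoded `username:password` pair. A standard-library sketch of that header construction (the credentials are the same placeholders as above):

```python
import base64

# HTTP Basic auth: "Basic " + base64("username:password")
username, password = 'username', 'password'
token = base64.b64encode(f'{username}:{password}'.encode('ascii')).decode('ascii')
print('Authorization: Basic ' + token)
# Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```

Because base64 is trivially reversible, Basic auth protects nothing by itself and should only be sent over HTTPS.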
