Published: 2023-05-24 11:12:01 | Source: 網(wǎng)站運營 (Website Operations)
Python-Based Ajax Web Scraping: A Detailed Case Walkthrough

The rough route of this article: two cases (Douban's movie tag page and the CSDN homepage), each solved in two ways — driving a real browser with Selenium, and requesting the Ajax endpoint directly.

Case 1, Method 1: simulate clicks with Selenium

Variant 1: set a fixed number of clicks.

from bs4 import BeautifulSoup
from selenium import webdriver
import time
import re

browser = webdriver.Chrome()  # tested with Chrome 68.0.3440.106 (official build, 64-bit)
browser.get('https://movie.douban.com/tag/#/?sort=T&range=0,10&tags=')
browser.implicitly_wait(3)
# The browser needs time to run the page's JavaScript, and that time is hard to guess:
# a fixed delay either wastes time or loses data. implicitly_wait(3) tells the driver
# to keep polling the DOM for up to 3 seconds when it looks up an element, so lookups
# succeed as soon as the element appears instead of failing immediately.
time.sleep(3)
browser.find_element_by_xpath('//*[@id="app"]/div/div[1]/div[1]/ul[4]/li[6]/span').click()  # select the "勵志" (inspirational) movie category

for i in range(5):  # click "加載更多" ("Load more") 5 times
    browser.find_element_by_link_text("加載更多").click()
    time.sleep(5)  # if the page has not fully loaded, the click can land on a movie link by mistake, hence the sleep

# browser.page_source is the source after the 5 clicks; parse it with Beautiful Soup
soup = BeautifulSoup(browser.page_source, 'html.parser')
items = soup.find('div', class_=re.compile('list-wp'))
for item in items.find_all('a'):
    Title = item.find('span', class_='title').text
    Rate = item.find('span', class_='rate').text
    Link = item.find('span', class_='pic').find('img').get('src')
    print(Title, Rate, Link)

Variant 2: keep clicking until everything is loaded.

from bs4 import BeautifulSoup
from selenium import webdriver
import time
import re

browser = webdriver.Chrome()
browser.get('https://movie.douban.com/tag/#/?sort=T&range=0,10&tags=')
browser.implicitly_wait(3)
time.sleep(3)
browser.find_element_by_xpath('//*[@id="app"]/div/div[1]/div[1]/ul[4]/li[6]/span').click()  # select the "勵志" (inspirational) movie category

soup = BeautifulSoup(browser.page_source, 'html.parser')
while len(soup.select('.more')) > 0:  # soup.select() returns a list; keep clicking as long as it is non-empty
    browser.find_element_by_link_text("加載更多").click()
    time.sleep(5)  # sleep so a half-loaded page does not cause a mis-click onto a movie page
    soup = BeautifulSoup(browser.page_source, 'html.parser')

# with every "Load more" clicked, parse the final page source with Beautiful Soup
items = soup.find('div', class_=re.compile('list-wp'))
for item in items.find_all('a'):
    Title = item.find('span', class_='title').text
    Rate = item.find('span', class_='rate').text
    Link = item.find('span', class_='pic').find('img').get('src')
    print(Title, Rate, Link)
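Both variants above paper over timing with time.sleep(5). A sturdier alternative, sketched below against the same page, is Selenium's explicit wait, which blocks until the "Load more" link is actually clickable and raises a timeout otherwise. (The find_element_by_* calls above are the Selenium 3-era API matching the Chrome 68 setup; the sketch uses By locators, which also work in Selenium 4. The 10-second timeout is an assumed value, not from the original code.)

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get('https://movie.douban.com/tag/#/?sort=T&range=0,10&tags=')

wait = WebDriverWait(browser, 10)  # poll for up to 10 seconds (assumed timeout)
for _ in range(5):
    # block until the "加載更多" ("Load more") link is clickable, instead of
    # sleeping a fixed 5 seconds and hoping the page has finished loading
    more = wait.until(EC.element_to_be_clickable((By.LINK_TEXT, '加載更多')))
    more.click()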
Method 2: construct the follow-up request URL directly from the URL pattern shown in the developer-tools tab

import requests
from requests.exceptions import RequestException
import csv

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'}

def get_one_page(url):
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.json()  # .json(), built into requests, parses the returned JSON into a Python dict
        return None
    except RequestException:
        print("Fetch failed")

def parse_one_page(d):
    try:
        datum = d['data']
        for data in datum:
            yield {
                'Title': data['title'],
                'Director': data['directors'],
                'Actors': data['casts'],
                'Rate': data['rate'],
                'Link': data['url']
            }
    except Exception:  # if the response lacks the expected keys, yield nothing
        return None

def main():
    for i in range(10):  # scrape 10 pages here; raise the number if you need more data
        url = 'https://movie.douban.com/j/new_search_subjects?sort=T&range=0,10&tags=%E5%8A%B1%E5%BF%97&start={}'.format(i * 20)
        d = get_one_page(url)
        print('Page {} fetched'.format(i + 1))
        for item in parse_one_page(d):
            print(item)
        # append the parsed dicts to a CSV file ('Movie.csv' in the working directory by default)
        with open('Movie.csv', 'a', newline='', encoding='utf-8') as f:
            fieldnames = ['Title', 'Director', 'Actors', 'Rate', 'Link']
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            if i == 0:
                writer.writeheader()  # write the header once, not once per page
            for item in parse_one_page(d):
                writer.writerow(item)

if __name__ == '__main__':
    main()
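A side note on that URL: '%E5%8A%B1%E5%BF%97' is simply the UTF-8 percent-encoding of 勵志. Instead of formatting the query string by hand, requests can assemble and encode it from a params dict — a minimal sketch of the same request (the shortened User-Agent is just for the sketch):

import requests

headers = {'User-Agent': 'Mozilla/5.0'}  # shortened UA for the sketch
params = {'sort': 'T', 'range': '0,10', 'tags': '勵志', 'start': 0}
r = requests.get('https://movie.douban.com/j/new_search_subjects',
                 headers=headers, params=params)  # requests URL-encodes 勵志 itself
print(r.json().get('data', [])[:1])  # peek at the first record, assuming the endpoint still answers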
Case 2, Method 1: simulate pull-down scrolling with Selenium

from selenium import webdriver
import re
import time

browser = webdriver.Chrome()
browser.get('https://www.csdn.net/')
browser.implicitly_wait(10)

for i in range(5):  # pull down 5 times; increase the count to fetch more items
    browser.execute_script('window.scrollTo(0, document.body.scrollHeight)')  # execute_script drags the scrollbar to the very bottom
    time.sleep(5)  # sleep a little to absorb network latency and rendering lag

data = []
pattern = re.compile('<li.*?blog".*?>.*?title">.*?<a.*?>(.*?)</a>.*?<dd.*?name">.*?<a.*?blank">(.*?)</a>'
                     '.*?<span.*?num">(.*?)</span>.*?text">(.*?)</span>.*?</li>', re.S)
items = re.findall(pattern, browser.page_source)  # the page source after the 5 pulls
for item in items:
    data.append({
        'Title': item[0].strip(),
        'Author': item[1].strip(),
        'ReadNum': item[2] + item[3]
    })
print(data)
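Five pulls is arbitrary: it may stop too early, or keep scrolling after the feed is exhausted. A common alternative, sketched here with an assumed 3-second render delay, is to scroll until document.body.scrollHeight stops growing, i.e. until no new content is appended:

import time
from selenium import webdriver

browser = webdriver.Chrome()
browser.get('https://www.csdn.net/')

last_height = browser.execute_script('return document.body.scrollHeight')
while True:
    browser.execute_script('window.scrollTo(0, document.body.scrollHeight)')
    time.sleep(3)  # assumed delay for the Ajax batch to arrive and render
    new_height = browser.execute_script('return document.body.scrollHeight')
    if new_height == last_height:  # height stopped growing: nothing new was appended
        break
    last_height = new_height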
Method 2: find the real request address with the browser's developer tools (inspect element)

import requests

headers = {'cookie': 'uuid_tt_dd=3844871280714138949_20171108; kd_user_id=e61e2f88-9c4f-4cf7-88c7-68213cac17f7; UN=qq_40963426; BT=1521452212412; Hm_ct_6bcd52f51e9b3dce32bec4a3997715ac=1788*1*PC_VC; smidV2=20180626144357b069d2909d23ff73c3bc90ce183c8c57003acfcec7f57dd70; __utma=17226283.14643699.1533350144.1533350144.1534431588.2; __utmz=17226283.1533350144.1.1.utmcsr=zhuanlan.zhihu.com|utmccn=(referral)|utmcmd=referral|utmcct=/p/39165199/edit; TY_SESSION_ID=f49bdedd-c1d4-4f86-b254-22ab2e8f02f6; ViewMode=contents; UM_distinctid=165471259cb3e-02c5602643907b-5d4e211f-100200-165471259cc86; dc_session_id=10_1534471423488.794042; dc_tos=pdlzzq; Hm_lvt_6bcd52f51e9b3dce32bec4a3997715ac=1534515139,1534515661,1534515778,1534515830; Hm_lpvt_6bcd52f51e9b3dce32bec4a3997715ac=1534515830; ADHOC_MEMBERSHIP_CLIENT_ID1.0=d480c872-d4e9-33d9-d0e5-f076fc76aa83',
           'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'}

def get_page():
    r = requests.get('https://www.csdn.net/api/articles?type=more&category=home&shown_offset=1534516237069160', headers=headers)
    d = r.json()  # Ajax endpoints usually return JSON; .json(), built into requests, parses it into a dict
    articles = d['articles']  # the list of article dicts
    for article in articles:
        yield {
            'article': article['title'],
            'Link': article['user_url'],
            'View': article['views']
        }

for i in get_page():
    print(i)
# There should be logic here for fetching titles from other pages, but that part is not solved yet.
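For the unsolved pagination above, one direction worth testing — purely a hypothesis, since the endpoint's paging behavior is undocumented here — is that shown_offset looks like a microsecond timestamp, so repeated requests with an updated offset plus de-duplication by title might walk the feed:

import time
import requests

def get_articles(shown_offset):
    # hypothetical paging: whether the endpoint really keys on a
    # microsecond timestamp is an unverified assumption
    params = {'type': 'more', 'category': 'home', 'shown_offset': shown_offset}
    r = requests.get('https://www.csdn.net/api/articles',
                     headers=headers, params=params)  # reuses the headers dict defined above
    return r.json()['articles']

seen = set()
offset = int(time.time() * 1_000_000)  # current time in microseconds (assumption)
for _ in range(3):  # three trial requests
    for article in get_articles(offset):
        if article['title'] not in seen:  # drop repeats across requests
            seen.add(article['title'])
            print(article['title'])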
Reference for Case 2: "ajax動態(tài)加載網(wǎng)頁抓取" (Ajax dynamically loaded page scraping).