Table of Contents
1. Single-threaded crawler
2. Optimizing into a multithreaded crawler
3. Further optimization with asyncio
4. Saving to a MySQL database
(1) Creating the table
(2) Writing the data to the database
5. Final result (phone numbers redacted)

Python Web Scraping: How to Fetch City Rental Listings?

May 07, 2023, 08:34 PM
python

    The plan: write a single-threaded crawler first, confirm it scrapes successfully, then optimize it with multithreading, and finally store the results in a database.

    Scraping rental listings in Zhengzhou serves as the example.

    Note: this hands-on project is for learning purposes only. To avoid putting too much load on the site, please reduce the num value in the code and keep the thread counts small.
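    One simple way to follow that advice, sketched here as a supplement (the Throttle class below is not part of the original code), is to enforce a minimum delay before every request:

```python
import time
import threading

class Throttle:
    """Enforce a minimum delay between successive requests (shared across threads)."""
    def __init__(self, delay):
        self.delay = delay
        self.lock = threading.Lock()  # so crawler threads don't race on `last`
        self.last = 0.0

    def wait(self):
        with self.lock:
            pause = self.delay - (time.monotonic() - self.last)
            if pause > 0:
                time.sleep(pause)
            self.last = time.monotonic()

throttle = Throttle(1.0)  # at most roughly one request per second
# call throttle.wait() immediately before each session.get(...) in the crawler
```

    Calling wait() from every worker keeps the overall request rate bounded no matter how many threads are running.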

    1. Single-threaded crawler

    # Use a requests Session instead of bare requests calls
    # Parsing is done with bs4
    import requests
    # from lxml import etree    # alternative: parse with XPath
    from bs4 import BeautifulSoup
    from urllib import parse
    import re
    import time
     
    headers = {
        'referer': 'https://zz.zu.fang.com/',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36',
        'cookie': 'global_cookie=ffzvt3kztwck05jm6twso2wjw18kl67hqft; city=zz; integratecover=1; __utma=147393320.427795962.1613371106.1613371106.1613371106.1; __utmc=147393320; __utmz=147393320.1613371106.1.1.utmcsr=zz.fang.com|utmccn=(referral)|utmcmd=referral|utmcct=/; __utmt_t0=1; __utmt_t1=1; __utmt_t2=1; ASP.NET_SessionId=aamzdnhzct4i5mx3ak4cyoyp; Rent_StatLog=23d82b94-13d6-4601-9019-ce0225c092f6; Captcha=61584F355169576F3355317957376E4F6F7552365351342B7574693561766E63785A70522F56557370586E3376585853346651565256574F37694B7074576B2B34536C5747715856516A4D3D; g_sourcepage=zf_fy%5Elb_pc; unique_cookie=U_ffzvt3kztwck05jm6twso2wjw18kl67hqft*6; __utmb=147393320.12.10.1613371106'
    }
    data={
        'agentbid':''
    }
     
    session = requests.session()
    session.headers = headers
     
    # Fetch a page
    def getHtml(url):
        try:
            res = session.get(url)  # use res, not re, to avoid shadowing the re module
            res.encoding = res.apparent_encoding
            return res.text
        except requests.RequestException as e:
            print(e)
     
    # Get the total number of pages
    def getNum(text):
        soup = BeautifulSoup(text, 'lxml')
        txt = soup.select('.fanye .txt')[0].text
        # pull the number out of "共**页" ("** pages in total")
        num = int(re.search(r'\d+', txt).group(0))  # cast to int so range() accepts it later
        return num
     
    # Collect the detail-page links
    def getLink(text):
        soup=BeautifulSoup(text,'lxml')
        links=soup.select('.title a')
        for link in links:
            href=parse.urljoin('https://zz.zu.fang.com/',link['href'])
            hrefs.append(href)
     
    # Parse a detail page
    def parsePage(url):
        res = session.get(url)
        if res.status_code == 200:
            res.encoding = res.apparent_encoding
            soup = BeautifulSoup(res.text, 'lxml')
            try:
                title = soup.select('div .title')[0].text.strip().replace(' ', '')
                price = soup.select('div .trl-item')[0].text.strip()
                block = soup.select('.rcont #agantzfxq_C02_08')[0].text.strip()
                building = soup.select('.rcont #agantzfxq_C02_07')[0].text.strip()
                try:
                    address = soup.select('.trl-item2 .rcont')[2].text.strip()
                except IndexError:
                    address = soup.select('.trl-item2 .rcont')[1].text.strip()
                detail1 = soup.select('.clearfix')[4].text.strip().replace('\n\n\n', ',').replace('\n', '')
                detail2 = soup.select('.clearfix')[5].text.strip().replace('\n\n\n', ',').replace('\n', '')
                detail = detail1 + detail2
                name = soup.select('.zf_jjname')[0].text.strip()
                buserid = re.search(r"buserid: '(\d+)'", res.text).group(1)
                phone = getPhone(buserid)
                print(title, price, block, building, address, detail, name, phone)
                house = (title, price, block, building, address, detail, name, phone)
                info.append(house)
            except Exception:
                pass
        else:
            print(res.status_code, res.text)  # res, not the re module
     
    # Get the agent's virtual phone number
    def getPhone(buserid):
        url='https://zz.zu.fang.com/RentDetails/Ajax/GetAgentVirtualMobile.aspx'
        data['agentbid']=buserid
        res=session.post(url,data=data)
        if res.status_code==200:
            return res.text
        else:
            print(res.status_code)
            return
     
    if __name__ == '__main__':
        start_time=time.time()
        hrefs=[]
        info=[]
        init_url = 'https://zz.zu.fang.com/house/'
        num=getNum(getHtml(init_url))
        for i in range(0,num):
            url = f'https://zz.zu.fang.com/house/i3{i+1}/'
            text=getHtml(url)
            getLink(text)
        print(hrefs)
        for href in hrefs:
            parsePage(href)
     
        print("Fetched %d records" % len(info))
        print("Elapsed: {}".format(time.time()-start_time))
        session.close()
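    The page count in getNum comes from a regex that pulls the first run of digits out of the pagination text, which looks like 「共34页」 ("34 pages in total"). A standalone check with a made-up sample string — note that re.search returns a string, so it must be cast to int before feeding range():

```python
import re

txt = '共34页'  # sample pagination text, invented for illustration
num = int(re.search(r'\d+', txt).group(0))
print(num)  # → 34
```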

    2. Optimizing into a multithreaded crawler

    # Use a requests Session instead of bare requests calls
    # Parsing is done with bs4
    # Concurrency via concurrent.futures
    import requests
    # from lxml import etree    # alternative: parse with XPath
    from bs4 import BeautifulSoup
    from concurrent.futures import ThreadPoolExecutor
    from urllib import parse
    import re
    import time
     
    headers = {
        'referer': 'https://zz.zu.fang.com/',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36',
        'cookie': 'global_cookie=ffzvt3kztwck05jm6twso2wjw18kl67hqft; integratecover=1; city=zz; keyWord_recenthousezz=%5b%7b%22name%22%3a%22%e6%96%b0%e5%af%86%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014868%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e4%ba%8c%e4%b8%83%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014864%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e9%83%91%e4%b8%9c%e6%96%b0%e5%8c%ba%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a0842%2f%22%2c%22sort%22%3a1%7d%5d; __utma=147393320.427795962.1613371106.1613558547.1613575774.5; __utmc=147393320; __utmz=147393320.1613575774.5.4.utmcsr=zz.fang.com|utmccn=(referral)|utmcmd=referral|utmcct=/; ASP.NET_SessionId=vhrhxr1tdatcc1xyoxwybuwv; g_sourcepage=zf_fy%5Elb_pc; Captcha=4937566532507336644D6557347143746B5A6A6B4A7A48445A422F2F6A51746C67516F31357446573052634562725162316152533247514250736F72775566574A2B33514357304B6976343D; __utmt_t0=1; __utmt_t1=1; __utmt_t2=1; __utmb=147393320.9.10.1613575774; unique_cookie=U_0l0d1ilf1t0ci2rozai9qi24k1pkl9lcmrs*4'
    }
    data={
        'agentbid':''
    }
     
    session = requests.session()
    session.headers = headers
     
    # Fetch a page
    def getHtml(url):
        res = session.get(url)
        if res.status_code==200:
            res.encoding = res.apparent_encoding
            return res.text
        else:
            print(res.status_code)
     
    # Get the total number of pages
    def getNum(text):
        soup = BeautifulSoup(text, 'lxml')
        txt = soup.select('.fanye .txt')[0].text
        # pull the number out of "共**页" ("** pages in total")
        num = int(re.search(r'\d+', txt).group(0))  # cast to int so range() accepts it later
        return num
     
    # Collect the detail-page links
    def getLink(url):
        text=getHtml(url)
        soup=BeautifulSoup(text,'lxml')
        links=soup.select('.title a')
        for link in links:
            href=parse.urljoin('https://zz.zu.fang.com/',link['href'])
            hrefs.append(href)
     
    # Parse a detail page
    def parsePage(url):
        res = session.get(url)
        if res.status_code == 200:
            res.encoding = res.apparent_encoding
            soup = BeautifulSoup(res.text, 'lxml')
            try:
                title = soup.select('div .title')[0].text.strip().replace(' ', '')
                price = soup.select('div .trl-item')[0].text.strip()
                block = soup.select('.rcont #agantzfxq_C02_08')[0].text.strip()
                building = soup.select('.rcont #agantzfxq_C02_07')[0].text.strip()
                try:
                    address = soup.select('.trl-item2 .rcont')[2].text.strip()
                except IndexError:
                    address = soup.select('.trl-item2 .rcont')[1].text.strip()
                detail1 = soup.select('.clearfix')[4].text.strip().replace('\n\n\n', ',').replace('\n', '')
                detail2 = soup.select('.clearfix')[5].text.strip().replace('\n\n\n', ',').replace('\n', '')
                detail = detail1 + detail2
                name = soup.select('.zf_jjname')[0].text.strip()
                buserid = re.search(r"buserid: '(\d+)'", res.text).group(1)
                phone = getPhone(buserid)
                print(title, price, block, building, address, detail, name, phone)
                house = (title, price, block, building, address, detail, name, phone)
                info.append(house)
            except Exception:
                pass
        else:
            print(res.status_code, res.text)  # res, not the re module
     
    # Get the agent's virtual phone number
    def getPhone(buserid):
        url='https://zz.zu.fang.com/RentDetails/Ajax/GetAgentVirtualMobile.aspx'
        data['agentbid']=buserid
        res=session.post(url,data=data)
        if res.status_code==200:
            return res.text
        else:
            print(res.status_code)
            return
     
    if __name__ == '__main__':
        start_time=time.time()
        hrefs=[]
        info=[]
        init_url = 'https://zz.zu.fang.com/house/'
        num=getNum(getHtml(init_url))
        with ThreadPoolExecutor(max_workers=5) as t:
            for i in range(0,num):
                url = f'https://zz.zu.fang.com/house/i3{i+1}/'
                t.submit(getLink,url)
        print("Collected %d links" % len(hrefs))
        print(hrefs)
        with ThreadPoolExecutor(max_workers=30) as t:
            for href in hrefs:
                t.submit(parsePage,href)
        print("Fetched %d records" % len(info))
        print("Elapsed: {}".format(time.time()-start_time))
        session.close()
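    The worker threads above append to the shared hrefs and info lists, which is safe in CPython because list.append is atomic under the GIL. An alternative that avoids shared state entirely is to return results from each worker and collect them from the futures; a minimal sketch with a dummy fetch function standing in for getLink:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(page):
    # stand-in for getLink: returns its links instead of appending to a global
    return [f'link-{page}-{i}' for i in range(2)]

hrefs = []
with ThreadPoolExecutor(max_workers=5) as pool:
    futures = [pool.submit(fetch, p) for p in range(3)]
    for fut in as_completed(futures):
        hrefs.extend(fut.result())  # only the main thread touches hrefs

print(sorted(hrefs))  # → ['link-0-0', 'link-0-1', 'link-1-0', 'link-1-1', 'link-2-0', 'link-2-1']
```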

    3. Further optimization with asyncio

    # Use a requests Session instead of bare requests calls
    # Parsing is done with bs4
    # Concurrency via concurrent.futures
    import requests
    # from lxml import etree    # alternative: parse with XPath
    from bs4 import BeautifulSoup
    from concurrent.futures import ThreadPoolExecutor
    from urllib import parse
    import re
    import time
    import asyncio
     
    headers = {
        'referer': 'https://zz.zu.fang.com/',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36',
        'cookie': 'global_cookie=ffzvt3kztwck05jm6twso2wjw18kl67hqft; integratecover=1; city=zz; keyWord_recenthousezz=%5b%7b%22name%22%3a%22%e6%96%b0%e5%af%86%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014868%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e4%ba%8c%e4%b8%83%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014864%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e9%83%91%e4%b8%9c%e6%96%b0%e5%8c%ba%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a0842%2f%22%2c%22sort%22%3a1%7d%5d; __utma=147393320.427795962.1613371106.1613558547.1613575774.5; __utmc=147393320; __utmz=147393320.1613575774.5.4.utmcsr=zz.fang.com|utmccn=(referral)|utmcmd=referral|utmcct=/; ASP.NET_SessionId=vhrhxr1tdatcc1xyoxwybuwv; g_sourcepage=zf_fy%5Elb_pc; Captcha=4937566532507336644D6557347143746B5A6A6B4A7A48445A422F2F6A51746C67516F31357446573052634562725162316152533247514250736F72775566574A2B33514357304B6976343D; __utmt_t0=1; __utmt_t1=1; __utmt_t2=1; __utmb=147393320.9.10.1613575774; unique_cookie=U_0l0d1ilf1t0ci2rozai9qi24k1pkl9lcmrs*4'
    }
    data={
        'agentbid':''
    }
     
    session = requests.session()
    session.headers = headers
     
    # Fetch a page
    def getHtml(url):
        res = session.get(url)
        if res.status_code==200:
            res.encoding = res.apparent_encoding
            return res.text
        else:
            print(res.status_code)
     
    # Get the total number of pages
    def getNum(text):
        soup = BeautifulSoup(text, 'lxml')
        txt = soup.select('.fanye .txt')[0].text
        # pull the number out of "共**页" ("** pages in total")
        num = int(re.search(r'\d+', txt).group(0))  # cast to int so range() accepts it later
        return num
     
    # Collect the detail-page links
    def getLink(url):
        text=getHtml(url)
        soup=BeautifulSoup(text,'lxml')
        links=soup.select('.title a')
        for link in links:
            href=parse.urljoin('https://zz.zu.fang.com/',link['href'])
            hrefs.append(href)
     
    # Parse a detail page
    def parsePage(url):
        res = session.get(url)
        if res.status_code == 200:
            res.encoding = res.apparent_encoding
            soup = BeautifulSoup(res.text, 'lxml')
            try:
                title = soup.select('div .title')[0].text.strip().replace(' ', '')
                price = soup.select('div .trl-item')[0].text.strip()
                block = soup.select('.rcont #agantzfxq_C02_08')[0].text.strip()
                building = soup.select('.rcont #agantzfxq_C02_07')[0].text.strip()
                try:
                    address = soup.select('.trl-item2 .rcont')[2].text.strip()
                except IndexError:
                    address = soup.select('.trl-item2 .rcont')[1].text.strip()
                detail1 = soup.select('.clearfix')[4].text.strip().replace('\n\n\n', ',').replace('\n', '')
                detail2 = soup.select('.clearfix')[5].text.strip().replace('\n\n\n', ',').replace('\n', '')
                detail = detail1 + detail2
                name = soup.select('.zf_jjname')[0].text.strip()
                buserid = re.search(r"buserid: '(\d+)'", res.text).group(1)
                phone = getPhone(buserid)
                print(title, price, block, building, address, detail, name, phone)
                house = (title, price, block, building, address, detail, name, phone)
                info.append(house)
            except Exception:
                pass
        else:
            print(res.status_code, res.text)  # res, not the re module
     
    # Get the agent's virtual phone number
    def getPhone(buserid):
        url='https://zz.zu.fang.com/RentDetails/Ajax/GetAgentVirtualMobile.aspx'
        data['agentbid']=buserid
        res=session.post(url,data=data)
        if res.status_code==200:
            return res.text
        else:
            print(res.status_code)
            return
     
    # Thread pool that collects the detail links
    async def Pool1(num):
        loop = asyncio.get_event_loop()
        task = []
        with ThreadPoolExecutor(max_workers=5) as t:
            for i in range(num):
                url = f'https://zz.zu.fang.com/house/i3{i+1}/'
                task.append(loop.run_in_executor(t, getLink, url))
        await asyncio.gather(*task)  # explicitly wait for every job
     
    # Thread pool that parses the detail pages
    async def Pool2(hrefs):
        loop = asyncio.get_event_loop()
        task = []
        with ThreadPoolExecutor(max_workers=30) as t:
            for href in hrefs:
                task.append(loop.run_in_executor(t, parsePage, href))
        await asyncio.gather(*task)  # explicitly wait for every job
     
    if __name__ == '__main__':
        start_time=time.time()
        hrefs=[]
        info=[]
        task=[]
        init_url = 'https://zz.zu.fang.com/house/'
        num=getNum(getHtml(init_url))
        loop = asyncio.get_event_loop()
        loop.run_until_complete(Pool1(num))
        print("Collected %d links" % len(hrefs))
        print(hrefs)
        loop.run_until_complete(Pool2(hrefs))
        loop.close()
        print("Fetched %d records" % len(info))
        print("Elapsed: {}".format(time.time()-start_time))
        session.close()
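    When blocking calls are farmed out via run_in_executor, the idiomatic shape is to await the resulting futures with asyncio.gather, which also preserves submission order. A minimal sketch with a dummy work function in place of the blocking getLink/parsePage calls:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # stand-in for a blocking crawler call
    return n * n

async def main():
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=3) as pool:
        tasks = [loop.run_in_executor(pool, work, n) for n in range(5)]
        return await asyncio.gather(*tasks)  # wait for all workers, keep order

print(asyncio.run(main()))  # → [0, 1, 4, 9, 16]
```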

    4. Saving to a MySQL database

    (1) Creating the table

    from sqlalchemy import create_engine
    from sqlalchemy import String, Integer, Column, Text
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.orm import scoped_session  # avoids thread-safety issues in the multithreaded crawler
    from sqlalchemy.ext.declarative import declarative_base
     
    BASE = declarative_base()  # declarative base class
    engine = create_engine(
        "mysql+pymysql://root:root@127.0.0.1:3306/pytest?charset=utf8",
        max_overflow=300,  # extra connections allowed beyond pool_size
        pool_size=100,  # connection pool size
        echo=False,  # no SQL debug output
    )
     
     
    class House(BASE):
        __tablename__ = 'house'
        id = Column(Integer, primary_key=True, autoincrement=True)
        title=Column(String(200))
        price=Column(String(200))
        block=Column(String(200))
        building=Column(String(200))
        address=Column(String(200))
        detail=Column(Text())
        name=Column(String(20))
        phone=Column(String(20))
     
     
    BASE.metadata.create_all(engine)
    Session = sessionmaker(engine)
    sess = scoped_session(Session)
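    To sanity-check the model without a running MySQL server, the same declarative setup can be exercised against an in-memory SQLite database. This is only a sketch: SQLite stands in for MySQL, the model is shortened to two columns, and the demo row is invented for illustration:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker, scoped_session, declarative_base  # SQLAlchemy 1.4+

BASE = declarative_base()

class House(BASE):
    __tablename__ = 'house'
    id = Column(Integer, primary_key=True, autoincrement=True)
    title = Column(String(200))
    price = Column(String(200))

engine = create_engine('sqlite:///:memory:')  # stand-in for the MySQL URL
BASE.metadata.create_all(engine)
sess = scoped_session(sessionmaker(bind=engine))

sess.add(House(title='demo listing', price='1200'))
sess.commit()
print(sess.query(House).count())  # → 1
```

    Swapping the engine URL back to the mysql+pymysql string restores the real setup unchanged.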

    (2) Writing the data to the database

    # Use a requests Session instead of bare requests calls
    # Parsing is done with bs4
    # Concurrency via concurrent.futures
    import requests
    from bs4 import BeautifulSoup
    from concurrent.futures import ThreadPoolExecutor
    from urllib import parse
    from mysqldb import sess, House
    import re
    import time
    import asyncio
     
    headers = {
        'referer': 'https://zz.zu.fang.com/',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36',
        'cookie': 'global_cookie=ffzvt3kztwck05jm6twso2wjw18kl67hqft; integratecover=1; city=zz; __utmc=147393320; ASP.NET_SessionId=vhrhxr1tdatcc1xyoxwybuwv; __utma=147393320.427795962.1613371106.1613575774.1613580597.6; __utmz=147393320.1613580597.6.5.utmcsr=zz.fang.com|utmccn=(referral)|utmcmd=referral|utmcct=/; __utmt_t0=1; __utmt_t1=1; __utmt_t2=1; Rent_StatLog=c158b2a7-4622-45a9-9e69-dcf6f42cf577; keyWord_recenthousezz=%5b%7b%22name%22%3a%22%e4%ba%8c%e4%b8%83%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014864%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e9%83%91%e4%b8%9c%e6%96%b0%e5%8c%ba%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a0842%2f%22%2c%22sort%22%3a1%7d%2c%7b%22name%22%3a%22%e7%bb%8f%e5%bc%80%22%2c%22detailName%22%3a%22%22%2c%22url%22%3a%22%2fhouse-a014871%2f%22%2c%22sort%22%3a1%7d%5d; g_sourcepage=zf_fy%5Elb_pc; Captcha=6B65716A41454739794D666864397178613772676C75447A4E746C657144775A347A6D42554F446532357649643062344F6976756E563450554E59594B7833712B413579506C4B684958343D; unique_cookie=U_0l0d1ilf1t0ci2rozai9qi24k1pkl9lcmrs*14; __utmb=147393320.21.10.1613580597'
    }
    data={
        'agentbid':''
    }
     
    session = requests.session()
    session.headers = headers
     
    # Fetch a page
    def getHtml(url):
        res = session.get(url)
        if res.status_code==200:
            res.encoding = res.apparent_encoding
            return res.text
        else:
            print(res.status_code)
     
    # Get the total number of pages
    def getNum(text):
        soup = BeautifulSoup(text, 'lxml')
        txt = soup.select('.fanye .txt')[0].text
        # pull the number out of "共**页" ("** pages in total")
        num = int(re.search(r'\d+', txt).group(0))  # cast to int so range() accepts it later
        return num
     
    # Collect the detail-page links
    def getLink(url):
        text=getHtml(url)
        soup=BeautifulSoup(text,'lxml')
        links=soup.select('.title a')
        for link in links:
            href=parse.urljoin('https://zz.zu.fang.com/',link['href'])
            hrefs.append(href)
     
    # Parse a detail page
    def parsePage(url):
        res = session.get(url)
        if res.status_code == 200:
            res.encoding = res.apparent_encoding
            soup = BeautifulSoup(res.text, 'lxml')
            try:
                title = soup.select('div .title')[0].text.strip().replace(' ', '')
                price = soup.select('div .trl-item')[0].text.strip()
                block = soup.select('.rcont #agantzfxq_C02_08')[0].text.strip()
                building = soup.select('.rcont #agantzfxq_C02_07')[0].text.strip()
                try:
                    address = soup.select('.trl-item2 .rcont')[2].text.strip()
                except IndexError:
                    address = soup.select('.trl-item2 .rcont')[1].text.strip()
                detail1 = soup.select('.clearfix')[4].text.strip().replace('\n\n\n', ',').replace('\n', '')
                detail2 = soup.select('.clearfix')[5].text.strip().replace('\n\n\n', ',').replace('\n', '')
                detail = detail1 + detail2
                name = soup.select('.zf_jjname')[0].text.strip()
                buserid = re.search(r"buserid: '(\d+)'", res.text).group(1)
                phone = getPhone(buserid)
                print(title, price, block, building, address, detail, name, phone)
                house = (title, price, block, building, address, detail, name, phone)
                info.append(house)
                try:
                    house_data = House(
                        title=title,
                        price=price,
                        block=block,
                        building=building,
                        address=address,
                        detail=detail,
                        name=name,
                        phone=phone
                    )
                    sess.add(house_data)
                    sess.commit()
                except Exception as e:
                    print(e)  # log the error
                    sess.rollback()  # roll back the failed transaction
            except Exception:
                pass
        else:
            print(res.status_code, res.text)  # res, not the re module
     
    # Get the agent's virtual phone number
    def getPhone(buserid):
        url='https://zz.zu.fang.com/RentDetails/Ajax/GetAgentVirtualMobile.aspx'
        data['agentbid']=buserid
        res=session.post(url,data=data)
        if res.status_code==200:
            return res.text
        else:
            print(res.status_code)
            return
     
    # Thread pool that collects the detail links
    async def Pool1(num):
        loop = asyncio.get_event_loop()
        task = []
        with ThreadPoolExecutor(max_workers=5) as t:
            for i in range(num):
                url = f'https://zz.zu.fang.com/house/i3{i+1}/'
                task.append(loop.run_in_executor(t, getLink, url))
        await asyncio.gather(*task)  # explicitly wait for every job
     
    # Thread pool that parses the detail pages
    async def Pool2(hrefs):
        loop = asyncio.get_event_loop()
        task = []
        with ThreadPoolExecutor(max_workers=30) as t:
            for href in hrefs:
                task.append(loop.run_in_executor(t, parsePage, href))
        await asyncio.gather(*task)  # explicitly wait for every job
     
    if __name__ == '__main__':
        start_time=time.time()
        hrefs=[]
        info=[]
        task=[]
        init_url = 'https://zz.zu.fang.com/house/'
        num=getNum(getHtml(init_url))
        loop = asyncio.get_event_loop()
        loop.run_until_complete(Pool1(num))
        print("Collected %d links" % len(hrefs))
        print(hrefs)
        loop.run_until_complete(Pool2(hrefs))
        loop.close()
        print("Fetched %d records" % len(info))
        print("Elapsed: {}".format(time.time()-start_time))
        session.close()

    5. Final result (phone numbers redacted)

    (Screenshot: the scraped rental records, with the phone numbers redacted.)

