How to create a LinkedIn job scraper in Python with Crawlee

Patricia Arquette
Published: 2024-10-18 14:21:02

Introduction

In this article, we will build a web application that scrapes job postings from LinkedIn using Crawlee and Streamlit.

We will create a LinkedIn job scraper in Python with Crawlee for Python to extract the company name, job title, time of posting, and link to the job posting from user input received dynamically through the web application.

Note
One of our community members wrote this blog as a contribution to the Crawlee Blog. If you would like to contribute blogs like this to Crawlee Blog, please reach out to us on our Discord channel.

By the end of this tutorial, you'll have a fully functional web application that you can use to scrape job postings from LinkedIn.


Let's get started.


Prerequisites

Let's start by creating a new Crawlee for Python project with the following command:

pipx run crawlee create linkedin-scraper

Select PlaywrightCrawler in the terminal when Crawlee asks for it.

After installation, Crawlee for Python will create boilerplate code for you. Change directory (cd) into the project folder and run this command to install the dependencies:

poetry install

We will begin editing the files provided to us by Crawlee so we can build our scraper.

Note
Before going forward, if you enjoy reading this blog, we would be really happy if you gave Crawlee for Python a star on GitHub!

Star us on GitHub ⭐️

Building the LinkedIn job scraper in Python with Crawlee

In this section, we will be building the scraper using the Crawlee for Python package. To learn more about Crawlee, check out their documentation.

1. Inspecting the LinkedIn job search page

Open LinkedIn in your web browser and sign out of the website (if you are already logged in to an account). You should see an interface like this.


Navigate to the jobs section, search for a job and location of your choice, and copy the URL.


You should have something like this:

https://www.linkedin.com/jobs/search?keywords=Backend%20Developer&location=Canada&geoId=101174742&trk=public_jobs_jobs-search-bar_search-submit&position=1&pageNum=0

We are going to focus on the search parameters, which is the part of the URL after the "?". The keywords and location parameters are the most important ones for us.

The job title the user supplies will go into the keywords parameter, while the location the user supplies will go into the location parameter. Finally, the geoId parameter will be removed while we keep the other parameters unchanged.
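As a quick illustration of this parameter handling, here is a small standalone sketch using Python's standard urllib.parse module that takes the example search URL above and drops geoId while keeping the other parameters intact:

```python
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse

# Example search URL copied from the browser (same shape as above)
url = (
    "https://www.linkedin.com/jobs/search?keywords=Backend%20Developer"
    "&location=Canada&geoId=101174742"
    "&trk=public_jobs_jobs-search-bar_search-submit&position=1&pageNum=0"
)

parsed = urlparse(url)
params = parse_qs(parsed.query)
params.pop("geoId", None)  # drop geoId, keep the other parameters unchanged

# Rebuild the URL with the filtered, re-encoded query string
cleaned = urlunparse(parsed._replace(query=urlencode(params, doseq=True)))
print(cleaned)
```

This mirrors what the scraper does below, except the scraper builds the query string directly from user input instead of parsing a copied URL.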

We are going to be making changes to our main.py file. Copy and paste the code below into your main.py file.

from urllib.parse import urlencode, urljoin

from crawlee.playwright_crawler import PlaywrightCrawler

from .routes import router

async def main(title: str, location: str, data_name: str) -> None:
    base_url = "https://www.linkedin.com/jobs/search"

    # URL encode the parameters
    params = {
        "keywords": title,
        "location": location,
        "trk": "public_jobs_jobs-search-bar_search-submit",
        "position": "1",
        "pageNum": "0"
    }

    encoded_params = urlencode(params)

    # Encode parameters into a query string
    query_string = '?' + encoded_params

    # Combine base URL with the encoded query string
    encoded_url = urljoin(base_url, "") + query_string

    # Initialize the crawler
    crawler = PlaywrightCrawler(
        request_handler=router,
    )

    # Run the crawler with the initial list of URLs
    await crawler.run([encoded_url])

    # Save the data in a CSV file
    output_file = f"{data_name}.csv"
    await crawler.export_data(output_file)

Now that we have the URL encoded, the next step is to adjust the generated router to handle LinkedIn job postings.

2. Routing your crawler

We will be making use of two handlers for your application:

  • default_handler

The default_handler handles the start URL.

  • job_listing

The job_listing handler extracts the individual job details.

The Playwright crawler is going to crawl through the job posting page and extract the links to all job postings on the page.


When you examine the job postings, you will discover that the job posting links are inside an ordered list with a class named jobs-search__results-list. We will then extract the links using the Playwright locator object and add them to the job_listing route for processing.

from crawlee import Request
from crawlee.playwright_crawler import PlaywrightCrawlingContext
from crawlee.router import Router

router = Router[PlaywrightCrawlingContext]()


@router.default_handler
async def default_handler(context: PlaywrightCrawlingContext) -> None:
    """Default request handler."""

    # select all the links for the job postings on the page
    hrefs = await context.page.locator(
        'ul.jobs-search__results-list a'
    ).evaluate_all("links => links.map(link => link.href)")

    # add all the links to the job_listing route
    await context.add_requests(
        [Request.from_url(rec, label='job_listing') for rec in hrefs]
    )

Now that we have the job listings, the next step is to scrape their details.

We are going to extract each job's title, company name, time of posting, and the link to the job posting. Open your dev tools to find the CSS selector for each element.


After scraping each of the listings, we will clean up the extracted text and push the data to local storage using the context.push_data function.

import re

@router.handler('job_listing')
async def listing_handler(context: PlaywrightCrawlingContext) -> None:
    """Handler for job listings."""

    await context.page.wait_for_load_state('load')

    job_title = await context.page.locator(
        'div.top-card-layout__entity-info h1.top-card-layout__title'
    ).text_content()

    company_name = await context.page.locator('span.topcard__flavor a').text_content()

    time_of_posting = await context.page.locator(
        'div.topcard__flavor-row span.posted-time-ago__text'
    ).text_content()

    await context.push_data(
        {
            # collapse runs of whitespace and newlines in the extracted text
            'title': re.sub(r'\s+', ' ', job_title).strip(),
            'Company name': re.sub(r'\s+', ' ', company_name).strip(),
            'Time of posting': re.sub(r'\s+', ' ', time_of_posting).strip(),
            'url': context.request.loaded_url,
        }
    )

3. Creating your application

For this project, we will be using Streamlit for the web application. Before we proceed, we are going to create a new file named app.py in your project directory. In addition, ensure you have Streamlit installed in your global Python environment before proceeding with this section.

import streamlit as st
import subprocess

# Streamlit form for inputs 
st.title("LinkedIn Job Scraper")

with st.form("scraper_form"):
    title = st.text_input("Job Title", value="backend developer")
    location = st.text_input("Job Location", value="newyork")
    data_name = st.text_input("Output File Name", value="backend_jobs")

    submit_button = st.form_submit_button("Run Scraper")

if submit_button:

    # Run the scraping script with the form inputs
    command = f"""poetry run python -m linkedin_scraper --title "{title}" --location "{location}" --data_name "{data_name}" """

    with st.spinner("Crawling in progress..."):
         # Execute the command and display the results
        result = subprocess.run(command, shell=True, capture_output=True, text=True)

        st.write("Script Output:")
        st.text(result.stdout)

        if result.returncode == 0:
            st.success(f"Data successfully saved in {data_name}.csv")
        else:
            st.error(f"Error: {result.stderr}")

The Streamlit web application takes in the user's input and uses the Python Subprocess package to run the Crawlee scraping script.
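As a hedged aside (not part of the original app): passing the command to subprocess as an argument list instead of a single shell string avoids quoting problems if a user types quotes into the form. A minimal sketch, where `build_command` is a hypothetical helper:

```python
import subprocess

def build_command(title: str, location: str, data_name: str) -> list[str]:
    """Build the scraper invocation as an argument list; no shell quoting needed."""
    return [
        "poetry", "run", "python", "-m", "linkedin_scraper",
        "--title", title,
        "--location", location,
        "--data_name", data_name,
    ]

# Then run it without shell=True:
# result = subprocess.run(build_command(title, location, data_name),
#                         capture_output=True, text=True)
```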

4. Testing your app

Before we test the application, we need to make a little modification to the __main__ file in order for it to accommodate the command line arguments.

import asyncio
import argparse

from .main import main

def get_args():
    # ArgumentParser object to capture command-line arguments
    parser = argparse.ArgumentParser(description="Crawl LinkedIn job listings")


    # Define the arguments
    parser.add_argument("--title", type=str, required=True, help="Job title")
    parser.add_argument("--location", type=str, required=True, help="Job location")
    parser.add_argument("--data_name", type=str, required=True, help="Name for the output CSV file")


    # Parse the arguments
    return parser.parse_args()

if __name__ == '__main__':
    args = get_args()
    # Run the main function with the parsed command-line arguments
    asyncio.run(main(args.title, args.location, args.data_name))

We will start the Streamlit application by running this code in the terminal:

streamlit run app.py

This is what the application should look like in the browser:


You will get this interface showing you that the scraping has been completed:


To access the scraped data, go to your project directory and open the CSV file.
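If you prefer to inspect the results programmatically rather than in a spreadsheet, the CSV can be loaded with the standard library. A small sketch (`load_results` is a hypothetical helper; the file name is whatever you entered in the form):

```python
import csv

def load_results(path: str) -> list[dict]:
    """Read the scraped CSV into a list of row dictionaries."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

# e.g.:
# rows = load_results("backend_jobs.csv")
# print(len(rows), "postings scraped")
```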


You should have something like this as the output of your CSV file.

Conclusion

In this tutorial, we have learned how to build an application that can scrape job posting data from LinkedIn using Crawlee. Have fun building great scraping applications with Crawlee.

You can find the complete working crawler code in the GitHub repository.

Follow Crawlee for more content like this.


Crawlee

Crawlee is a web scraping and browser automation library. It helps you build reliable crawlers. Fast.

Thank you!

Source: dev.to