
Query Baidu SEO information with Python

Oct 18, 2016, 10:30 AM

A simple Python function to query Baidu keyword rankings. Features:

1. Random User-Agent selection

2. Simple to use: just call getRank(keyword, domain) directly

3. Character-encoding conversion is handled, so encoding should not be a problem

4. Rich results: not only the ranking, but also the title, URL, and snapshot date of the matching result, which covers typical SEO needs

Disadvantages:

Single-threaded, so it is slow (a possible multi-threaded variant is sketched after the script)

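For example, once the functions in the full script below are defined, a single lookup can be run like this (the keyword, rank, title, and date shown in the comments are purely illustrative, not real output):

#illustrative call; getRank() is defined in the script below
print getRank('python tutorial', 'www.douban.com')
#if the domain appears in the top 100 results, this prints something like:
#    python tutorial, rank 3, Some Page Title, 2016-10-12, http://www.douban.com/...
#otherwise it prints:
#    >100
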
#coding=utf-8
  
import requests
import BeautifulSoup   #BeautifulSoup 3; the script as a whole targets Python 2
import re
import random
  
def decodeAnyWord(w):   #decode a byte string to unicode, trying UTF-8 first, falling back to GB2312
    try:
        w.decode('utf-8')
    except:
        w = w.decode('gb2312')
    else:
        w = w.decode('utf-8')
    return w
  
def createURL(checkWord):   #create baidu URL with search words
    checkWord = checkWord.strip()
    checkWord = checkWord.replace(' ', '+').replace('\n', '')
    baiduURL = 'http://www.baidu.com/s?wd=%s&rn=100' % checkWord
    return baiduURL
  
def getContent(baiduURL):   #get the content of the serp
    uaList = ['Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1;+.NET+CLR+1.1.4322;+TencentTraveler)',
    'Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729)',
    'Mozilla/5.0+(Windows+NT+5.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.89+Safari/537.1',
    'Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)',
    'Mozilla/5.0+(Windows+NT+6.1;+rv:11.0)+Gecko/20100101+Firefox/11.0',
    'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+SV1)',
    'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+GTB7.1;+.NET+CLR+2.0.50727)',
    'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+KB974489)']
    headers = {'User-Agent': random.choice(uaList)}
    ipList = ['202.43.188.13:8080',     #sample proxies -- replace with your own working proxies
    '80.243.185.168:1177',
    '218.108.85.59:81']
    proxies = {'http': 'http://%s' % random.choice(ipList)}
    r = requests.get(baiduURL, headers = headers, proxies = proxies)
    return r.content
  
def getLastURL(rawurl): #get the final URL when there are redirects
    r = requests.get(rawurl)
    return r.url
  
def getAtext(atext):    #get the text between <a> and </a>
    pat = re.compile(r'<a .*?>(.*?)</a>')
    match = pat.findall(atext)
    pureText = match[0].replace('<em>', '').replace('</em>', '')
    return pureText
  
def getCacheDate(t):    #get the date of the cache
    pat = re.compile(r'<span class="g">.*?(\d{4}-\d{1,2}-\d{1,2})  </span>')
    match = pat.findall(t)
    cacheDate = match[0]
    return cacheDate
  
def getRank(checkWord, domain): #main routine
    checkWord = checkWord.replace('\n', '')
    checkWord = decodeAnyWord(checkWord)
    baiduURL = createURL(checkWord)
    cont = getContent(baiduURL)
    soup = BeautifulSoup.BeautifulSoup(cont)
    results = soup.findAll('table', {'class': 'result'})    #find all results in this page
    for result in results:
        checkData = unicode(result.find('span', {'class': 'g'}))
        if re.compile(r'^[^/]*%s.*?' % domain).match(checkData): #TODO: this regex should be improved
            nowRank = result['id']  #get the rank if the domain info matches

            resLink = result.find('h3').a
            resURL = resLink['href']
            domainURL = getLastURL(resURL)  #get the target URL
            resTitle = getAtext(unicode(resLink))   #get the title of the target page

            rescache = result.find('span', {'class': 'g'})
            cacheDate = getCacheDate(unicode(rescache)) #get the cache date of the target page

            res = u'%s, rank %s, %s, %s, %s' % (checkWord, nowRank, resTitle, cacheDate, domainURL)
            return res.encode('gb2312')
    else:
        return '>100'   #the domain is not in the top 100 results
  
domain = 'www.douban.com' #set the domain whose rankings you want to check

f = open('r.txt')   #r.txt holds the keywords to check, one per line
for w in f.readlines():
    print getRank(w, domain)

f.close()
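
The script above processes keywords one at a time. As a possible way around the single-threaded slowness noted earlier, the per-keyword lookups could be pushed through a small thread pool. This is only a minimal sketch: it assumes getRank() from the script above is available in the same file, and the rankAll() helper, pool size, and file name are illustrative additions, not part of the original script.

from multiprocessing.dummy import Pool as ThreadPool   #thread-based pool from the Python 2 standard library

def rankAll(path, domain, workers=5):
    #read the keywords, one per line, skipping empty lines
    f = open(path)
    words = [w for w in f.readlines() if w.strip()]
    f.close()
    #run getRank() for every keyword across a pool of worker threads
    pool = ThreadPool(workers)
    results = pool.map(lambda w: getRank(w, domain), words)
    pool.close()
    pool.join()
    for res in results:
        print res

#rankAll('r.txt', 'www.douban.com')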


