Every day you may perform many repetitive tasks: reading the news, sending emails, checking the weather, cleaning folders, and so on. With automated scripts you don't need to do them manually again and again, and to a certain extent Python is synonymous with automation.
Today I share 8 very useful Python automation scripts.
This script captures the text of a web page and then reads it aloud. When you want to listen to the news, it is not a bad choice.
The code has two parts: the first crawls the web-page text, and the second reads that text aloud through a text-to-speech tool.
Required third-party libraries:
Beautiful Soup - a classic HTML/XML parser, used to extract information from the crawled page.
requests - a very useful HTTP library for sending requests to web pages and fetching data.
pyttsx3 - converts text to speech and lets you control the rate, volume and voice.
import pyttsx3
import requests
from bs4 import BeautifulSoup

engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
newVoiceRate = 130  # reduce the speech rate
engine.setProperty('rate', newVoiceRate)
engine.setProperty('voice', voices[1].id)

def speak(audio):
    engine.say(audio)
    engine.runAndWait()

text = str(input("Paste article URL\n"))
res = requests.get(text)
soup = BeautifulSoup(res.text, 'html.parser')

articles = []
for paragraph in soup.select('p'):  # 'p' selects <p> tags; '.p' would only match class="p"
    articles.append(paragraph.getText().strip())
text = " ".join(articles)

speak(text)
# engine.save_to_file(text, 'test.mp3')  # if you want to save the speech as an audio file
engine.runAndWait()
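If you'd rather avoid a third-party parser for the scraping half, the paragraph extraction can also be sketched with the standard library's html.parser. The class and function names below are my own, and this is a minimal sketch rather than a drop-in replacement for BeautifulSoup:

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collects the text found inside <p> tags."""
    def __init__(self):
        super().__init__()
        self._in_p = False
        self._buffer = []
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p = True
            self._buffer = []

    def handle_endtag(self, tag):
        if tag == "p" and self._in_p:
            self._in_p = False
            text = "".join(self._buffer).strip()
            if text:
                self.paragraphs.append(text)

    def handle_data(self, data):
        if self._in_p:
            self._buffer.append(data)

def extract_paragraphs(html):
    """Return the text of every <p> element in the given HTML."""
    parser = ParagraphExtractor()
    parser.feed(html)
    return parser.paragraphs
```

This handles text that spans inline tags (e.g. `<b>` inside a paragraph) because each data chunk is buffered until the closing `</p>`.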
Data exploration is the first step of a data science project: you need to understand the basics of the data before you can analyze its deeper value.
Generally we use pandas, matplotlib and similar tools to explore data, but that means writing a lot of code ourselves. If you want to improve efficiency, Dtale is a good choice.
Dtale's signature feature is generating an automated analysis report with one line of code. It combines a Flask backend with a React frontend to give us an easy way to view and analyze pandas data structures.
We can use Dtale on Jupyter.
Required third-party libraries:
Dtale - automatically generates analysis reports.
### Importing seaborn for some sample datasets
import seaborn as sns

### Printing the inbuilt datasets of the seaborn library
print(sns.get_dataset_names())

### Loading the Titanic dataset
df = sns.load_dataset('titanic')

### Importing the library
import dtale

### Generating a quick summary
dtale.show(df)
This script helps us send emails in batches at regular intervals. The email content and attachments can also be customized, which is very practical.
Compared with email clients, the advantage of Python scripts is that they can deploy email services intelligently, in batches, and with high customization.
Required libraries:
email - the standard-library package for constructing email messages.
smtplib - the standard-library module for talking to SMTP servers. It defines an SMTP client session object that can send mail to any machine on the Internet running an SMTP or ESMTP listener.
pandas - a tool for data analysis and cleaning, used here to read the recipient list.
import smtplib
from email.message import EmailMessage
import pandas as pd

def send_email(remail, rsubject, rcontent):
    email = EmailMessage()                 # create an EmailMessage object
    email['from'] = 'The Pythoneer Here'   # sender
    email['to'] = remail                   # recipient
    email['subject'] = rsubject            # subject line
    email.set_content(rcontent)            # body of the email
    with smtplib.SMTP(host='smtp.gmail.com', port=587) as smtp:
        smtp.ehlo()                        # identify ourselves to the server
        smtp.starttls()                    # encrypt the connection
        smtp.login("deltadelta371@gmail.com", "delta@371")  # Gmail login and password
        smtp.send_message(email)           # send the email
        print("email sent to", remail)     # print a success message

if __name__ == '__main__':
    df = pd.read_excel('list.xlsx')
    for index, item in df.iterrows():
        email = item[0]
        subject = item[1]
        content = item[2]
        send_email(email, subject, content)
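The script assumes list.xlsx holds one row per recipient, with address, subject and body in the first three columns (inferred from how the loop indexes item[0], item[1] and item[2]). As a sketch, the same three-column table can be produced and checked with the stdlib csv module; swap in DataFrame.to_excel if you need the actual .xlsx the script reads:

```python
import csv

# One row per recipient: address, subject, body.
# This column layout is an assumption based on the loop above.
rows = [
    ["alice@example.com", "Hello Alice", "First message body"],
    ["bob@example.com", "Hello Bob", "Second message body"],
]

def write_recipient_list(path, rows):
    """Write the recipient table as CSV (a stand-in for the .xlsx)."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

def read_recipient_list(path):
    """Read the table back as a list of [address, subject, body] rows."""
    with open(path, newline="") as f:
        return list(csv.reader(f))
```

Keeping the table in a file rather than in code means non-programmers can edit the recipient list.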
This script converts a PDF into an audio file. The principle is simple: first use PyPDF2 to extract the text from the PDF, then use pyttsx3 to convert the text to speech.
import pyttsx3, PyPDF2

# note: PdfFileReader/numPages/extractText are the legacy PyPDF2 1.x API
pdfreader = PyPDF2.PdfFileReader(open('story.pdf', 'rb'))
speaker = pyttsx3.init()

for page_num in range(pdfreader.numPages):
    text = pdfreader.getPage(page_num).extractText()  # extract text from the PDF
    cleaned_text = text.strip().replace('\n', ' ')    # remove unnecessary spaces and line breaks
    print(cleaned_text)                               # print the text from the PDF
    # speaker.say(cleaned_text)                       # let the speaker read the text aloud
    speaker.save_to_file(cleaned_text, 'story.mp3')   # save the text to the audio file 'story.mp3'

speaker.runAndWait()
speaker.stop()
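The cleanup step can be pulled into a small helper that strips page breaks and collapses runs of whitespace before the text reaches the TTS engine; the function name here is illustrative:

```python
def clean_page_text(text):
    """Collapse newlines and repeated spaces into single spaces,
    so the TTS engine reads one continuous sentence stream."""
    return " ".join(text.split())
```

split() with no arguments splits on any whitespace run (spaces, tabs, newlines), which handles the line-break artifacts PDF extraction typically leaves behind.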
This script randomly picks a song from a song folder and plays it. Note that os.startfile only works on Windows.
import random, os

music_dir = r'G:\new english songs'  # raw string so '\n' is not treated as a newline
songs = os.listdir(music_dir)
song = random.randint(0, len(songs) - 1)  # randint is inclusive on both ends
print(songs[song])                        # print the song name
os.startfile(os.path.join(music_dir, songs[song]))
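Since os.startfile is Windows-only, here is a cross-platform sketch: random.choice avoids the off-by-one that random.randint(0, len(songs)) can produce, and the file is handed to what I assume is the platform's default opener (open on macOS, xdg-open on most Linux desktops):

```python
import os
import platform
import random
import subprocess

def pick_random_song(songs):
    """Return a random entry; raises IndexError on an empty list."""
    return random.choice(songs)

def play(path):
    """Hand the file to the OS default opener (assumed commands)."""
    system = platform.system()
    if system == "Windows":
        os.startfile(path)                  # Windows-only API
    elif system == "Darwin":
        subprocess.run(["open", path])      # macOS
    else:
        subprocess.run(["xdg-open", path])  # most Linux desktops
```

random.choice also reads more clearly than indexing with a separately generated integer.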
The website of China's national weather service (weather.com.cn) provides an API for weather forecasts that returns weather data directly as JSON, so you only need to extract the relevant fields from the JSON.
Below is the weather URL for a designated city (county or district); opening it directly returns that city's weather data. For example:
http://www.weather.com.cn/data/cityinfo/101021200.html is the weather URL for Xuhui District, Shanghai.
The specific code is as follows:
import requests
import json
import logging as log

def get_weather_wind(url):
    r = requests.get(url)
    if r.status_code != 200:
        log.error("Can't get weather data!")
    info = json.loads(r.content.decode())
    # get the wind data
    data = info['weatherinfo']
    WD = data['WD']
    WS = data['WS']
    return "{}({})".format(WD, WS)

def get_weather_city(url):
    # open the URL and get the returned data
    r = requests.get(url)
    if r.status_code != 200:
        log.error("Can't get weather data!")
    # convert the string to JSON
    info = json.loads(r.content.decode())
    # get the useful fields
    data = info['weatherinfo']
    city = data['city']
    temp1 = data['temp1']
    temp2 = data['temp2']
    weather = data['weather']
    return "{} {} {}~{}".format(city, weather, temp1, temp2)

if __name__ == '__main__':
    msg = """**Weather reminder**:

{} {}
{} {}

Source: National Meteorological Administration
""".format(
        get_weather_city('http://www.weather.com.cn/data/cityinfo/101021200.html'),
        get_weather_wind('http://www.weather.com.cn/data/sk/101021200.html'),
        get_weather_city('http://www.weather.com.cn/data/cityinfo/101020900.html'),
        get_weather_wind('http://www.weather.com.cn/data/sk/101020900.html')
    )
    print(msg)
Running it prints a formatted weather reminder for the two districts.
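The parsing logic can be exercised offline against a hand-made payload shaped like the cityinfo response; the field values below are invented for illustration:

```python
import json

# A sample payload mirroring the fields the script reads from
# the cityinfo endpoint (values are made up for this sketch).
SAMPLE = json.dumps({
    "weatherinfo": {
        "city": "Xuhui",
        "temp1": "18℃",
        "temp2": "25℃",
        "weather": "Cloudy",
    }
})

def format_city_weather(payload):
    """Extract the same fields as get_weather_city, from a raw JSON string."""
    data = json.loads(payload)["weatherinfo"]
    return "{} {} {}~{}".format(
        data["city"], data["weather"], data["temp1"], data["temp2"]
    )
```

Separating formatting from fetching this way lets you test the script without hitting the live API.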
Sometimes long URLs become really annoying: hard to read and hard to share. This script turns long URLs into short ones.
import contextlib
import sys
from urllib.parse import urlencode
from urllib.request import urlopen

def make_tiny(url):
    request_url = ('http://tinyurl.com/api-create.php?' + urlencode({'url': url}))
    with contextlib.closing(urlopen(request_url)) as response:
        return response.read().decode('utf-8')

def main():
    for tinyurl in map(make_tiny, sys.argv[1:]):
        print(tinyurl)

if __name__ == '__main__':
    main()
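The only step worth unit-testing here is building the API request URL, which needs no network access; this helper mirrors the expression inside make_tiny:

```python
from urllib.parse import urlencode

def build_tinyurl_request(url):
    """Return the TinyURL api-create.php request URL for a long URL."""
    return "http://tinyurl.com/api-create.php?" + urlencode({"url": url})
```

urlencode percent-encodes the long URL, so characters like `:` and `/` survive being embedded in a query string.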
This script is very practical. For example, if a content platform blocks links to certain articles, you can convert the article link into a short URL and insert that instead to get around the block.
One of the most chaotic things in the world is a developer's downloads folder, full of disorganized files. This script cleans your downloads folder based on a size limit, deleting the oldest files first once the limit is exceeded:
import os

def get_file_list(file_path):
    # sort files by last-modified time, oldest first
    dir_list = os.listdir(file_path)
    if not dir_list:
        return
    dir_list = sorted(dir_list, key=lambda x: os.path.getmtime(os.path.join(file_path, x)))
    return dir_list

def get_size(file_path):
    """Return the total size of the directory, in MB."""
    totalsize = 0
    for filename in os.listdir(file_path):
        totalsize += os.path.getsize(os.path.join(file_path, filename))
    return totalsize / 1024 / 1024

def detect_file_size(file_path, size_Max, size_Del):
    """file_path: directory to clean
    size_Max: maximum allowed folder size, in MB
    size_Del: how many MB to free once size_Max is exceeded
    """
    print(get_size(file_path))
    if get_size(file_path) > size_Max:
        fileList = get_file_list(file_path)
        for i in range(len(fileList)):
            if get_size(file_path) > (size_Max - size_Del):
                print("del: %d %s" % (i + 1, fileList[i]))
                # os.remove(os.path.join(file_path, fileList[i]))  # uncomment to actually delete
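The selection logic (delete the oldest files until the folder drops back below the limit) can also be factored into a pure function that is easy to test. The (name, mtime, size_mb) tuple layout and the function name are my own:

```python
def plan_deletions(files, size_max_mb, size_del_mb):
    """files: list of (name, mtime, size_mb) tuples.
    Returns the names to delete, oldest first, until the total
    falls below size_max_mb - size_del_mb, mirroring the
    threshold logic in detect_file_size above."""
    total = sum(size for _, _, size in files)
    if total <= size_max_mb:
        return []
    target = size_max_mb - size_del_mb
    doomed = []
    for name, _, size in sorted(files, key=lambda f: f[1]):  # oldest first
        if total <= target:
            break
        doomed.append(name)
        total -= size
    return doomed
```

Because it takes plain tuples instead of touching the filesystem, the deletion policy can be verified without creating real files.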
The above is the detailed content of the eight ready-to-use Python automation scripts. For more information, follow other related articles on the PHP Chinese website.