How to do URL filtering on a Cisco ASA
1. Create class-maps to identify the traffic to be matched.
First, match traffic from the internal network segment.
Second, define a regular expression that matches the domain-name keyword contained in the URL.
Third, create an inspection class-map that checks the HTTP request header (the Host field) against the regular expression.
2. Create policy-maps and associate the class-maps with them.
A connection that matches can either be permitted
or dropped.
The policy is usually applied to the inside (inbound) interface,
and only one service policy can be applied to a given interface.
3. Apply the policy-map to the interface with a service-policy command (see the skeleton below).
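Putting the three steps together: the configuration uses the Cisco ASA Modular Policy Framework, whose general shape is sketched below. MYCLASS, MYPOLICY and MYACL are placeholder names for illustration, not the names used in the detailed configuration that follows:

class-map MYCLASS //Step 1: classify the interesting traffic//
 match access-list MYACL //(or match regex, match port, etc.)//
policy-map MYPOLICY //Step 2: attach an action to the class//
 class MYCLASS
  inspect http //action; the available actions depend on the policy-map type//
service-policy MYPOLICY interface inside //Step 3: apply the policy to an interface//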
--------------------------------------------------------------------------------
//Define the access control list aclfile//
access-list aclfile extended permit tcp 192.168.100.0 255.255.255.0 any eq www
//Create the class-map aclclass//
class-map aclclass
 match access-list aclfile //Match the access control list//
//Define a regular expression for the website that must not be accessed//
regex url "\.accp\.com" //Escape the dot with a backslash; be careful not to write the slash backwards//
//Create the regex class-map urlclass//
class-map type regex match-any urlclass
 match regex url //Reference the regular expression defined above//
//Create the inspection class-map httpclass to examine HTTP traffic//
class-map type inspect http match-any httpclass
 match request header host regex class urlclass //Reference the urlclass configured above//
--------------------------------------------------------------------------------
policy-map type inspect http httppolicy //Create the HTTP inspection policy-map//
 class httpclass //Reference the class-map that inspects the HTTP header//
  drop-connection log //Drop the connection and log the event//
policy-map insidepolicy //Create the Layer 3/4 policy-map//
 class aclclass //Reference the class-map for the access control list//
  inspect http httppolicy //Apply the HTTP inspection policy-map defined above//
service-policy insidepolicy interface inside //Apply the policy on the inside interface//
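To verify the result, the ASA provides show commands for each part of the Modular Policy Framework. A minimal check, assuming a reasonably recent ASA image (the commands below are standard, but output formats vary by version):

show running-config class-map //Confirm the three class-maps//
show running-config policy-map //Confirm both policy-maps//
show service-policy interface inside //Confirm the policy is attached; counters increase as HTTP is inspected//

Once the policy is in place, a client in 192.168.100.0/24 that browses to a host whose name matches .accp.com should have its connection dropped, and the drop-connection log action should generate a syslog message recording the event.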

