Problems encountered with PHP pagination
I've just started learning PHP and downloaded the video tutorials from php100, but the pagination demo from lesson 13 doesn't run correctly. Could someone more experienced help me figure out why? The database table is test, with the fields id, name, and sex. When I run the demo below it displays the first page of content, but clicking "next page" does not turn the page: the URL becomes http://localhost/bbs2/page.php?page=2, and clicking "next page" again gives http://localhost/bbs2/page.php?page=2&page=2. That address can't be right; it should be http://localhost/bbs2/page.php?page=3.
- PHP code
<?php
function _PAGEFT($totle, $displaypg = 20, $url = '') {
    global $page, $firstcount, $pagenav, $_SERVER;
    $GLOBALS["displaypg"] = $displaypg;

    if (!$page) $page = 1;
    if (!$url) { $url = $_SERVER["REQUEST_URI"]; }

    // Parse the URL:
    $parse_url = parse_url($url);
    $url_query = $parse_url["query"];   // take just the query string
    if ($url_query) {
        $url_query = ereg_replace("(^|&)page=$page", "", $url_query);
        $url = str_replace($parse_url["query"], $url_query, $url);
        if ($url_query) $url .= "&page";
        else $url .= "page";
    } else {
        $url .= "?page";
    }

    $lastpg = ceil($totle / $displaypg);            // last page, i.e. the total number of pages
    $page = min($lastpg, $page);
    $prepg = $page - 1;                             // previous page
    $nextpg = ($page == $lastpg ? 0 : $page + 1);   // next page
    $firstcount = ($page - 1) * $displaypg;

    // Build the pagination navigation bar:
    $pagenav = "Showing records <b>" . ($totle ? ($firstcount + 1) : 0) . "</b>-<b>"
             . min($firstcount + $displaypg, $totle) . "</b> of $totle";

    // If there is only one page, leave the function:
    if ($lastpg <= 1) return false;

    $pagenav .= " <a href='$url=1'>First</a> ";
    if ($prepg) $pagenav .= " <a href='$url=$prepg'>Prev</a> ";
    else $pagenav .= " Prev ";
    if ($nextpg) $pagenav .= " <a href='$url=$nextpg'>Next</a> ";
    else $pagenav .= " Next ";
    $pagenav .= " <a href='$url=$lastpg'>Last</a> ";

    // Drop-down jump list: loop over all page numbers:
    $pagenav .= " Go to page <select name='topage' size='1' onchange='window.location=\"$url=\"+this.value'>\n";
    for ($i = 1; $i <= $lastpg; $i++) {
        if ($i == $page) $pagenav .= "<option value='$i' selected>$i</option>\n";
        else $pagenav .= "<option value='$i'>$i</option>\n";
    }
    $pagenav .= "</select> of $lastpg pages";
}

include("conn.php");
$result = mysql_query("SELECT * FROM `test`");
$total  = mysql_num_rows($result);

// Call the pagination function: 5 records per page (omit the argument to use the
// default of 20) and the current URL (the default, so the argument is omitted).
_PAGEFT($total, 5);
echo $pagenav;

$result = mysql_query("SELECT * FROM `test` limit $firstcount,$displaypg ");
while ($row = mysql_fetch_array($result)) {
    echo "<hr><b>" . $row['name'] . " | " . $row['sex'] . "</b>";
}
?>
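Editor's note: to make the reported symptom concrete, here is a small standalone sketch (not part of the original post) of the URL-rewriting step inside _PAGEFT(). Because nothing in the demo reads $_GET['page'], $page stays at its default of 1, so the "page=2" already in the query string is never stripped and the "next" link just appends another page parameter. The demo's ereg_replace() is swapped for preg_replace() here, since the ereg_* functions were removed in PHP 7.

- PHP code
<?php
// Sketch: _PAGEFT()'s URL rewriting in isolation, with $page never set from $_GET.
$page   = 1;                                  // what _PAGEFT() sees on every request
$url    = '/bbs2/page.php?page=2';            // the request after clicking "next" once
$query  = parse_url($url, PHP_URL_QUERY);     // "page=2"
$query  = preg_replace("/(^|&)page=$page/", '', $query);  // looks for "page=1": no match
$url    = $query !== '' ? "/bbs2/page.php?$query&page" : "/bbs2/page.php?page";
$nextpg = $page + 1;                          // always 2, because $page is always 1
echo $url . '=' . $nextpg;                    // prints /bbs2/page.php?page=2&page=2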
------Solution--------------------
That tutorial is really doing you a disservice; it's completely unreliable. You'd be better off finding some pagination source code online and adapting it yourself.
I searched through it for quite a while and couldn't find $_GET['page'] or LIMIT, the two things any pagination script needs.
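A minimal sketch of what that reply is pointing at (my illustration, keeping the demo's own _PAGEFT() function and mysql_* calls from the question): read the page number from $_GET yourself before calling _PAGEFT(), because the tutorial relied on register_globals to create $page automatically, and that has been off by default since PHP 4.2.

- PHP code
<?php
// Sketch only: populate the global $page that _PAGEFT() expects.
// Assumes the _PAGEFT() definition and conn.php from the question are included.
// Once $page holds the real page number, the function strips the old "page=N"
// from the query string and the links come out as ?page=3, ?page=4, and so on.
$page = (isset($_GET['page']) && is_numeric($_GET['page'])) ? (int)$_GET['page'] : 1;

include("conn.php");
$total = mysql_num_rows(mysql_query("SELECT * FROM `test`"));

_PAGEFT($total, 5);     // same call as in the demo
echo $pagenav;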
------Solution--------------------
- PHP code
$nowPage = is_numeric($_GET['page']) ? $_GET['page'] : 1;   // current page
$displaypg = 5;                                             // show five items per page
$firstcount = ($nowPage - 1) * $displaypg;                  // offset of the first record
$result = mysql_query("SELECT * FROM `test` limit $firstcount,$displaypg ");

------Solution--------------------
It doesn't turn the page because your $page never gets a value.

This pagination script is quite something. It really trips people up: it's a huge pile of code, and you could delete half of it and it would still run without a problem.
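For completeness, here is one way the $nowPage snippet above could be wired into the questioner's page.php (an editorial sketch in the thread's mysql_* style; conn.php and the test table with id, name, sex come from the question). Every link carries a single page parameter, so the URL advances to ?page=3 as expected.

- PHP code
<?php
// Sketch: a trimmed-down page.php built around the $_GET['page'] approach above.
include("conn.php");                                // database connection from the question

$displaypg = 5;                                     // items per page
$nowPage   = (isset($_GET['page']) && is_numeric($_GET['page'])) ? (int)$_GET['page'] : 1;

$total   = mysql_num_rows(mysql_query("SELECT id FROM `test`"));
$lastpg  = max(1, (int)ceil($total / $displaypg));  // total number of pages
$nowPage = min(max($nowPage, 1), $lastpg);          // clamp to a valid page
$firstcount = ($nowPage - 1) * $displaypg;          // offset of the first record

$result = mysql_query("SELECT * FROM `test` LIMIT $firstcount,$displaypg");
while ($row = mysql_fetch_array($result)) {
    echo "<hr><b>" . $row['name'] . " | " . $row['sex'] . "</b>";
}

// Each link carries exactly one page parameter, e.g. page.php?page=3.
if ($nowPage > 1)       echo " <a href='page.php?page=" . ($nowPage - 1) . "'>Prev</a>";
if ($nowPage < $lastpg) echo " <a href='page.php?page=" . ($nowPage + 1) . "'>Next</a>";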
