An array question.
The code is as follows:
foreach ($url as $val) {
    if ($netbot->fetchlinks($val)) {
        $urlarray[] = $netbot->results;
    } else {
        echo "error: " . $netbot->error;
        exit;
    }
}
Here $url is an array holding the addresses.
Snoopy's $netbot->fetchlinks($val) method fetches all the links on a given URL.
I want to collect all of those links into a single array.
$netbot->results returns a one-dimensional array.
The problem is this line: $urlarray[] = $netbot->results;
Assigning this way produces a multidimensional array,
and the links inside are only those from the first URL visited.
I would like the index of $urlarray to advance automatically:
when the second URL is fetched, the assignment should continue from the previous index, so that $urlarray ends up as a one-dimensional array.
How can I achieve this?
Reply discussion (proposed solutions)
Loop again inside the if branch?
foreach ($netbot->results as $v) {
    $urlarray[] = $v;
}
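Folding that into the original loop, a sketch might look like this (assuming $netbot is an existing Snoopy instance and $urlarray starts out empty):

$urlarray = array();
foreach ($url as $val) {
    if ($netbot->fetchlinks($val)) {
        // append link by link, so $urlarray stays one-dimensional
        foreach ($netbot->results as $v) {
            $urlarray[] = $v;
        }
    } else {
        echo "error: " . $netbot->error;
        exit;
    }
}

The same flattening can also be done in one call with $urlarray = array_merge($urlarray, $netbot->results);.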
Leaving aside what $netbot->results actually holds for now:
you exit as soon as $netbot->fetchlinks($val) fails,
so the remaining $val entries in $url never get a chance to be fetched.
So you should not exit there; you should let the loop continue instead.
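A minimal sketch of that change (assuming $urlarray was initialised as an empty array) might look like:

foreach ($url as $val) {
    if (!$netbot->fetchlinks($val)) {
        echo "error: " . $netbot->error;
        continue;   // report the failure, but keep processing the remaining URLs
    }
    $urlarray = array_merge($urlarray, $netbot->results);   // flatten into one array
}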
Also, the URLs you fetch each time are stored in $urlarray but never take part in any subsequent lookups,
so at most the loop ends once $url has been traversed. That only collects the first-level links of whatever is in $url.
Judging from your requirements, it seems you want to keep following the fetched links deeper and deeper.
Assuming $netbot->results is a one-dimensional array of links, then:

for ($i = 0; $i < count($url); $i++) {
    if ($netbot->fetchlinks($url[$i])) {            // if links were found
        $url = array_merge($url, $netbot->results); // add them to the search queue
    }
}
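One caveat worth noting with this queue approach (an addition here, not part of the original reply): if pages link back to one another, the queue keeps growing and the same URLs get fetched repeatedly. A simple guard is to skip links that are already queued, for example:

for ($i = 0; $i < count($url); $i++) {
    if ($netbot->fetchlinks($url[$i])) {
        foreach ($netbot->results as $link) {
            if (!in_array($link, $url)) {   // only enqueue links we have not seen yet
                $url[] = $link;
            }
        }
    }
}

Because PHP re-evaluates count($url) on every pass, newly enqueued links are visited too, so the crawl still goes deeper.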
Did you mix up break and exit?
Has the OP given up?
You should loop again inside the first branch of the if.
