(PHP 4 >= 4.3.0, PHP 5, PHP 7)
proc_open — Execute a command and open file pointers for input/output
resource proc_open ( string $cmd , array $descriptorspec , array &$pipes [, string $cwd [, array $env [, array $other_options ]]] )

Similar to popen(), but proc_open() provides a much greater degree of control over the program's execution.
cmd
The command to execute.
descriptorspec
An indexed array where the key represents the descriptor number and the element value represents how PHP will pass that descriptor to the child process. 0 is stdin, 1 is stdout, and 2 is stderr.
Each element can be either an array describing the pipe to pass to the process (the first element is the descriptor type and the second element is an option for that type; with type "pipe" the second element is "r" to pass the read end of the pipe to the process or "w" to pass the write end, with type "file" the second element is a filename and the third a mode), or a stream resource representing a real file descriptor (for example an opened file, a socket, or STDIN).
The file descriptor numbers are not limited to 0, 1 and 2 - you may specify any valid file descriptor and it will be passed to the child process. This allows your script to interoperate with other programs. For example, passwords can be passed to programs such as PGP, GPG and openssl through a dedicated descriptor in a more secure manner, and status information from those programs can also be retrieved conveniently.
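As a rough illustration of the point above (a minimal sketch; the gpg command line and the $passphrase variable are assumptions, not part of this page), an extra descriptor can carry a passphrase so it never appears on the command line or in the process list:
<?php
$descriptorspec = array(
    0 => array("pipe", "r"),  // stdin
    1 => array("pipe", "w"),  // stdout
    2 => array("pipe", "w"),  // stderr
    3 => array("pipe", "r")   // extra descriptor the child can read a secret from
);
// gpg is told to read the passphrase from descriptor 3 instead of the terminal
$process = proc_open('gpg --passphrase-fd 3 --decrypt', $descriptorspec, $pipes);
if (is_resource($process)) {
    fwrite($pipes[3], $passphrase); // $passphrase is assumed to be defined elsewhere
    fclose($pipes[3]);
    // ... write the ciphertext to $pipes[0] and read the plaintext from $pipes[1] ...
}
?>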
pipes
Will be set to an indexed array of file pointers that correspond to PHP's end of any pipes that are created.
cwd
The initial working directory for the command. This must be an absolute path, or NULL to use the default value (the working directory of the current PHP process).
env
An array with the environment variables for the command that will be run, or NULL to use the same environment as the current PHP process.
other_options
Allows you to specify additional options. Currently supported options include:
suppress_errors (Windows only): set to TRUE to suppress errors raised by this function.
bypass_shell (Windows only): set to TRUE to bypass the cmd.exe shell.
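A minimal sketch of passing these options (Windows only; notepad.exe is just a placeholder command). other_options is the sixth argument of proc_open():
<?php
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);
$other_options = array(
    'suppress_errors' => true, // do not raise warnings from proc_open() itself
    'bypass_shell'    => true  // start the program directly instead of via cmd.exe
);
$process = proc_open('notepad.exe', $descriptorspec, $pipes, null, null, $other_options);
?>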
Returns a resource representing the process, which should be freed with proc_close() when you are finished with it. On failure returns FALSE.
Version | Description
---|---
5.2.1 | Added the bypass_shell option to the other_options parameter.
Example #1 A proc_open() example
<?php
$descriptorspec = array(
   0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
   1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
   2 => array("file", "/tmp/error-output.txt", "a") // stderr is written to a file
);

$cwd = '/tmp';
$env = array('some_option' => 'aeiou');

$process = proc_open('php', $descriptorspec, $pipes, $cwd, $env);

if (is_resource($process)) {
    // $pipes now looks like this:
    // 0 => writable handle connected to child stdin
    // 1 => readable handle connected to child stdout
    // Any error output will be appended to /tmp/error-output.txt

    fwrite($pipes[0], '<?php print_r($_ENV); ?>');
    fclose($pipes[0]);

    echo stream_get_contents($pipes[1]);
    fclose($pipes[1]);

    // It is important that you close any pipes before calling
    // proc_close in order to avoid a deadlock
    $return_value = proc_close($process);

    echo "command returned $return_value\n";
}
?>
The above example will output something similar to:
Array
(
    [some_option] => aeiou
    [PWD] => /tmp
    [SHLVL] => 1
    [_] => /usr/local/bin/php
)
command returned 0
Note:
Windows compatibility: Descriptors beyond 2 (stderr) are passed to the child process as inheritable handles, but since the Windows architecture does not associate file descriptor numbers with low-level handles, the child process cannot access such handles. Stdin, stdout and stderr work as expected.
Note:
If you only need a unidirectional (one-way) process pipe, use popen() instead, as it is much easier to use.
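For comparison, a minimal popen() sketch for the one-way case (reading a command's output only):
<?php
$fp = popen('ls -l /tmp', 'r'); // 'r' means we only read the command's output
while (!feof($fp)) {
    echo fgets($fp);
}
pclose($fp);
?>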
[#1] aaronw at catalyst dot net dot nz [2015-10-05 23:16:42]
If you have a CLI script that prompts you for a password via STDIN, and you need to run it from PHP, proc_open() can get you there. It's better than doing "echo $password | command.sh", because then your password will be visible in the process list to any user who runs "ps". Alternately you could print the password to a file and use cat: "cat passwordfile.txt | command.sh", but then you've got to manage that file in a secure manner.
If your command will always prompt you for responses in a specific order, then proc_open() is quite simple to use and you don't really have to worry about blocking & non-blocking streams. For instance, to run the "passwd" command:
<?php
$descriptorspec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w")
);
$process = proc_open(
'/usr/bin/passwd ' . escapeshellarg($username),
$descriptorspec,
$pipes
);
// It will prompt for the existing password, then the new password twice.
// You don't need to escapeshellarg() these, but you should whitelist
// them to guard against control characters, perhaps by using ctype_print()
fwrite($pipes[0], "$oldpassword\n$newpassword\n$newpassword\n");
// Read the responses if you want to look at them
$stdout = fread($pipes[1], 1024);
$stderr = fread($pipes[2], 1024);
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
$exit_status = proc_close($process);
// It returns 0 on successful password change
$success = ($exit_status === 0);
?>
[#2] stevebaldwin21 at googlemail dot com [2015-08-31 18:32:57]
For those who are finding that using the $cwd and $env options causes proc_open to fail (on Windows): you will need to pass all the other server environment variables as well.
$descriptorSpec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
);
proc_open(
    "C:\\Windows\\System32\\PING.exe localhost",
    $descriptorSpec,
    $pipes,
    "C:\\Windows\\System32",
    $_SERVER
);
[#3] vanyazin at gmail dot com [2015-06-25 04:47:20]
If you want to use the proc_open() function with socket streams, you can open a connection with the fsockopen() function and then just put the handle into the array of I/O descriptors:
<?php
$fh = fsockopen($address, $port);
$descriptors = [
$fh, // stdin
$fh, // stdout
$fh, // stderr
];
$proc = proc_open($cmd, $descriptors, $pipes);
[#4] hablutzel1 at gmail dot com [2015-04-15 17:26:33]
Note that the usage of "bypass_shell" in Windows allows you to pass a command of length around ~32767 characters. If you do not use it, your limit is around ~8191 characters only.
See https://support.microsoft.com/en-us/kb/830473.
[#5] sergioshev at gmail dot com [2014-08-12 16:38:23]
If you want to make use of a double pipe, here is a proof of concept along the lines of:
"Double pipe example."
cat in_data | cat | cat > out_data
Here is an example of how to achieve that:
<?php
$d2 = array (
0 => array('pipe','r'),
1 => array('file','/var/tmp/test_gen_trans.log','w')
);
$proc_2 = proc_open('cat', $d2, $p2);
$d = array(
0 => array('file','/var/tmp/test.log','r'),
1 => $p2[0]
);
$proc = proc_open("cat", $d, $p);
proc_close($proc);
proc_close($proc_2);
?>
[#6] exel at example dot com [2013-09-26 10:05:36]
Pipe communication can be brain-breaking, so I want to share some things that help avoid that.
For proper control of the communication through the "in" and "out" pipes of the opened sub-process, remember to set both of them into non-blocking mode, and note in particular that fwrite may return (int) 0 without it being an error - the process might simply not accept input at that moment.
So, let us consider an example of decoding a gz-encoded file by using zcat as a sub-process (this is not the final version, just to show the important things):
<?php
// make gz file
$fd=fopen("/tmp/testPipe", "w");
for($i=0;$i<100000;$i++)
fwrite($fd, md5($i)."\n");
fclose($fd);
if(is_file("/tmp/testPipe.gz"))
unlink("/tmp/testPipe.gz");
system("gzip /tmp/testPipe");
// open process
$pipesDescr=array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("file", "/tmp/testPipe.log", "a"),
);
$process=proc_open("zcat", $pipesDescr, $pipes);
if(!is_resource($process)) throw new Exception("popen error");
// set both pipes non-blocking
stream_set_blocking($pipes[0], 0);
stream_set_blocking($pipes[1], 0);
////////////////////////////////////////////////////////////////////
$text="";
$fd=fopen("/tmp/testPipe.gz", "r");
while(!feof($fd))
{
$str=fread($fd, 16384*4);
$try=3;
while($str)
{
$len=fwrite($pipes[0], $str);
while($s=fread($pipes[1], 16384*4))
$text.=$s;
if(!$len)
{
// if you remove these paused retries, the process may fail
usleep(200000);
$try--;
if(!$try)
throw new Exception("fwrite error");
}
$str=substr($str, $len);
}
echo strlen($text)."\n";
}
fclose($fd);
fclose($pipes[0]);
// reading the rest of output stream
stream_set_blocking($pipes[1], 1);
while(!feof($pipes[1]))
{
$s=fread($pipes[1], 16384);
$text.=$s;
}
echo strlen($text)." / 3 300 000\n";
?>
[#7] mcuadros at gmail dot com [2013-04-05 10:56:11]
This is an example of how to run a command using the TTY as output, just like crontab -e or git commit do.
<?php
$descriptors = array(
array('file', '/dev/tty', 'r'),
array('file', '/dev/tty', 'w'),
array('file', '/dev/tty', 'w')
);
$process = proc_open('vim', $descriptors, $pipes);
[#8] michael dot gross at NOSPAM dot flexlogic dot at [2013-01-03 20:33:46]
Please note that if you plan to spawn multiple processes, you have to save all the results in different variables (in an array, for example). If you were to call $proc = proc_open(...) multiple times with the same variable, the script would block after the second call until the child process exits (proc_close is called implicitly). A sketch of the array approach follows below.
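A minimal sketch of that array approach (the sleep commands are just placeholders); each resource and its pipes stay alive until they are explicitly closed:
<?php
$procs = array();
$spec  = array(1 => array('pipe', 'w')); // only capture stdout
foreach (array('sleep 2', 'sleep 3', 'sleep 1') as $i => $cmd) {
    $procs[$i]['proc'] = proc_open($cmd, $spec, $procs[$i]['pipes']);
}
// all three children are now running in parallel; collect them afterwards
foreach ($procs as $p) {
    fclose($p['pipes'][1]);
    proc_close($p['proc']);
}
?>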
[#9] bilge at boontex dot com [2012-09-05 02:56:02]
$cmd can actually be multiple commands by separating each command with a newline. However, due to this it is not possible to split up one very long command over multiple lines, even when using "\\\n" syntax.
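A minimal sketch of the newline trick (assuming a POSIX shell, since proc_open runs the command through /bin/sh):
<?php
$cmd = "cd /tmp\nls\necho done";
$process = proc_open($cmd, array(1 => array('pipe', 'w')), $pipes);
echo stream_get_contents($pipes[1]);
fclose($pipes[1]);
proc_close($process);
?>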
[#10] devel at romanr dot info [2012-03-10 08:04:02]
The call works as it should. No bugs.
But in most cases you won't be able to work with the pipes in blocking mode.
When your output pipe (the process' input, $pipes[0]) is blocking, there are cases where both you and the process are blocked on output.
When your input pipe (the process' output, $pipes[1]) is blocking, there are cases where both you and the process are blocked on your own input.
So you should switch the pipes into NONBLOCKING mode (stream_set_blocking).
Then there is a case where you're not able to read anything (fread($pipes[1],...) == "") or write anything (fwrite($pipes[0],...) == 0). In this case, you'd better check that the process is alive (proc_get_status) and, if it still is, wait for some time (stream_select). The situation is truly asynchronous: the process may be busy working, processing your data.
Using a shell effectively makes it impossible to know whether the command exists - proc_open always returns a valid resource. You may even write some data into it (into the shell, actually). But eventually it will terminate, so check the process status regularly.
I would advise against mkfifo pipes, because a filesystem fifo pipe (mkfifo) blocks the open/fopen call (!!!) until somebody opens the other side (unix-related behavior). If the pipe is not opened by a shell and the command has crashed or does not exist, you will be blocked forever.
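A minimal sketch of the loop described above ($cmd is assumed to be set): non-blocking pipes, stream_select() to wait for output, proc_get_status() to notice termination.
<?php
$process = proc_open($cmd, array(
    0 => array('pipe', 'r'),
    1 => array('pipe', 'w')
), $pipes);
stream_set_blocking($pipes[1], false);
$output = '';
while (true) {
    $read = array($pipes[1]);
    $write = null;
    $except = null;
    // wait up to 200 ms for the child to produce output
    if (stream_select($read, $write, $except, 0, 200000) > 0) {
        $output .= fread($pipes[1], 8192);
    }
    $status = proc_get_status($process);
    if (!$status['running']) {
        break;
    }
}
$output .= stream_get_contents($pipes[1]); // drain whatever is left
fclose($pipes[0]);
fclose($pipes[1]);
proc_close($process);
?>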
[#11] toby at globaloptima dot co dot uk [2011-11-14 13:38:03]
If script A spawns script B, and script B pushes a lot of data to stdout without script A consuming it, script B is likely to hang, but the result of proc_get_status on that process seems to continue to indicate it's running.
So either don't write to stdout in the spawned process (I write to log files instead now), or try to read the stdout in a non-blocking way if your script A spawns many instances of script B. Sadly, I couldn't get this second option to work.
PHP 5.3.8 CLI on Windows 7 64.
[#12] mattis at xait dot no [2011-02-03 07:41:23]
If you are, like me, tired of the buggy way proc_open handles streams and exit codes, this example demonstrates the power of pcntl, posix and some simple output redirection:
<?php
$outpipe = '/tmp/outpipe';
$inpipe = '/tmp/inpipe';
posix_mkfifo($inpipe, 0600);
posix_mkfifo($outpipe, 0600);
$pid = pcntl_fork();
//parent
if($pid) {
$in = fopen($inpipe, 'w');
fwrite($in, "A message for the inpipe reader\n");
fclose($in);
$out = fopen($outpipe, 'r');
while(!feof($out)) {
echo "From out pipe: " . fgets($out) . PHP_EOL;
}
fclose($out);
pcntl_waitpid($pid, $status);
if(pcntl_wifexited($status)) {
echo "Reliable exit code: " . pcntl_wexitstatus($status) . PHP_EOL;
}
unlink($outpipe);
unlink($inpipe);
}
//child
else {
//parent
if($pid = pcntl_fork()) {
pcntl_exec('/bin/sh', array('-c', "printf 'A message for the outpipe reader' > $outpipe 2>&1 && exit 12"));
}
//child
else {
pcntl_exec('/bin/sh', array('-c', "printf 'From in pipe: '; cat $inpipe"));
}
}
?>
Output:
From in pipe: A message for the inpipe reader
From out pipe: A message for the outpipe reader
Reliable exit code: 12
[#13] php at keith tyler dot com [2010-04-16 11:32:28]
Interestingly enough, it seems you actually have to store the return value in order for your streams to exist. You can't throw it away.
In other words, this works:
<?php
$proc=proc_open("echo foo",
array(
array("pipe","r"),
array("pipe","w"),
array("pipe","w")
),
$pipes);
print stream_get_contents($pipes[1]);
?>
prints:
foo
but this doesn't work:
<?php
proc_open("echo foo",
array(
array("pipe","r"),
array("pipe","w"),
array("pipe","w")
),
$pipes);
print stream_get_contents($pipes[1]);
?>
outputs:
Warning: stream_get_contents(): <n> is not a valid stream resource in Command line code on line 1
The only difference is that in the second case we don't save the output of proc_open to a variable.
[#14] Matou Havlena - matous at havlena dot net [2010-04-14 13:03:40]
Here is a small process manager object which I created for my application. It can limit the maximum number of simultaneously running processes.
Processmanager class:
<?php
class Processmanager {
public $executable = "C:\\www\\_PHP5_2_10\\php";
public $root = "C:\\www\\parallelprocesses\\";
public $scripts = array();
public $processesRunning = 0;
public $processes = 3;
public $running = array();
public $sleep_time = 2;
function addScript($script, $max_execution_time = 300) {
$this->scripts[] = array("script_name" => $script,
"max_execution_time" => $max_execution_time);
}
function exec() {
$i = 0;
for(;;) {
// Fill up the slots
while (($this->processesRunning<$this->processes) and ($i<count($this->scripts))) {
echo "<span style='color: orange;'>Adding script: ".$this->scripts[$i]["script_name"]."</span><br />";
ob_flush();
flush();
$this->running[] = new Process($this->executable, $this->root, $this->scripts[$i]["script_name"], $this->scripts[$i]["max_execution_time"]);
$this->processesRunning++;
$i++;
}
// Check if done
if (($this->processesRunning==0) and ($i>=count($this->scripts))) {
break;
}
// sleep, this duration depends on your script execution time, the longer execution time, the longer sleep time
sleep($this->sleep_time);
// check what is done
foreach ($this->running as $key => $val) {
if (!$val->isRunning() or $val->isOverExecuted()) {
if (!$val->isRunning()) echo "<span style='color: green;'>Done: ".$val->script."</span><br />";
else echo "<span style='color: red;'>Killed: ".$val->script."</span><br />";
proc_close($val->resource);
unset($this->running[$key]);
$this->processesRunning--;
ob_flush();
flush();
}
}
}
}
}
?>
Process class:
<?php
class Process {
public $resource;
public $pipes;
public $script;
public $max_execution_time;
public $start_time;
function __construct(&$executable, &$root, $script, $max_execution_time) {
$this->script = $script;
$this->max_execution_time = $max_execution_time;
$descriptorspec = array(
0 => array('pipe', 'r'),
1 => array('pipe', 'w'),
2 => array('pipe', 'w')
);
$this->resource = proc_open($executable." ".$root.$this->script, $descriptorspec, $this->pipes, null, $_ENV);
$this->start_time = time();
}
// is still running?
function isRunning() {
$status = proc_get_status($this->resource);
return $status["running"];
}
// execution time too long, process is going to be killed
function isOverExecuted() {
if ($this->start_time+$this->max_execution_time<time()) return true;
else return false;
}
}
?>
Example of using:
<?php
$manager = new Processmanager();
$manager->executable = "C:\\www\\_PHP5_2_10\\php";
$manager->root = "C:\\www\\parallelprocesses\\";
$manager->processes = 3;
$manager->sleep_time = 2;
$manager->addScript("script1.php", 10);
$manager->addScript("script2.php");
$manager->addScript("script3.php");
$manager->addScript("script4.php");
$manager->addScript("script5.php");
$manager->addScript("script6.php");
$manager->exec();
?>
And possible output:
Adding script: script1.php
Adding script: script2.php
Adding script: script3.php
Done: script2.php
Adding script: script4.php
Killed: script1.php
Done: script3.php
Done: script4.php
Adding script: script5.php
Adding script: script6.php
Done: script5.php
Done: script6.php
[#15] Luceo [2010-03-28 07:39:34]
It seems that stream_get_contents() on STDOUT blocks infinitely under Windows when STDERR is filled under some circumstances.
The trick is to open STDERR in append mode ("a"), then this will work, too.
<?php
$descriptorspec = array(
0 => array('pipe', 'r'), // stdin
1 => array('pipe', 'w'), // stdout
2 => array('pipe', 'a') // stderr
);
?>
[#16] cbn at grenet dot org [2009-12-18 07:30:07]
Display output (stdout/stderr) in real time, and get the real exit code in pure PHP (no shell workaround!). It works well on my machines (debian mostly).
#!/usr/bin/php
<?php
define('BUF_SIZ', 1024);  # max buffer size
define('FD_WRITE', 0);    # stdin
define('FD_READ', 1);     # stdout
define('FD_ERR', 2);      # stderr
function proc_exec($cmd)
{
$descriptorspec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w")
);
$ptr = proc_open($cmd, $descriptorspec, $pipes, NULL, $_ENV);
if (!is_resource($ptr))
return false;
while (($buffer = fgets($pipes[FD_READ], BUF_SIZ)) != NULL
|| ($errbuf = fgets($pipes[FD_ERR], BUF_SIZ)) != NULL) {
if (!isset($flag)) {
$pstatus = proc_get_status($ptr);
$first_exitcode = $pstatus["exitcode"];
$flag = true;
}
if (strlen($buffer))
echo $buffer;
if (strlen($errbuf))
echo "ERR: " . $errbuf;
}
foreach ($pipes as $pipe)
fclose($pipe);
$pstatus = proc_get_status($ptr);
if (!strlen($pstatus["exitcode"]) || $pstatus["running"]) {
if ($pstatus["running"])
proc_terminate($ptr);
$ret = proc_close($ptr);
} else {
if ((($first_exitcode + 256) % 256) == 255
&& (($pstatus["exitcode"] + 256) % 256) != 255)
$ret = $pstatus["exitcode"];
elseif (!strlen($first_exitcode))
$ret = $pstatus["exitcode"];
elseif ((($first_exitcode + 256) % 256) != 255)
$ret = $first_exitcode;
else
$ret = 0;
proc_close($ptr);
}
return ($ret + 256) % 256;
}
if (isset($argv) && count($argv) > 1 && !empty($argv[1])) {
if (($ret = proc_exec($argv[1])) === false)
die("Error: not enough FD or out of memory.\n");
elseif ($ret == 127)
die("Command not found (returned by sh).\n");
else
exit($ret);
}
?>
[#17] simeonl at dbc dot co dot nz [2009-03-03 18:39:17]
Note that when you call an external script and retrieve large amounts of data from STDOUT and STDERR, you may need to retrieve from both alternately in non-blocking mode (with appropriate pauses if no data is retrieved), so that your PHP script doesn't lock up. This can happen if you are waiting for activity on one pipe while the external script is waiting for you to empty the other, e.g.:
<?php
$read_output = $read_error = false;
$buffer_len = $prev_buffer_len = 0;
$ms = 10;
$output = '';
$read_output = true;
$error = '';
$read_error = true;
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
// dual reading of STDOUT and STDERR stops one full pipe blocking the other, because the external script is waiting
while ($read_error != false or $read_output != false)
{
if ($read_output != false)
{
if(feof($pipes[1]))
{
fclose($pipes[1]);
$read_output = false;
}
else
{
$str = fgets($pipes[1], 1024);
$len = strlen($str);
if ($len)
{
$output .= $str;
$buffer_len += $len;
}
}
}
if ($read_error != false)
{
if(feof($pipes[2]))
{
fclose($pipes[2]);
$read_error = false;
}
else
{
$str = fgets($pipes[2], 1024);
$len = strlen($str);
if ($len)
{
$error .= $str;
$buffer_len += $len;
}
}
}
if ($buffer_len > $prev_buffer_len)
{
$prev_buffer_len = $buffer_len;
$ms = 10;
}
else
{
usleep($ms * 1000); // sleep for $ms milliseconds
if ($ms < 160)
{
$ms = $ms * 2;
}
}
}
return proc_close($process);
?>
[#18] snowleopard at amused dot NOSPAMPLEASE dot com dot au [2008-06-05 07:46:29]
I managed to make a set of functions to work with GPG, since my hosting provider refused to use GPG-ME.
Included below is an example of decryption using a higher descriptor to push a passphrase.
Comments and emails welcome. :)
<?php
function GPGDecrypt($InputData, $Identity, $PassPhrase, $HomeDir="~/.gnupg", $GPGPath="/usr/bin/gpg") {
if(!is_executable($GPGPath)) {
trigger_error($GPGPath . " is not executable",
E_USER_ERROR);
die();
} else {
// Set up the descriptors
$Descriptors = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w"),
3 => array("pipe", "r") // This is the pipe we can feed the password into
);
// Build the command line and start the process
$CommandLine = $GPGPath . ' --homedir ' . $HomeDir . ' --quiet --batch --local-user "' . $Identity . '" --passphrase-fd 3 --decrypt -';
$ProcessHandle = proc_open( $CommandLine, $Descriptors, $Pipes);
if(is_resource($ProcessHandle)) {
// Push passphrase to custom pipe
fwrite($Pipes[3], $PassPhrase);
fclose($Pipes[3]);
// Push input into StdIn
fwrite($Pipes[0], $InputData);
fclose($Pipes[0]);
// Read StdOut
$StdOut = '';
while(!feof($Pipes[1])) {
$StdOut .= fgets($Pipes[1], 1024);
}
fclose($Pipes[1]);
// Read StdErr
$StdErr = '';
while(!feof($Pipes[2])) {
$StdErr .= fgets($Pipes[2], 1024);
}
fclose($Pipes[2]);
// Close the process
$ReturnCode = proc_close($ProcessHandle);
} else {
trigger_error("cannot create resource", E_USER_ERROR);
die();
}
}
if (strlen($StdOut) >= 1) {
if ($ReturnCode <= 0) {
$ReturnValue = $StdOut;
} else {
$ReturnValue = "Return Code: " . $ReturnCode . "\nOutput on StdErr:\n" . $StdErr . "\n\nStandard Output Follows:\n\n";
}
} else {
if ($ReturnCode <= 0) {
$ReturnValue = $StdErr;
} else {
$ReturnValue = "Return Code: " . $ReturnCode . "\nOutput on StdErr:\n" . $StdErr;
}
}
return $ReturnValue;
}
?>
[#19] radone at gmail dot com [2008-05-26 05:26:51]
To complete the examples below that use proc_open to encrypt a string using GPG, here is a decrypt function:
<?php
function gpg_decrypt($string, $secret) {
$homedir = ''; // path to your gpg keyrings
$tmp_file = '/tmp/gpg_tmp.asc' ; // tmp file to write to
file_put_contents($tmp_file, $string);
$text = '';
$error = '';
$descriptorspec = array(
0 => array("pipe", "r"), // stdin
1 => array("pipe", "w"), // stdout
2 => array("pipe", "w") // stderr ?? instead of a file
);
$command = 'gpg --homedir ' . $homedir . ' --batch --no-verbose --passphrase-fd 0 -d ' . $tmp_file . ' ';
$process = proc_open($command, $descriptorspec, $pipes);
if (is_resource($process)) {
fwrite($pipes[0], $secret);
fclose($pipes[0]);
while($s= fgets($pipes[1], 1024)) {
// read from the pipe
$text .= $s;
}
fclose($pipes[1]);
// optional:
while($s= fgets($pipes[2], 1024)) {
$error .= $s . "\n";
}
fclose($pipes[2]);
}
file_put_contents($tmp_file, '');
if (preg_match('/decryption failed/i', $error)) {
return false;
} else {
return $text;
}
}
?>
[#20] jonah at whalehosting dot ca [2008-05-02 22:22:15]
@joachimb: The descriptorspec describes the i/o from the perspective of the process you are opening. That is why stdin is read: you are writing, the process is reading. So you want to open descriptor 2 (stderr) in write mode so that the process can write to it and you can read it. In your case where you want all descriptors to be pipes you should always use:
<?php
$descriptorspec = array(
0 => array('pipe', 'r'), // stdin
1 => array('pipe', 'w'), // stdout
2 => array('pipe', 'w') // stderr
);
?>
The examples below where stderr is opened as 'r' are mistaken.
I would like to see examples of using higher descriptor numbers than 2. Specifically GPG as mentioned in the documentation.
[#21] joachimb at gmail dot com [2008-04-30 08:24:44]
I'm confused by the direction of the pipes. Most of the examples in this documentation open pipe #2 as "r", because they want to read from stderr. That sounds logical to me, and that's what I tried to do. That didn't work, though. When I changed it to w, as in
<?php
$descriptorspec = array(
0 => array("pipe", "r"), // stdin
1 => array("pipe", "w"), // stdout
2 => array("pipe", "w") // stderr
);
$process = proc_open(escapeshellarg($scriptFile), $descriptorspec, $pipes, $this->wd);
...
while (!feof($pipes[1])) {
foreach($pipes as $key =>$pipe) {
$line = fread($pipe, 128);
if($line) {
print($line);
$this->log($line);
}
}
usleep(500000); // sleep() only takes whole seconds, so use usleep() for half a second
}
...
?>
everything works fine.
[#22] jaroslaw at pobox dot sk [2008-03-28 02:15:54]
Some functions stopped working with proc_open() for me.
This is what I made to communicate between two PHP scripts:
<?php
$abs_path = '/var/www/domain/filename.php';
$spec = array(array("pipe", "r"), array("pipe", "w"), array("pipe", "w"));
$process = proc_open('php '.$abs_path, $spec, $pipes, null, $_ENV);
if (is_resource($process)) {
# wait till something happens on other side
sleep(1);
# send command
fwrite($pipes[0], 'echo $test;');
fflush($pipes[0]);
# wait till something happens on other side
usleep(1000);
# read pipe for result
echo fread($pipes[1],1024).'<hr>';
# close pipes
fclose($pipes[0]);fclose($pipes[1]);fclose($pipes[2]);
$return_value = proc_close($process);
}
?>
filename.php then contains this:
<?php
$test = 'test data generated here<br>';
while(true) {
# read incoming command
if($fh = fopen('php://stdin','rb')) {
$val_in = fread($fh,1024);
fclose($fh);
}
# execute incoming command
if($val_in)
eval($val_in);
usleep(1000);
# prevent neverending cycle
if($tmp_counter++ > 100)
break;
}
?>
[#23] chris AT w3style DOT co.uk [2008-02-22 02:57:00]
It took me a long time (and three consecutive projects) to figure this out. Because popen() and proc_open() return valid processes even when the command failed it's awkward to determine when it really has failed if you're opening a non-interactive process like "sendmail -t".
I had previously guessed that reading from STDERR immediately after starting the process would work, and it does... but when the command is successful PHP just hangs because STDERR is empty and it's waiting for data to be written to it.
The solution is a simple stream_set_blocking($pipes[2], 0) immediately after calling proc_open().
<?php
$this->_proc = proc_open($command, $descriptorSpec, $pipes);
stream_set_blocking($pipes[2], 0);
if ($err = stream_get_contents($pipes[2]))
{
throw new Swift_Transport_TransportException(
'Process could not be started [' . $err . ']'
);
}
?>
If the process is opened successfully $pipes[2] will be empty, but if it failed the bash/sh error will be in it.
Finally I can drop all my "workaround" error checking.
I realise this solution is obvious and I'm not sure how it took me 18 months to figure it out, but hopefully this will help someone else.
NOTE: Make sure your descriptorSpec has ( 2 => array('pipe', 'w')) for this to work.
[#24] Anonymous [2007-12-27 07:40:27]
I needed to emulate a tty for a process (it wouldn't write to stdout or read from stdin), so I found this:
<?php
$descriptorspec = array(0 => array('pty'),
1 => array('pty'),
2 => array('pty'));
?>
The pipes are then bidirectional.
[#25] John Wehin [2007-12-06 22:52:35]
STDIN STDOUT example
test.php
<?php
$descriptorspec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "r")
);
$process = proc_open('php test_gen.php', $descriptorspec, $pipes, null, null); //run test_gen.php
echo ("Start process:\n");
if (is_resource($process))
{
fwrite($pipes[0], "start\n"); // send start
echo ("\n\nStart ....".fgets($pipes[1],4096)); //get answer
fwrite($pipes[0], "get\n"); // send get
echo ("Get: ".fgets($pipes[1],4096)); //get answer
fwrite($pipes[0], "stop\n"); //send stop
echo ("\n\nStop ....".fgets($pipes[1],4096)); //get answer
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
$return_value = proc_close($process); //stop test_gen.php
echo ("Returned:".$return_value."\n");
}
?>
test_gen.php
<?php
$keys=0;
function play_stop()
{
global $keys;
$stdin_stat_arr=fstat(STDIN);
if($stdin_stat_arr['size']!=0)
{
$val_in=fread(STDIN,4096);
switch($val_in)
{
case "start\n": echo "Started\n";
return false;
break;
case "stop\n": echo "Stopped\n";
$keys=0;
return false;
break;
case "pause\n": echo "Paused\n";
return false;
break;
case "get\n": echo ($keys."\n");
return true;
break;
default: echo("Unknown or unexpected command: ".$val_in."\n");
return true;
exit();
}
}else{return true;}
}
while(true)
{
while(play_stop()){usleep(1000);}
while(play_stop()){$keys++;usleep(10);}
}
?>
[#26] mjutras at beenox dot com [2006-10-16 06:28:20]
The best way on Windows to open a process and then let the PHP script continue is to call your process with the start command, then kill the "start" process and let your program run.
<?php
$descriptorspec = array(
0 => array("pipe", "r"), // stdin
1 => array("pipe", "w"), // stdout
2 => array("pipe", "w") // stderr
);
$process = proc_open('start notepad.exe', $descriptorspec, $pipes);
sleep(1);
proc_close($process);
?>
The start command will be called and will open notepad; after 1 second the "start" command will be killed, but notepad will still be open and your PHP script can continue!
[#27] Docey [2006-07-24 17:12:31]
If you're writing a function that processes a resource from another function, it's a good idea not only to check whether a resource has been passed to your function, but also whether it's of the right type, like so:
<?php
function workingonit($resource){
if(is_resource($resource)){
if(get_resource_type($resource) == "resource_type"){
// resource is a resource and of the right type, continue
}else{
print("resource is of the wrong type.");
return false;
}
}else{
print("resource passed is not a resource at all.");
return false;
}
// do your stuff with the resource here and return
}
?>
This is especially true when working with files and process pipes, so always check what's being passed to your functions.
Here's a small list of a few resource types:
Files are of type 'file' in PHP 4 and 'stream' in PHP 5.
'process' resources are opened by proc_open.
'pipe' resources are opened by popen.
By the way, the 'process' resource type was not mentioned in the documentation; I have filed a bug report for this.
[#28] php dot net_manual at reimwerker dot de [2006-06-03 04:47:11]
If you are going to allow data coming from user input to be passed to this function, then you should keep in mind the following warning that also applies to exec() and system():
http://www.php.net/manual/en/function.exec.php
http://www.php.net/manual/en/function.system.php
Warning:
If you are going to allow data coming from user input to be passed to this function, then you should be using escapeshellarg() or escapeshellcmd() to make sure that users cannot trick the system into executing arbitrary commands.
[#29] richard at 2006 dot atterer dot net [2006-04-07 12:14:14]
[Again, please delete my previous comment, the code still contained bugs (sorry). This version now includes Frederick Leitner's fix from below, and also fixes another bug: If an empty file was piped into the process, the loop would hang indefinitely.]
The following code works for piping large amounts of data through a filtering program. I find it very weird that such a lot of code is needed for this task... On entry, $stdin contains the standard input for the program. Tested on Debian Linux with PHP 5.1.2.
<?php
$descriptorSpec = array(0 => array("pipe", "r"),
1 => array('pipe', 'w'),
2 => array('pipe', 'w'));
$process = proc_open($command, $descriptorSpec, $pipes);
$txOff = 0; $txLen = strlen($stdin);
$stdout = ''; $stdoutDone = FALSE;
$stderr = ''; $stderrDone = FALSE;
stream_set_blocking($pipes[0], 0); // Make stdin/stdout/stderr non-blocking
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
if ($txLen == 0) fclose($pipes[0]);
while (TRUE) {
$rx = array(); // The program's stdout/stderr
if (!$stdoutDone) $rx[] = $pipes[1];
if (!$stderrDone) $rx[] = $pipes[2];
$tx = array(); // The program's stdin
if ($txOff < $txLen) $tx[] = $pipes[0];
stream_select($rx, $tx, $ex = NULL, NULL, NULL); // Block til r/w possible
if (!empty($tx)) {
$txRet = fwrite($pipes[0], substr($stdin, $txOff, 8192));
if ($txRet !== FALSE) $txOff += $txRet;
if ($txOff >= $txLen) fclose($pipes[0]);
}
foreach ($rx as $r) {
if ($r == $pipes[1]) {
$stdout .= fread($pipes[1], 8192);
if (feof($pipes[1])) { fclose($pipes[1]); $stdoutDone = TRUE; }
} else if ($r == $pipes[2]) {
$stderr .= fread($pipes[2], 8192);
if (feof($pipes[2])) { fclose($pipes[2]); $stderrDone = TRUE; }
}
}
if (!is_resource($process)) break;
if ($txOff >= $txLen && $stdoutDone && $stderrDone) break;
}
$returnValue = proc_close($process);
?>
[#30] Kevin Barr [2006-03-06 12:36:51]
I found that with stream blocking disabled, I was sometimes attempting to read a return line before the external application had responded. So, instead, I left blocking alone and used this simple function to add a timeout to the fgets function:
// fgetsPending( $in,$tv_sec ) - Get a pending line of data from stream $in, waiting a maximum of $tv_sec seconds
function fgetsPending(&$in,$tv_sec=10) {
if ( stream_select($read = array($in),$write=NULL,$except=NULL,$tv_sec) ) return fgets($in);
else return FALSE;
}
[#31] andrew dot budd at adsciengineering dot com [2005-12-28 21:55:45]
The pty option is actually disabled in the source for some reason via a #if 0 && condition. I'm not sure why it's disabled. I removed the 0 && and recompiled, after which the pty option works perfectly. Just a note.
[#32] mendoza at pvv dot ntnu dot no [2005-10-21 22:42:22]
Since I don't have access to PAM via Apache, suexec turned on, or access to /etc/shadow, I coughed up this way of authenticating users based on the system users' details. It's really hairy and ugly, but it works.
<?php
function authenticate($user,$password) {
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("file","/dev/null", "w") // stderr is a file to write to
);
$process = proc_open("su ".escapeshellarg($user), $descriptorspec, $pipes);
if (is_resource($process)) {
// $pipes now looks like this:
// 0 => writeable handle connected to child stdin
// 1 => readable handle connected to child stdout
// Any error output will be appended to /tmp/error-output.txt
fwrite($pipes[0],$password);
fclose($pipes[0]);
fclose($pipes[1]);
// It is important that you close any pipes before calling
// proc_close in order to avoid a deadlock
$return_value = proc_close($process);
return !$return_value;
}
}
?>
[#33] picaune at hotmail dot com [2005-10-16 16:51:41]
The above note on Windows compatibility is not entirely correct.
Windows will dutifully pass on additional handles above 2 onto the child process, starting with Windows 95 and Windows NT 3.5. It even supports this capability (starting with Windows 2000) from the command line using a special syntax (prefacing the redirection operator with the handle number).
These handles will be, when passed to the child, preopened for low-level IO (e.g. _read) by number. The child can reopen them for high-level (e.g. fgets) using the _fdopen or _wfdopen methods. The child can then read from or write to them the same way they would stdin or stdout.
However, child processes must be specially coded to use these handles, and if the end user is not intelligent enough to use them (e.g. "openssl < commands.txt 3< cacert.der") and the program not smart enough to check, it could cause errors or hangs.
[#34] Chapman Flack [2005-10-04 12:34:55]
One can learn from the source code in ext/standard/exec.c that the right-hand side of a descriptor assignment does not have to be an array ('file', 'pipe', or 'pty') - it can also be an existing open stream.
<?php
$p = proc_open('myfilter', array( 0 => $infile, ...), $pipes);
?>
I was glad to learn that because it solves the race condition in a scenario like this: you get a file name, open the file, read a little to make sure it's OK to serve to this client, then rewind the file and pass it as input to the filter. Without this feature, you would be limited to
<?php array('file', $fname) ?>
or passing the name to the filter command. Those choices both involve a race (because the file will be reopened after you have checked it's OK), and the last one invites surprises if not carefully quoted, too.
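A minimal sketch of that race-free variant ($fname, looks_ok() and myfilter are hypothetical names): open and inspect the file yourself, rewind it, then hand the very same handle to the child as its stdin.
<?php
$infile = fopen($fname, 'rb');   // $fname is assumed to come from the caller
$head = fread($infile, 64);
if (looks_ok($head)) {           // looks_ok() is a hypothetical validation function
    rewind($infile);
    $p = proc_open('myfilter', array(
        0 => $infile,            // the already-open stream, not array('file', ...)
        1 => array('pipe', 'w')
    ), $pipes);
    echo stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    proc_close($p);
}
?>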
[#35] Kyle Gibson [2005-08-05 00:16:49]
proc_open is hard coded to use "/bin/sh". So if you're working in a chrooted environment, you need to make sure that /bin/sh exists, for now.
[#36] mib at post dot com [2005-06-21 18:21:19]
I thought it was highly not recommended to fork from your web server?
Apart from that, one caveat is that the child process inherits anything that is preserved over fork from the parent (apart from the file descriptors which are explicitly closed).
Importantly, it inherits the signal handling setup, which at least with apache means that SIGPIPE is ignored. Child processes that expect SIGPIPE to kill them in order to get sensible pipe handling and not go into a tight write loop will have problems unless they reset SIGPIPE themselves.
Similar caveats probably apply to other signals like SIGHUP, SIGINT, etc.
Other things preserved over fork include shared memory segments, umask and rlimits.
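A minimal sketch of the workaround hinted at above, assuming the child process is itself a PHP CLI script with the pcntl extension available: restore the default SIGPIPE behaviour that an Apache parent may have disabled.
<?php
// At the top of the child script started via proc_open():
if (function_exists('pcntl_signal')) {
    pcntl_signal(SIGPIPE, SIG_DFL); // die on a broken pipe again, as a normal CLI tool would
}
// ... the rest of the child script writes to stdout as usual ...
?>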
[#37] falstaff at arragon dot biz [2005-03-20 17:22:43]
Using this function under Windows with large amounts of data is apparently futile.
These functions return 0 but do not appear to do anything useful:
stream_set_write_buffer($pipes[0],0);
stream_set_write_buffer($pipes[1],0);
These functions return false and are also apparently useless under Windows:
stream_set_blocking($pipes[0], FALSE);
stream_set_blocking($pipes[1], FALSE);
The magic max buffer size I found with WinXP is 63488 bytes (62 KB). Anything larger than this results in a system hang.
[#38] list[at]public[dot]lt [2004-05-11 15:21:04]
If you push a little bit more data through the pipe, it will hang forever. One simple solution on RH Linux was to do this:
stream_set_blocking($pipes[0], FALSE);
stream_set_blocking($pipes[1], FALSE);
This did not work on windows XP though.
[#39] ralf at dreesen[*NO*SPAM*] dot net [2004-01-09 11:49:53]
The behaviour described in the following may depend on the system php runs on. Our platform was "Intel with Debian 3.0 linux".
If you pass huge amounts of data (ca. >>10k) to the application you run and the application, for example, echoes them directly to stdout (without buffering the input), you will get a deadlock. This is because there are size-limited buffers (so-called pipes) between PHP and the application you run. The application will put data into the stdout buffer until it is filled, then it blocks, waiting for PHP to read from the stdout buffer. In the meantime, PHP has filled the stdin buffer and waits for the application to read from it. That is the deadlock.
A solution to this problem may be to set the stdout stream to non blocking (stream_set_blocking) and alternately write to stdin and read from stdout.
Just imagine the following example:
<?php
$descriptorspec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("file", "/tmp/error-output.txt", "a")
);
$process = proc_open("cat", $descriptorspec, $pipes);
if (is_resource($process)) {
fwrite($pipes[0], $in);
fclose($pipes[0]);
while (!feof($pipes[1])) {
$out .= fgets($pipes[1], 1024);
}
fclose($pipes[1]);
$return_value = proc_close($process);
}
?>
[#40] MagicalTux at FF.ST [2003-12-24 04:20:04]
Note that if you need to be "interactive" with the user *and* the opened application, you can use stream_select to see if something is waiting on the other side of the pipe.
Stream functions can be used on pipes like :
- pipes from popen, proc_open
- pipes from fopen('php://stdin') (or stdout)
- sockets (unix or tcp/udp)
- many other things probably but the most important is here
More information about streams (you'll find many useful functions there):
http://www.php.net/manual/en/ref.stream.php
[#41] daniela at itconnect dot net dot au [2003-04-16 02:01:10]
Just a small note in case it isn't obvious: it's possible to treat the filename as in fopen, thus you can pass through the standard input from PHP like this:
$descs = array (
0 => array ("file", "php://stdin", "r"),
1 => array ("pipe", "w"),
2 => array ("pipe", "w")
);
$proc = proc_open ("myprogram", $descs, $fp);