


Submitting a form as an array with the Yii framework form model
This article walks through submitting form data as an array with the Yii framework form model, covering the points to watch out for, with a practical example.
According to the Yii documentation, the general flow for handling a form in Yii is:
1. Create the model class corresponding to the form and set validation rules for its fields
2. Create the action that receives the form submission and processes the submitted content
3. Create the form in the view
In a recent small project, I wanted to submit the form data via ajax, validate it, and save it, but without using a hidden iframe for refresh-free submission, and while still being able to use the model class's validation inside the action. So I decided to submit the form as an array.
For example, given this form:
<form action='' method='post' name='form_test'>
    <input type='text' name='arr[]' value='1'>
    <input type='text' name='arr[]' value='2'>
    <input type='text' name='arr[]' value='3'>
</form>
After submission, you can directly use $_POST['arr'] to obtain the submitted data; $_POST['arr'] is:

Array ( [0] => 1 [1] => 2 [2] => 3 )
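The flat layout can be sketched in plain PHP (outside Yii); the simulated $_POST values here are assumptions for illustration:

```php
<?php
// Simulate what PHP builds from three inputs named arr[]
$_POST['arr'] = array('1', '2', '3');

// PHP appends each arr[] input in submission order, with 0-based integer keys
foreach ($_POST['arr'] as $index => $value) {
    echo "arr[$index] = $value\n";
}
```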
Similarly, if you use the following form to submit:
<form action='' method='post' name='form_test'>
    <input type='text' name='arr[3]' value='a'>
    <input type='text' name='arr[6]' value='b'>
    <input type='text' name='arr[8]' value='c'>
</form>

then $_POST['arr'] is:

Array ( [3] => a [6] => b [8] => c )
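The key-preserving behavior can be checked in plain PHP; the simulated $_POST below is an assumption standing in for an actual form submission:

```php
<?php
// Inputs named arr[3], arr[6], arr[8] keep their explicit keys;
// PHP does not renumber them or fill in the gaps
$_POST['arr'] = array(3 => 'a', 6 => 'b', 8 => 'c');

// A later unindexed append continues from the highest integer key,
// so this value gets key 9
$_POST['arr'][] = 'd';
```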
Of course, you can also submit a two-dimensional array:
<form action='http://127.0.0.1/zhaobolu/test.php' method='post' name='form_test'>
    <input type='text' name='arr[][name1]' value='a'>
    <input type='text' name='arr[][name2]' value='b'>
    <input type='text' name='arr[][name3]' value='c'>
</form>

$_POST['arr'] is:

Array
(
    [0] => Array ( [name1] => a )
    [1] => Array ( [name2] => b )
    [2] => Array ( [name3] => c )
)
Note a pitfall here: if you do not set the key of the outer array, each value is appended to arr in sequence with a new numeric index, so related fields end up scattered across separate sub-arrays. To keep related fields together in one sub-array, add an explicit outer key, as follows:
<form action='http://127.0.0.1/zhaobolu/test.php' method='post' name='form_test'>
    <input type='text' name='arr[a][name1]' value='a1'>
    <input type='text' name='arr[a][value1]' value='a2'>
    <input type='text' name='arr[b][name2]' value='b1'>
    <input type='text' name='arr[b][value2]' value='b2'>
</form>

$_POST['arr'] is:

Array
(
    [a] => Array ( [name1] => a1 [value1] => a2 )
    [b] => Array ( [name2] => b1 [value2] => b2 )
)
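A plain-PHP sketch of walking the grouped structure; the simulated $_POST is an assumption mirroring the submission above:

```php
<?php
// Simulated $_POST from inputs named arr[a][name1], arr[a][value1], etc.
$_POST['arr'] = array(
    'a' => array('name1' => 'a1', 'value1' => 'a2'),
    'b' => array('name2' => 'b1', 'value2' => 'b2'),
);

// Each explicit outer key groups its inputs into one sub-array,
// so related fields can be processed together
$flat = array();
foreach ($_POST['arr'] as $group => $fields) {
    foreach ($fields as $field => $value) {
        $flat["$group.$field"] = $value;
    }
}
```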
Now, to submit the form with ajax and validate it with the Yii form model. First, the model class, with only the simplest validation rules:
<?php
class LandingForm extends CFormModel
{
    public $landing_title;
    public $landing_content;
    public $landing_position;

    public function rules()
    {
        return array(
            array('landing_title, landing_content', 'required'),
            array('landing_position', 'default', 'value'=>''),
        );
    }
}
When setting up validation in the model class, you need to declare a rule for every public attribute. If an attribute has no rule, then after assigning the $_POST form values to the model, that attribute's value will be empty. In the action, get the parameters submitted by the form and validate them:
$model = new LandingForm;
$model->attributes = $_POST['form'];
if ($model->validate()) {
    $info = $model->attributes;
    ...
}
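Why an unruled attribute ends up empty can be sketched in plain PHP: Yii's massive assignment only copies values for attributes covered by rules(). This is a simplification of that behavior, and the attribute names and values below are assumptions for illustration:

```php
<?php
// Attributes declared "safe" by rules() in the form model
$safeAttributes = array('landing_title', 'landing_content', 'landing_position');

// Simulated $_POST['form'] payload, including a field with no rule
$post = array(
    'landing_title'   => 'Hello',
    'landing_content' => 'Body text',
    'not_in_rules'    => 'dropped',   // no rule => never assigned
);

// Massive assignment: copy only the safe attributes
$attributes = array();
foreach ($post as $name => $value) {
    if (in_array($name, $safeAttributes, true)) {
        $attributes[$name] = $value;
    }
}
```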
On the page, the ajax submission builds the form array and posts it:

var info = {
    'form[landing_title]': landing_title,
    'form[landing_content]': landing_content,
    'form[landing_position]': landing_position
};
var url = "...";
$.post(url, info, function(rst){
    ...
});
The above is the detailed content of Detailed explanation of the steps to submit a form in the form of an array using the yii framework form model. For more information, please follow other related articles on the PHP Chinese website!
