Once again a top conference has released its results, leaving some people happy and others sad.
IJCAI 2023 received a total of 4,566 full-paper submissions, with an acceptance rate of approximately 15%.
Question link: https://www.zhihu.com/question/578082970
Judging from the feedback on Zhihu, the overall review quality was still unsatisfactory (though this may partly be the resentment of those who were rejected...), and some reviewers reportedly rejected papers without even reading the rebuttal.
There were also papers that received the same scores but met different fates.
Some users also posted the rejection reasons given in their meta-reviews, all of which cited major shortcomings.
But rejection is not the end; what matters more is to keep moving forward.
User Lower_Evening_4056 observes that even landmark papers get rejected multiple times, while some papers are accepted despite not being particularly outstanding.
When you move on and look back at those reasonable review comments, you will find that your work can still be improved to a higher level.
The review system does have flaws. More importantly, do not treat rejection as a verdict on your own worth or that of your work. If you are a student and your advisor judges you by your review outcomes rather than by the quality of your work, you may want to reconsider that relationship.
NeurIPS has previously run a consistency experiment: for papers with average scores between 5 and 6.5, the acceptance decisions were essentially random. In other words, it comes down to which reviewers you happen to draw.
For example, one paper's scores were 9/6/6/5. Had it not drawn the reviewer who gave the 9, rejection would have been all but certain; instead, the author happened upon a "Bole" (in the Chinese idiom, a discerning judge of talent) who turned the decision around.
Finally, congratulations to the researchers whose papers were accepted, and thank you for helping to advance artificial intelligence research!
Here are some accepted papers shared on social media.
Gradient Remedy for Multi-Task Learning in End-to-End Noise-Robust Speech Recognition
Speech enhancement (SE) has been shown to effectively reduce the noise in noisy speech signals for downstream automatic speech recognition (ASR), where a multi-task learning strategy is used to jointly optimize the two tasks.
However, enhanced speech learned under the SE objective does not always yield good ASR results.
From an optimization perspective, the gradients of the SE task and the ASR task sometimes interfere with each other, which hinders multi-task learning and ultimately leads to sub-optimal ASR performance.
Paper link: https://arxiv.org/pdf/2302.11362.pdf
This paper proposes a simple yet effective gradient remedy (GR) method to resolve the interference between task gradients in noise-robust speech recognition.
Specifically, the gradient of the SE task is first projected onto a dynamic surface that forms an acute angle with the ASR gradient, removing the conflict between them and assisting ASR optimization.
In addition, the magnitudes of the two gradients are adaptively adjusted to prevent the dominant ASR task from being misled by the SE gradient.
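To make the idea concrete, here is a minimal sketch of this kind of gradient-conflict resolution. Everything in it is an assumption for illustration: it substitutes a PCGrad-style orthogonal projection for the paper's acute-angle projection onto a dynamic surface, and a simple norm cap for the adaptive magnitude adjustment.

```python
import torch

def gradient_remedy_sketch(g_asr: torch.Tensor, g_se: torch.Tensor,
                           eps: float = 1e-8) -> torch.Tensor:
    """Combine flattened ASR and SE task gradients without conflict.

    Illustrative stand-in for GR: if the SE gradient opposes the ASR
    gradient, drop its conflicting component (PCGrad-style), then cap
    its magnitude so it cannot dominate the ASR gradient.
    """
    dot = torch.dot(g_asr, g_se)
    if dot < 0:
        # Remove the component of g_se pointing against g_asr.
        g_se = g_se - (dot / (g_asr.norm() ** 2 + eps)) * g_asr
    # Adaptive magnitude adjustment: keep ||g_se|| <= ||g_asr||.
    scale = torch.clamp(g_asr.norm() / (g_se.norm() + eps), max=1.0)
    return g_asr + scale * g_se
```

In a training loop, `g_asr` and `g_se` would be the gradients of the shared parameters under each task's loss, flattened into vectors before being combined.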
Experimental results show that the method resolves the gradient interference well, achieving relative word error rate (WER) reductions of 9.3% and 11.1% over the multi-task learning baseline on the RATS and CHiME-4 datasets, respectively.
Building Concise Logical Patterns by Constraining Tsetlin Machine Clause Size
The Tsetlin Machine (TM) is a logic-based machine learning approach whose key advantages are transparency and hardware-friendliness.
While TM matches or exceeds deep learning accuracy in a growing number of applications, large clause pools tend to produce clauses with many literals (long clauses), making them less interpretable.
In addition, longer clauses increase the switching activity of the clause logic in hardware, leading to higher power consumption.
Paper link: https://arxiv.org/abs/2301.08190
This paper introduces a new TM learning method, the Clause Size-Constrained TM (CSC-TM), which places a soft constraint on clause size.
As soon as a clause contains more literals than the constraint allows, it starts excluding literals, so oversized clauses exist only transiently.
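As a rough illustration of the soft constraint, the sketch below trims a clause once it exceeds the size budget. The function name, the random-drop policy, and the string literals are all assumptions; the actual CSC-TM folds this constraint into the Tsetlin automata feedback rather than applying it as a post-hoc step.

```python
import random

def constrain_clause_size(literals, max_literals):
    """Soft clause-size constraint: once a clause holds more literals
    than the budget allows, exclude literals until it fits, so oversized
    clauses exist only briefly. Random exclusion is illustrative only."""
    clause = set(literals)
    while len(clause) > max_literals:
        clause.discard(random.choice(sorted(clause)))
    return clause

# Example: a five-literal clause constrained to at most three literals.
print(constrain_clause_size({"x1", "not x2", "x3", "x4", "not x5"}, 3))
```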
To evaluate CSC-TM, the researchers conducted classification, clustering, and regression experiments on tabular data, natural language text, images, and board games.
The results show that CSC-TM maintains accuracy with up to an 80x reduction in clause size. Indeed, on TREC, IMDb, and BBC Sports, accuracy increases with shorter clauses; after the accuracy peaks, it declines slowly as the clause size approaches a single literal.
Finally, the paper analyzes the power consumption of CSC-TM and derives new convergence properties.
The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks
Deep neural networks (DNNs) are increasingly used in safety-critical tasks that demand a high level of safety, such as autonomous driving. Although state-of-the-art verifiers can check whether a DNN is unsafe with respect to a given property (i.e., whether there exists at least one unsafe input configuration), their yes/no output is not informative enough for other purposes, such as shielding, model selection, or training improvement.
Paper link: https://arxiv.org/abs/2301.07068
This paper introduces the #DNN-Verification problem: counting the number of DNN input configurations that violate a given safety property. The researchers analyze the complexity of this problem and propose a novel approach that returns the exact violation count. Since the problem is #P-complete, they also propose a randomized approximation method that provides a provably correct probabilistic bound on the count while significantly reducing the computational requirements. The paper further presents a set of safety-critical benchmarks; the experimental results demonstrate the effectiveness of the approximation method and evaluate the tightness of the bounds.
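For intuition only, the sketch below estimates a violation rate by plain uniform Monte Carlo sampling over a box of inputs. The function, the box parameterization, and the `is_unsafe` property callback are all assumptions for illustration; the paper's method yields provably correct probabilistic bounds, which naive uniform sampling does not.

```python
import torch

def estimate_violation_rate(model: torch.nn.Module,
                            low: torch.Tensor,
                            high: torch.Tensor,
                            is_unsafe,
                            n_samples: int = 100_000) -> float:
    """Estimate the fraction of inputs in the box [low, high] that
    violate a safety property, by uniform random sampling."""
    dims = low.shape[0]
    x = low + (high - low) * torch.rand(n_samples, dims)
    with torch.no_grad():
        y = model(x)
    violations = is_unsafe(x, y)  # user-supplied boolean tensor
    return violations.float().mean().item()
```

Scaling this rate by the size of the input space gives a crude stand-in for the violation count that the #DNN-Verification problem formalizes exactly.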