


Interpreting the IJCAI 2023 Acceptance List: Startled That Your Paper Got In? Can a Rebuttal Change Your Fate? Are the Reviewers Biased?
When the acceptance results come out, some people rejoice and others despair.
IJCAI 2023 received a total of 4,566 full-paper submissions, with an acceptance rate of approximately 15%.
Question link: https://www.zhihu.com/question/578082970
Judging from the feedback on Zhihu, the overall review quality was still unsatisfactory (though that impression may be colored by the resentment of the rejected...), and some reviewers reportedly rejected papers without even reading the rebuttal.
There were also papers with identical scores but different outcomes.
Some netizens posted the meta-review's reasons for rejection, all of which cited major shortcomings.
But rejection is not the end; what matters more is to keep going.
Netizen Lower_Evening_4056 points out that even landmark papers are often rejected many times, while some papers that are not particularly outstanding still get accepted.
When you move on and later revisit the reasonable review comments, you will find that your work can still be raised to a higher level.
The review system does have flaws. More importantly, do not treat a rejection as a verdict on your own worth or the value of your work. If you are a student and your advisor evaluates you by review outcomes rather than by the quality of your work, you may want to reconsider that relationship.
The NeurIPS conference has previously run consistency experiments: for papers with an average score between 5 and 6.5, the acceptance outcome is essentially random; it depends on which reviewers you happen to draw.
For example, suppose a paper's scores are 9/6/6/5. Had the author not drawn the reviewer who gave the 9, the outcome would almost certainly have been a reject; instead, they happened to meet a "Bole" (a discerning champion) who reversed the decision.
Finally, congratulations to those researchers whose papers were accepted for helping to promote the development of artificial intelligence research!
Here are some accepted papers shared on social media.
IJCAI 2023 accepted papers
Gradient Remedy for Multi-Task Learning in End-to-End Noise-Robust Speech Recognition
Speech enhancement (SE) has proven effective at reducing the noise in noisy speech signals fed to downstream automatic speech recognition (ASR) systems, which typically use a multi-task learning strategy to jointly optimize the two tasks.
However, speech enhanced toward the SE objective does not always yield good ASR results.
From an optimization perspective, the gradients of the SE task and the ASR task sometimes interfere with each other, which hinders multi-task learning and ultimately leads to suboptimal ASR performance.
Paper link: https://arxiv.org/pdf/2302.11362.pdf
This paper proposes a simple and effective gradient remedy (GR) method to resolve the interference between task gradients in noise-robust speech recognition.
Specifically, the gradient of the SE task is first projected onto a dynamic surface that forms an acute angle with the ASR gradient, eliminating the conflict between them and assisting ASR optimization.
In addition, the magnitudes of the two gradients are adaptively adjusted to prevent the dominant ASR task from being misled by the SE gradient.
Experimental results show that this method resolves the gradient interference problem well, achieving 9.3% and 11.1% relative word error rate (WER) reductions over the multi-task learning baseline on the RATS and CHiME-4 datasets, respectively.
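The project-then-rescale idea can be sketched in a few lines. The snippet below is a simplified stand-in, not the paper's exact formulation: it removes the conflicting gradient component in a PCGrad-style way and caps the SE gradient's magnitude; the dynamic-surface projection and the exact rescaling rule in the paper are assumptions here.

```python
import numpy as np

def gradient_remedy(g_asr, g_se):
    """Sketch: de-conflict the SE gradient, then cap its magnitude."""
    # If the SE gradient is at an obtuse angle to the ASR gradient,
    # project away the conflicting component (a PCGrad-style surrogate
    # for the paper's dynamic-surface projection).
    dot = float(np.dot(g_se, g_asr))
    if dot < 0:
        g_se = g_se - dot / (float(np.dot(g_asr, g_asr)) + 1e-12) * g_asr
    # Adaptively cap the SE gradient so it cannot overwhelm the
    # dominant ASR task (the rescaling rule is an assumption).
    n_asr, n_se = np.linalg.norm(g_asr), np.linalg.norm(g_se)
    if n_se > n_asr and n_se > 0:
        g_se = g_se * (n_asr / n_se)
    return g_asr + g_se

g = gradient_remedy(np.array([1.0, 0.0]), np.array([-1.0, 2.0]))
print(g)  # ≈ [1. 1.]: conflict removed, SE magnitude capped
```

With the toy inputs above, the SE gradient `[-1, 2]` conflicts with the ASR gradient `[1, 0]`; after projection and capping, only a bounded, non-conflicting SE contribution is added.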
Building Concise Logical Patterns by Constraining Tsetlin Machine Clause Size
The Tsetlin Machine (TM) is a logic-based machine learning approach whose key advantages are transparency and hardware friendliness.
While TM matches or exceeds the accuracy of deep learning in a growing number of applications, large clause pools tend to produce clauses with many literals (long clauses), making them less understandable.
In addition, longer clauses increase the switching activity of the clause logic in hardware, raising power consumption.
Paper link: https://arxiv.org/abs/2301.08190
This paper introduces a new TM learning method, the clause-size-constrained TM (CSC-TM), which places a soft constraint on clause size.
As soon as a clause contains more literals than the constraint allows, literals start being excluded, so oversized clauses exist only transiently.
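The soft constraint can be illustrated with a minimal sketch. The snippet below is a deliberate simplification and an assumption on my part: in a real TM, literals are excluded through the machine's include/exclude feedback rather than at random, but the size-budget behavior is the same in spirit.

```python
import random

def constrain_clause(literals, max_size, seed=0):
    """Sketch of a soft clause-size constraint (CSC-TM spirit)."""
    # Once a clause holds more literals than the budget allows,
    # literals are excluded until it fits, so oversized clauses
    # exist only transiently during learning. Random choice here
    # stands in for the TM's actual include/exclude feedback.
    rng = random.Random(seed)
    clause = set(literals)
    while len(clause) > max_size:
        clause.discard(rng.choice(sorted(clause)))
    return clause

clause = constrain_clause({"x1", "not x2", "x3", "x4", "not x5"}, max_size=3)
print(len(clause))  # 3
```

Shorter clauses remain subsets of the original literal set, which is what keeps the learned patterns concise and readable.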
To evaluate CSC-TM, the researchers conducted classification, clustering, and regression experiments on tabular data, natural language text, images, and board games.
The results show that CSC-TM maintains accuracy with up to an 80x reduction in the number of literals. Indeed, on TREC, IMDb, and BBC Sports, accuracy is higher with shorter clauses; after accuracy peaks, it declines slowly as the clause size approaches a single literal.
Finally, the paper analyzes the power consumption of CSC-TM and derives new convergence properties.
The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks
Deep neural networks are increasingly used in safety-critical tasks that demand a high level of security, such as autonomous driving. Although state-of-the-art verifiers can check whether a DNN is unsafe with respect to a given property (i.e., whether at least one unsafe input configuration exists), their yes/no output is not informative enough for other purposes such as shielding, model selection, or training improvement.
Paper link: https://arxiv.org/abs/2301.07068
This paper introduces the #DNN-Verification problem: counting the number of DNN input configurations that violate a given safety property. The researchers analyze the complexity of the problem and propose a novel approach that returns the exact violation count. Since the problem is #P-complete, they also propose a stochastic approximation method that provides a provably correct probabilistic bound on the count while significantly reducing the computational cost. The paper further presents a set of safety-critical benchmarks, with experimental results demonstrating the effectiveness of the approximation method and evaluating the tightness of the bounds.
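The flavor of such a stochastic approximation can be shown with a plain Monte Carlo estimator. The snippet below is a toy sketch, not the paper's algorithm: it samples inputs uniformly, counts unsafe outputs, and attaches a Hoeffding-style error bound; the toy "model", the sampling domain, and the choice of bound are all assumptions.

```python
import math
import random

def estimate_violation_rate(model, sample_input, is_unsafe,
                            n_samples=10000, delta=1e-3):
    """Monte Carlo sketch of approximate violation counting."""
    # Sample inputs from the property's input domain, count how many
    # trigger an unsafe output, and report a Hoeffding-style two-sided
    # error bound that holds with probability at least 1 - delta.
    violations = sum(is_unsafe(model(sample_input()))
                     for _ in range(n_samples))
    rate = violations / n_samples
    eps = math.sqrt(math.log(2 / delta) / (2 * n_samples))
    return rate, eps

# Toy "DNN": the output is unsafe whenever it exceeds 0.9.
rng = random.Random(42)
rate, eps = estimate_violation_rate(
    model=lambda x: x,                  # identity stand-in for a DNN
    sample_input=lambda: rng.random(),  # uniform samples from [0, 1)
    is_unsafe=lambda y: y > 0.9)
print(rate, eps)  # rate ≈ 0.1, eps ≈ 0.02
```

Multiplying the estimated rate by the size of the input domain gives an approximate violation count; the exact counting approach in the paper avoids this sampling error at much higher computational cost.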