


Evaluation and improvement of smart cockpit software performance and reliability
Author | Zhang Xuhai
With the rapid development of smart cars, smart cockpits have exhibited a number of performance and reliability problems, leading to a poor user experience and a growing number of complaints. This article briefly discusses, from an engineering perspective, the importance of building a smart cockpit software evaluation framework, as well as methods to continuously improve performance and reliability.
1. Poor performance and reliability of smart cockpit software
According to the "2023 Smart Cockpit White Paper - Focus on the Second Half of Electrification" released by KPMG, China's automotive smart cockpit market continues to expand, with a compound annual growth rate from 2022 to 2026 expected to exceed 17%, indicating huge development potential in this field. As the market grows, smart cockpit software functions will become more diverse and powerful, and the overall level of intelligence will rise significantly. This shows that the automotive industry is moving in a more intelligent and connected direction, offering consumers a smarter, more convenient and more comfortable driving experience.
(Source: "2023 Smart Cockpit White Paper - Focus on the Second Half of Electrification")
As the market expands, the proportion of consumer complaints concerning smart cockpit software is also rising year by year. Complaints mainly focus on the operating experience, performance and reliability of smart cockpit software, highlighting the challenges brought about by the continuous growth of smart functions.
According to Chezhi.com's vehicle complaint analysis reports for the four quarters of 2023, quality problems involving smart cockpits (in-vehicle infotainment systems) account for a significant share. Among the top 20 complaint fault points, infotainment-related items (audio and video system failures, navigation problems, in-vehicle connectivity failures, driving safety assistance system failures, etc.) accounted for 15.89%, 10.99%, 10.56% and 9.56% of total complaints in Q1 through Q4 respectively.
_(Source: Chezhi.com)_
Looking further into the specific complaints, problems such as crashes, black screens, freezes and slow responses are very common. They seriously degrade the user's driving experience and erode the user's confidence in and recognition of the brand.
Combining the development trend of smart cockpit software with user complaints, it becomes clear that performance and reliability are, alongside ease of operation, the most critical factors affecting user experience. These two factors are not only directly related to user satisfaction but also largely determine the competitiveness of smart cockpit software in the market.
- Performance is the cornerstone of smooth smart cockpit operation. As functions continue to increase, the software requires more efficient processing and optimized algorithms to ensure instant response to user operations and high system fluency.
- Reliability is the key to earning users' trust in every usage scenario. Users expect not to be disturbed by smart cockpit failures while driving; the system should run stably and avoid problems such as crashes or freezes.
In the following article, we will combine the best practices of software development and the characteristics of software in the smart cockpit field to explore methods to evaluate and improve its performance and reliability.
2. Evaluation framework for performance and reliability
If you can't measure it, you can't improve it.
The smart cockpit system is itself software, and its R&D follows the common stages of architecture design, development and quality verification. Therefore, before discussing how to improve, we should first clarify: how do we correctly evaluate the performance and reliability of a software system?
1. Software Architecture Characteristics Model
Mark Richards and Neal Ford describe "architectural characteristics" in "Fundamentals of Software Architecture: An Engineering Approach":
Architects may collaborate with others to identify domain or business requirements, but a key responsibility of the architect is to define, discover, and analyze everything the software must do that is not directly related to the domain: architectural characteristics.
Architectural characteristics are the properties, independent of domain or business requirements, that architects must consider when designing software, such as auditability, performance, security, scalability and reliability. In many cases they are also called nonfunctional requirements (Nonfunctional Requirements) or quality attributes (Quality Attributes).
Clearly, key architectural characteristics need to be considered from the very beginning of architecture design and given continued attention throughout development. So when developing a software system, which key architectural characteristics should be considered?
ISO/IEC 25010:2011 is a standard published by the International Organization for Standardization (since updated to a 2023 edition). It belongs to the ISO Systems and Software Quality Requirements and Evaluation (SQuaRE) series and defines a set of system and software quality models. These quality models are widely used to describe and evaluate software quality, and can guide us well in modeling the key architectural characteristics of software.
The quality model described by ISO 25010 is as follows (the parts related to performance and reliability are highlighted in the figure):
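The performance- and reliability-related branches of this quality model can be sketched as a simple data structure. This is a minimal illustration of the ISO/IEC 25010:2011 sub-characteristics for these two characteristics; the dictionary layout itself is just an illustrative choice:

```python
# Performance- and reliability-related branches of the ISO/IEC 25010:2011
# quality model, expressed as a nested dictionary for illustration.
QUALITY_MODEL = {
    "performance_efficiency": [
        "time_behaviour",        # response times, throughput
        "resource_utilization",  # CPU, memory, I/O consumption
        "capacity",              # maximum limits the system can handle
    ],
    "reliability": [
        "maturity",         # how rarely the system fails in normal operation
        "availability",     # proportion of time the system is operational
        "fault_tolerance",  # behaviour in the presence of faults
        "recoverability",   # ability to restore state after a failure
    ],
}

if __name__ == "__main__":
    for characteristic, subs in QUALITY_MODEL.items():
        print(characteristic, "->", ", ".join(subs))
```

Each sub-characteristic above is a natural starting point for deriving concrete indicators in the next step.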
The GQM method is an effective way to find and establish indicator elements. GQM stands for "Goal - Question - Metric". It is a long-established analysis method, introduced by Victor Basili and David Weiss in 1984.
In essence, GQM analyzes a problem through a tree structure, layer by layer: first, questions are raised about how each goal can be achieved; then each question is broken down into multiple indicator elements that can support answering it; finally, the most appropriate indicator elements are selected.
Below, taking "evaluating indicator elements that reflect the performance and reliability characteristics of smart cockpit software" as an example, we establish GQM analysis trees for two goals: "evaluating the smoothness of smart cockpit home screen operations" and "calculating the failure rate and availability of smart cockpit systems and applications". When selecting among candidate indicators:
- The more questions an indicator can support, the higher its priority;
- The easier an indicator is to collect and calculate, the higher its priority.
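The two GQM trees described above can be sketched in code. The goal/question wording mirrors the text, but the specific metric names (jank ratio, crash rate, etc.) are illustrative assumptions, not indicators mandated by the article; the availability formula is the classic MTBF-based one:

```python
# A minimal GQM (Goal - Question - Metric) tree for the two example goals.
# Metric names are illustrative assumptions.
GQM_TREE = {
    "Goal: evaluate home-screen operation smoothness": {
        "Q: do animations render without jank?": [
            "average_fps",
            "janky_frame_ratio",  # frames over the 16.7 ms budget at 60 Hz
        ],
        "Q: does the UI respond promptly to touch input?": [
            "touch_response_latency_ms",
        ],
    },
    "Goal: quantify system/app failure rate and availability": {
        "Q: how often do failures occur?": [
            "crash_rate_per_1000_hours",
            "anr_rate_per_1000_hours",
        ],
        "Q: what share of time is the system usable?": [
            "availability",  # MTBF / (MTBF + MTTR)
        ],
    },
}

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Classic availability formula: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g. mean time between failures of 500 h, mean time to recover of 0.5 h
print(round(availability(500, 0.5), 4))  # 0.999
```

A metric like `availability` scores well on both priority criteria: it supports the failure-rate question directly and is cheap to compute from failure logs.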
Everything that can be automated should be automated.
When evaluating whether software functions meet requirements, we build large numbers of automated tests, forming a safety net that continuously ensures the software meets those requirements. For evaluating architectural characteristics, however, the traditional approach is more of a "campaign-style" evaluation:
- On the R&D side, a dedicated performance or reliability testing team is periodically assembled to test, armed with the indicator system, whether the indicator requirements are met from a black-box perspective, and to produce test reports;
- On the design side, various architecture discussions and review meetings are regularly arranged to evaluate both the design itself and whether the software correctly implements the design, producing large numbers of documents.
ASPICE (Automotive SPICE) is a typical case. Due to the complexity of its processes and documentation, and its strict requirements on each development stage, design and testing easily remain frozen at an earlier snapshot version and can never keep up with the speed of software change.
(Source: An ASPICE Overview)
In "Building Evolutionary Architectures", co-authored by Neal Ford, Rebecca Parsons and Patrick Kua, the fitness function is defined as "an objective function used to summarize how close a given design solution is to achieving the set aims". Introducing fitness functions means that architecture evaluation can be automated and routinized through engineering means.
(Source: "Building Evolutionary Architectures")
Once our indicators and models are converted into fitness functions, they can be bound into the R&D pipeline, enabling automated evaluation of architectural characteristics.
With automation in place, continuous attention to the architecture can then drive continuous improvement.
Based on the established fitness functions, the results they produce during daily builds, iteration testing, integration testing and other processes can form a complete set of performance and reliability evaluation reports. Taking the previous version's results as the baseline and comparing them with the latest version's, we can closely monitor the software's performance and reliability: which parts of the new version have improved and which have regressed is obvious at a glance.
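The baseline-comparison step described above can be sketched as a simple fitness function bound into a pipeline. The metric names, baseline values and the 10% tolerance are illustrative assumptions; the point is only the mechanism of failing a build on regression:

```python
# Sketch of a pipeline fitness check: compare the latest measurements
# against the previous version's baseline and report any metric that
# regressed beyond a tolerance. Names and thresholds are assumptions.

BASELINE = {"cold_start_ms": 1800, "janky_frame_ratio": 0.02}
TOLERANCE = 0.10  # allow up to 10% regression before failing

def fitness_check(latest: dict, baseline: dict, tolerance: float) -> list:
    """Return the metrics that regressed beyond tolerance
    (lower is better for every metric in this sketch)."""
    regressions = []
    for name, base in baseline.items():
        # a missing measurement counts as a regression
        if latest.get(name, float("inf")) > base * (1 + tolerance):
            regressions.append(name)
    return regressions

latest = {"cold_start_ms": 2100, "janky_frame_ratio": 0.019}
failed = fitness_check(latest, BASELINE, TOLERANCE)
print(failed)  # ['cold_start_ms']: 2100 ms exceeds 1800 ms * 1.10
if failed:
    print("fitness check failed:", failed)  # in CI, exit non-zero here
```

Run on every build, such checks turn the evaluation report into an automatic gate rather than a document someone has to read.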
3. An observability toolset to aid analysis
So far we have the means to support continuous performance and reliability evaluation, but evaluation essentially only exposes problems; the subsequent analysis and optimization are the hard part of continuous improvement.
Once a problem is exposed, optimization often needs to happen as quickly as possible. In business-oriented organizations, teams spend most of their time on business features and lack the capability to analyze and optimize issues such as performance and reliability; at that point, the organization usually looks for or hires technical experts to help. However, as a scarce resource, technical experts are often stretched thin when faced with a wide variety of problems.
Therefore, for organizations hoping to achieve continuous improvement, establishing engineering methods for analysis and optimization is essential to raising efficiency. The first of these is building an observability toolset. In the evaluation framework above, the role of indicators is mainly to report the current status: indicators can judge better or worse, but cannot help analyze root causes. Analyzing software problems requires reconstructing what happened while the system was running, how components interacted, and what data was produced; this information must be captured and recorded by observability tools.
With such a toolset in place, when an assessment finds that certain indicators have deteriorated, the runtime context and observation records can be quickly correlated via some basic information, allowing the problem to be rapidly analyzed, located and optimized.
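The "basic information" used for correlation is typically a shared identifier such as a trace id. A minimal sketch of that idea follows; the event and field names (launcher, `cold_start_ms`, etc.) are illustrative assumptions, and a real system would use a proper tracing backend rather than an in-memory list:

```python
# Minimal observability sketch: record structured runtime events tied
# together by a trace id, so that when an indicator deteriorates, the
# matching runtime context can be pulled up for root-cause analysis.
import json
import time
import uuid

EVENT_LOG = []  # stand-in for a real log/trace backend

def record_event(trace_id: str, component: str, event: str, **fields):
    """Append one structured event with a timestamp and trace id."""
    EVENT_LOG.append({
        "trace_id": trace_id,
        "ts": time.time(),
        "component": component,
        "event": event,
        **fields,
    })

def events_for_trace(trace_id: str) -> list:
    """Reassemble everything that happened during one interaction."""
    return [e for e in EVENT_LOG if e["trace_id"] == trace_id]

# Simulate one slow home-screen app launch, tied together by a trace id.
trace = uuid.uuid4().hex
record_event(trace, "launcher", "tap_icon")
record_event(trace, "system_server", "start_activity", queue_wait_ms=120)
record_event(trace, "app", "first_frame_drawn", cold_start_ms=2100)

# When the cold-start indicator regresses, pull up the full context:
print(json.dumps(events_for_trace(trace), indent=2))
```

Here, a regression in the cold-start metric leads straight to the recorded events of the affected interaction, instead of a blind reproduction attempt.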
Summary
The smart car market has broad prospects and is developing rapidly. As competition deepens, delivering the ultimate smart cockpit experience will surely become a major goal for automobile manufacturers.
This article has discussed methods for continuously evaluating and improving the performance and reliability of smart cockpit software from the perspective of software development and delivery, drawing on proven practices and explorations in the software field.
As more and more external investments and cross-field talents pour into the field of smart cars, I believe that huge value will continue to be created in related industries in the future.