This article from the golang tutorial column explains an intermittent error that occurs with the append method when Go code runs under high concurrency. The cause and the fix are described in detail below; I hope it helps anyone who runs into the same problem!
Background
While implementing an image-transcoding feature, up to 500 images had to be downloaded and then converted to another format;
Downloading and transcoding them one at a time took too long, so goroutines were used to download and transcode the 500 images concurrently;
During self-testing, however, it turned out that sometimes only 499 images (or fewer) were present after downloading, even though every download and every transcode reported success;
So began the process of printing logs and hunting down the bug.
Troubleshooting
Because sync.WaitGroup is used to wait for all goroutines to finish, my first suspicion was a problem with the WaitGroup synchronization;
The logs showed that all 500 downloads executed normally and that transcoding only started after every download had completed, which ruled out a problem with the WaitGroup;
The code is as follows:
import (
	"sync"

	"github.com/satori/go.uuid"
)

func downloadFiles(nWait *sync.WaitGroup, urls []interface{}, successFiles *[]string, failedFiles *[]string) {
	// Iterate over urls and download each one
	for _, value := range urls {
		go func(value interface{}) {
			defer nWait.Done() // goroutine finished, decrement the wait counter
			// The file name must be unique (otherwise two users operating on the
			// same file at the same time would make transcoding fail)
			fullname := config.TranscodeDownloadPath + "/" + uuid.NewV4().String()
			err := utils.DownloadCeph(value.(string), fullname) // download the file
			// Record the download status
			if err != nil {
				*failedFiles = append(*failedFiles, fullname)
			} else {
				*successFiles = append(*successFiles, fullname)
			}
		}(value)
	}
}

// Image urls passed in by the front end
strUrlList := req["strUrlList"]

// Initialize variables
nWait := sync.WaitGroup{} // async wait for multiple goroutines
var successFiles []string // successfully downloaded files
var failedFiles []string  // failed downloads

// Iterate over strUrlList and download
log.Error("Download started! length: ", len(strUrlList))
nWait.Add(len(strUrlList)) // number of goroutines to wait for
downloadFiles(&nWait, strUrlList, &successFiles, &failedFiles)
nWait.Wait() // block until all goroutines are done
log.Error("Download finished! length: ", len(successFiles))
//...
log.Error("Transcoding started!")
//...
The log is as follows:
2022-10-29 21:28:51.996 ERROR services/tools.go:149 Download started! length: 500
2022-10-29 21:28:52.486 ERROR services/tools.go:153 Download finished! length: 499
2022-10-29 21:28:52.486 ERROR services/tools.go:155 Transcoding started!
Next, print more detailed logs to examine the logic inside the for range loop;
Add a log line at the end of each download goroutine:
log.Error("Download goroutine finished: ", len(*successFiles))
A suspicious pattern appeared in the logs:
2022-10-29 21:40:38.407 ERROR services/tools.go:35 Download goroutine finished: 63
2022-10-29 21:40:38.407 ERROR services/tools.go:35 Download goroutine finished: 64
2022-10-29 21:40:38.407 ERROR services/tools.go:35 Download goroutine finished: 65
2022-10-29 21:40:38.407 ERROR services/tools.go:35 Download goroutine finished: 65
2022-10-29 21:40:38.408 ERROR services/tools.go:35 Download goroutine finished: 66
2022-10-29 21:40:38.408 ERROR services/tools.go:35 Download goroutine finished: 67
The length 65 appears twice in a row: two goroutines finished, but the slice only grew by one. When two goroutines execute append on the same slice at the same time, both read the same old length and write to the same position, so one of the two elements is silently lost. The cause of the problem was found;
Solving the problem
Assign by slice index instead of calling append;
The fixed code is as follows:
import (
	"sync"

	"github.com/satori/go.uuid"
)

func downloadFiles(nWait *sync.WaitGroup, urls []interface{}, successFiles *[]string, failedFiles *[]string) {
	// Iterate over urls and download each one
	for index, value := range urls {
		go func(index int, value interface{}) {
			defer nWait.Done() // goroutine finished, decrement the wait counter
			// The file name must be unique (otherwise two users operating on the
			// same file at the same time would make transcoding fail)
			fullname := config.TranscodeDownloadPath + "/" + uuid.NewV4().String()
			err := utils.DownloadCeph(value.(string), fullname) // download the file
			// Record the download status; each goroutine writes only its own
			// index, so no two goroutines touch the same element
			if err != nil {
				(*failedFiles)[index] = fullname
			} else {
				(*successFiles)[index] = fullname
			}
		}(index, value)
	}
}

// Image urls passed in by the front end
strUrlList := req["strUrlList"]

// Initialize variables
nWait := sync.WaitGroup{}                       // async wait for multiple goroutines
successFiles := make([]string, len(strUrlList)) // successfully downloaded files
failedFiles := make([]string, len(strUrlList))  // failed downloads

// Iterate over strUrlList and download
nWait.Add(len(strUrlList)) // number of goroutines to wait for
downloadFiles(&nWait, strUrlList, &successFiles, &failedFiles)
nWait.Wait() // block until all goroutines are done