Thread safety issues of slice and map in Golang

  1. What is thread safety?

When multiple threads access the same object concurrently, the code is thread safe if a synchronization mechanism guarantees that every thread executes correctly and obtains correct results, with no data corruption. An object with this guarantee is said to be thread safe.
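To make the definition concrete, here is a minimal sketch of code that is not thread safe: many goroutines increment a shared counter with no synchronization, so some increments are lost. Running it with go run -race reports the data race.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	counter := 0
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++ // unsynchronized read-modify-write: not thread safe
		}()
	}
	wg.Wait()
	fmt.Println(counter) // frequently prints less than 1000
}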

  2. Thread safety issues of slice and map

First of all, be clear that under concurrent access, slice and map are not thread safe by default.
2.1 slice thread safety issues

Take a look at the example below

package main

import (
	"fmt"
	"sync"
)

var w sync.WaitGroup

func sliceSafety() {
	var s []int
	var sum int
	fmt.Printf("----------: len(s): %d, cap(s): %d, s: %v \n", len(s), cap(s), s)
	for i := 0; i < 10; i++ {
		w.Add(1)
		go func(i int) {
			defer w.Done()
			sum++            // unsynchronized write to the shared counter
			s = append(s, i) // unsynchronized append to the shared slice
			fmt.Printf("==========i: %d: len(s): %d, cap(s): %d, s: %v \n", i, len(s), cap(s), s)
		}(i)
	}
	w.Wait()
	fmt.Println(sum)
	fmt.Println(s, len(s))
}

func main() {
	sliceSafety()
}

Execution results:

# first execution
----------: len(s): 0, cap(s): 0, s: [] 
==========i: 9: len(s): 2, cap(s): 2, s: [3 9] 
==========i: 1: len(s): 1, cap(s): 1, s: [1] 
==========i: 3: len(s): 1, cap(s): 1, s: [3] 
==========i: 2: len(s): 3, cap(s): 4, s: [3 9 2] 
==========i: 4: len(s): 4, cap(s): 4, s: [3 9 2 4] 
==========i: 7: len(s): 6, cap(s): 8, s: [3 9 2 4 5 7] 
==========i: 8: len(s): 6, cap(s): 8, s: [3 9 2 4 0 8] 
==========i: 5: len(s): 5, cap(s): 8, s: [3 9 2 4 5] 
==========i: 6: len(s): 7, cap(s): 8, s: [3 9 2 4 0 8 6] 
==========i: 0: len(s): 5, cap(s): 8, s: [3 9 2 4 0] 
10
[3 9 2 4 0 8 6] 7
# second execution
----------: len(s): 0, cap(s): 0, s: [] 
==========i: 0: len(s): 1, cap(s): 1, s: [0] 
==========i: 2: len(s): 3, cap(s): 4, s: [0 3 2] 
==========i: 9: len(s): 4, cap(s): 4, s: [0 3 2 9] 
==========i: 6: len(s): 5, cap(s): 8, s: [0 3 2 9 6] 
==========i: 7: len(s): 6, cap(s): 8, s: [0 3 2 9 6 7] 
==========i: 4: len(s): 7, cap(s): 8, s: [0 3 2 9 6 7 4] 
==========i: 8: len(s): 8, cap(s): 8, s: [0 3 2 9 6 7 4 8] 
==========i: 3: len(s): 2, cap(s): 2, s: [0 3] 
==========i: 5: len(s): 9, cap(s): 16, s: [0 3 2 9 6 7 4 8 5] 
==========i: 1: len(s): 9, cap(s): 16, s: [0 3 2 9 6 7 4 8 1] 
10
[0 3 2 9 6 7 4 8 5] 9

The results differ from one run to the next. Worse, even within a single run, values already stored in s get overwritten, as in the first execution result:

==========i: 7: len(s): 6, cap(s): 8, s: [3 9 2 4 5 7] # index 4 holds 5
==========i: 8: len(s): 6, cap(s): 8, s: [3 9 2 4 0 8] # index 4 now holds 0: the value at the shared index has been clobbered
==========i: 5: len(s): 5, cap(s): 8, s: [3 9 2 4 5]
==========i: 6: len(s): 7, cap(s): 8, s: [3 9 2 4 0 8 6]
==========i: 0: len(s): 5, cap(s): 8, s: [3 9 2 4 0]

Because the goroutines run concurrently with no synchronization, two goroutines can observe the same length, write their values to the same index, and the later write overwrites the earlier one. Note that a slice is a reference-like type: the slice header points to an underlying array, so all the goroutines here share one backing array. Racing appends therefore collide on the same index of that shared array, corrupting elements that were already written and losing appends outright, which is why the final lengths above are 7 and 9 instead of 10.
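The overwrite is easy to reproduce even without goroutines. In this minimal sketch, two appends start from the same slice value and therefore write to the same index of the shared backing array:

package main

import "fmt"

func main() {
	s := make([]int, 0, 4) // len 0, cap 4: appends reuse the backing array
	a := append(s, 1)      // writes 1 at index 0 of the shared array
	b := append(s, 2)      // also writes at index 0, overwriting the 1
	fmt.Println(a, b)      // prints [2] [2]: the first value was clobbered
}

In the concurrent version, two goroutines that both observe len(s) == n behave exactly like a and b here: they write to the same index n, and one value is lost.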
2.2 Solutions

So what’s the solution? Add synchronization so that the modifications execute serially; concretely, acquire a lock before modifying the shared data and release it afterwards.

var w sync.WaitGroup
var m sync.Mutex

func sliceSafety() {
	var s []int
	var sum int
	fmt.Printf("----------: len(s): %d, cap(s): %d, s: %v \n", len(s), cap(s), s)
	for i := 0; i < 10; i++ {
		w.Add(1)
		go func(i int) {
			defer w.Done()
			m.Lock() // serialize access to sum and s
			sum++
			s = append(s, i)
			fmt.Printf("==========i: %d: len(s): %d, cap(s): %d, s: %v \n", i, len(s), cap(s), s)
			m.Unlock()
		}(i)
	}
	w.Wait()
	fmt.Println(sum)
	fmt.Println(s, len(s))
}

Execution results:

# first execution
----------: len(s): 0, cap(s): 0, s: [] 
==========i: 9: len(s): 1, cap(s): 1, s: [9] 
==========i: 7: len(s): 2, cap(s): 2, s: [9 7] 
==========i: 8: len(s): 3, cap(s): 4, s: [9 7 8] 
==========i: 6: len(s): 4, cap(s): 4, s: [9 7 8 6] 
==========i: 5: len(s): 5, cap(s): 8, s: [9 7 8 6 5] 
==========i: 1: len(s): 6, cap(s): 8, s: [9 7 8 6 5 1] 
==========i: 0: len(s): 7, cap(s): 8, s: [9 7 8 6 5 1 0] 
==========i: 2: len(s): 8, cap(s): 8, s: [9 7 8 6 5 1 0 2] 
==========i: 4: len(s): 9, cap(s): 16, s: [9 7 8 6 5 1 0 2 4] 
==========i: 3: len(s): 10, cap(s): 16, s: [9 7 8 6 5 1 0 2 4 3] 
10
[9 7 8 6 5 1 0 2 4 3] 10

# second execution
----------: len(s): 0, cap(s): 0, s: [] 
==========i: 2: len(s): 1, cap(s): 1, s: [2] 
==========i: 9: len(s): 2, cap(s): 2, s: [2 9] 
==========i: 3: len(s): 3, cap(s): 4, s: [2 9 3] 
==========i: 4: len(s): 4, cap(s): 4, s: [2 9 3 4] 
==========i: 5: len(s): 5, cap(s): 8, s: [2 9 3 4 5] 
==========i: 6: len(s): 6, cap(s): 8, s: [2 9 3 4 5 6] 
==========i: 7: len(s): 7, cap(s): 8, s: [2 9 3 4 5 6 7] 
==========i: 8: len(s): 8, cap(s): 8, s: [2 9 3 4 5 6 7 8] 
==========i: 1: len(s): 9, cap(s): 16, s: [2 9 3 4 5 6 7 8 1] 
==========i: 0: len(s): 10, cap(s): 16, s: [2 9 3 4 5 6 7 8 1 0] 
10
[2 9 3 4 5 6 7 8 1 0] 10

The results show that with the lock in place, the value stored at each index of s is never overwritten, all 10 elements arrive, and the final length is always 10: the lock solves the thread safety problem. Whether to use the mutex sync.Mutex or the read-write lock sync.RWMutex depends on the workload; when reads far outnumber writes, the read-write lock performs better because readers do not block one another.
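For illustration, a read-heavy structure could be guarded with sync.RWMutex along these lines (a minimal sketch of the pattern, not part of the example above):

package main

import (
	"fmt"
	"sync"
)

var (
	rw sync.RWMutex
	s  []int
)

func appendValue(v int) {
	rw.Lock() // exclusive lock: only one writer at a time
	defer rw.Unlock()
	s = append(s, v)
}

func readAll() []int {
	rw.RLock() // shared lock: many readers may proceed in parallel
	defer rw.RUnlock()
	out := make([]int, len(s))
	copy(out, s) // return a copy so callers never observe concurrent writes
	return out
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			appendValue(i)
		}(i)
	}
	wg.Wait()
	fmt.Println(readAll())
}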
2.3 map thread safety issues

Take a look at the example below

package main

import (
	"fmt"
	"sync"
)

var wg sync.WaitGroup

func mapThread(n int) {
	mp := make(map[int]int)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			mp[i] = i // unsynchronized write to the shared map
		}(i)
	}
	wg.Wait()
	fmt.Println(len(mp))
}

func main() {
	mapThread(10)
}

Execution result:

fatal error: concurrent map writes	# concurrent map write
goroutine 15 [running]:
runtime.throw({0x10a495b?, 0x0?})
	/usr/local/go/src/runtime/panic.go:992 +0x71 fp=0xc000042f48 sp=0xc000042f18 pc=0x102f411
runtime.mapassign_fast64(0x0?, 0x0?, 0x9)
	/usr/local/go/src/runtime/map_fast64.go:102 +0x2c5 fp=0xc000042f80 sp=0xc000042f48 pc=0x100f3a5
main.mapThread.func1(0x9)
	/Users/anker/kevin_go/src/go_learning/01_base/23_go_map_thread_safety/01_map.go:16 +0x6e fp=0xc000042fc8 sp=0xc000042f80 pc=0x1089eae
main.mapThread.func2()
	/Users/anker/kevin_go/src/go_learning/01_base/23_go_map_thread_safety/01_map.go:17 +0x2a fp=0xc000042fe0 sp=0xc000042fc8 pc=0x1089e0a
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000042fe8 sp=0xc000042fe0 pc=0x105a8a1
created by main.mapThread
	/Users/anker/kevin_go/src/go_learning/01_base/23_go_map_thread_safety/01_map.go:13 +0x3c
goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc00000c108?)
	/usr/local/go/src/runtime/sema.go:56 +0x25
sync.(*WaitGroup).Wait(0x60?)
	/usr/local/go/src/sync/waitgroup.go:136 +0x52
main.mapThread(0xa)
	/Users/anker/kevin_go/src/go_learning/01_base/23_go_map_thread_safety/01_map.go:19 +0xf6
main.main()
	/Users/anker/kevin_go/src/go_learning/01_base/23_go_map_thread_safety/01_map.go:24 +0x1e
goroutine 14 [runnable]:
main.mapThread.func1(0x8)
	/Users/anker/kevin_go/src/go_learning/01_base/23_go_map_thread_safety/01_map.go:16 +0x6e
created by main.mapThread
	/Users/anker/kevin_go/src/go_learning/01_base/23_go_map_thread_safety/01_map.go:13 +0x

This run aborts immediately with fatal error: concurrent map writes. The cause is the same as with the slice: the modifications are not protected by a lock, so the goroutines race on the shared resource. The difference is that the Go runtime detects concurrent map writes and crashes the program with a fatal error, whereas the slice version corrupts data silently.
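Note that this built-in map check is only a best-effort safeguard, not a general race detector. The slice example, which never crashes, can still be diagnosed by running with the race detector enabled, for example go run -race main.go (the file name here is just a placeholder), which reports the unsynchronized reads and writes.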
2.4 Solutions
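
The fix is the same as for the slice: serialize the map writes with a mutex.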

var wg sync.WaitGroup
var m sync.Mutex

func mapThread(n int) {
	mp := make(map[int]int)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			m.Lock() // serialize writes to the shared map
			mp[i] = i
			m.Unlock()
		}(i)
	}
	wg.Wait()
	fmt.Println(len(mp))
}
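
With the mutex in place, all 10 writes succeed and the program prints 10. The standard library also offers sync.Map, a map type that is safe for concurrent use without explicit locking; here is a minimal sketch of the same workload:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var mp sync.Map // safe for concurrent use; no explicit lock needed
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			mp.Store(i, i) // concurrent Store is safe
		}(i)
	}
	wg.Wait()
	n := 0
	mp.Range(func(_, _ any) bool { n++; return true })
	fmt.Println(n) // 10
}

sync.Map is optimized for keys that are written once and read many times, or for goroutines that work on disjoint key sets; for general workloads the mutex-guarded map above is usually the simpler choice.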
