Goroutines are a fundamental feature of Go, enabling concurrent execution of functions with minimal overhead. Understanding how to effectively use goroutines can significantly enhance the performance and responsiveness of your applications. This section provides an in-depth look at goroutines, exploring their creation, management, and optimization.
What Are Goroutines?
A goroutine is a lightweight thread of execution managed by the Go runtime. Goroutines start with a very small stack that grows and shrinks as needed, so a single program can comfortably run thousands of them.
Creating Goroutines
To create a goroutine, prefix a function or method call with the go keyword:

func sayHello() {
    fmt.Println("Hello, World!")
}

func main() {
    go sayHello()
    // Additional logic here
}
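Note that main does not wait for the goroutines it starts: if main returns first, the program exits and sayHello may never run. As a minimal sketch, the same program can pause briefly so the goroutine gets a chance to finish (the sleep is purely illustrative; later sections show proper synchronization with WaitGroups and channels):

func main() {
    go sayHello()
    time.Sleep(100 * time.Millisecond) // crude wait, for illustration only
}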
Starting Goroutines
Goroutines can also be started from anonymous functions, which is handy for short, one-off tasks:

go func() {
    fmt.Println("Hello from a goroutine!")
}()
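Arguments to the function are evaluated when the go statement executes, so passing a value as a parameter is a simple way to give each goroutine its own copy. A small sketch (the loop bound is arbitrary):

for i := 0; i < 3; i++ {
    go func(n int) {
        fmt.Println("goroutine", n)
    }(i)
}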
Goroutine Scheduling
The Go runtime multiplexes goroutines onto a small pool of OS threads using its own scheduler. The runtime.GOMAXPROCS function sets the maximum number of OS threads that can execute Go code simultaneously; by default, it is set to the number of CPU cores.
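A quick sketch of inspecting these values; calling runtime.GOMAXPROCS with an argument of 0 reports the current setting without changing it:

func main() {
    fmt.Println("CPU cores:", runtime.NumCPU())
    fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // 0 queries without modifying
}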
Goroutine Termination
A goroutine terminates when its function returns. There is no way to kill a goroutine from the outside, so long-running goroutines need an explicit exit signal (see Resource Management below), and callers that depend on their work need a way to wait for them to finish.
WaitGroups
A sync.WaitGroup is used to wait for a collection of goroutines to finish executing:

var wg sync.WaitGroup

func worker(id int) {
    defer wg.Done()
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    for i := 1; i <= 5; i++ {
        wg.Add(1)
        go worker(i)
    }
    wg.Wait()
}
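The example above uses a package-level WaitGroup for brevity. A variation that passes the WaitGroup explicitly keeps the dependency visible and the worker reusable; this is a sketch, not part of the original example:

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Worker %d running\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 5; i++ {
        wg.Add(1)
        go worker(i, &wg)
    }
    wg.Wait()
}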
Mutexes
sync.Mutex and sync.RWMutex provide mutual exclusion to protect shared data. (The example below waits with time.Sleep purely for brevity; in real code, prefer a WaitGroup as shown above.)

var mu sync.Mutex
var count int

func increment() {
    mu.Lock()
    count++
    mu.Unlock()
}

func main() {
    for i := 0; i < 1000; i++ {
        go increment()
    }
    time.Sleep(time.Second) // Wait for goroutines to finish
    fmt.Println("Final count:", count)
}
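sync.RWMutex pays off when reads far outnumber writes, because any number of readers can hold the lock at the same time while writers still get exclusive access. A minimal sketch; the Counter type is illustrative, not from the original:

type Counter struct {
    mu    sync.RWMutex
    value int
}

func (c *Counter) Get() int {
    c.mu.RLock() // many readers may hold RLock concurrently
    defer c.mu.RUnlock()
    return c.value
}

func (c *Counter) Inc() {
    c.mu.Lock() // writers take the exclusive lock
    defer c.mu.Unlock()
    c.value++
}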
Channels
Channels let goroutines communicate by sending and receiving values; a send on an unbuffered channel blocks until another goroutine receives it, which also synchronizes the two sides:

func producer(ch chan int) {
    for i := 0; i < 5; i++ {
        ch <- i
    }
    close(ch)
}

func consumer(ch chan int) {
    for val := range ch {
        fmt.Println("Received:", val)
    }
}

func main() {
    ch := make(chan int)
    go producer(ch)
    go consumer(ch)
    time.Sleep(time.Second) // Wait for goroutines to finish
}
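Channels can also be buffered, in which case sends block only once the buffer is full. A small sketch (the buffer size of 2 is arbitrary):

func main() {
    ch := make(chan int, 2) // up to 2 values can be queued without a receiver
    ch <- 1
    ch <- 2
    close(ch)
    for v := range ch {
        fmt.Println("Buffered value:", v)
    }
}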
Pipelines
A pipeline chains stages together with channels: each stage receives values, transforms them, and sends the results downstream, closing its output channel when it is done so the next stage's range loop can finish:

func stage1(out chan<- int) {
    for i := 1; i <= 5; i++ {
        out <- i
    }
    close(out)
}

func stage2(in <-chan int, out chan<- int) {
    for val := range in {
        out <- val * 2
    }
    close(out)
}

func main() {
    ch1 := make(chan int)
    ch2 := make(chan int)
    go stage1(ch1)
    go stage2(ch1, ch2)
    for result := range ch2 {
        fmt.Println("Result:", result)
    }
}
Worker Pools
A worker pool runs a fixed number of goroutines that pull tasks from a shared channel and send results on another, bounding concurrency no matter how many tasks arrive:

func worker(id int, tasks <-chan int, results chan<- int) {
    for task := range tasks {
        fmt.Printf("Worker %d processing task %d\n", id, task)
        time.Sleep(time.Second) // Simulate work
        results <- task * 2
    }
}

func main() {
    const numWorkers = 3
    const numTasks = 5
    tasks := make(chan int, numTasks)
    results := make(chan int, numTasks)
    for i := 1; i <= numWorkers; i++ {
        go worker(i, tasks, results)
    }
    for i := 1; i <= numTasks; i++ {
        tasks <- i
    }
    close(tasks)
    for i := 1; i <= numTasks; i++ {
        fmt.Println("Result:", <-results)
    }
}
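Here main knows exactly how many results to expect. When that count is not known up front, a common approach is to close the results channel once every worker has returned, for example with a sync.WaitGroup. This is a sketch of that variation, reusing the worker function above:

func main() {
    tasks := make(chan int, 5)
    results := make(chan int, 5)
    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            worker(id, tasks, results)
        }(i)
    }
    for i := 1; i <= 5; i++ {
        tasks <- i
    }
    close(tasks)
    go func() {
        wg.Wait()
        close(results) // safe once every worker has returned
    }()
    for r := range results {
        fmt.Println("Result:", r)
    }
}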
Select Statement
The select statement lets a goroutine wait on several channel operations at once and proceed with whichever is ready first:

func main() {
    ch1 := make(chan string)
    ch2 := make(chan string)
    go func() {
        time.Sleep(time.Second)
        ch1 <- "Hello from ch1"
    }()
    go func() {
        time.Sleep(2 * time.Second)
        ch2 <- "Hello from ch2"
    }()
    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-ch1:
            fmt.Println(msg1)
        case msg2 := <-ch2:
            fmt.Println(msg2)
        }
    }
}
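select also combines naturally with time.After for timeouts, or with a default case to avoid blocking at all. A small sketch; the fetch function and the one-second limit are illustrative:

func fetch(ch chan<- string) {
    time.Sleep(2 * time.Second) // simulate slow work
    ch <- "result"
}

func main() {
    ch := make(chan string, 1) // buffered so fetch can finish even after a timeout
    go fetch(ch)
    select {
    case msg := <-ch:
        fmt.Println(msg)
    case <-time.After(time.Second):
        fmt.Println("timed out waiting for result")
    }
}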
Resource Management
Avoiding Goroutine Leaks
Use the context package to manage goroutine lifecycles and prevent leaks; each worker watches ctx.Done() and exits when the context is cancelled or times out:

func worker(ctx context.Context, id int) {
    for {
        select {
        case <-ctx.Done():
            fmt.Printf("Worker %d exiting\n", id)
            return
        default:
            fmt.Printf("Worker %d working\n", id)
            time.Sleep(500 * time.Millisecond)
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()
    for i := 1; i <= 3; i++ {
        go worker(ctx, i)
    }
    time.Sleep(3 * time.Second)
}
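Another frequent source of leaks is a goroutine blocked forever on a channel send that nobody receives, for example when the caller gives up waiting. One common remedy is to give the result channel a buffer of one so the sender can always complete and exit. This sketch is illustrative; compute and startWork are not from the original:

func compute() int {
    time.Sleep(100 * time.Millisecond) // stand-in for real work
    return 42
}

// Without the buffer, a caller that never receives would leave the
// goroutine blocked on the send forever.
func startWork() <-chan int {
    ch := make(chan int, 1)
    go func() {
        ch <- compute()
    }()
    return ch
}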
Debugging Goroutines
You can check how many goroutines are currently running with runtime.NumGoroutine(). For deeper inspection, use pprof for profiling goroutines and identifying bottlenecks or leaks.
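A small sketch of both tools together: log the goroutine count and expose the standard net/http/pprof endpoints so goroutine stacks can be inspected at /debug/pprof/goroutine while the program runs (the port is arbitrary):

package main

import (
    "fmt"
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof handlers
    "runtime"
)

func main() {
    fmt.Println("Goroutines at start:", runtime.NumGoroutine())
    http.ListenAndServe("localhost:6060", nil) // then open /debug/pprof/goroutine
}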
By mastering goroutines and their associated patterns, you can effectively leverage Go's concurrency model to build high-performance, scalable applications. This deep dive equips you with the knowledge to handle complex concurrent programming challenges and to optimize your Go code for concurrency and parallelism.