rust-concurrency
// Concurrency and async programming expert. Handles Send, Sync, threads, async/await, tokio, channels, Mutex, RwLock, deadlock prevention, and race condition debugging.
| Field | Value |
|---|---|
| name | rust-concurrency |
| description | Concurrency and async programming expert. Handles Send, Sync, threads, async/await, tokio, channels, Mutex, RwLock, deadlock prevention, and race condition debugging. |
| metadata | {"triggers":["thread","spawn","channel","mpsc","Mutex","RwLock","Atomic","async","await","Future","tokio","deadlock","race condition","Send","Sync"]} |
| Dimension | Concurrency (threads) | Async (async/await) |
|---|---|---|
| Memory | Each thread has separate stack | Single thread reused |
| Blocking | Blocks OS thread | Doesn't block, yields |
| Use case | CPU-intensive | I/O-intensive |
| Complexity | Simple and direct | Requires runtime |
Key Insight: Threads for parallelism, async for concurrency.
Basic types → automatically Send
Contains references → Send only if the referenced types are Sync
Raw pointers → NOT Send
Rc → NOT Send (non-atomic ref counting)
Rule: If all fields are Send, the type is Send.
&T where T: Sync → automatically Sync
RefCell → NOT Sync (runtime borrow checking is not thread-safe)
MutexGuard → NOT Sync (intentionally)
Rule: &T is Send if T is Sync.
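These rules can be checked mechanically at compile time with empty generic functions, a common sketch (`assert_send`/`assert_sync` are illustrative names here, not items from std):

```rust
use std::sync::Arc;
use std::thread;

// Compile-time probes: each call only compiles if the bound holds.
fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn main() {
    assert_send::<Arc<i32>>(); // atomic ref counting: Send
    assert_sync::<Arc<i32>>(); // ...and Sync
    assert_send::<Vec<u8>>();  // all fields Send => Send
    // assert_send::<std::rc::Rc<i32>>(); // ❌ does not compile: Rc is not Send

    // Runtime demonstration: a Send value can cross a thread boundary.
    let data = Arc::new(41);
    let moved = Arc::clone(&data);
    let handle = thread::spawn(move || *moved + 1);
    assert_eq!(handle.join().unwrap(), 42);
}
```

Uncommenting the `Rc` line turns the rule into a compile error rather than a runtime bug, which is exactly the guarantee Send/Sync provide.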
```rust
use std::sync::{Arc, Mutex};

let counter = Arc::new(Mutex::new(0));
let mut handles = vec![];

for _ in 0..10 {
    let counter = Arc::clone(&counter);
    let handle = std::thread::spawn(move || {
        let mut num = counter.lock().unwrap();
        *num += 1;
    });
    handles.push(handle);
}

for handle in handles {
    handle.join().unwrap();
}
```
When to use: Multiple threads need to mutate shared data.
Trade-offs: Lock contention can limit scalability.
```rust
use std::sync::mpsc;
use std::thread;

let (tx, rx) = mpsc::channel();
thread::spawn(move || {
    tx.send("hello").unwrap();
});
println!("{}", rx.recv().unwrap());
```
When to use: Threads communicate without shared state.
Trade-offs: Copy/move overhead for messages.
```rust
#[tokio::main]
async fn main() {
    let handle = tokio::spawn(async {
        // Async task runs concurrently with its spawner
        fetch_data().await
    });
    let result = handle.await.unwrap();
}
```
When to use: I/O-bound operations (network, filesystem).
Trade-offs: Requires async runtime, function coloring.
CPU-intensive task?
→ Use threads (rayon for data parallelism)
I/O-intensive task?
→ Use async/await (tokio, async-std)
Both?
→ Use async with spawn_blocking for CPU work
No shared state?
→ Message passing (mpsc channels)
Read-heavy shared state?
→ Arc<RwLock<T>>
Write-heavy shared state?
→ Arc<Mutex<T>> or lock-free alternatives
Simple counters/flags?
→ Atomic types (AtomicUsize, AtomicBool)
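For the simple-counter case, atomics avoid the lock entirely. A minimal std-only sketch (`parallel_count` is a hypothetical helper name):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Count events from several threads without a Mutex.
fn parallel_count(threads: usize, increments: usize) -> usize {
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..increments {
                    // Relaxed suffices for a plain counter with no
                    // ordering requirements between threads.
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    assert_eq!(parallel_count(4, 1000), 4000);
}
```

`fetch_add` is a single atomic read-modify-write, so no increment is lost; for anything needing a compound update (check-then-set across fields), reach for a Mutex instead.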
Check Send bounds
→ Can transfer ownership?
Check Sync bounds
→ Can share references?
Test for data races
→ Use miri, loom, or thread sanitizers
| Error | Cause | Solution |
|---|---|---|
| E0277 Send not satisfied | Contains non-Send types | Check all fields, replace Rc with Arc |
| E0277 Sync not satisfied | Shared reference type not Sync | Wrap with Mutex/RwLock |
| Deadlock | Inconsistent lock ordering | Establish and follow lock hierarchy |
| MutexGuard across await | Lock held while suspended | Scope lock before await point |
| Data race (runtime) | Improper synchronization | Use proper sync primitives |
```rust
// Always lock A before B
let _lock_a = resource_a.lock();
let _lock_b = resource_b.lock();
// Never lock B before A elsewhere
```
```rust
// ❌ Bad: lock held too long
let guard = data.lock();
do_work(&guard);
more_work(); // still locked

// ✅ Good: release early
{
    let guard = data.lock();
    do_work(&guard);
} // lock released
more_work();
```
```rust
// ❌ Bad: lock across await
let guard = mutex.lock().unwrap();
async_call().await; // DEADLOCK RISK

// ✅ Good: drop lock before await
let value = {
    let guard = mutex.lock().unwrap();
    guard.clone()
}; // lock dropped
async_call().await;
```
| Strategy | When to Use | Trade-offs |
|---|---|---|
| Fine-grained locking | Lock small portions of data | More complex, but reduces contention |
| RwLock | Read-heavy workloads | Slower writes than Mutex |
| Atomics | Simple counters/flags | Limited operations, no compound ops |
| Message passing | Avoid shared state | Copy/move overhead |
| Lock-free structures | High contention | Complex, use crates (crossbeam) |
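The RwLock row can be illustrated with a std-only sketch: many readers hold the lock simultaneously, while a writer takes it exclusively (`read_from_threads` is a hypothetical helper name):

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Read a shared value from several threads at once.
fn read_from_threads(config: &Arc<RwLock<String>>, readers: usize) -> Vec<String> {
    (0..readers)
        .map(|_| {
            let config = Arc::clone(config);
            // read() allows any number of simultaneous readers
            thread::spawn(move || config.read().unwrap().clone())
        })
        .collect::<Vec<_>>()
        .into_iter()
        .map(|h| h.join().unwrap())
        .collect()
}

fn main() {
    let config = Arc::new(RwLock::new(String::from("v1")));
    assert_eq!(read_from_threads(&config, 4), vec!["v1"; 4]);

    // write() blocks until all readers are gone, then holds exclusively
    *config.write().unwrap() = String::from("v2");
    assert_eq!(*config.read().unwrap(), "v2");
}
```

If writes are frequent, the reader/writer bookkeeping costs more than a plain Mutex, which is the trade-off the table notes.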
```rust
// Spawn independent task
tokio::spawn(async move {
    process_data(data).await
});

// Spawned futures must be 'static: clone the Arc before moving it in
let data = Arc::clone(&data);
tokio::spawn(async move {
    work_with(data).await
});
```
```rust
// Wait for all to complete
let (result1, result2, result3) = tokio::join!(
    fetch_user(),
    fetch_posts(),
    fetch_comments()
);

// First to complete wins; the other branch is cancelled
let result = tokio::select! {
    r = fetch_from_primary() => r,
    r = fetch_from_backup() => r,
};
```
```rust
use tokio::time::{timeout, Duration};

match timeout(Duration::from_secs(5), long_operation()).await {
    Ok(result) => { /* use result */ }
    Err(_) => { /* operation timed out */ }
}
```
When reviewing concurrent code:
```bash
# Check compilation with thread safety
cargo check

# Run tests with thread sanitizer (requires nightly)
RUSTFLAGS="-Z sanitizer=thread" cargo +nightly test

# Test with miri (detect undefined behavior)
cargo +nightly miri test

# Use loom for exhaustive concurrency testing
cargo test --features loom

# Lint: flags a Mutex around a plain integer/bool where an atomic would do
cargo clippy -- -W clippy::mutex_atomic
```
Symptom: E0277 error, Rc cannot be sent between threads
Fix: Replace Rc with Arc
```rust
// ❌ Bad
let data = Rc::new(value);
thread::spawn(move || { /* use data */ });

// ✅ Good
let data = Arc::new(value);
thread::spawn(move || { /* use data */ });
```
Symptom: Deadlock or "future cannot be sent between threads safely"
Fix: Drop lock before await
```rust
// ❌ Bad
let guard = mutex.lock().unwrap();
async_fn().await; // guard still held across the await

// ✅ Good: the temporary guard is dropped at the end of the statement
let value = mutex.lock().unwrap().clone();
async_fn().await;
```
Symptom: Borrow checker errors when spawning threads
Fix: Clone Arc before moving into closure
```rust
// ❌ Bad
let data = Arc::new(vec![1, 2, 3]);
thread::spawn(move || { /* data moved */ });
// data is gone

// ✅ Good
let data = Arc::new(vec![1, 2, 3]);
let data_clone = Arc::clone(&data);
thread::spawn(move || { /* data_clone moved */ });
// data still available
```