Go maps reuse memory on overwrites, which is why orcaman achieves 0 B/op for pure updates. xsync's custom bucket structure allocates 24 B/op per write even when overwriting existing keys.
At 1M writes/second with 90% overwrites: xsync allocates ~27 MB/s, orcaman ~6 MB/s. The trade is 24 bytes/op for 2x speed under contention. Whether this matters depends on whether your bottleneck is CPU or memory allocation.
Benchmark code: standard Go testing framework, 8 workers, 100k keys.
puzpuzpuz-hn 44 days ago [-]
An allocation-rate comparison is included. If your application writes into the map most of the time, you should go with a plain map + RWMutex (or orcaman/concurrent-map). But if, for instance, you're using the map as a cache, read operations will dominate and better read scalability becomes important. As an example, the Otter cache library uses a modified variant of xsync.Map, not a plain map + RWMutex.
hinkley 44 days ago [-]
How does reuse avoid false sharing between cores? Since this is a concurrent hashmap we're talking about.
tl2do 44 days ago [-]
I focused on B/op because it was the only apparent weakness I saw. My “reuse” note was about allocation behavior, not false sharing. We’re talking about different concerns.
hinkley 44 days ago [-]
I thought you meant the allocation behavior when one core deletes, another core adds, and they reuse the same memory allocation.
Is that what you meant? Because if it is then you now have potential for the problem I described.
withinboredom 44 days ago [-]
Looks good! There's an important thing missing from the benchmarks though:
- cpu usage under concurrency: many of these spin-lock or use atomics, which can burn up to 100% cpu time just spinning.
- latency under concurrency: atomics cause cache-line bouncing, which kills latency, especially p99 latency.
puzpuzpuz-hn 44 days ago [-]
Yup, that's a valid point. I'll consider adding these metrics.
withinboredom 43 days ago [-]
Am I reading the benchmark code right that it uses the same prefix for all string keys? That would be pathological for any trie-based implementation.
vanderZwan 44 days ago [-]
I don't write Go, but respect to the author for trying to list the trade-off considerations for each of the implementations tested, and not just proclaiming their library the overall winner.
puzpuzpuz-hn 44 days ago [-]
Thanks. There are downsides in each approach, e.g. if you care about minimal allocation rate, you should go with plain map + RWMutex. So yeah, no silver bullet.
vanderZwan 43 days ago [-]
There almost never is. The fact that you acknowledge it and give context would only make me more confident in trying out your library, or any of the others listed (if I wrote Go code, that is).
eatonphil 44 days ago [-]
Will we also eventually get a generic sync.Map?
darkr 44 days ago [-]
It’d be nice to have in stdlib, but it’s pretty trivial to write a generic wrapper for it
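A wrapper along the lines this comment means might look like the sketch below. The `Map` type and its methods are hypothetical names for illustration, not a published API; the type parameters only add compile-time safety on top of `sync.Map`'s `any`-based interface:

```go
package main

import (
	"fmt"
	"sync"
)

// Map is a minimal generic wrapper around sync.Map. The underlying
// storage and concurrency behavior are unchanged; the wrapper just
// removes the type assertions from call sites.
type Map[K comparable, V any] struct {
	m sync.Map
}

func (m *Map[K, V]) Store(key K, value V) {
	m.m.Store(key, value)
}

func (m *Map[K, V]) Load(key K) (V, bool) {
	v, ok := m.m.Load(key)
	if !ok {
		var zero V
		return zero, false
	}
	return v.(V), true
}

func (m *Map[K, V]) Delete(key K) {
	m.m.Delete(key)
}

func main() {
	var m Map[string, int]
	m.Store("a", 1)
	v, ok := m.Load("a")
	fmt.Println(v, ok) // prints: 1 true
}
```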
jeffbee 44 days ago [-]
Almost certainly, since the internal HashTrieMap is already generic. But for now this author's package stands in nicely.
puzpuzpuz-hn 44 days ago [-]
Would be great to see that - there are multiple GH issues for that. But so far, I'm not convinced that Google prioritizes community requests over its own needs.
candiddevmike 44 days ago [-]
Idk why but I tend to shy away from non std libs that use unsafe (like xsync). I'm sure the code is fine, but I'd rather take the performance hit I guess.
puzpuzpuz-hn 44 days ago [-]
Unsafe usage in the recent xsync versions is very limited (runtime.cheaprand only). On the other hand, your point is valid and it'd be great to see standard library improvements.
mappu 44 days ago [-]
A few release cycles back, Swiss Maps became popular (I think in particular thanks to CockroachDB) as a replacement for the standard Go map[K]V.
Later, Go's stdlib map implementation was updated to use Swiss Maps internally and everyone benefited.
Do you think the xsync.Map could be considered for upstreaming? Especially if it outperforms sync.Map at all the same use cases.
puzpuzpuz-hn 44 days ago [-]
There are multiple GH issues around a better sync.Map. Among other alternatives, xsync.Map is also mentioned. But the Go core team doesn't seem interested in improving sync.Map (or adding a generic variant of it).
kgeist 44 days ago [-]
Orcaman is a very straightforward implementation (just sharded RW locks and backing maps), but it limits the number of shards to a fixed 32. I wonder what the benchmarks would look like if the shard count were increased to 64, 128, etc.
puzpuzpuz-hn 44 days ago [-]
My box is 12c/24t only, so it won't make any difference. But on a beefy box, it may improve performance in high cardinality key scenarios.
nasretdinov 44 days ago [-]
It potentially still might make a difference due to reduced contention: with more shards, the chances of two or more goroutines hitting the same shard are lower. In my mind the only downside to having more shards is the upfront cost, so it might slow down only the smallest examples.
kgeist 44 days ago [-]
The upfront cost isn't that big: a Go map+RW lock is probably a few hundred bytes. Allocating them costs far below 1 ms.
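For illustration, a sharded map in the orcaman style but with a configurable shard count might look like the rough sketch below (not orcaman's actual code; names and the `string`/`any` types are assumptions). Each shard is a plain map guarded by its own RWMutex, and a key's hash picks the shard:

```go
package main

import (
	"fmt"
	"hash/maphash"
	"sync"
)

// shard is one independently locked bucket: a plain Go map plus an
// RWMutex. Goroutines only contend when they hash to the same shard.
type shard struct {
	mu sync.RWMutex
	m  map[string]any
}

// shardedMap spreads keys over n shards; orcaman/concurrent-map
// hardcodes n = 32, here it is a constructor parameter.
type shardedMap struct {
	seed   maphash.Seed
	shards []shard
}

func newShardedMap(n int) *shardedMap {
	s := &shardedMap{seed: maphash.MakeSeed(), shards: make([]shard, n)}
	for i := range s.shards {
		s.shards[i].m = make(map[string]any)
	}
	return s
}

func (s *shardedMap) shardFor(key string) *shard {
	return &s.shards[maphash.String(s.seed, key)%uint64(len(s.shards))]
}

func (s *shardedMap) Store(key string, v any) {
	sh := s.shardFor(key)
	sh.mu.Lock()
	sh.m[key] = v
	sh.mu.Unlock()
}

func (s *shardedMap) Load(key string) (any, bool) {
	sh := s.shardFor(key)
	sh.mu.RLock()
	v, ok := sh.m[key]
	sh.mu.RUnlock()
	return v, ok
}

func main() {
	m := newShardedMap(128) // shard count is now tunable
	m.Store("k", 42)
	v, ok := m.Load("k")
	fmt.Println(v, ok) // prints: 42 true
}
```

As the comment above notes, the per-shard cost is just an empty map plus a mutex, so raising the count from 32 to 128 or 256 costs only a few kilobytes up front.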
umairnadeem123 44 days ago [-]
[dead]
puzpuzpuz-hn 44 days ago [-]
Allocation rates are also compared. Long story short, a vanilla map + RWMutex (or a sharded variant of it like orcaman/concurrent-map) is the way to go if you want to minimize allocations. On the other hand, if reads dominate your workload, using one of the custom concurrent maps may be a good idea.
Pure overwrite workload (pre-allocated values):

  xsync.Map:               24 B/op   1 alloc/op    31.89 ns/op
  orcaman/concurrent-map:   0 B/op   0 allocs/op   70.72 ns/op

Real-world mixed (80% overwrites, 20% new):

  xsync.Map:               57 B/op   2 allocs/op   218.1 ns/op
  orcaman/concurrent-map:  63 B/op   3 allocs/op   283.1 ns/op