I've been getting more comfortable with Go over the last month.
For fun and practice, I had the idea to translate a breadth-first search
problem from one of my CS courses from C to Go (see
mazesolver). For
this I needed a queue. My instinct was to jury-rig one from slices, and I
began to do this, but I wondered whether there was anything better
out there. After looking into it, I decided to test the efficiency of
three ways of implementing a queue:
- Slices (pop with `e := s[0]` and `s = s[1:]`, push with `s = append(s, n)`)
- A linked list, using the `container/list` standard library package
- The ring-buffer-based eapache/queue
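To make the comparison concrete, here is a minimal sketch of the basic
push/pop operations for each approach. It isn't the benchmark code, and the
eapache/queue calls (New, Add, Remove, Length) in particular reflect my
reading of its API rather than anything guaranteed by this post.

```go
// Illustrative only, not the benchmark code.
package main

import (
	"container/list"
	"fmt"

	"github.com/eapache/queue"
)

func main() {
	// 1. Slice-backed queue: append to push, reslice to pop.
	s := []int{}
	s = append(s, 1) // push
	e := s[0]        // pop the front element...
	s = s[1:]        // ...and advance the head
	fmt.Println(e, len(s))

	// 2. container/list: a doubly linked list from the standard library.
	l := list.New()
	l.PushBack(2)            // push
	v := l.Remove(l.Front()) // pop: remove and return the front element
	fmt.Println(v, l.Len())

	// 3. eapache/queue: a ring-buffer-backed queue (API as I understand it).
	q := queue.New()
	q.Add(3)        // push
	w := q.Remove() // pop: remove and return the front element
	fmt.Println(w, q.Length())
}
```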
TL;DR: Use slices.
Benchmark Results
See raw-results.txt for the unformatted output of the
benchmark tests.
| Test | list | queue | slice |
|------|------|-------|-------|
| 1. n=10 (1000 iterations) | 0m0.012s | 0m0.010s | 0m0.010s |
| 2. n=10, random insert (1000 iterations) | 0m0.022s | 0m0.020s | 0m0.021s |
| 3. n=100 (1000 iterations) | 0m0.078s | 0m0.076s | 0m0.059s |
| 4. n=100, random insert (1000 iterations) | 0m0.199s | 0m0.188s | 0m0.174s |
| 5. n=1000 (1000 iterations) | 0m0.813s | 0m1.086s | 0m0.777s |
| 6. n=1000, random insert (1000 iterations) | 0m2.572s | 0m2.104s | 0m2.024s |
| 7. n=10000 (1000 iterations) | 0m10.676s | 0m11.711s | 0m8.803s |
| 8. n=10000, random insert (1000 iterations) | 0m29.539s | 0m24.285s | 0m23.293s |
| 9. n=100000 | 0m0.074s | 0m0.065s | 0m0.063s |
| 10. n=100000, random insert | 0m0.241s | 0m0.205s | 0m0.225s |
| 11. n=1000000 | 0m0.844s | 0m0.684s | 0m0.614s |
| 12. n=1000000, random insert | 0m2.152s | 0m1.916s | 0m2.081s |
| 13. n=10000000 | 0m7.843s | 0m6.955s | 0m6.082s |
| 14. n=10000000, random insert | 0m20.915s | 0m19.353s | 0m19.030s |
| 15. n=100000000 | 1m16.619s | 1m6.998s | 1m1.921s |
| 16. n=100000000, random insert | 3m42.505s | 3m13.185s | 3m6.901s |
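The test programs themselves aren't reproduced here, but for a sense of what
is being measured, here is a rough sketch of what a plain "n=X" run against
the slice implementation might look like. The structure, and the guess that
whole program runs were timed with the shell's `time` builtin, are
assumptions based on the labels and output format above, not the actual
benchmark code.

```go
// My guess at the shape of a plain "n=X" test: push n elements onto a
// slice-backed queue, then pop them all. The real tests may differ; see
// raw-results.txt for the actual runs.
package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: bench N")
		os.Exit(1)
	}
	n, err := strconv.Atoi(os.Args[1]) // e.g. "10000"
	if err != nil {
		fmt.Fprintln(os.Stderr, "usage: bench N")
		os.Exit(1)
	}

	s := make([]int, 0)
	for i := 0; i < n; i++ {
		s = append(s, i) // push
	}
	sum := 0
	for len(s) > 0 {
		sum += s[0] // pop the front element...
		s = s[1:]   // ...and advance the head
	}
	fmt.Println(sum) // use the result so the work isn't dead code
}
```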
Summary
Across 16 tests:
- Lists won zero 🥇, two 🥈, and fourteen 🥉
- eapache's ring-buffer queue won four 🥇, ten 🥈, and two 🥉
- Slices won thirteen 🥇, three 🥈, and zero 🥉

(The 🥇 counts add up to 17 because queue and slices tied for first in test 1.)
I conclude that unless you know better, you should use slices as a FIFO
queue in Go.
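If you'd rather not scatter raw slice operations around your code, a thin
wrapper is easy to write. This is just a sketch of one way to do it, not code
from these benchmarks; it uses type parameters, so it needs Go 1.18 or later.

```go
package queue

// Queue is a minimal FIFO queue backed by a slice. This is an
// illustrative sketch, not the code used in the benchmarks.
type Queue[T any] struct {
	items []T
}

// Push appends an element to the back of the queue.
func (q *Queue[T]) Push(v T) {
	q.items = append(q.items, v)
}

// Pop removes and returns the element at the front of the queue.
// The second return value is false if the queue is empty.
func (q *Queue[T]) Pop() (T, bool) {
	var zero T
	if len(q.items) == 0 {
		return zero, false
	}
	v := q.items[0]
	q.items = q.items[1:]
	return v, true
}

// Len reports the number of elements currently in the queue.
func (q *Queue[T]) Len() int {
	return len(q.items)
}
```

One caveat worth knowing: `q.items = q.items[1:]` doesn't release the popped
element's slot right away; the front of the backing array is only reclaimed
once a later `append` grows past capacity and reallocates. Even so, as noted
below, the slice implementation was still the lightest on memory of the three
in these tests.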
In addition to these performance statistics, I noticed while running the
tests that slices were the least resource-intensive (in both memory and CPU
usage), and lists were the most. On the final test, with one hundred million
initial elements and random insert, the list implementation used about 8
GB of memory+swap, the ring-buffer implementation about 5-6 GB, and the
slice implementation about 3 GB.
Further research
Buffered queues?
If you have any criticism or feedback on these tests, please send me an
email at ~smlavine/public-inbox@lists.sr.ht.