mergesort

Mergesort Investigation 8 - bottom up

The purely recursive implementation of mergesort didn’t show weird, abrupt performance drops; traversing linked lists (no matter what order the nodes appeared in memory) also showed no abrupt drops; and node size in memory caused different performance oddities. That led me to suspect that the iterative implementation’s memory access patterns might be the cause.
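
To make the contrast concrete, here is a minimal sketch in Go of a bottom-up (iterative) mergesort on a singly linked list. The Node type, field names, and function names are my own assumptions for illustration, not the code from this investigation.

```go
// A minimal sketch of a bottom-up (iterative) linked-list mergesort.
// Node and all function names are illustrative assumptions.
package main

import "fmt"

type Node struct {
	Data int
	Next *Node
}

// merge splices two already-sorted lists into one sorted list.
func merge(a, b *Node) *Node {
	var head Node // dummy node to simplify appending
	tail := &head
	for a != nil && b != nil {
		if a.Data <= b.Data {
			tail.Next, a = a, a.Next
		} else {
			tail.Next, b = b, b.Next
		}
		tail = tail.Next
	}
	if a != nil {
		tail.Next = a
	} else {
		tail.Next = b
	}
	return head.Next
}

// split detaches the first n nodes and returns the rest of the list.
func split(head *Node, n int) *Node {
	for i := 1; head != nil && i < n; i++ {
		head = head.Next
	}
	if head == nil {
		return nil
	}
	rest := head.Next
	head.Next = nil
	return rest
}

// bottomUpSort merges runs of width 1, 2, 4, ... until one run remains.
func bottomUpSort(head *Node) *Node {
	if head == nil || head.Next == nil {
		return head
	}
	for width := 1; ; width *= 2 {
		var dummy Node
		tail := &dummy
		curr := head
		merges := 0
		for curr != nil {
			left := curr
			right := split(left, width) // next run of up to width nodes
			curr = split(right, width)  // remainder after the two runs
			tail.Next = merge(left, right)
			for tail.Next != nil {
				tail = tail.Next
			}
			merges++
		}
		head = dummy.Next
		if merges <= 1 {
			return head // a single merge means the whole list is sorted
		}
	}
}

func main() {
	var head *Node
	for _, v := range []int{5, 1, 4, 2, 3} {
		head = &Node{Data: v, Next: head}
	}
	for n := bottomUpSort(head); n != nil; n = n.Next {
		fmt.Print(n.Data, " ")
	}
	fmt.Println()
}
```

Instead of recursing, the bottom-up version makes repeated passes over the whole list, merging runs of width 1, 2, 4, and so on, which gives it a different memory access pattern than a depth-first recursive sort.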

Mergesort Investigation 4 - reuse linked lists

After I wrote a recursive merge sort, I reviewed it to make sure it wasn’t doing extra work. In that review I noticed that my benchmark program created a new linked list for each of the 10 sorts it ran at a given list length.

I thought I could make the overall benchmark run faster by re-using the existing list instead of creating a new, very large linked list for every run. I hoped to cut the time the benchmark program spent between the ten measured sort executions, not to make the sorting itself faster. I surprised myself by getting the opposite effect.
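
As a rough illustration of the idea, here is a sketch in Go of a benchmark loop that keeps the same nodes across runs, overwriting their values before each measured sort instead of allocating a fresh list each time. Every name here (Node, newList, reuseList, sortList) is a hypothetical stand-in, and sortList is just a placeholder sort, not the mergesort under test.

```go
// Sketch of a benchmark loop that reuses one linked list across runs.
package main

import (
	"fmt"
	"math/rand"
	"sort"
	"time"
)

type Node struct {
	Data int
	Next *Node
}

// newList allocates n fresh nodes with random values.
func newList(n int) *Node {
	var head *Node
	for i := 0; i < n; i++ {
		head = &Node{Data: rand.Int(), Next: head}
	}
	return head
}

// reuseList walks the existing nodes (in whatever order the previous
// sort left them) and overwrites each value, allocating nothing new.
func reuseList(head *Node) *Node {
	for n := head; n != nil; n = n.Next {
		n.Data = rand.Int()
	}
	return head
}

// sortList is a placeholder for the mergesort under test: it copies the
// values out, sorts them, and writes them back in list order.
func sortList(head *Node) *Node {
	var vals []int
	for n := head; n != nil; n = n.Next {
		vals = append(vals, n.Data)
	}
	sort.Ints(vals)
	i := 0
	for n := head; n != nil; n = n.Next {
		n.Data = vals[i]
		i++
	}
	return head
}

func main() {
	const size = 1 << 20
	list := newList(size) // allocate the nodes once
	for run := 0; run < 10; run++ {
		list = reuseList(list) // instead of list = newList(size)
		start := time.Now()
		list = sortList(list)
		fmt.Printf("run %d: %v\n", run, time.Since(start))
	}
}
```

Reusing the nodes keeps the same allocations across all ten runs; only the values and the link order left behind by the previous sort differ from run to run.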

Mergesort Investigation 3 - recursive implementation

I wrote a traditional, top-down, recursive merge sort. The idea was to see whether a different implementation exhibited the same dramatic, abrupt slowdowns. If it showed the same jumps in timings, the problem would lie with the operating system, the language runtime, or the memory hardware. I know I said I’d previously ruled out the underlying operating system, but I was still suspicious.
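
For reference, a top-down, recursive linked-list mergesort can be sketched in Go as below. The Node type and the slow/fast-pointer split are a common textbook arrangement and an assumption on my part, not necessarily how this implementation was written.

```go
// A minimal sketch of a top-down, recursive linked-list mergesort.
// Node and all function names are illustrative assumptions.
package main

import "fmt"

type Node struct {
	Data int
	Next *Node
}

// merge splices two sorted lists into one sorted list.
func merge(a, b *Node) *Node {
	var head Node // dummy node to simplify appending
	tail := &head
	for a != nil && b != nil {
		if a.Data <= b.Data {
			tail.Next, a = a, a.Next
		} else {
			tail.Next, b = b, b.Next
		}
		tail = tail.Next
	}
	if a != nil {
		tail.Next = a
	} else {
		tail.Next = b
	}
	return head.Next
}

// recursiveSort splits the list in half, sorts each half, then merges.
func recursiveSort(head *Node) *Node {
	if head == nil || head.Next == nil {
		return head
	}
	// Find the middle: fast moves two nodes per step, slow moves one.
	slow, fast := head, head.Next
	for fast != nil && fast.Next != nil {
		slow, fast = slow.Next, fast.Next.Next
	}
	second := slow.Next
	slow.Next = nil // cut the list in two
	return merge(recursiveSort(head), recursiveSort(second))
}

func main() {
	var head *Node
	for _, v := range []int{3, 1, 4, 1, 5, 9, 2, 6} {
		head = &Node{Data: v, Next: head}
	}
	for n := recursiveSort(head); n != nil; n = n.Next {
		fmt.Print(n.Data, " ")
	}
	fmt.Println()
}
```

The slow/fast-pointer split means each call walks half its sublist just to find the midpoint, so the recursive version touches memory in a depth-first pattern rather than in the repeated full passes of the bottom-up version.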