Currently we use the worst-case-possible size for the recovery area.
Instead, prepare the recovery data, then see whether it's too large.
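
To make the change concrete, here is a minimal, self-contained sketch of
the sizing logic. All names here (toy_db, setup_recovery,
enlarge_recovery_area, prepared_len) are hypothetical stand-ins rather
than tdb2's actual internals; the point is only that the area is sized
from the prepared recovery data instead of the worst case.

/* Sketch: grow the recovery area only when the prepared recovery
 * data genuinely doesn't fit, instead of reserving worst-case size
 * up front.  Hypothetical names, not tdb2's real internals. */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_db {
	size_t recovery_area_len;   /* bytes currently reserved on "disk" */
	size_t file_len;            /* total "file" size */
};

/* Grow the recovery area (and hence the file) to hold `needed` bytes. */
static bool enlarge_recovery_area(struct toy_db *db, size_t needed)
{
	db->file_len += needed - db->recovery_area_len;
	db->recovery_area_len = needed;
	return true;
}

/* The change in a nutshell: compare the *actual* prepared size against
 * what we already have, rather than always reserving the worst case. */
static bool setup_recovery(struct toy_db *db, size_t prepared_len)
{
	if (prepared_len > db->recovery_area_len
	    && !enlarge_recovery_area(db, prepared_len))
		return false;
	/* ... write the prepared recovery data into the area ... */
	return true;
}

int main(void)
{
	struct toy_db db = { .recovery_area_len = 4096, .file_len = 65536 };

	setup_recovery(&db, 1000);   /* fits: no growth needed */
	printf("after small txn: area=%zu file=%zu\n",
	       db.recovery_area_len, db.file_len);

	setup_recovery(&db, 20000);  /* too big: grow on demand */
	printf("after large txn: area=%zu file=%zu\n",
	       db.recovery_area_len, db.file_len);
	return 0;
}
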
Note that this currently works out to make the database *larger* on
our speed benchmark, since we now happen to need to enlarge the
recovery area at the wrong time, rather than the old case where it's
already hugely oversized.
Before:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m50.366s
user 0m17.109s
sys 0m2.468s
-rw------- 1 rusty rusty 564215952 2011-04-27 21:31 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m23.818s
user 0m0.304s
sys 0m0.508s
-rw------- 1 rusty rusty 669856 2011-04-27 21:32 torture.tdb
Adding 2000000 records: 887 ns (110556088 bytes)
Finding 2000000 records: 556 ns (110556088 bytes)
Missing 2000000 records: 385 ns (110556088 bytes)
Traversing 2000000 records: 401 ns (110556088 bytes)
Deleting 2000000 records: 710 ns (244003768 bytes)
Re-adding 2000000 records: 825 ns (244003768 bytes)
Appending 2000000 records: 1255 ns (268404160 bytes)
Churning 2000000 records: 2299 ns (268404160 bytes)
After:
$ time ./growtdb-bench 250000 10 > /dev/null && ls -l /tmp/growtdb.tdb && time ./tdbtorture -s 0 && ls -l torture.tdb && ./speed --transaction 2000000
real 0m47.127s
user 0m17.125s
sys 0m2.456s
-rw------- 1 rusty rusty 366680288 2011-04-27 21:34 /tmp/growtdb.tdb
testing with 3 processes, 5000 loops, seed=0
OK
real 1m16.049s
user 0m0.300s
sys 0m0.492s
-rw------- 1 rusty rusty 244472 2011-04-27 21:35 torture.tdb
Adding 2000000 records: 894 ns (110551992 bytes)
Finding 2000000 records: 564 ns (110551992 bytes)
Missing 2000000 records: 398 ns (110551992 bytes)
Traversing 2000000 records: 399 ns (110551992 bytes)
Deleting 2000000 records: 711 ns (225633208 bytes)
Re-adding 2000000 records: 819 ns (225633208 bytes)
Appending 2000000 records: 1252 ns (248196544 bytes)
Churning 2000000 records: 2319 ns (424005056 bytes)