Rusty Russell [Mon, 30 Aug 2010 01:21:33 +0000 (10:51 +0930)]
tools: add build_verbose to show every command executed.
Rusty Russell [Mon, 30 Aug 2010 01:11:57 +0000 (10:41 +0930)]
tdb: fix lock-tracking test code after gradual lock changes.
Rusty Russell [Sat, 28 Aug 2010 11:06:22 +0000 (20:36 +0930)]
tdb2: new tests, and new fixes.
Rusty Russell [Sat, 28 Aug 2010 11:05:39 +0000 (20:35 +0930)]
tdb2: check that records are of sufficient length in tdb_check.
Rusty Russell [Fri, 27 Aug 2010 05:33:41 +0000 (15:03 +0930)]
tdb2: split expand into functions and test separately.
Rusty Russell [Fri, 27 Aug 2010 05:01:11 +0000 (14:31 +0930)]
Update main page.
Rusty Russell [Fri, 27 Aug 2010 04:27:23 +0000 (13:57 +0930)]
ccanlint: chdir to temporary dir so gcov files land there.
This means parallel "make check" works again.
Rusty Russell [Fri, 27 Aug 2010 04:15:16 +0000 (13:45 +0930)]
ccanlint: use gcov to rate test coverage (score out of 5)
Rusty Russell [Fri, 27 Aug 2010 04:14:49 +0000 (13:44 +0930)]
ccanlint: clean up code which outputs results, handle partial failure.
Rusty Russell [Fri, 27 Aug 2010 04:08:58 +0000 (13:38 +0930)]
tap: don't _exit on success
Calling _exit skips the exit handlers, which prevented gcov files from being written out.
Rusty Russell [Thu, 26 Aug 2010 14:38:34 +0000 (00:08 +0930)]
tdb2: more fixes and tests for enlarging hash.
- Neaten I/O function
- Don't use fill in zero_out: it's only for low-level ops.
- Don't mangle arg in tdb_write_convert: it broke write_header.
- More use of tdb_access_read, make it optionally converting.
- Rename unlock_range to unlock_lists.
- Lots of fixes to enlarge_hash now it's being tested.
- More expansion cases tested.
Rusty Russell [Thu, 26 Aug 2010 12:27:32 +0000 (21:57 +0930)]
tdb2: delete and fetch now work.
Rusty Russell [Thu, 26 Aug 2010 12:18:28 +0000 (21:48 +0930)]
tdb2: clean up locking a little bit, fix hash wrap problem.
Rusty Russell [Thu, 26 Aug 2010 10:27:56 +0000 (19:57 +0930)]
tdb2: now we can store a key!
Rusty Russell [Thu, 26 Aug 2010 07:36:47 +0000 (17:06 +0930)]
tdb2: tdb_expand on empty database now tested.
Rusty Russell [Thu, 26 Aug 2010 05:07:18 +0000 (14:37 +0930)]
tdb2: now checking a new empty database works.
Rusty Russell [Thu, 26 Aug 2010 04:29:20 +0000 (13:59 +0930)]
hash: 64-bit hash functions should take 64-bit base.
Rusty Russell [Thu, 26 Aug 2010 03:23:55 +0000 (12:53 +0930)]
tdb2: update config.h with more defines.
Rusty Russell [Thu, 26 Aug 2010 03:22:59 +0000 (12:52 +0930)]
tdb2: initial commit (doesn't work, still writing tests)
Rusty Russell [Mon, 16 Aug 2010 06:19:09 +0000 (15:49 +0930)]
Makefile: don't hide what we're doing, user can use -s.
Rusty Russell [Mon, 16 Aug 2010 06:07:19 +0000 (15:37 +0930)]
hash: 64 bit variants.
Rusty Russell [Fri, 13 Aug 2010 05:37:16 +0000 (15:07 +0930)]
tdb: use tdb_lockall_gradual by default.
This has the advantage that transactions will also use it, preventing them
from blocking.
Rusty Russell [Fri, 13 Aug 2010 05:19:59 +0000 (14:49 +0930)]
tdb: test and resolution for tdb_lockall starvation.
Rusty Russell [Sat, 7 Aug 2010 03:11:24 +0000 (12:41 +0930)]
web: handle symlinks in bzrbrowse.cgi, fix images.
Rusty Russell [Fri, 6 Aug 2010 13:34:27 +0000 (23:04 +0930)]
web: make sure to include symlinks in tarballs
Rusty Russell [Fri, 6 Aug 2010 13:08:02 +0000 (22:38 +0930)]
Add author and maintainer fields.
Rusty Russell [Fri, 6 Aug 2010 13:01:23 +0000 (22:31 +0930)]
Add licences/ dir and symlinks for a bit more clarity.
Rusty Russell [Thu, 29 Jul 2010 00:49:01 +0000 (10:19 +0930)]
alloc: speed up tiny allocator
Rusty Russell [Wed, 28 Jul 2010 13:44:28 +0000 (23:14 +0930)]
alloc: remove unused debugging function.
Rusty Russell [Wed, 28 Jul 2010 13:40:58 +0000 (23:10 +0930)]
alloc: reduce number of large pages to 256.
This reduces the number of huge allocs, which drops test time from 6 minutes to 57 seconds.
Rusty Russell [Wed, 28 Jul 2010 13:05:08 +0000 (22:35 +0930)]
alloc: implement huge allocations
Inefficient scheme for allocations > 1/8192 of the pool size.
Rusty Russell [Tue, 27 Jul 2010 05:32:02 +0000 (15:02 +0930)]
alloc: fix bug in tiny allocator.
Rusty Russell [Tue, 20 Jul 2010 22:51:08 +0000 (08:21 +0930)]
alloc: fix typo which can cause false assertion
Rusty Russell [Thu, 15 Jul 2010 08:40:47 +0000 (18:10 +0930)]
alloc: fix bug in tiny allocator
We can have a 0 byte in the *middle* of an encoding.
Rusty Russell [Mon, 12 Jul 2010 23:42:49 +0000 (09:12 +0930)]
alloc: remove encode limit arg, and implement free array cache.
Rusty Russell [Mon, 12 Jul 2010 13:52:30 +0000 (23:22 +0930)]
alloc: first cut of tiny allocator (down to 2 bytes!)
Rusty Russell [Tue, 15 Jun 2010 10:10:44 +0000 (19:40 +0930)]
typesafe_cb: fix fallout from API changes.
Rusty Russell [Tue, 15 Jun 2010 10:02:55 +0000 (19:32 +0930)]
typesafe_cb: expose _exact and _def variants.
We can't allow NULL with the new variant (needed by talloc's set_destructor
for example), so document that and expose all three variants for different
uses.
Rusty Russell [Fri, 11 Jun 2010 03:36:40 +0000 (13:06 +0930)]
typesafe_cb: fix promotable types being incorrectly accepted by cast_if_type.
cast_if_type() should not try to degrade the expression using 1?(test):0,
as that promotes bool to int and degrades functions to function
pointers; any such degrading should be done by the callers.
In particular, this fixes sparse_bsearch.
Rusty Russell [Wed, 9 Jun 2010 14:33:04 +0000 (00:03 +0930)]
alloc: reduce page header further, go down to 64k minimum.
This means we can't have more than 2^25 elements per page; that's
a maximum small page size of about 2^24 (with >8 objects per small page
we move to large pages), meaning a poolsize max of 4G.
We have a tighter limit at the moment anyway, but we should remove it
once we fix this. In particular, counting all-zero and all-one words in
the used field (that's what we care about: full or empty) would give us
another factor of 64 (we only care about larger pool sizes on 64-bit
platforms).
We can also restore the larger number of pages and greater inter-page
spacing once we implement the alternative tiny allocator.
Rusty Russell [Wed, 9 Jun 2010 14:22:54 +0000 (23:52 +0930)]
alloc: make small_page_bits() function rather than large_page_bits()
Turns out that now that we use page numbers, this is more fundamental.
Rusty Russell [Wed, 9 Jun 2010 14:16:03 +0000 (23:46 +0930)]
alloc: use page numbers in lists to reduce overhead.
Rusty Russell [Wed, 9 Jun 2010 13:05:21 +0000 (22:35 +0930)]
alloc: implement alloc_visualize().
Rusty Russell [Wed, 9 Jun 2010 13:04:44 +0000 (22:34 +0930)]
alloc: fix page header size calculation bug, increase type safety.
Rusty Russell [Wed, 9 Jun 2010 06:44:48 +0000 (16:14 +0930)]
idtree: add examples, particularly the low-id-reuse example.
Rusty Russell [Tue, 8 Jun 2010 03:57:06 +0000 (13:27 +0930)]
alloc: first cut of new Tridge-inspired allocator
This version has limitations: pools must be at least 1MB, and allocations
are restricted to 1/1024 of the total pool size.
Rusty Russell [Mon, 7 Jun 2010 07:57:15 +0000 (17:27 +0930)]
ccanlint: don't setpgid on running commands: we want ^C to work!
Rusty Russell [Mon, 7 Jun 2010 07:15:49 +0000 (16:45 +0930)]
ccanlint: Add -k option to keep results.
Particularly useful for building tests standalone.
Rusty Russell [Mon, 7 Jun 2010 07:04:15 +0000 (16:34 +0930)]
ccanlint: make sure fullname is always full path name.
Rusty Russell [Mon, 7 Jun 2010 04:30:22 +0000 (14:00 +0930)]
hashtable: fix traverse typesafety.
Rusty Russell [Mon, 7 Jun 2010 04:29:18 +0000 (13:59 +0930)]
typesafe_cb: fix up (and test!) cast_if_any.
Rusty Russell [Mon, 24 May 2010 03:02:59 +0000 (12:32 +0930)]
typesafe_cb: handle pointers to undefined struct types.
To do this, we have to lose the ability for preargs and postargs to allow const and volatile argument signatures.
Rusty Russell [Mon, 24 May 2010 01:17:13 +0000 (10:47 +0930)]
typesafe_cb: show flaw in typesafe_cb with undefined structs.
Rusty Russell [Sun, 23 May 2010 11:02:53 +0000 (20:32 +0930)]
typesafe_cb: fix !gcc case
Rusty Russell [Sun, 23 May 2010 09:22:18 +0000 (18:52 +0930)]
typesafe_cb, hashtable: revise typesafe_cb arg order, neaten.
hashtable: make traverse callback typesafe.
Rusty Russell [Thu, 20 May 2010 12:30:32 +0000 (22:00 +0930)]
idtree: new module
Rusty Russell [Tue, 11 May 2010 02:34:35 +0000 (12:04 +0930)]
likely: new module
Rusty Russell [Tue, 11 May 2010 02:01:08 +0000 (11:31 +0930)]
ccanlint: put generated _info in correct directory.
Rusty Russell [Tue, 11 May 2010 00:53:40 +0000 (10:23 +0930)]
str: add stringify()
Rusty Russell [Tue, 4 May 2010 07:37:32 +0000 (17:07 +0930)]
More web fixes.
Rusty Russell [Tue, 4 May 2010 07:36:40 +0000 (17:06 +0930)]
tdb: fix backwards check on HAVE_PAGESIZE
Rusty Russell [Tue, 4 May 2010 02:21:10 +0000 (11:51 +0930)]
Hashtable routines.
Rusty Russell [Fri, 9 Apr 2010 06:24:02 +0000 (15:54 +0930)]
tools: fastcheck adjust; 750ms works well for me.
Rusty Russell [Fri, 9 Apr 2010 04:42:53 +0000 (14:12 +0930)]
ccanlint: optimize the timeout case
This takes my "make fastcheck" from about 57 seconds to about 43 seconds.
Rusty Russell [Fri, 9 Apr 2010 04:30:02 +0000 (14:00 +0930)]
Fix EXCLUDE logic for makefiles, add fastcheck
Rusty Russell [Fri, 9 Apr 2010 04:29:22 +0000 (13:59 +0930)]
ccanlint: always catch timeouts, and force kill valgrind etc.
Rusty Russell [Fri, 9 Apr 2010 03:51:26 +0000 (13:21 +0930)]
ccanlint: timeout, and implement -t option for quicker tests.
Rusty Russell [Fri, 9 Apr 2010 03:51:11 +0000 (13:21 +0930)]
Use bzr to determine what to upload
Rusty Russell [Fri, 9 Apr 2010 02:21:42 +0000 (11:51 +0930)]
ccanlint: clean up test short descriptions
Rusty Russell [Fri, 9 Apr 2010 02:12:10 +0000 (11:42 +0930)]
ccanlint: cleanup listing code, make print in topo order.
Rusty Russell [Fri, 9 Apr 2010 01:28:21 +0000 (10:58 +0930)]
From: Joseph Adams <joeyadams3.14159@gmail.com>
The ccanlint patch is rather intrusive. First, it adds a new field to
all the ccanlint tests, "key". key is a shorter, still unique
description of the test (e.g. "valgrind"). The names I chose as keys
for all the tests are somewhat arbitrary and often don't reflect the
name of the .c source file (because some of those names are just too
darn long). Second, it adds two new options to ccanlint:
-l: list tests ccanlint performs
-x: exclude tests (e.g. -x trailing_whitespace,valgrind)
It also adds a consistency check making sure all tests have unique
keys and names.
The primary goal of the ccanlint patch was so I could exclude the
valgrind test, which takes a really long time for some modules (I
think btree takes the longest, at around 2 minutes). I'm not sure I
did it 100% correctly, so you'll want to review it first.
Rusty Russell [Fri, 9 Apr 2010 01:24:31 +0000 (10:54 +0930)]
From: Joseph Adams <joeyadams3.14159@gmail.com>
The btree patch gives the btree module an intuitive frontend
(btree_insert, btree_remove, btree_lookup) and a built-in ordering
function for strings. Together, these make it easy to use the btree
module as a dynamic string map.
Rusty Russell [Fri, 9 Apr 2010 01:23:35 +0000 (10:53 +0930)]
From: Joseph Adams <joeyadams3.14159@gmail.com>
The charset patch makes utf8_validate reject the invalid codepoints
U+FFFE and U+FFFF. Hopefully it's fully UTF-8 compliant now.
Rusty Russell [Wed, 31 Mar 2010 12:55:56 +0000 (23:25 +1030)]
Fix segfault due to relative paths (ccan_depends), and generate pages for ignored modules too.
Joseph Adams [Wed, 31 Mar 2010 12:43:16 +0000 (23:13 +1030)]
Joey's charset validation module.
Rusty Russell [Wed, 24 Feb 2010 03:42:22 +0000 (14:12 +1030)]
asort: fix gcc warning.
Rusty Russell [Wed, 24 Feb 2010 03:39:06 +0000 (14:09 +1030)]
tools: fix warnings from Ubuntu strict compiler.
Rusty Russell [Wed, 24 Feb 2010 03:38:40 +0000 (14:08 +1030)]
tdb: handle processes dying during transaction commit.
tdb transactions were designed to be robust against the machine
powering off, but interestingly were never designed to handle the case
where an administrator kill -9's a process during commit. Because
recovery is only done on tdb_open, processes with the tdb already
mapped will simply use it despite it being corrupt and needing
recovery.
The solution to this is to check for recovery every time we grab a
data lock: we could have gained the lock because a process just died.
This has no measurable cost: here is the time for tdbtorture -s 0 -n 1
-l 10000:
Before:
2.75 2.50 2.81 3.19 2.91 2.53 2.72 2.50 2.78 2.77 = Avg 2.75
After:
2.81 2.57 3.42 2.49 3.02 2.49 2.84 2.48 2.80 2.43 = Avg 2.74
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Wed, 24 Feb 2010 03:26:17 +0000 (13:56 +1030)]
tdb: cleanup: tdb_lock_list helper to cover tdb_lock and tdb_lock_nonblock
Reduce code duplication, and also gives us a central point for the next
patch which wants to cover all list locks.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Wed, 24 Feb 2010 03:24:31 +0000 (13:54 +1030)]
tdb: remove lock ops
Now that the transaction code uses the standard allrecord lock, which
stops us from trying to grab any per-record locks anyway, we don't need
to have special no-op lock ops for transactions.
This is a nice simplification: if you see brlock, you know it's really
going to grab a lock.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Tue, 23 Feb 2010 22:47:41 +0000 (09:17 +1030)]
tdb: more testcase fixup.
Rusty Russell [Tue, 23 Feb 2010 22:22:44 +0000 (08:52 +1030)]
tdb: suppress record write locks when allrecord lock is taken.
Records themselves get (read) locked by the traversal code against delete.
Interestingly, this locking isn't done when the allrecord lock has been
taken, though the allrecord lock until recently didn't cover the actual
records (it now goes to end of file).
The write record lock, grabbed by the delete code, is not suppressed by
the allrecord lock, which causes us to punch a hole in that lock when we
release the write record lock. Make this consistent: *no* record locks
of any kind when the allrecord lock is taken.
Rusty Russell [Tue, 23 Feb 2010 21:50:06 +0000 (08:20 +1030)]
tdb: new test, cleanup old tests by centralizing lock tracking.
Rusty Russell [Mon, 22 Feb 2010 12:34:23 +0000 (23:04 +1030)]
tdb: fix test to remove warning, and don't fail when tdb_check() barfs.
Rusty Russell [Mon, 22 Feb 2010 12:33:36 +0000 (23:03 +1030)]
tdb: don't reduce file size on transaction recovery.
There's little point in ever shrinking the file, and it definitely breaks in the case where a process has died during a transaction commit and other processes have the tdb mapped.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 12:24:26 +0000 (22:54 +1030)]
tdb: fix recovery reuse after crash (from SAMBA)
commit b37b452cb8c1f56b37b04abe7bffdede371ca361
Author: Rusty Russell <rusty@rustcorp.com.au>
Date: Thu Feb 4 23:59:54 2010 +1030
tdb: fix recovery reuse after crash
If a process (or the machine) dies just after writing the
recovery head (pointing at the end of file), the recovery record will be
filled with 0x42. This will not invoke a recovery on open, since rec.magic
!= TDB_RECOVERY_MAGIC.
Unfortunately, the first transaction commit will happily reuse that
area: tdb_recovery_allocate() doesn't check the magic. The recovery
record has length 0x42424242, and it writes that back into the
now-valid-looking transaction header for the next comer (which
happens to be tdb_wipe_all in my tests).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 12:19:57 +0000 (22:49 +1030)]
tdb: fix tdbtorture seed printing, plus remove CLEAR_IF_FIRST
With killing children, CLEAR_IF_FIRST can happen quite a bit.
Rusty Russell [Mon, 22 Feb 2010 04:33:23 +0000 (15:03 +1030)]
tdb: cleanup: remove ltype argument from _tdb_transaction_cancel.
Now that the transaction allrecord lock is the standard one, and thus is
cleaned up in tdb_release_extra_locks(), _tdb_transaction_cancel() doesn't
need to know what type it is.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 04:02:13 +0000 (14:32 +1030)]
tdb: tdb_allrecord_lock/tdb_allrecord_unlock/tdb_allrecord_upgrade
Centralize locking of all chains of the tdb; rename _tdb_lockall to
tdb_allrecord_lock and _tdb_unlockall to tdb_allrecord_unlock, and
tdb_brlock_upgrade to tdb_allrecord_upgrade.
Then we use this in the transaction code. Unfortunately, if the transaction
code records that it has grabbed the allrecord lock read-only, write locks
will fail, so we treat this upgradable lock as a write lock, and mark it
as upgradable using the otherwise-unused offset field.
One subtlety: now that the transaction code is using the allrecord_lock, the
tdb_release_extra_locks() function drops it for us, so we no longer need
to do it manually in _tdb_transaction_cancel.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 03:57:13 +0000 (14:27 +1030)]
tdb: cleanup: always grab allrecord lock to infinity.
We were previously inconsistent with our "global" lock: the
transaction code grabbed it from FREELIST_TOP to end of file, and the
rest of the code grabbed it from FREELIST_TOP to end of the hash
chains. Change it to always grab to end of file for simplicity.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 03:54:44 +0000 (14:24 +1030)]
tdb: remove num_locks
This was redundant before this patch series: it mirrored num_lockrecs
exactly. It still does.
Also, skip useless branch when locks == 1: unconditional assignment is
cheaper anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 03:48:32 +0000 (14:18 +1030)]
tdb: use tdb_nest_lock() for seqnum lock.
This is pure overhead, but it centralizes the locking. Realloc (esp. as
most implementations are lazy) is fast compared to the fcntl anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 03:44:52 +0000 (14:14 +1030)]
tdb: use tdb_nest_lock() for active lock.
Rather than a boutique lock and a separate nest count, use our
newly-generic nested lock tracking for the active lock.
Note that the tdb_have_extra_locks() and tdb_release_extra_locks()
functions have to skip over this lock now that it is tracked.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 03:36:02 +0000 (14:06 +1030)]
tdb: use tdb_nest_lock() for open lock.
This never nests, so it's overkill, but it centralizes the locking into
lock.c and removes the ugly flag in the transaction code to track whether
we have the lock or not.
Note that we have a temporary hack so this places a real lock, despite
the fact that we are in a transaction.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 03:04:49 +0000 (13:34 +1030)]
tdb: use tdb_nest_lock() for transaction lock.
Rather than a boutique lock and a separate nest count, use our
newly-generic nested lock tracking for the transaction lock.
Note that the tdb_have_extra_locks() and tdb_release_extra_locks()
functions have to skip over this lock now that it is tracked.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 03:04:40 +0000 (13:34 +1030)]
tdb: cleanup: find_nestlock() helper.
Factor out two loops which find locks; we are going to introduce a couple
more so a helper makes sense.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 03:04:29 +0000 (13:34 +1030)]
tdb: cleanup: tdb_release_extra_locks() helper
Move locking intelligence back into lock.c.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 03:04:10 +0000 (13:34 +1030)]
tdb: cleanup: tdb_have_extra_locks() helper
In many places we check whether locks are held: add a helper to do this.
The _tdb_lockall() case has already checked for the allrecord lock, so
the extra work done by tdb_have_extra_locks() is merely redundant.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 03:03:53 +0000 (13:33 +1030)]
tdb: don't suppress the transaction lock because of the allrecord lock.
tdb_transaction_lock() and tdb_transaction_unlock() do nothing if we
hold the allrecord lock. However, the two locks don't overlap, so
this is wrong.
This simplification makes the transaction lock a straightforward nested
lock.
There are two callers for these functions:
1) The transaction code, which already makes sure the allrecord_lock
isn't held.
2) The traverse code, which wants to stop transactions whether it has the
allrecord lock or not. There have been deadlocks here before, however
this should not bring them back (I hope!)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rusty Russell [Mon, 22 Feb 2010 03:03:27 +0000 (13:33 +1030)]
tdb: cleanup: tdb_nest_lock/tdb_nest_unlock
Because fcntl locks don't nest, we track them in the tdb->lockrecs array
and only place/release them when the count goes to 1/0. We only do this
for record locks, so we simply place the list number (or -1 for the free
list) in the structure.
To generalize this:
1) Put the offset rather than list number in struct tdb_lock_type.
2) Rename _tdb_lock() to tdb_nest_lock, make it non-static and move the
allrecord check out to the callers (except the mark case which doesn't
care).
3) Rename _tdb_unlock() to tdb_nest_unlock(), make it non-static and
move the allrecord check out to the callers (except mark again).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>