TODO
  - check should have a mode to verify all binpool deltas are present
    in the local or the server's binpool.
  - binpool utility function
    - prune local binpool (in server or extra deltas)
    - move data between binpools (by csets, or all)
    - fully populate binpool
    - fsck
    - convert old binary files to binpool
    - fetch binpool data directly from server if it is on local filesystem

LM TODO's
  - shouldn't all the modes in BitKeeper/binpool/ be 444? (with 555 if needed?)
  - hardlinked binpool/ dirs
  - _binpool_send should verify the crc - it's reading the data anyway
  - I really dislike the bp_insert() step in clone.  It delays long clones
    needlessly.
    (WAYNE: what do you want to do instead?  Just transferring the whole
    binpool doesn't match the current semantics.)
  - undo does not toss binpool data
    (WAYNE: no, intentional.  See bk binpool prune)
  - interrupting a bp_transfer does not kill the sfio on the remote side
    (WAYNE: I think it will get EOF and clean up after itself now.
    Unless it happens to be broken on a file boundary.)
  - if the data is corrupt and I have a local gfile, don't unlink it until
    the good copy comes over.
  - I finally remember why I didn't like binpool_send/receive.  They are
    binpool specific, they don't need to be, and if they were not we could
    use the same interfaces for bk repair.  They should be bk keysend /
    bk keyreceive or something like that, with a -B option that gets you
    the binpool behavior.

Binpool servers

	The whole point of binpool is to allow some bk operations to
	proceed without transferring all of the data in the repository,
	but we need to ensure that the data can always be found.  To do
	this we have binpool server repositories.

	Each repository has one binpool server (or none), and the data
	for all binpool deltas in that repository should exist either
	locally or at that server.  Any time csets travel between
	repositories (pull/push/clone, etc.) the binpool data is
	transferred to the binpool servers at both ends of the
	transfer.
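
As a concrete illustration of that invariant, here is a minimal C sketch
(not BitKeeper source) of looking up a binpool delta: try the local
binpool first and fall back to the configured server.  bp_local_find()
and bp_server_fetch() are hypothetical names used only for this sketch.

	#include <stdio.h>

	extern char	*bp_local_find(const char *key);	/* hypothetical */
	extern char	*bp_server_fetch(const char *url, const char *key);	/* hypothetical */

	char *
	bp_lookup(const char *server_url, const char *key)
	{
		char	*path;

		/* New data is always created locally, so look here first. */
		if ((path = bp_local_find(key))) return (path);

		/* Otherwise the invariant says the server must have it. */
		if (server_url && (path = bp_server_fetch(server_url, key))) {
			return (path);
		}
		fprintf(stderr, "binpool: %s not found locally or at %s\n",
		    key, server_url ? server_url : "(no server)");
		return (0);
	}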

Semantics

  - New data always gets created in the local binpool.  Binpool
    servers don't get populated with "tmp" csets.

  - Any csets that move between repositories are pushed to the binpool
    server on both sides of the transfer.

  - Pulls between two repos with the same binpool server do no extra
    work.

  - Binpool servers are identified by their repo_ids so that two
    clients that use different URLs for the same server will do the
    right thing (see the sketch after this list).

  - BK patches created with bk send will contain all binpool data so
    that they are standalone.
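
The repo_id rule above amounts to a comparison like the following
sketch, which is standard C and not taken from the BitKeeper sources:
two sides share a binpool server only if the server repo_ids match,
regardless of which URLs each side used to reach it.

	#include <string.h>

	int
	bp_same_server(const char *local_server_id, const char *remote_server_id)
	{
		/* No server on either side means no sharing. */
		if (!local_server_id || !remote_server_id) return (0);
		return (strcmp(local_server_id, remote_server_id) == 0);
	}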

Here are the steps for each side of the major commands that transfer
csets.


clone:
	- bkd sends SFIO with sfiles
	- client unpacks SFIO and strips the unneeded deltas
	- client then requests any binpool deltas missing from local server
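
A minimal sketch of that client sequence, assuming hypothetical helpers
(sfio_unpack(), strip_unwanted_deltas(), bp_fetch_missing()) that stand
in for the real clone, sfio, and binpool code:

	extern int	sfio_unpack(int fd);		/* hypothetical: receive the SFIO */
	extern int	strip_unwanted_deltas(void);	/* hypothetical: e.g. -r<rev> clones */
	extern int	bp_fetch_missing(void);		/* hypothetical: ask the binpool server */

	int
	clone_client(int bkd_fd)
	{
		if (sfio_unpack(bkd_fd)) return (1);		/* step 1 */
		if (strip_unwanted_deltas()) return (1);	/* step 2 */
		/*
		 * Step 3: whatever the trimmed repo still references but
		 * does not hold must come from the binpool server.
		 */
		return (bp_fetch_missing());
	}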

bkd_clone:
	- push binpool deltas to be transferred to the binpool server
	- send SFIO

push:
	- do key exchange
	- update local master with bp data
	- send bp data to remote master
	- send normal bk patch
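
The push side, sketched with hypothetical helpers; the point is the
ordering: our own master is brought up to date before anything is sent
to the remote master, and the ordinary bk patch goes last.

	extern int	key_exchange(int fd);			/* hypothetical */
	extern int	bp_update_master(const char *url);	/* hypothetical: push new bp data */
	extern int	bp_send_to_remote_master(int fd);	/* hypothetical */
	extern int	send_patch(int fd);			/* hypothetical: normal bk patch */

	int
	push_client(int bkd_fd, const char *local_master_url)
	{
		if (key_exchange(bkd_fd)) return (1);
		/* make sure our own master holds the data we are pushing */
		if (local_master_url && bp_update_master(local_master_url)) return (1);
		/* then get it to the remote side's master ... */
		if (bp_send_to_remote_master(bkd_fd)) return (1);
		/* ... and finally send the ordinary bk patch */
		return (send_patch(bkd_fd));
	}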

bkd_push:
	- unchanged

pull:
	- do key exchange
	- receive bk patch
	- request missing bp data from remote master (if not same as
          local)
	- run resolve
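
A sketch of the pull sequence; the interesting step is only asking the
remote master for data when the two sides do not share a master.  The
helper names are hypothetical; bp_same_server() is the sketch shown
earlier.

	extern int	key_exchange(int fd);			/* hypothetical */
	extern int	take_patch(int fd);			/* hypothetical */
	extern int	bp_fetch_from_remote_master(int fd);	/* hypothetical */
	extern int	run_resolve(void);			/* hypothetical */
	extern int	bp_same_server(const char *a, const char *b);

	int
	pull_client(int bkd_fd, const char *local_id, const char *remote_id)
	{
		if (key_exchange(bkd_fd)) return (1);
		if (take_patch(bkd_fd)) return (1);
		/* shared master: the data is already where we would look */
		if (!bp_same_server(local_id, remote_id) &&
		    bp_fetch_from_remote_master(bkd_fd)) {
			return (1);
		}
		return (run_resolve());
	}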

bkd_pull:
	- do key exchange
	- update local master with bp data
	- send bk patch


----

Larry's version

master_id:  a locally cached copy of the repo_id of the master contacted
	    using the URL in the binpool master config
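
A hedged sketch of that cache: the repo_id of the configured master is
fetched once and kept in a local file so later operations can compare
servers without a round trip.  The cache path and bp_query_repoid() are
assumptions made for illustration, not the real names.

	#include <stdio.h>
	#include <string.h>

	#define	MASTER_ID_CACHE	"BitKeeper/log/BP_MASTER_ID"	/* hypothetical path */

	extern char	*bp_query_repoid(const char *url);	/* hypothetical: ask the bkd */

	char *
	bp_master_id(const char *master_url, char *buf, size_t len)
	{
		FILE	*f;
		char	*id;

		if ((f = fopen(MASTER_ID_CACHE, "r"))) {	/* cached copy? */
			if (fgets(buf, (int)len, f)) {
				buf[strcspn(buf, "\n")] = 0;
				fclose(f);
				return (buf);
			}
			fclose(f);
		}
		if (!(id = bp_query_repoid(master_url))) return (0);
		if ((f = fopen(MASTER_ID_CACHE, "w"))) {	/* remember it */
			fprintf(f, "%s\n", id);
			fclose(f);
		}
		snprintf(buf, len, "%s", id);
		return (buf);
	}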


clone:
	- receive and unpack sfio

	These are done as a separate request to the server:
	- if local master_id differs from bkd master_id, query local
	  master for missing binpool deltas and send list back to bkd
	- receive missing binpool data items and push to master

	- The call to 'bk -Ur get -S' for checkout:get may request
	  new binpool files from master as needed
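
The last step implies a lazy fetch when checkout:get materializes a
binary gfile whose data never came over in the clone.  A minimal sketch
under that assumption, with hypothetical helper names:

	extern int	bp_local_has(const char *key);			/* hypothetical */
	extern int	bp_fetch_from_master(const char *key);		/* hypothetical */
	extern int	bp_checkout(const char *gfile, const char *key);/* hypothetical */

	int
	bp_get(const char *gfile, const char *key)
	{
		/* clone did not copy binpool data, so fill the gap on demand */
		if (!bp_local_has(key) && bp_fetch_from_master(key)) return (1);
		return (bp_checkout(gfile, key));
	}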

bkd_clone:
	- unchanged, sends no binpool data

push:
	- Add a new response from the bkd to the end of part2 with the
	  list of binpool deltas that need to be transferred.  If this
	  list is non-empty, those deltas are fetched locally and ...
	- find local binpool_server repo_id
	- receive the remote binpool_server repo_id
	- if different
		- fetch all binpool data for patch from master
		- send bk patch with all binpool data
	- if same
		- send master all binpool data in patch
		- send bk patch with no binpool data
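
The same-vs-different branch above, sketched with hypothetical helpers;
with a shared master the binpool data only has to reach that master once
and the patch stays lean, otherwise the patch itself carries the data.

	#include <string.h>

	extern int	bp_fetch_patch_data(void);		/* hypothetical: pull the patch's bp data from our master */
	extern int	bp_send_to_master(const char *url);	/* hypothetical */
	extern int	send_patch(int fd, int with_bp_data);	/* hypothetical */

	int
	push_send_data(int bkd_fd, const char *local_id, const char *local_url,
	    const char *remote_id)
	{
		if (local_id && remote_id && !strcmp(local_id, remote_id)) {
			/* shared master: send it the data once */
			if (bp_send_to_master(local_url)) return (1);
			return (send_patch(bkd_fd, 0));	/* patch w/o bp data */
		}
		/* different (or no) master: embed the data in the patch */
		if (bp_fetch_patch_data()) return (1);
		return (send_patch(bkd_fd, 1));
	}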

bkd_push:
	- unpack patch
	- compare binpool to required binpool-mode and request any
	  missing data from remote-list.

pull:
	- send binpool mode to bkd
	- receive and unpack bk patch
	- compare binpool to required binpool-mode and request any
	  missing data from remote-list.
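
A sketch of the "compare binpool to required binpool-mode" step, on the
assumption that binpool-mode names how much binpool data this repository
is required to hold locally.  The helper names and the mode encoding are
assumptions made for illustration only.

	extern char	**bp_keys_required_by_mode(const char *mode);	/* hypothetical */
	extern int	bp_local_has(const char *key);			/* hypothetical */
	extern int	bp_request_from_remote(const char *key);	/* hypothetical */

	int
	bp_fill_to_mode(const char *mode)
	{
		char	**want = bp_keys_required_by_mode(mode);
		int	i;

		for (i = 0; want && want[i]; i++) {
			/* only ask the remote for data the mode demands and we lack */
			if (!bp_local_has(want[i]) &&
			    bp_request_from_remote(want[i])) {
				return (1);
			}
		}
		return (0);
	}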


bkd_pull:
	- create bk patch
	- if !master, add all present bp data
	- if master, add data according to binpool-mode
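
A minimal sketch of that last branch, reading "!master" as the serving
side having no binpool master configured, in which case everything it
holds is everything it can offer; otherwise the client's binpool-mode
decides.  Helper names are hypothetical.

	extern int	patch_add_all_local_bp(void);		/* hypothetical */
	extern int	patch_add_bp_by_mode(const char *mode);	/* hypothetical */

	int
	bkd_pull_add_bp(const char *master_url, const char *client_mode)
	{
		/* no master: send whatever binpool data we happen to hold */
		if (!master_url) return (patch_add_all_local_bp());
		/* with a master backing us, honor the client's binpool-mode */
		return (patch_add_bp_by_mode(client_mode));
	}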