dlm: fixes for nodir mode

The "nodir" mode (statically assign master nodes instead
of using the resource directory) has always been highly
experimental, and never seriously used.  This commit
fixes a number of problems, making nodir much more usable.

- Major change to recovery: recover all locks and restart
  all in-progress operations after recovery.  In some
  cases it's not possible to know which in-progress locks
  to recover, so recover all.  (Most require recovery
  in nodir mode anyway since rehashing changes most
  master nodes.)

- Change the way nodir mode is enabled, from a command
  line mount arg passed through gfs2, to a sysfs
  file managed by dlm_controld, consistent with the
  other config settings.

- Allow recovering MSTCPY locks on an rsb that has not
  yet been turned into a master copy.

- Ignore RCOM_LOCK and RCOM_LOCK_REPLY recovery messages
  from a previous, aborted recovery cycle.  Base this
  on the local recovery status not being in the state
  where any nodes should be sending LOCK messages for the
  current recovery cycle.  (A sketch of this check
  follows the list.)

- Hold the rsb lock around dlm_purge_mstcpy_locks() because it
  may run concurrently with dlm_recover_master_copy().
  (See the second sketch after this list.)

- Maintain highbast on process-copy lkb's (in addition to
  the master as is usual), because the lkb can switch
  back and forth between being a master and being a
  process copy as the master node changes in recovery.

- When recovering MSTCPY locks, flag rsb's that have
  non-empty convert or waiting queues for granting
  at the end of recovery.  (Rename flag from LOCKS_PURGED
  to RECOVER_GRANT and similar for the recovery function,
  because it's not only resources with purged locks
  that need a grant attempt.)

- Replace a couple of unnecessary assertion panics with
  error messages.
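
A minimal sketch of the check described in the RCOM_LOCK item above
(an illustration, not the literal hunk, which lives in the rcom
receive path).  It assumes the existing dlm_recover_status() helper,
the DLM_RS_DIR status bit and the log_limit() macro; the real check
may also compare recovery sequence numbers:

  static int ignore_stale_rcom_lock(struct dlm_ls *ls)
  {
          uint32_t status = dlm_recover_status(ls);

          /* Until the directory/master lookup phase of the current
             cycle has completed, no node should be sending us
             RCOM_LOCK messages, so anything that arrives is left
             over from an aborted cycle and can be dropped. */

          if (!(status & DLM_RS_DIR)) {
                  log_limit(ls, "ignoring stale rcom lock message");
                  return 1;
          }
          return 0;
  }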

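A second sketch, for the rsb locking item above (again illustrative
rather than the actual hunk, assuming the lock_rsb()/unlock_rsb()
helpers in fs/dlm/lock.c):

  static void purge_mstcpy_locked(struct dlm_rsb *r)
  {
          /* serialize against dlm_recover_master_copy(), which may
             operate on the same rsb during recovery */
          lock_rsb(r);
          dlm_purge_mstcpy_locks(r);
          unlock_rsb(r);
  }
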
Signed-off-by: David Teigland <teigland@redhat.com>
diff --git a/fs/dlm/requestqueue.c b/fs/dlm/requestqueue.c
index d3191bf..1695f1b 100644
--- a/fs/dlm/requestqueue.c
+++ b/fs/dlm/requestqueue.c
@@ -65,6 +65,7 @@
 int dlm_process_requestqueue(struct dlm_ls *ls)
 {
 	struct rq_entry *e;
+	struct dlm_message *ms;
 	int error = 0;
 
 	mutex_lock(&ls->ls_requestqueue_mutex);
@@ -78,6 +79,14 @@
 		e = list_entry(ls->ls_requestqueue.next, struct rq_entry, list);
 		mutex_unlock(&ls->ls_requestqueue_mutex);
 
+		ms = &e->request;
+
+		log_limit(ls, "dlm_process_requestqueue msg %d from %d "
+			  "lkid %x remid %x result %d seq %u",
+			  ms->m_type, ms->m_header.h_nodeid,
+			  ms->m_lkid, ms->m_remid, ms->m_result,
+			  e->recover_seq);
+
 		dlm_receive_message_saved(ls, &e->request, e->recover_seq);
 
 		mutex_lock(&ls->ls_requestqueue_mutex);
@@ -140,35 +149,7 @@
 	if (!dlm_no_directory(ls))
 		return 0;
 
-	/* with no directory, the master is likely to change as a part of
-	   recovery; requests to/from the defunct master need to be purged */
-
-	switch (type) {
-	case DLM_MSG_REQUEST:
-	case DLM_MSG_CONVERT:
-	case DLM_MSG_UNLOCK:
-	case DLM_MSG_CANCEL:
-		/* we're no longer the master of this resource, the sender
-		   will resend to the new master (see waiter_needs_recovery) */
-
-		if (dlm_hash2nodeid(ls, ms->m_hash) != dlm_our_nodeid())
-			return 1;
-		break;
-
-	case DLM_MSG_REQUEST_REPLY:
-	case DLM_MSG_CONVERT_REPLY:
-	case DLM_MSG_UNLOCK_REPLY:
-	case DLM_MSG_CANCEL_REPLY:
-	case DLM_MSG_GRANT:
-		/* this reply is from the former master of the resource,
-		   we'll resend to the new master if needed */
-
-		if (dlm_hash2nodeid(ls, ms->m_hash) != nodeid)
-			return 1;
-		break;
-	}
-
-	return 0;
+	return 1;
 }
 
 void dlm_purge_requestqueue(struct dlm_ls *ls)