Commit 6d56e6eb authored by yacovm's avatar yacovm Committed by Yacov Manevich

[FAB-5165] Optimize block verification



In gossip, when block messages are disseminated among peers, the
ordering service's signature on them is validated.

This causes a message to be validated in several places:

1) When it is received from the ordering service
2) When it is received from a peer via forwarding or pull
3) When it is received from a peer via state transfer

The problem with (2) is that it is done in an inefficient way:
- When the block is received from the communication layer it is verified
  and then forwarded to the "channel" module that handles it.
- The channel module verifies blocks in 2 cases:
  - If the block is part of a "data update" (gossip pull response) message
    the message is opened and all blocks are verified
  - If the block is a block message itself, it is verified again,
    although it was already verified before being passed into the
    channel module. This is redundant.

But the biggest inefficiency is in the handling inside the channel module:
once a block is verified, it is decided whether it should be
propagated to the state transfer layer (the final stop before it is
passed to the committer module). This is decided by asking the in-memory
message store whether the block has already been received before, or
whether it is too old.

The problem is that this is done *AFTER* the verification and not
*BEFORE*, and therefore - since in gossip you may receive the same block
several times (from different peers) - we end up verifying the block
and then discarding it anyway.

Empirical performance tests I have conducted show that for blocks
of 100KB, verifying a block takes between 700 microseconds
and 2 milliseconds.

When benchmarking a scenario of 1000 blocks with a single leader
disseminating to 7 non-leader peers, with a propagation factor of 4
and a block entry rate (at the leader peer) of bursts of 20 blocks
every 100ms, the gossip network was overcommitted, and starting from
block 500 most blocks were dropped because the gossip internal buffers
were full (blocks are dropped so that the network does not deadlock).

With this change applied, no block is dropped.

Change-Id: I02ef1a203f469d324509a2fdbd1c8b449a9bcf8f
Signed-off-by: yacovm <yacovm@il.ibm.com>
parent 6c3cb99d
......@@ -420,6 +420,10 @@ func (gc *gossipChannel) HandleMessage(msg proto.ReceivedMessage) {
 			gc.logger.Warning("Payload is empty, got it from", msg.GetConnectionInfo().ID)
 			return
 		}
+		// Would this block go into the message store if it was verified?
+		if !gc.blockMsgStore.CheckValid(msg.GetGossipMessage()) {
+			return
+		}
 		if !gc.verifyBlock(m.GossipMessage, msg.GetConnectionInfo().ID) {
 			gc.logger.Warning("Failed verifying block", m.GetDataMsg().Payload.SeqNum)
 			return
......@@ -468,6 +472,10 @@ func (gc *gossipChannel) HandleMessage(msg proto.ReceivedMessage) {
 			gc.logger.Warning("DataUpdate message contains item with channel", gMsg.Channel, "but should be", gc.chainID)
 			return
 		}
+		// Would this block go into the message store if it was verified?
+		if !gc.blockMsgStore.CheckValid(msg.GetGossipMessage()) {
+			return
+		}
 		if !gc.verifyBlock(gMsg.GossipMessage, msg.GetConnectionInfo().ID) {
 			return
 		}
......
......@@ -437,26 +437,6 @@ func (g *gossipServiceImpl) validateMsg(msg proto.ReceivedMessage) bool {
 		}
 	}
-	if msg.GetGossipMessage().IsDataMsg() {
-		blockMsg := msg.GetGossipMessage().GetDataMsg()
-		if blockMsg.Payload == nil {
-			g.logger.Warning("Empty block! Discarding it")
-			return false
-		}
-		// If we're configured to skip block validation, don't verify it
-		if g.conf.SkipBlockVerification {
-			return true
-		}
-		seqNum := blockMsg.Payload.SeqNum
-		rawBlock := blockMsg.Payload.Data
-		if err := g.mcs.VerifyBlock(msg.GetGossipMessage().Channel, seqNum, rawBlock); err != nil {
-			g.logger.Warning("Could not verify block", blockMsg.Payload.SeqNum, ":", err)
-			return false
-		}
-	}
 	if msg.GetGossipMessage().IsStateInfoMsg() {
 		if err := g.validateStateInfoMsg(msg.GetGossipMessage()); err != nil {
 			g.logger.Warning("StateInfo message", msg, "is found invalid:", err)
......