1. 11 Jul, 2017 3 commits
  2. 10 Jul, 2017 1 commit
  3. 07 Jul, 2017 2 commits
    • [FAB-5207] Check channel create channelID mismatch · 4709b338
      Jason Yellick authored
      
      
      The flow for channel creation works loosely as follows.
      
      1. Look up channel resources by channelID from ChannelHeader
      2. If missing, propose a new channel, based on the config update
      3. Extract the channel ID from the config update, create a template
      config from the consortium definition, and check if the config update
      satisfies the channel creation policy.
      4. Add the new channel resources to the channels map.
      
      The problem is that between steps 1 and 2, if the channelID is
      mismatched, the internal channel construction logic will believe it
      is building channelInner, while externally this channel gets
      registered as channelOuter.
      
      Thus, it is possible to replay a channel creation TX by modifying the
      outer header.  The new channel will be somewhat broken and all
      configuration updates against it will fail.
      
      This CR adds a simple check to verify that the ChannelHeader
      channelID matches the ConfigUpdate channelID, as sketched below.
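
      The check amounts to something like the following Go sketch
      (illustrative names, not the actual Fabric API; the real code
      operates on the unmarshaled protobuf messages):

      package channelconfig

      import "fmt"

      // verifyChannelID rejects a channel creation request whose outer
      // ChannelHeader names a different channel than the inner ConfigUpdate.
      func verifyChannelID(headerChannelID, configUpdateChannelID string) error {
          if headerChannelID != configUpdateChannelID {
              return fmt.Errorf("mismatched channel IDs: ChannelHeader has %q, ConfigUpdate has %q",
                  headerChannelID, configUpdateChannelID)
          }
          return nil
      }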
      
      Change-Id: I23b088563016e0aa9f30524887c3c3d49b5942fb
      Signed-off-by: Jason Yellick <jyellick@us.ibm.com>
    • [FAB-4883] Fix vendoring with parent vendored deps · 9d159a79
      Greg Haskins authored
      
      
      We change the way auto-vendoring works such that deps that
      are already vendored are not re-vendored inappropriately.  It was
      found that nested vendoring could break the previously employed
      scheme.
      
      For example, consider a package "foo/bar/baz".  It's conceivable that
      a vendor folder may appear anywhere in that hierarchy, e.g.

      ["foo/vendor", "foo/bar/vendor", "foo/bar/baz/vendor"]

      and golang would recognize the contents of any of them as a
      legitimate vendor directory.
      
      This was precisely the situation when we had chaincode in the package
      github.com/hyperledger/fabric/examples/chaincode, where
      github.com/hyperledger/fabric/vendor was in effect.
      
      We now scan the entire package hierarchy to ensure we capture the
      potential relationships regardless of where they sit in the tree;
      a sketch of such a scan follows.
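
      A minimal Go sketch of such a scan, assuming the goal is simply to
      collect every vendor directory in the tree (illustrative, not the
      actual build tooling):

      package main

      import (
          "fmt"
          "os"
          "path/filepath"
      )

      // findVendorDirs walks the package hierarchy rooted at root and
      // records every vendor directory, wherever it sits in the tree.
      func findVendorDirs(root string) ([]string, error) {
          var vendors []string
          err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
              if err != nil {
                  return err
              }
              if info.IsDir() && info.Name() == "vendor" {
                  vendors = append(vendors, path)
              }
              return nil
          })
          return vendors, err
      }

      func main() {
          dirs, _ := findVendorDirs("foo")
          fmt.Println(dirs) // e.g. [foo/vendor foo/bar/vendor foo/bar/baz/vendor]
      }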
      
      Fixes FAB-4883
      
      Change-Id: I7c6aa5ba0401cecc26bc58f5e6cda6e208109411
      Signed-off-by: Greg Haskins <gregory.haskins@gmail.com>
  4. 06 Jul, 2017 3 commits
  5. 05 Jul, 2017 8 commits
    • Jonathan Levi (HACERA)
    • Jonathan Levi (HACERA)
    • FAB-5189 Hyperledger Project should be Hyperledger · 093985ab
      Gari Singh authored
      
      
      Changes any reference to "Hyperledger Project" to just
      "Hyperledger".  Also corrects the use of "project" in
      conjunction with Hyperledger Fabric to make things clearer.
      
      Change-Id: I3bb8a1c77a2a47c4a885586fe2108e4be8337244
      Signed-off-by: Gari Singh <gari.r.singh@gmail.com>
    • FAB-5185 Remove/correct references to Java chaincode · d6c20715
      Gari Singh authored
      
      
      Java chaincode support was removed for v1.0.0, so the docs need
      to make this clear and clarify that Go is the only fully
      supported language for chaincode.  Also added a link to
      Hyperledger Composer under supported languages.
      
      Change-Id: I8bbc9e65f371df9d0835910f89a22bba16568074
      Signed-off-by: Gari Singh <gari.r.singh@gmail.com>
    • Jonathan Levi (HACERA) · f9318cdf
    • FAB-5184 Fix spelling error for peer version · d9875bb6
      Gari Singh authored
      
      
      "Docker Namepace" should be "Docker Namespace".
      
      Change-Id: Id3049bef4b2c0df81a6c4d7970ade27f8b3eb44d
      Signed-off-by: Gari Singh <gari.r.singh@gmail.com>
    • Jonathan Levi (HACERA)
    • [FAB-5165] Optimize block verification · 6d56e6eb
      yacovm authored
      
      
      In gossip, when block messages are gossiped among peers, the
      ordering service's signature on them is validated.
      
      This causes a message to be validated in several places:
      
      1) When it is received from the ordering service
      2) When it is received from a peer via forwarding or pull
      3) When it is received from a peer via state transfer
      
      The problem with (2) is that it is done in an inefficient way:
      - When the block is received from the communication layer it is verified
        and then forwarded to the "channel" module that handles it.
      - The channel module verifies blocks in 2 cases:
        - If the block is part of a "data update" (gossip pull response)
          message, the message is opened and all blocks in it are verified.
        - If the block is a block message itself, it is verified again, even
          though it was already verified before being passed into the channel
          module.  This is redundant.
      
      But the biggest inefficiency is w.r.t. the handling in the channel
      module: when a block is verified, it is then decided whether it should
      be propagated to the state transfer layer (the final stop before it is
      passed to the committer module).  This is decided by asking the
      in-memory message store whether the block has already been received
      before, or whether it is too old.
      
      The problem is that this is done *AFTER* the verification and not
      *BEFORE*, and therefore - since in gossip you may get the same block
      several times (from other peers) - we end up verifying the block and
      then discarding it anyway.  The optimization is to reorder these
      steps, as sketched below.
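
      A minimal Go sketch of the reordering, with hypothetical names
      (messageStore, verify, deliver) rather than the actual gossip API:

      package gossip

      type block struct{ seqNum uint64 }

      type messageStore interface {
          CheckValid(b *block) bool // false if already received or too old
          Add(b *block)
      }

      // handleBlock discards duplicate or stale blocks *before* paying for
      // signature verification, instead of after.
      func handleBlock(b *block, store messageStore, verify func(*block) error, deliver func(*block)) {
          if !store.CheckValid(b) {
              return // duplicate or too old; dropped without verifying
          }
          if err := verify(b); err != nil {
              return // invalid ordering service signature
          }
          store.Add(b)
          deliver(b) // hand off towards state transfer / committer
      }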
      
      Empirical performance tests I have conducted show that for blocks
      of 100KB, the time spent on verifying a block is between 700
      microseconds and 2 milliseconds.
      
      When testing a benchmark scenario of 1000 blocks with a single leader
      disseminating to 7 non-leader peers, with a propagation factor of 4 and
      a block entry rate (to the leader peer) of bursts of 20 blocks every
      100ms, the gossip network is overcommitted, and starting from block 500
      most blocks were dropped because the gossip internal buffers were full
      (we drop blocks in order for the network not to be "deadlocked").
      
      With this change applied, no block is dropped.
      
      Change-Id: I02ef1a203f469d324509a2fdbd1c8b449a9bcf8f
      Signed-off-by: yacovm <yacovm@il.ibm.com>
  6. 04 Jul, 2017 2 commits
    • FAB-5166 Docs should use Hyperledger Fabric · 9a86c1a2
      Gari Singh authored
      
      
      There are still several places in the docs
      which do not properly use the full name
      Hyperledger Fabric when referring to it.

      While this change covers a lot of files, it
      simply changes all references to use
      Hyperledger Fabric or provides a minor rewrite
      to avoid the terms altogether.
      
      As part of this, also addressed
      - FAB-5014
      - FAB-5139
      - Changed docker to Docker as appropriate
      - Other minor cleanups since this included most of the docs
      
      Change-Id: I7818a44b1411abb536a595c537202615bf901199
      Signed-off-by: Gari Singh <gari.r.singh@gmail.com>
    • [FAB-5157] Optimize peer selection of channel batches · 6c3cb99d
      yacovm authored
      
      
      In gossip, whenever a batch of channel-scoped messages
      (leadership, blocks, stateInfo, etc.) is sent to remote peers,
      a function goes over all existing alive peers and then selects
      from them a subset of peers.  This is done in the gossipInChan
      function.
      
      In case of leadership messages, the subset is taken from the entire
      membership set without an upper bound (we gossip leadership messages
      to all peers in the channel), and in case of non-leadership messages
      the subset is taken with an upper bound equal to the propagation
      fanout (configurable).
      
      Finding peers that are eligible to receive any channel-related data
      involves cryptographic computations and is non-negligible
      (measured at ~25ms in a network of 8 peers).
      
      The two possibilities (leadership and non-leadership) are calculated
      even though each invocation of gossipInChan handles only one type of
      message.  Therefore, it is beneficial performance-wise to calculate
      only the option relevant to that invocation instead of both options
      each time.
      
      This commit addresses this by calculating the peer set to send to
      according to the type of message in the invocation, roughly as in
      the sketch below.
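
      A minimal Go sketch of the idea, with hypothetical names (not the
      actual gossipInChan signature); a real implementation would also pick
      the capped subset at random rather than taking the first eligible
      peers:

      package gossip

      type peer struct{ id string }

      // selectPeers computes only the peer set relevant to the message type:
      // leadership messages go to every eligible peer, while other messages
      // are capped at the propagation fanout.
      func selectPeers(isLeadershipMsg bool, alive []peer, eligible func(peer) bool, fanout int) []peer {
          var out []peer
          for _, p := range alive {
              if !eligible(p) { // the expensive, crypto-backed check
                  continue
              }
              out = append(out, p)
              if !isLeadershipMsg && len(out) == fanout {
                  break
              }
          }
          return out
      }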
      
      Change-Id: If6940182f83ef046c1d1f7186a71946128591e69
      Signed-off-by: yacovm <yacovm@il.ibm.com>
  7. 03 Jul, 2017 3 commits
    • Merge "[FAB-5046] Add missing title for doc" · b8e189eb
      Gari Singh authored
    • Gari Singh · 2a5be7bd
    • [FAB-5153] Relax gossip send buffer behavior · 4cd2a8c1
      yacovm authored
      
      
      In gossip there are send and receive buffers that are
      allocated for each connection.
      When the throughput of messages is too high and the send buffer
      overflows, the connection to the peer is closed and the peer is removed from the membership.
      
      From performance evaluations I conducted, I conclude that:
      - Increasing the send buffer size helps withstand intense bursts
        of messages, even when the receive buffer stays the same.
      - Not closing the connection to a peer whose send buffer overflowed
        (and not removing it from the membership) helps throughput by
        giving the runtime an opportunity to recover in spite of an
        intensive burst.
      A sketch of the relaxed overflow behavior follows.
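
      A minimal Go sketch of such a relaxed send buffer, assuming a
      channel-backed queue (illustrative, not the actual gossip comm
      layer):

      package gossip

      type envelope struct{ payload []byte }

      // sendBuffer queues outbound messages; on overflow it drops the
      // message instead of closing the connection and evicting the peer
      // from the membership, so the runtime can recover from a burst.
      type sendBuffer struct {
          ch chan *envelope
      }

      func newSendBuffer(size int) *sendBuffer {
          return &sendBuffer{ch: make(chan *envelope, size)}
      }

      func (b *sendBuffer) enqueue(e *envelope) {
          select {
          case b.ch <- e:
          default:
              // Buffer full: drop this message rather than tearing down
              // the connection.
          }
      }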
      
      Change-Id: I7bc84092e366b75b6cbcaee1ea9d5320274dfc1c
      Signed-off-by: yacovm <yacovm@il.ibm.com>
  8. 02 Jul, 2017 7 commits
  9. 01 Jul, 2017 11 commits