What is the relationship between IPFS and Filecoin?

  • Filecoin and IPFS are complementary protocols, both created by Protocol Labs. IPFS allows participants in the network to store, request, and transfer verifiable data with one another. IPFS is open source, free to download and use, and is already used by a large number of teams. With IPFS, each node stores the data it considers important; there is no simple way to incentivize others to join the network or to store specific data. To address this key problem, Filecoin is designed to provide a persistent data storage system. Under Filecoin's incentive structure, clients pay to store data at specific levels of redundancy and availability, and miners earn payments and rewards by continuously storing data and cryptographically proving that they are doing so. In short: IPFS addresses content and moves it around; Filecoin is the missing incentive mechanism.

    Filecoin also builds on many components of IPFS. For example:

    Filecoin uses IPLD for its blockchain data structures
    Filecoin nodes use libp2p to secure connections between nodes
    Message passing between nodes and Filecoin block propagation use libp2p publish/subscribe (pubsub)
    In addition, the Filecoin core team includes members of the IPFS core team. Compatibility between IPFS and Filecoin will be made as seamless as possible. Even after Filecoin launches, we expect the IPFS and Filecoin open-source communities to keep collaborating to improve compatibility between the two projects.

 

Email: filapp@protonmail.com

Twitter: https://twitter.com/RalapXStartUp

Telegram: https://t.me/bigdog403

Global Filecoin Chinese Enthusiasts Site

  • X

    It has come to my attention that storage clients wish to obtain the CommD and CommR associated with the sector into which the piece referenced in their storage deal has been sealed. The client can already use the query-deal command to obtain the CID of the commitSector message corresponding to that sector - but message wait doesn't show individual commitSector arguments - it just shows some string encoding of the concatenated arguments' bytes.
    I propose to augment the ProofInfo struct with CommR and CommD such that the storage client can query for their deal and, when available, see the replica and data commitments in the query-storage-deal command. Alternatively, the query-storage-deal code could get deal state from the miner and use the CommitmentMessage CID to look up CommR and CommD (on chain) - but this seems like more work than is really necessary.

  • X

    Hmmmm. These are just my thoughts off the cuff:

    For straight-ahead performance that's not specifically concerned with the issue of loading the data (from wherever), then work on smaller (1GiB) sectors is okay. When considering optimization for space so that large sectors can be replicated, 128GB for >1GiB sectors is obviously problematic from a normal replication perspective. However, if we consider the attacker who wants to replicate fast at any cost, then maybe it's okay.
    Based on this, we could probably focus on smaller sectors as a reasonable representation of the problem. This has the unfortunate consequence that the work is less applicable to the related problem of speeding replication even when memory must be conserved to some extent.
    I guess as a single datum to help calibrate our understanding of how R2 scales, it would be worth knowing exactly how much RAM is required for both 1GiB and (I guess) 2GiB. If the latter really fails with 128GB RAM, how much does it require not to? If the work you're already doing makes it easy to get this information, it might help us reason through this. I don't think you should spend much time or go out of your way to perform this investigation though, otherwise.
    Others may feel differently about any of this.

  • X

    @xiedapao
    If there does exist such a thing, I cannot find it.

    zenground0 [7 hours ago]
    I don't believe there is

    zenground0 [7 hours ago]
    tho maybe phritz has some "refactor the vm" issues that are relevant

    laser [7 hours ago]
    I assert to you that we must create an InitActor in order for CreateStorageMiner to conform to the specification.

    Why [7 hours ago]
    I’ll take things I don’t disagree with for $400 Alex

    zenground0 [7 hours ago]
    Agreement all around. However, this is one of those changes that is pretty orthogonal to getting the storage protocol to work, and something we can already do. We need to track it, but I see it as a secondary priority to (for example) getting faults or arbitrating deals working.

    anorth [3 hours ago]
    Thanks @zenground0, I concur. Init actor is in our high-level backlog, but I'm not surprised there's no issue yet. Is it reasonable to draw our boundary there for now?

  • X

    Does there already exist a story which addresses the need to create an InitActor?

