Commercial hosting services for IPFS



    Pinning services for the InterPlanetary File System (IPFS) are commercial hosting services that provide a "pin" (IPFS jargon for "keep distributing permanently") for your IPFS file objects, for a fee.

    --

    I tested two such services, Eternum ($0.01 per GiB per day) and Pinata Cloud (first GiB free, then $0.30 per GiB per month), and found that both overcharge for duplicated IPFS objects in their storage accounting.

    --

    An IPFS object can consist of a single file, or of a set of references to other IPFS objects. In this article, I'll call the latter type "directory objects". For example:

    Say we have an IPFS directory object containing references to the unique content hashes of File_1 and File_2. That makes three distinct IPFS objects: the directory object and the two files. When you pin such a directory object, you indirectly pin the two file objects as well. If you then add another file, File_3, to the directory, there are five IPFS objects: the original directory object with two files, the new directory object with three files, and the three individual files.
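As a rough illustration (not real IPFS code), the example can be modeled with a toy content-addressed scheme. The `cid` helper here is just a truncated SHA-256 and the file contents are made up; real IPFS CIDs and directory encodings differ.

```python
import hashlib
import json

def cid(data: bytes) -> str:
    # Toy stand-in for an IPFS content identifier (truncated SHA-256).
    return hashlib.sha256(data).hexdigest()[:16]

def directory(children: list) -> str:
    # A directory object is only a list of references to other objects.
    return cid(json.dumps(sorted(children)).encode())

file_1 = cid(b"contents of File_1")
file_2 = cid(b"contents of File_2")
dir_v1 = directory([file_1, file_2])          # 3 distinct objects so far

file_3 = cid(b"contents of File_3")
dir_v2 = directory([file_1, file_2, file_3])  # a brand-new directory object

# Pinning dir_v1 indirectly pins File_1 and File_2; after adding File_3
# there are five distinct objects in total:
all_objects = {file_1, file_2, file_3, dir_v1, dir_v2}
print(len(all_objects))  # 5
```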

    --

    Since the contents of File_1 and File_2 never change, their IPFS objects stay the same. No separate copies of the files are stored even though they appear in two different directory objects; a directory object only references file objects by their hashes. IPFS is meant to be the "permanent web": you can keep old directory objects in place to preserve the objects' history. Beyond any new or modified file data, keeping an old version of a directory costs only a few bytes of storage.


    An IPFS node doesn't store multiple copies of identical IPFS objects. It only needs to store one copy of each object, even when that object is referenced from several others. The full size of our two directory objects and everything they reference is deduplicated at the storage layer, so each file is stored only once.
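A minimal sketch of that storage-layer behavior, assuming a toy block store keyed by content hash (this is an illustration, not go-ipfs's actual datastore):

```python
import hashlib

store = {}  # content hash -> bytes: the node's block store

def put(data: bytes) -> str:
    h = hashlib.sha256(data).hexdigest()[:16]
    store[h] = data  # identical content maps to the same key: stored once
    return h

f1 = put(b"contents of File_1")
f2 = put(b"contents of File_2")
f3 = put(b"contents of File_3")

# Two different directory objects referencing the same files by hash:
dir_v1 = put(str(sorted([f1, f2])).encode())
dir_v2 = put(str(sorted([f1, f2, f3])).encode())

# Both directories are stored, yet File_1 and File_2 exist only once:
print(len(store))  # 5 unique objects, not 7
```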

    --

    IPFS also splits large files into multiple blocks, and each block can likewise be deduplicated in storage. This means IPFS's actual storage requirements can be smaller than the total size of the same files on a regular, non-deduplicating file system.
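Block-level deduplication can be illustrated the same way. The 4-byte chunk size below is an artificial toy value chosen so the example is readable; real IPFS chunkers default to much larger blocks (256 KiB).

```python
import hashlib

CHUNK = 4  # toy chunk size for illustration only

def chunks(data: bytes):
    # Fixed-size chunking, a simplified version of what IPFS does to files.
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

store = {}

def put(block: bytes) -> str:
    h = hashlib.sha256(block).hexdigest()[:16]
    store[h] = block  # identical blocks are stored once
    return h

# Two files that share most of their content:
file_a = b"AAAABBBBCCCC"
file_b = b"AAAABBBBDDDD"
for f in (file_a, file_b):
    for c in chunks(f):
        put(c)

# 6 logical chunks across both files, but only 4 unique blocks on disk:
print(len(store))  # 4
```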

    --

    The problem with pinning services is that they always charge for the cumulative size of your pinned IPFS objects. They benefit from deduplicated IPFS object hashes and blocks for their own storage needs, but those savings aren't passed on to their customers. This would be fine for a traditional file hosting service; however, file and block deduplication is built into IPFS, and customers expect to benefit from it.
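A worked toy example of the billing gap, using made-up 1 GiB sizes for the three files from the earlier example:

```python
# Hypothetical sizes in GiB, assumed purely for illustration:
sizes = {"File_1": 1.0, "File_2": 1.0, "File_3": 1.0}

dir_v1 = ["File_1", "File_2"]
dir_v2 = ["File_1", "File_2", "File_3"]

# What the services bill: the cumulative size of each pin, duplicates included.
billed = sum(sizes[f] for d in (dir_v1, dir_v2) for f in d)

# What the node actually stores: each unique object exactly once.
stored = sum(sizes[f] for f in {f for d in (dir_v1, dir_v2) for f in d})

print(billed, stored)  # 5.0 3.0
```

Pinning both directory versions is billed as 5 GiB even though only 3 GiB of unique data sits on disk.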

    --

    That said, I don't believe the commercial pinning services are overcharging on purpose. I've discussed this with a co-founder of Eternum and with the developer of a yet-to-launch pinning service, and both were positive about the idea of billing customers for the deduplicated storage space they actually use. This would let more blogs and websites keep their history on IPFS while also simplifying deployment.

    --

    IPFS itself has no built-in way to calculate an object's actual storage size, so I suggest adding one to go-ipfs: create a new IPFS object made up of all of a customer's pinned objects, then recursively traverse every uniquely referenced object and sum their sizes. This yields accurate storage accounting for each customer.
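The traversal described above can be sketched as a dedup-aware accounting routine. Everything here is hypothetical: the `links` and `sizes` maps stand in for real DAG links and block sizes that an implementation would read from the node.

```python
def deduplicated_size(pins, links, sizes):
    """Sum the size of every unique object reachable from the pinned roots.

    pins  -- the customer's pinned root hashes
    links -- hash -> list of referenced hashes (empty for leaf blocks)
    sizes -- hash -> size of that object's own data
    """
    seen, stack = set(), list(pins)
    while stack:
        h = stack.pop()
        if h in seen:
            continue  # count each unique object only once
        seen.add(h)
        stack.extend(links.get(h, []))
    return sum(sizes[h] for h in seen)

# Toy DAG matching the article's example (hashes abbreviated to names):
links = {"dir_v1": ["File_1", "File_2"],
         "dir_v2": ["File_1", "File_2", "File_3"]}
sizes = {"dir_v1": 100, "dir_v2": 150,
         "File_1": 10_000, "File_2": 20_000, "File_3": 30_000}

# Shared files are counted once even though both directories reference them:
print(deduplicated_size(["dir_v1", "dir_v2"], links, sizes))  # 60250
```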


    [Previous articles]

    - IPFS vs. HTTP: how can the internet evolve again?

    - The five most anticipated blockchain projects of 2019

    - Latest IPFS updates, issue 24


    Scan the QR code to join the IPFS community



Email:filapp@protonmail.com

Twitter:

https://twitter.com/RalapXStartUp

Telegram:

https://t.me/bigdog403

A global community site for Chinese Filecoin enthusiasts

  • X

    It has come to my attention that storage clients wish to obtain the CommD and CommR associated with the sector into which the piece referenced in their storage deal has been sealed. The client can already use the query-deal command to obtain the CID of the commitSector message corresponding to that sector - but message wait doesn't show individual commitSector arguments - it just shows some string encoding of the concatenated arguments' bytes.
    I propose to augment the ProofInfo struct with CommR and CommD such that the storage client can query for their deal and, when available, see the replica and data commitments in the query-storage-deal command. Alternatively, the query-storage-deal code could get deal state from the miner and use the CommitmentMessage CID to look up CommR and CommD (on chain) - but this seems like more work than is really necessary.

  • X

    Hmmmm. These are just my thoughts off the cuff:

    For straight-ahead performance that's not specifically concerned with the issue of loading the data (from wherever), then work on smaller (1GiB) sectors is okay. When considering optimization for space so that large sectors can be replicated, 128GB for >1GiB sectors is obviously problematic from a normal replication perspective. However, if we consider the attacker who wants to replicate fast at any cost, then maybe it's okay.
    Based on this, we could probably focus on smaller sectors as a reasonable representation of the problem. This has the unfortunate consequence that the work is less applicable to the related problem of speeding replication even when memory must be conserved to some extent.
    I guess as a single datum to help calibrate our understanding of how R2 scales, it would be worth knowing exactly how much RAM is required for both 1GiB and (I guess) 2GiB. If the latter really fails with 128GB RAM, how much does it require not to? If the work you're already doing makes it easy to get this information, it might help us reason through this. I don't think you should spend much time or go out of your way to perform this investigation though, otherwise.
    Others may feel differently about any of this.

  • X

    @xiedapao
    If there does exist such a thing, I cannot find it.

    zenground0 [7 hours ago]
    I don't believe there is

    zenground0 [7 hours ago]
    tho maybe phritz has some "refactor the vm" issues that are relevant

    laser [7 hours ago]
    I assert to you that we must create an InitActor in order for CreateStorageMiner to conform to the specification.

    Why [7 hours ago]
    I’ll take things I don’t disagree with for $400 Alex

    zenground0 [7 hours ago]
    Agreement all around. However this is one of those changes that is pretty orthogonal to getting the storage protocol to work and something we can already do. We need to track it but I see it as a secondary priority to (for example) getting faults or arbitrating deals working.

    anorth [3 hours ago]
    Thanks @zenground0, I concur. Init actor is in our high-level backlog, but I'm not surprised there's no issue yet. Is it reasonable to draw our boundary there for now?

  • X

    Does there already exist a story which addresses the need to create an InitActor?

