Why do I need to use Filecoin to pay for my data storage and retrieval, rather than another cryptocurrency such as Bitcoin, or a fiat currency such as the US dollar?

  • Filecoin uses its native token to incentivize the participation of storage and retrieval miners, the supply side of our network. Filecoin is designed so that the most profitable choice for every participant (including clients, miners, investors, and developers) is to take actions that improve the network's quality of service. Filecoin is built on a new blockchain consensus protocol based on Proof-of-Storage. Clients who want to store files on the Filecoin network hire miners to store multiple copies of their files on the network. To earn filecoin payments and block rewards, each miner must submit publicly verifiable cryptographic proofs showing that they are continuously storing the data. Payments and rewards in Filecoin power this incentive structure, ensuring a fair, permissionless, robust, and decentralized storage network.
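    To make the incentive loop concrete, here is a toy Go sketch of one settlement step. This is not the real protocol (the types, units, and verification step are invented, and block rewards actually come from consensus rather than per-deal settlement); it only illustrates that a miner who stops storing, and therefore stops proving, stops getting paid.

        package main

        import "fmt"

        // StorageDeal and StorageProof are illustrative stand-ins, not go-filecoin types.
        type StorageDeal struct {
            Client        string
            Miner         string
            PricePerEpoch uint64 // payment per proving period, in made-up units
        }

        type StorageProof struct {
            Miner string
            Valid bool // stands in for verifying a cryptographic proof-of-storage
        }

        // settleEpoch pays the miner for one proving period only if a valid
        // proof was submitted; otherwise the miner earns nothing.
        func settleEpoch(deal StorageDeal, proof StorageProof) uint64 {
            if proof.Miner != deal.Miner || !proof.Valid {
                return 0 // no proof, no payment
            }
            return deal.PricePerEpoch
        }

        func main() {
            deal := StorageDeal{Client: "alice", Miner: "m1", PricePerEpoch: 10}
            fmt.Println(settleEpoch(deal, StorageProof{Miner: "m1", Valid: true}))  // 10
            fmt.Println(settleEpoch(deal, StorageProof{Miner: "m1", Valid: false})) // 0
        }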


  • X

    It has come to my attention that storage clients wish to obtain the CommD and CommR associated with the sector into which the piece referenced in their storage deal has been sealed. The client can already use the query-deal command to obtain the CID of the commitSector message corresponding to that sector - but message wait doesn't show individual commitSector arguments - it just shows some string encoding of the concatenated arguments' bytes.
    I propose to augment the ProofInfo struct with CommR and CommD such that the storage client can query for their deal and, when available, see the replica and data commitments in the query-storage-deal command. Alternatively, the query-storage-deal code could get deal state from the miner and use the CommitmentMessage CID to look up CommR and CommD (on chain) - but this seems like more work than is really necessary.
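    A minimal sketch of the augmented struct; only CommR and CommD come from the proposal, while the other fields, their types, and the package name are illustrative guesses:

        package storagedeal

        import cid "github.com/ipfs/go-cid"

        // ProofInfo with the proposed additions. Today the client can only
        // recover the commitSector message CID via query-deal; carrying the
        // commitments here lets query-storage-deal print them directly once
        // the sector has been sealed and committed.
        type ProofInfo struct {
            SectorID          uint64
            CommitmentMessage cid.Cid // CID of the commitSector message (already available)

            // Proposed additions:
            CommR [32]byte // replica commitment of the sealed sector
            CommD [32]byte // data commitment covering the sector's pieces
        }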

  • X

    Hmmmm. These are just my thoughts off the cuff:

    For straight-ahead performance work that isn't specifically concerned with the issue of loading the data (from wherever), smaller (1GiB) sectors are okay. When considering optimization for space so that large sectors can be replicated, needing 128GB of RAM for >1GiB sectors is obviously problematic from a normal replication perspective. However, if we consider the attacker who wants to replicate fast at any cost, then maybe it's okay.
    Based on this, we could probably focus on smaller sectors as a reasonable representation of the problem. This has the unfortunate consequence that the work is less applicable to the related problem of speeding up replication even when memory must be conserved to some extent.
    I guess as a single datum to help calibrate our understanding of how R2 scales, it would be worth knowing exactly how much RAM is required for both 1GiB and (I guess) 2GiB sectors. If the latter really fails with 128GB of RAM, how much does it require in order not to? If the work you're already doing makes it easy to get this information, it might help us reason through this. Otherwise, though, I don't think you should spend much time or go out of your way on this investigation.
    Others may feel differently about any of this.
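    On the RAM question above: if it is cheap to collect that datum, here is one low-effort sketch of how the measurement might be bracketed in Go. The workload in main is a stand-in for the actual sealing call, and since much of the sealing work likely happens outside the Go heap (in the Rust proofs code reached over FFI), watching the process's maximum resident set size (e.g. /usr/bin/time -v) is the more trustworthy number for the 1GiB-vs-2GiB comparison.

        package main

        import (
            "fmt"
            "runtime"
        )

        // reportHeapUsage runs fn and prints how much the Go heap grew.
        // This only sees Go allocations, so treat it as a lower bound and
        // prefer process RSS for the real comparison.
        func reportHeapUsage(label string, fn func()) {
            var before, after runtime.MemStats
            runtime.GC()
            runtime.ReadMemStats(&before)
            fn()
            runtime.ReadMemStats(&after)
            fmt.Printf("%s: heap delta %+d MiB, total allocated %d MiB\n",
                label,
                (int64(after.HeapAlloc)-int64(before.HeapAlloc))/(1<<20),
                int64(after.TotalAlloc-before.TotalAlloc)/(1<<20))
        }

        func main() {
            // Stand-in workload; in practice this would wrap the sealing
            // call for a 1GiB or 2GiB sector.
            reportHeapUsage("dummy 256 MiB buffer", func() {
                buf := make([]byte, 256<<20)
                for i := range buf {
                    buf[i] = 1
                }
            })
        }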

  • X

    @xiedapao
    If there does exist such a thing, I cannot find it.

    zenground0 [7 hours ago]
    I don't believe there is

    zenground0 [7 hours ago]
    tho maybe phritz has some "refactor the vm" issues that are relevant

    laser [7 hours ago]
    I assert to you that we must create an InitActor in order for CreateStorageMiner to conform to the specification.

    Why [7 hours ago]
    I’ll take things I don’t disagree with for $400 Alex

    zenground0 [7 hours ago]
    Agreement all around. However this is one of those changes that is pretty orthogonal to getting the storage protocol to work and something we can already do. We need to track it but I see it as a secondary priority to (for example) getting faults or arbitrating deals working.

    anorth [3 hours ago]
    Thanks @zenground0, I concur. Init actor is in our high-level backlog, but I'm not surprised there's no issue yet. Is it reasonable to draw our boundary there for now?

  • X

    Does there already exist a story which addresses the need to create an InitActor?
