"Uploading" files in Zeronet is the equivalent of moving a file to the appropriate folder.
Oddly, this procedure of copying a file to the zeronet data folder is taking extremely long.
In theory, "uploading" a file should take roughly the same time as to "Transfer a file locally from an internal storage place to the Zeronet Data-Cache folder", which should be very fast, as it doesnt need to be "uploaded" anywhere outside of a users datafolder.
After the file has been added, it only actually gets "uploaded" once someone requests it.
So, does anyone have an idea how to make "copying a file to one's data folder" to make it available on ZeroNet faster than it is currently done in IPFS?
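To put a number on how fast a purely local copy should be, here is a small benchmark sketch. The `data_dir` path is a stand-in for the real ZeroNet data folder (e.g. `~/ZeroNet/data/<site>`); the 10 MB payload size is just an illustrative choice.

```python
import os
import shutil
import tempfile
import time

def timed_copy(src: str, dst: str) -> float:
    """Copy src to dst and return the elapsed time in seconds."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return time.perf_counter() - start

# Create a 10 MB test file in a temporary directory.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "payload.bin")
with open(src, "wb") as f:
    f.write(os.urandom(10 * 1024 * 1024))

# "data_dir" stands in for the real ZeroNet data folder.
data_dir = os.path.join(tmp, "data")
os.makedirs(data_dir, exist_ok=True)

elapsed = timed_copy(src, os.path.join(data_dir, "payload.bin"))
print(f"Local copy of 10 MB took {elapsed:.3f}s")
```

On typical hardware a local copy like this finishes in a fraction of a second, which is the baseline the "add to data folder" step should be measured against.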
Read more →
One of the areas where ZeroNet could be improved is that each item is currently unique by default.
That means (hash-)identical items cannot be identified as the same and shared in common, even among merger sites.
This leads to a fragile state for optional files whenever not everyone shares all the items of a big site by default.
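The core observation behind "hash-identical" is that a content-derived identifier depends only on the bytes themselves, so two byte-for-byte copies map to the same ID no matter where they are published. A minimal sketch, using SHA-512 purely as an illustrative digest:

```python
import hashlib

def content_id(data: bytes) -> str:
    """Content-derived identifier: identical bytes always map to the same ID."""
    return hashlib.sha512(data).hexdigest()

picture_a = b"\x89PNG...identical image bytes"   # item on one site
picture_b = b"\x89PNG...identical image bytes"   # byte-for-byte copy on a merger site

# Hash-identical items can be recognized as the same regardless of which
# site (or merger site) published them.
print(content_id(picture_a) == content_id(picture_b))  # True
```

If items were addressed this way, both copies above could count toward the same peer pool instead of being treated as two unrelated files.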
An example that illustrates the above is a shared picture (or other informational item) which the original author decides to delete, and which therefore disappears the very moment that deletion spreads via "sign & publish".
Even though other seeders/peers may still have that item cached, there is no way to preserve it under the exact unique hash by which it was shared (and referenced) by others within ZeroNet.
So even if someone has archived a copy of the original item outside of ZeroNet and re-adds it, it must be re-discovered by all other peers because its new hash is unknown to them, restarting its peer count at 0 even though it is identical to the item before.
Compounding this problem, the context built around the original item, such as its description, comments, or votes, cannot be retrieved: after deletion it all becomes meaningless, since the referenced file is gone.
For all the reasons above, an option to reference specifically marked files by their unique hash value across all ZeroNet sites would be tremendously useful long-term.
If anyone has ideas on how to tackle this within ZeroNet (I know IPFS addresses exactly this problem, but that is yet another technology), please add your thoughts in the comments.
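One way to sketch such a cross-site option is a shared index that maps a content digest to every location where those bytes live. Everything here is hypothetical (the index, `register`, and the site names are not part of ZeroNet); it only illustrates how a re-added identical file could keep its old identity:

```python
import hashlib
from collections import defaultdict

# Hypothetical cross-site index: digest -> all (site, path) locations
# where byte-identical content is available.
index: dict = defaultdict(list)

def register(site: str, path: str, data: bytes) -> str:
    """Record a file under its content digest and return that digest."""
    digest = hashlib.sha512(data).hexdigest()
    index[digest].append((site, path))
    return digest

image = b"...the shared picture's bytes..."
d1 = register("site-A", "img/cat.png", image)

# The author of site-A deletes the file; later someone re-adds the exact
# same bytes on site-B. The digest is identical, so existing references
# (and the accumulated peer pool) could still resolve to the content.
d2 = register("site-B", "archive/cat.png", image)

print(d1 == d2, len(index[d1]))  # True 2
```

Because lookup is by digest rather than by site-specific path, the re-added copy does not start from a peer count of 0, and context that references the digest stays meaningful.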