In a conversation with @nb about decentralized identifiers (PID, DID, IPFS...) we wondered why libraries have not been more engaged in this area. Hardly any mention of #bittorrent or #ipfs in #code4lib mailing lists or journals...
#p2p is relevant for digital archiving, but library institutions tend to avoid public infrastructure. Possible reasons: it may look a bit fishy (associations with illegal file sharing, cryptocoins...) and IT departments have security concerns. Moreover, adoption of new technologies in libraries can take years, even decades.
Another reason why libraries have not done more in #p2p may be: they mostly do not own the content they provide, so they cannot freely share it without restrictions. This could change with #opendata such as research data. One of the rare recent publications I could find is https://doi.org/10.5281/zenodo.7646355 (application of #IPFS for research data in @nfdi4earth #nfdi) - see https://www.dkrz.de/de/kommunikation/aktuelles/ipfs-pinning
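(Side note on why IPFS suits archiving: it addresses data by content rather than location, so a file's identifier is derived from its bytes and any node holding the same bytes can serve it. A minimal sketch of that idea using a plain SHA-256 digest — real IPFS CIDs additionally use multihash/base32 encoding and chunk large files into a Merkle DAG, so the actual values differ:)

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an identifier from the bytes themselves -- the core idea
    behind IPFS CIDs. Real CIDs add multihash encoding and chunking,
    so this is an illustration, not an actual CID."""
    return hashlib.sha256(data).hexdigest()

# The same bytes always yield the same address, no matter which node
# stores them -- "pinning" is a promise to keep those bytes available.
record = b"research dataset, version 1"
addr = content_address(record)
print(addr)

# Any modification yields a different address, so silent corruption
# or tampering is detectable -- useful for long-term archiving.
assert content_address(b"research dataset, version 2") != addr
```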
@nichtich I've made academic librarians use bittorrent (e.g. to transfer files from one computer to another over a slow network) but I admit the use cases were very limited. Downloading and archiving torrents would be useful, but few libraries would be that brave. Distributing via bittorrent (or even #peertube) has limited benefits when your university already gives you unlimited egress for free. Huge datasets à la #AcademicTorrents benefit from bittorrent clients, but few repositories offer any (#Zenodo?).