Thursday, November 16, 2023

The Emergence of P2P Protocols Set to Revolutionize Content Distribution

 

The LSD85 Project

The next internet revolution won't run on a blockchain; instead, a new set of peer-to-peer protocols is about to emerge and disrupt global content distribution and curation.

The status quo

The first peer-to-peer revolution was stopped mainly because major companies were able to sue individuals around the world for using such software; scared, most people stopped using those tools. I foresee the emergence of a new generation of peer-to-peer protocols built on TOR, making them impossible to trace and censor from the clearnet.

Most of the governments that allowed major companies to sue citizens at the time now rely heavily on TOR to fight the information war. For this reason, the cost of censoring TOR (which would be the only way to limit the adoption of tools built on those new protocols) will be too high for these governments.

This will lead to a new revolution in content distribution and curation.

We are all streamers

We are now living in the world of streaming: everyone wants to share a part of their life with the world and communicate it in a safe space.

Existing centralized streaming platforms arguably do not offer a great experience for the average citizen: it is impossible to stream licensed music (which is most of the music people listen to), quality is often limited until a stream becomes highly popular, and platforms apply heavy censorship to what content can be broadcast.

In the 2000s, downloading mp3s was the killer app of the peer-to-peer ecosystem; in the future, it will be live streaming. The implementation of such protocols will face heavy resistance from established entities, namely the major streaming platforms active today but also the social network platforms. That being said, we should remember that as of today YouTube is basically full of pirated content, so I think it will be difficult for them to mount any solid argument against the deployment of such a system.

To address their concerns, the legal framework will have to be reformed: it doesn't make much sense to ask artists to earn their revenue mainly from live shows, because of the very low fees paid by centralized platforms, and then let those platforms take most of the profits by adding unwanted advertisement.

Each content subscriber in this new world should be empowered to properly license the content they consume, if they want to do so in a non-anonymous manner. Governments and regulatory bodies will have to adapt to this new situation: either they adapt, or they have to block TOR at the geographical level. It will be a good opportunity to revisit the approach towards advertising, which could shift from content producers to content consumers.

At a high level, what are now the centralized content distribution platforms will become advertising agencies, offering money to content consumers in exchange for their metadata.

Back to the roots

As the protocols will be built on top of TOR, the tools built with them will be anonymous to use. With proper design, it becomes impossible for any third party to gather usage data without consent. This assurance of anonymity and privacy will foster creative content, content that is very likely not even produced today.

The ability, as an example, to share one's personal curated music collection with the rest of the world opens new perspectives on exploring each other's worlds. Exploring and discovering curated content from individuals across the globe, combined with the continuous progress of generative AI, will lead to cultural change that is hard to foresee but feels crucial to our survival. We live in a time where we all need to be creative to survive in the long term.

In the long run, this could have a cultural impact as significant as Albert Hofmann's discovery of lysergic acid on that date, 85 years ago.

Collective responsibility

Educated people are responsible; no technology can make people more responsible.

In our current world, most people are not aware of the massive amount of content moderation that is done by humans. This is a great opportunity to shift that burden and make us collectively more responsible; maybe technology can help us here. The new karma is digital too: users will be rewarded for contributing to the curation process, by flagging inappropriate or harmful content and engaging in discussions about what should be shared within the community.

As a user of this new platform, the score of your curation activity will be directly correlated with how long your content lives in the system when you are not actively streaming it. It is hard to predict what kind of content will emerge when the curation score of a content producer impacts the longevity and accessibility of their productions; one thing is for sure, it will be very different from what the recommendation algorithms offer us on centralized platforms.
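To make this concrete, here is a small sketch in Scala; the names and the formula are purely my own assumptions for illustration, not part of any protocol specification:

    import scala.concurrent.duration._

    // Hypothetical: how long content stays replicated by peers once its
    // producer stops streaming it, as a function of their curation score.
    final case class Producer(curationScore: Double) // assumed to be in [0, 1]

    def retention(p: Producer, base: FiniteDuration = 1.day): Duration =
      base * (1 + 10 * p.curationScore) // an active curator gets ~11x the baseline

    val idle   = retention(Producer(0.0)) // 1 day: content fades quickly
    val active = retention(Producer(1.0)) // 11 days: content lives on

The exact curve doesn't matter; what matters is that the incentive to curate is paid out in longevity rather than in money.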

Looking back, we might perceive our current recommendation algorithms as an unfortunate stage in the evolution of online content consumption, one that ultimately paved the way for a more participatory and user-driven content ecosystem.

Convergence

The potential for convergence with AI models is high: a hybrid system could combine the knowledge of previously curated content to assist users in their daily curation tasks. This opens the way for the next generation of recommendation algorithms, ones that do not optimize for the profit of the content distributor.

Each user will have the ability to customize their fitness function, whether this is done individually or collectively. This shift represents a departure from the traditional one-size-fits-all approach of recommendation algorithms and opens up a world of possibilities where users shape their content experiences according to their unique preferences and values.
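As a sketch of what such a customizable fitness function could look like (all types and weights below are hypothetical):

    // A piece of content, scored along dimensions a user might care about.
    final case class Content(novelty: Double, trust: Double, popularity: Double)

    // A fitness function is just a user-supplied scoring function.
    type Fitness = Content => Double

    // One user optimizes for discovery, another for trusted sources;
    // neither optimizes for a distributor's profit.
    val explorer: Fitness = c => 0.7 * c.novelty + 0.3 * c.trust
    val cautious: Fitness = c => 0.8 * c.trust + 0.2 * c.popularity

    def rank(feed: List[Content], fitness: Fitness): List[Content] =
      feed.sortBy(c => -fitness(c))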

This should bring us closer together, but we will also have to take responsibility for tracking original content collaboratively. The biggest challenge, the abuse of gamification by adversarial agents, can only be addressed at scale if the network is able to reach a consensus identifying original content and its producers, to prevent flooding and Sybil attacks.

There is no need for a traditional blockchain to ensure decentralized validation of ownership; the core value here is proof of curation, which has value only by relying on a web of trust. No technology has ever solved ownership conflicts for us, and it never will; instead, we should leverage metadata to assist our individual and collective responsibility. This can help build credibility and establish ownership validation through social consensus rather than relying solely on technological mechanisms like blockchain.

Combining a distributed consensus algorithm with social consensus is a novel subject; ideally, the choice among the available strategies should be left to the user as we experiment and move forward. In the long run, strategies might take on political colors, similar to how using ad-blockers can be seen as a political statement. The alignment of validation strategies with political or ideological beliefs in such a network resonates with our current reasoning on AI alignment.

I don't think anyone has a perfect answer for that, but hopefully reasoning about such a system might help. Fundamentally, the decisions made by the developers of the algorithms should not have any impact on the alignment: every parameter must be exposed so it can be tuned dynamically by the hybrid consensus system.


Aloïs Cochard - November 16th 2023

Saturday, April 16, 2016

Quickfix all the things with Sarsi

Here is a new project which got released recently on github and hackage: it is named Sarsi, and it aims at being a sort of pandoc for quick fixing.

Quick fixing

It's basically fixing, inside your text editor/IDE, the warnings or errors returned by the compiler/build tool.


A sample session using sarsi-hs and sarsi-nvim.


Motivation

I use nvim with stack, and even though I really appreciate all the features provided by the neomake integration, I also really like to have my build tool running continuously and to fix from its output, especially when hacking on source dependencies.

Doing some coding with sbt as well, it always felt wrong that I had to use two completely different setups to solve the same problem.

Design

Sarsi decouples this problem using a simple protocol implemented with msgpack and unix pipes. This should make it easy to embed a client directly into a text editor or a build tool.

The current implementation is written in Haskell; the library provides the main abstractions to write consumers/producers.

While the terminology might ring a bell to those familiar with message brokers, we can basically say that a consumer is a text editor and a producer a build tool. This can vary slightly depending on integration needs.
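To give an idea of what flows through the pipe, here is a sketch in Scala (Sarsi itself is Haskell, and the field names below are my own guess at the shape of a message, not Sarsi's actual wire format):

    // The kind of event a producer (build tool) emits, msgpack-encoded,
    // for a consumer (text editor) to decode and jump to the right location.
    sealed trait Level
    case object Warning extends Level
    case object Error extends Level

    final case class Fix(file: String, line: Int, column: Int, level: Level, message: String)

    val event = Fix("src/Main.hs", 42, 7, Error, "Variable not in scope: foo")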

Modules

There are currently two producers: one for Haskell, sarsi-hs (which should work with both cabal and stack, but is only tested with the latter), and another one for SBT, sarsi-sbt.

They work slightly differently, as you'll see in the documentation, but those details are abstracted away by the protocol.

It is then possible to consume those fixes using one of the two consumers.

The first, sarsi-nvim, works exclusively with Neovim and gives the best experience and the easiest integration path.

If you prefer to keep your good old vim/vi, there is another consumer, sarsi-vi, which has to be started independently of your text editor. It then maintains in real time a file containing the fixes, which you can load/reload using :cfile `sarsi`.vi .

Get started

If you would like to give Sarsi a try, check out the README at:
github.com/aloiscochard/sarsi

You'll find installation instructions there, but if you have issues or would like to contribute a consumer/producer, just join gitter.im/aloiscochard/sarsi for a discussion.

Monday, December 22, 2014

The Cake Is a Lie

In case you don't know yet, the cake pattern is a terrible idea.


I often hear people advising the Reader monad as an alternative, but even if I find the solution conceptually elegant (and I use it often in Haskell), I'm skeptical about it being practical in Scala.
Instead, in my previous company, I was mainly using implicits with a bunch of ugly Guice reflection at the top layer (an artifact from the past more than a principled technological choice).

Having the chance to start a greenfield project recently, I decided to find an alternative to the solutions I knew about... as none of them was really satisfactory.

Talk is cheap, show me the code.

Basically, I use implicits up to the application layer, where I just have one layer of stacked traits... let's see it in action (a minimal sketch follows the list). In this gist we have:
  • common.Module
    • A basic trait to describe modules (set of components)
  • persistence.PersistenceModule
    • A module with two components
  • persistence.postgresql.Database
    • A component
  • web.controllers.Foo
    • A Play! controller
  • web.Global
    • Integration with Play! dependency injection
  • tools.InitializePlatform
    • Integration with a main application
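The gist tells the full story, but here is a minimal self-contained sketch of the pattern, with simplified, hypothetical components:

    trait Module

    // A component with no dependencies.
    final class Database(url: String) {
      def run(sql: String): Unit = println(s"[$url] $sql")
    }

    // A component that pulls its dependency from implicit scope.
    final class UserStore(implicit db: Database) {
      def create(name: String): Unit = db.run(s"INSERT INTO users VALUES ('$name')")
    }

    // The module wires the components together; lazy vals keep the
    // initialization order sane.
    trait PersistenceModule extends Module {
      implicit lazy val database: Database = new Database("jdbc:postgresql:app")
      lazy val users: UserStore = new UserStore
    }

    // The application layer: a single layer of stacked traits.
    object InitializePlatform extends App with PersistenceModule {
      users.create("alois")
    }

Nothing below the application layer ever names a concrete module, which keeps each component easy to test in isolation.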


This approach seems to work great so far! Let me know if you give it a try.

Saturday, February 2, 2013

Quick bug fixing in Scala with SBT and Vim


UPDATE: Dave Cleaver created an SBT plugin to generate quickfix files! https://github.com/dscleaver/sbt-quickfix

Until now, if you wanted vim quickfix integration with SBT, you had two solutions. Unfortunately, none of them is, in my opinion, usable for day-to-day development:
  • Launching SBT from Vim creates a new instance of SBT at each call, and starting SBT takes time... a lot of time.
  • VimSIDE is still very experimental (I wasn't able to make it work outside of the demo project) and heavy, too heavy for me... one of the reasons I love using Vim is that it's light, fast, and doesn't try to change the code I write while pretending to be smarter than me (like IntelliJ).
So, I like to use SBT in its own shell, running in interactive mode and enjoying the power of '~' for continuous compilation/testing.
The only thing is that each time I'm fixing compilation errors/warnings, I have to navigate manually to the file and enter the line number to jump to the right place.

That time is over!
The solution is very simple (but very hacky!): just a single bash script which:
  • Appends warn/error SBT messages to a temporary file
  • Monitors certain actions done in the interactive session and deletes the file when necessary
Let's see what the script looks like:
That's all! If you want to use it:
  • Download it and make it executable
  • Add it to your path with a fancy name (hint: qsbt)
  • Edit your .vimrc to add the SBT error format pattern and a key binding (check the comments in the script for an example).
And now enjoy continuous quick-fixing by starting sbt with the qsbt script!

Monday, May 16, 2011

A simple (REST) web service client in Scala

For the purpose of creating a nice API for my new project Caligo (more on this in a future blog post), I was looking for a simple solution to access REST web services in Scala.

My requirement was simple: access an HTTP web service and exchange data with it using the JSON format.

During my experiment I came across these two nice Scala libraries:

Dispatch seems to be the only HTTP client available for Scala today, and since it's based on Apache's one, there is no need to worry about its reliability or compatibility.

SJSON was chosen over Lift-JSON due to its better handling of reflection on beans; both of them did a great job on case classes, but only SJSON was effective on beans (which was mandatory for me, since I must do polymorphism in my model and case classes do not support this).

Lift-JSON's support of beans (through reflection) should be improved in the version targeting Scala 2.9.

Here is a sample usage of the frameworks. Both of them are easily integrated, and using the power of Scala's syntax, a full client request doesn't take much code to implement (in this case 4 lines of code, model included):
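The snippet itself was embedded as a gist; what follows is a rough reconstruction from memory rather than the original code: the endpoint and the Track model are made up, and the Dispatch `url`/`>-` operators and SJSON's `Serializer.SJSON.in` entry point are how I recall the classic APIs, which may differ in other versions.

    import dispatch._                    // classic Dispatch (0.8-era API)
    import sjson.json.Serializer.SJSON   // assumed SJSON entry point

    // The model (hypothetical); SJSON can also reflect over plain beans.
    case class Track(title: String, artist: String)

    val http = new Http
    // GET the resource and hand the JSON body to SJSON to get our model back.
    val track = http(url("http://api.example.com/tracks/1") >- { body =>
      SJSON.in[Track](body)
    })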

Tuesday, June 22, 2010

Vim is everywhere

I'm now sure that vim is the perfect tool to write *clean* code!

And for vim fanatics, you can even get vim inside NetBeans (the integration works much better than in Eclipse):
http://jvi.sourceforge.net/

Friday, April 30, 2010

Spring Batch integration module for GridGain

For the purpose of using Spring Batch in a scalable and distributed manner to process huge amounts of data, I am currently developing some components to make the integration of Spring Batch with compute/data grids easier.

Different solutions are offered by Spring Batch to provide scalability; the one that best suits my needs is remote chunking.

As I had already done some investigation of GridGain before, I chose this framework to implement a distributed remote chunking system that can be easily integrated into any existing Spring Batch system.

Using GridGain is really straightforward, and setting up a grid on a development machine doesn't need much configuration.

The only issue I faced is due to the fact that GridGain uses serialization to deploy tasks on nodes: in order to be able to deploy a remote ChunkProcessor, it must contain a serializable ItemProcessor and ItemWriter, which unfortunately is not the case by default.

So instead of creating new interfaces, I made a SerializableChunkProcessor which only accepts a serializable ItemProcessor and ItemWriter. It's surely not the smartest solution, but since I can't modify the default interfaces in Spring Batch and I don't want to create my own, this workaround will suffice.
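As a sketch of the idea (in Scala for consistency with the rest of this blog; this is an approximation of the workaround, not the module's actual source):

    import org.springframework.batch.core.step.item.SimpleChunkProcessor
    import org.springframework.batch.item.{ItemProcessor, ItemWriter}

    // Reuse the default chunk processing logic, but require both collaborators
    // to be Serializable so GridGain can ship the processor to remote nodes.
    class SerializableChunkProcessor[I, O](
        processor: ItemProcessor[I, O] with Serializable,
        writer: ItemWriter[O] with Serializable)
      extends SimpleChunkProcessor[I, O](processor, writer)
      with Serializable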

Usage
Here is the job application context used for the integration test; as you can see, the 'real' ItemProcessor / ItemWriter are injected into the GridGain chunk writer:

Download
You can download the spring-batch-integration-gridgain module here:
http://github.com/downloads/aloiscochard/spring-batch-integration-gridgain/spring-batch-integration-gridgain-0.0.1-SNAPSHOT.jar

If you want to see a full working sample, take a look at the integration test. The full project sources can be downloaded here:
http://github.com/aloiscochard/spring-batch-integration-gridgain