Streaming experiments

From this server (running on a small virtual machine in the cloud, a single-core Xeon with 4 GB of RAM) I stream a few samples of different sizes and bitrates in many ways. Common to all these streams is the use of open source tools and of unicast transmission, at the moment the only way available to anyone to transmit on the web.
Available, but effectively usable only through the cloud, because only a very small number of domestic connections allow more than 1 Mbps of uplink.
In the cloud, any virtual or physical server (cheap, but never free) comes with (at least) a 1 Gbps connection to the web, even a small virtual machine like "mine" (in quotes because that slice of server is anything but mine).
It is also clear that the diffusion of blade-based servers may increase the number of hosted virtual machines, but the total bandwidth of the chassis can only be statistically sufficient. Bandwidth performance will vary over time: whatever is available when the virtual machine is created will unavoidably be influenced by the load of the surrounding virtual machines, or by its migration from one physical server to another.

Unicast

For those who do not know the term, unicast means transmitting as many identical and independent feeds as there are requesting clients. This can be a true stream as well as copies of small files, each covering a few seconds of the entire length of a multimedia content (the media segment), as done in adaptive streaming. Either way it means: 1 user, 1 feed (or 1 file set); 1000 users, 1000 feeds (or 1000 file sets).
This one-to-one identity seems mandatory in Video on Demand services (where each client plays or stops the content on demand, in an uncorrelated way), but it also applies to live transmission (a sports match, a news service) where there is only one feed to show, a show that happens now, and whose pause and replay are much more a marketing argument than a required feature. But so it is.
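A back-of-envelope calculation makes the scaling problem concrete. A minimal sketch in shell, assuming a hypothetical per-stream bitrate of 2 Mbps:

    #!/bin/sh
    # Aggregate unicast bandwidth grows linearly with the audience.
    BITRATE_MBPS=2   # assumed per-stream bitrate, not a measured value
    for VIEWERS in 1 1000 1000000; do
        echo "$VIEWERS viewers -> $((VIEWERS * BITRATE_MBPS)) Mbps aggregate"
    done

At 2 Mbps per stream, a thousand viewers already need 2 Gbps, twice the 1 Gbps pipe of a small virtual machine like mine.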

It is clear that this kind of delivery is hardly sustainable even for a small number such as 1000 simultaneous viewers. In the past, design requirements focused on disk performance, on the hypothesis that every client would want to "butterfly" over the entire content, which would make the disk heads butterfly as well. In practice this feature is used by a very small percentage of clients (a video is either appreciated or rejected), and the adoption of solid-state memories solves that problem.
The real nightmare is output bandwidth, which grows with the drain: 1 million users, 1 million parallel feeds. Millions of simultaneous hits are not common for video on the web, but broadcast services very often reach many millions of simultaneous viewers.
In radio broadcasting the problem does not exist, because transmission relies on the satellite footprint or on the network of terrestrial transmitter towers. On the web the approach is similar: as the number of clients grows, the number of servers is increased, sharing the load among them. But even cable/fiber bandwidth is finite: only an intelligent use of caches can avoid saturating it.

http Caches

The only way the web can hold up under peak loads of client requests is to reduce, at the peaks, the accesses to the origin servers. This was the point of failure of the pioneering distribution of multimedia content over the Internet in the mid '90s.
The solution was found by supplying a certain storage capacity to the web nodes (i.e. the routers): the caches, which store the content passing by and keep it for reuse on further requests.
It is clear that I am not referring to the browser cache (which steals part of our laptop's hard disk to store the last recalled stuff), but to the reverse caches or proxy caches which, dealing with multimedia streams, allow the web to deliver content (sometimes forming CDNs). (An interesting article on the subject has been written by Jeff Houston.)
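A quick way to check whether such a cache sits in the path is to look at the response headers; a sketch with curl, on a hypothetical segment URL (not every cache exposes these headers):

    # HEAD request: an "Age" or "X-Cache: HIT" header suggests a proxy
    # cache, not the origin, served the object.
    curl -sI http://example.com/streams/segment-00042.ts | grep -i -E 'cache-control|age|via|x-cache'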

Many millions of such web caches are today available for distribution to all, not only through the CDN services sold by CDN brokers or cloud operators. Probably I'm a dreamer, but I would prefer plenty of caches embedded in the web, available to all for free, instead of enhancing the role of the bandwidth brokers, whose business would shrink if a sufficient number of such free caches were diffused. Unluckily, they have a massive influence on the market.

What is sure is that today I cannot transmit my content over the web at any significant scale without passing through a broker (or the like) such as YouTube, which reserves the right to a non-exclusive usage of my content.

Conventional caches are usually described as available only for repetitive streams (a document, as well as a segment of an adaptive streaming transmission), but not for live streaming because, they say, a live stream is unpredictable.
On the contrary, live streams could simply be handled by edge caches without requesting the content from the origin again, by supplying a (small) degree of intelligence to the edge router, the same kind of intelligence that allows the Internet to survive.
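The reason this is feasible: in segmented live streaming only the playlist changes, while each segment, once written, is immutable, so ordinary http caching rules suffice. A sketch of the header policy I would expect (URLs and values are assumptions, not the behavior of any specific CDN):

    # Playlist: the only moving part, must be revalidated often.
    curl -sI http://example.com/live/stream.m3u8      # expect something like: Cache-Control: max-age=1
    # Segment: immutable once written, edge caches can keep it.
    curl -sI http://example.com/live/stream-00042.ts  # expect something like: Cache-Control: max-age=86400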

This is quite crazy because, although we are all convinced that bits and bytes have no sex, the CDN owners, brokers and service providers can discriminate among streams, allowing or forbidding their use of http caches.
Even with my bare tests I noticed that, at the beginning, the simultaneous drain of the same stream by multiple clients from many places (on different equipment, with different OSs and software clients) loaded the server with an aggregate bandwidth only slightly higher than that of a single stream, while today recalling the same stream from three PCs connected to the same router loads the server with three times the bandwidth.
Yes, it is clear, I cannot blame only the telcos: it could also be due to my cloud supplier, depending on the settings of the VMware ESX manager on the server that hosts my small virtual machine.

What is sure is that with this approach it is extremely expensive (impossible for ordinary citizens) to reach the millions of simultaneous viewers of radio broadcasting.
Nevertheless, IP delivery of media is the future, mainly because broadband will soon be available to anyone through mobile connections, confining physical wiring (copper as well as fiber) to the first and second world.

The alternative to unicast is multicast (the network approach to live broadcast delivery), made available only to the Telcos' own businesses.

Multicast

Multicast could be an alternative, but there are too many constraints on the effective diffusion of this delivery mode (usually disabled by Telcos in IPv4, but mandatory in IPv6), the first being the limited number of multicast addresses and the second (it could not be otherwise) their centralized management.
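For what it is worth, the mechanics are simple: on a multicast-enabled network a single sender feeds any number of receivers. A minimal sketch with VLC (the group address and file name are arbitrary examples):

    # Sender: one copy of the stream, addressed to a multicast group
    # (239.0.0.0/8 is the administratively scoped IPv4 range).
    cvlc sample.ts --sout '#rtp{dst=239.1.1.1,port=5004,mux=ts}' --loop

    # Any receiver on the same multicast-enabled network joins the group:
    vlc rtp://@239.1.1.1:5004

One stream leaves the server whether one or one million clients join; it is the routers, not the origin, that replicate the packets.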
Storing the content in the network (especially video) was already proposed in 2000 by Hua, Tran and Villafane in a paper titled Caching multicast protocol for VoD delivery, soon shortened to Caching Multicast Protocol (CMP).
In this paper they propose, in other words, that the network storage be managed as a huge "video server", to allow the application to scale far beyond the physical limitations of its video server. In this sense, the intelligent enhancements required to perform clever multi-unicast or multi-multicast (or X-cast) are already available, mainly for VoD where, in the end, the same content is stored on caches instead of being recalled from the origin.
And although deploying these enhancements on existing equipment would probably require little more than firmware upgrades, why should Telcos spend their money to decrease the delivery costs of OTT operators?

Transmission technology

My professional story is mainly centered on broadcast system integration. I am not a programmer, so on this website you should not look for elegance of media presentation, but for a way to learn how the Internet can be used to support content delivery over the web.

You can find here:

    1) nearly all the sizes and compression formats available (up to HD, excluding for the moment HEVC/H.265):
      * CIF 352x288 and - if not anamorphic - 512x288
      * PAL 720x576 and - if not anamorphic - 1024x576, plus a couple of reduced horizontal sizes (Half D1)
      * HD, 1080 as well as 720 lines
    mainly progressive, but also interlaced
    2) conventional live http streaming and adaptive bitrate HLS streaming, segmented from the live stream with the VLC command line (a segmentation sketch follows a little further below)
    3) rtp point-to-point, a potential web replacement for professional radio links (see the sketch right after this list)
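For item 3, a point-to-point RTP feed needs nothing more than the receiver's address. A minimal sketch with VLC (the destination address is a documentation example, the input file an assumption):

    # Sender: push a pre-compressed transport stream to a single receiver.
    cvlc radiolink.ts --sout '#rtp{dst=203.0.113.7,port=5004,mux=ts}'

    # Receiver (at 203.0.113.7): listen on the agreed port.
    vlc rtp://@:5004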

The streams are all pre-compressed (because of the small power of the server); the only thing done live, when it occurs, is the segmentation of HLS from the plain live stream. Only in one case (hls2) are the segments stored as they are.
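That live segmentation can be done with VLC's livehttp output; a sketch, with placeholder paths and URLs rather than my exact command line:

    # Cut an incoming live stream into 10-second .ts segments plus a
    # rolling .m3u8 playlist, both served by the ordinary web server.
    cvlc http://127.0.0.1:8080/live.ts --sout \
      '#std{access=livehttp{seglen=10,delsegs=true,numsegs=5,index=/var/www/hls/stream.m3u8,index-url=http://example.com/hls/stream-########.ts},mux=ts{use-key-frames},dst=/var/www/hls/stream-########.ts}'

The ######## pattern is replaced by the segment number; delsegs and numsegs keep only a short rolling window, which is what makes a live stream look, to a cache, like a sequence of small immutable files.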

I have not applied any encryption technology, but any external encryption method can be used. We must be aware, though, that rights-management laws must change alongside the economic models that brought about the current crisis. We can also be sure that, despite any effort of the programmers, someone will sooner or later break any encryption: whatever man makes, man breaks; it is all a matter of time.

Nevertheless, businesses centered on media distribution (especially business-to-consumer) see the defense of their expensive assets as the core of their activity, at the moment the only way, after advertisements, to achieve a profitable return on investment.

Available contents

I've used:

    1) the collection of the first five short films of The Story of Stuff
    2) two samples, of 5 and 10 minutes, of the movie Home by Yann Arthus-Bertrand (see the disclaimer)
    3) a small insert about Dyson's fanless fan (downloaded from the web, see the disclaimer)
    4) a nice interview, downloaded from the web, with an old partisan, filmed in Florence in 2003. Long life to him: today this partisan, I hope, is about 95 years old.
    5) a 10-second shot on a highway bridge, filmed in interlaced HD (1080i) with my phone, useful to evaluate the quality of video compression, Internet connection speed, and the performance of encoders and decoders.
    6) a short 15-minute shot, filmed with my phone while biking along the river Tiber in Rome: a point of view that even many Romans have never had.