Segmented download is the key to battling the burden of long tail content in your content delivery strategy. Online content has a famously long tail: even with more than 3.5 billion internet users, there is a massive amount of content online that is rarely seen, or has never been seen at all. Long tail content has become a bit like a black hole. It’s a web page that no one ever visits (or maybe only by mistake), a news article that gets lost in a never-ending newsfeed, or an episode of a long-lost show waiting for interest from one nostalgic fan. We’re generating content at an astonishing rate, and that long tail is growing longer and longer. While the number of internet users is growing, it’s being drastically outpaced by the rate at which content is created.

As content providers, it’s now more important than ever to have the right tools in place to optimize the content users are interested in and reclaim resources taken up by long tail content, or by content that users view but are far less interested in. Keeping long tail content can consume a lot of your resources; doing away with it limits your content selection and leaves fewer options for your consumers. So, how can content providers take advantage of long tail content? Segment it. By moving from monolithic downloads to segmented downloads on our content delivery network, we’ve helped content providers optimize the caching and downloading stages of content delivery.
Historically, if a user made a byte range (partial content) request, that range was streamed from the origin, while the full content was simultaneously cached from the origin to a local server. If multiple users requested different byte ranges, our network used a 2MB threshold to determine whether to have the user(s) wait to receive their range from the already inbound stream; beyond that threshold, another stream would be opened from the origin to serve the content. This could result in a window of time where the origin would see a higher than expected load while the complete file was pulled into the cache on the CDN.
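The threshold decision in this monolithic model can be sketched roughly as follows. The 2MB constant comes from the description above, but the distance-based rule and all names here are illustrative assumptions, not our actual CDN code:

```python
# Illustrative sketch of the monolithic model's 2 MB threshold decision.
THRESHOLD = 2 * 1024 * 1024  # 2 MB

def handle_range_request(range_start: int, bytes_inbound_so_far: int) -> str:
    """Decide how to serve a new byte-range request while a full
    download from the origin is already in flight.

    Assumption: if the requested offset is within THRESHOLD bytes of the
    data already pulled in, the user waits on the inbound stream;
    otherwise a second stream is opened to the origin.
    """
    if range_start - bytes_inbound_so_far <= THRESHOLD:
        return "wait-for-inbound-stream"
    return "open-new-origin-stream"
```

With a hypothetical request near the front of the file (say offset 1,000,000 when nothing is cached yet), this sketch would wait on the inbound stream; a request deep into a large file would open a second origin stream, which is exactly the extra origin load described above.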
The hang-up with what we’re calling the monolithic download model is that, as content libraries and file sizes continue to grow, two major inefficiencies emerge: (1) although we can swap many cached files in and out of memory, the mounting volume of content still consumes significant resources, because this download process requires the full file to be in memory as part of the delivery; (2) downloading the entire file for a partial content request puts unnecessary draw on the origin server, especially when the file is large. Even when the overall cache-hit ratio is high for a particular origin server, there are still times when content won’t be in cache, either because it’s been purged or because it’s infrequently requested.
Long tail content accounts for a large portion of your library, yet it is rarely consumed, or only partially consumed. A segmented download only pulls the ranges from the origin that end users actually consume, lowering the total outbound traffic from the origin. Whether it’s a two-minute excerpt of a video that keeps getting requested or a movie no one ever watches to the end, we only pull the data needed to fulfill the request. It’s a small change that goes a long way toward improving cache efficiency and reducing the draw on the origin: we’re no longer pulling or storing bytes that your consumers are not interested in, allowing the cache to persist for a longer period with the same amount of physical storage. With this improved model, we segment downloads into 8-megabyte chunks to optimize for inventory space, download speed, and other empirically measured efficiencies. We also spread segments across multiple active drives on a given server to take advantage of input/output (I/O) efficiencies.
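The segment arithmetic behind this is simple to illustrate. Below is a minimal sketch, assuming fixed 8 MB segments and inclusive byte ranges; the function names are hypothetical, not part of our platform:

```python
SEGMENT_SIZE = 8 * 1024 * 1024  # 8 MB segments, per the model above

def segments_for_range(start: int, end: int) -> range:
    """Indices of the 8 MB segments that must be fetched from the
    origin to satisfy an inclusive byte range [start, end]."""
    return range(start // SEGMENT_SIZE, end // SEGMENT_SIZE + 1)

def segment_byte_range(index: int) -> tuple[int, int]:
    """Inclusive byte range covered by a segment index, suitable for
    an HTTP `Range: bytes=start-end` request to the origin."""
    start = index * SEGMENT_SIZE
    return start, start + SEGMENT_SIZE - 1
```

For example, a request for roughly the first 10 MB of a file touches only segments 0 and 1; the remaining hundreds of segments of a long movie are never pulled from the origin unless someone actually watches them.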
Challenges and Implications
Segmenting downloads from your origin is not without its challenges. Jump-to-time logic (where we translate a time offset into a byte offset) has to interact intelligently with the segmenting logic; given the delicacy of media time translation libraries, our teams had to conduct extensive testing to overcome this challenge. If you use a TTL (Time to Live) system that passes through to your origin, you will need to make the TTL long enough to allow all segments to be requested. And to maintain the reliability of our instant global purge feature, we made our inventory simultaneously file-aware and segment-aware, ensuring that users can still instantly purge full and segmented files stored in different areas.
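To see why jump-to-time and segmenting have to cooperate, consider this toy sketch. It assumes a precomputed index of (timestamp, byte offset) pairs, which is a stand-in for real container-format parsing; the index data and names are illustrative only:

```python
SEGMENT_SIZE = 8 * 1024 * 1024  # 8 MB segments

def jump_to_time(seek_seconds: float,
                 keyframe_index: list[tuple[float, int]]) -> tuple[int, int]:
    """Translate a time offset into a byte offset using a hypothetical
    keyframe index (timestamp, byte_offset pairs sorted by timestamp),
    then map that byte offset to the segment that holds it. Real media
    time translation is far more delicate; this only shows how the two
    layers interact."""
    byte_offset = 0
    for ts, off in keyframe_index:
        if ts > seek_seconds:
            break
        byte_offset = off  # last keyframe at or before the seek point
    return byte_offset, byte_offset // SEGMENT_SIZE
```

A seek to 90 seconds in a stream indexed at 0s/60s/120s lands on the 60-second keyframe's byte offset, and the segmenting layer then fetches only the segment containing that offset rather than the whole file.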
We are still developing this method to improve its effectiveness and unlock even more benefits. For starters, we’re looking at cross-edge segmenting: in addition to storing segments across the drives on a given server, we are developing the ability to spread segments across multiple edge servers to gain even greater I/O efficiencies. We’re also working to apply our least recently used (LRU) algorithm, which we already use for complete files, to segments. Refining our LRU algorithm in this way will enable the system to evict complete files and segments at different rates, optimizing storage space based on the independent access frequency of each file or file segment. The larger the library, the longer the long tail grows, but having a massive content library allows you to cover a wider array of consumer interests. Segmented download allows you to capitalize on all the advantages of keeping infrequently or partially used content without the added costs and burden on your infrastructure.
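The segment-aware LRU idea above can be sketched in a few lines. This is a minimal illustration keyed by (content ID, segment index) so that individual segments age out independently of the rest of their file; the class and its capacity model are assumptions, not our production eviction code:

```python
from collections import OrderedDict

class SegmentLRU:
    """Toy LRU cache keyed by (content_id, segment_index), so each
    segment is evicted based on its own access frequency."""

    def __init__(self, capacity_segments: int):
        self.capacity = capacity_segments
        self.cache: OrderedDict[tuple[str, int], bytes] = OrderedDict()

    def get(self, content_id: str, segment_index: int):
        key = (content_id, segment_index)
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as most recently used
            return self.cache[key]
        return None  # miss: this segment would be fetched from the origin

    def put(self, content_id: str, segment_index: int, data: bytes):
        key = (content_id, segment_index)
        self.cache[key] = data
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used segment
```

In this sketch, a popular two-minute excerpt keeps its few segments hot in cache while the never-watched tail of the same movie is evicted, which is the storage win segmenting is after.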