Porg Teaches Us Why TTLB (Time to Last Byte) Beats TTFB (Time to First Byte)
In honor of the new Star Wars movie, I planned a DIY project with my five kids (yes, I have five!). We figured out the parts and materials we needed and were ready to dive in. First, though, I had to order some parts from a Chinese website and pay extra for a rush order. We were excited to get the first package from Amazon Prime the very next day!
Over the next couple of days the rest of our order arrived except two critical things: the base, and a special clay-like material we needed. It has been over 10 days, and it doesn’t look like Porg will have a house anytime soon.
If you’re into network performance — say, you’re a Site Reliability Engineer (SRE) or work in DevOps — you might already see the moral of the story.
Just because the first package got here quickly did not mean our project would be done quickly. If not all of the packages arrive, the first one doesn’t really matter. (In geek talk, replace package with TCP packet.) So why has website performance been measured for years by TTFB (Time To First Byte) as a key metric? Truth be told, we should care at least as much about TTLB (Time To Last Byte).
TTFB is the amount of time it takes to receive the first byte of the response from the server. A browser sends a request. The request reaches the web server, which might query a database, run some other logic, maybe even call another server or service. When done, the server starts sending the response back to the browser. So TTFB actually measures how long the browser’s request waited for the server to finish running its logic.
For SaaS and content providers, this is important, of course. It measures how effective the backend code is: the DB performance and all other backend components. As a matter of fact, in many cases, this is also the only thing directly in their control. Once the web response starts its journey over the Internet, there is nothing anyone can do (hint: anyone but Teridion and its customers).
Think about it. Formula One cars are engineered to perfection! But put them in Silicon Valley traffic, and they will be stuck like the rest of us. So as good as developers, DevOps, and SREs are, all they can do is improve the TTFB.
By now, it’s clear that even if the first byte is really quick, unless ALL the bytes get to you, it’s not relevant from the end user perspective.
What Really Matters for Users
We should actually care about two things:
- Time to Last Byte (TTLB)
- The ratio TTFB/TTLB
Only once the last byte of every component is received by the browser or client (the DocumentComplete event) can the user actually enjoy the web page (assuming your website is enjoyable!).
The ratio between TTFB and TTLB is critical. The gap between them reflects your throughput: “how fast can you move a chunk of data from the server to the client” (or, as I call it, HFCYMACODFTSTTC). If the time from TTFB to TTLB is short, your connection is good.
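To make the two metrics concrete, here is a minimal sketch of how you could measure TTFB and TTLB for a single HTTP request using Python’s standard library. The helper name `measure` and its parameters are hypothetical, and real tooling (browsers, load testers) measures these far more carefully:

```python
import http.client
import time

def measure(host, path="/", port=80):
    """Hypothetical helper: time a single GET and report TTFB and TTLB.

    TTFB = time until the first response byte arrives.
    TTLB = time until the last response byte arrives.
    """
    conn = http.client.HTTPConnection(host, port, timeout=10)
    start = time.monotonic()
    conn.request("GET", path)
    resp = conn.getresponse()
    first = resp.read(1)                # first byte received -> TTFB
    ttfb = time.monotonic() - start
    rest = resp.read()                  # drain the rest -> TTLB
    ttlb = time.monotonic() - start
    conn.close()
    return ttfb, ttlb, len(first) + len(rest)
```

A short TTFB with a long tail to TTLB points at the network (throughput), not the backend; in a browser, the Navigation Timing API exposes the same idea via `responseStart` and `responseEnd`.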
The key thing is that as a developer/DevOps/SRE, you can (and should) control the TTFB. You can also improve the DocumentComplete using many known methods. While caching your static resources on a CDN will help, there are limited ways to improve the ratio between the TTFB and the TTLB for the many dynamically generated components of modern applications and websites.
How to Improve Throughput (for the whole page)
Teridion Kumo-X actually improves the NETWORK performance. Think of the network as pipes, and despite having a big pipe (say, 100Mbps) you only get a dribble.
This is your network:
This is your network with Teridion Kumo-X:
Now you choose!
It’s easy to see that more water flows “with Teridion Kumo-X.” That’s because Kumo-X actually increases the throughput, giving you a wide, fast lane that avoids traffic and congestion. That means you can transfer the whole page payload 300% to 700% faster.
That means your application or website will load much faster for your customers. And that means improved user experience and happier customers.
Read more about how Teridion Kumo-X improves Internet performance for application and content providers.