HTTaP should not be used on public-facing networks where bandwidth is a concern (latency usually matters more), so compression is not a priority, but it IS possible to implement.
The inherent cost is that the server must parse the request headers and find the line declaring which compressed encodings the client accepts. It is doable, but it needlessly increases the coding effort...
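For illustration, here is a minimal sketch of that header scan, assuming the whole request already sits in a NUL-terminated buffer. The function name is hypothetical (not taken from the HTTaP sources) and a robust version would match the header name case-insensitively:

#include <string.h>

/* Hypothetical helper: return 1 if the request headers announce that
 * the client accepts gzip, 0 otherwise.
 * 'req' is the full request, NUL-terminated, headers included. */
static int client_accepts_gzip(const char *req)
{
    const char *h = strstr(req, "Accept-Encoding:");
    if (!h)
        return 0;                    /* header absent: no compression */
    const char *eol = strchr(h, '\n');
    const char *gz  = strstr(h, "gzip");
    /* accept only if "gzip" appears on that same header line */
    return (gz && (!eol || gz < eol)) ? 1 : 0;
}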
20170429:
A couple of interesting things to notice.
First, compression is only interesting for large files, not for the small requests and answers. Large files appear in two cases:
- For large HTML or JS files served to the browser. Images are typically already compressed. Selected files can be pre-compressed and served if the client supports the chosen algorithm (usually gzip/deflate). But don't forget that proper minification and cleanup already contribute to smaller files and shorter downloads: removing tabs, whitespace and comments shrinks the data by 20 to 60% (depending on the source code's style).
- Large data chunks, such as memory dumps, which are exchanged in both directions and can't be pre-compressed. One easy way to reduce the chunk size is to use raw binary encoding instead of ASCII-encoded strings (see the sketch after this list). Another is to use the #HYX file format, which supports repetition of the last character as well as address ranges. It's not as good as proper compression but it is easily supported in C, JavaScript, bash...
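As a rough illustration of the binary-versus-ASCII point (this sketch is not part of HTTaP or the HYX tools): sending a memory dump as hexadecimal text costs two characters per byte, so raw binary roughly halves the payload before any compression is even considered.

#include <stdio.h>
#include <stddef.h>

/* Encode 'len' bytes as hexadecimal ASCII into 'out'
 * (which must hold at least 2*len+1 chars).
 * The 2x expansion is exactly what raw binary transfers avoid. */
static void hex_encode(const unsigned char *in, size_t len, char *out)
{
    static const char digits[] = "0123456789ABCDEF";
    for (size_t i = 0; i < len; i++) {
        out[2*i]     = digits[in[i] >> 4];
        out[2*i + 1] = digits[in[i] & 0x0F];
    }
    out[2*len] = '\0';
}

int main(void)
{
    unsigned char dump[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
    char txt[2*sizeof(dump) + 1];
    hex_encode(dump, sizeof(dump), txt);
    printf("%zu raw bytes -> %zu ASCII chars (%s)\n",
           sizeof(dump), sizeof(txt) - 1, txt);   /* 4 -> 8 */
    return 0;
}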
Second, the client is not expected to change its capabilities during a TCP session. This means that the headers can be parsed just once, when the server receives the first request. The next requests can just skip the headers.
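One possible way to exploit this, sketched below with hypothetical names (the real server keeps its own per-connection state): parse the headers on the first request of the TCP session, cache the result, and reuse it for every following request. It reuses the client_accepts_gzip() helper sketched earlier.

/* Hypothetical per-connection state: one instance per TCP session. */
struct httap_session {
    int headers_parsed;   /* 0 until the first request has been seen */
    int accepts_gzip;     /* cached result of the Accept-Encoding scan */
};

/* Called for every request on this session. The expensive header scan
 * runs only once; later requests reuse the cached flag. */
static int session_accepts_gzip(struct httap_session *s, const char *req)
{
    if (!s->headers_parsed) {
        s->accepts_gzip   = client_accepts_gzip(req);  /* see earlier sketch */
        s->headers_parsed = 1;
    }
    return s->accepts_gzip;
}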
20200505:
If needed, the #micro HTTP server in C could serve pre-compressed files thanks to the "shadow" system that adds headers.
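The shadow mechanism itself is specific to the #micro HTTP server, but the general idea could look like the sketch below: if the client accepts gzip and a pre-compressed sibling of the requested file exists, send that file along with the matching Content-Encoding header. The ".gz" naming here is an assumption for illustration, not the actual shadow convention.

#include <stdio.h>
#include <sys/stat.h>

/* Sketch only: pick the file to send. If the client accepts gzip and a
 * pre-compressed sibling exists, return its path and set *encoding to
 * the header line to emit; otherwise return the original path. */
static const char *choose_file(const char *path, int accepts_gzip,
                               char *buf, size_t buflen,
                               const char **encoding)
{
    struct stat st;
    *encoding = NULL;
    if (accepts_gzip) {
        snprintf(buf, buflen, "%s.gz", path);   /* assumed naming scheme */
        if (stat(buf, &st) == 0) {              /* pre-compressed file exists */
            *encoding = "Content-Encoding: gzip\r\n";
            return buf;
        }
    }
    return path;                                /* fall back to the plain file */
}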