I'm moving my personal site to #Hugo on #Nginx. I have a fresh server setup, firewall up, #Docker installed, an nginx docker container with a bind mounted folder with the site, nginx on the host redirecting :80 to the container, letsencrypt for https. Now I have to figure out Hugo themes? I thought that would be trivial, but I'm done for now. I'll read up tonight. https://robert.arles.us
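The host-side redirect described above might look roughly like this. This is only a sketch of the setup as described, not the actual config; the container port (8080) and the exact proxy headers are assumptions:

```nginx
# Hypothetical host nginx config proxying to the Docker container.
# Assumes the container publishes the site on 127.0.0.1:8080.
server {
    listen 80;
    server_name robert.arles.us;

    # certbot/letsencrypt typically rewrites this block to redirect to HTTPS.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```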
Fediverse traffic is pretty bursty and sometimes there will be a large backlog of Activities to send to your server, each of which involves a POST. This can hammer your instance and overwhelm the backend’s ability to keep up. Nginx provides a rate-limiting function which can accept POSTs at full speed and proxy them slowly through to your backend at whatever rate you specify.
For example, PieFed has a backend which listens on port 5000. Nginx listens on port 443 for POSTs from outside and sends them through to port 5000:
upstream app_server {
    server 127.0.0.1:5000 fail_timeout=0;
}
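The rate-limiting directive itself appears to have dropped out of the post. Going by the description below (100 MB of shared memory, 10 POSTs per second, keyed per client IP, placed outside the server {} block), it was presumably something like:

```nginx
# Hypothetical reconstruction: shared-memory zone keyed on client IP,
# ~100 MB of state, 10 requests/second per IP. The zone name "one" is assumed.
limit_req_zone $binary_remote_addr zone=one:100m rate=10r/s;
```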
This will use up to 100 MB of RAM as a buffer and limit POSTs to 10 per second, per IP address. Adjust as needed. If the sender is using multiple IP addresses the rate limit will not be as effective. Put this directive outside your server {} block.
Then after our first location / {} block, add a second one that is a copy of the first except with one additional line (and change it to apply to location /inbox or whatever the inbox URL is for your instance):
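The copied location block isn't shown in the post, but from the description the one additional line is a limit_req directive, roughly like this (zone name, inbox path, and proxy headers are assumptions):

```nginx
# Hypothetical second location block: same proxying as location /,
# plus rate limiting against a zone assumed to be named "one".
location /inbox {
    limit_req zone=one burst=300;   # queue up to 300 POSTs, drain at the zone's rate
    proxy_pass http://app_server;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```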
300 is the maximum number of POSTs it will have in the queue. You can use limit_req_dry_run to test the rate limiting without actually doing any limiting – watch the nginx logs for messages while doing a dry run.
It’s been a while since I set this up so please let me know if I missed anything crucial or said something misleading.
The supply chain attack on XZ Utils is fascinating. It does not appear to be a hack but rather an inside job. The malicious code was added by someone who has been co-maintaining the project for the past two years. There are a considerable number of (presumably) legitimate and non-trivial changes associated with that person. From what I can tell at a quick glance, however, no public changes unrelated to xz.
Given the effort that went into hiding the backdoor, I’m fairly certain that it was supposed to operate undetected for a long time. It’s probably just luck that someone noticed the side-effects it caused, discovering it merely a month after it was planted.
I’m looking forward to a thorough analysis of the implant, hopefully it will allow conclusions about intentions. As things stand now, this could be a long-term operation by an APT, pushing their maintainer into a popular project which (like way too many open source projects) was constantly short on contributors. Obviously, monetary interests are also a possible explanation.
Also worth noting: why was OpenSSH chosen as the target here? Some people blame systemd support, saying that distributors made OpenSSH vulnerable due to added dependencies. While this probably made the attackers’ job somewhat easier, I doubt that they would have given up without this dependency.
They also didn’t actually care that it is OpenSSH. They merely needed a network-connected vehicle for code execution. It certainly came in handy that OpenSSH runs as root and that it is installed on pretty much any Linux server.
The other obvious vehicle for this kind of attack would have been web server software. The only difference: with nginx and Apache there are two big players in this field, and the attackers would have to cover both. But there are plenty of dependencies here that could be abused.
Which means: nginx and Apache dependencies (especially the transitive ones and especially those used by both) should probably be checked for signs of suspicious activities. OpenSSL is the obvious target and has received significant scrutiny in the years since Heartbleed. But I wonder what else is there that nobody notices.
Well, actually, it is. #FreeBSD ports exist. You probably want to put guacamole in its own jail (but that's kind of a "standard" thing) and you also want to place such a service behind a reverse-proxy, but even that is covered for #nginx and #apache in guacamole docs: https://guacamole.apache.org/doc/gug/reverse-proxy.html
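For reference, the nginx side of that follows the general shape in the linked Guacamole docs. This is a sketch from memory, not copied from a working setup; the Tomcat host/port and log setting are assumptions:

```nginx
# Sketch of a Guacamole reverse proxy; the Tomcat backend is assumed
# to be listening on 127.0.0.1:8080.
location /guacamole/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_buffering off;
    proxy_http_version 1.1;                       # needed for WebSocket upgrade
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    access_log off;
}
```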
The only thing that wasn't trivial at all was setting up authentication against the samba AD, together with a database to store connection and map access privileges. But then, this is a very specific scenario, not sure it makes sense to write something about that?
Actually I can't really bash #Lemmy for this (as much as I'd want to) because it seems like pretty much the rest of the #fediverse has this problem of images not getting immediately removed from the server when you delete the image or even your account. I've tested with #Mastodon, #Pleroma, and #Misskey (even Misskey somehow gets it wrong by not immediately making your image unavailable by the URL even if you've deleted it from your Drive, however this doesn't always happen, see below).
Now I'm not sure about instance admin stuff (and I haven't tested with my admin hat on in this Misskey), but I'm guessing you don't have a view of all uploaded photos by a user that an admin can delete from in Mastodon and Pleroma either (please prove me wrong though!). :sagume_think: Misskey does let you view all files uploaded by a user and delete from there, though I'm not sure if the file gets immediately removed from the server once you hit delete, but it's probably gonna be like what I'm going to write below
What I do know about Misskey is that, if you've never accessed the URL generated for the file in your Drive, and hit delete, the next time you access it, it definitely will no longer be available; a pure 404 as expected. However if you've accessed the URL at least once, then the file will get cached. I'm not sure if it's Misskey doing this or the reverse proxy (like #nginx), because clearing cache or switching to a whole 'nother browser still shows the file is available.
Oh, and have I mentioned that all three pieces of software will automatically put your image onto the server the moment you hit the upload button, even if you haven't submitted the post yet? Yeah, it's just as easy to accidentally upload something to those three as it is with Lemmy.
The purgeable space feature of #macOS makes it an awful operating system for #server use cases. #nginx doesn't seem to support using purgeable space to fulfill its operations.
@GossiTheDog @dcoderlt @da_667 So ah… after someone had theoretically tried to whip something together for some minutes…
…would anybody have a link showing how someone could, purely hypothetically, configure certain files to map to /dev/urandom (so, outside the webroot) on #nginx?
A client came to me complaining about large bills for using the media file manipulation service https://cloudinary.com
It turned out the service hosts the files on #AWS and resells #Amazon traffic at a decent markup.
I suggested getting a server at #Hetzner with unmetered traffic and setting up a cache server there using #nginx.
We managed to cut traffic from #Cloudinary from 2-3 TB down to 12-20 GB per day. That reduced the monthly bill by 6,000-8,000 USD.
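A caching proxy like that could be sketched roughly as follows. To be clear, this is not the client's actual config; the cache sizes, hostname, and upstream are all assumptions for illustration:

```nginx
# Cache media fetched from Cloudinary on the Hetzner box, so repeat
# requests are served locally instead of consuming metered Cloudinary traffic.
proxy_cache_path /var/cache/nginx/media levels=1:2 keys_zone=media:100m
                 max_size=500g inactive=30d use_temp_path=off;

server {
    listen 443 ssl;
    server_name media.example.com;   # hypothetical cache hostname

    location / {
        proxy_pass https://res.cloudinary.com;
        proxy_set_header Host res.cloudinary.com;
        proxy_cache media;
        proxy_cache_valid 200 30d;                    # media files rarely change
        proxy_cache_use_stale error timeout updating; # serve cached copies on upstream trouble
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```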
@georgengelmann Sorry I didn't follow-up right away, I was kinda busy. I don't think this is my issue. I do have quic enabled, but only on the default server block in Nginx (listen 443 quic reuseport default_server;). For all my other domains I use: http2 on; http3 on;, which should go to HTTP/3 when possible. Yet, in all cases it uses HTTP/2 for all my resources :(... #http3 #quic #nginx
@georgengelmann I'm still busy with it. I got some help from @debounced. http3 on; doesn't seem to exist anymore. So you need to use "listen 443 quic" on the other domain confs. That being said, I still don't get HTTP/3 working. And just yesterday #nginx also got forked, to http://freenginx.org/
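For anyone following along, the per-vhost pattern that was suggested to me looks something like this. A sketch only; the domain is made up, and note reuseport should stay on just the one default server block:

```nginx
server {
    listen 443 ssl;     # TCP listener for HTTP/1.1 and HTTP/2
    listen 443 quic;    # UDP listener for HTTP/3
    http2 on;
    server_name example.com;

    # Advertise HTTP/3 so browsers can switch on a subsequent request.
    add_header Alt-Svc 'h3=":443"; ma=86400' always;
}
```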
@georgengelmann @debounced Ow.. one more thing: adding an add_header to a location will REMOVE all existing headers from the server section. I know, it's insane right? The docs say (and read carefully): There could be several add_header directives. These directives are inherited from the previous level if and only if there are no add_header directives defined on the current level. #nginx #quic #header #headers #http #http3
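A tiny illustration of that inheritance rule (the header values here are made up):

```nginx
server {
    add_header X-Frame-Options "DENY";

    location /a {
        # No add_header here: /a inherits X-Frame-Options from the server level.
    }

    location /b {
        # Any add_header here replaces ALL inherited ones, so X-Frame-Options
        # must be repeated or it silently disappears for /b responses.
        add_header Cache-Control "no-store";
        add_header X-Frame-Options "DENY";
    }
}
```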
Well then... this worked! I just dumped my old twitter archive* onto my mastodon server as another site/subdomain under https://twitterarchive.chrisalemany.ca
Now I can post live links to my old twitter posts, threads, etc. Awesome.