BazQux Reader Community

504 Gateway Time-out

I agree, the tool is good, but the support we get is getting worse and worse. Maybe he has no time for it and wants to focus only on his new job.

But it’s not our fault; he should think of a solution or something, I don’t know.

Also, @glench, these things happen, yeah, but they usually don’t take this long to get fixed. Waiting for @vshabanov to finish his day job is his right, of course, but it seems ridiculous to me.

Well, the service costs roughly one third of Inoreader’s price.
You’re right, these things should be fixed much sooner.
But I guess you’re also being pretty harsh, given the much more affordable cost.

Still, I’ll think about moving off the cloud to something like ttrss, but I’m not sure it would be as complete a solution as BazQux et al.

1 Like

He just replied to my email with this:

"I didn’t check emails this morning and now I’m in the office and can’t fix it until the evening.

It’s those damned “new” servers I recently migrated to. The old ones were starting to halt once every few months, and these new ones do it every few weeks. Need to move to something more reliable."


I’ll echo what @glench said. I’ve been using Bazqux Reader for many years and the two recent outages have been the only ones I’ve noticed in all that time. If this starts happening all the time then I’d probably jump ship, but we’re talking twice in 10 years.

RSS readers are a niche product and BazQux has a fairly small market share; obviously we’re not going to have on-demand support, and I think that’s understandable, especially given how cheap the subscription is (under half of Inoreader’s price).


Well at least we know it’s acknowledged and have some idea when it’ll be fixed. Thank you for sharing!

The service has been reliable for too long for me to be angry yet, but yes, this is frustrating. I can’t imagine his anxiety, having to go about his day knowing it’s broken and there’s nothing he can do right now. :grimacing:

Catch-up is going to be a slog; time to get a backup up and running once I can grab my OPML.

1 Like

Also echoing what @glench said. I’ve been a user since the beginning and the service has been superb. The two recent outages are unfortunate but I don’t doubt @vshabanov will look at measures to minimise/avoid any disruption going forward.

To be blunt, I’m surprised to hear the service isn’t monitored and managed 24/7. Obviously I don’t expect eyes on glass for something like this, but the owner should get notifications on his phone and have the ability to reboot hosts remotely.

I run a bunch of websites myself. I don’t charge users anything; it’s all hobby stuff. My hosts are monitored via the host provider (Linode, which sends an email when a host goes down), locally (via the very simple monit, which sends emails when a disk is getting full, a process isn’t running, etc.), and remotely via FreshPing (which I won’t link to as they aren’t accepting new accounts, but there are tons of free alternatives like Uptime Robot).

So I’m notified from multiple sources whenever there’s a problem, both via email and Pushover notifications on my phone. Then I can easily hit Linode’s web UI or SSH into my hosts to fix them. I do work in IT so I’m sitting at a computer, but even if I wasn’t I could do it from my phone given 15 minutes.
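For anyone who wants to wire up something similar themselves, here’s a minimal sketch of a cron-driven uptime check in Python; the site URL and the Pushover keys are placeholders, not anything from this thread:

```python
import urllib.error
import urllib.parse
import urllib.request


def is_up(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a 2xx/3xx status within `timeout`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError, ValueError):
        return False


def notify(message: str, user_key: str, app_token: str) -> None:
    """Send a push notification through Pushover's message API."""
    data = urllib.parse.urlencode(
        {"token": app_token, "user": user_key, "message": message}
    ).encode()
    urllib.request.urlopen("https://api.pushover.net/1/messages.json", data=data)


# Cron usage (placeholder URL and keys):
#   if not is_up("https://example.com/"):
#       notify("site is down", "PUSHOVER_USER_KEY", "PUSHOVER_APP_TOKEN")
```

Run it from cron every few minutes and you get roughly what FreshPing/Uptime Robot do, minus the independent vantage point (an external checker still catches the case where your whole host is down).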

I don’t charge people $30/year to use my services; that’s the level of support I provide for free.

Edit: Very cool to see you here @Lon_Seidman! I’m a fan.


My goodness… most of you posting about the recent outage need to chill out, especially when we’re only paying a small annual subscription fee. As some have noted, the most recent problems aren’t typical. What that tells me is that the source of the problem might not be something our host can immediately - or perhaps even directly - fix. Think about it… how often has a software provider (gee, did I mean to say Microsoft)… introduced changes that disrupted a whole chain of previously bullet-proof systems? Some of you have suggested alternatives. That’s great… please go sign up for them! Others have complained about workflow disruption… gee, maybe you should improve your adaptability, just in case any other disruption ruins your day.


This is a commercial service. A full day of downtime with no communication is simply not OK. The service needs to be reasonably supported.

1 Like

It just loaded up for me. If he’s moving to a new server, maybe it’ll all work out.


Indeed, back up now. Plenty of time before my commute home!

Yes, it’s fixed now. Feeds will be updated soon.

Worst possible timing – the server died precisely when I went to sleep, and I didn’t check emails in the morning before going to work, so it was down for as long as possible.


Thanks @vshabanov. Someone mentioned above that you may be looking to move servers? Is that the current plan or are you hoping today was a one off event with the current servers?

Agreed. The recent outages have not been the norm. @vshabanov has provided a great service over the last 10 years.

1 Like

I moved to new servers this February. The old ones were 6 years old and were occasionally freezing (1-3 times a year). Unfortunately, the new ones are not completely new (about 2-3 years old) and they had various issues during the last few months (one halted a few times, another had memory-related issues).

What happened today was actually the same issue I had on the previous servers: a freeze inside zfs-fuse – a FUSE-based ZFS filesystem for Linux – where the server is alive but any disk I/O is halted (easily fixed with a reboot). It looks like a software problem (in general, zfs-fuse doesn’t seem to be much maintained nowadays), so I will probably switch to a better-maintained filesystem.

Today’s issue was especially visible as it happened on a frontend load balancer, making inaccessible ( was still available). Previous ones usually led to feed updates being stopped.

I’ve just checked Hetzner’s website: they now have a few new server types and no longer offer the models I switched to. So there’s a good chance that the next ones will be really new (not 2-3 years old).

So the current plan is to move to new servers again and switch to a different filesystem. I hope it will help mitigate the issues that have appeared regularly during the last few months (and the ZFS issues of the last few years).


Hey @vshabanov, I just learned about #hugops, a way of sending empathy and appreciation to the real people that run software. Just wanted to send this to you and appreciate your ongoing efforts to run a service I use and love. I understand completely why the downtime happened at the worst possible time and know that it was likely stressful to be in that position. And it sounds like you have a good path toward fixing the issue so that it won’t impact you or us. Good luck on making that happen!

Just wanted to share that the downtime didn’t impact me very negatively. I use it for personal use, following sites and newsletters mostly for entertainment.


Couldn’t agree more.

I’m ashamed that people react this way to @vshabanov.

It’s not his day job. He runs a great, cheap service. So what if once in a while the service is interrupted? Don’t whine about it, or go find yourself another service.


BazQux now runs on new servers. And these are really new, with new drives. And I switched from the buggy zfs-fuse to standard ext4 (there are probably better filesystems, but ext4 is a safe default from a stability point of view). As a nice bonus, the new servers have faster CPUs (almost 1.5 times faster), so the reader should be faster now.


What kind of bugs did you experience with zfs?

It was zfs-fuse, not proper ZFS (which isn’t available in Debian). It worked well until 2021 (maybe until some Debian upgrade), but then it started freezing every few months on random servers. Everything works fine except any access to the ZFS volume, which freezes (e.g. cd some_dir_on_zfs works, but less some_file_on_zfs freezes somewhere in the kernel).

Initially, I thought it was some kind of hardware failure, but I hit the same issue on newer hardware and the latest Debian. So I think it’s a software bug, and I decided to move to the default ext4 filesystem, which should be the most stable from a software point of view.
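That symptom (process alive, I/O stuck in the kernel) is detectable from userspace. Here’s a minimal watchdog sketch in Python, with a hypothetical mount point: it performs a tiny write in a child process and treats a timeout as "I/O hung", so the watchdog itself never blocks on the frozen volume:

```python
import multiprocessing
import os


def _probe(path: str) -> None:
    # A tiny synchronous write; on a hung zfs-fuse volume this blocks forever.
    with open(path, "w") as f:
        f.write("ok")
        f.flush()
        os.fsync(f.fileno())


def io_responsive(path: str, timeout: float = 30.0) -> bool:
    """Return True if a small write to `path` completes within `timeout` seconds."""
    ctx = multiprocessing.get_context("fork")  # POSIX-only; keeps the sketch simple
    proc = ctx.Process(target=_probe, args=(path,))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        # Still stuck in the kernel. Note: a child in uninterruptible sleep
        # may ignore even SIGKILL until the I/O completes.
        proc.kill()
        return False
    return proc.exitcode == 0


# Cron usage (hypothetical path on the affected volume):
#   if not io_responsive("/mnt/data/.healthcheck"):
#       ... page the admin or trigger a reboot ...
```

The child process is the key design choice: a plain `open()`/`write()` in the watchdog process would itself hang forever on a frozen filesystem, exactly like `less some_file_on_zfs` does.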

1 Like