#213 Consider limiting journal size
Opened 2 years ago by mclasen. Modified 7 months ago

This issue has been raised again recently, so I wondered if we should discuss it in the workstation context:


This is a lot less of a problem for us now that we have one big space pool between / and /home with Btrfs. I'm not sure we really need to do anything anymore.

Assuming that systemd isn't doing anything funky, the F34 "Enable btrfs transparent zstd compression by default" feature should also help here - my 4GB journal (since July 31) compresses at 'zstd -1' to 318M.

So, I don't think disk usage is a big problem (though there are plenty of upgraded systems with uncompressed split / and /home). There may still be an argument from a "retain unexpected and unnecessary amounts of information" point of view, but that would argue more for time limits than size limits.

yes, I agree that time limits are more interesting

That said, since the amount of log data produced is somewhat predictable, a time limit would in practice also function as a disk usage limiter.

systemd-journald sets nodatacow (chattr +C) on /var/log/journal/ if it's on Btrfs, so these are not compressed. The typical write pattern is small enough that it's not compressible anyway, we typically get a new 4KiB extent every time something gets written to the journal.

My understanding of systemd-journald compression is that it only applies to coredumps, not the journal itself.

I think 60 days is a reasonable retention time frame.

I'm in favor of a retention time limit. I would have picked one year, but I'm fine with 60 days or anything in between.

Metadata Update from @catanzaro:
- Issue tagged with: meeting-request

2 years ago

In a "Workstation" product context, log retention time is bound by data retention laws, which would make it no less than the year @catanzaro would have picked.

If this were a "Desktop" product, as in a personal computer used by individual users to handle daily tasks, 6 months would probably have been a sane default (or even as low as 3 months), but then you would also have a completely different installation set and applications.

There is a difference between a workstation and a desktop: they have completely different scopes, target audiences, requirements, and sets of defaults. It often feels like the workstation group is trying to use these two different roles interchangeably, which in turn leads to, at best, a barely adequate product for both audiences.

tbh I don't know what the difference between a workstation and a desktop is. But I'll bite: what data retention laws do you think apply to the systemd journal...?

The retention time is bound by whatever regulatory or legal requirements apply to retaining logs in the enterprise environment in which the workstation product is expected to be deployed.

What the workstation working group arguably lacks is a clearly defined data retention policy: one that outlines which information is collected, how long it is kept, and how it is disposed of when no longer needed, so that collection/retention/erasure practices are consistent within the scope of the workstation product.

The journal would then be configured to match the scope of such a document, as would the other components that make up the workstation product (like defaults in the web browser, file history duration, the deletion period for trash and temporary files, etc.).

A default Fedora 34 installation from 2021-01-20 is currently using 1.7G (max 4.0G, 2.2G free) for the systemd journal. This represents 178 boots. If this is 3x the typical booting frequency and log accumulation, then the typical case would have ~12 months of logging capacity.
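For what it's worth, the capacity estimate works out as back-of-envelope arithmetic. The elapsed observation window is not stated in the comment, so ~2 months is an assumption here, purely for illustration:

```python
# Rough journal capacity estimate based on the figures quoted above.
# elapsed_months is ASSUMED (not stated in the comment).
max_use_gib = 4.0       # SystemMaxUse default
used_gib = 1.7          # observed journal usage
elapsed_months = 2.0    # ASSUMED time since install
boot_factor = 3.0       # "3x more frequent booting" than typical

heavy_rate = used_gib / elapsed_months       # GiB/month on this machine
typical_rate = heavy_rate / boot_factor      # GiB/month in the typical case
capacity_months = max_use_gib / typical_rate
print(round(capacity_months, 1))             # on the order of a year
```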

man journald.conf lists MaxRetentionSec=, which can take values such as 2month, 8week, or 60day.
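For reference, a distribution or admin could set such a limit via a drop-in file; the path and value here are illustrative, not a concrete proposal:

```ini
# /etc/systemd/journald.conf.d/50-retention.conf  (example path)
[Journal]
MaxRetentionSec=2month
```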

Note: btrfs compression has no effect on systemd journals. (a) The typical journal write is much less than 4KiB, the minimum block size, so compression offers no advantage and is skipped; (b) systemd enables nodatacow for journals by default, and nodatacow also means no compression.

Workstation WG doesn't seem to have a strong opinion on this.

Action: Chris to raise another trial balloon on devel@

Metadata Update from @catanzaro:
- Issue untagged with: meeting-request
- Issue tagged with: pending-action

a year ago

I don't have a strong opinion or deep understanding of the issue, but my immediate reaction was that 4GB is rather a lot of log and maybe 6 months worth might be sufficient for most cases.

i am often short of diskspace.

Metadata Update from @petersen:
- Issue untagged with: pending-action

a year ago

Metadata Update from @petersen:
- Issue tagged with: pending-action

a year ago

Followup to devel@ list has yielded no additional replies after 14 days. I think the next step is to propose 6 month retention as a Fedora wide change proposal for the Fedora 35 cycle.

A journal file could contain up to 1 month of entries. Once a journal file contains an entry subject to the max retention time, the entire file is deleted. Therefore, assuming no other Max* limits are reached, there are a few options:

This means a range of 5-6 months.


This means a range of 5 months plus 3 weeks to 6 months.


This means a range of 6 months to 6 months plus 1 week.

@chrismurphy Could we set that to 1 year, since that's relatively similar to the lifespan of a Fedora release?

SystemMaxUse defaults to 4G which is approximately a year of typical logging. Setting MaxRetentionSec=1year would have minimal effect. And it also wouldn't achieve one of the proposed goals, which was to make the retention more consistent. The heavier logging cases will still hit 4G before 1 year and thus in practice we'd see something like a 6 month float unless we also bump SystemMaxUse, probably by double the current value.
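To illustrate that interaction (the logging rates below are assumptions for the sake of the example, not measurements):

```python
# Effective retention is whichever limit binds first: the time limit
# (MaxRetentionSec) or the size limit (SystemMaxUse / logging rate).
# Rates here are assumed for illustration.

def effective_retention_months(retention_months, max_use_gib, rate_gib_per_month):
    return min(retention_months, max_use_gib / rate_gib_per_month)

# Typical machine (~4 GiB/year): the 1-year time limit is what binds.
print(effective_retention_months(12, 4.0, 4.0 / 12))   # -> 12
# Heavy logger at 1 GiB/month: SystemMaxUse caps it at ~4 months.
print(effective_retention_months(12, 4.0, 1.0))        # -> 4.0
```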

Since opinions range from 2 weeks to 1 year, the original proposal by @mattdm is six months retention, and logrotate defaults to 4 weeks retention, I think there'd be broader support for a change to 6 months than anything else.

Sure, it's not a hill I'm willing to die on.

Action: Chris to raise another trial balloon on devel@

Did this happen?

Metadata Update from @aday:
- Issue assigned to chrismurphy

a year ago

This was discussed at the WG meeting on 1 Feb 2022. Two points raised there:

  1. If the logs were compressed with btrfs, that would alleviate the issue. There's a change coming in systemd, possibly in v250, to switch journal files from nodatacow (and thus uncompressed) to datacow at rotation time, enabling transparent btrfs compression. But we need to check whether this works in practice.

  2. The general consensus was that, even if the logs are compressed, it would be good to optimise the size, since they are currently arbitrarily large. Chris, Michael, Allan, Owen were in favour of setting a time limit of ~3 months. Neal was in favor of 6-12 months retention.

Next step is presumably to make a more concrete proposal and see what the response is?

This is a default clean install of Fedora 36 Workstation with systemd-250.3-3.fc36.x86_64, allowed to age a few days with systemd.log_level=debug, so I already have about 2G of logs.

$ sudo compsize /var/log/journal/1ba2f5e7849c488bbba8ed1fa16e0265/
Processed 29 files, 17193 regular extents (18645 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced  
TOTAL       23%      468M         1.9G         1.8G       
none       100%      217M         217M         129M       
zstd        13%      243M         1.7G         1.7G       
prealloc   100%      8.0M         8.0M          18M       

