#48383 db2index uses a buffer size derived from dbcachesize
Closed: Fixed. Opened 3 years ago by firstyear.

db2index currently allocates a buffer whose size is derived from the cachememsize.

This value should be independent for the index process and documented clearly, so that server re-tuning isn't needed just to run db2index.

See ./ldap/servers/slapd/back-ldbm/import.c:48: job->fifo.bsize = (inst->inst_cache.c_maxsize/10) << 3;


Please note that import_fifo_init is shared between ldif2db (import) and db2index (reindex). The fifo is what the utilities use to pass entries between the producer and the foreman threads. I wonder if you got this WARNING message?
{{{
import_log_notice(job, "WARNING: skipping entry \"%s\"", slapi_entry_get_dn(e));
import_log_notice(job, "REASON: entry too large (%lu bytes) for "
                  "the buffer size (%lu bytes)",
                  (long unsigned int)newesize, (long unsigned int)job->fifo.bsize);
}}}

Could it be worth just calling realloc on the fifo.bsize rather than trying to add another configuration item? That way we could handle directories with really large objects.

Replying to [comment:2 firstyear]:

Could it be worth just calling realloc on the fifo.bsize rather than trying to add another configuration item? That way we could handle directories with really large objects.

I'm leaning toward your idea.

The original design was 1) entry cache is supposed to be set as large as possible but less than the available physical memory, 2) thus fifo.bsize is calculated based upon the safe/large size.

But the above assumption is not always guaranteed, of course. So, if the "entry too large" is observed, it might be a good idea just to realloc the fifo to support the size.

One concern: if the fifo size is indeed set at the physical memory limit and an entry is larger than that, the realloc makes the import exit due to running out of memory... Currently, we skip such an entry and import the rest of the entries. We may want to set some upper limit based on the available physical memory size and not allow the fifo size to grow beyond it.

Given that I'm moving the pieces of machinery around for dbcachesize_is_sane, and it works with the current process and free memory, maybe we can call that before we attempt the realloc as a precaution? It's probably the most "safe" solution.

Maybe we should rename dbcachesize_is_sane to "check_large_alloc_sane", to represent what we are using it for?

Replying to [comment:2 firstyear]:

Could it be worth just calling realloc on the fifo.bsize rather than trying to add another configuration item? That way we could handle directories with really large objects.

I agree with you. If you could check the size to be reallocated does not go beyond the existing physical memory, it'd be the best solution.

Replying to [comment:6 firstyear]:

Given that I'm moving the pieces of machinery around for dbcachesize_is_sane, and it works with the current process and free memory, maybe we can call that before we attempt the realloc as a precaution? It's probably the most "safe" solution.

Yes, you already fixed the free memory size, so you have the necessary parts, I think...

Maybe we should rename dbcachesize_is_sane to "check_large_alloc_sane", to represent what we are using it for?

Sounds like a good idea.

I do like the idea of dynamically increasing the buffer as long as memory is available, but why do we need to start at size 0 and slowly grow to a useful size?
The initial calculation of the fifo size is still based on the cachememsize. Why not base the initial buffer size on the cachememsize as well (somehow), and add the ability to grow it dynamically?

My thinking is that the cachememsize is such an "arbitrary" number to use for the fifo size. We could be basing it off anything.

I'm happy to put the initial value back to cachememsize to start with, but I wonder if there is a better way?

I have one question...

fifo.bsize is used in 2 cases:
One is just for checking the upper bound of each entry size.
The other is for checking whether the currently used size plus the new entry size is larger than bsize. If it is larger, the import waits for space.
{{{
/* Now check if fifo has enough space for the new entry */
if ((job->fifo.c_bsize + newesize) > job->fifo.bsize) {
    import_wait_for_space_in_fifo(job, newesize);
}
}}}

In the second case, if fifo.bsize is small, does it have to wait for space every time? I'd think this slows down the import badly... As was done originally, don't you want to start with a large enough fifo.bsize?

In the old code, it is (cache size * 0.8), which is supposed to be large enough in most cases, but if the cache size is too small and the DS contains a huge entry (> 0.8 * cache size), the entry is skipped during the import. I still think that is quite a serious configuration problem if the same too-small cache size is being used in the ordinary mode. I.e., adding the huge entry to the cache would evict lots of other entries... But that is not the issue here.

Since you are checking the size with the available physical memory size, you could set bsize based upon it in import_fifo_init instead of "job->fifo.bsize = 0"?
{{{
/* old */
job->fifo.bsize = (inst->inst_cache.c_maxsize/10) << 3;

/* new */
/* We don't need this funny number: let validate_capacity_or_expand do its job */
/* job->fifo.bsize = (inst->inst_cache.c_maxsize/10) << 3; */
job->fifo.bsize = 0;
}}}

The hardest part is that this import could run for one backend instance while the other instances are up and running for ordinary operations. They need their own entry caches, so one particular instance may not be able to allocate too large a chunk of memory... I'm not sure how you could guess the initial value in the most optimal way, but something like (available physical memory / number of backend instances) * (some factor like 0.8)? (Yep, I brought up 0.8 again... If you could come up with a more convincing factor, I'd be more than happy to ack it. :)

And since you have no plan to change the fifo length, neither malloc nor realloc is needed. You are making the entry size limit more flexible based upon the accurate physical memory size, right? So, I think you could delete this comment so as not to confuse the rest of us... :)
{{{
/* Do we need to do any mallocs? reallocs? */
}}}

I did see that this was part of the behaviour. I was thinking about it, and this is why the code starts at 0: when the first object won't fit, it resizes to size * 4. This should allow four objects at a time to be processed. If anything bigger comes along, we can resize to make it larger again.

But you're right, it could slow it down also ... There is no easy solution without hardcoding something.

We could have it so that import_wait_for_space_in_fifo, instead of blocking, attempts to do the resize itself so that the next object can fit, and only waits if it can't get the space? But this could quickly exhaust all the memory on a system.

Perhaps the easiest answer is to set fifo.bsize initially to inst_cache.c_maxsize, and just expand it if something bigger comes along. That way there is a clearer, direct relationship between the cache size and the import's initial bsize, and we can still handle larger objects if they appear.

Replying to [comment:12 firstyear]:

Perhaps the easiest answer is to set fifo.bsize initially to inst_cache.c_maxsize, and just expand it if something bigger comes along. That way there is a clearer, direct relationship between the cache size and the import's initial bsize, and we can still handle larger objects if they appear.

+1

I think this solves the skip entry problem. If we want to touch the fifo itself, let's do it separately since it is not causing any issue for now... ;)

Sorry, I did miss your new proposal 7 days ago... :(

The second patch looks good to me. Thanks!

commit cb5c3155f310d829674df2f6c493693666a60399

To ssh://git.fedorahosted.org/git/389/ds.git
6788445..daf40aa master -> master

Metadata Update from @nhosoi:
- Issue assigned to firstyear
- Issue set to the milestone: 1.3.5 backlog

2 years ago
