> On Sat, Sep 18, 2021 at 11:04:40AM +1000, Dave Chinner wrote:
> > > > The relative importance of each one very much depends on your workload.
> > > > - File-backed memory
> >
> > Because, as you say, head pages are the norm. Now, you could say that
> > this is a bad way to handle things, and that we're going to be in
> > subsystem users' faces about it; there is contention still to be decided
> > and resolved for the work beyond file-backed pages. The real problem
> > isn't the memory overhead to struct page (though reducing that would
> > help) - it's how we are going to implement code that operates on folios
> > and other subtypes, and keep productive working relationships going
> > forward. I'm not against it, but the people doing the work need to show
> > the benefits.
>
> My worry is more about 2). I'm saying if we started with a file page or
> cache entry abstraction - something _small_ and _simple_ - and followed
> through on that concept from the MM side, I can even be convinced that we
> can figure out the exact fault lines along which to split the types; the
> function prototypes will simply have to look a little different. struct
> page is a lot of things and anything but simple, and I think this is
> going to matter significantly, if not more so, later on. How would you
> reduce the memory overhead of struct page without losing track of what
> the rest of the 'cache descriptor' is? Otherwise that's wasted effort.
>
> Whether splitting out file-backed pages (the aforementioned lru_mem) is
> the right approach, as opposed to a shared folio where even
> 'struct address_space *mapping' means different things, I can't answer
> for Matthew - I never wanted to get involved that deeply in the struct
> page subtyping work, and s/folio/ream/g doesn't change the question.
> Maybe we'll continue to have a widespread hybrid existence of both.
>
> There _are_ very real discussions, open questions and points of
> contention here, and we're continuously talking in circles about
> speculative code. But with the folio series now carrying a NAK, I can't
> even start on the above. So if we can make a tiny gesture - treat the
> transitional bits of the public API as such - and move on? I don't know.

The slab side of the conversion, by contrast, is mechanical: callers for
which it is safe are switched to page_mapping_file(), and mm/slub.c is
converted from struct page to struct slab throughout. Typed helpers
streamline this pattern, clarify intent, and mark the finished audit, and
dropping the implied compound_head() calls removes hundreds of bytes of
text spread throughout this file. For example:

index 5b152dba7344..cf8f62c59b0a 100644
@@ -919,8 +917,8 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
-static int slab_pad_check(struct kmem_cache *s, struct page *page)
+static int slab_pad_check(struct kmem_cache *s, struct slab *slab)
@@ -409,7 +407,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page)
-	slab_unlock(page);
+	slab_unlock(slab);
 	return 1;
@@ -3255,10 +3258,10 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
+	if (slab) {
+		slab_err(s, slab, "Wrong number of objects.
- * That page must be frozen for per cpu allocations to work.
+ * That slab must be frozen for per cpu allocations to work.
+#define page_slab(p)	(_Generic((p),	\
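To spell out what the page_slab() line in the last hunk is doing: a
_Generic() selection lets a cast helper convert a struct page pointer to
the struct slab view while preserving constness, and it refuses to compile
for any other pointer type. The following is a simplified sketch, not the
exact definition from the patch (the real helper also folds in
compound-head handling, omitted here):

/*
 * Simplified sketch of a _Generic()-based cast helper. Assumption: the
 * upstream macro additionally resolves tail pages to the head page,
 * which is left out for clarity. Passing anything other than a struct
 * page pointer is a compile-time error; constness is preserved.
 */
struct page;
struct slab;

#define page_slab(p)						\
	(_Generic((p),						\
		const struct page *: (const struct slab *)(p),	\
		struct page *:	     (struct slab *)(p)))

A caller holding a struct page * from the page allocator would then write
struct slab *slab = page_slab(page); and from that point on only the
slab-specific view of the descriptor is reachable.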
> On Wed, Sep 22, 2021 at 11:08:58AM -0400, Johannes Weiner wrote:
> > Folios should give us large allocations of file-backed memory and
> > bring them back with fairly reasonable CPU overhead. Slab already uses
> > medium order pages and can be made to use larger ones, so that kind of
> > fragmentation is going to be alleviated.
> >
> > I agree with what I think the filesystems want: instead of an untyped,
> > ambiguous page that could be a base page or a compound page even
> > inside core MM, give the page cache a dedicated cache-entry type that
> > is able to handle any subtype, rather than saying a cache entry is a
> > set of bytes that can be backed however, and stop designing the code
> > around the necessity of any compound_head() calls. And it makes sense:
> > almost nobody *actually* needs to access the tail pages - the
> > appropriate pte for the offset within that page can be derived from
> > the head. On x86, it would mean that the average page cache entry has
> > 512 base pages behind it.
>
> On Wed, Sep 22, 2021 at 11:46:04AM -0400, Kent Overstreet wrote:
> I initially found the folio work hard to follow. It was really
> illuminating what an insider takes for granted, but it took time for me
> to understand what is going on, and that is the experience for a
> newcomer. Similarly, something like "head_page" or "mempages" is going
> to be a bit confusing in its own way.
>
> Your argument seems to be based on "minimising churn". But there are
> tons of use cases where anon and file pages are used absolutely
> interchangeably - migrate, swap, page fault code etc. - and the
> mapcount management could be encapsulated; the collapse code is one
> example. That does turn things into a much bigger project than what
> Matthew signed up for, and I also believe that shmem should be part of
> the discussion, and not just a vague future direction. To scope the
> actual problem that is being addressed by this, we need to know what
> the intended endgame is: the lines along which we split the page down
> the road, and whether we keep a function to tell whether the page is
> anon or file or go only halfway. If you had seen where this work has
> been for the past year, maybe you'd have a different opinion.

Patches such as "mm/memcg: Add folio_memcg() and related functions" push
the typed helpers into the callsites and remove the 99.9% very obviously
bogus compound_head() calls. The slab series does the same on its side:

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
-static inline int memcg_alloc_page_obj_cgroups(struct page *page,
+static inline bool SlabMulti(const struct slab *slab)
-	order = slab_order(size, 1, slub_max_order, 1);
+	order = calc_slab_order(size, 1, slub_max_order, 1);
-	order = slab_order(size, 1, MAX_ORDER, 1);
+	order = calc_slab_order(size, 1, MAX_ORDER, 1);
@@ -3605,38 +3608,38 @@ static struct kmem_cache *kmem_cache_node;
-	page = new_slab(kmem_cache_node, GFP_NOWAIT, node);
+	slab = new_slab(kmem_cache_node, GFP_NOWAIT, node);
-	BUG_ON(!page);
-	page = alloc_pages_node(node, flags, order);
-	max_objects = order_objects(compound_order(page), s->size);
+	max_objects = order_objects(slab_order(slab), s->size);
-	if (page->objects != max_objects) {
@@ -184,16 +184,16 @@ static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
+		(slab->objects - 1) * cache->size;
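The __obj_to_index() hunk is worth unpacking, because the arithmetic is
simpler than the optimized kernel code makes it look. A minimal sketch,
assuming a plain division instead of the reciprocal divide the kernel
actually uses, and with hypothetical parameter names:

/*
 * Sketch only: the in-kernel __obj_to_index() uses a precomputed
 * reciprocal divide and kasan_reset_tag(); this illustrative version
 * assumes a plain division. 'slab_base' is the address of the first
 * object in the slab, 'size' is the per-object stride from the
 * kmem_cache.
 */
static inline unsigned int obj_to_index_sketch(void *slab_base,
					       unsigned int size, void *obj)
{
	return (unsigned int)(((char *)obj - (char *)slab_base) / size);
}

Conversely, the last valid object in a slab with 'objects' slots starts at
slab_base + (objects - 1) * size, which is exactly the bound the
"(slab->objects - 1) * cache->size" line above computes.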
> This is the fragmentation problem we are facing nowadays: the kernel
> tries to allocate a 2MB page but finds none, and people end up with a
> service that is echoing 2 to drop_caches every hour on systems which
> otherwise take hours or days to come back under control. Temporary slab
> explosions make it worse. I think dynamically allocating folios for anon
> memory would make their lives easier too - both in the pagecache but
> also for other places like direct I/O.
>
> - Many places rely on context to say "if we get here, it must be a head
>   page", together with the PAGE_SIZE and page->index assumptions that
>   go with it. Yes, every single one of them is buggy to assume that.
> - Other places have been proposed to leave anon pages out, but IMO to
>   keep that direction maintainable, the folio would have to be
>   translated to a page quite often throughout allocation sites, and
>   that would again bring back major type punning.
> - The writeback checks and the locking protect the operation instead of
>   protecting data.
> Let me know if I miss anything.
>
> Personally, I think we do need subtypes, but I don't think head vs tail
> is the most important distinction. I'd go the reverse way: make the rule
> be that "struct page" is always a head page, and anything that isn't a
> head page would be called something else. If naming is the issue, I
> believe we can still maintain support for 4k cache entries. Compound
> words are especially bad, as newbies will misread them. Willy raised
> some points regarding how to access properties that belong into the
> subtypes; the argument has to be that this really is the _right_
> solution - not just that it was the one that got implemented.
>
> I wasn't claiming otherwise..? But the explanation for going with
> whitelisting - the most invasive option - and the state it leaves the
> tree in, make it directly more difficult to work on badly needed work
> that affects everyone in filesystem land. Why should the page cache care
> about hardware pages at all? I've got used to it in building on top of
> Willy's patches and have no strong objection, and whether separate
> anon_mem and file_mem types come later is something that somebody else
> may decide to work on (and indeed Google have shown interest). For 5.17,
> multi-page folios should be ready. Anyway.

On the slab side, the public helpers change type as well, and the new
descriptor grows its own fields:

diff --git a/mm/slab_common.c b/mm/slab_common.c
-void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page);
+void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab);
+static int check_object(struct kmem_cache *s, struct slab *slab,
-	if (likely(!kmem_cache_debug(s) && pfmemalloc_match(page, gfpflags)))
+	slab = c->slab;
+ * slab is the one who can perform list operations on the slab.
-	struct list_head slab_list;
+	void *s_mem;	/* slab: first object */
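To make the s_mem / slab_list fragments concrete: the point of the series
is that the slab allocators get their own descriptor type whose layout
aliases the parts of struct page they already use, so slab-only state
stops being visible to the rest of MM. The following is a rough,
illustrative sketch of that shape; the field names follow the hunks quoted
in this thread, but the exact upstream layout differs and is an assumption
here:

/*
 * Illustrative only - not the exact upstream struct slab. The layout
 * must alias struct page field-for-field so that a page handed out by
 * the buddy allocator can be reinterpreted via page_slab().
 */
struct slab {
	unsigned long flags;		/* aliases page->flags (PG_slab) */
	union {
		struct list_head slab_list;	/* node partial list */
		struct {			/* SLUB per-cpu partial list */
			struct slab *next;
			int slabs;		/* slabs left on the list */
		};
	};
	struct kmem_cache *slab_cache;	/* owning cache */
	void *freelist;			/* SLUB: first free object */
	void *s_mem;			/* SLAB: first object */
	unsigned int objects;		/* objects in this slab */
};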
>> On 21.10.21 08:51, Christoph Hellwig wrote:
>> > On Fri, Sep 10, 2021 at 04:16:28PM -0400, Kent Overstreet wrote:
> > > > On Wed, Sep 22, 2021 at 11:08:58AM -0400, Johannes Weiner wrote:
> > > > Why force the memcg code onto folios that don't really need it
> > > > because it's so special? Perhaps you could comment on how you'd
> > > > see separate anon_mem and file_mem types working for the memcg
> > > > code - do they have to be cast to a common type like
> > > > lock_folio_memcg()?
> > >
> > > I'd have personally preferred to call the head page just a "page",
> > > and I want something greppable so it's not confused with something
> > > somebody else names. "Folio" is not as intuitive or common as "page"
> > > as a name in the industry; a number of characters make up a word,
> > > and there's a number of words to each (cache) line, but that analogy
> > > only carries so far. The struct page is for us to define, and
> > > PG_slab is still likely to raise some eyebrows - see
> > > if (unlikely(folio_test_slab(folio))) - and I'm not sure what's
> > > going on with fs/cachefiles/ either. Take a look at
> > > pagecache_get_page().
> >
> > There are hundreds, maybe thousands, of functions throughout the
> > kernel that expect the "struct page". We're also inconsistent about
> > whether we consider an entire compound page or just its head, and the
> > future evolution wrt supporting subpages of large pages is unclear.
> > Having explicit entry points for the subtypes would go a long way for
> > making the case for them; that is more and more becoming true for DRAM
> > as well as for zonedev memory. Temporary slab explosions (inodes,
> > dentries etc.) already have us scanning thousands of pages per second,
> > and it's anything but obvious what the wins are - efficiently
> > allocating descriptor memory etc. - what *is* the plan?
>
> The points Johannes is bringing up are valid and pertinent and deserve
> to be discussed, but maybe we'll continue to have a widespread hybrid
> existence of both. We need a serious alternative proposal for anonymous
> pages if you're still against using the large-allocation type - the odd
> "folio" thing - for them; do file pages first, then do anon pages, and
> see if they come out looking the same. So you withdraw your NAK for the
> 5.15 pull request? I have no idea what he's thinking.

The slub_def.h side of the conversion follows the same rename:

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
-static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
+				const struct slab *slab)
-	if (is_kfence_address(page_address(page)))
+	if (is_kfence_address(slab_address(slab)))
@@ -2453,22 +2456,22 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
+ * Put a slab that was just frozen (in __slab_free|get_partial_node) into a
+	slabs = oldslab->slabs;
-	freelist = page->freelist;
-	slab_lock(page);
+	slab_err(s, slab, text, s->name);
-	page->objects, maxobj);
+	maxobj = order_objects(slab_order(slab), s->size);
@@ -2737,7 +2740,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
-	if (unlikely(!node_match(page, node))) {
+	if (unlikely(!node_match(slab, node))) {
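The ___slab_alloc() hunk shows how shallow most of these changes are: only
the descriptor type passed around changes, not the logic. A sketch of the
post-conversion helper, assuming a slab_nid() accessor that mirrors
page_to_nid() (not copied verbatim from the patch):

/*
 * Sketch of node_match() after the struct slab conversion. Assumes a
 * slab_nid() helper returning the NUMA node the slab's memory lives on;
 * the check itself is identical to the struct page version.
 */
static inline int node_match(struct slab *slab, int node)
{
#ifdef CONFIG_NUMA
	if (node != NUMA_NO_NODE && slab_nid(slab) != node)
		return 0;
#endif
	return 1;
}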
> > A shared type and generic code is likely to mean we keep using the
> > many other bits in page->flags to indicate whether it's a large page
> > and what type of page we're dealing with - shmem vs slab vs anything
> > else - instead of telling the different kinds of pages apart by type.
> > That doesn't work.
>
> Yeah, but I want to do it without allocating 4k granule descriptors
> everywhere. A more in-depth analysis of where and how we need to deal
> with on-demand allocation of the necessary descriptor space would narrow
> things down to a more incremental and concrete first step.