GC.cpp (176297B)
/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*-
 * vim: set ts=8 sts=2 et sw=2 tw=80:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

/*
 * [SMDOC] Garbage Collector
 *
 * This code implements an incremental mark-and-sweep garbage collector, with
 * most sweeping carried out in the background on a parallel thread.
 *
 * Full vs. zone GC
 * ----------------
 *
 * The collector can collect all zones at once, or a subset. These types of
 * collection are referred to as a full GC and a zone GC respectively.
 *
 * It is possible for an incremental collection that started out as a full GC to
 * become a zone GC if new zones are created during the course of the
 * collection.
 *
 * Incremental collection
 * ----------------------
 *
 * For a collection to be carried out incrementally the following conditions
 * must be met:
 *  - the collection must be run by calling js::GCSlice() rather than js::GC()
 *  - the GC parameter JSGC_INCREMENTAL_GC_ENABLED must be true.
 *
 * The last condition is an engine-internal mechanism to ensure that incremental
 * collection is not carried out without the correct barriers being implemented.
 * For more information see 'Incremental marking' below.
 *
 * If the collection is not incremental, all foreground activity happens inside
 * a single call to GC() or GCSlice(). However the collection is not complete
 * until the background sweeping activity has finished.
 *
 * An incremental collection proceeds as a series of slices, interleaved with
 * mutator activity, i.e. running JavaScript code. Slices are limited by a time
 * budget. The slice finishes as soon as possible after the requested time has
 * passed.
 *
 * Collector states
 * ----------------
 *
 * The collector proceeds through the following states, the current state being
 * held in JSRuntime::gcIncrementalState:
 *
 *  - Prepare    - unmarks GC things, discards JIT code and other setup
 *  - MarkRoots  - marks the stack and other roots
 *  - Mark       - incrementally marks reachable things
 *  - Sweep      - sweeps zones in groups and continues marking unswept zones
 *  - Finalize   - performs background finalization, concurrent with mutator
 *  - Compact    - incrementally compacts by zone
 *  - Decommit   - performs background decommit and chunk removal
 *
 * Roots are marked in the first MarkRoots slice; this is the start of the GC
 * proper. The following states can take place over one or more slices.
 *
 * In other words an incremental collection proceeds like this:
 *
 * Slice 1:   Prepare:    Starts background task to unmark GC things
 *
 *          ... JS code runs, background unmarking finishes ...
 *
 * Slice 2:   MarkRoots:  Roots are pushed onto the mark stack.
 *            Mark:       The mark stack is processed by popping an element,
 *                        marking it, and pushing its children.
 *
 *          ... JS code runs ...
 *
 * Slice 3:   Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n-1: Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n:   Mark:       Mark stack is completely drained.
 *            Sweep:      Select first group of zones to sweep and sweep them.
 *
 *          ... JS code runs ...
 *
 * Slice n+1: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive (see below). Then sweep more zone
 *                        sweep groups.
 *
 *          ... JS code runs ...
 *
 * Slice n+2: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive. Then sweep more zones.
 *
 *          ... JS code runs ...
 *
 * Slice m:   Sweep:      Sweeping is finished, and background sweeping
 *                        started on the helper thread.
 *
 *          ... JS code runs, remaining sweeping done on background thread ...
 *
 * When background sweeping finishes the GC is complete.
 *
 * Incremental marking
 * -------------------
 *
 * Incremental collection requires close collaboration with the mutator (i.e.,
 * JS code) to guarantee correctness.
 *
 *  - During an incremental GC, if a memory location (except a root) is written
 *    to, then the value it previously held must be marked. Write barriers
 *    ensure this.
 *
 *  - Any object that is allocated during incremental GC must start out marked.
 *
 *  - Roots are marked in the first slice and hence don't need write barriers.
 *    Roots are things like the C stack and the VM stack.
 *
 * The problem that write barriers solve is that between slices the mutator can
 * change the object graph. We must ensure that it cannot do this in such a way
 * that makes us fail to mark a reachable object (marking an unreachable object
 * is tolerable).
 *
 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
 * promise to mark at least everything that is reachable at the beginning of
 * collection. To implement it we mark the old contents of every non-root memory
 * location written to by the mutator while the collection is in progress, using
 * write barriers. This is described in gc/Barrier.h.
 *
 * Incremental sweeping
 * --------------------
 *
 * Sweeping is difficult to do incrementally because object finalizers must be
 * run at the start of sweeping, before any mutator code runs. The reason is
 * that some objects use their finalizers to remove themselves from caches. If
 * mutator code was allowed to run after the start of sweeping, it could observe
 * the state of the cache and create a new reference to an object that was just
 * about to be destroyed.
 *
 * Sweeping all finalizable objects in one go would introduce long pauses, so
 * instead sweeping is broken up into groups of zones. Zones which are not yet
 * being swept are still marked, so the issue above does not apply.
 *
 * The order of sweeping is restricted by cross compartment pointers - for
 * example say that object |a| from zone A points to object |b| in zone B and
 * neither object was marked when we transitioned to the Sweep phase. Imagine we
 * sweep B first and then return to the mutator. It's possible that the mutator
 * could cause |a| to become alive through a read barrier (perhaps it was a
 * shape that was accessed via a shape table). Then we would need to mark |b|,
 * which |a| points to, but |b| has already been swept.
 *
 * So if there is such a pointer then marking of zone B must not finish before
 * marking of zone A. Pointers which form a cycle between zones therefore
 * restrict those zones to being swept at the same time, and these are found
 * using Tarjan's algorithm for finding the strongly connected components of a
 * graph.
 *
 * GC things without finalizers, and things with finalizers that are able to run
 * in the background, are swept on the background thread. This accounts for most
 * of the sweeping work.
 *
 * Reset
 * -----
 *
 * During incremental collection it is possible, although unlikely, for
 * conditions to change such that incremental collection is no longer safe. In
 * this case, the collection is 'reset' by resetIncrementalGC(). If we are in
 * the mark state, this just stops marking, but if we have started sweeping
 * already, we continue non-incrementally until we have swept the current sweep
 * group. Following a reset, a new collection is started.
 *
 * Compacting GC
 * -------------
 *
 * Compacting GC happens at the end of a major GC as part of the last slice.
 * There are three parts:
 *
 *  - Arenas are selected for compaction.
 *  - The contents of those arenas are moved to new arenas.
 *  - All references to moved things are updated.
 *
 * Collecting Atoms
 * ----------------
 *
 * Atoms are collected differently from other GC things. They are contained in
 * a special zone and things in other zones may have pointers to them that are
 * not recorded in the cross compartment pointer map. Each zone holds a bitmap
 * with the atoms it might be keeping alive, and atoms are only collected if
 * they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
 * this bitmap is managed.
 */

#include "gc/GC-inl.h"

#include "mozilla/glue/Debug.h"
#include "mozilla/ScopeExit.h"
#include "mozilla/TextUtils.h"
#include "mozilla/TimeStamp.h"

#include <algorithm>
#include <stdlib.h>
#include <string.h>
#include <utility>

#include "jsapi.h"  // JS_AbortIfWrongThread
#include "jstypes.h"

#include "debugger/DebugAPI.h"
#include "gc/ClearEdgesTracer.h"
#include "gc/GCContext.h"
#include "gc/GCInternals.h"
#include "gc/GCLock.h"
#include "gc/GCProbes.h"
#include "gc/Memory.h"
#include "gc/ParallelMarking.h"
#include "gc/ParallelWork.h"
#include "gc/WeakMap.h"
#include "jit/ExecutableAllocator.h"
#include "jit/JitCode.h"
#include "jit/JitRuntime.h"
#include "jit/ProcessExecutableMemory.h"
#include "js/HeapAPI.h"  // JS::GCCellPtr
#include "js/Printer.h"
#include "js/SliceBudget.h"
#include "util/DifferentialTesting.h"
#include "vm/BigIntType.h"
#include "vm/EnvironmentObject.h"
#include "vm/GetterSetter.h"
#include "vm/HelperThreadState.h"
#include "vm/JitActivation.h"
#include "vm/JSObject.h"
#include "vm/JSScript.h"
#include "vm/Logging.h"
#include "vm/PropMap.h"
#include "vm/Realm.h"
#include "vm/Shape.h"
#include "vm/StringType.h"
#include "vm/SymbolType.h"
#include "vm/Time.h"

#include "gc/Heap-inl.h"
#include "gc/Nursery-inl.h"
#include "gc/ObjectKind-inl.h"
#include "gc/PrivateIterators-inl.h"
#include "vm/GeckoProfiler-inl.h"
#include "vm/JSContext-inl.h"
#include "vm/Realm-inl.h"
#include "vm/Stack-inl.h"
#include "vm/StringType-inl.h"

using namespace js;
using namespace js::gc;

using mozilla::EnumSet;
using mozilla::MakeScopeExit;
using mozilla::Maybe;
using mozilla::Nothing;
using mozilla::Some;
using mozilla::TimeDuration;
using mozilla::TimeStamp;

using JS::SliceBudget;
using JS::TimeBudget;
using JS::WorkBudget;

// A table converting an object size in "slots" (increments of
// sizeof(js::Value)) to the total number of bytes in the corresponding
// AllocKind. See gc::slotsToThingKind. This primarily allows wasm jit code to
// remain compliant with the AllocKind system.
//
// To use this table, subtract sizeof(NativeObject) from your desired allocation
// size, divide by sizeof(js::Value) to get the number of "slots", and then
// index into this table. See gc::GetGCObjectKindForBytes.
constexpr uint32_t gc::slotsToAllocKindBytes[] = {
    // These entries correspond exactly to gc::slotsToThingKind. The numeric
    // comments therefore indicate the number of slots that the "bytes" would
    // correspond to.
    // clang-format off
    /*  0 */ sizeof(JSObject_Slots0), sizeof(JSObject_Slots2), sizeof(JSObject_Slots2), sizeof(JSObject_Slots4),
    /*  4 */ sizeof(JSObject_Slots4), sizeof(JSObject_Slots6), sizeof(JSObject_Slots6), sizeof(JSObject_Slots8),
    /*  8 */ sizeof(JSObject_Slots8), sizeof(JSObject_Slots12), sizeof(JSObject_Slots12), sizeof(JSObject_Slots12),
    /* 12 */ sizeof(JSObject_Slots12), sizeof(JSObject_Slots16), sizeof(JSObject_Slots16), sizeof(JSObject_Slots16),
    /* 16 */ sizeof(JSObject_Slots16)
    // clang-format on
};

static_assert(std::size(slotsToAllocKindBytes) == std::size(slotsToThingKind));

MOZ_THREAD_LOCAL(JS::GCContext*) js::TlsGCContext;

JS::GCContext::GCContext(JSRuntime* runtime) : runtime_(runtime) {}

JS::GCContext::~GCContext() {
  MOZ_ASSERT(!hasJitCodeToPoison());
  MOZ_ASSERT(!isCollecting());
  MOZ_ASSERT(gcUse() == GCUse::None);
  MOZ_ASSERT(!gcSweepZone());
  MOZ_ASSERT(!isTouchingGrayThings());
  MOZ_ASSERT(isPreWriteBarrierAllowed());
}

void JS::GCContext::poisonJitCode() {
  if (hasJitCodeToPoison()) {
    jit::ExecutableAllocator::poisonCode(runtime(), jitPoisonRanges);
    jitPoisonRanges.clearAndFree();
  }
}

#ifdef DEBUG
void GCRuntime::verifyAllChunks() {
  AutoLockGC lock(this);
  fullChunks(lock).verifyChunks();
  availableChunks(lock).verifyChunks();
  emptyChunks(lock).verifyChunks();
  if (currentChunk_) {
    MOZ_ASSERT(currentChunk_->info.isCurrentChunk);
    currentChunk_->verify();
  } else {
    MOZ_ASSERT(pendingFreeCommittedArenas.ref().IsEmpty());
  }
}
#endif

void GCRuntime::setMinEmptyChunkCount(uint32_t value, const AutoLockGC& lock) {
  minEmptyChunkCount_ = value;
}

inline bool GCRuntime::tooManyEmptyChunks(const AutoLockGC& lock) {
  return emptyChunks(lock).count() > minEmptyChunkCount(lock);
}

ChunkPool GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock) {
  MOZ_ASSERT(emptyChunks(lock).verify());

  ChunkPool expired;
  if (isShrinkingGC()) {
    std::swap(expired, emptyChunks(lock));
  } else {
    while (tooManyEmptyChunks(lock)) {
      ArenaChunk* chunk = emptyChunks(lock).pop();
      prepareToFreeChunk(chunk->info);
      expired.push(chunk);
    }
  }

  MOZ_ASSERT(expired.verify());
  MOZ_ASSERT(emptyChunks(lock).verify());
  MOZ_ASSERT(emptyChunks(lock).count() <= minEmptyChunkCount(lock));
  return expired;
}

static void FreeChunkPool(ChunkPool& pool) {
  for (ChunkPool::Iter iter(pool); !iter.done();) {
    ArenaChunk* chunk = iter.get();
    iter.next();
    pool.remove(chunk);
    MOZ_ASSERT(chunk->isEmpty());
    UnmapPages(static_cast<void*>(chunk), ChunkSize);
  }
  MOZ_ASSERT(pool.count() == 0);
}

void GCRuntime::freeEmptyChunks(const AutoLockGC& lock) {
  FreeChunkPool(emptyChunks(lock));
}

inline void GCRuntime::prepareToFreeChunk(ArenaChunkInfo& info) {
  MOZ_ASSERT(info.numArenasFree == ArenasPerChunk);
  stats().count(gcstats::COUNT_DESTROY_CHUNK);
#ifdef DEBUG
  // Let FreeChunkPool detect a missing prepareToFreeChunk call before it frees
  // chunk.
  info.numArenasFreeCommitted = 0;
#endif
}

void GCRuntime::releaseArenaList(ArenaList& arenaList, const AutoLockGC& lock) {
  releaseArenas(arenaList.release(), lock);
}

void GCRuntime::releaseArenas(Arena* arena, const AutoLockGC& lock) {
  Arena* next;
  for (; arena; arena = next) {
    next = arena->next;
    releaseArena(arena, lock);
  }
}

void GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock) {
  MOZ_ASSERT(arena->allocated());
  MOZ_ASSERT(!arena->onDelayedMarkingList());
  MOZ_ASSERT(TlsGCContext.get()->isFinalizing());

  arena->zone()->gcHeapSize.removeBytes(ArenaSize, true, heapSize);
  if (arena->zone()->isAtomsZone()) {
    arena->freeAtomMarkingBitmapIndex(this, lock);
  }
  arena->release();
  arena->chunk()->releaseArena(this, arena, lock);
}

GCRuntime::GCRuntime(JSRuntime* rt)
    : rt(rt),
      systemZone(nullptr),
      mainThreadContext(rt),
      heapState_(JS::HeapState::Idle),
      stats_(this),
      sweepingTracer(rt),
      fullGCRequested(false),
      helperThreadRatio(TuningDefaults::HelperThreadRatio),
      maxHelperThreads(TuningDefaults::MaxHelperThreads),
      helperThreadCount(1),
      maxMarkingThreads(TuningDefaults::MaxMarkingThreads),
      markingThreadCount(1),
      createBudgetCallback(nullptr),
      minEmptyChunkCount_(TuningDefaults::MinEmptyChunkCount),
      rootsHash(256),
      nextCellUniqueId_(LargestTaggedNullCellPointer +
                        1),  // Ensure disjoint from null tagged pointers.
      verifyPreData(nullptr),
      lastGCStartTime_(TimeStamp::Now()),
      lastGCEndTime_(TimeStamp::Now()),
      incrementalGCEnabled(TuningDefaults::IncrementalGCEnabled),
      perZoneGCEnabled(TuningDefaults::PerZoneGCEnabled),
      numActiveZoneIters(0),
      grayBitsValid(true),
      majorGCTriggerReason(JS::GCReason::NO_REASON),
      minorGCNumber(0),
      majorGCNumber(0),
      number(0),
      sliceNumber(0),
      isFull(false),
      incrementalState(gc::State::NotActive),
      initialState(gc::State::NotActive),
      useZeal(false),
      lastMarkSlice(false),
      safeToYield(true),
      markOnBackgroundThreadDuringSweeping(false),
      useBackgroundThreads(false),
#ifdef DEBUG
      hadShutdownGC(false),
#endif
      requestSliceAfterBackgroundTask(false),
      lifoBlocksToFree((size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE,
                       js::BackgroundMallocArena),
      lifoBlocksToFreeAfterFullMinorGC(
          (size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE,
          js::BackgroundMallocArena),
      lifoBlocksToFreeAfterNextMinorGC(
          (size_t)JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE,
          js::BackgroundMallocArena),
      sweepGroupIndex(0),
      sweepGroups(nullptr),
      currentSweepGroup(nullptr),
      sweepZone(nullptr),
      abortSweepAfterCurrentGroup(false),
      sweepMarkResult(IncrementalProgress::NotFinished),
#ifdef DEBUG
      testMarkQueue(rt),
#endif
      startedCompacting(false),
      zonesCompacted(0),
#ifdef DEBUG
      relocatedArenasToRelease(nullptr),
#endif
#ifdef JS_GC_ZEAL
      markingValidator(nullptr),
#endif
      defaultTimeBudgetMS_(TuningDefaults::DefaultTimeBudgetMS),
      compactingEnabled(TuningDefaults::CompactingEnabled),
      nurseryEnabled(TuningDefaults::NurseryEnabled),
      parallelMarkingEnabled(TuningDefaults::ParallelMarkingEnabled),
      rootsRemoved(false),
#ifdef JS_GC_ZEAL
      zealModeBits(0),
      zealFrequency(0),
      nextScheduled(0),
      deterministicOnly(false),
      zealSliceBudget(0),
      selectedForMarking(rt),
#endif
      fullCompartmentChecks(false),
      gcCallbackDepth(0),
      alwaysPreserveCode(false),
      lowMemoryState(false),
      lock(mutexid::GCLock),
      storeBufferLock(mutexid::StoreBuffer),
      delayedMarkingLock(mutexid::GCDelayedMarkingLock),
      bufferAllocatorLock(mutexid::BufferAllocator),
      allocTask(this, emptyChunks_.ref()),
      unmarkTask(this),
      markTask(this),
      sweepTask(this),
      freeTask(this),
      decommitTask(this),
      nursery_(this),
      storeBuffer_(rt),
      lastAllocRateUpdateTime(TimeStamp::Now()) {
}

bool js::gc::SplitStringBy(const char* text, char delimiter,
                           CharRangeVector* result) {
  return SplitStringBy(CharRange(text, strlen(text)), delimiter, result);
}

bool js::gc::SplitStringBy(const CharRange& text, char delimiter,
                           CharRangeVector* result) {
  auto start = text.begin();
  for (auto ptr = start; ptr != text.end(); ptr++) {
    if (*ptr == delimiter) {
      if (!result->emplaceBack(start, ptr)) {
        return false;
      }
      start = ptr + 1;
    }
  }

  return result->emplaceBack(start, text.end());
}

static bool ParseTimeDuration(const CharRange& text,
                              TimeDuration* durationOut) {
  const char* str = text.begin().get();
  char* end;
  long millis = strtol(str, &end, 10);
  *durationOut = TimeDuration::FromMilliseconds(double(millis));
  return str != end && end == text.end().get();
}

static void PrintProfileHelpAndExit(const char* envName, const char* helpText) {
  fprintf(stderr, "%s=N[,(main|all)]\n", envName);
  fprintf(stderr, "%s", helpText);
  exit(0);
}

void js::gc::ReadProfileEnv(const char* envName, const char* helpText,
                            bool* enableOut, bool* workersOut,
                            TimeDuration* thresholdOut) {
  *enableOut = false;
  *workersOut = false;
  *thresholdOut = TimeDuration::Zero();

  const char* env = getenv(envName);
  if (!env) {
    return;
  }

  if (strcmp(env, "help") == 0) {
    PrintProfileHelpAndExit(envName, helpText);
  }

  CharRangeVector parts;
  if (!SplitStringBy(env, ',', &parts)) {
    MOZ_CRASH("OOM parsing environment variable");
  }

  if (parts.length() == 0 || parts.length() > 2) {
    PrintProfileHelpAndExit(envName, helpText);
  }

  *enableOut = true;

  if (!ParseTimeDuration(parts[0], thresholdOut)) {
    PrintProfileHelpAndExit(envName, helpText);
  }

  if (parts.length() == 2) {
    const char* threads = parts[1].begin().get();
    if (strcmp(threads, "all") == 0) {
      *workersOut = true;
    } else if (strcmp(threads, "main") != 0) {
      PrintProfileHelpAndExit(envName, helpText);
    }
  }
}

bool js::gc::ShouldPrintProfile(JSRuntime* runtime, bool enable,
                                bool profileWorkers, TimeDuration threshold,
                                TimeDuration duration) {
  return enable && (runtime->isMainRuntime() || profileWorkers) &&
         duration >= threshold;
}

#ifdef JS_GC_ZEAL

void GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency,
                            uint32_t* scheduled) {
  *zealBits = zealModeBits;
  *frequency = zealFrequency;
  *scheduled = nextScheduled;
}

// Please also update jit-test/tests/gc/gczeal.js when updating this help text.
// clang-format off
const char gc::ZealModeHelpText[] =
    "  Specifies how zealous the garbage collector should be. Some of these modes\n"
    "  can be set simultaneously, by passing multiple level options, e.g. \"2;4\"\n"
    "  will activate both modes 2 and 4. Modes can be specified by name or\n"
    "  number.\n"
    "  \n"
    "  Values:\n"
    "    0:  (None) Normal amount of collection (resets all modes)\n"
    "    1:  (RootsChange) Collect when roots are added or removed\n"
    "    2:  (Alloc) Collect every N allocations (default: 100)\n"
    "    4:  (VerifierPre) Verify pre write barriers between instructions\n"
    "    5:  (VerifierPost) Verify post write barriers after minor GC\n"
    "    6:  (YieldBeforeRootMarking) Incremental GC in two slices that yields\n"
    "        before root marking\n"
    "    7:  (GenerationalGC) Collect the nursery every N nursery allocations\n"
    "    8:  (YieldBeforeMarking) Incremental GC in two slices that yields\n"
    "        between the root marking and marking phases\n"
    "    9:  (YieldBeforeSweeping) Incremental GC in two slices that yields\n"
    "        between the marking and sweeping phases\n"
    "    10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
    "    11: (IncrementalMarkingValidator) Verify incremental marking\n"
    "    12: (ElementsBarrier) Use the individual element post-write barrier\n"
    "        regardless of elements size\n"
    "    13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
    "    14: (Compact) Perform a shrinking collection every N allocations\n"
    "    15: (CheckHeapAfterGC) Walk the heap to check its integrity after every\n"
    "        GC\n"
    "    17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that yields\n"
    "        before sweeping the atoms table\n"
    "    18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
    "    19: (YieldBeforeSweepingCaches) Incremental GC in two slices that yields\n"
    "        before sweeping weak caches\n"
    "    21: (YieldBeforeSweepingObjects) Incremental GC that yields once per\n"
    "        zone before sweeping foreground finalized objects\n"
    "    22: (YieldBeforeSweepingNonObjects) Incremental GC that yields once per\n"
    "        zone before sweeping non-object GC things\n"
    "    23: (YieldBeforeSweepingPropMapTrees) Incremental GC that yields once\n"
    "        per zone before sweeping shape trees\n"
    "    24: (CheckWeakMapMarking) Check weak map marking invariants after every\n"
    "        GC\n"
    "    25: (YieldWhileGrayMarking) Incremental GC in two slices that yields\n"
    "        during gray marking\n"
    "    26: (CheckHeapBeforeMinorGC) Check for invariant violations before every\n"
    "        minor GC\n";
// clang-format on

// The set of zeal modes that yield at specific points in collection.
static constexpr EnumSet<ZealMode> YieldPointZealModes = {
    ZealMode::YieldBeforeRootMarking,
    ZealMode::YieldBeforeMarking,
    ZealMode::YieldBeforeSweeping,
    ZealMode::YieldBeforeSweepingAtoms,
    ZealMode::YieldBeforeSweepingCaches,
    ZealMode::YieldBeforeSweepingObjects,
    ZealMode::YieldBeforeSweepingNonObjects,
    ZealMode::YieldBeforeSweepingPropMapTrees,
    ZealMode::YieldWhileGrayMarking};

// The set of zeal modes that control incremental slices.
static constexpr EnumSet<ZealMode> IncrementalSliceZealModes =
    YieldPointZealModes +
    EnumSet<ZealMode>{ZealMode::IncrementalMultipleSlices};

// The set of zeal modes that trigger GC periodically.
static constexpr EnumSet<ZealMode> PeriodicGCZealModes =
    IncrementalSliceZealModes +
    EnumSet<ZealMode>{ZealMode::Alloc, ZealMode::VerifierPost,
                      ZealMode::GenerationalGC, ZealMode::Compact};

// The set of zeal modes that are mutually exclusive. All of these trigger GC
// except VerifierPre.
static constexpr EnumSet<ZealMode> ExclusiveZealModes =
    PeriodicGCZealModes + EnumSet<ZealMode>{ZealMode::VerifierPre};

void GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) {
  MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));

  if (verifyPreData) {
    VerifyBarriers(rt, PreBarrierVerifier);
  }

  if (zeal == 0) {
    if (hasZealMode(ZealMode::GenerationalGC)) {
      clearZealMode(ZealMode::GenerationalGC);
    }

    if (isIncrementalGCInProgress()) {
      finishGC(JS::GCReason::DEBUG_GC);
    }

    zealModeBits = 0;
    zealFrequency = 0;
    nextScheduled = 0;
    return;
  }

  // Modes that trigger periodically are mutually exclusive. If we're setting
  // one of those, we first reset all of them.
  ZealMode zealMode = ZealMode(zeal);
  if (ExclusiveZealModes.contains(zealMode)) {
    for (auto mode : ExclusiveZealModes) {
      if (hasZealMode(mode)) {
        clearZealMode(mode);
      }
    }
  }

  if (zealMode == ZealMode::GenerationalGC) {
    evictNursery(JS::GCReason::EVICT_NURSERY);
    nursery().enterZealMode();
  }

  zealModeBits |= 1 << zeal;
  zealFrequency = frequency;

  if (PeriodicGCZealModes.contains(zealMode)) {
    nextScheduled = frequency;
  }
}

void GCRuntime::unsetZeal(uint8_t zeal) {
  MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
  ZealMode zealMode = ZealMode(zeal);

  if (!hasZealMode(zealMode)) {
    return;
  }

  if (verifyPreData) {
    VerifyBarriers(rt, PreBarrierVerifier);
  }

  clearZealMode(zealMode);

  if (zealModeBits == 0) {
    if (isIncrementalGCInProgress()) {
      finishGC(JS::GCReason::DEBUG_GC);
    }

    zealFrequency = 0;
    nextScheduled = 0;
  }
}

void GCRuntime::setNextScheduled(uint32_t count) { nextScheduled = count; }

static bool ParseZealModeName(const CharRange& text, uint32_t* modeOut) {
  struct ModeInfo {
    const char* name;
    size_t length;
    uint32_t value;
  };

  static const ModeInfo zealModes[] = {{"None", strlen("None"), 0},
# define ZEAL_MODE(name, value) {#name, strlen(#name), value},
                                       JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
# undef ZEAL_MODE
  };

  for (auto mode : zealModes) {
    if (text.length() == mode.length &&
        memcmp(text.begin().get(), mode.name, mode.length) == 0) {
      *modeOut = mode.value;
      return true;
    }
  }

  return false;
}

static bool ParseZealModeNumericParam(const CharRange& text,
                                      uint32_t* paramOut) {
  if (text.length() == 0) {
    return false;
  }

  for (auto c : text) {
    if (!mozilla::IsAsciiDigit(c)) {
      return false;
    }
  }

  *paramOut = atoi(text.begin().get());
  return true;
}

static bool PrintZealHelpAndFail() {
  fprintf(stderr, "Format: JS_GC_ZEAL=mode[;mode2;mode3...][,frequency]\n");
  fprintf(stderr, "  Examples: JS_GC_ZEAL=2 (mode 2 with default frequency)\n");
  fprintf(
      stderr,
      "            JS_GC_ZEAL=2;7 (modes 2 and 7 with default frequency)\n");
  fprintf(stderr, "            JS_GC_ZEAL=2,100 (mode 2 with frequency 100)\n");
  fprintf(stderr,
          "            JS_GC_ZEAL=2;7,100 (modes 2 and 7, both with frequency "
          "100)\n");
  fputs(ZealModeHelpText, stderr);
  return false;
}

bool GCRuntime::parseZeal(const char* str, size_t len, ZealSettings* zeal,
                          bool* invalid) {
  CharRange text(str, len);

  // The zeal mode setting is a string consisting of one or more mode
  // specifiers separated by ';', optionally followed by a ',' and the trigger
  // frequency. The mode specifiers can be a mode name or its number.
  *invalid = false;

  CharRangeVector parts;
  if (!SplitStringBy(text, ',', &parts)) {
    return false;
  }

  if (parts.length() == 0 || parts.length() > 2) {
    *invalid = true;
    return true;
  }

  uint32_t frequency = JS::ShellDefaultGCZealFrequency;
  if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency)) {
    *invalid = true;
    return true;
  }

  CharRangeVector modes;
  if (!SplitStringBy(parts[0], ';', &modes)) {
    return false;
  }

  for (const auto& descr : modes) {
    uint32_t mode;
    if (!ParseZealModeName(descr, &mode) &&
        !(ParseZealModeNumericParam(descr, &mode) &&
          mode <= unsigned(ZealMode::Limit))) {
      *invalid = true;
      return true;
    }

    if (!zeal->append(ZealSetting{uint8_t(mode), frequency})) {
      return false;
    }
  }

  return true;
}

bool GCRuntime::parseAndSetZeal(const char* str) {
  ZealSettings zeal;
  bool invalid = false;
  if (!parseZeal(str, strlen(str), &zeal, &invalid)) {
    return false;
  }

  if (invalid) {
    return PrintZealHelpAndFail();
  }

  for (auto [mode, frequency] : zeal) {
    setZeal(mode, frequency);
  }

  return true;
}

bool GCRuntime::needZealousGC() {
  if (nextScheduled > 0 && --nextScheduled == 0) {
    if (hasAnyZealModeOf(PeriodicGCZealModes)) {
      nextScheduled = zealFrequency;
    }
    return true;
  }
  return false;
}

bool GCRuntime::zealModeControlsYieldPoint() const {
  // Indicates whether a zeal mode is enabled that controls the point at which
  // the collector yields to the mutator. Yield can happen once per collection
  // or once per zone depending on the mode.
  return hasAnyZealModeOf(YieldPointZealModes);
}

bool GCRuntime::hasZealMode(ZealMode mode) const {
  static_assert(size_t(ZealMode::Limit) < sizeof(zealModeBits) * 8,
                "Zeal modes must fit in zealModeBits");
  return zealModeBits & (1 << uint32_t(mode));
}

bool GCRuntime::hasAnyZealModeOf(EnumSet<ZealMode> modes) const {
  return zealModeBits & modes.serialize();
}

void GCRuntime::clearZealMode(ZealMode mode) {
  MOZ_ASSERT(hasZealMode(mode));

  if (mode == ZealMode::GenerationalGC) {
    evictNursery();
    nursery().leaveZealMode();
  }

  zealModeBits &= ~(1 << uint32_t(mode));
  MOZ_ASSERT(!hasZealMode(mode));
}

void js::gc::DumpArenaInfo() {
  fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);

  fprintf(stderr, "GC thing kinds:\n");
  fprintf(stderr, "%25s %8s %8s %8s\n",
          "AllocKind:", "Size:", "Count:", "Padding:");
  for (auto kind : AllAllocKinds()) {
    fprintf(stderr, "%25s %8zu %8zu %8zu\n", AllocKindName(kind),
            Arena::thingSize(kind), Arena::thingsPerArena(kind),
            Arena::firstThingOffset(kind) - ArenaHeaderSize);
  }
}

#endif  // JS_GC_ZEAL

const char* js::gc::AllocKindName(AllocKind kind) {
  static const char* const names[] = {
#define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5, _6) #allocKind,
      FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
#undef EXPAND_THING_NAME
  };
  static_assert(std::size(names) == AllocKindCount,
                "names array should have an entry for every AllocKind");

  size_t i = size_t(kind);
  MOZ_ASSERT(i < std::size(names));
  return names[i];
}

bool GCRuntime::init(uint32_t maxbytes) {
  MOZ_ASSERT(!wasInitialized());

  MOZ_ASSERT(SystemPageSize());
  Arena::staticAsserts();
  Arena::checkLookupTables();

  if (!TlsGCContext.init()) {
    return false;
  }
  TlsGCContext.set(&mainThreadContext.ref());

  updateHelperThreadCount();
#ifdef JS_GC_ZEAL
  const char* size = getenv("JSGC_MARK_STACK_LIMIT");
  if (size) {
    maybeMarkStackLimit = atoi(size);
  }
#endif

  if (!updateMarkersVector()) {
    return false;
  }

  {
    AutoLockGCBgAlloc lock(this);

    MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes));

    if (!nursery().init(lock)) {
      return false;
    }
  }

#ifdef JS_GC_ZEAL
  const char* zealSpec = getenv("JS_GC_ZEAL");
  if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec)) {
    return false;
  }
#endif

  for (auto& marker : markers) {
    if (!marker->init()) {
      return false;
    }
  }

  if (!initSweepActions()) {
    return false;
  }

  UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
  if (!zone || !zone->init()) {
    return false;
  }

  // The atoms zone is stored as the first element of the zones vector.
  MOZ_ASSERT(zone->isAtomsZone());
  MOZ_ASSERT(zones().empty());
  MOZ_ALWAYS_TRUE(zones().reserve(1));  // ZonesVector has inline capacity 4.
  zones().infallibleAppend(zone.release());

  gcprobes::Init(this);

  initialized = true;
  return true;
}

void GCRuntime::finish() {
  MOZ_ASSERT(inPageLoadCount == 0);
  MOZ_ASSERT(!sharedAtomsZone_);

  // Wait for nursery background free to end and disable it to release memory.
  if (nursery().isEnabled()) {
    nursery().disable();
  }

  // Wait until background finalization and allocation stop and the helper
  // thread shuts down before we forcefully release any remaining GC memory.
  sweepTask.join();
  markTask.join();
  freeTask.join();
  allocTask.cancelAndWait();
  decommitTask.cancelAndWait();
#ifdef DEBUG
  {
    MOZ_ASSERT(dispatchedParallelTasks == 0);
    AutoLockHelperThreadState lock;
    MOZ_ASSERT(queuedParallelTasks.ref().isEmpty(lock));
  }
#endif

  releaseMarkingThreads();

#ifdef JS_GC_ZEAL
  // Free memory associated with GC verification.
  finishVerifier();
#endif

  // Delete all remaining zones.
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    AutoSetThreadIsSweeping threadIsSweeping(rt->gcContext(), zone);
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
      for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
        js_delete(realm.get());
      }
      comp->realms().clear();
      js_delete(comp.get());
    }
    zone->compartments().clear();
    js_delete(zone.get());
  }

  zones().clear();

  {
    AutoLockGC lock(this);
    clearCurrentChunk(lock);
  }

  FreeChunkPool(fullChunks_.ref());
  FreeChunkPool(availableChunks_.ref());
  FreeChunkPool(emptyChunks_.ref());

  TlsGCContext.set(nullptr);

  gcprobes::Finish(this);

  nursery().printTotalProfileTimes();
  stats().printTotalProfileTimes();
}

bool GCRuntime::freezeSharedAtomsZone() {
  // This is called just after permanent atoms and well-known symbols have been
  // created. At this point all existing atoms and symbols are permanent.
  //
  // This method makes the current atoms zone into a shared atoms zone and
  // removes it from the zones list. Everything in it is marked black. A new
  // empty atoms zone is created, where all atoms local to this runtime will
  // live.
  //
  // The shared atoms zone will not be collected until shutdown, when it is
  // returned to the zone list by restoreSharedAtomsZone().

  MOZ_ASSERT(rt->isMainRuntime());
  MOZ_ASSERT(!sharedAtomsZone_);
  MOZ_ASSERT(zones().length() == 1);
  MOZ_ASSERT(atomsZone());
  MOZ_ASSERT(!atomsZone()->wasGCStarted());
  MOZ_ASSERT(!atomsZone()->needsIncrementalBarrier());

  AutoAssertEmptyNursery nurseryIsEmpty(rt->mainContextFromOwnThread());

  atomsZone()->arenas.clearFreeLists();

  for (auto kind : AllAllocKinds()) {
    for (auto thing =
             atomsZone()->cellIterUnsafe<TenuredCell>(kind, nurseryIsEmpty);
         !thing.done(); thing.next()) {
      TenuredCell* cell = thing.getCell();
      MOZ_ASSERT((cell->is<JSString>() &&
                  cell->as<JSString>()->isPermanentAndMayBeShared()) ||
                 (cell->is<JS::Symbol>() &&
                  cell->as<JS::Symbol>()->isPermanentAndMayBeShared()));
      cell->markBlack();
    }
  }

  sharedAtomsZone_ = atomsZone();
  zones().clear();

  UniquePtr<Zone> zone = MakeUnique<Zone>(rt, Zone::AtomsZone);
  if (!zone || !zone->init()) {
    return false;
  }

  MOZ_ASSERT(zone->isAtomsZone());
  zones().infallibleAppend(zone.release());

  return true;
}

void GCRuntime::restoreSharedAtomsZone() {
  // Return the shared atoms zone to the zone list. This allows the contents of
  // the shared atoms zone to be collected when the parent runtime is shut down.

  if (!sharedAtomsZone_) {
    return;
  }

  MOZ_ASSERT(rt->isMainRuntime());
  MOZ_ASSERT(rt->childRuntimeCount == 0);

  // Insert at the start to preserve the invariant that atoms zones come first.
  AutoEnterOOMUnsafeRegion oomUnsafe;
  if (!zones().insert(zones().begin(), sharedAtomsZone_)) {
    oomUnsafe.crash("restoreSharedAtomsZone");
  }

  sharedAtomsZone_ = nullptr;
}

bool GCRuntime::setParameter(JSContext* cx, JSGCParamKey key, uint32_t value) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  AutoStopVerifyingBarriers pauseVerification(rt, false);
  FinishGC(cx);
  waitBackgroundSweepEnd();

  // Special case: if there is still an `AutoDisableGenerationalGC` active
  // (e.g. from the --no-ggc command-line flag), then do not allow controlling
  // the state of the nursery. Done here, where cx is available.
  if (key == JSGC_NURSERY_ENABLED && cx->generationalDisabled > 0) {
    return false;
  }

  AutoLockGC lock(this);
  return setParameter(key, value, lock);
}

static bool IsGCThreadParameter(JSGCParamKey key) {
  return key == JSGC_HELPER_THREAD_RATIO || key == JSGC_MAX_HELPER_THREADS ||
         key == JSGC_MAX_MARKING_THREADS;
}

bool GCRuntime::setParameter(JSGCParamKey key, uint32_t value,
                             AutoLockGC& lock) {
  switch (key) {
    case JSGC_SLICE_TIME_BUDGET_MS:
      defaultTimeBudgetMS_ = value;
      break;
    case JSGC_INCREMENTAL_GC_ENABLED:
      setIncrementalGCEnabled(value != 0);
      break;
    case JSGC_PER_ZONE_GC_ENABLED:
      perZoneGCEnabled = value != 0;
      break;
    case JSGC_COMPACTING_ENABLED:
      compactingEnabled = value != 0;
      break;
    case JSGC_NURSERY_ENABLED: {
      AutoUnlockGC unlock(lock);
      setNurseryEnabled(value != 0);
      break;
    }
    case JSGC_PARALLEL_MARKING_ENABLED:
      setParallelMarkingEnabled(value != 0);
      break;
    case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
      for (auto& marker : markers) {
        marker->incrementalWeakMapMarkingEnabled = value != 0;
      }
      break;
    case JSGC_SEMISPACE_NURSERY_ENABLED: {
      AutoUnlockGC unlock(lock);
      nursery().setSemispaceEnabled(value);
      break;
    }
    case JSGC_MIN_EMPTY_CHUNK_COUNT:
      setMinEmptyChunkCount(value, lock);
      break;
    default:
      if (IsGCThreadParameter(key)) {
        return setThreadParameter(key, value, lock);
      }

      if (!tunables.setParameter(key, value)) {
        return false;
      }
      updateAllGCStartThresholds();
  }

  return true;
}

bool GCRuntime::setThreadParameter(JSGCParamKey key, uint32_t value,
                                   AutoLockGC& lock) {
  if (rt->parentRuntime) {
    // Don't allow these to be set for worker runtimes.
    return false;
  }

  switch (key) {
    case JSGC_HELPER_THREAD_RATIO:
      if (value == 0) {
        return false;
      }
      helperThreadRatio = double(value) / 100.0;
      break;
    case JSGC_MAX_HELPER_THREADS:
      if (value == 0) {
        return false;
      }
      maxHelperThreads = value;
      break;
    case JSGC_MAX_MARKING_THREADS:
      maxMarkingThreads = std::min(size_t(value), MaxParallelWorkers);
      break;
    default:
      MOZ_CRASH("Unexpected parameter key");
  }

  updateHelperThreadCount();
  initOrDisableParallelMarking();

  return true;
}

void GCRuntime::resetParameter(JSContext* cx, JSGCParamKey key) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  AutoStopVerifyingBarriers pauseVerification(rt, false);
  FinishGC(cx);
  waitBackgroundSweepEnd();

  AutoLockGC lock(this);
  resetParameter(key, lock);
}

void GCRuntime::resetParameter(JSGCParamKey key, AutoLockGC& lock) {
  switch (key) {
    case JSGC_SLICE_TIME_BUDGET_MS:
      defaultTimeBudgetMS_ = TuningDefaults::DefaultTimeBudgetMS;
      break;
    case JSGC_INCREMENTAL_GC_ENABLED:
      setIncrementalGCEnabled(TuningDefaults::IncrementalGCEnabled);
      break;
    case JSGC_PER_ZONE_GC_ENABLED:
      perZoneGCEnabled = TuningDefaults::PerZoneGCEnabled;
      break;
    case JSGC_COMPACTING_ENABLED:
      compactingEnabled = TuningDefaults::CompactingEnabled;
      break;
    case JSGC_NURSERY_ENABLED:
      setNurseryEnabled(TuningDefaults::NurseryEnabled);
      break;
    case JSGC_PARALLEL_MARKING_ENABLED:
      setParallelMarkingEnabled(TuningDefaults::ParallelMarkingEnabled);
      break;
    case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
      for (auto& marker : markers) {
        marker->incrementalWeakMapMarkingEnabled =
            TuningDefaults::IncrementalWeakMapMarkingEnabled;
      }
      break;
    case JSGC_SEMISPACE_NURSERY_ENABLED: {
      AutoUnlockGC unlock(lock);
      nursery().setSemispaceEnabled(TuningDefaults::SemispaceNurseryEnabled);
      break;
    }
    case JSGC_MIN_EMPTY_CHUNK_COUNT:
      setMinEmptyChunkCount(TuningDefaults::MinEmptyChunkCount, lock);
      break;
    default:
      if (IsGCThreadParameter(key)) {
        resetThreadParameter(key, lock);
        return;
      }

      tunables.resetParameter(key);
      updateAllGCStartThresholds();
  }
}

void GCRuntime::resetThreadParameter(JSGCParamKey key, AutoLockGC& lock) {
  if (rt->parentRuntime) {
    return;
  }

  switch (key) {
    case JSGC_HELPER_THREAD_RATIO:
      helperThreadRatio = TuningDefaults::HelperThreadRatio;
      break;
    case JSGC_MAX_HELPER_THREADS:
      maxHelperThreads = TuningDefaults::MaxHelperThreads;
      break;
    case JSGC_MAX_MARKING_THREADS:
      maxMarkingThreads = TuningDefaults::MaxMarkingThreads;
      break;
    default:
      MOZ_CRASH("Unexpected parameter key");
  }

  updateHelperThreadCount();
  initOrDisableParallelMarking();
}

uint32_t GCRuntime::getParameter(JSGCParamKey key) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  AutoLockGC lock(this);
  return getParameter(key, lock);
}

uint32_t GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock) {
  switch (key) {
    case JSGC_BYTES:
      return uint32_t(heapSize.bytes());
    case JSGC_NURSERY_BYTES:
      return nursery().capacity();
    case JSGC_NUMBER:
      return uint32_t(number);
    case JSGC_MAJOR_GC_NUMBER:
      return uint32_t(majorGCNumber);
    case JSGC_MINOR_GC_NUMBER:
      return uint32_t(minorGCNumber);
    case JSGC_SLICE_NUMBER:
      return uint32_t(sliceNumber);
    case JSGC_INCREMENTAL_GC_ENABLED:
      return incrementalGCEnabled;
    case JSGC_PER_ZONE_GC_ENABLED:
      return perZoneGCEnabled;
    case JSGC_UNUSED_CHUNKS:
      clearCurrentChunk(lock);
      return uint32_t(emptyChunks(lock).count());
    case JSGC_TOTAL_CHUNKS:
      clearCurrentChunk(lock);
      return uint32_t(fullChunks(lock).count() + availableChunks(lock).count() +
                      emptyChunks(lock).count());
    case JSGC_SLICE_TIME_BUDGET_MS:
      MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ >= 0);
      MOZ_RELEASE_ASSERT(defaultTimeBudgetMS_ <= UINT32_MAX);
      return uint32_t(defaultTimeBudgetMS_);
    case JSGC_MIN_EMPTY_CHUNK_COUNT:
      return minEmptyChunkCount(lock);
    case JSGC_COMPACTING_ENABLED:
      return compactingEnabled;
    case JSGC_NURSERY_ENABLED:
      return nursery().isEnabled();
    case JSGC_PARALLEL_MARKING_ENABLED:
      return parallelMarkingEnabled;
    case JSGC_INCREMENTAL_WEAKMAP_ENABLED:
      return marker().incrementalWeakMapMarkingEnabled;
    case JSGC_SEMISPACE_NURSERY_ENABLED:
      return nursery().semispaceEnabled();
    case JSGC_CHUNK_BYTES:
      return ChunkSize;
    case JSGC_HELPER_THREAD_RATIO:
      MOZ_ASSERT(helperThreadRatio > 0.0);
      return uint32_t(helperThreadRatio * 100.0);
    case JSGC_MAX_HELPER_THREADS:
      MOZ_ASSERT(maxHelperThreads <= UINT32_MAX);
      return maxHelperThreads;
    case JSGC_HELPER_THREAD_COUNT:
      return helperThreadCount;
    case JSGC_MAX_MARKING_THREADS:
      return maxMarkingThreads;
    case JSGC_MARKING_THREAD_COUNT:
      return markingThreadCount;
    case JSGC_SYSTEM_PAGE_SIZE_KB:
      return SystemPageSize() / 1024;
    case JSGC_HIGH_FREQUENCY_MODE:
      return schedulingState.inHighFrequencyGCMode();
    default:
      return tunables.getParameter(key);
  }
}

#ifdef JS_GC_ZEAL
void GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock) {
  MOZ_ASSERT(!JS::RuntimeHeapIsBusy());

  maybeMarkStackLimit = limit;

  AutoUnlockGC unlock(lock);
  AutoStopVerifyingBarriers pauseVerification(rt, false);
  for (auto& marker : markers) {
    marker->setMaxCapacity(limit);
  }
}
#endif

void GCRuntime::setIncrementalGCEnabled(bool enabled) {
  incrementalGCEnabled = enabled;
}

void GCRuntime::setNurseryEnabled(bool enabled) {
  if (enabled) {
    nursery().enable();
  } else {
    if (nursery().isEnabled()) {
      minorGC(JS::GCReason::EVICT_NURSERY);
      nursery().disable();
    }
  }
}

void GCRuntime::updateHelperThreadCount() {
  if (!CanUseExtraThreads()) {
    // startTask will run the work on the main thread if the count is 1.
    MOZ_ASSERT(helperThreadCount == 1);
    markingThreadCount = 1;

    AutoLockHelperThreadState lock;
    maxParallelThreads = 1;
    return;
  }

  // Number of extra threads required during parallel marking to ensure we can
  // start the necessary marking tasks. Background free and background
  // allocation may already be running and we want to avoid these tasks
  // blocking marking. In real configurations there will be enough threads
  // that this won't affect anything.
  static constexpr size_t SpareThreadsDuringParallelMarking = 2;

  // Calculate the target thread count for GC parallel tasks.
  size_t cpuCount = GetHelperThreadCPUCount();
  helperThreadCount =
      std::clamp(size_t(double(cpuCount) * helperThreadRatio.ref()), size_t(1),
                 maxHelperThreads.ref());

  // Calculate the target thread count for parallel marking, which uses
  // separate parameters to let us adjust this independently.
  markingThreadCount = std::min(cpuCount / 2, maxMarkingThreads.ref());

  // Calculate the overall target thread count taking into account the
  // separate target for parallel marking threads. Add spare threads to avoid
  // blocking parallel marking when there is other GC work happening.
  size_t targetCount =
      std::max(helperThreadCount.ref(),
               markingThreadCount.ref() + SpareThreadsDuringParallelMarking);

  // Attempt to create extra threads if possible. This is not supported when
  // using an external thread pool.
  AutoLockHelperThreadState lock;
  (void)HelperThreadState().ensureThreadCount(targetCount, lock);

  // Limit all thread counts based on the number of threads available, which
  // may be fewer than requested.
  size_t availableThreadCount = GetHelperThreadCount();
  MOZ_ASSERT(availableThreadCount != 0);
  targetCount = std::min(targetCount, availableThreadCount);
  helperThreadCount = std::min(helperThreadCount.ref(), availableThreadCount);
  if (availableThreadCount < SpareThreadsDuringParallelMarking) {
    markingThreadCount = 1;
  } else {
    markingThreadCount =
        std::min(markingThreadCount.ref(),
                 availableThreadCount - SpareThreadsDuringParallelMarking);
  }

  // Update the maximum number of threads that will be used for GC work.
  maxParallelThreads = targetCount;
}

size_t GCRuntime::markingWorkerCount() const {
  if (!CanUseExtraThreads() || !parallelMarkingEnabled) {
    return 1;
  }

  if (markingThreadCount) {
    return markingThreadCount;
  }

  // Limit parallel marking to use at most two threads initially.
  return 2;
}

#ifdef DEBUG
void GCRuntime::assertNoMarkingWork() const {
  for (const auto& marker : markers) {
    MOZ_ASSERT(marker->isDrained());
  }
  MOZ_ASSERT(!hasDelayedMarking());
}
#endif

bool GCRuntime::setParallelMarkingEnabled(bool enabled) {
  if (enabled == parallelMarkingEnabled) {
    return true;
  }

  parallelMarkingEnabled = enabled;
  return initOrDisableParallelMarking();
}

bool GCRuntime::initOrDisableParallelMarking() {
  // Attempt to initialize parallel marking state, or disable it on failure.
  // This is called when parallel marking is enabled or disabled.

  MOZ_ASSERT(markers.length() != 0);

  if (updateMarkersVector()) {
    return true;
  }

  // Failed to initialize parallel marking, so disable it instead.
  MOZ_ASSERT(parallelMarkingEnabled);
  parallelMarkingEnabled = false;
  MOZ_ALWAYS_TRUE(updateMarkersVector());
  return false;
}

void GCRuntime::releaseMarkingThreads() {
  MOZ_ALWAYS_TRUE(reserveMarkingThreads(0));
}

bool GCRuntime::reserveMarkingThreads(size_t newCount) {
  if (reservedMarkingThreads == newCount) {
    return true;
  }

  // Update the helper thread system's global count by subtracting this
  // runtime's current contribution |reservedMarkingThreads| and adding the new
  // contribution |newCount|.

  AutoLockHelperThreadState lock;
  auto& globalCount = HelperThreadState().gcParallelMarkingThreads;
  MOZ_ASSERT(globalCount >= reservedMarkingThreads);
  size_t newGlobalCount = globalCount - reservedMarkingThreads + newCount;
  if (newGlobalCount > HelperThreadState().threadCount) {
    // Not enough total threads.
    return false;
  }

  globalCount = newGlobalCount;
  reservedMarkingThreads = newCount;
  return true;
}

size_t GCRuntime::getMaxParallelThreads() const {
  AutoLockHelperThreadState lock;
  return maxParallelThreads.ref();
}

bool GCRuntime::updateMarkersVector() {
  MOZ_ASSERT(helperThreadCount >= 1,
             "There must always be at least one mark task");
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  assertNoMarkingWork();

  // Limit the worker count to the number of GC parallel tasks that can run
  // concurrently, otherwise one thread can deadlock waiting on another.
  size_t targetCount = std::min(markingWorkerCount(), getMaxParallelThreads());

  if (rt->isMainRuntime()) {
    // For the main runtime, reserve helper threads as long as parallel marking
    // is enabled. Worker runtimes may not mark in parallel if there are
    // insufficient threads available at the time.
    size_t threadsToReserve = targetCount > 1 ? targetCount : 0;
    if (!reserveMarkingThreads(threadsToReserve)) {
      return false;
    }
  }

  if (markers.length() > targetCount) {
    return markers.resize(targetCount);
  }

  while (markers.length() < targetCount) {
    auto marker = MakeUnique<GCMarker>(rt);
    if (!marker) {
      return false;
    }

#ifdef JS_GC_ZEAL
    if (maybeMarkStackLimit) {
      marker->setMaxCapacity(maybeMarkStackLimit);
    }
#endif

    if (!marker->init()) {
      return false;
    }

    if (!markers.emplaceBack(std::move(marker))) {
      return false;
    }
  }

  return true;
}

template <typename F>
static bool EraseCallback(CallbackVector<F>& vector, F callback) {
  for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
    if (p->op == callback) {
      vector.erase(p);
      return true;
    }
  }

  return false;
}

template <typename F>
static bool EraseCallback(CallbackVector<F>& vector, F callback, void* data) {
  for (Callback<F>* p = vector.begin(); p != vector.end(); p++) {
    if (p->op == callback && p->data == data) {
      vector.erase(p);
      return true;
    }
  }

  return false;
}

bool GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
  AssertHeapIsIdle();
  return blackRootTracers.ref().append(Callback<JSTraceDataOp>(traceOp, data));
}

void GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data) {
  // Can be called from finalizers.
  MOZ_ALWAYS_TRUE(EraseCallback(blackRootTracers.ref(), traceOp, data));
}

void GCRuntime::setGrayRootsTracer(JSGrayRootsTracer traceOp, void* data) {
  AssertHeapIsIdle();
  grayRootTracer.ref() = {traceOp, data};
}

void GCRuntime::clearBlackAndGrayRootTracers() {
  MOZ_ASSERT(rt->isBeingDestroyed());
  blackRootTracers.ref().clear();
  setGrayRootsTracer(nullptr, nullptr);
}

void GCRuntime::setGCCallback(JSGCCallback callback, void* data) {
  gcCallback.ref() = {callback, data};
}

void GCRuntime::callGCCallback(JSGCStatus status, JS::GCReason reason) const {
  const auto& callback = gcCallback.ref();
  MOZ_ASSERT(callback.op);
  callback.op(rt->mainContextFromOwnThread(), status, reason, callback.data);
}

void GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
                                          void* data) {
  tenuredCallback.ref() = {callback, data};
}

void GCRuntime::callObjectsTenuredCallback() {
  JS::AutoSuppressGCAnalysis nogc;
  const auto& callback = tenuredCallback.ref();
  if (callback.op) {
    callback.op(&mainThreadContext.ref(), callback.data);
  }
}

bool GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data) {
  return finalizeCallbacks.ref().append(
      Callback<JSFinalizeCallback>(callback, data));
}

void GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback) {
  MOZ_ALWAYS_TRUE(EraseCallback(finalizeCallbacks.ref(), callback));
}

void GCRuntime::callFinalizeCallbacks(JS::GCContext* gcx,
                                      JSFinalizeStatus status) const {
  for (const auto& p : finalizeCallbacks.ref()) {
    p.op(gcx, status, p.data);
  }
}

void GCRuntime::setHostCleanupFinalizationRegistryCallback(
    JSHostCleanupFinalizationRegistryCallback callback, void* data) {
  hostCleanupFinalizationRegistryCallback.ref() = {callback, data};
}

void GCRuntime::callHostCleanupFinalizationRegistryCallback(
    JSFunction* doCleanup, JSObject* hostDefinedData) {
  JS::AutoSuppressGCAnalysis nogc;
  const auto& callback = hostCleanupFinalizationRegistryCallback.ref();
  if (callback.op) {
    callback.op(doCleanup, hostDefinedData, callback.data);
  }
}

bool GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback,
                                            void* data) {
  return updateWeakPointerZonesCallbacks.ref().append(
      Callback<JSWeakPointerZonesCallback>(callback, data));
}

void GCRuntime::removeWeakPointerZonesCallback(
    JSWeakPointerZonesCallback callback) {
  MOZ_ALWAYS_TRUE(
      EraseCallback(updateWeakPointerZonesCallbacks.ref(), callback));
}

void GCRuntime::callWeakPointerZonesCallbacks(JSTracer* trc) const {
  for (auto const& p : updateWeakPointerZonesCallbacks.ref()) {
    p.op(trc, p.data);
  }
}

bool GCRuntime::addWeakPointerCompartmentCallback(
    JSWeakPointerCompartmentCallback callback, void* data) {
  return updateWeakPointerCompartmentCallbacks.ref().append(
      Callback<JSWeakPointerCompartmentCallback>(callback, data));
}

void GCRuntime::removeWeakPointerCompartmentCallback(
    JSWeakPointerCompartmentCallback callback) {
  MOZ_ALWAYS_TRUE(
      EraseCallback(updateWeakPointerCompartmentCallbacks.ref(), callback));
}

void GCRuntime::callWeakPointerCompartmentCallbacks(
    JSTracer* trc, JS::Compartment* comp) const {
  for (auto const& p : updateWeakPointerCompartmentCallbacks.ref()) {
    p.op(trc, comp, p.data);
  }
}

JS::GCSliceCallback GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
  return stats().setSliceCallback(callback);
}

bool GCRuntime::addNurseryCollectionCallback(
    JS::GCNurseryCollectionCallback callback, void* data) {
  return nurseryCollectionCallbacks.ref().append(
      Callback<JS::GCNurseryCollectionCallback>(callback, data));
}

void GCRuntime::removeNurseryCollectionCallback(
    JS::GCNurseryCollectionCallback callback, void* data) {
  MOZ_ALWAYS_TRUE(
      EraseCallback(nurseryCollectionCallbacks.ref(), callback, data));
}

void GCRuntime::callNurseryCollectionCallbacks(JS::GCNurseryProgress progress,
                                               JS::GCReason reason) {
  for (auto const& p : nurseryCollectionCallbacks.ref()) {
    p.op(rt->mainContextFromOwnThread(), progress, reason, p.data);
  }
}

JS::DoCycleCollectionCallback GCRuntime::setDoCycleCollectionCallback(
    JS::DoCycleCollectionCallback callback) {
  const auto prior = gcDoCycleCollectionCallback.ref();
  gcDoCycleCollectionCallback.ref() = {callback, nullptr};
  return prior.op;
}

void GCRuntime::callDoCycleCollectionCallback(JSContext* cx) {
  const auto& callback = gcDoCycleCollectionCallback.ref();
  if (callback.op) {
    callback.op(cx);
  }
}

bool GCRuntime::addRoot(Value* vp, const char* name) {
  /*
   * Sometimes Firefox will hold weak references to objects and then convert
   * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
   * or ModifyBusyCount in workers). We need a read barrier to cover these
   * cases.
   */
  MOZ_ASSERT(vp);
  Value value = *vp;
  if (value.isGCThing()) {
    ValuePreWriteBarrier(value);
  }

  return rootsHash.ref().put(vp, name);
}

void GCRuntime::removeRoot(Value* vp) {
  rootsHash.ref().remove(vp);
  notifyRootsRemoved();
}

/* Compacting GC */

bool js::gc::IsCurrentlyAnimating(const TimeStamp& lastAnimationTime,
                                  const TimeStamp& currentTime) {
  // Assume that we're currently animating if js::NotifyAnimationActivity has
  // been called in the last second.
  static const auto oneSecond = TimeDuration::FromSeconds(1);
  return !lastAnimationTime.IsNull() &&
         currentTime < (lastAnimationTime + oneSecond);
}

static bool DiscardedCodeRecently(Zone* zone, const TimeStamp& currentTime) {
  static const auto thirtySeconds = TimeDuration::FromSeconds(30);
  return !zone->lastDiscardedCodeTime().IsNull() &&
         currentTime < (zone->lastDiscardedCodeTime() + thirtySeconds);
}

bool GCRuntime::shouldCompact() {
  // Compact on shrinking GC if enabled. Skip compacting in incremental GCs
  // if we are currently animating, unless the user is inactive or we're
  // responding to memory pressure.

  if (!isShrinkingGC() || !isCompactingGCEnabled()) {
    return false;
  }

  if (initialReason == JS::GCReason::USER_INACTIVE ||
      initialReason == JS::GCReason::MEM_PRESSURE) {
    return true;
  }

  return !isIncremental ||
         !IsCurrentlyAnimating(rt->lastAnimationTime, TimeStamp::Now());
}

bool GCRuntime::isCompactingGCEnabled() const {
  return compactingEnabled &&
         rt->mainContextFromOwnThread()->compactingDisabledCount == 0;
}

JS_PUBLIC_API void JS::SetCreateGCSliceBudgetCallback(
    JSContext* cx, JS::CreateSliceBudgetCallback cb) {
  cx->runtime()->gc.createBudgetCallback = cb;
}

void TimeBudget::setDeadlineFromNow() { deadline = TimeStamp::Now() + budget; }

SliceBudget::SliceBudget(TimeBudget time, InterruptRequestFlag* interrupt)
    : counter(StepsPerExpensiveCheck),
      interruptRequested(interrupt),
      budget(TimeBudget(time)) {
  budget.as<TimeBudget>().setDeadlineFromNow();
}

SliceBudget::SliceBudget(WorkBudget work)
    : counter(work.budget), interruptRequested(nullptr), budget(work) {}

int SliceBudget::describe(char* buffer, size_t maxlen) const {
  if (isUnlimited()) {
    return snprintf(buffer, maxlen, "unlimited");
  }

  const char* nonstop = "";
  if (keepGoing) {
    nonstop = "nonstop ";
  }

  if (isWorkBudget()) {
    return snprintf(buffer, maxlen, "%swork(%" PRId64 ")", nonstop,
                    workBudget());
  }

  const char* interruptStr = "";
  if (interruptRequested) {
    interruptStr = interrupted ? "INTERRUPTED " : "interruptible ";
  }
  const char* extra = "";
  if (idle) {
    extra = extended ? " (started idle but extended)" : " (idle)";
  }
  return snprintf(buffer, maxlen, "%s%s%" PRId64 "ms%s", nonstop, interruptStr,
                  timeBudget(), extra);
}

bool SliceBudget::checkOverBudget() {
  MOZ_ASSERT(counter <= 0);
  MOZ_ASSERT(!isUnlimited());

  if (isWorkBudget()) {
    return true;
  }

  if (interruptRequested && *interruptRequested) {
    interrupted = true;
  }

  if (interrupted) {
    return true;
  }

  if (TimeStamp::Now() >= budget.as<TimeBudget>().deadline) {
    return true;
  }

  counter = StepsPerExpensiveCheck;
  return false;
}

void GCRuntime::requestMajorGC(JS::GCReason reason) {
  MOZ_ASSERT_IF(reason != JS::GCReason::BG_TASK_FINISHED,
                !CurrentThreadIsPerformingGC());

  if (majorGCRequested()) {
    return;
  }

  majorGCTriggerReason = reason;
  rt->mainContextFromAnyThread()->requestInterrupt(InterruptReason::MajorGC);
}

bool GCRuntime::triggerGC(JS::GCReason reason) {
  /*
   * Don't trigger GCs if this is being called off the main thread from
   * onTooMuchMalloc().
   */
  if (!CurrentThreadCanAccessRuntime(rt)) {
    return false;
  }

  /* GC is already running. */
  if (JS::RuntimeHeapIsCollecting()) {
    return false;
  }

  JS::PrepareForFullGC(rt->mainContextFromOwnThread());
  requestMajorGC(reason);
  return true;
}

void GCRuntime::maybeTriggerGCAfterAlloc(Zone* zone) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  MOZ_ASSERT(!JS::RuntimeHeapIsCollecting());

  TriggerResult trigger =
      checkHeapThreshold(zone, zone->gcHeapSize, zone->gcHeapThreshold);

  if (trigger.shouldTrigger) {
    // Start or continue an in-progress incremental GC. We do this to try to
    // avoid performing non-incremental GCs on zones which allocate a lot of
    // data, even when incremental slices can't be triggered via scheduling in
    // the event loop.
    triggerZoneGC(zone, JS::GCReason::ALLOC_TRIGGER, trigger.usedBytes,
                  trigger.thresholdBytes);
  }
}

void js::gc::MaybeMallocTriggerZoneGC(JSRuntime* rt, ZoneAllocator* zoneAlloc,
                                      const HeapSize& heap,
                                      const HeapThreshold& threshold,
                                      JS::GCReason reason) {
  rt->gc.maybeTriggerGCAfterMalloc(Zone::from(zoneAlloc), heap, threshold,
                                   reason);
}

void GCRuntime::maybeTriggerGCAfterMalloc(Zone* zone) {
  if (maybeTriggerGCAfterMalloc(zone, zone->mallocHeapSize,
                                zone->mallocHeapThreshold,
                                JS::GCReason::TOO_MUCH_MALLOC)) {
    return;
  }

  maybeTriggerGCAfterMalloc(zone, zone->jitHeapSize, zone->jitHeapThreshold,
                            JS::GCReason::TOO_MUCH_JIT_CODE);
}

bool GCRuntime::maybeTriggerGCAfterMalloc(Zone* zone, const HeapSize& heap,
                                          const HeapThreshold& threshold,
                                          JS::GCReason reason) {
  // Ignore malloc during sweeping, for example when we resize hash tables.
  if (heapState() != JS::HeapState::Idle) {
    return false;
  }

  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  TriggerResult trigger = checkHeapThreshold(zone, heap, threshold);
  if (!trigger.shouldTrigger) {
    return false;
  }

  // Trigger a zone GC. budgetIncrementalGC() will work out whether to do an
  // incremental or non-incremental collection.
  triggerZoneGC(zone, reason, trigger.usedBytes, trigger.thresholdBytes);
  return true;
}

TriggerResult GCRuntime::checkHeapThreshold(
    Zone* zone, const HeapSize& heapSize, const HeapThreshold& heapThreshold) {
  MOZ_ASSERT_IF(heapThreshold.hasSliceThreshold(), zone->wasGCStarted());

  size_t usedBytes = heapSize.bytes();
  size_t thresholdBytes = heapThreshold.hasSliceThreshold()
                              ? heapThreshold.sliceBytes()
                              : heapThreshold.startBytes();

  // The incremental limit will be checked if we trigger a GC slice.
  MOZ_ASSERT(thresholdBytes <= heapThreshold.incrementalLimitBytes());

  return TriggerResult{usedBytes >= thresholdBytes, usedBytes, thresholdBytes};
}

bool GCRuntime::triggerZoneGC(Zone* zone, JS::GCReason reason, size_t used,
                              size_t threshold) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  /* GC is already running. */
  if (JS::RuntimeHeapIsBusy()) {
    return false;
  }

#ifdef JS_GC_ZEAL
  if (hasZealMode(ZealMode::Alloc)) {
    MOZ_RELEASE_ASSERT(triggerGC(reason));
    return true;
  }
#endif

  if (zone->isAtomsZone()) {
    stats().recordTrigger(used, threshold);
    MOZ_RELEASE_ASSERT(triggerGC(reason));
    return true;
  }

  stats().recordTrigger(used, threshold);
  zone->scheduleGC();
  requestMajorGC(reason);
  return true;
}

void GCRuntime::maybeGC() {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

#ifdef JS_GC_ZEAL
  if (hasZealMode(ZealMode::Alloc) || hasZealMode(ZealMode::RootsChange)) {
    JS::PrepareForFullGC(rt->mainContextFromOwnThread());
    gc(JS::GCOptions::Normal, JS::GCReason::DEBUG_GC);
    return;
  }
#endif

  (void)gcIfRequestedImpl(/* eagerOk = */ true);
}

JS::GCReason GCRuntime::wantMajorGC(bool eagerOk) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

  if (majorGCRequested()) {
    return majorGCTriggerReason;
  }

  if (isIncrementalGCInProgress() || !eagerOk) {
    return JS::GCReason::NO_REASON;
  }

  JS::GCReason reason = JS::GCReason::NO_REASON;
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    if (checkEagerAllocTrigger(zone->gcHeapSize, zone->gcHeapThreshold) ||
        checkEagerAllocTrigger(zone->mallocHeapSize,
                               zone->mallocHeapThreshold)) {
      zone->scheduleGC();
      reason = JS::GCReason::EAGER_ALLOC_TRIGGER;
    }
  }

  return reason;
}

bool GCRuntime::checkEagerAllocTrigger(const HeapSize& size,
                                       const HeapThreshold& threshold) {
  size_t thresholdBytes =
      threshold.eagerAllocTrigger(schedulingState.inHighFrequencyGCMode());
  size_t usedBytes = size.bytes();
  if (usedBytes <= 1024 * 1024 || usedBytes < thresholdBytes) {
    return false;
  }

  stats().recordTrigger(usedBytes, thresholdBytes);
  return true;
}

bool GCRuntime::shouldDecommit() const {
  switch (gcOptions()) {
    case JS::GCOptions::Normal:
      // If we are allocating heavily enough to trigger "high frequency" GC then
      // skip decommit so that we do not compete with the mutator.
      return !schedulingState.inHighFrequencyGCMode();
    case JS::GCOptions::Shrink:
      // If we're doing a shrinking GC we always decommit to release as much
      // memory as possible.
      return true;
    case JS::GCOptions::Shutdown:
      // There's no point decommitting as we are about to free everything.
      return false;
  }

  MOZ_CRASH("Unexpected GCOptions value");
}

void GCRuntime::startDecommit() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::DECOMMIT);

#ifdef DEBUG
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
  MOZ_ASSERT(decommitTask.isIdle());

  {
    AutoLockGC lock(this);
    MOZ_ASSERT(fullChunks(lock).verify());
    MOZ_ASSERT(availableChunks(lock).verify());
    MOZ_ASSERT(emptyChunks(lock).verify());

    // Verify that all entries in the empty chunks pool are unused.
    for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done();
         chunk.next()) {
      MOZ_ASSERT(chunk->isEmpty());
    }
  }
#endif

  if (!shouldDecommit()) {
    return;
  }

  {
    AutoLockGC lock(this);
    if (availableChunks(lock).empty() && !tooManyEmptyChunks(lock) &&
        emptyChunks(lock).empty()) {
      return;  // Nothing to do.
    }
  }

#ifdef DEBUG
  {
    AutoLockHelperThreadState lock;
    MOZ_ASSERT(!requestSliceAfterBackgroundTask);
  }
#endif

  if (useBackgroundThreads) {
    decommitTask.start();
    return;
  }

  decommitTask.runFromMainThread();
}

BackgroundDecommitTask::BackgroundDecommitTask(GCRuntime* gc)
    : GCParallelTask(gc, gcstats::PhaseKind::DECOMMIT) {}

void js::gc::BackgroundDecommitTask::run(AutoLockHelperThreadState& lock) {
  {
    AutoUnlockHelperThreadState unlock(lock);

    ChunkPool emptyChunksToFree;
    {
      AutoLockGC gcLock(gc);
      emptyChunksToFree = gc->expireEmptyChunkPool(gcLock);
    }

    FreeChunkPool(emptyChunksToFree);

    {
      AutoLockGC gcLock(gc);

      // To help minimize the total number of chunks needed over time, sort the
      // available chunks list so that we allocate into more-used chunks first.
      gc->availableChunks(gcLock).sort();

      if (DecommitEnabled()) {
        gc->decommitEmptyChunks(cancel_, gcLock);
        gc->decommitFreeArenas(cancel_, gcLock);
      }
    }
  }

  gc->maybeRequestGCAfterBackgroundTask(lock);
}

static inline bool CanDecommitWholeChunk(ArenaChunk* chunk) {
  return chunk->isEmpty() && chunk->info.numArenasFreeCommitted != 0;
}

// Called from a background thread to decommit free arenas. Releases the GC
// lock.
void GCRuntime::decommitEmptyChunks(const bool& cancel, AutoLockGC& lock) {
  Vector<ArenaChunk*, 0, SystemAllocPolicy> chunksToDecommit;
  for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done(); chunk.next()) {
    if (CanDecommitWholeChunk(chunk) && !chunksToDecommit.append(chunk)) {
      onOutOfMallocMemory(lock);
      return;
    }
  }

  for (ArenaChunk* chunk : chunksToDecommit) {
    if (cancel) {
      break;
    }

    // Check whether something used the chunk while the lock was released.
    if (!CanDecommitWholeChunk(chunk)) {
      continue;
    }

    // Temporarily remove the chunk while decommitting its memory so that the
    // mutator doesn't start allocating from it when we drop the lock.
    emptyChunks(lock).remove(chunk);

    {
      AutoUnlockGC unlock(lock);
      chunk->decommitAllArenas();
      MOZ_ASSERT(chunk->info.numArenasFreeCommitted == 0);
    }

    emptyChunks(lock).push(chunk);
  }
}

// Called from a background thread to decommit free arenas. Releases the GC
// lock.
void GCRuntime::decommitFreeArenas(const bool& cancel, AutoLockGC& lock) {
  MOZ_ASSERT(DecommitEnabled());

  // Since we release the GC lock while doing the decommit syscall below,
  // it is dangerous to iterate the available list directly, as the active
  // thread could modify it concurrently. Instead, we build and pass an
  // explicit Vector containing the Chunks we want to visit.
  Vector<ArenaChunk*, 0, SystemAllocPolicy> chunksToDecommit;
  for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done();
       chunk.next()) {
    if (chunk->info.numArenasFreeCommitted != 0 &&
        !chunksToDecommit.append(chunk)) {
      onOutOfMallocMemory(lock);
      return;
    }
  }

  for (ArenaChunk* chunk : chunksToDecommit) {
    MOZ_ASSERT(chunk->getKind() == ChunkKind::TenuredArenas);
    MOZ_ASSERT(!chunk->isEmpty());

    if (chunk->info.isCurrentChunk) {
      // Chunk has become the current chunk while the lock was released.
      continue;
    }

    if (!chunk->hasAvailableArenas()) {
      // Chunk has become full while the lock was released.
      continue;
    }

    MOZ_ASSERT(availableChunks(lock).contains(chunk));
    chunk->decommitFreeArenas(this, cancel, lock);
  }
}

// Do all possible decommit immediately from the current thread without
// releasing the GC lock or allocating any memory.
void GCRuntime::decommitFreeArenasWithoutUnlocking(const AutoLockGC& lock) {
  MOZ_ASSERT(DecommitEnabled());
  for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done();
       chunk.next()) {
    chunk->decommitFreeArenasWithoutUnlocking(lock);
  }
  MOZ_ASSERT(availableChunks(lock).verify());
}

void GCRuntime::maybeRequestGCAfterBackgroundTask(
    const AutoLockHelperThreadState& lock) {
  if (requestSliceAfterBackgroundTask) {
    // Request a slice. The main thread may continue the collection immediately
    // or it may yield to let the embedding schedule a slice.
    requestSliceAfterBackgroundTask = false;
    requestMajorGC(JS::GCReason::BG_TASK_FINISHED);
  }
}

void GCRuntime::cancelRequestedGCAfterBackgroundTask() {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

#ifdef DEBUG
  {
    AutoLockHelperThreadState lock;
    MOZ_ASSERT(!requestSliceAfterBackgroundTask);
  }
#endif

  majorGCTriggerReason.compareExchange(JS::GCReason::BG_TASK_FINISHED,
                                       JS::GCReason::NO_REASON);
}

bool GCRuntime::isWaitingOnBackgroundTask() const {
  AutoLockHelperThreadState lock;
  return requestSliceAfterBackgroundTask;
}

void GCRuntime::queueUnusedLifoBlocksForFree(LifoAlloc* lifo) {
  MOZ_ASSERT(JS::RuntimeHeapIsBusy());
  AutoLockHelperThreadState lock;
  lifoBlocksToFree.ref().transferUnusedFrom(lifo);
}

void GCRuntime::queueAllLifoBlocksForFreeAfterMinorGC(LifoAlloc* lifo) {
  lifoBlocksToFreeAfterFullMinorGC.ref().transferFrom(lifo);
}

void GCRuntime::queueBuffersForFreeAfterMinorGC(
    Nursery::BufferSet& buffers, Nursery::StringBufferVector& stringBuffers) {
  AutoLockHelperThreadState lock;

  if (!buffersToFreeAfterMinorGC.ref().empty() ||
      !stringBuffersToReleaseAfterMinorGC.ref().empty()) {
    // In the rare case that this hasn't processed the buffers from a previous
    // minor GC we have to wait here.
    MOZ_ASSERT(!freeTask.isIdle(lock));
    freeTask.joinWithLockHeld(lock);
  }

  MOZ_ASSERT(buffersToFreeAfterMinorGC.ref().empty());
  std::swap(buffersToFreeAfterMinorGC.ref(), buffers);

  MOZ_ASSERT(stringBuffersToReleaseAfterMinorGC.ref().empty());
  std::swap(stringBuffersToReleaseAfterMinorGC.ref(), stringBuffers);
}

void Realm::destroy(JS::GCContext* gcx) {
  JSRuntime* rt = gcx->runtime();
  if (auto callback = rt->destroyRealmCallback) {
    callback(gcx, this);
  }
  if (principals()) {
    JS_DropPrincipals(rt->mainContextFromOwnThread(), principals());
  }
  // Bug 1560019: Malloc memory associated with a zone but not with a specific
  // GC thing is not currently tracked.
  gcx->deleteUntracked(this);
}

void Compartment::destroy(JS::GCContext* gcx) {
  JSRuntime* rt = gcx->runtime();
  if (auto callback = rt->destroyCompartmentCallback) {
    callback(gcx, this);
  }
  // Bug 1560019: Malloc memory associated with a zone but not with a specific
  // GC thing is not currently tracked.
  gcx->deleteUntracked(this);
  rt->gc.stats().sweptCompartment();
}

void Zone::destroy(JS::GCContext* gcx) {
  MOZ_ASSERT(compartments().empty());
  JSRuntime* rt = gcx->runtime();
  if (auto callback = rt->destroyZoneCallback) {
    callback(gcx, this);
  }
  // Bug 1560019: Malloc memory associated with a zone but not with a specific
  // GC thing is not currently tracked.
  gcx->deleteUntracked(this);
  gcx->runtime()->gc.stats().sweptZone();
}

/*
 * It's simpler if we preserve the invariant that every zone (except atoms
 * zones) has at least one compartment, and every compartment has at least one
 * realm. If we know we're deleting the entire zone, then sweepCompartments is
 * allowed to delete all compartments. In this case, |keepAtleastOne| is false.
 * If any cells remain alive in the zone, set |keepAtleastOne| true to prohibit
 * sweepCompartments from deleting every compartment. Instead, it preserves an
 * arbitrary compartment in the zone.
 */
void Zone::sweepCompartments(JS::GCContext* gcx, bool keepAtleastOne,
                             bool destroyingRuntime) {
  MOZ_ASSERT_IF(!isAtomsZone(), !compartments().empty());
  MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);

  Compartment** read = compartments().begin();
  Compartment** end = compartments().end();
  Compartment** write = read;
  while (read < end) {
    Compartment* comp = *read++;

    /*
     * Don't delete the last compartment and realm if keepAtleastOne is
     * still true, meaning all the other compartments were deleted.
     */
    bool keepAtleastOneRealm = read == end && keepAtleastOne;
    comp->sweepRealms(gcx, keepAtleastOneRealm, destroyingRuntime);

    if (!comp->realms().empty()) {
      *write++ = comp;
      keepAtleastOne = false;
    } else {
      comp->destroy(gcx);
    }
  }
  compartments().shrinkTo(write - compartments().begin());
  MOZ_ASSERT_IF(keepAtleastOne, !compartments().empty());
  MOZ_ASSERT_IF(destroyingRuntime, compartments().empty());
}

void Compartment::sweepRealms(JS::GCContext* gcx, bool keepAtleastOne,
                              bool destroyingRuntime) {
  MOZ_ASSERT(!realms().empty());
  MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);

  Realm** read = realms().begin();
  Realm** end = realms().end();
  Realm** write = read;
  while (read < end) {
    Realm* realm = *read++;

    /*
     * Don't delete the last realm if keepAtleastOne is still true, meaning
     * all the other realms were deleted.
     */
    bool dontDelete = read == end && keepAtleastOne;
    if ((realm->marked() || dontDelete) && !destroyingRuntime) {
      *write++ = realm;
      keepAtleastOne = false;
    } else {
      realm->destroy(gcx);
    }
  }
  realms().shrinkTo(write - realms().begin());
  MOZ_ASSERT_IF(keepAtleastOne, !realms().empty());
  MOZ_ASSERT_IF(destroyingRuntime, realms().empty());
}

void GCRuntime::sweepZones(JS::GCContext* gcx, bool destroyingRuntime) {
  MOZ_ASSERT_IF(destroyingRuntime, numActiveZoneIters == 0);
  MOZ_ASSERT(foregroundFinalizedArenas.ref().isNothing());

  if (numActiveZoneIters) {
    return;
  }

  assertBackgroundSweepingFinished();

  // Host destroy callbacks can access the store buffer, e.g. when resizing hash
  // tables containing nursery pointers.
  AutoLockStoreBuffer lock(rt);

  // Sweep zones following the atoms zone.
  MOZ_ASSERT(zones()[0]->isAtomsZone());
  Zone** read = zones().begin() + 1;
  Zone** end = zones().end();
  Zone** write = read;

  while (read < end) {
    Zone* zone = *read++;

    if (zone->wasGCStarted()) {
      MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
      AutoSetThreadIsSweeping threadIsSweeping(zone);
      const bool zoneIsDead = zone->arenas.arenaListsAreEmpty() &&
                              zone->bufferAllocator.isEmpty() &&
                              !zone->hasMarkedRealms();
      MOZ_ASSERT_IF(destroyingRuntime, zoneIsDead);
      if (zoneIsDead) {
        zone->arenas.checkEmptyFreeLists();
        zone->sweepCompartments(gcx, false, destroyingRuntime);
        MOZ_ASSERT(zone->compartments().empty());
        zone->destroy(gcx);
        continue;
      }
      zone->sweepCompartments(gcx, true, destroyingRuntime);
    }
    *write++ = zone;
  }
  zones().shrinkTo(write - zones().begin());
}

void ArenaLists::checkEmptyArenaList(AllocKind kind) {
  MOZ_ASSERT(arenaList(kind).isEmpty());
}

void GCRuntime::purgeRuntimeForMinorGC() {
  for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
    zone->externalStringCache().purge();
    zone->functionToStringCache().purge();
  }
}

void GCRuntime::purgeRuntime() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE);

  for (GCRealmsIter realm(rt); !realm.done(); realm.next()) {
    realm->purge();
  }

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->purgeAtomCache();
    zone->externalStringCache().purge();
    zone->functionToStringCache().purge();
    zone->boundPrefixCache().clearAndCompact();
    zone->shapeZone().purgeShapeCaches(rt->gcContext());
  }

  JSContext* cx = rt->mainContextFromOwnThread();
  queueUnusedLifoBlocksForFree(&cx->tempLifoAlloc());
  cx->interpreterStack().purge(rt);
  cx->frontendCollectionPool().purge();

  rt->caches().purge();

  if (rt->isMainRuntime()) {
    SharedImmutableStringsCache::getSingleton().purge();
  }

  MOZ_ASSERT(marker().unmarkGrayStack.empty());
  marker().unmarkGrayStack.clearAndFree();
}

bool GCRuntime::shouldPreserveJITCode(Realm* realm,
                                      const TimeStamp& currentTime,
                                      JS::GCReason reason,
                                      bool canAllocateMoreCode,
                                      bool isActiveCompartment) {
  // During shutdown, we must clean everything up, for the sake of leak
  // detection.
  if (isShutdownGC()) {
    return false;
  }

  // A shrinking GC is trying to clear out as much as it can, and so we should
  // not preserve JIT code here!
  if (isShrinkingGC()) {
    return false;
  }

  // We are close to our allocatable code limit, so let's try to clean it out.
  if (!canAllocateMoreCode) {
    return false;
  }

  // The topmost frame of JIT code is in this compartment, and so we should
  // try to preserve this zone's code.
  if (isActiveCompartment) {
    return true;
  }

  // The gcPreserveJitCode testing function was used.
  if (alwaysPreserveCode) {
    return true;
  }

  // This realm explicitly requested we try to preserve its JIT code.
  if (realm->preserveJitCode()) {
    return true;
  }

  // If we're currently animating, and we've already discarded code recently,
  // we can preserve JIT code; however we shouldn't hold onto JIT code forever
  // during animation.
  if (IsCurrentlyAnimating(realm->lastAnimationTime, currentTime) &&
      DiscardedCodeRecently(realm->zone(), currentTime)) {
    return true;
  }

  // GC invoked via a testing function.
  if (reason == JS::GCReason::DEBUG_GC) {
    return true;
  }

  return false;
}

#ifdef DEBUG
class CompartmentCheckTracer final : public JS::CallbackTracer {
  void onChild(JS::GCCellPtr thing, const char* name) override;
  bool edgeIsInCrossCompartmentMap(JS::GCCellPtr dst);

 public:
  explicit CompartmentCheckTracer(JSRuntime* rt)
      : JS::CallbackTracer(rt, JS::TracerKind::CompartmentCheck,
                           JS::WeakEdgeTraceAction::Skip) {}

  Cell* src = nullptr;
  JS::TraceKind srcKind = JS::TraceKind::Null;
  Zone* zone = nullptr;
  Compartment* compartment = nullptr;
};

static bool InCrossCompartmentMap(JSRuntime* rt, JSObject* src,
                                  JS::GCCellPtr dst) {
  // Cross compartment edges are either in the cross compartment map or in a
  // debugger weakmap.

  Compartment* srccomp = src->compartment();

  if (dst.is<JSObject>()) {
    if (ObjectWrapperMap::Ptr p = srccomp->lookupWrapper(&dst.as<JSObject>())) {
      if (*p->value().unsafeGet() == src) {
        return true;
      }
    }
  }

  if (DebugAPI::edgeIsInDebuggerWeakmap(rt, src, dst)) {
    return true;
  }

  return false;
}

void CompartmentCheckTracer::onChild(JS::GCCellPtr thing, const char* name) {
  Compartment* comp =
      MapGCThingTyped(thing, [](auto t) { return t->maybeCompartment(); });
  if (comp && compartment) {
    MOZ_ASSERT(comp == compartment || edgeIsInCrossCompartmentMap(thing));
  } else {
    TenuredCell* tenured = &thing.asCell()->asTenured();
    Zone* thingZone = tenured->zoneFromAnyThread();
    MOZ_ASSERT(thingZone == zone || thingZone->isAtomsZone());
  }
}

bool CompartmentCheckTracer::edgeIsInCrossCompartmentMap(JS::GCCellPtr dst) {
  return srcKind == JS::TraceKind::Object &&
         InCrossCompartmentMap(runtime(), static_cast<JSObject*>(src), dst);
}

void GCRuntime::checkForCompartmentMismatches() {
  JSContext* cx = rt->mainContextFromOwnThread();
  if (cx->disableStrictProxyCheckingCount) {
    return;
  }

  CompartmentCheckTracer trc(rt);
  AutoAssertEmptyNursery empty(cx);
  for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
    trc.zone = zone;
    for (auto thingKind : AllAllocKinds()) {
      for (auto i = zone->cellIterUnsafe<TenuredCell>(thingKind, empty);
           !i.done(); i.next()) {
        trc.src = i.getCell();
        trc.srcKind = MapAllocToTraceKind(thingKind);
        trc.compartment = MapGCThingTyped(
            trc.src, trc.srcKind, [](auto t) { return t->maybeCompartment(); });
        JS::TraceChildren(&trc, JS::GCCellPtr(trc.src, trc.srcKind));
      }
    }
  }
}
#endif

static bool ShouldUseBackgroundThreads(bool isIncremental,
                                       JS::GCReason reason) {
  bool shouldUse = isIncremental && CanUseExtraThreads();
  MOZ_ASSERT_IF(reason == JS::GCReason::DESTROY_RUNTIME, !shouldUse);
  return shouldUse;
}

void GCRuntime::startCollection(JS::GCReason reason) {
  checkGCStateNotInUse();
  MOZ_ASSERT_IF(
      isShuttingDown(),
      isShutdownGC() ||
          reason == JS::GCReason::XPCONNECT_SHUTDOWN /* Bug 1650075 */);

  initialReason = reason;
  isCompacting = shouldCompact();
  rootsRemoved = false;
  sweepGroupIndex = 0;

#ifdef DEBUG
  if (isShutdownGC()) {
    hadShutdownGC = true;
  }

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    zone->gcSweepGroupIndex = 0;
  }
#endif
}

static void RelazifyFunctions(Zone* zone, AllocKind kind) {
  MOZ_ASSERT(kind == AllocKind::FUNCTION ||
             kind == AllocKind::FUNCTION_EXTENDED);

  JSRuntime* rt = zone->runtimeFromMainThread();
  AutoAssertEmptyNursery empty(rt->mainContextFromOwnThread());

  for (auto i = zone->cellIterUnsafe<JSObject>(kind, empty); !i.done();
       i.next()) {
    JSFunction* fun = &i->as<JSFunction>();
    // When iterating over the GC-heap, we may encounter function objects that
    // are incomplete (missing a BaseScript when we expect one). We must check
    // for this case before we can call JSFunction::hasBytecode().
    if (fun->isIncomplete()) {
      continue;
    }
    if (fun->hasBytecode()) {
      fun->maybeRelazify(rt);
    }
  }
}

static bool ShouldCollectZone(Zone* zone, JS::GCReason reason) {
  // If we are repeating a GC because we noticed dead compartments haven't
  // been collected, then only collect zones containing those compartments.
  if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
      if (comp->gcState.scheduledForDestruction) {
        return true;
      }
    }

    return false;
  }

  // Otherwise we only collect scheduled zones.
  return zone->isGCScheduled();
}

bool GCRuntime::prepareZonesForCollection(JS::GCReason reason,
                                          bool* isFullOut) {
#ifdef DEBUG
  /* Assert that zone state is as we expect. */
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    MOZ_ASSERT(!zone->isCollecting());
    MOZ_ASSERT_IF(!zone->isAtomsZone(), !zone->compartments().empty());
    for (auto i : AllAllocKinds()) {
      MOZ_ASSERT(zone->arenas.collectingArenaList(i).isEmpty());
    }
  }
#endif

  *isFullOut = true;
  bool any = false;

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    /* Set up which zones will be collected. */
    bool shouldCollect = ShouldCollectZone(zone, reason);
    if (shouldCollect) {
      any = true;
      zone->changeGCState(Zone::NoGC, Zone::Prepare);
    } else {
      *isFullOut = false;
    }

    zone->setWasCollected(shouldCollect);
  }

  /* Check that at least one zone is scheduled for collection. */
  return any;
}

// Update JIT code state for GC: a few different actions are combined here to
// minimize the number of iterations over zones and scripts that are required.
void GCRuntime::maybeDiscardJitCodeForGC() {
  size_t nurserySiteResetCount = 0;
  size_t pretenuredSiteResetCount = 0;

  js::CancelOffThreadCompile(rt, JS::Zone::Prepare);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK_DISCARD_CODE);

    // We may need to reset allocation sites and discard JIT code to recover if
    // we find object lifetimes have changed.
    PretenuringZone& pz = zone->pretenuring;
    bool resetNurserySites = pz.shouldResetNurseryAllocSites();
    bool resetPretenuredSites = pz.shouldResetPretenuredAllocSites();

    if (!zone->isPreservingCode()) {
      Zone::JitDiscardOptions options;
      options.discardJitScripts = true;
      options.resetNurseryAllocSites = resetNurserySites;
      options.resetPretenuredAllocSites = resetPretenuredSites;
      zone->forceDiscardJitCode(rt->gcContext(), options);
    } else if (resetNurserySites || resetPretenuredSites) {
      zone->resetAllocSitesAndInvalidate(resetNurserySites,
                                         resetPretenuredSites);
    }

    if (resetNurserySites) {
      nurserySiteResetCount++;
    }
    if (resetPretenuredSites) {
      pretenuredSiteResetCount++;
    }
  }

  if (nursery().reportPretenuring()) {
    if (nurserySiteResetCount) {
      fprintf(
          stderr,
          "GC reset nursery alloc sites and invalidated code in %zu zones\n",
          nurserySiteResetCount);
    }
    if (pretenuredSiteResetCount) {
      fprintf(
          stderr,
          "GC reset pretenured alloc sites and invalidated code in %zu zones\n",
          pretenuredSiteResetCount);
    }
  }
}

void GCRuntime::relazifyFunctionsForShrinkingGC() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::RELAZIFY_FUNCTIONS);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    RelazifyFunctions(zone, AllocKind::FUNCTION);
    RelazifyFunctions(zone, AllocKind::FUNCTION_EXTENDED);
  }
}

void GCRuntime::purgePropMapTablesForShrinkingGC() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE_PROP_MAP_TABLES);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    if (!canRelocateZone(zone) || zone->keepPropMapTables()) {
      continue;
    }

    // Note: CompactPropMaps never have a table.
    for (auto map = zone->cellIterUnsafe<NormalPropMap>(); !map.done();
         map.next()) {
      if (map->asLinked()->hasTable()) {
        map->asLinked()->purgeTable(rt->gcContext());
      }
    }
    for (auto map = zone->cellIterUnsafe<DictionaryPropMap>(); !map.done();
         map.next()) {
      if (map->asLinked()->hasTable()) {
        map->asLinked()->purgeTable(rt->gcContext());
      }
    }
  }
}

// The debugger keeps track of the URLs for the sources of each realm's scripts.
// These URLs are purged on shrinking GCs.
void GCRuntime::purgeSourceURLsForShrinkingGC() {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE_SOURCE_URLS);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    // URLs are not tracked for realms in the system zone.
    if (!canRelocateZone(zone) || zone->isSystemZone()) {
      continue;
    }
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
      for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
        GlobalObject* global = realm.get()->unsafeUnbarrieredMaybeGlobal();
        if (global) {
          global->clearSourceURLSHolder();
        }
      }
    }
  }
}

void GCRuntime::purgePendingWrapperPreservationBuffersForShrinkingGC() {
  gcstats::AutoPhase ap(stats(),
                        gcstats::PhaseKind::PURGE_WRAPPER_PRESERVATION);
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->purgePendingWrapperPreservationBuffer();
  }
}

void GCRuntime::unmarkWeakMaps() {
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    /* Unmark all weak maps in the zones being collected. */
    WeakMapBase::unmarkZone(zone);
  }
}

bool GCRuntime::beginPreparePhase(JS::GCReason reason, AutoGCSession& session) {
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PREPARE);

  if (!prepareZonesForCollection(reason, &isFull.ref())) {
    return false;
  }

  /*
   * Start a parallel task to clear all mark state for the zones we are
   * collecting. This is linear in the size of the heap we are collecting and so
   * can be slow. This usually happens concurrently with the mutator and GC
   * proper does not start until this is complete.
   */
  unmarkTask.initZones();
  if (useBackgroundThreads) {
    unmarkTask.start();
  } else {
    unmarkTask.runFromMainThread();
  }

  /*
   * Process any queued source compressions during the start of a major
   * GC.
   *
   * Bug 1650075: When we start passing GCOptions::Shutdown for
   * GCReason::XPCONNECT_SHUTDOWN GCs we can remove the extra check.
   */
  if (!isShutdownGC() && reason != JS::GCReason::XPCONNECT_SHUTDOWN) {
    StartOffThreadCompressionsOnGC(rt, isShrinkingGC());
  }

  return true;
}

BackgroundUnmarkTask::BackgroundUnmarkTask(GCRuntime* gc)
    : GCParallelTask(gc, gcstats::PhaseKind::UNMARK) {}

void BackgroundUnmarkTask::initZones() {
  MOZ_ASSERT(isIdle());
  MOZ_ASSERT(zones.empty());
  MOZ_ASSERT(!isCancelled());

  // We can't safely iterate the zones vector from another thread so we copy the
  // zones to be collected into another vector.
  AutoEnterOOMUnsafeRegion oomUnsafe;
  for (GCZonesIter zone(gc); !zone.done(); zone.next()) {
    if (!zones.append(zone.get())) {
      oomUnsafe.crash("BackgroundUnmarkTask::initZones");
    }

    zone->arenas.clearFreeLists();
    zone->arenas.moveArenasToCollectingLists();
  }
}

void BackgroundUnmarkTask::run(AutoLockHelperThreadState& lock) {
  {
    AutoUnlockHelperThreadState unlock(lock);
    unmark();
    zones.clear();
  }

  gc->maybeRequestGCAfterBackgroundTask(lock);
}

void BackgroundUnmarkTask::unmark() {
  for (Zone* zone : zones) {
    for (auto kind : AllAllocKinds()) {
      ArenaList& arenas = zone->arenas.collectingArenaList(kind);
      for (auto arena = arenas.iter(); !arena.done(); arena.next()) {
        arena->unmarkAll();
        if (isCancelled()) {
          return;
        }
      }
    }
  }
}

void GCRuntime::endPreparePhase(JS::GCReason reason) {
  MOZ_ASSERT(unmarkTask.isIdle());

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->setPreservingCode(false);
  }

  // Discard JIT code more aggressively if the process is approaching its
  // executable code limit.
  bool canAllocateMoreCode = jit::CanLikelyAllocateMoreExecutableMemory();
  auto currentTime = TimeStamp::Now();

  Compartment* activeCompartment = nullptr;
  jit::JitActivationIterator activation(rt->mainContextFromOwnThread());
  if (!activation.done()) {
    activeCompartment = activation->compartment();
  }

  for (CompartmentsIter c(rt); !c.done(); c.next()) {
    c->gcState.scheduledForDestruction = false;
    c->gcState.maybeAlive = false;
    c->gcState.hasEnteredRealm = false;
    if (c->invisibleToDebugger()) {
      c->gcState.maybeAlive = true;  // Presumed to be a system compartment.
2973 } 2974 bool isActiveCompartment = c == activeCompartment; 2975 for (RealmsInCompartmentIter r(c); !r.done(); r.next()) { 2976 if (r->shouldTraceGlobal() || !r->zone()->isGCScheduled()) { 2977 c->gcState.maybeAlive = true; 2978 } 2979 if (shouldPreserveJITCode(r, currentTime, reason, canAllocateMoreCode, 2980 isActiveCompartment)) { 2981 r->zone()->setPreservingCode(true); 2982 } 2983 if (r->hasBeenEnteredIgnoringJit()) { 2984 c->gcState.hasEnteredRealm = true; 2985 } 2986 } 2987 } 2988 2989 /* 2990 * Perform remaining preparation work that must take place in the first true 2991 * GC slice. 2992 */ 2993 2994 { 2995 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PREPARE); 2996 2997 AutoLockHelperThreadState helperLock; 2998 2999 /* Clear mark state for WeakMaps in parallel with other work. */ 3000 AutoRunParallelTask unmarkWeakMaps(this, &GCRuntime::unmarkWeakMaps, 3001 gcstats::PhaseKind::UNMARK_WEAKMAPS, 3002 GCUse::Unspecified, helperLock); 3003 3004 AutoUnlockHelperThreadState unlock(helperLock); 3005 3006 maybeDiscardJitCodeForGC(); 3007 3008 /* 3009 * We must purge the runtime at the beginning of an incremental GC. The 3010 * danger if we purge later is that the snapshot invariant of 3011 * incremental GC will be broken, as follows. If some object is 3012 * reachable only through some cache (say the dtoaCache) then it will 3013 * not be part of the snapshot. If we purge after root marking, then 3014 * the mutator could obtain a pointer to the object and start using 3015 * it. This object might never be marked, so a GC hazard would exist. 3016 */ 3017 purgeRuntime(); 3018 } 3019 3020 // This will start background free for lifo blocks queued by purgeRuntime, 3021 // even if there's nothing in the nursery. Record the number of the minor GC 3022 // so we can check whether we need to wait for it to finish or whether a 3023 // subsequent minor GC already did this. 
  collectNurseryFromMajorGC(reason);
  initialMinorGCNumber = minorGCNumber;

  {
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PREPARE);
    // Relazify functions after discarding JIT code (we can't relazify functions
    // with JIT code) and before the actual mark phase, so that the current GC
    // can collect the JSScripts we're unlinking here. We do this only when
    // we're performing a shrinking GC, as too much relazification can cause
    // performance issues when we have to reparse the same functions over and
    // over.
    if (isShrinkingGC()) {
      relazifyFunctionsForShrinkingGC();
      purgePropMapTablesForShrinkingGC();
      purgeSourceURLsForShrinkingGC();
      {
        AutoGCSession commitSession(this, JS::HeapState::Idle);
        rt->commitPendingWrapperPreservations();
      }
      purgePendingWrapperPreservationBuffersForShrinkingGC();
    }

    if (isShutdownGC()) {
      /* Clear any engine roots that may hold external data live. */
      for (GCZonesIter zone(this); !zone.done(); zone.next()) {
        zone->clearRootsForShutdownGC();
      }

#ifdef DEBUG
      testMarkQueue.clear();
      queuePos = 0;
#endif
    }
  }

#ifdef DEBUG
  if (fullCompartmentChecks) {
    checkForCompartmentMismatches();
  }
#endif
}

AutoUpdateLiveCompartments::AutoUpdateLiveCompartments(GCRuntime* gc) : gc(gc) {
  for (GCCompartmentsIter c(gc->rt); !c.done(); c.next()) {
    c->gcState.hasMarkedCells = false;
  }
}

AutoUpdateLiveCompartments::~AutoUpdateLiveCompartments() {
  for (GCCompartmentsIter c(gc->rt); !c.done(); c.next()) {
    if (c->gcState.hasMarkedCells) {
      c->gcState.maybeAlive = true;
    }
  }
}

Zone::GCState Zone::initialMarkingState() const {
  if (isAtomsZone()) {
    // Don't delay gray marking in the atoms zone like we do in other zones.
    return MarkBlackAndGray;
  }

  return MarkBlackOnly;
}

static bool HasUncollectedNonAtomZones(GCRuntime* gc) {
  for (ZonesIter zone(gc, SkipAtoms); !zone.done(); zone.next()) {
    if (!zone->wasGCStarted()) {
      return true;
    }
  }
  return false;
}

void GCRuntime::beginMarkPhase(AutoGCSession& session) {
  /*
   * Mark phase.
   */
  gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);

  // This is the slice we actually start collecting. The number can be used to
  // check whether a major GC has started so we must not increment it until we
  // get here.
  incMajorGcNumber();

#ifdef DEBUG
  queuePos = 0;
  queueMarkColor.reset();
#endif

  {
    BufferAllocator::MaybeLock lock;
    for (GCZonesIter zone(this); !zone.done(); zone.next()) {
      MOZ_ASSERT(zone->cellsToAssertNotGray().empty());

      // In an incremental GC, clear the arena free lists to ensure that
      // subsequent allocations refill them and end up marking new cells black.
      // See arenaAllocatedDuringGC().
      zone->arenas.clearFreeLists();

#ifdef JS_GC_ZEAL
      if (hasZealMode(ZealMode::YieldBeforeRootMarking)) {
        for (auto kind : AllAllocKinds()) {
          for (ArenaIter arena(zone, kind); !arena.done(); arena.next()) {
            arena->checkNoMarkedCells();
          }
        }
      }
#endif

      // Incremental marking barriers are enabled at this point.
      zone->changeGCState(Zone::Prepare, zone->initialMarkingState());

      // Merge arenas allocated during the prepare phase, then move all arenas
      // to the collecting arena lists.
      zone->arenas.mergeArenasFromCollectingLists();
      zone->arenas.moveArenasToCollectingLists();

      // Prepare sized allocator for major GC.
      zone->bufferAllocator.startMajorCollection(lock);

      for (RealmsInZoneIter realm(zone); !realm.done(); realm.next()) {
        realm->clearAllocatedDuringGC();
      }
    }
  }

  updateSchedulingStateOnGCStart();
  stats().measureInitialHeapSizes();

  useParallelMarking = SingleThreadedMarking;
  if (canMarkInParallel() && initParallelMarking()) {
    useParallelMarking = AllowParallelMarking;
  }

  MOZ_ASSERT(!hasDelayedMarking());
  for (auto& marker : markers) {
    marker->start();
  }

  if (rt->isBeingDestroyed()) {
    checkNoRuntimeRoots(session);
  } else {
    AutoUpdateLiveCompartments updateLive(this);
#ifdef DEBUG
    AutoSetThreadIsMarking threadIsMarking;
#endif  // DEBUG

    marker().setRootMarkingMode(true);
    traceRuntimeForMajorGC(marker().tracer(), session);
    marker().setRootMarkingMode(false);

    if (atomsZone()->wasGCStarted() && HasUncollectedNonAtomZones(this)) {
      atomsUsedByUncollectedZones =
          atomMarking.getOrMarkAtomsUsedByUncollectedZones(this);
    }
  }
}

void GCRuntime::findDeadCompartments() {
  gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::FIND_DEAD_COMPARTMENTS);

  /*
   * This code ensures that if a compartment is "dead", then it will be
   * collected in this GC. A compartment is considered dead if its maybeAlive
   * flag is false. The maybeAlive flag is set if:
   *
   * (1) the compartment has been entered (set in beginMarkPhase() above)
   * (2) the compartment's zone is not being collected (set in
   *     endPreparePhase() above)
   * (3) an object in the compartment was marked during root marking, either
   *     as a black root or a gray root. This is arranged by
   *     SetCompartmentHasMarkedCells and AutoUpdateLiveCompartments.
   * (4) the compartment has incoming cross-compartment edges from another
   *     compartment that has maybeAlive set (set by this method).
   * (5) the compartment has the invisibleToDebugger flag set, as it is
   *     presumed to be a system compartment (set in endPreparePhase() above)
   *
   * If maybeAlive is false, then we set the scheduledForDestruction flag.
   * At the end of the GC, we look for compartments where
   * scheduledForDestruction is true. These are compartments that were somehow
   * "revived" during the incremental GC. If any are found, we do a special,
   * non-incremental GC of those compartments to try to collect them.
   *
   * Compartments can be revived for a variety of reasons, including:
   *
   * (1) A dead reflector can be revived by DOM code that still refers to the
   *     underlying DOM node (see bug 811587).
   * (2) JS_TransplantObject iterates over all compartments, live or dead, and
   *     operates on their objects. This can trigger read barriers and mark
   *     unreachable objects. See bug 803376 for details on this problem. To
   *     avoid the problem, we try to avoid allocation and read barriers
   *     during JS_TransplantObject and the like.
   * (3) Read barriers. A compartment may only have weak roots and reading one
   *     of these will cause the compartment to stay alive even though the GC
   *     thought it should die. An example of this is Gecko's unprivileged
   *     junk scope, which is handled by ignoring system compartments (see bug
   *     1868437).
   */

  // Propagate the maybeAlive flag via cross-compartment edges.

  Vector<Compartment*, 0, js::SystemAllocPolicy> workList;

  for (CompartmentsIter comp(rt); !comp.done(); comp.next()) {
    if (comp->gcState.maybeAlive) {
      if (!workList.append(comp)) {
        return;
      }
    }
  }

  while (!workList.empty()) {
    Compartment* comp = workList.popCopy();
    for (Compartment::WrappedObjectCompartmentEnum e(comp); !e.empty();
         e.popFront()) {
      Compartment* dest = e.front();
      if (!dest->gcState.maybeAlive) {
        dest->gcState.maybeAlive = true;
        if (!workList.append(dest)) {
          return;
        }
      }
    }
  }

  // Set scheduledForDestruction based on maybeAlive.

  for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) {
    MOZ_ASSERT(!comp->gcState.scheduledForDestruction);
    if (!comp->gcState.maybeAlive) {
      comp->gcState.scheduledForDestruction = true;
    }
  }
}

void GCRuntime::updateSchedulingStateOnGCStart() {
  heapSize.updateOnGCStart();

  // Update memory counters for the zones we are collecting.
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->updateSchedulingStateOnGCStart();
  }
}

inline bool GCRuntime::canMarkInParallel() const {
  MOZ_ASSERT(state() >= gc::State::MarkRoots);

#if defined(DEBUG) || defined(JS_OOM_BREAKPOINT)
  // OOM testing limits the engine to using a single helper thread.
  if (oom::simulator.targetThread() == THREAD_TYPE_GCPARALLEL) {
    return false;
  }
#endif

  return markers.length() > 1 && stats().initialCollectedBytes() >=
                                     tunables.parallelMarkingThresholdBytes();
}

bool GCRuntime::initParallelMarking() {
  // This is called at the start of collection.

  MOZ_ASSERT(canMarkInParallel());

  // Reserve/release helper threads for worker runtimes. These are released at
  // the end of sweeping.
  // If there are not enough helper threads because
  // other runtimes are marking in parallel then parallel marking will not be
  // used.
  if (!rt->isMainRuntime() && !reserveMarkingThreads(markers.length())) {
    return false;
  }

  // Allocate stack for parallel markers. The first marker always has stack
  // allocated. Other markers have their stack freed in
  // GCRuntime::finishCollection.
  for (size_t i = 1; i < markers.length(); i++) {
    if (!markers[i]->initStack()) {
      return false;
    }
  }

  return true;
}

IncrementalProgress GCRuntime::markUntilBudgetExhausted(
    SliceBudget& sliceBudget, ParallelMarking allowParallelMarking,
    ShouldReportMarkTime reportTime) {
  // Run a marking slice and return whether the stack is now empty.

  AutoMajorGCProfilerEntry s(this);

  if (initialState != State::Mark) {
    sliceBudget.forceCheck();
    if (sliceBudget.isOverBudget()) {
      return NotFinished;
    }
  }

#ifdef DEBUG
  AutoSetThreadIsMarking threadIsMarking;
#endif  // DEBUG

  if (processTestMarkQueue() == QueueYielded) {
    return NotFinished;
  }

  if (allowParallelMarking) {
    MOZ_ASSERT(canMarkInParallel());
    MOZ_ASSERT(parallelMarkingEnabled);
    MOZ_ASSERT(reportTime);
    MOZ_ASSERT(!isBackgroundMarking());

    if (!ParallelMarker::mark(this, sliceBudget)) {
      return NotFinished;
    }

    assertNoMarkingWork();
    return Finished;
  }

  return marker().markUntilBudgetExhausted(sliceBudget, reportTime)
             ? Finished
             : NotFinished;
}

void GCRuntime::drainMarkStack() {
  auto unlimited = SliceBudget::unlimited();
  MOZ_RELEASE_ASSERT(marker().markUntilBudgetExhausted(unlimited));
}

#ifdef DEBUG

const GCVector<HeapPtr<JS::Value>, 0, SystemAllocPolicy>&
GCRuntime::getTestMarkQueue() const {
  return testMarkQueue.get();
}

bool GCRuntime::appendTestMarkQueue(const JS::Value& value) {
  return testMarkQueue.append(value);
}

void GCRuntime::clearTestMarkQueue() {
  testMarkQueue.clear();
  queuePos = 0;
}

size_t GCRuntime::testMarkQueuePos() const { return queuePos; }

size_t GCRuntime::testMarkQueueRemaining() const {
  MOZ_ASSERT(queuePos <= testMarkQueue.length());
  return testMarkQueue.length() - queuePos;
}

#endif

GCRuntime::MarkQueueProgress GCRuntime::processTestMarkQueue() {
#ifdef DEBUG
  if (testMarkQueue.empty()) {
    return QueueComplete;
  }

  if (queueMarkColor == mozilla::Some(MarkColor::Gray) &&
      state() != State::Sweep) {
    return QueueSuspended;
  }

  // If the queue wants to be gray marking, but we've pushed a black object
  // since set-color-gray was processed, then we can't switch to gray and must
  // again wait until gray marking is possible.
  //
  // Remove this code if the restriction against marking gray during black is
  // relaxed.
  if (queueMarkColor == mozilla::Some(MarkColor::Gray) &&
      marker().hasBlackEntries()) {
    return QueueSuspended;
  }

  // If the queue wants to be marking a particular color, switch to that color.
  // In any case, restore the mark color to whatever it was when we entered
  // this function.
  bool willRevertToGray = marker().markColor() == MarkColor::Gray;
  AutoSetMarkColor autoRevertColor(
      marker(), queueMarkColor.valueOr(marker().markColor()));

  // Process the mark queue by taking each object in turn, pushing it onto the
  // mark stack, and processing just the top element with processMarkStackTop
  // without recursing into reachable objects.
  while (queuePos < testMarkQueue.length()) {
    Value val = testMarkQueue[queuePos++].get();
    if (val.isObject()) {
      JSObject* obj = &val.toObject();
      JS::Zone* zone = obj->zone();
      if (!zone->isGCMarking() || obj->isMarkedAtLeast(marker().markColor())) {
        continue;
      }

      // If we have started sweeping, obey sweep group ordering. But note that
      // we will first be called during the initial sweep slice, when the sweep
      // group indexes have not yet been computed. In that case, we can mark
      // freely.
      if (state() == State::Sweep && initialState != State::Sweep) {
        if (zone->gcSweepGroupIndex < getCurrentSweepGroupIndex()) {
          // Too late. This must have been added after we started collecting,
          // and we've already processed its sweep group. Skip it.
          continue;
        }
        if (zone->gcSweepGroupIndex > getCurrentSweepGroupIndex()) {
          // Not ready yet. Wait until we reach the object's sweep group.
          queuePos--;
          return QueueSuspended;
        }
      }

      if (marker().markColor() == MarkColor::Gray &&
          zone->isGCMarkingBlackOnly()) {
        // Have not yet reached the point where we can mark this object, so
        // continue with the GC.
        queuePos--;
        return QueueSuspended;
      }

      if (marker().markColor() == MarkColor::Black && willRevertToGray) {
        // If we put any black objects on the stack, we wouldn't be able to
        // return to gray marking. So delay the marking until we're back to
        // black marking.
        queuePos--;
        return QueueSuspended;
      }

      // Mark the object.
      if (!marker().markOneObjectForTest(obj)) {
        // If we overflowed the stack here and delayed marking, then we won't be
        // testing what we think we're testing.
        MOZ_ASSERT(obj->asTenured().arena()->onDelayedMarkingList());
        printf_stderr(
            "Hit mark stack limit while marking test queue; test results may "
            "be invalid");
      }
    } else if (val.isString()) {
      JSLinearString* str = &val.toString()->asLinear();
      if (js::StringEqualsLiteral(str, "yield") && isIncrementalGc()) {
        return QueueYielded;
      }

      if (js::StringEqualsLiteral(str, "enter-weak-marking-mode") ||
          js::StringEqualsLiteral(str, "abort-weak-marking-mode")) {
        if (marker().isRegularMarking()) {
          // We can't enter weak marking mode at just any time, so instead
          // we'll stop processing the queue and continue on with the GC. Once
          // we enter weak marking mode, we can continue to the rest of the
          // queue. Note that we will also suspend for aborting, and then abort
          // the earliest following weak marking mode.
          queuePos--;
          return QueueSuspended;
        }
        if (js::StringEqualsLiteral(str, "abort-weak-marking-mode")) {
          marker().abortLinearWeakMarking();
        }
      } else if (js::StringEqualsLiteral(str, "drain")) {
        auto unlimited = SliceBudget::unlimited();
        MOZ_RELEASE_ASSERT(
            marker().markUntilBudgetExhausted(unlimited, DontReportMarkTime));
      } else if (js::StringEqualsLiteral(str, "set-color-gray")) {
        queueMarkColor = mozilla::Some(MarkColor::Gray);
        if (state() != State::Sweep || marker().hasBlackEntries()) {
          // Cannot mark gray yet, so continue with the GC.
          queuePos--;
          return QueueSuspended;
        }
        marker().setMarkColor(MarkColor::Gray);
      } else if (js::StringEqualsLiteral(str, "set-color-black")) {
        queueMarkColor = mozilla::Some(MarkColor::Black);
        marker().setMarkColor(MarkColor::Black);
      } else if (js::StringEqualsLiteral(str, "unset-color")) {
        queueMarkColor.reset();
      }
    }
  }

  // Once the queue is complete, do not force a mark color (since the next time
  // the queue is processed, it should not be forcing one.)
  queueMarkColor.reset();
#endif

  return QueueComplete;
}

static bool IsEmergencyGC(JS::GCReason reason) {
  return reason == JS::GCReason::LAST_DITCH ||
         reason == JS::GCReason::MEM_PRESSURE;
}

void GCRuntime::finishCollection(JS::GCReason reason) {
  assertBackgroundSweepingFinished();

  MOZ_ASSERT(!hasDelayedMarking());
  for (size_t i = 0; i < markers.length(); i++) {
    const auto& marker = markers[i];
    marker->stop();
    if (i == 0) {
      marker->resetStackCapacity();
    } else {
      marker->freeStack();
    }
  }

  maybeStopPretenuring();

  if (IsEmergencyGC(reason)) {
    waitBackgroundFreeEnd();
  }

  TimeStamp currentTime = TimeStamp::Now();

  updateSchedulingStateOnGCEnd(currentTime);

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    zone->changeGCState(Zone::Finished, Zone::NoGC);
    zone->notifyObservingDebuggers();
    zone->gcNextGraphNode = nullptr;
    zone->gcNextGraphComponent = nullptr;
  }

  atomsUsedByUncollectedZones.ref().reset();

#ifdef JS_GC_ZEAL
  clearSelectedForMarking();
#endif

  lastGCEndTime_ = currentTime;

  checkGCStateNotInUse();
}

void GCRuntime::checkGCStateNotInUse() {
#ifdef DEBUG
  for (auto& marker : markers) {
    MOZ_ASSERT(!marker->isActive());
    MOZ_ASSERT(marker->isDrained());
  }
  MOZ_ASSERT(!hasDelayedMarking());
  MOZ_ASSERT(!lastMarkSlice);

  MOZ_ASSERT(!disableBarriersForSweeping);
  MOZ_ASSERT(foregroundFinalizedArenas.ref().isNothing());

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    if (zone->wasCollected()) {
      zone->arenas.checkGCStateNotInUse();
    }
    MOZ_ASSERT(!zone->wasGCStarted());
    MOZ_ASSERT(!zone->needsIncrementalBarrier());
    MOZ_ASSERT(!zone->isOnList());
    MOZ_ASSERT(!zone->gcNextGraphNode);
    MOZ_ASSERT(!zone->gcNextGraphComponent);
    MOZ_ASSERT(zone->cellsToAssertNotGray().empty());
    zone->bufferAllocator.checkGCStateNotInUse();
  }

  MOZ_ASSERT(zonesToMaybeCompact.ref().isEmpty());

  MOZ_ASSERT(!atomsUsedByUncollectedZones.ref());

  AutoLockHelperThreadState lock;
  MOZ_ASSERT(!requestSliceAfterBackgroundTask);
  MOZ_ASSERT(unmarkTask.isIdle(lock));
  MOZ_ASSERT(markTask.isIdle(lock));
  MOZ_ASSERT(sweepTask.isIdle(lock));
  MOZ_ASSERT(decommitTask.isIdle(lock));
#endif
}

void GCRuntime::maybeStopPretenuring() {
  nursery().maybeStopPretenuring(this);

  size_t zonesWhereStringsEnabled = 0;
  size_t zonesWhereBigIntsEnabled = 0;

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    if (zone->nurseryStringsDisabled || zone->nurseryBigIntsDisabled) {
      // We may need to reset allocation sites and discard JIT code to recover
      // if we find object lifetimes have changed.
      if (zone->pretenuring.shouldResetPretenuredAllocSites()) {
        zone->unknownAllocSite(JS::TraceKind::String)->maybeResetState();
        zone->unknownAllocSite(JS::TraceKind::BigInt)->maybeResetState();
        if (zone->nurseryStringsDisabled) {
          zone->nurseryStringsDisabled = false;
          zonesWhereStringsEnabled++;
        }
        if (zone->nurseryBigIntsDisabled) {
          zone->nurseryBigIntsDisabled = false;
          zonesWhereBigIntsEnabled++;
        }
        nursery().updateAllocFlagsForZone(zone);
      }
    }
  }

  if (nursery().reportPretenuring()) {
    if (zonesWhereStringsEnabled) {
      fprintf(stderr, "GC re-enabled nursery string allocation in %zu zones\n",
              zonesWhereStringsEnabled);
    }
    if (zonesWhereBigIntsEnabled) {
      fprintf(stderr, "GC re-enabled nursery big int allocation in %zu zones\n",
              zonesWhereBigIntsEnabled);
    }
  }
}

void GCRuntime::updateSchedulingStateOnGCEnd(TimeStamp currentTime) {
  TimeDuration totalGCTime = stats().totalGCTime();
  size_t totalInitialBytes = stats().initialCollectedBytes();

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    if (tunables.balancedHeapLimitsEnabled() && totalInitialBytes != 0) {
      zone->updateCollectionRate(totalGCTime, totalInitialBytes);
    }
    zone->clearGCSliceThresholds();
    zone->updateGCStartThresholds(*this);
  }
}

void GCRuntime::updateAllGCStartThresholds() {
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    zone->updateGCStartThresholds(*this);
  }
}

void GCRuntime::updateAllocationRates() {
  // Calculate mutator time since the last update. This ignores the fact that
  // the zone could have been created since the last update.

  TimeStamp currentTime = TimeStamp::Now();
  TimeDuration totalTime = currentTime - lastAllocRateUpdateTime;
  if (collectorTimeSinceAllocRateUpdate >= totalTime) {
    // It shouldn't happen but occasionally we see collector time being larger
    // than total time. Skip the update in that case.
    return;
  }

  TimeDuration mutatorTime = totalTime - collectorTimeSinceAllocRateUpdate;

  for (AllZonesIter zone(this); !zone.done(); zone.next()) {
    zone->updateAllocationRate(mutatorTime);
    zone->updateGCStartThresholds(*this);
  }

  lastAllocRateUpdateTime = currentTime;
  collectorTimeSinceAllocRateUpdate = TimeDuration::Zero();
}

static const char* GCHeapStateToLabel(JS::HeapState heapState) {
  switch (heapState) {
    case JS::HeapState::MinorCollecting:
      return "Minor GC";
    case JS::HeapState::MajorCollecting:
      return "Major GC";
    default:
      MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
  }
  MOZ_ASSERT_UNREACHABLE("Should have exhausted every JS::HeapState variant!");
  return nullptr;
}

static JS::ProfilingCategoryPair GCHeapStateToProfilingCategory(
    JS::HeapState heapState) {
  return heapState == JS::HeapState::MinorCollecting
             ? JS::ProfilingCategoryPair::GCCC_MinorGC
             : JS::ProfilingCategoryPair::GCCC_MajorGC;
}

/* Start a new heap session.
 */
AutoHeapSession::AutoHeapSession(GCRuntime* gc, JS::HeapState heapState)
    : gc(gc), prevState(gc->heapState_) {
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(gc->rt));
  MOZ_ASSERT(prevState == JS::HeapState::Idle ||
             (prevState == JS::HeapState::MajorCollecting &&
              heapState == JS::HeapState::Idle) ||
             (prevState == JS::HeapState::MajorCollecting &&
              heapState == JS::HeapState::MinorCollecting));

  gc->heapState_ = heapState;

  if (heapState == JS::HeapState::MinorCollecting ||
      heapState == JS::HeapState::MajorCollecting) {
    profilingStackFrame.emplace(
        gc->rt->mainContextFromOwnThread(), GCHeapStateToLabel(heapState),
        GCHeapStateToProfilingCategory(heapState),
        uint32_t(ProfilingStackFrame::Flags::RELEVANT_FOR_JS));
  }
}

AutoHeapSession::~AutoHeapSession() { gc->heapState_ = prevState; }

AutoTraceSession::AutoTraceSession(JSRuntime* rt)
    : AutoHeapSession(&rt->gc, JS::HeapState::Tracing),
      JS::AutoCheckCannotGC() {}

static const char* MajorGCStateToLabel(State state) {
  switch (state) {
    case State::Mark:
      return "js::GCRuntime::markUntilBudgetExhausted";
    case State::Sweep:
      return "js::GCRuntime::performSweepActions";
    case State::Compact:
      return "js::GCRuntime::compactPhase";
    default:
      MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
  }

  MOZ_ASSERT_UNREACHABLE("Should have exhausted every State variant!");
  return nullptr;
}

static JS::ProfilingCategoryPair MajorGCStateToProfilingCategory(State state) {
  switch (state) {
    case State::Mark:
      return JS::ProfilingCategoryPair::GCCC_MajorGC_Mark;
    case State::Sweep:
      return JS::ProfilingCategoryPair::GCCC_MajorGC_Sweep;
    case State::Compact:
      return JS::ProfilingCategoryPair::GCCC_MajorGC_Compact;
    default:
      MOZ_CRASH("Unexpected heap state when pushing GC profiling stack frame");
  }
}

AutoMajorGCProfilerEntry::AutoMajorGCProfilerEntry(GCRuntime* gc)
    : AutoGeckoProfilerEntry(gc->rt->mainContextFromAnyThread(),
                             MajorGCStateToLabel(gc->state()),
                             MajorGCStateToProfilingCategory(gc->state())) {
  MOZ_ASSERT(gc->heapState() == JS::HeapState::MajorCollecting);
}

GCRuntime::IncrementalResult GCRuntime::resetIncrementalGC(
    GCAbortReason reason) {
  MOZ_ASSERT(reason != GCAbortReason::None);

  // Drop as much work as possible from an ongoing incremental GC so
  // we can start a new GC after it has finished.
  if (incrementalState == State::NotActive) {
    return IncrementalResult::Ok;
  }

  AutoGCSession session(this, JS::HeapState::MajorCollecting);

  switch (incrementalState) {
    case State::NotActive:
    case State::Finish:
      MOZ_CRASH("Unexpected GC state in resetIncrementalGC");
      break;

    case State::Prepare:
      unmarkTask.cancelAndWait();
      cancelRequestedGCAfterBackgroundTask();
      [[fallthrough]];

    case State::MarkRoots:
      // We haven't done any marking yet at this point.
      for (GCZonesIter zone(this); !zone.done(); zone.next()) {
        zone->changeGCState(zone->gcState(), Zone::NoGC);
        zone->clearGCSliceThresholds();
        zone->arenas.clearFreeLists();
        zone->arenas.mergeArenasFromCollectingLists();
      }

      // The gray marking state may not be valid. We don't do gray unmarking
      // when zones are in the Prepare state.
      setGrayBitsInvalid();

      incrementalState = State::NotActive;
      checkGCStateNotInUse();
      break;

    case State::Mark: {
      // Cancel any ongoing marking.
      for (auto& marker : markers) {
        marker->reset();
      }
      resetDelayedMarking();

      for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
        resetGrayList(c);
      }

      // The gray marking state may not be valid. We depend on the mark stack to
      // do gray unmarking in zones that are being marked by the GC and we've
      // just cancelled that part way through.
      setGrayBitsInvalid();

      // Wait for sweeping of nursery owned sized allocations to finish.
      nursery().joinSweepTask();

      {
        BufferAllocator::AutoLock lock(this);
        for (GCZonesIter zone(this); !zone.done(); zone.next()) {
          zone->changeGCState(zone->initialMarkingState(), Zone::NoGC);
          zone->clearGCSliceThresholds();
          zone->arenas.unmarkPreMarkedFreeCells();
          zone->arenas.mergeArenasFromCollectingLists();

          // Merge sized alloc data structures back without sweeping them.
          zone->bufferAllocator.finishMajorCollection(lock);
        }
      }

      atomsUsedByUncollectedZones.ref().reset();

      {
        AutoLockHelperThreadState lock;
        lifoBlocksToFree.ref().freeAll();
      }

      lastMarkSlice = false;
      incrementalState = State::Finish;

#ifdef DEBUG
      for (auto& marker : markers) {
        MOZ_ASSERT(!marker->shouldCheckCompartments());
      }
#endif

      break;
    }

    case State::Sweep: {
      // Finish sweeping the current sweep group, then abort.
      for (CompartmentsIter c(rt); !c.done(); c.next()) {
        c->gcState.scheduledForDestruction = false;
      }

      abortSweepAfterCurrentGroup = true;
      isCompacting = false;

      break;
    }

    case State::Finalize: {
      isCompacting = false;
      break;
    }

    case State::Compact: {
      // Skip any remaining zones that would have been compacted.
      MOZ_ASSERT(isCompacting);
      startedCompacting = true;
      zonesToMaybeCompact.ref().clear();
      break;
    }

    case State::Decommit: {
      break;
    }
  }

  stats().reset(reason);

  if (reason == GCAbortReason::AbortRequested) {
    return IncrementalResult::Abort;
  }

  return IncrementalResult::Reset;
}

void GCRuntime::setGrayBitsInvalid() {
  grayBitsValid = false;
  atomMarking.unmarkAllGrayReferences(this);
}

void GCRuntime::disableIncrementalBarriers() {
  // Clear needsIncrementalBarrier so we don't do any write barriers during
  // foreground finalization. This would otherwise happen when destroying
  // HeapPtr<>s to GC things in zones which are still marking.

  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    if (zone->isGCMarking()) {
      MOZ_ASSERT(zone->needsIncrementalBarrier());
      zone->setNeedsIncrementalBarrier(false);
    }
    MOZ_ASSERT(!zone->needsIncrementalBarrier());
  }
}

void GCRuntime::enableIncrementalBarriers() {
  for (GCZonesIter zone(this); !zone.done(); zone.next()) {
    MOZ_ASSERT(!zone->needsIncrementalBarrier());
    if (zone->isGCMarking()) {
      zone->setNeedsIncrementalBarrier(true);
    }
  }
}

static bool NeedToCollectNursery(GCRuntime* gc) {
  return !gc->nursery().isEmpty() || !gc->storeBuffer().isEmpty();
}

static bool ShouldPauseMutatorWhileWaiting(const SliceBudget& budget,
                                           JS::GCReason reason,
                                           bool budgetWasIncreased) {
  // When we're nearing the incremental limit at which we will finish the
  // collection synchronously, pause the main thread if there is only background
  // GC work happening. This allows the GC to catch up and avoid hitting the
  // limit.
  return budget.isTimeBudget() &&
         (reason == JS::GCReason::ALLOC_TRIGGER ||
          reason == JS::GCReason::TOO_MUCH_MALLOC) &&
         budgetWasIncreased;
}

void GCRuntime::incrementalSlice(SliceBudget& budget, JS::GCReason reason,
                                 bool budgetWasIncreased) {
  MOZ_ASSERT_IF(isIncrementalGCInProgress(), isIncremental);

  AutoSetThreadIsPerformingGC performingGC(rt->gcContext());

  AutoGCSession session(this, JS::HeapState::MajorCollecting);

  bool destroyingRuntime = (reason == JS::GCReason::DESTROY_RUNTIME);

  initialState = incrementalState;
  isIncremental = !budget.isUnlimited();
  useBackgroundThreads = ShouldUseBackgroundThreads(isIncremental, reason);

#ifdef JS_GC_ZEAL
  // Do the incremental collection type specified by zeal mode if the collection
  // was triggered by runDebugGC() and incremental GC has not been cancelled by
  // resetIncrementalGC().
  useZeal = isIncremental && reason == JS::GCReason::DEBUG_GC;
#endif

  if (useZeal && zealModeControlsYieldPoint()) {
    // Yields between slices occur at predetermined points in these modes; the
    // budget is not used. |isIncremental| is still true.
    budget = SliceBudget::unlimited();
  }

  bool shouldPauseMutator =
      ShouldPauseMutatorWhileWaiting(budget, reason, budgetWasIncreased);

  switch (incrementalState) {
    case State::NotActive:
      startCollection(reason);

      incrementalState = State::Prepare;
      if (!beginPreparePhase(reason, session)) {
        incrementalState = State::NotActive;
        break;
      }

      if (useZeal && hasZealMode(ZealMode::YieldBeforeRootMarking)) {
        break;
      }

      [[fallthrough]];

    case State::Prepare:
      if (waitForBackgroundTask(unmarkTask, budget, shouldPauseMutator) ==
          NotFinished) {
        break;
      }

      incrementalState = State::MarkRoots;

      if (isIncremental && initialState == State::Prepare &&
          reason == JS::GCReason::BG_TASK_FINISHED) {
        // The next slice may be long so wait for the embedding to schedule it
        // rather than doing it as soon as unmarking finishes. This can happen
        // when the embedding's GC callback sees this slice end with work
        // available.
        MOZ_ASSERT(hasForegroundWork());
        break;
      }

      [[fallthrough]];

    case State::MarkRoots:
      endPreparePhase(reason);

      {
        AutoGCSession commitSession(this, JS::HeapState::Idle);
        rt->commitPendingWrapperPreservations();
      }

      beginMarkPhase(session);
      incrementalState = State::Mark;

      if (useZeal && hasZealMode(ZealMode::YieldBeforeMarking) &&
          isIncremental) {
        break;
      }

      [[fallthrough]];

    case State::Mark:
      if (mightSweepInThisSlice(budget.isUnlimited())) {
        prepareForSweepSlice(reason);

        // Incremental marking validation re-runs all marking non-incrementally,
        // which requires collecting the nursery. If that might happen in this
        // slice, do it now while it's safe to do so.
4025 if (isIncremental && 4026 hasZealMode(ZealMode::IncrementalMarkingValidator)) { 4027 collectNurseryFromMajorGC(JS::GCReason::EVICT_NURSERY); 4028 } 4029 } 4030 4031 { 4032 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK); 4033 if (markUntilBudgetExhausted(budget, useParallelMarking) == 4034 NotFinished) { 4035 break; 4036 } 4037 } 4038 4039 assertNoMarkingWork(); 4040 4041 /* 4042 * There are a number of reasons why we break out of collection here, 4043 * either ending the slice or to run a new interation of the loop in 4044 * GCRuntime::collect() 4045 */ 4046 4047 /* 4048 * In incremental GCs where we have already performed more than one 4049 * slice we yield after marking with the aim of starting the sweep in 4050 * the next slice, since the first slice of sweeping can be expensive. 4051 * 4052 * This is modified by the various zeal modes. We don't yield in 4053 * YieldBeforeMarking mode and we always yield in YieldBeforeSweeping 4054 * mode. 4055 * 4056 * We will need to mark anything new on the stack when we resume, so 4057 * we stay in Mark state. 
4058 */ 4059 if (isIncremental && !lastMarkSlice) { 4060 if ((initialState == State::Mark && 4061 !(useZeal && hasZealMode(ZealMode::YieldBeforeMarking))) || 4062 (useZeal && hasZealMode(ZealMode::YieldBeforeSweeping))) { 4063 lastMarkSlice = true; 4064 break; 4065 } 4066 } 4067 4068 incrementalState = State::Sweep; 4069 lastMarkSlice = false; 4070 4071 beginSweepPhase(reason, session); 4072 4073 [[fallthrough]]; 4074 4075 case State::Sweep: 4076 if (initialState == State::Sweep) { 4077 prepareForSweepSlice(reason); 4078 } 4079 4080 if (performSweepActions(budget) == NotFinished) { 4081 break; 4082 } 4083 4084 endSweepPhase(destroyingRuntime); 4085 4086 incrementalState = State::Finalize; 4087 4088 [[fallthrough]]; 4089 4090 case State::Finalize: 4091 if (waitForBackgroundTask(sweepTask, budget, shouldPauseMutator) == 4092 NotFinished) { 4093 break; 4094 } 4095 4096 for (GCZonesIter zone(this); !zone.done(); zone.next()) { 4097 zone->arenas.mergeBackgroundSweptArenas(); 4098 } 4099 4100 { 4101 BufferAllocator::AutoLock lock(this); 4102 for (GCZonesIter zone(this); !zone.done(); zone.next()) { 4103 zone->bufferAllocator.finishMajorCollection(lock); 4104 } 4105 } 4106 4107 atomMarking.mergePendingFreeArenaIndexes(this); 4108 4109 { 4110 AutoLockGC lock(this); 4111 clearCurrentChunk(lock); 4112 } 4113 4114 assertBackgroundSweepingFinished(); 4115 4116 // Ensure freeing of nursery owned sized allocations from the initial 4117 // minor GC has finished. 4118 MOZ_ASSERT(minorGCNumber >= initialMinorGCNumber); 4119 if (minorGCNumber == initialMinorGCNumber) { 4120 MOZ_ASSERT(nursery().sweepTaskIsIdle()); 4121 } 4122 4123 { 4124 // Sweep the zones list now that background finalization is finished to 4125 // remove and free dead zones, compartments and realms. 
        gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP);
        gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::DESTROY);
        sweepZones(rt->gcContext(), destroyingRuntime);
      }

      MOZ_ASSERT(!startedCompacting);
      incrementalState = State::Compact;

      // Always yield before compacting since it is not incremental.
      if (isCompacting && !budget.isUnlimited()) {
        break;
      }

      [[fallthrough]];

    case State::Compact:
      if (isCompacting) {
        {
          AutoGCSession commitSession(this, JS::HeapState::Idle);
          rt->commitPendingWrapperPreservations();
        }

        if (NeedToCollectNursery(this)) {
          collectNurseryFromMajorGC(reason);
        }

        storeBuffer().checkEmpty();
        if (!startedCompacting) {
          beginCompactPhase();
        }

        nursery().joinSweepTask();
        if (compactPhase(reason, budget, session) == NotFinished) {
          break;
        }

        endCompactPhase();
      }

      startDecommit();
      incrementalState = State::Decommit;

      [[fallthrough]];

    case State::Decommit:
      if (waitForBackgroundTask(decommitTask, budget, shouldPauseMutator) ==
          NotFinished) {
        break;
      }

      incrementalState = State::Finish;

      [[fallthrough]];

    case State::Finish:
      finishCollection(reason);
      incrementalState = State::NotActive;
      break;
  }

#ifdef DEBUG
  MOZ_ASSERT(safeToYield);
  for (auto& marker : markers) {
    MOZ_ASSERT(marker->markColor() == MarkColor::Black);
  }
  MOZ_ASSERT(!rt->gcContext()->hasJitCodeToPoison());
#endif
}

void GCRuntime::collectNurseryFromMajorGC(JS::GCReason reason) {
  collectNursery(gcOptions(), JS::GCReason::EVICT_NURSERY,
                 gcstats::PhaseKind::EVICT_NURSERY_FOR_MAJOR_GC);

  MOZ_ASSERT(nursery().isEmpty());
  MOZ_ASSERT(storeBuffer().isEmpty());
}

bool GCRuntime::hasForegroundWork() const {
  switch (incrementalState) {
    case State::NotActive:
      // Incremental GC is not running and no work is pending.
      return false;
    case State::Prepare:
      // We yield in the Prepare state after starting unmarking.
      return !unmarkTask.wasStarted();
    case State::Finalize:
      // We yield in the Finalize state to wait for background sweeping.
      return !isBackgroundSweeping();
    case State::Decommit:
      // We yield in the Decommit state to wait for background decommit.
      return !decommitTask.wasStarted();
    default:
      // In all other states there is still work to do.
      return true;
  }
}

IncrementalProgress GCRuntime::waitForBackgroundTask(GCParallelTask& task,
                                                     const SliceBudget& budget,
                                                     bool shouldPauseMutator) {
  // Wait here in non-incremental collections, or if we want to pause the
  // mutator to let the GC catch up.
  if (budget.isUnlimited() || shouldPauseMutator) {
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD);
    Maybe<TimeStamp> deadline;
    if (budget.isTimeBudget()) {
      deadline.emplace(budget.deadline());
    }
    task.join(deadline);
  }

  // In incremental collections, yield if the task has not finished and
  // request a slice to notify us when this happens.
  if (!budget.isUnlimited()) {
    AutoLockHelperThreadState lock;
    if (task.wasStarted(lock)) {
      requestSliceAfterBackgroundTask = true;
      return NotFinished;
    }

    task.joinWithLockHeld(lock);
  }

  MOZ_ASSERT(task.isIdle());

  cancelRequestedGCAfterBackgroundTask();

  return Finished;
}

inline void GCRuntime::checkZoneIsScheduled(Zone* zone, JS::GCReason reason,
                                            const char* trigger) {
#ifdef DEBUG
  if (zone->isGCScheduled()) {
    return;
  }

  fprintf(stderr,
          "checkZoneIsScheduled: Zone %p not scheduled as expected in %s GC "
          "for %s trigger\n",
          zone, JS::ExplainGCReason(reason), trigger);
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    fprintf(stderr, "  Zone %p:%s%s\n", zone.get(),
            zone->isAtomsZone() ? " atoms" : "",
            zone->isGCScheduled() ? " scheduled" : "");
  }
  fflush(stderr);
  MOZ_CRASH("Zone not scheduled");
#endif
}

GCRuntime::IncrementalResult GCRuntime::budgetIncrementalGC(
    bool nonincrementalByAPI, JS::GCReason reason, SliceBudget& budget) {
  if (nonincrementalByAPI) {
    stats().nonincremental(GCAbortReason::NonIncrementalRequested);
    budget = SliceBudget::unlimited();

    // Reset any in progress incremental GC if this was triggered via the
    // API. This isn't required for correctness, but sometimes during tests
    // the caller expects this GC to collect certain objects, and we need
    // to make sure to collect everything possible.
    if (reason != JS::GCReason::ALLOC_TRIGGER) {
      return resetIncrementalGC(GCAbortReason::NonIncrementalRequested);
    }

    return IncrementalResult::Ok;
  }

  if (reason == JS::GCReason::ABORT_GC) {
    budget = SliceBudget::unlimited();
    stats().nonincremental(GCAbortReason::AbortRequested);
    return resetIncrementalGC(GCAbortReason::AbortRequested);
  }

  if (!budget.isUnlimited()) {
    GCAbortReason unsafeReason = GCAbortReason::None;
    if (reason == JS::GCReason::COMPARTMENT_REVIVED) {
      unsafeReason = GCAbortReason::CompartmentRevived;
    } else if (!incrementalGCEnabled) {
      unsafeReason = GCAbortReason::ModeChange;
    }

    if (unsafeReason != GCAbortReason::None) {
      budget = SliceBudget::unlimited();
      stats().nonincremental(unsafeReason);
      return resetIncrementalGC(unsafeReason);
    }
  }

  GCAbortReason resetReason = GCAbortReason::None;
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    if (zone->gcHeapSize.bytes() >=
        zone->gcHeapThreshold.incrementalLimitBytes()) {
      checkZoneIsScheduled(zone, reason, "GC bytes");
      budget = SliceBudget::unlimited();
      stats().nonincremental(GCAbortReason::GCBytesTrigger);
      if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
        resetReason = GCAbortReason::GCBytesTrigger;
      }
    }

    if (zone->mallocHeapSize.bytes() >=
        zone->mallocHeapThreshold.incrementalLimitBytes()) {
      checkZoneIsScheduled(zone, reason, "malloc bytes");
      budget = SliceBudget::unlimited();
      stats().nonincremental(GCAbortReason::MallocBytesTrigger);
      if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
        resetReason = GCAbortReason::MallocBytesTrigger;
      }
    }

    if (zone->jitHeapSize.bytes() >=
        zone->jitHeapThreshold.incrementalLimitBytes()) {
      checkZoneIsScheduled(zone, reason, "JIT code bytes");
      budget = SliceBudget::unlimited();
      stats().nonincremental(GCAbortReason::JitCodeBytesTrigger);
      if (zone->wasGCStarted() && zone->gcState() > Zone::Sweep) {
        resetReason = GCAbortReason::JitCodeBytesTrigger;
      }
    }

    if (isIncrementalGCInProgress() &&
        zone->isGCScheduled() != zone->wasGCStarted()) {
      budget = SliceBudget::unlimited();
      resetReason = GCAbortReason::ZoneChange;
    }
  }

  if (resetReason != GCAbortReason::None) {
    return resetIncrementalGC(resetReason);
  }

  return IncrementalResult::Ok;
}

bool GCRuntime::maybeIncreaseSliceBudget(SliceBudget& budget,
                                         TimeStamp sliceStartTime,
                                         TimeStamp gcStartTime) {
  if (js::SupportDifferentialTesting()) {
    return false;
  }

  if (!budget.isTimeBudget() || !isIncrementalGCInProgress()) {
    return false;
  }

  bool wasIncreasedForLongCollections =
      maybeIncreaseSliceBudgetForLongCollections(budget, sliceStartTime,
                                                 gcStartTime);
  bool wasIncreasedForUrgentCollections =
      maybeIncreaseSliceBudgetForUrgentCollections(budget);

  return wasIncreasedForLongCollections || wasIncreasedForUrgentCollections;
}

// Return true if the budget is actually extended after rounding.
static bool ExtendBudget(SliceBudget& budget, double newDuration) {
  long millis = lround(newDuration);
  if (millis <= budget.timeBudget()) {
    return false;
  }

  bool idleTriggered = budget.idle;
  budget = SliceBudget(TimeBudget(millis), nullptr);  // Uninterruptible.
  budget.idle = idleTriggered;
  budget.extended = true;
  return true;
}

bool GCRuntime::maybeIncreaseSliceBudgetForLongCollections(
    SliceBudget& budget, TimeStamp sliceStartTime, TimeStamp gcStartTime) {
  // For long-running collections, enforce a minimum time budget that
  // increases linearly with time up to a maximum.

  // All times are in milliseconds.
  struct BudgetAtTime {
    double time;
    double budget;
  };
  const BudgetAtTime MinBudgetStart{1500, 0.0};
  const BudgetAtTime MinBudgetEnd{2500, 100.0};

  double totalTime = (sliceStartTime - gcStartTime).ToMilliseconds();

  double minBudget =
      LinearInterpolate(totalTime, MinBudgetStart.time, MinBudgetStart.budget,
                        MinBudgetEnd.time, MinBudgetEnd.budget);

  return ExtendBudget(budget, minBudget);
}

bool GCRuntime::maybeIncreaseSliceBudgetForUrgentCollections(
    SliceBudget& budget) {
  // Enforce a minimum time budget based on how close we are to the
  // incremental limit.

  size_t minBytesRemaining = SIZE_MAX;
  for (AllZonesIter zone(this); !zone.done(); zone.next()) {
    if (!zone->wasGCStarted()) {
      continue;
    }
    size_t gcBytesRemaining =
        zone->gcHeapThreshold.incrementalBytesRemaining(zone->gcHeapSize);
    minBytesRemaining = std::min(minBytesRemaining, gcBytesRemaining);
    size_t mallocBytesRemaining =
        zone->mallocHeapThreshold.incrementalBytesRemaining(
            zone->mallocHeapSize);
    minBytesRemaining = std::min(minBytesRemaining, mallocBytesRemaining);
  }

  if (minBytesRemaining < tunables.urgentThresholdBytes() &&
      minBytesRemaining != 0) {
    // Increase budget based on the reciprocal of the fraction remaining.
    double fractionRemaining =
        double(minBytesRemaining) / double(tunables.urgentThresholdBytes());
    double minBudget = double(defaultSliceBudgetMS()) / fractionRemaining;
    return ExtendBudget(budget, minBudget);
  }

  return false;
}

static void ScheduleZones(GCRuntime* gc, JS::GCReason reason) {
  for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
    // Re-check heap threshold for alloc-triggered zones that were not
    // previously collected. Now we have allocation rate data, the heap limit
    // may have been increased beyond the current size.
4453 if (gc->tunables.balancedHeapLimitsEnabled() && zone->isGCScheduled() && 4454 zone->smoothedCollectionRate.ref().isNothing() && 4455 reason == JS::GCReason::ALLOC_TRIGGER && 4456 zone->gcHeapSize.bytes() < zone->gcHeapThreshold.startBytes()) { 4457 zone->unscheduleGC(); // May still be re-scheduled below. 4458 } 4459 4460 if (gc->isShutdownGC()) { 4461 zone->scheduleGC(); 4462 } 4463 4464 if (!gc->isPerZoneGCEnabled()) { 4465 zone->scheduleGC(); 4466 } 4467 4468 // To avoid resets, continue to collect any zones that were being 4469 // collected in a previous slice. 4470 if (gc->isIncrementalGCInProgress() && zone->wasGCStarted()) { 4471 zone->scheduleGC(); 4472 } 4473 4474 // This is a heuristic to reduce the total number of collections. 4475 bool inHighFrequencyMode = gc->schedulingState.inHighFrequencyGCMode(); 4476 if (zone->gcHeapSize.bytes() >= 4477 zone->gcHeapThreshold.eagerAllocTrigger(inHighFrequencyMode) || 4478 zone->mallocHeapSize.bytes() >= 4479 zone->mallocHeapThreshold.eagerAllocTrigger(inHighFrequencyMode) || 4480 zone->jitHeapSize.bytes() >= zone->jitHeapThreshold.startBytes()) { 4481 zone->scheduleGC(); 4482 } 4483 } 4484 } 4485 4486 static void UnscheduleZones(GCRuntime* gc) { 4487 for (ZonesIter zone(gc->rt, WithAtoms); !zone.done(); zone.next()) { 4488 zone->unscheduleGC(); 4489 } 4490 } 4491 4492 class js::gc::AutoCallGCCallbacks { 4493 GCRuntime& gc_; 4494 JS::GCReason reason_; 4495 4496 public: 4497 explicit AutoCallGCCallbacks(GCRuntime& gc, JS::GCReason reason) 4498 : gc_(gc), reason_(reason) { 4499 gc_.maybeCallGCCallback(JSGC_BEGIN, reason); 4500 } 4501 ~AutoCallGCCallbacks() { gc_.maybeCallGCCallback(JSGC_END, reason_); } 4502 }; 4503 4504 void GCRuntime::maybeCallGCCallback(JSGCStatus status, JS::GCReason reason) { 4505 if (!gcCallback.ref().op) { 4506 return; 4507 } 4508 4509 if (isIncrementalGCInProgress()) { 4510 return; 4511 } 4512 4513 if (gcCallbackDepth == 0) { 4514 // Save scheduled zone information in case the callback 
clears it. 4515 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) { 4516 zone->gcScheduledSaved_ = zone->gcScheduled_; 4517 } 4518 } 4519 4520 // Save and clear GC options and state in case the callback reenters GC. 4521 JS::GCOptions options = gcOptions(); 4522 maybeGcOptions = Nothing(); 4523 bool savedFullGCRequested = fullGCRequested; 4524 fullGCRequested = false; 4525 4526 gcCallbackDepth++; 4527 4528 callGCCallback(status, reason); 4529 4530 MOZ_ASSERT(gcCallbackDepth != 0); 4531 gcCallbackDepth--; 4532 4533 // Restore the original GC options. 4534 maybeGcOptions = Some(options); 4535 4536 // At the end of a GC, clear out the fullGCRequested state. At the start, 4537 // restore the previous setting. 4538 fullGCRequested = savedFullGCRequested; 4539 4540 if (gcCallbackDepth == 0) { 4541 // Ensure any zone that was originally scheduled stays scheduled. 4542 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) { 4543 zone->gcScheduled_ = zone->gcScheduled_ || zone->gcScheduledSaved_; 4544 } 4545 } 4546 } 4547 4548 /* 4549 * We disable inlining to ensure that the bottom of the stack with possible GC 4550 * roots recorded in MarkRuntime excludes any pointers we use during the marking 4551 * implementation. 4552 */ 4553 MOZ_NEVER_INLINE GCRuntime::IncrementalResult GCRuntime::gcCycle( 4554 bool nonincrementalByAPI, const SliceBudget& budgetArg, 4555 JS::GCReason reason) { 4556 // Assert if this is a GC unsafe region. 4557 rt->mainContextFromOwnThread()->verifyIsSafeToGC(); 4558 4559 // It's ok if threads other than the main thread have suppressGC set, as 4560 // they are operating on zones which will not be collected from here. 4561 MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC); 4562 4563 // This reason is used internally. See below. 4564 MOZ_ASSERT(reason != JS::GCReason::RESET); 4565 4566 // Background finalization and decommit are finished by definition before we 4567 // can start a new major GC. 
Background allocation may still be running, but 4568 // that's OK because chunk pools are protected by the GC lock. 4569 bool firstSlice = !isIncrementalGCInProgress(); 4570 if (firstSlice) { 4571 assertBackgroundSweepingFinished(); 4572 MOZ_ASSERT(decommitTask.isIdle()); 4573 } 4574 4575 // Note that GC callbacks are allowed to re-enter GC. 4576 AutoCallGCCallbacks callCallbacks(*this, reason); 4577 4578 // Reset the fullGCRequested flag at the end of GC. 4579 auto resetFullFlag = MakeScopeExit([&] { 4580 if (!isIncrementalGCInProgress()) { 4581 fullGCRequested = false; 4582 } 4583 }); 4584 4585 // Record GC start time and update global scheduling state. 4586 TimeStamp now = TimeStamp::Now(); 4587 if (firstSlice) { 4588 schedulingState.updateHighFrequencyModeOnGCStart( 4589 gcOptions(), lastGCStartTime_, now, tunables); 4590 lastGCStartTime_ = now; 4591 } 4592 schedulingState.updateHighFrequencyModeOnSliceStart(gcOptions(), reason); 4593 4594 // Increase slice budget for long running collections before it is recorded by 4595 // AutoGCSlice. 4596 SliceBudget budget(budgetArg); 4597 bool budgetWasIncreased = 4598 maybeIncreaseSliceBudget(budget, now, lastGCStartTime_); 4599 4600 ScheduleZones(this, reason); 4601 4602 auto updateCollectorTime = MakeScopeExit([&] { 4603 if (const gcstats::Statistics::SliceData* slice = stats().lastSlice()) { 4604 collectorTimeSinceAllocRateUpdate += slice->duration(); 4605 } 4606 }); 4607 4608 gcstats::AutoGCSlice agc(stats(), scanZonesBeforeGC(), gcOptions(), budget, 4609 reason, budgetWasIncreased); 4610 4611 IncrementalResult result = 4612 budgetIncrementalGC(nonincrementalByAPI, reason, budget); 4613 4614 if (result != IncrementalResult::Ok && incrementalState == State::NotActive) { 4615 // The collection was reset or aborted and has finished. 4616 return result; 4617 } 4618 4619 if (result == IncrementalResult::Reset) { 4620 // The collection was reset but we must finish up some remaining work. 
This 4621 // happens with the reset reason, after which a new collection will be 4622 // started. 4623 reason = JS::GCReason::RESET; 4624 } 4625 4626 majorGCTriggerReason = JS::GCReason::NO_REASON; 4627 MOZ_ASSERT(!stats().hasTrigger()); 4628 4629 incGcNumber(); 4630 incGcSliceNumber(); 4631 4632 gcprobes::MajorGCStart(); 4633 incrementalSlice(budget, reason, budgetWasIncreased); 4634 gcprobes::MajorGCEnd(); 4635 4636 MOZ_ASSERT_IF(result == IncrementalResult::Reset, 4637 !isIncrementalGCInProgress()); 4638 return result; 4639 } 4640 4641 inline bool GCRuntime::mightSweepInThisSlice(bool nonIncremental) { 4642 MOZ_ASSERT(incrementalState < State::Sweep); 4643 return nonIncremental || lastMarkSlice || zealModeControlsYieldPoint(); 4644 } 4645 4646 #ifdef JS_GC_ZEAL 4647 static bool IsDeterministicGCReason(JS::GCReason reason) { 4648 switch (reason) { 4649 case JS::GCReason::API: 4650 case JS::GCReason::DESTROY_RUNTIME: 4651 case JS::GCReason::LAST_DITCH: 4652 case JS::GCReason::TOO_MUCH_MALLOC: 4653 case JS::GCReason::TOO_MUCH_WASM_MEMORY: 4654 case JS::GCReason::TOO_MUCH_JIT_CODE: 4655 case JS::GCReason::ALLOC_TRIGGER: 4656 case JS::GCReason::DEBUG_GC: 4657 case JS::GCReason::CC_FORCED: 4658 case JS::GCReason::SHUTDOWN_CC: 4659 case JS::GCReason::ABORT_GC: 4660 case JS::GCReason::DISABLE_GENERATIONAL_GC: 4661 case JS::GCReason::FINISH_GC: 4662 case JS::GCReason::PREPARE_FOR_TRACING: 4663 return true; 4664 4665 default: 4666 return false; 4667 } 4668 } 4669 #endif 4670 4671 gcstats::ZoneGCStats GCRuntime::scanZonesBeforeGC() { 4672 gcstats::ZoneGCStats zoneStats; 4673 for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) { 4674 zoneStats.zoneCount++; 4675 zoneStats.compartmentCount += zone->compartments().length(); 4676 for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) { 4677 zoneStats.realmCount += comp->realms().length(); 4678 } 4679 if (zone->isGCScheduled()) { 4680 zoneStats.collectedZoneCount++; 4681 
zoneStats.collectedCompartmentCount += zone->compartments().length(); 4682 } 4683 } 4684 4685 return zoneStats; 4686 } 4687 4688 // The GC can only clean up scheduledForDestruction realms that were marked live 4689 // by a barrier (e.g. by RemapWrappers from a navigation event). It is also 4690 // common to have realms held live because they are part of a cycle in gecko, 4691 // e.g. involving the HTMLDocument wrapper. In this case, we need to run the 4692 // CycleCollector in order to remove these edges before the realm can be freed. 4693 void GCRuntime::maybeDoCycleCollection() { 4694 const static float ExcessiveGrayRealms = 0.8f; 4695 const static size_t LimitGrayRealms = 200; 4696 4697 size_t realmsTotal = 0; 4698 size_t realmsGray = 0; 4699 for (RealmsIter realm(rt); !realm.done(); realm.next()) { 4700 ++realmsTotal; 4701 GlobalObject* global = realm->unsafeUnbarrieredMaybeGlobal(); 4702 if (global && global->isMarkedGray()) { 4703 ++realmsGray; 4704 } 4705 } 4706 float grayFraction = float(realmsGray) / float(realmsTotal); 4707 if (grayFraction > ExcessiveGrayRealms || realmsGray > LimitGrayRealms) { 4708 callDoCycleCollectionCallback(rt->mainContextFromOwnThread()); 4709 } 4710 } 4711 4712 void GCRuntime::checkCanCallAPI() { 4713 MOZ_RELEASE_ASSERT(CurrentThreadCanAccessRuntime(rt)); 4714 4715 /* If we attempt to invoke the GC while we are running in the GC, assert. */ 4716 MOZ_RELEASE_ASSERT(!JS::RuntimeHeapIsBusy()); 4717 } 4718 4719 bool GCRuntime::checkIfGCAllowedInCurrentState(JS::GCReason reason) { 4720 if (rt->mainContextFromOwnThread()->suppressGC) { 4721 return false; 4722 } 4723 4724 // This detects coding errors where we are trying to run a GC when GC is 4725 // supposed to be impossible. Do this check here, before any other early 4726 // returns that might miss bugs. (Do not do this check first thing, because it 4727 // is legal to call GC() if you know GC is suppressed.) 
4728 rt->mainContextFromOwnThread()->verifyIsSafeToGC(); 4729 4730 // Only allow shutdown GCs when we're destroying the runtime. This keeps 4731 // the GC callback from triggering a nested GC and resetting global state. 4732 if (rt->isBeingDestroyed() && !isShutdownGC()) { 4733 return false; 4734 } 4735 4736 #ifdef JS_GC_ZEAL 4737 if (deterministicOnly && !IsDeterministicGCReason(reason)) { 4738 return false; 4739 } 4740 #endif 4741 4742 return true; 4743 } 4744 4745 bool GCRuntime::shouldRepeatForDeadZone(JS::GCReason reason) { 4746 MOZ_ASSERT_IF(reason == JS::GCReason::COMPARTMENT_REVIVED, !isIncremental); 4747 MOZ_ASSERT(!isIncrementalGCInProgress()); 4748 4749 if (!isIncremental) { 4750 return false; 4751 } 4752 4753 for (CompartmentsIter c(rt); !c.done(); c.next()) { 4754 if (c->gcState.scheduledForDestruction) { 4755 return true; 4756 } 4757 } 4758 4759 return false; 4760 } 4761 4762 struct MOZ_RAII AutoSetZoneSliceThresholds { 4763 explicit AutoSetZoneSliceThresholds(GCRuntime* gc) : gc(gc) { 4764 // On entry, zones that are already collecting should have a slice threshold 4765 // set. 4766 for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) { 4767 MOZ_ASSERT(zone->wasGCStarted() == 4768 zone->gcHeapThreshold.hasSliceThreshold()); 4769 MOZ_ASSERT(zone->wasGCStarted() == 4770 zone->mallocHeapThreshold.hasSliceThreshold()); 4771 } 4772 } 4773 4774 ~AutoSetZoneSliceThresholds() { 4775 // On exit, update the thresholds for all collecting zones. 
4776 bool waitingOnBGTask = gc->isWaitingOnBackgroundTask(); 4777 for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) { 4778 if (zone->wasGCStarted()) { 4779 zone->setGCSliceThresholds(*gc, waitingOnBGTask); 4780 } else { 4781 MOZ_ASSERT(!zone->gcHeapThreshold.hasSliceThreshold()); 4782 MOZ_ASSERT(!zone->mallocHeapThreshold.hasSliceThreshold()); 4783 } 4784 } 4785 } 4786 4787 GCRuntime* gc; 4788 }; 4789 4790 void GCRuntime::collect(bool nonincrementalByAPI, const SliceBudget& budget, 4791 JS::GCReason reason) { 4792 auto clearGCOptions = MakeScopeExit([&] { 4793 if (!isIncrementalGCInProgress()) { 4794 maybeGcOptions = Nothing(); 4795 } 4796 }); 4797 4798 MOZ_ASSERT(reason != JS::GCReason::NO_REASON); 4799 4800 // Checks run for each request, even if we do not actually GC. 4801 checkCanCallAPI(); 4802 4803 // Check if we are allowed to GC at this time before proceeding. 4804 if (!checkIfGCAllowedInCurrentState(reason)) { 4805 return; 4806 } 4807 4808 JS_LOG(gc, Info, "begin slice for reason %s in state %s", 4809 ExplainGCReason(reason), StateName(incrementalState)); 4810 4811 AutoStopVerifyingBarriers av(rt, isShutdownGC()); 4812 AutoMaybeLeaveAtomsZone leaveAtomsZone(rt->mainContextFromOwnThread()); 4813 AutoSetZoneSliceThresholds sliceThresholds(this); 4814 4815 if (!isIncrementalGCInProgress() && tunables.balancedHeapLimitsEnabled()) { 4816 updateAllocationRates(); 4817 } 4818 4819 bool repeat; 4820 do { 4821 IncrementalResult cycleResult = 4822 gcCycle(nonincrementalByAPI, budget, reason); 4823 4824 if (cycleResult == IncrementalResult::Abort) { 4825 MOZ_ASSERT(reason == JS::GCReason::ABORT_GC); 4826 MOZ_ASSERT(!isIncrementalGCInProgress()); 4827 JS_LOG(gc, Info, "aborted by request"); 4828 break; 4829 } 4830 4831 /* 4832 * Sometimes when we finish a GC we need to immediately start a new one. 
4833 * This happens in the following cases: 4834 * - when we reset the current GC 4835 * - when finalizers drop roots during shutdown 4836 * - when zones that we thought were dead at the start of GC are 4837 * not collected (see the large comment in beginMarkPhase) 4838 */ 4839 repeat = false; 4840 if (!isIncrementalGCInProgress()) { 4841 if (cycleResult == IncrementalResult::Reset) { 4842 repeat = true; 4843 } else if (rootsRemoved && isShutdownGC()) { 4844 /* Need to re-schedule all zones for GC. */ 4845 JS::PrepareForFullGC(rt->mainContextFromOwnThread()); 4846 repeat = true; 4847 reason = JS::GCReason::ROOTS_REMOVED; 4848 } else if (shouldRepeatForDeadZone(reason)) { 4849 repeat = true; 4850 reason = JS::GCReason::COMPARTMENT_REVIVED; 4851 } 4852 } 4853 } while (repeat); 4854 4855 if (reason == JS::GCReason::COMPARTMENT_REVIVED) { 4856 maybeDoCycleCollection(); 4857 } 4858 4859 #ifdef JS_GC_ZEAL 4860 if (!isIncrementalGCInProgress()) { 4861 if (hasZealMode(ZealMode::CheckHeapAfterGC)) { 4862 gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::TRACE_HEAP); 4863 CheckHeapAfterGC(rt); 4864 } 4865 if (hasZealMode(ZealMode::CheckGrayMarking)) { 4866 MOZ_RELEASE_ASSERT(CheckGrayMarkingState(rt)); 4867 } 4868 } 4869 #endif 4870 JS_LOG(gc, Info, "end slice in state %s", StateName(incrementalState)); 4871 4872 UnscheduleZones(this); 4873 } 4874 4875 SliceBudget GCRuntime::defaultBudget(JS::GCReason reason, int64_t millis) { 4876 // millis == 0 means use internal GC scheduling logic to come up with 4877 // a duration for the slice budget. This may end up still being zero 4878 // based on preferences. 4879 if (millis == 0) { 4880 millis = defaultSliceBudgetMS(); 4881 } 4882 4883 // If the embedding has registered a callback for creating SliceBudgets, 4884 // then use it. 4885 if (createBudgetCallback) { 4886 return createBudgetCallback(reason, millis); 4887 } 4888 4889 // Otherwise, the preference can request an unlimited duration slice. 
4890 if (millis == 0) { 4891 return SliceBudget::unlimited(); 4892 } 4893 4894 return SliceBudget(TimeBudget(millis)); 4895 } 4896 4897 void GCRuntime::gc(JS::GCOptions options, JS::GCReason reason) { 4898 if (!isIncrementalGCInProgress()) { 4899 setGCOptions(options); 4900 } 4901 4902 collect(true, SliceBudget::unlimited(), reason); 4903 } 4904 4905 void GCRuntime::startGC(JS::GCOptions options, JS::GCReason reason, 4906 const SliceBudget& budget) { 4907 MOZ_ASSERT(!isIncrementalGCInProgress()); 4908 setGCOptions(options); 4909 4910 if (!JS::IsIncrementalGCEnabled(rt->mainContextFromOwnThread())) { 4911 collect(true, SliceBudget::unlimited(), reason); 4912 return; 4913 } 4914 4915 collect(false, budget, reason); 4916 } 4917 4918 void GCRuntime::setGCOptions(JS::GCOptions options) { 4919 MOZ_ASSERT(maybeGcOptions == Nothing()); 4920 maybeGcOptions = Some(options); 4921 } 4922 4923 void GCRuntime::gcSlice(JS::GCReason reason, const SliceBudget& budget) { 4924 MOZ_ASSERT(isIncrementalGCInProgress()); 4925 collect(false, budget, reason); 4926 } 4927 4928 void GCRuntime::finishGC(JS::GCReason reason) { 4929 MOZ_ASSERT(isIncrementalGCInProgress()); 4930 4931 // If we're not collecting because we're out of memory then skip the 4932 // compacting phase if we need to finish an ongoing incremental GC 4933 // non-incrementally to avoid janking the browser. 
  if (!IsOOMReason(initialReason)) {
    if (incrementalState == State::Compact) {
      abortGC();
      return;
    }

    isCompacting = false;
  }

  collect(false, SliceBudget::unlimited(), reason);
}

void GCRuntime::abortGC() {
  MOZ_ASSERT(isIncrementalGCInProgress());
  checkCanCallAPI();
  MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC);

  collect(false, SliceBudget::unlimited(), JS::GCReason::ABORT_GC);
}

static bool ZonesSelected(GCRuntime* gc) {
  for (ZonesIter zone(gc, WithAtoms); !zone.done(); zone.next()) {
    if (zone->isGCScheduled()) {
      return true;
    }
  }
  return false;
}

void GCRuntime::startDebugGC(JS::GCOptions options, const SliceBudget& budget) {
  MOZ_ASSERT(!isIncrementalGCInProgress());
  setGCOptions(options);

  if (!ZonesSelected(this)) {
    JS::PrepareForFullGC(rt->mainContextFromOwnThread());
  }

  collect(false, budget, JS::GCReason::DEBUG_GC);
}

void GCRuntime::debugGCSlice(const SliceBudget& budget) {
  MOZ_ASSERT(isIncrementalGCInProgress());

  if (!ZonesSelected(this)) {
    JS::PrepareForIncrementalGC(rt->mainContextFromOwnThread());
  }

  collect(false, budget, JS::GCReason::DEBUG_GC);
}

void js::PrepareForDebugGC(JSRuntime* rt) {
  // If zones have already been scheduled then use them.
  if (ZonesSelected(&rt->gc)) {
    return;
  }

  // If we already started a GC then continue with the same set of zones. This
  // prevents resetting an ongoing GC when new zones are added.
  JSContext* cx = rt->mainContextFromOwnThread();
  if (JS::IsIncrementalGCInProgress(cx)) {
    JS::PrepareForIncrementalGC(cx);
    return;
  }

  // Otherwise schedule all zones.
  JS::PrepareForFullGC(rt->mainContextFromOwnThread());
}

void GCRuntime::onOutOfMallocMemory() {
  // Stop allocating new chunks.
  allocTask.cancelAndWait();

  // Make sure we release anything queued for release.
  decommitTask.join();
  nursery().joinDecommitTask();

  // Wait for background free of nursery huge slots to finish.
  sweepTask.join();

  AutoLockGC lock(this);
  onOutOfMallocMemory(lock);
}

void GCRuntime::onOutOfMallocMemory(const AutoLockGC& lock) {
#ifdef DEBUG
  // Release any relocated arenas we may be holding on to, without releasing
  // the GC lock.
  releaseHeldRelocatedArenasWithoutUnlocking(lock);
#endif

  // Throw away any excess chunks we have lying around.
  freeEmptyChunks(lock);

  // Immediately decommit as many arenas as possible in the hopes that this
  // might let the OS scrape together enough pages to satisfy the failing
  // malloc request.
  if (DecommitEnabled()) {
    decommitFreeArenasWithoutUnlocking(lock);
  }
}

void GCRuntime::minorGC(JS::GCReason reason, gcstats::PhaseKind phase) {
  MOZ_ASSERT(!JS::RuntimeHeapIsBusy());

  MOZ_ASSERT_IF(reason == JS::GCReason::EVICT_NURSERY,
                !rt->mainContextFromOwnThread()->suppressGC);
  if (rt->mainContextFromOwnThread()->suppressGC) {
    return;
  }

  incGcNumber();

  collectNursery(JS::GCOptions::Normal, reason, phase);

#ifdef JS_GC_ZEAL
  if (hasZealMode(ZealMode::CheckHeapAfterGC)) {
    gcstats::AutoPhase ap(stats(), phase);
    waitBackgroundSweepEnd();
    waitBackgroundDecommitEnd();
    CheckHeapAfterGC(rt);
  }
#endif

  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    maybeTriggerGCAfterAlloc(zone);
    maybeTriggerGCAfterMalloc(zone);
  }
}

void GCRuntime::collectNursery(JS::GCOptions options, JS::GCReason reason,
                               gcstats::PhaseKind phase) {
  AutoMaybeLeaveAtomsZone leaveAtomsZone(rt->mainContextFromOwnThread());

  uint32_t numAllocs = 0;
  for (ZonesIter zone(this, WithAtoms); !zone.done(); zone.next()) {
    numAllocs += zone->getAndResetTenuredAllocsSinceMinorGC();
  }
  stats().setAllocsSinceMinorGCTenured(numAllocs);

  gcstats::AutoPhase ap(stats(), phase);

  nursery().collect(options, reason);

  startBackgroundFreeAfterMinorGC();

  // We ignore gcMaxBytes when allocating for minor collection. However, if we
  // overflowed, we disable the nursery. The next time we allocate, we'll fail
  // because bytes >= gcMaxBytes.
  if (heapSize.bytes() >= tunables.gcMaxBytes()) {
    if (!nursery().isEmpty()) {
      nursery().collect(options, JS::GCReason::DISABLE_GENERATIONAL_GC);
      MOZ_ASSERT(nursery().isEmpty());
      startBackgroundFreeAfterMinorGC();
    }

    // Disabling the nursery triggers pre-barriers when we discard JIT
    // code. Normally we don't allow any barriers in a major GC and there is an
    // assertion to check this in PreWriteBarrier. For the case where we disable
    // the nursery during a major GC, set up a minor GC session to silence the
    // assertion.
    AutoGCSession session(this, JS::HeapState::MinorCollecting);

    nursery().disable();
  }
}

void GCRuntime::startBackgroundFreeAfterMinorGC() {
  // Called after nursery collection. Free whatever blocks are safe to free now.

  AutoLockHelperThreadState lock;

  lifoBlocksToFree.ref().transferFrom(&lifoBlocksToFreeAfterNextMinorGC.ref());

  if (nursery().tenuredEverything) {
    lifoBlocksToFree.ref().transferFrom(
        &lifoBlocksToFreeAfterFullMinorGC.ref());
  } else {
    lifoBlocksToFreeAfterNextMinorGC.ref().transferFrom(
        &lifoBlocksToFreeAfterFullMinorGC.ref());
  }

  if (!hasBuffersForBackgroundFree()) {
    return;
  }

  freeTask.startOrRunIfIdle(lock);
}

bool GCRuntime::gcIfRequestedImpl(bool eagerOk) {
  // This method returns whether a major GC was performed.

  if (nursery().minorGCRequested()) {
    minorGC(nursery().minorGCTriggerReason());
  }

  JS::GCReason reason = wantMajorGC(eagerOk);
  if (reason == JS::GCReason::NO_REASON) {
    return false;
  }

  SliceBudget budget = defaultBudget(reason, 0);
  if (!isIncrementalGCInProgress()) {
    startGC(JS::GCOptions::Normal, reason, budget);
  } else {
    gcSlice(reason, budget);
  }
  return true;
}

void js::gc::FinishGC(JSContext* cx, JS::GCReason reason) {
  // Calling this when GC is suppressed won't have any effect.
  MOZ_ASSERT(!cx->suppressGC);

  // GC callbacks may run arbitrary code, including JS. Check this regardless of
  // whether we GC for this invocation.
  MOZ_ASSERT(cx->isNurseryAllocAllowed());

  if (JS::IsIncrementalGCInProgress(cx)) {
    JS::PrepareForIncrementalGC(cx);
    JS::FinishIncrementalGC(cx, reason);
  }
}

void js::gc::WaitForBackgroundTasks(JSContext* cx) {
  cx->runtime()->gc.waitForBackgroundTasks();
}

void GCRuntime::waitForBackgroundTasks() {
  MOZ_ASSERT(!isIncrementalGCInProgress());
  MOZ_ASSERT(sweepTask.isIdle());
  MOZ_ASSERT(decommitTask.isIdle());
  MOZ_ASSERT(markTask.isIdle());

  allocTask.join();
  freeTask.join();
  nursery().joinDecommitTask();
}

Realm* js::NewRealm(JSContext* cx, JSPrincipals* principals,
                    const JS::RealmOptions& options) {
  JSRuntime* rt = cx->runtime();
  JS_AbortIfWrongThread(cx);

  UniquePtr<Zone> zoneHolder;
  UniquePtr<Compartment> compHolder;

  Compartment* comp = nullptr;
  Zone* zone = nullptr;
  JS::CompartmentSpecifier compSpec =
      options.creationOptions().compartmentSpecifier();
  switch (compSpec) {
    case JS::CompartmentSpecifier::NewCompartmentInSystemZone:
      // systemZone might be null here, in which case we'll make a zone and
      // set this field below.
      zone = rt->gc.systemZone;
      break;
    case JS::CompartmentSpecifier::NewCompartmentInExistingZone:
      zone = options.creationOptions().zone();
      MOZ_ASSERT(zone);
      break;
    case JS::CompartmentSpecifier::ExistingCompartment:
      comp = options.creationOptions().compartment();
      zone = comp->zone();
      break;
    case JS::CompartmentSpecifier::NewCompartmentAndZone:
      break;
  }

  if (!zone) {
    Zone::Kind kind = Zone::NormalZone;
    const JSPrincipals* trusted = rt->trustedPrincipals();
    if (compSpec == JS::CompartmentSpecifier::NewCompartmentInSystemZone ||
        (principals && principals == trusted)) {
      kind = Zone::SystemZone;
    }

    zoneHolder = MakeUnique<Zone>(cx->runtime(), kind);
    if (!zoneHolder || !zoneHolder->init()) {
      ReportOutOfMemory(cx);
      return nullptr;
    }

    zone = zoneHolder.get();
  }

  bool invisibleToDebugger = options.creationOptions().invisibleToDebugger();
  if (comp) {
    // Debugger visibility is per-compartment, not per-realm, so make sure the
    // new realm's visibility matches its compartment's.
    MOZ_ASSERT(comp->invisibleToDebugger() == invisibleToDebugger);
  } else {
    compHolder = cx->make_unique<JS::Compartment>(zone, invisibleToDebugger);
    if (!compHolder) {
      return nullptr;
    }

    comp = compHolder.get();
  }

  UniquePtr<Realm> realm(cx->new_<Realm>(comp, options));
  if (!realm) {
    return nullptr;
  }
  realm->init(cx, principals);

  // Make sure we don't put system and non-system realms in the same
  // compartment.
  if (!compHolder) {
    MOZ_RELEASE_ASSERT(realm->isSystem() == IsSystemCompartment(comp));
  }

  AutoLockGC lock(rt);

  // Reserve space in the Vectors before we start mutating them.
  if (!comp->realms().reserve(comp->realms().length() + 1) ||
      (compHolder &&
       !zone->compartments().reserve(zone->compartments().length() + 1)) ||
      (zoneHolder && !rt->gc.zones().reserve(rt->gc.zones().length() + 1))) {
    ReportOutOfMemory(cx);
    return nullptr;
  }

  // After this everything must be infallible.

  comp->realms().infallibleAppend(realm.get());

  if (compHolder) {
    zone->compartments().infallibleAppend(compHolder.release());
  }

  if (zoneHolder) {
    rt->gc.zones().infallibleAppend(zoneHolder.release());

    // Lazily set the runtime's system zone.
    if (compSpec == JS::CompartmentSpecifier::NewCompartmentInSystemZone) {
      MOZ_RELEASE_ASSERT(!rt->gc.systemZone);
      MOZ_ASSERT(zone->isSystemZone());
      rt->gc.systemZone = zone;
    }
  }

  return realm.release();
}

void GCRuntime::runDebugGC() {
#ifdef JS_GC_ZEAL
  if (rt->mainContextFromOwnThread()->suppressGC) {
    return;
  }

  if (hasZealMode(ZealMode::VerifierPost) ||
      hasZealMode(ZealMode::GenerationalGC)) {
    return minorGC(JS::GCReason::DEBUG_GC);
  }

  PrepareForDebugGC(rt);

  auto budget = SliceBudget::unlimited();
  if (hasZealMode(ZealMode::IncrementalMultipleSlices)) {
    /*
     * Start with a small slice limit and double it every slice. This
     * ensures that we get multiple slices, and collection runs to
     * completion.
     */
    if (!isIncrementalGCInProgress()) {
      zealSliceBudget = zealFrequency / 2;
    } else {
      zealSliceBudget *= 2;
    }
    budget = SliceBudget(WorkBudget(zealSliceBudget));

    js::gc::State initialState = incrementalState;
    if (!isIncrementalGCInProgress()) {
      setGCOptions(JS::GCOptions::Shrink);
    }
    collect(false, budget, JS::GCReason::DEBUG_GC);

    /* Reset the slice size when we get to the sweep or compact phases. */
    if ((initialState == State::Mark && incrementalState == State::Sweep) ||
        (initialState == State::Sweep && incrementalState == State::Compact)) {
      zealSliceBudget = zealFrequency / 2;
    }
  } else if (zealModeControlsYieldPoint()) {
    // These modes trigger incremental GC that happens in two slices and the
    // supplied budget is ignored by incrementalSlice.
    budget = SliceBudget(WorkBudget(1));

    if (!isIncrementalGCInProgress()) {
      setGCOptions(JS::GCOptions::Normal);
    }
    collect(false, budget, JS::GCReason::DEBUG_GC);
  } else if (hasZealMode(ZealMode::Compact)) {
    gc(JS::GCOptions::Shrink, JS::GCReason::DEBUG_GC);
  } else {
    gc(JS::GCOptions::Normal, JS::GCReason::DEBUG_GC);
  }

#endif
}

void GCRuntime::setFullCompartmentChecks(bool enabled) {
  MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
  fullCompartmentChecks = enabled;
}

void GCRuntime::notifyRootsRemoved() {
  rootsRemoved = true;

#ifdef JS_GC_ZEAL
  /* Schedule a GC to happen "soon". */
  if (hasZealMode(ZealMode::RootsChange)) {
    nextScheduled = 1;
  }
#endif
}

#ifdef JS_GC_ZEAL
bool GCRuntime::selectForMarking(JSObject* object) {
  MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
  return selectedForMarking.ref().get().append(object);
}

void GCRuntime::clearSelectedForMarking() {
  selectedForMarking.ref().get().clearAndFree();
}

void GCRuntime::setDeterministic(bool enabled) {
  MOZ_ASSERT(!JS::RuntimeHeapIsMajorCollecting());
  deterministicOnly = enabled;
}
#endif

#ifdef DEBUG

AutoAssertNoNurseryAlloc::AutoAssertNoNurseryAlloc() {
  TlsContext.get()->disallowNurseryAlloc();
}

AutoAssertNoNurseryAlloc::~AutoAssertNoNurseryAlloc() {
  TlsContext.get()->allowNurseryAlloc();
}

#endif  // DEBUG

#ifdef JSGC_HASH_TABLE_CHECKS
void GCRuntime::checkHashTablesAfterMovingGC() {
  waitBackgroundSweepEnd();
  waitBackgroundDecommitEnd();

  /*
   * Check that internal hash tables no longer have any pointers to things
   * that have been moved.
   */
  rt->geckoProfiler().checkStringsMapAfterMovingGC();
  if (rt->hasJitRuntime() && rt->jitRuntime()->hasInterpreterEntryMap()) {
    rt->jitRuntime()->getInterpreterEntryMap()->checkScriptsAfterMovingGC();
  }
  for (ZonesIter zone(this, SkipAtoms); !zone.done(); zone.next()) {
    zone->checkUniqueIdTableAfterMovingGC();
    zone->shapeZone().checkTablesAfterMovingGC(zone);
    zone->checkAllCrossCompartmentWrappersAfterMovingGC();
    zone->checkScriptMapsAfterMovingGC();

    // Note: CompactPropMaps never have a table.
    JS::AutoCheckCannotGC nogc;
    for (auto map = zone->cellIterUnsafe<NormalPropMap>(); !map.done();
         map.next()) {
      if (PropMapTable* table = map->asLinked()->maybeTable(nogc)) {
        table->checkAfterMovingGC(zone);
      }
    }
    for (auto map = zone->cellIterUnsafe<DictionaryPropMap>(); !map.done();
         map.next()) {
      if (PropMapTable* table = map->asLinked()->maybeTable(nogc)) {
        table->checkAfterMovingGC(zone);
      }
    }

    WeakMapBase::checkWeakMapsAfterMovingGC(zone);
  }

  for (CompartmentsIter c(this); !c.done(); c.next()) {
    for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
      r->dtoaCache.checkCacheAfterMovingGC();
      if (r->debugEnvs()) {
        r->debugEnvs()->checkHashTablesAfterMovingGC();
      }
    }
  }
}
#endif

#ifdef DEBUG
bool GCRuntime::hasZone(Zone* target) {
  for (AllZonesIter zone(this); !zone.done(); zone.next()) {
    if (zone == target) {
      return true;
    }
  }
  return false;
}
#endif

void AutoAssertEmptyNursery::checkCondition(JSContext* cx) {
  if (!noAlloc) {
    noAlloc.emplace();
  }
  this->cx = cx;
  MOZ_ASSERT(cx->nursery().isEmpty());
}

AutoEmptyNursery::AutoEmptyNursery(JSContext* cx) {
  MOZ_ASSERT(!cx->suppressGC);
  cx->runtime()->gc.stats().suspendPhases();
  cx->runtime()->gc.evictNursery(JS::GCReason::EVICT_NURSERY);
  cx->runtime()->gc.stats().resumePhases();
  checkCondition(cx);
}

#ifdef DEBUG

namespace js {

// We don't want jsfriendapi.h to depend on GenericPrinter,
// so these functions are declared directly in the cpp.

extern JS_PUBLIC_API void DumpString(JSString* str, js::GenericPrinter& out);

}  // namespace js

void js::gc::Cell::dump(js::GenericPrinter& out) const {
  switch (getTraceKind()) {
    case JS::TraceKind::Object:
      reinterpret_cast<const JSObject*>(this)->dump(out);
      break;

    case JS::TraceKind::String:
      js::DumpString(reinterpret_cast<JSString*>(const_cast<Cell*>(this)), out);
      break;

    case JS::TraceKind::Shape:
      reinterpret_cast<const Shape*>(this)->dump(out);
      break;

    default:
      out.printf("%s(%p)\n", JS::GCTraceKindToAscii(getTraceKind()),
                 (void*)this);
  }
}

// For use in a debugger.
void js::gc::Cell::dump() const {
  js::Fprinter out(stderr);
  dump(out);
}
#endif

JS_PUBLIC_API bool js::gc::detail::CanCheckGrayBits(const TenuredCell* cell) {
  // We do not check the gray marking state of cells in the following cases:
  //
  // 1) When OOM has caused us to clear the gcGrayBitsValid_ flag.
  //
  // 2) When we are in an incremental GC and examine a cell that is in a zone
  //    that is not being collected. Gray targets of CCWs that are marked black
  //    by a barrier will eventually be marked black in a later GC slice.
  //
  // 3) When mark bits are being cleared concurrently by a helper thread.

  MOZ_ASSERT(cell);

  JS::Zone* zone = cell->zoneFromAnyThread();
  if (zone->isAtomsZone() && cell->isMarkedBlack()) {
    // This could be a shared atom in the parent runtime. Skip this check.
    return true;
  }

  auto* runtime = cell->runtimeFromAnyThread();
  MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime));

  if (!runtime->gc.areGrayBitsValid()) {
    return false;
  }

  if (runtime->gc.isIncrementalGCInProgress() && !zone->wasGCStarted()) {
    return false;
  }

  return !zone->isGCPreparing();
}

JS_PUBLIC_API bool js::gc::detail::CellIsMarkedGrayIfKnown(
    const TenuredCell* cell) {
  MOZ_ASSERT_IF(cell->isPermanentAndMayBeShared(), cell->isMarkedBlack());
  if (!cell->isMarkedGray()) {
    return false;
  }

  return CanCheckGrayBits(cell);
}

#ifdef DEBUG

JS_PUBLIC_API void js::gc::detail::AssertCellIsNotGray(const Cell* cell) {
  if (!cell->isTenured()) {
    return;
  }

  // Check that a cell is not marked gray.
  //
  // Since this is a debug-only check, take account of the eventual mark state
  // of cells that will be marked black by the next GC slice in an incremental
  // GC. For performance reasons we don't do this in CellIsMarkedGrayIfKnown.

  const auto* tc = &cell->asTenured();
  if (!tc->isMarkedGray() || !CanCheckGrayBits(tc)) {
    return;
  }

  // TODO: I'd like to AssertHeapIsIdle() here, but this ends up getting
  // called during GC and while iterating the heap for memory reporting.
  MOZ_ASSERT(!JS::RuntimeHeapIsCycleCollecting());

  Zone* zone = tc->zone();
  if (zone->isGCMarkingBlackAndGray()) {
    // We are doing gray marking in the cell's zone. Even if the cell is
    // currently marked gray it may eventually be marked black. Delay checking
    // non-black cells until we finish gray marking.

    if (!tc->isMarkedBlack()) {
      AutoEnterOOMUnsafeRegion oomUnsafe;
      if (!zone->cellsToAssertNotGray().append(cell)) {
        oomUnsafe.crash("Can't append to delayed gray checks list");
      }
    }
    return;
  }

  MOZ_ASSERT(!tc->isMarkedGray());
}

extern JS_PUBLIC_API bool js::gc::detail::ObjectIsMarkedBlack(
    const JSObject* obj) {
  return obj->isMarkedBlack();
}

#endif

js::gc::ClearEdgesTracer::ClearEdgesTracer(JSRuntime* rt)
    : GenericTracerImpl(rt, JS::TracerKind::ClearEdges,
                        JS::WeakMapTraceAction::TraceKeysAndValues) {}

template <typename T>
void js::gc::ClearEdgesTracer::onEdge(T** thingp, const char* name) {
  // We don't handle removing pointers to nursery edges from the store buffer
  // with this tracer. Check that this doesn't happen.
  T* thing = *thingp;
  MOZ_ASSERT(!IsInsideNursery(thing));

  // Fire the pre-barrier since we're removing an edge from the graph.
  InternalBarrierMethods<T*>::preBarrier(thing);

  *thingp = nullptr;
}

void GCRuntime::setPerformanceHint(PerformanceHint hint) {
  if (hint == PerformanceHint::InPageLoad) {
    inPageLoadCount++;
  } else {
    MOZ_ASSERT(inPageLoadCount);
    inPageLoadCount--;
  }
}