tor-browser

The Tor Browser
git clone https://git.dasho.dev/tor-browser.git

commit c25e7fad845266b14ff33256fed8ea24a0d1c94e
parent 41c62af8d756dd9748a8343edada689e41f3922d
Author: Florian Quèze <florian@queze.net>
Date:   Mon, 15 Dec 2025 15:58:20 +0000

Bug 2005073 - create xpcshell-issues.json files for aggregated data over 21 days, r=ahal.

Differential Revision: https://phabricator.services.mozilla.com/D275688

Diffstat:
M testing/timings/JSON_FORMAT.md         | 235 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
M testing/timings/fetch-xpcshell-data.js | 517 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
2 files changed, 743 insertions(+), 9 deletions(-)

diff --git a/testing/timings/JSON_FORMAT.md b/testing/timings/JSON_FORMAT.md
@@ -397,3 +397,238 @@ Dates are sorted in descending order (newest first).
 - **Component mapping:** Components are fetched from the TaskCluster index `gecko.v2.mozilla-central.latest.source.source-bugzilla-info` and mapped to test paths. The component ID in `testInfo.componentIds` may be `null` if the test path is not found in the mapping
 - Components are formatted as `"Product :: Component"` (e.g., `"Core :: Storage: IndexedDB"`)
 - The data structure is optimized for sequential access patterns used by the dashboards
+
+---
+
+## Aggregated Files Format
+
+When running with `--days N` where N > 1, two aggregated files are generated:
+
+1. **`xpcshell-issues-with-taskids.json`** (~30MB for 21 days): Includes task IDs for all non-passing runs, allowing drill-down to specific CI tasks. Passing runs and non-passing runs are both aggregated by hour.
+
+2. **`xpcshell-issues.json`** (~15MB for 21 days): No task IDs or minidumps - all runs are aggregated to counts only. Optimized for fast dashboard initial load.
+
+### Detailed File (xpcshell-issues-with-taskids.json)
+
+#### Differences from Daily Files
+
+#### 1. Metadata Changes
+
+```json
+{
+  "metadata": {
+    "startDate": "2025-11-12",   // First date in the range (earliest)
+    "endDate": "2025-12-02",     // Last date in the range (most recent)
+    "days": 21,                  // Number of days aggregated
+    "startTime": 1762905600,     // Unix timestamp for startDate at 00:00:00 UTC
+    "generatedAt": "...",
+    "totalTestCount": 4506,      // Total number of unique tests
+    "testsWithFailures": 3614,   // Number of tests that had at least one non-passing run
+    "aggregatedFrom": [...]      // Array of source filenames
+  }
+}
+```
+
+Additional fields:
+- `startDate`, `endDate`, `days` indicate the date range
+- `startTime` is the base timestamp for the entire aggregated period (00:00:00 UTC on `startDate`)
+- `testsWithFailures` counts tests with any non-passing status
+- `aggregatedFrom` lists all source files that were merged
+
+#### 2. Passing Test Runs Are Aggregated
+
+**Daily files** store individual runs for all statuses:
+```json
+{
+  "taskIdIds": [123, 456, 789],
+  "durations": [1500, 1600, 1550],
+  "timestamps": [3600, 3600, 7200]
+}
+```
+
+**Aggregated file** stores only counts per hour for passing statuses (status starts with "PASS"):
+```json
+{
+  "counts": [150, 200, 180, 145, ...],
+  "hours": [0, 5, 1, 2, 8, ...]
+}
+```
+
+Where:
+- `counts[i]` = total number of passing runs in that hour
+- `hours[i]` = differentially compressed hour offset (hours since the previous bucket)
+- No `taskIdIds` or `durations` arrays
+- Typically sparse - only hours with passing runs are included
+
+**Decompressing hours:**
+```javascript
+let currentHour = 0;
+const absoluteHours = [];
+for (const delta of hours) {
+  currentHour += delta;
+  absoluteHours.push(currentHour);
+}
+// absoluteHours[i] is now the hour number (0 = startTime, 1 = startTime + 1 hour, etc.)
+```
+
+**Example: Calculate the pass count for a test on day 5:**
+```javascript
+const testId = 0;
+const day = 5; // 5 days after startDate
+
+// Find pass status
+const passStatusId = data.tables.statuses.findIndex(s => s.startsWith("PASS"));
+const passGroup = data.testRuns[testId]?.[passStatusId];
+
+// Count passes in day 5 (hours 120-143)
+const dayStartHour = day * 24;
+const dayEndHour = (day + 1) * 24;
+let passCount = 0;
+let currentHour = 0;
+if (passGroup) {
+  for (let i = 0; i < passGroup.hours.length; i++) {
+    currentHour += passGroup.hours[i];
+    if (currentHour >= dayStartHour && currentHour < dayEndHour) {
+      passCount += passGroup.counts[i];
+    }
+  }
+}
+
+// For the fail count, count task IDs in buckets whose decompressed
+// hour falls in the same [dayStartHour, dayEndHour) range (see section 3).
+```
+
+#### 3. All Test Runs Aggregated by Hour
+
+Both passing and non-passing test runs are aggregated by hour. The difference is in what data is preserved:
+
+**Passing tests** (status starts with "PASS"):
+```json
+{
+  "counts": [150, 200, 180],
+  "hours": [0, 5, 1]
+}
+```
+
+**Non-passing tests** (FAIL, CRASH, TIMEOUT, SKIP, etc.):
+```json
+{
+  "taskIdIds": [
+    [45, 67],       // Task IDs that failed in hour 0 with message 23
+    [89, 12, 56],   // Task IDs that failed in hour 5 with message 23
+    [34]            // Task IDs that failed in hour 6 with message 24
+  ],
+  "hours": [0, 5, 1],
+  "messageIds": [23, 23, 24],
+  "crashSignatureIds": [5, 5, 6],
+  "minidumps": [
+    ["abc123", "def456"],     // Minidumps for crashes in hour 0
+    ["ghi789", null, "jkl"],  // Minidumps for crashes in hour 5
+    [null]                    // Minidumps for crashes in hour 6
+  ]
+}
+```
+
+Key differences from daily files:
+- `taskIdIds` is an **array of arrays** - one array per (hour, message, crashSignature) bucket
+- `minidumps` is an **array of arrays** - parallel to `taskIdIds`, preserving the minidump for each task
+- `hours` provides differentially compressed hour offsets
+- Durations are **removed**
+- Individual timestamps are **removed** - only the hour bucket is preserved
+- Failures with different messages or crash signatures are in separate buckets
+
+#### 4. String Tables Are Merged
+
+All string tables are merged and deduplicated across all input days. A string that appears in multiple daily files will only appear once in the aggregated file.
+
+#### 5. TaskInfo Only Contains Failed Tasks
+
+Since passing runs don't store `taskIdIds`, the `taskInfo` object only contains mappings for tasks that appear in non-passing test runs. This significantly reduces the size of these arrays.
+
+#### 6. Platform-Irrelevant Tests Are Filtered
+
+SKIP tests with messages starting with "run-if" are filtered out during aggregation. These represent tests that are not relevant on certain platforms (e.g., "run-if = os == 'win'") and are not actual issues. The dashboard would filter these out anyway, so excluding them reduces file size.
+
+### Use Cases
+
+**Show pass/fail trends over time:**
+- Passing runs: Use the `counts` and `hours` arrays
+- Failing runs: Count task IDs in buckets within day ranges using `hours`
+
+**Investigate specific failures:**
+- Task IDs are preserved for all non-passing runs
+- Can identify which tasks/jobs/repos had failures
+- Can see error messages, crash signatures, and minidumps
+
+**Calculate overall pass rate:**
+```javascript
+const testId = 0;
+const passStatusId = data.tables.statuses.findIndex(s => s.startsWith("PASS"));
+const failStatusId = data.tables.statuses.indexOf("FAIL");
+
+// Total passes
+const totalPasses =
+  data.testRuns[testId]?.[passStatusId]?.counts.reduce((a, b) => a + b, 0) ?? 0;
+
+// Total fails - count all taskIds across all buckets
+const failGroup = data.testRuns[testId]?.[failStatusId];
+const totalFails =
+  failGroup?.taskIdIds.reduce((sum, arr) => sum + arr.length, 0) ?? 0;
+
+const passRate = totalPasses / (totalPasses + totalFails);
+```
+
+---
+
+### Small File (xpcshell-issues.json)
+
+This file omits task IDs and minidumps to minimize file size for fast dashboard loading.
+
+#### Differences from xpcshell-issues-with-taskids.json
+
+#### 1. No taskInfo or taskIds
+
+The `taskInfo` object and `tables.taskIds` array are completely omitted since all runs are aggregated.
+
+#### 2. Reduced String Tables
+
+Only includes the tables needed for aggregated data:
+```json
+{
+  "tables": {
+    "testPaths": [...],
+    "testNames": [...],
+    "statuses": [...],
+    "messages": [...],         // Kept for failure details
+    "crashSignatures": [...],  // Kept for crash details
+    "components": [...]
+    // No jobNames, repositories, or taskIds
+  }
+}
+```
+
+#### 3. No Task IDs - Only Counts
+
+All status groups use counts instead of task ID arrays:
+
+```json
+{
+  "counts": [5, 12, 8, 3],
+  "hours": [0, 5, 1, 2],
+  "messageIds": [23, 23, 24, 24],    // For failures with different messages
+  "crashSignatureIds": [5, 6, 5, 6]  // For crashes with different signatures
+  // Note: taskIdIds and minidumps are NOT included in this file
+}
+```
+
+Failures with different messages or crash signatures are bucketed separately, preserving distinct failure modes.
+
+Task IDs and minidumps are omitted to reduce size. They are available in the detailed file.
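The delta-coded `hours` in these count-only groups decompress exactly as in the detailed file, so per-day totals can be computed in one pass. A minimal sketch (the helper name and sample data are hypothetical, not part of the generated files):

```javascript
// Decompress the differential `hours` offsets of a count-only status group
// and sum the run counts per day (24 hour buckets per day).
function countsByDay(group, days) {
  const perDay = new Array(days).fill(0);
  let hour = 0;
  for (let i = 0; i < group.hours.length; i++) {
    hour += group.hours[i]; // undo differential compression
    const day = Math.floor(hour / 24);
    if (day >= 0 && day < days) {
      perDay[day] += group.counts[i];
    }
  }
  return perDay;
}

// Hypothetical sample: 150 runs in hour 0, 200 in hour 5, 180 in hour 25.
const group = { counts: [150, 200, 180], hours: [0, 5, 20] };
console.log(countsByDay(group, 2)); // → [350, 180]
```

The same helper works for passing-status groups in the detailed file, since they use the identical `counts`/`hours` layout.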
+
+**Example:** A test that fails 5 times in hour 10 with message A and 3 times with message B will have two entries:
+```json
+{
+  "counts": [5, 3],
+  "hours": [10, 0],  // Both in the same hour, so the second delta is 0
+  "messageIds": [23, 24]
+}
+```
diff --git a/testing/timings/fetch-xpcshell-data.js b/testing/timings/fetch-xpcshell-data.js
@@ -557,10 +557,38 @@ function sortStringTablesByFrequency(dataStructure) {
       return;
     }
 
-    frequencyCounts.statuses[statusId] += statusGroup.taskIdIds.length;
+    // Handle both aggregated format (counts/hours) and detailed format (taskIdIds)
+    if (statusGroup.taskIdIds) {
+      // Check if taskIdIds is array of arrays (aggregated) or flat array (daily)
+      const isArrayOfArrays =
+        !!statusGroup.taskIdIds.length &&
+        Array.isArray(statusGroup.taskIdIds[0]);
+
+      if (isArrayOfArrays) {
+        // Aggregated format: array of arrays
+        const totalRuns = statusGroup.taskIdIds.reduce(
+          (sum, arr) => sum + arr.length,
+          0
+        );
+        frequencyCounts.statuses[statusId] += totalRuns;
+
+        for (const taskIdIdsArray of statusGroup.taskIdIds) {
+          for (const taskIdId of taskIdIdsArray) {
+            frequencyCounts.taskIds[taskIdId]++;
+          }
+        }
+      } else {
+        // Daily format: flat array
+        frequencyCounts.statuses[statusId] += statusGroup.taskIdIds.length;
 
-    for (const taskIdId of statusGroup.taskIdIds) {
-      frequencyCounts.taskIds[taskIdId]++;
+        for (const taskIdId of statusGroup.taskIdIds) {
+          frequencyCounts.taskIds[taskIdId]++;
+        }
+      }
+    } else if (statusGroup.counts) {
+      // Aggregated passing tests - count total runs
+      const totalRuns = statusGroup.counts.reduce((a, b) => a + b, 0);
+      frequencyCounts.statuses[statusId] += totalRuns;
     }
 
     if (statusGroup.messageIds) {
@@ -656,13 +684,36 @@ function sortStringTablesByFrequency(dataStructure) {
       return statusGroup;
     }
 
-    const remapped = {
-      taskIdIds: statusGroup.taskIdIds.map(oldId =>
+    // Handle aggregated format (counts/hours) differently from detailed format
+    if (statusGroup.counts) {
+      // Aggregated passing tests - no remapping needed
+      return {
+        counts: statusGroup.counts,
+        hours: statusGroup.hours,
+      };
+    }
+
+    // Check if this is aggregated format (array of arrays) or daily format (flat array)
+    const isArrayOfArrays =
+      !!statusGroup.taskIdIds.length &&
+      Array.isArray(statusGroup.taskIdIds[0]);
+
+    const remapped = {};
+
+    if (isArrayOfArrays) {
+      // Aggregated format: array of arrays with hours
+      remapped.taskIdIds = statusGroup.taskIdIds.map(taskIdIdsArray =>
+        taskIdIdsArray.map(oldId => indexMaps.taskIds.get(oldId))
+      );
+      remapped.hours = statusGroup.hours;
+    } else {
+      // Daily format: flat array with durations and timestamps
+      remapped.taskIdIds = statusGroup.taskIdIds.map(oldId =>
         indexMaps.taskIds.get(oldId)
-      ),
-      durations: statusGroup.durations,
-      timestamps: statusGroup.timestamps,
-    };
+      );
+      remapped.durations = statusGroup.durations;
+      remapped.timestamps = statusGroup.timestamps;
+    }
 
     // Remap message IDs for status groups that have messages
     if (statusGroup.messageIds) {
@@ -1198,6 +1249,449 @@ async function processDateData(targetDate, forceRefetch = false) {
   }
 }
 
+// eslint-disable-next-line complexity
+async function createAggregatedFailuresFile(dates) {
+  console.log(
+    `\n=== Creating aggregated failures file from ${dates.length} days ===`
+  );
+
+  const dailyFiles = [];
+  for (const date of dates) {
+    const filePath = path.join(OUTPUT_DIR, `xpcshell-${date}.json`);
+    if (fs.existsSync(filePath)) {
+      dailyFiles.push({ date, filePath });
+    }
+  }
+
+  if (dailyFiles.length === 0) {
+    console.log("No daily files found to aggregate");
+    return;
+  }
+
+  console.log(`Found ${dailyFiles.length} daily files to aggregate`);
+
+  const startDate = dates[dates.length - 1];
+  const endDate = dates[0];
+  const startTime = Math.floor(
+    new Date(startDate + "T00:00:00.000Z").getTime() / 1000
+  );
+
+  const mergedTables = {
+    jobNames: [],
+    testPaths: [],
+    testNames: [],
+    repositories: [],
+    statuses: [],
+    taskIds: [],
+    messages: [],
+    crashSignatures: [],
+    components: [],
+  };
+
+  const stringMaps = {
+    jobNames: new Map(),
+    testPaths: new Map(),
+    testNames: new Map(),
+    repositories: new Map(),
+    statuses: new Map(),
+    taskIds: new Map(),
+    messages: new Map(),
+    crashSignatures: new Map(),
+    components: new Map(),
+  };
+
+  function addToMergedTable(tableName, value) {
+    if (value === null || value === undefined) {
+      return null;
+    }
+    const map = stringMaps[tableName];
+    let index = map.get(value);
+    if (index === undefined) {
+      index = mergedTables[tableName].length;
+      mergedTables[tableName].push(value);
+      map.set(value, index);
+    }
+    return index;
+  }
+
+  const mergedTaskInfo = {
+    repositoryIds: [],
+    jobNameIds: [],
+  };
+
+  const mergedTestInfo = {
+    testPathIds: [],
+    testNameIds: [],
+    componentIds: [],
+  };
+
+  const testPathMap = new Map();
+  const mergedTestRuns = [];
+
+  for (let fileIdx = 0; fileIdx < dailyFiles.length; fileIdx++) {
+    const { date, filePath } = dailyFiles[fileIdx];
+    console.log(`Processing ${fileIdx + 1}/${dailyFiles.length}: ${date}...`);
+
+    const data = JSON.parse(fs.readFileSync(filePath, "utf-8"));
+
+    const dayStartTime = data.metadata.startTime;
+    const timeOffset = dayStartTime - startTime;
+
+    for (let testId = 0; testId < data.testRuns.length; testId++) {
+      const testGroup = data.testRuns[testId];
+      if (!testGroup) {
+        continue;
+      }
+
+      const testPathId = data.testInfo.testPathIds[testId];
+      const testNameId = data.testInfo.testNameIds[testId];
+      const componentId = data.testInfo.componentIds[testId];
+
+      const testPath = data.tables.testPaths[testPathId];
+      const testName = data.tables.testNames[testNameId];
+      const fullPath = testPath ? `${testPath}/${testName}` : testName;
+
+      let mergedTestId = testPathMap.get(fullPath);
+      if (mergedTestId === undefined) {
+        mergedTestId = mergedTestInfo.testPathIds.length;
+
+        const mergedTestPathId = addToMergedTable("testPaths", testPath);
+        const mergedTestNameId = addToMergedTable("testNames", testName);
+        const component =
+          componentId !== null ? data.tables.components[componentId] : null;
+        const mergedComponentId = addToMergedTable("components", component);
+
+        mergedTestInfo.testPathIds.push(mergedTestPathId);
+        mergedTestInfo.testNameIds.push(mergedTestNameId);
+        mergedTestInfo.componentIds.push(mergedComponentId);
+
+        testPathMap.set(fullPath, mergedTestId);
+        mergedTestRuns[mergedTestId] = [];
+      }
+
+      for (let statusId = 0; statusId < testGroup.length; statusId++) {
+        const statusGroup = testGroup[statusId];
+        if (!statusGroup) {
+          continue;
+        }
+
+        const status = data.tables.statuses[statusId];
+        const mergedStatusId = addToMergedTable("statuses", status);
+
+        if (!mergedTestRuns[mergedTestId][mergedStatusId]) {
+          mergedTestRuns[mergedTestId][mergedStatusId] = [];
+        }
+
+        const mergedStatusGroup = mergedTestRuns[mergedTestId][mergedStatusId];
+        const isPass = status.startsWith("PASS");
+
+        let absoluteTimestamp = 0;
+        for (let i = 0; i < statusGroup.taskIdIds.length; i++) {
+          absoluteTimestamp += statusGroup.timestamps[i];
+
+          // Skip platform-irrelevant tests (SKIP with run-if messages)
+          if (
+            status === "SKIP" &&
+            data.tables.messages[statusGroup.messageIds?.[i]]?.startsWith(
+              "run-if"
+            )
+          ) {
+            continue;
+          }
+
+          const taskIdId = statusGroup.taskIdIds[i];
+          const taskIdString = data.tables.taskIds[taskIdId];
+          const repositoryId = data.taskInfo.repositoryIds[taskIdId];
+          const jobNameId = data.taskInfo.jobNameIds[taskIdId];
+
+          const repository = data.tables.repositories[repositoryId];
+          const jobName = data.tables.jobNames[jobNameId];
+
+          const mergedRepositoryId = addToMergedTable(
+            "repositories",
+            repository
+          );
+          const mergedJobNameId = addToMergedTable("jobNames", jobName);
+
+          const run = {
+            repositoryId: mergedRepositoryId,
+            jobNameId: mergedJobNameId,
+            timestamp: absoluteTimestamp + timeOffset,
+            duration: statusGroup.durations[i],
+          };
+
+          mergedStatusGroup.push(run);
+
+          if (isPass) {
+            continue;
+          }
+
+          const mergedTaskIdId = addToMergedTable("taskIds", taskIdString);
+
+          if (mergedTaskInfo.repositoryIds[mergedTaskIdId] === undefined) {
+            mergedTaskInfo.repositoryIds[mergedTaskIdId] = mergedRepositoryId;
+            mergedTaskInfo.jobNameIds[mergedTaskIdId] = mergedJobNameId;
+          }
+
+          run.taskIdId = mergedTaskIdId;
+
+          if (statusGroup.messageIds && statusGroup.messageIds[i] !== null) {
+            const message = data.tables.messages[statusGroup.messageIds[i]];
+            run.messageId = addToMergedTable("messages", message);
+          } else if (statusGroup.messageIds) {
+            run.messageId = null;
+          }
+
+          if (
+            statusGroup.crashSignatureIds &&
+            statusGroup.crashSignatureIds[i] !== null
+          ) {
+            const crashSig =
+              data.tables.crashSignatures[statusGroup.crashSignatureIds[i]];
+            run.crashSignatureId = addToMergedTable(
+              "crashSignatures",
+              crashSig
+            );
+          } else if (statusGroup.crashSignatureIds) {
+            run.crashSignatureId = null;
+          }
+
+          if (statusGroup.minidumps) {
+            run.minidump = statusGroup.minidumps[i];
+          }
+        }
+      }
+    }
+  }
+
+  function aggregateRunsByHour(
+    statusGroup,
+    includeMessages = false,
+    returnTaskIds = false
+  ) {
+    const buckets = new Map();
+    for (const run of statusGroup) {
+      const hourBucket = Math.floor(run.timestamp / 3600);
+      let key = hourBucket;
+
+      if (includeMessages && "messageId" in run) {
+        key = `${hourBucket}:m${run.messageId}`;
+      } else if (includeMessages && "crashSignatureId" in run) {
+        key = `${hourBucket}:c${run.crashSignatureId}`;
+      }
+
+      if (!buckets.has(key)) {
+        buckets.set(key, {
+          hour: hourBucket,
+          count: 0,
+          taskIdIds: [],
+          minidumps: [],
+          messageId: run.messageId,
+          crashSignatureId: run.crashSignatureId,
+        });
+      }
+      const bucket = buckets.get(key);
+      bucket.count++;
+      if (returnTaskIds && run.taskIdId !== undefined) {
+        bucket.taskIdIds.push(run.taskIdId);
+      }
+      if (returnTaskIds && "minidump" in run) {
+        bucket.minidumps.push(run.minidump ?? null);
+      }
+    }
+
+    const aggregated = Array.from(buckets.values()).sort((a, b) => {
+      if (a.hour !== b.hour) {
+        return a.hour - b.hour;
+      }
+      if (a.messageId !== b.messageId) {
+        if (a.messageId === null || a.messageId === undefined) {
+          return 1;
+        }
+        if (b.messageId === null || b.messageId === undefined) {
+          return -1;
+        }
+        return a.messageId - b.messageId;
+      }
+      if (a.crashSignatureId !== b.crashSignatureId) {
+        if (a.crashSignatureId === null || a.crashSignatureId === undefined) {
+          return 1;
+        }
+        if (b.crashSignatureId === null || b.crashSignatureId === undefined) {
+          return -1;
+        }
+        return a.crashSignatureId - b.crashSignatureId;
+      }
+      return 0;
+    });
+
+    const hours = [];
+    let previousBucket = 0;
+    for (const item of aggregated) {
+      hours.push(item.hour - previousBucket);
+      previousBucket = item.hour;
+    }
+
+    const result = {
+      hours,
+    };
+
+    if (returnTaskIds) {
+      result.taskIdIds = aggregated.map(a => a.taskIdIds);
+    } else {
+      result.counts = aggregated.map(a => a.count);
+    }
+
+    if (includeMessages) {
+      if (aggregated.some(a => "messageId" in a && a.messageId !== undefined)) {
+        result.messageIds = aggregated.map(a => a.messageId ?? null);
+      }
+      if (
+        aggregated.some(
+          a => "crashSignatureId" in a && a.crashSignatureId !== undefined
+        )
+      ) {
+        result.crashSignatureIds = aggregated.map(
+          a => a.crashSignatureId ?? null
+        );
+      }
+      if (returnTaskIds && aggregated.some(a => a.minidumps?.length)) {
+        result.minidumps = aggregated.map(a => a.minidumps);
+      }
+    }
+
+    return result;
+  }
+
+  console.log("Aggregating passing test runs by hour...");
+
+  const finalTestRuns = [];
+
+  for (let testId = 0; testId < mergedTestRuns.length; testId++) {
+    const testGroup = mergedTestRuns[testId];
+    if (!testGroup) {
+      continue;
+    }
+
+    finalTestRuns[testId] = [];
+
+    for (let statusId = 0; statusId < testGroup.length; statusId++) {
+      const statusGroup = testGroup[statusId];
+      if (!statusGroup || statusGroup.length === 0) {
+        continue;
+      }
+
+      const status = mergedTables.statuses[statusId];
+      const isPass = status.startsWith("PASS");
+
+      if (isPass) {
+        finalTestRuns[testId][statusId] = aggregateRunsByHour(statusGroup);
+      } else {
+        finalTestRuns[testId][statusId] = aggregateRunsByHour(
+          statusGroup,
+          true,
+          true
+        );
+      }
+    }
+  }
+
+  const testsWithFailures = finalTestRuns.filter(testGroup =>
+    testGroup?.some(
+      (sg, idx) => sg && !mergedTables.statuses[idx].startsWith("PASS")
+    )
+  ).length;
+
+  console.log("Sorting string tables by frequency...");
+
+  // Sort string tables by frequency for better compression
+  const dataStructure = {
+    tables: mergedTables,
+    taskInfo: mergedTaskInfo,
+    testInfo: mergedTestInfo,
+    testRuns: finalTestRuns,
+  };
+
+  const sortedData = sortStringTablesByFrequency(dataStructure);
+
+  const outputData = {
+    metadata: {
+      startDate,
+      endDate,
+      days: dates.length,
+      startTime,
+      generatedAt: new Date().toISOString(),
+      totalTestCount: mergedTestInfo.testPathIds.length,
+      testsWithFailures,
+      aggregatedFrom: dailyFiles.map(f => path.basename(f.filePath)),
+    },
+    tables: sortedData.tables,
+    taskInfo: sortedData.taskInfo,
+    testInfo: sortedData.testInfo,
+    testRuns: sortedData.testRuns,
+  };
+
+  const outputFileWithDetails = path.join(
+    OUTPUT_DIR,
+    "xpcshell-issues-with-taskids.json"
+  );
+  saveJsonFile(outputData, outputFileWithDetails);
+
+  // Create small file with all statuses aggregated
+  console.log("Creating small aggregated version...");
+
+  const smallTestRuns = sortedData.testRuns.map(testGroup => {
+    if (!testGroup) {
+      return testGroup;
+    }
+    return testGroup.map(statusGroup => {
+      if (!statusGroup) {
+        return statusGroup;
+      }
+      if (statusGroup.counts) {
+        return statusGroup;
+      }
+
+      const result = {
+        counts: statusGroup.taskIdIds.map(arr => arr.length),
+        hours: statusGroup.hours,
+      };
+
+      if (statusGroup.messageIds) {
+        result.messageIds = statusGroup.messageIds;
+      }
+
+      if (statusGroup.crashSignatureIds) {
+        result.crashSignatureIds = statusGroup.crashSignatureIds;
+      }
+
+      return result;
+    });
+  });
+
+  const smallOutput = {
+    metadata: outputData.metadata,
+    tables: {
+      testPaths: sortedData.tables.testPaths,
+      testNames: sortedData.tables.testNames,
+      statuses: sortedData.tables.statuses,
+      messages: sortedData.tables.messages,
+      crashSignatures: sortedData.tables.crashSignatures,
+      components: sortedData.tables.components,
+    },
+    testInfo: sortedData.testInfo,
+    testRuns: smallTestRuns,
+  };
+
+  const outputFileSmall = path.join(OUTPUT_DIR, "xpcshell-issues.json");
+  saveJsonFile(smallOutput, outputFileSmall);
+
+  console.log(
+    `Successfully created aggregated files with ${outputData.metadata.totalTestCount} tests`
+  );
+  console.log(`  Tests with failures: ${testsWithFailures}`);
+}
+
 async function main() {
   const forceRefetch = process.argv.includes("--force");
@@ -1274,6 +1768,11 @@ async function main() {
     await processDateData(date, forceRefetch);
   }
 
+  // Create aggregated failures file if processing multiple days
+  if (numDays > 1) {
+    await createAggregatedFailuresFile(dates);
+  }
+
   // Create index file with available dates
   const indexFile = path.join(OUTPUT_DIR, "index.json");
   const availableDates = [];
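The detailed-to-small conversion performed by the `smallTestRuns` mapping in the patch above can be illustrated in isolation: each per-bucket task-ID array collapses to its length, task IDs and minidumps are dropped, and message/crash-signature bucketing is preserved. A minimal standalone sketch (the function name and sample data are hypothetical):

```javascript
// Convert a detailed (with-taskids) status group into the small-file form
// described in the JSON_FORMAT.md changes above.
function toSmallStatusGroup(group) {
  if (group.counts) {
    // Already the aggregated passing format - pass through unchanged.
    return group;
  }
  const small = {
    counts: group.taskIdIds.map(arr => arr.length), // bucket sizes only
    hours: group.hours, // delta-compressed hour offsets are kept as-is
  };
  if (group.messageIds) {
    small.messageIds = group.messageIds;
  }
  if (group.crashSignatureIds) {
    small.crashSignatureIds = group.crashSignatureIds;
  }
  // taskIdIds and minidumps are intentionally dropped.
  return small;
}

// Hypothetical detailed group: two failure buckets, in hours 0 and 5.
const detailed = {
  taskIdIds: [[45, 67], [89, 12, 56]],
  hours: [0, 5],
  messageIds: [23, 23],
  minidumps: [["abc123", "def456"], ["ghi789", null, "jkl"]],
};
console.log(toSmallStatusGroup(detailed)); // counts become [2, 3]
```

This mirrors why the small file needs no `taskInfo` tables: once the task-ID arrays are reduced to lengths, nothing in the file references a task.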