chore(TRUEREF-0022): fix lint errors and update architecture docs

- Fix 15 ESLint errors across pipeline workers, SSE endpoints, and UI
- Replace explicit any with proper entity types in worker entries
- Remove unused imports and variables (basename, SSEEvent, getBroadcasterFn, seedRules)
- Use empty catch clauses instead of unused error variables
- Use SvelteSet for reactive Set state in repository page
- Fix operator precedence in nullish coalescing expression
- Replace $state+$effect with $derived for concurrency input
- Use resolve() directly in href for navigation lint rule
- Update ARCHITECTURE.md and FINDINGS.md for worker-thread architecture
Author: Giancarmine Salucci
Date: 2026-03-30 17:28:38 +02:00
Parent: 7630740403
Commit: 6297edf109
11 changed files with 85 additions and 69 deletions


@@ -1,15 +1,16 @@
 # Architecture
-Last Updated: 2026-03-27T00:24:13.000Z
+Last Updated: 2026-03-30T00:00:00.000Z
 ## Overview
-TrueRef is a TypeScript-first, self-hosted documentation retrieval platform built on SvelteKit. The repository contains a Node-targeted web application, a REST API, a Model Context Protocol server, and a server-side indexing pipeline backed by SQLite via better-sqlite3 and Drizzle ORM.
+TrueRef is a TypeScript-first, self-hosted documentation retrieval platform built on SvelteKit. The repository contains a Node-targeted web application, a REST API, a Model Context Protocol server, and a multi-threaded server-side indexing pipeline backed by SQLite via better-sqlite3 and Drizzle ORM.
-- Primary language: TypeScript (110 files) with a small amount of JavaScript configuration (2 files)
+- Primary language: TypeScript (141 files) with a small amount of JavaScript configuration (2 files)
-- Application type: Full-stack SvelteKit application with server-side indexing and retrieval services
+- Application type: Full-stack SvelteKit application with worker-threaded indexing and retrieval services
 - Runtime framework: SvelteKit with adapter-node
-- Storage: SQLite with Drizzle-managed schema plus hand-written FTS5 setup
+- Storage: SQLite (WAL mode) with Drizzle-managed schema plus hand-written FTS5 setup
+- Concurrency: Node.js worker_threads for parse and embedding work
 - Testing: Vitest with separate client and server projects
 ## Project Structure
@@ -25,7 +26,7 @@ TrueRef is a TypeScript-first, self-hosted documentation retrieval platform buil
 ### src/routes
-Contains the UI entry points and API routes. The API tree under src/routes/api/v1 is the public HTTP contract for repository management, indexing jobs, search/context retrieval, settings, filesystem browsing, and JSON schema discovery.
+Contains the UI entry points and API routes. The API tree under src/routes/api/v1 is the public HTTP contract for repository management, indexing jobs, search/context retrieval, settings, filesystem browsing, JSON schema discovery, real-time SSE progress streaming, and job control (pause/resume/cancel).
 ### src/lib/server/db
@@ -33,7 +34,15 @@ Owns SQLite schema definitions, migration bootstrapping, and FTS initialization.
 ### src/lib/server/pipeline
-Coordinates crawl, parse, chunk, store, and optional embedding generation work. Startup recovery marks stale jobs as failed, resets repositories stuck in indexing state, initializes singleton queue/pipeline instances, and drains queued work after restart.
+Coordinates crawl, parse, chunk, store, and optional embedding generation work using a worker thread pool. The pipeline module consists of:
+- **WorkerPool** (`worker-pool.ts`): Manages a configurable number of Node.js `worker_threads` for parse jobs and an optional dedicated embed worker. Dispatches jobs round-robin to idle workers, enforces per-repository serialisation (one active job per repo), auto-respawns crashed workers, and supports runtime concurrency adjustment via `setMaxConcurrency()`. Falls back to main-thread execution when worker scripts are not found.
+- **Parse worker** (`worker-entry.ts`): Runs in a worker thread. Opens its own `better-sqlite3` connection (WAL mode, `busy_timeout = 5000`), constructs a local `IndexingPipeline` instance, and processes jobs by posting `progress`, `done`, or `failed` messages back to the parent.
+- **Embed worker** (`embed-worker-entry.ts`): Dedicated worker for embedding generation. Loads the embedding profile from the database, creates an `EmbeddingService`, and processes embed requests after the parse worker finishes a job.
+- **ProgressBroadcaster** (`progress-broadcaster.ts`): Server-side pub/sub for real-time SSE streaming. Supports per-job, per-repository, and global subscriptions. Caches the last event per job for reconnect support.
+- **Worker types** (`worker-types.ts`): Shared TypeScript discriminated union types for `ParseWorkerRequest`/`ParseWorkerResponse` and `EmbedWorkerRequest`/`EmbedWorkerResponse` message protocols.
+- **Startup** (`startup.ts`): Recovers stale jobs, constructs singleton `JobQueue`, `IndexingPipeline`, `WorkerPool`, and `ProgressBroadcaster` instances, reads concurrency settings from the database, and drains queued work after restart.
+- **JobQueue** (`job-queue.ts`): SQLite-backed queue that delegates to the `WorkerPool` when available, with pause/resume/cancel support.
 ### src/lib/server/search
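The discriminated-union message protocol named above can be sketched as follows. Only the `type` discriminant and the `progress`/`done`/`failed` variants are stated in the docs; every other field name here is an illustrative assumption, not the actual TrueRef shape.

```typescript
// Hypothetical shape of the parse-worker response union; only the `type`
// discriminant and the three variant names come from the architecture notes.
type ParseWorkerResponse =
	| { type: 'progress'; jobId: string; processedFiles: number; totalFiles: number }
	| { type: 'done'; jobId: string }
	| { type: 'failed'; jobId: string; error: string };

// Switching on `type` narrows the union, so each branch sees only its fields.
function describeMessage(msg: ParseWorkerResponse): string {
	switch (msg.type) {
		case 'progress':
			return `${msg.jobId}: ${msg.processedFiles}/${msg.totalFiles} files`;
		case 'done':
			return `${msg.jobId}: done`;
		case 'failed':
			return `${msg.jobId}: failed (${msg.error})`;
	}
}
```

Because the switch is exhaustive over the union, the compiler flags any message variant added later but not handled.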
@@ -49,16 +58,18 @@ Provides a thin compatibility layer over the HTTP API. The MCP server exposes re
 ## Design Patterns
-- No explicit design patterns detected from semantic analysis.
-- The implementation does consistently use service classes such as RepositoryService, SearchService, and HybridSearchService for business logic.
-- Mapping and entity layers separate raw database rows from domain objects through mapper/entity pairs such as RepositoryMapper and RepositoryEntity.
-- Pipeline startup uses module-level singleton state for JobQueue and IndexingPipeline lifecycle management.
+- The WorkerPool implements an **observer/callback pattern**: the pool owner provides `onProgress`, `onJobDone`, `onJobFailed`, `onEmbedDone`, and `onEmbedFailed` callbacks at construction time, and the pool invokes them when workers post messages.
+- ProgressBroadcaster implements a **pub/sub pattern** with three subscription tiers (per-job, per-repository, global) and last-event caching for SSE reconnect.
+- The implementation consistently uses **service classes** such as RepositoryService, SearchService, and HybridSearchService for business logic.
+- Mapping and entity layers separate raw database rows from domain objects through **mapper/entity pairs** such as RepositoryMapper and RepositoryEntity.
+- Pipeline startup uses **module-level singletons** for JobQueue, IndexingPipeline, WorkerPool, and ProgressBroadcaster lifecycle management, with accessor functions (getQueue, getPool, getBroadcaster) for route handlers.
+- Worker message protocols use **TypeScript discriminated unions** (`type` field) for type-safe worker ↔ parent communication.
 ## Key Components
 ### SvelteKit server bootstrap
-src/hooks.server.ts initializes the database, loads persisted embedding configuration, creates the optional EmbeddingService, starts the indexing pipeline, and applies CORS headers to all /api routes.
+src/hooks.server.ts initializes the database, loads persisted embedding configuration, creates the optional EmbeddingService, reads indexing concurrency settings from the database, starts the indexing pipeline with WorkerPool and ProgressBroadcaster via `initializePipeline(db, embeddingService, { concurrency, dbPath })`, and applies CORS headers to all /api routes.
 ### Database layer
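The pub/sub pattern with last-event caching described for ProgressBroadcaster can be illustrated with a minimal sketch. The real class has a different API and three subscription tiers; this toy version shows only two tiers (per-job and global) and the reconnect-replay idea, and every name in it is made up for the example.

```typescript
type JobEvent = { jobId: string; data: string };
type Listener = (event: JobEvent) => void;

// Toy broadcaster: per-job and global subscriptions, with the last event per
// job cached so a late subscriber (e.g. an SSE reconnect) catches up at once.
class MiniBroadcaster {
	private byJob = new Map<string, Set<Listener>>();
	private global = new Set<Listener>();
	private lastEvent = new Map<string, JobEvent>();

	subscribeJob(jobId: string, fn: Listener): () => void {
		let set = this.byJob.get(jobId);
		if (!set) {
			set = new Set();
			this.byJob.set(jobId, set);
		}
		set.add(fn);
		const cached = this.lastEvent.get(jobId);
		if (cached) fn(cached); // replay the cached event for reconnect support
		return () => set.delete(fn); // unsubscribe handle
	}

	subscribeGlobal(fn: Listener): () => void {
		this.global.add(fn);
		return () => this.global.delete(fn);
	}

	publish(event: JobEvent): void {
		this.lastEvent.set(event.jobId, event); // remember for late subscribers
		for (const fn of this.byJob.get(event.jobId) ?? []) fn(event);
		for (const fn of this.global) fn(event);
	}
}
```

Returning an unsubscribe closure from each subscribe call keeps listener lifetime management at the call site, which matches how SSE route handlers typically clean up on stream close.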
@@ -80,6 +91,22 @@ src/lib/server/services/repository.service.ts provides CRUD and statistics for i
 src/mcp/index.ts creates the MCP server, registers the two supported tools, and exposes them over stdio or streamable HTTP.
+### Worker thread pool
+src/lib/server/pipeline/worker-pool.ts manages a pool of Node.js worker threads. Parse workers run the full crawl → parse → store pipeline inside isolated threads with their own better-sqlite3 connections (WAL mode enables concurrent readers). An optional embed worker handles embedding generation in a separate thread. The pool enforces per-repository serialisation, auto-respawns crashed workers, and supports runtime concurrency changes persisted through the settings table.
+### SSE streaming
+src/lib/server/pipeline/progress-broadcaster.ts provides real-time Server-Sent Event streaming of indexing progress. Route handlers in src/routes/api/v1/jobs/stream and src/routes/api/v1/jobs/[id]/stream expose SSE endpoints. The broadcaster supports per-job, per-repository, and global subscriptions, with last-event caching for reconnect via the `Last-Event-ID` header.
+### Job control
+src/routes/api/v1/jobs/[id]/pause, resume, and cancel endpoints allow runtime control of indexing jobs. The JobQueue supports pause/resume/cancel state transitions persisted to SQLite.
+### Indexing settings
+src/routes/api/v1/settings/indexing exposes GET and PUT for indexing concurrency. PUT validates and clamps the value to `max(cpus - 1, 1)`, persists it to the settings table, and live-updates the WorkerPool via `setMaxConcurrency()`.
 ## Dependencies
 ### Production
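The clamp rule quoted for the indexing-settings endpoint can be written out as a small helper. `clampConcurrency` is a hypothetical name for illustration, not the endpoint's actual code; only the `max(cpus - 1, 1)` bound comes from the docs.

```typescript
// Hypothetical helper mirroring the PUT validation rule described above:
// the accepted value is clamped to the range [1, max(cpuCount - 1, 1)].
function clampConcurrency(requested: number, cpuCount: number): number {
	const upper = Math.max(cpuCount - 1, 1); // leave one core for the main thread
	return Math.min(Math.max(Math.floor(requested), 1), upper);
}
```

On a single-core host the upper bound collapses to 1, so the worker pool never drops to zero workers.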
@@ -93,6 +120,7 @@ src/mcp/index.ts creates the MCP server, registers the two supported tools, and
 - @sveltejs/kit and @sveltejs/adapter-node: application framework and Node deployment target
 - drizzle-kit and drizzle-orm: schema management and typed database access
+- esbuild: worker thread entry point bundling (build/workers/)
 - vite and @tailwindcss/vite: bundling and Tailwind integration
 - vitest and @vitest/browser-playwright: server and browser test execution
 - eslint, typescript-eslint, eslint-plugin-svelte, prettier, prettier-plugin-svelte, prettier-plugin-tailwindcss: linting and formatting
@@ -116,12 +144,13 @@ The frontend and backend share the same SvelteKit repository, but most non-UI be
 ### Indexing flow
-1. Server startup runs initializeDatabase() and initializePipeline() from src/hooks.server.ts.
+1. Server startup runs initializeDatabase() and initializePipeline() from src/hooks.server.ts, which creates the WorkerPool, ProgressBroadcaster, and JobQueue singletons.
-2. The pipeline recovers stale jobs, initializes crawler/parser infrastructure, and resumes queued work.
+2. The pipeline recovers stale jobs (marks running → failed, indexing → error), reads concurrency settings, and resumes queued work.
-3. Crawlers ingest GitHub or local repository contents.
+3. When a job is enqueued, the JobQueue delegates to the WorkerPool, which dispatches work to an idle parse worker thread.
-4. Parsers split files into document and snippet records with token counts and metadata.
+4. Each parse worker opens its own better-sqlite3 connection (WAL mode) and runs the full crawl → parse → store pipeline, posting progress messages back to the parent thread.
-5. Database modules persist repositories, documents, snippets, versions, configs, and job state.
+5. The parent thread updates job progress in the database and broadcasts SSE events through the ProgressBroadcaster.
-6. If an embedding provider is configured, embedding services generate vectors for snippet search.
+6. On parse completion, if an embedding provider is configured, the WorkerPool enqueues an embed request to the dedicated embed worker, which generates vectors in its own thread.
+7. Job control endpoints allow pausing, resuming, or cancelling jobs at runtime.
 ### Retrieval flow
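The worker-to-parent progress posting in the indexing flow can be sketched with a bare `worker_threads` round trip. This is a generic illustration of the mechanism, not TrueRef's worker code; the inline eval'd worker and the message fields are invented for the example.

```typescript
import { Worker } from 'node:worker_threads';

// Inline worker source (eval'd as CommonJS): echoes a progress message and a
// done message for each job request, mimicking steps 4-5 above.
const workerSource = `
const { parentPort } = require('node:worker_threads');
parentPort.on('message', (msg) => {
	parentPort.postMessage({ type: 'progress', jobId: msg.jobId });
	parentPort.postMessage({ type: 'done', jobId: msg.jobId });
});
`;

// Run one job and collect the message types the parent receives, in order.
function runJob(jobId: string): Promise<string[]> {
	return new Promise((resolve, reject) => {
		const worker = new Worker(workerSource, { eval: true });
		const events: string[] = [];
		worker.on('message', (msg: { type: string }) => {
			events.push(msg.type);
			if (msg.type === 'done') {
				worker.terminate().then(() => resolve(events));
			}
		});
		worker.on('error', reject);
		worker.postMessage({ type: 'parse', jobId });
	});
}
```

The parent reacts to each posted message as it arrives, which is exactly the hook point where a real pipeline would update job state in the database and fan the event out over SSE.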
@@ -135,7 +164,8 @@ The frontend and backend share the same SvelteKit repository, but most non-UI be
 ## Build System
-- Build command: npm run build
+- Build command: npm run build (runs `vite build` then `node scripts/build-workers.mjs`)
+- Worker bundling: scripts/build-workers.mjs uses esbuild to compile worker-entry.ts and embed-worker-entry.ts into build/workers/ as ESM bundles (.mjs), with $lib path aliases resolved and better-sqlite3/@xenova/transformers marked external
 - Test command: npm run test
 - Primary local run command from package.json: npm run dev
 - MCP entry points: npm run mcp:start and npm run mcp:http


@@ -1,25 +1,29 @@
 # Findings
-Last Updated: 2026-03-27T00:24:13.000Z
+Last Updated: 2026-03-30T00:00:00.000Z
 ## Initializer Summary
-- JIRA: FEEDBACK-0001
+- JIRA: TRUEREF-0022
 - Refresh mode: REFRESH_IF_REQUIRED
-- Result: refreshed affected documentation only. ARCHITECTURE.md and FINDINGS.md were updated from current repository analysis; CODE_STYLE.md remained trusted and unchanged because the documented conventions still match the codebase.
+- Result: Refreshed ARCHITECTURE.md and FINDINGS.md. CODE_STYLE.md remained trusted — new worker thread code follows established conventions.
 ## Research Performed
-- Discovered source-language distribution, dependency manifest, import patterns, and project structure.
-- Read the retrieval, formatter, token-budget, parser, mapper, and response-model modules affected by the latest implementation changes.
-- Compared the trusted cache state with current behavior to identify which documentation files were actually stale.
-- Confirmed package scripts for build and test.
-- Confirmed Linux-native md5sum availability for documentation trust metadata.
+- Discovered 141 TypeScript/JavaScript source files (up from 110), with new pipeline worker, broadcaster, and SSE endpoint files.
+- Read worker-pool.ts, worker-entry.ts, embed-worker-entry.ts, worker-types.ts, progress-broadcaster.ts, startup.ts, job-queue.ts to understand the new worker thread architecture.
+- Read SSE endpoints (jobs/stream, jobs/[id]/stream) and job control endpoints (pause, resume, cancel).
+- Read indexing settings endpoint and hooks.server.ts to verify startup wiring changes.
+- Read build-workers.mjs and package.json to verify build system and dependency changes.
+- Compared trusted cache state with current codebase to identify ARCHITECTURE.md as stale.
+- Confirmed CODE_STYLE.md conventions still match the codebase — new code uses PascalCase classes, camelCase functions, tab indentation, ESM imports, and TypeScript discriminated unions consistent with existing style.
 ## Open Questions For Planner
 - Verify whether the retrieval response contract should document the new repository and version metadata fields formally in a public API reference beyond the architecture summary.
 - Verify whether parser chunking should evolve further from file-level and declaration-level boundaries to member-level semantic chunks for class-heavy codebases.
+- Verify whether the SSE streaming contract (event names, data shapes) should be documented in a dedicated API reference for external consumers.
+- Assess whether the WorkerPool fallback mode (main-thread execution when worker scripts are missing) needs explicit test coverage or should be removed in favour of a hard build requirement.
 ## Planner Notes Template


@@ -1,5 +1,5 @@
 <script lang="ts">
-import { resolve as resolveRoute } from '$app/paths';
+import { resolve } from '$app/paths';
 type RepositoryCardRepo = {
 id: string;
@@ -38,10 +38,6 @@
 error: 'Error'
 };
-const detailsHref = $derived(
-resolveRoute('/repos/[id]', { id: encodeURIComponent(repo.id) })
-);
 const totalSnippets = $derived(repo.totalSnippets ?? 0);
 const trustScore = $derived(repo.trustScore ?? 0);
 const embeddingCount = $derived(repo.embeddingCount ?? 0);
@@ -112,7 +108,7 @@
 {repo.state === 'indexing' ? 'Indexing...' : 'Re-index'}
 </button>
 <a
-href={detailsHref}
+href={resolve('/repos/[id]', { id: encodeURIComponent(repo.id) })}
 class="rounded-lg border border-gray-200 px-3 py-1.5 text-sm text-gray-700 hover:bg-gray-50"
 >
 Details


@@ -3,7 +3,7 @@ import Database from 'better-sqlite3';
 import { EmbeddingService } from '$lib/server/embeddings/embedding.service.js';
 import { createProviderFromProfile } from '$lib/server/embeddings/registry.js';
 import { EmbeddingProfileMapper } from '$lib/server/mappers/embedding-profile.mapper.js';
-import { EmbeddingProfileEntity } from '$lib/server/models/embedding-profile.js';
+import { EmbeddingProfileEntity, type EmbeddingProfileEntityProps } from '$lib/server/models/embedding-profile.js';
 import type { EmbedWorkerRequest, EmbedWorkerResponse, WorkerInitData } from './worker-types.js';
 const { dbPath, embeddingProfileId } = workerData as WorkerInitData;
@@ -35,7 +35,7 @@ if (!rawProfile) {
 process.exit(1);
 }
-const profileEntity = new EmbeddingProfileEntity(rawProfile as any);
+const profileEntity = new EmbeddingProfileEntity(rawProfile as EmbeddingProfileEntityProps);
 const profile = EmbeddingProfileMapper.fromEntity(profileEntity);
 // Create provider and embedding service


@@ -1,5 +1,5 @@
 import { describe, it, expect } from 'vitest';
-import { ProgressBroadcaster, type SSEEvent } from './progress-broadcaster.js';
+import { ProgressBroadcaster } from './progress-broadcaster.js';
 describe('ProgressBroadcaster', () => {
 it('subscribe returns a readable stream', async () => {


@@ -16,7 +16,8 @@ import { LocalCrawler } from '$lib/server/crawler/local.crawler.js';
 import { IndexingPipeline } from './indexing.pipeline.js';
 import { JobQueue } from './job-queue.js';
 import { WorkerPool } from './worker-pool.js';
-import { initBroadcaster, getBroadcaster as getBroadcasterFn } from './progress-broadcaster.js';
+import type { ParseWorkerResponse } from './worker-types.js';
+import { initBroadcaster } from './progress-broadcaster.js';
 import type { ProgressBroadcaster } from './progress-broadcaster.js';
 import path from 'node:path';
 import { fileURLToPath } from 'node:url';
@@ -101,7 +102,7 @@ export function initializePipeline(
 workerScript,
 embedWorkerScript,
 dbPath: options.dbPath,
-onProgress: (jobId: string, msg: any) => {
+onProgress: (jobId: string, msg: ParseWorkerResponse) => {
 // Update DB with progress
 db.prepare(
 `UPDATE indexing_jobs


@@ -4,7 +4,7 @@ import { IndexingPipeline } from './indexing.pipeline.js';
 import { crawl as githubCrawl } from '$lib/server/crawler/github.crawler.js';
 import { LocalCrawler } from '$lib/server/crawler/local.crawler.js';
 import { IndexingJobMapper } from '$lib/server/mappers/indexing-job.mapper.js';
-import { IndexingJobEntity } from '$lib/server/models/indexing-job.js';
+import { IndexingJobEntity, type IndexingJobEntityProps } from '$lib/server/models/indexing-job.js';
 import type { ParseWorkerRequest, ParseWorkerResponse, WorkerInitData } from './worker-types.js';
 import type { IndexingStage } from '$lib/types.js';
@@ -30,7 +30,7 @@ parentPort!.on('message', async (msg: ParseWorkerRequest) => {
 if (!rawJob) {
 throw new Error(`Job ${msg.jobId} not found`);
 }
-const job = IndexingJobMapper.fromEntity(new IndexingJobEntity(rawJob as any));
+const job = IndexingJobMapper.fromEntity(new IndexingJobEntity(rawJob as IndexingJobEntityProps));
 await pipeline.run(
 job,


@@ -1,6 +1,5 @@
 import { Worker } from 'node:worker_threads';
 import { existsSync } from 'node:fs';
-import { basename } from 'node:path';
 import type { ParseWorkerRequest, ParseWorkerResponse, EmbedWorkerRequest, EmbedWorkerResponse, WorkerInitData } from './worker-types.js';
 export interface WorkerPoolOptions {
@@ -286,7 +285,7 @@ export class WorkerPool {
 for (const worker of this.workers) {
 try {
 worker.postMessage(msg);
-} catch (e) {
+} catch {
 // Worker might already be exited
 }
 }
@@ -296,7 +295,7 @@ export class WorkerPool {
 try {
 const embedMsg: EmbedWorkerRequest = { type: 'shutdown' };
 this.embedWorker.postMessage(embedMsg);
-} catch (e) {
+} catch {
 // Worker might already be exited
 }
 }
@@ -317,7 +316,7 @@ export class WorkerPool {
 for (const worker of this.workers) {
 try {
 worker.terminate();
-} catch (e) {
+} catch {
 // Already terminated
 }
 }
@@ -325,7 +324,7 @@ export class WorkerPool {
 if (this.embedWorker) {
 try {
 this.embedWorker.terminate();
-} catch (e) {
+} catch {
 // Already terminated
 }
 }


@@ -217,15 +217,6 @@ function seedEmbedding(client: Database.Database, snippetId: string, values: num
 .run(snippetId, values.length, Buffer.from(Float32Array.from(values).buffer), NOW_S);
 }
-function seedRules(client: Database.Database, repositoryId: string, rules: string[]) {
-client
-.prepare(
-`INSERT INTO repository_configs (repository_id, rules, updated_at)
-VALUES (?, ?, ?)`
-)
-.run(repositoryId, JSON.stringify(rules), NOW_S);
-}
 describe('API contract integration', () => {
 beforeEach(() => {
 db = createTestDb();


@@ -2,6 +2,7 @@
 import { goto } from '$app/navigation';
 import { resolve as resolveRoute } from '$app/paths';
 import { onMount } from 'svelte';
+import { SvelteSet } from 'svelte/reactivity';
 import type { PageData } from './$types';
 import type { Repository, IndexingJob } from '$lib/types';
 import ConfirmDialog from '$lib/components/ConfirmDialog.svelte';
@@ -48,7 +49,7 @@
 // Discover tags state
 let discoverBusy = $state(false);
 let discoveredTags = $state<Array<{ tag: string; commitHash: string }>>([]);
-let selectedDiscoveredTags = $state<Set<string>>(new Set());
+let selectedDiscoveredTags = new SvelteSet<string>();
 let showDiscoverPanel = $state(false);
 let registerBusy = $state(false);
@@ -330,7 +331,7 @@
 discoveredTags = (d.tags ?? []).filter(
 (t: { tag: string; commitHash: string }) => !registeredTags.has(t.tag)
 );
-selectedDiscoveredTags = new Set(discoveredTags.map((t) => t.tag));
+selectedDiscoveredTags = new SvelteSet(discoveredTags.map((t) => t.tag));
 showDiscoverPanel = true;
 } catch (e) {
 errorMessage = (e as Error).message;
@@ -340,13 +341,11 @@
 }
 function toggleDiscoveredTag(tag: string) {
-const next = new Set(selectedDiscoveredTags);
-if (next.has(tag)) {
-next.delete(tag);
+if (selectedDiscoveredTags.has(tag)) {
+selectedDiscoveredTags.delete(tag);
 } else {
-next.add(tag);
+selectedDiscoveredTags.add(tag);
 }
-selectedDiscoveredTags = next;
 }
 async function handleRegisterSelected() {
@@ -381,7 +380,7 @@
 activeVersionJobs = next;
 showDiscoverPanel = false;
 discoveredTags = [];
-selectedDiscoveredTags = new Set();
+selectedDiscoveredTags = new SvelteSet();
 await loadVersions();
 } catch (e) {
 errorMessage = (e as Error).message;
@@ -550,7 +549,7 @@
 onclick={() => {
 showDiscoverPanel = false;
 discoveredTags = [];
-selectedDiscoveredTags = new Set();
+selectedDiscoveredTags = new SvelteSet();
 }}
 class="text-xs text-blue-600 hover:underline"
 >
@@ -649,7 +648,7 @@
 <div class="flex justify-between text-xs text-gray-500">
 <span>
 {#if job?.stageDetail}{job.stageDetail}{:else}{(job?.processedFiles ?? 0).toLocaleString()} / {(job?.totalFiles ?? 0).toLocaleString()} files{/if}
-{#if job?.stage}{' - ' + stageLabels[job.stage] ?? job.stage}{/if}
+{#if job?.stage}{' - ' + (stageLabels[job.stage] ?? job.stage)}{/if}
 </span>
 <span>{job?.progress ?? 0}%</span>
 </div>


@@ -66,16 +66,12 @@
 let saveError = $state<string | null>(null);
 let saveStatusTimer: ReturnType<typeof setTimeout> | null = null;
-let concurrencyInput = $state<number>(0);
+let concurrencyInput = $derived(data.indexingConcurrency);
 let concurrencySaving = $state(false);
 let concurrencySaveStatus = $state<'idle' | 'ok' | 'error'>('idle');
 let concurrencySaveError = $state<string | null>(null);
 let concurrencySaveStatusTimer: ReturnType<typeof setTimeout> | null = null;
-$effect(() => {
-concurrencyInput = data.indexingConcurrency;
-});
 const currentSettings = $derived(settingsOverride ?? data.settings);
 const activeProfile = $derived(currentSettings.activeProfile);
 const activeConfigEntries = $derived(activeProfile?.configEntries ?? []);