## `web-csv-toolbox` (Default - Full Features)

```ts
import { parseString, EnginePresets, /* ... */ } from 'web-csv-toolbox';
```

Resolves to (platform-specific builds):
- `./dist/main.web.js`
- `./dist/main.node.js`

Exports:
- High-level parsing functions (`parseString`, `parseBinary`, etc.)
- Engine presets (`EnginePresets`):
  - `EnginePresets.stable()` - Stability optimized
  - `EnginePresets.responsive()` - UI responsiveness optimized
  - `EnginePresets.memoryEfficient()` - Memory efficiency optimized
  - `EnginePresets.fast()` - Parse speed optimized
  - `EnginePresets.responsiveFast()` - UI responsiveness + parse speed optimized
  - `EnginePresets.balanced()` - Balanced (general-purpose)
- Parser classes and factories: `FlexibleStringObjectCSVParser`, `FlexibleStringArrayCSVParser`, `FlexibleBinaryObjectCSVParser`, `FlexibleBinaryArrayCSVParser`, `createStringCSVParser`, `createBinaryCSVParser`, `StringCSVParserStream`, `BinaryCSVParserStream`
- Low-level primitives: `FlexibleStringCSVLexer`, `createStringCSVLexer`, `FlexibleCSVRecordAssembler`, `createCSVRecordAssembler`, `CSVLexerTransformer`, `CSVRecordAssemblerTransformer`
- Worker utilities (`WorkerPool`, `WorkerSession`)
- WASM functions (`loadWASM`, `isWASMReady`, `parseStringToArraySyncWASM`)

Characteristics:
- `loadWASM()` reduces first-parse latency

## `web-csv-toolbox/slim` (Slim Entry - Smaller Bundle)

```ts
import { parseString, loadWASM, parseStringToArraySyncWASM } from 'web-csv-toolbox/slim';
```
Resolves to (platform-specific builds):
- `./dist/slim.web.js`
- `./dist/slim.node.js`

Exports:
- `loadWASM()` - Must be called before using WASM functions
- `isSyncInitialized()` - Check WASM initialization status
- `parseStringToArraySyncWASM()` - Synchronous WASM parsing

Characteristics:
- Requires an explicit `loadWASM()` call before using WASM features

Usage pattern:
```ts
import { loadWASM, parseStringToArraySyncWASM } from 'web-csv-toolbox/slim';

// Must initialize WASM before use
await loadWASM();

// Now can use WASM functions
const records = parseStringToArraySyncWASM(csv);
```
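Because the slim entry never initializes WASM automatically, guard code typically checks `isSyncInitialized()` before calling a synchronous parser. The initialize-once behavior behind this pair can be sketched in plain TypeScript (stand-in names suffixed `Like`; this is not the library's internal implementation):

```ts
// Stand-in for the initialize-once pattern behind loadWASM()/isSyncInitialized().
let wasmReady = false;
let loading: Promise<void> | null = null;

async function loadWASMLike(): Promise<void> {
  // Deduplicate concurrent calls: only the first triggers initialization.
  if (loading === null) {
    loading = Promise.resolve().then(() => {
      wasmReady = true; // in the real library: compile + instantiate the WASM module
    });
  }
  return loading;
}

function isSyncInitializedLike(): boolean {
  return wasmReady;
}
```

Synchronous functions such as `parseStringToArraySyncWASM()` can then fail fast (or fall back to the JavaScript engine) whenever the initialization check reports `false`.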
## `web-csv-toolbox/worker`

Environment-specific Worker implementation.
Node.js:

```ts
// Resolves to: ./dist/worker.node.js
import workerUrl from 'web-csv-toolbox/worker';
```

Browser:

```ts
// Resolves to: ./dist/worker.web.js
import workerUrl from 'web-csv-toolbox/worker';
```

Usage with bundlers:

```ts
// Vite
import workerUrl from 'web-csv-toolbox/worker?url';

// Webpack
const workerUrl = new URL('web-csv-toolbox/worker', import.meta.url);
```
The `./worker` export uses Node.js conditional exports:

| Condition | File | Environment |
|---|---|---|
| `node` | `worker.node.js` | Node.js (Worker Threads) |
| `browser` | `worker.web.js` | Browser (Web Workers) |
| `default` | `worker.web.js` | Deno, other environments |
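Conceptually, condition resolution walks the export map in key order and picks the first key whose condition the runtime asserts (`default` always matches). A stand-in sketch of that lookup (not Node's actual resolver):

```ts
// Stand-in for Node-style conditional export resolution:
// keys are checked in declaration order; the first asserted condition wins.
const workerExportMap: Record<string, string> = {
  node: './dist/worker.node.js',
  browser: './dist/worker.web.js',
  default: './dist/worker.web.js',
};

function resolveConditional(
  exportMap: Record<string, string>,
  runtimeConditions: readonly string[],
): string {
  const asserted = new Set([...runtimeConditions, 'default']);
  for (const [condition, target] of Object.entries(exportMap)) {
    if (asserted.has(condition)) return target;
  }
  throw new Error('No matching export condition');
}

// resolveConditional(workerExportMap, ['node'])    -> './dist/worker.node.js'
// resolveConditional(workerExportMap, ['browser']) -> './dist/worker.web.js'
// resolveConditional(workerExportMap, [])          -> './dist/worker.web.js' (default)
```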
## `web-csv-toolbox/csv.wasm`

```ts
import wasmUrl from 'web-csv-toolbox/csv.wasm';
```
Resolves to: ./dist/csv.wasm
Pre-compiled WebAssembly module for high-performance CSV parsing.
Do you need this?
No, in most cases. The library automatically loads the WASM module when you use WASM-enabled features:
```ts
import { parse, loadWASM } from 'web-csv-toolbox';

// WASM module is automatically loaded
await loadWASM();

// Just use the API - the WASM file is handled internally
for await (const record of parse(csv, {
  engine: { wasm: true }
})) {
  console.log(record);
}
```
Current limitations:
⚠️ The WASM module is currently embedded as base64 in the JavaScript bundle for automatic initialization. Importing csv.wasm separately does not reduce bundle size in the current architecture.
Potential future use cases:
When combined with future distribution improvements, this export could enable additional use cases. See the Package Exports Explanation for a detailed discussion of current limitations and future improvements.
The library exports a 3-tier architecture for low-level CSV parsing:
**Parser (high-level)** - Recommended for most custom parsing needs. Combines Lexer + Assembler internally.
`createStringCSVParser(options?)` - Factory function for creating format-specific parsers
- Returns: `FlexibleStringObjectCSVParser` (default) or `FlexibleStringArrayCSVParser`
- Options: `CSVProcessingOptions` (no `engine` option)

```ts
import { createStringCSVParser } from 'web-csv-toolbox';

// Object format (default)
const objectParser = createStringCSVParser({
  header: ['name', 'age'] as const
});
const records = objectParser.parse('Alice,30\nBob,25\n');
// records: [{ name: 'Alice', age: '30' }, { name: 'Bob', age: '25' }]

// Array format
const arrayParser = createStringCSVParser({
  header: ['name', 'age'] as const,
  outputFormat: 'array'
});
const arrayRecords = arrayParser.parse('Alice,30\nBob,25\n');
// arrayRecords: [['Alice', '30'], ['Bob', '25']]
```
Direct class usage (format-specific):
- `FlexibleStringObjectCSVParser` - Always outputs object records
- `FlexibleStringArrayCSVParser` - Always outputs array records

`StringCSVParserStream` - TransformStream for string CSV parsing:
```ts
import { createStringCSVParser, StringCSVParserStream } from 'web-csv-toolbox';

const parser = createStringCSVParser({ header: ['name', 'age'] as const });
const stream = new StringCSVParserStream(parser);

// stringStream is any ReadableStream<string>; yourSink is any WritableStream
await stringStream.pipeThrough(stream).pipeTo(yourSink);
```
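In the snippet above, `stringStream` and `yourSink` stand for any `ReadableStream<string>` source and `WritableStream` sink. For completeness, here is how such a pair can be built with plain Web Streams APIs (no library code involved):

```ts
// A string source that emits one CSV chunk, and a sink that collects results.
const stringStream = new ReadableStream<string>({
  start(controller) {
    controller.enqueue('Alice,30\nBob,25\n');
    controller.close();
  },
});

const collected: unknown[] = [];
const yourSink = new WritableStream<unknown>({
  write(chunk) {
    collected.push(chunk);
  },
});
```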
`createBinaryCSVParser(options?)` - Factory function for creating format-specific binary parsers
- Returns: `FlexibleBinaryObjectCSVParser` (default) or `FlexibleBinaryArrayCSVParser`
- Options: `BinaryCSVProcessingOptions` (no `engine` option)

```ts
import { createBinaryCSVParser } from 'web-csv-toolbox';

// Object format (default)
const objectParser = createBinaryCSVParser({
  header: ['name', 'age'] as const,
  charset: 'utf-8'
});
const buffer = await fetch('data.csv').then(r => r.arrayBuffer());
const records = objectParser.parse(buffer);
// records: [{ name: 'Alice', age: '30' }, { name: 'Bob', age: '25' }]

// Array format
const arrayParser = createBinaryCSVParser({
  header: ['name', 'age'] as const,
  outputFormat: 'array',
  charset: 'utf-8'
});
const arrayRecords = arrayParser.parse(buffer);
// arrayRecords: [['Alice', '30'], ['Bob', '25']]
```
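Conceptually, the `charset` option corresponds to decoding the incoming bytes to text before tokenization begins; a minimal illustration with plain Web APIs:

```ts
// What `charset: 'utf-8'` amounts to conceptually: bytes -> text -> string parsing.
const bytes = new TextEncoder().encode('Alice,30\nBob,25\n');
const text = new TextDecoder('utf-8').decode(bytes);
// `text` could now be handed to a string parser such as createStringCSVParser().
```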
Direct class usage (format-specific):
- `FlexibleBinaryObjectCSVParser` - Always outputs object records
- `FlexibleBinaryArrayCSVParser` - Always outputs array records

`BinaryCSVParserStream` - TransformStream for binary CSV parsing:
```ts
import { createBinaryCSVParser, BinaryCSVParserStream } from 'web-csv-toolbox';

const parser = createBinaryCSVParser({
  header: ['name', 'age'] as const,
  charset: 'utf-8'
});
const stream = new BinaryCSVParserStream(parser);

const response = await fetch('data.csv');
await response.body
  .pipeThrough(stream)
  .pipeTo(yourSink);
```
**Lexer and Assembler (low-level)** - For advanced use cases requiring fine-grained control over tokenization and record assembly.
`FlexibleStringCSVLexer` - CSV tokenizer
- `createStringCSVLexer(options?)` - Create lexer instance

```ts
import { createStringCSVLexer } from 'web-csv-toolbox';

const lexer = createStringCSVLexer({ delimiter: '\t' });
const tokens = lexer.lex('name\tage\nAlice\t30');
```
`CSVLexerTransformer` - TransformStream for CSV tokenization
`FlexibleCSVRecordAssembler` - Token-to-record assembler
- `createCSVRecordAssembler(options?)` - Create assembler instance

```ts
import { createStringCSVLexer, createCSVRecordAssembler } from 'web-csv-toolbox';

const lexer = createStringCSVLexer();
const assembler = createCSVRecordAssembler({ header: ['name', 'age'] });
const tokens = lexer.lex('Alice,30\nBob,25');
const records = [...assembler.assemble(tokens)];
```
`CSVRecordAssemblerTransformer` - TransformStream for record assembly
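To see how the lexer → assembler pipeline composes, here is a stand-in built from plain `TransformStream`s. It naively splits on commas and newlines; the real `CSVLexerTransformer` handles quoting, escapes, and chunk boundaries:

```ts
// Stand-in two-stage pipeline mimicking
// CSVLexerTransformer -> CSVRecordAssemblerTransformer.
const header = ['name', 'age'] as const;

const lexerLike = new TransformStream<string, string[]>({
  transform(chunk, controller) {
    for (const line of chunk.split('\n')) {
      if (line.length > 0) controller.enqueue(line.split(','));
    }
  },
});

const assemblerLike = new TransformStream<string[], Record<string, string>>({
  transform(fields, controller) {
    controller.enqueue(
      Object.fromEntries(header.map((h, i) => [h, fields[i]])),
    );
  },
});

const records: Record<string, string>[] = [];
await new ReadableStream<string>({
  start(controller) {
    controller.enqueue('Alice,30\nBob,25\n');
    controller.close();
  },
})
  .pipeThrough(lexerLike)
  .pipeThrough(assemblerLike)
  .pipeTo(new WritableStream({ write(r) { records.push(r); } }));
// records: [{ name: 'Alice', age: '30' }, { name: 'Bob', age: '25' }]
```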
**Custom parsers** - Build completely custom parsers using the primitives above. See the Custom CSV Parser Guide for details.
## `web-csv-toolbox/package.json`

```ts
import pkg from 'web-csv-toolbox/package.json';
```
Resolves to: ./package.json
Access to package metadata (version, etc.).
All exports include TypeScript declarations:
```jsonc
{
  "exports": {
    ".": {
      "types": "./dist/web-csv-toolbox.d.ts",
      "default": "./dist/web-csv-toolbox.js"
    },
    "./worker": {
      "node": {
        "types": "./dist/worker.node.d.ts",
        "default": "./dist/worker.node.js"
      }
      // ...
    }
  }
}
```