Parameters:

- options (optional): StringCSVLexerOptions<Delimiter, Quotation, false> — CSV lexer options including delimiter, quotation, abort signal, and engine.
- streamOptions (optional): StringCSVLexerTransformerStreamOptions — Stream-specific options such as backpressureCheckInterval.
- writableStrategy (optional): QueuingStrategy<string> — Strategy for the writable side (default: { highWaterMark: 65536, size: (chunk) => chunk.length }).
- readableStrategy (optional): QueuingStrategy<TokenNoLocation> — Strategy for the readable side (default: { highWaterMark: 1024, size: () => 1 }).
Returns a StringCSVLexerTransformer instance configured with the specified options.
See Choosing the Right API for guidance on selecting the appropriate API level.
import { createStringCSVLexerTransformer } from 'web-csv-toolbox';

new ReadableStream({
  start(controller) {
    controller.enqueue("name,age\r\n");
    controller.enqueue("Alice,20\r\n");
    controller.close();
  }
})
  .pipeThrough(createStringCSVLexerTransformer())
  .pipeTo(new WritableStream({
    write(token) {
      console.log(token);
    }
  }));
// { type: Field, value: "name", location: {...} }
// { type: FieldDelimiter, value: ",", location: {...} }
// ...
import { createStringCSVLexerTransformer } from 'web-csv-toolbox';

const tsvTransformer = createStringCSVLexerTransformer({
  delimiter: '\t'
});

tsvStream.pipeThrough(tsvTransformer);
import { createStringCSVLexerTransformer } from 'web-csv-toolbox';

const transformer = createStringCSVLexerTransformer(
  { delimiter: ',' },
  { backpressureCheckInterval: 50 },
  { highWaterMark: 131072, size: (chunk) => chunk.length },
  new CountQueuingStrategy({ highWaterMark: 2048 })
);

const response = await fetch('large-file.csv');
await response.body
  .pipeThrough(new TextDecoderStream())
  .pipeThrough(transformer)
  .pipeTo(yourProcessor);
Factory function to create a StringCSVLexerTransformer instance.
This function internally creates a StringCSVLexer and wraps it in a StringCSVLexerTransformer, providing a simpler API for stream-based CSV tokenization.