web-csv-toolbox - v0.15.0

    Class BinaryCSVParserStream<Header, Format>

    A transform stream that converts a stream of binary data (BufferSource) into a stream of CSV records. Wraps a BinaryCSVParser instance to provide streaming CSV parsing.

The constructor accepts:

• A BinaryCSVParser instance to use for parsing (required). Use createBinaryCSVParser to create one.
• Stream-specific options (backpressureCheckInterval, etc.).
• A queuing strategy for the writable side (default: ByteLengthQueuingStrategy({ highWaterMark: 65536 })).
• A queuing strategy for the readable side (default: CountQueuingStrategy({ highWaterMark: 256 })).

    Recommended: Use the factory function

    For simpler usage, use createBinaryCSVParserStream which handles parser creation internally:

    import { createBinaryCSVParserStream } from 'web-csv-toolbox';

const response = await fetch('data.csv');
await response.body
  .pipeThrough(createBinaryCSVParserStream({ header: ['name', 'age'] }))
  .pipeTo(yourProcessor);

    Direct instantiation (advanced)

    If you need direct access to the parser or want to reuse it, use the constructor directly:

    import { createBinaryCSVParser, BinaryCSVParserStream } from 'web-csv-toolbox';
    const parser = createBinaryCSVParser({ header: ['name', 'age'] });
    binaryStream.pipeThrough(new BinaryCSVParserStream(parser));

    Accepts any BufferSource type (Uint8Array, ArrayBuffer, or other TypedArray views) as input chunks.

    Queuing Strategy:

    • Writable side: ByteLengthQueuingStrategy with highWaterMark of 65536 bytes (64KB).
    • Readable side: CountQueuingStrategy with highWaterMark of 256 records.

    Backpressure Handling: The transformer monitors controller.desiredSize and yields to the event loop when backpressure is detected (desiredSize ≤ 0). This prevents blocking the main thread during heavy processing and allows the downstream consumer to catch up.
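The yield-on-backpressure pattern can be sketched with the standard Streams API alone. The transformer below is an illustrative stand-in (an uppercasing transform, not web-csv-toolbox's actual implementation), but it applies the same technique: check controller.desiredSize, and await a macrotask when the readable queue is full.

```typescript
// Illustrative sketch of the backpressure pattern described above,
// using only the standard Streams API (not web-csv-toolbox internals).
const yieldToEventLoop = (): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, 0));

const upperCaser = new TransformStream<string, string>(
  {
    async transform(chunk, controller) {
      // desiredSize <= 0 means the readable queue has reached its
      // highWaterMark; yield so the downstream consumer can drain it.
      if (controller.desiredSize !== null && controller.desiredSize <= 0) {
        await yieldToEventLoop();
      }
      controller.enqueue(chunk.toUpperCase());
    },
  },
  undefined,
  new CountQueuingStrategy({ highWaterMark: 2 }),
);
```

Because the wait happens inside transform, the writable side also backs up naturally, propagating backpressure to the source.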

    import { createBinaryCSVParserStream } from 'web-csv-toolbox';

    // Directly pipe fetch response body (no TextDecoderStream needed)
const response = await fetch('data.csv');
await response.body
  .pipeThrough(createBinaryCSVParserStream())
  .pipeTo(new WritableStream({
    write(record) {
      console.log(record); // { name: 'Alice', age: '30' }
    }
  }));

import { createBinaryCSVParser, BinaryCSVParserStream } from 'web-csv-toolbox';

const parser = createBinaryCSVParser({
  header: ['name', 'age'],
  charset: 'utf-8'
});
    const stream = new BinaryCSVParserStream(parser);

    binaryStream.pipeThrough(stream);

import { createBinaryCSVParser, BinaryCSVParserStream } from 'web-csv-toolbox';

    const parser = createBinaryCSVParser({ header: ['name', 'age'] });
    const stream = new BinaryCSVParserStream(parser);

const response = await fetch('large-file.csv');
await response.body
  .pipeThrough(stream)
  .pipeTo(yourProcessor);

    Type Parameters

    • Header extends ReadonlyArray<string> = readonly string[]

      The type of the header row

    • Format extends "object" | "array" = "object"

      Output format: 'object' or 'array'
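As a rough sketch of how the two type parameters shape the record type (the type alias below is defined here for illustration only; web-csv-toolbox's real CSVRecord definition may differ):

```typescript
// Hypothetical sketch: Format selects between header-keyed objects and
// positional string arrays. Not the library's actual CSVRecord type.
type CSVRecordSketch<
  Header extends ReadonlyArray<string>,
  Format extends 'object' | 'array',
> = Format extends 'object' ? Record<Header[number], string> : string[];

// Format 'object' (the default): records keyed by the header names.
const objRecord: CSVRecordSketch<readonly ['name', 'age'], 'object'> = {
  name: 'Alice',
  age: '30',
};

// Format 'array': records as positional string arrays.
const arrRecord: CSVRecordSketch<readonly ['name', 'age'], 'array'> = [
  'Alice',
  '30',
];
```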


    Properties

    parser: {
        parse(
            chunk?: BufferSource,
            options?: { stream?: boolean },
        ): IterableIterator<CSVRecord<Header, Format>>;
    }
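To illustrate the parse(chunk, { stream }) calling convention in the signature above, here is a hypothetical stand-in (a bare comma/newline splitter, not the real BinaryCSVParser, which handles quoting, charsets, and more): intermediate chunks are passed with stream: true so partial rows stay buffered, and a final call without stream: true flushes the remainder.

```typescript
// Hypothetical stand-in showing the parse(chunk?, { stream? }) calling
// convention; the real BinaryCSVParser handles quoting, charsets, etc.
function createStandInParser() {
  const decoder = new TextDecoder();
  let buffer = '';
  return {
    *parse(
      chunk?: BufferSource,
      options?: { stream?: boolean },
    ): IterableIterator<string[]> {
      if (chunk !== undefined) {
        buffer += decoder.decode(chunk, { stream: options?.stream });
      }
      const lines = buffer.split('\n');
      // While streaming, hold back the trailing partial line.
      buffer = options?.stream ? (lines.pop() ?? '') : '';
      for (const line of lines) {
        if (line.length > 0) yield line.split(',');
      }
    },
  };
}

const parser = createStandInParser();
const enc = new TextEncoder();

// Intermediate chunk: 'c,' is an incomplete row, so only ['a', 'b'] is yielded.
const first = [...parser.parse(enc.encode('a,b\nc,'), { stream: true })];
// Final call (no stream: true) flushes the buffered partial row as ['c', 'd'].
const rest = [...parser.parse(enc.encode('d\n'))];
```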
    readable: ReadableStream<CSVRecord<Header, Format, "keep">>

    The readable read-only property of the TransformStream interface returns the ReadableStream instance controlled by this TransformStream.


    writable: WritableStream<BufferSource>

    The writable read-only property of the TransformStream interface returns the WritableStream instance controlled by this TransformStream.


    Methods