This tutorial will guide you through the basics of using web-csv-toolbox to parse CSV data.
By the end of this tutorial, you'll be able to:

- Install web-csv-toolbox with your preferred package manager
- Parse CSV strings with the parse() function
- Parse CSV data directly from a fetch Response
- Customize the delimiter and supply your own header
- Stream large files while keeping memory usage constant
Install the package using your preferred package manager:
# npm
npm install web-csv-toolbox
# yarn
yarn add web-csv-toolbox
# pnpm
pnpm add web-csv-toolbox
Let's start with a simple example - parsing a CSV string:
import { parse } from 'web-csv-toolbox';
const csv = `name,age,city
Alice,30,New York
Bob,25,San Francisco
Charlie,35,Los Angeles`;
for await (const record of parse(csv)) {
  console.log(record);
}
Output:
{ name: 'Alice', age: '30', city: 'New York' }
{ name: 'Bob', age: '25', city: 'San Francisco' }
{ name: 'Charlie', age: '35', city: 'Los Angeles' }
Notice that:

- Each record is a plain object whose keys come from the header row
- All values are strings; web-csv-toolbox does not convert types for you
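If you need typed values, convert them after parsing. A minimal sketch (the Number() conversion is plain JavaScript, not a feature of web-csv-toolbox):

import { parse } from 'web-csv-toolbox';

const csv = `name,age,city
Alice,30,New York`;

for await (const record of parse(csv)) {
  // age arrives as the string '30'; convert it to a number ourselves
  const person = { ...record, age: Number(record.age) };
  console.log(person); // { name: 'Alice', age: 30, city: 'New York' }
}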
In real applications, you often fetch CSV data from a network:
import { parse } from 'web-csv-toolbox';
const response = await fetch('https://example.com/data.csv');
for await (const record of parse(response)) {
  console.log(record);
}
web-csv-toolbox accepts the Response object directly: it reads and decodes the body stream for you, so you don't need to call response.text() yourself.
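In practice you will usually want to check that the request succeeded before parsing. A minimal sketch, using the same placeholder URL:

import { parse } from 'web-csv-toolbox';

const response = await fetch('https://example.com/data.csv');

if (!response.ok) {
  // Fail fast on HTTP errors instead of trying to parse an error page
  throw new Error(`Failed to fetch CSV: ${response.status}`);
}

for await (const record of parse(response)) {
  console.log(record);
}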
You can customize how the CSV is parsed:
import { parse } from 'web-csv-toolbox';
// Tab-separated values
const tsv = `name\tage\tcity
Alice\t30\tNew York`;
for await (const record of parse(tsv, { delimiter: '\t' })) {
  console.log(record);
}
import { parse } from 'web-csv-toolbox';
// CSV without header row
const csv = `Alice,30,New York
Bob,25,San Francisco`;
for await (const record of parse(csv, { header: ['name', 'age', 'city'] })) {
  console.log(record);
}
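The options can also be combined. A minimal sketch, assuming delimiter and header can be passed together in one call, for headerless tab-separated input:

import { parse } from 'web-csv-toolbox';

// Headerless, tab-separated data
const tsv = `Alice\t30\tNew York
Bob\t25\tSan Francisco`;

for await (const record of parse(tsv, { delimiter: '\t', header: ['name', 'age', 'city'] })) {
  console.log(record);
}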
For large files, you can use streaming to keep memory usage constant:
import { parse } from 'web-csv-toolbox';
const response = await fetch('https://example.com/large-data.csv');
// Process one record at a time
for await (const record of parse(response)) {
  // Each record is processed immediately
  // Memory footprint: O(1) - only one record in memory at a time
  console.log(record);
}
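Because records arrive one at a time, you can aggregate as you go instead of buffering the whole file. A minimal sketch that keeps only running totals; the age column is assumed from the earlier examples:

import { parse } from 'web-csv-toolbox';

const response = await fetch('https://example.com/large-data.csv');

let count = 0;
let totalAge = 0;

for await (const record of parse(response)) {
  // Keep only running totals in memory, never the full dataset
  count += 1;
  totalAge += Number(record.age);
}

console.log(`Parsed ${count} records, average age ${totalAge / count}`);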
Now that you know the basics, you can explore the more advanced topics covered in the rest of the documentation.

In this tutorial, you learned how to:

- Install web-csv-toolbox
- Parse a CSV string with parse()
- Parse CSV data directly from a fetch Response
- Customize the delimiter and supply your own header
- Stream large files while keeping memory usage constant
Next: Try Working with Workers to learn about performance optimization.