High-speed mechanisms for loading large volumes of data from flat files or streams into a table, bypassing row-by-row INSERT overhead. The fastest way to populate or refresh large tables.

Efficiently bulk-loads data from files or streams into tables. Server-side variants (PostgreSQL COPY, MySQL LOAD DATA INFILE) read files on the database host and require elevated privileges; client-side variants (psql's \copy, MySQL's LOAD DATA LOCAL INFILE) work without server filesystem access. Dramatically faster than row-by-row INSERT for large data loads.

Compatibility

Database System    Support Status     Since Version
MySQL              ✓ Supported        3.23
PostgreSQL         ✓ Supported        7.0
SQL Server         ✓ Supported        6.5
Oracle             ✓ Supported        7
SQLite             ✗ Not Supported    —

Notes for each database appear under Per-Database Syntax & Notes below.

Details

Wrapping bulk inserts in a single transaction is the universal speedup when native bulk load is unavailable: it eliminates per-row fsync overhead. PostgreSQL's COPY is SQL-based and can be used from application code via libpq or drivers (e.g., pgcopy in Go, COPY FROM STDIN in Python's psycopg2). SQL Server's TABLOCK hint with BULK INSERT enables minimal logging under the simple or bulk-logged recovery models, significantly reducing log growth. Oracle's direct-path load (sqlldr DIRECT=TRUE) is the fastest option but bypasses triggers and some constraints, so run a validation pass afterward. MySQL's LOAD DATA LOCAL INFILE is blocked by default for security reasons and requires explicit opt-in on both server and client.
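The transaction trick can be sketched with Python's standard-library sqlite3 module (the table and row values are illustrative):

```python
import sqlite3

# In-memory database for the demo; a file-backed one shows the fsync cost
# of per-row commits far more starkly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL)")

rows = [(i, 100 + i % 50, round(i * 1.5, 2)) for i in range(10_000)]

# One explicit transaction around all inserts: a single commit (one fsync on
# a file-backed database) instead of one per row.
with conn:  # commits on success, rolls back on error
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])
```

The same pattern (explicit BEGIN ... COMMIT around the whole batch) applies in any driver when a native bulk loader is unavailable.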

Standard Syntax

-- PostgreSQL:
COPY orders (id, customer_id, amount) FROM '/data/orders.csv' CSV HEADER;

-- MySQL:
LOAD DATA INFILE '/data/orders.csv'
INTO TABLE orders
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;

Version Support

MySQL: Since 3.23
PostgreSQL: Since 7.0
SQL Server: Since 6.5
Oracle: Since 7
SQLite: Not supported

Per-Database Syntax & Notes

MySQL

LOAD DATA INFILE reads from a server-side file path. LOAD DATA LOCAL INFILE reads from the client machine (requires local_infile=ON on both client and server). The mysqlimport CLI tool wraps LOAD DATA INFILE. For cloud environments where file access is restricted, use INSERT with multi-row VALUES or a pipeline tool. CSV, TSV, and custom delimiter/enclosure formats supported.

-- From server-side file:
LOAD DATA INFILE '/var/lib/mysql-files/orders.csv'
INTO TABLE orders
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS
(id, customer_id, amount, order_date);

-- From client-side file:
LOAD DATA LOCAL INFILE '/home/user/orders.csv'
INTO TABLE orders
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;

-- Replace existing rows:
LOAD DATA INFILE '/data/orders.csv'
REPLACE INTO TABLE orders
FIELDS TERMINATED BY ',';
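The FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' format matches what Python's csv module emits with its default minimal quoting, so input files can be prepared like this (column names and rows are illustrative):

```python
import csv
import io

rows = [
    (1, 101, 99.99, "2024-01-15"),
    (2, 102, 49.50, "2024-01-16"),
]

# QUOTE_MINIMAL (the default) only wraps fields containing the delimiter or
# quote character, matching OPTIONALLY ENCLOSED BY '"'.
buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL, lineterminator="\n")
writer.writerow(["id", "customer_id", "amount", "order_date"])  # skipped by IGNORE 1 ROWS
writer.writerows(rows)

# Write buf.getvalue() to a path the server can read (e.g. under
# /var/lib/mysql-files/), or a client path when using LOAD DATA LOCAL INFILE.
print(buf.getvalue())
```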

PostgreSQL

COPY is the standard bulk load mechanism. Server-side COPY reads/writes files on the server (requires superuser or membership in the pg_read_server_files / pg_write_server_files roles). Client-side \copy (a psql meta-command) reads/writes files on the client machine without special privileges. COPY also accepts FROM STDIN / TO STDOUT for pipe-based loading. Supported formats: text (default), CSV, binary. COPY FROM cannot target a view; use INSTEAD OF triggers or a function. The pg_bulkload extension offers even faster loading for very large datasets.

-- From server-side CSV file (requires superuser or pg_read_server_files):
COPY orders (id, customer_id, amount, order_date)
FROM '/data/orders.csv' CSV HEADER;

-- From STDIN (useful in scripts / application code):
COPY orders FROM STDIN WITH (FORMAT csv, HEADER true);

-- Client-side (psql only, no superuser required):
-- \copy orders FROM '/home/user/orders.csv' CSV HEADER

-- To file (export):
COPY (SELECT * FROM orders WHERE order_date >= '2024-01-01')
TO '/data/orders_2024.csv' CSV HEADER;

-- Binary format (fastest, non-portable):
COPY orders FROM '/data/orders.bin' WITH (FORMAT binary);
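The FROM STDIN form is how driver-level loading usually works: the client streams a file-like object straight to the server. A minimal sketch using psycopg2's copy_expert, with the connection details assumed and the actual psycopg2 call commented out so the snippet runs standalone:

```python
import csv
import io

rows = [(1, 101, 99.99, "2024-01-15"), (2, 102, 49.50, "2024-01-16")]

# Build an in-memory CSV stream to feed COPY ... FROM STDIN.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
buf.seek(0)

copy_sql = "COPY orders (id, customer_id, amount, order_date) FROM STDIN WITH (FORMAT csv)"

# With a live connection (dsn is a placeholder), psycopg2 streams the buffer:
# import psycopg2
# with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
#     cur.copy_expert(copy_sql, buf)

print(buf.getvalue(), end="")
```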

SQL Server

Two main options: BULK INSERT (T-SQL statement, server-side file access) and the bcp utility (command-line tool for client-side file transfer). The OPENROWSET(BULK ...) function embeds bulk data in a SELECT, useful for transformation during load. SQL Server Integration Services (SSIS) is the full ETL solution. Azure SQL supports loading from Azure Blob Storage via BULK INSERT with the DATA_SOURCE option.

-- BULK INSERT from server-side file:
BULK INSERT orders
FROM 'C:\data\orders.csv'
WITH (
    FIRSTROW = 2,
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK
);

-- OPENROWSET(BULK): load and transform in one step:
INSERT INTO orders (id, customer_id, amount)
SELECT id, customer_id, amount
FROM OPENROWSET(
    BULK 'C:\data\orders.csv',
    FORMATFILE = 'C:\data\orders.fmt'
) AS src;

-- bcp utility (command line):
-- bcp mydb.dbo.orders in orders.csv -c -t, -S server -U user -P pass
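From scripts, bcp is typically driven via subprocess. A sketch that assembles the argument list for the bcp invocation above (server name, credentials, and file path are placeholders); the run itself is commented out since it needs the bcp utility and a reachable server:

```python
import subprocess  # used only if you uncomment the run below

# -c = character mode, -t, = comma field terminator; all values are placeholders.
args = [
    "bcp", "mydb.dbo.orders", "in", "orders.csv",
    "-c", "-t,",
    "-S", "server", "-U", "user", "-P", "pass",
]

# subprocess.run(args, check=True)  # requires bcp and a reachable SQL Server
print(" ".join(args))
```

Passing the arguments as a list (not a single shell string) avoids quoting problems in paths and passwords.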

Oracle

SQL*Loader (sqlldr) is the primary bulk load utility — a standalone command-line tool with a control file describing the source format. External tables allow querying a flat file as if it were a table and loading with INSERT INTO ... SELECT. The Data Pump (expdp/impdp) is used for Oracle-to-Oracle bulk transfers. Direct-path loading (DIRECT=TRUE in sqlldr) bypasses redo logging for maximum speed.

-- SQL*Loader control file (orders.ctl):
-- LOAD DATA
-- INFILE '/data/orders.csv'
-- INTO TABLE orders
-- FIELDS TERMINATED BY ','
-- TRAILING NULLCOLS
-- (id, customer_id, amount, order_date DATE 'YYYY-MM-DD')

-- Run from command line:
-- sqlldr userid=user/pass@db control=orders.ctl log=orders.log

-- External table approach:
CREATE TABLE orders_ext (
    id NUMBER,
    customer_id NUMBER,
    amount NUMBER
)
ORGANIZATION EXTERNAL (
    TYPE oracle_loader
    DEFAULT DIRECTORY data_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
    )
    LOCATION ('orders.csv')
);

INSERT INTO orders SELECT * FROM orders_ext;
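Control files are plain text, so scripted loads often generate them before invoking sqlldr. A small sketch that builds the control file shown above as a string (the column list and paths are illustrative):

```python
# Columns use SQL*Loader field syntax; order_date carries an explicit date mask.
columns = "id, customer_id, amount, order_date DATE 'YYYY-MM-DD'"

ctl = (
    "LOAD DATA\n"
    "INFILE '/data/orders.csv'\n"
    "INTO TABLE orders\n"
    "FIELDS TERMINATED BY ','\n"
    "TRAILING NULLCOLS\n"
    f"({columns})\n"
)

# Save as orders.ctl, then run:
#   sqlldr userid=user/pass@db control=orders.ctl log=orders.log
print(ctl)
```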

SQLite

SQLite has no native bulk load statement. The sqlite3 CLI tool has a .import command for loading CSV files, and the .mode csv setting controls CSV parsing. Programmatically, the fastest approach is to wrap many INSERT statements in a single transaction, reusing one prepared statement via the C API (or a driver's executemany). The sqlite-utils Python library provides a convenient bulk insert interface.

-- sqlite3 CLI import:
-- .mode csv
-- .import /data/orders.csv orders

-- Or with headers (sqlite3 3.32+):
-- .import --csv --skip 1 /data/orders.csv orders

-- Programmatic bulk insert (wrap in transaction for speed):
BEGIN;
INSERT INTO orders VALUES (1, 101, 99.99, '2024-01-15');
INSERT INTO orders VALUES (2, 102, 49.50, '2024-01-16');
-- ... (thousands of rows)
COMMIT;
-- Without the transaction wrapper each INSERT is its own fsync.
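Outside the CLI, the same CSV import can be done with Python's standard library, wrapping the whole load in one transaction. The table and data are illustrative; here the CSV comes from an in-memory string rather than a file:

```python
import csv
import io
import sqlite3

csv_data = (
    "id,customer_id,amount,order_date\n"
    "1,101,99.99,2024-01-15\n"
    "2,102,49.50,2024-01-16\n"
)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL, order_date TEXT)"
)

reader = csv.reader(io.StringIO(csv_data))
next(reader)  # skip the header row, like .import --skip 1

with conn:  # one transaction for the whole file
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)", reader)

print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # prints 2
```

For real files, replace io.StringIO(csv_data) with open(path, newline="").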