High-speed mechanisms for loading large volumes of data from flat files or streams into a table, bypassing row-by-row INSERT overhead. The fastest way to populate or refresh large tables.
Efficiently bulk-loads data from files or streams into tables, dramatically faster than row-by-row INSERT for large loads. In PostgreSQL, for example, COPY is a server-side command that requires file-access privileges, while \copy is a psql client meta-command that works without server-side access; the other systems offer analogous server-side and client-side paths.
| Database System | Support Status | Since Version | Notes |
|---|---|---|---|
| MySQL | ✓ Supported | 3.23 | LOAD DATA INFILE reads from a server-side file path. LOAD DATA LOCAL INFILE reads from the client machine (requires local_infile=ON on both client and server). The mysqlimport CLI tool wraps LOAD DATA INFILE. For cloud environments where file access is restricted, use INSERT with multi-row VALUES or a pipeline tool. CSV, TSV, and custom delimiter/enclosure formats supported. |
| PostgreSQL | ✓ Supported | 7.0 | COPY is the standard bulk load mechanism. Server-side COPY reads/writes files on the server (requires superuser or, since PostgreSQL 11, membership in the pg_read_server_files / pg_write_server_files roles). Client-side \copy (psql meta-command) reads/writes files on the client machine without elevated privileges. COPY also accepts FROM STDIN / TO STDOUT for pipe-based loading. Supported formats: text (default), CSV, binary. Cannot COPY into a view; use INSTEAD OF triggers or a function. The pg_bulkload extension offers even faster loading for very large datasets. |
| SQL Server | ✓ Supported | 6.5 | Two main options: BULK INSERT (T-SQL statement, server-side file access) and the bcp utility (command-line tool for client-side file transfer). The OPENROWSET(BULK ...) function embeds bulk data in a SELECT, useful for transformation during load. SQL Server Integration Services (SSIS) is the full ETL solution. Azure SQL supports loading from Azure Blob Storage via BULK INSERT with the DATA_SOURCE option. |
| Oracle | ✓ Supported | 7 | SQL*Loader (sqlldr) is the primary bulk load utility: a standalone command-line tool with a control file describing the source format. External tables allow querying a flat file as if it were a table and loading with INSERT INTO ... SELECT. Data Pump (expdp/impdp) is used for Oracle-to-Oracle bulk transfers. Direct-path loading (DIRECT=TRUE in sqlldr) bypasses conventional SQL processing and the buffer cache for maximum speed; redo logging can additionally be skipped with the UNRECOVERABLE option. |
| SQLite | ✗ Not Supported | — | SQLite has no native bulk load statement. The sqlite3 CLI tool has a .import command for loading CSV files, and the .mode csv setting controls CSV parsing. Programmatically, the fastest approach is to use a transaction wrapping many INSERT statements, or to use the SQLite C API's batch insert with prepared statements. The sqlite-utils Python library provides a convenient bulk insert interface. |
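As a concrete illustration of the SQLite row above, the programmatic equivalent of the CLI's .import is a csv.reader feeding executemany() inside a single transaction. A minimal sketch using only the standard library; the file contents, table name, and schema here are invented for the example:

```python
import csv
import sqlite3
import tempfile

# Write a small CSV file to stand in for the flat file being loaded.
rows = [("1", "alice"), ("2", "bob"), ("3", "carol")]
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name"])   # header row
    writer.writerows(rows)
    csv_path = f.name

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# The SQLite bulk-load pattern: one transaction around executemany().
with open(csv_path, newline="") as f:
    reader = csv.reader(f)
    next(reader)                      # skip the header row
    with conn:                        # single transaction; one commit at the end
        conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)", reader)

print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # → 3
```

Note that executemany() accepts any iterable of parameter tuples, so the csv.reader streams rows straight into the insert without materializing the whole file in memory.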
Wrapping bulk inserts in a single transaction is the universal speedup when native bulk load is unavailable, since it eliminates per-row fsync overhead. PostgreSQL's COPY is SQL-based and can be driven from application code via libpq or drivers (e.g., pgcopy in Go, or COPY FROM STDIN in Python's psycopg2). SQL Server's TABLOCK hint with BULK INSERT enables minimal logging on databases using the simple or bulk-logged recovery model, significantly reducing log growth. Oracle's direct-path load (sqlldr DIRECT=TRUE) is the fastest option but disables insert triggers and some constraints (CHECK and foreign-key constraints are disabled during the load), so run a validation pass afterward. MySQL's LOAD DATA LOCAL INFILE is blocked by default for security reasons and requires explicit opt-in on both server and client.
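The single-transaction speedup described above can be sketched with SQLite (standard library only): in autocommit mode every INSERT is its own transaction and pays its own journal sync on a real disk, whereas the wrapped version commits once. A sketch of the pattern, not a benchmark; the table and row counts are arbitrary:

```python
import sqlite3

rows = [(i, f"name-{i}") for i in range(10_000)]

# Per-row autocommit: each INSERT is its own transaction (slow on real disks,
# where every commit forces a sync). Only a small sample is inserted here.
slow = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
slow.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
for r in rows[:100]:
    slow.execute("INSERT INTO t VALUES (?, ?)", r)

# One explicit transaction around the whole batch: a single commit/sync.
fast = sqlite3.connect(":memory:")
fast.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
with fast:                            # BEGIN ... COMMIT around all inserts
    fast.executemany("INSERT INTO t VALUES (?, ?)", rows)

print(fast.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # → 10000
```

In-memory databases hide the fsync cost, so the timing difference only shows up against an on-disk file; the structure of the two code paths is the point here.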