How to load data into Hive from HDFS without removing the source file?

From your question I assume that you already have your data in HDFS, so you don't need LOAD DATA, which moves the files to the default Hive location /user/hive/warehouse. You can simply define the table using the EXTERNAL keyword, which leaves the files in place but creates the table definition in the Hive metastore. See here: Create Table DDL, e.g.:

create external table table_name (
  id int,
  myfields string
)
location '/my/location/in/hdfs';

Please note that the format of your files might differ from Hive's default (as mentioned by JigneshRawal in the comments); by default, Hive's text format expects Ctrl-A (\001) as the field delimiter. You can specify your own delimiter, for example when the files were produced by Sqoop:

row format delimited fields terminated by ','
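For completeness, here is a sketch of how that delimiter clause fits into the CREATE statement above (the table, column, and path names are just the placeholders from the earlier example):

-- external table over comma-delimited text files; Hive only records the
-- metadata, the files stay where they are
create external table table_name (
  id int,
  myfields string
)
row format delimited fields terminated by ','
stored as textfile
location '/my/location/in/hdfs';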

I found that when you use EXTERNAL TABLE and LOCATION together, Hive creates the table, but initially no data is present (assuming your data location is different from the Hive 'LOCATION').

When you use the 'LOAD DATA INPATH' command, the data gets MOVED (not copied) from the source location to the location you specified while creating the Hive table.

If no location is given when you create the Hive table, it uses the internal Hive warehouse location, and the data gets moved from your source location into the internal Hive data warehouse (i.e. /user/hive/warehouse/).
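To make the move behaviour concrete, here is a small sketch (the staging path and table name are hypothetical):

-- after this runs, the file no longer exists at /user/me/staging/:
-- Hive has moved it under the table's location
load data inpath '/user/me/staging/data.csv' into table table_name;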


An alternative to 'LOAD DATA' is available in which the data is not moved from your existing source location to the Hive data warehouse location.

You can use the ALTER TABLE command with the 'LOCATION' option. Here is the required command:

ALTER TABLE table_name ADD PARTITION (date_col='2017-02-07') LOCATION '/path/to/location/';

The only condition here is that the location must be a directory rather than a file, and the table must have been created with a matching partition column (date_col in this example).
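Putting it together, an end-to-end sketch (the table, column, and path names are placeholders, not your actual paths):

-- partitioned external table; the partition column is declared separately
-- from the data columns
create external table table_name (
  id int,
  myfields string
)
partitioned by (date_col string)
row format delimited fields terminated by ','
location '/data/table_name/';

-- point a partition at an existing directory; nothing is moved or copied
alter table table_name add partition (date_col='2017-02-07')
location '/path/to/location/';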

Hope this solves the problem.