How do I split data into a training set and a test set randomly?
I have a large dataset and want to split it into a training set (50%) and a testing set (50%).
Say I have 100 examples stored in the input file, one example per line. I need to choose 50 lines as the training set and the other 50 lines as the testing set.
My idea is to first generate a random permutation of length 100 (values ranging from 1 to 100), then use the first 50 elements as the line numbers of the 50 training examples and the remaining 50 as the line numbers of the testing examples.
This is easy to do in MATLAB:
fid = fopen(datafile);
C = textscan(fid, '%s', 'Delimiter', '\n');
fclose(fid);
plist = randperm(100);
for i = 1:50
    trainstring = C{1}{plist(i)};        % line plist(i) of the file
    fprintf(train_file, '%s\n', trainstring);
end
for i = 51:100
    teststring = C{1}{plist(i)};
    fprintf(test_file, '%s\n', teststring);
end
But how can I accomplish this in Python? I'm new to Python and don't know whether I can read the whole file into a list and then pick out certain lines.
Solution 1:
This can be done similarly in Python using lists (note that the whole list is shuffled in place):
import random

with open("datafile.txt", "r") as f:
    data = f.read().splitlines()   # one example per line, newlines stripped

random.shuffle(data)               # shuffle the examples in place
train_data = data[:50]
test_data = data[50:]
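If you also want to write the two halves back out to separate files, as the MATLAB snippet in the question does, a minimal sketch could look like this (the filenames train.txt and test.txt are just placeholders):

with open("train.txt", "w") as train_file:
    train_file.write("\n".join(train_data) + "\n")
with open("test.txt", "w") as test_file:
    test_file.write("\n".join(test_data) + "\n")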
Solution 2:
from sklearn.model_selection import train_test_split
import numpy

with open("datafile.txt", "r") as f:
    data = f.read().splitlines()

data = numpy.array(data)   # convert the list of lines to a numpy array
x_train, x_test = train_test_split(data, test_size=0.5)   # 50% of the data goes to the test set
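train_test_split shuffles the rows by default; if you need the split to be reproducible across runs, you can pass a fixed random_state (the value 42 below is arbitrary):

x_train, x_test = train_test_split(data, test_size=0.5, random_state=42)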
Solution 3:
To answer @desmond.carros's question, I modified the best answer as follows:
import random

data = list()
with open("datafile.txt", "r") as file:
    for line in file:
        data.append(line.split())   # replace with your preferred delimiter, e.g. line.split(',')

random.shuffle(data)
train_data = data[:int((len(data)+1)*.80)]   # first 80% of the shuffled data for training
test_data = data[int((len(data)+1)*.80):]    # remaining 20% for testing
The code splits the entire dataset into 80% training data and 20% test data.
Solution 4:
You could also use numpy. If your data is stored in a numpy.ndarray:
import numpy as np
from random import sample

l = 100                                # length of the data
f = 50                                 # number of elements you need in the training set
indices = sample(range(l), f)          # f distinct random indices
train_data = data[indices]             # elements at the sampled indices
test_data = np.delete(data, indices)   # everything else
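As a rough usage sketch tied back to the original file-based question (assuming the same hypothetical datafile.txt), the lines of the file can be loaded into an array first and then split with the sampled indices:

import numpy as np
from random import sample

with open("datafile.txt", "r") as f:
    data = np.array(f.read().splitlines())   # one line of the file per array element

indices = sample(range(len(data)), len(data) // 2)   # half of the line indices, chosen at random
train_data = data[indices]                           # the sampled lines
test_data = np.delete(data, indices)                 # all remaining lines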
Solution 5:
sklearn.cross_validation has been deprecated since version 0.18; you should use sklearn.model_selection instead, as shown below:
from sklearn.model_selection import train_test_split
import numpy

with open("datafile.txt", "r") as f:
    data = f.read().splitlines()

data = numpy.array(data)   # convert the list of lines to a numpy array
x_train, x_test = train_test_split(data, test_size=0.5)   # 50% of the data goes to the test set