Tuesday 16 March 2021

Reproducing Sklearn SVC within GridSearchCV's roc_auc scores manually

I would like to reproduce the cross-validated roc_auc scores that GridSearchCV reports for a SelectKBest + SVC pipeline by performing the grid-search CV myself. However, my code produces different results. Here is a reproducible example:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
import itertools

r = 1
X, y = make_classification(n_samples=50, n_features=20, weights=[3/5], random_state=r)
np.random.seed(r)
# Overwrite the features with uniform noise; only the labels y from
# make_classification are kept.
X = np.random.rand(X.shape[0], X.shape[1])

K = [1, 3, 5]
C = [0.1, 1]
cv = StratifiedKFold(n_splits=10)
space = dict()
space['anova__k'] = K
space['svc__C'] = C
clf = Pipeline([('anova', SelectKBest()), ('svc', SVC(probability=True, random_state=r))])
search = GridSearchCV(clf, space, scoring='roc_auc', cv=cv, refit=True, n_jobs=-1)
result = search.fit(X, y)

print('GridSearchCV results:')
print(result.cv_results_['mean_test_score'])

scores = []
for train_indx, test_indx in cv.split(X, y):
    X_train, y_train = X[train_indx, :], y[train_indx]
    X_test, y_test = X[test_indx, :], y[test_indx]
    scores_ = []
    # itertools.product(K, C) should enumerate the settings in the same
    # order as GridSearchCV (ParameterGrid sorts the parameter names).
    for k, c in itertools.product(K, C):
        anova = SelectKBest(k=k)
        X_train_k = anova.fit_transform(X_train, y_train)
        clf = SVC(C=c, probability=True, random_state=r).fit(X_train_k, y_train)
        y_pred = clf.predict_proba(anova.transform(X_test))[:, 1]
        scores_.append(roc_auc_score(y_test, y_pred))
    scores.append(scores_)

print('Manual grid-search CV results:')
print(np.mean(np.array(scores), axis=0))

For me, this produces the following output:

GridSearchCV results:
[0.41666667 0.4        0.4        0.4        0.21666667 0.26666667]
Manual grid-search CV results:
[0.58333333 0.6        0.53333333 0.46666667 0.48333333 0.5       ]
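
(As a sanity check on the comparison itself: result.cv_results_['params'] lists the settings in the order GridSearchCV evaluated them. ParameterGrid sorts the parameter names, so the order should match itertools.product(K, C) and the columns above should be directly comparable.)

for params in result.cv_results_['params']:
    print(params)
# expected order: k=1/C=0.1, k=1/C=1, k=3/C=0.1, k=3/C=1, k=5/C=0.1, k=5/C=1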

When the make_classification features are used directly (i.e. the np.random.rand line is removed), the two outputs match. When X is replaced with uniform noise as above, the scores differ.

Is there some random process underneath that I am not aware of?
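
One candidate I can think of, though I have not verified it for this setup: as far as I understand, sklearn's built-in 'roc_auc' scorer is threshold-based and ranks samples with the classifier's decision_function when one is available, whereas my loop ranks them with predict_proba. Since probability=True fits the Platt calibration on an internal cross-validation, the two rankings need not agree. Below is a minimal sketch, reusing the variables defined above, that rescores the manual loop with decision_function instead of predict_proba:

scores_df = []
for train_indx, test_indx in cv.split(X, y):
    X_train, y_train = X[train_indx, :], y[train_indx]
    X_test, y_test = X[test_indx, :], y[test_indx]
    scores_ = []
    for k, c in itertools.product(K, C):
        anova = SelectKBest(k=k)
        X_train_k = anova.fit_transform(X_train, y_train)
        clf = SVC(C=c, probability=True, random_state=r).fit(X_train_k, y_train)
        # Rank by the SVM margin rather than the Platt-calibrated probabilities
        y_score = clf.decision_function(anova.transform(X_test))
        scores_.append(roc_auc_score(y_test, y_score))
    scores_df.append(scores_)

print('Manual grid-search CV results (decision_function):')
print(np.mean(np.array(scores_df), axis=0))

If these numbers line up with the GridSearchCV column, the gap would be the proba-vs-margin ranking rather than a hidden random process, but I am not sure this is the whole story.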



