Friday 4 December 2020

How can I get treeinterpreter's Tree Contributions, if we are using a Pipeline?

I am using sklearn's Pipeline to one-hot encode and to model, almost exactly as in this post.

After switching to a Pipeline, I am no longer able to get the tree contributions. I get this error:

AttributeError: 'Pipeline' object has no attribute 'n_outputs_'

I tried playing around with treeinterpreter's parameters, but I am stuck.

Hence my question: is there any way to get the contributions out of a tree when we are using sklearn's Pipeline?
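For reference, this is the pattern that works for me on a plain forest, without a Pipeline (a toy sketch with made-up data, just to show the call treeinterpreter expects):

# Toy sketch, made-up data: treeinterpreter wants the fitted tree/forest estimator
# itself (it reads attributes such as n_outputs_ from it), not a Pipeline around it.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from treeinterpreter import treeinterpreter as ti

X_toy = np.random.rand(20, 3)
y_toy = np.random.rand(20)
forest = RandomForestRegressor(n_estimators=10, random_state=0).fit(X_toy, y_toy)

prediction, bias, contributions = ti.predict(forest, X_toy)
# Per row, the prediction should equal bias + the per-feature contributions summed up.
print(prediction.shape, bias.shape, contributions.shape)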

EDIT 2 - Real data as requested by Venkatachalam:

# Data: DF to train the model
import pandas as pd

df = pd.DataFrame(
    [['SGOHC', 'd', 'onetwothree', 'BAN', 488.0580347, 960, 841, 82, 0.902497027, 841, 0.548155625, 0.001078211, 0.123958333, 1],
     ['ABCDEFGHIJK', 'SOC', 'CON', 'CAN', 680.84, 1638, 0, 0, 0, 0, 3.011140743, 0.007244358, 1, 0],
     ['Hello', 'AA', 'onetwothree', 'SPEAKER', 5823.230967, 2633, 1494, 338, 0.773761714, 1494, 12.70144386, 0.005743015, 0.432586403, 8]],
    columns=['B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'target'])

# Create test and train set (useless, but for the example...) 
from sklearn.model_selection import train_test_split

# Define X and y 
X = df.drop('target', axis=1)
y = df['target']

# Create Train and Test Sets 
X_train, X_validation, Y_train, Y_validation = train_test_split(X, y, test_size=0.20, random_state=1)


# Make the pipeline and model
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder
import numpy as np
import pandas as pd
from sklearn import set_config
from sklearn.model_selection import ParameterGrid
from sklearn.ensemble import RandomForestRegressor
import matplotlib.pyplot as plt

rfr = Pipeline([('preprocess',
                 ColumnTransformer([('ohe',
                                     OneHotEncoder(handle_unknown='ignore'), [1])])),
                ('rf', RandomForestRegressor())])

rfr.fit(X_train, Y_train)
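
# To see what the fitted preprocessor actually feeds the forest, I also ran this
# diagnostic (the step names 'preprocess' and 'ohe' are the ones from my pipeline
# above; whether the encoder has get_feature_names or get_feature_names_out depends
# on the sklearn version, so I only print the learned categories here):
fitted_ct = rfr.named_steps['preprocess']
print(fitted_ct.transform(X_train).shape)                 # width the forest was trained on
print(fitted_ct.named_transformers_['ohe'].categories_)   # categories learned at fit time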


# The New, Real data that we need to predict & explain! 

new_data = pd.DataFrame(
    [['DEBTYIPL', 'de', 'onetwothreefour', 'BANAAN', 4848.0580347, 923460, 823441, 5, 0.902497027, 43, 0.548155625, 0.001078211, 0.123958333],
     ['ABCDEFGHIJK', 'SOC', 'CON', 'CAN23', 680.84, 1638, 0, 0, 0, 0, 1.011140743, 4.007244358, 1],
     ['Hello_NO', 'AAAAa', 'onetwothree', 'SPEAKER', 5823.230967, 123, 32, 22, 0.773761714, 1678, 12.70144386, 0.005743015, 0.432586403]],
    columns=['B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N'])
new_data.head()

# Predicting the values 
rfr.predict(new_data)

# Now the error... the contributions: 
from treeinterpreter import treeinterpreter as ti
prediction, bias, contributions = ti.predict(rfr[-1], rfr[:-1].fit_transform(new_data))

# ValueError: Number of features of the model must match the input. Model n_features is 2 and input n_features is 3
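
As far as I can tell, this second error comes from fit_transform: it re-fits the OneHotEncoder on new_data's categories (three of them) instead of reusing the two it learned during training. A sketch of what I think should work instead, reusing the already-fitted preprocessing steps with transform:

# Likely fix (sketch): encode new_data with the *fitted* preprocessor, so it gets the
# same columns the forest was trained on; handle_unknown='ignore' simply encodes
# unseen categories as all zeros.
new_data_encoded = rfr[:-1].transform(new_data)
# If this comes back as a sparse matrix, new_data_encoded.toarray() may be needed.
prediction, bias, contributions = ti.predict(rfr[-1], new_data_encoded)
print(contributions.shape)   # (n_samples, n_encoded_features) for a regressor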


