Error when trying to predict with tabular learners without passing an evaluation metric to the learner.

Steps to Reproduce:
Define dataloader and learner, but without (!) internal validation:

```r
df_fai <- fastai::TabularDataTable(train, cat_names = cat_cols, cont_names = num_cols,
                                   y_names = "Species", splits = NULL) # Note that no splits are passed
dl <- fastai::dataloaders(df_fai)
tab_learner <- fastai::tabular_learner(dl, layers = c(8, 16)) # Note that no metric is passed
fastai::fit(tab_learner, n_epoch = 5, lr = 0.005) # works fine
```
Prediction fails despite successful training:

```r
pred <- predict(tab_learner, test) # Error occurs here
```

Error message:

```
Error in py_get_item(x, name) : IndexError: list index out of range
```
However, no error occurs when passing an evaluation metric to the learner and splits to `TabularDataTable`:

```r
df_fai <- fastai::TabularDataTable(
  train, cat_names = cat_cols, cont_names = num_cols, y_names = "Species",
  splits = list(
    seq(1, nrow(train), 2), seq(2, nrow(train), 2)
  )
)
dl <- fastai::dataloaders(df_fai)
tab_learner_eval <- fastai::tabular_learner(dl, layers = c(8, 16),
                                            metrics = accuracy())
fastai::fit(tab_learner_eval, n_epoch = 5, lr = 0.005) # works fine
predict(tab_learner_eval, test) # works fine
```
Additional Observations:

I checked the code of `predict.fastai.tabular.learner.TabularLearner`. The error seems to be caused by the learner's metrics being accessed via this line:

```r
object$metrics = object$metrics[0]
```

This is problematic because if no metric is passed, `tab_learner$metrics` is an empty list, and attempting to access `tab_learner$metrics[0]` raises an `IndexError` in Python.
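The failure can be illustrated directly on the Python side of the reticulate bridge; a minimal sketch, where the `metrics` list is a hypothetical stand-in for the learner's attribute rather than the actual fastai object:

```python
# Hypothetical stand-in for the learner's `metrics` attribute when no
# metric is passed to tabular_learner(): an empty Python list.
metrics = []

try:
    metrics = metrics[0]  # what object$metrics[0] triggers on the Python side
except IndexError as err:
    print("IndexError:", err)  # list index out of range

# A length check of the kind used elsewhere avoids the error:
if len(metrics) > 0:
    metrics = metrics[0]
print(metrics)  # still the empty list; no exception raised
```

With the guard in place, an empty metrics list passes through untouched instead of raising.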
I'm also not quite sure why the metric must be removed in the first place. Replicating the code of `predict.fastai.tabular.learner.TabularLearner` step by step reveals that setting (or removing?) the metric does not have an impact. Interestingly, when examining the code in `predict.fastai.learner.Learner`, I noticed that a check for the metric is done to prevent the error.

Suggestion:

Add the same error-checking mechanism in `predict.fastai.tabular.learner.TabularLearner` as is done in `predict.fastai.learner.Learner`, to ensure that predictions work even without a passed evaluation metric.
Please let me know if you'd like me to make any further changes or if additional clarification is needed!