XPER.compute package

XPER.compute.Performance.evaluate(Eval_Metric, CFP=None, CFN=None)

Evaluate the model's performance using the specified evaluation metric(s).

Parameters
Eval_Metric : str or list

Evaluation metric(s) to compute. Options include: "AUC", "Accuracy", "Balanced_accuracy", "BS" (Brier Score), "MC" (Misclassification Cost), "Precision", "Sensitivity", "Specificity".

CFP : float, optional

Cost of a false positive.

CFN : float, optional

Cost of a false negative.

Returns

Performance Metrics: Computed performance metrics for the model.

Examples
from XPER.compute.Performance import ModelPerformance

# X_train, y_train, X_test, y_test and a fitted classifier `model` are assumed to exist
XPER = ModelPerformance(X_train.values, y_train.values, X_test.values, y_test.values, model)
PM = XPER.evaluate(['AUC'])
print("Performance Metrics: ", round(PM, 3))
                        
Example output: [sample output image]
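The example above assumes that the train/test arrays and a fitted classifier already exist. The sketch below shows one way such inputs might be prepared and how the CFP/CFN arguments can be used together with the cost-based "MC" metric; the dataset, estimator, and cost values are illustrative assumptions, not part of the XPER API.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from XPER.compute.Performance import ModelPerformance

# Illustrative data and model; replace with your own dataset and estimator
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

XPER = ModelPerformance(X_train, y_train, X_test, y_test, model)

# The CFP/CFN arguments set the costs used by the cost-based "MC" metric
# (the cost values below are purely illustrative)
PM = XPER.evaluate(['MC'], CFP=1.0, CFN=5.0)
print("Misclassification cost:", PM)
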
XPER.compute.Performance.calculate_XPER_values(Eval_Metric, CFP=None, CFN=None, N_coalition_sampled=None, kernel=True, intercept=False, execution_type='ThreadPoolExecutor')

Calculate XPER values for the model's performance.

Parameters
Eval_Metric : str or list

Evaluation metric(s) for XPER calculation. Options include: "AUC", "Accuracy", etc.

CFP : float, optional

Cost of a false positive.

CFN : float, optional

Cost of a false negative.

N_coalition_sampled : int, optional

Number of coalitions sampled in the XPER calculation.

kernel : bool, optional

If True, use kernel approximation for XPER values. Default is True.

intercept : bool, optional

If True, include an intercept in the model. Default is False.

execution_type : str, optional

Execution type for computation, "ThreadPoolExecutor" or "ProcessPoolExecutor". Default is "ThreadPoolExecutor".

Returns

XPER values: Computed XPER values for the specified metrics.

Examples
from XPER.compute.Performance import ModelPerformance

# X_train, y_train, X_test, y_test and a fitted classifier `model` are assumed to exist
XPER = ModelPerformance(X_train.values, y_train.values, X_test.values, y_test.values, model)

# Option 1 - kernel=True (default)
XPER_values = XPER.calculate_XPER_values(['AUC'])
XPER_values = XPER.calculate_XPER_values(['AUC'], execution_type='ProcessPoolExecutor')

# Option 2 - kernel=False
XPER_values = XPER.calculate_XPER_values(['AUC'], kernel=False)
                        
Example output with kernel=True: [sample output image]

Example output with kernel=False: [sample output image]
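For models with many features, computing XPER values over all coalitions can be expensive. The sketch below, which reuses the ModelPerformance instance from the example above, illustrates the N_coalition_sampled argument and passing several metrics at once; the sample size and the metric list are illustrative choices, not recommendations from the package.

# Limit the number of sampled coalitions (the value below is illustrative)
XPER_values = XPER.calculate_XPER_values(['AUC'], N_coalition_sampled=1000)

# Eval_Metric accepts a list, so several metrics can be requested in one call
XPER_values = XPER.calculate_XPER_values(['AUC', 'Accuracy'])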