The Easiest XGBoost Model [Classification, Regression] by 바죠

"Scalable, Portable and Distributed Gradient Boosting (GBM, GBRT, GBDT) Library"

Consider gradient boosting first.
The initial model predicts every sample with a single value, the mean of the targets.
The residuals between the actual values and this prediction are then computed,
and a decision tree is built to predict those residuals.

In machine learning, boosting is the process of combining many weak learners into a single accurate, strong learner.
A first model is built regardless of its accuracy, and a second model is built to compensate for the prediction errors it exposes.
The two are combined into one model, and whatever error still remains is compensated by the next model, as sketched below.
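
A minimal from-scratch sketch of this loop for regression (squared-error loss, a scikit-learn DecisionTreeRegressor, and made-up toy data; all of these are illustrative assumptions, not part of the post):

#     minimal gradient-boosting loop: start from the mean, then fit trees to the residuals
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.randn(200)

n_rounds, learning_rate = 100, 0.1
prediction = np.full_like(y, y.mean())              # step 1: predict everything with the mean
trees = []
for _ in range(n_rounds):
    residual = y - prediction                        # step 2: errors against the actual values
    tree = DecisionTreeRegressor(max_depth=3)
    tree.fit(X, residual)                            # step 3: a tree that predicts the errors
    prediction += learning_rate * tree.predict(X)    # step 4: add the (shrunken) correction
    trees.append(tree)

print("training MSE: %.4f" % np.mean((y - prediction) ** 2))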

Because each stage fits the training data so faithfully, this approach is prone to overfitting.
XGBoost addresses this by adding a regularization term to the objective.
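
Concretely, the objective each new tree is fit against in the XGBoost paper is, roughly,

    Obj = sum_i l(y_i, yhat_i) + sum_k Omega(f_k),    with  Omega(f) = gamma * T + (1/2) * lambda * sum_j w_j^2

where T is the number of leaves of a tree and w_j are its leaf weights; the gamma and lambda terms are the regularization that plain gradient boosting lacks.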

Boosting: proceeds much like bagging, but assigns larger weights to the data points that were mispredicted.
Gradient boosting tends to overfit --> XGBoost tackles the overfitting problem with a regularization term.
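
In the scikit-learn wrapper those regularization knobs show up as constructor arguments; the values below are illustrative only, not tuned for any particular dataset:

from xgboost import XGBClassifier

model = XGBClassifier(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    gamma=1.0,        # minimum loss reduction required to make a split (the gamma * T term)
    reg_lambda=1.0,   # L2 penalty on the leaf weights (lambda)
    reg_alpha=0.0,    # optional L1 penalty on the leaf weights
    subsample=0.8,    # row subsampling, another guard against overfitting
)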

XGBoost is a well-known library that implements the gradient boosting algorithm so that it can run in distributed environments and supports parallel training. It handles both regression and classification problems and can use a GPU on its own (see the snippet after this paragraph).
XGBoost is the algorithm behind many winning entries in machine-learning competitions; it has consistently delivered very high model accuracy.
Until LightGBM appeared in 2017, XGBoost offered arguably the strongest regression and classification performance on tabular data; this post looks at how to use it.
At present LightGBM is reported to be more efficient than XGBoost while being at least as accurate.
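
Parallel CPU training and GPU training are selected through constructor arguments. The exact GPU option depends on the installed XGBoost version (tree_method='gpu_hist' in 1.x; tree_method='hist' plus device='cuda' in 2.x), so the snippet below is a version-dependent sketch:

from xgboost import XGBRegressor

#     CPU: n_jobs controls how many threads build the trees
cpu_model = XGBRegressor(tree_method="hist", n_jobs=-1)

#     GPU, XGBoost 1.x style; with 2.x use XGBRegressor(tree_method="hist", device="cuda") instead
gpu_model = XGBRegressor(tree_method="gpu_hist")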

LightGBM is said to be comparable to XGBoost in accuracy,
but it handles the work relatively 'lightly':
it finishes training faster, and because the work is handled more simply, its models are reportedly somewhat better as well.
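
For comparison, LightGBM exposes almost the same scikit-learn style interface (this assumes the lightgbm package is installed; the iris data and parameters are illustrative only):

from lightgbm import LGBMClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)
model = LGBMClassifier(n_estimators=200, learning_rate=0.1)
model.fit(X_train, y_train)
print("Accuracy: %.2f%%" % (accuracy_score(y_test, model.predict(X_test)) * 100.0))
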
https://en.wikipedia.org/wiki/XGBoost
https://machinelearningmastery.com/xgboost-python-mini-course/
https://machinelearningmastery.com/gentle-introduction-gradient-boosting-algorithm-machine-learning/
For solving deep-learning problems with Keras: http://incredible.egloos.com/7454154

Random forest (an ensemble of decision trees): see the sketch below.
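
A random forest trains many decision trees on bootstrap samples and averages (or votes over) their predictions. A minimal scikit-learn sketch, with an illustrative dataset and parameters:

from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=7)
print("CV accuracy: %.2f" % cross_val_score(forest, X, y, cv=5).mean())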


Hyperparameter optimization with Bayesian optimization: http://incredible.egloos.com/7479039
---------------------------------------------------------------------------------------------------------------------

#     First XGBoost model for Pima Indians dataset
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
#     load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
#     split data into X and y
X = dataset[:,0:8]
Y = dataset[:,8]
#     split data into train and test sets
seed = 7
test_size = 0.33
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
#     fit model on training data
model = XGBClassifier()
model.fit(X_train, y_train)
#     make predictions for test data
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
#     evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))

---------------------------------------------------------------------------------------------------------------------

#     plot feature importance manually
from numpy import loadtxt
from xgboost import XGBClassifier
from matplotlib import pyplot
#     load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
#     split data into X and y
X = dataset[:,0:8]
y = dataset[:,8]
#     fit model on training data
model = XGBClassifier()
model.fit(X, y)
#     feature importance
print(model.feature_importances_ )
#    plot
pyplot.bar(range(len(model.feature_importances_)), model.feature_importances_)
pyplot.show()

---------------------------------------------------------------------------------------------------------------------

#     plot feature importance using built-in function
from numpy import loadtxt
from xgboost import XGBClassifier
from xgboost import plot_importance
from matplotlib import pyplot
#     load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
#     split data into X and y
X = dataset[:,0:8]
y = dataset[:,8]
#     fit model on training data
model = XGBClassifier()
model.fit(X, y)
#     plot feature importance
plot_importance(model)
pyplot.show()

---------------------------------------------------------------------------------------------------------------------

#      monitor training performance
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
#      load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
#      split data into X and y
X = dataset[:,0:8]
Y = dataset[:,8]
#     split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=7)
#     fit model on training data
model = XGBClassifier()   # note: recent XGBoost versions expect eval_metric to be set here in the constructor
eval_set = [(X_test, y_test)]
model.fit(X_train, y_train, eval_metric="error", eval_set=eval_set, verbose=True)
#     make predictions for test data
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
#     evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))

---------------------------------------------------------------------------------------------------------------------

#      plot decision tree
from numpy import loadtxt
from xgboost import XGBClassifier
from xgboost import plot_tree
import matplotlib.pyplot as plt
#     load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
#      split data into X and y
X = dataset[:,0:8]
y = dataset[:,8]
#      fit model on training data
model = XGBClassifier()
model.fit(X, y)
#      plot single tree
plot_tree(model)
plt.show()

---------------------------------------------------------------------------------------------------------------------

#      XGBoost on Otto dataset, Tune n_estimators
from pandas import read_csv
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import LabelEncoder
import matplotlib
matplotlib.use('Agg')
from matplotlib import pyplot
#       load data
data = read_csv('train.csv')
dataset = data.values
#      split data into X and y
X = dataset[:,0:94]
y = dataset[:,94]
#      encode string class values as integers
label_encoded_y = LabelEncoder().fit_transform(y)
#      grid search
model = XGBClassifier()
n_estimators = range(50, 400, 50)
param_grid = dict(n_estimators=n_estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
grid_search = GridSearchCV(model, param_grid, scoring="neg_log_loss", n_jobs=-1, cv=kfold)
grid_result = grid_search.fit(X, label_encoded_y)
#      summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
     print("%f (%f) with: %r" % (mean, stdev, param))
#      plot
pyplot.errorbar(n_estimators, means, yerr=stds)
pyplot.title("XGBoost n_estimators vs Log Loss")
pyplot.xlabel('n_estimators')
pyplot.ylabel('Log Loss')
pyplot.savefig('n_estimators.png')

--------------------------------------------------------------------------------------------------------------------

import xgboost as xgb
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score, KFold
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
import numpy as np
boston = load_boston()   # note: load_boston was removed in scikit-learn 1.2; an older scikit-learn is assumed here
x, y = boston.data, boston.target
xtrain, xtest, ytrain, ytest=train_test_split(x, y, test_size=0.15)
xgbr = xgb.XGBRegressor()
print(xgbr)
xgbr.fit(xtrain, ytrain)
# - cross validation
scores = cross_val_score(xgbr, xtrain, ytrain, cv=5)
print("Mean cross-validation score: %.2f" % scores.mean())
kfold = KFold(n_splits=10, shuffle=True)
kf_cv_scores = cross_val_score(xgbr, xtrain, ytrain, cv=kfold )
print("K-fold CV average score: %.2f" % kf_cv_scores.mean())
ypred = xgbr.predict(xtest)
mse = mean_squared_error(ytest, ypred)
print("MSE: %.2f" % mse)
print("RMSE: %.2f" % np.sqrt(mse))
x_ax = range(len(ytest))
plt.scatter(x_ax, ytest, s=5, color="blue", label="original")
plt.plot(x_ax, ypred, lw=0.8, color="red", label="predicted")
plt.legend()
plt.show()

--------------------------------------------------------------------------------------------------------------------

https://www.datatechnotes.com/2019/07/classification-example-with.html
from xgboost import XGBClassifier
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score, KFold
iris = load_iris()
x, y = iris.data, iris.target
xtrain, xtest, ytrain, ytest=train_test_split(x, y, test_size=0.15)
xgbc = XGBClassifier()
print(xgbc)
xgbc.fit(xtrain, ytrain)
# - cross validation
scores = cross_val_score(xgbc, xtrain, ytrain, cv=5)
print("Mean cross-validation score: %.2f" % scores.mean())
kfold = KFold(n_splits=10, shuffle=True)
kf_cv_scores = cross_val_score(xgbc, xtrain, ytrain, cv=kfold )
print("K-fold CV average score: %.2f" % kf_cv_scores.mean())
ypred = xgbc.predict(xtest)
cm = confusion_matrix(ytest,ypred)
print(cm)

--------------------------------------------------------------------------------------------------------------------

from sklearn.datasets import load_boston
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostRegressor
from sklearn.metrics import mean_squared_error, make_scorer, r2_score
import matplotlib.pyplot as plt
boston = load_boston()   # removed in scikit-learn 1.2; an older scikit-learn is assumed
x, y = boston.data, boston.target
xtrain, xtest, ytrain, ytest=train_test_split(x, y, test_size=0.15)
abreg = AdaBoostRegressor()
params = {
 'n_estimators': [50, 100],
 'learning_rate' : [0.01, 0.05, 0.1, 0.5],
 'loss' : ['linear', 'square', 'exponential']
 }
score = make_scorer(mean_squared_error, greater_is_better=False)   # lower MSE is better
gridsearch = GridSearchCV(abreg, params, cv=5, scoring=score, return_train_score=True)
gridsearch.fit(xtrain, ytrain)
print(gridsearch.best_params_)
best_estim=gridsearch.best_estimator_
print(best_estim)
best_estim.fit(xtrain,ytrain)
ytr_pred=best_estim.predict(xtrain)
mse = mean_squared_error(ytrain, ytr_pred)
r2 = r2_score(ytrain, ytr_pred)
print("MSE: %.2f" % mse)
print("R2: %.2f" % r2)
ypred=best_estim.predict(xtest)
mse = mean_squared_error(ytest, ypred)
r2 = r2_score(ytest, ypred)
print("MSE: %.2f" % mse)
print("R2: %.2f" % r2)
x_ax = range(len(ytest))
plt.scatter(x_ax, ytest, s=5, color="blue", label="original")
plt.plot(x_ax, ypred, lw=0.8, color="red", label="predicted")
plt.legend()
plt.show()

--------------------------------------------------------------------------------------------------------------------

#      Train XGBoost model, save to file using joblib, load and make predictions
from numpy import loadtxt
from xgboost import XGBClassifier
from joblib import dump
from joblib import load
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
#     load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
#     split data into X and y
X = dataset[:,0:8]
Y = dataset[:,8]
#     split data into train and test sets
seed = 7
test_size = 0.33
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
#     fit model on training data
model = XGBClassifier()
model.fit(X_train, y_train)
#     save model to file
dump(model, "pima.joblib.dat")
print("Saved model to: pima.joblib.dat")

#     some time later...

#     load model from file
loaded_model = load("pima.joblib.dat")
print("Loaded model from: pima.joblib.dat")
#     make predictions for test data
predictions = loaded_model.predict(X_test)
#    evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))

---------------------------------------------------------------------------------------------------------------------

#     Train XGBoost model, save to file using pickle, load and make predictions
from sklearn.model_selection import train_test_split
from numpy import loadtxt
import xgboost
import pickle
from sklearn.metrics import accuracy_score
#     load data

dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
# split data into X and y
X = dataset[:,0:8]
Y = dataset[:,8]
#     split data into train and test sets
seed = 7
test_size = 0.33
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
#     fit model on training data
model = xgboost.XGBClassifier()
model.fit(X_train, y_train)
#    save model to file
pickle.dump(model, open("pima.pickle.dat", "wb"))
#    some time later...
#     load model from file
loaded_model = pickle.load(open("pima.pickle.dat", "rb"))
#     make predictions for test data
y_pred = loaded_model.predict(X_test)
predictions = [round(value) for value in y_pred]
#    evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))

---------------------------------------------------------------------------------------------------------------------

import time
import xgboost as xgb
from sklearn.model_selection import RandomizedSearchCV
# x_train, y_train, x_valid, y_valid, x_test, y_test = ...                    # load your own train/validation/test splits here
clf = xgb.XGBClassifier()
param_grid = {
        'silent': [False],   # deprecated in newer XGBoost versions (use verbosity instead)
        'max_depth': [6, 10, 15, 20],
        'learning_rate': [0.001, 0.01, 0.1, 0.2, 0.3],
        'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
        'colsample_bytree': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
        'colsample_bylevel': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
        'min_child_weight': [0.5, 1.0, 3.0, 5.0, 7.0, 10.0],
        'gamma': [0, 0.25, 0.5, 1.0],
        'reg_lambda': [0.1, 1.0, 5.0, 10.0, 50.0, 100.0],
        'n_estimators': [100]}
fit_params = {'eval_metric': 'mlogloss',  'early_stopping_rounds': 10,   'eval_set': [(x_valid, y_valid)]}
rs_clf = RandomizedSearchCV(clf, param_grid, n_iter=20,   n_jobs=1, verbose=2, cv=2,
                            scoring='neg_log_loss', refit=False, random_state=42)
print("Randomized search..")
search_time_start = time.time()
# recent scikit-learn versions no longer accept fit_params in the constructor; pass the fit parameters to fit() instead
rs_clf.fit(x_train, y_train, **fit_params)
print("Randomized search time:", time.time() - search_time_start)
best_score = rs_clf.best_score_
best_params = rs_clf.best_params_
print("Best score: {}".format(best_score))
print("Best params: ")
for param_name in sorted(best_params.keys()):
    print('%s: %r' % (param_name, best_params[param_name]))

---------------------------------------------------------------------------------------------------------------------

import numpy as np
from numpy import loadtxt
from xgboost import XGBClassifier
from bayes_opt import BayesianOptimization
from sklearn.model_selection import cross_val_score
pbounds = {
    'learning_rate': (0.01, 1.0),
    'n_estimators': (100, 1000),
    'max_depth': (3,10),
    'subsample': (1.0, 1.0),  # Change for big datasets
    'colsample_bytree': (1.0, 1.0),  # Change for datasets with lots of features
    'gamma': (0, 5)}
def xgboost_hyper_param(learning_rate, n_estimators, max_depth, subsample, colsample_bytree, gamma):
    dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
    X = dataset[:,0:8]
    y = dataset[:,8]
    max_depth = int(max_depth)
    n_estimators = int(n_estimators)
    clf = XGBClassifier( max_depth=max_depth, learning_rate=learning_rate, n_estimators=n_estimators,
         subsample=subsample, colsample_bytree=colsample_bytree, gamma=gamma)
    return np.mean(cross_val_score(clf, X, y, cv=3, scoring='roc_auc'))
optimizer = BayesianOptimization( f=xgboost_hyper_param, pbounds=pbounds, random_state=1)
optimizer.maximize(init_points=3, n_iter=24, acq='ei', xi=0.01)   # note: newer bayes_opt versions set the acquisition function on the optimizer instead

------------------------------------------------------------------------------------------------------------------
>>> from sklearn import datasets, linear_model
>>> from sklearn.model_selection import cross_val_score
>>> diabetes = datasets.load_diabetes()
>>> X = diabetes.data[:150]
>>> y = diabetes.target[:150]
>>> lasso = linear_model.Lasso()
>>> print(cross_val_score(lasso, X, y, cv=3))
[0.33150734 0.08022311 0.03531764]



---------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------
https://scikit-optimize.github.io/stable/auto_examples/hyperparameter-optimization.html#sphx-glr-auto-examples-hyperparameter-optimization-py
--------------------------------------------------------------------------------------------------------------------
"""
============================================
Tuning a scikit-learn estimator with `skopt`
============================================

Gilles Louppe, July 2016
Katie Malone, August 2016
Reformatted by Holger Nahrstaedt 2020

.. currentmodule:: skopt

If you are looking for a :obj:`sklearn.model_selection.GridSearchCV` replacement checkout
:ref:`sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py` instead.

Problem statement
=================

Tuning the hyper-parameters of a machine learning model is often carried out
using an exhaustive exploration of (a subset of) the space of all hyper-parameter
configurations (e.g., using :obj:`sklearn.model_selection.GridSearchCV`), which
often results in a very time consuming operation.

In this notebook, we illustrate how to couple :class:`gp_minimize` with sklearn's
estimators to tune hyper-parameters using sequential model-based optimisation,
hopefully resulting in equivalent or better solutions, but within fewer
evaluations.

Note: scikit-optimize provides a dedicated interface for estimator tuning via
:class:`BayesSearchCV` class which has a similar interface to those of
:obj:`sklearn.model_selection.GridSearchCV`. This class uses functions of skopt to perform hyperparameter
search efficiently. For example usage of this class, see
:ref:`sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py`
example notebook.
"""
print(__doc__)
import numpy as np

#############################################################################
# Objective
# =========
# To tune the hyper-parameters of our model we need to define a model,
# decide which parameters to optimize, and define the objective function
# we want to minimize.

from sklearn.datasets import load_boston
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

boston = load_boston()
X, y = boston.data, boston.target
n_features = X.shape[1]

# gradient boosted trees tend to do well on problems like this
reg = GradientBoostingRegressor(n_estimators=50, random_state=0)

#############################################################################
# Next, we need to define the bounds of the dimensions of the search space
# we want to explore and pick the objective. In this case the cross-validation
# mean absolute error of a gradient boosting regressor over the Boston
# dataset, as a function of its hyper-parameters.

from skopt.space import Real, Integer
from skopt.utils import use_named_args


# The list of hyper-parameters we want to optimize. For each one we define the
# bounds, the corresponding scikit-learn parameter name, as well as how to
# sample values from that dimension (`'log-uniform'` for the learning rate)
space  = [Integer(1, 5, name='max_depth'),
          Real(10**-5, 10**0, "log-uniform", name='learning_rate'),
          Integer(1, n_features, name='max_features'),
          Integer(2, 100, name='min_samples_split'),
          Integer(1, 100, name='min_samples_leaf')]

# this decorator allows your objective function to receive the parameters as
# keyword arguments. This is particularly convenient when you want to set
# scikit-learn estimator parameters
@use_named_args(space)
def objective(**params):
    reg.set_params(**params)

    return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1,
                                    scoring="neg_mean_absolute_error"))

#############################################################################
# Optimize all the things!
# ========================
# With these two pieces, we are now ready for sequential model-based
# optimisation. Here we use gaussian process-based optimisation.

from skopt import gp_minimize
res_gp = gp_minimize(objective, space, n_calls=50, random_state=0)

"Best score=%.4f" % res_gp.fun

#############################################################################

print("""Best parameters:
- max_depth=%d
- learning_rate=%.6f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_gp.x[0], res_gp.x[1],
                            res_gp.x[2], res_gp.x[3],
                            res_gp.x[4]))

#############################################################################
# Convergence plot
# ================

from skopt.plots import plot_convergence

plot_convergence(res_gp)
--------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------



Comments

  • 바죠 2020/01/04 17:15

    from sklearn.datasets import load_boston
    from keras.models import Sequential
    from keras.layers import Dense, Conv1D, Flatten
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error
    import matplotlib.pyplot as plt

    boston = load_boston()
    x, y = boston.data, boston.target
    print(x.shape)

    x = x.reshape(x.shape[0], x.shape[1], 1)
    print(x.shape)

    xtrain, xtest, ytrain, ytest=train_test_split(x, y, test_size=0.15)

    model = Sequential()
    model.add(Conv1D(32, 2, activation="relu", input_shape=(13,1)))
    model.add(Flatten())
    model.add(Dense(64, activation="relu"))
    model.add(Dense(1))
    model.compile(loss="mse", optimizer="adam")
    model.summary()
    model.fit(xtrain, ytrain, batch_size=12,epochs=200, verbose=0)

    ypred = model.predict(xtest)
    print(model.evaluate(xtrain, ytrain))
    print("MSE: %.4f" % mean_squared_error(ytest, ypred))

    x_ax = range(len(ypred))
    plt.scatter(x_ax, ytest, s=5, color="blue", label="original")
    plt.plot(x_ax, ypred, lw=0.8, color="red", label="predicted")
    plt.legend()
    plt.show()
  • 바죠 2020/01/23 09:18

    https://towardsdatascience.com/a-beginners-guide-to-xgboost-87f5d4c30ed7

    from sklearn import datasets
    import xgboost as xgb

    iris = datasets.load_iris()
    X = iris.data
    y = iris.target

    from sklearn.model_selection import train_test_split
    X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2)
    D_train = xgb.DMatrix(X_train, label=Y_train)
    D_test = xgb.DMatrix(X_test, label=Y_test)

    param = {
        'eta': 0.3,
        'max_depth': 3,
        'objective': 'multi:softprob',
        'num_class': 3}

    steps = 20 # The number of training iterations

    model = xgb.train(param, D_train, steps)

    import numpy as np
    from sklearn.metrics import precision_score, recall_score, accuracy_score

    preds = model.predict(D_test)
    best_preds = np.asarray([np.argmax(line) for line in preds])

    print("Precision = {}".format(precision_score(Y_test, best_preds, average='macro')))
    print("Recall = {}".format(recall_score(Y_test, best_preds, average='macro')))
    print("Accuracy = {}".format(accuracy_score(Y_test, best_preds)))


  • 바죠 2021/01/13 15:08

    https://scikit-optimize.github.io/stable/auto_examples/hyperparameter-optimization.html#sphx-glr-auto-examples-hyperparameter-optimization-py