
XGBoost in practice: the DMatrix format and the sklearn wrapper

Posted: 2023-05-05 04:38:32 · Reads: 196105 · Author: 891

Using XGBoost with the DMatrix format

Install the xgboost package:

```shell
pip install xgboost -i https://pypi.tuna.tsinghua.edu.cn/simple
```

```python
import numpy as np
import pandas as pd
import pickle  # pickle provides simple persistence: objects can be saved to disk as files
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Pima Indians Diabetes dataset. Fields include: number of pregnancies, plasma glucose
# concentration from an oral glucose tolerance test, diastolic blood pressure (mm Hg),
# triceps skin fold thickness (mm), 2-hour serum insulin (μU/ml), body mass index
# (kg / height(m)^2), diabetes pedigree function, and age (years).

# Basic example: read data from a csv file and do binary classification
data = pd.read_csv('pima-indians-diabetes.csv')
'''
Pregnancies: number of pregnancies
Glucose: plasma glucose
BloodPressure: diastolic blood pressure (mm Hg)
SkinThickness: triceps skin fold thickness (mm)
Insulin: 2-hour serum insulin (mu U/ml)
BMI: body mass index (weight / height^2)
DiabetesPedigreeFunction: diabetes pedigree function
Age: age (years)
Outcome: class label (0 or 1)
'''
data.head()
'''
   Pregnancies  Glucose  BloodPressure  SkinThickness  Insulin   BMI  DiabetesPedigreeFunction  Age  Outcome
0            6      148             72             35        0  33.6                     0.627   50        1
1            1       85             66             29        0  26.6                     0.351   31        0
2            8      183             64              0        0  23.3                     0.672   32        1
3            1       89             66             23       94  28.1                     0.167   21        0
4            0      137             40             35      168  43.1                     2.288   33        1
'''

data.info()  # no missing values
'''
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 768 entries, 0 to 767
Data columns (total 9 columns):
 #   Column                    Non-Null Count  Dtype
---  ------                    --------------  -----
 0   Pregnancies               768 non-null    int64
 1   Glucose                   768 non-null    int64
 2   BloodPressure             768 non-null    int64
 3   SkinThickness             768 non-null    int64
 4   Insulin                   768 non-null    int64
 5   BMI                       768 non-null    float64
 6   DiabetesPedigreeFunction  768 non-null    float64
 7   Age                       768 non-null    int64
 8   Outcome                   768 non-null    int64
dtypes: float64(2), int64(7)
memory usage: 54.1 KB
'''

data.describe()
'''
       Pregnancies     Glucose  BloodPressure  SkinThickness     Insulin         BMI  DiabetesPedigreeFunction         Age     Outcome
count   768.000000  768.000000     768.000000     768.000000  768.000000  768.000000                768.000000  768.000000  768.000000
mean      3.845052  120.894531      69.105469      20.536458   79.799479   31.992578                  0.471876   33.240885    0.348958
std       3.369578   31.972618      19.355807      15.952218  115.244002    7.884160                  0.331329   11.760232    0.476951
min       0.000000    0.000000       0.000000       0.000000    0.000000    0.000000                  0.078000   21.000000    0.000000
25%       1.000000   99.000000      62.000000       0.000000    0.000000   27.300000                  0.243750   24.000000    0.000000
50%       3.000000  117.000000      72.000000      23.000000   30.500000   32.000000                  0.372500   29.000000    0.000000
75%       6.000000  140.250000      80.000000      32.000000  127.250000   36.600000                  0.626250   41.000000    1.000000
max      17.000000  199.000000     122.000000      99.000000  846.000000   67.100000                  2.420000   81.000000    1.000000
'''

data.groupby(['Outcome']).count()
'''
         Pregnancies  Glucose  BloodPressure  SkinThickness  Insulin  BMI  DiabetesPedigreeFunction  Age
Outcome
0                500      500            500            500      500  500                       500  500
1                268      268            268            268      268  268                       268  268
'''

data.corr()
'''
                          Pregnancies   Glucose  BloodPressure  SkinThickness   Insulin       BMI  DiabetesPedigreeFunction       Age   Outcome
Pregnancies                  1.000000  0.129459       0.141282      -0.081672 -0.073535  0.017683                 -0.033523  0.544341  0.221898
Glucose                      0.129459  1.000000       0.152590       0.057328  0.331357  0.221071                  0.137337  0.263514  0.466581
BloodPressure                0.141282  0.152590       1.000000       0.207371  0.088933  0.281805                  0.041265  0.239528  0.065068
SkinThickness               -0.081672  0.057328       0.207371       1.000000  0.436783  0.392573                  0.183928 -0.113970  0.074752
Insulin                     -0.073535  0.331357       0.088933       0.436783  1.000000  0.197859                  0.185071 -0.042163  0.130548
BMI                          0.017683  0.221071       0.281805       0.392573  0.197859  1.000000                  0.140647  0.036242  0.292695
DiabetesPedigreeFunction    -0.033523  0.137337       0.041265       0.183928  0.185071  0.140647                  1.000000  0.033561  0.173844
Age                          0.544341  0.263514       0.239528      -0.113970 -0.042163  0.036242                  0.033561  1.000000  0.238356
Outcome                      0.221898  0.466581       0.065068       0.074752  0.130548  0.292695                  0.173844  0.238356  1.000000
'''

# Split the data into training and test sets
train, test = train_test_split(data)

# Convert to the DMatrix format, which is specific to XGBoost
# Feature columns
feature_columns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness',
                   'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']
# Target column
target_column = 'Outcome'

xgtrain = xgb.DMatrix(train[feature_columns].values, train[target_column].values)  # training set
xgtest = xgb.DMatrix(test[feature_columns].values, test[target_column].values)     # test set
# DMatrix() is XGBoost's own input format. XGBoost can load libsvm-format text data,
# NumPy 2-D arrays, and XGBoost binary cache files; the loaded data lives in a DMatrix object.

# Parameter settings
param = {'max_depth': 5,                  # maximum tree depth
         'eta': 0.1,                      # learning rate
         'silent': 1,                     # whether to print information during training
         'subsample': 0.7,                # row subsampling ratio
         'colsample_bytree': 0.7,         # column subsampling ratio
         'objective': 'binary:logistic'}  # logistic regression objective for binary classification

# Set up a watchlist to monitor model state during training
watchlist = [(xgtest, 'eval'), (xgtrain, 'train')]
bst = xgb.train(param, xgtrain, 10, watchlist)  # parameters, training data, 10 rounds, watchlist

# Predict with the model
preds = bst.predict(xgtest)
'''
[0]  eval-logloss:0.66464  train-logloss:0.65518
[1]  eval-logloss:0.64236  train-logloss:0.62582
[2]  eval-logloss:0.62212  train-logloss:0.60128
[3]  eval-logloss:0.60074  train-logloss:0.57535
[4]  eval-logloss:0.58476  train-logloss:0.55096
[5]  eval-logloss:0.56853  train-logloss:0.53070
[6]  eval-logloss:0.55099  train-logloss:0.50928
[7]  eval-logloss:0.54226  train-logloss:0.49490
[8]  eval-logloss:0.53426  train-logloss:0.47898
[9]  eval-logloss:0.52509  train-logloss:0.46670
'''

# Compute the error rate
labels = xgtest.get_label()  # compare the predictions against the true labels
print('Error rate: %f' %    # preds[i] > 0.5 is the threshold for predicting class 1
      (sum(1 for i in range(len(preds)) if int(preds[i] > 0.5) != labels[i]) / float(len(preds))))
# Error rate: 0.213542

# Save the model
bst.save_model('1.model')
```
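The error-rate line thresholds each predicted probability at 0.5 and counts disagreements with the labels. A minimal, self-contained sketch of that same computation, on hypothetical toy values rather than real model output:

```python
# Hypothetical predicted probabilities and true labels, for illustration only
preds = [0.81, 0.42, 0.55, 0.13, 0.70]
labels = [1, 0, 0, 0, 1]

# Threshold at 0.5 to turn probabilities into class predictions, then count mismatches
errors = sum(1 for p, y in zip(preds, labels) if int(p > 0.5) != y)
error_rate = errors / float(len(preds))
print('Error rate: %f' % error_rate)  # one of five predictions is wrong -> 0.200000
```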

 

Using XGBoost's sklearn wrapper

Joblib is a set of tools for lightweight pipelining in Python. Its features include transparent disk caching with lazy re-evaluation (the memoize pattern) and simple parallel computing.
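The "memoize pattern" here means caching a function's results so repeated calls with the same arguments skip recomputation; joblib's `Memory` does this on disk. A minimal in-memory illustration of the same idea using only the standard library (not joblib itself):

```python
from functools import lru_cache

calls = []  # record which arguments actually trigger a real computation

@lru_cache(maxsize=None)  # in-memory memoization; joblib.Memory is the on-disk analogue
def square(x):
    calls.append(x)  # only runs on a cache miss
    return x * x

square(4)
square(4)  # served from the cache; square() body does not run again
square(3)
print(calls)  # only the two distinct arguments were computed
```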

Joblib can save a model to disk and reload it to run again whenever needed.
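For sklearn-style models the usual pair is `joblib.dump` / `joblib.load`. The same save-and-reload round trip can be sketched with the standard-library pickle module (which the earlier imports already include); the dict here is a hypothetical stand-in for a fitted model object:

```python
import os
import pickle
import tempfile

# A hypothetical stand-in for a model; any picklable Python object works the same way
model = {'max_depth': 5, 'eta': 0.1}

path = os.path.join(tempfile.mkdtemp(), 'model.pkl')
with open(path, 'wb') as f:
    pickle.dump(model, f)        # save to disk
with open(path, 'rb') as f:
    restored = pickle.load(f)    # reload when needed

print(restored == model)  # the reloaded object is equal to the original
```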

```python
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import pickle
import xgboost as xgb
from sklearn.model_selection import train_test_split
import joblib

# Read the data with pandas
data = pd.read_csv('Pima-Indians-Diabetes.csv')

# Split the data
train, test = train_test_split(data)

# Separate the features X and the target y
feature_columns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness',
                   'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']
target_column = 'Outcome'
train_X = train[feature_columns].values
train_y = train[target_column].values
test_X = test[feature_columns].values
test_y = test[target_column].values

# Initialize the model
xgb_classifier = xgb.XGBClassifier(n_estimators=20, max_depth=4, learning_rate=0.1,
                                   subsample=0.7, colsample_bytree=0.7)

# Fit the model
xgb_classifier.fit(train_X, train_y)

# Predict with the model
preds = xgb_classifier.predict(test_X)

# Compute the error rate
print('Error rate: %f' % ((preds != test_y).sum() / float(test_y.shape[0])))
# Error rate: 0.270833

# Save the model
# joblib.dump(xgb_classifier, '2.model')
```

Differences between XGBoost and GBDT

Similarities: the basic principle is the same.

XGBoost also trains serially, one tree at a time.

Processing of features (split finding across features), however, is parallelized.

Advantages over GBDT:

Faster training.

Supports custom loss functions.

Traditional GBDT uses CART trees as base learners; XGBoost additionally supports linear base learners, in which case XGBoost amounts to logistic regression (for classification) or linear regression (for regression) with L1 and L2 regularization, giving stronger generalization.

More tunable parameters.
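The linear-base-learner point maps to XGBoost's `booster` training parameter. A hedged sketch of the two configurations (the parameter values are illustrative, not tuned); with `booster='gblinear'` and the `binary:logistic` objective, `alpha` and `lambda` supply the L1 and L2 regularization strengths:

```python
# Tree booster (the default): CART trees as base learners
param_tree = {'booster': 'gbtree', 'max_depth': 5, 'eta': 0.1,
              'objective': 'binary:logistic'}

# Linear booster: the model becomes (roughly) L1/L2-regularized logistic regression
param_linear = {'booster': 'gblinear', 'alpha': 0.1, 'lambda': 1.0,
                'objective': 'binary:logistic'}

# Either dict can be passed to xgb.train() exactly as in the DMatrix example:
# bst = xgb.train(param_linear, xgtrain, 10, watchlist)
```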
