In real estate, determining a property's price involves many factors, from location and size to amenities and market trends. Simple linear regression, a foundational machine learning technique, offers a practical way to predict house prices from key features such as the number of rooms or the square footage.
In this article, I walk through the process of applying simple linear regression to a housing dataset, from data preprocessing and feature selection to building a model that can deliver useful price insights. Whether you are new to data science or looking to deepen your understanding, this project is a hands-on look at how data-driven predictions can shape smarter real estate decisions.
First, import the libraries:
```python
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt

# Read from the directory where you stored the data
data = pd.read_csv('/kaggle/input/california-housing-prices/housing.csv')
data
```
```python
# Check whether any columns contain null values
data.info()

# Drop the rows that contain null values
data.dropna(inplace=True)
data.info()
```
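To make the effect of `dropna` concrete, here is a tiny illustration on a made-up frame (the column names mirror the housing data, but the values are invented):

```python
import pandas as pd
import numpy as np

# A small frame with one missing value in total_bedrooms
df = pd.DataFrame({
    'total_bedrooms': [4.0, np.nan, 3.0],
    'median_house_value': [250000, 180000, 320000],
})
print(df.isna().sum())   # total_bedrooms reports 1 null

# dropna removes every row that contains at least one null
df = df.dropna()
print(len(df))           # 2 rows remain
```

Note that this discards whole rows; an alternative would be filling the gaps (e.g. with the column median), but the article takes the simpler route.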
```python
# Split the data into training and test sets
from sklearn.model_selection import train_test_split

X = data.drop(['median_house_value'], axis=1)
y = data['median_house_value']
y

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
```
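A quick sketch of what this split produces, on synthetic stand-in data (the real housing frame is not used here, and `random_state` is my addition for reproducibility; the call above omits it, so its split changes on every run):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the housing features
X = pd.DataFrame({'rooms': np.arange(100), 'income': np.random.rand(100)})
y = pd.Series(np.random.rand(100) * 500000, name='median_house_value')

# test_size=0.2 reserves 20% of the rows for the held-out test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)  # (80, 2) (20, 2)
```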
```python
# Rejoin features and target to examine correlations in the training data
train_data = X_train.join(y_train)
train_data

# Visualize the feature distributions
train_data.hist(figsize=(15, 8))
```
```python
# Encode the non-numeric columns so they can be included in the correlation analysis
train_data_encoded = pd.get_dummies(train_data, drop_first=True)
correlation_matrix = train_data_encoded.corr()
print(correlation_matrix)

plt.figure(figsize=(15, 8))
sns.heatmap(train_data_encoded.corr(), annot=True, cmap="inferno")
```
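As a small illustration of what `corr()` measures, here is a toy two-column frame (invented values, not the housing data): entries near ±1 indicate a strong linear relationship, which is exactly what makes a feature promising for linear regression.

```python
import pandas as pd

# Two columns that rise together almost perfectly linearly
df = pd.DataFrame({'income': [1.0, 2.0, 3.0, 4.0],
                   'price': [100, 210, 290, 410]})
print(df.corr().round(2))  # the income/price entry is close to 1
```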
```python
# Log-transform the skewed count features to make their distributions more symmetric
train_data['total_rooms'] = np.log(train_data['total_rooms'] + 1)
train_data['total_bedrooms'] = np.log(train_data['total_bedrooms'] + 1)
train_data['population'] = np.log(train_data['population'] + 1)
train_data['households'] = np.log(train_data['households'] + 1)

train_data.hist(figsize=(15, 8))
```
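Why the `log(x + 1)` transform helps can be seen on a small invented series with the same kind of long right tail as `total_rooms`:

```python
import numpy as np
import pandas as pd

# A heavily right-skewed column, similar in shape to total_rooms
s = pd.Series([10, 50, 120, 400, 2500, 39000], dtype=float)

# Same transform as above; np.log1p(s) is an equivalent, numerically safer spelling
logged = np.log(s + 1)
print(s.skew(), logged.skew())  # the log squeezes the long right tail
```

The `+ 1` guards against `log(0)` for rows where a count is zero.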
```python
# Convert the ocean_proximity categories into binary columns using one-hot encoding
train_data.ocean_proximity.value_counts()

# For each category, create a binary (0 or 1) indicator column
pd.get_dummies(train_data.ocean_proximity)
```
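On a miniature `ocean_proximity`-like column (invented values), `get_dummies` produces one 0/1 column per category, sorted alphabetically:

```python
import pandas as pd

s = pd.Series(['INLAND', 'NEAR BAY', 'INLAND', 'ISLAND'], name='ocean_proximity')
dummies = pd.get_dummies(s)
print(list(dummies.columns))     # ['INLAND', 'ISLAND', 'NEAR BAY']
print(dummies.sum().to_dict())   # per-category counts, matching value_counts()
```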
```python
# Join the dummy columns, then drop the original ocean_proximity column
train_data = train_data.join(pd.get_dummies(train_data.ocean_proximity)).drop(['ocean_proximity'], axis=1)
train_data

# Recheck the correlations
plt.figure(figsize=(18, 8))
sns.heatmap(train_data.corr(), annot=True, cmap='twilight')
```
To be honest, training a model is not the simplest process, but to keep improving the results above you can add more options under param_grid, such as min_feature, so that your best estimator's score keeps improving.
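The mention of `param_grid` and a best-estimator score suggests a grid-search workflow. A minimal sketch of such a setup, assuming scikit-learn's `GridSearchCV` with a `RandomForestRegressor` on synthetic stand-in data (the regressor choice, the data, and the `param_grid` keys here are all assumptions, since the training code is not shown above):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic data standing in for the preprocessed training frame
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((200, 3)),
                 columns=['median_income', 'total_rooms', 'households'])
y = X['median_income'] * 400000 + rng.normal(0, 10000, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# Extend param_grid with more hyperparameters to keep tuning
param_grid = {
    'n_estimators': [50, 100],
    'max_features': [1, 2],
}
grid = GridSearchCV(RandomForestRegressor(random_state=42), param_grid, cv=3)
grid.fit(X_train, y_train)
print(grid.best_estimator_.score(X_test, y_test))  # R^2 on the held-out test set
```

Each extra key in `param_grid` multiplies the number of fitted models, so broader grids trade training time for a chance at a better score.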
If you've made it this far, please leave a like below and share your comments; your opinion matters a lot. Thank you! ❤️