Flame Guardian: Deep Learning-Based Fire Detection System
Introduction
Imagine waking up to the smell of smoke, heart racing as you ensure your family’s safety. Early detection is crucial, and “Flame Guardian,” a deep learning-powered fire detection system, aims to make a life-saving difference. This article guides you through creating this technology using CNNs and TensorFlow, from data gathering and augmentation to model construction and fine-tuning. Whether you’re a tech enthusiast or a professional, discover how to leverage cutting-edge technology to protect lives and property.
Learning Outcomes
- Gain skills in preparing, organizing, and augmenting image datasets to optimize model performance.
- Learn how to construct and fine-tune convolutional neural networks for effective image classification tasks.
- Develop the ability to assess and interpret model performance using metrics and visualizations.
- Learn how to deploy and adapt DL(Deep Learning) models for practical applications, demonstrating their utility in real-world problems like fire detection.
This article was published as a part of the Data Science Blogathon.
Table of contents
- Revolution of Deep Learning in Fire Detection
- Challenges in Fire Detection
- Dataset Overview
- Setting Up the Environment
- Data Preparation
- Visualizing the Distribution of Images
- Displaying Fire and Non-Fire Images
- Enhancing Training Data with Augmentation Techniques
- Constructing the Fire Detection Model
- Model Fitting: Training the Convolutional Neural Network
- Evaluating the Model
- Example Usage: Predicting Fire in New Images
- Frequently Asked Questions
Revolution of Deep Learning in Fire Detection
In recent years, Deep Learning has revolutionized many fields, from healthcare to finance, and it is now making strides in safety and disaster management. One particularly exciting application of Deep Learning is fire detection. With the increasing frequency and severity of wildfires worldwide, developing an effective and reliable fire detection system is more crucial than ever. In this comprehensive guide, we will walk you through the process of creating a powerful fire detection system using convolutional neural networks (CNNs) and TensorFlow. This system, aptly named “Flame Guardian,” aims to identify fire in images with high accuracy, potentially aiding in early detection and prevention of widespread fire damage.
Fires, whether wildfires or structural fires, pose a significant threat to life, property, and the environment. Early detection is critical in mitigating their devastating effects. Deep Learning-based fire detection systems can analyze vast amounts of data quickly and accurately, identifying fire incidents before they escalate.
Challenges in Fire Detection
Detecting fire using Deep Learning presents several challenges:
- Data Variability: Fire images can vary greatly in terms of color, intensity, and surrounding environment. A robust detection system must be able to handle this variability.
- False Positives: It’s crucial to minimize false positives (incorrectly identifying non-fire images as fire) to avoid unnecessary panic and resource deployment.
- Real-Time Processing: For practical use, the system should be able to process images in real-time, providing timely alerts.
- Scalability: The system should be scalable to handle large datasets and work across different environments.
Dataset Overview
The dataset used for the Flame Guardian fire detection system comprises images categorized into two classes: “fire” and “non-fire.” The primary purpose of this dataset is to train a convolutional neural network (CNN) model to accurately distinguish between images that contain fire and those that do not.
Composition of Fire and Non-Fire Images
- Fire Images: These images contain various scenarios where fire is present. The dataset includes images of wildfires, structural fires, and controlled burns. The fire in these images may vary in size, intensity, and the environment in which it is present. This diversity helps the model learn the different visual characteristics of fire.
- Non-Fire Images: These images do not contain any fire. They include a wide range of scenarios such as landscapes, buildings, forests, and other natural and urban environments without any fire. The inclusion of diverse non-fire images ensures that the model does not falsely identify fire in non-fire situations.
You can download the dataset from here.
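Once the dataset is uploaded to Google Drive and Drive is mounted (shown in the next section), a quick sanity check confirms that both classes are present. This is a minimal sketch; the folder path below matches the one used later in this article, so adjust it if you store the data elsewhere.

import os

# Assumed dataset location on Google Drive (same path as in the data preparation step)
base_dir = '/content/drive/MyDrive/Fire/fire_dataset'

for class_folder in ['fire_images', 'non_fire_images']:
    folder = os.path.join(base_dir, class_folder)
    count = len(os.listdir(folder)) if os.path.isdir(folder) else 0
    print(f"{class_folder}: {count} files")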
Setting Up the Environment
First, we need to set up our environment with the necessary libraries and tools. We will be using Google Colab for this project, as it provides a convenient platform with GPU support. We have already downloaded the dataset and uploaded it to Google Drive.
# Mount drive
from google.colab import drive
drive.mount('/content/drive')

# Importing necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import os
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Setting style grid
sns.set_style('darkgrid')
Data Preparation
To train our model, we need a dataset containing images of both fire and non-fire scenarios. We will create an empty DataFrame and a helper function that adds images from our Google Drive to it.
# Create an empty DataFrame
df = pd.DataFrame(columns=['path', 'label'])

# Function to add images to the DataFrame
def add_images_to_df(directory, label):
    for dirname, _, filenames in os.walk(directory):
        for filename in filenames:
            df.loc[len(df)] = [os.path.join(dirname, filename), label]

# Add fire images
add_images_to_df('/content/drive/MyDrive/Fire/fire_dataset/fire_images', 'fire')

# Add non-fire images
add_images_to_df('/content/drive/MyDrive/Fire/fire_dataset/non_fire_images', 'non_fire')

# Shuffle the dataset
df = df.sample(frac=1).reset_index(drop=True)
Visualizing the Distribution of Images
Visualizing the distribution of fire and non-fire images helps us understand our dataset better. We’ll use Plotly for interactive plots.
Creating a Pie Chart for Image Distribution
Let us now create a pie chart of the image distribution, alongside a scatter plot of the labels.
# Build a figure with a scatter plot of labels and a pie chart of class counts
fig = make_subplots(rows=1, cols=2, specs=[[{'type': 'xy'}, {'type': 'domain'}]])

fig.add_trace(go.Scatter(x=df.index, y=df['label'], mode='markers',
                         marker=dict(size=2), name='label'),
              row=1, col=1)

fig.add_trace(go.Pie(values=df['label'].value_counts().to_numpy(),
                     labels=df['label'].value_counts().index,
                     marker=dict(colors=['lightblue', 'pink'])),
              row=1, col=2)

fig.update_layout(title_text='Distribution of Fire and Non-Fire Images')
fig.show()
Displaying Fire and Non-Fire Images
Let us now write the code for displaying fire and non-fire images.
def visualize_images(label, title):
    data = df[df['label'] == label]
    pics = 6  # Set the number of pics
    fig, ax = plt.subplots(int(pics // 2), 2, figsize=(15, 15))
    plt.suptitle(title)
    ax = ax.ravel()
    for i in range((pics // 2) * 2):
        path = data.sample(1).loc[:, 'path'].to_numpy()[0]
        img = image.load_img(path)
        img = image.img_to_array(img) / 255
        ax[i].imshow(img)
        ax[i].axes.xaxis.set_visible(False)
        ax[i].axes.yaxis.set_visible(False)

visualize_images('fire', 'Images with Fire')
visualize_images('non_fire', 'Images without Fire')
Displaying a few sample images from both the fire and non-fire categories gives us a sense of what our model will be working with.
Enhancing Training Data with Augmentation Techniques
We will apply image augmentation techniques to improve our training data. Augmentation means applying random transformations to the images, such as rotation, zoom, and shear. By generating a more robust and diverse dataset, this step enhances the model’s ability to generalize to new images.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

generator = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=2,
    zoom_range=0.2,
    rescale=1/255,
    validation_split=0.2,
)

train_gen = generator.flow_from_dataframe(df, x_col='path', y_col='label',
                                          target_size=(256, 256), class_mode='binary',
                                          subset='training')
val_gen = generator.flow_from_dataframe(df, x_col='path', y_col='label',
                                        target_size=(256, 256), class_mode='binary',
                                        subset='validation')

# Map the numeric class indices back to readable labels
class_indices = {}
for key in train_gen.class_indices.keys():
    class_indices[train_gen.class_indices[key]] = key
print(class_indices)
Visualizing Augmented Images
We can visualize some of the augmented images generated by our training set.
sns.set_style('dark')

pics = 6  # Set the number of pics
fig, ax = plt.subplots(int(pics // 2), 2, figsize=(15, 15))
plt.suptitle('Generated images in training set')
ax = ax.ravel()

for i in range((pics // 2) * 2):
    ax[i].imshow(train_gen[0][0][i])
    ax[i].axes.xaxis.set_visible(False)
    ax[i].axes.yaxis.set_visible(False)
Constructing the Fire Detection Model
Our model will consist of several convolutional layers, each followed by a max-pooling layer. Convolutional layers are the core building blocks of CNNs, allowing the model to learn spatial hierarchies of features from the images. Max-pooling layers reduce the dimensionality of the feature maps, making the model more efficient. Towards the end of the model we will add fully connected (dense) layers. These layers combine the features learned by the convolutional layers and make the final classification decision. The output layer will have a single neuron with a sigmoid activation function, which outputs a probability score indicating whether the image contains fire. After defining the model architecture, we will print a summary to review the structure and the number of parameters in each layer. This step is important to ensure that the model is correctly configured.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(2, 2), activation='relu', input_shape=(256, 256, 3)))
model.add(MaxPool2D())
model.add(Conv2D(filters=64, kernel_size=(2, 2), activation='relu'))
model.add(MaxPool2D())
model.add(Conv2D(filters=128, kernel_size=(2, 2), activation='relu'))
model.add(MaxPool2D())
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
Compiling the Model with Optimizers and Loss Functions
Next, we’ll compile the model using the Adam optimizer and the binary cross-entropy loss function. The Adam optimizer is widely used in deep learning for its efficiency and adaptive learning rate. Binary cross-entropy is appropriate for our binary classification problem (fire vs. non-fire).
We’ll also specify additional metrics, such as accuracy, recall, and area under the curve (AUC), to evaluate the model’s performance during training and validation.
Adding Callbacks for Optimal Training
Callbacks are a powerful feature in TensorFlow that allows us to monitor and control the training process. We’ll use two important callbacks:
- EarlyStopping: Stops training when the validation loss stops improving, preventing overfitting.
- ReduceLROnPlateau: Reduces the learning rate when the validation loss plateaus, helping the model converge to a better solution.
# Compiling the model
from tensorflow.keras.metrics import Recall, AUC

model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy', Recall(), AUC()])

# Defining callbacks
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

early_stopping = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
reduce_lr_on_plateau = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5)
Model Fitting: Training the Convolutional Neural Network
Model fitting refers to the process of training a machine learning model on a dataset. During this process, the model learns the underlying patterns in the data by adjusting its parameters (weights and biases) to minimize the loss function. In the context of deep learning, this involves several epochs of forward and backward passes over the training data.
# The data generators already yield batches, so batch_size is not passed here.
# Keep the returned History object so we can plot the training curves later.
history = model.fit(x=train_gen, epochs=15, validation_data=val_gen,
                    callbacks=[early_stopping, reduce_lr_on_plateau])
Evaluating the Model
After training, we’ll evaluate the model’s performance on the validation set. This step helps us understand how well the model generalizes to new data. We’ll also visualize the training history to see how the loss and metrics evolved over time.
eval_results = model.evaluate(val_gen, return_dict=True)
for metric, value in eval_results.items():
    print(f"{metric}: {value:.2f}")
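To see how the loss and metrics evolved over time, we can plot the training history. The sketch below assumes the fit call above was assigned to a variable named history (as done in the model fitting step); the accuracy keys come from the metrics passed to model.compile.

# Plot training vs. validation curves from the History object returned by model.fit
history_df = pd.DataFrame(history.history)

fig, axes = plt.subplots(1, 2, figsize=(14, 5))
history_df[['loss', 'val_loss']].plot(ax=axes[0], title='Loss per Epoch')
history_df[['accuracy', 'val_accuracy']].plot(ax=axes[1], title='Accuracy per Epoch')
plt.show()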
Example Usage: Predicting Fire in New Images
Finally, we’ll demonstrate how to use the trained model to predict whether a new image contains fire. This step involves loading an image, preprocessing it to match the model’s input requirements, and using the model to make a prediction.
Downloading and Loading the Image
We’ll download a sample image from the internet and load it using TensorFlow’s image processing functions. This step involves resizing the image and normalizing its pixel values.
Making the Prediction
Using the trained model, we’ll make a prediction on the loaded image. The model will output a probability score, which we’ll round to get a binary classification (fire or non-fire). We’ll also map the prediction to its corresponding label using the class indices.
# Downloading the image
!curl https://static01.nyt.com/images/2021/02/19/world/19storm-briefing-texas-fire/19storm-briefing-texas-fire-articleLarge.jpg --output predict.jpg

# Loading the image
img = image.load_img('predict.jpg')
img

img = image.img_to_array(img) / 255
img = tf.image.resize(img, (256, 256))
img = tf.expand_dims(img, axis=0)
print("Image Shape", img.shape)

prediction = int(tf.round(model.predict(x=img)).numpy()[0][0])
print("The predicted value is:", prediction, "and the predicted label is:", class_indices[prediction])
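If you want to reuse the trained network without retraining it, for instance in the practical deployments mentioned in the learning outcomes, the model can be saved and reloaded with the standard Keras API. This is a minimal sketch; the file name flame_guardian.h5 is our own choice.

# Save the trained model to disk
model.save('flame_guardian.h5')

# Later (or in another script), reload it and predict exactly as before
loaded_model = tf.keras.models.load_model('flame_guardian.h5')
prediction = int(tf.round(loaded_model.predict(x=img)).numpy()[0][0])
print("Reloaded model prediction:", class_indices[prediction])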
Conclusion
Developing a Deep Learning-based fire detection system like “Flame Guardian” exemplifies the transformative potential of Deep Learning in addressing real-world challenges. By meticulously following each step, from data preparation and visualization to model building, training, and evaluation, we have created a robust framework for detecting fire in images. This project not only highlights the technical intricacies involved in deep learning but also emphasizes the importance of leveraging technology for safety and disaster prevention.
As we conclude, it’s evident that Deep Learning models can significantly enhance fire detection systems, making them more efficient, reliable, and scalable. While traditional methods have their merits, the incorporation of Deep Learning introduces a new level of sophistication and accuracy. The journey of developing “Flame Guardian” has been both enlightening and rewarding, showcasing the immense capabilities of modern technologies.
Key Takeaways
- Learned data handling and visualization techniques for image datasets.
- Saw how proper data collection and augmentation ensure effective model training and generalization.
- Implemented model building, training, and evaluation.
- Used callbacks like EarlyStopping and ReduceLROnPlateau to optimize training and prevent overfitting.
- Built a convolutional neural network (CNN) for fire detection.
Frequently Asked Questions
Q1. What is “Flame Guardian”?
A. “Flame Guardian” is a fire detection system that uses convolutional neural networks (CNNs) and TensorFlow to identify fire in images with high accuracy.
Q2. Why is early fire detection important?
A. Early fire detection is crucial for preventing extensive damage, saving lives, and reducing the environmental impact of fires. Rapid response can significantly mitigate the devastating effects of both wildfires and structural fires.
Q3. What challenges are involved in building a fire detection system using deep learning?
A. Challenges include handling data variability (differences in color, intensity, and environment), minimizing false positives, ensuring real-time processing capabilities, and scalability to handle large datasets.
Q4. How does image augmentation help in training the model?
A. Image augmentation enhances the training dataset by applying random transformations such as rotation, zoom, and shear. This helps the model generalize better by exposing it to a variety of scenarios, improving its robustness.
Q5. What metrics are used to evaluate the model’s performance?
A. The model is evaluated using metrics like accuracy, recall, and the area under the curve (AUC). These metrics help assess how well the model distinguishes between fire and non-fire images and its overall reliability.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.