
ArtDetectiveAI


ArtDetectiveAI specializes in detecting whether an image (specifically an artistic image) is AI generated or human created. The model was trained on an NVIDIA Jetson Nano.

With recent advances in AI art and image generation, it is becoming increasingly difficult for humans to differentiate between what is real (human created) and what is fake (AI generated). This model gives people a way to check whether an image is AI generated, helping to prevent plagiarism, dishonest work, and more.

The Algorithm

The full dataset can be viewed here. The dataset was slightly modified: some images were moved out of the AI and Non_AI training folders to be used for validation instead.
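The train/validation split described above can be sketched as a small Python script. The folder layout (`train/AI`, `val/AI`, etc.), the 10% default fraction, and the function name are assumptions for illustration, not taken from the repository:

```python
import random
import shutil
from pathlib import Path

def split_for_validation(train_dir, val_dir, fraction=0.1, seed=42):
    """Move a random fraction of images from each class's training
    folder into a matching validation folder."""
    random.seed(seed)
    for class_dir in Path(train_dir).iterdir():
        if not class_dir.is_dir():
            continue
        images = sorted(class_dir.glob("*"))
        n_val = max(1, int(len(images) * fraction))
        val_class_dir = Path(val_dir) / class_dir.name
        val_class_dir.mkdir(parents=True, exist_ok=True)
        for img in random.sample(images, n_val):
            shutil.move(str(img), str(val_class_dir / img.name))
```

Fixing the random seed keeps the split reproducible across runs, so the same images always land in the validation set.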

The model utilizes the dataset to classify images into two categories:

  1. AI
  2. Non_AI

AI refers to images detected as AI art, while Non_AI refers to images generated by humans.
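The mapping from class index to class name comes from the `labels.txt` file used at training time, which lists one class per line. A minimal sketch of that lookup (the helper name and the two-line file contents are assumptions matching the dataset's folder names):

```python
def load_labels(path):
    """Read one class name per line; the class index is the line number."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# e.g. a labels.txt containing:
#   AI
#   Non_AI
# maps class index 0 -> "AI" and class index 1 -> "Non_AI"
```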

The model was optimized for accuracy by training several configurations (googlenet, resnet-18, different numbers of epochs, etc.); the googlenet-based model achieved the highest accuracy.

(Example screenshots: non_ai_detection and ai_detection)

How It Works

ArtDetectiveAI uses a neural network to classify images. The process involves:

  1. Input Image: Upload an image file for classification.
  2. Processing: The image is processed using a pre-trained model (googlenet).
  3. Classification: The model predicts the category of the image and provides a confidence score.

Script Explanation

Here’s a breakdown of the my-recognition.py script:


```python
#!/usr/bin/python3
# classify an image as AI-generated or human-made art
from jetson_inference import imageNet
from jetson_utils import loadImage
import argparse

# parse the command line
parser = argparse.ArgumentParser()
parser.add_argument("filename", type=str, help="filename of the image to process")
parser.add_argument("--network", type=str,
                    default="../jetson-inference/python/training/classification/models/ai_art/googlenet.onnx",
                    help="path to the ONNX model to use")
args = parser.parse_args()

# load an image (into shared CPU/GPU memory)
img = loadImage(args.filename)

# load the recognition network with the custom model and labels
net = imageNet(model=args.network,
               labels="../jetson-inference/python/training/classification/models/ai_art/labels.txt",
               input_blob="input_0", output_blob="output_0")

# classify the image
class_idx, confidence = net.Classify(img)

# look up the human-readable class description
class_desc = net.GetClassDesc(class_idx)

# print out the result
print("image is recognized as '{:s}' (class #{:d}) with {:f}% confidence".format(
    class_desc, class_idx, confidence * 100))
```
  1. Imports: Load necessary libraries for image processing and classification.
  2. Arguments: Parse the image filename and network type from the command line.
  3. Load and Classify: Load the image, run it through the selected network, and print the classification result.

Training a Custom Model

To train a custom model, follow these steps:

  1. Prepare Dataset:
    wget [DATASET_URL] -O dataset.tar.gz
    tar xvzf dataset.tar.gz
  2. Configure Environment:
    echo 1 | sudo tee /proc/sys/vm/overcommit_memory
    ./docker/run.sh
  3. Train Model:
    python3 train.py --model-dir=models/my_model data/my_dataset
  4. Export Model:
    python3 onnx_export.py --model-dir=models/my_model
  5. Verify Model:
    Ensure the exported model (googlenet.onnx) is in the models/my_model directory.
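Step 5 can be automated with a short pre-flight check. The function name is an assumption, and the file names mirror the paths used elsewhere in this README:

```python
from pathlib import Path

def verify_export(model_dir):
    """Confirm the ONNX export and labels file exist before running inference."""
    model = Path(model_dir) / "googlenet.onnx"
    labels = Path(model_dir) / "labels.txt"
    missing = [p.name for p in (model, labels) if not p.exists()]
    if missing:
        raise FileNotFoundError(
            f"missing from {model_dir}: {', '.join(missing)}")
    return model, labels
```

Running this right after `onnx_export.py` catches a failed or misplaced export before you waste time debugging inference.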

Run Local App for Detection (easy)

  1. Download the .exe file here and run it.
  2. Select the .onnx model and the labels.txt file to use.
  3. Upload an image to the app; it will process the image and display the detection result and its confidence.

Running the Trained Model

You can run your trained model using either of the following two options: imagenet.py for image output or my-recognition.py for text output.

Option 1: Using imagenet.py for Image Output

This option allows you to generate an output image with the predicted label and confidence overlaid on it:

  1. Set Up:
    Ensure the model file (.onnx) and the labels file (.txt) are in the correct directory.
  2. Run Classification with Image Output:
    python3 imagenet.py --model=models/my_model/googlenet.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/my_dataset/labels.txt path/to/image.jpg image_output_name.jpg

    Example:

    python3 imagenet.py --model=models/my_model/googlenet.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/my_dataset/labels.txt non_ai.jpg output.jpg
  3. The output image will be generated with the predicted label and confidence score overlaid on it.

Option 2: Using my-recognition.py for Text Output

This option provides a simplified approach where the prediction and confidence score are displayed as text in the terminal:

  1. Set Up:
    Ensure the model file (.onnx) and the labels file (.txt) are in the correct directory.
  2. Run Classification with Text Output:
    python3 my-recognition.py path/to/image.jpg --network=models/my_model/googlenet.onnx

    Example:

    python3 my-recognition.py non_ai.jpg --network=models/my_model/googlenet.onnx
  3. The top prediction and confidence score will be printed to the terminal:
    image is recognized as '{class_desc}' (class #{class_idx}) with {confidence}% confidence
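If you want to consume that terminal output from another script, the line format shown above can be parsed with a regular expression. This is a sketch; it assumes the exact `print` format used by `my-recognition.py`, and the helper name is hypothetical:

```python
import re

# matches: image is recognized as 'Non_AI' (class #1) with 97.500000% confidence
RESULT_RE = re.compile(
    r"image is recognized as '(?P<label>[^']+)' "
    r"\(class #(?P<idx>\d+)\) with (?P<conf>[\d.]+)% confidence")

def parse_result(line):
    """Extract (label, class index, confidence as a fraction) from the
    script's output line, or return None if the line doesn't match."""
    m = RESULT_RE.search(line)
    if not m:
        return None
    return m.group("label"), int(m.group("idx")), float(m.group("conf")) / 100.0
```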

Resources
