Graph Neural Networks for Material Property Prediction

This repository contains code developed during my research at Rensselaer Polytechnic Institute (RPI), specifically as part of the "Designing Materials to Revolutionize and Engineer our Future" (DMREF) project. The goal of this project was to apply deep learning—particularly Graph Neural Networks (GNNs) and Autoencoders—to analyze and optimize material properties for aircraft engine components operating under extreme conditions.


Research Overview

In this project, I focused on developing and optimizing deep learning models that predict material properties based on crystal structure and composition. The models were trained using data from the Materials Project Database and the AFLOW Database, provided by RPI's Mechanical, Aerospace, and Nuclear Engineering (MANE) Department.

Objectives

  • Identify new materials suitable for use in high-temperature jet engine environments.
  • Use graph-based deep learning models to extract low-dimensional representations of materials.
  • Cluster and analyze materials based on learned representations to infer desired properties.

Methodology

1. Graph-Based Autoencoders (GAEs)

I implemented several models to process material data represented as graphs:

  • Graph Convolutional Network (GCN)
  • Graph Attention Network (GAT)
  • Graph Autoencoders (GAEs) using GCN and GAT encoders

Each model was implemented in PyTorch, trained with 5-fold cross-validation, and tuned for performance with the help of supporting libraries such as scikit-learn.
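
To make this setup concrete, below is a minimal sketch of a graph autoencoder with a GCN encoder trained under 5-fold cross-validation. It assumes PyTorch Geometric for the graph layers; the dummy dataset, NUM_NODE_FEATURES, and the layer sizes are hypothetical placeholders rather than values from the project.

    import torch
    import torch.nn.functional as F
    from sklearn.model_selection import KFold
    from torch_geometric.data import Data
    from torch_geometric.loader import DataLoader
    from torch_geometric.nn import GAE, GCNConv

    NUM_NODE_FEATURES = 16  # placeholder: size of each atom's feature vector

    # Hypothetical dataset: in the project these graphs would be built from
    # Materials Project / AFLOW crystal structures, one graph per material.
    graphs = [Data(x=torch.randn(4, NUM_NODE_FEATURES),
                   edge_index=torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]]))
              for _ in range(50)]

    class GCNEncoder(torch.nn.Module):
        """Two-layer GCN encoder mapping node features to a latent space.
        A GAT-based variant would swap GCNConv for GATConv here."""
        def __init__(self, in_channels, hidden_channels, latent_channels):
            super().__init__()
            self.conv1 = GCNConv(in_channels, hidden_channels)
            self.conv2 = GCNConv(hidden_channels, latent_channels)

        def forward(self, x, edge_index):
            x = F.relu(self.conv1(x, edge_index))
            return self.conv2(x, edge_index)

    def train_fold(train_graphs, epochs=100, lr=1e-3):
        """Train one graph autoencoder on a single cross-validation fold."""
        model = GAE(GCNEncoder(NUM_NODE_FEATURES, 64, 16))
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        loader = DataLoader(train_graphs, batch_size=32, shuffle=True)
        model.train()
        for _ in range(epochs):
            for batch in loader:
                optimizer.zero_grad()
                z = model.encode(batch.x, batch.edge_index)
                loss = model.recon_loss(z, batch.edge_index)  # edge-reconstruction loss
                loss.backward()
                optimizer.step()
        return model

    # 5-fold cross-validation over the list of material graphs
    kfold = KFold(n_splits=5, shuffle=True, random_state=0)
    for fold, (train_idx, val_idx) in enumerate(kfold.split(graphs)):
        model = train_fold([graphs[i] for i in train_idx])
        # evaluate on [graphs[i] for i in val_idx] here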

2. Image-Based Autoencoders with CNNs

In a separate phase of the project, I designed and trained convolutional autoencoders using Keras and TensorFlow to analyze graphical representations of material properties. The goal was to leverage CNNs for extracting complex patterns from visual data.

  • The encoder and decoder components both used convolutional layers (see the sketch below).
  • Metadata was also integrated to analyze categorical patterns: three columns of categorical labels, each ranging from 1 to 4.
  • Results showed how visual and structural data could be combined for better material characterization.
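
The snippet below is a minimal sketch of such a convolutional autoencoder in Keras; IMG_SIZE, the filter counts, and the single-channel image format are illustrative assumptions, not values taken from the project.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    IMG_SIZE = 64  # hypothetical height/width of the property images (grayscale)

    def build_autoencoder():
        inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1))

        # Encoder: stacked Conv2D blocks that compress the image into a compact code
        x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
        x = layers.MaxPooling2D(2)(x)
        x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
        encoded = layers.MaxPooling2D(2)(x)  # latent feature maps

        # Decoder: mirror of the encoder that reconstructs the input image
        x = layers.Conv2D(16, 3, activation="relu", padding="same")(encoded)
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
        x = layers.UpSampling2D(2)(x)
        outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

        autoencoder = Model(inputs, outputs, name="conv_autoencoder")
        encoder = Model(inputs, encoded, name="encoder")
        autoencoder.compile(optimizer="adam", loss="mse")
        return autoencoder, encoder

    autoencoder, encoder = build_autoencoder()
    # autoencoder.fit(images, images, epochs=50, batch_size=32, validation_split=0.1)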

Key Techniques Used

  • Graph Neural Networks (GCN, GAT)
  • Graph Autoencoders (GAEs)
  • Convolutional Neural Networks (CNNs)
  • Dimensionality Reduction & Clustering (see the sketch after this list)
  • 5-Fold Cross-Validation
  • Metadata-driven pattern analysis
  • PyTorch, Keras, TensorFlow, scikit-learn, and Google Colab
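
The dimensionality-reduction, clustering, and metadata items above can be combined as in the sketch below, which applies PCA and k-means from scikit-learn to randomly generated stand-ins for the learned latent codes and the three categorical metadata columns; the column names, cluster count, and data shapes are illustrative assumptions only.

    import numpy as np
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Hypothetical stand-ins: latent codes from an encoder (one row per material)
    # and three categorical metadata columns, each labeled 1-4.
    latent_codes = rng.normal(size=(200, 16))
    metadata = pd.DataFrame(rng.integers(1, 5, size=(200, 3)),
                            columns=["cat_a", "cat_b", "cat_c"])

    # Reduce the latent space to 2-D before clustering and visualization
    coords = PCA(n_components=2).fit_transform(latent_codes)

    # Group materials into a handful of candidate families
    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)

    # Cross-tabulate cluster membership against each categorical metadata column
    for col in metadata.columns:
        print(pd.crosstab(clusters, metadata[col], rownames=["cluster"], colnames=[col]))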

Notes and Attribution

  • Data Acknowledgment: The datasets used in this project were provided by Rensselaer Polytechnic Institute's MANE Department and are not included in this repository.
  • Original Work: All code in this repository was written by me during my independent research contributions to the DMREF project at RPI.
  • Presentations: Research progress and findings were presented weekly to RPI graduate faculty during the summer session.

License

This repository is licensed under the MIT License. You are free to use, modify, and distribute the code with proper attribution.


Acknowledgments

Thanks to the research group at RPI’s MANE Department for the opportunity to contribute to this exciting area of materials science, and for providing access to valuable datasets and feedback during model development.
