BlackBoard Auto-Logger
Intro
A Python-based application that automatically logs in to your account and attends your classes on the cuchd.blackboard.com website. I wrote this script STRICTLY for EDUCATIONAL PURPOSES and I am NOT RESPONSIBLE for any misuse by its users, i.e., all the students of Chandigarh University (yes, ALL, not just CSE or AIT!)
Usage
So you’re done with the installation. But how do you use it? It’s really simple, actually: you run the same runME.sh and voila, you’re in your class. It’s recommended that you don’t interact with the browser window while it’s connecting but, meh. Do what you want!
In theory, this will log in to BlackBoard and join your class according to the timetable you provided during installation.
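The timetable lookup described above can be sketched like this. The data structure and function names are illustrative assumptions, not the script's actual format:

```python
from datetime import datetime, time

# Hypothetical timetable layout: weekday -> list of (start, end, class name).
# The real installer stores your timetable in its own format; this structure
# is only an assumption for illustration.
TIMETABLE = {
    "Monday": [
        (time(9, 0), time(9, 50), "Maths"),
        (time(10, 0), time(10, 50), "Physics"),
    ],
}

def current_class(timetable, now=None):
    """Return the name of the class running at `now`, or None if free."""
    now = now or datetime.now()
    day = now.strftime("%A")  # e.g. "Monday"
    for start, end, name in timetable.get(day, []):
        if start <= now.time() <= end:
            return name
    return None
```

Once the current class is known, the script only has to open the browser and click into that session.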
NOTE :: This is still under testing and I am sorry if you face any problems feel free to contact. Also, let me know if you have any suggestions for the program or want to help in this project. You know where to find me. Adios !
AfterShock Predictions
AfterShock
Analysis of the 2018 article ‘Deep learning of aftershock patterns following large earthquakes’ by Phoebe M. R. DeVries, Fernanda Viégas, Martin Wattenberg & Brendan J. Meade, published in Nature.
Intro
In a world divided by fear of losing your loved ones, your belongings, and your life, we hope to come up with a solution that keeps you and your dreams safe, because that is exactly what earthquakes take away. Even after the major tremor, what hurts more are the aftershocks that follow. These are produced by the stress changes that the earthquake caused.
This project gives us a second chance at saving lives by using artificial intelligence to determine where the next tremor is likely to be, so that you can move and get to a safer place. Methods like the Coulomb stress criterion are currently used to explain the spatial distribution of aftershocks, but as science and technology advance, we hope to introduce machine-learning models that can find undiscovered patterns, helping to predict the probable locations of aftershocks.
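The idea can be illustrated with a minimal sketch: a tiny gradient-descent model that maps stress-change features at grid cells to an aftershock/no-aftershock label. Everything here (feature count, synthetic data, the single-layer model) is an illustrative assumption; the DeVries et al. paper trains a deeper feedforward network on computed stress tensors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 6 stress-change components per grid cell,
# with labels generated from a hidden linear rule.
n_cells = 1000
X = rng.normal(size=(n_cells, 6))
true_w = np.array([1.5, -0.5, 0.8, 0.0, 0.3, -1.0])
y = (X @ true_w > 0).astype(float)  # 1 = aftershock occurred in this cell

# Logistic regression trained by plain gradient descent on the log loss.
w = np.zeros(6)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / n_cells

pred = (X @ w > 0)
accuracy = (pred == y).mean()
```

A real pipeline would replace the synthetic features with Coulomb-style stress changes computed from the mainshock slip model, and the single linear layer with a deeper network.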
Once we have our predictions, it is very important to display them clearly so that Uncle Bob can understand them and move himself to safety. We have created a React web app just for this purpose, so that the predictions are easily accessible to people and can move them out of harm’s way, thereby reducing the damage to both people and resources and making this world a better place.
Federated Learning with IoT
Federated Learning On Distributed Private Health Data On Smartphones
The smartphones people carry hold some of the most valuable, but also most private, data. Since using this data promises to be one of the best ways to fight back against COVID-19, getting access to it is highly desirable.
By using a Federated Learning approach with PySyft it is possible to learn from the private data right on the smartphone, with the data never leaving the device.
A short YouTube video has been created for this project.
There is also a Devpost project.
Approach
- Since there is no private dataset with health data recorded during a virus outbreak, a simulated dataset has been used as a proof of concept.
- The dataset contains the health status of each person (e.g. temperature, movement, … ) for several days during the virus outbreak.
- Using PySyft workers, the data for each individual person is distributed to a worker (a virtual smartphone). Therefore, each worker only knows its own health status.
- A simple feedforward network is sent to each worker during the training process. The learning takes place directly on the virtual smartphone itself, and an updated network is returned to the host. This way the data never leaves the smartphone and stays protected.
- In this notebook, the target variable to predict is the total number of infected people.
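The federated loop described above can be sketched without any framework. Each "virtual smartphone" below holds one person's synthetic health record, trains a local copy of a linear model, and only the updated weights travel back to the host for averaging. The real project uses PySyft workers; all names and data here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

n_people, n_days = 20, 30
true_w = np.array([0.7, 0.3])  # hidden rule: [temperature, movement] -> risk

# One private dataset per person; this data never leaves its "device".
workers = []
for _ in range(n_people):
    X = rng.normal(size=(n_days, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=n_days)
    workers.append((X, y))

global_w = np.zeros(2)
for _round in range(20):                  # federated rounds
    updates = []
    for X, y in workers:                  # local training on each device
        w = global_w.copy()
        for _ in range(5):                # a few local gradient steps
            grad = 2 * X.T @ (X @ w - y) / n_days
            w -= 0.1 * grad
        updates.append(w)                 # only weights return to the host
    global_w = np.mean(updates, axis=0)   # host averages the updates
```

The host ends up with a model close to the hidden rule, even though it never saw a single raw data point.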
Conclusion
It is possible to make use of people's private health data without lowering the protection of that data.
The notebook can be seen as a proof of concept that learning on distributed individual health data can drive a learning process in a neural network.
Limitations (With Possible Solutions)
Limitation:
Simulated data without a connection to the real world has been used.
Solution:
Exchanging the dataset and adjusting the code should be fairly easy.
Limitation:
A trusted App with permission to store private health data on the device is needed on many smartphones.
Solution:
Probably another team created a similar app during the hackathon, or an existing one is already out there. Merging this approach with such an app would be necessary.
Limitation:
The hardware available for this notebook was very limited.
Solution:
Running the simulation and training process on a much larger scale should indicate if the approach is promising.
Sentiment Analysis
Introduction
This project aims to classify the emotion on a person’s face into one of seven categories, using deep convolutional neural networks. The model is trained on the FER-2013 dataset, which was published at the International Conference on Machine Learning (ICML). This dataset consists of 35,887 grayscale, 48x48-pixel face images with seven emotions: angry, disgusted, fearful, happy, neutral, sad, and surprised.
Working
- This implementation by default detects emotions on all faces in the webcam feed. With a simple 4-layer CNN, the test accuracy reached 63.2% in 50 epochs.
- First, the .xml file is used to detect faces in each frame of the webcam feed.
- The region of the image containing the face is resized to 48x48 and passed as input to the CNN.
- The network outputs a list of softmax scores for the seven classes of emotion.
- The emotion with the maximum score is displayed on the screen.
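The last two steps (softmax over the seven classes, then picking the maximum) can be sketched as follows. The logits here stand in for the CNN's raw 7-way output; the helper names are illustrative:

```python
import numpy as np

EMOTIONS = ["angry", "disgusted", "fearful", "happy", "neutral", "sad", "surprised"]

def softmax(logits):
    """Numerically stable softmax over the seven emotion logits."""
    z = logits - np.max(logits)   # shift for stability; result is unchanged
    e = np.exp(z)
    return e / e.sum()

def predict_emotion(logits):
    """Map the network's raw output scores to the label shown on screen."""
    scores = softmax(np.asarray(logits, dtype=float))
    return EMOTIONS[int(np.argmax(scores))]
```

For example, `predict_emotion([0.1, 0.0, 0.2, 2.5, 0.3, 0.1, 0.0])` returns `"happy"`, since the fourth logit dominates.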