Hash Maps

In this project I wanted to study hashing and hash maps by learning how the C++ std::map and std::unordered_map (a tree-based ordered map and a hash table, respectively) work internally.

The web demo shows how an unordered_map would insert key/value pairs into buckets with two different collision resolution techniques.
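
One of the techniques std::unordered_map itself uses is separate chaining, where colliding keys share a bucket's list. Here is a minimal sketch of that idea (the class name, fixed bucket count, and lack of rehashing are illustrative simplifications, not the demo's code):

```cpp
#include <cstddef>
#include <functional>
#include <list>
#include <utility>
#include <vector>

// Minimal separate-chaining sketch: each bucket is a list of key/value
// pairs, so colliding keys simply share a bucket. Illustrative only:
// the bucket count is fixed and there is no rehashing on growth.
template <typename K, typename V>
class ChainedMap {
    std::vector<std::list<std::pair<K, V>>> buckets_{16};  // 16 buckets

    std::size_t bucket_for(const K& key) const {
        return std::hash<K>{}(key) % buckets_.size();
    }

public:
    void insert(const K& key, const V& value) {
        auto& bucket = buckets_[bucket_for(key)];
        for (auto& kv : bucket)
            if (kv.first == key) { kv.second = value; return; }  // overwrite
        bucket.emplace_back(key, value);  // new key joins the chain
    }

    V* find(const K& key) {
        for (auto& kv : buckets_[bucket_for(key)])
            if (kv.first == key) return &kv.second;
        return nullptr;
    }
};
```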

The full C++ implementation of map and unordered_map supports nearly all of the functionality of the C++ standard library counterparts, such as iterators, operator overloads, and insertions/deletions.

LL(1) Parser Generator

I created this project and the LR(1) parser generator to help me better understand the different parsers we were learning about in my compilers class.

LL(1) Parser Generator is a tool that generates a parser for a given grammar. The generated parser is a top-down parser that reads the input from left to right and builds the parse tree from the root down, driven by a parsing table constructed from the grammar's FIRST and FOLLOW sets.
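
As a rough illustration of how such a parser runs (a textbook sketch with my own names, not this project's code), the table-driven loop just matches terminals and expands nonterminals via the table:

```cpp
#include <map>
#include <stack>
#include <string>
#include <utility>
#include <vector>

// Textbook table-driven LL(1) loop. Grammar symbols are strings;
// table[{nonterminal, lookahead}] gives the production to expand.
using Symbol = std::string;
using Production = std::vector<Symbol>;

bool ll1_parse(const std::map<std::pair<Symbol, Symbol>, Production>& table,
               const Symbol& start, std::vector<Symbol> input) {
    input.push_back("$");              // end-of-input marker
    std::stack<Symbol> stk;
    stk.push("$");
    stk.push(start);

    std::size_t pos = 0;
    while (!stk.empty()) {
        Symbol top = stk.top();
        stk.pop();
        if (top == input[pos]) {       // terminal (or $) matches: consume it
            ++pos;
        } else {
            auto it = table.find({top, input[pos]});
            if (it == table.end()) return false;  // no table entry: syntax error
            // Push the production right-to-left so its first symbol ends up on top.
            for (auto rit = it->second.rbegin(); rit != it->second.rend(); ++rit)
                if (*rit != "ε") stk.push(*rit);  // epsilon expands to nothing
        }
    }
    return pos == input.size();
}
```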

LR(1)/LALR(1) Parser Generator

LR(1) and LALR(1) Parser Generator is a tool that generates a parser for a given grammar. The generated parser is a bottom-up parser that reads the input from left to right and builds the parse tree from the leaves up, driven by a parsing table constructed from an LR(1) state machine.

LALR(1) parsers are constructed by merging LR(1) states that share the same core (the same items, ignoring lookaheads). Both are more powerful than LL(1) parsers because they can handle a larger class of grammars.
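
Both kinds of generated parser share the same textbook driver loop; only the ACTION and GOTO tables differ. A rough sketch of that loop (my own names, not the project's code):

```cpp
#include <map>
#include <stack>
#include <string>
#include <utility>
#include <vector>

// Textbook LR driver: all grammar-specific work lives in the ACTION and
// GOTO tables that the generator produces.
enum class Kind { Shift, Reduce, Accept, Error };
struct Action { Kind kind; int target; };          // Shift: next state; Reduce: rule index
struct Rule   { std::string lhs; int rhs_len; };   // enough info to pop and GOTO

bool lr_parse(const std::map<std::pair<int, std::string>, Action>& action,
              const std::map<std::pair<int, std::string>, int>& go,
              const std::vector<Rule>& rules,
              std::vector<std::string> input) {
    input.push_back("$");              // end-of-input marker
    std::stack<int> states;
    states.push(0);                    // start state

    std::size_t pos = 0;
    while (true) {
        auto it = action.find({states.top(), input[pos]});
        if (it == action.end()) return false;      // no action: syntax error
        switch (it->second.kind) {
        case Kind::Shift:
            states.push(it->second.target);        // consume one token
            ++pos;
            break;
        case Kind::Reduce: {
            const Rule& r = rules[it->second.target];
            for (int i = 0; i < r.rhs_len; ++i) states.pop();  // pop the handle
            states.push(go.at({states.top(), r.lhs}));         // GOTO on the LHS
            break;
        }
        case Kind::Accept: return true;
        case Kind::Error:  return false;
        }
    }
}
```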

Regular Expression Engine

In this project I wanted to implement some parsing techniques I learned while creating the LL(1) and LR(1) parser generators, and further study formal language theory.

This C++ implementation uses a hand-crafted recursive descent parser that compiles the given regular expression into an NFA. I use a modified version of Thompson's Construction that creates only a single state for each character and metacharacter, as shown.
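
For reference, the unmodified, textbook Thompson's construction looks roughly like this (a sketch with illustrative names; the modified version described above uses fewer states than these fragments):

```cpp
#include <memory>
#include <vector>

// Textbook Thompson fragments: every fragment has one start state and one
// accept state, and operators compose fragments with epsilon transitions.
struct State {
    int  symbol = -1;                 // -1 means epsilon transitions only
    State* out = nullptr;             // transition taken on `symbol`
    std::vector<State*> eps;          // epsilon transitions
};

struct Frag { State* start; State* accept; };

// All states are owned by this pool so fragments can share them freely.
static std::vector<std::unique_ptr<State>> pool;
static State* make_state() {
    pool.push_back(std::make_unique<State>());
    return pool.back().get();
}

Frag literal(int c) {                 // single character: start --c--> accept
    State* s = make_state(); State* a = make_state();
    s->symbol = c; s->out = a;
    return {s, a};
}

Frag concat(Frag a, Frag b) {         // ab: glue a's accept to b's start
    a.accept->eps.push_back(b.start);
    return {a.start, b.accept};
}

Frag alternate(Frag a, Frag b) {      // a|b: new start/accept, epsilon fan-out/in
    State* s = make_state(); State* f = make_state();
    s->eps = {a.start, b.start};
    a.accept->eps.push_back(f);
    b.accept->eps.push_back(f);
    return {s, f};
}

Frag star(Frag a) {                   // a*: loop back, or skip entirely
    State* s = make_state(); State* f = make_state();
    s->eps = {a.start, f};
    a.accept->eps.push_back(a.start);
    a.accept->eps.push_back(f);
    return {s, f};
}
```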

Matrix

This project is a C++ matrix library that I designed while learning about parallel and I/O-efficient algorithms. I wanted to see how practical cache-aware algorithms are in terms of performance.

The 2D matrix operations are designed to be efficient on the CPU, taking advantage of SIMD instructions along with parallel and cache-aware algorithms. Combined, these gave me a 99% improvement in speed for matrix multiplication on 3000x3000 matrices, and a modest 75% speed boost for transpose on 30000x30000 matrices.
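
As a taste of the cache-aware part, here is a sketch of tiled (blocked) matrix multiplication, where the tile size is chosen so the working set fits in cache (the tile size and names are my own illustrative choices, not the library's):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Tiled multiply of row-major n x n matrices: C += A * B.
// C must be zero-initialized by the caller. The inner loops repeatedly
// touch the same T x T tiles, which stay resident in cache.
void tiled_multiply(const std::vector<double>& A,
                    const std::vector<double>& B,
                    std::vector<double>& C, std::size_t n) {
    const std::size_t T = 64;  // tile edge: tuned so three tiles fit in cache
    for (std::size_t ii = 0; ii < n; ii += T)
        for (std::size_t kk = 0; kk < n; kk += T)
            for (std::size_t jj = 0; jj < n; jj += T)
                for (std::size_t i = ii; i < std::min(ii + T, n); ++i)
                    for (std::size_t k = kk; k < std::min(kk + T, n); ++k) {
                        double a = A[i * n + k];
                        for (std::size_t j = jj; j < std::min(jj + T, n); ++j)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```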

Neural Network

Neural networks are a core concept in machine learning and artificial intelligence. I wanted to dive deep into designing one to fully understand how a neural network learns with gradient descent, how forward and backward propagation work, and the interesting optimization problems that arise in training, such as dynamically adjusting the learning rate with RMSProp.
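
For reference, RMSProp keeps a running average of squared gradients and scales each step by its square root (conventional notation: gradient $g_t$, decay rate $\beta$, learning rate $\eta$, small constant $\epsilon$ for stability):

$$v_t = \beta\, v_{t-1} + (1 - \beta)\, g_t^2, \qquad \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{v_t} + \epsilon}\, g_t$$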

My implementation is highly customizable, with options to perform stochastic or batch gradient descent, apply the softmax function to the output layer, and define custom activation functions. With this I was able to achieve 95% accuracy on the MNIST dataset and 96% accuracy on the Iris dataset.

The online demo visualizes the network trained on the MNIST dataset, allowing the user to draw digits for it to predict.

Fourier Series

I was inspired by this 3blue1brown video to use math to make stunning visuals with a surprisingly simple function $f_{\text{out}}(t) = \sum_n c_n e^{-2\pi i n t}$, where the coefficients $c_n$ are computed from the input function: $c_n = \int_0^1 f_{\text{in}}(t)\, e^{2\pi i n t}\, dt$.

The Fourier series is a way to represent any periodic function as a sum of sines and cosines, and it can approximate non-periodic functions as well. This lets the user draw their own picture for the series to approximate.
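
As a sketch of how the coefficients could be approximated from a drawing sampled at N points (illustrative, not the demo's code; it follows the sign convention above, treating each sample as a point x + iy in the complex plane):

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Approximate the coefficients c_n from N samples of the drawn path over
// one period, via the Riemann sum c_n ≈ (1/N) Σ_k f(k/N) e^{2πin k/N}.
std::vector<std::complex<double>>
fourier_coefficients(const std::vector<std::complex<double>>& samples, int max_n) {
    const double N = static_cast<double>(samples.size());
    const double two_pi = 6.283185307179586;
    std::vector<std::complex<double>> coeffs;
    for (int n = -max_n; n <= max_n; ++n) {
        std::complex<double> c{0.0, 0.0};
        for (std::size_t k = 0; k < samples.size(); ++k) {
            double t = k / N;  // t in [0, 1)
            c += samples[k] * std::exp(std::complex<double>(0.0, two_pi * n * t));
        }
        coeffs.push_back(c / N);  // average the samples
    }
    return coeffs;
}
```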