Hi, I'm Evan Unmann. I'm a software engineer and I like to make things. I'm curious about how things work, and once I understand them, how to put them to use. Below is a brief summary of my education, career, and interests.
I attended Penn State University from Fall 2012 to Spring 2016 and graduated with a B.S. in Computer Engineering. My education spanned both software and hardware. My main interest was high performance computing: understanding hardware well enough to design algorithms and code that fully utilize it.
I attended UNC Charlotte from Fall 2019 to Spring 2021 and graduated with an M.S. in Computer Science. My education focused on ML/AI. I also wrote a thesis: Performance Analytics of Graph Algorithms Using Intel Optane DC Persistent Memory. In short, this was a study of how to use Intel's new persistent memory technology to speed up graph algorithms. The basic idea: if you can keep the entire graph in memory by utilizing the denser persistent memory, you can speed up the algorithms. I absolutely loved working on this thesis. I spent countless hours tweaking, testing, reading other papers, and writing code to maximize performance. The freedom to explore and learn was amazing.
I also published research with researchers at the Norwegian University of Science and Technology: Microfluidic droplet detection via region-based and single-pass convolutional neural networks with comparison to conventional image analysis methodologies. My contribution was writing code to train and test the neural networks. I also wrote a program to generate test data of microfluidic droplets. The inputs were the number of droplets, the range of droplet sizes, and the range of droplet speeds. The outputs were a video simulating the droplets moving through the microfluidic channel and a file with the positions of the droplets in every frame of the video. I implemented simple collision physics to make the droplets bounce off the walls and each other. With the video and positions, we could test how well the neural networks detected the droplets. This provided a quick and easy way to evaluate the models without manually annotating videos of real data. For completeness, we also tested the models on real videos of droplets, which required more manual effort.
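To give a flavor of the generator, here is a minimal sketch of the idea. This is a hypothetical re-creation, not the original code: the channel dimensions, data layout, and physics are assumptions, and the real tool also rendered video frames.

```python
import random

# Toy synthetic-droplet generator (hypothetical sketch, not the original code).
# Droplets drift along a channel at a fixed speed, reflect off the channel
# walls, and exchange velocities on contact. Each frame's droplet positions
# are recorded, analogous to the ground-truth annotation file.

CHANNEL_W, CHANNEL_H = 640, 120  # channel dimensions in pixels (assumed)

def make_droplets(n, size_range, speed_range, seed=0):
    rng = random.Random(seed)
    droplets = []
    for _ in range(n):
        r = rng.uniform(*size_range)
        droplets.append({
            "x": rng.uniform(r, CHANNEL_W - r),
            "y": rng.uniform(r, CHANNEL_H - r),
            "vx": rng.uniform(*speed_range),   # flow direction
            "vy": rng.uniform(-1.0, 1.0),      # slight lateral drift
            "r": r,
        })
    return droplets

def step(droplets):
    for d in droplets:
        d["x"] += d["vx"]
        d["y"] += d["vy"]
        # Bounce off the top/bottom walls of the channel.
        if d["y"] - d["r"] < 0 or d["y"] + d["r"] > CHANNEL_H:
            d["vy"] = -d["vy"]
            d["y"] = min(max(d["y"], d["r"]), CHANNEL_H - d["r"])
        # Wrap horizontally so droplets keep flowing through the frame.
        if d["x"] - d["r"] > CHANNEL_W:
            d["x"] = -d["r"]
    # Simple elastic response: swap velocities when two droplets overlap.
    for i in range(len(droplets)):
        for j in range(i + 1, len(droplets)):
            a, b = droplets[i], droplets[j]
            dist2 = (a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2
            if dist2 < (a["r"] + b["r"]) ** 2:
                a["vx"], b["vx"] = b["vx"], a["vx"]
                a["vy"], b["vy"] = b["vy"], a["vy"]

def simulate(n_frames, droplets):
    # Returns per-frame (x, y, radius) tuples for every droplet.
    frames = []
    for _ in range(n_frames):
        step(droplets)
        frames.append([(d["x"], d["y"], d["r"]) for d in droplets])
    return frames
```

Pairing each rendered frame with its `(x, y, r)` list is what makes the data free to annotate: the labels fall out of the simulation instead of a human.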
After graduating from Penn State, I started working at Siemens on a product called Active Workspace. In short, it is a web application that allows users to view CAD files in a browser. The user can load entire cars, airplanes, or any other assembly. This might not seem like much, but these CAD files are far larger than anything you would see in a video game, for example. They include details about the measurements of each part, the materials, how the parts fit together, and so on. So the idea of loading all of this data in a browser was, at the time, a big deal.
I was responsible for the front and middle tiers of the stack. This included writing JavaScript code on the client to manage the state of the viewer, and Java code on the server to handle requests and view sessions from the client. Clients needed to be intelligently routed to sessions on the C++ backend servers, which were responsible for loading the CAD files. So I spent time making sure each client was routed to the correct backend server, new sessions were created on the optimal instance, and request latency stayed low.
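The routing idea can be sketched in a few lines. This is a toy illustration of the concept, not Siemens code: the class, names, and least-loaded policy are my assumptions for the example.

```python
# Toy sketch of session routing (hypothetical, not the actual middle tier):
# a returning client must reach the backend instance that already holds its
# session, because that instance has the CAD data loaded in memory; a new
# session is placed on the least-loaded instance.

class SessionRouter:
    def __init__(self, backends):
        self.load = {b: 0 for b in backends}  # open sessions per backend
        self.sessions = {}                    # session id -> backend

    def route(self, session_id):
        # Sticky routing for existing sessions.
        if session_id in self.sessions:
            return self.sessions[session_id]
        # Least-loaded placement for new sessions.
        backend = min(self.load, key=self.load.get)
        self.sessions[session_id] = backend
        self.load[backend] += 1
        return backend
```

The real system had more to weigh than open-session counts, but the core tension is the same: stickiness for loaded data versus balance for new work.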
One of my favorite accomplishments in this role was building a JavaScript unit test framework, named JUF, that included a UI. At the time, there were no good frameworks for testing the 3D viewer. There was an internal framework, but it was written in Java and compiled to JavaScript using GWT. GWT was a dead project, and the idea of having Java on the frontend sounded like a long-term disaster. I saw the opportunity to build something better, natively in JavaScript.
What made JUF work for us was the UI component. It would dynamically build the UI based on the tests written. Each test had its own sandbox to play in (literally just a <div>, haha). The idea was that we could load models, add them to the div, and watch the tests execute in real time. You could even load as many models as you wanted. Additionally, I built in a way to run a test multiple times with different configurations, which was useful for loading the same model with different file types or rendering techniques. There was also a terminal-like textbox that output the logs specific to each test. You could set tests to a debug mode that slowed down execution and kept the model open after the test finished for inspection. It could also send the results of a run to a server for storage. All of these features were a huge improvement over the previous framework.
I was given full creative liberty to build this framework. The opportunity was tremendously fun and I hope it is still useful to the team.
I left Siemens to pursue my Master's degree. After graduating, I started working at Amazon. I'm in the CDO part of Amazon, also known as not-AWS. My team is responsible for the technology behind non-inventory material management standard operations and procedures. In short, we build tools for fulfillment centers to count, record, order, and forecast usage for items like boxes, tape, bubble wrap, etc. These tools include websites, mobile apps for barcode and RFID hand scanners, alerts sent to users in fulfillment centers to perform actions, and more.
When I first started at Amazon, I was responsible for maintaining an internal website and a few other services and websites. However, this one website was clearly the most important. It was a few years old and carried a tremendous amount of tech debt, but it was mostly functional and served its purpose. We were told to just keep the lights on, but that didn't quite sit right with me. The team quickly aligned that making this website better and expanding it globally was the right move.
We implemented a full migration from internal legacy systems and services to fully native AWS. We did this with no downtime and only minor changes for users: they needed a new URL to access the site (we kept the old one just in case we needed to revert). Overall, this allowed the website to expand to other continents and regions. This was a huge win for the team and for Amazon. Fulfillment centers in developing countries were able to use the website to manage their operations with a more formal process and start up quickly and efficiently.
My particular role in that effort was to figure out the best way to migrate, generate a plan, and help the team execute it. I dove into AWS best practices to make sure we were minimizing tech debt and starting with a strong foundation. Much of the team was inexperienced with AWS (as was I), so I taught myself, and then the team, how to use it. I developed code patterns and standards to ensure we were writing code that is easy to maintain and extend.
During the execution, I was responsible for the backend data processing services that integrated with third-party systems. I built a reliable and scalable architecture to ingest and process their data periodically. These services were critical to keeping our internal systems in sync with the third-party systems.
The general strategy was to lift-and-shift as much as we could and rewrite the rest. The effort took close to a year to complete. Afterwards, we needed to refocus on our customers and enhance their experience. At this point, I was working on an on-box advertisement automation feature that allows users to seamlessly transition between ordering standard and promotional boxes. On-box advertisements are the designs on the outside of the box, typically promoting movies, shows, or Prime Days. The original box and the promotional box are technically different items, and fulfillment centers need to order the proper amounts during transitions to ensure adherence to the advertisement contract while also mitigating stockouts. Overall, this feature was a huge success and is now used globally. Fulfillment centers can now easily track dozens of promotions without manual intervention in ordering.
After that, I led a project to develop a new service that provides real-time alerts to fulfillment center employees, ensuring compliance with standard operations and procedures. The general idea was to increase awareness of process misses and directly alert the employees responsible for the process. The service integrates with multiple other internal services using an event-driven architecture.
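The event-driven pattern behind the service can be illustrated with a toy in-process bus. This is not the actual Amazon service (whose internals I can't reproduce here), and the event names and payload fields are made up for the example: upstream systems publish process-miss events, and subscribers decide who to alert.

```python
from collections import defaultdict

# Toy event bus illustrating the event-driven pattern (hypothetical sketch).
# Producers publish events by type; every handler subscribed to that type
# receives the payload. In a real deployment this role is played by a
# managed broker or queue rather than an in-process dictionary.

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

alerts = []

def alert_on_miss(event):
    # In the real service this would notify the responsible employee.
    alerts.append(f"ALERT {event['site']}: {event['process']} missed")

bus = EventBus()
bus.subscribe("process_miss", alert_on_miss)
bus.publish("process_miss", {"site": "FC-01", "process": "cycle count"})
```

The appeal of the pattern is loose coupling: upstream systems only need to emit events, and new alerting rules are added by subscribing new handlers, without touching the producers.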
As with many other software engineers, video games are a huge part of my life. They were a big part of why I got into programming. I've made a few games, a handful of mobile apps and PC games, but nothing I would consider a finished product. I've always been interested in making video games, and one day I hope to make a game I can be proud of.
Another interest of mine is high performance computing. For some reason, the idea of efficiently utilizing a computer excites me. I've spent a lot of time learning how to optimize code, design algorithms, and appreciate why efficiency matters. It is critical in programs like video game engines, servers, and FinTech algorithmic trading systems. At some point, I would like to work on a project that requires high performance computing.