Machine Integration
Lab communication system for interaction between database website and individual tools
The goal of the Machine Integration project is to develop an extensible interface that can be used to communicate between the primary web application and each individual machine in the Hacker Fab.
This section explains some basics of web application design that will make the current machine integration system easier to understand.
An API is an application programming interface. For the purposes of this project, it is the abstraction of the interface between the server and client devices. The client devices can invoke GET, POST, PUT, and other HTTP (Hypertext Transfer Protocol) methods on endpoints at a specific URL. We can send test API requests easily using Postman. For those more familiar with Linux networking, a curl command works as well.
Postman is a user-friendly way to send API requests to test endpoints. Note that in production, another program will call the API endpoints; Postman is just an easy way to test them by hand. If you are familiar with the Linux curl command, you can use that instead.
Download Postman from their website (a quick Google search will find it). Below is a basic overview of the user interface.
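If you prefer scripting to a GUI, an equivalent test request can be sent with a few lines of Python using the requests library. This is a generic sketch; the URL below is a placeholder, not the real deployment:

```python
import requests

# Placeholder base URL; replace with the actual API Gateway deployment URL.
BASE_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod"

# Send a test GET request, which is the same thing Postman does under the hood.
response = requests.get(f"{BASE_URL}/jobs")
print(response.status_code)  # 200 means the endpoint answered successfully
print(response.json())       # the response body, decoded from JSON
```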
A database is a structured collection of data that enables efficient storage, retrieval, and management of information.
You can think of a single table within the database as an Excel spreadsheet; the entire database is like an Excel workbook.
Each table has multiple named columns (shown as the first row in the image below). One of these columns is designated as the primary key. The primary key uniquely identifies a single row, or entry, in the table. The other columns store information about that entry.
The entire structure of the database (the names of all the tables, their columns, and so on) is referred to as the schema.
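For example, one row of a jobs table might look like the following. The column names mirror those used later on this page; the values are illustrative:

```python
# One row ("entry") of an example jobs table, written as a Python dict.
# Each key is a column name; job_id acts as the primary key.
example_row = {
    "job_id": "example-job-001",                     # primary key, unique per row
    "machine": "spincoater",                         # which tool runs this job
    "status": "queued",                              # current state of the entry
    "input_params": '{"rpm": 3000, "seconds": 30}',  # job-specific settings (JSON)
}
```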
Moving on to the details of this project, here is a system-level architecture diagram.
Web Application Frontend & Backend: These are the first two boxes in the architecture diagram above and are how the end user interfaces with the system. The user can create/edit/delete jobs. The details of how the web application frontend and backend work are covered in that section of the documentation (not on this page).
Jobs Queue on AWS: This is where the list of jobs that the machines need to run is stored. The jobs queue on AWS exposes an API that is accessible to both the web application (to take in new jobs) and the Raspberry Pi (to complete jobs).
Raspberry Pi mini computer: The Raspberry Pi is physically located near the tool being controlled (the spin coater to start). It connects to the device using either USB or the Pi's I/O pins. The Raspberry Pi pulls jobs from the jobs queue and runs them on the device. A keyboard, mouse, and portable monitor can also be connected to the Raspberry Pi to monitor the status of the machine (these components are optional but provide additional redundancy).
Device to be controlled: Receives control signals from the Raspberry Pi. For this version of the project, we will be controlling the spin coater.
The primary way jobs will be created is through the Hacker Fab website, which makes the appropriate API calls to manage the jobs database. The implementation of this is out of the initial scope for my project; for testing purposes this semester, I will use Postman to send API calls to the AWS jobs database.
Users can also create new jobs directly via the Raspberry Pi UI. This provides redundancy in case the machine needs to be controlled without the database.
There will be an option to run the job immediately or to send it to the database to be added to the queue.
The AWS server maintains a centralized queue of jobs.
The Raspberry Pi fetches and dequeues jobs by querying the API. Only jobs for the specific machine are dequeued.
The Raspberry Pi receives the job and displays it on the connected UI.
Upon user interaction (manual start) or automatically (if enabled), the job is run on the spin coater (or another device in the future).
After execution, the Raspberry Pi sends the success/failure status to the AWS server, updating the job's record in the database.
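Taken together, the Raspberry Pi side of this flow reduces to a simple poll, run, report loop. The sketch below assumes the endpoint names described later on this page (jobs/next and job_completion) and a machine query parameter; error handling and the GUI are omitted:

```python
import time
import requests

BASE_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod"  # placeholder
MACHINE = "spincoater"

def run_on_device(job):
    """Drive the physical tool; stubbed out here. Returns True on success."""
    print("Running job:", job)
    return True

while True:
    # 1. Ask the queue for the next job assigned to this machine.
    resp = requests.get(f"{BASE_URL}/jobs/next", params={"machine": MACHINE})
    if resp.status_code == 200 and resp.json():
        job = resp.json()
        success = run_on_device(job)  # 2. Run the job on the tool.
        # 3. Report the outcome so the job's database record is updated.
        requests.post(f"{BASE_URL}/job_completion",
                      json={"job_id": job["job_id"],
                            "status": "success" if success else "failure"})
    time.sleep(5)  # poll again after a short delay
```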
This will significantly automate the process of chip development. Our goal is to make basic chip tape-outs as routine as 3D printing.
Answer: KISS (keep it simple, stupid). Adding an additional communication step between a central Raspberry Pi and peripheral Arduinos would significantly increase the complexity of the system in multiple ways:
Initial implementation time is significantly longer for an Arduino solution. The additional communication link between the Raspberry Pi and the Arduinos is not trivial, especially considering issues with the CMU Wi-Fi.
The user experience is significantly worse for the Arduino solution. The user standing at the tool would be completely reliant on the automation system. By contrast, with my proposal, the user still has direct control of the tool via the on-screen GUI at the machine.
A Raspberry Pi for each machine is a significantly more robust solution, while still offering all of the potential for automation:
Less complexity (no additional wireless link)
Tools can still easily be operated manually (using the on-screen GUI) in case of issues with the website/database.
I will now go through the jobs queue and the Raspberry Pi mini computer in more detail, as these are the key innovations of the machine integration framework.
The jobs queue is where jobs requested by the web application are stored before they are fetched by the appropriate tool's Raspberry Pi.
Other programs (such as the web application and the RPi) interface with the jobs queue via API calls, which are HTTP requests.
API Gateway endpoints from AWS for the web application:
AWS Configuration for jobs queue:
This is the jobs queue that the tools (right now only the spincoater) pull from. Jobs are enqueued from the primary web application.
The API gateway routes are configured as follows:
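The routes referenced elsewhere on this page are summarized below; the HTTP methods shown are my best reading of how each route is used:
POST /jobs: enqueue a new job (called by the web application)
GET /jobs/next: dequeue the next job for a given machine (called by the tool's Raspberry Pi)
POST /job_completion: report the success/failure of a finished job
GET /generate_upload_url: get a presigned S3 URL for uploading a file
GET /generate_download_url: get a presigned S3 URL for downloading a file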
The DynamoDB table is configured with default settings; job_id is the primary key.
I am currently using a free-tier AWS account from Hacker Fab, which takes care of all licensing requirements for this project.
The majority of the logic is in the Lambda function. The Lambda function is the code that runs on AWS servers and processes API requests to the database. The Python code is as follows (up to date as of 4/10/2025):
This code scores 9.31/10 on pylint (Google code guidelines). The flagged issues are either intentional and described in comments, or are due to import statements that cannot be resolved locally (since this code runs on AWS, not locally).
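As a rough illustration of how a Lambda handler for this kind of queue can be structured, here is a heavily simplified sketch (the table name, route handling, and response shapes are assumptions, not the production code):

```python
import json
import boto3

# DynamoDB table holding the jobs queue; job_id is the primary key.
table = boto3.resource("dynamodb").Table("jobs")

def lambda_handler(event, context):
    """Dispatch API Gateway requests against the jobs table (simplified)."""
    route = event.get("resource", "")
    if route == "/jobs":
        # Enqueue a new job sent by the web application.
        job = json.loads(event["body"])
        table.put_item(Item=job)
        return {"statusCode": 200, "body": json.dumps({"job_id": job["job_id"]})}
    if route == "/jobs/next":
        # Return a queued job for the requesting machine, if any.
        machine = event["queryStringParameters"]["machine"]
        items = table.scan()["Items"]
        queued = [i for i in items
                  if i["machine"] == machine and i["status"] == "queued"]
        return {"statusCode": 200,
                "body": json.dumps(queued[0] if queued else {})}
    return {"statusCode": 404, "body": json.dumps({"error": "unknown route"})}
```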
Other AWS resources (File Transfer):
S3: S3 is Amazon's blob storage service. When a file is uploaded via the link from generate_upload_url, it is stored here. It is set up with default configurations.
IAM management policy: this policy must be added to allow the Lambda script to access the S3 data. It is needed to generate the upload and download URLs.
To add the policy, go to IAM from the AWS console, then Roles, then Create role.
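For reference, presigned upload and download URLs are generated with boto3 roughly as follows. This is the standard boto3 pattern, not a copy of the deployed code, and the bucket name is a placeholder:

```python
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "hackerfab-job-files"  # placeholder bucket name

def generate_upload_url():
    """Return a presigned PUT URL plus the key that identifies the file."""
    key = f"uploads/{uuid.uuid4()}"
    url = s3.generate_presigned_url(
        "put_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=3600)
    return url, key

def generate_download_url(key):
    """Return a presigned GET URL for a previously uploaded file."""
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=3600)
```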
Here is an automated test script that exercises all endpoints except /generate_download_url and /generate_upload_url. Run this script locally on your PC; it should pass if the AWS database is configured correctly. You may need to change the base URL.
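A minimal version of such a test could look like this (the base URL is a placeholder, and the exact request/response shapes are assumptions consistent with the sketches above):

```python
import requests

BASE_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod"  # change me

# 1. Enqueue a test job.
job = {"job_id": "test-job-1", "machine": "spincoater",
       "status": "queued", "input_params": "{}"}
assert requests.post(f"{BASE_URL}/jobs", json=job).status_code == 200

# 2. Fetch it back the way the spincoater's Raspberry Pi would.
fetched = requests.get(f"{BASE_URL}/jobs/next",
                       params={"machine": "spincoater"}).json()
assert fetched["job_id"] == "test-job-1"

# 3. Report completion and confirm the server accepts the update.
done = requests.post(f"{BASE_URL}/job_completion",
                     json={"job_id": "test-job-1", "status": "success"})
assert done.status_code == 200
print("All endpoint checks passed.")
```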
The following Python script tests the functionality of the S3 file upload and download system. Note that you may need to change the base URL to the current AWS instance.
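A sketch of that upload/download round trip (placeholder base URL again; the response field names upload_url, s3_key, and download_url are assumptions):

```python
import requests

BASE_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod"  # change me

# Ask the server for a presigned upload URL and its matching S3 key.
up = requests.get(f"{BASE_URL}/generate_upload_url").json()

# Upload a small test payload directly to S3 via the presigned URL.
requests.put(up["upload_url"], data=b"hello hacker fab")

# Ask for a download URL for the same key and read the file back.
down = requests.get(f"{BASE_URL}/generate_download_url",
                    params={"key": up["s3_key"]}).json()
assert requests.get(down["download_url"]).content == b"hello hacker fab"
print("S3 upload/download round trip passed.")
```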
The Raspberry Pi pulls job requests from the AWS jobs queue.
Raspberry Pi 5 with case and heatsink. There are two ways the RPi can connect to the tools it controls. The first method is to use jumper wires to connect GPIO pins to the external device (see the attached image). The device can also interface with the Raspberry Pi over one of its USB ports; this is the approach used for the spincoater.
To control the Raspberry Pi 5 GPIO ports, use the gpiod Python package (see the screenshot below). It is critical to use gpiochip4. If you are interfacing with the device over USB, the exact setup instructions depend on the device you are controlling; I will detail how to interface with the spincoater later in this guide.
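A minimal gpiod example in the spirit of that screenshot (this assumes the libgpiod v1 Python bindings; GPIO 17 is used here because it drives the relay described later):

```python
import time
import gpiod

# On the Raspberry Pi 5, the 40-pin GPIO header is exposed via gpiochip4.
chip = gpiod.Chip("gpiochip4")
line = chip.get_line(17)  # GPIO 17 (the relay control pin in this setup)
line.request(consumer="lab_com", type=gpiod.LINE_REQ_DIR_OUT)

line.set_value(1)  # drive the pin high (relay outlets on)
time.sleep(2)
line.set_value(0)  # drive the pin low (relay outlets off)
line.release()
```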
Here are the reasons why we are using UART over USB instead of the GPIO TX and RX pins on the Raspberry Pi and Arduino. Keep this in mind if you are integrating future devices other than the spincoater.
The Raspberry Pi runs on 3.3 V while the Arduino runs on 5 V. This presents a problem for the signal from the Arduino to the RPi: we have to use either a level shifter or a voltage divider. I didn't have a level shifter on hand, so I tried building a voltage divider using a 20k resistor (on the bottom) and a 10k resistor (on the top). This divided the voltage as needed, but it's possible this was part of the issue. Here is an image of the basic voltage divider I tried.
I also attempted to bit-bang a UART connection between the RPi and the Arduino. This was also unsuccessful. After a bit of research, it looks like this is because Raspberry Pi OS is not an RTOS (real-time operating system), so the timing is not precise enough to bit-bang a UART connection. Keep this in mind before trying this approach for a new tool.
At this point I was pretty stumped, so I searched around the internet a bit. I then realized it would be much, much easier to simply use UART over the USB connection between the two devices. The reasons I settled on this approach are as follows:
I was able to get UART over USB working from the RPi to the Arduino, but I was not able to get it working using the GPIO UART module.
This solution is more scalable: we can have multiple Arduinos that each control an individual device plugged into one Raspberry Pi, since the RPi has only one UART TX/RX pair but multiple USB ports.
We can leverage existing consumer-grade USB extension cables and hubs.
The USB cable is physically sturdier than multiple thin GPIO jumper wires. Also, there is only one cable to deal with.
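On the Pi side, UART over USB needs nothing more than the pyserial package. A minimal link test might look like this (the device path and the PING command are illustrative; the Arduino typically enumerates as /dev/ttyACM0):

```python
import serial

# Open the Arduino's USB serial port; check `ls /dev/tty*` for the exact path.
with serial.Serial("/dev/ttyACM0", 9600, timeout=2) as ser:
    ser.write(b"PING\n")    # send a test command to the Arduino
    reply = ser.readline()  # read one line of the Arduino's response
    print("Arduino replied:", reply.decode(errors="replace").strip())
```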
This is the code running on the Raspberry Pi. I'll first show it from the user's perspective:
When the machine is IDLE, the monitor connected to the RPi simply displays a message that it is waiting for the next job.
If autorun is not turned on, you'll need to manually approve the job.
Next, the job will be run.
Finally, the user can type in the final job status (this will vary based on the machine being automated).
The user can also create a job from the RPi GUI itself. If the RPi is connected to the internet, it will upload the job details to the database; otherwise, the job runs completely offline. This is an intentional redundancy feature.
Fetches jobs from AWS jobs queue
Displays the currently running job on the GUI. Also allows users to control whether jobs run automatically or require manual confirmation.
Runs the job on the device
Sends completion details back to the AWS jobs queue.
I will now detail three important pieces of the code. These three parts should be the only parts of the code that you need to modify when integrating a new tool.
Note that the three sections you need to edit to integrate a new tool each begin with the comment:
The first portion is an initialization block. In this case, it just opens the UART port used to communicate with the Arduino. In general, put any code here that needs to run once when the Raspberry Pi starts up.
The second portion sends the command to the Arduino. Note that it is a self-contained method for the new device: to add the device to the code, all we needed to do was add the peripheral config and write this method.
This method also reads out the debugging messages from the Arduino. Once the spincoater is fully working, these could be messages indicating that the spincoater is malfunctioning.
Note that I show two methods: one for run_led and another for the spincoater. In the next section, you'll select which one to run when a job is received.
The last section that needs to be reconfigured when adding a new device sets parameters for the GUI and the main action of the program.
The editor specifies a default job parameter template (needed to construct the GUI automatically), the name of the machine (so it knows which jobs to grab from the server), and the function to run when a new job is fetched.
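Condensed, those three sections might look roughly like this (the names, parameter fields, and end-of-job marker are illustrative, not the verbatim source):

```python
import json
import serial

# --- Section 1: one-time initialization (runs once at startup) ---
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=5)

# --- Section 2: self-contained method that drives the new device ---
def run_spincoater(params):
    """Send the job parameters to the Arduino and relay its debug output."""
    arduino.write((json.dumps(params) + "\n").encode())
    while True:
        line = arduino.readline().decode(errors="replace").strip()
        if not line or line == "DONE":  # illustrative end-of-job marker
            break
        print("arduino:", line)        # debug / malfunction messages

# --- Section 3: GUI template, machine name, and job handler selection ---
DEFAULT_JOB_PARAMS = {"rpm": 3000, "seconds": 30}  # builds the GUI fields
MACHINE_NAME = "spincoater"   # which jobs to pull from the server
JOB_HANDLER = run_spincoater  # called whenever a new job is fetched
```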
Whoever extends this software to a new tool will need to write firmware for the Arduino or other microcontroller that controls the final device. For example, the spincoater has an internal Arduino that acts as its microcontroller; the RPi talks to this microcontroller over UART.
Here is the code that will run on the Arduino for the spincoater integration. Notice that only a section was added for the USB UART interface.
The spin coater's motor must be powered by a 120 V AC power supply. The spin coater also needs to be connected to an air compressor, which is also AC powered.
The AC power to the air compressor and the spin coater is connected to the "normally off" outlets of the relay switch. GPIO pin 17 of the Raspberry Pi is connected to the positive terminal of the relay's DC input, and a ground pin of the Raspberry Pi GPIO header is connected to the negative terminal.
In this section, I will explain how the lab communication system was used to successfully connect the stepper to the database website with minimal changes to the system architecture.
There are two main differences between the spincoater and stepper when considering their integration with the lab com system.
The stepper solution already has a control PC. This eliminates the need for the RPI, as we can just run the lab_com software directly on the control PC.
The stepper needs an image sent from the website describing the pattern to expose. Although it is theoretically possible to encode this within the existing input_params JSON, that is not a robust solution. We will need to develop a file transfer system to transmit images from the website to the stepper. This file transfer system is used solely for images here, but it can carry other types of files when connecting other devices to the lab_com system.
The diagram below shows the overall dataflow for the interaction between the website and the stepper.
The website first adds a job to the job queue for the stepper. For testing purposes, we will use a simple web GUI, as shown in the image below.
After the user hits submit, the image is uploaded to AWS S3 blob storage. AWS S3 is a blob storage service that holds files as part of a web application.
As a result, the jobs queue database will then have the following entry added. It is shown here as JSON, where the keys are the column names and the values are the values within this specific row of the table.
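A representative entry is shown below. The image_s3_key value is the real one quoted later on this page; the other values are placeholders:

```json
{
  "job_id": "example-stepper-job-1",
  "machine": "stepper",
  "status": "queued",
  "input_params": "{\"image_s3_key\": \"uploads/129f420c-ac04-46af-a935-a1ded153de1c\"}"
}
```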
Note that the table schema (names of the columns) did not change for the stepper. It is the same as the table schema for the spincoater. All jobs are stored in the same table. We are able to differentiate between jobs for each type of machine by the "machine" column (as seen above).
The file containing the pattern to be imaged is not stored directly in the jobs queue database; it is instead stored in AWS S3. Within the jobs queue table, the input parameters store a key to access the specific file uploaded for this job. For the job above, the key is: "image_s3_key": "uploads/129f420c-ac04-46af-a935-a1ded153de1c"
The job is now stored in the queue and ready to be fetched and run by the stepper.
At this point, the stepper has all of the data needed to complete the patterning job. The patterning job is run, and the job_completion endpoint is called on success.
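On the stepper's control PC, that completion report is just another HTTP POST, along these lines (placeholder URL and job_id; the field names follow the earlier sketches):

```python
import requests

BASE_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod"  # placeholder

# Report a finished patterning job back to the jobs queue.
requests.post(f"{BASE_URL}/job_completion",
              json={"job_id": "example-stepper-job-1", "status": "success"})
```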
This system was designed from the ground up, at both the macro and micro level, for robust operation.
The main high-level design principle that ensures robust operation is redundancy. The lab_com system for this semester is not intended to operate fully without human intervention. Although the infrastructure is now in place for unattended operation, we will keep a human in the loop for the short to medium term. For example, the current spin coater requires a human to place the chip on top and line it up before the spin can begin. So, the current system includes a GUI attached to the RPi that allows the user of the spincoater to approve or deny jobs sent from the website. Furthermore, the GUI also allows the user to manually start a local job in case the lab_com system goes completely offline. Last, the physical buttons are still present on the spincoater and will override any automated controls.
A related high-level design principle employed to increase reliability is the KISS principle: keep it simple, stupid. In practice, this has meant using the highest possible level of abstraction for hardware and software tools. This not only reduces design complexity substantially, but it also improves reliability, since we are using tools and systems that have already been validated by others. The main example is using HTTP API requests instead of opening raw sockets for network communication. HTTP requests are the foundation of all web-based traffic, and there are countless resources available for developing, debugging, and testing them. The barrier to entry for future students working on this project will also be much lower.
Scored 10/10 on pylint (Google code guidelines).
The essential behavior of the code can also be described in the following state machine:
The code completes the following high-level actions.
The code scores 9.44/10 on pylint (following Google's code guidelines). The remaining warnings are purposeful design decisions and are described in the comments.
In this section, I will give additional comments on how the spincoater was integrated into the lab_com system. Many of the specific implementation details were highlighted in prior sections (see the run_spincoater method shown earlier), but this section fills in a few gaps. This will hopefully guide the design process for integrating additional tools.
The existing control PC does not need to run the entire lab_com_gui software. Carson, the current project lead, was simply able to incorporate a series of API calls to the database into his existing Python code for the stepper GUI. The details of the stepper GUI are out of scope of this project (see that section of the documentation on GitBook). He directly copied the code from the linked file.
See the get_file_upload_url_and_key and upload_file functions for the details on this process. After the upload is complete, the web application gets back an s3_key: a string that uniquely identifies the image that was just uploaded. The web application then POSTs to the jobs endpoint. The request has the machine name field set to stepper, and the input_parameters include the image_s3_key.
The control PC for the stepper polls the jobs queue by repeatedly calling the jobs/next endpoint. Eventually, the JSON object shown above is received. The control PC then downloads the image from AWS S3 storage using the image_s3_key in the input parameters. See the download_file() method for details on how this works. This method can be reused when automating other machines.
On the micro level, the code has been written to catch as many error states as possible. For example, the method for returning data to the server after a job has completed wraps the API request in a try/except block in case it fails. This prevents the application from failing catastrophically and provides a clear error message for the user. These try/except blocks are used around many of the HTTP API calls, as these are the most likely points of failure (e.g. the RPi isn't connected to the internet).
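A sketch of the pattern (the function and field names here are illustrative):

```python
import requests

def report_job_status(base_url, job_id, status):
    """Send the job result to the server without crashing the GUI on failure."""
    try:
        resp = requests.post(f"{base_url}/job_completion",
                             json={"job_id": job_id, "status": status},
                             timeout=10)
        resp.raise_for_status()
    except requests.RequestException as err:
        # Most likely cause: the RPi is not connected to the internet.
        print(f"Could not report job {job_id} to the server: {err}")
```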