- Implemented in Flask and run behind Gunicorn and Nginx.
- Extensively documented with Swagger.
- Uses Docker to run isolated Packet Tracer instances in a lightweight manner.
- Uses Celery to manage the lifecycle of the Docker containers.
How does it work?
The internal API uses Docker to create Packet Tracer instances. However, requests never interact with Docker directly: every Docker operation goes through a Celery task.
Each Packet Tracer instance runs isolated in its own Docker container. Instead of creating a new instance/container for each PT Anywhere session, containers are reused as much as possible.
To make this possible, we distinguish between Instance resources and Allocation resources. An Instance resource maps to a Docker container, while an Allocation resource represents the exclusive use of one of these containers during a certain period.
To reduce CPU consumption, containers that are not in use (i.e., not allocated to any session) are paused. When a new allocation is requested and an instance is available, the internal API unpauses it and loads the initial file, removing the data from any previous session. All containers are created from the same image and mount a data volume container that holds Packet Tracer's installation and configuration files.
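The pause-and-reuse scheme above can be sketched as a small in-memory model. The names (`Instance`, `InstancePool`) and the pause/unpause bookkeeping here are illustrative only, not the project's actual classes; in the real system the pause/unpause calls go to the Docker daemon.

```python
class Instance:
    """Illustrative stand-in for a Docker container running Packet Tracer."""

    def __init__(self, instance_id):
        self.id = instance_id
        self.paused = True      # idle instances stay paused to save CPU
        self.allocated = False


class InstancePool:
    """Hypothetical pool that reuses containers instead of recreating them."""

    def __init__(self):
        self._instances = []

    def add(self, instance):
        self._instances.append(instance)

    def allocate(self):
        """Reuse a paused instance: unpause it and wipe the previous session."""
        for inst in self._instances:
            if not inst.allocated:
                inst.paused = False            # would be: docker unpause <id>
                inst.allocated = True
                self._load_initial_file(inst)  # discard previous session data
                return inst
        return None  # no instance free: caller triggers a creation task

    def deallocate(self, inst):
        """Session ended: pause the container so it consumes no CPU again."""
        inst.allocated = False
        inst.paused = True                     # would be: docker pause <id>

    def _load_initial_file(self, inst):
        # Placeholder: reload Packet Tracer's clean initial file.
        pass
```

The key design point the sketch captures is that pausing, not destroying, an idle container makes the next allocation nearly instantaneous.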
Celery runs the following tasks in the background:
- Create instance. Containers are created whenever they are needed, but only if certain thresholds have not been reached. E.g., overall CPU and memory limits are defined to avoid overloading the server.
- Delete instance.
- Allocate instance. This task either allocates available instances or triggers a creation task to get a new instance.
- Deallocate instance. Once a PT Anywhere session has ended, the associated instance (i.e., container) can be released.
- Wait for instance to be ready. This task checks whether the Packet Tracer instance inside a container answers commands; in plain words, it is a sort of ping to Packet Tracer. After a certain number of unsuccessful retries, the instance is marked as erroneous.
- Monitor instance. Celery checks instances marked as faulty to see whether their associated containers can be restarted; otherwise they are deleted. This is a periodic task scheduled by Celery Beat.
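Two of the tasks above carry most of the logic: allocation (with a creation fallback bounded by a threshold) and the readiness "ping" with retries. A minimal sketch, with illustrative function and parameter names rather than the project's actual Celery task signatures:

```python
def allocate_instance(pool, create, max_instances):
    """Allocate an available instance, or create one if under the threshold.

    `pool` is a list of instance records, `create` builds a new one, and
    `max_instances` stands in for the server's CPU/memory thresholds.
    """
    inst = next((i for i in pool if not i["allocated"]), None)
    if inst is None:
        if len(pool) >= max_instances:
            return None  # thresholds reached: refuse to overload the server
        inst = create()
        pool.append(inst)
    inst["allocated"] = True
    return inst


def wait_for_ready(ping, max_retries):
    """Ping Packet Tracer until it answers; mark erroneous after the retries."""
    for _ in range(max_retries):
        if ping():
            return "ready"
    return "erroneous"
```

In the real system these would be Celery tasks, so a failed allocation triggers a separate creation task asynchronously instead of returning `None` to the caller.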
These tasks are distributed across two queues: a normal one and a high-priority one. The high-priority queue ensures that allocations and deallocations are performed as fast as possible: these are the two tasks that the public API triggers, and they can usually be completed immediately.
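One way to wire the two queues is through Celery's task routing configuration. The task and queue names below are illustrative, not the project's actual ones:

```python
# Hypothetical routing table mapping each task to its queue.
task_routes = {
    # Latency-sensitive tasks triggered by the public API.
    "tasks.allocate_instance": {"queue": "high_priority"},
    "tasks.deallocate_instance": {"queue": "high_priority"},
    # Background lifecycle work can wait in the normal queue.
    "tasks.create_instance": {"queue": "normal"},
    "tasks.delete_instance": {"queue": "normal"},
    "tasks.monitor_instances": {"queue": "normal"},
}

# This dict would be handed to Celery (app.conf.task_routes = task_routes),
# with dedicated workers consuming each queue, e.g.:
#   celery -A app worker -Q high_priority
#   celery -A app worker -Q normal
```

Running a dedicated worker on the high-priority queue guarantees that a backlog of creation or monitoring work never delays an allocation.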