What is auto-scaling?
Auto-scaling is a process that horizontally scales the number of virtual machines or containers up or down based on demand, such as an increased number of requests from system users.
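As a purely illustrative example of horizontal auto-scaling (not specific to any Posit product), Kubernetes can scale a containerized deployment based on CPU load with a single command; the deployment name below is hypothetical:

    # Scale the hypothetical "workbench-sessions" deployment between 2 and 10
    # replicas, targeting 80% average CPU utilization across pods.
    kubectl autoscale deployment workbench-sessions --min=2 --max=10 --cpu-percent=80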
Please note that auto-scaling is formally outside the scope of our support, so it's best to consult your infrastructure team if you run into any issues.
Considerations
There are a number of factors to consider with auto-scaling. Many vendors have their own processes and procedures, so it's best to follow the advice of your infrastructure team. That said, we do have the following documentation, which can help point you in the right direction:
https://solutions.posit.co/architecting/launcher/autoscaling/index.html
In the past, we've seen scale-down events cause a few issues for other customers. The key components to consider when scaling down are listed below; a sketch that ties the steps together follows the list:
- Ensuring that sessions are suspended prior to scale-down.
- Ensuring that nodes are removed from the database prior to the termination of the server/container. You can list the nodes currently registered in the database with:

      rstudio-server list-nodes

  From there, delete any nodes that no longer exist in the cluster:

      sudo rstudio-server delete-node n

  Where n is the ID of a node that exists in the database but not in the cluster. You'll only need to run this on a single node, as the changes will be reflected in the database.
- Ensuring your license key is deactivated on the server/container that is expected to be terminated.
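As mentioned above, here is a minimal sketch that ties these steps together in a single pre-termination script. It assumes activation-key licensing (hence license-manager deactivate), that it runs as root on the node being terminated, and that you fill in the stale node IDs yourself from the list-nodes output; the script name and the STALE_NODE_IDS placeholder are ours, not a supported tool:

    #!/bin/bash
    # workbench-scaledown.sh -- sketch of pre-termination cleanup for a
    # Workbench node. Run as root (e.g. via sudo) on the node being removed.
    set -euo pipefail

    # 1. Suspend all active sessions on this node so no user work is lost.
    rstudio-server suspend-all

    # 2. Review the nodes registered in the database and note any IDs that
    #    no longer exist in the cluster (including this node's own ID).
    rstudio-server list-nodes

    # Placeholder: replace with the stale node IDs identified above.
    STALE_NODE_IDS="n"

    # 3. Remove the stale entries. This only needs to run on a single node,
    #    as the change is reflected in the shared database.
    for id in $STALE_NODE_IDS; do
        rstudio-server delete-node "$id"
    done

    # 4. Deactivate the license key so the activation isn't orphaned once
    #    the instance or container is destroyed.
    rstudio-server license-manager deactivate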
Stale entries in the database, as well as stale records in users' home directories, can cause a number of issues when launching new sessions, so I'd recommend performing the steps above prior to instance termination.
Whilst we aren't able to provide support on the vendor side, these commands will need to be run prior to terminating the instance or container during a scale-down event.
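As one example of wiring this up, on systemd-based instances you could register the cleanup script as the ExecStop of a oneshot unit, so it runs during an orderly shutdown while the network (and therefore the database and licensing service) is still reachable. The unit and script names below are hypothetical, and your infrastructure team may well prefer a vendor-native mechanism such as an autoscaling lifecycle hook:

    # /etc/systemd/system/workbench-scaledown.service (hypothetical unit)
    [Unit]
    Description=Posit Workbench scale-down cleanup (runs at shutdown)
    # Ordering After= these units means our ExecStop runs before they are
    # stopped during shutdown, keeping the network and Workbench available.
    After=network-online.target rstudio-server.service
    Wants=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Nothing to do at boot; the cleanup happens at shutdown via ExecStop.
    ExecStart=/bin/true
    ExecStop=/usr/local/bin/workbench-scaledown.sh

    [Install]
    WantedBy=multi-user.target

You'd enable this once with systemctl enable workbench-scaledown.service. Note that it only helps if your autoscaler performs an orderly shutdown, so check that behaviour with your vendor.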
Support Ticket
If you still have issues after completing the above, you can always lodge a support ticket, where our group of friendly and incredibly knowledgeable staff can assist with any issues you may be having. You can submit a ticket here:
https://support.posit.co/hc/en-us/requests/new