Shiny Server Pro allows you to scale Shiny applications to support multiple simultaneous users. Scaling is controlled by three arguments in the configuration file (/etc/shiny-server/shiny-server.conf) that dictate the behavior of the utilization scheduler.
To understand the utilization scheduler, it is necessary to understand Shiny Server Pro's architecture. Consider two scenarios for what happens when a second user connects to a running application.
In Scenario A, Shiny Server Pro connects the second user to the same R process. Because R is single-threaded, if user A and user B both change an input and trigger R calculations, their requests are handled sequentially. (User B must wait for user A's calculation and then their own calculation to complete before seeing an updated output.)
In Scenario B, Shiny Server Pro connects the second user to a separate R process. If user A and user B both change an input, their calculations happen in parallel.
A natural question is: why wouldn't Shiny Server Pro always choose Scenario B? The answer has to do with memory. When two users are connected to the same R process, they share everything that is loaded outside of the Shiny server function.
To see this, consider when the different pieces of a Shiny application's code are executed:
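As a sketch (the file, data set, and object names here are hypothetical), an app.R file might look like this, with comments noting when each piece runs:

```r
# app.R -- illustrative example; the data file and object names are made up
library(shiny)

# Code out here runs ONCE per R process, when the process starts.
# Every connection served by this process shares these objects.
big_data <- read.csv("data/large-file.csv")  # loaded once, shared in memory

ui <- fluidPage(
  selectInput("col", "Column", names(big_data)),
  plotOutput("hist")
)

# The server function body runs ONCE PER CONNECTION: each user gets
# their own copies of any objects created inside it.
server <- function(input, output, session) {
  output$hist <- renderPlot({
    hist(big_data[[input$col]])  # reads the shared data, computed per user
  })
}

shinyApp(ui, server)
```

In Scenario A, both users share the single in-memory copy of big_data; in Scenario B, each R process loads its own copy, doubling that portion of the memory footprint.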
This works because Shiny makes use of R's scoping rules - read more here. In Scenario B, all of the Shiny code has to be re-run, including loading any globally available data. This means memory usage is roughly 2x what it would be in Scenario A. Additionally, spinning up an R process and executing all of the Shiny code takes time. So, while the application is more responsive to both users after the web page is loaded, it takes longer for them to connect to the web page in the first place.
The utilization scheduler tells Shiny Server Pro to act somewhere in between Scenario A and Scenario B, balancing app responsiveness against memory consumption and load time. Three parameters specify this behavior:
Max Connections (maxRequestsPerProc) - The maximum number of connections per R process. Default value is 20.
Pick a small number if your application involves heavy computation. Pick a larger number if your application takes a long time to load, but after loading is very responsive to user selections.
Load Factor (loadFactor) - Determines how aggressively new R processes will be created. This parameter can take on values from 0 to 1. A value close to 0 means new processes will be spun up aggressively, so the number of connections per process will be small. A value close to 1 means the number of connections per process will be close to max connections. Default value is 0.9.
Pick a small number if your application loads quickly but involves expensive computation. Pick a number closer to 1 if your application loads slowly but is fast after loading, or if you want to minimize memory usage.
Max Processes (maxProc) - Determines the maximum number of R processes that will be created. Max processes × max connections = the maximum total number of connections to an application (with the defaults, 3 × 20 = 60). Default value is 3.
Pick a value that will support the expected number of concurrent users.
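For example, a hypothetical sizing sketch: if you expect roughly 100 concurrent users and your application tolerates 20 connections per process, you would need at least 5 processes:

```
# Hypothetical sizing for ~100 expected concurrent users:
# 100 users / 20 connections per process = 5 processes
utilization_scheduler 20 0.9 5;
```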
Example Configuration:
server {
  listen 3838;

  ... # any other configuration options, such as authentication

  location /myApp {
    app_dir /srv/shiny-server/myApp;

    # Utilization scheduler: 20 max connections, 0.9 load factor, 3 max processes
    utilization_scheduler 20 0.9 3;
  }
}

# Include the admin dashboard to track performance metrics
admin 4151 {
  required_user admin;
}
These three parameters can have a significant impact on application performance, and tuning an application is often an iterative process. The Shiny Server Pro admin dashboard contains metrics and real-time information at every level: applications, processes, and connections.
My application is still slow …
If your deployed application is still slow to respond after optimizing the utilization scheduler, then you should try refactoring your application code. No settings will make up for poorly written code, so profiling your code before deploying an app is highly recommended. A good place to start is using a profiler to understand where your code is running slowly. It can also be important to check your reactive dependencies - watch this video to learn more.
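As a sketch, the profvis package can profile a Shiny app interactively (assuming profvis is installed; the app directory path here is illustrative):

```r
# Hypothetical profiling session; the app path is illustrative
library(profvis)
library(shiny)

# Launch the app under the profiler. Interact with it in the browser,
# then close the app to see a flame graph of where time was spent.
profvis({
  runApp("/srv/shiny-server/myApp")
})
```

Time spent inside renderPlot, renderTable, and other reactive expressions is usually where optimization pays off, since that code runs on every triggering input change.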