See the Deployment Overview chapter for a brief outline of jiffy-application deployment artifacts.
In a development or test setting it may make sense to deploy everything on a single system. In this context, ‘system’ refers to a laptop, a bare-metal server or a VM of some sort. To deploy an all-in-one setup, the following artifacts are required:
The All-In-One Deployment diagram shows a jiffy-application instance running on a system with an IP address of 192.168.1.5. The application instance accepts HTTP client connections on port :8080 and listens on port :7070 for failure-detector and cache-synchronization messages. The failure-detector ‘bus’ is used for group-leadership and inter-process cache messages, even when the application runs standalone (single process).
It is not necessary to run an external KVS if the jiffy-application will be run as a standalone process. The KVS is used to hold the current group-leader information when jiffy applications are deployed with more than one instance. When running without a KVS, set the ‘local_standalone’ KVS to active in the xxx.config.json file. The application instance will assign itself a PID of 1 and assert leadership inside its own process. The short version: don’t worry about the KVS if you want to run a single instance of your jiffy application.
If you do choose to run an All-In-One / single application instance with a KVS, running sluggo and setting the ‘sluggo’ KVS to active in the xxx.config.json file is the easiest way to get started. You may also choose to use Redis, Memcached, or create your own implementation of the gmcom.GetterSetter interface to support another KVS.
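For illustration, the KVS selection in xxx.config.json looks roughly like the sketch below. Only the ‘local_standalone’ and ‘sluggo’ entry names come from this chapter; the surrounding key names, the address value and the overall layout are assumptions made for the example, so check a generated config file for the authoritative structure.

```json
{
  "group_leader_kvs": {
    "_note": "illustrative sketch only - key names other than local_standalone/sluggo are assumed",
    "local_standalone": { "active": true },
    "sluggo": { "active": false, "address": "127.0.0.1:7330" }
  }
}
```

In a standalone deployment only ‘local_standalone’ is active; to use sluggo instead, flip the two ‘active’ flags and point the address at the running sluggo instance.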
The application communicates with the KVS using whatever protocol/transport is required. Remember that the selection of KVS is arbitrary and an interface is provided to allow the implementer to write their own code to communicate with any KVS.
The application is also shown accessing the locally installed database over a tcp connection. In this example we show PostgreSQL accepting connections on its default port (tcp/5432), but any of the supported databases can be used.
Single-instance deployments contain one jiffy-application instance, an optional KVS and a database. In this scenario the database is running on a separate system and is available over the local network. IP addresses are shown for illustrative purposes only.
The Single-Instance Deployment diagram shows a jiffy-application instance running on a system with an IP address of 192.168.1.5. The application instance accepts HTTP client connections on port :8080 and listens on port :7070 for failure-detector and cache-synchronization messages. The failure-detector ‘bus’ is used for group-leadership and inter-process cache messages, even when the application runs standalone (single process).
It is not necessary to run an external KVS if the jiffy-application will be run as a standalone process. The KVS is used to hold the current group-leader information when jiffy applications are deployed with more than one instance. When running without a KVS, set the ‘local_standalone’ KVS to active in the xxx.config.json file. The application instance will assign itself a PID of 1 and assert leadership inside its own process. The short version: don’t worry about the KVS if you want to run a single instance of your jiffy application.
If you do choose to run an All-In-One / single application instance with a KVS, running sluggo and setting the ‘sluggo’ KVS to active in the xxx.config.json file is the easiest way to get started. You may also choose to use Redis, Memcached, or create your own implementation of the gmcom.GetterSetter interface to support another KVS.
The application communicates with the KVS using whatever protocol/transport is required. Remember that the selection of KVS is arbitrary and an interface is provided to allow the implementer to write their own code to communicate with any KVS.
The application is also shown accessing a Postgres database via TCP at 192.168.1.31:5432. In this example we show PostgreSQL accepting connections on its default port (tcp/5432), but any of the supported databases can be used.
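For illustration, the database connection details also live in xxx.config.json; a minimal sketch matching this example is shown below. All key names and credential values here are assumptions made for the example, so refer to a generated config file for the real layout.

```json
{
  "database": {
    "_note": "illustrative sketch only - key names and credentials are assumed",
    "db_dialect": "postgres",
    "host": "192.168.1.31",
    "port": 5432,
    "usr": "jiffy_user",
    "password": "changeme",
    "name": "jiffy_db"
  }
}
```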
Multi-instance deployments contain more than one jiffy-application instance, a KVS and a database. In this scenario the database runs on a separate system and is available over the local network, while jiffy-application instances run in parallel on multiple hosts. In this arrangement the KVS is not optional, as the group-leadership sub-system requires it in order to persist the group-leader information.
The Multi-Instance Deployment diagram shows a number of jiffy-application instances running on individual hosts. Each application instance accepts HTTP client connections on port :8080 and listens on port :7070 for failure-detector and cache-synchronization messages. The failure-detector ‘bus’ is used for group-leadership and inter-process cache messages. Each application instance is shown making a WebSocket connection to its peers on ws:7070.
The KVS is used to hold the current group-leader information when jiffy applications are deployed with more than one instance. See the Interprocess Communication section for details regarding the use of the KVS with group-membership and group-leader election.
Running sluggo and making use of the default gmcom.GetterSetter implementation is the easiest way to get started. You may also choose to use Redis, Memcached or create your own implementation of the gmcom.GetterSetter interface to support another KVS.
The application communicates with the KVS using whatever protocol/transport is required. Remember that the selection of KVS is arbitrary and an interface is provided to allow the implementer to write their own code to communicate with any KVS.
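To make the idea concrete, below is a rough Go sketch of a custom KVS adapter that keeps the group-leader record in Redis. The method names and signatures shown are placeholders invented for this example, not the actual gmcom.GetterSetter definitions; consult the gmcom package for the real interface before implementing.

```go
// Package mykvs is a hypothetical sketch of a custom KVS adapter.
// The method names and signatures below are placeholders - check the
// gmcom package for the real GetterSetter interface before implementing.
package mykvs

import (
	"context"
	"strconv"

	"github.com/go-redis/redis/v8"
)

// RedisGetterSetter stores the current group-leader ID in Redis.
type RedisGetterSetter struct {
	cli *redis.Client
}

// NewRedisGetterSetter connects to the Redis instance at addr.
func NewRedisGetterSetter(addr string) *RedisGetterSetter {
	return &RedisGetterSetter{
		cli: redis.NewClient(&redis.Options{Addr: addr}),
	}
}

// GetLeader is a placeholder name; the real interface method may differ.
func (r *RedisGetterSetter) GetLeader(ctx context.Context, key string) (int, error) {
	v, err := r.cli.Get(ctx, key).Result()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(v)
}

// SetLeader is a placeholder name; the real interface method may differ.
func (r *RedisGetterSetter) SetLeader(ctx context.Context, key string, pid int) error {
	return r.cli.Set(ctx, key, strconv.Itoa(pid), 0).Err()
}
```

The point is only to show the shape of the adapter: wrap whatever client library your KVS provides and translate the group-leader record to and from its storage format.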
Application instances are shown accessing a Postgres database via TCP at 192.168.1.31:5432. In this example we show PostgreSQL accepting connections on its default port (tcp/5432), but any of the supported databases can be used.
When running multiple jiffy-application instances, a load-balancer of some sort should be used to route traffic based on endpoint, system load or other locally important criteria.