Key Value Store Support

Overview

A key-value store (KVS) is required to persist the current group-leader information in multi-instance jiffy application deployments; this is discussed in the Joining Overview section of the Interprocess Communication documentation. Although we refer to the persistent store as a KVS, anything that manages concurrent access and supports read/write operations can be used.

Jiffy applications support Redis, Memcached, stand-alone/local and Sluggo KVS systems out of the box. The active group-leadership KVS is configured in the .xxx.config.json file via appropriate settings in the “group_leader_kvs” key. As shown in the JSON excerpt below, the default config.json files are generated with the ‘local_standalone’ group-leader KVS active. In this mode, a single instance of the application can be tested without worrying about the group-leadership setup.

  "group_leader_kvs": {
    "local_standalone": {
      "active": true,
      "internal_address": "127.0.0.1:4444"
  },
  "redis": {
    "active": false,
    "max_idle": 80,
    "max_active": 12000,
    "redis_protocol": "tcp",
    "redis_address": "127.0.0.1:6379"
  },
  "memcached": {
    "active": false,
    "memcached_addresses": [
        "192.168.112.50:11211"
      ]
    },
    "sluggo": {
      "active": false,
        "sluggo_address": "127.0.0.1:7070"
    }
  }
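
To select a different KVS, set its "active" flag to true and deactivate "local_standalone"; presumably exactly one entry should be active at a time. For example, running against the generated Redis defaults would look like the following excerpt (the memcached and sluggo entries are unchanged and omitted here):

  "group_leader_kvs": {
    "local_standalone": {
      "active": false,
      "internal_address": "127.0.0.1:4444"
    },
    "redis": {
      "active": true,
      "max_idle": 80,
      "max_active": 12000,
      "redis_protocol": "tcp",
      "redis_address": "127.0.0.1:6379"
    }
  }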

If you wish to test a multi-instance deployment but do not want to install Redis or Memcached, the application supports a simple KVS called sluggo, which was written as a lightweight, non-production KVS for testing. For now, the docs will refer to sluggo when discussing the KVS, but it is recommended to use something more robust for real deployments.

Sluggo may be installed in your environment with the following go get command:

go get -u github.com/1414C/sluggo

Sluggo may be run from a terminal session (from within the downloaded sluggo source directory) via the following command:

go run main.go -a 192.168.1.40:7070
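
The application's "sluggo" configuration entry must then point at the address the sluggo process was started with. Continuing the example above (local_standalone deactivated; the redis and memcached entries are omitted):

  "group_leader_kvs": {
    "local_standalone": {
      "active": false,
      "internal_address": "127.0.0.1:4444"
    },
    "sluggo": {
      "active": true,
      "sluggo_address": "192.168.1.40:7070"
    }
  }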

If none of the supported KVS solutions are appealing, support for the desired KVS can be added by implementing the gmcom.GMLeaderSetterGetter interface in appobj/lead_set_get.go.

Example gmcom.GMLeaderSetterGetter Implementation

Interface gmcom.GMLeaderSetterGetter

Interface gmcom.GMLeaderSetterGetter contains methods to facilitate read/write access to the persisted leader record in any key-value store. Applications implementing the interface may choose to store the persisted current leader in any number of media: flat file, database table record, Redis, Memcached, etc.

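Putting the three method signatures documented below together, the interface looks approximately as follows (a sketch inferred from this section; the authoritative definition lives in the gmcom package):

type GMLeaderSetterGetter interface {
  GetDBLeader() (*GMLeader, error)
  SetDBLeader(l GMLeader) error
  Cleanup() error
}
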
GetDBLeader() (*GMLeader, error)
GetDBLeader is provided to allow the implementer to retrieve the current group-leader information from the persistent store. The implementation of this method is intended to be self-contained.

For example, if the implementation calls for the current leader information to be persisted in Redis, the implementer should code a self-contained method to connect to Redis, retrieve the leader information, and return it via the *GMLeader* pointer. Failure to read a current leader from the persistent store should result in a nil *GMLeader* pointer and a non-nil error value.
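
A minimal sketch of such a Redis-backed GetDBLeader, using the redigo client (github.com/gomodule/redigo/redis) together with encoding/json, might look like the following; the key name "current-leader" and the JSON encoding of GMLeader are assumptions made for illustration:

// GetDBLeader reads the current leader record from Redis.
func (sg *LeadSetGet) GetDBLeader() (*gmcom.GMLeader, error) {

  // dial Redis directly; a production implementation would typically
  // draw a connection from a redis.Pool instead
  conn, err := redis.Dial("tcp", "127.0.0.1:6379")
  if err != nil {
    return nil, err
  }
  defer conn.Close()

  // read the stored record; redis.ErrNil is returned if no leader
  // has been persisted yet
  b, err := redis.Bytes(conn.Do("GET", "current-leader"))
  if err != nil {
    return nil, err
  }

  // decode the JSON-encoded leader record
  l := &gmcom.GMLeader{}
  if err := json.Unmarshal(b, l); err != nil {
    return nil, err
  }
  return l, nil
}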

SetDBLeader(l GMLeader) error
SetDBLeader is provided to allow the implementer to persist a newly elected leader's information in the persistent store. The implementation of this method is intended to be self-contained.

For example, if the implementation calls for the current leader information to be persisted in Redis, the implementer should code a self-contained method to connect to Redis and store the leader information provided in input parameter *l*. Failure to persist the provided leader information should result in a non-nil *error* return value.
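
Continuing the redigo-based sketch from GetDBLeader above (same key-name and encoding assumptions):

// SetDBLeader persists a newly elected leader's information in Redis.
func (sg *LeadSetGet) SetDBLeader(l gmcom.GMLeader) error {

  conn, err := redis.Dial("tcp", "127.0.0.1:6379")
  if err != nil {
    return err
  }
  defer conn.Close()

  // JSON-encode the leader record and write it under the agreed key
  b, err := json.Marshal(&l)
  if err != nil {
    return err
  }
  _, err = conn.Do("SET", "current-leader", b)
  return err
}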

Cleanup() error
Cleanup is provided to allow the implementer to clean up connection pools and open and/or busy connections before the hosting application shuts down in response to a SIGTERM, os.Interrupt or SIGINT event. If the implementation does not require any cleanup, this method can be implemented to simply return nil.
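
For example, a redigo-based implementation that keeps a package-level connection pool (sized here with the max_idle/max_active values from the config excerpt above) could close the pool in Cleanup. A sketch, assuming the pool is created at application startup:

// pool is assumed to have been created at application startup
var pool = &redis.Pool{
  MaxIdle:   80,
  MaxActive: 12000,
  Dial: func() (redis.Conn, error) {
    return redis.Dial("tcp", "127.0.0.1:6379")
  },
}

// Cleanup releases the pool and its connections prior to shutdown.
func (sg *LeadSetGet) Cleanup() error {
  return pool.Close()
}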

Interface gmcom.GMLeaderSetterGetter Sample Implementation

// LeadSetGet provides a sample implementation of the GMLeaderSetterGetter
// interface in order to support persistence of the current group-leader
// information.  Re-implement these methods as you see fit to facilitate
// storage and retrieval of the leader information to and from the persistent
// storage.  This example uses a quick and dirty web-socket-based cache to
// handle the persistence.  It works well enough for testing, but you should
// use something more robust like a database, redis etc.  The methods in the
// GMLeaderSetterGetter interface are called when a new process is attempting
// to join the group and also when a new leader is selected via the coordinator
// process.
//
// To test with the delivered interface implementation, install and run sluggo:
// go get -u github.com/1414C/sluggo
//
// Execute sluggo from the command-line as follows:
// go run main.go -a <ipaddress:port>
// For example:
// $ go run main.go -a 192.168.1.40:7070
//
type LeadSetGet struct {
  gmcom.GMLeaderSetterGetter
}

// GetDBLeader retrieves the current leader information from
// the persistence layer.
func (sg *LeadSetGet) GetDBLeader() (*gmcom.GMLeader, error) {

  // read the current leader record from the sluggo cache
  l := &gmcom.GMLeader{}
  wscl.GetCacheEntry("LEADER", l, "192.168.1.40:7070")
  return l, nil
}

// SetDBLeader stores the current leader information in the
// persistence layer.
func (sg *LeadSetGet) SetDBLeader(l gmcom.GMLeader) error {

  // write the new current leader record to the sluggo cache
  wscl.AddUpdCacheEntry("LEADER", &l, "192.168.1.40:7070")
  return nil
}

// Cleanup closes connections to the group-leadership KVS
// prior to application shutdown.
func (sg *LeadSetGet) Cleanup() error {
  
  // perform any required cleanup(s); none are needed in this sample
  return nil
}