Advertise Raft on the Fully Qualified Domain Name

Currently, we configure Raft’s address as the transport’s local address, so the server advertises itself with an address like [::]:8400. We want to use the fully qualified domain name instead so the node properly advertises itself to its cluster and to its clients.
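
For reference, here’s roughly what our bootstrap code looked like before this change (a sketch based on the setupRaft code we wrote earlier; transport is the raft.NetworkTransport we built on top of our stream layer):

 if l.config.Raft.Bootstrap && !hasState {
     config := raft.Configuration{
         Servers: []raft.Server{{
             ID: config.LocalID,
             // transport.LocalAddr() returns the stream layer's listen
             // address, e.g. [::]:8400, which other nodes and clients
             // can't resolve back to this server.
             Address: transport.LocalAddr(),
         }},
     }
     err = l.raft.BootstrapCluster(config).Error()
 }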

In internal/log/config.go, change your Config to this:

DeployLocally/internal/log/config.go
 type Config struct {
     Raft struct {
         raft.Config
»        BindAddr    string
         StreamLayer *StreamLayer
         Bootstrap   bool
     }
     Segment struct {
         MaxStoreBytes uint64
         MaxIndexBytes uint64
         InitialOffset uint64
     }
 }
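
BindAddr is the address this server will advertise to the rest of the cluster when it bootstraps; the stream layer still controls the address Raft actually listens on.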

Change your DistributedLog’s bootstrap code to use the configured bind address:

DeployLocally/internal/log/distributed.go
 if l.config.Raft.Bootstrap && !hasState {
     config := raft.Configuration{
         Servers: []raft.Server{{
             // config.LocalID refers to the Raft config we declared
             // earlier in setupRaft; the raft.Configuration declared
             // here isn't in scope until after this statement.
             ID: config.LocalID,
»            Address: raft.ServerAddress(l.config.Raft.BindAddr),
         }},
     }
     err = l.raft.BootstrapCluster(config).Error()
 }
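
That advertised address ends up in Raft’s cluster configuration, and it’s what our GetServers method hands back to clients. As a reminder, GetServers (which we wrote earlier) reads the addresses back out of Raft roughly like this:

 func (l *DistributedLog) GetServers() ([]*api.Server, error) {
     future := l.raft.GetConfiguration()
     if err := future.Error(); err != nil {
         return nil, err
     }
     var servers []*api.Server
     for _, server := range future.Configuration().Servers {
         servers = append(servers, &api.Server{
             Id:       string(server.ID),
             RpcAddr:  string(server.Address),
             IsLeader: l.raft.Leader() == server.Address,
         })
     }
     return servers, nil
 }

So whatever we put in BindAddr is exactly what clients will see in each server’s rpc_addr.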

And in distributed_test.go, update your log configuration to set the address:

DeployLocally/internal/log/distributed_test.go
 config := log.Config{}
 config.Raft.StreamLayer = log.NewStreamLayer(ln, nil, nil)
 config.Raft.LocalID = raft.ServerID(fmt.Sprintf("%d", i))
 config.Raft.HeartbeatTimeout = 50 * time.Millisecond
 config.Raft.ElectionTimeout = 50 * time.Millisecond
 config.Raft.LeaderLeaseTimeout = 50 * time.Millisecond
 config.Raft.CommitTimeout = 5 * time.Millisecond
 config.Raft.BindAddr = ln.Addr().String()
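
Here, ln is the TCP listener each test node set up earlier in the test. If you’re following along, any loopback listener works; for example, ln, err := net.Listen("tcp", "127.0.0.1:0") lets the OS pick a free port, and ln.Addr().String() then returns the concrete address.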

Run your log tests to verify they pass.
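
For example:

 $ go test ./internal/log/...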

Finally, in agent.go, update setupMux and setupLog to configure the mux and Raft instance:

DeployLocally/internal/agent/agent.go
 func (a *Agent) setupMux() error {
     addr, err := net.ResolveTCPAddr("tcp", a.Config.BindAddr)
     if err != nil {
         return err
     }
     rpcAddr := fmt.Sprintf(
         "%s:%d",
         addr.IP.String(),
         a.Config.RPCPort,
     )
     ln, err := net.Listen("tcp", rpcAddr)
     if err != nil {
         return err
     }
     a.mux = cmux.New(ln)
     return nil
 }

 func (a *Agent) setupLog() error {
     // ...
     logConfig := log.Config{}
     logConfig.Raft.StreamLayer = log.NewStreamLayer(
         raftLn,
         a.Config.ServerTLSConfig,
         a.Config.PeerTLSConfig,
     )
»    rpcAddr, err := a.Config.RPCAddr()
»    if err != nil {
»        return err
»    }
»    logConfig.Raft.BindAddr = rpcAddr
     logConfig.Raft.LocalID = raft.ServerID(a.Config.NodeName)
     logConfig.Raft.Bootstrap = a.Config.Bootstrap
     // ...
 }
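
RPCAddr combines the host from the agent’s bind address with the configured RPC port. As a reminder, it looks roughly like this:

 func (c Config) RPCAddr() (string, error) {
     host, _, err := net.SplitHostPort(c.BindAddr)
     if err != nil {
         return "", err
     }
     return fmt.Sprintf("%s:%d", host, c.RPCPort), nil
 }

In our Kubernetes deployment, the bind address’s host is the pod’s fully qualified domain name, so that FQDN is what Raft advertises.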

Now we’re ready to deploy the service in our Kubernetes cluster.

Install Your Helm Chart

We’ve finished writing our Helm chart, so now we can install it in our Kind cluster to run a cluster of our service.

You can see what Helm renders by running:

 $ helm template proglog deploy/proglog

You’ll see that the repository is still set to the default: nginx. Open up deploy/proglog/values.yaml and replace its entire contents with this:

DeployLocally/deploy/proglog/values.yaml
 # Default values for proglog.
 image:
   repository: github.com/travisjeffery/proglog
   tag: 0.0.1
   pullPolicy: IfNotPresent
 serfPort: 8401
 rpcPort: 8400
 replicas: 3
 storage: 1Gi

The point of the values.yaml file is to set good defaults and to show users what parameters they can set if they need to.
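
Users can override any of these defaults at install time without editing the chart. For example, you could scale the cluster to five nodes like this:

 $ helm install proglog deploy/proglog --set replicas=5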

Now install the chart by running this command:

 $ helm install proglog deploy/proglog

Wait a few seconds and you’ll see Kubernetes set up three pods. You can list them by running $ kubectl get pods. When all three pods are ready, we can try requesting the API.

We can tell Kubernetes to forward a pod’s or a service’s port to a port on your local machine, so you can request a service running inside Kubernetes without a load balancer:

 $ kubectl port-forward pod/proglog-0 8400:8400

Now we can request our service from a program running outside Kubernetes at :8400.

Let’s write a simple executable to get the list of servers. Create a file named cmd/getservers/main.go that looks like this:

DeployLocally/cmd/getservers/main.go
 package main

 import (
     "context"
     "flag"
     "fmt"
     "log"

     api "github.com/travisjeffery/proglog/api/v1"
     "google.golang.org/grpc"
 )

 func main() {
     addr := flag.String("addr", ":8400", "service address")
     flag.Parse()
     conn, err := grpc.Dial(*addr, grpc.WithInsecure())
     if err != nil {
         log.Fatal(err)
     }
     client := api.NewLogClient(conn)
     ctx := context.Background()
     res, err := client.GetServers(ctx, &api.GetServersRequest{})
     if err != nil {
         log.Fatal(err)
     }
     fmt.Println("servers:")
     for _, server := range res.Servers {
         fmt.Printf("\t- %v\n", server)
     }
 }
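
Note that we dial with grpc.WithInsecure() because this is just a quick tool for poking at our local Kind cluster, and we’re not configuring client TLS here.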

Then run this command to request the list of servers from our service and print them:

 $ go run cmd/getservers/main.go

You should see the following output:

 servers:
 - id:"proglog-0" rpc_addr:"proglog-0.proglog.default.svc.cluster.local:8400"
 - id:"proglog-1" rpc_addr:"proglog-1.proglog.default.svc.cluster.local:8400"
 - id:"proglog-2" rpc_addr:"proglog-2.proglog.default.svc.cluster.local:8400"

This means all three servers have successfully joined the cluster and are coordinating with each other!